ADDED .fossil-settings/binary-glob
Index: .fossil-settings/binary-glob
==================================================================
--- /dev/null
+++ .fossil-settings/binary-glob
@@ -0,0 +1,14 @@
+*.bmp
+*.chm
+*.db
+*.dll
+*.exe
+*.gif
+*.gz
+*.lib
+*.msi
+*.pdb
+*.png
+*.resources
+*.snk
+*.zip

ADDED .fossil-settings/crnl-glob
Index: .fossil-settings/crnl-glob
==================================================================
--- /dev/null
+++ .fossil-settings/crnl-glob
@@ -0,0 +1,1 @@
+*

ADDED .fossil-settings/encoding-glob
Index: .fossil-settings/encoding-glob
==================================================================
--- /dev/null
+++ .fossil-settings/encoding-glob
@@ -0,0 +1,1 @@
+Tests/data/testlinq.out

ADDED .fossil-settings/ignore-glob
Index: .fossil-settings/ignore-glob
==================================================================
--- /dev/null
+++ .fossil-settings/ignore-glob
@@ -0,0 +1,11 @@
+Externals/Eagle/bin/net5/Eagle*.pdb
+Externals/Eagle/bin/net6/Eagle*.pdb
+Externals/Eagle/bin/netCore20/Eagle*.pdb
+Externals/Eagle/bin/netCore30/Eagle*.pdb
+Externals/Eagle/bin/netFramework20/Eagle*.pdb
+Externals/Eagle/bin/netFramework20/x64/Spilornis.pdb
+Externals/Eagle/bin/netFramework20/x86/Spilornis.pdb
+Externals/Eagle/bin/netFramework40/Eagle*.pdb
+Externals/Eagle/bin/netFramework40/x64/Spilornis.pdb
+Externals/Eagle/bin/netFramework40/x86/Spilornis.pdb
+Targets/SQLite.NET.Settings.targets.netFx40.user

ADDED Doc/Extra/Core/images/sqlite370_banner.gif
Index: Doc/Extra/Core/images/sqlite370_banner.gif
==================================================================
--- /dev/null
+++ Doc/Extra/Core/images/sqlite370_banner.gif
cannot compute difference between binary files

ADDED Doc/Extra/Core/json1.html
Index: Doc/Extra/Core/json1.html
==================================================================
--- /dev/null
+++ Doc/Extra/Core/json1.html
@@ -0,0 +1,1639 @@
+JSON Functions And Operators +

1. Overview

+

+By default, SQLite supports thirty functions and two operators for +dealing with JSON values. There are also two table-valued functions +that can be used to decompose a JSON string. + +

+There are twenty-six scalar functions and operators: + +

    +
1. json(json)
2. jsonb(json)
3. json_array(value1,value2,...)
4. jsonb_array(value1,value2,...)
5. json_array_length(json)
   json_array_length(json,path)
6. json_error_position(json)
7. json_extract(json,path,...)
8. jsonb_extract(json,path,...)
9. json -> path
10. json ->> path
11. json_insert(json,path,value,...)
12. jsonb_insert(json,path,value,...)
13. json_object(label1,value1,...)
14. jsonb_object(label1,value1,...)
15. json_patch(json1,json2)
16. jsonb_patch(json1,json2)
17. json_pretty(json)
18. json_remove(json,path,...)
19. jsonb_remove(json,path,...)
20. json_replace(json,path,value,...)
21. jsonb_replace(json,path,value,...)
22. json_set(json,path,value,...)
23. jsonb_set(json,path,value,...)
24. json_type(json)
    json_type(json,path)
25. json_valid(json)
    json_valid(json,flags)
26. json_quote(value)
+ +

There are four aggregate SQL functions: + +

    +
1. json_group_array(value)
2. jsonb_group_array(value)
3. json_group_object(label,value)
4. jsonb_group_object(name,value)
+ +

The two table-valued functions are: + +

    +
1. json_each(json)
   json_each(json,path)
2. json_tree(json)
   json_tree(json,path)
+ + + + + + + + +

2. Compiling in JSON Support

+ +

+The JSON functions and operators are built into SQLite by default, +as of SQLite version 3.38.0 (2022-02-22). They can be omitted +by adding the -DSQLITE_OMIT_JSON compile-time option. Prior to +version 3.38.0, the JSON functions were an extension that would only +be included in builds if the -DSQLITE_ENABLE_JSON1 compile-time option +was included. In other words, the JSON functions went from being +opt-in with SQLite version 3.37.2 and earlier to opt-out with +SQLite version 3.38.0 and later. + +

3. Interface Overview

+ +

+SQLite stores JSON as ordinary text. +Backwards compatibility constraints mean that SQLite is only able to +store values that are NULL, integers, floating-point numbers, text, +and BLOBs. It is not possible to add a new "JSON" type. + + +

3.1. JSON arguments

+ +

+For functions that accept JSON as their first argument, that argument can be a JSON object, array, number, string, or null. SQLite numeric values and NULL values are interpreted as JSON numbers and nulls, respectively. SQLite text values can be understood as JSON objects, arrays, or strings. If an SQLite text value that is not a well-formed JSON object, array, or string is passed into a JSON function, that function will usually throw an error. (Exceptions to this rule are json_valid(), json_quote(), and json_error_position().)

+These routines understand all +rfc-8259 JSON syntax +and also JSON5 extensions. JSON text +generated by these routines always strictly conforms to the +canonical JSON definition and does not contain any JSON5 +or other extensions. The ability to read and understand JSON5 was added in +version 3.42.0 (2023-05-16). +Prior versions of SQLite would only read canonical JSON. + + + +

3.2. JSONB

+ +

+Beginning with version 3.45.0 (2024-01-15), SQLite allows its internal "parse tree" representation of JSON to be stored on disk, as a BLOB, in a format that we call "JSONB". By storing SQLite's internal binary representation of JSON directly in the database, applications can bypass the overhead of parsing and rendering JSON when reading and updating JSON values. The internal JSONB format also uses slightly less disk space than text JSON.

+Any SQL function parameter that accepts text JSON as an input will also +accept a BLOB in the JSONB format. The function will operate the +same in either case, except that it will run faster when +the input is JSONB, since it does not need to run the JSON parser. + +

+Most SQL functions that return JSON text have a corresponding function +that returns the equivalent JSONB. The functions that return JSON +in the text format begin with "json_" and functions that +return the binary JSONB format begin with "jsonb_". + +

3.2.1. The JSONB format

+ +

+JSONB is a binary representation of JSON used by SQLite and +is intended for internal use by SQLite only. Applications +should not use JSONB outside of SQLite nor try to reverse-engineer the +JSONB format. + +

+The "JSONB" name is inspired by PostgreSQL, but the +on-disk format for SQLite's JSONB is not the same as PostgreSQL's. +The two formats have the same name, but are not binary compatible. +The PostgreSQL JSONB format claims to offer O(1) +lookup of elements in objects and arrays. SQLite's JSONB format makes no +such claim. SQLite's JSONB has O(N) time complexity for +most operations in SQLite, just like text JSON. The advantage of JSONB in +SQLite is that it is smaller and faster than text JSON - potentially several +times faster. There is space in the +on-disk JSONB format to add enhancements and future versions of SQLite might +include options to provide O(1) lookup of elements in JSONB, but no such +capability is currently available. + +

3.2.2. Handling of malformed JSONB

+ +

+The JSONB that is generated by SQLite will always be well-formed. If you follow recommended practice and treat JSONB as an opaque BLOB, then you will not have any problems. But JSONB is just a BLOB, so a mischievous programmer could devise BLOBs that are similar to JSONB but that are technically malformed. When misformatted JSONB is fed into JSON functions, any of the following might happen:

+ +

+The way in which SQLite handles invalid JSONB might change from one version of SQLite to the next. The system follows the garbage-in/garbage-out rule: If you feed the JSON functions invalid JSONB, you get back an invalid answer. If you are in doubt about the validity of your JSONB, use the json_valid() function to verify it.

+We do make this one promise: +Malformed JSONB will never cause a memory +error or similar problem that might lead to a vulnerability. +Invalid JSONB might lead to crazy answers, +or it might cause queries to abort, but it won't cause a crash. + + + +

3.3. PATH arguments

+ +

+For functions that accept PATH arguments, that PATH must be well-formed or +else the function will throw an error. +A well-formed PATH is a text value that begins with exactly one +'$' character followed by zero or more instances +of ".objectlabel" or "[arrayindex]". + +

+The arrayindex is usually a non-negative integer N. In +that case, the array element selected is the N-th element +of the array, starting with zero on the left. +The arrayindex can also be of the form "#-N" +in which case the element selected is the N-th from the +right. The last element of the array is "#-1". Think of +the "#" characters as the "number of elements in the array". Then +the expression "#-1" evaluates to the integer that corresponds to +the last entry in the array. It is sometimes useful for the array +index to be just the # character, for example when appending +a value to an existing JSON array: + +

+ + + + +
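For instance (illustrative queries, not from the original page; results shown as comments):

  SELECT json_extract('[10,20,30,40]', '$[#-1]');    -- → 40  (last element of the array)
  SELECT json_insert('[10,20,30,40]', '$[#]', 50);   -- → '[10,20,30,40,50]'  (append)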

3.4. VALUE arguments

+ +

+For functions that accept "value" arguments (also shown as +"value1" and "value2"), +those arguments are usually understood +to be literal strings that are quoted and become JSON string values +in the result. Even if the input value strings look like +well-formed JSON, they are still interpreted as literal strings in the +result. + +

+However, if a value argument comes directly from the result of another +JSON function or from the -> operator (but not the ->> operator), +then the argument is understood to be actual JSON and +the complete JSON is inserted rather than a quoted string. + +

+For example, in the following call to json_object(), the value +argument looks like a well-formed JSON array. However, because it is just +ordinary SQL text, it is interpreted as a literal string and added to the +result as a quoted string: + +

+ + +
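A minimal illustrative call (the documented behavior is shown in the comment):

  SELECT json_object('ex', '[52,3.14159]');
  -- → '{"ex":"[52,3.14159]"}'   (the array-looking text becomes a quoted string)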

+But if the value argument in the outer json_object() call is the +result of another JSON function like json() or json_array(), then +the value is understood to be actual JSON and is inserted as such: + +

+ + +
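For example (illustrative; results shown as comments):

  SELECT json_object('ex', json('[52,3.14159]'));     -- → '{"ex":[52,3.14159]}'
  SELECT json_object('ex', json_array(52, 3.14159));  -- → '{"ex":[52,3.14159]}'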

+To be clear: "json" arguments are always interpreted as JSON +regardless of where the value for that argument comes from. But +"value" arguments are only interpreted as JSON if those arguments +come directly from another JSON function or the -> operator. + +

+Within JSON value arguments interpreted as JSON strings, Unicode escape +sequences are not treated as equivalent to the characters or escaped +control characters represented by the expressed Unicode code point. +Such escape sequences are not translated or specially treated; they +are treated as plain text by SQLite's JSON functions. + +

3.5. Compatibility

+ +

+The current implementation of this JSON library uses a recursive descent +parser. In order to avoid using excess stack space, any JSON input that has +more than 1000 levels of nesting is considered invalid. Limits on nesting +depth are allowed for compatible implementations of JSON by +RFC-8259 section 9. + + + +

3.6. JSON5 Extensions

+ +

+Beginning in version 3.42.0 (2023-05-16), these routines will +read and interpret input JSON text that includes +JSON5 extensions. However, JSON text generated +by these routines will always be strictly conforming to the +canonical definition of JSON. + +

+Here is a synopsis of JSON5 extensions (adapted from the +JSON5 specification): + +

+ +

+To convert string X from JSON5 into canonical JSON, invoke +"json(X)". The output of the "json()" function will be canonical +JSON regardless of any JSON5 extensions that are present in the input. +For backwards compatibility, the json_valid(X) function without a +"flags" argument continues +to report false for inputs that are not canonical JSON, even if the +input is JSON5 that the function is able to understand. To determine +whether or not an input string is valid JSON5, include the 0x02 bit +in the "flags" argument to json_valid: "json_valid(X,2)". + +
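A short illustrative sketch (assumes a JSON5 input with unquoted keys, a single-quoted string, and a trailing comma):

  SELECT json('{x: 1, y: ''two'', }');   -- → '{"x":1,"y":"two"}'  (canonical JSON output)
  SELECT json_valid('{x: 1}');           -- → 0  (default flags accept only strict RFC-8259 text)
  SELECT json_valid('{x: 1}', 2);        -- → 1  (the 0x02 bit accepts JSON5 text)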

+These routines understand all of JSON5, plus a little more. +SQLite extends the JSON5 syntax in these two ways: + +

    +
  1. Strict JSON5 requires that unquoted object keys must be ECMAScript 5.1 IdentifierNames. But large Unicode tables and a lot of code are required in order to determine whether or not a key is an ECMAScript 5.1 IdentifierName. For this reason, SQLite allows object keys to include any Unicode characters greater than U+007f that are not whitespace characters. This relaxed definition of "identifier" greatly simplifies the implementation and allows the JSON parser to be smaller and run faster.

  2. JSON5 allows floating-point infinities to be expressed as "Infinity", "-Infinity", or "+Infinity" in exactly that case - the initial "I" is capitalized and all other characters are lower case. SQLite also allows the abbreviation "Inf" to be used in place of "Infinity" and it allows both keywords to appear in any combination of upper and lower case letters. Similarly, JSON5 allows "NaN" for not-a-number. SQLite extends this to also allow "QNaN" and "SNaN" in any combination of upper and lower case letters. Note that SQLite interprets NaN, QNaN, and SNaN as just alternative spellings for "null". This extension has been added because (we are told) there exists a lot of JSON in the wild that includes these non-standard representations for infinity and not-a-number.

+ + +

3.7. Performance Considerations

+ +

+Most JSON functions do their internal processing using JSONB. So if the input is text, they first must translate the input text into JSONB. If the input is already in the JSONB format, no translation is needed, that step can be skipped, and performance is faster.

+For that reason, +when an argument to one JSON function is supplied by another +JSON function, it is usually more efficient to use the "jsonb_" +variant for the function used as the argument. + +

+ + +
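A hedged sketch of the idea (values invented for this example; the inner jsonb_extract() hands JSONB, not text, to the outer function, so no re-parse of text is needed):

  SELECT json_insert('{"x":1}', '$.y', jsonb_extract('{"a":{"b":2}}', '$.a'));
  -- → '{"x":1,"y":{"b":2}}'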

+The aggregate JSON SQL functions are an exception to this rule. Those +functions all do their processing using text instead of JSONB. So for the +aggregate JSON SQL functions, it is more efficient for the arguments +to be supplied using "json_" functions than "jsonb_" +functions. + +

+ + + +

3.8. The JSON BLOB Input Bug

+ +

If a JSON input is a BLOB that is not JSONB and that looks like +text JSON when cast to text, then it is accepted as text JSON. +This is actually a long-standing bug in the original implementation +that the SQLite developers were unaware of. The documentation stated +that a BLOB input to a JSON function should raise an error. But in the +actual implementation, the input would be accepted as long +as the BLOB content was a valid JSON string in the text encoding of +the database. + +

This JSON BLOB input bug was accidentally fixed when the JSON routines +were reimplemented for the 3.45.0 release (2024-01-15). +That caused breakage in applications that had come to depend on the old +behavior. (In defense of those applications: they were often lured into +using BLOBs as JSON by the readfile() SQL function +available in the CLI. Readfile() was used to read JSON from disk files, +but readfile() returns a BLOB. And that worked for them, so why not just +do it?) + +

For backwards compatibility, the (formerly incorrect) legacy behavior of interpreting BLOBs as text JSON if no other interpretation works is hereby documented and is officially supported in version 3.45.1 (2024-01-30) and all subsequent releases.

4. Function Details

+ +

The following sections provide additional detail on the operation of +the various JSON functions and operators: + + + +

4.1. The json() function

+ +

The json(X) function verifies that its argument X is a valid +JSON string or JSONB blob and returns a minified version of that JSON string +with all unnecessary whitespace removed. If X is not a well-formed +JSON string or JSONB blob, then this routine throws an error. + +

If the input is JSON5 text, then it is converted into canonical +RFC-8259 text prior to being returned. + +

If the argument X to json(X) contains JSON objects with duplicate +labels, then it is undefined whether or not the duplicates are +preserved. The current implementation preserves duplicates. +However, future enhancements +to this routine may choose to silently remove duplicates. + +

+Example: + +

+ + + + +
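An illustrative call (result shown as a comment):

  SELECT json(' { "this" : "is", "a": [ "test" ] } ');
  -- → '{"this":"is","a":["test"]}'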

4.2. The jsonb() function

+ +

The jsonb(X) function returns the binary JSONB representation +of the JSON provided as argument X. An error is raised if X is +TEXT that does not have valid JSON syntax. + +

If X is a BLOB and appears to be JSONB, +then this routine simply returns a copy of X. +Only the outer-most element of the JSONB input is examined, however. +The deep structure of the JSONB is not validated. + + + +

4.3. The json_array() function

+ +

The json_array() SQL function accepts zero or more arguments and +returns a well-formed JSON array that is composed from those arguments. +If any argument to json_array() is a BLOB then an error is thrown. + +

An argument with SQL type TEXT is normally converted into a quoted +JSON string. However, if the argument is the output from another json1 +function, then it is stored as JSON. This allows calls to json_array() +and json_object() to be nested. The json() function can also +be used to force strings to be recognized as JSON. + +

Examples: + +

+ + + + + +
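Illustrative calls (results shown as comments):

  SELECT json_array(1, 2, '3', 4);            -- → '[1,2,"3",4]'
  SELECT json_array('[1,2]');                 -- → '["[1,2]"]'   (plain text is quoted)
  SELECT json_array(json_array(1, 2));        -- → '[[1,2]]'     (nested JSON stays JSON)
  SELECT json_array(1, null, '3', '[4,5]');   -- → '[1,null,"3","[4,5]"]'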

4.4. The jsonb_array() function

+ +

The jsonb_array() SQL function works just like the json_array() function except that it returns the constructed JSON array in SQLite's private JSONB format rather than in the standard RFC 8259 text format.

4.5. The json_array_length() function

+ +

The json_array_length(X) function returns the number of elements +in the JSON array X, or 0 if X is some kind of JSON value other +than an array. The json_array_length(X,P) locates the array at path P +within X and returns the length of that array, or 0 if path P locates +an element in X that is not a JSON array, and NULL if path P does not +locate any element of X. Errors are thrown if either X is not +well-formed JSON or if P is not a well-formed path. + +

Examples: + +

+ + + + + +
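A few illustrative queries (results as comments):

  SELECT json_array_length('[1,2,3,4]');                 -- → 4
  SELECT json_array_length('[1,2,3,4]', '$[2]');         -- → 0     ($[2] is not an array)
  SELECT json_array_length('{"one":[1,2,3]}');           -- → 0     (top level is an object)
  SELECT json_array_length('{"one":[1,2,3]}', '$.one');  -- → 3
  SELECT json_array_length('{"one":[1,2,3]}', '$.two');  -- → NULL  (path not found)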

4.6. The json_error_position() function

+ +

The json_error_position(X) function returns 0 if the input X is a +well-formed JSON or JSON5 string. If the input X contains one or more +syntax errors, then this function returns the character position of the +first syntax error. The left-most character is position 1. + +

If the input X is a BLOB, then this routine returns 0 if X is +a well-formed JSONB blob. If the return value is positive, then it +represents the approximate 1-based position in the BLOB of the +first detected error. + +
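An illustrative sketch (the exact position reported for the bad input may differ between SQLite versions):

  SELECT json_error_position('{"a":1,"b":2}');   -- → 0  (well-formed)
  SELECT json_error_position('{"a":1,"b":}');    -- → a positive character position near the missing value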

+The json_error_position() function was added with +SQLite version 3.42.0 (2023-05-16). + + + + +

4.7. The json_extract() function

+ +

The json_extract(X,P1,P2,...) extracts and returns one or more +values from the +well-formed JSON at X. If only a single path P1 is provided, then the +SQL datatype of the result is NULL for a JSON null, INTEGER or REAL +for a JSON numeric value, an INTEGER zero for a JSON false value, +an INTEGER one for a JSON true value, the dequoted text for a +JSON string value, and a text representation for JSON object and array values. +If there are multiple path arguments (P1, P2, and so forth) then this +routine returns SQLite text which is a well-formed JSON array holding +the various values. + +

Examples: + +

+ + +
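Some illustrative calls (results as comments):

  SELECT json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$');         -- → '{"a":2,"c":[4,5,{"f":7}]}'
  SELECT json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.c');       -- → '[4,5,{"f":7}]'
  SELECT json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.c[2].f');  -- → 7
  SELECT json_extract('{"a":2,"c":[4,5],"f":7}', '$.c', '$.a');  -- → '[[4,5],2]'
  SELECT json_extract('{"a":2,"c":[4,5],"f":7}', '$.x');         -- → NULL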

There is a subtle incompatibility between the json_extract() function +in SQLite and the json_extract() function in MySQL. The MySQL version +of json_extract() always returns JSON. The SQLite version of +json_extract() only returns JSON if there are two or more PATH arguments +(because the result is then a JSON array) or if the single PATH argument +references an array or object. In SQLite, if json_extract() has only +a single PATH argument and that PATH references a JSON null or a string +or a numeric value, then json_extract() returns the corresponding SQL +NULL, TEXT, INTEGER, or REAL value. + +

The difference between MySQL json_extract() and SQLite json_extract() +really only stands out when accessing individual values within the JSON +that are strings or NULLs. The following table demonstrates the difference: + +

+ +
Operation                                    | SQLite Result | MySQL Result
json_extract('{"a":null,"b":"xyz"}','$.a')   | NULL          | 'null'
json_extract('{"a":null,"b":"xyz"}','$.b')   | 'xyz'         | '"xyz"'
+ + + +

4.8. The jsonb_extract() function

+ +

+The jsonb_extract() function works the same as the json_extract() function, except in cases where json_extract() would normally return a text JSON array or object, this routine returns the array or object in the JSONB format. For the common case where a text, numeric, null, or boolean JSON element is returned, this routine works exactly the same as json_extract().

4.9. The -> and ->> operators

+ +

Beginning with SQLite version 3.38.0 (2022-02-22), the -> +and ->> operators are available for extracting subcomponents of JSON. +The SQLite implementation of -> and ->> strives to be +compatible with both MySQL and PostgreSQL. +The -> and ->> operators take a JSON string or JSONB blob +as their left operand and a PATH expression or object field +label or array index as their right operand. The -> operator +returns a text JSON representation of the selected subcomponent or +NULL if that subcomponent does not exist. The ->> operator returns +an SQL TEXT, INTEGER, REAL, or NULL value that represents the selected +subcomponent, or NULL if the subcomponent does not exist. + +

Both the -> and ->> operators select the same subcomponent +of the JSON to their left. The difference is that -> always returns a +JSON representation of that subcomponent and the ->> operator always +returns an SQL representation of that subcomponent. Thus, these operators +are subtly different from a two-argument json_extract() function call. +A call to json_extract() with two arguments will return a JSON representation +of the subcomponent if and only if the subcomponent is a JSON array or +object, and will return an SQL representation of the subcomponent if the +subcomponent is a JSON null, string, or numeric value. + +

When the -> operator returns JSON, it always returns the RFC 8259 text representation of that JSON, not JSONB. Use the jsonb_extract() function if you need a subcomponent in the JSONB format.

The right-hand operand to the -> and ->> operators can +be a well-formed JSON path expression. This is the form used by MySQL. +For compatibility with PostgreSQL, +the -> and ->> operators also accept a text object label or +integer array index as their right-hand operand. +If the right operand is a text +label X, then it is interpreted as the JSON path '$.X'. If the right +operand is an integer value N, then it is interpreted as the JSON path '$[N]'. + +

Examples: + +

+ + + + + + + + +
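Illustrative expressions (results as comments):

  SELECT '{"a":2,"c":[4,5,{"f":7}]}' -> '$.c[2].f';    -- → '7'    (JSON text)
  SELECT '{"a":2,"c":[4,5,{"f":7}]}' ->> '$.c[2].f';   -- → 7      (SQL integer)
  SELECT '{"a":"xyz"}' -> '$.a';                       -- → '"xyz"'
  SELECT '{"a":"xyz"}' ->> '$.a';                      -- → 'xyz'
  SELECT '[11,22,33,44]' -> 3;                         -- → '44'   (integer right operand means '$[3]')
  SELECT '{"a":"xyz"}' ->> 'a';                        -- → 'xyz'  (text right operand means '$.a')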

4.10. The json_insert(), json_replace, and json_set() functions

+ +

The json_insert(), json_replace(), and json_set() functions all take a single JSON value as their first argument followed by zero or more pairs of path and value arguments, and return a new JSON string formed by updating the input JSON by the path/value pairs. The functions differ only in how they deal with creating new values and overwriting preexisting values.

+ + + + + +
Function        | Overwrite if already exists? | Create if does not exist?
json_insert()   | No                           | Yes
json_replace()  | Yes                          | No
json_set()      | Yes                          | Yes
+ +

The json_insert(), json_replace(), and json_set() functions always +take an odd number of arguments. The first argument is always the original +JSON to be edited. Subsequent arguments occur in pairs with the first +element of each pair being a path and the second element being the value +to insert or replace or set on that path. + +

Edits occur sequentially from left to right. Changes caused by +prior edits can affect the path search for subsequent edits. + +

If the value of a path/value pair is an SQLite TEXT value, then it +is normally inserted as a quoted JSON string, even if the string looks +like valid JSON. However, if the value is the result of another +json function (such as json() or json_array() or json_object()) +or if it is the result of the -> operator, +then it is interpreted as JSON and is inserted as JSON retaining all +of its substructure. Values that are the result of the ->> operator +are always interpreted as TEXT and are inserted as a JSON string even +if they look like valid JSON. + +

These routines throw an error if the first JSON argument is not +well-formed or if any PATH argument is not well-formed or if any +argument is a BLOB. + +

To append an element onto the end of an array, use json_insert() with an array index of "#". Examples:

+ + +
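  SELECT json_insert('[1,2,3,4]', '$[#]', 99);        -- → '[1,2,3,4,99]'
  SELECT json_insert('[1,[2,3],4]', '$[1][#]', 99);   -- → '[1,[2,3,99],4]'

(Illustrative queries; results shown as comments.)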

Other examples: + +

+ + + + + + + + +
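Further illustrative calls (results as comments):

  SELECT json_insert('{"a":2,"c":4}', '$.a', 99);             -- → '{"a":2,"c":4}'    (no overwrite)
  SELECT json_insert('{"a":2,"c":4}', '$.e', 99);             -- → '{"a":2,"c":4,"e":99}'
  SELECT json_replace('{"a":2,"c":4}', '$.a', 99);            -- → '{"a":99,"c":4}'
  SELECT json_replace('{"a":2,"c":4}', '$.e', 99);            -- → '{"a":2,"c":4}'    (no create)
  SELECT json_set('{"a":2,"c":4}', '$.c', '[97,96]');         -- → '{"a":2,"c":"[97,96]"}'  (TEXT is quoted)
  SELECT json_set('{"a":2,"c":4}', '$.c', json('[97,96]'));   -- → '{"a":2,"c":[97,96]}'    (JSON kept as JSON)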

4.11. The jsonb_insert(), jsonb_replace, and jsonb_set() functions

+ +

The jsonb_insert(), jsonb_replace(), and jsonb_set() functions work the +same as json_insert(), json_replace(), and json_set(), respectively, +except that "jsonb_" versions return their result in the binary +JSONB format. + + + +

4.12. The json_object() function

+ +

The json_object() SQL function accepts zero or more pairs of arguments +and returns a well-formed JSON object that is composed from those arguments. +The first argument of each pair is the label and the second argument of +each pair is the value. +If any argument to json_object() is a BLOB then an error is thrown. + +

The json_object() function currently allows duplicate labels without +complaint, though this might change in a future enhancement. + +

An argument with SQL type TEXT is normally converted into a quoted JSON string even if the input text is well-formed JSON. However, if the argument is the direct result from another JSON function or the -> operator (but not the ->> operator), then it is treated as JSON and all of its JSON type information and substructure is preserved. This allows calls to json_object() and json_array() to be nested. The json() function can also be used to force strings to be recognized as JSON.

Examples: + +

+ + + + +
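Illustrative calls (results as comments):

  SELECT json_object('a', 2, 'c', 4);                     -- → '{"a":2,"c":4}'
  SELECT json_object('a', 2, 'c', '{e:5}');               -- → '{"a":2,"c":"{e:5}"}'
  SELECT json_object('a', 2, 'c', json_object('e', 5));   -- → '{"a":2,"c":{"e":5}}'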

4.13. The jsonb_object() function

+ +

+The jsonb_object() function works just like the json_object() function +except that the generated object is returned in the binary JSONB format. + + + +

4.14. The json_patch() function

+ +

The json_patch(T,P) SQL function runs the +RFC-7396 MergePatch algorithm +to apply patch P against input T. The patched copy of T is returned. + +

MergePatch can add, modify, or delete elements of a JSON Object, +and so for JSON Objects, the json_patch() routine is a generalized +replacement for json_set() and json_remove(). However, MergePatch +treats JSON Array objects as atomic. MergePatch cannot append to an +Array nor modify individual elements of an Array. It can only insert, +replace, or delete the whole Array as a single unit. Hence, json_patch() +is not as useful when dealing with JSON that includes Arrays, +especially Arrays with lots of substructure. + +

Examples: + +

+ + + + + +
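A few illustrative patches (results as comments):

  SELECT json_patch('{"a":1,"b":2}', '{"c":3,"d":4}');           -- → '{"a":1,"b":2,"c":3,"d":4}'
  SELECT json_patch('{"a":[1,2],"b":2}', '{"a":9}');             -- → '{"a":9,"b":2}'   (arrays replaced whole)
  SELECT json_patch('{"a":1,"b":2}', '{"a":9,"b":null,"c":8}');  -- → '{"a":9,"c":8}'   (null deletes "b")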

4.15. The jsonb_patch() function

+ +

+The jsonb_patch() function works just like the json_patch() function +except that the patched JSON is returned in the binary JSONB format. + + + +

4.16. The json_pretty() function

+

+The json_pretty() function works like json() except that it adds +extra whitespace to make the JSON result easier for humans to read. +The first argument is the JSON or JSONB that is to be pretty-printed. +The optional second argument is a text string that is used for indentation. +If the second argument is omitted or is NULL, then indentation is four +spaces per level. +
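One illustrative call (output shown approximately, assuming the default four-space indent):

  SELECT json_pretty('{"a":[1,2],"b":3}');
  -- {
  --     "a": [
  --         1,
  --         2
  --     ],
  --     "b": 3
  -- }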

+The json_pretty() function was added with SQLite version 3.46.0 +(2024-05-23). + + + +

4.17. The json_remove() function

+ +

The json_remove(X,P,...) function takes a single JSON value as its +first argument followed by zero or more path arguments. +The json_remove(X,P,...) function returns +a copy of the X parameter with all the elements +identified by path arguments removed. Paths that select elements +not found in X are silently ignored. + +

Removals occur sequentially from left to right. Changes caused by prior removals can affect the path search for subsequent arguments.

If the json_remove(X) function is called with no path arguments, +then it returns the input X reformatted, with excess whitespace +removed. + +

The json_remove() function throws an error if the first argument +is not well-formed JSON or if any later argument is not a well-formed +path. + +

Examples: + +

+ + + + +
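Illustrative calls (results as comments):

  SELECT json_remove('[0,1,2,3,4]', '$[2]');          -- → '[0,1,3,4]'
  SELECT json_remove('[0,1,2,3,4]', '$[2]', '$[0]');  -- → '[1,3,4]'   (edits apply left to right)
  SELECT json_remove('{"x":25,"y":42}', '$.y');       -- → '{"x":25}'
  SELECT json_remove('{"x":25,"y":42}', '$.z');       -- → '{"x":25,"y":42}'  (missing path ignored)
  SELECT json_remove('{"x":25,"y":42}', '$');         -- → NULL  (whole document removed)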

4.18. The jsonb_remove() function

+

+The jsonb_remove() function works just like the json_remove() function +except that the edited JSON result is returned in the binary JSONB format. + + + +

4.19. The json_type() function

+ +

The json_type(X) function returns the "type" of the outermost element +of X. The json_type(X,P) function returns the "type" of the element +in X that is selected by path P. The "type" returned by json_type() is +one of the following SQL text values: +'null', 'true', 'false', 'integer', 'real', 'text', 'array', or 'object'. +If the path P in json_type(X,P) selects an element that does not exist +in X, then this function returns NULL. + +

The json_type() function throws an error if its first argument is +not well-formed JSON or JSONB or if its second argument is not a well-formed +JSON path. + +

Examples: + +

+ + + + +
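Some illustrative queries (results as comments):

  SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}');            -- → 'object'
  SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a');     -- → 'array'
  SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[1]');  -- → 'real'
  SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[2]');  -- → 'true'
  SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[6]');  -- → NULL  (no such element)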

4.20. The json_valid() function

+ +

The json_valid(X,Y) function returns 1 if the argument X is well-formed JSON, or returns 0 if X is not well-formed. The Y parameter is an integer bitmask that defines what is meant by "well-formed". The following bits of Y are currently defined:

+ +

By combining bits, the following useful values of Y can be derived: + +

+ +

The Y parameter is optional. If omitted, it defaults to 1, which means +that the default behavior is to return true only if the input X is +strictly conforming RFC-8259 JSON text without any extensions. This +makes the one-argument version of json_valid() compatible with older +versions of SQLite, prior to the addition of support for +JSON5 and JSONB. + +

The difference between the 0x04 and 0x08 bits in the Y parameter is that 0x04 only examines the outer wrapper of the BLOB to see if it superficially looks like JSONB. This is sufficient for most purposes and is very fast. The 0x08 bit does a thorough examination of all internal details of the BLOB. The 0x08 bit takes time that is linear in the size of the X input and is much slower. The 0x04 bit is recommended for most purposes.

If you just want to know if a value is a plausible input to one of +the other JSON functions, a Y value of 6 is probably what you want to use. + +

Any Y value less than 1 or greater than 15 raises an error, for the +latest version of json_valid(). However, future versions of json_valid() +might be enhanced to accept flag values outside of this range, having new +meanings that we have not yet thought of. + +

If either X or Y inputs to json_valid() are NULL, then the function +returns NULL. + +

Examples: + +

+ + + + +
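Illustrative calls (results as comments):

  SELECT json_valid('{"x":35}');    -- → 1
  SELECT json_valid('{"x":35');     -- → 0     (unbalanced brace)
  SELECT json_valid('{x:35}');      -- → 0     (JSON5 key; default Y=1 means strict RFC-8259 only)
  SELECT json_valid('{x:35}', 2);   -- → 1     (0x02 bit accepts JSON5 text)
  SELECT json_valid(NULL);          -- → NULL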

4.21. The json_quote() function

+ +

The json_quote(X) function converts the SQL value X (a number or a +string) into its corresponding JSON representation. If X is a JSON value +returned by another JSON function, then this function is a no-op. + +

Examples: + +

+ + + + + + + +
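For example (illustrative; results as comments):

  SELECT json_quote(3.14159);        -- → 3.14159
  SELECT json_quote('verdant');      -- → '"verdant"'
  SELECT json_quote('[1]');          -- → '"[1]"'   (plain SQL text is quoted)
  SELECT json_quote(json('[1]'));    -- → '[1]'     (already JSON: passed through)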

4.22. Array and object aggregate functions

+ +

The json_group_array(X) function is an +aggregate SQL function that returns a JSON array +comprised of all X values in the aggregation. +Similarly, the json_group_object(NAME,VALUE) function returns a JSON object +comprised of all NAME/VALUE pairs in the aggregation. +The "jsonb_" variants are the same except that they return their +result in the binary JSONB format. + + + + + +
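An illustrative sketch using a hypothetical table (names and data invented for this example):

  CREATE TABLE pets(name TEXT, kind TEXT);
  INSERT INTO pets VALUES ('Rex','dog'), ('Whiskers','cat');
  SELECT json_group_array(name) FROM pets;          -- e.g. '["Rex","Whiskers"]'
  SELECT json_group_object(name, kind) FROM pets;   -- e.g. '{"Rex":"dog","Whiskers":"cat"}'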

4.23. The json_each() and json_tree() table-valued functions

+ +

The json_each(X) and json_tree(X) table-valued functions walk the +JSON value provided as their first argument and return one row for each +element. The json_each(X) function only walks the immediate children +of the top-level array or object, +or just the top-level element itself if the top-level +element is a primitive value. +The json_tree(X) function recursively walks through the +JSON substructure starting with the top-level element. + +

The json_each(X,P) and json_tree(X,P) functions work just like +their one-argument counterparts except that they treat the element +identified by path P as the top-level element. + +

The schema for the table returned by json_each() and json_tree() is +as follows: + +

+CREATE TABLE json_tree(
+    key ANY,             -- key for current element relative to its parent
+    value ANY,           -- value for the current element
+    type TEXT,           -- 'object','array','string','integer', etc.
+    atom ANY,            -- value for primitive types, null for array & object
+    id INTEGER,          -- integer ID for this element
+    parent INTEGER,      -- integer ID for the parent of this element
+    fullkey TEXT,        -- full path describing the current element
+    path TEXT,           -- path to the container of the current row
+    json JSON HIDDEN,    -- 1st input parameter: the raw JSON
+    root TEXT HIDDEN     -- 2nd input parameter: the PATH at which to start
+);
+
+ +

+The "key" column is the integer array index for elements of a JSON array +and the text label for elements of a JSON object. The key column is +NULL in all other cases. + +

+The "atom" column is the SQL value corresponding to primitive elements - +elements other than JSON arrays and objects. The "atom" column is NULL +for a JSON array or object. The "value" column is the same as the +"atom" column for primitive JSON elements but takes on the text JSON value +for arrays and objects. + +

+The "type" column is an SQL text value taken from ('null', 'true', 'false', +'integer', 'real', 'text', 'array', 'object') according to the type of +the current JSON element. + +

+The "id" column is an integer that identifies a specific JSON element +within the complete JSON string. The "id" integer is an internal housekeeping +number, the computation of which might change in future releases. The +only guarantee is that the "id" column will be different for every row. + +

+The "parent" column is always NULL for json_each(). +For json_tree(), +the "parent" column is the "id" integer for the parent of the current +element, or NULL for the top-level JSON element or the element identified +by the root path in the second argument. + +

+The "fullkey" column is a text path that uniquely identifies the current +row element within the original JSON string. The complete key to the +true top-level element is returned even if an alternative starting point +is provided by the "root" argument. + +

+The "path" column is the path to the array or object container that holds +the current row, or the path to the current row in the case where the +iteration starts on a primitive type and thus only provides a single +row of output. + +

4.23.1. Examples using json_each() and json_tree()

+ +

Suppose the table "CREATE TABLE user(name,phone)" stores zero or +more phone numbers as a JSON array object in the user.phone field. +To find all users who have any phone number with a 704 area code: + +

+SELECT DISTINCT user.name
+  FROM user, json_each(user.phone)
+ WHERE json_each.value LIKE '704-%';
+
+ +

Now suppose the user.phone field contains plain text if the user +has only a single phone number and a JSON array if the user has multiple +phone numbers. The same question is posed: "Which users have a phone number +in the 704 area code?" But now the json_each() function can only be called +for those users that have two or more phone numbers since json_each() +requires well-formed JSON as its first argument: + +

+SELECT name FROM user WHERE phone LIKE '704-%'
+UNION
+SELECT user.name
+  FROM user, json_each(user.phone)
+ WHERE json_valid(user.phone)
+   AND json_each.value LIKE '704-%';
+
+ +

Consider a different database with "CREATE TABLE big(json JSON)". +To see a complete line-by-line decomposition of the data: + +

+SELECT big.rowid, fullkey, value
+  FROM big, json_tree(big.json)
+ WHERE json_tree.type NOT IN ('object','array');
+
+ +

In the previous query, the "type NOT IN ('object','array')" term of the WHERE clause suppresses containers and only lets through leaf elements. The same effect could be achieved this way:

+SELECT big.rowid, fullkey, atom
+  FROM big, json_tree(big.json)
+ WHERE atom IS NOT NULL;
+
+ +

Suppose each entry in the BIG table is a JSON object +with a '$.id' field that is a unique identifier +and a '$.partlist' field that can be a deeply nested object. +You want to find the id of every entry that contains one +or more references to uuid '6fa5181e-5721-11e5-a04e-57f3d7b32808' anywhere +in its '$.partlist'. + +

+SELECT DISTINCT json_extract(big.json,'$.id')
+  FROM big, json_tree(big.json, '$.partlist')
+ WHERE json_tree.key='uuid'
+   AND json_tree.value='6fa5181e-5721-11e5-a04e-57f3d7b32808';
+
+

This page last modified on 2024-07-25 15:06:57 UTC

ADDED Doc/Extra/Core/lang.html
Index: Doc/Extra/Core/lang.html
==================================================================
--- /dev/null
+++ Doc/Extra/Core/lang.html
@@ -0,0 +1,436 @@
+Query Language Understood by SQLite

SQL As Understood By SQLite

+ +

SQLite understands most of the standard SQL +language. But it does omit some features +while at the same time +adding a few features of its own. This document attempts to +describe precisely what parts of the SQL language SQLite does +and does not support. A list of SQL keywords is +also provided. The SQL language syntax is described by +syntax diagrams. + +

The following syntax documentation topics are available:

+ + +
+ +

The routines sqlite3_prepare_v2(), sqlite3_prepare(), +sqlite3_prepare16(), sqlite3_prepare16_v2(), +sqlite3_exec(), and sqlite3_get_table() accept +an SQL statement list (sql-stmt-list) which is a semicolon-separated +list of statements.

+ +

sql-stmt-list:

+
[syntax diagram: sql-stmt-list — a list of sql-stmt elements separated by ";" characters]
+
+ + +

Each SQL statement in the statement list is an instance of the +following:

+ +

sql-stmt:

+
[syntax diagram: sql-stmt — an optional "EXPLAIN" or "EXPLAIN QUERY PLAN" prefix followed by one of: alter-table-stmt, analyze-stmt, attach-stmt, begin-stmt, commit-stmt, create-index-stmt, create-table-stmt, create-trigger-stmt, create-view-stmt, create-virtual-table-stmt, delete-stmt, delete-stmt-limited, detach-stmt, drop-index-stmt, drop-table-stmt, drop-trigger-stmt, drop-view-stmt, insert-stmt, pragma-stmt, reindex-stmt, release-stmt, rollback-stmt, savepoint-stmt, select-stmt, update-stmt, update-stmt-limited, vacuum-stmt]
+
+ +

This page last modified on 2024-04-01 12:41:31 UTC

ADDED Doc/Extra/Core/lang_aggfunc.html
Index: Doc/Extra/Core/lang_aggfunc.html
==================================================================
--- /dev/null
+++ Doc/Extra/Core/lang_aggfunc.html
@@ -0,0 +1,2575 @@
+Built-in Aggregate Functions +

1. Syntax

+

aggregate-function-invocation: +

+
+
[syntax diagram: aggregate-function-invocation — aggregate-func "(" followed by either "*" or an optional "DISTINCT" keyword and a comma-separated list of expr, optionally followed by "ORDER BY" and a comma-separated list of ordering-term, then ")" and an optional filter-clause]
+

expr: +

+ +

filter-clause: +

+ +

ordering-term: +

+ +
+ + +

+The aggregate functions shown below are available by default. There are two more aggregates grouped with the JSON SQL functions. Applications can define custom aggregate functions using the sqlite3_create_function() API.

+ +

+In any aggregate function that takes a single argument, that argument +can be preceded by the keyword DISTINCT. In such cases, duplicate +elements are filtered before being passed into the aggregate function. +For example, the function "count(distinct X)" will return the number +of distinct values of column X instead of the total number of non-null +values in column X. +

+ + + +

+If a FILTER clause is provided, then only rows for which the expr is +true are included in the aggregate. +

+ + + +

+If an ORDER BY clause is provided, that clause determines the order in which +the inputs to the aggregate are processed. For aggregate functions like max() +and count(), the input order does not matter. But for things like +string_agg() and json_group_object(), the ORDER BY clause will make a +difference in the result. If no ORDER BY clause is specified, the inputs to the +aggregate occur in an arbitrary order that might change from one invocation +to the next. +
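A sketch combining these modifiers on a hypothetical sales(region, amount) table (FILTER requires SQLite 3.30+, and ORDER BY inside an aggregate requires 3.44+):

  CREATE TABLE sales(region TEXT, amount INTEGER);
  INSERT INTO sales VALUES ('east',50), ('west',120), ('east',200);
  SELECT count(DISTINCT region)                      AS regions,      -- 2
         sum(amount) FILTER (WHERE amount > 100)     AS big_total,    -- 320
         group_concat(region, ',' ORDER BY region)   AS region_list   -- 'east,east,west'
    FROM sales;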

+ +

+See also: scalar functions and window functions. + + + +

2. List of built-in aggregate functions

+ + + + +

3. Descriptions of built-in aggregate functions

+
+ + +

avg(X)

+ The avg() function + returns the average value of all non-NULL X within a + group. String and BLOB values that do not look like numbers are + interpreted as 0. + The result of avg() is always a floating point value whenever + there is at least one non-NULL input even if all + inputs are integers. The result of avg() is NULL if + there are no non-NULL inputs. The result of avg() is computed + as total()/count() so all of the constraints that apply to + total() also apply to avg(). +

+ +

count(X)
count(*)

+ The count(X) function returns + a count of the number of times + that X is not NULL in a group. The count(*) function + (with no arguments) returns the total number of rows in the group. +

+ +

group_concat(X)
group_concat(X,Y)
string_agg(X,Y)

The group_concat() function returns a string which is the concatenation of all non-NULL values of X. If parameter Y is present then it is used as the separator between instances of X. A comma (",") is used as the separator if Y is omitted.

+ The string_agg(X,Y) function is an alias + for group_concat(X,Y). String_agg() is compatible with PostgreSQL + and SQL-Server and group_concat() is compatible with MySQL. +

+ The order of the concatenated elements is arbitrary unless an + ORDER BY argument is included immediately after the last parameter. +

+ +

max(X)

+ The max() aggregate function + returns the maximum value of all values in the group. + The maximum value is the value that would be returned last in an + ORDER BY on the same column. Aggregate max() returns NULL + if and only if there are no non-NULL values in the group. +

+ +

min(X)

+ The min() aggregate function + returns the minimum non-NULL value of all values in the group. + The minimum value is the first non-NULL value that would appear + in an ORDER BY of the column. + Aggregate min() returns NULL if and only if there are no non-NULL + values in the group. +

+ +

sum(X)
total(X)

+ The sum() and total() aggregate functions + return the sum of all non-NULL values in the group. + If there are no non-NULL input rows then sum() returns + NULL but total() returns 0.0. + NULL is not normally a helpful result for the sum of no rows + but the SQL standard requires it and most other + SQL database engines implement sum() that way so SQLite does it in the + same way in order to be compatible. The non-standard total() function + is provided as a convenient way to work around this design problem + in the SQL language.

+ +

The result of total() is always a floating point value. + The result of sum() is an integer value if all non-NULL inputs are integers. + If any input to sum() is neither an integer nor a NULL, + then sum() returns a floating point value + which is an approximation of the mathematical sum.

+ +

Sum() will throw an "integer overflow" exception if all inputs + are integers or NULL + and an integer overflow occurs at any point during the computation. + No overflow error is ever raised if any prior input was a floating point + value. + Total() never throws an integer overflow. + +

When summing floating-point values, if the magnitudes of the values + differ wildly then the resulting sum might be imprecise due to the fact that + IEEE 754 floating point values are approximations. + Use the decimal_sum(X) aggregate in the decimal extension to obtain + an exact summation of floating point numbers. Consider this test case: + +

+CREATE TABLE t1(x REAL);
+INSERT INTO t1 VALUES(1.55e+308),(1.23),(3.2e-16),(-1.23),(-1.55e308);
+SELECT sum(x), decimal_sum(x) FROM t1;
+

+ +

The large values ±1.55e+308 cancel each other out, but the + cancellation does not occur until the end of the sum and in the meantime + the large +1.55e+308 swamps the tiny 3.2e-16 value. The end result is + an imprecise result for the sum(). The decimal_sum() aggregate + generates an exact answer, at the cost of additional CPU and memory usage. + Note also that decimal_sum() is not built into the SQLite core; it is a + loadable extension. + +

If the sum of the inputs is too large to represent as an IEEE 754 floating point value, then a +Infinity or -Infinity result may be returned. If very large values with differing signs are used such that the SUM() or TOTAL() function is unable to determine if the correct result is +Infinity or -Infinity or some other value in between, then the result is NULL. Hence, for example, the following query returns NULL:

+WITH t1(x) AS (VALUES(1.0),(-9e+999),(2.0),(+9e+999),(3.0))
+ SELECT sum(x) FROM t1;
+

+
+ +
+

This page last modified on 2023-12-05 14:43:20 UTC

ADDED Doc/Extra/Core/lang_altertable.html
Index: Doc/Extra/Core/lang_altertable.html
==================================================================
--- /dev/null
+++ Doc/Extra/Core/lang_altertable.html
@@ -0,0 +1,3430 @@
+ALTER TABLE +

1. Overview

+ +

alter-table-stmt: +

+
+
[syntax diagram: alter-table-stmt — "ALTER TABLE" [schema-name "."] table-name followed by one of: "RENAME TO" new-table-name; "RENAME [COLUMN]" column-name "TO" new-column-name; "ADD [COLUMN]" column-def; "DROP [COLUMN]" column-name]
+

column-def: +

+ +
+ + +

SQLite supports a limited subset of ALTER TABLE. +The ALTER TABLE command in SQLite allows +these alterations of an existing table: +it can be renamed; +a column can be renamed; +a column can be added to it; +or +a column can be dropped from it. + + + +

2. ALTER TABLE RENAME

+ +

The RENAME TO syntax changes the name of table-name +to new-table-name. +This command +cannot be used to move a table between attached databases, only to rename +a table within the same database. +If the table being renamed has triggers or indices, then these remain +attached to the table after it has been renamed. + +
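For example (hypothetical table name):

  ALTER TABLE vacation RENAME TO trip;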

+Compatibility Note: +The behavior of ALTER TABLE when renaming a table was enhanced +in versions 3.25.0 (2018-09-15) and 3.26.0 (2018-12-01) +in order to carry the rename operation forward into triggers and +views that reference the renamed table. This is considered an +improvement. Applications that depend on the older (and +arguably buggy) behavior can use the +PRAGMA legacy_alter_table=ON statement or the +SQLITE_DBCONFIG_LEGACY_ALTER_TABLE configuration parameter +on sqlite3_db_config() interface to make ALTER TABLE RENAME +behave as it did prior to version 3.25.0. +
+ +

+Beginning with release 3.25.0 (2018-09-15), references to the table +within trigger bodies and view definitions are also renamed. +

+ +

Prior to version 3.26.0 (2018-12-01), FOREIGN KEY references +to a table that is renamed were only edited if the +PRAGMA foreign_keys=ON, or in other words if +foreign key constraints were being enforced. With +PRAGMA foreign_keys=OFF, FOREIGN KEY constraints would not be changed +when the table that the foreign key referred to (the "parent table") +was renamed. Beginning with version 3.26.0, FOREIGN KEY constraints +are always converted when a table is renamed, unless the +PRAGMA legacy_alter_table=ON setting is engaged. The following +table summarizes the difference:

+ +
+ + +
PRAGMA foreign_keys | PRAGMA legacy_alter_table | Parent table references are updated | SQLite version
Off                 | Off                       | No                                  | < 3.26.0
Off                 | Off                       | Yes                                 | >= 3.26.0
On                  | Off                       | Yes                                 | all
Off                 | On                        | No                                  | all
On                  | On                        | Yes                                 | all
+
+ + + +

3. ALTER TABLE RENAME COLUMN

+ +

The RENAME COLUMN TO syntax changes the +column-name of table table-name +into new-column-name. The column name is changed both +within the table definition itself and also within all indexes, triggers, +and views that reference the column. If the column name change would +result in a semantic ambiguity in a trigger or view, then the RENAME +COLUMN fails with an error and no changes are applied. + + + +
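For example (hypothetical table and column names):

  ALTER TABLE trip RENAME COLUMN destination TO dest;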

4. ALTER TABLE ADD COLUMN

+ +

The ADD COLUMN syntax +is used to add a new column to an existing table. +The new column is always appended to the end of the list of existing columns. +The column-def rule defines the characteristics of the new column. +The new column may take any of the forms permissible in a CREATE TABLE +statement, with the following restrictions: +

    +
  • The column may not have a PRIMARY KEY or UNIQUE constraint.
  • +
  • The column may not have a default value of CURRENT_TIME, CURRENT_DATE, + CURRENT_TIMESTAMP, or an expression in parentheses.
  • +
  • If a NOT NULL constraint is specified, then the column must have a + default value other than NULL. +
  • If foreign key constraints are enabled and + a column with a REFERENCES clause + is added, the column must have a default value of NULL. +
  • The column may not be GENERATED ALWAYS ... STORED, +though VIRTUAL columns are allowed. +
+ +
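An illustrative pair of statements (hypothetical table; the second violates the restrictions above and would fail):

  ALTER TABLE trip ADD COLUMN notes TEXT DEFAULT '';        -- allowed
  -- ALTER TABLE trip ADD COLUMN id INTEGER PRIMARY KEY;    -- not allowed: PRIMARY KEY on an added column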

When adding a column with a CHECK constraint, or a NOT NULL constraint +on a generated column, the added constraints are tested against all +preexisting rows in the table and the ADD COLUMN fails +if any constraint fails. The testing of added constraints +against preexisting rows is a new enhancement as of SQLite version +3.37.0 (2021-11-27).

+ +

The ALTER TABLE command works by modifying the SQL text of the schema +stored in the sqlite_schema table. +No changes are made to table content for renames or column addition without +constraints. +Because of this, +the execution time of such ALTER TABLE commands is independent of +the amount of data in the table and such commands will +run as quickly on a table with 10 million rows as on a table with 1 row. +When adding new columns that have CHECK constraints, or adding generated +columns with NOT NULL constraints, or when deleting columns, then all +existing data in the table must be either read (to test new constraints +against existing rows) or written (to remove deleted columns). In those +cases, the ALTER TABLE command takes time that is proportional to the +amount of content in the table being altered. +

+ +

After ADD COLUMN has been run on a database, that database will not +be readable by SQLite version 3.1.3 (2005-02-20) and earlier.

+ + + +

5. ALTER TABLE DROP COLUMN

+ +

The DROP COLUMN syntax +is used to remove an existing column from a table. +The DROP COLUMN command removes the named column from the table, +and rewrites its content to purge the data associated +with that column. +The DROP COLUMN command only works if the column is not referenced +by any other parts of the schema and is not a PRIMARY KEY and +does not have a UNIQUE constraint. +Possible reasons why the DROP COLUMN command can fail include: +

    +
  • The column is a PRIMARY KEY or part of one. +
  • The column has a UNIQUE constraint. +
  • The column is indexed. +
  • The column is named in the WHERE clause of a partial index. +
  • The column is named in a table or column CHECK constraint +not associated with the column being dropped. +
  • The column is used in a foreign key constraint. +
  • The column is used in the expression of a generated column. +
  • The column appears in a trigger or view. +
+ +
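In the simplest case (hypothetical names, and assuming none of the conditions listed above apply):

  ALTER TABLE trip DROP COLUMN notes;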

5.1. How It Works

+ +

SQLite stores the schema as plain text in the sqlite_schema table. +The DROP COLUMN command (and all of the other variations of ALTER TABLE +as well) modify that text and then attempt to reparse the entire schema. +The command is only successful if the schema is still valid after the +text has been modified. In the case of the DROP COLUMN command, the +only text modified is that the column definition is removed from the +CREATE TABLE statement. The DROP COLUMN command will fail if there +are any traces of the column in other parts of the schema that will +prevent the schema from parsing after the CREATE TABLE statement has +been modified. + +

6. Disable Error Checking Using PRAGMA writable_schema=ON

+ +

ALTER TABLE will normally fail and make no changes if it encounters +any entries in the sqlite_schema table that do not parse. For +example, if there is a malformed VIEW or TRIGGER associated with +table named "tbl1", then an attempt to rename "tbl1" to "tbl1neo" will +fail because the associated views and triggers could not be parsed. + +

Beginning with SQLite 3.38.0 (2022-02-22), this error checking +can be disabled by setting "PRAGMA writable_schema=ON;". When +the schema is writable, ALTER TABLE silently ignores any rows of the +sqlite_schema table that do not parse. + + + +

7. Making Other Kinds Of Table Schema Changes

+ +

The only schema altering commands directly supported by SQLite are the +"rename table", "rename column", "add column", "drop column" +commands shown above. However, applications +can make other arbitrary changes to the format of a table using a simple +sequence of operations. +The steps to make arbitrary changes to the schema design of some table X +are as follows: + +

    +
  1. +If foreign key constraints are enabled, disable them using PRAGMA foreign_keys=OFF. + +

  2. +Start a transaction. + +

  3. +Remember the format of all indexes, triggers, and views associated with table X. +This information will be needed in step 8 below. One way to do this is +to run a query like the following: +SELECT type, sql FROM sqlite_schema WHERE tbl_name='X'. + +

  4. +Use CREATE TABLE to construct a new table "new_X" that is in the desired +revised format of table X. Make sure that the name "new_X" does not collide +with any existing table name, of course. + +

  5. +Transfer content from X into new_X using a statement +like: INSERT INTO new_X SELECT ... FROM X. + +

  6. +Drop the old table X: DROP TABLE X. + +

  7. +Change the name of new_X to X using: ALTER TABLE new_X RENAME TO X. + +

  8. +Use CREATE INDEX, CREATE TRIGGER, and CREATE VIEW +to reconstruct indexes, triggers, and views +associated with table X. Perhaps use the old format of the triggers, +indexes, and views saved from step 3 above as a guide, making changes +as appropriate for the alteration. + +

  9. If any views refer to table X in a way that is affected by the +schema change, then drop those views using DROP VIEW and recreate them +with whatever changes are necessary to accommodate the schema change +using CREATE VIEW. + +

  10. +If foreign key constraints were originally enabled +then run PRAGMA foreign_key_check to verify that the schema +change did not break any foreign key constraints. + + +

  11. +Commit the transaction started in step 2. + +

  12. If foreign key constraints were originally enabled, reenable them now.

+ + + +
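A condensed sketch of the 12-step procedure, using a hypothetical table X(a,b,c) whose column b is being given a NOT NULL constraint (steps 8-9 are shown only as a comment):

  PRAGMA foreign_keys=OFF;                                             -- step 1
  BEGIN TRANSACTION;                                                   -- step 2
  SELECT type, sql FROM sqlite_schema WHERE tbl_name='X';              -- step 3: save these definitions
  CREATE TABLE new_X(a INTEGER PRIMARY KEY, b TEXT NOT NULL, c REAL);  -- step 4: revised design
  INSERT INTO new_X SELECT a, coalesce(b,''), c FROM X;                -- step 5
  DROP TABLE X;                                                        -- step 6
  ALTER TABLE new_X RENAME TO X;                                       -- step 7
  -- steps 8-9: recreate the indexes, triggers, and views saved in step 3
  PRAGMA foreign_key_check;                                            -- step 10
  COMMIT;                                                              -- step 11
  PRAGMA foreign_keys=ON;                                              -- step 12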

+Caution: Take care to follow the procedure above precisely. The boxes below summarize two procedures for modifying a table definition. At first glance, they both appear to accomplish the same thing. However, the second (incorrect) procedure does not always work, especially with the enhanced rename table capabilities added by versions 3.25.0 and 3.26.0. In the incorrect procedure, the initial rename of the table to a temporary name might corrupt references to that table in triggers, views, and foreign key constraints. The safe (correct) procedure constructs the revised table definition using a new temporary name, then renames the table into its final name, which does not break links.

+ + + +
+
    +
Correct:
  1. Create new table
  2. Copy data
  3. Drop old table
  4. Rename new into old

Incorrect:
  1. Rename old table
  2. Create new table
  3. Copy data
  4. Drop old table
+
+ +

The 12-step generalized ALTER TABLE procedure above will work even if the schema change causes the information stored in the table to change. So the full 12-step procedure above is appropriate for dropping a column, changing the order of columns, adding or removing a UNIQUE constraint or PRIMARY KEY, adding CHECK or FOREIGN KEY or NOT NULL constraints, or changing the datatype for a column, for example. However, a simpler and faster procedure can optionally be used for some changes that do not affect the on-disk content in any way. The following simpler procedure is appropriate for removing CHECK or FOREIGN KEY or NOT NULL constraints, or adding, removing, or changing default values on a column (a sketch appears after the list):

    +
  1. Start a transaction. + +

  2. Run PRAGMA schema_version to determine the current schema +version number. This number will be needed for step 6 below. + +

  3. Activate schema editing using +PRAGMA writable_schema=ON. + +

  4. Run an UPDATE statement to change the definition of table X +in the sqlite_schema table: +UPDATE sqlite_schema SET sql=... WHERE type='table' AND name='X'; +

    Caution: Making a change to the sqlite_schema table like this will +render the database corrupt and unreadable if the change contains +a syntax error. It is suggested that careful testing of the UPDATE +statement be done on a separate blank database prior to using it on +a database containing important data. + +

  5. If the change to table X also affects other tables, indexes, triggers, or views within the schema, then run UPDATE statements to modify those other tables, indexes, triggers, and views too. For example, if the name of a column changes, all FOREIGN KEY constraints, triggers, indexes, and views that refer to that column must be modified.

    Caution: Once again, making changes to the sqlite_schema +table like this will render the database corrupt and unreadable if the +change contains an error. Carefully test this entire procedure +on a separate test database prior to using it on +a database containing important data and/or make backup copies of +important databases prior to running this procedure. + +

  6. Increment the schema version number using +PRAGMA schema_version=X where X is one +more than the old schema version number found in step 2 above. + +

  7. Disable schema editing using +PRAGMA writable_schema=OFF. + +

  8. (Optional) Run PRAGMA integrity_check to verify that the +schema changes did not damage the database. + +

  9. Commit the transaction started on step 1 above. +

+ +
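As a sketch of this simpler procedure, assume a hypothetical table X from which a CHECK constraint is being removed, and assume that step 2 reports a schema version of 23:

    BEGIN;                                         -- step 1
    PRAGMA schema_version;                         -- step 2: returns 23 in this example
    PRAGMA writable_schema=ON;                     -- step 3
    UPDATE sqlite_schema                           -- step 4: revised definition, CHECK removed
       SET sql='CREATE TABLE X(a INTEGER, b TEXT)'
     WHERE type='table' AND name='X';
    -- step 5 is unnecessary here: no other schema objects refer to the constraint
    PRAGMA schema_version=24;                      -- step 6: old value plus one
    PRAGMA writable_schema=OFF;                    -- step 7
    PRAGMA integrity_check;                        -- step 8 (optional)
    COMMIT;                                        -- step 9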

If some future version of SQLite adds new ALTER TABLE capabilities, +those capabilities will very likely use one of the two procedures +outlined above. + + + +

8. Why ALTER TABLE is such a problem for SQLite

+ +

Most SQL database engines store the schema already parsed into +various system tables. On those database engines, ALTER TABLE merely +has to make modifications to the corresponding system tables. + +

SQLite is different in that it stores the schema +in the sqlite_schema table as the original text of the CREATE +statements that define the schema. Hence ALTER TABLE needs +to revise the text of the CREATE statement. Doing +so can be tricky for certain "creative" schema designs. + +

The SQLite approach of storing the schema as text has advantages +for an embedded relational database. For one, it means that the +schema takes up less space in the database file. This is important +since a common SQLite usage pattern is to have many small, +separate database files instead of putting everything in one +big global database file, which is the usual approach for client/server +database engines. +Since the schema is duplicated in each separate database file, it is +important to keep the schema representation compact. + +

Storing the schema as text rather than as parsed tables also gives flexibility to the implementation. Since the internal parse of the schema is regenerated each time the database is opened, the internal representation of the schema can change from one release to the next. This is important, as sometimes new features require enhancements to the internal schema representation. Changing the internal schema representation would be much more difficult if that representation were exposed in the database file. So, in other words, storing the schema as text helps maintain backwards compatibility, and helps ensure that older database files can be read and written by newer versions of SQLite.

Storing the schema as text also makes the +SQLite database file format easier to define, document, and +understand. This helps make SQLite database files a +recommended storage format for long-term archiving of data. + +

The downside of storing the schema as text is that it can make the schema tricky to modify. And for that reason, the ALTER TABLE support in SQLite has traditionally lagged behind that of other SQL database engines, which store their schemas as parsed system tables that are easier to modify.

This page last modified on 2022-08-10 18:45:48 UTC

+ ADDED Doc/Extra/Core/lang_analyze.html Index: Doc/Extra/Core/lang_analyze.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_analyze.html @@ -0,0 +1,482 @@ + + + + + +ANALYZE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+ANALYZE +
+ + +
+ + + + +

1. Overview

+ +

analyze-stmt: +

+
+
[syntax diagram]  ANALYZE [ schema-name | index-or-table-name | schema-name . table-or-index-name ]
+
+ + +

The ANALYZE command gathers statistics about tables and +indices and stores the collected information +in internal tables of the database where the query optimizer can +access the information and use it to help make better query planning choices. +If no arguments are given, the main database and all attached databases are +analyzed. If a schema name is given as the argument, then all tables +and indices in that one database are analyzed. +If the argument is a table name, then only that table and the +indices associated with that table are analyzed. If the argument +is an index name, then only that one index is analyzed.
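For example (the table and index names below are hypothetical):

    ANALYZE;                  -- analyze everything in all attached databases
    ANALYZE main;             -- analyze all tables and indices in the "main" schema
    ANALYZE my_table;         -- analyze one table and its associated indices
    ANALYZE main.my_index;    -- analyze a single index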

+ + + +

2. Recommended usage patterns

+ +

The use of ANALYZE is never required. However, if an application makes complex queries that have many possible query plans, the query planner will be better able to pick the best plan if ANALYZE has been run. This can result in significant performance improvements for some queries.

Two recommended approaches for when and how to run ANALYZE are +described in the next subsections, in order of preference. + + + +

2.1. Periodically run "PRAGMA optimize"

+ +

The PRAGMA optimize command will automatically run ANALYZE when +needed. Suggested use: + +

    +
  1. +Applications with short-lived database connections should run +"PRAGMA optimize;" once, just prior to closing each database connection. + +

  2. +Applications that use long-lived database connections should run +"PRAGMA optimize=0x10002;" when the connection is first opened, and then +also run "PRAGMA optimize;" periodically, perhaps once per day, or more if +the database is evolving rapidly. + +

  3. All applications should run "PRAGMA optimize;" after a schema change, +especially after one or more CREATE INDEX statements. +

+ +

+ +

The PRAGMA optimize command is usually a no-op but it will occasionally +run one or more ANALYZE subcommands on individual tables of the database +if doing so will be useful to the query planner. +Since SQLite version 3.46.0 (2024-05-23), the "PRAGMA optimize" command +automatically limits the scope of ANALYZE subcommands so that +the overall "PRAGMA optimize" command completes quickly even on enormous +databases. There is no need to use PRAGMA analysis_limit. This is the +recommended way of running ANALYZE moving forward. + +

The PRAGMA optimize command will normally only consider running ANALYZE on +tables that have been previously queried by the same database connection or +that do not have entries in the sqlite_stat1 table. +However, if the 0x10000 bit is added to the argument, PRAGMA optimize will +examine all tables to see if they can benefit from ANALYZE, not just those +that have been recently queried. +There is no query history when a database connection first opens, and +that is why adding the 0x10000 bit is recommended when running PRAGMA optimize +on a fresh database connection. + +
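A minimal sketch of the long-lived-connection pattern described above, shown as the SQL the application would issue:

    PRAGMA optimize=0x10002;  -- once, immediately after opening the connection
    /* ... application queries run here ... */
    PRAGMA optimize;          -- periodically (for example once per day) and again before closing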

See the Automatically Running ANALYZE and +Approximate ANALYZE For Large Databases sections below for additional +information. + + + +

2.2. Fixed results of ANALYZE

+ +

Running ANALYZE can cause SQLite to choose different query plans for subsequent queries. This is almost always a positive thing, as the query plans chosen after ANALYZE will in nearly every case be better than the query plans picked before ANALYZE. That is the whole point of ANALYZE. But there is no guarantee that running ANALYZE will always be beneficial; one can construct pathological cases where running ANALYZE makes some subsequent queries run slower.

Some developers prefer that once the design of an application is frozen, SQLite will always pick the same query plans as it did during development and testing. Then if millions of copies of the application are shipped to customers, the developers are assured that all of those millions of copies are running the same query plans regardless of what data the individual customers insert into their particular databases. This can help in reproducing complaints of performance problems coming back from the field.

To achieve this objective, never run a full ANALYZE nor the "PRAGMA optimize" command in the application. Rather, only run ANALYZE during development, manually using the command-line interface or similar, on a test database that is similar in size and content to live databases. Then capture the result of this one-time ANALYZE using a script like the following:

.mode list
+SELECT 
+  'ANALYZE sqlite_schema;' ||
+  'DELETE FROM sqlite_stat1;' ||
+  'INSERT INTO sqlite_stat1(tbl,idx,stat)VALUES' ||
+  (SELECT group_concat(format('(%Q,%Q,%Q)',tbl,idx,stat),',')
+    FROM sqlite_stat1) ||
+  ';ANALYZE sqlite_schema;';
+
+ +

When creating a new instance of the database in deployed instances of the application, or perhaps every time the application is started up in the case of long-running applications, run the commands generated by the script above. This will populate the sqlite_stat1 table exactly as it was during development and testing and ensure that the query plans selected in the field are the same as those selected during testing in the lab. Maybe copy/paste the string generated by the script above into a static string constant named "zStat1Init" and then invoke:

sqlite3_exec(db, zStat1Init, 0, 0, 0);
+
+ +

Perhaps also add "BEGIN;" at the start of the string constant and +"COMMIT;" at the end, depending on the context in which the script is run. + +

See the query planner stability guarantee for additional information.

3. Details

+ +

The default implementation stores all statistics in a single +table named "sqlite_stat1". + If SQLite is compiled with the +SQLITE_ENABLE_STAT4 option, then additional histogram data is +collected and stored in sqlite_stat4. +Older versions of SQLite would make use of the sqlite_stat2 table +or sqlite_stat3 table +when compiled with SQLITE_ENABLE_STAT2 or SQLITE_ENABLE_STAT3, +but all recent versions of +SQLite ignore the sqlite_stat2 and sqlite_stat3 tables. +Future enhancements may create +additional internal tables with the same name pattern except with +final digit larger than "4". +All of these tables are collectively referred to as "statistics tables". +

+ +

The content of the statistics tables can be queried using SELECT and can be changed using the DELETE, INSERT, and UPDATE commands. The DROP TABLE command works on statistics tables as of SQLite version 3.7.9 (2011-11-01). The ALTER TABLE command does not work on statistics tables. Appropriate care should be used when changing the content of the statistics tables as invalid content can cause SQLite to select inefficient query plans. Generally speaking, one should not modify the content of the statistics tables by any mechanism other than invoking the ANALYZE command. See "Manual Control Of Query Plans Using SQLITE_STAT Tables" for further information.

+ +

Statistics gathered by ANALYZE are not updated as +the content of the database changes. If the content of the database +changes significantly, or if the database schema changes, then one should +consider rerunning the ANALYZE command in order to update the statistics.

+ +

The query planner loads the content of the statistics tables +into memory when the schema is read. Hence, when an application +changes the statistics tables directly, SQLite will not immediately +notice the changes. An application +can force the query planner to reread the statistics tables by running +ANALYZE sqlite_schema.
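For example, assuming a hypothetical table t1 with an index t1_idx whose sqlite_stat1 row has been adjusted by hand:

    UPDATE sqlite_stat1 SET stat='1000000 10'
     WHERE tbl='t1' AND idx='t1_idx';
    ANALYZE sqlite_schema;    -- force the query planner to reread the statistics tables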

+ + + +

4. Automatically Running ANALYZE

+ +

The PRAGMA optimize command will automatically run ANALYZE on individual +tables on an as-needed basis. The recommended practice is for applications +to invoke the PRAGMA optimize statement just before closing each database +connection. Or, if the application keeps a single database connection open +for a long time, then it should run "PRAGMA optimize=0x10002" when the +connection is first opened and run "PRAGMA optimize;" periodically thereafter, +perhaps once per day or even once per hour.

+ +

Each SQLite database connection records cases when the query planner would +benefit from having accurate results of ANALYZE at hand. These records +are held in memory and accumulate over the life of a database connection. +The PRAGMA optimize command looks at those records and runs ANALYZE on only +those tables for which new or updated ANALYZE data seems likely to be useful. +In most cases PRAGMA optimize will not run ANALYZE, but it will occasionally +do so either for tables that have never before been analyzed, or for tables +that have grown significantly since they were last analyzed.

+ +

Since the actions of PRAGMA optimize are determined to some extent by +prior queries that have been evaluated on the same database connection, it +is recommended that PRAGMA optimize be deferred until the database connection +is closing and has thus had an opportunity to accumulate as much usage information +as possible. It is also reasonable to set a timer to run PRAGMA optimize +every few hours, or every few days, for database connections that stay open +for a long time. When running PRAGMA optimize immediately after a +database connection is opened, one can add the 0x10000 bit to the bitmask +argument (thus making the command read "PRAGMA optimize=0x10002") which +causes all tables to be examined, even tables that have not been +queried during the current connection.

+ +

The PRAGMA optimize command was first introduced with +SQLite 3.18.0 (2017-03-28) and is a no-op for all prior releases +of SQLite. The PRAGMA optimize command was significantly enhanced +in SQLite 3.46.0 (2024-05-23) and the advice given in this +documentation is based on those enhancements. Applications that +use earlier versions of SQLite should consult the corresponding +documentation for better advice on the best ways to use PRAGMA optimize.

+ + + +

5. Approximate ANALYZE For Large Databases

+ +

By default, ANALYZE does a full scan of every index. This can be slow for +large databases. So beginning with SQLite version 3.32.0 (2020-05-22), the +PRAGMA analysis_limit command can be used to limit the amount of +scanning performed by ANALYZE, and thus help ANALYZE to run faster, +even on very large database files. We call this running an +"approximate ANALYZE". + +

The recommended usage pattern for the analysis_limit pragma is +like this: + +

PRAGMA analysis_limit=1000;
+
+ +

This pragma tells the ANALYZE command to start a full scan +of the index as it normally would. But when the number of rows visited +reaches 1000 (or whatever other limit is specified by the pragma), the +ANALYZE command will begin taking actions to stop the scan. If +the left-most column of the index has changed at least once during the +previous 1000 steps, then the analysis stops immediately. But if the +left-most column has always been the same, then ANALYZE skips ahead to +the first entry with a different left-most column and reads an additional +1000 rows before terminating. + +

The details of the effects of the analysis limit described in the previous +paragraph are subject to change in future versions of SQLite. But the +core idea will remain the same. An analysis limit of N will strive to +limit the number of rows visited in each index to approximately N. + +

Values of N between 100 and 1000 are recommended. +Or, to disable the analysis limit, causing ANALYZE to do a +complete scan of each index, set the analysis limit to 0. The default +value for the analysis limit is 0 for backwards compatibility. + +

The values placed in the sqlite_stat1 table by an approximate ANALYZE +are not exactly the same as what would be computed by an unrestricted +analysis. But they are usually close enough. The index statistics in +the sqlite_stat1 table are approximations in any case, so the fact that +the results of an approximate ANALYZE are slightly different from +a traditional full scan ANALYZE has little practical impact. It is +possible to construct a pathological case where an approximate ANALYZE +is noticeably inferior to a full-scan ANALYZE, but such cases are rare in +real-world problems. + +

A good rule of thumb seems to be to always set "PRAGMA analysis_limit=N" for N between 100 and 1000 prior to running "ANALYZE". It used to be that this was also recommended prior to running "PRAGMA optimize", but since version 3.46.0 (2024-05-23) that happens automatically. The results are not quite as precise when using PRAGMA analysis_limit, but they are precise enough, and the fact that the results are computed so much faster means that developers are more likely to compute them. An approximate ANALYZE is better than not running ANALYZE at all.
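A typical invocation following this rule of thumb:

    PRAGMA analysis_limit=400;   -- any value between 100 and 1000 is reasonable
    ANALYZE;                     -- approximate ANALYZE: each index scan stops early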

5.1. Limitations of approximate ANALYZE

+ +

The content in the sqlite_stat4 table cannot be computed with +anything less than a full scan. Hence, if a non-zero analysis limit +is specified, the sqlite_stat4 table is not computed. +

This page last modified on 2024-05-05 15:23:53 UTC

+ ADDED Doc/Extra/Core/lang_attach.html Index: Doc/Extra/Core/lang_attach.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_attach.html @@ -0,0 +1,2521 @@ + + + + + +ATTACH DATABASE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+ATTACH DATABASE +
+
+ + + + +

1. Overview

+

attach-stmt: +

+
+
[syntax diagram]  ATTACH [ DATABASE ] expr AS schema-name
+

expr: +

+ +
+ + +

The ATTACH DATABASE statement adds another database +file to the current database connection. +Database files that were previously attached can be removed using +the DETACH DATABASE command. + +

2. Details

+ +

The filename for the database to be attached is the value of +the expression that occurs before the AS keyword. +The filename of the database follows the same semantics as the +filename argument to sqlite3_open() and sqlite3_open_v2(); the +special name ":memory:" results in an in-memory database and an +empty string results in a new temporary database. +The filename argument can be a URI filename if URI filename processing +is enabled on the database connection. The default behavior is for +URI filenames to be disabled, however that might change in a future release +of SQLite, so application developers are advised to plan accordingly. + +

The name that occurs after the AS keyword is the name of the database +used internally by SQLite. +The schema-names 'main' and +'temp' refer to the main database and the database used for +temporary tables. The main and temp databases cannot be attached or +detached.

+ +

Tables in an attached database can be referred to using the syntax +schema-name.table-name. If the name of the table is unique +across all attached databases and the main and temp databases, then the +schema-name prefix is not required. If two or more tables in +different databases have the same name and the +schema-name prefix is not used on a table reference, then the +table chosen is the one in the database that was least recently attached.
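For example, assuming a hypothetical database file archive.db that contains a table named log:

    ATTACH DATABASE 'archive.db' AS archive;
    SELECT count(*) FROM archive.log;    -- qualify with the schema name when names collide
    DETACH DATABASE archive;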

+ +

+Transactions involving multiple attached databases are atomic, +assuming that the main database is not ":memory:" and the +journal_mode is not WAL. If the main +database is ":memory:" or if the journal_mode is WAL, then +transactions continue to be atomic within each individual +database file. But if the host computer crashes in the middle +of a COMMIT where two or more database files are updated, +some of those files might get the changes where others +might not. +

+ +

There is a limit, set using sqlite3_limit() and +SQLITE_LIMIT_ATTACHED, to the number of databases that can be +simultaneously attached to a single database connection.

+

This page last modified on 2023-12-05 14:43:20 UTC

+ ADDED Doc/Extra/Core/lang_comment.html Index: Doc/Extra/Core/lang_comment.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_comment.html @@ -0,0 +1,209 @@ + + + + + +SQL Comment Syntax + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+SQL Comment Syntax +
+
+ + + + + +

comment-syntax: +

+
+
[syntax diagram]  -- anything-except-newline ( newline | end-of-input )   |   /* anything-except-*/ ( */ | end-of-input )
+
+ + +

Comments are not SQL commands, but can occur within the text of +SQL queries passed to sqlite3_prepare_v2() and related interfaces. +Comments are treated as whitespace by the parser. +Comments can begin anywhere whitespace +can be found, including inside expressions that span multiple lines. +

+ +

SQL comments begin with two consecutive "-" characters (ASCII 0x2d) +and extend up to and including the next newline character (ASCII 0x0a) +or until the end of input, whichever comes first.

+ +

C-style comments begin +with "/*" and extend up to and including the next "*/" character pair +or until the end of input, whichever comes first. C-style comments +can span multiple lines.

+ +

Comments can appear anywhere whitespace can occur, +including inside expressions and in the middle of other SQL statements. +Comments do not nest. +
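For example (the table and column names are hypothetical):

    SELECT price,             -- an SQL comment: runs to the end of the line
           /* a C-style comment:
              it may span multiple lines */
           qty * price AS total
      FROM orders;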

+

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_conflict.html Index: Doc/Extra/Core/lang_conflict.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_conflict.html @@ -0,0 +1,294 @@ + + + + + +The ON CONFLICT Clause + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+The ON CONFLICT Clause +
+
+ + + + +

conflict-clause: +

+
+
[syntax diagram]  ON CONFLICT ( ROLLBACK | ABORT | FAIL | IGNORE | REPLACE )
+
+ + +

The ON CONFLICT clause is a +non-standard extension specific to SQLite +that can appear in many other SQL commands. +It is given its own section in this document because it is not +part of standard SQL and therefore might not be familiar.

+ +

The ON CONFLICT clause described here has been a part of +SQLite since before version 3.0.0 (2004-06-18). The phrase +"ON CONFLICT" is also part of UPSERT, which is an extension +to INSERT added in version 3.24.0 (2018-06-04). Do not +confuse these two separate uses of the "ON CONFLICT" phrase. + +

The syntax for the ON CONFLICT clause is as shown above for +the CREATE TABLE command. For the INSERT and +UPDATE commands, the keywords "ON CONFLICT" are replaced by "OR" so that +the syntax reads more naturally. For example, instead of +"INSERT ON CONFLICT IGNORE" we have "INSERT OR IGNORE". +The keywords change but the meaning of the clause is the same +either way.

+ +

The ON CONFLICT clause applies to UNIQUE, NOT NULL, +CHECK, and PRIMARY KEY constraints. +The ON CONFLICT algorithm does not +apply to FOREIGN KEY constraints. +There are five conflict resolution algorithm choices: +ROLLBACK, ABORT, FAIL, IGNORE, and REPLACE. +The default conflict resolution algorithm is ABORT. This +is what they mean:

+ +
+
ROLLBACK
+

When an applicable constraint violation occurs, the ROLLBACK +resolution algorithm aborts the current SQL statement with +an SQLITE_CONSTRAINT error and rolls back the current transaction. +If no transaction is +active (other than the implied transaction that is created on every +command) then the ROLLBACK resolution algorithm works the same as the +ABORT algorithm.

+ +
ABORT
+

When an applicable constraint violation occurs, the ABORT +resolution algorithm aborts the current SQL statement +with an SQLITE_CONSTRAINT error and backs out any changes +made by the current SQL statement; but changes caused +by prior SQL statements within the same transaction are preserved and the +transaction remains active. +This is the default behavior and the behavior specified by the SQL +standard.

+ +
FAIL
+

When an applicable constraint violation occurs, the FAIL +resolution algorithm aborts the current SQL statement with an +SQLITE_CONSTRAINT error. But the FAIL resolution does not +back out prior changes of the SQL statement that failed nor does +it end the transaction. +For example, if an UPDATE +statement encountered a constraint violation on the 100th row that +it attempts to update, then the first 99 row changes are preserved +but changes to rows 100 and beyond never occur.

+ +

The FAIL behavior only works for uniqueness, NOT NULL, and CHECK +constraints. A foreign key constraint violation causes an ABORT. +

+ +
IGNORE
+

When an applicable constraint violation occurs, the IGNORE resolution algorithm skips the one row that contains the constraint violation and continues processing subsequent rows of the SQL statement as if nothing went wrong. Other rows before and after the row that contained the constraint violation are inserted or updated normally. No error is returned for UNIQUE, NOT NULL, and CHECK constraint errors when the IGNORE conflict resolution algorithm is used. However, the IGNORE conflict resolution algorithm works like ABORT for foreign key constraint errors.

+
+ +
REPLACE
+

When a UNIQUE or PRIMARY KEY constraint violation occurs, +the REPLACE algorithm +deletes pre-existing rows that are causing the constraint violation +prior to inserting or updating the current row and the command continues +executing normally. +If a NOT NULL constraint violation occurs, the REPLACE conflict +resolution replaces the NULL value with +the default value for that column, or if the column has no default +value, then the ABORT algorithm is used. +If a CHECK constraint or foreign key constraint violation occurs, +the REPLACE conflict resolution algorithm works like ABORT.

+ +

When the REPLACE conflict resolution strategy deletes rows in order to +satisfy a constraint, delete triggers fire if and only if +recursive triggers are enabled.

+ +

The update hook is not invoked for rows that +are deleted by the REPLACE conflict resolution strategy. Nor does +REPLACE increment the change counter. +The exceptional behaviors defined in this paragraph might change +in a future release.

+
+ +

The algorithm specified in the OR clause of an INSERT or UPDATE +overrides any algorithm specified in a CREATE TABLE. +If no algorithm is specified anywhere, the ABORT algorithm is used.
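A small sketch of the interaction between a schema-level conflict clause and the OR clause, using a hypothetical table t:

    CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT UNIQUE ON CONFLICT IGNORE);
    INSERT INTO t(b) VALUES('x');
    INSERT INTO t(b) VALUES('x');              -- silently skipped: IGNORE comes from the schema
    INSERT OR REPLACE INTO t(b) VALUES('x');   -- OR clause overrides: the old row is deleted first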

+ + ADDED Doc/Extra/Core/lang_corefunc.html Index: Doc/Extra/Core/lang_corefunc.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_corefunc.html @@ -0,0 +1,737 @@ + + + + + +Built-In Scalar SQL Functions + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+Built-In Scalar SQL Functions +
+
+ + + + + +

1. Overview

+ +

The core functions shown below are available by default. +Date & Time functions, +aggregate functions, +window functions, +math functions, and +JSON functions are documented separately. An +application may define additional +functions written in C and added to the database engine using +the sqlite3_create_function() API.

+ +

simple-function-invocation:

+
[syntax diagram]  simple-func ( [ expr [, expr ]... | * ] )
+
+ + +

+See the functions within expressions documentation for +more information about how SQL function invocations fit into the context +of an SQL expression. + +

2. List Of Core Functions

+ +

+

+ + +

3. Descriptions of built-in scalar SQL functions

+
+ + +

abs(X)

  The abs(X) function returns the absolute value of the numeric argument X. Abs(X) returns NULL if X is NULL. Abs(X) returns 0.0 if X is a string or blob that cannot be converted to a numeric value. If X is the integer -9223372036854775808 then abs(X) throws an integer overflow error since there is no equivalent positive 64-bit two's complement value.

+ +

changes()

+ The changes() function returns the number of database rows that were changed + or inserted or deleted by the most recently completed INSERT, DELETE, + or UPDATE statement, exclusive of statements in lower-level triggers. + The changes() SQL function is a wrapper around the sqlite3_changes64() + C/C++ function and hence follows the same rules for counting changes. +

+ +

char(X1,X2,...,XN)

+ The char(X1,X2,...,XN) function returns a string composed of characters having the + unicode code point values of integers X1 through XN, respectively. +

+ +

coalesce(X,Y,...)

+ The coalesce() function returns a copy of its first non-NULL argument, or + NULL if all arguments are NULL. Coalesce() must have at least + 2 arguments. +

+ +

concat(X,...)

+ The concat(...) function returns a string which is the concatenation of the + string representation of all of its non-NULL arguments. If all arguments are + NULL, then concat() returns an empty string. +

+ +

concat_ws(SEP,X,...)

+ The concat_ws(SEP,...) function returns a string that is the concatenation of + all non-null arguments beyond the first argument, using the text value of the + first argument as a separator. If the first argument is NULL, then concat_ws() + returns NULL. If all arguments other than the first are NULL, then concat_ws() + returns an empty string. +

+ +

format(FORMAT,...)

+ The format(FORMAT,...) SQL function works like the sqlite3_mprintf() C-language + function and the printf() function from the standard C library. + The first argument is a format string that specifies how to construct the output + string using values taken from subsequent arguments. If the FORMAT argument is + missing or NULL then the result is NULL. The %n format is silently ignored and + does not consume an argument. The %p format is an alias for %X. The %z format + is interchangeable with %s. If there are too few arguments in the argument list, + missing arguments are assumed to have a NULL value, which is translated into + 0 or 0.0 for numeric formats or an empty string for %s. See the + built-in printf() documentation for additional information. +

+ +

glob(X,Y)

+ The glob(X,Y) function is equivalent to the + expression "Y GLOB X". + Note that the X and Y arguments are reversed in the glob() function + relative to the infix GLOB operator. Y is the string and X is the + pattern. So, for example, the following expressions are equivalent: +

+     name GLOB '*helium*'
+     glob('*helium*',name)
+  
+

If the sqlite3_create_function() interface is used to + override the glob(X,Y) function with an alternative implementation then + the GLOB operator will invoke the alternative implementation. +

+ +

hex(X)

+ The hex() function interprets its argument as a BLOB and returns + a string which is the upper-case hexadecimal rendering of the content of + that blob. +

+ If the argument X in "hex(X)" is an + integer or floating point number, then "interprets its argument as a BLOB" means + that the binary number is first converted into a UTF8 text representation, then + that text is interpreted as a BLOB. Hence, "hex(12345678)" renders + as "3132333435363738" not the binary representation of the integer value + "0000000000BC614E". +

+ See also: unhex() +

+ +

ifnull(X,Y)

+ The ifnull() function returns a copy of its first non-NULL argument, or + NULL if both arguments are NULL. Ifnull() must have exactly 2 arguments. + The ifnull() function is equivalent to coalesce() with two arguments. +

+ +

iif(X,Y,Z)

+ The iif(X,Y,Z) function returns the value Y if X is true, and Z otherwise. + The iif(X,Y,Z) function is logically equivalent to and generates the same + bytecode as the CASE expression "CASE WHEN X THEN Y ELSE Z END". +

+ +

instr(X,Y)

  The instr(X,Y) function finds the first occurrence of string Y within string X and returns the number of prior characters plus 1, or 0 if Y is nowhere found within X. Or, if X and Y are both BLOBs, then instr(X,Y) returns one more than the number of bytes prior to the first occurrence of Y, or 0 if Y does not occur anywhere within X. If both arguments X and Y to instr(X,Y) are non-NULL and are not BLOBs then both are interpreted as strings. If either X or Y are NULL in instr(X,Y) then the result is NULL.
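  For example:

    SELECT instr('SQLite','Lite');   -- 3: 'Lite' begins at the third character
    SELECT instr('SQLite','zzz');    -- 0: not found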

+ +

last_insert_rowid()

  The last_insert_rowid() function returns the ROWID of the last row inserted by the database connection that invoked the function. The last_insert_rowid() SQL function is a wrapper around the sqlite3_last_insert_rowid() C/C++ interface function.

+ +

length(X)

+ For a string value X, the length(X) function returns the number of + characters (not bytes) in X prior to the first NUL character. + Since SQLite strings do not normally contain NUL characters, the length(X) + function will usually return the total number of characters in the string X. + For a blob value X, length(X) returns the number of bytes in the blob. + If X is NULL then length(X) is NULL. + If X is numeric then length(X) returns the length of a string + representation of X. +

+ Note that for strings, the length(X) function returns the character + length of the string, not the byte length. The character length is the number + of characters in the string. The character length is always different from + the byte length for UTF-16 strings, and can be different from the byte length + for UTF-8 strings if the string contains multi-byte characters. Use the + octet_length() function to find the byte length of a string. +

+ For BLOB values, length(X) always returns the byte-length of the BLOB. +

+ For string values, length(X) must read the entire string into memory in order + to compute the character length. But for BLOB values, that is not necessary as + SQLite knows how many bytes are in the BLOB. Hence, for multi-megabyte values, + the length(X) function is usually much faster for BLOBs than for strings, since + it does not need to load the value into memory. +

+ +

like(X,Y)
like(X,Y,Z)

+ The like() function is used to implement the + "Y LIKE X [ESCAPE Z]" expression. + If the optional ESCAPE clause is present, then the + like() function is invoked with three arguments. Otherwise, it is + invoked with two arguments only. Note that the X and Y parameters are + reversed in the like() function relative to the infix LIKE operator. + X is the pattern and Y is the string to match against that pattern. + Hence, the following expressions are equivalent: +

+     name LIKE '%neon%'
+     like('%neon%',name)
+  
+

The sqlite3_create_function() interface can be used to override the + like() function and thereby change the operation of the + LIKE operator. When overriding the like() function, it may be important + to override both the two and three argument versions of the like() + function. Otherwise, different code may be called to implement the + LIKE operator depending on whether or not an ESCAPE clause was + specified. +

+ +

likelihood(X,Y)

+ The likelihood(X,Y) function returns argument X unchanged. + The value Y in likelihood(X,Y) must be a floating point constant + between 0.0 and 1.0, inclusive. + The likelihood(X) function is a no-op that the code generator + optimizes away so that it consumes no CPU cycles during run-time + (that is, during calls to sqlite3_step()). + The purpose of the likelihood(X,Y) function is to provide a hint + to the query planner that the argument X is a boolean that is + true with a probability of approximately Y. + The unlikely(X) function is short-hand for likelihood(X,0.0625). + The likely(X) function is short-hand for likelihood(X,0.9375). +

+ +

likely(X)

+ The likely(X) function returns the argument X unchanged. + The likely(X) function is a no-op that the code generator + optimizes away so that it consumes no CPU cycles at + run-time (that is, during calls to sqlite3_step()). + The purpose of the likely(X) function is to provide a hint + to the query planner that the argument X is a boolean value + that is usually true. The likely(X) function is equivalent + to likelihood(X,0.9375). See also: unlikely(X). +

+ +

load_extension(X)
load_extension(X,Y)

+ The load_extension(X,Y) function loads SQLite extensions out of the shared + library file named X using the entry point Y. The result of load_extension() + is always a NULL. If Y is omitted then the default entry point name is used. + The load_extension() function raises an exception if the extension fails to + load or initialize correctly. + +

The load_extension() function will fail if the extension attempts to + modify or delete an SQL function or collating sequence. The + extension can add new functions or collating sequences, but cannot + modify or delete existing functions or collating sequences because + those functions and/or collating sequences might be used elsewhere + in the currently running SQL statement. To load an extension that + changes or deletes functions or collating sequences, use the + sqlite3_load_extension() C-language API.

+ +

For security reasons, extension loading is disabled by default and must + be enabled by a prior call to sqlite3_enable_load_extension().

+
+ +

lower(X)

+ The lower(X) function returns a copy of string X with all ASCII characters + converted to lower case. The default built-in lower() function works + for ASCII characters only. To do case conversions on non-ASCII + characters, load the ICU extension. +

+ +

ltrim(X)
ltrim(X,Y)

+ The ltrim(X,Y) function returns a string formed by removing any and all + characters that appear in Y from the left side of X. + If the Y argument is omitted, ltrim(X) removes spaces from the left side + of X. +

+ +

max(X,Y,...)

  The multi-argument max() function returns the argument with the maximum value, or returns NULL if any argument is NULL. The multi-argument max() function searches its arguments from left to right for an argument that defines a collating function and uses that collating function for all string comparisons. If none of the arguments to max() define a collating function, then the BINARY collating function is used. Note that max() is a simple function when it has 2 or more arguments but operates as an aggregate function if given only a single argument.

+ +

min(X,Y,...)

+ The multi-argument min() function returns the argument with the + minimum value. + The multi-argument min() function searches its arguments from left to right + for an argument that defines a collating function and uses that collating + function for all string comparisons. If none of the arguments to min() + define a collating function, then the BINARY collating function is used. + Note that min() is a simple function when + it has 2 or more arguments but operates as an + aggregate function if given + only a single argument. +

+ +

nullif(X,Y)

+ The nullif(X,Y) function returns its first argument if the arguments are + different and NULL if the arguments are the same. The nullif(X,Y) function + searches its arguments from left to right for an argument that defines a + collating function and uses that collating function for all string + comparisons. If neither argument to nullif() defines a collating function + then the BINARY collating function is used. +

+ +

octet_length(X)

+ The octet_length(X) function returns the number of bytes in the encoding + of text string X. If X is NULL then octet_length(X) returns NULL. If X is + a BLOB value, then octet_length(X) is the same as length(X). If X is a + numeric value, then octet_length(X) returns the number of bytes in a text + rendering of that number. +

+ Because octet_length(X) returns the number of bytes in X, not the number + of characters, the value returned depends on the database encoding. The + octet_length() function can return different answers for the same input string + if the database encoding is UTF16 instead of UTF8. +

+ If argument X is a table column and the value is of type text or blob, + then octet_length(X) avoids reading the content of X from disk, as the byte + length can be computed from metadata. Thus, octet_length(X) is efficient + even if X is a column containing a multi-megabyte text or blob value. +

+ +

printf(FORMAT,...)

+ The printf() SQL function is an alias for the format() SQL function. + The format() SQL function was originally named printf(). But the name was + later changed to format() for compatibility with other database engines. + The printf() name is retained as an alias so as not to break legacy code. +

+ +

quote(X)

+ The quote(X) function returns the text of an SQL literal which + is the value of its argument suitable for inclusion into an SQL statement. + Strings are surrounded by single-quotes with escapes on interior quotes + as needed. BLOBs are encoded as hexadecimal literals. + Strings with embedded NUL characters cannot be represented as string + literals in SQL and hence the returned string literal is truncated prior + to the first NUL. +

+ +

random()

+ The random() function returns a pseudo-random integer + between -9223372036854775808 and +9223372036854775807. +

+ +

randomblob(N)

  The randomblob(N) function returns an N-byte blob containing pseudo-random bytes. If N is less than 1 then a 1-byte random blob is returned.

Hint: applications can generate globally unique identifiers + using this function together with hex() and/or + lower() like this:

+ +
+ hex(randomblob(16))

+ lower(hex(randomblob(16))) +
+
+ +

replace(X,Y,Z)

  The replace(X,Y,Z) function returns a string formed by substituting string Z for every occurrence of string Y in string X. The BINARY collating sequence is used for comparisons. If Y is an empty string then X is returned unchanged. If Z is not initially a string, it is cast to a UTF-8 string prior to processing.

+ +

round(X)
round(X,Y)

+ The round(X,Y) function returns a floating-point + value X rounded to Y digits to the right of the decimal point. + If the Y argument is omitted or negative, it is taken to be 0. +

+ +

rtrim(X)
rtrim(X,Y)

+ The rtrim(X,Y) function returns a string formed by removing any and all + characters that appear in Y from the right side of X. + If the Y argument is omitted, rtrim(X) removes spaces from the right + side of X. +

+ +

sign(X)

+ The sign(X) function returns -1, 0, or +1 if the argument X is a numeric + value that is negative, zero, or positive, respectively. If the argument + to sign(X) is NULL or is a string or blob that cannot be losslessly converted + into a number, then sign(X) returns NULL. +

+ +

soundex(X)

+ The soundex(X) function returns a string that is the soundex encoding + of the string X. + The string "?000" is returned if the argument is NULL or contains + no ASCII alphabetic characters. + This function is omitted from SQLite by default. + It is only available if the SQLITE_SOUNDEX compile-time option + is used when SQLite is built. +

+ +

sqlite_compileoption_get(N)

+ The sqlite_compileoption_get() SQL function is a wrapper around the + sqlite3_compileoption_get() C/C++ function. + This routine returns the N-th compile-time option used to build SQLite + or NULL if N is out of range. See also the compile_options pragma. +

+ +

sqlite_compileoption_used(X)

+ The sqlite_compileoption_used() SQL function is a wrapper around the + sqlite3_compileoption_used() C/C++ function. + When the argument X to sqlite_compileoption_used(X) is a string which + is the name of a compile-time option, this routine returns true (1) or + false (0) depending on whether or not that option was used during the + build. +

+ +

sqlite_offset(X)

  The sqlite_offset(X) function returns the byte offset in the database file for the beginning of the record from which the value would be read. If X is not a column in an ordinary table, then sqlite_offset(X) returns NULL. The value returned by sqlite_offset(X) might reference either the original table or an index, depending on the query. If the value X would normally be extracted from an index, then sqlite_offset(X) returns the offset to the corresponding index record. If the value X would be extracted from the original table, then sqlite_offset(X) returns the offset to the table record.

The sqlite_offset(X) SQL function is only available if SQLite is built + using the -DSQLITE_ENABLE_OFFSET_SQL_FUNC compile-time option. +

+ +

sqlite_source_id()

+ The sqlite_source_id() function returns a string that identifies the + specific version of the source code that was used to build the SQLite + library. The string returned by sqlite_source_id() is + the date and time that the source code was checked in followed by + the SHA3-256 hash for that check-in. This function is + an SQL wrapper around the sqlite3_sourceid() C interface. +

+ +

sqlite_version()

+ The sqlite_version() function returns the version string for the SQLite + library that is running. This function is an SQL + wrapper around the sqlite3_libversion() C-interface. +

+ +

substr(X,Y,Z)
substr(X,Y)
substring(X,Y,Z)
substring(X,Y)

  The substr(X,Y,Z) function returns a substring of the input string X that begins with the Y-th character and which is Z characters long. If Z is omitted then substr(X,Y) returns all characters through the end of the string X beginning with the Y-th. The left-most character of X is number 1. If Y is negative then the first character of the substring is found by counting from the right rather than the left. If Z is negative then the abs(Z) characters preceding the Y-th character are returned. If X is a string then character indices refer to actual UTF-8 characters. If X is a BLOB then the indices refer to bytes.

+ "substring()" is an alias for "substr()" beginning with SQLite version 3.34. +

+ +

total_changes()

+ The total_changes() function returns the number of row changes + caused by INSERT, UPDATE or DELETE + statements since the current database connection was opened. + This function is a wrapper around the sqlite3_total_changes64() + C/C++ interface. +

+ +

trim(X)
trim(X,Y)

+ The trim(X,Y) function returns a string formed by removing any and all + characters that appear in Y from both ends of X. + If the Y argument is omitted, trim(X) removes spaces from both ends of X. +

+ +

typeof(X)

+ The typeof(X) function returns a string that indicates the datatype of + the expression X: "null", "integer", "real", "text", or "blob". +

+ +

unhex(X)
unhex(X,Y)

  The unhex(X,Y) function returns a BLOB value which is the decoding of the hexadecimal string X. If X contains any characters that are not hexadecimal digits and which are not in Y, then unhex(X,Y) returns NULL. If Y is omitted, it is understood to be an empty string and hence X must be a pure hexadecimal string. All hexadecimal digits in X must occur in pairs, with the two digits of each pair immediately adjacent to one another, or else unhex(X,Y) returns NULL. If either parameter X or Y is NULL, then unhex(X,Y) returns NULL. The X input may contain an arbitrary mix of upper and lower case hexadecimal digits. Hexadecimal digits in Y have no effect on the translation of X. Only characters in Y that are not hexadecimal digits are ignored in X.

+ See also: hex() +
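  For example:

    SELECT unhex('53514C697465');   -- X'53514C697465', the UTF-8 bytes of 'SQLite'
    SELECT unhex('53-51-4C','-');   -- X'53514C': '-' is listed in Y, so it may separate pairs
    SELECT unhex('5x51');           -- NULL: 'x' is not a hexadecimal digit and not in Y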

+ +

unicode(X)

+ The unicode(X) function returns the numeric unicode code point corresponding to + the first character of the string X. If the argument to unicode(X) is not a string + then the result is undefined. +

+ +

unlikely(X)

+ The unlikely(X) function returns the argument X unchanged. + The unlikely(X) function is a no-op that the code generator + optimizes away so that it consumes no CPU cycles at + run-time (that is, during calls to sqlite3_step()). + The purpose of the unlikely(X) function is to provide a hint + to the query planner that the argument X is a boolean value + that is usually not true. The unlikely(X) function is equivalent + to likelihood(X, 0.0625). +

+ +

upper(X)

+ The upper(X) function returns a copy of input string X in which all + lower-case ASCII characters are converted to their upper-case equivalent. +

+ +

zeroblob(N)

+ The zeroblob(N) function returns a BLOB consisting of N bytes of 0x00. + SQLite manages these zeroblobs very efficiently. Zeroblobs can be used to + reserve space for a BLOB that is later written using + incremental BLOB I/O. + This SQL function is implemented using the sqlite3_result_zeroblob() + routine from the C/C++ interface. +

+ +
+

This page last modified on 2023-12-05 14:43:20 UTC

+ ADDED Doc/Extra/Core/lang_createindex.html Index: Doc/Extra/Core/lang_createindex.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_createindex.html @@ -0,0 +1,2667 @@ + + + + + +CREATE INDEX + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+CREATE INDEX +
+
+ + + + +

1. Syntax

+

create-index-stmt: +

+
+
[syntax diagram]  CREATE [ UNIQUE ] INDEX [ IF NOT EXISTS ] [ schema-name . ] index-name ON table-name ( indexed-column [, indexed-column ]... ) [ WHERE expr ]
+

expr: +

+ +

indexed-column: +

+ +
+ + +

The CREATE INDEX command consists of the keywords "CREATE INDEX" followed +by the name of the new index, the keyword "ON", the name of a previously +created table that is to be indexed, and a parenthesized list of table column +names and/or expressions that are used for the index key. +If the optional WHERE clause is included, then the index is a "partial index". +
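For example (the table and column names are hypothetical):

    CREATE INDEX idx_user_email ON users(email);
    CREATE UNIQUE INDEX idx_user_name
        ON users(lower(name))            -- an index on an expression
     WHERE name IS NOT NULL;             -- the WHERE clause makes this a partial index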

+ +

If the optional IF NOT EXISTS clause is present and another index +with the same name already exists, then this command becomes a no-op.

+ +

There are no arbitrary limits on the number of indices that can be +attached to a single table. The number of columns in an index is +limited to the value set by +sqlite3_limit(SQLITE_LIMIT_COLUMN,...).

+ +

Indexes are removed with the DROP INDEX command.

+ + + +

1.1. Unique Indexes

+ +

If the UNIQUE keyword appears between CREATE and INDEX then duplicate +index entries are not allowed. Any attempt to insert a duplicate entry +will result in an error.

+ +

For the purposes of unique indices, all NULL values +are considered different from all other NULL values and are thus unique. +This is one of the two possible interpretations of the SQL-92 standard +(the language in the standard is ambiguous). The interpretation used +by SQLite is the same and is the interpretation +followed by PostgreSQL, MySQL, Firebird, and Oracle. Informix and +Microsoft SQL Server follow the other interpretation of the standard, which +is that all NULL values are equal to one another.

+ + + +

1.2. Indexes on Expressions

+ +

Expressions in an index may not reference other tables and may not use subqueries nor functions whose result might change (ex: random() or sqlite_version()). Expressions in an index may only refer to columns in the table that is being indexed. Indexes on expressions will not work with versions of SQLite prior to version 3.9.0 (2015-10-14). See the Indexes On Expressions document for additional information about using general expressions in CREATE INDEX statements.

1.3. Descending Indexes

+ +

Each column name or expression can be followed by one of the "ASC" or "DESC" keywords to indicate sort order. The sort order may or may not be ignored depending on the database file format, and in particular the schema format number. The "legacy" schema format (1) ignores index sort order. The descending index schema format (4) takes index sort order into account. Only versions of SQLite 3.3.0 (2006-01-11) and later are able to understand the descending index format. For compatibility, versions of SQLite between 3.3.0 and 3.7.9 use the legacy schema format by default. The newer schema format is used by default in version 3.7.10 (2012-01-16) and later. The legacy_file_format pragma can be used to select the specific behavior for any version of SQLite.

+ +

1.4. NULLS FIRST and NULLS LAST

+ +

The NULLS FIRST and NULLS LAST predicates are not supported +for indexes. For sorting purposes, SQLite considers NULL values +to be smaller than all other values. Hence NULL values always appear at +the beginning of an ASC index and at the end of a DESC index.

+ + + +

1.5. Collations

+ +

The COLLATE clause optionally following each column name +or expression defines a +collating sequence used for text entries in that column. +The default collating +sequence is the collating sequence defined for that column in the +CREATE TABLE statement. Or if no collating sequence is otherwise defined, +the built-in BINARY collating sequence is used.

+ + ADDED Doc/Extra/Core/lang_createtable.html Index: Doc/Extra/Core/lang_createtable.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_createtable.html @@ -0,0 +1,6138 @@ + + + + + +CREATE TABLE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+CREATE TABLE +
+ + +
+ + + + +

1. Syntax

+ +

create-table-stmt: +

+
+
[syntax diagram]  CREATE [ TEMP | TEMPORARY ] TABLE [ IF NOT EXISTS ] [ schema-name . ] table-name ( column-def [, column-def ]... [, table-constraint ]... ) [ table-options ]  |  AS select-stmt
+

column-def: +

+ +

select-stmt: +

+ +

table-constraint: +

+ +

table-options: +

+ +
+ + +

2. The CREATE TABLE command

+ +

The "CREATE TABLE" command is used to create a new table in an SQLite +database. A CREATE TABLE command specifies the following attributes of the +new table: + +

    +
  • The name of the new table. + +

  • The database in which the new table is created. Tables may be + created in the main database, the temp database, or in any attached + database. + +

  • The name of each column in the table. + +

  • The declared type of each column in the table. + +

  • A default value or expression for each column in the table. + +

  • A default collation sequence to use with each column. + +

  • Optionally, a PRIMARY KEY for the table. Both single column and + composite (multiple column) primary keys are supported. + +

  • A set of SQL constraints for each table. SQLite supports UNIQUE, NOT + NULL, CHECK and FOREIGN KEY constraints. + +

  • Optionally, a generated column constraint. + +

  • Whether the table is a WITHOUT ROWID table. + +

  • Whether the table is subject to strict type checking. +

+ +

Every CREATE TABLE statement must specify a name for the new table. + Table names that begin with "sqlite_" are reserved for internal use. It + is an error to attempt to create a table with a name that starts with + "sqlite_". + +

If a schema-name is specified, it must be either "main", + "temp", or the name of an attached database. In this case + the new table is created in the named database. If the "TEMP" or "TEMPORARY" + keyword occurs between the "CREATE" and "TABLE" then the new table is + created in the temp database. It is an error to specify both a + schema-name and the TEMP or TEMPORARY keyword, unless the + schema-name is "temp". + If no schema name is specified and the + TEMP keyword is not present then the table is created in the main + database. + +

+ It is usually an error to attempt to create a new table in a database that + already contains a table, index or view of the same name. However, if the + "IF NOT EXISTS" clause is specified as part of the CREATE TABLE statement and + a table or view of the same name already exists, the CREATE TABLE command + simply has no effect (and no error message is returned). An error is still + returned if the table cannot be created because of an existing index, even + if the "IF NOT EXISTS" clause is specified. + +

It is not an error to create a table that has the same name as an + existing trigger. + +

Tables are removed using the DROP TABLE statement.

+ + + +

2.1. CREATE TABLE ... AS SELECT Statements

+ +

A "CREATE TABLE ... AS SELECT" statement creates and populates a database +table based on the results of a SELECT statement. The table has the same +number of columns as the SELECT statement returns. The name of +each column is the same as the name of the corresponding column in the result +set of the SELECT statement. The declared type of each column is determined +by the expression affinity of the corresponding expression in the result set +of the SELECT statement, as follows: +

+ +
+
Expression Affinity Column Declared Type +
TEXT "TEXT" +
NUMERIC "NUM" +
INTEGER "INT" +
REAL "REAL" +
BLOB (a.k.a "NONE") "" (empty string) +
+ +

A table created using CREATE TABLE AS has no PRIMARY KEY and no +constraints of any kind. The default value of each column is NULL. The default +collation sequence for each column of the new table is BINARY. + +

Tables created using CREATE TABLE AS are initially populated with the +rows of data returned by the SELECT statement. Rows are assigned contiguously +ascending rowid values, starting with 1, in the order that they +are returned by the SELECT statement. + + + +
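For illustration (the table and column names below are hypothetical), a minimal sketch of CREATE TABLE ... AS SELECT:

  CREATE TABLE t1(a INT, b TEXT);
  CREATE TABLE t1_backup AS SELECT a, b FROM t1 WHERE a > 0;
  -- t1_backup gets declared types "INT" and "TEXT" from the expression affinities,
  -- has no PRIMARY KEY or constraints, and its rowids follow the SELECT order.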

3. Column Definitions

+ +

Unless it is a CREATE TABLE ... AS SELECT statement, a CREATE TABLE includes +one or more column definitions, optionally followed by a list of +table constraints. Each column definition consists of the +name of the column, optionally followed by the declared type of the column, +then one or more optional column constraints. Included in +the definition of "column constraints" for the purposes of the previous +statement are the COLLATE and DEFAULT clauses, even though these are not really +constraints in the sense that they do not restrict the data that the table may +contain. The other constraints - NOT NULL, CHECK, UNIQUE, PRIMARY KEY and +FOREIGN KEY constraints - impose restrictions on the table data. + +

The number of columns in a table is limited by the SQLITE_MAX_COLUMN +compile-time parameter. A single row of a table cannot store more than +SQLITE_MAX_LENGTH bytes of data. Both of these limits can be lowered at +runtime using the sqlite3_limit() C/C++ interface.

+ +

3.1. Column Data Types

+ +

Unlike most SQL databases, SQLite does not restrict the type of data that +may be inserted into a column based on the columns declared type. Instead, +SQLite uses dynamic typing. The declared type of a column is used to +determine the affinity of the column only. + + + +

3.2. The DEFAULT clause

+

The DEFAULT clause specifies a default value to use for the column if no +value is explicitly provided by the user when doing an INSERT. If there +is no explicit DEFAULT clause attached to a column definition, then the +default value of the column is NULL. An explicit DEFAULT clause may specify +that the default value is NULL, a string constant, a blob constant, a +signed-number, or any constant expression enclosed in parentheses. A +default value may also be one of the special case-independent keywords +CURRENT_TIME, CURRENT_DATE or CURRENT_TIMESTAMP. For the purposes of the +DEFAULT clause, an expression is considered constant if it +contains no sub-queries, column or table references, bound parameters, +or string literals enclosed in double-quotes instead of single-quotes. + +

Each time a row is inserted into the table by an INSERT statement that does not provide explicit values for all table columns, the values stored in the new row are determined by their default values, as follows:

    +
  • If the default value of the column is a constant NULL, text, blob or + signed-number value, then that value is used directly in the new row. + +

  • If the default value of a column is an expression in parentheses, then + the expression is evaluated once for each row inserted and the results + used in the new row. + +

  • If the default value of a column is CURRENT_TIME, CURRENT_DATE or + CURRENT_TIMESTAMP, then the value used in the new row is a text + representation of the current UTC date and/or time. For CURRENT_TIME, the + format of the value is "HH:MM:SS". For CURRENT_DATE, "YYYY-MM-DD". The + format for CURRENT_TIMESTAMP is "YYYY-MM-DD HH:MM:SS". +

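A minimal sketch of the different kinds of DEFAULT clause (the "task" table is hypothetical):

  CREATE TABLE task(
    id INTEGER PRIMARY KEY,
    title TEXT DEFAULT 'untitled',               -- string constant
    priority INT DEFAULT 3,                      -- signed-number
    created TEXT DEFAULT CURRENT_TIMESTAMP,      -- "YYYY-MM-DD HH:MM:SS" in UTC
    due TEXT DEFAULT (date('now','+7 days'))     -- constant expression in parentheses
  );
  INSERT INTO task(title) VALUES('write docs');  -- unspecified columns take their defaults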
+ + + +

3.3. The COLLATE clause

+

The COLLATE clause specifies the name of a collating sequence to use as +the default collation sequence for the column. If no COLLATE clause is +specified, the default collation sequence is BINARY. + +

3.4. The GENERATED ALWAYS AS clause

+

A column that includes a GENERATED ALWAYS AS clause is a generated column. +Generated columns are supported beginning with SQLite version 3.31.0 (2020-01-22). +See the separate documentation for details on the capabilities and +limitations of generated columns. + + + +
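A brief sketch (hypothetical table), assuming a build of SQLite 3.31.0 or later:

  CREATE TABLE product(
    price REAL,
    qty   INTEGER,
    total REAL GENERATED ALWAYS AS (price*qty) VIRTUAL,       -- computed when read
    total_stored REAL GENERATED ALWAYS AS (price*qty) STORED  -- computed and stored on write
  );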

3.5. The PRIMARY KEY

+

Each table in SQLite may have at most one PRIMARY KEY. If the + keywords PRIMARY KEY are added to a column definition, then the primary key + for the table consists of that single column. Or, if a PRIMARY KEY clause + is specified as a table-constraint, then the primary key of the table + consists of the list of columns specified as part of the PRIMARY KEY clause. + The PRIMARY KEY clause must contain only column names — the use of + expressions in an indexed-column of a PRIMARY KEY is not supported. + An error is raised if more than one PRIMARY KEY clause appears in a + CREATE TABLE statement. The PRIMARY KEY is optional for ordinary tables + but is required for WITHOUT ROWID tables. + +

If a table has a single column primary key and the declared type of that + column is "INTEGER" and the table is not a WITHOUT ROWID table, + then the column is known as an INTEGER PRIMARY KEY. + See below for a description of the special properties and behaviors + associated with an INTEGER PRIMARY KEY. + +

Each row in a table with a primary key must have a unique combination + of values in its primary key columns. For the purposes of determining + the uniqueness of primary key values, NULL values are considered distinct from + all other values, including other NULLs. If an INSERT or UPDATE + statement attempts to modify the table content so that two or more rows + have identical primary key values, that is a constraint violation. + +

According to the SQL standard, PRIMARY KEY should always imply NOT NULL. + Unfortunately, due to a bug in some early versions, this is not the + case in SQLite. Unless the column is an INTEGER PRIMARY KEY or + the table is a WITHOUT ROWID table or a STRICT table + or the column is declared NOT NULL, + SQLite allows NULL values in a PRIMARY KEY column. SQLite could be fixed to + conform to the standard, but doing so might break legacy applications. + Hence, it has been decided to merely document the fact that SQLite + allows NULLs in most PRIMARY KEY columns. + + + +
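For illustration (hypothetical tables), a single-column primary key on the column definition and a composite primary key as a table constraint:

  CREATE TABLE person(uid INTEGER PRIMARY KEY, name TEXT);  -- single column; also an INTEGER PRIMARY KEY
  CREATE TABLE enrollment(
    student_id INT NOT NULL,
    course_id  INT NOT NULL,
    PRIMARY KEY(student_id, course_id)
  ) WITHOUT ROWID;  -- a WITHOUT ROWID table must declare a PRIMARY KEY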

3.6. UNIQUE constraints

+

A UNIQUE constraint is similar to a PRIMARY KEY constraint, except + that a single table may have any number of UNIQUE constraints. For each + UNIQUE constraint on the table, each row must contain a unique combination + of values in the columns identified by the UNIQUE constraint. + For the purposes of UNIQUE constraints, NULL values + are considered distinct from all other values, including other NULLs. + As with PRIMARY KEYs, a UNIQUE table-constraint clause must contain + only column names — the use of + expressions in an indexed-column of a UNIQUE table-constraint + is not supported. + + +

In most cases, UNIQUE and PRIMARY KEY + constraints are implemented by creating a unique index in the database. + (The exceptions are INTEGER PRIMARY KEY and PRIMARY KEYs on + WITHOUT ROWID tables.) + Hence, the following schemas are logically equivalent: + +

    +
  1. CREATE TABLE t1(a, b UNIQUE); +

  2. CREATE TABLE t1(a, b PRIMARY KEY); +

  3. CREATE TABLE t1(a, b);
    + CREATE UNIQUE INDEX t1b ON t1(b); +

+ + + +

3.7. CHECK constraints

+

A CHECK constraint may be attached to a column definition or + specified as a table constraint. In practice it makes no difference. Each + time a new row is inserted into the table or an existing row is updated, + the expression associated with each CHECK constraint is evaluated and + cast to a NUMERIC value in the same way as a CAST expression. If the + result is zero (integer value 0 or real value 0.0), then a constraint + violation has occurred. If the CHECK expression evaluates to NULL, or + any other non-zero value, it is not a constraint violation. + The expression of a CHECK constraint may not contain a subquery. + +

CHECK constraints are only verified when the table is written, not when + it is read. Furthermore, verification of CHECK constraints can be + temporarily disabled using the "PRAGMA ignore_check_constraints=ON;" + statement. Hence, it is possible that a query might produce results that + violate the CHECK constraints. + + + +
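A short sketch (hypothetical table) showing one column CHECK constraint and one table CHECK constraint:

  CREATE TABLE account(
    balance  REAL CHECK (balance >= 0),          -- column constraint
    currency TEXT,
    CHECK (currency IN ('USD','EUR','GBP'))      -- table constraint
  );
  INSERT INTO account VALUES(-5.0, 'USD');  -- fails: the column CHECK evaluates to 0
  INSERT INTO account VALUES(10.0, NULL);   -- allowed: the table CHECK evaluates to NULL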

3.8. NOT NULL constraints

+

A NOT NULL constraint may only be attached to a column definition, + not specified as a table constraint. Not surprisingly, a NOT NULL + constraint dictates that the associated column may not contain a NULL value. + Attempting to set the column value to NULL when inserting a new row or + updating an existing one causes a constraint violation. NOT NULL + constraints are not verified during queries, so a query of a column might + produce a NULL value even though the column is marked as NOT NULL, if the + database file is corrupt. + + +

4. Constraint enforcement

+ +

Constraints are checked during INSERT and UPDATE and by +PRAGMA integrity_check and PRAGMA quick_check and sometimes +by ALTER TABLE. Queries and DELETE +statements do not normally verify constraints. +Hence, if a database file has been corrupted (perhaps by an external +program making direct changes to the database file without going through +the SQLite library) a query might return data that violates a constraint. +For example: + +

CREATE TABLE t1(x INT CHECK( x>3 ));
+/* Insert a row with X less than 3 by directly writing into the
+** database file using an external program */
+PRAGMA integrity_check;  -- Reports row with x less than 3 as corrupt
+INSERT INTO t1(x) VALUES(2);  -- Fails with SQLITE_CORRUPT
+SELECT x FROM t1;  -- Returns an integer less than 3 in spite of the CHECK constraint
+
+ +

Enforcement of CHECK constraints can be temporarily disabled using +the PRAGMA ignore_check_constraints=ON; statement. + +

4.1. Response to constraint violations

+ +

The response to a constraint violation is determined by the + constraint conflict resolution algorithm. Each + PRIMARY KEY, UNIQUE, NOT NULL and CHECK constraint has a default conflict + resolution algorithm. PRIMARY KEY, UNIQUE and NOT NULL constraints may be + explicitly assigned another default conflict resolution algorithm by + including a conflict-clause in their definitions. + Or, if a constraint definition does not include a conflict-clause, + the default conflict resolution algorithm is ABORT. + The conflict resolution algorithm for CHECK constraints is always ABORT. + (For historical compatibility only, table CHECK constraints are allowed + to have a conflict resolution clause, but that has no effect.) + Different constraints within the + same table may have different default conflict resolution algorithms. See + the section titled ON CONFLICT for additional information. + + + +
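For illustration (hypothetical table), conflict clauses attached to individual constraints:

  CREATE TABLE pref(
    name  TEXT UNIQUE ON CONFLICT REPLACE,   -- a duplicate name replaces the earlier row
    value TEXT NOT NULL ON CONFLICT IGNORE   -- a NULL value makes the INSERT a silent no-op
  );
  INSERT INTO pref VALUES('theme','dark');
  INSERT INTO pref VALUES('theme','light');  -- replaces the 'dark' row
  INSERT INTO pref VALUES('lang', NULL);     -- silently ignored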

5. ROWIDs and the INTEGER PRIMARY KEY

+ +

Except for WITHOUT ROWID tables, all rows within SQLite tables have a 64-bit signed integer key that uniquely identifies the row within its table. This integer is usually called the "rowid". The rowid value can be accessed using one of the special case-independent names "rowid", "oid", or "_rowid_" in place of a column name. If a table contains a user-defined column named "rowid", "oid" or "_rowid_", then that name always refers to the explicitly declared column and cannot be used to retrieve the integer rowid value.

The rowid (and "oid" and "_rowid_") is omitted in WITHOUT ROWID tables. +WITHOUT ROWID tables are only available in SQLite version 3.8.2 +(2013-12-06) and later. +A table that lacks the WITHOUT ROWID clause is called a "rowid table". + +

The data for rowid tables is stored as a B-Tree structure containing +one entry for each table row, using the rowid value as the key. This means that +retrieving or sorting records by rowid is fast. Searching for a record with a +specific rowid, or for all records with rowids within a specified range is +around twice as fast as a similar search made by specifying any other PRIMARY +KEY or indexed value. + +

With one exception noted below, if a rowid table has a primary key that consists of a single column and the declared type of that column is "INTEGER" in any mixture of upper and lower case, then the column becomes an alias for the rowid. Such a column is usually referred to as an "integer primary key". A PRIMARY KEY column only becomes an integer primary key if the declared type name is exactly "INTEGER". Other integer type names like "INT" or "BIGINT" or "SHORT INTEGER" or "UNSIGNED INTEGER" cause the primary key column to behave as an ordinary table column with integer affinity and a unique index, not as an alias for the rowid.

The exception mentioned above is that if the declaration of a column with declared type "INTEGER" includes a "PRIMARY KEY DESC" clause, it does not become an alias for the rowid and is not classified as an integer primary key. This quirk is not by design. It is due to a bug in early versions of SQLite. But fixing the bug could result in backwards incompatibilities. Hence, the original behavior has been retained (and documented) because odd behavior in a corner case is far better than a compatibility break. This means that the following three table declarations all cause the column "x" to be an alias for the rowid (an integer primary key):

    +
  • CREATE TABLE t(x INTEGER PRIMARY KEY ASC, y, z); +
  • CREATE TABLE t(x INTEGER, y, z, PRIMARY KEY(x ASC)); +
  • CREATE TABLE t(x INTEGER, y, z, PRIMARY KEY(x DESC)); +
+ +

But the following declaration does not result in "x" being an alias for +the rowid: +

    +
  • CREATE TABLE t(x INTEGER PRIMARY KEY DESC, y, z); +
+ +

Rowid values may be modified using an UPDATE statement in the same +way as any other column value can, either using one of the built-in aliases +("rowid", "oid" or "_rowid_") or by using an alias created by an integer +primary key. Similarly, an INSERT statement may provide a value to use as the +rowid for each row inserted. Unlike normal SQLite columns, an integer primary +key or rowid column must contain integer values. Integer primary key or rowid +columns are not able to hold floating point values, strings, BLOBs, or NULLs. + +

If an UPDATE statement attempts to set an integer primary key or rowid column +to a NULL or blob value, or to a string or real value that cannot be losslessly +converted to an integer, a "datatype mismatch" error occurs and the statement +is aborted. If an INSERT statement attempts to insert a blob value, or a string +or real value that cannot be losslessly converted to an integer into an +integer primary key or rowid column, a "datatype mismatch" error occurs and the +statement is aborted. + +

If an INSERT statement attempts to insert a NULL value into a rowid or +integer primary key column, the system chooses an integer value to use as the +rowid automatically. A detailed description of how this is done is provided +separately.
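For example (hypothetical table), inserting NULL into an integer primary key lets SQLite choose the rowid:

  CREATE TABLE log(id INTEGER PRIMARY KEY, msg TEXT);
  INSERT INTO log(id, msg) VALUES(NULL, 'first');  -- id is chosen automatically (typically 1 in an empty table)
  INSERT INTO log(msg) VALUES('second');           -- same effect; the next id is chosen
  SELECT rowid, id FROM log;                       -- rowid and id are the same value in each row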

+ +

The parent key of a foreign key constraint is not allowed to use the rowid. The parent key must use named columns only.

+

This page last modified on 2024-03-13 17:43:35 UTC

+ ADDED Doc/Extra/Core/lang_createtrigger.html Index: Doc/Extra/Core/lang_createtrigger.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_createtrigger.html @@ -0,0 +1,4455 @@ + + + + + +CREATE TRIGGER + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+CREATE TRIGGER +
+ + +
+ + + + +

1. Syntax

+ +

create-trigger-stmt: +

+
+
+ + + + +CREATE + +TEMP + +TEMPORARY + +TRIGGER + + + + + + + + + + + + + +IF + + + +NOT + + + +EXISTS + + + + + + + + +schema-name + + + +. + + + +trigger-name + + + +BEFORE + +AFTER + +INSTEAD + + + +OF + + + + + + + + + + + + + + + + + + + + + +DELETE + +INSERT + +UPDATE + + + + + + + +OF + + + +column-name + +, + + + + + +ON + + + +table-name + + + + + + + + + + + + + +FOR + + + +EACH + + + +ROW + + + +WHEN + + + +expr + +BEGIN + + + +update-stmt + + + +; + + + +END + + + + +insert-stmt + +delete-stmt + +select-stmt + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

delete-stmt: +

+ +

expr: +

+ +

insert-stmt: +

+ +

select-stmt: +

+ +

update-stmt: +

+ +
+ + + +

2. Description

+

The CREATE TRIGGER statement is used to add triggers to the +database schema. Triggers are database operations +that are automatically performed when a specified database event +occurs.

+ +

Each trigger must specify that it will fire for one of +the following operations: DELETE, INSERT, UPDATE. +The trigger fires once for each row that is deleted, inserted, +or updated. If the "UPDATE OF column-name" +syntax is used, then the trigger will only fire if +column-name appears on the left-hand side of +one of the terms in the SET clause of the UPDATE statement.

+ +

Due to an historical oversight, columns named in the "UPDATE OF" +clause do not actually have to exist in the table being updated. +Unrecognized column names are silently ignored. +It would be more helpful if SQLite would fail the CREATE TRIGGER +statement if any of the names in the "UPDATE OF" clause are not +columns in the table. However, as this problem was discovered +many years after SQLite was widely deployed, we have resisted +fixing the problem for fear of breaking legacy applications.

+ +

At this time SQLite supports only FOR EACH ROW triggers, not FOR EACH +STATEMENT triggers. Hence explicitly specifying FOR EACH ROW is optional. +FOR EACH ROW implies that the SQL statements specified in the trigger +may be executed (depending on the WHEN clause) for each database row being +inserted, updated or deleted by the statement causing the trigger to fire.

+ +

Both the WHEN clause and the trigger actions may access elements of +the row being inserted, deleted or updated using references of the form +"NEW.column-name" and "OLD.column-name", where +column-name is the name of a column from the table that the trigger +is associated with. OLD and NEW references may only be used in triggers on +events for which they are relevant, as follows:

+ + + + + + + + + + + + + + +
INSERTNEW references are valid
UPDATENEW and OLD references are valid
DELETEOLD references are valid
+ + +

If a WHEN clause is supplied, the SQL statements specified +are only executed if the WHEN clause is true. +If no WHEN clause is supplied, the SQL statements +are executed every time the trigger fires.

+ +

The BEFORE or AFTER keyword determines when the trigger actions +will be executed relative to the insertion, modification or removal of the +associated row. BEFORE is the default when neither keyword is present.

+ +

An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However, if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then the conflict handling policy of the outer statement is used instead.

+ +

Triggers are automatically dropped +when the table that they are +associated with (the table-name table) is +dropped. However if the trigger actions reference +other tables, the trigger is not dropped or modified if those other +tables are dropped or modified.

+ +

Triggers are removed using the DROP TRIGGER statement.

+ +

2.1. Syntax Restrictions On UPDATE, DELETE, and INSERT Statements Within + Triggers

+ +

The UPDATE, DELETE, and INSERT +statements within triggers do not support +the full syntax for UPDATE, DELETE, and INSERT statements. The following +restrictions apply:

+ +
    +
  • + The name of the table to be modified in an UPDATE, DELETE, or INSERT + statement must be an unqualified table name. In other words, one must + use just "tablename" not "database.tablename" + when specifying the table.

  • + For non-TEMP triggers, + the table to be modified or queried must exist in the + same database as the table or view to which the trigger is attached. + TEMP triggers are not subject to the same-database rule. A TEMP + trigger is allowed to query or modify any table in any ATTACH-ed database. +

  • + The "INSERT INTO table DEFAULT VALUES" form of the INSERT statement + is not supported. +

  • + The INDEXED BY and NOT INDEXED clauses are not supported for UPDATE and + DELETE statements. +

  • + The ORDER BY and LIMIT clauses on UPDATE and DELETE statements are not + supported. ORDER BY and LIMIT are not normally supported for UPDATE or + DELETE in any context but can be enabled for top-level statements + using the SQLITE_ENABLE_UPDATE_DELETE_LIMIT compile-time option. However, + that compile-time option only applies to top-level UPDATE and DELETE + statements, not UPDATE and DELETE statements within triggers. +

  • Common table expressions are not supported for statements inside of triggers.

+ + + +

3. INSTEAD OF triggers

+ +

BEFORE and AFTER triggers work only on ordinary tables. +INSTEAD OF triggers work only on views. + +

If an INSTEAD OF INSERT trigger exists on a view, then it is +possible to execute an INSERT statement against that view. No actual +insert occurs. Instead, the statements contained within the trigger +are run. INSTEAD OF DELETE and +INSTEAD OF UPDATE triggers work the same way for DELETE and UPDATE statements +against views.

+ +

Note that the sqlite3_changes() and sqlite3_total_changes() interfaces do not count INSTEAD OF trigger firings, but the count_changes pragma does count INSTEAD OF trigger firings.

+ +

4. Some Example Triggers

+ +

Assuming that customer records are stored in the "customers" table, and +that order records are stored in the "orders" table, the following +UPDATE trigger +ensures that all associated orders are redirected when a customer changes +his or her address:

+ +
+CREATE TRIGGER update_customer_address UPDATE OF address ON customers 
+  BEGIN
+    UPDATE orders SET address = new.address WHERE customer_name = old.name;
+  END;
+
+ +

With this trigger installed, executing the statement:

+ +
+UPDATE customers SET address = '1 Main St.' WHERE name = 'Jack Jones';
+
+ +

causes the following to be automatically executed:

+ +
+UPDATE orders SET address = '1 Main St.' WHERE customer_name = 'Jack Jones';
+
+ +

For an example of an INSTEAD OF trigger, consider the following schema: + +

+CREATE TABLE customer(
+  cust_id INTEGER PRIMARY KEY,
+  cust_name TEXT,
+  cust_addr TEXT
+);
+CREATE VIEW customer_address AS
+   SELECT cust_id, cust_addr FROM customer;
+CREATE TRIGGER cust_addr_chng
+INSTEAD OF UPDATE OF cust_addr ON customer_address
+BEGIN
+  UPDATE customer SET cust_addr=NEW.cust_addr
+   WHERE cust_id=NEW.cust_id;
+END;
+
+ +

With the schema above, a statement of the form:

+ +
+UPDATE customer_address SET cust_addr=$new_address WHERE cust_id=$cust_id;
+
+ +

causes the customer.cust_addr field to be updated for a specific customer entry that has customer.cust_id equal to the $cust_id parameter. Note how the values assigned to the view are made available as fields in the special "NEW" table within the trigger body.

+ + + +

5. Cautions On The Use Of BEFORE triggers

+ +

If a BEFORE UPDATE or BEFORE DELETE trigger modifies or deletes a row +that was to have been updated or deleted, then the result of the subsequent +update or delete operation is undefined. Furthermore, if a BEFORE trigger +modifies or deletes a row, then it is undefined whether or not AFTER triggers +that would have otherwise run on those rows will in fact run. +

+ +

The value of NEW.rowid is undefined in a BEFORE INSERT trigger in which +the rowid is not explicitly set to an integer.

+ +

Because of the behaviors described above, programmers are encouraged to +prefer AFTER triggers over BEFORE triggers.

+ + + +

6. The RAISE() function

+ +

A special SQL function RAISE() may be used within a trigger-program, +with the following syntax

+ +

raise-function:

+
+ + + + +RAISE + + + +( + + + +ROLLBACK + + + +, + + + +error-message + + + +) + + + + + + +IGNORE + + + + +ABORT + + + + +FAIL + + + + + + + + +
+
+ + +

When one of RAISE(ROLLBACK,...), RAISE(ABORT,...) or RAISE(FAIL,...) +is called during trigger-program +execution, the specified ON CONFLICT processing is performed and +the current query terminates. +An error code of SQLITE_CONSTRAINT is returned to the application, +along with the specified error message.

+ +

When RAISE(IGNORE) is called, the remainder of the current trigger program, +the statement that caused the trigger program to execute and any subsequent +trigger programs that would have been executed are abandoned. No database +changes are rolled back. If the statement that caused the trigger program +to execute is itself part of a trigger program, then that trigger program +resumes execution at the beginning of the next step. +
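As an illustrative sketch (the table and trigger names are hypothetical), RAISE() is typically invoked through a SELECT inside the trigger body:

  CREATE TABLE wallet(id INTEGER PRIMARY KEY, balance REAL);
  CREATE TRIGGER wallet_no_overdraft
  BEFORE UPDATE OF balance ON wallet
  WHEN NEW.balance < 0
  BEGIN
    SELECT RAISE(ABORT, 'balance may not go negative');
  END;
  -- UPDATE wallet SET balance = -1 WHERE id = 1;  -- would fail with the message above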

+ + + +

7. TEMP Triggers on Non-TEMP Tables

+ +

A trigger normally exists in the same database as the table named after the "ON" keyword in the CREATE TRIGGER statement. The exception is that it is possible to create a TEMP TRIGGER on a table in another database. Such a trigger will only fire when changes are made to the target table by the application that defined the trigger. Other applications that modify the database will not be able to see the TEMP trigger and hence cannot run the trigger.

+ +

When defining a TEMP trigger on a non-TEMP table, it is important to +specify the database holding the non-TEMP table. For example, +in the following statement, it is important to say "main.tab1" instead +of just "tab1":

+ +
+CREATE TEMP TRIGGER ex1 AFTER INSERT ON main.tab1 BEGIN ...
+
+ +

Failure to specify the schema name on the target table could result +in the TEMP trigger being reattached to a table with the same name in +another database whenever any schema change occurs.

+

This page last modified on 2024-08-03 15:33:40 UTC

+ ADDED Doc/Extra/Core/lang_createview.html Index: Doc/Extra/Core/lang_createview.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_createview.html @@ -0,0 +1,2452 @@ + + + + + +CREATE VIEW + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+CREATE VIEW +
+
+ + + + +

1. Syntax

+ +

create-view-stmt: +

+
+
+ + + + +CREATE + +TEMP + +TEMPORARY + +VIEW + + + + + + + + + + + + + +IF + + + +NOT + + + +EXISTS + + + + + + + + +schema-name + + + +. + + + +view-name + + + +( + + + +column-name + + + +) + + + +AS + + + +select-stmt + + + + + + + + + + +, + + + + +
+

select-stmt: +

+ +
+ + +

2. Description

+ +

The CREATE VIEW command assigns a name to a pre-packaged +SELECT statement. +Once the view is created, it can be used in the FROM clause +of another SELECT in place of a table name. +

+ +

If the "TEMP" or "TEMPORARY" keyword occurs in between "CREATE" +and "VIEW" then the view that is created is only visible to the +database connection that created it and is automatically deleted when +the database connection is closed.

+ +

If a schema-name is specified, then the view +is created in the specified database. +It is an error to specify both a schema-name +and the TEMP keyword on a VIEW, unless the schema-name +is "temp". +If no schema name is specified, and the TEMP keyword is not present, +the VIEW is created in the main database.

+ +

You cannot DELETE, INSERT, or UPDATE a view. Views are read-only +in SQLite. However, in many cases you can use an +INSTEAD OF trigger on the view to accomplish +the same thing. Views are removed +with the DROP VIEW command.

+ +

If a column-name list follows the view-name, then that list determines the names of the columns for the view. If the column-name list is omitted, then the names of the columns in the view are derived from the names of the result-set columns in the select-stmt. The use of a column-name list is recommended. Or, if the column-name list is omitted, then the result columns in the SELECT statement that defines the view should have well-defined names using the "AS column-alias" syntax. SQLite allows you to create views that depend on automatically generated column names, but you should avoid using them since the rules used to generate column names are not a defined part of the interface and might change in future releases of SQLite.

The column-name list syntax was added in SQLite version 3.9.0 (2015-10-14).
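For illustration (hypothetical schema), the same view defined once with a column-name list and once with AS aliases:

  CREATE TABLE order_item(order_id INT, price REAL, qty INT);
  CREATE VIEW order_total(order_id, total) AS
    SELECT order_id, sum(price*qty) FROM order_item GROUP BY order_id;
  -- Equivalent, naming the result columns with AS aliases instead:
  CREATE VIEW order_total2 AS
    SELECT order_id, sum(price*qty) AS total FROM order_item GROUP BY order_id;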

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_createvtab.html Index: Doc/Extra/Core/lang_createvtab.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_createvtab.html @@ -0,0 +1,244 @@ + + + + + +CREATE VIRTUAL TABLE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+CREATE VIRTUAL TABLE +
+
+ + + + + +

create-virtual-table-stmt: +

+
+
+ + + + +CREATE + + + +VIRTUAL + + + +TABLE + + + +IF + + + +NOT + + + +EXISTS + +schema-name + + + +. + + + +table-name + +USING + + + +module-name + + + +( + + + +module-argument + + + +) + + + + +, + + + + + + + + + + + + + + + + + + + + + +
+
+ + +

A virtual table is an interface to an external storage or computation +engine that appears to be a table but does not actually store information +in the database file.

+ +

In general, you can do anything with a virtual table that can be done +with an ordinary table, except that you cannot create indices or triggers on a +virtual table. Some virtual table implementations might impose additional +restrictions. For example, many virtual tables are read-only.

+ +

The module-name is the name of an object that implements +the virtual table. The module-name must be registered with +the SQLite database connection using +sqlite3_create_module() or sqlite3_create_module_v2() +prior to issuing the CREATE VIRTUAL TABLE statement. +The module takes zero or more comma-separated arguments. +The arguments can be just about any text as long as it has balanced +parentheses. The argument syntax is sufficiently general that the +arguments can be made to appear as column definitions in a traditional +CREATE TABLE statement. +SQLite passes the module arguments directly +to the xCreate and xConnect methods of the module implementation +without any interpretation. It is the responsibility +of the module implementation to parse and interpret its own arguments.
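As a concrete sketch using the built-in FTS5 full-text search module (assuming FTS5 is compiled into the library; the table name is hypothetical):

  CREATE VIRTUAL TABLE doc_index USING fts5(title, body);
  INSERT INTO doc_index(title, body) VALUES('greeting', 'hello world');
  SELECT title FROM doc_index WHERE doc_index MATCH 'hello';
  DROP TABLE doc_index;  -- an ordinary DROP TABLE destroys a virtual table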

+ +

A virtual table is destroyed using the ordinary +DROP TABLE statement. There is no +DROP VIRTUAL TABLE statement.

+

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_datefunc.html Index: Doc/Extra/Core/lang_datefunc.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_datefunc.html @@ -0,0 +1,706 @@ + + + + + +Date And Time Functions + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+Date And Time Functions +
+ + +
+ + + + +

1. Overview

+ +

+SQLite supports seven scalar date and time functions as follows: +

+ +

+

    +
  1. date(time-value, modifier, modifier, ...)
  2. time(time-value, modifier, modifier, ...)
  3. datetime(time-value, modifier, modifier, ...)
  4. julianday(time-value, modifier, modifier, ...)
  5. unixepoch(time-value, modifier, modifier, ...)
  6. strftime(format, time-value, modifier, modifier, ...)
  7. timediff(time-value, time-value)
+ +

+The first six date and time functions take an optional time-value as an argument, followed +by zero or more modifiers. +The strftime() function also takes a format string as its first argument. +The timediff() function takes exactly two arguments which are both time-values. +

+ +

+SQLite does not have a dedicated date/time datatype. Instead, date and time values can be stored as any of the following:

+ + + + +
ISO-8601 +A text string that is an ISO 8601 date/time value. +Example: '2025-05-29 14:16:00' +
Julian day number +The number of days including fractional days since -4713-11-24 12:00:00 +Example: 2460825.09444444 +
Unix timestamp +The number of seconds including fractional seconds since 1970-01-01 00:00:00 +Example: 1748528160 +
+
+ +

+These three formats are collectively known as time-values. All of the date/time functions accept time-values as either ISO-8601 text or Julian day numbers. They can also be made to accept unix timestamps by adding the optional modifier arguments 'auto' or 'unixepoch'. Since the timediff() function does not accept modifiers, it can only use ISO-8601 and julian day number time-values.

+The date() function returns the date as text in this format: YYYY-MM-DD. + + + +

+The time() function returns the time as text formatted as HH:MM:SS, or as HH:MM:SS.SSS if the subsec modifier is used.

+The datetime() function returns the date and time formatted as YYYY-MM-DD HH:MM:SS or +as YYYY-MM-DD HH:MM:SS.SSS if the subsec modifier is used. + + + +

+The julianday() function returns the +Julian day - the +fractional number of days since noon in Greenwich on November 24, 4714 B.C. +(Proleptic Gregorian calendar). + + + +

+The unixepoch() function returns a unix timestamp - the number of seconds +since 1970-01-01 00:00:00 UTC. The unixepoch() function normally returns +an integer number of seconds, but with the optional subsec modifier it +will return a floating point number which is the fractional number of seconds. + + + +

+The strftime() function returns the date formatted according to +the format string specified as the first argument. +The format string supports the most common substitutions found in the +strftime() function +from the standard C library plus two new substitutions, %f and %J. +The following is a complete list of valid strftime() substitutions +as of version 3.46.0 (2024-05-23). Earlier versions of SQLite +might not support all substitutions. If an undefined or unsupported +substitution is seen, the result is NULL. +

+ +
+ + + +
%d day of month: 01-31 +
%e day of month without leading zero: 1-31 +
%f fractional seconds: SS.SSS +
%F ISO 8601 date: YYYY-MM-DD +
%G ISO 8601 year corresponding to %V +
%g 2-digit ISO 8601 year corresponding to %V +
%H hour: 00-24 +
%I hour for 12-hour clock: 01-12 +
%j day of year: 001-366 +
%J Julian day number (fractional) +
%k hour without leading zero: 0-24 +
%l %I without leading zero: 1-12 +
%m month: 01-12 +
%M minute: 00-59 +
%p "AM" or "PM" depending on the hour +
%P "am" or "pm" depending on the hour +
%R ISO 8601 time: HH:MM +
%s seconds since 1970-01-01 +
%S seconds: 00-59 +
%T ISO 8601 time: HH:MM:SS +
%U week of year (00-53) - week 01 starts on the first Sunday +
%u day of week 1-7 with Monday==1 +
%V ISO 8601 week of year +
%w day of week 0-6 with Sunday==0 +
%W week of year (00-53) - week 01 starts on the first Monday +
%Y year: 0000-9999 +
%% % +
+
+ +

+Other date and time functions can be expressed +in terms of strftime(): +

+ +
+ +
FunctionEquivalent strftime() +
date(...) strftime('%F', ...) +
time(...) strftime('%T', ...) +
datetime(...) strftime('%F %T', ...) +
julianday(...) CAST(strftime('%J', ...) as REAL) +
unixepoch(...) CAST(strftime('%s', ...) as INT) +
+
+ +

+The date(), time(), and datetime() functions all return text, and so their +strftime() equivalents are exact. However, the julianday() +and unixepoch() functions return numeric values. Their strftime() equivalents +return a string that is the text representation of the corresponding number. +

+ +

+The main reasons for providing functions other than strftime() are +for convenience and for efficiency. The julianday() and unixepoch() +functions return real and integer values respectively, and do not +incur the format conversion costs or inexactitude resulting from use +of the '%J' or '%s' format specifiers with the strftime() function. +

+ + + +

+The timediff(A,B) function returns a string that describes the amount +of time that must be added to B in order to reach time A. The format of +the timediff() result is designed to be human-readable. The format is: +

+(+|-)YYYY-MM-DD HH:MM:SS.SSS +
+

+This time difference string is also an allowed modifier for the other +date/time functions. The following invariant holds for time-values A +and B: +

+datetime(A) = datetime(B, timediff(A,B)) +
+ +

+The lengths of months and years vary. February is shorter than March. Leap years are longer than non-leap years. The output from timediff() takes this all into account. The timediff() function is intended to provide a human-friendly description of the time span. If you want to know the number of days or seconds between two dates, A and B, then you can always do one of these:

+SELECT julianday(B) - julianday(A);
+SELECT unixepoch(B) - unixepoch(A); +
+ +

+The timediff(A,B) function might return the same result even for values A and B that span a different number of days - depending on the starting date. For example, both of the following two timediff() calls return the same result ("-0000-01-00 00:00:00.000") even though the first timespan is 28 days and the second is 31 days:

+SELECT timediff('2023-02-15','2023-03-15');
+SELECT timediff('2023-03-15','2023-04-15'); +
+

+Summary: If you want a human-friendly time span, use timediff(). If you +want a precise time difference (in days or seconds) use the difference +between two julianday() or unixepoch() calls. +

+ + + + +

2. Time Values

+ +

A time-value can be in any of the formats shown below. The value is usually a string, though it can be an integer or floating point number in the case of format 12.

    +
  1. YYYY-MM-DD +
  2. YYYY-MM-DD HH:MM +
  3. YYYY-MM-DD HH:MM:SS +
  4. YYYY-MM-DD HH:MM:SS.SSS +
  5. YYYY-MM-DDTHH:MM +
  6. YYYY-MM-DDTHH:MM:SS +
  7. YYYY-MM-DDTHH:MM:SS.SSS +
  8. HH:MM +
  9. HH:MM:SS +
  10. HH:MM:SS.SSS +
  11. now +
  12. DDDDDDDDDD +
+ +

+In formats 5 through 7, the "T" is a literal character separating the date and the time, as required by ISO-8601. Formats 8 through 10 that specify only a time assume a date of 2000-01-01. Format 11, the string 'now', is converted into the current date and time as obtained from the xCurrentTime method of the sqlite3_vfs object in use. The 'now' argument to date and time functions always returns exactly the same value for multiple invocations within the same sqlite3_step() call. Universal Coordinated Time (UTC) is used. Format 12 is the Julian day number expressed as an integer or floating point value. Format 12 might also be interpreted as a unix timestamp if it is immediately followed by either the 'auto' or 'unixepoch' modifier.

+ +

+Formats 2 through 10 may be optionally followed by a timezone indicator of the form +"[+-]HH:MM" or just "Z". The date and time functions use UTC or "zulu" +time internally, and so the "Z" suffix is a no-op. Any non-zero "HH:MM" suffix is +subtracted from the indicated date and time in order to compute zulu time. +For example, all of the following time-values are equivalent: +

+ +
+2013-10-07 08:23:19.120
+2013-10-07T08:23:19.120Z
+2013-10-07 04:23:19.120-04:00
+2456572.84952685 +
+ +

+In formats 4, 7, and 10, the fractional seconds value SS.SSS can have +one or more digits following the decimal point. Exactly three digits are +shown in the examples because only the first three digits are significant +to the result, but the input string can have fewer or more than three digits +and the date/time functions will still operate correctly. +Similarly, format 12 is shown with 10 significant digits, but the date/time +functions will really accept as many or as few digits as are necessary to +represent the Julian day number. +

+ +

+In all functions other than timediff(), +the time-value (and all modifiers) may be omitted, in which case a time +value of 'now' is assumed. + + + +

3. Modifiers

+ +

For all date/time functions other than timediff(), +the time-value argument can be followed by zero or more modifiers that +alter date and/or time. Each modifier +is a transformation that is applied to the time-value to its left. +Modifiers are applied from left to right; order is important. +The available modifiers are as follows.

+ +
    +
  1. NNN days +
  2. NNN hours +
  3. NNN minutes +
  4. NNN seconds +
  5. NNN months +
  6. NNN years +
  7. ±HH:MM +
  8. ±HH:MM:SS +
  9. ±HH:MM:SS.SSS +
  10. ±YYYY-MM-DD +
  11. ±YYYY-MM-DD HH:MM +
  12. ±YYYY-MM-DD HH:MM:SS +
  13. ±YYYY-MM-DD HH:MM:SS.SSS +
  14. ceiling +
  15. floor +
  16. start of month +
  17. start of year +
  18. start of day +
  19. weekday N +
  20. unixepoch +
  21. julianday +
  22. auto +
  23. localtime +
  24. utc +
  25. subsec +
  26. subsecond +
+ +

The first thirteen modifiers (1 through 13) +add the specified amount of time to the date and time +specified by the arguments to its left. +The 's' character at the end of the modifier names in 1 through 6 is optional. +The NNN value can be any floating point number, with an optional '+' or '-' prefix. + + + +

The time shift modifiers (7 through 13) move the time-value by the number of years, months, days, hours, minutes, and/or seconds specified. An initial "+" or "-" is required for formats 10 through 13 but is optional for formats 7, 8, and 9. The changes are applied from left to right. First the year is shifted by YYYY, then the month by MM, and then the day by DD, and so forth. The timediff(A,B) function returns a time shift in format 13 that shifts the time-value B into A.

+ + + +

Because the length of a month or year changes from one month or year to the next, ambiguities can arise when shifting a date by months and/or years. For example, what is the date one year after 2024-02-29? Is it 2025-02-28 or 2025-03-01? Or what is the date that is two months after 2023-12-31? Is it 2024-02-29 or 2024-03-02? There is no consensus on how to resolve this ambiguity, so the "ceiling" and "floor" modifiers (14 and 15) are available to let the programmer decide. If the next modifier after a time shift is "ceiling", then any ambiguity in the date is resolved by choosing the later date. The "floor" modifier resolves ambiguities by choosing the last day of the previous month. The default behavior is "ceiling".

The "start of" modifiers (16 through 18) shift the date backwards +to the beginning of the subject month, year or day.

+ +

The "weekday" modifier advances the date forward, if necessary, +to the next date where the weekday number is N. Sunday is 0, Monday is 1, +and so forth. +If the date is already on the desired weekday, the "weekday" modifier +leaves the date unchanged.

+ + + +

The "unixepoch" modifier (20) only works if it immediately follows +a time-value in the DDDDDDDDDD format. +This modifier causes the DDDDDDDDDD to be interpreted not +as a Julian day number as it normally would be, but as +Unix Time - the +number of seconds since 1970. If the "unixepoch" modifier does not +follow a time-value of the form DDDDDDDDDD which expresses the number +of seconds since 1970 or if other modifiers +separate the "unixepoch" modifier from prior DDDDDDDDDD then the +behavior is undefined. + + + +

The "julianday" modifier must immediately follow the initial +time-value which must be of the form DDDDDDDDD. Any other use of +the 'julianday' modifier is an error and causes the function to return NULL. +The 'julianday' modifier forces the time-value number to be interpreted +as a julian-day number. As this is the default behavior, the 'julianday' +modifier is scarcely more than a no-op. The only difference is that +adding 'julianday' forces the DDDDDDDDD time-value format, and causes +a NULL to be returned if any other time-value format is used. + + + +

The "auto" modifier must immediately follow the initial time-value. +If the time-value is numeric (the DDDDDDDDDD format) then the 'auto' +modifier causes the time-value to interpreted as either a julian day +number or a unix timestamp, depending on its magnitude. If the value +is between 0.0 and 5373484.499999, then it is interpreted as a julian +day number (corresponding to dates between +-4713-11-24 12:00:00 and 9999-12-31 23:59:59, inclusive). For numeric +values outside of the range of valid julian day numbers, but within +the range of -210866760000 to 253402300799, the 'auto' modifier causes +the value to be interpreted as a unix timestamp. Other numeric values +are out of range and cause a NULL return. The 'auto' modifier is a no-op +for ISO 8601 text time-values. +The "auto" modifier is designed to work with time-values even in +cases where it is not known which time-value format +is stored in the database file, or in cases where the same column +stores time-values in different formats on different rows. +The 'auto' modifier will automatically select the +appropriate format. However, there is some ambiguity. Unix +timestamps for the first 63 days of 1970 will be interpreted as julian +day numbers. The 'auto' modifier is very useful when the dataset is +guaranteed to contain no dates within that range, but should be +avoided for applications that might make use of dates in the opening +months of 1970. + + + +

The "localtime" modifier assumes the time-value to its left is in +Universal Coordinated Time (UTC) and adjusts that time +value so that it is in localtime. If "localtime" +follows a time that is not UTC, then the behavior is undefined. +The "utc" modifier is the opposite of "localtime". +"utc" assumes that the time-value +to its left is in the local timezone and adjusts that time-value to be in UTC. +If the time to the left is not in localtime, then the result of "utc" is +undefined.

+ + + +

+The "subsecond" modifier (which may be abbreviated as just +"subsec") increases the resolution of the output for +datetime(), time(), and unixepoch(), and for the "%s" +format string in strftime(). The "subsecond" +modifier has no effect on other date/time functions. +The current implemention increases the resolution from seconds +to milliseconds, but this might increase to a higher resolution +in future releases of SQLite. When "subsec" is used with +datetime() or time(), the seconds field at the end is +followed by a decimal point and one or more digits to show +fractional seconds. When "subsec" is used with unixepoch(), +the result is a floating point value which is the number of +seconds and fractional seconds since 1970-01-01. +The "subsecond" and "subsec" modifiers have the special property +that they can occur as the first argument to date/time functions +(or as the first argument after the format string for strftime()). +When this happens, the time-value that is normally in the first +argument is understood to be "now". For example, a short cut to +get the current time in seconds since 1970 with millisecond +precision is to say: + +

+ SELECT unixepoch('subsec'); +
+ +

4. Examples

+ +

Compute the current date.

+ +

SELECT date();
+ +

Compute the last day of the current month.

+ +
SELECT date('now','start of month','+1 month','-1 day'); +
+ +

Compute the date and time given a unix timestamp 1092941466.

+ +
+ SELECT datetime(1092941466, 'unixepoch');
+ SELECT datetime(1092941466, 'auto'); -- Does not work for early 1970! +
+ +

Compute the date and time given a unix timestamp 1092941466, and +compensate for your local timezone.

+ +
+ SELECT datetime(1092941466, 'unixepoch', 'localtime'); +
+ +

Compute the current unix timestamp.

+ +
+ SELECT unixepoch();
+ SELECT strftime('%s'); +
+ +

Compute the number of days since the signing of the US Declaration +of Independence.

+ +
+ SELECT julianday('now') - julianday('1776-07-04'); +
+ +

Compute the number of seconds since a particular moment in 2004:

+ +
+ SELECT unixepoch() - unixepoch('2004-01-01 02:34:56'); +
+ +

+Compute the date of the first Tuesday in October +for the current year. +

+ +
+ SELECT date('now','start of year','+9 months','weekday 2'); +
+ +

Compute the time since the unix epoch in seconds with +millisecond precision:

+ +
+ SELECT (julianday('now') - 2440587.5)*86400.0;
+ SELECT unixepoch('now','subsec'); +
+ +

Compute how old Abraham Lincoln would be if he were still alive today:

+ +
+ SELECT timediff('now','1809-02-12'); +
+ +

5. Caveats And Bugs

+ +

The computation of local time depends heavily on the whim +of politicians and is thus difficult to get correct for +all locales. In this implementation, the standard C library +function localtime_r() is used to assist in the calculation of +local time. The +localtime_r() C function normally only works for years +between 1970 and 2037. For dates outside this range, SQLite +attempts to map the year into an equivalent year within +this range, do the calculation, then map the year back.

+ +

These functions only work for dates between 0000-01-01 00:00:00 +and 9999-12-31 23:59:59 (julian day numbers 1721059.5 through 5373484.5). +For dates outside that range, the results of these +functions are undefined.

+ +

Non-Vista Windows platforms only support one set of DST rules. +Vista only supports two. Therefore, on these platforms, +historical DST calculations will be incorrect. +For example, in the US, in 2007 the DST rules changed. +Non-Vista Windows platforms apply the new 2007 DST rules +to all previous years as well. Vista does somewhat better +getting results correct back to 1986, when the rules were also changed.

+ +

All internal computations assume the +Gregorian calendar +system. They also assume that every +day is exactly 86400 seconds in duration; no leap seconds are incorporated.

+

This page last modified on 2024-08-14 17:04:32 UTC

+ ADDED Doc/Extra/Core/lang_delete.html Index: Doc/Extra/Core/lang_delete.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_delete.html @@ -0,0 +1,3897 @@ + + + + + +DELETE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+DELETE +
+ + +
+ + + + +

1. Overview

+ +

delete-stmt: +

+
+
+ + + + + + +WITH + +RECURSIVE + + + + + +common-table-expression + + + + + + +, + + + + +DELETE + + + +FROM + + + +qualified-table-name + + + +returning-clause + + + + + +expr + + + +WHERE + + + + + + + + + + + + + + +
+

common-table-expression: +

+ +

expr: +

+ +

qualified-table-name: +

+ +

returning-clause: +

+ +
+ + +

The DELETE command removes records from the table identified by the + qualified-table-name. + +

If the WHERE clause is not present, all records in the table are deleted. + If a WHERE clause is supplied, then only those rows for which the + WHERE clause boolean expression is true are deleted. + Rows for which the expression is false or NULL are retained. + +

+ +

2. Restrictions on DELETE Statements Within CREATE TRIGGER

+ +

The following restrictions apply to DELETE statements that occur within the + body of a CREATE TRIGGER statement: + +

    +
  • The table-name specified as part of a + DELETE statement within + a trigger body must be unqualified. In other words, the + schema-name. prefix on the table name is not allowed + within triggers. If the table to which the trigger is attached is + not in the temp database, then DELETE statements within the trigger + body must operate on tables within the same database as it. If the table + to which the trigger is attached is in the TEMP database, then the + unqualified name of the table being deleted is resolved in the same way as + it is for a top-level statement (by searching first the TEMP database, then + the main database, then any other databases in the order they were + attached). + +

  • The INDEXED BY and NOT INDEXED clauses are not allowed on DELETE + statements within triggers.

    + +
  • The LIMIT and ORDER BY clauses (described below) are unsupported for + DELETE statements within triggers.

    + +
  • The RETURNING clause is not supported for triggers. +

+ +

3. Optional LIMIT and ORDER BY clauses

+ +

If SQLite is compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT +compile-time option, then the syntax of the DELETE statement is +extended by the addition of optional ORDER BY and LIMIT clauses:

+ +

delete-stmt-limited:

+
+ + + + +WITH + +RECURSIVE + + + + + +common-table-expression + + + + + + +, + + + + +DELETE + + + +FROM + + + +qualified-table-name + + + + + + +WHERE + + + +expr + +returning-clause + +ORDER + + + +BY + + + +ordering-term + +, + +LIMIT + + + +expr + + + +OFFSET + + + +expr + +, + + + +expr + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +

If a DELETE statement has a LIMIT clause, the maximum number of rows that will be deleted is found by evaluating the accompanying expression and casting it to an integer value. If the result of evaluating the LIMIT clause cannot be losslessly converted to an integer value, it is an error. A negative LIMIT value is interpreted as "no limit". If the DELETE statement also has an OFFSET clause, then it is similarly evaluated and cast to an integer value. Again, it is an error if the value cannot be losslessly converted to an integer. If there is no OFFSET clause, or the calculated integer value is negative, the effective OFFSET value is zero.

If the DELETE statement has an ORDER BY clause, then all rows that would +be deleted in the absence of the LIMIT clause are sorted according to the +ORDER BY. The first M rows, where M is the value found by +evaluating the OFFSET clause expression, are skipped, and the following +N, where N is the value of the LIMIT expression, are deleted. +If there are less than N rows remaining after taking the OFFSET clause +into account, or if the LIMIT clause evaluated to a negative value, then all +remaining rows are deleted. + +

If the DELETE statement has no ORDER BY clause, then all rows that +would be deleted in the absence of the LIMIT clause are assembled in an +arbitrary order before applying the LIMIT and OFFSET clauses to determine +the subset that are actually deleted. + +

The ORDER BY clause on a DELETE statement is used only to determine which +rows fall within the LIMIT. The order in which rows are deleted is arbitrary +and is not influenced by the ORDER BY clause. +This means that if there is a RETURNING clause, the rows returned by +the statement probably will not be in the order specified by the +ORDER BY clause. + + + +
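A brief sketch (hypothetical table and columns; only valid in builds compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT):

  -- Delete the 100 oldest rows, as determined by the "created" column:
  DELETE FROM log ORDER BY created LIMIT 100;
  -- Delete at most 50 arbitrary matching rows:
  DELETE FROM log WHERE level = 'debug' LIMIT 50;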

4. The Truncate Optimization

+ +

When the WHERE clause and RETURNING clause are both omitted from a DELETE statement and the table being deleted has no triggers, SQLite uses an optimization to erase the entire table content without having to visit each row of the table individually. This "truncate" optimization makes the delete run much faster. Prior to SQLite version 3.6.5 (2008-11-12), the truncate optimization also meant that the sqlite3_changes() and sqlite3_total_changes() interfaces and the count_changes pragma would not actually return the number of deleted rows. That problem has been fixed as of version 3.6.5 (2008-11-12).

The truncate optimization can be permanently disabled for all queries +by recompiling +SQLite with the SQLITE_OMIT_TRUNCATE_OPTIMIZATION compile-time switch.

+ +

The truncate optimization can also be disabled at runtime using +the sqlite3_set_authorizer() interface. If an authorizer callback +returns SQLITE_IGNORE for an SQLITE_DELETE action code, then +the DELETE operation will proceed but the truncate optimization will +be bypassed and rows will be deleted one by one.

+

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_detach.html Index: Doc/Extra/Core/lang_detach.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_detach.html @@ -0,0 +1,167 @@ + + + + + +DETACH + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+DETACH +
+
+ + + + +

detach-stmt: +

+
+
    DETACH [DATABASE] schema-name
+
+ + +

This statement detaches an additional database connection previously +attached using the ATTACH statement. +When not in shared cache mode, +it is possible to have the same database file attached multiple times using +different names, and detaching one connection to a file will leave the +others intact.

+ +

+In shared cache mode, attempting to attach the same database file more +than once results in an error. +
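A minimal sketch, using a hypothetical file name 'archive.db':

    ATTACH DATABASE 'archive.db' AS archive;
    /* ... work with tables as archive.tablename ... */
    DETACH DATABASE archive;      /* the DATABASE keyword is optional: DETACH archive; */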

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_dropindex.html Index: Doc/Extra/Core/lang_dropindex.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_dropindex.html @@ -0,0 +1,181 @@ + + + + + +DROP INDEX + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+DROP INDEX +
+
+ + + + +

drop-index-stmt: +

+
+
    DROP INDEX [IF EXISTS] [schema-name .] index-name
+
+ + +

The DROP INDEX statement removes an index added +with the CREATE INDEX statement. The index is completely removed from +the disk. The only way to recover the index is to reenter the +appropriate CREATE INDEX command.
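A short sketch with hypothetical names:

    CREATE TABLE users(email TEXT);
    CREATE INDEX idx_users_email ON users(email);
    DROP INDEX idx_users_email;
    DROP INDEX IF EXISTS idx_users_email;   /* IF EXISTS turns this second drop into a no-op */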

+ +

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_droptable.html Index: Doc/Extra/Core/lang_droptable.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_droptable.html @@ -0,0 +1,200 @@ + + + + + +DROP TABLE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+DROP TABLE +
+
+ + + + +

drop-table-stmt: +

+
+
    DROP TABLE [IF EXISTS] [schema-name .] table-name
+
+ + +

The DROP TABLE statement removes a table added with the +CREATE TABLE statement. The name specified is the +table name. The dropped table is completely removed from the database +schema and the disk file. The table can not be recovered. +All indices and triggers +associated with the table are also deleted.

+ +

The optional IF EXISTS clause suppresses the error that would normally +result if the table does not exist.

+ +

If foreign key constraints are enabled, a DROP TABLE command performs an +implicit DELETE FROM command before removing the +table from the database schema. Any triggers attached to the table are +dropped from the database schema before the implicit DELETE FROM +is executed, so this cannot cause any triggers to fire. By contrast, an +implicit DELETE FROM does cause any configured +foreign key actions to take place. +If the implicit DELETE FROM executed +as part of a DROP TABLE command violates any immediate foreign key constraints, +an error is returned and the table is not dropped. If +the implicit DELETE FROM causes any +deferred foreign key constraints to be violated, and the violations still +exist when the transaction is committed, an error is returned at the time +of commit. +
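A sketch of the foreign key interaction described above, using hypothetical tables:

    PRAGMA foreign_keys = ON;
    CREATE TABLE artist(id INTEGER PRIMARY KEY);
    CREATE TABLE track(artist_id INTEGER REFERENCES artist(id));
    INSERT INTO artist VALUES(1);
    INSERT INTO track VALUES(1);
    DROP TABLE artist;            /* error: the implicit "DELETE FROM artist" violates an immediate foreign key constraint */
    DROP TABLE IF EXISTS nosuch;  /* no error, thanks to IF EXISTS */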

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_droptrigger.html Index: Doc/Extra/Core/lang_droptrigger.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_droptrigger.html @@ -0,0 +1,176 @@ + + + + + +DROP TRIGGER + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ + + + +

drop-trigger-stmt: +

+
+
    DROP TRIGGER [IF EXISTS] [schema-name .] trigger-name
+
+ + +

The DROP TRIGGER statement removes a trigger created by the +CREATE TRIGGER statement. Once removed, the trigger definition is no +longer present in the sqlite_schema (or sqlite_temp_schema) table and is +not fired by any subsequent INSERT, UPDATE or DELETE statements. + +

Note that triggers are automatically dropped when the associated table is +dropped. +
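A brief sketch with hypothetical names:

    CREATE TABLE t(x);
    CREATE TABLE audit(msg TEXT);
    CREATE TRIGGER trg_t_insert AFTER INSERT ON t
    BEGIN
      INSERT INTO audit(msg) VALUES('row added');
    END;
    DROP TRIGGER trg_t_insert;    /* later INSERTs into t no longer write to audit */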

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_dropview.html Index: Doc/Extra/Core/lang_dropview.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_dropview.html @@ -0,0 +1,191 @@ + + + + + +DROP VIEW + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+DROP VIEW +
+
+ + + + +

drop-view-stmt: +

+
+
    DROP VIEW [IF EXISTS] [schema-name .] view-name
+
+ + +

The DROP VIEW statement removes a view created by the CREATE VIEW + statement. The view definition is removed from the database schema, but + no actual data in the underlying base tables is modified. + +

The view to drop is identified by the view-name and optional + schema-name specified as part of the DROP VIEW statement. This + reference is resolved using the standard procedure for object resolution. + +

+ If the specified view cannot be found and the IF EXISTS clause is not + present, it is an error. If the specified view cannot be found and an IF + EXISTS clause is present in the DROP VIEW statement, then the statement + is a no-op. + + +
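A brief sketch with hypothetical names:

    CREATE TABLE t(a, b);
    CREATE VIEW v_small AS SELECT a FROM t WHERE b < 10;
    DROP VIEW v_small;            /* the view definition is removed; the rows of t are untouched */
    DROP VIEW IF EXISTS v_small;  /* a no-op now that v_small is gone */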

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_explain.html Index: Doc/Extra/Core/lang_explain.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_explain.html @@ -0,0 +1,398 @@ + + + + + +EXPLAIN + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+EXPLAIN +
+
+ + + + +

1. Syntax

+

sql-stmt:

+
    [EXPLAIN [QUERY PLAN]] followed by one of:
        alter-table-stmt | analyze-stmt | attach-stmt | begin-stmt | commit-stmt |
        create-index-stmt | create-table-stmt | create-trigger-stmt | create-view-stmt |
        create-virtual-table-stmt | delete-stmt | delete-stmt-limited | detach-stmt |
        drop-index-stmt | drop-table-stmt | drop-trigger-stmt | drop-view-stmt |
        insert-stmt | pragma-stmt | reindex-stmt | release-stmt | rollback-stmt |
        savepoint-stmt | select-stmt | update-stmt | update-stmt-limited | vacuum-stmt
+
+ + +

2. Description

+ +

An SQL statement can be preceded by the keyword "EXPLAIN" or +by the phrase "EXPLAIN QUERY PLAN". Either modification causes the +SQL statement to behave as a query and to return information about +how the SQL statement would have operated if the EXPLAIN keyword or +phrase had been omitted.

+ +

The output from EXPLAIN and EXPLAIN QUERY PLAN is intended for +interactive analysis and troubleshooting only. The details of the +output format are subject to change from one release of SQLite to the next. +Applications should not use EXPLAIN or EXPLAIN QUERY PLAN since +their exact behavior is variable and only partially documented.

+ +

When the EXPLAIN keyword appears by itself it causes the statement +to behave as a query that returns the sequence of +virtual machine instructions it would have used to execute the command had +the EXPLAIN keyword not been present. When the EXPLAIN QUERY PLAN phrase +appears, the statement returns high-level information regarding the query +plan that would have been used. + +

The EXPLAIN QUERY PLAN command is described in +more detail here. + +
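A minimal sketch, using a hypothetical table (the exact output text varies between SQLite releases):

    CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT);
    EXPLAIN QUERY PLAN SELECT b FROM t1 WHERE a = 5;   /* returns a high-level description of the chosen plan */
    EXPLAIN SELECT b FROM t1 WHERE a = 5;              /* returns the virtual machine program, not query results */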

2.1. EXPLAIN operates at run-time, not at prepare-time

+ +

The EXPLAIN and EXPLAIN QUERY PLAN prefixes affect the behavior of +running a prepared statement using sqlite3_step(). The process of +generating a new prepared statement using sqlite3_prepare() or similar +is (mostly) unaffected by EXPLAIN. (The exception to the previous sentence +is that some special opcodes used by EXPLAIN QUERY PLAN are omitted when +building an EXPLAIN QUERY PLAN prepared statement, as a performance +optimization.) + +

This means that actions that occur during sqlite3_prepare() are +unaffected by EXPLAIN. + +

    +
  • +Some PRAGMA statements do their work during sqlite3_prepare() rather +than during sqlite3_step(). Those PRAGMA statements are unaffected +by EXPLAIN. They operate the same with or without the EXPLAIN prefix. +The set of PRAGMA statements that are unaffected by EXPLAIN can vary +from one release to the next. Some PRAGMA statements operate during +sqlite3_prepare() depending on their arguments. For consistent +results, avoid using EXPLAIN on PRAGMA statements. + +

  • +The authorizer callback is invoked regardless of the presence of +EXPLAIN or EXPLAIN QUERY PLAN. +

+

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_expr.html Index: Doc/Extra/Core/lang_expr.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_expr.html @@ -0,0 +1,3542 @@ + + + + + +SQL Language Expressions + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+SQL Language Expressions +
+ + +
+ + + + +

1. Syntax

+ +

expr: +

+
+
    literal-value
    | bind-parameter
    | [[schema-name .] table-name .] column-name
    | unary-operator expr
    | expr binary-operator expr
    | function-name ( function-arguments ) [filter-clause] [over-clause]
    | ( expr, ... )
    | CAST ( expr AS type-name )
    | expr COLLATE collation-name
    | expr [NOT] ( LIKE | GLOB | REGEXP | MATCH ) expr [ESCAPE expr]
    | expr ( ISNULL | NOTNULL | NOT NULL )
    | expr IS [NOT] [DISTINCT FROM] expr
    | expr [NOT] BETWEEN expr AND expr
    | expr [NOT] IN ( [ select-stmt | expr, ... ] )
    | expr [NOT] IN [schema-name .] ( table-name | table-function ( expr, ... ) )
    | [NOT] EXISTS ( select-stmt )
    | CASE [expr] WHEN expr THEN expr [WHEN ...] [ELSE expr] END
    | raise-function
+

filter-clause: +

+ +

function-arguments: +

+ +

literal-value: +

+ +

over-clause: +

+ +

raise-function: +

+ +

select-stmt: +

+ +

type-name: +

+ +
+ + + + +

2. Operators, and Parse-Affecting Attributes

+

SQLite understands these operators, listed in precedence1 order
+(top to bottom / highest to lowest): +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Operators 2
[expr]    + [expr]    - [expr]
[expr] COLLATE (collation-name) 3
||   ->   ->>
*   /   %
+   -
&  |   <<  >>
[expr] ESCAPE [escape-character-expr] 4
<  >  <=  >=
=  ==  <>  !=  IS   IS NOT
+ IS DISTINCT FROM   IS NOT DISTINCT FROM
+ [expr] BETWEEN5 [expr] AND [expr]
+ IN5  MATCH5  LIKE5  REGEXP5  GLOB5
+ [expr] ISNULL  [expr] NOTNULL   [expr] NOT NULL +
NOT [expr]
AND
OR
+

+

    +
  1. Operators shown within the same table cell share precedence.
    +
  2. "[expr]" denotes operand locations for non-binary operators.
    +   Operators with no "[expr]" adjunct are binary and left-associative.
    +
  3. The COLLATE clause (with its collation-name) acts as a single postfix operator.
    +
  4. The ESCAPE clause (with its escape character) acts as a single postfix operator.
    +   It can only bind to a preceding [expr] LIKE [expr] expression.
    +
  5. Each keyword in (BETWEEN IN GLOB LIKE MATCH REGEXP) may be prefixed
    +   by NOT, retaining the bare operator's precedence and associativity. +
+ + + + +

The COLLATE operator is a unary postfix +operator that assigns a collating sequence to an expression. +The collating sequence set by the COLLATE operator overrides the +collating sequence determined by the COLLATE clause in a table +column definition. +See the detailed discussion on collating sequences +in the Datatype In SQLite3 document for additional information. +

+ + + +

The unary operator + is a no-op. It can be applied +to strings, numbers, blobs or NULL and it always returns a result +with the same value as the operand.

+ +

Note that there are two variations of the equals and not equals +operators. Equals can be either + += or ==. +The not-equal operator can be either +!= or <>. +The || operator is "concatenate" - it joins together +the two strings of its operands. +The -> and ->> operators are "extract"; +they extract the RHS component from the LHS. +For an example, see +JSON subcomponent extraction.

+ +

+ +The % operator casts both of its operands to type +INTEGER and then computes the remainder after dividing the left integer +by the right integer. The other arithmetic operators perform integer +arithmetic if both operands are integers and no overflow would result, +or floating point arithmetic, per IEEE Standard 754, if either operand +is a real value or integer arithmetic would produce an overflow. +Integer divide yields an integer result, truncated toward zero. +

+ +

The result of any binary operator is either a numeric value or +NULL, except for the || concatenation operator, +and the -> and ->> extract operators +which can return values of any type.

+ +

All operators generally evaluate to NULL when any operand is NULL, +with specific exceptions as stated below. This is in accordance with +the SQL92 standard.

+ +

When paired with NULL:
+  AND evaluates to 0 (false) when +the other operand is false; and
+  OR evaluates to 1 (true) +when the other operand is true. +

+ + + + + +

The IS and IS NOT operators work +like = and != except when one or both of the +operands are NULL. In this case, if both operands are NULL, then the +IS operator evaluates to 1 (true) and the IS NOT operator evaluates +to 0 (false). If one operand is NULL and the other is not, then the +IS operator evaluates to 0 (false) and the IS NOT operator is 1 (true). +It is not possible for an IS or IS NOT expression to evaluate to NULL. + + + + +

+The IS NOT DISTINCT FROM operator is an alternative spelling +for the IS operator. +Likewise, the IS DISTINCT FROM operator means the same thing +as IS NOT. Standard SQL does not support the compact IS and IS NOT +notation. Those compact forms are an SQLite extension. You have to use +the prolix and much less readable IS NOT DISTINCT FROM and +IS DISTINCT FROM operators on other SQL database engines. + + + +
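A few illustrative comparisons (results shown in comments):

    SELECT NULL = NULL;                      /* NULL */
    SELECT NULL IS NULL;                     /* 1 */
    SELECT 1 IS NOT NULL;                    /* 1 */
    SELECT NULL IS NOT DISTINCT FROM NULL;   /* 1, same as IS */
    SELECT 1 IS DISTINCT FROM NULL;          /* 1, same as IS NOT */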

3. Literal Values (Constants)

+

+A literal value represents a constant. +Literal values may be integers, floating point numbers, strings, +BLOBs, or NULLs. +

+The syntax for integer and floating point literals (collectively +"numeric literals") is shown by the following diagram:

+ +

numeric-literal:

+
    digit+ [ . digit* ] [ ( E | e ) [ + | - ] digit+ ]
    | . digit+ [ ( E | e ) [ + | - ] digit+ ]
    | ( 0x | 0X ) hexdigit+
    (a single "_" may appear between any two digits or hexdigits)
+
+ + +

If a numeric literal has a decimal point or an exponentiation
clause or if it is less than -9223372036854775808 or
greater than 9223372036854775807, then it is a floating point literal.
Otherwise it is an integer literal.
The "E" character that begins the exponentiation
clause of a floating point literal can be either upper or lower case.
The "." character is always used
as the decimal point even if the locale setting specifies "," for
this role - the use of "," for the decimal point would result in
syntactic ambiguity.

+Beginning in SQLite version 3.46.0 (2024-05-23), a single extra underscore ("_") +character can be added between any two digits. The underscores are purely +for human readability and are ignored by SQLite. + + + +

Hexadecimal integer literals follow the C-language notation of +"0x" or "0X" followed by hexadecimal digits. +For example, 0x1234 means the same as 4660 +and 0x8000000000000000 means the same as -9223372036854775808. + Hexadecimal integer literals are interpreted as 64-bit +two's-complement integers and are thus limited +to sixteen significant digits of precision. +Support for hexadecimal integers was added to SQLite +version 3.8.6 (2014-08-15). +For backwards compatibility, the "0x" hexadecimal integer +notation is only understood by the SQL language parser, not by the +type conversions routines. +String variables that +contain text formatted like hexadecimal integers are not +interpreted as hexadecimal integers when coercing the string value +into an integer due to a CAST expression or for a column affinity +transformation or prior to performing a numeric operation or for +any other run-time conversions. When coercing a +string value in the format of a hexadecimal integer into an integer +value, the conversion process stops when the 'x' character is seen +so the resulting integer value is always zero. +SQLite only understands the hexadecimal integer notation when it +appears in the SQL statement text, not when it appears as +part of the content of the database. + +
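A few illustrative literals (results shown in comments; the underscore form assumes SQLite 3.46.0 or later):

    SELECT 1_000_000;                   /* 1000000: underscores are ignored */
    SELECT 0x1234;                      /* 4660 */
    SELECT CAST('0x1234' AS INTEGER);   /* 0: hex notation is only recognized in SQL text, not during conversions */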

A string constant is formed by enclosing the +string in single quotes ('). A single quote within the string can +be encoded by putting two single quotes in a row - as in Pascal. +C-style escapes using the backslash character are not supported because +they are not standard SQL. + +

BLOB literals are string literals containing hexadecimal data and +preceded by a single "x" or "X" character. Example: X'53514C697465' + +

+A literal value can also be the token "NULL". +

+ + + +

4. Parameters

+

+A "variable" or "parameter" token +specifies a placeholder in the expression for a +value that is filled in at runtime using the +sqlite3_bind() family of C/C++ interfaces. +Parameters can take several forms: +

+ +
+ + + + + + + + + + + + + + + + + + + + + +
?NNNA question mark followed by a number NNN holds a spot for the +NNN-th parameter. NNN must be between 1 and SQLITE_MAX_VARIABLE_NUMBER. +
?A question mark that is not followed by a number creates a parameter +with a number one greater than the largest parameter number already assigned. +If this means the parameter number is greater than +SQLITE_MAX_VARIABLE_NUMBER, it is an error. +This parameter format is provided for compatibility with other database +engines. But because it is easy to miscount the question marks, the +use of this parameter format is discouraged. Programmers are encouraged +to use one of the symbolic formats below or the ?NNN format above instead. +
:AAAAA colon followed by an identifier name holds a spot for a +named parameter with the name :AAAA. +Named parameters are also numbered. The number assigned is one greater than +the largest parameter number already assigned. If this means the parameter +would be assigned a number greater than SQLITE_MAX_VARIABLE_NUMBER, it is +an error. To avoid confusion, it is best to avoid mixing named and numbered +parameters.
@AAAAAn "at" sign works exactly like a colon, except that the name of +the parameter created is @AAAA.
$AAAAA dollar-sign followed by an identifier name also holds a spot for a named +parameter with the name $AAAA. The identifier name in this case can include +one or more occurrences of "::" and a suffix enclosed in "(...)" containing +any text at all. This syntax is the form of a variable name in the +Tcl programming language. The presence +of this syntax results from the fact that SQLite is really a +Tcl extension that has escaped into the wild.
+
+ +

Parameters that are not assigned values using +sqlite3_bind() are treated +as NULL. The sqlite3_bind_parameter_index() interface can be used +to translate a symbolic parameter name into its equivalent numeric index.

+ +

The maximum parameter number is set at compile-time by +the SQLITE_MAX_VARIABLE_NUMBER macro. An individual database connection +D can reduce its maximum parameter number below the compile-time maximum +using the sqlite3_limit(D, SQLITE_LIMIT_VARIABLE_NUMBER,...) interface. +
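A sketch of the parameter forms in a single statement, using hypothetical table, column, and parameter names; the values themselves are supplied at run time through the sqlite3_bind() interfaces, and unbound parameters read as NULL:

    SELECT *
      FROM users
     WHERE id = ?1
        OR name = :name
        OR email = @email
        OR tag = $tag;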

+ + + +

5. The LIKE, GLOB, REGEXP, MATCH, and extract operators

+

The LIKE operator does a pattern matching comparison. The operand +to the right of the LIKE operator contains the pattern and the left hand +operand contains the string to match against the pattern. + +A percent symbol ("%") in the LIKE pattern matches any +sequence of zero or more characters in the string. An underscore +("_") in the LIKE pattern matches any single character in the +string. Any other character matches itself or its lower/upper case +equivalent (i.e. case-insensitive matching). + +Important Note: SQLite only +understands upper/lower case for ASCII characters by default. The +LIKE operator is case sensitive by default for unicode characters that are +beyond the ASCII range. For example, +the expression 'a' LIKE 'A' +is TRUE but 'æ' LIKE 'Æ' is FALSE. +The ICU extension to SQLite includes an enhanced version of the +LIKE operator that does case folding across all unicode characters.

+ +

If the optional ESCAPE clause is present, then the expression +following the ESCAPE keyword must evaluate to a string consisting of +a single character. This character may be used in the LIKE pattern +to include literal percent or underscore characters. The escape +character followed by a percent symbol (%), underscore (_), or a second +instance of the escape character itself matches a +literal percent symbol, underscore, or a single escape character, +respectively. + +

The infix LIKE operator is implemented by calling the +application-defined SQL functions like(Y,X) or +like(Y,X,Z).

+ +

The LIKE operator can be made case sensitive using the +case_sensitive_like pragma.

+ + + +

The GLOB operator is similar to LIKE but uses the Unix +file globbing syntax for its wildcards. Also, GLOB is case +sensitive, unlike LIKE. Both GLOB and LIKE may be preceded by +the NOT keyword to invert the sense of the test. The infix GLOB +operator is implemented by calling the function +glob(Y,X) and can be modified by overriding +that function.
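A few illustrative pattern matches (results shown in comments):

    SELECT 'abc' LIKE 'A%';                         /* 1: ASCII case folding */
    SELECT '100% done' LIKE '100\% %' ESCAPE '\';   /* 1: the escaped % matches a literal percent sign */
    SELECT 'README.md' GLOB 'README*';              /* 1 */
    SELECT 'readme.md' GLOB 'README*';              /* 0: GLOB is case sensitive */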

+ + + +

The REGEXP operator is a special syntax for the regexp() +user function. No regexp() user function is defined by default +and so use of the REGEXP operator will normally result in an +error message. If an application-defined SQL function named "regexp" +is added at run-time, then the "X REGEXP Y" operator will +be implemented as a call to "regexp(Y,X)".

+ + + +

The MATCH operator is a special syntax for the match() +application-defined function. The default match() function implementation +raises an exception and is not really useful for anything. +But extensions can override the match() function with more +helpful logic.

+ + + +

The extract operators act as a special syntax for functions +"->"() and "->>"(). Default implementations for these functions +perform JSON subcomponent extraction, +but extensions can override them for other purposes.

+ + + +

6. The BETWEEN operator

+

The BETWEEN operator is logically equivalent to a pair of comparisons. +"x BETWEEN y AND z" is +equivalent to +"x>=y AND x<=z" except +that with BETWEEN, the x expression is only evaluated once. + + + +

7. The CASE expression

+

A CASE expression serves a role similar to IF-THEN-ELSE in other +programming languages. + +

The optional expression that occurs in between the CASE keyword and the +first WHEN keyword is called the "base" expression. There are two +fundamental forms +of the CASE expression: those with a base expression and those without. + +

In a CASE without a base expression, each WHEN expression is evaluated +and the result treated as a boolean, starting with the leftmost and continuing +to the right. The result of the CASE expression is the evaluation of the THEN +expression that corresponds to the first WHEN expression that evaluates to +true. Or, if none of the WHEN expressions evaluate to true, the result of +evaluating the ELSE expression, if any. If there is no ELSE expression and +none of the WHEN expressions are true, then the overall result is NULL. + +

A NULL result is considered untrue when evaluating WHEN terms. + +

In a CASE with a base expression, the base expression is evaluated just +once and the result is compared against the evaluation of each WHEN +expression from left to right. The result of the CASE expression is the +evaluation of the THEN expression that corresponds to the first WHEN +expression for which the comparison is true. Or, if none of the WHEN +expressions evaluate to a value equal to the base expression, the result +of evaluating the ELSE expression, if any. If there is no ELSE expression and +none of the WHEN expressions produce a result equal to the base expression, +the overall result is NULL. + +

When comparing a base expression against a WHEN expression, the same +collating sequence, affinity, and NULL-handling rules apply as if the +base expression and WHEN expression are respectively the left- and +right-hand operands of an = operator.

+ +

If the base +expression is NULL then the result of the CASE is always the result +of evaluating the ELSE expression if it exists, or NULL if it does not. + +

Both forms of the CASE expression use lazy, or short-circuit, +evaluation. + +

The only difference between the following two CASE expressions is that +the x expression is evaluated exactly once in the first example but +might be evaluated multiple times in the second: + +

    +
  • CASE x WHEN w1 THEN r1 WHEN w2 THEN r2 ELSE r3 END +
  • CASE WHEN x=w1 THEN r1 WHEN x=w2 THEN r2 ELSE r3 END +
+ +

The built-in iif(x,y,z) SQL function is logically +equivalent to "CASE WHEN x THEN y ELSE z END". The iif() function +is found in SQL Server and is included in SQLite for compatibility. +Some developers prefer the iif() function because it is more +concise. + + + + +
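A sketch of both CASE forms and the equivalent iif(), using a hypothetical orders table:

    CREATE TABLE orders(status TEXT);
    INSERT INTO orders(status) VALUES('new'), ('shipped'), (NULL);
    SELECT status,
           CASE status WHEN 'new' THEN 0 WHEN 'shipped' THEN 1 ELSE -1 END,   /* base-expression form; the NULL row takes the ELSE branch */
           CASE WHEN status IS NULL THEN 'unknown' ELSE status END,           /* form without a base expression */
           iif(status IS NULL, 'unknown', status)
      FROM orders;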

8. The IN and NOT IN operators

+

The IN and NOT IN operators take an expression on the +left and a list of values or a subquery on the right. +When the right operand of an IN or NOT IN operator is a subquery, the +subquery must have the same number of columns as there are columns in +the row value of the left operand. The subquery on the +right of an IN or NOT IN operator must be a scalar subquery if the left +expression is not a row value expression. +If the right operand of an IN or NOT IN operator is a list of values, +each of those values must be scalars and the left expression must also +be a scalar. +The right-hand side of an IN or NOT IN operator can be a +table name or table-valued function name in which +case the right-hand side is understood to be subquery of +the form "(SELECT * FROM name)". +When the right operand is an empty set, the result of IN is false and the +result of NOT IN is true, regardless of the left operand and even if the +left operand is NULL. +

The result of an IN or NOT IN operator is determined by the following +matrix: + +

    Left operand       Right operand      Right operand      Left operand found     Result of      Result of
    is NULL            contains NULL      is an empty set    within right operand   IN operator    NOT IN operator
    --------------------------------------------------------------------------------------------------------------
    no                 no                 no                 no                     false          true
    does not matter    no                 yes                no                     false          true
    no                 does not matter    no                 yes                    true           false
    no                 yes                no                 no                     NULL           NULL
    yes                does not matter    no                 does not matter        NULL           NULL
+ +

Note that SQLite allows the parenthesized list of scalar values on +the right-hand side of an IN or NOT IN operator to be an empty list but +most other SQL database engines and the SQL92 standard require +the list to contain at least one element.
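A few illustrative results that follow from the matrix above:

    SELECT 3 IN (1, 2, 3);       /* 1 */
    SELECT 3 IN ();              /* 0: an empty right-hand side always yields false */
    SELECT NULL IN ();           /* 0: still false, even with a NULL left operand */
    SELECT 3 NOT IN (1, NULL);   /* NULL: not found, and the list contains NULL */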

+ +

9. Table Column Names

+ +

A column name can be any of the names defined in the CREATE TABLE +statement or one of the following special identifiers: "ROWID", +"OID", or "_ROWID_". +The three special identifiers describe the +unique integer key (the rowid) associated with every +row of every table and so are not available on WITHOUT ROWID tables. +The special identifiers only refer to the row key if the CREATE TABLE +statement does not define a real column with the same name. +The rowid can be used anywhere a regular +column can be used.

+ + + +

10. The EXISTS operator

+ +

The EXISTS operator always evaluates to one of the integer values 0 +and 1. If executing the SELECT statement specified as the right-hand +operand of the EXISTS operator would return one or more rows, then the +EXISTS operator evaluates to 1. If executing the SELECT would return +no rows at all, then the EXISTS operator evaluates to 0. + +

The number of columns in each row returned by the SELECT statement +(if any) and the specific values returned have no effect on the results +of the EXISTS operator. In particular, rows containing NULL values are +not handled any differently from rows without NULL values. + + + +
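A brief sketch with a hypothetical table:

    CREATE TABLE t(a);
    SELECT EXISTS (SELECT a FROM t);   /* 0: the SELECT returns no rows */
    INSERT INTO t VALUES(NULL);
    SELECT EXISTS (SELECT a FROM t);   /* 1: one row exists, even though its value is NULL */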

11. Subquery Expressions

+ +

A SELECT statement enclosed in parentheses is a subquery. +All types of SELECT statement, including +aggregate and compound SELECT queries (queries with keywords like +UNION or EXCEPT) are allowed as scalar subqueries. +The value of a subquery expression is the first row of the result +from the enclosed SELECT statement. +The value of a subquery expression is NULL if the enclosed +SELECT statement returns no rows. + +

A subquery that returns a single column is a scalar subquery and can +be used most anywhere. +A subquery that returns two or more columns is a row value +subquery and can only be used as an operand of a comparison operator or as +the value in an UPDATE SET clause whose column name list has the same size. + + + +

12. Correlated Subqueries

+ +

A SELECT statement used as either a scalar subquery or as the +right-hand operand of an IN, NOT IN or EXISTS expression may contain +references to columns in the outer query. Such a subquery is known as +a correlated subquery. A correlated subquery is reevaluated each time +its result is required. An uncorrelated subquery is evaluated only once +and the result reused as necessary. + + + +

13. CAST expressions

+ +

A CAST expression of the form "CAST(expr AS type-name)" +is used to convert the value of expr to +a different storage class specified by type-name. +A CAST conversion is similar to the conversion that takes +place when a column affinity is applied to a value except that with +the CAST operator the conversion always takes place even if the conversion +lossy and irreversible, whereas column affinity only changes the data type +of a value if the change is lossless and reversible. + +

If the value of expr is NULL, then the result of the CAST +expression is also NULL. Otherwise, the storage class of the result +is determined by applying the rules for determining column affinity to +the type-name. + +

+ + + + + + + + +
Affinity of type-name + Conversion Processing +
NONE + Casting a value to a type-name with no affinity + causes the value to + be converted into a BLOB. Casting to a BLOB consists of first casting + the value to TEXT in the encoding of the database connection, then + interpreting the resulting byte sequence as a BLOB instead of as TEXT. + +
TEXT + To cast a BLOB value to TEXT, the sequence of bytes that make up the + BLOB is interpreted as text encoded using the database encoding. +

+ Casting an INTEGER or REAL value into TEXT renders the value as if via + sqlite3_snprintf() except that the resulting TEXT uses the encoding of + the database connection. + +

REAL + When casting a BLOB value to a REAL, the value is first converted to + TEXT. +

When casting a TEXT value to REAL, the longest possible prefix of
 the value that can be interpreted as a real number is extracted from
 the TEXT value and the remainder ignored. Any leading spaces in the
 TEXT value are ignored when converting from TEXT to REAL. If there is
 no prefix that can be interpreted as a real number, the result of the
 conversion is 0.0.

INTEGER + When casting a BLOB value to INTEGER, the value is first converted to + TEXT. +

When casting a TEXT value to INTEGER, the longest possible prefix of + the value that can be interpreted as an integer number is extracted from + the TEXT value and the remainder ignored. Any leading spaces in the + TEXT value when converting from TEXT to INTEGER are ignored. If there + is no prefix that can be interpreted as an integer number, the result + of the conversion is 0. If the prefix integer is greater than + +9223372036854775807 then the result of the cast is exactly + +9223372036854775807. Similarly, if the prefix integer is + less than -9223372036854775808 then the result of the cast is + exactly -9223372036854775808. + +

When casting to INTEGER, if the text looks like a floating point
 value with an exponent, the exponent will be ignored because it is
 not part of the integer prefix. For example,
 "CAST('123e+5' AS INTEGER)" results in 123, not in 12300000.

The CAST operator understands decimal
 integers only; conversion of hexadecimal integers stops
 at the "x" in the "0x" prefix of the hexadecimal integer string,
 and thus the result of the CAST is always zero.

A cast of a REAL value into an INTEGER results in the integer + between the REAL value and zero that is closest to the REAL value. + If a REAL is greater than the greatest possible signed + integer (+9223372036854775807) then the result is the greatest possible + signed integer and if the REAL is less than the least possible signed + integer (-9223372036854775808) then the result is the least possible + signed integer. + +

Prior to SQLite version 3.8.2 (2013-12-06), + casting a REAL value greater than + +9223372036854775807.0 into an integer resulted in the most negative + integer, -9223372036854775808. This behavior was meant to emulate the + behavior of x86/x64 hardware when doing the equivalent cast. + +

NUMERIC
 Casting a TEXT or BLOB value into NUMERIC yields either an INTEGER or
 a REAL result.
 If the input text looks like an integer (there is no decimal point nor
 exponent) and the value is small enough to fit in a 64-bit signed integer,
 then the result will be INTEGER.
 If the input text looks like floating point (there is a decimal point and/or
 an exponent) and describes a value that
 can be losslessly converted back and forth between an IEEE 754 64-bit float and a
 51-bit signed integer, then the result is INTEGER.
 (In the previous sentence, a 51-bit integer is specified since that is one
 bit less than the length of the mantissa of an IEEE 754 64-bit float and
 thus provides one bit of margin for the text-to-float conversion operation.)
 Any text input that describes a value outside the range of a 64-bit
 signed integer yields a REAL result.

Casting a REAL or INTEGER value to NUMERIC is a no-op, even if a real + value could be losslessly converted to an integer. +

+
+ +

Note that the result from casting any non-BLOB value into a +BLOB and the result from casting any BLOB value into a non-BLOB value +may be different depending on whether the database encoding is UTF-8, +UTF-16be, or UTF-16le. + + + + +
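A few illustrative casts (results shown in comments):

    SELECT CAST('123e+5' AS INTEGER);   /* 123: the exponent is not part of the integer prefix */
    SELECT CAST('  3.7abc' AS REAL);    /* 3.7: leading spaces and the non-numeric suffix are ignored */
    SELECT CAST(-9.5 AS INTEGER);       /* -9: REAL to INTEGER moves toward zero */
    SELECT CAST('42' AS NUMERIC);       /* 42, stored as an INTEGER */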

14. Boolean Expressions

+ +

The SQL language features several contexts where an expression is +evaluated and the result converted to a boolean (true or false) value. These +contexts are: + +

    +
  • the WHERE clause of a SELECT, UPDATE or DELETE statement, +
  • the ON or USING clause of a join in a SELECT statement, +
  • the HAVING clause of a SELECT statement, +
  • the WHEN clause of an SQL trigger, and +
  • the WHEN clause or clauses of some CASE expressions. +
+ +

To convert the results of an SQL expression to a boolean value, SQLite +first casts the result to a NUMERIC value in the same way as a +CAST expression. A numeric zero value (integer value 0 or real +value 0.0) is considered to be false. A NULL value is still NULL. +All other values are considered true. + +

For example, the values NULL, 0.0, 0, 'english' and '0' are all considered +to be false. Values 1, 1.0, 0.1, -0.1 and '1english' are considered to +be true. + +

Beginning with SQLite 3.23.0 (2018-04-02), SQLite recognizes the
identifiers "TRUE" and "FALSE" as boolean literals, if and only if those
identifiers are not already used for some other meaning. If there already
exist columns, tables, or other objects named TRUE or FALSE, then for
the sake of backwards compatibility, the TRUE and FALSE identifiers refer
to those other objects, not to the boolean values.

The boolean identifiers TRUE and FALSE are usually just aliases for +the integer values 1 and 0, respectively. However, if TRUE or FALSE +occur on the right-hand side of an IS operator, then the IS operator +evaluates the left-hand operand as a boolean value and returns an appropriate +answer. + + + +
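A few illustrative boolean results, assuming no objects named TRUE or FALSE exist:

    SELECT TRUE, FALSE;    /* 1, 0 */
    SELECT 0.5 IS TRUE;    /* 1: IS TRUE evaluates the left operand as a boolean */
    SELECT '0' IS TRUE;    /* 0: '0' converts to numeric 0, which is false */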

15. Functions

+

SQLite supports many simple, aggregate, +and window +SQL functions. For presentation purposes, simple functions are further +subdivided into core functions, date-time functions, +math functions, and JSON functions. +Applications can add new functions, written in C/C++, using the +sqlite3_create_function() interface. +

+ +

+The main expression bubble diagram above shows a single syntax for +all function invocations. But this is merely to simplify the expression +bubble diagram. In reality, each type of function has a slightly different +syntax, shown below. The function invocation syntax shown in the main +expression bubble diagram is the union of the three syntaxes shown here: + +

simple-function-invocation:

+
    simple-func ( [ expr, ... | * ] )
+
+

aggregate-function-invocation:

+
    aggregate-func ( [ [DISTINCT] expr, ... [ORDER BY ordering-term, ...] | * ] ) [filter-clause]
+
+

window-function-invocation:

+
    window-func ( [ expr, ... | * ] ) [filter-clause] OVER ( window-defn | window-name )
+
+ + +

The OVER clause is required for window functions and is prohibited
otherwise. The DISTINCT keyword and the ORDER BY clause are only allowed
in aggregate functions.
The FILTER clause may not appear on a simple function.

It is possible to have an aggregate function with the same name as a +simple function, as long as the number of arguments for the two forms of the +function are different. For example, the max() function with a +single argument is an aggregate and the max() function with two or more +arguments is a simple function. Aggregate functions can usually also +be used as window functions. +
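A sketch of the three invocation styles, using a hypothetical sales table (the FILTER clause and window functions require reasonably recent SQLite versions):

    CREATE TABLE sales(region TEXT, amount REAL);
    INSERT INTO sales VALUES('east', 10), ('east', 25), ('west', 40);
    SELECT max(amount),                              /* one argument: aggregate max() */
           max(10, 20),                              /* two arguments: simple max() */
           count(*) FILTER (WHERE amount > 15)       /* FILTER restricts the rows that are aggregated */
      FROM sales;
    SELECT region, amount,
           sum(amount) OVER (PARTITION BY region)    /* the OVER clause makes sum() a window function */
      FROM sales;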

This page last modified on 2024-06-02 10:08:16 UTC

+ ADDED Doc/Extra/Core/lang_indexedby.html Index: Doc/Extra/Core/lang_indexedby.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_indexedby.html @@ -0,0 +1,260 @@ + + + + + +The INDEXED BY Clause + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+The INDEXED BY Clause +
+
+ + + + +

1. How INDEXED BY Works

+ +

The INDEXED BY phrase forces the SQLite query planner to use a +particular named index on a DELETE, SELECT, or UPDATE statement. +The INDEXED BY phrase is an SQLite extension and +is not portable to other SQL database engines.

+ +

qualified-table-name: +

+
+
    [schema-name .] table-name [AS alias] [ INDEXED BY index-name | NOT INDEXED ]
+
+ + +

The "INDEXED BY index-name" phrase specifies
that the named index
must be used in order to look up values on the preceding table.
If index-name does not exist or cannot be used
for the query, then the preparation of the SQL statement fails.
The "NOT INDEXED" clause specifies that no index shall be used when
accessing the preceding table, including implied indices created by
UNIQUE and PRIMARY KEY constraints. However, the rowid
can still be used to look up entries even when "NOT INDEXED" is specified.

+ +

Some SQL database engines provide non-standard "hint" mechanisms which +can be used to give the query optimizer clues about what indices it should +use for a particular statement. The INDEXED BY clause of SQLite is +not a hinting mechanism and it should not be used as such. +The INDEXED BY clause does not give the optimizer hints about which index +to use; it gives the optimizer a requirement of which index to use. +If the query optimizer is unable to use the index specified by the +INDEXED BY clause, then the query will fail with an error.

+ +

The INDEXED BY clause is not intended for use in tuning +the performance of a query. The intent of the INDEXED BY clause is +to raise a run-time error if a schema change, such as dropping or +creating an index, causes the query plan for a time-sensitive query +to change. The INDEXED BY clause is designed to help detect +undesirable query plan changes during regression testing. +Application +developers are admonished to omit all use of INDEXED BY during +application design, implementation, testing, and tuning. If +INDEXED BY is to be used at all, it should be inserted at the very +end of the development process when "locking down" a design.
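A brief sketch with hypothetical names:

    CREATE TABLE t(a, b);
    CREATE INDEX idx_t_b ON t(b);
    SELECT * FROM t INDEXED BY idx_t_b WHERE b = 5;   /* fails to prepare if idx_t_b is dropped or cannot be used */
    SELECT * FROM t NOT INDEXED WHERE b = 5;          /* ignores idx_t_b (the rowid may still be used) */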

+ +

2. See Also

+ +
    +
  1. The query planner checklist describes steps that application
developers should follow to help resolve query planner problems.
Notice that the use of INDEXED BY is a last resort, to be used only
when all other measures fail.

    + +
  2. The unary "+" operator +can be used to disqualify terms in the WHERE clause from use by indices. +Careful use of unary + can sometimes help prevent the query planner from +choosing a poor index without restricting it to using one specific index. +Careful placement of unary + operators is a better method for controlling +which indices are used by a query.

    + +
  3. The sqlite3_stmt_status() C/C++ interface together with the +SQLITE_STMTSTATUS_FULLSCAN_STEP and SQLITE_STMTSTATUS_SORT verbs +can be used to detect at run-time when an SQL statement is not +making effective use of indices. Many applications may prefer to +use the sqlite3_stmt_status() interface to detect index misuse +rather than the INDEXED BY phrase described here.

    +
+

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_insert.html Index: Doc/Extra/Core/lang_insert.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_insert.html @@ -0,0 +1,2964 @@ + + + + + +INSERT + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+INSERT +
+
+ + + + +

1. Overview

+ +

insert-stmt: +

+
+
    [WITH [RECURSIVE] common-table-expression, ...]
    ( REPLACE | INSERT [OR ( ROLLBACK | ABORT | FAIL | IGNORE | REPLACE )] ) INTO
    [schema-name .] table-name [AS alias] [( column-name, ... )]
    (   VALUES ( expr, ... ), ...  [upsert-clause]
      | select-stmt [upsert-clause]
      | DEFAULT VALUES )
    [returning-clause]
+

common-table-expression: +

+ +

expr: +

+ +

returning-clause: +

+ +

select-stmt: +

+ +

upsert-clause: +

+ +
+ + +

The INSERT statement comes in three basic forms. +

    +
  1. INSERT INTO table VALUES(...); +

    The first form (with the "VALUES" keyword) creates one or more +new rows in +an existing table. If the column-name list after +table-name is omitted then the number +of values inserted into each row +must be the same as the number of columns in the table. In this case +the result of evaluating the left-most expression from each term of +the VALUES list is inserted into the left-most column of each new row, +and so forth for each subsequent expression. If a column-name +list is specified, then the number of values in each term of the +VALUE list must match the number of +specified columns. Each of the named columns of the new row is populated +with the results of evaluating the corresponding VALUES expression. Table +columns that do not appear in the column list are populated with the +default column value (specified as part of the CREATE TABLE statement), or +with NULL if no default value is specified. + +

  2. INSERT INTO table SELECT ...; +

    The second form of the INSERT statement contains a SELECT statement +instead of a VALUES clause. A new entry is inserted into the table for each +row of data returned by executing the SELECT statement. If a column-list is +specified, the number of columns in the result of the SELECT must be the same +as the number of items in the column-list. Otherwise, if no column-list is +specified, the number of columns in the result of the SELECT must be the same +as the number of columns in the table. Any SELECT statement, including +compound SELECTs and SELECT statements with ORDER BY and/or LIMIT clauses, +may be used in an INSERT statement of this form. + +

    To avoid a parsing ambiguity, the SELECT statement should always +contain a WHERE clause, even if that clause is simply "WHERE true", +if the upsert-clause is present. Without the WHERE clause, the +parser does not know if the token "ON" is part of a join constraint +on the SELECT, or the beginning of the upsert-clause. + +

  3. INSERT INTO table DEFAULT VALUES; +

    The third form of an INSERT statement is with DEFAULT VALUES. +The INSERT ... DEFAULT VALUES statement inserts a single new row into the +named table. Each column of the new row is populated with its +default value, or with a NULL if no default value is specified +as part of the column definition in the CREATE TABLE statement. +The upsert-clause is not supported after DEFAULT VALUES. + +
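A sketch of the three forms, using a hypothetical contacts table:

    CREATE TABLE contacts(id INTEGER PRIMARY KEY, name TEXT, email TEXT DEFAULT '');
    INSERT INTO contacts(name, email)
           VALUES('Alice', 'alice@example.com'), ('Bob', 'bob@example.com');   /* form 1 */
    INSERT INTO contacts(name)
           SELECT name FROM contacts WHERE true;   /* form 2; the WHERE clause matters only if an upsert-clause follows */
    INSERT INTO contacts DEFAULT VALUES;           /* form 3: name is NULL, email takes its default '' */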

+ +

+The initial "INSERT" keyword can be replaced by +"REPLACE" or "INSERT OR action" to specify an alternative +constraint conflict resolution algorithm to use during +that one INSERT command. +For compatibility with MySQL, the parser allows the use of the +single keyword REPLACE as an +alias for "INSERT OR REPLACE". + +

The optional "schema-name." prefix on the +table-name +is supported for top-level INSERT statements only. The table name must be +unqualified for INSERT statements that occur within CREATE TRIGGER statements. +Similarly, the "DEFAULT VALUES" form of the INSERT statement is supported for +top-level INSERT statements only and not for INSERT statements within +triggers. +

+ +

The optional "AS alias" phrase provides an alternative +name for the table into which content is being inserted. The alias name +can be used within WHERE and SET clauses of the UPSERT. If there is no +upsert-clause, then the alias is pointless, but also +harmless. + +

See the separate UPSERT documentation for the additional trailing +syntax that can cause an INSERT to behave as an UPDATE if the INSERT would +otherwise violate a uniqueness constraint. The upsert clause is not +allowed on an "INSERT ... DEFAULT VALUES". +

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_keywords.html Index: Doc/Extra/Core/lang_keywords.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_keywords.html @@ -0,0 +1,361 @@ + + + + + +SQLite Keywords + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+SQLite Keywords +
+
+ + + + + +

The SQL standard specifies a large number of keywords which may not +be used as the names of tables, indices, columns, databases, user-defined +functions, collations, virtual table modules, or any other named object. +The list of keywords is so long that few people can remember them all. +For most SQL code, your safest bet is to never use any English language +word as the name of a user-defined object.

+ +

If you want to use a keyword as a name, you need to quote it. There +are four ways of quoting keywords in SQLite:

+ +

+

+ + + + + + + + + + + + +
'keyword'A keyword in single quotes is a string literal.
"keyword"A keyword in double-quotes is an identifier.
[keyword]A keyword enclosed in square brackets is + an identifier. This is not standard SQL. This quoting mechanism + is used by MS Access and SQL Server and is included in SQLite for + compatibility.
`keyword`A keyword enclosed in grave accents (ASCII code 96) is + an identifier. This is not standard SQL. This quoting mechanism + is used by MySQL and is included in SQLite for + compatibility.
+
+ + +

For resilience when confronted with historical SQL statements, SQLite +will sometimes bend the quoting rules above:

+ +
    +
  • If a keyword in single +quotes (ex: 'key' or 'glob') is used in a context where +an identifier is allowed but where a string literal is not allowed, then +the token is understood to be an identifier instead of a string literal. +

  • + +
  • If a keyword in double +quotes (ex: "key" or "glob") is used in a context where +it cannot be resolved to an identifier but where a string literal +is allowed, then the token is understood to be a string literal instead +of an identifier.

  • +
+ +

Programmers are cautioned not to use the two exceptions described in +the previous bullets. We emphasize that they exist only so that old +and ill-formed SQL statements will run correctly. Future versions of +SQLite might raise errors instead of accepting the malformed +statements covered by the exceptions above.

+ +

+SQLite adds new keywords from time to time when it takes on new features. +So to prevent your code from being broken by future enhancements, you should +normally quote any identifier that is an English language word, even if +you do not have to. +
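A brief sketch of quoting styles applied to keywords used as identifiers:

    CREATE TABLE "order"("group" TEXT, [select] INTEGER, `index` REAL);
    SELECT "group", [select], `index` FROM "order";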

+ +

+The list below shows all possible keywords used by any build of +SQLite regardless of compile-time options. +Most reasonable configurations use most or all of these keywords, +but some keywords may be omitted when SQL language features are +disabled. +Applications can use the +sqlite3_keyword_count(), sqlite3_keyword_name(), and +sqlite3_keyword_check() interfaces to determine the keywords +recognized by SQLite at run-time. +Regardless of the compile-time configuration, any identifier that is not on +the following 147-element + +list is not a keyword to the SQL parser in SQLite: +

+ +
+
    +
  • ABORT
  • +
  • ACTION
  • +
  • ADD
  • +
  • AFTER
  • +
  • ALL
  • +
  • ALTER
  • +
  • ALWAYS
  • +
  • ANALYZE
  • +
  • AND
  • +
  • AS
  • +
  • ASC
  • +
  • ATTACH
  • +
  • AUTOINCREMENT
  • +
  • BEFORE
  • +
  • BEGIN
  • +
  • BETWEEN
  • +
  • BY
  • +
  • CASCADE
  • +
  • CASE
  • +
  • CAST
  • +
  • CHECK
  • +
  • COLLATE
  • +
  • COLUMN
  • +
  • COMMIT
  • +
  • CONFLICT
  • +
  • CONSTRAINT
  • +
  • CREATE
  • +
  • CROSS
  • +
  • CURRENT
  • +
  • CURRENT_DATE
  • +
  • CURRENT_TIME
  • +
  • CURRENT_TIMESTAMP
  • +
  • DATABASE
  • +
  • DEFAULT
  • +
  • DEFERRABLE
  • +
  • DEFERRED
  • +
  • DELETE
  • +
  • DESC
  • +
  • DETACH
  • +
  • DISTINCT
  • +
  • DO
  • +
  • DROP
  • +
  • EACH
  • +
  • ELSE
  • +
  • END
  • +
  • ESCAPE
  • +
  • EXCEPT
  • +
  • EXCLUDE
  • +
  • EXCLUSIVE
  • +
  • EXISTS
  • +
  • EXPLAIN
  • +
  • FAIL
  • +
  • FILTER
  • +
  • FIRST
  • +
  • FOLLOWING
  • +
  • FOR
  • +
  • FOREIGN
  • +
  • FROM
  • +
  • FULL
  • +
  • GENERATED
  • +
  • GLOB
  • +
  • GROUP
  • +
  • GROUPS
  • +
  • HAVING
  • +
  • IF
  • +
  • IGNORE
  • +
  • IMMEDIATE
  • +
  • IN
  • +
  • INDEX
  • +
  • INDEXED
  • +
  • INITIALLY
  • +
  • INNER
  • +
  • INSERT
  • +
  • INSTEAD
  • +
  • INTERSECT
  • +
  • INTO
  • +
  • IS
  • +
  • ISNULL
  • +
  • JOIN
  • +
  • KEY
  • +
  • LAST
  • +
  • LEFT
  • +
  • LIKE
  • +
  • LIMIT
  • +
  • MATCH
  • +
  • MATERIALIZED
  • +
  • NATURAL
  • +
  • NO
  • +
  • NOT
  • +
  • NOTHING
  • +
  • NOTNULL
  • +
  • NULL
  • +
  • NULLS
  • +
  • OF
  • +
  • OFFSET
  • +
  • ON
  • +
  • OR
  • +
  • ORDER
  • +
  • OTHERS
  • +
  • OUTER
  • +
  • OVER
  • +
  • PARTITION
  • +
  • PLAN
  • +
  • PRAGMA
  • +
  • PRECEDING
  • +
  • PRIMARY
  • +
  • QUERY
  • +
  • RAISE
  • +
  • RANGE
  • +
  • RECURSIVE
  • +
  • REFERENCES
  • +
  • REGEXP
  • +
  • REINDEX
  • +
  • RELEASE
  • +
  • RENAME
  • +
  • REPLACE
  • +
  • RESTRICT
  • +
  • RETURNING
  • +
  • RIGHT
  • +
  • ROLLBACK
  • +
  • ROW
  • +
  • ROWS
  • +
  • SAVEPOINT
  • +
  • SELECT
  • +
  • SET
  • +
  • TABLE
  • +
  • TEMP
  • +
  • TEMPORARY
  • +
  • THEN
  • +
  • TIES
  • +
  • TO
  • +
  • TRANSACTION
  • +
  • TRIGGER
  • +
  • UNBOUNDED
  • +
  • UNION
  • +
  • UNIQUE
  • +
  • UPDATE
  • +
  • USING
  • +
  • VACUUM
  • +
  • VALUES
  • +
  • VIEW
  • +
  • VIRTUAL
  • +
  • WHEN
  • +
  • WHERE
  • +
  • WINDOW
  • +
  • WITH
  • +
  • WITHOUT
  • +
+ +

This page last modified on 2022-11-26 14:56:19 UTC

+ ADDED Doc/Extra/Core/lang_mathfunc.html Index: Doc/Extra/Core/lang_mathfunc.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_mathfunc.html @@ -0,0 +1,319 @@ + + + + + +Built-In Mathematical SQL Functions + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+Built-In Mathematical SQL Functions +
+
+ + + + + +

1. Overview

+ +

The math functions shown below are a subgroup of +scalar functions that are built into the +SQLite amalgamation source file but are only active +if the amalgamation is compiled using the +-DSQLITE_ENABLE_MATH_FUNCTIONS compile-time option. + +

The arguments to math functions can be integers, floating-point numbers, +or strings or blobs that look like integers or real numbers. If any argument +is NULL or is a string or blob that is not readily converted into a number, +then the function will return NULL. +These functions also return NULL for domain errors, such as trying to +take the square root of a negative number, or compute the arccosine of a +value greater than 1.0 or less than -1.0. + +

The values returned by these functions are often approximations. +For example, the pi() function returns +3.141592653589793115997963468544185161590576171875 which +is about 1.22465e-16 too small, but it is the closest approximation available +for IEEE754 doubles. + +

+

+ + +

2. Descriptions of built-in scalar SQL math functions

+
+ + +

acos(X)

+ Return the arccosine of X. The result is in radians. +

+ +

acosh(X)

+ Return the hyperbolic arccosine of X. +

+ +

asin(X)

+ Return the arcsine of X. The result is in radians. +

+ +

asinh(X)

+ Return the hyperbolic arcsine of X. +

+ +

atan(X)

+ Return the arctangent of X. The result is in radians. +

+ +

atan2(Y,X)

+ Return the arctangent of Y/X. The result is in radians. The + result is placed into correct quadrant depending on the signs + of X and Y. +

+ +

atanh(X)

+ Return the hyperbolic arctangent of X. +

+ +

ceil(X)
ceiling(X)

+ Return the first representable integer value greater than or equal to X. + For positive values of X, this routine rounds away from zero. + For negative values of X, this routine rounds toward zero. +

+ +

cos(X)

+ Return the cosine of X. X is in radians. +

+ +

cosh(X)

+ Return the hyperbolic cosine of X. +

+ +

degrees(X)

+ Convert value X from radians into degrees. +

+ +

exp(X)

+ Compute e (Euler's number, approximately 2.71828182845905) raised + to the power X. +

+ +

floor(X)

+ Return the first representable integer value less than or equal to X. + For positive numbers, this function rounds toward zero. + For negative numbers, this function rounds away from zero. +

+ +

ln(X)

+ Return the natural logarithm of X. +

+ +

log(X)
log10(X)
log(B,X)

+ Return the base-10 logarithm for X. Or, for the two-argument version, + return the base-B logarithm of X. +

+ Compatibility note: SQLite works like PostgreSQL in that the log() function + computes a base-10 logarithm. Most other SQL database engines compute a + natural logarithm for log(). In the two-argument version of log(B,X), the + first argument is the base and the second argument is the operand. This is + the same as in PostgreSQL and MySQL, but is reversed from SQL Server which + uses the second argument as the base and the first argument as the operand. +

+ +

log2(X)

+ Return the logarithm base-2 for the number X. +

+ +

mod(X,Y)

+ Return the remainder after dividing X by Y. This is similar to the '%' + operator, except that it works for non-integer arguments. +

+ +

pi()

+ Return an approximation for π. +

+ +

pow(X,Y)
power(X,Y)

+ Compute X raised to the power Y. +

+ +

radians(X)

+ Convert X from degrees into radians. +

+ +

sin(X)

+ Return the sine of X. X is in radians. +

+ +

sinh(X)

+ Return the hyperbolic sine of X. +

+ +

sqrt(X)

+ Return the square root of X. NULL is returned if X is negative. +

+ +

tan(X)

+ Return the tangent of X. X is in radians. +

+ +

tanh(X)

+ Return the hyperbolic tangent of X. +

+ +

trunc(X)

+ Return the representable integer in between X and 0 (inclusive) + that is furthest away from zero. Or, in other words, return the + integer part of X, rounding toward zero. + The trunc() function is similar to ceiling(X) and floor(X) except + that it always rounds toward zero whereas ceiling(X) and floor(X) round + up and down, respectively. +

+ +
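The example promised in the log() compatibility note above is an illustrative sketch rather than part of the original reference; it contrasts the one- and two-argument forms and also exercises mod() and trunc():

      SELECT log(100.0);     -- 2.0: one-argument form is the base-10 logarithm
      SELECT log(2, 8.0);    -- 3.0: two-argument form, base first (as in PostgreSQL and MySQL)
      SELECT ln(exp(1.0));   -- 1.0: natural logarithm
      SELECT mod(9.5, 2.5);  -- 2.0: remainder with non-integer arguments
      SELECT trunc(-1.7);    -- -1.0: rounds toward zero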
+

This page last modified on 2023-12-05 14:43:20 UTC

+ ADDED Doc/Extra/Core/lang_naming.html Index: Doc/Extra/Core/lang_naming.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_naming.html @@ -0,0 +1,166 @@ + + + + + +Database Object Name Resolution + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ + + + +

+ In SQLite, a database object (a table, index, trigger or view) is identified + by the name of the object and the name of the database that it resides in. + Database objects may reside in the main database, the temp database, or in + an attached database. + +

The syntax of the DROP TABLE, DROP INDEX, DROP VIEW, DROP TRIGGER, REINDEX, ALTER TABLE and many other commands all permit the user to specify a database object either by its name alone, or by a combination of its name and the name of its database. If no database is specified as part of the object reference, then SQLite searches the main, temp and all attached databases for an object with a matching name. The temp database is searched first, followed by the main database, followed by all attached databases in the order that they were attached. The reference resolves to the first match found. For example:

+      /* Add a table named 't1' to the temp, main and an attached database */
+      ATTACH 'file.db' AS aux;
+      CREATE TABLE t1(x, y);
+      CREATE TEMP TABLE t1(x, y);
+      CREATE TABLE aux.t1(x, y);
+
+      DROP TABLE t1;         /* Drop table in temp database */
+      DROP TABLE t1;         /* Drop table in main database */
+      DROP TABLE t1;         /* Drop table in aux database */
+
+ +

+ If a schema name is specified as part of an object reference, it must be + either "main", or "temp" or the schema-name of an attached database. + Like other SQL identifiers, schema names are case-insensitive. + If a schema name is specified, then only that one schema is searched for + the named object. + +

Most object references may only resolve to a specific type of object (for example, a reference that is part of a DROP TABLE statement may only resolve to a table object, not an index, trigger or view). However, in some contexts (e.g. REINDEX) an object reference may resolve to more than one type of object. When searching database schemas for a named object, objects of types that cannot be used in the context of the reference are always ignored.
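The sketch below, using hypothetical object and database names, shows how a schema-qualified reference limits the search to a single schema:

      ATTACH 'other.db' AS aux;
      CREATE TABLE aux.t2(a, b);

      SELECT * FROM aux.t2;    -- searches only the "aux" schema
      SELECT * FROM t2;        -- searches temp, then main, then attached databases
      DROP TABLE temp.t2;      -- error unless a table "t2" exists in the temp database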

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_reindex.html Index: Doc/Extra/Core/lang_reindex.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_reindex.html @@ -0,0 +1,204 @@ + + + + + +REINDEX + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+REINDEX +
+
+ + + + +

reindex-stmt: +

+
+
    REINDEX [ collation-name | [ schema-name . ] ( index-name | table-name ) ]
+
+ + +

The REINDEX command is used to delete and recreate indices from scratch. +This is useful when the definition of a collation sequence has changed, or +when there are indexes on expressions involving a function whose definition +has changed. +

+ +

If the REINDEX keyword is not followed by a collation-sequence or database +object identifier, then all indices in all attached databases are rebuilt. + +

If the REINDEX keyword is followed by a collation-sequence name, then all indices in all attached databases that use the named collation sequence are recreated.

Or, if the argument attached to the REINDEX identifies a specific +database table, then all indices attached to the database table are rebuilt. +If it identifies a specific database index, then just that index is recreated. + +

For a command of the form "REINDEX name", a match +against collation-name takes precedence over a match +against index-name or table-name. +This ambiguity in the syntax may be avoided by always specifying a +schema-name when reindexing a specific table or index. +
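For example, the forms described above can be written as follows (the table, index, and collation names are hypothetical):

      REINDEX;                -- rebuild every index in every attached database
      REINDEX nocase;         -- rebuild all indices that use the built-in NOCASE collation
      REINDEX main.t1;        -- rebuild all indices on table t1 in the main database
      REINDEX main.t1_idx;    -- rebuild only the index named t1_idx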

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_replace.html Index: Doc/Extra/Core/lang_replace.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_replace.html @@ -0,0 +1,133 @@ + + + + + +REPLACE + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+REPLACE +
+
+ + + + +

The REPLACE command is an alias for the "INSERT OR REPLACE" variant of the INSERT command. This alias is provided for compatibility with other SQL database engines. See the INSERT command documentation for additional information.

+
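For example, the last two statements below are equivalent (the table is hypothetical):

      CREATE TABLE setting(name TEXT PRIMARY KEY, value);
      REPLACE INTO setting(name, value) VALUES('color', 'blue');
      INSERT OR REPLACE INTO setting(name, value) VALUES('color', 'blue');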

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_returning.html Index: Doc/Extra/Core/lang_returning.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_returning.html @@ -0,0 +1,2702 @@ + + + + + +RETURNING + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+RETURNING +
+ + +
+ + + + +

1. Overview

+ +

returning-clause: +

+
+
    RETURNING ( expr [ AS column-alias ] | * ) [ , ... ]
+

expr: +

+ +
+ + +

+The RETURNING clause is not a statement itself, but a clause that can +optionally appear near the end of top-level +DELETE, INSERT, and UPDATE statements. +The effect of the RETURNING clause is to cause the statement to return +one result row for each database row that is deleted, inserted, or updated. + RETURNING is not standard SQL. It is an extension. +SQLite's syntax for RETURNING is modelled after +PostgreSQL. + +

+The RETURNING syntax has been supported by SQLite since version 3.35.0 +(2021-03-12). + +

1.1. Typical Use

+ +

+The RETURNING clause is designed to provide the application with the +values of columns that are filled in automatically by SQLite. For +example: + +

CREATE TABLE t0(
+  a INTEGER PRIMARY KEY,
+  b DATE DEFAULT CURRENT_TIMESTAMP,
+  c INTEGER
+);
+INSERT INTO t0(c) VALUES(random()) RETURNING *;
+
+ +

+In the INSERT statement above, SQLite computes the values for all +three columns. The RETURNING clause causes SQLite to report the chosen +values back to the application. This saves the application from having +to issue a separate query to figure out exactly what values were inserted. + +

2. Details

+ +

+The RETURNING clause is followed by a comma-separated list of +expressions. These expressions are similar to the expressions following +the SELECT keyword in a SELECT statement in that they +define the values of the columns in the result set. Each expression +defines the value for a single column. Each expression may be +optionally followed by an AS clause that determines the name of +the result column. The special "*" expression expands into a list +of all non-hidden columns of the table being deleted, +inserted, or updated. + +

+For INSERT and UPDATE statements, references to columns in the table +being modified refer to the value of that column after the change +has been applied. For DELETE statements, references to columns mean +the value before the delete occurs. + +
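The sketch below (the invoice table is hypothetical) shows expressions, AS aliases, and the "*" form in a RETURNING clause, along with the before/after rule just described:

      CREATE TABLE invoice(id INTEGER PRIMARY KEY, amount REAL, paid INT DEFAULT 0);

      UPDATE invoice SET paid = 1 WHERE amount < 10.0
      RETURNING id AS invoice_id, amount*100 AS cents;   -- values after the update

      DELETE FROM invoice WHERE paid = 0
      RETURNING *;                                       -- values before the delete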

+The RETURNING clause only returns rows that are directly modified +by the DELETE, INSERT, or UPDATE statement. The RETURNING clause +does not report any additional database changes +caused by foreign key constraints or triggers. + +

+A RETURNING clause for an UPSERT reports both inserted and +updated rows. + +
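For instance, using the hypothetical invoice table above, an UPSERT reports the affected row whether it was inserted or updated:

      INSERT INTO invoice(id, amount) VALUES(1, 25.0)
        ON CONFLICT(id) DO UPDATE SET amount = excluded.amount
        RETURNING id, amount, paid;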

2.1. Processing Order

+ +

+When a DELETE, INSERT, or UPDATE statement with a RETURNING clause +is run, all of the database changes occur during the first invocation +of sqlite3_step(). The RETURNING clause output is accumulated in +memory. The first sqlite3_step() call returns one row of RETURNING +output and subsequent rows of RETURNING output are returned by subsequent +calls to sqlite3_step(). +To put this another way, all RETURNING clause output is embargoed +until after all database modification actions are completed. + +

+This means that if a statement has a RETURNING clause that generates +a large amount of output, either many rows or large +string or BLOB values, then the statement might use a lot of +temporary memory to hold those values while it is running. + +

+While SQLite does guarantee that all database changes will occur +before any RETURNING output is emitted, it does not guarantee +that the order of individual RETURNING rows will match the order in +which those rows were changed in the database. The output order +for the RETURNING rows is arbitrary and is not necessarily related +to the order in which the rows were processed internally. + +

2.2. Self-Referential Subqueries Are Indeterminate

+ +

SQLite guarantees that all database changes will occur before any RETURNING output is emitted, but SQLite makes no guarantees about the order in which database changes occur nor when the RETURNING output is computed in relation to those database changes. RETURNING clause outputs are all computed and placed in temporary storage during the first call to sqlite3_step(), but the specific order in which those outputs are computed and the order in which the database changes occur are both unspecified. The order can change from one query to the next.

Hence, if a column of the RETURNING output contains a subquery that references the table being modified, then the result of that subquery might depend on unspecified behavior and so could vary from one invocation of the query to the next.

2.3. ACID Changes

+ +

+When the previous "Processing Order" section says that +"database changes occur during the first invocation of sqlite3_step()", +that means that the changes are stored in the private page cache of +the database connection that is running the statement. It does +not mean that the changes are actually committed. The commit +does not occur until the statement finishes, and maybe not even then +if the statement is part of a larger transaction. Changes to the +database are still atomic, consistent, isolated, and durable (ACID). +When the previous section says "changes occur", +this means that internal data structures are adjusted pending a transaction +commit. Some of those changes may or may not spill into the +write-ahead log, depending on how much pressure there is on the +page cache. If the page cache is not under memory pressure, then +probably nothing will be written to disk until after the transaction +completes, which is after sqlite3_step() returns SQLITE_DONE. + +

+In other words, when the previous section says "database changes +occur", that means that the changes occur in the memory of the +specific database connection that is running the statement, not that +the changes are written to disk. + +

3. Limitations And Caveats

+ +
    +
  1. +The RETURNING clause is not available on DELETE and UPDATE statements +against virtual tables. +This limitation might be removed in future versions of SQLite. + +

  2. +The RETURNING clause is only available in top-level DELETE, INSERT, +and UPDATE statements. The RETURNING clause cannot be used by +statements within triggers. + +

  3. Even though a DML statement with a RETURNING clause returns table content, it cannot be used as a subquery. The RETURNING clause can only return data to the application. It is not currently possible to divert the RETURNING output into another table or query. PostgreSQL has the ability to use a DML statement with a RETURNING clause like a view in a common table expression. SQLite does not currently have that ability, though that is something that might be added in a future release.

  4. +The rows emitted by the RETURNING clause appear in an arbitrary order. +That order might change depending on the database schema, upon the specific +release of SQLite used, or even from one execution of the same statement +to the next. +There is no way to cause the output rows to appear in a particular order. +Even if SQLite is compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT +option such that ORDER BY clauses are allowed on DELETE and UPDATE +statements, those ORDER BY clauses do not constrain the output order +of RETURNING. + +

  5. The values emitted by the RETURNING clause are the values as seen by the top-level DELETE, INSERT, or UPDATE statement and do not reflect any subsequent value changes made by triggers. Thus, if the database includes AFTER triggers that modify some of the values of each row inserted or updated, the RETURNING clause emits the original values that are computed before those triggers run.

  6. +The RETURNING clause may not contain top-level aggregate functions or +window functions. If there are subqueries in the RETURNING clause, +those subqueries may contain aggregates and window functions, but +aggregates cannot occur at the top level. + +

  7. +The RETURNING clause may only reference the table being modified. +In an UPDATE FROM statement, the auxiliary tables named in the FROM +clause may not participate in the RETURNING clause. + +

+ + ADDED Doc/Extra/Core/lang_savepoint.html Index: Doc/Extra/Core/lang_savepoint.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_savepoint.html @@ -0,0 +1,325 @@ + + + + + +Savepoints + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+
+
+Savepoints +
+
+ + + + +

1. Syntax

+ +

savepoint-stmt: +

+
+
    SAVEPOINT savepoint-name
+
+

release-stmt: +

+
+
    RELEASE [ SAVEPOINT ] savepoint-name
+
+

rollback-stmt: +

+
+
    ROLLBACK [ TRANSACTION ] TO [ SAVEPOINT ] savepoint-name
+
+ + +

2. Savepoints

+ +

SAVEPOINTs are a method of creating transactions, similar to +BEGIN and COMMIT, except that the SAVEPOINT and RELEASE commands +are named and may be nested.

+ +

The SAVEPOINT command starts a new transaction with a name. +The transaction names need not be unique. +A SAVEPOINT can be started either within or outside of +a BEGIN...COMMIT. When a SAVEPOINT is the outer-most savepoint +and it is not within a BEGIN...COMMIT then the behavior is the +same as BEGIN DEFERRED TRANSACTION.

+ +

The ROLLBACK TO command reverts the state of the database back to what it was just after the corresponding SAVEPOINT. Note that unlike the plain ROLLBACK command (without the TO keyword), the ROLLBACK TO command does not cancel the transaction. Instead of cancelling the transaction, the ROLLBACK TO command restarts the transaction again at the beginning. All intervening SAVEPOINTs are canceled, however.

+ +

The RELEASE command is like a COMMIT for a SAVEPOINT. +The RELEASE command causes all savepoints back to and including the +most recent savepoint with a matching name to be removed from the +transaction stack. The RELEASE of an inner transaction +does not cause any changes to be written to the database file; it merely +removes savepoints from the transaction stack such that it is +no longer possible to ROLLBACK TO those savepoints. +If a RELEASE command releases the outermost savepoint, so +that the transaction stack becomes empty, then RELEASE is the same +as COMMIT. +The COMMIT command may be used to release all savepoints and +commit the transaction even if the transaction was originally started +by a SAVEPOINT command instead of a BEGIN command.

+ +

If the savepoint-name in a RELEASE command does not match any +savepoint currently in the transaction stack, then no savepoints are +released, the database is unchanged, and the RELEASE command returns +an error.

+ +

Note that an inner transaction might commit (using the RELEASE command) +but then later have its work undone by a ROLLBACK in an outer transaction. +A power failure or program crash or OS crash will cause the outer-most +transaction to rollback, undoing all changes that have occurred within +that outer transaction, even changes that have supposedly been "committed" +by the RELEASE command. Content is not actually committed on the disk +until the outermost transaction commits.

+ +
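The following sketch, which assumes a table t1 already exists, ties the SAVEPOINT, ROLLBACK TO, and RELEASE behavior described above together:

      BEGIN;
      INSERT INTO t1 VALUES(1);
      SAVEPOINT sp1;
      INSERT INTO t1 VALUES(2);
      SAVEPOINT sp2;
      INSERT INTO t1 VALUES(3);
      ROLLBACK TO sp1;   -- undoes the inserts of 2 and 3; sp1 and the transaction stay open
      RELEASE sp1;       -- removes sp1 from the transaction stack; nothing is written yet
      COMMIT;            -- the insert of 1 is committed to the database file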

There are several ways of thinking about the RELEASE command:

+ +
    +
  • +Some people view RELEASE as the equivalent of COMMIT for a SAVEPOINT. +This is an acceptable point of view as long as one remembers that the +changes committed by an inner transaction might later be undone by a +rollback in an outer transaction.

  • + +
  • +Another view of RELEASE is that it merges a named transaction into its +parent transaction, so that the named transaction and its parent become +the same transaction. After RELEASE, the named transaction and its parent +will commit or rollback together, whatever their fate may be. +

  • + +
  • +One can also think of savepoints as +"marks" in the transaction timeline. In this view, the SAVEPOINT command +creates a new mark, the ROLLBACK TO command rewinds the timeline back +to a point just after the named mark, and the RELEASE command +erases marks from the timeline without actually making any +changes to the database. +

  • +
+ + + +

3. Transaction Nesting Rules

+ +

The last transaction started will be the first +transaction committed or rolled back.

+ +

The BEGIN command only works if the transaction stack is empty, or +in other words if there are no pending transactions. If the transaction +stack is not empty when the BEGIN command is invoked, then the command +fails with an error.

+ +

The COMMIT command commits all outstanding transactions and leaves +the transaction stack empty.

+ +

The RELEASE command starts with the most recent addition to the +transaction stack and releases savepoints backwards +in time until it releases a savepoint with a matching savepoint-name. +Prior savepoints, even savepoints with matching savepoint-names, are +unchanged. +If the RELEASE command causes the +transaction stack to become empty (if the RELEASE command releases the +outermost transaction from the stack) then the transaction commits.

+ +

The ROLLBACK command without a TO clause rolls back all transactions and leaves the transaction stack empty.

+ +

The ROLLBACK command with a TO clause rolls back transactions going +backwards in time back to the most recent SAVEPOINT with a matching name. +The SAVEPOINT with the matching name remains on the transaction stack, +but all database changes that occurred after that SAVEPOINT was created +are rolled back. If the savepoint-name in a ROLLBACK TO command does not +match any SAVEPOINT on the stack, then the ROLLBACK command fails with an +error and leaves the state of the database unchanged.

+

This page last modified on 2022-01-08 05:02:57 UTC

+ ADDED Doc/Extra/Core/lang_select.html Index: Doc/Extra/Core/lang_select.html ================================================================== --- /dev/null +++ Doc/Extra/Core/lang_select.html @@ -0,0 +1,16251 @@ + + + + + +SELECT + + + +
+ + + +
+
+Small. Fast. Reliable.
Choose any three. +
+ + +
+
+ + + +
+
+
+ +
+ + + + + +

1. Overview

+

select-stmt: +

+
+
    [ WITH [ RECURSIVE ] common-table-expression [ , ... ] ]
    SELECT [ DISTINCT | ALL ] result-column [ , ... ]
        [ FROM ( table-or-subquery [ , ... ] | join-clause ) ]
        [ WHERE expr ]
        [ GROUP BY expr [ , ... ] [ HAVING expr ] ]
        [ WINDOW window-name AS window-defn [ , ... ] ]
      | VALUES ( expr [ , ... ] ) [ , ... ]
    [ compound-operator select-core ] ...
    [ ORDER BY ordering-term [ , ... ] ]
    [ LIMIT expr [ ( OFFSET | , ) expr ] ]
+

common-table-expression: +

+ +

compound-operator: +

+ +

expr: +

+ +

join-clause: +

+ +

ordering-term: +

+ +

result-column: +

+ +

table-or-subquery: +

+ +

window-defn: +

+ +
+ + + +

The SELECT statement is used to query the database. The +result of a SELECT is zero or more rows of data where each row +has a fixed number of columns. A SELECT statement does not make +any changes to the database. + +

The "select-stmt" syntax diagram above attempts to show as much of the +SELECT statement syntax as possible in a single diagram, because some readers +find that helpful. The following "factored-select-stmt" is an alternative +syntax diagrams that expresses the same syntax but tries to break the syntax +down into smaller chunks. + +

factored-select-stmt: +

+ + + +

Note that there are paths through the syntax diagrams that +are not allowed in practice. Some examples: +

+ +

These and other similar syntax restrictions are described in the text. + +

The SELECT statement is the most complicated command in the SQL language. +To make the description easier to follow, some of the passages below describe +the way the data returned by a SELECT statement is determined as a series of +steps. It is important to keep in mind that this is purely illustrative - +in practice neither SQLite nor any other SQL engine is required to follow +this or any other specific process. + + + +

2. Simple Select Processing

+ +

The core of a SELECT statement is a "simple SELECT" shown by the +select-core and simple-select-stmt syntax diagrams below. +In practice, most SELECT statements are simple SELECT statements. + +

simple-select-stmt: +

+
+
    [ WITH [ RECURSIVE ] common-table-expression [ , ... ] ]
    select-core
    [ ORDER BY ordering-term [ , ... ] ]
    [ LIMIT expr [ ( OFFSET | , ) expr ] ]
+

common-table-expression: +

+ +

expr: +

+ +

ordering-term: +

+ +

select-core: +

+
+
    SELECT [ DISTINCT | ALL ] result-column [ , ... ]
        [ FROM ( table-or-subquery [ , ... ] | join-clause ) ]
        [ WHERE expr ]
        [ GROUP BY expr [ , ... ] [ HAVING expr ] ]
        [ WINDOW window-name AS window-defn [ , ... ] ]
      | VALUES ( expr [ , ... ] ) [ , ... ]
+

join-clause: +

+ +

result-column: +

+ +

table-or-subquery: +

+ +

window-defn: +

+ +
+
+ + +

Generating the results of a simple SELECT +statement is presented as a four step process in the description below: + +

    +
  1. FROM clause processing: The input data for the simple SELECT is + determined. The input data is either implicitly a single row with 0 + columns (if there is no FROM clause) or is determined by the FROM + clause. +

  2. WHERE clause processing: The input data is filtered using the WHERE + clause expression. +

  3. GROUP BY, HAVING and result-column expression processing: + The set of result rows is computed by aggregating the data according to + any GROUP BY clause and calculating the result-set expressions for the + rows of the filtered input dataset. +

  4. DISTINCT/ALL keyword processing: If the query is a "SELECT + DISTINCT" query, duplicate rows are removed from the set of result rows. +

+ +

There are two types of simple SELECT statement - aggregate and +non-aggregate queries. A simple SELECT statement is an aggregate query if +it contains either a GROUP BY clause or one or more aggregate functions +in the result-set. Otherwise, if a simple SELECT contains no aggregate +functions or a GROUP BY clause, it is a non-aggregate query. + + + + +

2.1. Determination of input data (FROM clause processing)

+ +

The input data used by a simple SELECT query is a set of N rows +each M columns wide. + +

If the FROM clause is omitted from a simple SELECT statement, then the +input data is implicitly a single row zero columns wide (i.e. N=1 and +M=0). + +

If a FROM clause is specified, the data on which a simple SELECT query +operates comes from the one or more tables or subqueries (SELECT statements +in parentheses) specified following the FROM keyword. A subquery specified +in the table-or-subquery following the FROM clause in a +simple SELECT statement is +handled as if it was a table containing the data returned by executing the +subquery statement. Each column of the subquery has the +collation sequence and affinity of the corresponding expression +in the subquery statement. + +

If there is only a single table or subquery in the FROM clause, then the input data used by the SELECT statement is the contents of the named table. If there is more than one table or subquery in the FROM clause, then the contents of all tables and/or subqueries are joined into a single dataset for the simple SELECT statement to operate on. Exactly how the data is combined depends on the specific join-operator and join-constraint used to connect the tables or subqueries together.

All joins in SQLite are based on the cartesian product of the left and +right-hand datasets. The columns of the cartesian product dataset are, in +order, all the columns of the left-hand dataset followed by all the columns +of the right-hand dataset. There is a row in the cartesian product dataset +formed by combining each unique combination of a row from the left-hand +and right-hand datasets. In other words, if the left-hand dataset consists of +Nleft rows of +Mleft columns, and the right-hand dataset of +Nright rows of +Mright columns, then the cartesian product is a +dataset of +Nleft×Nright +rows, each containing +Mleft+Mright columns. + +

If the join-operator is "CROSS JOIN", "INNER JOIN", "JOIN" or a comma +(",") and there is no ON or USING clause, then the result of the join is +simply the cartesian product of the left and right-hand datasets. +If join-operator does have ON or USING clauses, those are handled according to +the following bullet points: + +

    +
  • If there is an ON clause then the ON expression is + evaluated for each row of the cartesian product as a + boolean expression. Only rows for which the expression evaluates to + true are included from the dataset. + +

  • If there is a USING clause + then each of the column names specified must exist in the datasets to + both the left and right of the join-operator. For each pair of named + columns, the expression "lhs.X = rhs.X" is evaluated for each row of + the cartesian product as a boolean expression. Only rows for which + all such expressions evaluates to true are included from the + result set. When comparing values as a result of a USING clause, the + normal rules for handling affinities, collation sequences and NULL + values in comparisons apply. The column from the dataset on the + left-hand side of the join-operator is considered to be on the left-hand + side of the comparison operator (=) for the purposes of collation + sequence and affinity precedence. + +

    For each pair of columns identified by a USING clause, the column + from the right-hand dataset is omitted from the joined dataset. This + is the only difference between a USING clause and its equivalent ON + constraint. + +

  • If the NATURAL keyword is in the join-operator then an + implicit USING clause is added to the join-constraints. The implicit + USING clause contains each of the column names that appear in both + the left and right-hand input datasets. If the left and right-hand + input datasets feature no common column names, then the NATURAL keyword + has no effect on the results of the join. A USING or ON clause may + not be added to a join that specifies the NATURAL keyword. + +

  • If the join-operator is a "LEFT JOIN" or "LEFT OUTER JOIN", then + after + the ON or USING filtering clauses have been applied, an extra row is + added to the output for each row in the original left-hand input + dataset that does not match any row in the right-hand dataset. + The added rows contain NULL values in the columns + that would normally contain values copied from the right-hand input + dataset. + +

  • + + + If the join-operator is a "RIGHT JOIN" or "RIGHT OUTER JOIN", then + after the ON or USING filtering clauses have been applied, an extra row is + added to the output for each row in the original right-hand input + dataset that does not match any row in the left-hand dataset. + The added rows contain NULL values in the columns + that would normally contain values copied from the left-hand input + dataset. + +

  • + + + A "FULL JOIN" or "FULL OUTER JOIN" is a combination of a + "LEFT JOIN" and a "RIGHT JOIN". Extra rows of output are + added for each row in left dataset that matches no rows in the right, + and for each row in the right dataset that matches no rows in the + left. Unmatched columns are filled in with NULL. +

+ +

When more than two tables are joined together as part of a FROM clause, +the join operations are processed in order from left to right. In other +words, the FROM clause (A join-op-1 B join-op-2 C) is computed as +((A join-op-1 B) join-op-2 C). + + + +
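The ON, USING, and LEFT JOIN behavior described above can be seen in the following sketch, which uses two hypothetical tables:

      CREATE TABLE artist(artistid INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE track(trackid INTEGER PRIMARY KEY, title TEXT, artistid INT);

      -- USING: artistid is compared in both tables and appears only once in the result.
      SELECT artistid, name, title FROM artist JOIN track USING(artistid);

      -- LEFT JOIN: artists with no matching track appear once, with a NULL title.
      SELECT name, title FROM artist LEFT JOIN track ON artist.artistid = track.artistid;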

2.2. Special handling of CROSS JOIN.

+ +

There is no difference between the "INNER JOIN", "JOIN" and "," join +operators. They are completely interchangeable in SQLite. +The "CROSS JOIN" join operator produces the same result as the +"INNER JOIN", "JOIN" and "," operators, but is +handled differently by the query optimizer +in that it prevents the query optimizer from reordering +the tables in the join. An application programmer can use the CROSS JOIN +operator to directly influence the algorithm that is chosen to implement +the SELECT statement. Avoid using CROSS JOIN except in specific situations +where manual control of the query optimizer is desired. Avoid using +CROSS JOIN early in the development of an application as doing so is +a premature +optimization. The special handling of CROSS JOIN is an SQLite-specific +feature and is not a part of standard SQL. + + + + +
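As a sketch (with hypothetical node and edge tables), writing CROSS JOIN keeps the left-hand table in the outer loop of the join:

      -- The query planner must scan "node" in the outer loop and "edge" in the inner loop.
      SELECT *
        FROM node CROSS JOIN edge
       WHERE node.id = edge.fromid;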

2.3. WHERE clause filtering.

+ +

If a WHERE clause is specified, the WHERE expression is evaluated for +each row in the input data as a boolean expression. Only rows for which the +WHERE clause expression evaluates to true are included from the dataset before +continuing. Rows are excluded from the result if the WHERE clause +evaluates to either false or NULL. + +

For a JOIN or INNER JOIN or CROSS JOIN, there is no difference between a constraint expression in the WHERE clause and one in the ON clause. However, for a LEFT or RIGHT or FULL OUTER JOIN, the difference is very important. In an outer join, the extra NULL rows for non-matched rows on the other operand are added after ON clause processing but before WHERE clause processing. A constraint of the form "left.x=right.y" in an ON clause will therefore allow through the added all-NULL rows. But if that same constraint is in the WHERE clause, a NULL in one of "right.y" or "left.x" will prevent the expression "left.x=right.y" from being true, and thus exclude that row from the output.
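Reusing the hypothetical artist and track tables from the earlier sketch, the difference looks like this:

      -- Constraint in the ON clause: unmatched artists survive with a NULL title.
      SELECT name, title
        FROM artist LEFT JOIN track
          ON artist.artistid = track.artistid AND track.title <> 'Intro';

      -- Same constraint in the WHERE clause: the NULL-padded rows are filtered out,
      -- so the query behaves like an inner join.
      SELECT name, title
        FROM artist LEFT JOIN track ON artist.artistid = track.artistid
       WHERE track.title <> 'Intro';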

2.4. Generation of the set of result rows

+ +

Once the input data from the FROM clause has been filtered by the +WHERE clause expression (if any), the set of result rows for the simple +SELECT are calculated. Exactly how this is done depends on whether the simple +SELECT is an aggregate or non-aggregate query, and whether or not a GROUP +BY clause was specified. + +

The list of expressions between the SELECT and FROM keywords is known as +the result expression list. If a result expression is the special expression +"*" then all columns in the input data are substituted for that one expression. +If the expression is the alias of a table or subquery in the FROM clause +followed by ".*" then all columns from the named table or subquery are +substituted for the single expression. It is an error to use a "*" or +"alias.*" expression in any context other than a result expression list. +It is also an error to use a "*" or "alias.*" expression in a simple SELECT +query that does not have a FROM clause. + +

The number of columns in the rows returned by a simple SELECT statement +is equal to the number of expressions in the result expression list after +substitution of * and alias.* expressions. Each result row is calculated by +evaluating the expressions in the result expression list with respect to a +single row of input data or, for aggregate queries, with respect to a group +of rows. + +

    +
  • If the SELECT statement is a non-aggregate query, then + each expression in the result expression list is evaluated for each row in + the dataset filtered by the WHERE clause. + +

  • If the SELECT statement is an aggregate query without a GROUP + BY clause, then each aggregate expression in the result-set is + evaluated once across the entire dataset. Each non-aggregate expression + in the result-set is evaluated once for an arbitrarily selected row of + the dataset. The same arbitrarily selected row is used for each + non-aggregate expression. Or, if the dataset contains zero rows, then + each non-aggregate expression is evaluated against a row consisting + entirely of NULL values. + +

    The single row of result-set data created by evaluating the aggregate + and non-aggregate expressions in the result-set forms the result of an + aggregate query without a GROUP BY clause. An aggregate query without a + GROUP BY clause always returns exactly one row of data, even if there are + zero rows of input data. + + +

  • If the SELECT statement is an aggregate query with a GROUP + BY clause, then each of the expressions specified as part of the + GROUP BY clause is evaluated for each row of the dataset according to + the processing rules stated below for ORDER BY expressions. Each row + is then assigned to a "group" based on the results; rows for which + the results of evaluating the GROUP BY expressions are the same get + assigned to the same group. For the purposes of grouping rows, NULL + values are considered equal. The usual rules for selecting a + collation sequence with which to compare text values apply when evaluating + expressions in a GROUP BY clause. The expressions in the GROUP BY clause + do not have to be expressions that appear in the result. The + expressions in a GROUP BY clause may not be aggregate expressions. + + +

    If a HAVING clause is specified, it is evaluated once for each group + of rows as a boolean expression. If the result of evaluating the + HAVING clause is false, the group is discarded. If the HAVING clause is + an aggregate expression, it is evaluated across all rows in the group. If + a HAVING clause is a non-aggregate expression, it is evaluated with respect + to an arbitrarily selected row from the group. The HAVING expression may + refer to values, even aggregate functions, that are not in the result.

    + +

    Each expression in the result-set is then evaluated once for each + group of rows. If the expression is an aggregate expression, it is + evaluated across all rows in the group. Otherwise, it is evaluated against + a single arbitrarily chosen row from within the group. If there is more + than one non-aggregate expression in the result-set, then all such + expressions are evaluated for the same row. + +

    Each group of input dataset rows contributes a single row to the + set of result rows. Subject to filtering associated with the DISTINCT + keyword, the number of rows returned by an aggregate query with a GROUP + BY clause is the same as the number of groups of rows produced by applying + the GROUP BY and HAVING clauses to the filtered input dataset. +

+ + + +
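As a concrete illustration of the GROUP BY and HAVING processing just described, again using the hypothetical track table:

      SELECT artistid, count(*) AS n
        FROM track
       GROUP BY artistid
      HAVING count(*) >= 2;   -- groups with fewer than two rows are discarded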

2.5. Bare columns in an aggregate query

+ +

The usual case is that all column names in an aggregate query are either +arguments to aggregate functions or else appear in the GROUP BY clause. +A result column which contains a column name that is not within an +aggregate function and that does not appear in the GROUP BY clause (if +one exists) is called a "bare" column. +Example: +

+SELECT a, b, sum(c) FROM tab1 GROUP BY a;
+
+

In the query above, the "a" column is part of the GROUP BY clause +and so each row of the output contains one of the distinct values for +"a". The "c" column is contained within the sum() aggregate function +and so that output column is the sum of all "c" values in rows that +have the same value for "a". But what is the result of the bare +column "b"? The answer is that the "b" result will be the +value for "b" in one of the input rows that form the aggregate. +The problem is that you usually do not know which input row is used +to compute "b", and so in many cases the value for "b" is undefined. +

+

+Special processing occurs when the aggregate function is +either min() or max(). Example: +

+SELECT a, b, max(c) FROM tab1 GROUP BY a;
+
+ +

If there is exactly one min() or max() aggregate +in the query, then all bare columns in the result set take values from an +input row which also contains the minimum or maximum. So in the query +above, the value of the "b" column in the output will be the value of +the "b" column in the input row that has the largest "c" value. +There are limitations on this special behavior of +min() and max(): + +

    +
  1. +

    If the same minimum or maximum value occurs on two or more rows, then bare values might be selected from any of those rows. The choice is arbitrary. There is no way to predict from which row the bare values will be chosen. The choice might be different for different bare columns within the same query.

  2. +

    If there are two or more min() or max() aggregates +in the query, then bare column values will be taken from one of the rows +on which one of the aggregates has their minimum or maximum value. The choice +of which min() or max() aggregate determines the +selection of bare column values is arbitrary. The choice might be different +for different bare columns within the same query. + +

  3. +

    This special processing for min() or max() aggregates +only works for the built-in implementation of those aggregates. If an +application overrides the built-in min() or max() aggregates +with application-defined alternatives, then the values selected for bare +columns will be taken from an arbitrary row. +

+ +

Most other SQL database engines disallow bare columns. If you include +a bare column in a query, other database engines will usually raise an error. +The ability to include bare columns in a query is an SQLite-specific extension. +This is considered a feature, not a bug. See the discussion on +SQLite Forum thread 7481d2a6df8980ff +for additional information.

+ + + + +

2.6. Removal of duplicate rows (DISTINCT processing)

+ +

One of the ALL or DISTINCT keywords may follow the SELECT keyword in a +simple SELECT statement. If the simple SELECT is a SELECT ALL, then the +entire set of result rows are returned by the SELECT. If neither ALL or +DISTINCT are present, then the behavior is as if ALL were specified. +If the simple SELECT is a SELECT DISTINCT, then duplicate rows are removed +from the set of result rows before it is returned. For the purposes of +detecting duplicate rows, two NULL values are considered to be equal. The +usual rules apply for selecting a collation +sequence to compare text values. + + + +
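For example, with the hypothetical track table from above:

      SELECT ALL artistid FROM track;        -- same as omitting ALL: every row is returned
      SELECT DISTINCT artistid FROM track;   -- duplicates removed; all NULLs collapse into one row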

3. Compound Select Statements

+ +

Two or more simple SELECT statements may be connected together to form +a compound SELECT using the UNION, UNION ALL, INTERSECT or EXCEPT operator, +as shown by the following diagram: + +

compound-select-stmt: +

+
+
    [ WITH [ RECURSIVE ] common-table-expression [ , ... ] ]
    select-core ( UNION [ ALL ] | INTERSECT | EXCEPT ) select-core [ ... ]
    [ ORDER BY ordering-term [ , ... ] ]
    [ LIMIT expr [ ( OFFSET | , ) expr ] ]
+

common-table-expression: +

+ +

expr: +

+