This chapter describes the syntax for the SQL statements supported by MySQL.
ALTER {DATABASE | SCHEMA} [db_name]
    alter_specification ...
ALTER {DATABASE | SCHEMA} db_name
    UPGRADE DATA DIRECTORY NAME

alter_specification:
    [DEFAULT] CHARACTER SET [=] charset_name
  | [DEFAULT] COLLATE [=] collation_name
ALTER DATABASE
enables you to
change the overall characteristics of a database. These
characteristics are stored in the db.opt
file
in the database directory. To use ALTER
DATABASE
, you need the
ALTER
privilege on the database.
ALTER
SCHEMA
is a synonym for ALTER
DATABASE
.
The database name can be omitted from the first syntax, in which case the statement applies to the default database.
The CHARACTER SET
clause changes the default
database character set. The COLLATE
clause
changes the default database collation. Section 10.1, “Character Set Support”,
discusses character set and collation names.
You can see what character sets and collations are available
using, respectively, the SHOW CHARACTER
SET
and SHOW COLLATION
statements. See Section 13.7.5.3, “SHOW CHARACTER SET Syntax”, and
Section 13.7.5.4, “SHOW COLLATION Syntax”, for more information.
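For example, a statement such as the following (using a hypothetical database name mydb) sets both the default character set and a matching collation for a database:
ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_general_ci;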
If you change the default character set or collation for a database, stored routines that use the database defaults must be dropped and recreated so that they use the new defaults. (In a stored routine, variables with character data types use the database defaults if the character set or collation are not specified explicitly. See Section 13.1.16, “CREATE PROCEDURE and CREATE FUNCTION Syntax”.)
The syntax that includes the UPGRADE DATA DIRECTORY
NAME
clause updates the name of the directory associated
with the database to use the encoding implemented in MySQL 5.1 for
mapping database names to database directory names (see
Section 9.2.3, “Mapping of Identifiers to File Names”). This clause is for use
under these conditions:
It is intended when upgrading MySQL to 5.1 or later from older versions.
It is intended to update a database directory name to the current encoding format if the name contains special characters that need encoding.
The statement is used by mysqlcheck (as invoked by mysql_upgrade).
For example, if a database in MySQL 5.0 has the name
a-b-c
, the name contains instances of the
-
(dash) character. In MySQL 5.0, the database
directory is also named a-b-c
, which is not
necessarily safe for all file systems. In MySQL 5.1 and later, the
same database name is encoded as a@002db@002dc
to produce a file system-neutral directory name.
When a MySQL installation is upgraded to MySQL 5.1 or later from
an older version, the server displays a name such as
a-b-c
(which is in the old format) as
#mysql50#a-b-c
, and you must refer to the name
using the #mysql50#
prefix. Use
UPGRADE DATA DIRECTORY NAME
in this case to
explicitly tell the server to re-encode the database directory
name to the current encoding format:
ALTER DATABASE `#mysql50#a-b-c` UPGRADE DATA DIRECTORY NAME;
After executing this statement, you can refer to the database as
a-b-c
without the special
#mysql50#
prefix.
The UPGRADE DATA DIRECTORY NAME
clause is
deprecated in MySQL 5.7.6 and will be removed in a future
version of MySQL. If it is necessary to convert MySQL 5.0
database or table names, a workaround is to upgrade a MySQL 5.0
installation to MySQL 5.1 before upgrading to a more recent
release.
ALTER
    [DEFINER = { user | CURRENT_USER }]
    EVENT event_name
    [ON SCHEDULE schedule]
    [ON COMPLETION [NOT] PRESERVE]
    [RENAME TO new_event_name]
    [ENABLE | DISABLE | DISABLE ON SLAVE]
    [COMMENT 'comment']
    [DO event_body]
The ALTER EVENT
statement changes
one or more of the characteristics of an existing event without
the need to drop and recreate it. The syntax for each of the
DEFINER
, ON SCHEDULE
,
ON COMPLETION
, COMMENT
,
ENABLE
/ DISABLE
, and
DO
clauses is exactly the same as
when used with CREATE EVENT
. (See
Section 13.1.12, “CREATE EVENT Syntax”.)
Any user can alter an event defined on a database for which that
user has the EVENT
privilege. When
a user executes a successful ALTER
EVENT
statement, that user becomes the definer for the
affected event.
ALTER EVENT
works only with an
existing event:
mysql> ALTER EVENT no_such_event
    ->     ON SCHEDULE
    ->     EVERY '2:3' DAY_HOUR;
ERROR 1517 (HY000): Unknown event 'no_such_event'
In each of the following examples, assume that the event named
myevent
is defined as shown here:
CREATE EVENT myevent ON SCHEDULE EVERY 6 HOUR COMMENT 'A sample comment.' DO UPDATE myschema.mytable SET mycol = mycol + 1;
The following statement changes the schedule for
myevent
from once every six hours starting
immediately to once every twelve hours, starting four hours from
the time the statement is run:
ALTER EVENT myevent ON SCHEDULE EVERY 12 HOUR STARTS CURRENT_TIMESTAMP + INTERVAL 4 HOUR;
It is possible to change multiple characteristics of an event in a
single statement. This example changes the SQL statement executed
by myevent
to one that deletes all records from
mytable
; it also changes the schedule for the
event such that it executes once, one day after this
ALTER EVENT
statement is run.
ALTER EVENT myevent ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 DAY DO TRUNCATE TABLE myschema.mytable;
Specify the options in an ALTER
EVENT
statement only for those characteristics that you
want to change; omitted options keep their existing values. This
includes any default values for CREATE
EVENT
such as ENABLE
.
To disable myevent
, use this
ALTER EVENT
statement:
ALTER EVENT myevent DISABLE;
The ON SCHEDULE
clause may use expressions
involving built-in MySQL functions and user variables to obtain
any of the timestamp
or
interval
values which it contains. You
cannot use stored routines or user-defined functions in such
expressions, and you cannot use any table references; however, you
can use SELECT FROM DUAL
. This is true for both
ALTER EVENT
and
CREATE EVENT
statements. References
to stored routines, user-defined functions, and tables in such
cases are specifically not permitted, and fail with an error (see
Bug #22830).
Although an ALTER EVENT
statement
that contains another ALTER EVENT
statement in its DO
clause appears
to succeed, when the server attempts to execute the resulting
scheduled event, the execution fails with an error.
To rename an event, use the ALTER
EVENT
statement's RENAME TO
clause.
This statement renames the event myevent
to
yourevent
:
ALTER EVENT myevent RENAME TO yourevent;
You can also move an event to a different database using
ALTER EVENT ... RENAME TO ... and
db_name.event_name notation, as shown here:
ALTER EVENT olddb.myevent RENAME TO newdb.myevent;
To execute the previous statement, the user executing it must have
the EVENT
privilege on both the
olddb
and newdb
databases.
There is no RENAME EVENT
statement.
The value DISABLE ON SLAVE
is used on a
replication slave instead of ENABLE
or
DISABLE
to indicate an event that was created
on the master and replicated to the slave, but that is not
executed on the slave. Normally, DISABLE ON
SLAVE
is set automatically as required; however, there
are some circumstances under which you may want or need to change
it manually. See Section 16.4.1.12, “Replication of Invoked Features”,
for more information.
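For example, to set this state manually for the event myevent used in the earlier examples (a sketch; whether doing so is appropriate depends on your replication setup), you could issue:
ALTER EVENT myevent DISABLE ON SLAVE;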
ALTER FUNCTION func_name [characteristic ...]

characteristic:
    COMMENT 'string'
  | LANGUAGE SQL
  | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
  | SQL SECURITY { DEFINER | INVOKER }
This statement can be used to change the characteristics of a
stored function. More than one change may be specified in an
ALTER FUNCTION
statement. However,
you cannot change the parameters or body of a stored function
using this statement; to make such changes, you must drop and
re-create the function using DROP
FUNCTION
and CREATE
FUNCTION
.
You must have the ALTER ROUTINE
privilege for the function. (That privilege is granted
automatically to the function creator.) If binary logging is
enabled, the ALTER FUNCTION
statement might also require the
SUPER
privilege, as described in
Section 23.7, “Binary Logging of Stored Programs”.
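For example, the following statement (using a hypothetical function name myfunc) changes the SQL SECURITY characteristic and the comment in a single statement, leaving the function body untouched:
ALTER FUNCTION myfunc SQL SECURITY INVOKER COMMENT 'Runs with invoker privileges';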
ALTER INSTANCE ROTATE INNODB MASTER KEY
ALTER INSTANCE
, introduced in MySQL 5.7.11,
defines actions applicable to a MySQL server instance.
The ALTER INSTANCE ROTATE INNODB MASTER KEY
statement is used to rotate the master encryption key used for
InnoDB
tablespace encryption. A keyring plugin
must be loaded to use this statement. For information about
keyring plugins, see Section 6.5.4, “The MySQL Keyring”. Key rotation
requires the SUPER
privilege.
ALTER INSTANCE ROTATE INNODB MASTER KEY
supports concurrent DML. However, it cannot be run concurrently
with CREATE TABLE
... ENCRYPTION
or
ALTER TABLE ...
ENCRYPTION
operations, and locks are taken to prevent
conflicts that could arise from concurrent execution of these
statements. If one of the conflicting statements is running, it
must complete before another can proceed.
ALTER INSTANCE
actions are written to the
binary log so that they can be executed on replicated servers.
For additional ALTER INSTANCE ROTATE INNODB MASTER
KEY
usage information, see
Section 14.7.10, “InnoDB Tablespace Encryption”. For information
about keyring plugins, see Section 6.5.4, “The MySQL Keyring”.
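For example, once a keyring plugin has been loaded, rotating the master encryption key requires no additional clauses:
ALTER INSTANCE ROTATE INNODB MASTER KEY;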
ALTER LOGFILE GROUP logfile_group
    ADD UNDOFILE 'file_name'
    [INITIAL_SIZE [=] size]
    [WAIT]
    ENGINE [=] engine_name
This statement adds an UNDO
file named
'file_name
' to an existing log file
group logfile_group
. An
ALTER LOGFILE GROUP
statement has
one and only one ADD UNDOFILE
clause. No
DROP UNDOFILE
clause is currently supported.
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and an undo log file with the same name, or an undo log file and a data file with the same name.
The optional INITIAL_SIZE
parameter sets the
UNDO
file's initial size in bytes; if not
specified, the initial size defaults to 134217728 (128 MB). You
may optionally follow size
with a
one-letter abbreviation for an order of magnitude, similar to
those used in my.cnf
. Generally, this is one
of the letters M
(megabytes) or
G
(gigabytes). (Bug #13116514, Bug #16104705,
Bug #62858)
On 32-bit systems, the maximum supported value for
INITIAL_SIZE
is 4294967296 (4 GB). (Bug #29186)
The minimum allowed value for INITIAL_SIZE
is
1048576 (1 MB). (Bug #29574)
WAIT
is parsed but otherwise ignored. This
keyword currently has no effect, and is intended for future
expansion.
The ENGINE
parameter (required) determines the
storage engine which is used by this log file group, with
engine_name
being the name of the
storage engine. Currently, the only accepted values for
engine_name are NDBCLUSTER and NDB. The two values
are equivalent.
Here is an example, which assumes that the log file group
lg_3
has already been created using
CREATE LOGFILE GROUP
(see
Section 13.1.15, “CREATE LOGFILE GROUP Syntax”):
ALTER LOGFILE GROUP lg_3 ADD UNDOFILE 'undo_10.dat' INITIAL_SIZE=32M ENGINE=NDBCLUSTER;
When ALTER LOGFILE GROUP
is used
with ENGINE = NDBCLUSTER
(alternatively,
ENGINE = NDB
), an UNDO
log
file is created on each NDB Cluster data node. You can verify that
the UNDO
files were created and obtain
information about them by querying the
INFORMATION_SCHEMA.FILES
table. For
example:
mysql> SELECT FILE_NAME, LOGFILE_GROUP_NUMBER, EXTRA
    -> FROM INFORMATION_SCHEMA.FILES
    -> WHERE LOGFILE_GROUP_NAME = 'lg_3';
+-------------+----------------------+----------------+
| FILE_NAME   | LOGFILE_GROUP_NUMBER | EXTRA          |
+-------------+----------------------+----------------+
| newdata.dat |                    0 | CLUSTER_NODE=3 |
| newdata.dat |                    0 | CLUSTER_NODE=4 |
| undo_10.dat |                   11 | CLUSTER_NODE=3 |
| undo_10.dat |                   11 | CLUSTER_NODE=4 |
+-------------+----------------------+----------------+
4 rows in set (0.01 sec)
(See Section 24.8, “The INFORMATION_SCHEMA FILES Table”.)
Memory used for UNDO_BUFFER_SIZE
comes from the
global pool whose size is determined by the value of the
SharedGlobalMemory
data
node configuration parameter. This includes any default value
implied for this option by the setting of the
InitialLogFileGroup
data
node configuration parameter.
ALTER LOGFILE GROUP
is useful only
with Disk Data storage for NDB Cluster. For more information, see
Section 21.5.13, “NDB Cluster Disk Data Tables”.
ALTER PROCEDURE proc_name [characteristic ...]

characteristic:
    COMMENT 'string'
  | LANGUAGE SQL
  | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
  | SQL SECURITY { DEFINER | INVOKER }
This statement can be used to change the characteristics of a
stored procedure. More than one change may be specified in an
ALTER PROCEDURE
statement. However,
you cannot change the parameters or body of a stored procedure
using this statement; to make such changes, you must drop and
re-create the procedure using DROP
PROCEDURE
and CREATE
PROCEDURE
.
You must have the ALTER ROUTINE
privilege for the procedure. By default, that privilege is granted
automatically to the procedure creator. This behavior can be
changed by disabling the
automatic_sp_privileges
system
variable. See Section 23.2.2, “Stored Routines and MySQL Privileges”.
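For example, the following statement (using a hypothetical procedure name myproc) changes the data access characteristic and the comment without touching the procedure body:
ALTER PROCEDURE myproc READS SQL DATA COMMENT 'Reporting procedure';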
ALTER SERVER server_name
    OPTIONS (option [, option] ...)

Alters the server information for server_name,
adjusting any of the options permitted in the
CREATE SERVER statement. The
corresponding fields in the mysql.servers
table
are updated accordingly. This statement requires the
SUPER
privilege.
For example, to update the USER
option:
ALTER SERVER s OPTIONS (USER 'sally');
ALTER SERVER
does not cause an automatic
commit.
In MySQL 5.7, ALTER SERVER
is not
written to the binary log, regardless of the logging format that
is in use.
In MySQL 5.7.1, gtid_next
must be
set to AUTOMATIC
before issuing this statement.
This restriction does not apply in MySQL 5.7.2 or later. (Bug
#16062608, Bug #16715809, Bug #69045)
ALTER TABLE tbl_name
    [alter_specification [, alter_specification] ...]
    [partition_options]

alter_specification:
    table_options
  | ADD [COLUMN] col_name column_definition
        [FIRST | AFTER col_name]
  | ADD [COLUMN] (col_name column_definition,...)
  | ADD {INDEX|KEY} [index_name]
        [index_type] (index_col_name,...) [index_option] ...
  | ADD [CONSTRAINT [symbol]] PRIMARY KEY
        [index_type] (index_col_name,...) [index_option] ...
  | ADD [CONSTRAINT [symbol]] UNIQUE [INDEX|KEY] [index_name]
        [index_type] (index_col_name,...) [index_option] ...
  | ADD FULLTEXT [INDEX|KEY] [index_name]
        (index_col_name,...) [index_option] ...
  | ADD SPATIAL [INDEX|KEY] [index_name]
        (index_col_name,...) [index_option] ...
  | ADD [CONSTRAINT [symbol]] FOREIGN KEY [index_name]
        (index_col_name,...) reference_definition
  | ALGORITHM [=] {DEFAULT|INPLACE|COPY}
  | ALTER [COLUMN] col_name {SET DEFAULT literal | DROP DEFAULT}
  | CHANGE [COLUMN] old_col_name new_col_name column_definition
        [FIRST|AFTER col_name]
  | LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}
  | MODIFY [COLUMN] col_name column_definition
        [FIRST | AFTER col_name]
  | DROP [COLUMN] col_name
  | DROP PRIMARY KEY
  | DROP {INDEX|KEY} index_name
  | DROP FOREIGN KEY fk_symbol
  | DISABLE KEYS
  | ENABLE KEYS
  | RENAME [TO|AS] new_tbl_name
  | RENAME {INDEX|KEY} old_index_name TO new_index_name
  | ORDER BY col_name [, col_name] ...
  | CONVERT TO CHARACTER SET charset_name [COLLATE collation_name]
  | [DEFAULT] CHARACTER SET [=] charset_name [COLLATE [=] collation_name]
  | DISCARD TABLESPACE
  | IMPORT TABLESPACE
  | FORCE
  | {WITHOUT|WITH} VALIDATION
  | ADD PARTITION (partition_definition)
  | DROP PARTITION partition_names
  | DISCARD PARTITION {partition_names | ALL} TABLESPACE
  | IMPORT PARTITION {partition_names | ALL} TABLESPACE
  | TRUNCATE PARTITION {partition_names | ALL}
  | COALESCE PARTITION number
  | REORGANIZE PARTITION partition_names INTO (partition_definitions)
  | EXCHANGE PARTITION partition_name WITH TABLE tbl_name [{WITH|WITHOUT} VALIDATION]
  | ANALYZE PARTITION {partition_names | ALL}
  | CHECK PARTITION {partition_names | ALL}
  | OPTIMIZE PARTITION {partition_names | ALL}
  | REBUILD PARTITION {partition_names | ALL}
  | REPAIR PARTITION {partition_names | ALL}
  | REMOVE PARTITIONING
  | UPGRADE PARTITIONING

index_col_name:
    col_name [(length)] [ASC | DESC]

index_type:
    USING {BTREE | HASH}

index_option:
    KEY_BLOCK_SIZE [=] value
  | index_type
  | WITH PARSER parser_name
  | COMMENT 'string'

table_options:
    table_option [[,] table_option] ...

table_option:
    ENGINE [=] engine_name
  | AUTO_INCREMENT [=] value
  | AVG_ROW_LENGTH [=] value
  | [DEFAULT] CHARACTER SET [=] charset_name
  | CHECKSUM [=] {0 | 1}
  | [DEFAULT] COLLATE [=] collation_name
  | COMMENT [=] 'string'
  | COMPRESSION [=] {'ZLIB'|'LZ4'|'NONE'}
  | CONNECTION [=] 'connect_string'
  | DATA DIRECTORY [=] 'absolute path to directory'
  | DELAY_KEY_WRITE [=] {0 | 1}
  | ENCRYPTION [=] {'Y' | 'N'}
  | INDEX DIRECTORY [=] 'absolute path to directory'
  | INSERT_METHOD [=] { NO | FIRST | LAST }
  | KEY_BLOCK_SIZE [=] value
  | MAX_ROWS [=] value
  | MIN_ROWS [=] value
  | PACK_KEYS [=] {0 | 1 | DEFAULT}
  | PASSWORD [=] 'string'
  | ROW_FORMAT [=] {DEFAULT|DYNAMIC|FIXED|COMPRESSED|REDUNDANT|COMPACT}
  | STATS_AUTO_RECALC [=] {DEFAULT|0|1}
  | STATS_PERSISTENT [=] {DEFAULT|0|1}
  | STATS_SAMPLE_PAGES [=] value
  | TABLESPACE tablespace_name [STORAGE {DISK|MEMORY|DEFAULT}]
  | UNION [=] (tbl_name[,tbl_name]...)

partition_options:
    (see CREATE TABLE options)
ALTER TABLE
changes the structure
of a table. For example, you can add or delete columns, create or
destroy indexes, change the type of existing columns, or rename
columns or the table itself. You can also change characteristics
such as the storage engine used for the table or the table
comment.
To use ALTER TABLE
, you need
ALTER
,
CREATE
, and
INSERT
privileges for the
table. Renaming a table requires
ALTER
and
DROP
on the old table,
ALTER
,
CREATE
, and
INSERT
on the new table.
Following the table name, specify the alterations to be made.
If none are given, ALTER TABLE
does nothing.
The syntax for many of the permissible alterations is similar
to clauses of the CREATE TABLE
statement. column_definition
clauses use the same syntax for ADD
and
CHANGE
as for CREATE
TABLE
. See Section 13.1.18, “CREATE TABLE Syntax”, for more
information.
The word COLUMN
is optional and can be
omitted.
You can issue multiple ADD
,
ALTER
, DROP
, and
CHANGE
clauses in a single
ALTER TABLE
statement,
separated by commas. This is a MySQL extension to standard
SQL, which permits only one of each clause per
ALTER TABLE
statement. For
example, to drop multiple columns in a single statement, do
this:
ALTER TABLE t2 DROP COLUMN c, DROP COLUMN d;
Some operations may result in warnings if attempted on a table
for which the storage engine does not support the operation.
These warnings can be displayed with SHOW
WARNINGS
. See Section 13.7.5.40, “SHOW WARNINGS Syntax”. For
information on troubleshooting ALTER
TABLE
, see Section B.5.6.1, “Problems with ALTER TABLE”.
For usage examples, see Section 13.1.8.4, “ALTER TABLE Examples”.
For information about generated columns, see Section 13.1.8.3, “ALTER TABLE and Generated Columns”.
With the mysql_info()
C API
function, you can find out how many rows were copied by
ALTER TABLE
. See
Section 27.8.7.36, “mysql_info()”.
There are several additional aspects to the ALTER
TABLE
statement, described under the following topics in
this section:
table_options
signifies table options
of the kind that can be used in the CREATE
TABLE
statement, such as ENGINE
,
AUTO_INCREMENT
,
AVG_ROW_LENGTH
, MAX_ROWS
,
ROW_FORMAT
, or TABLESPACE
.
For descriptions of all table options, see
Section 13.1.18, “CREATE TABLE Syntax”. However,
ALTER TABLE
ignores DATA
DIRECTORY
and INDEX DIRECTORY
when
given as table options. ALTER TABLE
permits them only as partitioning options, and, as of MySQL
5.7.17, requires that you have the FILE
privilege.
Use of table options with ALTER
TABLE
provides a convenient way of altering single table
characteristics. For example:
If t1
is currently not an
InnoDB
table, this statement changes its
storage engine to InnoDB
:
ALTER TABLE t1 ENGINE = InnoDB;
See Section 14.8.1.4, “Converting Tables from MyISAM to InnoDB” for
considerations when switching tables to the
InnoDB
storage engine.
When you specify an ENGINE
clause,
ALTER TABLE
rebuilds the
table. This is true even if the table already has the
specified storage engine.
Running ALTER TABLE tbl_name ENGINE=INNODB on an existing
InnoDB table performs a “null” ALTER TABLE operation, which
can be used to defragment an InnoDB table, as described in
Section 14.12.4, “Defragmenting a Table”. Running
ALTER TABLE tbl_name FORCE on an InnoDB table performs the
same function.
ALTER TABLE tbl_name ENGINE=INNODB and
ALTER TABLE tbl_name FORCE use online DDL. For more
information, see Section 14.13.1, “Online DDL Overview”.
The outcome of attempting to change the storage engine of
a table is affected by whether the desired storage engine
is available and the setting of the
NO_ENGINE_SUBSTITUTION
SQL mode, as described in Section 5.1.8, “Server SQL Modes”.
To prevent inadvertent loss of data,
ALTER TABLE
cannot be used
to change the storage engine of a table to
MERGE
or BLACKHOLE
.
To change the InnoDB
table to use
compressed row-storage format:
ALTER TABLE t1 ROW_FORMAT = COMPRESSED;
If the InnoDB
tablespace encryption feature
is enabled (see
Section 14.7.10, “InnoDB Tablespace Encryption”), encryption
for t1
can be enabled or disabled like
this:
ALTER TABLE t1 ENCRYPTION='Y'; ALTER TABLE t1 ENCRYPTION='N';
To reset the current auto-increment value:
ALTER TABLE t1 AUTO_INCREMENT = 13;
You cannot reset the counter to a value less than or equal to
the value that is currently in use. For both
InnoDB
and MyISAM
, if
the value is less than or equal to the maximum value currently
in the AUTO_INCREMENT
column, the value is
reset to the current maximum AUTO_INCREMENT
column value plus one.
To change the default table character set:
ALTER TABLE t1 CHARACTER SET = utf8;
To add (or change) a table comment:
ALTER TABLE t1 COMMENT = 'New table comment';
You can use ALTER TABLE
with the
TABLESPACE
option to move non-partitioned
InnoDB
tables between existing
general
tablespaces,
file-per-table
tablespaces, and the
system
tablespace. See
Moving Non-Partitioned Tables Between Tablespaces Using ALTER TABLE.
For partitioned tables, ALTER TABLE tbl_name TABLESPACE [=]
tablespace_name only modifies the default tablespace. It does
not move partitions from one tablespace to another. To move
table partitions, you must move each partition using
ALTER TABLE tbl_name REORGANIZE PARTITION. See
Moving Table Partitions Between Tablespaces Using ALTER TABLE.
ALTER TABLE ... TABLESPACE
operations
always cause a full table rebuild, even if the
TABLESPACE
attribute has not changed
from its previous value.
ALTER TABLE ... TABLESPACE
syntax does
not support moving a table from a temporary tablespace to
a persistent tablespace.
The DATA DIRECTORY
clause, which is
supported with
CREATE TABLE
... TABLESPACE
, is not supported with
ALTER TABLE ... TABLESPACE
, and is
ignored if specified.
For more information about the capabilities and
limitations of the TABLESPACE
option,
see CREATE TABLE
.
MySQL NDB Cluster 7.5.2 and later supports setting
NDB_TABLE
options for controlling a
table's partition balance (fragment count type),
read-from-any-replica capability, full replication, or any
combination of these, as part of the table comment for an
ALTER TABLE
statement in the same manner as
for CREATE TABLE
, as shown in
this example:
ALTER TABLE t1 COMMENT = "NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RA_BY_NODE";
Bear in mind that ALTER TABLE ... COMMENT
...
discards any existing comment for the table. See
Setting NDB_TABLE options, for
additional information and examples.
To verify that the table options were changed as intended, use
SHOW CREATE TABLE
, or query
INFORMATION_SCHEMA.TABLES
.
ALTER TABLE
operations that are
not performed in place make a
temporary copy of the original table. MySQL waits for other
operations that are modifying the table, then proceeds. It
incorporates the alteration into the copy, deletes the original
table, and renames the new one. While ALTER
TABLE
is executing, the original table is readable by
other sessions (with the exception noted shortly). Updates and
writes to the table that begin after the
ALTER TABLE
operation begins are
stalled until the new table is ready, then are automatically
redirected to the new table without any failed updates. The
temporary copy of the original table is created in the database
directory of the new table. This can differ from the database
directory of the original table for ALTER
TABLE
operations that rename the table to a different
database.
The exception referred to earlier is that
ALTER TABLE
blocks reads (not just
writes) at the point where it is ready to install a new version of
the table .frm
file, discard the old file,
and clear outdated table structures from the table and table
definition caches. At this point, it must acquire an exclusive
lock. To do so, it waits for current readers to finish, and blocks
new reads (and writes).
For MyISAM
tables, you can speed up index
re-creation (the slowest part of the alteration process) by
setting the
myisam_sort_buffer_size
system
variable to a high value.
For InnoDB
tables, a table-copying
ALTER TABLE
operation on a table that
resides in a shared tablespace such as a
general tablespace
or the system
tablespace can increase the amount of space used by the
tablespace. Such operations require as much additional space as
the data in the table plus indexes. For a table that resides in a
shared tablespace, the additional space used during a
table-copying ALTER TABLE
operation
is not released back to the operating system as it is for a table
that resides in a
file-per-table
tablespace.
ALTER TABLE
operations that are
performed in place do not require creating a temporary copy of the
original table. These operations include:
ALTER TABLE
operations on
InnoDB
tables that are supported by the
InnoDB
online DDL feature. For
an overview of supported operations, see
Section 14.13.1, “Online DDL Overview”. For
information about performance and concurrency of online DDL
operations, see
Section 14.13.2, “Online DDL Performance, Concurrency, and Space Requirements”.
ALTER TABLE tbl_name RENAME TO new_tbl_name.
When run without other options, MySQL renames files that
correspond to the table tbl_name without making a copy.
(You can also use the RENAME TABLE statement to
rename tables. See Section 13.1.33, “RENAME TABLE Syntax”.) Privileges
granted specifically for the renamed table are not migrated to
the new name. They must be changed manually.
Alterations that modify only table metadata and not table data
are immediate because the server only needs to alter the table
.frm
file, not touch table contents. The
following changes are made in this way:
Renaming a column.
Changing the default value of a column (except for
NDB
tables).
Changing the definition of an
ENUM
or
SET
column by adding new
enumeration or set members to the end
of the list of valid member values, as long as the storage
size of the data type does not change. For example, adding
a member to a SET
column
that has 8 members changes the required storage per value
from 1 byte to 2 bytes; this will require a table copy.
Adding members in the middle of the list causes
renumbering of existing members, which requires a table
copy.
Renaming an index.
Adding or dropping an index, for
InnoDB
and
NDB
. See
Section 14.13.1, “Online DDL Overview”.
For NDB
tables, operations that
add and drop indexes on variable-width columns occur online,
without table copying and without blocking concurrent DML
actions for most of their duration. See
Section 13.1.8.2, “ALTER TABLE Online Operations in NDB Cluster”.
Specifying ALGORITHM=INPLACE
makes the
operation use the in-place technique for clauses and storage
engines that support it, and fail with an error otherwise, thus
avoiding a lengthy table copy if you try altering a table that
uses a different storage engine than you expect.
You can force an ALTER TABLE
operation that
would otherwise not use the table copy method by setting the
old_alter_table
system variable
to ON
, or specifying
ALGORITHM=COPY
as one of the
alter_specification
clauses. If there
is a conflict between the old_alter_table
setting and an ALGORITHM
clause with a value
other than DEFAULT
, the
ALGORITHM
clause takes precedence.
Specifying ALGORITHM=DEFAULT is the same as
specifying no ALGORITHM clause at all, in which
case ALGORITHM=INPLACE is used if supported by
the storage engine. Otherwise, ALGORITHM=COPY
is used.
An ALTER TABLE
operation run with
the ALGORITHM=COPY
clause prevents concurrent
DML operations. Concurrent queries are still allowed. That is, a
table-copying operation always includes at least the concurrency
restrictions of LOCK=SHARED
(allow queries but
not DML). You can further restrict concurrency for such operations
by specifying LOCK=EXCLUSIVE
, which prevents
DML and queries.
As of MySQL 5.7.4, ALTER TABLE
upgrades MySQL 5.5 temporal columns to 5.6 format for ADD
COLUMN
, CHANGE COLUMN
,
MODIFY COLUMN
, ADD INDEX
,
and FORCE
operations. This conversion cannot be
done using the INPLACE
algorithm because the
table must be rebuilt, so specifying
ALGORITHM=INPLACE
in these cases results in an
error. Specify ALGORITHM=COPY
if necessary.
If an ALTER TABLE
operation on a multicolumn
index used to partition a table by KEY
changes
the order of the columns, it can only be performed using
ALGORITHM=COPY
.
The WITHOUT VALIDATION
and WITH
VALIDATION
clauses affect whether
ALTER TABLE
performs an in-place
operation for
virtual generated
column modifications. See
Section 13.1.8.3, “ALTER TABLE and Generated Columns”.
NDB Cluster formerly supported online ALTER
TABLE
operations using the ONLINE
and
OFFLINE
keywords. These keywords are no longer
supported; their use causes a syntax error. MySQL NDB Cluster 7.5
(and later) supports online operations using the same
ALGORITHM=INPLACE
syntax used with the standard
MySQL Server. See Section 13.1.8.2, “ALTER TABLE Online Operations in NDB Cluster”,
for more information.
ALTER TABLE
with DISCARD ... PARTITION
... TABLESPACE
or IMPORT ... PARTITION ...
TABLESPACE
does not create any temporary tables or
temporary partition files.
ALTER TABLE
with ADD
PARTITION
, DROP PARTITION
,
COALESCE PARTITION
, REBUILD
PARTITION
, or REORGANIZE PARTITION
does not create temporary tables (except when used with
NDB
tables); however, these
operations can and do create temporary partition files.
ADD
or DROP
operations for
RANGE
or LIST
partitions are
immediate operations or nearly so. ADD
or
COALESCE
operations for HASH
or KEY
partitions copy data between all
partitions, unless LINEAR HASH
or
LINEAR KEY
was used; this is effectively the
same as creating a new table, although the ADD
or COALESCE
operation is performed partition by
partition. REORGANIZE
operations copy only
changed partitions and do not touch unchanged ones.
You can control the level of concurrent reading and writing of the
table while it is being altered, using the LOCK
clause. Specifying a non-default value for this clause lets you
require a certain amount of concurrent access or exclusivity
during the alter operation, and halts the operation if the
requested degree of locking is not available. The parameters for
the LOCK
clause are:
LOCK = DEFAULT
Maximum level of concurrency for the given
ALGORITHM
clause (if any) and
ALTER TABLE
operation: Permit concurrent
reads and writes if supported. If not, permit concurrent reads
if supported. If not, enforce exclusive access.
LOCK = NONE
If supported, permit concurrent reads and writes. Otherwise, return an error message.
LOCK = SHARED
If supported, permit concurrent reads but block writes. Note
that writes will be blocked even if concurrent writes are
supported by the storage engine for the given
ALGORITHM
clause (if any) and
ALTER TABLE
operation. If concurrent reads
are not supported, return an error message.
LOCK = EXCLUSIVE
Enforce exclusive access. This will be done even if concurrent
reads/writes are supported by the storage engine for the given
ALGORITHM
clause (if any) and
ALTER TABLE
operation.
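For example, the following statement (a sketch using a hypothetical index name and column) requests an in-place index addition that permits concurrent reads and writes, and fails with an error if that degree of concurrency is not available:
ALTER TABLE t1 ADD INDEX idx_b (b), ALGORITHM=INPLACE, LOCK=NONE;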
You can rename a column using a CHANGE old_col_name
new_col_name column_definition clause. To do so, specify the
old and new column names and the definition that the column
currently has. For example, to rename an INTEGER column from
a to b, you can do this:
ALTER TABLE t1 CHANGE a b INTEGER;
To change a column's type but not the name,
CHANGE
syntax still requires an old and new
column name, even if they are the same. For example:
ALTER TABLE t1 CHANGE b b BIGINT NOT NULL;
You can also use MODIFY
to change a
column's type without renaming it:
ALTER TABLE t1 MODIFY b BIGINT NOT NULL;
MODIFY
is an extension to
ALTER TABLE
for Oracle
compatibility.
When you use CHANGE
or
MODIFY
,
column_definition
must include the
data type and all attributes that should apply to the new
column, other than index attributes such as PRIMARY
KEY
or UNIQUE
. Attributes present
in the original definition but not specified for the new
definition are not carried forward. Suppose that a column
col1
is defined as INT UNSIGNED
DEFAULT 1 COMMENT 'my column'
and you modify the
column as follows:
ALTER TABLE t1 MODIFY col1 BIGINT;
The resulting column will be defined as
BIGINT
, but will not include the attributes
UNSIGNED DEFAULT 1 COMMENT 'my column'
. To
retain them, the statement should be:
ALTER TABLE t1 MODIFY col1 BIGINT UNSIGNED DEFAULT 1 COMMENT 'my column';
When you change a data type using CHANGE
or
MODIFY
, MySQL tries to convert existing
column values to the new type as well as possible.
This conversion may result in alteration of data. For
example, if you shorten a string column, values may be
truncated. To prevent the operation from succeeding if
conversions to the new data type would result in loss of
data, enable strict SQL mode before using
ALTER TABLE
(see
Section 5.1.8, “Server SQL Modes”).
To add a column at a specific position within a table row, use
FIRST or AFTER col_name. The default is
to add the column last. You can also use
FIRST and AFTER in
CHANGE or MODIFY
operations to reorder columns within a table.
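For example, assuming a table t1 that already has columns a and b, the following statements add a new column after a and then move it to the first position:
ALTER TABLE t1 ADD COLUMN c INT AFTER a;
ALTER TABLE t1 MODIFY COLUMN c INT FIRST;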
If you use CHANGE
or
MODIFY
to shorten a column for which an
index exists on the column, and the resulting column length is
less than the index length, MySQL shortens the index
automatically.
CHANGE col_name is a MySQL extension to standard SQL.
ALTER ... SET DEFAULT
or ALTER ...
DROP DEFAULT
specify a new default value for a
column or remove the old default value, respectively. If the
old default is removed and the column can be
NULL
, the new default is
NULL
. If the column cannot be
NULL
, MySQL assigns a default value as
described in Section 11.7, “Data Type Default Values”.
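For example, assuming a column c exists in table t1, these statements set and then remove its default value:
ALTER TABLE t1 ALTER COLUMN c SET DEFAULT 100;
ALTER TABLE t1 ALTER COLUMN c DROP DEFAULT;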
DROP PRIMARY KEY
drops the
primary key. If there
is no primary key, an error occurs. For information about the
performance characteristics of primary keys, especially for
InnoDB
tables, see
Section 8.3.2, “Using Primary Keys”.
If you add a UNIQUE INDEX
or
PRIMARY KEY
to a table, MySQL stores it
before any nonunique index to permit detection of duplicate
keys as early as possible.
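For example, assuming hypothetical columns id and email that contain no duplicate (or, for the primary key, NULL) values:
ALTER TABLE t1 ADD PRIMARY KEY (id);
ALTER TABLE t1 ADD UNIQUE INDEX (email);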
DROP INDEX removes an index.
This is a MySQL extension to standard SQL. See
Section 13.1.25, “DROP INDEX Syntax”. If you are unsure of the index
name, use SHOW INDEX FROM tbl_name.
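For example, assuming an index named idx_email exists on t1:
ALTER TABLE t1 DROP INDEX idx_email;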
Some storage engines permit you to specify an index type when
creating an index. The syntax for the
index_type specifier is
USING type_name.
For details about USING, see
Section 13.1.14, “CREATE INDEX Syntax”. The preferred position is
after the column list. Support for use of the option before
the column list will be removed in a future MySQL release.
index_option
values specify
additional options for an index. For details about permissible
index_option
values, see
Section 13.1.14, “CREATE INDEX Syntax”.
RENAME INDEX old_index_name TO new_index_name renames an
index. This is a MySQL extension to standard SQL. The content
of the table remains unchanged.
old_index_name must be the name of
an existing index in the table that is not dropped by the same
ALTER TABLE statement.
new_index_name is the new index name, which
cannot duplicate the name of an index in the resulting table
after changes have been applied. Neither index name can be
PRIMARY.
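For example, assuming t1 has an index named idx_old:
ALTER TABLE t1 RENAME INDEX idx_old TO idx_new;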
If you use ALTER TABLE
on a
MyISAM
table, all nonunique indexes are
created in a separate batch (as for
REPAIR TABLE
). This should make
ALTER TABLE
much faster when
you have many indexes.
For MyISAM
tables, key updating can be
controlled explicitly. Use ALTER TABLE ... DISABLE
KEYS
to tell MySQL to stop updating nonunique
indexes. Then use ALTER TABLE ... ENABLE
KEYS
to re-create missing indexes.
MyISAM
does this with a special algorithm
that is much faster than inserting keys one by one, so
disabling keys before performing bulk insert operations should
give a considerable speedup. Using ALTER TABLE ...
DISABLE KEYS
requires the
INDEX
privilege in addition to
the privileges mentioned earlier.
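A typical bulk-load sequence for a MyISAM table is sketched here (the table name and the statements performing the inserts are placeholders):
ALTER TABLE t1 DISABLE KEYS;
-- perform the bulk inserts here
ALTER TABLE t1 ENABLE KEYS;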
While the nonunique indexes are disabled, they are ignored for
statements such as SELECT
and
EXPLAIN
that otherwise would
use them.
After an ALTER TABLE
statement,
it may be necessary to run ANALYZE
TABLE
to update index cardinality information. See
Section 13.7.5.22, “SHOW INDEX Syntax”.
The FOREIGN KEY and
REFERENCES clauses are supported by the
InnoDB and NDB storage
engines, which implement ADD [CONSTRAINT [symbol]] FOREIGN KEY
[index_name] (...) REFERENCES ... (...).
See Section 1.8.3.2, “FOREIGN KEY Constraints”;
for information specific to InnoDB, see
Section 14.8.1.6, “InnoDB and FOREIGN KEY Constraints”.
For other storage engines, the clauses are parsed but ignored.
The CHECK
clause is parsed but ignored by
all storage engines. See Section 13.1.18, “CREATE TABLE Syntax”. The
reason for accepting but ignoring syntax clauses is for
compatibility, to make it easier to port code from other SQL
servers, and to run applications that create tables with
references. See Section 1.8.2, “MySQL Differences from Standard SQL”.
For ALTER TABLE
, unlike
CREATE TABLE
, ADD
FOREIGN KEY
ignores
index_name
if given and uses an
automatically generated foreign key name. As a workaround,
include the CONSTRAINT
clause to specify
the foreign key name:
ADD CONSTRAINT name
FOREIGN KEY (....) ...
The inline REFERENCES
specifications
where the references are defined as part of the column
specification are silently ignored. MySQL only accepts
REFERENCES
clauses defined as part of a
separate FOREIGN KEY
specification.
Partitioned InnoDB
tables do not support
foreign keys. This restriction does not apply to
NDB
tables, including those explicitly
partitioned by [LINEAR] KEY
. See
Section 22.6.2, “Partitioning Limitations Relating to Storage Engines”,
for more information.
MySQL Server and NDB Cluster both support the use of
ALTER TABLE
to drop foreign
keys:
ALTER TABLEtbl_name
DROP FOREIGN KEYfk_symbol
;
Adding and dropping a foreign key in the same
ALTER TABLE
statement is
supported for
ALTER TABLE ...
ALGORITHM=INPLACE
but is unsupported for
ALTER TABLE ...
ALGORITHM=COPY
.
The server prohibits changes to foreign key columns that have
the potential to cause loss of referential integrity. It also
prohibits changes to the data type of such columns that may be
unsafe. For example, changing
VARCHAR(20)
to
VARCHAR(30)
is permitted, but
changing it to VARCHAR(1024)
is
not because that alters the number of length bytes required to
store individual values. A workaround is to use
ALTER TABLE ...
DROP FOREIGN KEY
before changing the column
definition and
ALTER TABLE ...
ADD FOREIGN KEY
afterward.
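A sketch of that workaround, using hypothetical parent and child tables and a hypothetical constraint name fk_parent_name:
ALTER TABLE child DROP FOREIGN KEY fk_parent_name;
ALTER TABLE parent MODIFY name VARCHAR(1024);
ALTER TABLE child MODIFY parent_name VARCHAR(1024);
ALTER TABLE child ADD CONSTRAINT fk_parent_name FOREIGN KEY (parent_name) REFERENCES parent (name);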
ALTER TABLE tbl_name RENAME new_tbl_name
changes internally generated foreign key constraint names and
user-defined foreign key constraint names that contain the
string “tbl_name_ibfk_” to
reflect the new table name. InnoDB
interprets foreign key constraint names that contain the
string “tbl_name_ibfk_” as
internally generated names.
If a table contains only one column, the column cannot be
dropped. If what you intend is to remove the table, use
DROP TABLE
instead.
If columns are dropped from a table, the columns are also removed from any index of which they are a part. If all columns that make up an index are dropped, the index is dropped as well.
DROP col_name is a MySQL extension to standard SQL.
To change the table default character set and all character
columns (CHAR
,
VARCHAR
,
TEXT
) to a new character set, use a
statement like this:
ALTER TABLE tbl_name CONVERT TO CHARACTER SET charset_name;
The statement also changes the collation of all character columns.
If you specify no COLLATE
clause to indicate
which collation to use, the statement uses default collation for
the character set. If this collation is inappropriate for the
intended table use (for example, if it would change from a
case-sensitive collation to a case-insensitive collation), specify
a collation explicitly.
For a column that has a data type of
VARCHAR
or one of the
TEXT
types, CONVERT TO
CHARACTER SET
will change the data type as necessary to
ensure that the new column is long enough to store as many
characters as the original column. For example, a
TEXT
column has two length bytes,
which store the byte-length of values in the column, up to a
maximum of 65,535. For a latin1
TEXT
column, each character
requires a single byte, so the column can store up to 65,535
characters. If the column is converted to utf8
,
each character might require up to three bytes, for a maximum
possible length of 3 × 65,535 = 196,605 bytes. That length
will not fit in a TEXT
column's
length bytes, so MySQL will convert the data type to
MEDIUMTEXT
, which is the smallest
string type for which the length bytes can record a value of
196,605. Similarly, a VARCHAR
column might be converted to
MEDIUMTEXT
.
To avoid data type changes of the type just described, do not use
CONVERT TO CHARACTER SET
. Instead, use
MODIFY
to change individual columns. For
example:
ALTER TABLE t MODIFY latin1_text_col TEXT CHARACTER SET utf8;
ALTER TABLE t MODIFY latin1_varchar_col VARCHAR(M) CHARACTER SET utf8;
If you specify CONVERT TO CHARACTER SET binary
,
the CHAR
,
VARCHAR
, and
TEXT
columns are converted to their
corresponding binary string types
(BINARY
,
VARBINARY
,
BLOB
). This means that the columns
no longer will have a character set and a subsequent
CONVERT TO
operation will not apply to them.
If charset_name
is
DEFAULT
, the database character set is used.
The CONVERT TO
operation converts column
values between the character sets. This is
not what you want if you have a column in
one character set (like latin1
) but the
stored values actually use some other, incompatible character
set (like utf8
). In this case, you have to do
the following for each such column:
ALTER TABLE t1 CHANGE c1 c1 BLOB; ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8;
The reason this works is that there is no conversion when you
convert to or from BLOB
columns.
To change only the default character set for a table, use this statement:
ALTER TABLE tbl_name DEFAULT CHARACTER SET charset_name;
The word DEFAULT
is optional. The default
character set is the character set that is used if you do not
specify the character set for columns that you add to a table
later (for example, with ALTER TABLE ... ADD
column
).
When foreign_key_checks
is
enabled, which is the default setting, character set conversion is
not permitted on tables that include a character string column
used in a foreign key constraint. The workaround is to disable
foreign_key_checks
before
performing the character set conversion. You must perform the
conversion on both tables involved in the foreign key constraint
before re-enabling
foreign_key_checks
. If you
re-enable foreign_key_checks
after converting only one of the tables, an ON DELETE
CASCADE
or ON UPDATE CASCADE
operation could corrupt data in the referencing table due to
implicit conversion that occurs during these operations (Bug
#45290, Bug #74816).
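A sketch of that sequence, assuming two hypothetical tables linked by a character string column used in a foreign key constraint:
SET foreign_key_checks = 0;
ALTER TABLE parent CONVERT TO CHARACTER SET utf8;
ALTER TABLE child CONVERT TO CHARACTER SET utf8;
SET foreign_key_checks = 1;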
An InnoDB
table created in its own
file-per-table
tablespace can be discarded and imported using the
DISCARD TABLESPACE
and IMPORT
TABLESPACE
options. These options can be used to import
a file-per-table tablespace from a backup or to copy a
file-per-table tablespace from one database server to another. See
Section 14.7.6, “Copying File-Per-Table Tablespaces to Another Instance”.
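The statements themselves take no other clauses; the surrounding backup and file copy steps are described in the section just cited:
ALTER TABLE t1 DISCARD TABLESPACE;
ALTER TABLE t1 IMPORT TABLESPACE;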
ORDER BY
enables you to create the new table
with the rows in a specific order. This option is useful primarily
when you know that you query the rows in a certain order most of
the time. By using this option after major changes to the table,
you might be able to get higher performance. In some cases, it
might make sorting easier for MySQL if the table is in order by
the column that you want to order it by later.
The table does not remain in the specified order after inserts and deletes.
ORDER BY
syntax permits one or more column
names to be specified for sorting, each of which optionally can be
followed by ASC
or DESC
to
indicate ascending or descending sort order, respectively. The
default is ascending order. Only column names are permitted as
sort criteria; arbitrary expressions are not permitted. This
clause should be given last after any other clauses.
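For example, assuming hypothetical columns named name and id:
ALTER TABLE t1 ORDER BY name, id;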
ORDER BY
does not make sense for
InnoDB
tables because InnoDB
always orders table rows according to the
clustered index.
When used on a partitioned table, ALTER TABLE ... ORDER
BY
orders rows within each partition only.
partition_options
signifies options
that can be used with partitioned tables for repartitioning, for
adding, dropping, discarding, importing, merging, and splitting
partitions, and for performing partitioning maintenance.
It is possible for an ALTER TABLE
statement to contain a PARTITION BY
or
REMOVE PARTITIONING
clause in an addition to
other alter specifications, but the PARTITION
BY
or REMOVE PARTITIONING
clause must
be specified last after any other specifications. The ADD
PARTITION
, DROP PARTITION
,
DISCARD PARTITION
, IMPORT
PARTITION
, COALESCE PARTITION
,
REORGANIZE PARTITION
, EXCHANGE
PARTITION
, ANALYZE PARTITION
,
CHECK PARTITION
, and REPAIR
PARTITION
options cannot be combined with other alter
specifications in a single ALTER TABLE
, since
the options just listed act on individual partitions.
For more information about partition options, see
Section 13.1.18, “CREATE TABLE Syntax”, and
Section 13.1.8.1, “ALTER TABLE Partition Operations”. For
information about and examples of ALTER TABLE ...
EXCHANGE PARTITION
statements, see
Section 22.3.3, “Exchanging Partitions and Subpartitions with Tables”.
Prior to MySQL 5.7.6, partitioned InnoDB
tables
used the generic ha_partition
partitioning
handler employed by MyISAM
and other storage
engines not supplying their own partitioning handlers; in MySQL
5.7.6 and later, such tables are created using the
InnoDB
storage engine's own (or
“native”) partitioning handler. Beginning with MySQL
5.7.9, you can upgrade an InnoDB
table that was
created in MySQL 5.7.6 or earlier (that is, created using
ha_partition
) to the InnoDB
native partition handler using ALTER TABLE ... UPGRADE
PARTITIONING
. (Bug #76734, Bug #20727344) This version
of ALTER TABLE
does not accept any other
options and can be used only on a single table at a time. You can
also use mysql_upgrade in MySQL 5.7.9 or later
to upgrade older partitioned InnoDB tables to
the native partitioning handler.
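For example, the following statement (using a hypothetical table name) upgrades one such table to the native partitioning handler:
ALTER TABLE old_partitioned_table UPGRADE PARTITIONING;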
Partitioning-related clauses for ALTER
TABLE
can be used with partitioned tables for
repartitioning, for adding, dropping, discarding, importing,
merging, and splitting partitions, and for performing
partitioning maintenance.
Simply using a partition_options
clause with ALTER TABLE
on a
partitioned table repartitions the table according to the
partitioning scheme defined by the
partition_options
. This clause
always begins with PARTITION BY
, and
follows the same syntax and other rules as apply to the
partition_options
clause for
CREATE TABLE
(see
Section 13.1.18, “CREATE TABLE Syntax”, for more detailed
information), and can also be used to partition an existing
table that is not already partitioned. For example, consider
a (nonpartitioned) table defined as shown here:
CREATE TABLE t1 ( id INT, year_col INT );
This table can be partitioned by HASH
,
using the id
column as the partitioning
key, into 8 partitions by means of this statement:
ALTER TABLE t1 PARTITION BY HASH(id) PARTITIONS 8;
MySQL 5.7.1 and later supports an
ALGORITHM
option with
[SUB]PARTITION BY [LINEAR] KEY
.
ALGORITHM=1
causes the server to use the
same key-hashing functions as MySQL 5.1 when computing the
placement of rows in partitions;
ALGORITHM=2
means that the server employs
the key-hashing functions implemented and used by default
for new KEY
partitioned tables in MySQL
5.5 and later. (Partitioned tables created with the
key-hashing functions employed in MySQL 5.5 and later cannot
be used by a MySQL 5.1 server.) Not specifying the option
has the same effect as using ALGORITHM=2
.
This option is intended for use chiefly when upgrading or
downgrading [LINEAR] KEY
partitioned
tables between MySQL 5.1 and later MySQL versions, or for
creating tables partitioned by KEY
or
LINEAR KEY
on a MySQL 5.5 or later server
which can be used on a MySQL 5.1 server.
To upgrade a KEY
partitioned table that
was created in MySQL 5.1, first execute
SHOW CREATE TABLE
and note
the exact columns and number of partitions shown. Now
execute an ALTER TABLE
statement using
exactly the same column list and number of partitions as in
the CREATE TABLE
statement, while adding
ALGORITHM=2
immediately following the
PARTITION BY
keywords. (You should also
include the LINEAR
keyword if it was used
for the original table definition.) An example from a
session in the mysql client is shown
here:
mysql> SHOW CREATE TABLE p\G
*************************** 1. row ***************************
       Table: p
Create Table: CREATE TABLE `p` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `cd` datetime NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY LINEAR KEY (id)
PARTITIONS 32 */
1 row in set (0.00 sec)

mysql> ALTER TABLE p PARTITION BY LINEAR KEY ALGORITHM=2 (id) PARTITIONS 32;
Query OK, 0 rows affected (5.34 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW CREATE TABLE p\G
*************************** 1. row ***************************
       Table: p
Create Table: CREATE TABLE `p` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `cd` datetime NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY LINEAR KEY (id)
PARTITIONS 32 */
1 row in set (0.00 sec)
Downgrading a table created using the default key-hashing
used in MySQL 5.5 and later to enable its use by a MySQL 5.1
server is similar, except in this case you should use
ALGORITHM=1
to force the table's
partitions to be rebuilt using the MySQL 5.1 key-hashing
functions. It is recommended that you not do this except
when necessary for compatibility with a MySQL 5.1 server, as
the improved KEY
hashing functions used
by default in MySQL 5.5 and later provide fixes for a number
of issues found in the older implementation.
A table upgraded by means of ALTER TABLE ...
PARTITION BY ALGORITHM=2 [LINEAR] KEY ...
can no
longer be used by a MySQL 5.1 server. (Such a table would
need to be downgraded with ALTER TABLE ...
PARTITION BY ALGORITHM=1 [LINEAR] KEY ...
before
it could be used again by a MySQL 5.1 server.)
The table that results from using an ALTER TABLE
... PARTITION BY
statement must follow the same
rules as one created using CREATE TABLE ...
PARTITION BY
. This includes the rules governing
the relationship between any unique keys (including any
primary key) that the table might have, and the column or
columns used in the partitioning expression, as discussed in
Section 22.6.1, “Partitioning Keys, Primary Keys, and Unique Keys”.
The CREATE TABLE ... PARTITION BY
rules
for specifying the number of partitions also apply to
ALTER TABLE ... PARTITION BY
.
The partition_definition
clause
for ALTER TABLE ADD PARTITION
supports
the same options as the clause of the same name for the
CREATE TABLE
statement. (See
Section 13.1.18, “CREATE TABLE Syntax”, for the syntax and
description.) Suppose that you have the partitioned table
created as shown here:
CREATE TABLE t1 ( id INT, year_col INT ) PARTITION BY RANGE (year_col) ( PARTITION p0 VALUES LESS THAN (1991), PARTITION p1 VALUES LESS THAN (1995), PARTITION p2 VALUES LESS THAN (1999) );
You can add a new partition p3
to this
table for storing values less than 2002
as follows:
ALTER TABLE t1 ADD PARTITION (PARTITION p3 VALUES LESS THAN (2002));
ADD PARTITION
can also be used with the
TABLESPACE
clause to add a new partition
to an existing general tablespace, to a file-per-table
tablespace, or to the system tablespace.
ALTER TABLE t1 ADD PARTITION (PARTITION p4 VALUES LESS THAN (2015) TABLESPACE = `ts1`);
ALTER TABLE t1 ADD PARTITION (PARTITION p4 VALUES LESS THAN (2015) TABLESPACE = `innodb_file_per_table`);
ALTER TABLE t1 ADD PARTITION (PARTITION p4 VALUES LESS THAN (2015) TABLESPACE = `innodb_system`);
If the TABLESPACE = tablespace_name option is not defined, the
ALTER TABLE ... ADD PARTITION operation adds the partition
to the table's default tablespace, which can be specified
at the table level during CREATE TABLE or ALTER TABLE.
DROP PARTITION
can be used to drop one or
more RANGE
or LIST
partitions. This statement cannot be used with
HASH
or KEY
partitions; instead, use COALESCE
PARTITION
(see below). Any data that was stored in
the dropped partitions named in the
partition_names
list is
discarded. For example, given the table
t1
defined previously, you can drop the
partitions named p0
and
p1
as shown here:
ALTER TABLE t1 DROP PARTITION p0, p1;
DROP PARTITION
does not work with
tables that use the NDB
storage engine. See
Section 22.3.1, “Management of RANGE and LIST Partitions”, and
Section 21.1.6, “Known Limitations of NDB Cluster”.
ADD PARTITION
and DROP
PARTITION
do not currently support IF
[NOT] EXISTS
.
DISCARD
PARTITION ... TABLESPACE
and
IMPORT
PARTITION ... TABLESPACE
options extend the
Transportable
Tablespace feature to individual
InnoDB
table partitions. Each
InnoDB
table partition has its own
tablespace file (.ibd file). The
Transportable
Tablespace feature makes it easy to copy the
tablespaces from a running MySQL server instance to another
running instance, or to perform a restore on the same
instance. Both options take a comma-separated list of one or
more partition names. For example:
ALTER TABLE t1 DISCARD PARTITION p2, p3 TABLESPACE;
ALTER TABLE t1 IMPORT PARTITION p2, p3 TABLESPACE;
When running
DISCARD
PARTITION ... TABLESPACE
and
IMPORT
PARTITION ... TABLESPACE
on subpartitioned tables,
both partition and subpartition names are allowed. When a
partition name is specified, subpartitions of that partition
are included.
The
Transportable
Tablespace feature also supports copying or restoring
partitioned InnoDB
tables (all partitions
at once). For additional information about the
Transportable
Tablespace feature, see
Section 14.7.6, “Copying File-Per-Table Tablespaces to Another Instance”. For usage examples,
see
Section 14.7.6.1, “Transportable Tablespace Examples”.
Renames of partitioned table are supported. You can rename
individual partitions indirectly using ALTER TABLE
... REORGANIZE PARTITION
; however, this operation
makes a copy of the partition's data.
In MySQL 5.7, it is possible to delete rows
from selected partitions using the TRUNCATE
PARTITION
option. This option takes a
comma-separated list of one or more partition names. For
example, consider the table t1
as defined
here:
CREATE TABLE t1 ( id INT, year_col INT ) PARTITION BY RANGE (year_col) ( PARTITION p0 VALUES LESS THAN (1991), PARTITION p1 VALUES LESS THAN (1995), PARTITION p2 VALUES LESS THAN (1999), PARTITION p3 VALUES LESS THAN (2003), PARTITION p4 VALUES LESS THAN (2007) );
To delete all rows from partition p0
, you
can use the following statement:
ALTER TABLE t1 TRUNCATE PARTITION p0;
The statement just shown has the same effect as the
following DELETE
statement:
DELETE FROM t1 WHERE year_col < 1991;
When truncating multiple partitions, the partitions do not
have to be contiguous. This can greatly simplify delete
operations on partitioned tables that would otherwise
require very complex WHERE
conditions if
done with DELETE
statements.
For example, this statement deletes all rows from partitions
p1
and p3
:
ALTER TABLE t1 TRUNCATE PARTITION p1, p3;
An equivalent DELETE
statement is shown here:
DELETE FROM t1 WHERE (year_col >= 1991 AND year_col < 1995) OR (year_col >= 2003 AND year_col < 2007);
You can also use the ALL
keyword in place
of the list of partition names; in this case, the statement
acts on all partitions in the table.
TRUNCATE PARTITION
merely deletes rows;
it does not alter the definition of the table itself, or of
any of its partitions.
You can verify that the rows were dropped by checking the
INFORMATION_SCHEMA.PARTITIONS
table,
using a query such as this one:
SELECT PARTITION_NAME, TABLE_ROWS FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_NAME = 't1';
TRUNCATE PARTITION
is supported only for
partitioned tables that use the
MyISAM
,
InnoDB
, or
MEMORY
storage engine. It also
works on BLACKHOLE
tables (but
has no effect). It is not supported for
ARCHIVE
tables.
COALESCE PARTITION
can be used with a
table that is partitioned by HASH
or
KEY
to reduce the number of partitions by
number
. Suppose that you have
created table t2
using the following
definition:
CREATE TABLE t2 ( name VARCHAR (30), started DATE ) PARTITION BY HASH( YEAR(started) ) PARTITIONS 6;
You can reduce the number of partitions used by
t2
from 6 to 4 using the following
statement:
ALTER TABLE t2 COALESCE PARTITION 2;
The data contained in the last
number
partitions will be merged
into the remaining partitions. In this case, partitions 4
and 5 will be merged into the first 4 partitions (the
partitions numbered 0, 1, 2, and 3).
To change some but not all the partitions used by a
partitioned table, you can use REORGANIZE
PARTITION
. This statement can be used in several
ways:
To merge a set of partitions into a single partition.
This can be done by naming several partitions in the
partition_names
list and
supplying a single definition for
partition_definition
, as shown in the example following this list.
To split an existing partition into several partitions.
You can accomplish this by naming a single partition for
partition_names
and providing
multiple
partition_definitions
.
To change the ranges for a subset of partitions defined
using VALUES LESS THAN
or the value
lists for a subset of partitions defined using
VALUES IN
.
To move a partition from one tablespace to another. For an example, see Moving Table Partitions Between Tablespaces Using ALTER TABLE.
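To illustrate the first two uses, and assuming the table t1 defined earlier in this section (with partitions p0 through p4), a statement along these lines merges p0 and p1 into a single partition covering the same combined range:
ALTER TABLE t1 REORGANIZE PARTITION p0, p1 INTO (PARTITION p01 VALUES LESS THAN (1995));
Splitting works the same way in reverse: name the single partition to be split and supply multiple partition definitions after INTO.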
This statement may also be used without the
partition_names INTO (partition_definitions)
option on tables that are automatically partitioned using
HASH partitioning to force
redistribution of data. (Currently, only
NDB
tables are
automatically partitioned in this way.) This is useful
in NDB Cluster where, after you have added new NDB
Cluster data nodes online to an existing NDB Cluster,
you wish to redistribute existing NDB Cluster table data
to the new data nodes. In such cases, you should invoke
the statement with the
ALGORITHM=INPLACE
option; in other
words, as shown here:
ALTER TABLE table
ALGORITHM=INPLACE, REORGANIZE PARTITION;
You cannot perform other DDL concurrently with online
table reorganization—that is, no other DDL
statements can be issued while an ALTER TABLE
... ALGORITHM=INPLACE, REORGANIZE PARTITION
statement is executing. For more information about
adding NDB Cluster data nodes online, see
Section 21.5.14, “Adding NDB Cluster Data Nodes Online”.
ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE
PARTITION
does not work with tables which were
created using the MAX_ROWS
option,
because it uses the constant MAX_ROWS
value specified in the original
CREATE TABLE
statement to
determine the number of partitions required, so no new
partitions are created. Instead, you can use
ALTER TABLE ... ALGORITHM=INPLACE, MAX_ROWS=rows
to increase the maximum number of rows for such a table; in
this case, ALTER TABLE ... ALGORITHM=INPLACE,
REORGANIZE PARTITION is not needed (and causes
an error if executed). The value of
rows
must be greater than the
value specified for MAX_ROWS
in the
original CREATE TABLE
statement for
this to work.
Attempting to use REORGANIZE PARTITION without the
partition_names INTO (partition_definitions)
option on explicitly partitioned tables results in the
error REORGANIZE PARTITION without parameters
can only be used on auto-partitioned tables using HASH
partitioning.
For partitions that have not been explicitly named, MySQL
automatically provides the default names
p0
, p1
,
p2
, and so on. The same is true with
regard to subpartitions.
For more detailed information about and examples of
ALTER TABLE ... REORGANIZE PARTITION
statements, see
Section 22.3.1, “Management of RANGE and LIST Partitions”.
In MySQL 5.7, it is possible to exchange a
table partition or subpartition with a table using the
ALTER TABLE ...
EXCHANGE PARTITION
statement—that is, to
move any existing rows in the partition or subpartition to
the nonpartitioned table, and any existing rows in the
nonpartitioned table to the table partition or subpartition.
For usage information and examples, see Section 22.3.3, “Exchanging Partitions and Subpartitions with Tables”.
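For instance, assuming a nonpartitioned table t_archive with the same structure as t1 (the table name is hypothetical), a statement such as the following swaps the contents of partition p0 with that table:
ALTER TABLE t1 EXCHANGE PARTITION p0 WITH TABLE t_archive;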
Several additional options provide partition maintenance and
repair functionality analogous to that implemented for
nonpartitioned tables by statements such as
CHECK TABLE
and
REPAIR TABLE
(which are also
supported for partitioned tables; see
Section 13.7.2, “Table Maintenance Statements” for more
information). These include ANALYZE
PARTITION
, CHECK PARTITION
,
OPTIMIZE PARTITION
, REBUILD
PARTITION
, and REPAIR
PARTITION
. Each of these options takes a
partition_names
clause consisting
of one or more names of partitions, separated by commas. The
partitions must already exist in the table to be altered.
You can also use the ALL
keyword in place
of partition_names
, in which case
the statement acts on all partitions in the table. For more
information and examples, see
Section 22.3.4, “Maintenance of Partitions”.
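For example, using the partitioned table t1 shown earlier, statements such as these check two named partitions and then rebuild all of them:
ALTER TABLE t1 CHECK PARTITION p1, p2;
ALTER TABLE t1 REBUILD PARTITION ALL;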
Some MySQL storage engines, such as
InnoDB
, do not support
per-partition optimization. For a partitioned table using
such a storage engine, ALTER TABLE ... OPTIMIZE
PARTITION
causes the entire table to rebuilt and
analyzed, and an appropriate warning to be issued. (Bug
#11751825, Bug #42822)
To work around this problem, use the statements
ALTER TABLE ... REBUILD PARTITION
and
ALTER TABLE ... ANALYZE PARTITION
instead.
The ANALYZE PARTITION
, CHECK
PARTITION
, OPTIMIZE PARTITION
,
and REPAIR PARTITION
options are not
permitted for tables which are not partitioned.
REMOVE PARTITIONING
enables you to remove
a table's partitioning without otherwise affecting the table
or its data. This option can be combined with other
ALTER TABLE
options such as
those used to add, drop, or rename columns or indexes.
Using the ENGINE
option with
ALTER TABLE
changes the
storage engine used by the table without affecting the
partitioning.
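For example, statements along these lines (using the table t1 shown earlier) remove the partitioning and then change the storage engine, each without affecting the other property:
ALTER TABLE t1 REMOVE PARTITIONING;
ALTER TABLE t1 ENGINE = InnoDB;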
In MySQL 5.7, when ALTER TABLE ...
EXCHANGE PARTITION
or ALTER TABLE ...
TRUNCATE PARTITION
is run against a partitioned table
that uses MyISAM
(or another
storage engine that makes use of table-level locking), only
those partitions that are actually read from are locked. (This
does not apply to partitioned tables using a storage engine that
employs row-level locking, such as
InnoDB
.) See
Section 22.6.4, “Partitioning and Locking”.
It is possible for an ALTER TABLE
statement to contain a PARTITION BY
or
REMOVE PARTITIONING
clause in an addition to
other alter specifications, but the PARTITION
BY
or REMOVE PARTITIONING
clause
must be specified last after any other specifications.
The ADD PARTITION
, DROP
PARTITION
, COALESCE PARTITION
,
REORGANIZE PARTITION
, ANALYZE
PARTITION
, CHECK PARTITION
, and
REPAIR PARTITION
options cannot be combined
with other alter specifications in a single ALTER
TABLE
, since the options just listed act on individual
partitions. For more information, see
Section 13.1.8.1, “ALTER TABLE Partition Operations”.
Only a single instance of any one of the following options can
be used in a given ALTER TABLE
statement: PARTITION BY
, ADD
PARTITION
, DROP PARTITION
,
TRUNCATE PARTITION
, EXCHANGE
PARTITION
, REORGANIZE PARTITION
,
COALESCE PARTITION, ANALYZE
PARTITION, CHECK PARTITION,
OPTIMIZE PARTITION, REBUILD
PARTITION, or REMOVE PARTITIONING
.
For example, the following two statements are invalid:
ALTER TABLE t1 ANALYZE PARTITION p1, ANALYZE PARTITION p2; ALTER TABLE t1 ANALYZE PARTITION p1, CHECK PARTITION p2;
In the first case, you can analyze partitions
p1
and p2
of table
t1
concurrently using a single statement with
a single ANALYZE PARTITION
option that lists
both of the partitions to be analyzed, like this:
ALTER TABLE t1 ANALYZE PARTITION p1, p2;
In the second case, it is not possible to perform
ANALYZE
and CHECK
operations on different partitions of the same table
concurrently. Instead, you must issue two separate statements,
like this:
ALTER TABLE t1 ANALYZE PARTITION p1; ALTER TABLE t1 CHECK PARTITION p2;
REBUILD
operations are currently unsupported
for subpartitions. REBUILD
is expressly
disallowed with subpartitions, and causes ALTER
TABLE
to fail with an error if so used.
CHECK PARTITION
and REPAIR
PARTITION
operations fail when the partition to be
checked or repaired contains any duplicate key errors.
For more information about these statements, see Section 22.3.4, “Maintenance of Partitions”.
MySQL NDB Cluster 7.5 supports online table schema changes using
the standard ALTER TABLE
syntax
employed by the MySQL Server
(ALGORITHM=DEFAULT|INPLACE|COPY
), and
described elsewhere.
Some older releases of NDB Cluster used a syntax specific to
NDB
for online ALTER
TABLE
operations. That syntax has since been
removed.
Operations that add and drop indexes on variable-width columns
of NDB
tables occur online. Online
operations are noncopying; that is, they do not require that
indexes be re-created. They do not lock the table being altered
from access by other API nodes in an NDB Cluster (but see
Limitations of NDB Cluster online operations, later in this
section). Such operations do not require single user mode for
NDB
table alterations made in an
NDB cluster with multiple API nodes; transactions can continue
uninterrupted during online DDL operations.
ALGORITHM=INPLACE
can be used to perform
online ADD COLUMN
, ADD
INDEX
(including CREATE INDEX
statements), and DROP INDEX
operations on
NDB
tables. Online renaming of
NDB
tables is also supported.
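For example, assuming an NDB table nt with a column c2 (the names are hypothetical), an index can be added online with a statement such as this:
ALTER TABLE nt ADD INDEX i1 (c2), ALGORITHM=INPLACE;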
Currently you cannot add disk-based columns to
NDB
tables online. This means that,
if you wish to add an in-memory column to an
NDB
table that uses a table-level
STORAGE DISK
option, you must declare the new
column as using memory-based storage explicitly. For
example—assuming that you have already created tablespace
ts1
—suppose that you create table
t1
as follows:
mysql>CREATE TABLE t1 (
>c1 INT NOT NULL PRIMARY KEY,
>c2 VARCHAR(30)
>)
>TABLESPACE ts1 STORAGE DISK
>ENGINE NDB;
Query OK, 0 rows affected (1.73 sec) Records: 0 Duplicates: 0 Warnings: 0
You can add a new in-memory column to this table online as shown here:
mysql>ALTER TABLE t1
>ADD COLUMN c3 INT COLUMN_FORMAT DYNAMIC STORAGE MEMORY,
>ALGORITHM=INPLACE;
Query OK, 0 rows affected (1.25 sec) Records: 0 Duplicates: 0 Warnings: 0
This statement fails if the STORAGE MEMORY
option is omitted:
mysql>ALTER TABLE t1
>ADD COLUMN c4 INT COLUMN_FORMAT DYNAMIC,
>ALGORITHM=INPLACE;
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Adding column(s) or add/reorganize partition not supported online. Try ALGORITHM=COPY.
If you omit the COLUMN_FORMAT DYNAMIC
option,
the dynamic column format is employed automatically, but a
warning is issued, as shown here:
mysql>ALTER ONLINE TABLE t1 ADD COLUMN c4 INT STORAGE MEMORY;
Query OK, 0 rows affected, 1 warning (1.17 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SHOW WARNINGS\G
*************************** 1. row *************************** Level: Warning Code: 1478 Message: DYNAMIC column c4 with STORAGE DISK is not supported, column will become FIXED mysql>SHOW CREATE TABLE t1\G
*************************** 1. row *************************** Table: t1 Create Table: CREATE TABLE `t1` ( `c1` int(11) NOT NULL, `c2` varchar(30) DEFAULT NULL, `c3` int(11) /*!50606 STORAGE MEMORY */ /*!50606 COLUMN_FORMAT DYNAMIC */ DEFAULT NULL, `c4` int(11) /*!50606 STORAGE MEMORY */ DEFAULT NULL, PRIMARY KEY (`c1`) ) /*!50606 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1 1 row in set (0.03 sec)
The STORAGE
and
COLUMN_FORMAT
keywords are supported only
in NDB Cluster; in any other version of MySQL, attempting to
use either of these keywords in a CREATE
TABLE
or ALTER TABLE
statement results in an error.
It is also possible to use the statement ALTER TABLE
... REORGANIZE PARTITION, ALGORITHM=INPLACE
with no
partition_names INTO (partition_definitions)
option on NDB
tables. This can be
used to redistribute NDB Cluster data among new data nodes that
have been added to the cluster online. For more information
about this statement, see
Section 13.1.8.1, “ALTER TABLE Partition Operations”. For more
information about adding data nodes online to an NDB Cluster,
see Section 21.5.14, “Adding NDB Cluster Data Nodes Online”.
Online DROP COLUMN
operations are not
supported.
Online ALTER TABLE
,
CREATE INDEX
, or
DROP INDEX
statements that add
columns or add or drop indexes are subject to the following
limitations:
A given online ALTER TABLE
can use only one of ADD COLUMN
,
ADD INDEX
, or DROP
INDEX
. One or more columns can be added online in
a single statement; only one index may be created or dropped
online in a single statement.
The table being altered is not locked with respect to API
nodes other than the one on which an online
ALTER TABLE
ADD
COLUMN
, ADD INDEX
, or
DROP INDEX
operation (or
CREATE INDEX
or
DROP INDEX
statement) is run.
However, the table is locked against any other operations
originating on the same API node while
the online operation is being executed.
The table to be altered must have an explicit primary key;
the hidden primary key created by the
NDB
storage engine is not
sufficient for this purpose.
The storage engine used by the table cannot be changed online.
When used with NDB Cluster Disk Data tables, it is not
possible to change the storage type (DISK
or MEMORY
) of a column online. This
means that when you add or drop an index in such a way that
the operation would be performed online, and you want the
storage type of the column or columns to be changed, you
must use ALGORITHM=COPY
in the statement
that adds or drops the index.
Columns to be added online cannot use the
BLOB
or
TEXT
type, and must meet the
following criteria:
The columns must be dynamic; that is, it must be possible to
create them using COLUMN_FORMAT DYNAMIC
.
If you omit the COLUMN_FORMAT DYNAMIC
option, the dynamic column format is employed automatically.
The columns must permit NULL
values and
not have any explicit default value other than
NULL
. Columns added online are
automatically created as DEFAULT NULL
, as
can be seen here:
mysql>CREATE TABLE t2 (
>c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY
>) ENGINE=NDB;
Query OK, 0 rows affected (1.44 sec) mysql>ALTER TABLE t2
>ADD COLUMN c2 INT,
>ADD COLUMN c3 INT,
>ALGORITHM=INPLACE;
Query OK, 0 rows affected, 2 warnings (0.93 sec) mysql>SHOW CREATE TABLE t2\G
*************************** 1. row *************************** Table: t2 Create Table: CREATE TABLE `t2` ( `c1` int(11) NOT NULL AUTO_INCREMENT, `c2` int(11) DEFAULT NULL, `c3` int(11) DEFAULT NULL, PRIMARY KEY (`c1`) ) ENGINE=ndbcluster DEFAULT CHARSET=latin1 1 row in set (0.00 sec)
The columns must be added following any existing columns. If
you attempt to add a column online before any existing
columns or using the FIRST
keyword, the
statement fails with an error.
Existing table columns cannot be reordered online.
For online ALTER TABLE
operations
on NDB
tables, fixed-format columns
are converted to dynamic when they are added online, or when
indexes are created or dropped online, as shown here (repeating
the CREATE TABLE
and ALTER
TABLE
statements just shown for the sake of clarity):
mysql>CREATE TABLE t2 (
>c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY
>) ENGINE=NDB;
Query OK, 0 rows affected (1.44 sec) mysql>ALTER TABLE t2
>ADD COLUMN c2 INT,
>ADD COLUMN c3 INT,
>ALGORITHM=INPLACE;
Query OK, 0 rows affected, 2 warnings (0.93 sec) mysql>SHOW WARNINGS;
*************************** 1. row *************************** Level: Warning Code: 1478 Message: Converted FIXED field 'c2' to DYNAMIC to enable online ADD COLUMN *************************** 2. row *************************** Level: Warning Code: 1478 Message: Converted FIXED field 'c3' to DYNAMIC to enable online ADD COLUMN 2 rows in set (0.00 sec)
Only the column or columns to be added online must be dynamic.
Existing columns need not be; this includes the table's
primary key, which may also be FIXED
, as
shown here:
mysql>CREATE TABLE t3 (
>c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY COLUMN_FORMAT FIXED
>) ENGINE=NDB;
Query OK, 0 rows affected (2.10 sec) mysql>ALTER TABLE t3 ADD COLUMN c2 INT, ALGORITHM=INPLACE;
Query OK, 0 rows affected, 1 warning (0.78 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SHOW WARNINGS;
*************************** 1. row *************************** Level: Warning Code: 1478 Message: Converted FIXED field 'c2' to DYNAMIC to enable online ADD COLUMN 1 row in set (0.00 sec)
Columns are not converted from FIXED
to
DYNAMIC
column format by renaming operations.
For more information about COLUMN_FORMAT
, see
Section 13.1.18, “CREATE TABLE Syntax”.
The KEY
, CONSTRAINT
, and
IGNORE
keywords are supported in
ALTER TABLE
statements using
ALGORITHM=INPLACE
.
Beginning with NDB Cluster 7.5.7 and 7.6.3, setting
MAX_ROWS
to 0 using an online ALTER
TABLE
statement is disallowed. You must use a copying
ALTER TABLE
to perform this operation. (Bug
#21960004)
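A copying statement for this purpose might look like the following sketch (the table name is hypothetical):
ALTER TABLE ndb_tbl ALGORITHM=COPY, MAX_ROWS=0;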
ALTER TABLE
operations permitted for
generated columns are ADD
,
MODIFY
, and CHANGE
.
Generated columns can be added.
The data type and expression of generated columns can be modified.
Generated columns can be renamed or dropped, if no other column refers to them.
Virtual generated columns cannot be altered to stored generated columns, or vice versa. To work around this, drop the column, then add it with the new definition.
Nongenerated columns can be altered to stored but not virtual generated columns.
Stored but not virtual generated columns can be altered to nongenerated columns. The stored generated values become the values of the nongenerated column.
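For example, assuming a table t with integer columns a and b (hypothetical names), statements such as these add a stored generated column and then modify its expression:
ALTER TABLE t ADD COLUMN c INT GENERATED ALWAYS AS (a + b) STORED;
ALTER TABLE t MODIFY COLUMN c INT GENERATED ALWAYS AS (a * b) STORED;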
ADD COLUMN
is not an in-place operation
for stored columns (done without using a temporary table)
because the expression must be evaluated by the server. For
stored columns, indexing changes are done in place, and
expression changes are not done in place. Changes to column
comments are done in place.
For non-partitioned tables, ADD COLUMN
and DROP COLUMN
are in-place operations
for virtual columns. However, adding or dropping a virtual
column cannot be performed in place in combination with
other ALTER TABLE
operations.
For partitioned tables, ADD COLUMN
and
DROP COLUMN
are not in-place operations
for virtual columns.
InnoDB
supports secondary indexes on
virtual generated columns. Adding or dropping a secondary
index on a virtual generated column is an in-place
operation. For more information, see
Section 13.1.18.9, “Secondary Indexes and Generated Columns”.
When a VIRTUAL
generated column is added
to a table or modified, it is not ensured that data being
calculated by the generated column expression will not be
out of range for the column. This can lead to inconsistent
data being returned and unexpectedly failed statements. To
permit control over whether validation occurs for such
columns, ALTER TABLE
supports
WITHOUT VALIDATION
and WITH
VALIDATION
clauses:
With WITHOUT VALIDATION
(the default
if neither clause is specified), an in-place operation
is performed (if possible), data integrity is not
checked, and the statement finishes more quickly.
However, later reads from the table might report
warnings or errors for the column if values are out of
range.
With WITH VALIDATION
, ALTER
TABLE
copies the table. If an out-of-range or
any other error occurs, the statement fails. Because a
table copy is performed, the statement takes longer.
WITHOUT VALIDATION
and WITH
VALIDATION
are permitted only with ADD
COLUMN
, CHANGE COLUMN
, and
MODIFY COLUMN
operations. An
ER_WRONG_USAGE
error occurs
otherwise.
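As a sketch (assuming a table t with an integer column a and a virtual generated column vc, and assuming the validation clause is written as a separate comma-separated alter specification), forcing validation during a modification might look like this:
ALTER TABLE t MODIFY COLUMN vc INT GENERATED ALWAYS AS (a + 1) VIRTUAL, WITH VALIDATION;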
As of MySQL 5.7.10, if expression evaluation causes
truncation or provides incorrect input to a function, the
ALTER TABLE
statement
terminates with an error and the DDL operation is rejected.
An ALTER TABLE
statement that
changes the default value of a column
col_name
may also change the
value of a generated column expression that refers to the
column using DEFAULT(col_name).
For this reason, as of MySQL 5.7.13,
ALTER TABLE
operations that
change the definition of a column now cause a table rebuild
if any generated column expression uses
DEFAULT()
.
Begin with a table t1
that is created as
shown here:
CREATE TABLE t1 (a INTEGER,b CHAR(10));
To rename the table from t1
to
t2
:
ALTER TABLE t1 RENAME t2;
To change column a
from
INTEGER
to TINYINT NOT
NULL
(leaving the name the same), and to change column
b
from CHAR(10)
to
CHAR(20)
as well as renaming it from
b
to c
:
ALTER TABLE t2 MODIFY a TINYINT NOT NULL, CHANGE b c CHAR(20);
To add a new TIMESTAMP
column
named d
:
ALTER TABLE t2 ADD d TIMESTAMP;
To add an index on column d
and a
UNIQUE
index on column a
:
ALTER TABLE t2 ADD INDEX (d), ADD UNIQUE (a);
To remove column c
:
ALTER TABLE t2 DROP COLUMN c;
To add a new AUTO_INCREMENT
integer column
named c
:
ALTER TABLE t2 ADD c INT UNSIGNED NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (c);
We indexed c
(as a PRIMARY
KEY
) because AUTO_INCREMENT
columns
must be indexed, and we declare c
as
NOT NULL
because primary key columns cannot
be NULL
.
For NDB
tables, it is also possible
to change the storage type used for a table or column. For
example, consider an NDB
table
created as shown here:
mysql> CREATE TABLE t1 (c1 INT) TABLESPACE ts_1 ENGINE NDB;
Query OK, 0 rows affected (1.27 sec)
To convert this table to disk-based storage, you can use the
following ALTER TABLE
statement:
mysql>ALTER TABLE t1 TABLESPACE ts_1 STORAGE DISK;
Query OK, 0 rows affected (2.99 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SHOW CREATE TABLE t1\G
*************************** 1. row *************************** Table: t1 Create Table: CREATE TABLE `t1` ( `c1` int(11) DEFAULT NULL ) /*!50100 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1 1 row in set (0.01 sec)
It is not necessary for the tablespace to have been referenced when the
table was originally created; however, the tablespace must be
referenced by the ALTER TABLE
:
mysql>CREATE TABLE t2 (c1 INT) ENGINE NDB;
Query OK, 0 rows affected (1.00 sec) mysql>ALTER TABLE t2 STORAGE DISK;
ERROR 1005 (HY000): Can't create table 'c.#sql-1750_3' (errno: 140) mysql>ALTER TABLE t2 TABLESPACE ts_1 STORAGE DISK;
Query OK, 0 rows affected (3.42 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SHOW CREATE TABLE t2\G
*************************** 1. row *************************** Table: t2 Create Table: CREATE TABLE `t2` ( `c1` int(11) DEFAULT NULL ) /*!50100 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1 1 row in set (0.01 sec)
To change the storage type of an individual column, you can use
ALTER TABLE ... MODIFY [COLUMN]
. For example,
suppose you create an NDB Cluster Disk Data table with two
columns, using this CREATE TABLE
statement:
mysql>CREATE TABLE t3 (c1 INT, c2 INT)
->TABLESPACE ts_1 STORAGE DISK ENGINE NDB;
Query OK, 0 rows affected (1.34 sec)
To change column c2
from disk-based to
in-memory storage, include a STORAGE MEMORY clause in the column
definition used by the ALTER TABLE statement, as shown here:
mysql> ALTER TABLE t3 MODIFY c2 INT STORAGE MEMORY;
Query OK, 0 rows affected (3.14 sec)
Records: 0 Duplicates: 0 Warnings: 0
You can make an in-memory column into a disk-based column by
using STORAGE DISK
in a similar fashion.
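For example, the column just modified could be moved back to disk-based storage like this:
ALTER TABLE t3 MODIFY c2 INT STORAGE DISK;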
Column c1
uses disk-based storage, since this
is the default for the table (determined by the table-level
STORAGE DISK
clause in the
CREATE TABLE
statement). However,
column c2
uses in-memory storage, as can be
seen here in the output of SHOW CREATE
TABLE
:
mysql> SHOW CREATE TABLE t3\G
*************************** 1. row ***************************
Table: t3
Create Table: CREATE TABLE `t3` (
`c1` int(11) DEFAULT NULL,
`c2` int(11) /*!50120 STORAGE MEMORY */ DEFAULT NULL
) /*!50100 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.02 sec)
When you add an AUTO_INCREMENT
column, column
values are filled in with sequence numbers automatically. For
MyISAM
tables, you can set the first sequence
number by executing SET INSERT_ID=value
before ALTER TABLE or by using the
AUTO_INCREMENT=value
table option. See Section 5.1.5, “Server System Variables”.
With MyISAM
tables, if you do not change the
AUTO_INCREMENT
column, the sequence number is
not affected. If you drop an AUTO_INCREMENT
column and then add another AUTO_INCREMENT
column, the numbers are resequenced beginning with 1.
When replication is used, adding an
AUTO_INCREMENT
column to a table might not
produce the same ordering of the rows on the slave and the
master. This occurs because the order in which the rows are
numbered depends on the specific storage engine used for the
table and the order in which the rows were inserted. If it is
important to have the same order on the master and slave, the
rows must be ordered before assigning an
AUTO_INCREMENT
number. Assuming that you want
to add an AUTO_INCREMENT
column to the table
t1
, the following statements produce a new
table t2
identical to t1
but with an AUTO_INCREMENT
column:
CREATE TABLE t2 (id INT AUTO_INCREMENT PRIMARY KEY) SELECT * FROM t1 ORDER BY col1, col2;
This assumes that the table t1
has columns
col1
and col2
.
This set of statements will also produce a new table
t2
identical to t1
, with
the addition of an AUTO_INCREMENT
column:
CREATE TABLE t2 LIKE t1; ALTER TABLE t2 ADD id INT AUTO_INCREMENT PRIMARY KEY; INSERT INTO t2 SELECT * FROM t1 ORDER BY col1, col2;
To guarantee the same ordering on both master and slave,
all columns of t1
must
be referenced in the ORDER BY
clause.
Regardless of the method used to create and populate the copy
having the AUTO_INCREMENT
column, the final
step is to drop the original table and then rename the copy:
DROP TABLE t1; ALTER TABLE t2 RENAME t1;
ALTER TABLESPACEtablespace_name
{ADD|DROP} DATAFILE 'file_name
' [INITIAL_SIZE [=]size
] [WAIT] ENGINE [=]engine_name
This statement can be used either to add a new data file, or to drop a data file from a tablespace.
The ADD DATAFILE
variant enables you to specify
an initial size using an INITIAL_SIZE
clause,
where size
is measured in bytes; the
default value is 134217728 (128 MB). You may optionally follow
size
with a one-letter abbreviation for
an order of magnitude, similar to those used in
my.cnf
. Generally, this is one of the letters
M
(megabytes) or G
(gigabytes).
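For example, a statement such as the following adds a 256 MB data file to the tablespace ts_1 (the file name is hypothetical):
ALTER TABLESPACE ts_1 ADD DATAFILE 'data_2.dat' INITIAL_SIZE = 256M ENGINE = NDB;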
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a data file with the same name, or an undo log file and a tablespace with the same name.
On 32-bit systems, the maximum supported value for
INITIAL_SIZE
is 4294967296 (4 GB). (Bug #29186)
INITIAL_SIZE
is rounded, explicitly, as for
CREATE TABLESPACE
.
Once a data file has been created, its size cannot be changed;
however, you can add more data files to the tablespace using
additional ALTER TABLESPACE ... ADD DATAFILE
statements.
Using DROP DATAFILE
with
ALTER TABLESPACE
drops the data
file 'file_name
' from the tablespace.
You cannot drop a data file from a tablespace which is in use by
any table; in other words, the data file must be empty (no extents
used). See Section 21.5.13.1, “NDB Cluster Disk Data Objects”. In
addition, any data file to be dropped must previously have been
added to the tablespace with CREATE
TABLESPACE
or ALTER
TABLESPACE
.
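For example, the data file added in the earlier example could be removed again, once it is empty, with a statement like this:
ALTER TABLESPACE ts_1 DROP DATAFILE 'data_2.dat' ENGINE = NDB;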
Both ALTER TABLESPACE ... ADD DATAFILE
and
ALTER TABLESPACE ... DROP DATAFILE
require an
ENGINE
clause which specifies the storage
engine used by the tablespace. Currently, the only accepted values
for engine_name
are
NDB
and
NDBCLUSTER
.
WAIT
is parsed but otherwise ignored, and so
has no effect in MySQL 5.7. It is intended for future
expansion.
When ALTER TABLESPACE ... ADD DATAFILE
is used
with ENGINE = NDB
, a data file is created on
each Cluster data node. You can verify that the data files were
created and obtain information about them by querying the
INFORMATION_SCHEMA.FILES
table. For
example, the following query shows all data files belonging to the
tablespace named newts
:
mysql>SELECT LOGFILE_GROUP_NAME, FILE_NAME, EXTRA
->FROM INFORMATION_SCHEMA.FILES
->WHERE TABLESPACE_NAME = 'newts' AND FILE_TYPE = 'DATAFILE';
+--------------------+--------------+----------------+
| LOGFILE_GROUP_NAME | FILE_NAME    | EXTRA          |
+--------------------+--------------+----------------+
| lg_3               | newdata.dat  | CLUSTER_NODE=3 |
| lg_3               | newdata.dat  | CLUSTER_NODE=4 |
| lg_3               | newdata2.dat | CLUSTER_NODE=3 |
| lg_3               | newdata2.dat | CLUSTER_NODE=4 |
+--------------------+--------------+----------------+
4 rows in set (0.03 sec)
See Section 24.8, “The INFORMATION_SCHEMA FILES Table”.
ALTER TABLESPACE
is useful only
with Disk Data storage for NDB Cluster. See
Section 21.5.13, “NDB Cluster Disk Data Tables”.
ALTER [ALGORITHM = {UNDEFINED | MERGE | TEMPTABLE}] [DEFINER = {user
| CURRENT_USER }] [SQL SECURITY { DEFINER | INVOKER }] VIEWview_name
[(column_list
)] ASselect_statement
[WITH [CASCADED | LOCAL] CHECK OPTION]
This statement changes the definition of a view, which must exist.
The syntax is similar to that for CREATE
VIEW
and the effect is the same as for
CREATE OR REPLACE
VIEW
. See Section 13.1.21, “CREATE VIEW Syntax”. This statement
requires the CREATE VIEW
and
DROP
privileges for the view, and
some privilege for each column referred to in the
SELECT
statement.
ALTER VIEW
is permitted only to the
definer or users with the SUPER
privilege.
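For example, a view v1 selecting from a table t1 (both names hypothetical) could be redefined as follows:
ALTER VIEW v1 AS SELECT a, b FROM t1 WHERE a > 0;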
CREATE {DATABASE | SCHEMA} [IF NOT EXISTS]db_name
[create_specification
] ...create_specification
: [DEFAULT] CHARACTER SET [=]charset_name
| [DEFAULT] COLLATE [=]collation_name
CREATE DATABASE
creates a database
with the given name. To use this statement, you need the
CREATE
privilege for the database.
CREATE
SCHEMA
is a synonym for CREATE
DATABASE
.
An error occurs if the database exists and you did not specify
IF NOT EXISTS
.
In MySQL 5.7, CREATE
DATABASE
is not permitted within a session that has an
active LOCK TABLES
statement.
create_specification
options specify
database characteristics. Database characteristics are stored in
the db.opt
file in the database directory.
The CHARACTER SET
clause specifies the default
database character set. The COLLATE
clause
specifies the default database collation.
Section 10.1, “Character Set Support”, discusses character set and collation
names.
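For example, a statement such as the following creates a database named mydb (a hypothetical name) with an explicit default character set and collation, and succeeds without error if the database already exists:
CREATE DATABASE IF NOT EXISTS mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;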
A database in MySQL is implemented as a directory containing files
that correspond to tables in the database. Because there are no
tables in a database when it is initially created, the
CREATE DATABASE
statement creates
only a directory under the MySQL data directory and the
db.opt
file. Rules for permissible database
names are given in Section 9.2, “Schema Object Names”. If a database
name contains special characters, the name for the database
directory contains encoded versions of those characters as
described in Section 9.2.3, “Mapping of Identifiers to File Names”.
If you manually create a directory under the data directory (for
example, with mkdir), the server considers it a
database directory and it shows up in the output of
SHOW DATABASES
.
You can also use the mysqladmin program to create databases. See Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
CREATE [DEFINER = {user
| CURRENT_USER }] EVENT [IF NOT EXISTS]event_name
ON SCHEDULEschedule
[ON COMPLETION [NOT] PRESERVE] [ENABLE | DISABLE | DISABLE ON SLAVE] [COMMENT 'comment
'] DOevent_body
;schedule
: ATtimestamp
[+ INTERVALinterval
] ... | EVERYinterval
[STARTStimestamp
[+ INTERVALinterval
] ...] [ENDStimestamp
[+ INTERVALinterval
] ...]interval
:quantity
{YEAR | QUARTER | MONTH | DAY | HOUR | MINUTE | WEEK | SECOND | YEAR_MONTH | DAY_HOUR | DAY_MINUTE | DAY_SECOND | HOUR_MINUTE | HOUR_SECOND | MINUTE_SECOND}
This statement creates and schedules a new event. The event will not run unless the Event Scheduler is enabled. For information about checking Event Scheduler status and enabling it if necessary, see Section 23.4.2, “Event Scheduler Configuration”.
CREATE EVENT
requires the
EVENT
privilege for the schema in
which the event is to be created. It might also require the
SUPER
privilege, depending on the
DEFINER
value, as described later in this
section.
The minimum requirements for a valid CREATE
EVENT
statement are as follows:
The keywords CREATE EVENT
plus
an event name, which uniquely identifies the event in a
database schema.
An ON SCHEDULE
clause, which determines
when and how often the event executes.
A DO
clause, which contains the
SQL statement to be executed by an event.
This is an example of a minimal CREATE
EVENT
statement:
CREATE EVENT myevent ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 HOUR DO UPDATE myschema.mytable SET mycol = mycol + 1;
The previous statement creates an event named
myevent
. This event executes once—one
hour following its creation—by running an SQL statement that
increments the value of the myschema.mytable
table's mycol
column by 1.
The event_name
must be a valid MySQL
identifier with a maximum length of 64 characters. Event names are
not case sensitive, so you cannot have two events named
myevent
and MyEvent
in the
same schema. In general, the rules governing event names are the
same as those for names of stored routines. See
Section 9.2, “Schema Object Names”.
An event is associated with a schema. If no schema is indicated as
part of event_name
, the default
(current) schema is assumed. To create an event in a specific
schema, qualify the event name with a schema using
schema_name.event_name
syntax.
The DEFINER
clause specifies the MySQL account
to be used when checking access privileges at event execution
time. If a user
value is given, it
should be a MySQL account specified as
'user_name'@'host_name',
CURRENT_USER, or
CURRENT_USER()
. The default
DEFINER
value is the user who executes the
CREATE EVENT
statement. This is the
same as specifying DEFINER = CURRENT_USER
explicitly.
If you specify the DEFINER
clause, these rules
determine the valid DEFINER
user values:
If you do not have the SUPER
privilege, the only permitted user
value is your own account, either specified literally or by
using CURRENT_USER
. You cannot
set the definer to some other account.
If you have the SUPER
privilege, you can specify any syntactically valid account
name. If the account does not exist, a warning is generated.
Although it is possible to create an event with a nonexistent
DEFINER
account, an error occurs at event
execution time if the account does not exist.
For more information about event security, see Section 23.6, “Access Control for Stored Programs and Views”.
Within an event, the CURRENT_USER()
function returns the account used to check privileges at event
execution time, which is the DEFINER
user. For
information about user auditing within events, see
Section 6.3.11, “SQL-Based MySQL Account Activity Auditing”.
IF NOT EXISTS
has the same meaning for
CREATE EVENT
as for
CREATE TABLE
: If an event named
event_name
already exists in the same
schema, no action is taken, and no error results. (However, a
warning is generated in such cases.)
The ON SCHEDULE
clause determines when, how
often, and for how long the event_body
defined for the event repeats. This clause takes one of two forms:
AT timestamp is
used for a one-time event. It specifies that the event
executes one time only at the date and time given by
timestamp
, which must include both
the date and time, or must be an expression that resolves to a
datetime value. You may use a value of either the
DATETIME
or
TIMESTAMP
type for this
purpose. If the date is in the past, a warning occurs, as
shown here:
mysql>SELECT NOW();
+---------------------+ | NOW() | +---------------------+ | 2006-02-10 23:59:01 | +---------------------+ 1 row in set (0.04 sec) mysql>CREATE EVENT e_totals
->ON SCHEDULE AT '2006-02-10 23:59:00'
->DO INSERT INTO test.totals VALUES (NOW());
Query OK, 0 rows affected, 1 warning (0.00 sec) mysql>SHOW WARNINGS\G
*************************** 1. row *************************** Level: Note Code: 1588 Message: Event execution time is in the past and ON COMPLETION NOT PRESERVE is set. The event was dropped immediately after creation.
CREATE EVENT
statements which
are themselves invalid—for whatever reason—fail
with an error.
You may use CURRENT_TIMESTAMP
to specify the current date and time. In such a case, the
event acts as soon as it is created.
To create an event which occurs at some point in the future
relative to the current date and time—such as that
expressed by the phrase “three weeks from
now”—you can use the optional clause +
INTERVAL interval. The
interval
portion consists of two
parts, a quantity and a unit of time, and follows the same
syntax rules that govern intervals used in the
DATE_ADD()
function (see
Section 12.7, “Date and Time Functions”). The units keywords
are also the same, except that you cannot use any units
involving microseconds when defining an event. With some
interval types, complex time units may be used. For example,
“two minutes and ten seconds” can be expressed as
+ INTERVAL '2:10' MINUTE_SECOND
.
You can also combine intervals. For example, AT
CURRENT_TIMESTAMP + INTERVAL 3 WEEK + INTERVAL 2 DAY
is equivalent to “three weeks and two days from
now”. Each portion of such a clause must begin with
+ INTERVAL
.
To repeat actions at a regular interval, use an
EVERY
clause. The EVERY
keyword is followed by an interval
as described in the previous discussion of the
AT
keyword. (+ INTERVAL
is not used with
EVERY
.) For example, EVERY 6
WEEK
means “every six weeks”.
Although + INTERVAL
clauses are not
permitted in an EVERY
clause, you can use
the same complex time units permitted in a +
INTERVAL
.
An EVERY
clause may contain an optional
STARTS
clause. STARTS
is
followed by a timestamp
value that
indicates when the action should begin repeating, and may also
use + INTERVAL interval
to specify an
amount of time “from now”. For example,
EVERY 3 MONTH STARTS CURRENT_TIMESTAMP + INTERVAL 1
WEEK
means “every three months, beginning one
week from now”. Similarly, you can express “every
two weeks, beginning six hours and fifteen minutes from
now” as EVERY 2 WEEK STARTS CURRENT_TIMESTAMP
+ INTERVAL '6:15' HOUR_MINUTE
. Not specifying
STARTS
is the same as using STARTS
CURRENT_TIMESTAMP
—that is, the action
specified for the event begins repeating immediately upon
creation of the event.
An EVERY
clause may contain an optional
ENDS
clause. The ENDS
keyword is followed by a timestamp
value that tells MySQL when the event should stop repeating.
You may also use + INTERVAL interval
with ENDS
; for instance, EVERY 12 HOUR
STARTS CURRENT_TIMESTAMP + INTERVAL 30 MINUTE ENDS
CURRENT_TIMESTAMP + INTERVAL 4 WEEK
is equivalent to
“every twelve hours, beginning thirty minutes from now,
and ending four weeks from now”. Not using
ENDS
means that the event continues
executing indefinitely.
ENDS
supports the same syntax for complex
time units as STARTS
does.
You may use STARTS
,
ENDS
, both, or neither in an
EVERY
clause.
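Putting these clauses together, an event that runs weekly for one month, starting one day from now, might be declared as follows (reusing the myschema.mytable example shown earlier):
CREATE EVENT e_weekly ON SCHEDULE EVERY 1 WEEK STARTS CURRENT_TIMESTAMP + INTERVAL 1 DAY ENDS CURRENT_TIMESTAMP + INTERVAL 1 MONTH DO UPDATE myschema.mytable SET mycol = mycol + 1;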
If a repeating event does not terminate within its scheduling
interval, the result may be multiple instances of the event
executing simultaneously. If this is undesirable, you should
institute a mechanism to prevent simultaneous instances. For
example, you could use the
GET_LOCK()
function, or row or
table locking.
The ON SCHEDULE
clause may use expressions
involving built-in MySQL functions and user variables to obtain
any of the timestamp
or
interval
values which it contains. You
may not use stored functions or user-defined functions in such
expressions, nor may you use any table references; however, you
may use SELECT FROM DUAL
. This is true for both
CREATE EVENT
and
ALTER EVENT
statements. References
to stored functions, user-defined functions, and tables in such
cases are specifically not permitted, and fail with an error (see
Bug #22830).
Times in the ON SCHEDULE
clause are interpreted
using the current session
time_zone
value. This becomes the
event time zone; that is, the time zone that is used for event
scheduling and is in effect within the event as it executes. These
times are converted to UTC and stored along with the event time
zone in the mysql.event
table. This enables
event execution to proceed as defined regardless of any subsequent
changes to the server time zone or daylight saving time effects.
For additional information about representation of event times,
see Section 23.4.4, “Event Metadata”. See also
Section 13.7.5.18, “SHOW EVENTS Syntax”, and Section 24.7, “The INFORMATION_SCHEMA EVENTS Table”.
Normally, once an event has expired, it is immediately dropped.
You can override this behavior by specifying ON
COMPLETION PRESERVE
. Using ON COMPLETION NOT
PRESERVE
merely makes the default nonpersistent behavior
explicit.
You can create an event but prevent it from being active using the
DISABLE
keyword. Alternatively, you can use
ENABLE
to make explicit the default status,
which is active. This is most useful in conjunction with
ALTER EVENT
(see
Section 13.1.2, “ALTER EVENT Syntax”).
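For example, the following statement creates a one-time event that is preserved after it expires and is created in a disabled (inactive) state:
CREATE EVENT e_once ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 DAY ON COMPLETION PRESERVE DISABLE DO UPDATE myschema.mytable SET mycol = mycol + 1;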
A third value may also appear in place of
ENABLE
or DISABLE
;
DISABLE ON SLAVE
is set for the status of an
event on a replication slave to indicate that the event was
created on the master and replicated to the slave, but is not
executed on the slave. See
Section 16.4.1.12, “Replication of Invoked Features”.
You may supply a comment for an event using a
COMMENT
clause.
comment
may be any string of up to 64
characters that you wish to use for describing the event. The
comment text, being a string literal, must be surrounded by
quotation marks.
The DO
clause specifies an action
carried by the event, and consists of an SQL statement. Nearly any
valid MySQL statement that can be used in a stored routine can
also be used as the action statement for a scheduled event. (See
Section C.1, “Restrictions on Stored Programs”.) For example, the
following event e_hourly
deletes all rows from
the sessions
table once per hour, where this
table is part of the site_activity
schema:
CREATE EVENT e_hourly ON SCHEDULE EVERY 1 HOUR COMMENT 'Clears out sessions table each hour.' DO DELETE FROM site_activity.sessions;
MySQL stores the sql_mode
system
variable setting in effect when an event is created or altered,
and always executes the event with this setting in force,
regardless of the current server SQL mode when the event
begins executing.
A CREATE EVENT
statement that
contains an ALTER EVENT
statement
in its DO
clause appears to
succeed; however, when the server attempts to execute the
resulting scheduled event, the execution fails with an error.
Statements such as SELECT
or
SHOW
that merely return a result
set have no effect when used in an event; the output from these
is not sent to the MySQL Monitor, nor is it stored anywhere.
However, you can use statements such as
SELECT ...
INTO
and
INSERT INTO ...
SELECT
that store a result. (See the next example in
this section for an instance of the latter.)
The schema to which an event belongs is the default schema for
table references in the DO
clause.
Any references to tables in other schemas must be qualified with
the proper schema name.
As with stored routines, you can use compound-statement syntax in
the DO
clause by using the
BEGIN
and END
keywords, as
shown here:
delimiter | CREATE EVENT e_daily ON SCHEDULE EVERY 1 DAY COMMENT 'Saves total number of sessions then clears the table each day' DO BEGIN INSERT INTO site_activity.totals (time, total) SELECT CURRENT_TIMESTAMP, COUNT(*) FROM site_activity.sessions; DELETE FROM site_activity.sessions; END | delimiter ;
This example uses the delimiter
command to
change the statement delimiter. See
Section 23.1, “Defining Stored Programs”.
More complex compound statements, such as those used in stored routines, are possible in an event. This example uses local variables, an error handler, and a flow control construct:
delimiter | CREATE EVENT e ON SCHEDULE EVERY 5 SECOND DO BEGIN DECLARE v INTEGER; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN END; SET v = 0; WHILE v < 5 DO INSERT INTO t1 VALUES (0); UPDATE t2 SET s1 = s1 + 1; SET v = v + 1; END WHILE; END | delimiter ;
There is no way to pass parameters directly to or from events; however, it is possible to invoke a stored routine with parameters within an event:
CREATE EVENT e_call_myproc ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 DAY DO CALL myproc(5, 27);
If an event's definer has the SUPER
privilege, the event can read and write global variables. As
granting this privilege entails a potential for abuse, extreme
care must be taken in doing so.
Generally, any statements that are valid in stored routines may be used for action statements executed by events. For more information about statements permissible within stored routines, see Section 23.2.1, “Stored Routine Syntax”. You can create an event as part of a stored routine, but an event cannot be created by another event.
The CREATE FUNCTION
statement is
used to create stored functions and user-defined functions (UDFs):
For information about creating stored functions, see Section 13.1.16, “CREATE PROCEDURE and CREATE FUNCTION Syntax”.
For information about creating user-defined functions, see Section 13.7.3.1, “CREATE FUNCTION Syntax for User-Defined Functions”.
CREATE [UNIQUE|FULLTEXT|SPATIAL] INDEXindex_name
[index_type
] ONtbl_name
(index_col_name
,...) [index_option
] [algorithm_option
|lock_option
] ...index_col_name
:col_name
[(length
)] [ASC | DESC]index_option
: KEY_BLOCK_SIZE [=]value
|index_type
| WITH PARSERparser_name
| COMMENT 'string
'index_type
: USING {BTREE | HASH}algorithm_option
: ALGORITHM [=] {DEFAULT|INPLACE|COPY}lock_option
: LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}
CREATE INDEX
is mapped to an
ALTER TABLE
statement to create
indexes. See Section 13.1.8, “ALTER TABLE Syntax”.
CREATE INDEX
cannot be used to
create a PRIMARY KEY
; use
ALTER TABLE
instead. For more
information about indexes, see Section 8.3.1, “How MySQL Uses Indexes”.
Normally, you create all indexes on a table at the time the table
itself is created with CREATE
TABLE
. See Section 13.1.18, “CREATE TABLE Syntax”. This
guideline is especially important for
InnoDB
tables, where the primary key
determines the physical layout of rows in the data file.
CREATE INDEX
enables you to add
indexes to existing tables.
A column list of the form (col1, col2, ...)
creates a multiple-column index. Index key values are formed by
concatenating the values of the given columns.
For string columns, indexes can be created that use only the
leading part of column values, using
col_name(length)
syntax to specify an index prefix length:
Prefixes can be specified for
CHAR
,
VARCHAR
,
BINARY
, and
VARBINARY
column indexes.
Prefixes must be specified for
BLOB
and
TEXT
column indexes.
Prefix limits are measured in bytes, whereas the prefix length
in CREATE TABLE
,
ALTER TABLE
, and
CREATE INDEX
statements is
interpreted as number of characters for nonbinary string types
(CHAR
,
VARCHAR
,
TEXT
) and number of bytes for
binary string types (BINARY
,
VARBINARY
,
BLOB
). Take this into account
when specifying a prefix length for a nonbinary string column
that uses a multibyte character set.
For spatial columns, prefix values cannot be given, as described later in this section.
The statement shown here creates an index using the first 10
characters of the name
column (assuming that
name
has a nonbinary string type):
CREATE INDEX part_of_name ON customer (name(10));
If names in the column usually differ in the first 10 characters,
this index should not be much slower than an index created from
the entire name
column. Also, using column
prefixes for indexes can make the index file much smaller, which
could save a lot of disk space and might also speed up
INSERT
operations.
Prefix support and lengths of prefixes (where supported) are
storage engine dependent. For example, a prefix can be up to 767
bytes long for InnoDB
tables or 3072
bytes if the innodb_large_prefix
option is enabled. For MyISAM
tables,
the prefix limit is 1000 bytes. The
NDB
storage engine does not support
prefixes (see
Section 21.1.6.6, “Unsupported or Missing Features in NDB Cluster”).
NDB Cluster formerly supported online CREATE
INDEX
operations using an alternative syntax that is no
longer supported. NDB Cluster now supports online operations using
the same ALGORITHM=INPLACE
syntax used with the
standard MySQL Server. See
Section 13.1.8.2, “ALTER TABLE Online Operations in NDB Cluster”, for more
information.
A UNIQUE
index creates a constraint such that
all values in the index must be distinct. An error occurs if you
try to add a new row with a key value that matches an existing
row. For all engines, a UNIQUE
index permits
multiple NULL
values for columns that can
contain NULL
. If you specify a prefix value for
a column in a UNIQUE
index, the column values
must be unique within the prefix.
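For example, reusing the customer table from the prefix example above, the following statement requires the first 10 characters of name to be distinct across rows:
CREATE UNIQUE INDEX uniq_name_prefix ON customer (name(10));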
FULLTEXT
indexes are supported only for
InnoDB
and
MyISAM
tables and can include only
CHAR
,
VARCHAR
, and
TEXT
columns. Indexing always
happens over the entire column; column prefix indexing is not
supported and any prefix length is ignored if specified. See
Section 12.9, “Full-Text Search Functions”, for details of operation.
The MyISAM
,
InnoDB
,
NDB
, and
ARCHIVE
storage engines support
spatial columns such as POINT
and
GEOMETRY
.
(Section 11.5, “Extensions for Spatial Data”, describes the spatial data
types.) However, support for spatial column indexing varies among
engines. Spatial and nonspatial indexes are available according to
the following rules.
Spatial indexes (created using SPATIAL INDEX)
have these characteristics:
Available only for InnoDB and MyISAM tables. Specifying SPATIAL INDEX for other storage engines results in an error.
Indexed columns must be NOT NULL.
Column prefix lengths are prohibited. The full width of each column is indexed.
Characteristics of nonspatial indexes (created with
INDEX
, UNIQUE
, or
PRIMARY KEY
):
Permitted for any storage engine that supports spatial columns
except ARCHIVE
.
Columns can be NULL
unless the index is a
primary key.
For each spatial column in a non-SPATIAL
index except POINT
columns, a
column prefix length must be specified. (This is the same
requirement as for indexed BLOB
columns.) The prefix length is given in bytes.
The index type for a non-SPATIAL
index
depends on the storage engine. Currently, B-tree is used.
You can add an index on a column that can have
NULL
values only for
InnoDB
,
MyISAM
, and
MEMORY
tables.
You can add an index on a BLOB
or TEXT
column only if you are using
the InnoDB
or
MyISAM
storage engine.
When the
innodb_stats_persistent
setting is enabled, run the ANALYZE
TABLE
statement for an
InnoDB
table after creating an
index on that table.
InnoDB
supports secondary indexes on
virtual columns. For more information, see
Section 13.1.18.9, “Secondary Indexes and Generated Columns”.
An index_col_name
specification can end
with ASC
or DESC
. These
keywords are permitted for future extensions for specifying
ascending or descending index value storage. Currently, they are
parsed but ignored; index values are always stored in ascending
order.
Following the index column list, index options can be given. An
index_option
value can be any of the
following:
KEY_BLOCK_SIZE [=]
value
For MyISAM
tables,
KEY_BLOCK_SIZE
optionally specifies the
size in bytes to use for index key blocks. The value is
treated as a hint; a different size could be used if
necessary. A KEY_BLOCK_SIZE
value specified
for an individual index definition overrides a table-level
KEY_BLOCK_SIZE
value.
KEY_BLOCK_SIZE
is not supported at the
index level for InnoDB
tables.
See Section 13.1.18, “CREATE TABLE Syntax”.
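For example, assuming a MyISAM table t_myisam with a name column (hypothetical names), a key block size hint can be given for an individual index like this:
CREATE INDEX name_idx ON t_myisam (name) KEY_BLOCK_SIZE = 1024;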
index_type
Some storage engines permit you to specify an index type when creating an index. For example:
CREATE TABLE lookup (id INT) ENGINE = MEMORY; CREATE INDEX id_index ON lookup (id) USING BTREE;
Table 13.1, “Index Types Per Storage Engine”
shows the permissible index type values supported by different
storage engines. Where multiple index types are listed, the
first one is the default when no index type specifier is
given. Storage engines not listed in the table do not support
an index_type
clause in index
definitions.
The index_type
clause cannot be
used for FULLTEXT INDEX
or SPATIAL
INDEX
specifications. Full-text index implementation
is storage engine dependent. Spatial indexes are implemented
as R-tree indexes.
BTREE
indexes are implemented by the
NDB
storage engine as T-tree
indexes.
For indexes on NDB
table
columns, the USING
option can be
specified only for a unique index or primary key.
USING HASH
prevents the creation of an
ordered index; otherwise, creating a unique index or primary
key on an NDB
table
automatically results in the creation of both an ordered
index and a hash index, each of which indexes the same set
of columns.
For unique indexes that include one or more
NULL
columns of an
NDB
table, the hash index can
be used only to look up literal values, which means that
IS [NOT] NULL
conditions require a full
scan of the table. One workaround is to make sure that a
unique index using one or more NULL
columns on such a table is always created in such a way that
it includes the ordered index; that is, avoid employing
USING HASH
when creating the index.
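For example, assuming an NDB table ndb_tbl with a column c2 (hypothetical names), the following statement creates only a unique hash index, suppressing the implicit ordered index:
CREATE UNIQUE INDEX uk_c2 ON ndb_tbl (c2) USING HASH;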
If you specify an index type that is not valid for a given
storage engine, but another index type is available that the
engine can use without affecting query results, the engine
uses the available type. The parser recognizes
RTREE
as a type name, but currently this
cannot be specified for any storage engine.
Use of the index_type
option
before the ON tbl_name
clause is
deprecated; support for use of the option in this position
will be removed in a future MySQL release. If an
index_type
option is given in
both the earlier and later positions, the final option
applies.
TYPE type_name
is recognized as a synonym for
USING type_name. However,
USING type_name
is the preferred form.
For the storage engines that support an
index_type
option,
Table 13.2, “Storage Engine Index Characteristics”
shows some characteristics of index use.
Table 13.2 Storage Engine Index Characteristics
Storage Engine | Index Type | Index Class | Stores NULL Values | Permits Multiple NULL Values | IS NULL Scan Type | IS NOT NULL Scan Type |
---|---|---|---|---|---|---|
InnoDB | BTREE | Primary key | No | No | N/A | N/A |
InnoDB | BTREE | Unique | Yes | Yes | Index | Index |
InnoDB | BTREE | Key | Yes | Yes | Index | Index |
InnoDB | Inapplicable | FULLTEXT | Yes | Yes | Table | Table |
InnoDB | Inapplicable | SPATIAL | No | No | N/A | N/A |
MyISAM | BTREE | Primary key | No | No | N/A | N/A |
MyISAM | BTREE | Unique | Yes | Yes | Index | Index |
MyISAM | BTREE | Key | Yes | Yes | Index | Index |
MyISAM | Inapplicable | FULLTEXT | Yes | Yes | Table | Table |
MyISAM | Inapplicable | SPATIAL | No | No | N/A | N/A |
MEMORY | HASH | Primary key | No | No | N/A | N/A |
MEMORY | HASH | Unique | Yes | Yes | Index | Index |
MEMORY | HASH | Key | Yes | Yes | Index | Index |
MEMORY | BTREE | Primary key | No | No | N/A | N/A |
MEMORY | BTREE | Unique | Yes | Yes | Index | Index |
MEMORY | BTREE | Key | Yes | Yes | Index | Index |
NDB | BTREE | Primary key | No | No | Index | Index |
NDB | BTREE | Unique | Yes | Yes | Index | Index |
NDB | BTREE | Key | Yes | Yes | Index | Index |
NDB | HASH | Primary key | No | No | Table (see note 1) | Table (see note 1) |
NDB | HASH | Unique | Yes | Yes | Table (see note 1) | Table (see note 1) |
NDB | HASH | Key | Yes | Yes | Table (see note 1) | Table (see note 1) |
Table note:
1. If USING HASH
is specified, it prevents
creation of an implicit ordered index.
WITH PARSER
parser_name
This option can be used only with FULLTEXT
indexes. It associates a parser plugin with the index if
full-text indexing and searching operations need special
handling. InnoDB
and
MyISAM
support full-text parser
plugins. See Full-Text Parser Plugins and
Section 28.2.4.4, “Writing Full-Text Parser Plugins” for more
information.
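For example, assuming that the ngram full-text parser plugin is available and that a table articles with a body column exists (hypothetical names), a parser can be associated with a full-text index like this:
CREATE FULLTEXT INDEX ft_body ON articles (body) WITH PARSER ngram;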
COMMENT '
string
'
Index definitions can include an optional comment of up to 1024 characters.
The
MERGE_THRESHOLD
for index pages can be configured for individual indexes using
the index_option
COMMENT
clause of the
CREATE INDEX
statement. For
example:
CREATE TABLE t1 (id INT); CREATE INDEX id_index ON t1 (id) COMMENT 'MERGE_THRESHOLD=40';
If the page-full percentage for an index page falls below the
MERGE_THRESHOLD
value when a row is deleted
or when a row is shortened by an update operation,
InnoDB
attempts to merge the
index page with a neighboring index page. The default
MERGE_THRESHOLD
value is 50, which is the
previously hardcoded value.
MERGE_THRESHOLD
can also be defined at the
index level and table level using
CREATE TABLE
and
ALTER TABLE
statements. For
more information, see
Section 14.6.13, “Configuring the Merge Threshold for Index Pages”.
ALGORITHM
and LOCK
clauses
may be given to influence the table copying method and level of
concurrency for reading and writing the table while its indexes
are being modified. They have the same meaning as for the
ALTER TABLE
statement. For more
information, see Section 13.1.8, “ALTER TABLE Syntax”.
CREATE LOGFILE GROUP logfile_group
    ADD UNDOFILE 'undo_file'
    [INITIAL_SIZE [=] initial_size]
    [UNDO_BUFFER_SIZE [=] undo_buffer_size]
    [REDO_BUFFER_SIZE [=] redo_buffer_size]
    [NODEGROUP [=] nodegroup_id]
    [WAIT]
    [COMMENT [=] comment_text]
    ENGINE [=] engine_name
This statement creates a new log file group named
logfile_group
having a single
UNDO
file named
'undo_file
'. A
CREATE LOGFILE GROUP
statement has
one and only one ADD UNDOFILE
clause. For rules
covering the naming of log file groups, see
Section 9.2, “Schema Object Names”.
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a log file group with the same name, or a tablespace and a data file with the same name.
There can be only one log file group per NDB Cluster instance at any given time.
The optional INITIAL_SIZE
parameter sets the
UNDO
file's initial size; if not specified, it
defaults to 128M
(128 megabytes). The optional
UNDO_BUFFER_SIZE
parameter sets the size used
by the UNDO
buffer for the log file group. The default value for UNDO_BUFFER_SIZE
is
8M
(eight megabytes); this value cannot exceed
the amount of system memory available. Both of these parameters
are specified in bytes. You may optionally follow either or both
of these with a one-letter abbreviation for an order of magnitude,
similar to those used in my.cnf
. Generally,
this is one of the letters M
(for megabytes)
or G
(for gigabytes).
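For example, a statement of this general form (the group and file names are illustrative only) sets both sizes using these suffixes:
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo_1.dat'
    INITIAL_SIZE = 1G          -- 1 gigabyte
    UNDO_BUFFER_SIZE = 32M     -- 32 megabytes
    ENGINE = NDB;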
Memory used for UNDO_BUFFER_SIZE
comes from the
global pool whose size is determined by the value of the
SharedGlobalMemory
data
node configuration parameter. This includes any default value
implied for this option by the setting of the
InitialLogFileGroup
data
node configuration parameter.
The maximum permitted for UNDO_BUFFER_SIZE
is
629145600 (600 MB).
On 32-bit systems, the maximum supported value for
INITIAL_SIZE
is 4294967296 (4 GB). (Bug #29186)
The minimum allowed value for INITIAL_SIZE
is
1048576 (1 MB).
The ENGINE
option determines the storage engine
to be used by this log file group, with
engine_name
being the name of the
storage engine. In MySQL 5.7, this must be
NDB
(or
NDBCLUSTER
). If
ENGINE
is not set, MySQL tries to use the
engine specified by the
default_storage_engine
server
system variable (formerly
storage_engine
). In any case, if
the engine is not specified as NDB
or
NDBCLUSTER
, the CREATE
LOGFILE GROUP
statement appears to succeed but actually
fails to create the log file group, as shown here:
mysql> CREATE LOGFILE GROUP lg1
    ->     ADD UNDOFILE 'undo.dat' INITIAL_SIZE = 10M;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                        |
+-------+------+------------------------------------------------------------------------------------------------+
| Error | 1478 | Table storage engine 'InnoDB' does not support the create option 'TABLESPACE or LOGFILE GROUP' |
+-------+------+------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> DROP LOGFILE GROUP lg1 ENGINE = NDB;
ERROR 1529 (HY000): Failed to drop LOGFILE GROUP

mysql> CREATE LOGFILE GROUP lg1
    ->     ADD UNDOFILE 'undo.dat' INITIAL_SIZE = 10M
    ->     ENGINE = NDB;
Query OK, 0 rows affected (2.97 sec)
The fact that the CREATE LOGFILE GROUP
statement does not actually return an error when a
non-NDB
storage engine is named, but rather
appears to succeed, is a known issue which we hope to address in a
future release of NDB Cluster.
REDO_BUFFER_SIZE
,
NODEGROUP
, WAIT
, and
COMMENT
are parsed but ignored, and so have no
effect in MySQL 5.7. These options are intended for
future expansion.
When used with ENGINE [=] NDB
, a log file group
and associated UNDO
log file are created on
each Cluster data node. You can verify that the
UNDO
files were created and obtain information
about them by querying the
INFORMATION_SCHEMA.FILES
table. For
example:
mysql> SELECT LOGFILE_GROUP_NAME, LOGFILE_GROUP_NUMBER, EXTRA
    ->     FROM INFORMATION_SCHEMA.FILES
    ->     WHERE FILE_NAME = 'undo_10.dat';
+--------------------+----------------------+----------------+
| LOGFILE_GROUP_NAME | LOGFILE_GROUP_NUMBER | EXTRA          |
+--------------------+----------------------+----------------+
| lg_3               | 11                   | CLUSTER_NODE=3 |
| lg_3               | 11                   | CLUSTER_NODE=4 |
+--------------------+----------------------+----------------+
2 rows in set (0.06 sec)
CREATE LOGFILE GROUP
is useful only
with Disk Data storage for NDB Cluster. See
Section 21.5.13, “NDB Cluster Disk Data Tables”.
CREATE
    [DEFINER = { user | CURRENT_USER }]
    PROCEDURE sp_name ([proc_parameter[,...]])
    [characteristic ...] routine_body

CREATE
    [DEFINER = { user | CURRENT_USER }]
    FUNCTION sp_name ([func_parameter[,...]])
    RETURNS type
    [characteristic ...] routine_body

proc_parameter:
    [ IN | OUT | INOUT ] param_name type

func_parameter:
    param_name type

type:
    Any valid MySQL data type

characteristic:
    COMMENT 'string'
  | LANGUAGE SQL
  | [NOT] DETERMINISTIC
  | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
  | SQL SECURITY { DEFINER | INVOKER }

routine_body:
    Valid SQL routine statement
These statements create stored routines. By default, a routine is
associated with the default database. To associate the routine
explicitly with a given database, specify the name as
db_name.sp_name
when you create it.
The CREATE FUNCTION
statement is
also used in MySQL to support UDFs (user-defined functions). See
Section 28.4, “Adding New Functions to MySQL”. A UDF can be regarded as an
external stored function. Stored functions share their namespace
with UDFs. See Section 9.2.4, “Function Name Parsing and Resolution”, for the
rules describing how the server interprets references to different
kinds of functions.
To invoke a stored procedure, use the
CALL
statement (see
Section 13.2.1, “CALL Syntax”). To invoke a stored function, refer to it
in an expression. The function returns a value during expression
evaluation.
CREATE PROCEDURE
and
CREATE FUNCTION
require the
CREATE ROUTINE
privilege. They
might also require the SUPER
privilege, depending on the DEFINER
value, as
described later in this section. If binary logging is enabled,
CREATE FUNCTION
might require the
SUPER
privilege, as described in
Section 23.7, “Binary Logging of Stored Programs”.
By default, MySQL automatically grants the
ALTER ROUTINE
and
EXECUTE
privileges to the routine
creator. This behavior can be changed by disabling the
automatic_sp_privileges
system
variable. See Section 23.2.2, “Stored Routines and MySQL Privileges”.
The DEFINER
and SQL SECURITY
clauses specify the security context to be used when checking
access privileges at routine execution time, as described later in
this section.
If the routine name is the same as the name of a built-in SQL function, a syntax error occurs unless you use a space between the name and the following parenthesis when defining the routine or invoking it later. For this reason, avoid using the names of existing SQL functions for your own stored routines.
The IGNORE_SPACE
SQL mode
applies to built-in functions, not to stored routines. It is
always permissible to have spaces after a stored routine name,
regardless of whether
IGNORE_SPACE
is enabled.
The parameter list enclosed within parentheses must always be
present. If there are no parameters, an empty parameter list of
()
should be used. Parameter names are not case
sensitive.
Each parameter is an IN
parameter by default.
To specify otherwise for a parameter, use the keyword
OUT
or INOUT
before the
parameter name.
Specifying a parameter as IN
,
OUT
, or INOUT
is valid
only for a PROCEDURE
. For a
FUNCTION
, parameters are always regarded as
IN
parameters.
An IN
parameter passes a value into a
procedure. The procedure might modify the value, but the
modification is not visible to the caller when the procedure
returns. An OUT
parameter passes a value from
the procedure back to the caller. Its initial value is
NULL
within the procedure, and its value is
visible to the caller when the procedure returns. An
INOUT
parameter is initialized by the caller,
can be modified by the procedure, and any change made by the
procedure is visible to the caller when the procedure returns.
For each OUT
or INOUT
parameter, pass a user-defined variable in the
CALL
statement that invokes the
procedure so that you can obtain its value when the procedure
returns. If you are calling the procedure from within another
stored procedure or function, you can also pass a routine
parameter or local routine variable as an IN
or
INOUT
parameter.
Routine parameters cannot be referenced in statements prepared within the routine; see Section C.1, “Restrictions on Stored Programs”.
The following example shows a simple stored procedure that uses an
OUT
parameter:
mysql> delimiter //
mysql> CREATE PROCEDURE simpleproc (OUT param1 INT)
    -> BEGIN
    ->   SELECT COUNT(*) INTO param1 FROM t;
    -> END//
Query OK, 0 rows affected (0.00 sec)

mysql> delimiter ;

mysql> CALL simpleproc(@a);
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @a;
+------+
| @a   |
+------+
|    3 |
+------+
1 row in set (0.00 sec)
The example uses the mysql client
delimiter
command to change the statement
delimiter from ;
to //
while
the procedure is being defined. This enables the
;
delimiter used in the procedure body to be
passed through to the server rather than being interpreted by
mysql itself. See
Section 23.1, “Defining Stored Programs”.
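An INOUT parameter is handled the same way in both directions. The following sketch is illustrative only (the procedure name and logic are made up for this example):
delimiter //
CREATE PROCEDURE add_step (INOUT counter INT, IN step INT)
BEGIN
  -- the modified value of counter is visible to the caller on return
  SET counter = counter + step;
END//
delimiter ;

SET @c = 10;
CALL add_step(@c, 5);
SELECT @c;  -- @c is now 15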
The RETURNS
clause may be specified only for a
FUNCTION
, for which it is mandatory. It
indicates the return type of the function, and the function body
must contain a RETURN value statement. If the RETURN statement returns a value of a different type, the value is coerced to the proper type. For example, if a function specifies an ENUM or SET value in the RETURNS clause, but the RETURN statement returns an integer, the value returned from the function is the string for the corresponding ENUM member or set of SET members.
The following example function takes a parameter, performs an
operation using an SQL function, and returns the result. In this
case, it is unnecessary to use delimiter
because the function definition contains no internal
;
statement delimiters:
mysql> CREATE FUNCTION hello (s CHAR(20))
mysql> RETURNS CHAR(50) DETERMINISTIC
    -> RETURN CONCAT('Hello, ',s,'!');
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT hello('world');
+----------------+
| hello('world') |
+----------------+
| Hello, world!  |
+----------------+
1 row in set (0.00 sec)
Parameter types and function return types can be declared to use
any valid data type. The COLLATE
attribute can
be used if preceded by the CHARACTER SET
attribute.
The routine_body
consists of a valid
SQL routine statement. This can be a simple statement such as
SELECT
or
INSERT
, or a compound statement
written using BEGIN
and END
.
Compound statements can contain declarations, loops, and other
control structure statements. The syntax for these statements is
described in Section 13.6, “Compound-Statement Syntax”.
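As an illustration, a compound-statement body might combine a declaration and a loop, as in this sketch (the procedure name and logic are invented for the example):
delimiter //
CREATE PROCEDURE sum_to_n (IN n INT, OUT total INT)
BEGIN
  DECLARE i INT DEFAULT 1;   -- local variable declaration
  SET total = 0;
  WHILE i <= n DO            -- loop control structure
    SET total = total + i;
    SET i = i + 1;
  END WHILE;
END//
delimiter ;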
MySQL permits routines to contain DDL statements, such as
CREATE
and DROP
. MySQL also
permits stored procedures (but not stored functions) to contain
SQL transaction statements such as
COMMIT
. Stored functions may not
contain statements that perform explicit or implicit commit or
rollback. Support for these statements is not required by the SQL
standard, which states that each DBMS vendor may decide whether to
permit them.
Statements that return a result set can be used within a stored
procedure but not within a stored function. This prohibition
includes SELECT statements that do not have an INTO var_list clause and other statements such as SHOW, EXPLAIN, and CHECK TABLE
. For statements that
can be determined at function definition time to return a result
set, a Not allowed to return a result set from a
function
error occurs
(ER_SP_NO_RETSET
). For statements
that can be determined only at runtime to return a result set, a
PROCEDURE %s can't return a result set in the given
context
error occurs
(ER_SP_BADSELECT
).
USE statements within stored routines are not permitted. When a routine is invoked, an implicit USE db_name is performed (and undone when the routine terminates). This causes the routine to have the given default database while it executes. References to objects in databases other than the routine default database should be qualified with the appropriate database name.
For additional information about statements that are not permitted in stored routines, see Section C.1, “Restrictions on Stored Programs”.
For information about invoking stored procedures from within programs written in a language that has a MySQL interface, see Section 13.2.1, “CALL Syntax”.
MySQL stores the sql_mode
system
variable setting in effect when a routine is created or altered,
and always executes the routine with this setting in force,
regardless of the current server SQL mode when the
routine begins executing.
The switch from the SQL mode of the invoker to that of the routine occurs after evaluation of arguments and assignment of the resulting values to routine parameters. If you define a routine in strict SQL mode but invoke it in nonstrict mode, assignment of arguments to routine parameters does not take place in strict mode. If you require that expressions passed to a routine be assigned in strict SQL mode, you should invoke the routine with strict mode in effect.
The COMMENT
characteristic is a MySQL
extension, and may be used to describe the stored routine. This
information is displayed by the SHOW CREATE
PROCEDURE
and SHOW CREATE
FUNCTION
statements.
The LANGUAGE
characteristic indicates the
language in which the routine is written. The server ignores this
characteristic; only SQL routines are supported.
A routine is considered “deterministic” if it always
produces the same result for the same input parameters, and
“not deterministic” otherwise. If neither
DETERMINISTIC
nor NOT
DETERMINISTIC
is given in the routine definition, the
default is NOT DETERMINISTIC
. To declare that a
function is deterministic, you must specify
DETERMINISTIC
explicitly.
Assessment of the nature of a routine is based on the
“honesty” of the creator: MySQL does not check that a
routine declared DETERMINISTIC
is free of
statements that produce nondeterministic results. However,
misdeclaring a routine might affect results or affect performance.
Declaring a nondeterministic routine as
DETERMINISTIC
might lead to unexpected results
by causing the optimizer to make incorrect execution plan choices.
Declaring a deterministic routine as
NONDETERMINISTIC
might diminish performance by
causing available optimizations not to be used.
If binary logging is enabled, the DETERMINISTIC
characteristic affects which routine definitions MySQL accepts.
See Section 23.7, “Binary Logging of Stored Programs”.
A routine that contains the NOW()
function (or its synonyms) or
RAND()
is nondeterministic, but it
might still be replication-safe. For
NOW()
, the binary log includes the
timestamp and replicates correctly.
RAND()
also replicates correctly as
long as it is called only a single time during the execution of a
routine. (You can consider the routine execution timestamp and
random number seed as implicit inputs that are identical on the
master and slave.)
Several characteristics provide information about the nature of data use by the routine. In MySQL, these characteristics are advisory only. The server does not use them to constrain what kinds of statements a routine will be permitted to execute.
CONTAINS SQL
indicates that the routine
does not contain statements that read or write data. This is
the default if none of these characteristics is given
explicitly. Examples of such statements are SET @x =
1
or DO RELEASE_LOCK('abc')
,
which execute but neither read nor write data.
NO SQL
indicates that the routine contains
no SQL statements.
READS SQL DATA
indicates that the routine
contains statements that read data (for example,
SELECT
), but not statements
that write data.
MODIFIES SQL DATA
indicates that the
routine contains statements that may write data (for example,
INSERT
or
DELETE
).
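For example, a function that only reads data might be declared as in the following sketch (the function name and query are illustrative):
CREATE FUNCTION account_total ()
RETURNS INT
READS SQL DATA
RETURN (SELECT COUNT(*) FROM mysql.user);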
The SQL SECURITY
characteristic can be
DEFINER
or INVOKER
to
specify the security context; that is, whether the routine
executes using the privileges of the account named in the routine
DEFINER
clause or the user who invokes it. This
account must have permission to access the database with which the
routine is associated. The default value is
DEFINER
. The user who invokes the routine must
have the EXECUTE
privilege for it,
as must the DEFINER
account if the routine
executes in definer security context.
The DEFINER
clause specifies the MySQL account
to be used when checking access privileges at routine execution
time for routines that have the SQL SECURITY
DEFINER
characteristic.
If a user value is given for the DEFINER clause, it should be a MySQL account specified as 'user_name'@'host_name', CURRENT_USER, or CURRENT_USER()
. The default
DEFINER
value is the user who executes the
CREATE PROCEDURE
or
CREATE FUNCTION
statement. This is
the same as specifying DEFINER = CURRENT_USER
explicitly.
If you specify the DEFINER
clause, these rules
determine the valid DEFINER
user values:
If you do not have the SUPER
privilege, the only permitted user
value is your own account, either specified literally or by
using CURRENT_USER
. You cannot
set the definer to some other account.
If you have the SUPER
privilege, you can specify any syntactically valid account
name. If the account does not exist, a warning is generated.
Although it is possible to create a routine with a nonexistent
DEFINER
account, an error occurs at routine
execution time if the SQL SECURITY
value is
DEFINER
but the definer account does not
exist.
For more information about stored routine security, see Section 23.6, “Access Control for Stored Programs and Views”.
Within a stored routine that is defined with the SQL
SECURITY DEFINER
characteristic,
CURRENT_USER
returns the routine's
DEFINER
value. For information about user
auditing within stored routines, see
Section 6.3.11, “SQL-Based MySQL Account Activity Auditing”.
Consider the following procedure, which displays a count of the
number of MySQL accounts listed in the
mysql.user
table:
CREATE DEFINER = 'admin'@'localhost' PROCEDURE account_count()
BEGIN
  SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
END;
The procedure is assigned a DEFINER
account of
'admin'@'localhost'
no matter which user
defines it. It executes with the privileges of that account no
matter which user invokes it (because the default security
characteristic is DEFINER
). The procedure
succeeds or fails depending on whether the invoker has the
EXECUTE
privilege for it and
'admin'@'localhost'
has the
SELECT
privilege for the
mysql.user
table.
Now suppose that the procedure is defined with the SQL
SECURITY INVOKER
characteristic:
CREATE DEFINER = 'admin'@'localhost' PROCEDURE account_count()
SQL SECURITY INVOKER
BEGIN
  SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
END;
The procedure still has a DEFINER
of
'admin'@'localhost'
, but in this case, it
executes with the privileges of the invoking user. Thus, the
procedure succeeds or fails depending on whether the invoker has
the EXECUTE
privilege for it and
the SELECT
privilege for the
mysql.user
table.
The server handles the data type of a routine parameter, local
routine variable created with
DECLARE
, or function return value
as follows:
Assignments are checked for data type mismatches and overflow. Conversion and overflow problems result in warnings, or errors in strict SQL mode.
Only scalar values can be assigned. For example, a statement
such as SET x = (SELECT 1, 2)
is invalid.
For character data types, if there is a CHARACTER
SET
attribute in the declaration, the specified
character set and its default collation is used. If the
COLLATE
attribute is also present, that
collation is used rather than the default collation.
If CHARACTER SET
and
COLLATE
attributes are not present, the
database character set and collation in effect at routine
creation time are used. To avoid having the server use the
database character set and collation, provide explicit
CHARACTER SET
and
COLLATE
attributes for character data
parameters.
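For example, a parameter can carry explicit attributes, as in this sketch (the procedure name is illustrative):
CREATE PROCEDURE show_lower (IN s VARCHAR(64) CHARACTER SET utf8 COLLATE utf8_bin)
  SELECT LOWER(s);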
If you change the database default character set or collation, stored routines that use the database defaults must be dropped and recreated so that they use the new defaults.
The database character set and collation are given by the
value of the
character_set_database
and
collation_database
system
variables. For more information, see
Section 10.1.3.3, “Database Character Set and Collation”.
CREATE SERVER server_name
    FOREIGN DATA WRAPPER wrapper_name
    OPTIONS (option [, option] ...)

option: {
    HOST character-literal
  | DATABASE character-literal
  | USER character-literal
  | PASSWORD character-literal
  | SOCKET character-literal
  | OWNER character-literal
  | PORT numeric-literal
}
This statement creates the definition of a server for use with the
FEDERATED
storage engine. The CREATE
SERVER
statement creates a new row in the
servers
table in the mysql
database. This statement requires the
SUPER
privilege.
The server_name should be a unique reference to the server. Server definitions are global within the scope of the server; it is not possible to qualify the server definition to a specific database. server_name has a maximum length of 64 characters (names longer than 64 characters are silently truncated), and is case insensitive. You may specify the name as a quoted string.
The wrapper_name should be mysql, and may be quoted with single quotation marks. Other values for wrapper_name are not currently supported.
For each option you must specify either a character literal or numeric literal. Character literals are UTF-8, support a maximum length of 64 characters, and default to a blank (empty) string. String literals are silently truncated to 64 characters. Numeric literals must be a number between 0 and 9999; the default value is 0.
The OWNER
option is currently not applied,
and has no effect on the ownership or operation of the server
connection that is created.
The CREATE SERVER
statement creates an entry in
the mysql.servers
table that can later be used
with the CREATE TABLE
statement
when creating a FEDERATED
table. The options
that you specify will be used to populate the columns in the
mysql.servers
table. The table columns are
Server_name
, Host
,
Db
, Username
,
Password
, Port, and
Socket
.
For example:
CREATE SERVER s FOREIGN DATA WRAPPER mysql OPTIONS (USER 'Remote', HOST '192.168.1.106', DATABASE 'test');
Be sure to specify all options necessary to establish a connection to the server. The user name, host name, and database name are mandatory. Other options might be required as well, such as password.
The data stored in the table can be used when creating a
connection to a FEDERATED
table:
CREATE TABLE t (s1 INT) ENGINE=FEDERATED CONNECTION='s';
For more information, see Section 15.8, “The FEDERATED Storage Engine”.
CREATE SERVER
causes an automatic commit.
In MySQL 5.7, CREATE SERVER
is not
written to the binary log, regardless of the logging format that
is in use.
In MySQL 5.7.1, gtid_next
must be
set to AUTOMATIC
before issuing this statement.
This restriction does not apply in MySQL 5.7.2 or later. (Bug
#16062608, Bug #16715809, Bug #69045)
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    (create_definition,...)
    [table_options]
    [partition_options]

CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    [(create_definition,...)]
    [table_options]
    [partition_options]
    [IGNORE | REPLACE]
    [AS] query_expression

CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    { LIKE old_tbl_name | (LIKE old_tbl_name) }

create_definition:
    col_name column_definition
  | [CONSTRAINT [symbol]] PRIMARY KEY [index_type] (index_col_name,...)
      [index_option] ...
  | {INDEX|KEY} [index_name] [index_type] (index_col_name,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] UNIQUE [INDEX|KEY]
      [index_name] [index_type] (index_col_name,...)
      [index_option] ...
  | {FULLTEXT|SPATIAL} [INDEX|KEY] [index_name] (index_col_name,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] FOREIGN KEY
      [index_name] (index_col_name,...) reference_definition
  | CHECK (expr)

column_definition:
    data_type [NOT NULL | NULL] [DEFAULT default_value]
      [AUTO_INCREMENT] [UNIQUE [KEY] | [PRIMARY] KEY]
      [COMMENT 'string']
      [COLUMN_FORMAT {FIXED|DYNAMIC|DEFAULT}]
      [STORAGE {DISK|MEMORY|DEFAULT}]
      [reference_definition]
  | data_type [GENERATED ALWAYS] AS (expression)
      [VIRTUAL | STORED] [UNIQUE [KEY]] [COMMENT comment]
      [NOT NULL | NULL] [[PRIMARY] KEY]

data_type:
    BIT[(length)]
  | TINYINT[(length)] [UNSIGNED] [ZEROFILL]
  | SMALLINT[(length)] [UNSIGNED] [ZEROFILL]
  | MEDIUMINT[(length)] [UNSIGNED] [ZEROFILL]
  | INT[(length)] [UNSIGNED] [ZEROFILL]
  | INTEGER[(length)] [UNSIGNED] [ZEROFILL]
  | BIGINT[(length)] [UNSIGNED] [ZEROFILL]
  | REAL[(length,decimals)] [UNSIGNED] [ZEROFILL]
  | DOUBLE[(length,decimals)] [UNSIGNED] [ZEROFILL]
  | FLOAT[(length,decimals)] [UNSIGNED] [ZEROFILL]
  | DECIMAL[(length[,decimals])] [UNSIGNED] [ZEROFILL]
  | NUMERIC[(length[,decimals])] [UNSIGNED] [ZEROFILL]
  | DATE
  | TIME[(fsp)]
  | TIMESTAMP[(fsp)]
  | DATETIME[(fsp)]
  | YEAR
  | CHAR[(length)] [BINARY]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | VARCHAR(length) [BINARY]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | BINARY[(length)]
  | VARBINARY(length)
  | TINYBLOB
  | BLOB
  | MEDIUMBLOB
  | LONGBLOB
  | TINYTEXT [BINARY]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | TEXT [BINARY]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | MEDIUMTEXT [BINARY]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | LONGTEXT [BINARY]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | ENUM(value1,value2,value3,...)
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | SET(value1,value2,value3,...)
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | JSON
  | spatial_type

index_col_name:
    col_name [(length)] [ASC | DESC]

index_type:
    USING {BTREE | HASH}

index_option:
    KEY_BLOCK_SIZE [=] value
  | index_type
  | WITH PARSER parser_name
  | COMMENT 'string'

reference_definition:
    REFERENCES tbl_name (index_col_name,...)
      [MATCH FULL | MATCH PARTIAL | MATCH SIMPLE]
      [ON DELETE reference_option]
      [ON UPDATE reference_option]

reference_option:
    RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT

table_options:
    table_option [[,] table_option] ...

table_option:
    ENGINE [=] engine_name
  | AUTO_INCREMENT [=] value
  | AVG_ROW_LENGTH [=] value
  | [DEFAULT] CHARACTER SET [=] charset_name
  | CHECKSUM [=] {0 | 1}
  | [DEFAULT] COLLATE [=] collation_name
  | COMMENT [=] 'string'
  | COMPRESSION [=] {'ZLIB'|'LZ4'|'NONE'}
  | CONNECTION [=] 'connect_string'
  | DATA DIRECTORY [=] 'absolute path to directory'
  | DELAY_KEY_WRITE [=] {0 | 1}
  | ENCRYPTION [=] {'Y' | 'N'}
  | INDEX DIRECTORY [=] 'absolute path to directory'
  | INSERT_METHOD [=] { NO | FIRST | LAST }
  | KEY_BLOCK_SIZE [=] value
  | MAX_ROWS [=] value
  | MIN_ROWS [=] value
  | PACK_KEYS [=] {0 | 1 | DEFAULT}
  | PASSWORD [=] 'string'
  | ROW_FORMAT [=] {DEFAULT|DYNAMIC|FIXED|COMPRESSED|REDUNDANT|COMPACT}
  | STATS_AUTO_RECALC [=] {DEFAULT|0|1}
  | STATS_PERSISTENT [=] {DEFAULT|0|1}
  | STATS_SAMPLE_PAGES [=] value
  | TABLESPACE tablespace_name [STORAGE {DISK|MEMORY|DEFAULT}]
  | UNION [=] (tbl_name[,tbl_name]...)

partition_options:
    PARTITION BY
        { [LINEAR] HASH(expr)
        | [LINEAR] KEY [ALGORITHM={1|2}] (column_list)
        | RANGE{(expr) | COLUMNS(column_list)}
        | LIST{(expr) | COLUMNS(column_list)} }
    [PARTITIONS num]
    [SUBPARTITION BY
        { [LINEAR] HASH(expr)
        | [LINEAR] KEY [ALGORITHM={1|2}] (column_list) }
      [SUBPARTITIONS num]
    ]
    [(partition_definition [, partition_definition] ...)]

partition_definition:
    PARTITION partition_name
        [VALUES
            {LESS THAN {(expr | value_list) | MAXVALUE}
            |
            IN (value_list)}]
        [[STORAGE] ENGINE [=] engine_name]
        [COMMENT [=] 'comment_text' ]
        [DATA DIRECTORY [=] 'data_dir']
        [INDEX DIRECTORY [=] 'index_dir']
        [MAX_ROWS [=] max_number_of_rows]
        [MIN_ROWS [=] min_number_of_rows]
        [TABLESPACE [=] tablespace_name]
        [(subpartition_definition [, subpartition_definition] ...)]

subpartition_definition:
    SUBPARTITION logical_name
        [[STORAGE] ENGINE [=] engine_name]
        [COMMENT [=] 'comment_text' ]
        [DATA DIRECTORY [=] 'data_dir']
        [INDEX DIRECTORY [=] 'index_dir']
        [MAX_ROWS [=] max_number_of_rows]
        [MIN_ROWS [=] min_number_of_rows]
        [TABLESPACE [=] tablespace_name]

query_expression:
    SELECT ...   (Some valid select or union statement)
CREATE TABLE
creates a table with
the given name. You must have the
CREATE
privilege for the table.
By default, tables are created in the default database, using the
InnoDB
storage engine. An error
occurs if the table exists, if there is no default database, or if
the database does not exist.
For information about the physical representation of a table, see Section 13.1.18.2, “Files Created by CREATE TABLE”.
The original CREATE TABLE
statement, including all specifications and table options, is
stored by MySQL when the table is created. For more information,
see Section 13.1.18.1, “CREATE TABLE Statement Retention”.
There are several aspects to the CREATE
TABLE
statement, described under the following topics in
this section:
tbl_name
The table name can be specified as
db_name.tbl_name
to create the
table in a specific database. This works regardless of whether
there is a default database, assuming that the database
exists. If you use quoted identifiers, quote the database and
table names separately. For example, write
`mydb`.`mytbl`
, not
`mydb.mytbl`
.
Rules for permissible table names are given in Section 9.2, “Schema Object Names”.
IF NOT EXISTS
Prevents an error from occurring if the table exists. However,
there is no verification that the existing table has a
structure identical to that indicated by the
CREATE TABLE
statement.
You can use the TEMPORARY
keyword when creating
a table. A TEMPORARY
table is visible only to
the current session, and is dropped automatically when the session
is closed. For more information, see
Section 13.1.18.3, “CREATE TEMPORARY TABLE Syntax”.
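A minimal sketch (the table name is illustrative):
CREATE TEMPORARY TABLE tmp_results (id INT, score DECIMAL(5,2));
-- visible only to the current session; dropped automatically when the session closes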
LIKE
Use CREATE TABLE ... LIKE
to create an
empty table based on the definition of another table,
including any column attributes and indexes defined in the
original table:
CREATE TABLE new_tbl LIKE orig_tbl;
For more information, see Section 13.1.18.4, “CREATE TABLE ... LIKE Syntax”.
[AS]
query_expression
To create one table from another, add a
SELECT
statement at the end of
the CREATE TABLE
statement:
CREATE TABLE new_tbl AS SELECT * FROM orig_tbl;
For more information, see Section 13.1.18.5, “CREATE TABLE ... SELECT Syntax”.
IGNORE|REPLACE
The IGNORE
and REPLACE
options indicate how to handle rows that duplicate unique key
values when copying a table using a
SELECT
statement.
For more information, see Section 13.1.18.5, “CREATE TABLE ... SELECT Syntax”.
There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table and depends on the factors discussed in Section C.10.4, “Limits on Table Column Count and Row Size”.
data_type
data_type
represents the data type
in a column definition.
spatial_type
represents a spatial
data type. The data type syntax shown is representative only.
For a full description of the syntax available for specifying
column data types, as well as information about the properties
of each type, see Chapter 11, Data Types, and
Section 11.5, “Extensions for Spatial Data”. Beginning with MySQL
5.7.8, a JSON
data type is also
supported for table columns; see Section 11.6, “The JSON Data Type”, for
more information.
Some attributes do not apply to all data types.
AUTO_INCREMENT
applies only to integer
and floating-point types. DEFAULT
does
not apply to the BLOB
,
TEXT
,
GEOMETRY
, and
JSON
types.
Character data types (CHAR
,
VARCHAR
,
TEXT
) can include
CHARACTER SET
and
COLLATE
attributes to specify the
character set and collation for the column. For details,
see Section 10.1, “Character Set Support”. CHARSET
is a synonym for CHARACTER SET
.
Example:
CREATE TABLE t (c CHAR(20) CHARACTER SET utf8 COLLATE utf8_bin);
MySQL 5.7 interprets length specifications in
character column definitions in characters. Lengths for
BINARY
and
VARBINARY
are in bytes.
For CHAR
,
VARCHAR
,
BINARY
, and
VARBINARY
columns, indexes
can be created that use only the leading part of column
values, using col_name(length) syntax to specify an index prefix length.
BLOB and TEXT
columns also can be
indexed, but a prefix length must be
given. Prefix lengths are given in characters for
nonbinary string types and in bytes for binary string
types. That is, index entries consist of the first
length
characters of each
column value for CHAR
,
VARCHAR
, and
TEXT
columns, and the first
length
bytes of each column
value for BINARY
,
VARBINARY
, and
BLOB
columns. Indexing only
a prefix of column values like this can make the index
file much smaller. For additional information about index
prefixes, see Section 13.1.14, “CREATE INDEX Syntax”.
Only the InnoDB
and
MyISAM
storage engines support indexing
on BLOB
and
TEXT
columns. For example:
CREATE TABLE test (blob_col BLOB, INDEX(blob_col(10)));
JSON
columns cannot be
indexed. You can work around this restriction by creating
an index on a generated column that extracts a scalar
value from the JSON
column. See
Indexing a Generated Column to Provide a JSON Column Index, for a
detailed example.
NOT NULL | NULL
If neither NULL
nor NOT
NULL
is specified, the column is treated as though
NULL
had been specified.
In MySQL 5.7, only the InnoDB
,
MyISAM
, and MEMORY
storage engines support indexes on columns that can have
NULL
values. In other cases, you must
declare indexed columns as NOT NULL
or an
error results.
DEFAULT
Specifies a default value for a column. With one exception,
the default value must be a constant; it cannot be a function
or an expression. This means, for example, that you cannot set
the default for a date column to be the value of a function
such as NOW()
or
CURRENT_DATE
. The exception is
that you can specify
CURRENT_TIMESTAMP
as the
default for a TIMESTAMP
or
DATETIME
column. See
Section 11.3.5, “Automatic Initialization and Updating for TIMESTAMP and DATETIME”.
If a column definition includes no explicit
DEFAULT
value, MySQL determines the default
value as described in Section 11.7, “Data Type Default Values”.
BLOB
,
TEXT
, and
JSON
columns cannot be assigned
a default value.
If the NO_ZERO_DATE
or
NO_ZERO_IN_DATE
SQL mode is
enabled and a date-valued default is not correct according to
that mode, CREATE TABLE
produces a warning if strict SQL mode is not enabled and an
error if strict mode is enabled. For example, with
NO_ZERO_IN_DATE
enabled,
c1 DATE DEFAULT '2010-00-00'
produces a
warning.
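For example, the following sketch (the table and columns are illustrative) combines a constant default with the permitted CURRENT_TIMESTAMP exception:
CREATE TABLE events (
  status     VARCHAR(10) DEFAULT 'new',              -- constant default
  created_at TIMESTAMP   DEFAULT CURRENT_TIMESTAMP   -- the permitted exception
);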
AUTO_INCREMENT
An integer or floating-point column can have the additional
attribute AUTO_INCREMENT
. When you insert a
value of NULL
(recommended) or
0
into an indexed
AUTO_INCREMENT
column, the column is set to
the next sequence value. Typically this is value+1, where value is the largest value for the
column currently in the table.
AUTO_INCREMENT
sequences begin with
1
.
To retrieve an AUTO_INCREMENT
value after
inserting a row, use the
LAST_INSERT_ID()
SQL function
or the mysql_insert_id()
C API
function. See Section 12.14, “Information Functions”, and
Section 27.8.7.38, “mysql_insert_id()”.
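For example, the following sketch (table and column names are illustrative) inserts rows and then retrieves the generated value:
CREATE TABLE animals (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name CHAR(30) NOT NULL
);
INSERT INTO animals (name) VALUES ('dog'), ('cat');
SELECT LAST_INSERT_ID();  -- value generated for the first row of the last INSERT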
If the NO_AUTO_VALUE_ON_ZERO
SQL mode is enabled, you can store 0
in
AUTO_INCREMENT
columns as
0
without generating a new sequence value.
See Section 5.1.8, “Server SQL Modes”.
There can be only one AUTO_INCREMENT
column
per table, it must be indexed, and it cannot have a
DEFAULT
value. An
AUTO_INCREMENT
column works properly only
if it contains only positive values. Inserting a negative
number is regarded as inserting a very large positive number.
This is done to avoid precision problems when numbers
“wrap” over from positive to negative and also to
ensure that you do not accidentally get an
AUTO_INCREMENT
column that contains
0
.
For MyISAM
tables, you can specify an
AUTO_INCREMENT
secondary column in a
multiple-column key. See
Section 3.6.9, “Using AUTO_INCREMENT”.
To make MySQL compatible with some ODBC applications, you can
find the AUTO_INCREMENT
value for the last
inserted row with the following query:
SELECT * FROM tbl_name WHERE auto_col IS NULL
This method requires that the sql_auto_is_null variable is
not set to 0. See Section 5.1.5, “Server System Variables”.
For information about InnoDB
and
AUTO_INCREMENT
, see
Section 14.8.1.5, “AUTO_INCREMENT Handling in InnoDB”. For
information about AUTO_INCREMENT
and MySQL
Replication, see
Section 16.4.1.1, “Replication and AUTO_INCREMENT”.
COMMENT
A comment for a column can be specified with the
COMMENT
option, up to 1024 characters long.
The comment is displayed by the SHOW
CREATE TABLE
and
SHOW FULL
COLUMNS
statements.
COLUMN_FORMAT
In NDB Cluster, it is also possible to specify a data storage
format for individual columns of
NDB
tables using
COLUMN_FORMAT
. Permissible column formats
are FIXED
, DYNAMIC
, and
DEFAULT
. FIXED
is used
to specify fixed-width storage, DYNAMIC
permits the column to be variable-width, and
DEFAULT
causes the column to use
fixed-width or variable-width storage as determined by the
column's data type (possibly overridden by a
ROW_FORMAT
specifier).
Beginning with MySQL NDB Cluster 7.5.4, for
NDB
tables, the default value for
COLUMN_FORMAT
is FIXED
.
(The default had been switched to DYNAMIC
in MySQL NDB Cluster 7.5.1, but this change was reverted to
maintain backwards compatibility with existing GA release
series.) (Bug #24487363)
COLUMN_FORMAT
currently has no effect on
columns of tables using storage engines other than
NDB
. In MySQL 5.7
and later, COLUMN_FORMAT
is silently
ignored.
STORAGE
For NDB
tables, it is possible to
specify whether the column is stored on disk or in memory by
using a STORAGE
clause. STORAGE
DISK
causes the column to be stored on disk, and
STORAGE MEMORY
causes in-memory storage to
be used. The CREATE TABLE
statement used must still include a
TABLESPACE
clause:
mysql> CREATE TABLE t1 (
    ->     c1 INT STORAGE DISK,
    ->     c2 INT STORAGE MEMORY
    -> ) ENGINE NDB;
ERROR 1005 (HY000): Can't create table 'c.t1' (errno: 140)

mysql> CREATE TABLE t1 (
    ->     c1 INT STORAGE DISK,
    ->     c2 INT STORAGE MEMORY
    -> ) TABLESPACE ts_1 ENGINE NDB;
Query OK, 0 rows affected (1.06 sec)
For NDB
tables, STORAGE
DEFAULT
is equivalent to STORAGE
MEMORY
.
The STORAGE
clause has no effect on tables
using storage engines other than
NDB
. The
STORAGE
keyword is supported only in the
build of mysqld that is supplied with NDB
Cluster; it is not recognized in any other version of MySQL,
where any attempt to use the STORAGE
keyword causes a syntax error.
GENERATED ALWAYS
Used to specify a generated column expression. For information about generated columns, see Section 13.1.18.8, “CREATE TABLE and Generated Columns”.
Stored generated
columns can be indexed. InnoDB
supports secondary indexes on
virtual
generated columns. See
Section 13.1.18.9, “Secondary Indexes and Generated Columns”.
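For example, a stored generated column can be indexed as in this sketch (the table is illustrative):
CREATE TABLE orders (
  price    DECIMAL(10,2),
  quantity INT,
  total    DECIMAL(12,2) GENERATED ALWAYS AS (price * quantity) STORED,
  INDEX idx_total (total)  -- stored generated columns can be indexed
);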
CONSTRAINT
symbol
If the CONSTRAINT symbol clause is given, the symbol value, if used, must be
unique in the database. A duplicate
symbol
results in an error. If the
clause is not given, or a symbol
is
not included following the CONSTRAINT
keyword, a name for the constraint is created automatically.
PRIMARY KEY
A unique index where all key columns must be defined as
NOT NULL
. If they are not explicitly
declared as NOT NULL
, MySQL declares them
so implicitly (and silently). A table can have only one
PRIMARY KEY
. The name of a PRIMARY
KEY
is always PRIMARY
, which thus
cannot be used as the name for any other kind of index.
If you do not have a PRIMARY KEY
and an
application asks for the PRIMARY KEY
in
your tables, MySQL returns the first UNIQUE
index that has no NULL
columns as the
PRIMARY KEY
.
In InnoDB
tables, keep the PRIMARY
KEY
short to minimize storage overhead for secondary
indexes. Each secondary index entry contains a copy of the
primary key columns for the corresponding row. (See
Section 14.8.2.1, “Clustered and Secondary Indexes”.)
In the created table, a PRIMARY KEY
is
placed first, followed by all UNIQUE
indexes, and then the nonunique indexes. This helps the MySQL
optimizer to prioritize which index to use and also more
quickly to detect duplicated UNIQUE
keys.
A PRIMARY KEY
can be a multiple-column
index. However, you cannot create a multiple-column index
using the PRIMARY KEY
key attribute in a
column specification. Doing so only marks that single column
as primary. You must use a separate PRIMARY KEY(index_col_name, ...) clause.
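For example, a composite primary key might be declared as in this sketch (the table is illustrative):
CREATE TABLE order_items (
  order_id INT NOT NULL,
  line_no  INT NOT NULL,
  qty      INT,
  PRIMARY KEY (order_id, line_no)
);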
If a PRIMARY KEY
consists of only one
column that has an integer type, you can also refer to the
column as _rowid
in
SELECT
statements.
In MySQL, the name of a PRIMARY KEY
is
PRIMARY
. For other indexes, if you do not
assign a name, the index is assigned the same name as the
first indexed column, with an optional suffix
(_2
, _3
,
...
) to make it unique. You can see index
names for a table using SHOW INDEX FROM tbl_name. See Section 13.7.5.22, “SHOW INDEX Syntax”.
KEY | INDEX
KEY
is normally a synonym for
INDEX
. The key attribute PRIMARY
KEY
can also be specified as just
KEY
when given in a column definition. This
was implemented for compatibility with other database systems.
UNIQUE
A UNIQUE
index creates a constraint such
that all values in the index must be distinct. An error occurs
if you try to add a new row with a key value that matches an
existing row. For all engines, a UNIQUE
index permits multiple NULL
values for
columns that can contain NULL
.
If a UNIQUE
index consists of only one
column that has an integer type, you can also refer to the
column as _rowid
in
SELECT
statements.
FULLTEXT
A FULLTEXT
index is a special type of index
used for full-text searches. Only the
InnoDB
and
MyISAM
storage engines support
FULLTEXT
indexes. They can be created only
from CHAR
,
VARCHAR
, and
TEXT
columns. Indexing always
happens over the entire column; column prefix indexing is not
supported and any prefix length is ignored if specified. See
Section 12.9, “Full-Text Search Functions”, for details of operation. A
WITH PARSER
clause can be specified as an
index_option
value to associate a
parser plugin with the index if full-text indexing and
searching operations need special handling. This clause is
valid only for FULLTEXT
indexes. Both
InnoDB
and
MyISAM
support full-text parser
plugins. See Full-Text Parser Plugins and
Section 28.2.4.4, “Writing Full-Text Parser Plugins” for more
information.
SPATIAL
You can create SPATIAL
indexes on spatial
data types. Spatial types are supported only for
MyISAM
and InnoDB
tables, and indexed columns must be declared as NOT
NULL
. See Section 11.5, “Extensions for Spatial Data”.
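A minimal sketch (the table and index names are illustrative):
CREATE TABLE geom (
  g GEOMETRY NOT NULL,
  SPATIAL INDEX idx_g (g)
) ENGINE=InnoDB;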
FOREIGN KEY
MySQL supports foreign keys, which let you cross-reference
related data across tables, and foreign key constraints, which
help keep this spread-out data consistent. For definition and
option information, see
reference_definition
,
and
reference_option
.
Partitioned tables employing the
InnoDB
storage engine do not
support foreign keys. See
Section 22.6, “Restrictions and Limitations on Partitioning”, for more
information.
CHECK
The CHECK
clause is parsed but ignored by
all storage engines. See
Section 1.8.2.3, “Foreign Key Differences”.
index_col_name
An index_col_name
specification
can end with ASC
or
DESC
. These keywords are permitted for
future extensions for specifying ascending or descending
index value storage. Currently, they are parsed but
ignored; index values are always stored in ascending
order.
Prefixes, defined by the length
attribute, can be up to 767 bytes long for
InnoDB
tables or 3072 bytes if the
innodb_large_prefix
option is enabled. For MyISAM tables, the prefix limit is
1000 bytes.
Prefix limits are measured in bytes, whereas the prefix
length in CREATE TABLE
,
ALTER TABLE
, and
CREATE INDEX
statements is
interpreted as number of characters for nonbinary string
types (CHAR
,
VARCHAR
,
TEXT
) and number of bytes
for binary string types
(BINARY
,
VARBINARY
,
BLOB
). Take this into
account when specifying a prefix length for a nonbinary
string column that uses a multibyte character set.
index_type
Some storage engines permit you to specify an index type when
creating an index. The syntax for the index_type specifier is USING type_name.
Example:
CREATE TABLE lookup (id INT, INDEX USING BTREE (id)) ENGINE = MEMORY;
The preferred position for USING
is after
the index column list. It can be given before the column list,
but support for use of the option in that position is
deprecated and will be removed in a future MySQL release.
index_option
index_option
values specify
additional options for an index.
KEY_BLOCK_SIZE
For MyISAM
tables,
KEY_BLOCK_SIZE
optionally specifies the
size in bytes to use for index key blocks. The value is
treated as a hint; a different size could be used if
necessary. A KEY_BLOCK_SIZE
value
specified for an individual index definition overrides the
table-level KEY_BLOCK_SIZE
value.
For information about the table-level
KEY_BLOCK_SIZE
attribute, see
Table Options.
WITH PARSER
The WITH PARSER
option can only be used
with FULLTEXT
indexes. It associates a
parser plugin with the index if full-text indexing and
searching operations need special handling. Both
InnoDB
and
MyISAM
support full-text
parser plugins. If you have a
MyISAM
table with an
associated full-text parser plugin, you can convert the
table to InnoDB
using ALTER
TABLE
.
COMMENT
In MySQL 5.7, index definitions can include an optional comment of up to 1024 characters.
You can set the InnoDB
MERGE_THRESHOLD
value for an individual
index using the
index_option
COMMENT
clause. See
Section 14.6.13, “Configuring the Merge Threshold for Index Pages”.
For more information about permissible
index_option
values, see
Section 13.1.14, “CREATE INDEX Syntax”. For more information about
indexes, see Section 8.3.1, “How MySQL Uses Indexes”.
For reference_definition
syntax
details and examples, see
Section 13.1.18.6, “Using FOREIGN KEY Constraints”. For information
specific to foreign keys in InnoDB
, see
Section 14.8.1.6, “InnoDB and FOREIGN KEY Constraints”.
InnoDB
and
NDB
tables support checking of
foreign key constraints. The columns of the referenced table
must always be explicitly named. Both ON
DELETE
and ON UPDATE
actions on
foreign keys are supported. For more detailed information and
examples, see Section 13.1.18.6, “Using FOREIGN KEY Constraints”. For
information specific to foreign keys in
InnoDB
, see
Section 14.8.1.6, “InnoDB and FOREIGN KEY Constraints”.
For other storage engines, MySQL Server parses and ignores the
FOREIGN KEY
and
REFERENCES
syntax in
CREATE TABLE
statements. See
Section 1.8.2.3, “Foreign Key Differences”.
For users familiar with the ANSI/ISO SQL Standard, please
note that no storage engine, including
InnoDB
, recognizes or enforces the
MATCH
clause used in referential
integrity constraint definitions. Use of an explicit
MATCH
clause will not have the specified
effect, and also causes ON DELETE
and
ON UPDATE
clauses to be ignored. For
these reasons, specifying MATCH
should be
avoided.
The MATCH
clause in the SQL standard
controls how NULL
values in a composite
(multiple-column) foreign key are handled when comparing to
a primary key. InnoDB
essentially
implements the semantics defined by MATCH
SIMPLE
, which permit a foreign key to be all or
partially NULL
. In that case, the (child
table) row containing such a foreign key is permitted to be
inserted, and does not match any row in the referenced
(parent) table. It is possible to implement other semantics
using triggers.
Additionally, MySQL requires that the referenced columns be
indexed for performance. However, InnoDB
does not enforce any requirement that the referenced columns
be declared UNIQUE
or NOT
NULL
. The handling of foreign key references to
nonunique keys or keys that contain NULL
values is not well defined for operations such as
UPDATE
or DELETE
CASCADE
. You are advised to use foreign keys that
reference only keys that are both UNIQUE
(or PRIMARY
) and NOT
NULL
.
MySQL parses but ignores “inline
REFERENCES
specifications” (as
defined in the SQL standard) where the references are
defined as part of the column specification. MySQL accepts
REFERENCES
clauses only when specified as
part of a separate FOREIGN KEY
specification.
For information about the RESTRICT
,
CASCADE
, SET NULL
,
NO ACTION
, and SET
DEFAULT
options, see
Section 13.1.18.6, “Using FOREIGN KEY Constraints”.
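As a brief sketch (the parent and child tables here are illustrative), a typical foreign key definition with a referential action looks like this:
CREATE TABLE parent (
  id INT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE child (
  id        INT,
  parent_id INT,
  INDEX par_ind (parent_id),
  FOREIGN KEY (parent_id) REFERENCES parent(id)
    ON DELETE CASCADE
) ENGINE=InnoDB;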
Table options are used to optimize the behavior of the table. In
most cases, you do not have to specify any of them. These options
apply to all storage engines unless otherwise indicated. Options
that do not apply to a given storage engine may be accepted and
remembered as part of the table definition. Such options then
apply if you later use ALTER TABLE
to convert the table to use a different storage engine.
ENGINE
Specifies the storage engine for the table, using one of the
names shown in the following table. The engine name can be
unquoted or quoted. The quoted name
'DEFAULT'
is recognized but ignored.
Storage Engine | Description |
---|---|
InnoDB | Transaction-safe tables with row locking and foreign keys. The default storage engine for new tables. See Chapter 14, The InnoDB Storage Engine, and in particular Section 14.1, “Introduction to InnoDB” if you have MySQL experience but are new to InnoDB. |
MyISAM | The binary portable storage engine that is primarily used for read-only or read-mostly workloads. See Section 15.2, “The MyISAM Storage Engine”. |
MEMORY | The data for this storage engine is stored only in memory. See Section 15.3, “The MEMORY Storage Engine”. |
CSV | Tables that store rows in comma-separated values format. See Section 15.4, “The CSV Storage Engine”. |
ARCHIVE | The archiving storage engine. See Section 15.5, “The ARCHIVE Storage Engine”. |
EXAMPLE | An example engine. See Section 15.9, “The EXAMPLE Storage Engine”. |
FEDERATED | Storage engine that accesses remote tables. See Section 15.8, “The FEDERATED Storage Engine”. |
HEAP | This is a synonym for MEMORY. |
MERGE | A collection of MyISAM tables used as one table. Also known as MRG_MyISAM. See Section 15.7, “The MERGE Storage Engine”. |
NDB | Clustered, fault-tolerant, memory-based tables, supporting transactions and foreign keys. Also known as NDBCLUSTER. See Chapter 21, MySQL NDB Cluster 7.5 and NDB Cluster 7.6. |
By default, if a storage engine is specified that is not
available, the statement fails with an error. You can override
this behavior by removing
NO_ENGINE_SUBSTITUTION
from
the server SQL mode (see Section 5.1.8, “Server SQL Modes”) so that
MySQL allows substitution of the specified engine with the
default storage engine instead. Normally in such cases, this
is InnoDB
, which is the default value for
the default_storage_engine
system variable. When
NO_ENGINE_SUBSTITUTION
is disabled, a
warning occurs if the storage engine specification is not
honored.
AUTO_INCREMENT
The initial AUTO_INCREMENT
value for the
table. In MySQL 5.7, this works for
MyISAM
, MEMORY
,
InnoDB
, and ARCHIVE
tables. To set the first auto-increment value for engines that
do not support the AUTO_INCREMENT
table
option, insert a “dummy” row with a value one
less than the desired value after creating the table, and then
delete the dummy row.
For engines that support the AUTO_INCREMENT
table option in CREATE TABLE
statements, you can also use ALTER TABLE tbl_name AUTO_INCREMENT = N to reset the AUTO_INCREMENT
value. The value cannot be
set lower than the maximum value currently in the column.
AVG_ROW_LENGTH
An approximation of the average row length for your table. You need to set this only for large tables with variable-size rows.
When you create a MyISAM
table, MySQL uses
the product of the MAX_ROWS
and
AVG_ROW_LENGTH
options to decide how big
the resulting table is. If you don't specify either option,
the maximum size for MyISAM
data and index
files is 256TB by default. (If your operating system does not
support files that large, table sizes are constrained by the
file size limit.) If you want to keep down the pointer sizes
to make the index smaller and faster and you don't really need
big files, you can decrease the default pointer size by
setting the
myisam_data_pointer_size
system variable. (See
Section 5.1.5, “Server System Variables”.) If you want all
your tables to be able to grow above the default limit and are
willing to have your tables slightly slower and larger than
necessary, you can increase the default pointer size by
setting this variable. Setting the value to 7 permits table
sizes up to 65,536TB.
[DEFAULT] CHARACTER SET
Specifies a default character set for the table.
CHARSET
is a synonym for CHARACTER
SET
. If the character set name is
DEFAULT
, the database character set is
used.
CHECKSUM
Set this to 1 if you want MySQL to maintain a live checksum
for all rows (that is, a checksum that MySQL updates
automatically as the table changes). This makes the table a
little slower to update, but also makes it easier to find
corrupted tables. The CHECKSUM
TABLE
statement reports the checksum.
(MyISAM
only.)
[DEFAULT] COLLATE
Specifies a default collation for the table.
COMMENT
A comment for the table, up to 2048 characters long.
You can set the InnoDB
MERGE_THRESHOLD
value for a table using the
table_option
COMMENT
clause. See
Section 14.6.13, “Configuring the Merge Threshold for Index Pages”.
Setting NDB_TABLE options.
In MySQL NDB Cluster 7.5.2 and later, the table comment in a
CREATE TABLE
or
ALTER TABLE
statement can
also be used to specify one to four of the
NDB_TABLE
options
NOLOGGING
,
READ_BACKUP
,
PARTITION_BALANCE
, or
FULLY_REPLICATED
as a set of name-value
pairs, separated by commas if need be, immediately following
the string NDB_TABLE=
that begins the
quoted comment text. An example statement using this syntax
is shown here (emphasized text):
CREATE TABLE t1 (
c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
c2 VARCHAR(100),
c3 VARCHAR(100) )
ENGINE=NDB
COMMENT="NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE";
Spaces are not permitted within the quoted string. The string is case-insensitive.
The comment is displayed as part of the output of
SHOW CREATE TABLE
. The text of
the comment is also available as the TABLE_COMMENT column of
the MySQL Information Schema
TABLES
table.
This comment syntax is also supported with
ALTER TABLE
statements for
NDB
tables. Keep in mind that a table
comment used with ALTER TABLE
replaces any
existing comment that the table might have had previously.
Setting the MERGE_THRESHOLD
option in table
comments is not supported for NDB
tables (it is ignored).
For complete syntax information and examples, see Section 13.1.18.10, “Setting NDB_TABLE Options in Table Comments”.
COMPRESSION
The compression algorithm used for page level compression for
InnoDB
tables. Supported values include
Zlib
, LZ4
, and
None
. The COMPRESSION
attribute was introduced with the transparent page compression
feature. Page compression is only supported with
InnoDB
tables that reside in
file-per-table
tablespaces, and is only available on Linux and Windows
platforms that support sparse files and hole punching. For
more information, see
Section 14.9.2, “InnoDB Page Compression”.
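For example (the table name is illustrative):
CREATE TABLE t1 (c1 INT) COMPRESSION='ZLIB';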
CONNECTION
The connection string for a FEDERATED
table.
Older versions of MySQL used a COMMENT
option for the connection string.
DATA DIRECTORY
, INDEX
DIRECTORY
For InnoDB
, the DATA DIRECTORY='directory' option allows you to create InnoDB
file-per-table tablespaces outside the MySQL data directory.
Within the directory that you specify, MySQL creates a
subdirectory corresponding to the database name, and within
that a .ibd
file for the table. The
innodb_file_per_table
configuration option must be enabled to use the DATA
DIRECTORY
option with InnoDB
. The
full directory path must be specified. See
Section 14.7.5, “Creating File-Per-Table Tablespaces Outside the Data Directory” for more information.
When creating MyISAM
tables, you can use
the DATA DIRECTORY='directory' clause, the INDEX DIRECTORY='directory' clause, or both. They specify where to put a MyISAM
table's data file and index file,
respectively. Unlike InnoDB
tables, MySQL
does not create subdirectories that correspond to the database
name when creating a MyISAM
table with a
DATA DIRECTORY
or INDEX
DIRECTORY
option. Files are created in the directory
that is specified.
As of MySQL 5.7.17, you must have the
FILE
privilege to use the
DATA DIRECTORY
or INDEX
DIRECTORY
table option.
Table-level DATA DIRECTORY
and
INDEX DIRECTORY
options are ignored for
partitioned tables. (Bug #32091)
These options work only when you are not using the
--skip-symbolic-links
option. Your operating system must also have a working,
thread-safe realpath()
call. See
Section 8.12.3.2, “Using Symbolic Links for MyISAM Tables on Unix”, for more complete
information.
If a MyISAM
table is created with no
DATA DIRECTORY
option, the
.MYD
file is created in the database
directory. By default, if MyISAM
finds an
existing .MYD
file in this case, it
overwrites it. The same applies to .MYI
files for tables created with no INDEX
DIRECTORY
option. To suppress this behavior, start
the server with the
--keep_files_on_create
option,
in which case MyISAM
will not overwrite
existing files and returns an error instead.
If a MyISAM
table is created with a
DATA DIRECTORY
or INDEX
DIRECTORY
option and an existing
.MYD
or .MYI
file is
found, MyISAM always returns an error. It will not overwrite a
file in the specified directory.
You cannot use path names that contain the MySQL data
directory with DATA DIRECTORY
or
INDEX DIRECTORY
. This includes
partitioned tables and individual table partitions. (See Bug
#32167.)
DELAY_KEY_WRITE
Set this to 1 if you want to delay key updates for the table
until the table is closed. See the description of the
delay_key_write
system
variable in Section 5.1.5, “Server System Variables”.
(MyISAM
only.)
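As an illustration (a minimal sketch; the table and key names are arbitrary), the option is given as an ordinary table option:
CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM DELAY_KEY_WRITE=1;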
ENCRYPTION
Set the ENCRYPTION
option to
'Y'
to enable page-level data encryption
for an InnoDB
table created in a
file-per-table
tablespace. Option values are not case sensitive. The
ENCRYPTION
option was introduced with the
InnoDB
tablespace encryption feature; see
Section 14.7.10, “InnoDB Tablespace Encryption”. The
keyring_file
plugin must be loaded to use
the ENCRYPTION
option.
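For example, assuming the keyring_file plugin is loaded and the table is created in a file-per-table tablespace, the option is specified like this:
CREATE TABLE t1 (c1 INT) ENCRYPTION='Y';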
INSERT_METHOD
If you want to insert data into a MERGE
table, you must specify with INSERT_METHOD
the table into which the row should be inserted.
INSERT_METHOD
is an option useful for
MERGE
tables only. Use a value of
FIRST
or LAST
to have
inserts go to the first or last table, or a value of
NO
to prevent inserts. See
Section 15.7, “The MERGE Storage Engine”.
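A minimal sketch: the MERGE table below maps two identical MyISAM tables (the names t1 and t2 are arbitrary; UNION names the underlying tables) and routes inserts to the last one:
CREATE TABLE t1 (a INT NOT NULL) ENGINE=MyISAM;
CREATE TABLE t2 (a INT NOT NULL) ENGINE=MyISAM;
CREATE TABLE total (a INT NOT NULL)
    ENGINE=MERGE UNION=(t1,t2) INSERT_METHOD=LAST;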
KEY_BLOCK_SIZE
For MyISAM
tables,
KEY_BLOCK_SIZE
optionally specifies the
size in bytes to use for index key blocks. The value is
treated as a hint; a different size could be used if
necessary. A KEY_BLOCK_SIZE
value specified
for an individual index definition overrides the table-level
KEY_BLOCK_SIZE
value.
For InnoDB
tables,
KEY_BLOCK_SIZE
optionally specifies the
page size (in kilobytes) to
use for compressed
InnoDB
tables. The
KEY_BLOCK_SIZE
value is treated as a hint;
a different size could be used by InnoDB
if
necessary. KEY_BLOCK_SIZE
can only be less
than or equal to the
innodb_page_size
value. A
value of 0 represents the default compressed page size, which
is half of the
innodb_page_size
value.
Depending on
innodb_page_size
, possible
KEY_BLOCK_SIZE
values include 0, 1, 2, 4,
8, and 16. See Section 14.9.1, “InnoDB Table Compression” for
more information.
Oracle recommends enabling
innodb_strict_mode
when
specifying KEY_BLOCK_SIZE
for
InnoDB
tables. When
innodb_strict_mode
is
enabled, specifying an invalid
KEY_BLOCK_SIZE
value returns an error. If
innodb_strict_mode
is
disabled, an invalid KEY_BLOCK_SIZE
value
results in a warning, and the
KEY_BLOCK_SIZE
option is ignored.
InnoDB
only supports
KEY_BLOCK_SIZE
at the table level.
KEY_BLOCK_SIZE
is not supported with 32k
and 64k innodb_page_size
values. InnoDB
table compression does not
support these page sizes.
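For example, with the default innodb_page_size of 16KB and innodb_file_per_table enabled, the following sketch creates a compressed table using 8KB compressed pages:
CREATE TABLE t1 (c1 INT PRIMARY KEY)
    ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;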
MAX_ROWS
The maximum number of rows you plan to store in the table. This is not a hard limit, but rather a hint to the storage engine that the table must be able to store at least this many rows.
The NDB
storage engine treats
this value as a maximum. If you plan to create very large NDB
Cluster tables (containing millions of rows), you should use
this option to ensure that NDB allocates a sufficient number of index slots in the hash table used for storing hashes of the table's primary keys by setting MAX_ROWS = 2 * rows, where rows is the number of rows that you expect to insert into the table.
The maximum MAX_ROWS
value is 4294967295;
larger values are truncated to this limit.
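For example, if you expect to insert roughly 100 million rows into an NDB table, you might declare it as follows (the figures and names are illustrative only):
CREATE TABLE t1 (
    id BIGINT NOT NULL PRIMARY KEY,
    c1 VARCHAR(100)
) ENGINE=NDB MAX_ROWS=200000000;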
MIN_ROWS
The minimum number of rows you plan to store in the table. The
MEMORY
storage engine uses this
option as a hint about memory use.
PACK_KEYS
Takes effect only with MyISAM
tables. Set
this option to 1 if you want to have smaller indexes. This
usually makes updates slower and reads faster. Setting the
option to 0 disables all packing of keys. Setting it to
DEFAULT
tells the storage engine to pack
only long CHAR
,
VARCHAR
,
BINARY
, or
VARBINARY
columns.
If you do not use PACK_KEYS
, the default is
to pack strings, but not numbers. If you use
PACK_KEYS=1
, numbers are packed as well.
When packing binary number keys, MySQL uses prefix compression:
Every key needs one extra byte to indicate how many bytes of the previous key are the same for the next key.
The pointer to the row is stored in high-byte-first order directly after the key, to improve compression.
This means that if you have many equal keys on two consecutive
rows, all following “same” keys usually only take
two bytes (including the pointer to the row). Compare this to
the ordinary case where the following keys take
storage_size_for_key + pointer_size
(where
the pointer size is usually 4). Conversely, you get a
significant benefit from prefix compression only if you have
many numbers that are the same. If all keys are totally
different, you use one byte more per key, if the key is not a
key that can have NULL
values. (In this
case, the packed key length is stored in the same byte that is
used to mark if a key is NULL
.)
PASSWORD
This option is unused. If you have a need to scramble your
.frm
files and make them unusable to any
other MySQL server, please contact our sales department.
ROW_FORMAT
Defines the physical format in which the rows are stored.
When executing a CREATE TABLE
statement, if you specify a row format that is not supported
by the storage engine that is used for the table, the table is
created using that storage engine's default row format.
The information reported in the Row_format column in response to
SHOW TABLE STATUS
is the actual
row format used. This may differ from the value in the
Create_options
column because the original
CREATE TABLE
definition is
retained during creation.
Row format choices differ depending on the storage engine used for the table.
For InnoDB
tables:
The default row format is defined by
innodb_default_row_format
,
which has a default setting of DYNAMIC
.
The default row format is used when the
ROW_FORMAT
option is not defined or
when ROW_FORMAT=DEFAULT
is used.
If the ROW_FORMAT
option is not
defined, or if ROW_FORMAT=DEFAULT
is
used, operations that rebuild a table also silently change
the row format of the table to the default defined by
innodb_default_row_format
.
For more information, see
Section 14.11.2, “Specifying the Row Format for a Table”.
For more efficient InnoDB
storage of
data types, especially BLOB
types, use the DYNAMIC row format. See
Section 14.11.3, “DYNAMIC and COMPRESSED Row Formats” for
requirements associated with the
DYNAMIC
row format.
To enable compression for InnoDB
tables, specify ROW_FORMAT=COMPRESSED
.
See Section 14.9, “InnoDB Table and Page Compression” for requirements
associated with the COMPRESSED
row
format.
The row format used in older versions of MySQL can still
be requested by specifying the
REDUNDANT
row format.
When you specify a non-default
ROW_FORMAT
clause, consider also
enabling the
innodb_strict_mode
configuration option.
ROW_FORMAT=FIXED
is not supported. If
ROW_FORMAT=FIXED
is specified while
innodb_strict_mode
is
disabled, InnoDB
issues a warning and
assumes ROW_FORMAT=DYNAMIC
. If
ROW_FORMAT=FIXED
is specified while
innodb_strict_mode
is
enabled, which is the default, InnoDB
returns an error.
For additional information about InnoDB
row formats, see Section 14.11, “InnoDB Row Storage and Row Formats”.
For MyISAM
tables, the option value can be
FIXED
or DYNAMIC
for
static or variable-length row format.
myisampack sets the type to
COMPRESSED
. See
Section 15.2.3, “MyISAM Table Storage Formats”.
For NDB
tables, the default
ROW_FORMAT
in MySQL NDB Cluster 7.5.1 and
later is DYNAMIC
. (Previously, it was
FIXED
.)
STATS_AUTO_RECALC
Specifies whether to automatically recalculate
persistent
statistics for an InnoDB
table. The
value DEFAULT
causes the persistent
statistics setting for the table to be determined by the
innodb_stats_auto_recalc
configuration option. The value 1
causes
statistics to be recalculated when 10% of the data in the
table has changed. The value 0
prevents
automatic recalculation for this table; with this setting,
issue an ANALYZE TABLE
statement to recalculate the statistics after making
substantial changes to the table. For more information about
the persistent statistics feature, see
Section 14.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”.
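For example, the following sketch disables automatic recalculation for one table regardless of the innodb_stats_auto_recalc setting; after substantial changes you would then recalculate statistics manually:
CREATE TABLE t1 (id INT PRIMARY KEY) STATS_AUTO_RECALC=0;
ANALYZE TABLE t1;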
STATS_PERSISTENT
Specifies whether to enable
persistent
statistics for an InnoDB
table. The
value DEFAULT
causes the persistent
statistics setting for the table to be determined by the
innodb_stats_persistent
configuration option. The value 1
enables
persistent statistics for the table, while the value
0
turns off this feature. After enabling
persistent statistics through a CREATE
TABLE
or ALTER TABLE
statement,
issue an ANALYZE TABLE
statement to calculate the statistics, after loading
representative data into the table. For more information about
the persistent statistics feature, see
Section 14.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”.
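For example, to enable persistent statistics for a single table and then populate them after loading representative data, you might issue statements along these lines:
CREATE TABLE t1 (id INT PRIMARY KEY) STATS_PERSISTENT=1;
ANALYZE TABLE t1;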
STATS_SAMPLE_PAGES
The number of index pages to sample when estimating
cardinality and other statistics for an indexed column, such
as those calculated by ANALYZE
TABLE
. For more information, see
Section 14.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”.
TABLESPACE
The TABLESPACE
option is used to create a
table in an InnoDB
general tablespace.
CREATE TABLE tbl_name
... TABLESPACE [=] tablespace_name
The general tablespace that you specify must exist prior to
using the TABLESPACE
option. For
information about general tablespaces, see
Section 14.7.9, “InnoDB General Tablespaces”.
The tablespace_name is a case-sensitive identifier. It may be quoted or unquoted. The forward slash character (“/”) is not permitted. Names beginning with “innodb_” are reserved for special use.
The TABLESPACE
option may be used to assign
InnoDB
table partitions or subpartitions to
a general
tablespace, a separate file-per-table tablespace, or
the system tablespace. TABLESPACE
option
support for table partitions and subpartitions was added in
MySQL 5.7. All partitions must belong to the same
storage engine.
A tablespace specified at the table level becomes the default
tablespace for new partitions and subpartitions. The default
tablespace may be overridden by specifying a tablespace at the
partition or subpartition level in a
CREATE TABLE
or
ALTER TABLE
statement. The
following example shows tablespaces defined at the table level
and partition level.
mysql> CREATE TABLE t1 ( a INT NOT NULL, PRIMARY KEY (a))
    -> ENGINE=InnoDB TABLESPACE ts1
    -> PARTITION BY RANGE (a) PARTITIONS 3 (
    ->   PARTITION P1 VALUES LESS THAN (2),
    ->   PARTITION P2 VALUES LESS THAN (4) TABLESPACE ts2,
    ->   PARTITION P3 VALUES LESS THAN (6) TABLESPACE ts3);
For more information about the TABLESPACE
option and partitioning, see
Section 14.7.9, “InnoDB General Tablespaces”
To create a table in the system tablespace, specify
innodb_system
as the tablespace name.
CREATE TABLE tbl_name
... TABLESPACE [=] innodb_system
Using the TABLESPACE [=] innodb_system
option, you can place a table of any uncompressed row format
in the system tablespace regardless of the
innodb_file_per_table
setting. For example, you can add a table with
ROW_FORMAT=DYNAMIC
to the system tablespace
using the TABLESPACE [=] innodb_system
option.
To create a table in a file-per-table tablespace, specify
innodb_file_per_table
as the tablespace
name.
CREATE TABLE tbl_name
... TABLESPACE [=] innodb_file_per_table
If innodb_file_per_table
is
enabled, you need not specify
TABLESPACE=innodb_file_per_table
to
create an InnoDB
file-per-table
tablespace. InnoDB
tables are created in
file-per-table tablespaces by default when
innodb_file_per_table
is
enabled.
The DATA DIRECTORY
clause is permitted with
CREATE TABLE ...
TABLESPACE=innodb_file_per_table
but is otherwise
not supported for use in combination with the
TABLESPACE
option.
The TABLESPACE
option is supported with
ALTER TABLE
and
ALTER TABLE ...
REORGANIZE PARTITION
statements, which can be used
to move tables and partitions from one tablespace to another,
respectively. For more information, see
Section 14.7.9, “InnoDB General Tablespaces”.
The STORAGE
table option is employed only
with NDB
tables.
STORAGE
determines the type of storage used
(disk or memory), and can be one of DISK
,
MEMORY
, or DEFAULT
.
TABLESPACE ... STORAGE DISK
assigns a table
to an NDB Cluster Disk Data tablespace. The tablespace must
already have been created using CREATE
TABLESPACE
. See
Section 21.5.13, “NDB Cluster Disk Data Tables”, for more
information.
A STORAGE
clause cannot be used in a
CREATE TABLE
statement
without a TABLESPACE
clause.
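A minimal sketch, assuming a Disk Data tablespace named ts1 has already been created with CREATE TABLESPACE:
CREATE TABLE dt1 (
    id INT NOT NULL PRIMARY KEY,
    last_name VARCHAR(50) NOT NULL
)
TABLESPACE ts1 STORAGE DISK
ENGINE=NDB;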
UNION
Used to access a collection of identical
MyISAM
tables as one. This works only with
MERGE
tables. See
Section 15.7, “The MERGE Storage Engine”.
You must have SELECT
,
UPDATE
, and
DELETE
privileges for the
tables you map to a MERGE
table.
Formerly, all tables used had to be in the same database as
the MERGE
table itself. This restriction
no longer applies.
partition_options
can be used to
control partitioning of the table created with
CREATE TABLE
.
Not all options shown in the syntax for
partition_options
at the beginning of
this section are available for all partitioning types. Please see
the listings for the following individual types for information
specific to each type, and see Chapter 22, Partitioning, for
more complete information about the workings of and uses for
partitioning in MySQL, as well as additional examples of table
creation and other statements relating to MySQL partitioning.
Partitions can be modified, merged, added to tables, and dropped from tables. For basic information about the MySQL statements to accomplish these tasks, see Section 13.1.8, “ALTER TABLE Syntax”. For more detailed descriptions and examples, see Section 22.3, “Partition Management”.
PARTITION BY
If used, a partition_options
clause
begins with PARTITION BY
. This clause
contains the function that is used to determine the partition;
the function returns an integer value ranging from 1 to
num
, where
num
is the number of partitions.
(The maximum number of user-defined partitions which a table
may contain is 1024; the number of
subpartitions—discussed later in this section—is
included in this maximum.)
The expression (expr
) used in a
PARTITION BY
clause cannot refer to any
columns not in the table being created; such references are
specifically not permitted and cause the statement to fail
with an error. (Bug #29444)
HASH(expr)
Hashes one or more columns to create a key for placing and
locating rows. expr
is an
expression using one or more table columns. This can be any
valid MySQL expression (including MySQL functions) that yields
a single integer value. For example, these are both valid
CREATE TABLE
statements using
PARTITION BY HASH
:
CREATE TABLE t1 (col1 INT, col2 CHAR(5))
    PARTITION BY HASH(col1);

CREATE TABLE t1 (col1 INT, col2 CHAR(5), col3 DATETIME)
    PARTITION BY HASH ( YEAR(col3) );
You may not use either VALUES LESS THAN
or
VALUES IN
clauses with PARTITION
BY HASH
.
PARTITION BY HASH
uses the remainder of
expr
divided by the number of
partitions (that is, the modulus). For examples and additional
information, see Section 22.2.4, “HASH Partitioning”.
The LINEAR
keyword entails a somewhat
different algorithm. In this case, the number of the partition
in which a row is stored is calculated as the result of one or
more logical AND
operations. For
discussion and examples of linear hashing, see
Section 22.2.4.1, “LINEAR HASH Partitioning”.
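For example, the following sketch distributes rows across six partitions using linear hashing of the year:
CREATE TABLE tlh (col1 INT, col2 DATE)
    PARTITION BY LINEAR HASH( YEAR(col2) )
    PARTITIONS 6;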
KEY(column_list)
This is similar to HASH
, except that MySQL
supplies the hashing function so as to guarantee an even data
distribution. The column_list
argument is simply a list of 1 or more table columns (maximum:
16). This example shows a simple table partitioned by key,
with 4 partitions:
CREATE TABLE tk (col1 INT, col2 CHAR(5), col3 DATE) PARTITION BY KEY(col3) PARTITIONS 4;
For tables that are partitioned by key, you can employ linear
partitioning by using the LINEAR
keyword.
This has the same effect as with tables that are partitioned
by HASH
. That is, the partition number is
found using the
&
operator rather than the modulus (see
Section 22.2.4.1, “LINEAR HASH Partitioning”, and
Section 22.2.5, “KEY Partitioning”, for details). This example
uses linear partitioning by key to distribute data between 5
partitions:
CREATE TABLE tk (col1 INT, col2 CHAR(5), col3 DATE) PARTITION BY LINEAR KEY(col3) PARTITIONS 5;
The ALGORITHM={1|2}
option is supported
with [SUB]PARTITION BY [LINEAR] KEY
beginning with MySQL 5.7.1. ALGORITHM=1
causes the server to use the same key-hashing functions as
MySQL 5.1; ALGORITHM=2
means that the
server employs the key-hashing functions implemented and used
by default for new KEY
partitioned tables
in MySQL 5.5 and later. (Partitioned tables created with the
key-hashing functions employed in MySQL 5.5 and later cannot
be used by a MySQL 5.1 server.) Not specifying the option has
the same effect as using ALGORITHM=2
. This
option is intended for use chiefly when upgrading or
downgrading [LINEAR] KEY
partitioned tables
between MySQL 5.1 and later MySQL versions, or for creating
tables partitioned by KEY
or
LINEAR KEY
on a MySQL 5.5 or later server
which can be used on a MySQL 5.1 server. For more information,
see Section 13.1.8.1, “ALTER TABLE Partition Operations”.
mysqldump in MySQL 5.7 (and later) writes this option encased in versioned comments, like this:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
This causes MySQL 5.6.10 and earlier servers to ignore the
option, which would otherwise cause a syntax error in those
versions. If you plan to load a dump made on a MySQL 5.7
server where you use tables that are partitioned or
subpartitioned by KEY
into a MySQL 5.6
server previous to version 5.6.11, be sure to consult
Changes Affecting Upgrades to MySQL 5.6,
before proceeding. (The information found there also applies
if you are loading a dump containing KEY
partitioned or subpartitioned tables made from a MySQL
5.7—actually 5.6.11 or later—server into a MySQL
5.5.30 or earlier server.)
Also in MySQL 5.6.11 and later, ALGORITHM=1
is shown when necessary in the output of
SHOW CREATE TABLE
using
versioned comments in the same manner as
mysqldump. ALGORITHM=2
is always omitted from SHOW CREATE TABLE
output, even if this option was specified when creating the
original table.
You may not use either VALUES LESS THAN
or
VALUES IN
clauses with PARTITION
BY KEY
.
RANGE(expr)
In this case, expr
shows a range of
values using a set of VALUES LESS THAN
operators. When using range partitioning, you must define at
least one partition using VALUES LESS THAN
.
You cannot use VALUES IN
with range
partitioning.
For tables partitioned by RANGE
,
VALUES LESS THAN
must be used with either
an integer literal value or an expression that evaluates to
a single integer value. In MySQL 5.7, you can
overcome this limitation in a table that is defined using
PARTITION BY RANGE COLUMNS
, as described
later in this section.
Suppose that you have a table that you wish to partition on a column containing year values, according to the following scheme.
Partition Number: | Years Range: |
---|---|
0 | 1990 and earlier |
1 | 1991 to 1994 |
2 | 1995 to 1998 |
3 | 1999 to 2002 |
4 | 2003 to 2005 |
5 | 2006 and later |
A table implementing such a partitioning scheme can be
realized by the CREATE TABLE
statement shown here:
CREATE TABLE t1 (
    year_col  INT,
    some_data INT
)
PARTITION BY RANGE (year_col) (
    PARTITION p0 VALUES LESS THAN (1991),
    PARTITION p1 VALUES LESS THAN (1995),
    PARTITION p2 VALUES LESS THAN (1999),
    PARTITION p3 VALUES LESS THAN (2002),
    PARTITION p4 VALUES LESS THAN (2006),
    PARTITION p5 VALUES LESS THAN MAXVALUE
);
PARTITION ... VALUES LESS THAN ...
statements work in a consecutive fashion. VALUES LESS
THAN MAXVALUE
works to specify
“leftover” values that are greater than the
maximum value otherwise specified.
VALUES LESS THAN
clauses work sequentially
in a manner similar to that of the case
portions of a switch ... case
block (as
found in many programming languages such as C, Java, and PHP).
That is, the clauses must be arranged in such a way that the
upper limit specified in each successive VALUES LESS
THAN
is greater than that of the previous one, with
the one referencing MAXVALUE
coming last of
all in the list.
RANGE COLUMNS(column_list)
This variant on RANGE
facilitates partition
pruning for queries using range conditions on multiple columns
(that is, having conditions such as WHERE a = 1 AND b
< 10
or WHERE a = 1 AND b = 10 AND c
< 10
). It enables you to specify value ranges in
multiple columns by using a list of columns in the
COLUMNS
clause and a set of column values
in each PARTITION ... VALUES LESS THAN (value_list) partition definition clause. (In the simplest case, this set consists of a single column.) The maximum number of columns that can be referenced in the column_list and value_list is 16.
The column_list
used in the
COLUMNS
clause may contain only names of
columns; each column in the list must be one of the following
MySQL data types: the integer types; the string types; and
time or date column types. Columns using
BLOB
, TEXT
,
SET
, ENUM
,
BIT
, or spatial data types are not
permitted; columns that use floating-point number types are
also not permitted. You also may not use functions or
arithmetic expressions in the COLUMNS
clause.
The VALUES LESS THAN
clause used in a
partition definition must specify a literal value for each
column that appears in the COLUMNS()
clause; that is, the list of values used for each
VALUES LESS THAN
clause must contain the
same number of values as there are columns listed in the
COLUMNS
clause. An attempt to use more or
fewer values in a VALUES LESS THAN
clause
than there are in the COLUMNS
clause causes
the statement to fail with the error Inconsistency
in usage of column lists for partitioning.... You
cannot use NULL
for any value appearing in
VALUES LESS THAN
. It is possible to use
MAXVALUE
more than once for a given column
other than the first, as shown in this example:
CREATE TABLE rc (
    a INT NOT NULL,
    b INT NOT NULL
)
PARTITION BY RANGE COLUMNS(a,b) (
    PARTITION p0 VALUES LESS THAN (10,5),
    PARTITION p1 VALUES LESS THAN (20,10),
    PARTITION p2 VALUES LESS THAN (50,MAXVALUE),
    PARTITION p3 VALUES LESS THAN (65,MAXVALUE),
    PARTITION p4 VALUES LESS THAN (MAXVALUE,MAXVALUE)
);
Each value used in a VALUES LESS THAN
value
list must match the type of the corresponding column exactly;
no conversion is made. For example, you cannot use the string
'1'
for a value that matches a column that
uses an integer type (you must use the numeral
1
instead), nor can you use the numeral
1
for a value that matches a column that
uses a string type (in such a case, you must use a quoted
string: '1'
).
For more information, see Section 22.2.1, “RANGE Partitioning”, and Section 22.4, “Partition Pruning”.
LIST(expr)
This is useful when assigning partitions based on a table
column with a restricted set of possible values, such as a
state or country code. In such a case, all rows pertaining to
a certain state or country can be assigned to a single
partition, or a partition can be reserved for a certain set of
states or countries. It is similar to
RANGE
, except that only VALUES
IN
may be used to specify permissible values for
each partition.
VALUES IN
is used with a list of values to
be matched. For instance, you could create a partitioning
scheme such as the following:
CREATE TABLE client_firms (
    id   INT,
    name VARCHAR(35)
)
PARTITION BY LIST (id) (
    PARTITION r0 VALUES IN (1, 5, 9, 13, 17, 21),
    PARTITION r1 VALUES IN (2, 6, 10, 14, 18, 22),
    PARTITION r2 VALUES IN (3, 7, 11, 15, 19, 23),
    PARTITION r3 VALUES IN (4, 8, 12, 16, 20, 24)
);
When using list partitioning, you must define at least one
partition using VALUES IN
. You cannot use
VALUES LESS THAN
with PARTITION BY
LIST
.
For tables partitioned by LIST
, the value
list used with VALUES IN
must consist of
integer values only. In MySQL 5.7, you can
overcome this limitation using partitioning by LIST
COLUMNS
, which is described later in this section.
LIST COLUMNS(column_list)
This variant on LIST
facilitates partition
pruning for queries using comparison conditions on multiple
columns (that is, having conditions such as WHERE a =
5 AND b = 5
or WHERE a = 1 AND b = 10 AND c
= 5
). It enables you to specify values in multiple
columns by using a list of columns in the
COLUMNS
clause and a set of column values
in each PARTITION ... VALUES IN (value_list) partition definition clause.
The rules governing the data types for the column list used in LIST COLUMNS(column_list) and the value list used in VALUES IN(value_list) are the same as those for the column list used in RANGE COLUMNS(column_list) and the value list used in VALUES LESS THAN(value_list), respectively, except that in the VALUES IN clause, MAXVALUE is not permitted, and you may use NULL.
There is one important difference between the list of values
used for VALUES IN
with PARTITION
BY LIST COLUMNS
as opposed to when it is used with
PARTITION BY LIST
. When used with
PARTITION BY LIST COLUMNS
, each element in
the VALUES IN
clause must be a
set of column values; the number of
values in each set must be the same as the number of columns
used in the COLUMNS
clause, and the data
types of these values must match those of the columns (and
occur in the same order). In the simplest case, the set
consists of a single column. The maximum number of columns
that can be used in the column_list
and in the elements making up the
value_list
is 16.
The table defined by the following CREATE
TABLE
statement provides an example of a table using
LIST COLUMNS
partitioning:
CREATE TABLE lc (
    a INT NULL,
    b INT NULL
)
PARTITION BY LIST COLUMNS(a,b) (
    PARTITION p0 VALUES IN( (0,0), (NULL,NULL) ),
    PARTITION p1 VALUES IN( (0,1), (0,2), (0,3), (1,1), (1,2) ),
    PARTITION p2 VALUES IN( (1,0), (2,0), (2,1), (3,0), (3,1) ),
    PARTITION p3 VALUES IN( (1,3), (2,2), (2,3), (3,2), (3,3) )
);
PARTITIONS num
The number of partitions may optionally be specified with a PARTITIONS num clause, where num is the number of
partitions. If both this clause and any
PARTITION
clauses are used,
num
must be equal to the total
number of any partitions that are declared using
PARTITION
clauses.
Whether or not you use a PARTITIONS
clause in creating a table that is partitioned by
RANGE
or LIST
, you
must still include at least one PARTITION
VALUES
clause in the table definition (see below).
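For example, in the following sketch the PARTITIONS clause matches the two partitions that are declared explicitly:
CREATE TABLE t2 (val INT)
    PARTITION BY LIST(val)
    PARTITIONS 2 (
        PARTITION p0 VALUES IN (1,3,5),
        PARTITION p1 VALUES IN (2,4,6)
    );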
SUBPARTITION BY
A partition may optionally be divided into a number of
subpartitions. This can be indicated by using the optional
SUBPARTITION BY
clause. Subpartitioning may
be done by HASH
or KEY
.
Either of these may be LINEAR
. These work
in the same way as previously described for the equivalent
partitioning types. (It is not possible to subpartition by
LIST
or RANGE
.)
The number of subpartitions can be indicated using the
SUBPARTITIONS
keyword followed by an
integer value.
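For example, the following sketch range-partitions a table by year and hash-subpartitions each range partition into two subpartitions:
CREATE TABLE ts (id INT, purchased DATE)
    PARTITION BY RANGE( YEAR(purchased) )
    SUBPARTITION BY HASH( TO_DAYS(purchased) )
    SUBPARTITIONS 2 (
        PARTITION p0 VALUES LESS THAN (1990),
        PARTITION p1 VALUES LESS THAN (2000),
        PARTITION p2 VALUES LESS THAN MAXVALUE
    );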
Rigorous checking of the value used in
PARTITIONS
or
SUBPARTITIONS
clauses is applied and this
value must adhere to the following rules:
The value must be a positive, nonzero integer.
No leading zeros are permitted.
The value must be an integer literal, and cannot be an
expression. For example, PARTITIONS
0.2E+01
is not permitted, even though
0.2E+01
evaluates to
2
. (Bug #15890)
partition_definition
Each partition may be individually defined using a
partition_definition
clause. The
individual parts making up this clause are as follows:
PARTITION
partition_name
Specifies a logical name for the partition.
VALUES
For range partitioning, each partition must include a
VALUES LESS THAN
clause; for list
partitioning, you must specify a VALUES
IN
clause for each partition. This is used to
determine which rows are to be stored in this partition.
See the discussions of partitioning types in
Chapter 22, Partitioning, for syntax examples.
[STORAGE] ENGINE
The partitioning handler accepts a [STORAGE]
ENGINE
option for both
PARTITION
and
SUBPARTITION
. Currently, the only way
in which this can be used is to set all partitions or all
subpartitions to the same storage engine, and an attempt
to set different storage engines for partitions or
subpartitions in the same table will give rise to the
error ERROR 1469 (HY000): The mix of handlers
in the partitions is not permitted in this version of
MySQL. We expect to lift this restriction on
partitioning in a future MySQL release.
COMMENT
An optional COMMENT
clause may be used
to specify a string that describes the partition. Example:
COMMENT = 'Data for the years previous to 1999'
The maximum length for a partition comment is 1024 characters.
DATA DIRECTORY
and INDEX
DIRECTORY
DATA DIRECTORY
and INDEX
DIRECTORY
may be used to indicate the directory
where, respectively, the data and indexes for this
partition are to be stored. Both the data_dir and the index_dir must be absolute system path names.
As of MySQL 5.7.17, you must have the
FILE
privilege to use the
DATA DIRECTORY
or INDEX
DIRECTORY
partition option.
Example:
CREATE TABLE th (id INT, name VARCHAR(30), adate DATE)
PARTITION BY LIST(YEAR(adate))
(
  PARTITION p1999 VALUES IN (1995, 1999, 2003)
    DATA DIRECTORY = '/var/appdata/95/data'
    INDEX DIRECTORY = '/var/appdata/95/idx',
  PARTITION p2000 VALUES IN (1996, 2000, 2004)
    DATA DIRECTORY = '/var/appdata/96/data'
    INDEX DIRECTORY = '/var/appdata/96/idx',
  PARTITION p2001 VALUES IN (1997, 2001, 2005)
    DATA DIRECTORY = '/var/appdata/97/data'
    INDEX DIRECTORY = '/var/appdata/97/idx',
  PARTITION p2002 VALUES IN (1998, 2002, 2006)
    DATA DIRECTORY = '/var/appdata/98/data'
    INDEX DIRECTORY = '/var/appdata/98/idx'
);
DATA DIRECTORY
and INDEX
DIRECTORY
behave in the same way as in the
CREATE TABLE
statement's
table_option
clause as used for
MyISAM
tables.
One data directory and one index directory may be specified per partition. If left unspecified, the data and indexes are stored by default in the table's database directory.
On Windows, the DATA DIRECTORY
and
INDEX DIRECTORY
options are not
supported for individual partitions or subpartitions of
MyISAM
tables, and the
INDEX DIRECTORY
option is not supported
for individual partitions or subpartitions of
InnoDB
tables. These options
are ignored on Windows, except that a warning is
generated. (Bug #30459)
The DATA DIRECTORY
and INDEX
DIRECTORY
options are ignored for creating
partitioned tables if
NO_DIR_IN_CREATE
is in
effect. (Bug #24633)
MAX_ROWS
and
MIN_ROWS
May be used to specify, respectively, the maximum and
minimum number of rows to be stored in the partition. The
values for max_number_of_rows
and min_number_of_rows
must be
positive integers. As with the table-level options with
the same names, these act only as
“suggestions” to the server and are not hard
limits.
TABLESPACE
May be used to assign InnoDB
table
partitions or subpartitions to a
general
tablespace, a separate file-per-table tablespace,
or the system tablespace. TABLESPACE
option support for table partitions and subpartitions was
added in MySQL 5.7; see
Section 14.7.9, “InnoDB General Tablespaces”. It is also
supported by NDB Cluster. All partitions must belong to
the same storage engine.
subpartition_definition
The partition definition may optionally contain one or more
subpartition_definition
clauses.
Each of these consists at a minimum of the SUBPARTITION name clause, where name is an identifier for the
subpartition. Except for the replacement of the
PARTITION
keyword with
SUBPARTITION
, the syntax for a subpartition
definition is identical to that for a partition definition.
Subpartitioning must be done by HASH
or
KEY
, and can be done only on
RANGE
or LIST
partitions. See Section 22.2.6, “Subpartitioning”.
Partitioning by Generated Columns
Partitioning by generated columns is permitted. For example:
CREATE TABLE t1 (
    s1 INT,
    s2 INT AS (EXP(s1)) STORED
)
PARTITION BY LIST (s2) (
    PARTITION p1 VALUES IN (1)
);
Partitioning sees a generated column as a regular column, which
enables workarounds for limitations on functions that are not
permitted for partitioning (see
Section 22.6.3, “Partitioning Limitations Relating to Functions”). The
preceding example demonstrates this technique:
EXP()
cannot be used directly in
the PARTITION BY
clause, but a generated column
defined using EXP()
is permitted.
The original CREATE TABLE
statement, including all specifications and table options, is stored by MySQL when the table is created. The information is
retained so that if you change storage engines, collations or
other settings using an ALTER
TABLE
statement, the original table options specified
are retained. This enables you to change between
InnoDB
and
MyISAM
table types even though the
row formats supported by the two engines are different.
Because the text of the original statement is retained while certain values and options may be silently reconfigured (such as the ROW_FORMAT
), the
active table definition (accessible through
DESCRIBE
or with
SHOW TABLE STATUS
) and the table
creation string (accessible through SHOW
CREATE TABLE
) will report different values.
MySQL represents each table by an .frm
table format (definition) file in the database directory. The
storage engine for the table might create other files as well.
For an InnoDB
table created in a
file-per-table tablespace or general tablespace, table data and
associated indexes are stored in an
ibd file in the database
directory. When an InnoDB
table is created in
the system tablespace, table data and indexes are stored in the
ibdata* files that
represent the system tablespace. The
innodb_file_per_table
option
controls whether tables are created in file-per-table
tablespaces or the system tablespace, by default. The
TABLESPACE
option can be used to place a
table in a file-per-table tablespace, general tablespace, or the
system tablespace, regardless of the
innodb_file_per_table
setting.
For MyISAM
tables, the storage engine creates
data and index files. Thus, for each MyISAM
table tbl_name
, there are three disk
files.
File | Purpose |
---|---|
tbl_name.frm | Table format (definition) file |
tbl_name.MYD | Data file |
tbl_name.MYI | Index file |
Chapter 15, Alternative Storage Engines, describes what files each storage engine creates to represent tables. If a table name contains special characters, the names for the table files contain encoded versions of those characters as described in Section 9.2.3, “Mapping of Identifiers to File Names”.
You can use the TEMPORARY
keyword when
creating a table. A TEMPORARY
table is
visible only to the current session, and is dropped
automatically when the session is closed. This means that two
different sessions can use the same temporary table name without
conflicting with each other or with an existing
non-TEMPORARY
table of the same name. (The
existing table is hidden until the temporary table is dropped.)
CREATE TABLE
does not
automatically commit the current active transaction if you use
the TEMPORARY
keyword.
TEMPORARY
tables have a very loose
relationship with databases (schemas). Dropping a database does
not automatically drop any TEMPORARY
tables
created within that database. Also, you can create a
TEMPORARY
table in a nonexistent database if
you qualify the table name with the database name in the
CREATE TABLE
statement. In this case, all
subsequent references to the table must be qualified with the
database name.
To create a temporary table, you must have the
CREATE TEMPORARY TABLES
privilege. After a session has created a temporary table, the
server performs no further privilege checks on the table. The
creating session can perform any operation on the table, such as
DROP TABLE
,
INSERT
,
UPDATE
, or
SELECT
.
One implication of this behavior is that a session can
manipulate its temporary tables even if the current user has no
privilege to create them. Suppose that the current user does not
have the CREATE TEMPORARY TABLES
privilege but is able to execute a
DEFINER
-context stored procedure that
executes with the privileges of a user who does have
CREATE TEMPORARY TABLES
and that
creates a temporary table. While the procedure executes, the
session uses the privileges of the defining user. After the
procedure returns, the effective privileges revert to those of
the current user, which can still see the temporary table and
perform any operation on it.
Use CREATE TABLE ... LIKE
to create an empty
table based on the definition of another table, including any
column attributes and indexes defined in the original table:
CREATE TABLE new_tbl LIKE orig_tbl;
The copy is created using the same version of the table storage
format as the original table. The
SELECT
privilege is required on
the original table.
LIKE
works only for base tables, not for
views.
You cannot execute CREATE TABLE
or
CREATE TABLE ... LIKE
while a
LOCK TABLES
statement is in
effect.
CREATE TABLE ...
LIKE
makes the same checks as
CREATE TABLE
and does not just
copy the .frm
file. This means that if
the current SQL mode is different from the mode in effect when
the original table was created, the table definition might be
considered invalid for the new mode and the statement will
fail.
For CREATE TABLE ... LIKE
, the destination
table preserves generated column information from the original
table.
CREATE TABLE ... LIKE
does not preserve any
DATA DIRECTORY
or INDEX
DIRECTORY
table options that were specified for the
original table, or any foreign key definitions.
If the original table is a TEMPORARY
table,
CREATE TABLE ... LIKE
does not preserve
TEMPORARY
. To create a
TEMPORARY
destination table, use
CREATE TEMPORARY TABLE ... LIKE
.
You can create one table from another by adding a
SELECT
statement at the end of
the CREATE TABLE
statement:
CREATE TABLE new_tbl [AS] SELECT * FROM orig_tbl;
MySQL creates new columns for all elements in the
SELECT
. For example:
mysql> CREATE TABLE test (a INT NOT NULL AUTO_INCREMENT,
    ->        PRIMARY KEY (a), KEY(b))
    ->        ENGINE=MyISAM SELECT b,c FROM test2;
This creates a MyISAM
table with
three columns, a
, b
, and
c
. The ENGINE
option is
part of the CREATE TABLE
statement, and should not be used following the
SELECT
; this would result in a
syntax error. The same is true for other
CREATE TABLE
options such as
CHARSET
.
Notice that the columns from the
SELECT
statement are appended to
the right side of the table, not overlapped onto it. Take the
following example:
mysql> SELECT * FROM foo;
+---+
| n |
+---+
| 1 |
+---+

mysql> CREATE TABLE bar (m INT) SELECT n FROM foo;
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM bar;
+------+---+
| m    | n |
+------+---+
| NULL | 1 |
+------+---+
1 row in set (0.00 sec)
For each row in table foo
, a row is inserted
in bar
with the values from
foo
and default values for the new columns.
In a table resulting from
CREATE TABLE ...
SELECT
, columns named only in the
CREATE TABLE
part come first.
Columns named in both parts or only in the
SELECT
part come after that. The
data type of SELECT
columns can
be overridden by also specifying the column in the
CREATE TABLE
part.
If any errors occur while copying the data to the table, it is automatically dropped and not created.
You can precede the SELECT
by
IGNORE
or
REPLACE
to indicate how to handle
rows that duplicate unique key values. With
IGNORE
, rows that duplicate an existing row
on a unique key value are discarded. With
REPLACE
, new rows replace rows
that have the same unique key value. If neither
IGNORE
nor
REPLACE
is specified, duplicate
unique key values result in an error. For more information, see
Comparison of the IGNORE Keyword and Strict SQL Mode.
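For example, the following sketch (reusing the foo table from the earlier example; the new table name is arbitrary) silently discards any selected rows whose n values collide with a unique key value already inserted into the new table:
CREATE TABLE bar2 (UNIQUE (n)) IGNORE SELECT n FROM foo;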
Because the ordering of the rows in the underlying
SELECT
statements cannot always
be determined, CREATE TABLE ... IGNORE SELECT
and CREATE TABLE ... REPLACE SELECT
statements are flagged as unsafe for statement-based
replication. With this change, such statements produce a warning
in the log when using statement-based mode and are logged using
the row-based format when using MIXED
mode.
See also Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based
Replication”.
CREATE TABLE ...
SELECT
does not automatically create any indexes for
you. This is done intentionally to make the statement as
flexible as possible. If you want to have indexes in the created
table, you should specify these before the
SELECT
statement:
mysql> CREATE TABLE bar (UNIQUE (n)) SELECT n FROM foo;
For CREATE TABLE ... SELECT
, the destination
table does not preserve information about whether columns in the
selected-from table are generated columns. The
SELECT
part of the statement
cannot assign values to generated columns in the destination
table.
Some conversion of data types might occur. For example, the
AUTO_INCREMENT
attribute is not preserved,
and VARCHAR
columns can become
CHAR
columns. Retained
attributes are NULL
(or NOT
NULL
) and, for those columns that have them,
CHARACTER SET
, COLLATION
,
COMMENT
, and the DEFAULT
clause.
When creating a table with
CREATE
TABLE ... SELECT
, make sure to alias any function
calls or expressions in the query. If you do not, the
CREATE
statement might fail or result in
undesirable column names.
CREATE TABLE artists_and_works
  SELECT artist.name, COUNT(work.artist_id) AS number_of_works
  FROM artist LEFT JOIN work ON artist.id = work.artist_id
  GROUP BY artist.id;
You can also explicitly specify the data type for a column in the created table:
CREATE TABLE foo (a TINYINT NOT NULL) SELECT b+1 AS a FROM bar;
For CREATE TABLE
... SELECT
, if IF NOT EXISTS
is
given and the target table exists, nothing is inserted into the
destination table, and the statement is not logged.
To ensure that the binary log can be used to re-create the
original tables, MySQL does not permit concurrent inserts during
CREATE TABLE ...
SELECT
.
You cannot use FOR UPDATE
as part of the
SELECT
in a statement such as
CREATE TABLE new_table SELECT ... FROM old_table .... If you attempt to do so, the statement fails.
MySQL supports foreign keys, which let you cross-reference
related data across tables, and
foreign key
constraints, which help keep this spread-out data
consistent. The essential syntax for a foreign key constraint
definition in a CREATE TABLE
or
ALTER TABLE
statement looks like
this:
[CONSTRAINT [symbol]] FOREIGN KEY
    [index_name] (index_col_name, ...)
    REFERENCES tbl_name (index_col_name,...)
    [ON DELETE reference_option]
    [ON UPDATE reference_option]

reference_option:
    RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT
index_name
represents a foreign key
ID. The index_name
value is ignored
if there is already an explicitly defined index on the child
table that can support the foreign key. Otherwise, MySQL
implicitly creates a foreign key index that is named according
to the following rules:
If defined, the CONSTRAINT
symbol
value is used. Otherwise,
the FOREIGN KEY
index_name
value is used.
If neither a CONSTRAINT symbol nor a FOREIGN KEY index_name is
defined, the foreign key index name is generated using the
name of the referencing foreign key column.
Foreign key definitions are subject to the following conditions:
Foreign key relationships involve a
parent table that
holds the central data values, and a
child table with
identical values pointing back to its parent. The
FOREIGN KEY
clause is specified in the
child table. The parent and child tables must use the same
storage engine. They must not be
TEMPORARY
tables.
In MySQL 5.7, creation of a foreign key
constraint requires the
REFERENCES
privilege for the
parent table.
Corresponding columns in the foreign key and the referenced key must have similar data types. The size and sign of integer types must be the same. The length of string types need not be the same. For nonbinary (character) string columns, the character set and collation must be the same.
When foreign_key_checks
is
enabled, which is the default setting, character set
conversion is not permitted on tables that include a
character string column used in a foreign key constraint.
The workaround is described in
Section 13.1.8, “ALTER TABLE Syntax”.
MySQL requires indexes on foreign keys and referenced keys
so that foreign key checks can be fast and not require a
table scan. In the referencing table, there must be an index
where the foreign key columns are listed as the
first columns in the same order. Such
an index is created on the referencing table automatically
if it does not exist. This index might be silently dropped
later, if you create another index that can be used to
enforce the foreign key constraint.
index_name
, if given, is used as
described previously.
InnoDB
permits a foreign key to reference
any column or group of columns. However, in the referenced
table, there must be an index where the referenced columns
are listed as the first columns in the
same order.
NDB
requires an explicit unique key (or
primary key) on any column referenced as a foreign key.
Index prefixes on foreign key columns are not supported. One
consequence of this is that
BLOB
and
TEXT
columns cannot be
included in a foreign key because indexes on those columns
must always include a prefix length.
If the CONSTRAINT symbol clause is given, the symbol value, if used, must be unique in the database. A duplicate symbol will result in an error similar to: ERROR 1022 (2300): Can't write; duplicate key in table '#sql-464_1'. If the clause is not given, or a symbol is not included following the CONSTRAINT keyword, a name for the constraint is created automatically.
InnoDB
does not currently
support foreign keys for tables with user-defined
partitioning. This includes both parent and child tables.
This restriction does not apply for
NDB
tables that are partitioned
by KEY
or LINEAR KEY
(the only user partitioning types supported by the
NDB
storage engine); these may have
foreign key references or be the targets of such references.
For NDB
tables, ON
UPDATE CASCADE
is not supported where the
reference is to the parent table's primary key.
This section describes how foreign keys help guarantee referential integrity.
For storage engines supporting foreign keys, MySQL rejects any
INSERT
or
UPDATE
operation that attempts to
create a foreign key value in a child table if there is not a matching candidate key value in the parent table.
When an UPDATE
or
DELETE
operation affects a key
value in the parent table that has matching rows in the child
table, the result depends on the referential
action specified using ON UPDATE
and ON DELETE
subclauses of the
FOREIGN KEY
clause. MySQL supports five
options regarding the action to be taken, listed here:
CASCADE
: Delete or update the row from
the parent table, and automatically delete or update the
matching rows in the child table. Both ON DELETE
CASCADE
and ON UPDATE CASCADE
are supported. Between two tables, do not define several
ON UPDATE CASCADE
clauses that act on the
same column in the parent table or in the child table.
Cascaded foreign key actions do not activate triggers.
SET NULL
: Delete or update the row from
the parent table, and set the foreign key column or columns
in the child table to NULL
. Both
ON DELETE SET NULL
and ON UPDATE
SET NULL
clauses are supported.
If you specify a SET NULL
action,
make sure that you have not declared the columns
in the child table as NOT
NULL
.
RESTRICT
: Rejects the delete or update
operation for the parent table. Specifying
RESTRICT
(or NO
ACTION
) is the same as omitting the ON
DELETE
or ON UPDATE
clause.
NO ACTION
: A keyword from standard SQL.
In MySQL, equivalent to RESTRICT
. The
MySQL Server rejects the delete or update operation for the
parent table if there is a related foreign key value in the
referenced table. Some database systems have deferred
checks, and NO ACTION
is a deferred
check. In MySQL, foreign key constraints are checked
immediately, so NO ACTION
is the same as
RESTRICT
.
SET DEFAULT
: This action is recognized by
the MySQL parser, but both
InnoDB
and
NDB
reject table definitions
containing ON DELETE SET DEFAULT
or
ON UPDATE SET DEFAULT
clauses.
For an ON DELETE
or ON
UPDATE
that is not specified, the default action is
always RESTRICT
.
MySQL supports foreign key references between one column and another within a table. (A column cannot have a foreign key reference to itself.) In these cases, “child table records” really refers to dependent records within the same table.
A foreign key constraint on a stored generated column cannot use
ON UPDATE CASCADE
, ON DELETE SET
NULL
, ON UPDATE SET NULL
,
ON DELETE SET DEFAULT
, or ON UPDATE
SET DEFAULT
.
A foreign key constraint cannot reference a virtual generated column.
For InnoDB
restrictions related to foreign
keys and generated columns, see
Section 14.8.1.6, “InnoDB and FOREIGN KEY Constraints”.
Here is a simple example that relates parent
and child
tables through a single-column
foreign key:
CREATE TABLE parent (
    id INT NOT NULL,
    PRIMARY KEY (id)
) ENGINE=INNODB;

CREATE TABLE child (
    id INT,
    parent_id INT,
    INDEX par_ind (parent_id),
    FOREIGN KEY (parent_id)
        REFERENCES parent(id)
        ON DELETE CASCADE
) ENGINE=INNODB;
Here is a more complex example in which a
product_order
table has foreign keys for two
other tables. One foreign key references a two-column index in
the product
table. The other references a
single-column index in the customer
table:
CREATE TABLE product (
    category INT NOT NULL, id INT NOT NULL,
    price DECIMAL,
    PRIMARY KEY(category, id)
)   ENGINE=INNODB;

CREATE TABLE customer (
    id INT NOT NULL,
    PRIMARY KEY (id)
)   ENGINE=INNODB;

CREATE TABLE product_order (
    no INT NOT NULL AUTO_INCREMENT,
    product_category INT NOT NULL,
    product_id INT NOT NULL,
    customer_id INT NOT NULL,

    PRIMARY KEY(no),
    INDEX (product_category, product_id),
    INDEX (customer_id),

    FOREIGN KEY (product_category, product_id)
      REFERENCES product(category, id)
      ON UPDATE CASCADE ON DELETE RESTRICT,

    FOREIGN KEY (customer_id)
      REFERENCES customer(id)
)   ENGINE=INNODB;
You can add a new foreign key constraint to an existing table by
using ALTER TABLE
. The syntax
relating to foreign keys for this statement is shown here:
ALTER TABLE tbl_name
    ADD [CONSTRAINT [symbol]] FOREIGN KEY
    [index_name] (index_col_name, ...)
    REFERENCES tbl_name (index_col_name,...)
    [ON DELETE reference_option]
    [ON UPDATE reference_option]
The foreign key can be self referential (referring to the same
table). When you add a foreign key constraint to a table using
ALTER TABLE
, remember
to create the required indexes first.
You can also use ALTER TABLE
to
drop foreign keys, using the syntax shown here:
ALTER TABLE tbl_name DROP FOREIGN KEY fk_symbol;
If the FOREIGN KEY
clause included a
CONSTRAINT
name when you created the foreign
key, you can refer to that name to drop the foreign key.
Otherwise, the fk_symbol
value is
generated internally when the foreign key is created. To find
out the symbol value when you want to drop a foreign key, use a
SHOW CREATE TABLE
statement, as
shown here:
mysql> SHOW CREATE TABLE ibtest11c\G
*************************** 1. row ***************************
       Table: ibtest11c
Create Table: CREATE TABLE `ibtest11c` (
  `A` int(11) NOT NULL auto_increment,
  `D` int(11) NOT NULL default '0',
  `B` varchar(200) NOT NULL default '',
  `C` varchar(175) default NULL,
  PRIMARY KEY (`A`,`D`,`B`),
  KEY `B` (`B`,`C`),
  KEY `C` (`C`),
  CONSTRAINT `0_38775` FOREIGN KEY (`A`, `D`)
    REFERENCES `ibtest11a` (`A`, `D`)
    ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `0_38776` FOREIGN KEY (`B`, `C`)
    REFERENCES `ibtest11a` (`B`, `C`)
    ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=INNODB CHARSET=latin1
1 row in set (0.01 sec)

mysql> ALTER TABLE ibtest11c DROP FOREIGN KEY `0_38775`;
Prior to MySQL 5.6.6, adding and dropping a foreign key in the
same ALTER TABLE
statement may be
problematic in some cases and is therefore unsupported. Separate
statements should be used for each operation. As of MySQL 5.6.6,
adding and dropping a foreign key in the same
ALTER TABLE
statement is
supported for ALTER
TABLE ... ALGORITHM=INPLACE
but remains unsupported
for ALTER TABLE ...
ALGORITHM=COPY
.
In MySQL 5.7, the server prohibits changes to
foreign key columns with the potential to cause loss of
referential integrity. A workaround is to use
ALTER TABLE ...
DROP FOREIGN KEY
before changing the column definition
and ALTER TABLE ...
ADD FOREIGN KEY
afterward.
Table and column identifiers in a FOREIGN KEY ...
REFERENCES ...
clause can be quoted within backticks
(`
). Alternatively, double quotation marks
("
) can be used if the
ANSI_QUOTES
SQL mode is
enabled. The setting of the
lower_case_table_names
system
variable is also taken into account.
You can view a child table's foreign key definitions as
part of the output of the SHOW CREATE
TABLE
statement:
SHOW CREATE TABLE tbl_name;
You can also obtain information about foreign keys by querying
the
INFORMATION_SCHEMA.KEY_COLUMN_USAGE
table.
You can find information about foreign keys used by
InnoDB
tables in the
INNODB_SYS_FOREIGN
and
INNODB_SYS_FOREIGN_COLS
tables,
also in the INFORMATION_SCHEMA
database.
mysqldump produces correct definitions of tables in the dump file, including the foreign keys for child tables.
To make it easier to reload dump files for tables that have
foreign key relationships, mysqldump
automatically includes a statement in the dump output to set
foreign_key_checks
to 0. This
avoids problems with tables having to be reloaded in a
particular order when the dump is reloaded. It is also possible
to set this variable manually:
mysql> SET foreign_key_checks = 0;
mysql> SOURCE dump_file_name;
mysql> SET foreign_key_checks = 1;
This enables you to import the tables in any order if the dump
file contains tables that are not correctly ordered for foreign
keys. It also speeds up the import operation. Setting
foreign_key_checks
to 0 can
also be useful for ignoring foreign key constraints during
LOAD DATA
and
ALTER TABLE
operations. However,
even if foreign_key_checks = 0
,
MySQL does not permit the creation of a foreign key constraint
where a column references a nonmatching column type. Also, if a
table has foreign key constraints, ALTER
TABLE
cannot be used to alter the table to use another
storage engine. To change the storage engine, you must drop any
foreign key constraints first.
You cannot issue DROP TABLE
for a
table that is referenced by a FOREIGN KEY
constraint, unless you do SET foreign_key_checks =
0
. When you drop a table, any constraints that were
defined in the statement used to create that table are also
dropped.
If you re-create a table that was dropped, it must have a
definition that conforms to the foreign key constraints
referencing it. It must have the correct column names and types,
and it must have indexes on the referenced keys, as stated
earlier. If these are not satisfied, MySQL returns Error 1005
and refers to Error 150 in the error message, which means that a
foreign key constraint was not correctly formed. Similarly, if
an ALTER TABLE
fails due to Error
150, this means that a foreign key definition would be
incorrectly formed for the altered table.
For InnoDB
tables, you can obtain a detailed
explanation of the most recent InnoDB
foreign
key error in the MySQL Server, by checking the output of
SHOW ENGINE INNODB
STATUS
.
For users familiar with the ANSI/ISO SQL Standard, please note
that no storage engine, including InnoDB
,
recognizes or enforces the MATCH
clause
used in referential-integrity constraint definitions. Use of
an explicit MATCH
clause will not have the
specified effect, and also causes ON DELETE
and ON UPDATE
clauses to be ignored. For
these reasons, specifying MATCH
should be
avoided.
The MATCH
clause in the SQL standard
controls how NULL
values in a composite
(multiple-column) foreign key are handled when comparing to a
primary key. MySQL essentially implements the semantics
defined by MATCH SIMPLE
, which permit a
foreign key to be all or partially NULL
. In
that case, the (child table) row containing such a foreign key
is permitted to be inserted, and does not match any row in the
referenced (parent) table. It is possible to implement other
semantics using triggers.
Additionally, MySQL requires that the referenced columns be
indexed for performance reasons. However, the system does not
enforce a requirement that the referenced columns be
UNIQUE
or be declared NOT
NULL
. The handling of foreign key references to
nonunique keys or keys that contain NULL
values is not well defined for operations such as
UPDATE
or DELETE
CASCADE
. You are advised to use foreign keys that
reference only UNIQUE
(including
PRIMARY
) and NOT NULL
keys.
Furthermore, MySQL parses but ignores “inline
REFERENCES
specifications” (as
defined in the SQL standard) where the references are defined
as part of the column specification. MySQL accepts
REFERENCES
clauses only when specified as
part of a separate FOREIGN KEY
specification. For storage engines that do not support foreign
keys (such as MyISAM
), MySQL
Server parses and ignores foreign key specifications.
In some cases, MySQL silently changes column specifications from
those given in a CREATE TABLE
or
ALTER TABLE
statement. These
might be changes to a data type, to attributes associated with a
data type, or to an index specification.
All changes are subject to the internal row-size limit of 65,535 bytes, which may cause some attempts at data type changes to fail. See Section C.10.4, “Limits on Table Column Count and Row Size”.
Columns that are part of a PRIMARY KEY
are made NOT NULL
even if not declared
that way.
Trailing spaces are automatically deleted from
ENUM
and
SET
member values when the
table is created.
MySQL maps certain data types used by other SQL database vendors to MySQL types. See Section 11.10, “Using Data Types from Other Database Engines”.
If you include a USING
clause to specify
an index type that is not permitted for a given storage
engine, but there is another index type available that the
engine can use without affecting query results, the engine
uses the available type.
If strict SQL mode is not enabled, a
VARCHAR
column with a length
specification greater than 65535 is converted to
TEXT
, and a
VARBINARY
column with a
length specification greater than 65535 is converted to
BLOB
. Otherwise, an error
occurs in either of these cases.
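As a sketch of this conversion (the table name is hypothetical, and the exact resulting TEXT type depends on the declared length and character set), with strict SQL mode disabled a declaration such as the following is silently converted, with a warning, to a TEXT column:
CREATE TABLE t_notes (c1 VARCHAR(70000) CHARACTER SET latin1);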
Specifying the CHARACTER SET binary
attribute for a character data type causes the column to be
created as the corresponding binary data type:
CHAR
becomes
BINARY
,
VARCHAR
becomes
VARBINARY
, and
TEXT
becomes
BLOB
. For the
ENUM
and
SET
data types, this does not
occur; they are created as declared. Suppose that you
specify a table using this definition:
CREATE TABLE t ( c1 VARCHAR(10) CHARACTER SET binary, c2 TEXT CHARACTER SET binary, c3 ENUM('a','b','c') CHARACTER SET binary );
The resulting table has this definition:
CREATE TABLE t ( c1 VARBINARY(10), c2 BLOB, c3 ENUM('a','b','c') CHARACTER SET binary );
To see whether MySQL used a data type other than the one you
specified, issue a DESCRIBE
or
SHOW CREATE TABLE
statement after
creating or altering the table.
Certain other data type changes can occur if you compress a table using myisampack. See Section 15.2.3.3, “Compressed Table Characteristics”.
As of MySQL 5.7.6, CREATE TABLE
supports the specification of generated columns. Values of a
generated column are computed from an expression included in the
column definition.
Generated columns are supported by the
NDB
storage engine beginning with
MySQL NDB Cluster 7.5.3.
The following simple example shows a table that stores the
lengths of the sides of right triangles in the
sidea
and sideb
columns,
and computes the length of the hypotenuse in
sidec
(the square root of the sum of the
squares of the other sides):
CREATE TABLE triangle (
  sidea DOUBLE,
  sideb DOUBLE,
  sidec DOUBLE AS (SQRT(sidea * sidea + sideb * sideb))
);
INSERT INTO triangle (sidea, sideb) VALUES(1,1),(3,4),(6,8);
Selecting from the table yields this result:
mysql> SELECT * FROM triangle;
+-------+-------+--------------------+
| sidea | sideb | sidec |
+-------+-------+--------------------+
| 1 | 1 | 1.4142135623730951 |
| 3 | 4 | 5 |
| 6 | 8 | 10 |
+-------+-------+--------------------+
Any application that uses the triangle
table
has access to the hypotenuse values without having to specify
the expression that calculates them.
Generated column definitions have this syntax:
col_name data_type
  [GENERATED ALWAYS] AS (expression)
  [VIRTUAL | STORED] [UNIQUE [KEY]] [COMMENT comment]
  [[NOT] NULL] [[PRIMARY] KEY]
AS (expression) indicates that the column is generated and defines the
expression used to compute column values. AS
may be preceded by GENERATED ALWAYS
to make
the generated nature of the column more explicit. Constructs
that are permitted or prohibited in the expression are discussed
later.
The VIRTUAL
or STORED
keyword indicates how column values are stored, which has
implications for column use:
VIRTUAL
: Column values are not stored,
but are evaluated when rows are read, immediately after any
BEFORE
triggers. A virtual column takes
no storage.
InnoDB
supports secondary indexes on
virtual columns. See
Section 13.1.18.9, “Secondary Indexes and Generated Columns”.
STORED
: Column values are evaluated and
stored when rows are inserted or updated. A stored column
does require storage space and can be indexed.
The default is VIRTUAL
if neither keyword is
specified.
It is permitted to mix VIRTUAL
and
STORED
columns within a table.
Other attributes may be given to indicate whether the column is
indexed or can be NULL
, or provide a comment.
(Note that the order of these attributes differs from their
order in nongenerated column definitions.)
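For example, the following hypothetical definition (a sketch, not part of the reference examples) shows a stored generated column whose index, comment, and NOT NULL attributes are given in the order required for generated columns:
CREATE TABLE prices (
  net DOUBLE,
  gross DOUBLE AS (net * 1.2) STORED UNIQUE COMMENT 'computed gross price' NOT NULL
);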
Generated column expressions must adhere to the following rules. An error occurs if an expression contains disallowed constructs.
Literals, deterministic built-in functions, and operators
are permitted. A function is deterministic if, given the
same data in tables, multiple invocations produce the same
result, independently of the connected user. Examples of
functions that fail this definition:
CONNECTION_ID()
,
CURRENT_USER()
,
NOW()
.
Subqueries, parameters, variables, stored functions, and user-defined functions are not permitted.
A generated column definition can refer to other generated columns, but only those occurring earlier in the table definition. A generated column definition can refer to any base (nongenerated) column in the table whether its definition occurs earlier or later.
The AUTO_INCREMENT
attribute cannot be
used in a generated column definition.
An AUTO_INCREMENT
column cannot be used
as a base column in a generated column definition.
As of MySQL 5.7.10, if expression evaluation causes
truncation or provides incorrect input to a function, the
CREATE TABLE
statement
terminates with an error and the DDL operation is rejected.
If the expression evaluates to a data type that differs from the declared column type, coercion to the declared type occurs according to the usual MySQL type-conversion rules. See Section 12.2, “Type Conversion in Expression Evaluation”.
If any component of the expression depends on the SQL mode, different results may occur for different uses of the table unless the SQL mode is the same during all uses.
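As a hypothetical illustration of these rules, the following definition is permitted because len_m refers only to a base column and len_mm refers to a generated column defined earlier in the table; using a nondeterministic function such as NOW() in either expression would instead cause the statement to fail:
CREATE TABLE measures (
  len_cm DOUBLE,
  len_m  DOUBLE AS (len_cm / 100),
  len_mm DOUBLE AS (len_m * 1000)
);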
For CREATE
TABLE ... LIKE
, the destination table preserves
generated column information from the original table.
For CREATE
TABLE ... SELECT
, the destination table does not
preserve information about whether columns in the selected-from
table are generated columns. The
SELECT
part of the statement
cannot assign values to generated columns in the destination
table.
Partitioning by generated columns is permitted. See Creating Partitioned Tables.
A foreign key constraint on a stored generated column cannot use
ON UPDATE CASCADE
, ON DELETE SET
NULL
, ON UPDATE SET NULL
,
ON DELETE SET DEFAULT
, or ON UPDATE
SET DEFAULT
.
A foreign key constraint cannot reference a virtual generated column.
For InnoDB
restrictions related to foreign
keys and generated columns, see
Section 14.8.1.6, “InnoDB and FOREIGN KEY Constraints”.
Triggers cannot use NEW.col_name or use OLD.col_name to refer to generated columns.
For INSERT
,
REPLACE
, and
UPDATE
, if a generated column is
inserted into, replaced, or updated explicitly, the only
permitted value is DEFAULT
.
A generated column in a view is considered updatable because it
is possible to assign to it. However, if such a column is
updated explicitly, the only permitted value is
DEFAULT
.
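For example, using the triangle table shown earlier, explicit assignments to the generated column are accepted only with DEFAULT (a minimal sketch):
INSERT INTO triangle (sidea, sideb, sidec) VALUES (5, 12, DEFAULT);
UPDATE triangle SET sidec = DEFAULT WHERE sidea = 5;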
Generated columns have several use cases, such as these:
Virtual generated columns can be used as a way to simplify and unify queries. A complicated condition can be defined as a generated column and referred to from multiple queries on the table to ensure that all of them use exactly the same condition.
Stored generated columns can be used as a materialized cache for complicated conditions that are costly to calculate on the fly.
Generated columns can simulate functional indexes: Use a
stored column to define a functional expression and index
it. This can be useful for working with columns of types
that cannot be indexed directly, such as
JSON
columns; see
Indexing a Generated Column to Provide a JSON Column Index, for a detailed
example.
The disadvantage of such an approach is that values are stored twice; once as the value of the generated column and once in the index.
If a generated column is indexed, the optimizer recognizes query expressions that match the column definition and uses indexes from the column as appropriate during query execution, even if a query does not refer to the column directly by name. For details, see Section 8.3.10, “Optimizer Use of Generated Column Indexes”.
Example:
Suppose that a table t1
contains
first_name
and last_name
columns and that applications frequently construct the full name
using an expression like this:
SELECT CONCAT(first_name,' ',last_name) AS full_name FROM t1;
One way to avoid writing out the expression is to create a view
v1
on t1
, which simplifies
applications by enabling them to select
full_name
directly without using an
expression:
CREATE VIEW v1 AS SELECT *, CONCAT(first_name,' ',last_name) AS full_name FROM t1;
SELECT full_name FROM v1;
A generated column also enables applications to select
full_name
directly without the need to define
a view:
CREATE TABLE t1 (
  first_name VARCHAR(10),
  last_name VARCHAR(10),
  full_name VARCHAR(255) AS (CONCAT(first_name,' ',last_name))
);
SELECT full_name FROM t1;
InnoDB
supports secondary indexes on virtual
generated columns. Other index types are not supported. A
secondary index defined on a virtual column is sometimes
referred to as a “virtual index”.
A secondary index may be created on one or more virtual columns
or on a combination of virtual columns and regular columns or
stored generated columns. Secondary indexes that include virtual
columns may be defined as UNIQUE
.
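As a sketch (the orders table here is hypothetical), the following definition combines a virtual column with secondary indexes, one of them UNIQUE and one mixing the virtual column with a regular column:
CREATE TABLE orders (
  quantity INT,
  unit_price DECIMAL(10,2),
  total DECIMAL(20,2) AS (quantity * unit_price) VIRTUAL,
  UNIQUE KEY idx_total (total),
  KEY idx_qty_total (quantity, total)
);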
When a secondary index is created on a virtual generated column, generated column values are materialized in the records of the index. If the index is a covering index (one that includes all the columns retrieved by a query), generated column values are retrieved from materialized values in the index structure instead of computed “on the fly”.
There are additional write costs to consider when using a
secondary index on a virtual column due to computation performed
when materializing virtual column values in secondary index
records during INSERT
and
UPDATE
operations. Even with
additional write costs, secondary indexes on virtual columns may
be preferable to generated stored columns,
which are materialized in the clustered index, resulting in
larger tables that require more disk space and memory. If a
secondary index is not defined on a virtual column, there are
additional costs for reads, as virtual column values must be
computed each time the column's row is examined.
Values of an indexed virtual column are MVCC-logged to avoid
unnecessary recomputation of generated column values during
rollback or during a purge operation. The data length of logged
values is limited by the index key limit of 767 bytes for
COMPACT
and REDUNDANT
row
formats, and 3072 bytes for DYNAMIC
and
COMPRESSED
row formats.
Adding or dropping a secondary index on a virtual column is an in-place operation.
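Continuing the hypothetical orders table shown above, such an index can be added or dropped in place, for example:
ALTER TABLE orders ADD INDEX idx_total2 (total), ALGORITHM=INPLACE;
ALTER TABLE orders DROP INDEX idx_total2, ALGORITHM=INPLACE;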
Prior to 5.7.16, a foreign key constraint cannot reference a secondary index defined on a virtual generated column.
In MySQL 5.7.13 and earlier, InnoDB
does not
permit defining a foreign key constraint with a cascading
referential action on the base column of an indexed generated
virtual column. This restriction is lifted in MySQL 5.7.14.
As noted elsewhere, JSON
columns cannot be indexed directly. To create an index that
references such a column indirectly, you can define a
generated column that extracts the information that should be
indexed, then create an index on the generated column, as
shown in this example:
mysql> CREATE TABLE jemp (
    ->     c JSON,
    ->     g INT GENERATED ALWAYS AS (c->"$.id"),
    ->     INDEX i (g)
    -> );
Query OK, 0 rows affected (0.28 sec)

mysql> INSERT INTO jemp (c) VALUES
    ->   ('{"id": "1", "name": "Fred"}'), ('{"id": "2", "name": "Wilma"}'),
    ->   ('{"id": "3", "name": "Barney"}'), ('{"id": "4", "name": "Betty"}');
Query OK, 4 rows affected (0.04 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> SELECT c->>"$.name" AS name
    -> FROM jemp WHERE g > 2;
+--------+
| name   |
+--------+
| Barney |
| Betty  |
+--------+
2 rows in set (0.00 sec)

mysql> EXPLAIN SELECT c->>"$.name" AS name
    -> FROM jemp WHERE g > 2\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: jemp
   partitions: NULL
         type: range
possible_keys: i
          key: i
      key_len: 5
          ref: NULL
         rows: 2
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */ select json_unquote(json_extract(`test`.`jemp`.`c`,'$.name')) AS `name` from `test`.`jemp` where (`test`.`jemp`.`g` > 2)
1 row in set (0.00 sec)
(We have wrapped the output from the last statement in this example to fit the viewing area.)
The
->
operator is supported in MySQL 5.7.9 and later. The
->>
operator is supported beginning with MySQL 5.7.13.
When you use EXPLAIN
on a
SELECT
or other SQL statement
containing one or more expressions that use the
->
or ->>
operator, these expressions are translated into their
equivalents using JSON_EXTRACT()
and (if
needed) JSON_UNQUOTE()
instead, as shown
here in the output from SHOW
WARNINGS
immediately following this
EXPLAIN
statement:
mysql> EXPLAIN SELECT c->>"$.name"
    -> FROM jemp WHERE g > 2 ORDER BY c->"$.name"\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: jemp
   partitions: NULL
         type: range
possible_keys: i
          key: i
      key_len: 5
          ref: NULL
         rows: 2
     filtered: 100.00
        Extra: Using where; Using filesort
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */ select json_unquote(json_extract(`test`.`jemp`.`c`,'$.name')) AS `c->>"$.name"` from `test`.`jemp` where (`test`.`jemp`.`g` > 2) order by json_extract(`test`.`jemp`.`c`,'$.name')
1 row in set (0.00 sec)
See the descriptions of the
->
and
->>
operators, as well as those of the
JSON_EXTRACT()
and
JSON_UNQUOTE()
functions, for
additional information and examples.
This technique also can be used to provide indexes that
indirectly reference columns of other types that cannot be
indexed directly, such as GEOMETRY
columns.
It is also possible to use indirect indexing of JSON columns in MySQL NDB Cluster 7.5.3 and later, subject to the following conditions:
NDB
handles a
JSON
column value
internally as a BLOB
. This
means that any NDB
table having one or
more JSON columns must have a primary key, else it cannot
be recorded in the binary log.
The NDB
storage engine does
not support indexing of virtual columns. Since the default
for generated columns is VIRTUAL
, you
must specify explicitly the generated column to which to
apply the indirect index as STORED
.
The CREATE TABLE
statement
used to create the table jempn
shown here
is a version of the jemp
table shown
previously, with modifications making it compatible with
NDB
:
CREATE TABLE jempn (
  a BIGINT(20) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  c JSON DEFAULT NULL,
  g INT GENERATED ALWAYS AS (c->"$.id") STORED,
  INDEX i (g)
) ENGINE=NDB;
We can populate this table using the following
INSERT
statement:
INSERT INTO jempn (a, c) VALUES
  (NULL, '{"id": "1", "name": "Fred"}'),
  (NULL, '{"id": "2", "name": "Wilma"}'),
  (NULL, '{"id": "3", "name": "Barney"}'),
  (NULL, '{"id": "4", "name": "Betty"}');
Now NDB
can use index i
,
as shown here:
mysql> EXPLAIN SELECT c->>"$.name" AS name FROM jempn WHERE g > 2\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: jempn
   partitions: p0,p1
         type: range
possible_keys: i
          key: i
      key_len: 5
          ref: NULL
         rows: 3
     filtered: 100.00
        Extra: Using where with pushed condition (`test`.`jempn`.`g` > 2)
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */ select json_unquote(json_extract(`test`.`jempn`.`c`,'$.name')) AS `name` from `test`.`jempn` where (`test`.`jempn`.`g` > 2)
1 row in set (0.00 sec)
You should keep in mind that a stored generated column uses
DataMemory
, and that
an index on such a column uses
IndexMemory
.
In MySQL NDB Cluster 7.5.2 and later, the table comment in a
CREATE TABLE
or ALTER
TABLE
statement can also be used to specify an
NDB_TABLE
option, which consists of one or
more name-value pairs, separated by commas if need be, following
the string NDB_TABLE=
. The complete syntax for option
names and values is shown here:
COMMENT="NDB_TABLE=ndb_table_option
[,ndb_table_option
[,...]]"ndb_table_option
: NOLOGGING={1|0} | READ_BACKUP={1|0} | PARTITION_BALANCE={FOR_RP_BY_NODE|FOR_RA_BY_NODE|FOR_RP_BY_LDM|FOR_RA_BY_LDM} | FULLY_REPLICATED={1|0}
Spaces are not permitted within the quoted string. The string is case-insensitive.
The four NDB
table options that can be set as
part of a comment in this way are described in more detail in
the next few paragraphs.
NOLOGGING
: Using 1 corresponds to having
ndb_table_no_logging
enabled,
but has no actual effect. Provided as a placeholder, mostly for
completeness of ALTER TABLE
statements.
READ_BACKUP
: Setting this option to 1 has the
same effect as though
ndb_read_backup
were enabled;
enables reading from any replica. Starting with MySQL NDB
Cluster 7.5.3, you can set READ_BACKUP
for an
existing table online (Bug #80858, Bug #23001617), using an
ALTER TABLE
statement similar to one of those
shown here:
ALTER TABLE ... ALGORITHM=INPLACE, COMMENT="NDB_TABLE=READ_BACKUP=1";
ALTER TABLE ... ALGORITHM=INPLACE, COMMENT="NDB_TABLE=READ_BACKUP=0";
Prior to MySQL NDB Cluster 7.5.4, setting
READ_BACKUP
to 1 also caused
FRAGMENT_COUNT_TYPE
to be set to
ONE_PER_LDM_PER_NODE_GROUP
.
For more information about the ALGORITHM
option for ALTER TABLE
, see
Section 13.1.8.2, “ALTER TABLE Online Operations in NDB Cluster”.
PARTITION_BALANCE
: Provides additional
control over assignment and placement of partitions. The
following four schemes are supported:
FOR_RP_BY_NODE
: One partition per node.
Only one LDM on each node stores a primary partition. Each partition is stored in the same LDM (same ID) on all nodes.
FOR_RA_BY_NODE
: One partition per node
group.
Each node stores a single partition, which can be either a primary replica or a backup replica. Each partition is stored in the same LDM on all nodes.
FOR_RP_BY_LDM
: One partition for each LDM
on each node; the default.
This is the same behavior as prior to MySQL NDB Cluster 7.5.2, except for a slightly different mapping of partitions to LDMs, starting with LDM 0 and placing one partition per node group, then moving on to the next LDM.
In MySQL NDB Cluster 7.5.4 and later, this is the setting
used if READ_BACKUP
is set to 1. (Bug
#82634, Bug #24482114)
FOR_RA_BY_LDM
: One partition per LDM in
each node group.
These partitions can be primary or backup partitions.
Prior to MySQL NDB Cluster 7.5.4, this is the setting used
if READ_BACKUP
is set to 1.
Prior to MySQL NDB Cluster 7.5.4,
PARTITION_BALANCE
was named
FRAGMENT_COUNT_TYPE
, and accepted as its
value one of (in the same order as that of the listing just
shown) ONE_PER_NODE
,
ONE_PER_NODE_GROUP
,
ONE_PER_LDM_PER_NODE
, or
ONE_PER_LDM_PER_NODE_GROUP
. (Bug #81761, Bug
#23547525)
FULLY_REPLICATED
controls whether the table
is fully replicated, that is, whether each data node has a
complete copy of the table. To enable full replication of the
table, use FULLY_REPLICATED=1
. You must also
set (or have already set) the table's
PARTITION_BALANCE
to either one of
FOR_RA_BY_NODE
or
FOR_RA_BY_LDM
in order for this to work.
This setting can also be controlled using the
ndb_fully_replicated
system variable. Setting
it to ON
enables the option by default for
all new NDB
tables; the default is
OFF
, which maintains the previous behavior
(as in MySQL NDB Cluster 7.5.1 and earlier, before support for
fully replicated tables was introduced). The
ndb_data_node_neighbour
system
variable is also used for fully replicated tables, to ensure
that when a fully replicated table is accessed, the data node
that is local to this MySQL Server is used.
An example of a CREATE TABLE
statement using
such a comment when creating an NDB
table is
shown here:
mysql>CREATE TABLE t1 (
>c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
>c2 VARCHAR(100),
>c3 VARCHAR(100) )
>ENGINE=NDB
>COMMENT="NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE";
The comment is displayed as part of the output of
SHOW CREATE TABLE
. The text of
the comment is also available from querying the MySQL
Information Schema TABLES
table, as
in this example:
mysql>SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
>FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1";
+------------+--------------+----------------------------------------------------------+ | TABLE_NAME | TABLE_SCHEMA | TABLE_COMMENT | +------------+--------------+----------------------------------------------------------+ | t1 | c | NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE | | t1 | d | | +------------+--------------+----------------------------------------------------------+ 2 rows in set (0.00 sec)
This comment syntax is also supported with
ALTER TABLE
statements for
NDB
tables. Keep in mind that a table comment
used with ALTER TABLE
replaces any existing
comment which the table might have.
mysql>ALTER TABLE t1 COMMENT="NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE";
Query OK, 0 rows affected (0.40 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
>FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1";
+------------+--------------+--------------------------------------------------+ | TABLE_NAME | TABLE_SCHEMA | TABLE_COMMENT | +------------+--------------+--------------------------------------------------+ | t1 | c | NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE | | t1 | d | | +------------+--------------+--------------------------------------------------+ 2 rows in set (0.01 sec)
You can also see the value of the
PARTITION_BALANCE
option in the output of
ndb_desc. ndb_desc also
shows whether the READ_BACKUP
and
FULLY_REPLICATED
options are set for the
table. See the description of this program for more information.
Because the READ_BACKUP
value was not carried
over to the new comment set by the ALTER
TABLE
statement, there is no longer a way using SQL to
retrieve the value previously set for it. To keep this from
happening, it is suggested that you preserve any such values
from the existing comment string, like this:
mysql>SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
>FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1";
+------------+--------------+----------------------------------------------------------+ | TABLE_NAME | TABLE_SCHEMA | TABLE_COMMENT | +------------+--------------+----------------------------------------------------------+ | t1 | c | NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE | | t1 | d | | +------------+--------------+----------------------------------------------------------+ 2 rows in set (0.00 sec) mysql>ALTER TABLE t1 COMMENT="NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RA_BY_NODE";
Query OK, 0 rows affected (1.56 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
>FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1";
+------------+--------------+----------------------------------------------------------------+ | TABLE_NAME | TABLE_SCHEMA | TABLE_COMMENT | +------------+--------------+----------------------------------------------------------------+ | t1 | c | NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RA_BY_NODE | | t1 | d | | +------------+--------------+----------------------------------------------------------------+ 2 rows in set (0.01 sec)
CREATE TABLESPACE tablespace_name

  InnoDB and NDB:
    ADD DATAFILE 'file_name'

  InnoDB only:
    [FILE_BLOCK_SIZE = value]

  NDB only:
    USE LOGFILE GROUP logfile_group
    [EXTENT_SIZE [=] extent_size]
    [INITIAL_SIZE [=] initial_size]
    [AUTOEXTEND_SIZE [=] autoextend_size]
    [MAX_SIZE [=] max_size]
    [NODEGROUP [=] nodegroup_id]
    [WAIT]
    [COMMENT [=] comment_text]

  InnoDB and NDB:
    [ENGINE [=] engine_name]
This statement is used to create a tablespace. The precise syntax
and semantics depend on the storage engine used. In standard MySQL
5.7 releases, this is always an
InnoDB
tablespace. MySQL NDB Cluster
7.5 also supports tablespaces using the
NDB
storage engine in addition to
those using InnoDB
.
An InnoDB
tablespace created using
CREATE TABLESPACE
is referred to as a
general tablespace. This is a shared
tablespace, similar to the system tablespace. It can hold multiple
tables, and supports all table row formats. General tablespaces
can be created in a location relative to or independent of the
MySQL data directory.
After creating an InnoDB general tablespace, you can use
CREATE TABLE tbl_name ... TABLESPACE [=] tablespace_name or
ALTER TABLE tbl_name TABLESPACE [=] tablespace_name to add tables
to the tablespace.
For more information, see Section 14.7.9, “InnoDB General Tablespaces”.
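For example (assuming a general tablespace named ts1, such as the one created in the example later in this section, and a hypothetical existing table t2), a table can be placed in the tablespace at creation time or moved there afterward:
CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1;
ALTER TABLE t2 TABLESPACE ts1;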
This statement is used to create a tablespace, which can contain
one or more data files, providing storage space for NDB Cluster
Disk Data tables (see Section 21.5.13, “NDB Cluster Disk Data Tables”).
One data file is created and added to the tablespace using this
statement. Additional data files may be added to the tablespace by
using the ALTER TABLESPACE
statement (see Section 13.1.9, “ALTER TABLESPACE Syntax”).
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a log file group with the same name, or a tablespace and a data file with the same name.
A log file group of one or more UNDO
log files
must be assigned to the tablespace to be created with the
USE LOGFILE GROUP
clause.
logfile_group
must be an existing log
file group created with CREATE LOGFILE
GROUP
(see Section 13.1.15, “CREATE LOGFILE GROUP Syntax”).
Multiple tablespaces may use the same log file group for
UNDO
logging.
When setting EXTENT_SIZE
or
INITIAL_SIZE
, you may optionally follow the
number with a one-letter abbreviation for an order of magnitude,
similar to those used in my.cnf
. Generally,
this is one of the letters M
(for megabytes) or
G
(for gigabytes).
INITIAL_SIZE
and EXTENT_SIZE
are subject to rounding as follows:
EXTENT_SIZE
is rounded up to the nearest
whole multiple of 32K.
INITIAL_SIZE
is rounded
down to the nearest whole multiple of
32K; this result is rounded up to the nearest whole multiple
of EXTENT_SIZE
(after any rounding).
The rounding just described is done explicitly, and a warning is
issued by the MySQL Server when any such rounding is performed.
The rounded values are also used by the NDB kernel for calculating
INFORMATION_SCHEMA.FILES
column
values and other purposes. However, to avoid an unexpected result,
we suggest that you always use whole multiples of 32K in
specifying these options.
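As a worked illustration of these rules, with the default EXTENT_SIZE of 1M (1048576 bytes), specifying INITIAL_SIZE = 1000000 is first rounded down to 983040 (30 × 32768) and then rounded up to 1048576, the nearest whole multiple of the extent size, with the server issuing a warning for the rounding.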
When CREATE TABLESPACE
is used with
ENGINE [=] NDB
, a tablespace and associated
data file are created on each Cluster data node. You can verify
that the data files were created and obtain information about them
by querying the
INFORMATION_SCHEMA.FILES
table. (See the example later in this section, and
Section 24.8, “The INFORMATION_SCHEMA FILES Table”.)
ADD DATAFILE
: Defines the name of a
tablespace data file; this option is always required. An
InnoDB
tablespace supports only a single
data file, whose name must include a .ibd
extension. An NDB Cluster tablespace supports multiple data
files which can have any legal file names; more data files can
be added to an NDB Cluster tablespace following its creation
by using an ALTER TABLESPACE
statement.
ALTER TABLESPACE
is not supported by
InnoDB
.
To place the data file in a location outside of the MySQL data
directory (datadir
), include
an absolute directory path or a path relative to the MySQL
data directory. If you do not specify a path, the tablespace
is created in the MySQL data directory. An
isl file is created in
the MySQL data directory when an InnoDB
tablespace is created outside of the MySQL data directory.
To avoid conflicts with implicitly created file-per-table tablespaces, creating a general tablespace in a subdirectory under the MySQL data directory is not supported. Also, when creating a general tablespace outside of the MySQL data directory, the directory must exist prior to creating the tablespace.
The file_name, including the path (optional), must be quoted with
single or double quotation marks. File names (not counting any
“.ibd” extension for InnoDB files) and directory names must be at
least one byte in length. Zero length file names and directory
names are not supported.
FILE_BLOCK_SIZE
: This option—which is
specific to InnoDB
, and is ignored by
NDB
—defines the block size for the
tablespace data file. If you do not specify this option,
FILE_BLOCK_SIZE
defaults to
innodb_page_size
.
FILE_BLOCK_SIZE
is required when you intend
to use the tablespace for storing compressed
InnoDB
tables
(ROW_FORMAT=COMPRESSED
).
If FILE_BLOCK_SIZE is equal to
innodb_page_size
, the
tablespace can contain only tables having an uncompressed row
format (COMPACT
,
REDUNDANT
, or DYNAMIC
).
The physical page size for tables using
COMPRESSED
differs from that of
uncompressed tables; this means that compressed tables and
uncompressed tables cannot coexist in the same tablespace.
For a general tablespace to contain compressed tables,
FILE_BLOCK_SIZE
must be specified, and the
FILE_BLOCK_SIZE
value must be a valid
compressed page size in relation to the
innodb_page_size
value. Also,
the physical page size of the compressed table
(KEY_BLOCK_SIZE
) must be equal to
FILE_BLOCK_SIZE/1024
. For example, if
innodb_page_size=16K
, and
FILE_BLOCK_SIZE=8K
, the
KEY_BLOCK_SIZE
of the table must be 8. For
more information, see Section 14.7.9, “InnoDB General Tablespaces”.
USE LOGFILE GROUP
: Required for
NDB
, this is the name of a log file group
previously created using CREATE LOGFILE
GROUP
. Not supported for InnoDB
,
where it fails with an error.
EXTENT_SIZE
: This option is specific to
NDB, and is not supported by InnoDB, where it fails with an
error. EXTENT_SIZE
sets the size, in bytes,
of the extents used by any files belonging to the tablespace.
The default value is 1M. The minimum size is 32K, and
theoretical maximum is 2G, although the practical maximum size
depends on a number of factors. In most cases, changing the
extent size does not have any measurable effect on
performance, and the default value is recommended for all but
the most unusual situations.
An extent is a unit of
disk space allocation. One extent is filled with as much data
as that extent can contain before another extent is used. In
theory, up to 65,535 (64K) extents may be used per data file;
however, the recommended maximum is 32,768 (32K). The
recommended maximum size for a single data file is
32G—that is, 32K extents × 1 MB per extent. In
addition, once an extent is allocated to a given partition, it
cannot be used to store data from a different partition; an
extent cannot store data from more than one partition. This
means, for example, that a tablespace having a single data file
whose INITIAL_SIZE
(described in the
following item) is 256 MB and whose
EXTENT_SIZE
is 128M has just two extents,
and so can be used to store data from at most two different
disk data table partitions.
You can see how many extents remain free in a given data file
by querying the
INFORMATION_SCHEMA.FILES
table,
and so derive an estimate for how much space remains free in
the file. For further discussion and examples, see
Section 24.8, “The INFORMATION_SCHEMA FILES Table”.
INITIAL_SIZE
: This option is specific to
NDB
, and is not supported by
InnoDB
, where it fails with an error.
The INITIAL_SIZE
parameter sets the total
size in bytes of the data file that was specified using
ADD DATAFILE
. Once this file has been
created, its size cannot be changed; however, you can add more
data files to the tablespace using
ALTER
TABLESPACE ... ADD DATAFILE
.
INITIAL_SIZE
is optional; its default value
is 134217728 (128 MB).
On 32-bit systems, the maximum supported value for
INITIAL_SIZE
is 4294967296 (4 GB).
AUTOEXTEND_SIZE
: Currently ignored by
MySQL; reserved for possible future use. Has no effect in any
release of MySQL 5.7 or MySQL NDB Cluster 7.5, regardless of
the storage engine used.
MAX_SIZE
: Currently ignored by MySQL;
reserved for possible future use. Has no effect in any release
of MySQL 5.7 or MySQL NDB Cluster 7.5, regardless of the
storage engine used.
NODEGROUP
: Currently ignored by MySQL;
reserved for possible future use. Has no effect in any release
of MySQL 5.7 or MySQL NDB Cluster 7.5, regardless of the
storage engine used.
WAIT
: Currently ignored by MySQL; reserved
for possible future use. Has no effect in any release of MySQL
5.7 or MySQL NDB Cluster 7.5, regardless of the storage engine
used.
COMMENT
: Currently ignored by MySQL;
reserved for possible future use. Has no effect in any release
of MySQL 5.7 or MySQL NDB Cluster 7.5, regardless of the
storage engine used.
ENGINE
: Defines the storage engine which
uses the tablespace, where
engine_name
is the name of the
storage engine. Currently, only the InnoDB
storage engine is supported by standard MySQL 5.7
releases. MySQL NDB Cluster 7.5 supports both
NDB
and InnoDB
tablespaces. The value of the
default_storage_engine
system
variable is used for ENGINE
if the option
is not specified.
For the rules covering the naming of MySQL tablespaces, see
Section 9.2, “Schema Object Names”. In addition to these rules, the
slash character (“/”) is not permitted, nor can
you use names beginning with innodb_
, as
this prefix is reserved for system use.
Tablespaces do not support temporary tables.
The TABLESPACE
option may be used with
CREATE TABLE
or
ALTER TABLE
to assign
InnoDB
table partitions or subpartitions to
a general
tablespace, a separate file-per-table tablespace, or
the system tablespace. TABLESPACE
option
support for table partitions and subpartitions was added in
MySQL 5.7. All partitions must belong to the same
storage engine. For more information, see
Section 14.7.9, “InnoDB General Tablespaces”.
innodb_file_per_table
,
innodb_file_format
, and
innodb_file_format_max
settings have no influence on CREATE
TABLESPACE
operations.
innodb_file_per_table
does
not need to be enabled. General tablespaces support all table
row formats regardless of file format settings. Likewise,
general tablespaces support the addition of tables of any row
format using
CREATE TABLE ...
TABLESPACE
, regardless of file format settings.
innodb_strict_mode
is not
applicable to general tablespaces. Tablespace management rules
are strictly enforced independently of
innodb_strict_mode
. If
CREATE TABLESPACE
parameters are incorrect
or incompatible, the operation fails regardless of the
innodb_strict_mode
setting.
When a table is added to a general tablespace using
CREATE TABLE ...
TABLESPACE
or
ALTER TABLE ...
TABLESPACE
,
innodb_strict_mode
is ignored
but the statement is evaluated as if
innodb_strict_mode
is
enabled.
Use DROP TABLESPACE
to remove a tablespace.
All tables must be dropped from a tablespace using
DROP TABLE
prior to dropping
the tablespace. Before dropping an NDB Cluster tablespace you
must also remove all its data files using one or more
ALTER
TABLESPACE ... DROP DATAFILE
statements. See
Section 21.5.13.1, “NDB Cluster Disk Data Objects”.
All parts of an InnoDB
table added to an
InnoDB
general tablespace reside in the
general tablespace, including indexes and
BLOB
pages.
For an NDB
table assigned to a tablespace,
only those columns which are not indexed are stored on disk,
and actually use the tablespace data files. Indexes and
indexed columns for all NDB
tables are
always kept in memory.
Similar to the system tablespace, truncating or dropping
tables stored in a general tablespace creates free space
internally in the general tablespace
.ibd data file which can
only be used for new InnoDB
data. Space is
not released back to the operating system as it is for
file-per-table tablespaces.
A general tablespace is not associated with any database or schema.
ALTER TABLE ...
DISCARD TABLESPACE
and
ALTER TABLE
...IMPORT TABLESPACE
are not supported for tables
that belong to a general tablespace.
The server uses tablespace-level metadata locking for DDL that references general tablespaces. By comparison, the server uses table-level metadata locking for DDL that references file-per-table tablespaces.
A generated or existing tablespace cannot be changed to a general tablespace.
Tables stored in a general tablespace can only be opened in MySQL 5.7.6 or later due to the addition of new table flags.
There is no conflict between general tablespace names and file-per-table tablespace names. The “/” character, which is present in file-per-table tablespace names, is not permitted in general tablespace names.
This example demonstrates creating a general tablespace and adding three uncompressed tables of different row formats.
mysql> CREATE TABLESPACE `ts1`
    ->     ADD DATAFILE 'ts1.ibd'
    ->     ENGINE=INNODB;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY)
    ->     TABLESPACE ts1
    ->     ROW_FORMAT=REDUNDANT;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE t2 (c1 INT PRIMARY KEY)
    ->     TABLESPACE ts1
    ->     ROW_FORMAT=COMPACT;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE t3 (c1 INT PRIMARY KEY)
    ->     TABLESPACE ts1
    ->     ROW_FORMAT=DYNAMIC;
Query OK, 0 rows affected (0.00 sec)
This example demonstrates creating a general tablespace and adding
a compressed table. The example assumes a default
innodb_page_size
of 16K. The
FILE_BLOCK_SIZE
of 8192 requires that the
compressed table have a KEY_BLOCK_SIZE
of 8.
mysql> CREATE TABLESPACE `ts2`
    ->     ADD DATAFILE 'ts2.ibd'
    ->     FILE_BLOCK_SIZE = 8192
    ->     ENGINE=INNODB;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t4 (c1 INT PRIMARY KEY)
    ->     TABLESPACE ts2
    ->     ROW_FORMAT=COMPRESSED
    ->     KEY_BLOCK_SIZE=8;
Query OK, 0 rows affected (0.00 sec)
Suppose that you wish to create an NDB Cluster Disk Data
tablespace named myts
using a datafile named
mydata-1.dat
. An NDB
tablespace always requires the use of a log file group consisting
of one or more undo log files. For this example, we first create a
log file group named mylg
that contains one
undo log file named myundo-1.dat
, using the
CREATE LOGFILE GROUP
statement
shown here:
mysql>CREATE LOGFILE GROUP mylg
->ADD UNDOFILE 'myundo-1.dat'
->ENGINE=NDB;
Query OK, 0 rows affected (3.29 sec)
Now you can create the tablespace previously described using the following statement:
mysql>CREATE TABLESPACE myts
->ADD DATAFILE 'mydata-1.dat'
->USE LOGFILE GROUP mylg
->ENGINE=NDB;
Query OK, 0 rows affected (2.98 sec)
You can now create a Disk Data table using a
CREATE TABLE
statement with the
TABLESPACE
and STORAGE DISK
options, similar to what is shown here:
mysql>CREATE TABLE mytable (
->id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
->lname VARCHAR(50) NOT NULL,
->fname VARCHAR(50) NOT NULL,
->dob DATE NOT NULL,
->joined DATE NOT NULL,
->INDEX(lname, fname)
->)
->TABLESPACE myts STORAGE DISK
->ENGINE=NDB;
Query OK, 0 rows affected (1.41 sec)
It is important to note that only the dob
and
joined
columns from mytable
are actually stored on disk, due to the fact that the
id
, lname
, and
fname
columns are all indexed.
As mentioned previously, when CREATE TABLESPACE
is used with ENGINE [=] NDB
, a tablespace and
associated data file are created on each NDB Cluster data node.
You can verify that the data files were created and obtain
information about them by querying the
INFORMATION_SCHEMA.FILES
table, as
shown here:
mysql>SELECT FILE_NAME, FILE_TYPE, LOGFILE_GROUP_NAME, STATUS, EXTRA
->FROM INFORMATION_SCHEMA.FILES
->WHERE TABLESPACE_NAME = 'myts';
+--------------+------------+--------------------+--------+----------------+
| file_name    | file_type  | logfile_group_name | status | extra          |
+--------------+------------+--------------------+--------+----------------+
| mydata-1.dat | DATAFILE   | mylg               | NORMAL | CLUSTER_NODE=5 |
| mydata-1.dat | DATAFILE   | mylg               | NORMAL | CLUSTER_NODE=6 |
| NULL         | TABLESPACE | mylg               | NORMAL | NULL           |
+--------------+------------+--------------------+--------+----------------+
3 rows in set (0.01 sec)
For additional information and examples, see Section 21.5.13.1, “NDB Cluster Disk Data Objects”.
CREATE
    [DEFINER = { user | CURRENT_USER }]
    TRIGGER trigger_name
    trigger_time trigger_event
    ON tbl_name FOR EACH ROW
    [trigger_order]
    trigger_body

trigger_time: { BEFORE | AFTER }

trigger_event: { INSERT | UPDATE | DELETE }

trigger_order: { FOLLOWS | PRECEDES } other_trigger_name
This statement creates a new trigger. A trigger is a named
database object that is associated with a table, and that
activates when a particular event occurs for the table. The
trigger becomes associated with the table named
tbl_name
, which must refer to a
permanent table. You cannot associate a trigger with a
TEMPORARY
table or a view.
Trigger names exist in the schema namespace, meaning that all triggers must have unique names within a schema. Triggers in different schemas can have the same name.
This section describes CREATE
TRIGGER
syntax. For additional discussion, see
Section 23.3.1, “Trigger Syntax and Examples”.
CREATE TRIGGER
requires the
TRIGGER
privilege for the table
associated with the trigger. The statement might also require the
SUPER
privilege, depending on the
DEFINER
value, as described later in this
section. If binary logging is enabled, CREATE
TRIGGER
might require the
SUPER
privilege, as described in
Section 23.7, “Binary Logging of Stored Programs”.
The DEFINER
clause determines the security
context to be used when checking access privileges at trigger
activation time, as described later in this section.
trigger_time
is the trigger action
time. It can be BEFORE
or
AFTER
to indicate that the trigger activates
before or after each row to be modified.
Basic column value checks occur prior to trigger activation, so
you cannot use BEFORE
triggers to convert
values inappropriate for the column type to valid values.
trigger_event
indicates the kind of
operation that activates the trigger. These
trigger_event
values are permitted:
INSERT
: The trigger activates
whenever a new row is inserted into the table; for example,
through INSERT
,
LOAD DATA
, and
REPLACE
statements.
UPDATE
: The trigger activates
whenever a row is modified; for example, through
UPDATE
statements.
DELETE
: The trigger activates
whenever a row is deleted from the table; for example, through
DELETE
and
REPLACE
statements.
DROP TABLE
and
TRUNCATE TABLE
statements on
the table do not activate this trigger,
because they do not use DELETE
.
Dropping a partition does not activate
DELETE
triggers, either.
The trigger_event
does not represent a
literal type of SQL statement that activates the trigger so much
as it represents a type of table operation. For example, an
INSERT
trigger activates not only
for INSERT
statements but also
LOAD DATA
statements because both
statements insert rows into a table.
A potentially confusing example of this is the INSERT
INTO ... ON DUPLICATE KEY UPDATE ...
syntax: a
BEFORE INSERT
trigger activates for every row,
followed by either an AFTER INSERT
trigger or
both the BEFORE UPDATE
and AFTER
UPDATE
triggers, depending on whether there was a
duplicate key for the row.
Cascaded foreign key actions do not activate triggers.
As of MySQL 5.7.2, it is possible to define multiple triggers for
a given table that have the same trigger event and action time.
For example, you can have two BEFORE UPDATE
triggers for a table. By default, triggers that have the same
trigger event and action time activate in the order they were
created. To affect trigger order, specify a
trigger_order
clause that indicates
FOLLOWS
or PRECEDES
and the
name of an existing trigger that also has the same trigger event
and action time. With FOLLOWS
, the new trigger
activates after the existing trigger. With
PRECEDES
, the new trigger activates before the
existing trigger.
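As a minimal sketch (assuming a table named account with an amount column, similar to the trigger examples in Section 23.3.1), the second trigger here is defined to activate before an existing trigger that has the same event and action time:
CREATE TRIGGER ins_sum BEFORE INSERT ON account
  FOR EACH ROW SET @sum = @sum + NEW.amount;
CREATE TRIGGER ins_transaction BEFORE INSERT ON account
  FOR EACH ROW PRECEDES ins_sum
  SET @deposits = @deposits + IF(NEW.amount>0,NEW.amount,0);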
Before MySQL 5.7.2, there cannot be multiple triggers for a given
table that have the same trigger event and action time. For
example, you cannot have two BEFORE UPDATE
triggers for a table. But you can have a BEFORE
UPDATE
and a BEFORE INSERT
trigger,
or a BEFORE UPDATE
and an AFTER
UPDATE
trigger.
trigger_body
is the statement to
execute when the trigger activates. To execute multiple
statements, use the
BEGIN ... END
compound statement construct. This also enables you to use the
same statements that are permitted within stored routines. See
Section 13.6.1, “BEGIN ... END Compound-Statement Syntax”. Some statements are not permitted in
triggers; see Section C.1, “Restrictions on Stored Programs”.
Within the trigger body, you can refer to columns in the subject
table (the table associated with the trigger) by using the aliases
OLD and NEW. OLD.col_name refers to a column of an existing row
before it is updated or deleted. NEW.col_name refers to the column
of a new row to be inserted or an existing row after it is updated.
Triggers cannot use NEW.col_name or use OLD.col_name to refer to
generated columns. For information about generated columns, see
Section 13.1.18.8, “CREATE TABLE and Generated Columns”.
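A minimal sketch (assuming a table named account with an amount column) that uses NEW.col_name in a BEFORE UPDATE trigger body to constrain the value being written:
delimiter //
CREATE TRIGGER upd_check BEFORE UPDATE ON account
FOR EACH ROW
BEGIN
    IF NEW.amount < 0 THEN
        SET NEW.amount = 0;
    ELSEIF NEW.amount > 100 THEN
        SET NEW.amount = 100;
    END IF;
END//
delimiter ;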
MySQL stores the sql_mode
system
variable setting in effect when a trigger is created, and always
executes the trigger body with this setting in force,
regardless of the current server SQL mode when the
trigger begins executing.
The DEFINER
clause specifies the MySQL account
to be used when checking access privileges at trigger activation
time. If a user
value is given, it
should be a MySQL account specified as
'user_name'@'host_name', CURRENT_USER, or
CURRENT_USER()
. The default
DEFINER
value is the user who executes the
CREATE TRIGGER
statement. This is
the same as specifying DEFINER = CURRENT_USER
explicitly.
If you specify the DEFINER
clause, these rules
determine the valid DEFINER
user values:
If you do not have the SUPER
privilege, the only permitted user
value is your own account, either specified literally or by
using CURRENT_USER
. You cannot
set the definer to some other account.
If you have the SUPER
privilege, you can specify any syntactically valid account
name. If the account does not exist, a warning is generated.
Although it is possible to create a trigger with a nonexistent
DEFINER
account, it is not a good idea for
such triggers to be activated until the account actually does
exist. Otherwise, the behavior with respect to privilege
checking is undefined.
MySQL takes the DEFINER
user into account when
checking trigger privileges as follows:
At CREATE TRIGGER
time, the
user who issues the statement must have the
TRIGGER
privilege.
At trigger activation time, privileges are checked against the
DEFINER
user. This user must have these
privileges:
The TRIGGER
privilege for
the subject table.
The SELECT
privilege for
the subject table if references to table columns occur
using OLD.col_name or NEW.col_name in the trigger body.
The UPDATE
privilege for
the subject table if table columns are targets of
SET NEW.col_name = value assignments in the trigger body.
Whatever other privileges normally are required for the statements executed by the trigger.
For more information about trigger security, see Section 23.6, “Access Control for Stored Programs and Views”.
Within a trigger body, the
CURRENT_USER()
function returns the
account used to check privileges at trigger activation time. This
is the DEFINER
user, not the user whose actions
caused the trigger to be activated. For information about user
auditing within triggers, see
Section 6.3.11, “SQL-Based MySQL Account Activity Auditing”.
If you use LOCK TABLES
to lock a
table that has triggers, the tables used within the trigger are
also locked, as described in
Section 13.3.5.2, “LOCK TABLES and Triggers”.
For additional discussion of trigger use, see Section 23.3.1, “Trigger Syntax and Examples”.
CREATE
    [OR REPLACE]
    [ALGORITHM = {UNDEFINED | MERGE | TEMPTABLE}]
    [DEFINER = { user | CURRENT_USER }]
    [SQL SECURITY { DEFINER | INVOKER }]
    VIEW view_name [(column_list)]
    AS select_statement
    [WITH [CASCADED | LOCAL] CHECK OPTION]
The CREATE VIEW
statement creates a
new view, or replaces an existing view if the OR
REPLACE
clause is given. If the view does not exist,
CREATE OR REPLACE
VIEW
is the same as CREATE
VIEW
. If the view does exist,
CREATE OR REPLACE
VIEW
is the same as ALTER
VIEW
.
For information about restrictions on view use, see Section C.5, “Restrictions on Views”.
The select_statement
is a
SELECT
statement that provides the
definition of the view. (Selecting from the view selects, in
effect, using the SELECT
statement.) The select_statement
can
select from base tables or other views.
The view definition is “frozen” at creation time and
is not affected by subsequent changes to the definitions of the
underlying tables. For example, if a view is defined as
SELECT *
on a table, new columns added to the
table later do not become part of the view, and columns dropped
from the table will result in an error when selecting from the
view.
The ALGORITHM
clause affects how MySQL
processes the view. The DEFINER
and
SQL SECURITY
clauses specify the security
context to be used when checking access privileges at view
invocation time. The WITH CHECK OPTION
clause
can be given to constrain inserts or updates to rows in tables
referenced by the view. These clauses are described later in this
section.
The CREATE VIEW
statement requires
the CREATE VIEW
privilege for the
view, and some privilege for each column selected by the
SELECT
statement. For columns used
elsewhere in the SELECT
statement,
you must have the SELECT
privilege.
If the OR REPLACE
clause is present, you must
also have the DROP
privilege for
the view. CREATE VIEW
might also
require the SUPER
privilege,
depending on the DEFINER
value, as described
later in this section.
When a view is referenced, privilege checking occurs as described later in this section.
A view belongs to a database. By default, a new view is created in
the default database. To create the view explicitly in a given
database, use db_name.view_name
syntax
to qualify the view name with the database name:
CREATE VIEW test.v AS SELECT * FROM t;
Unqualified table or view names in the
SELECT
statement are also
interpreted with respect to the default database. A view can refer
to tables or views in other databases by qualifying the table or
view name with the appropriate database name.
Within a database, base tables and views share the same namespace, so a base table and a view cannot have the same name.
Columns retrieved by the SELECT
statement can be simple references to table columns, or
expressions that use functions, constant values, operators, and so
forth.
A view must have unique column names with no duplicates, just like
a base table. By default, the names of the columns retrieved by
the SELECT
statement are used for
the view column names. To define explicit names for the view
columns, specify the optional
column_list
clause as a list of
comma-separated identifiers. The number of names in
column_list
must be the same as the
number of columns retrieved by the
SELECT
statement.
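For example (a sketch using a hypothetical t1 table with first_name and last_name columns), explicit view column names are supplied like this:
CREATE VIEW v_names (given, family) AS
  SELECT first_name, last_name FROM t1;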
A view can be created from many kinds of
SELECT
statements. It can refer to
base tables or other views. It can use joins,
UNION
, and subqueries. The
SELECT
need not even refer to any
tables:
CREATE VIEW v_today (today) AS SELECT CURRENT_DATE;
The following example defines a view that selects two columns from another table as well as an expression calculated from those columns:
mysql>CREATE TABLE t (qty INT, price INT);
mysql>INSERT INTO t VALUES(3, 50);
mysql>CREATE VIEW v AS SELECT qty, price, qty*price AS value FROM t;
mysql>SELECT * FROM v;
+------+-------+-------+
| qty  | price | value |
+------+-------+-------+
|    3 |    50 |   150 |
+------+-------+-------+
A view definition is subject to the following restrictions:
Before MySQL 5.7.7, the SELECT
statement cannot contain a subquery in the
FROM
clause.
The SELECT
statement cannot
refer to system variables or user-defined variables.
Within a stored program, the
SELECT
statement cannot refer
to program parameters or local variables.
The SELECT
statement cannot
refer to prepared statement parameters.
Any table or view referred to in the definition must exist.
If, after the view has been created, a table or view that the
definition refers to is dropped, use of the view results in an
error. To check a view definition for problems of this kind,
use the CHECK TABLE
statement.
The definition cannot refer to a TEMPORARY
table, and you cannot create a TEMPORARY
view.
You cannot associate a trigger with a view.
Aliases for column names in the
SELECT
statement are checked
against the maximum column length of 64 characters (not the
maximum alias length of 256 characters).
ORDER BY
is permitted in a view definition, but
it is ignored if you select from a view using a statement that has
its own ORDER BY
.
For other options or clauses in the definition, they are added to
the options or clauses of the statement that references the view,
but the effect is undefined. For example, if a view definition
includes a LIMIT
clause, and you select from
the view using a statement that has its own
LIMIT
clause, it is undefined which limit
applies. This same principle applies to options such as
ALL
, DISTINCT
, or
SQL_SMALL_RESULT
that follow the
SELECT
keyword, and to clauses such
as INTO
, FOR UPDATE
,
LOCK IN SHARE MODE
, and
PROCEDURE
.
The results obtained from a view may be affected if you change the query processing environment by changing system variables:
mysql> CREATE VIEW v (mycol) AS SELECT 'abc';
Query OK, 0 rows affected (0.01 sec)

mysql> SET sql_mode = '';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT "mycol" FROM v;
+-------+
| mycol |
+-------+
| mycol |
+-------+
1 row in set (0.01 sec)

mysql> SET sql_mode = 'ANSI_QUOTES';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT "mycol" FROM v;
+-------+
| mycol |
+-------+
| abc   |
+-------+
1 row in set (0.00 sec)
The DEFINER
and SQL SECURITY
clauses determine which MySQL account to use when checking access
privileges for the view when a statement is executed that
references the view. The valid SQL SECURITY
characteristic values are DEFINER
(the default)
and INVOKER
. These indicate that the required
privileges must be held by the user who defined or invoked the
view, respectively.
If a user
value is given for the
DEFINER
clause, it should be a MySQL account
specified as
'user_name'@'host_name', CURRENT_USER, or
CURRENT_USER()
. The default
DEFINER
value is the user who executes the
CREATE VIEW
statement. This is the
same as specifying DEFINER = CURRENT_USER
explicitly.
If the DEFINER
clause is present, these rules
determine the valid DEFINER
user values:
If you do not have the SUPER
privilege, the only valid user
value is your own account, either specified literally or by
using CURRENT_USER
. You cannot
set the definer to some other account.
If you have the SUPER
privilege, you can specify any syntactically valid account
name. If the account does not exist, a warning is generated.
Although it is possible to create a view with a nonexistent
DEFINER
account, an error occurs when the
view is referenced if the SQL SECURITY
value is DEFINER
but the definer account
does not exist.
For more information about view security, see Section 23.6, “Access Control for Stored Programs and Views”.
Within a view definition,
CURRENT_USER
returns the view's
DEFINER
value by default. For views defined
with the SQL SECURITY INVOKER
characteristic,
CURRENT_USER
returns the account
for the view's invoker. For information about user auditing within
views, see Section 6.3.11, “SQL-Based MySQL Account Activity Auditing”.
Within a stored routine that is defined with the SQL
SECURITY DEFINER
characteristic,
CURRENT_USER
returns the routine's
DEFINER
value. This also affects a view defined
within such a routine, if the view definition contains a
DEFINER
value of
CURRENT_USER
.
MySQL checks view privileges like this:
At view definition time, the view creator must have the
privileges needed to use the top-level objects accessed by the
view. For example, if the view definition refers to table
columns, the creator must have some privilege for each column
in the select list of the definition, and the
SELECT
privilege for each
column used elsewhere in the definition. If the definition
refers to a stored function, only the privileges needed to
invoke the function can be checked. The privileges required at
function invocation time can be checked only as it executes:
For different invocations, different execution paths within
the function might be taken.
The user who references a view must have appropriate
privileges to access it (SELECT
to select from it, INSERT
to
insert into it, and so forth.)
When a view has been referenced, privileges for objects
accessed by the view are checked against the privileges held
by the view DEFINER
account or invoker,
depending on whether the SQL SECURITY
characteristic is DEFINER
or
INVOKER
, respectively.
If reference to a view causes execution of a stored function,
privilege checking for statements executed within the function
depend on whether the function SQL SECURITY
characteristic is DEFINER
or
INVOKER
. If the security characteristic is
DEFINER
, the function runs with the
privileges of the DEFINER
account. If the
characteristic is INVOKER
, the function
runs with the privileges determined by the view's SQL
SECURITY
characteristic.
Example: A view might depend on a stored function, and that
function might invoke other stored routines. For example, the
following view invokes a stored function f()
:
CREATE VIEW v AS SELECT * FROM t WHERE t.id = f(t.name);
Suppose that f()
contains a statement such as
this:
IF name IS NULL THEN CALL p1(); ELSE CALL p2(); END IF;
The privileges required for executing statements within
f()
need to be checked when
f()
executes. This might mean that privileges
are needed for p1()
or p2()
,
depending on the execution path within f()
.
Those privileges must be checked at runtime, and the user who must
possess the privileges is determined by the SQL
SECURITY
values of the view v
and the
function f()
.
The DEFINER
and SQL SECURITY
clauses for views are extensions to standard SQL. In standard SQL,
views are handled using the rules for SQL SECURITY
DEFINER
. The standard says that the definer of the view,
which is the same as the owner of the view's schema, gets
applicable privileges on the view (for example,
SELECT
) and may grant them. MySQL
has no concept of a schema “owner”, so MySQL adds a
clause to identify the definer. The DEFINER
clause is an extension where the intent is to have what the
standard has; that is, a permanent record of who defined the view.
This is why the default DEFINER
value is the
account of the view creator.
The optional ALGORITHM
clause is a MySQL
extension to standard SQL. It affects how MySQL processes the
view. ALGORITHM
takes three values:
MERGE
, TEMPTABLE
, or
UNDEFINED
. For more information, see
Section 23.5.2, “View Processing Algorithms”, as well as
Section 8.2.2.3, “Optimizing Derived Tables and View References”.
Some views are updatable. That is, you can use them in statements
such as UPDATE
,
DELETE
, or
INSERT
to update the contents of
the underlying table. For a view to be updatable, there must be a
one-to-one relationship between the rows in the view and the rows
in the underlying table. There are also certain other constructs
that make a view nonupdatable.
A generated column in a view is considered updatable because it is
possible to assign to it. However, if such a column is updated
explicitly, the only permitted value is
DEFAULT
. For information about generated
columns, see Section 13.1.18.8, “CREATE TABLE and Generated Columns”.
The WITH CHECK OPTION
clause can be given for
an updatable view to prevent inserts or updates to rows except
those for which the WHERE
clause in the
select_statement
is true.
In a WITH CHECK OPTION
clause for an updatable
view, the LOCAL
and CASCADED
keywords determine the scope of check testing when the view is
defined in terms of another view. The LOCAL
keyword restricts the CHECK OPTION
only to the
view being defined. CASCADED
causes the checks
for underlying views to be evaluated as well. When neither keyword
is given, the default is CASCADED
.
For more information about updatable views and the WITH
CHECK OPTION
clause, see
Section 23.5.3, “Updatable and Insertable Views”, and
Section 23.5.4, “The View WITH CHECK OPTION Clause”.
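As an illustration (the table and view names are hypothetical), the first view below rejects inserts or updates of rows whose amount is not positive; the second view, defined over the first with LOCAL, checks only its own WHERE clause, whereas the default CASCADED would also re-check the underlying view:

CREATE VIEW v_pos AS SELECT id, amount FROM t WHERE amount > 0
    WITH CHECK OPTION;
CREATE VIEW v_small AS SELECT id, amount FROM v_pos WHERE amount < 100
    WITH LOCAL CHECK OPTION;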
Views created before MySQL 5.7.3 containing ORDER BY
can result in errors
at view evaluation time. Consider these view definitions, which
use integer
ORDER BY
with an ordinal number:
CREATE VIEW v1 AS SELECT x, y, z FROM t ORDER BY 2;
CREATE VIEW v2 AS SELECT x, 1, z FROM t ORDER BY 2;
In the first case, ORDER BY 2
refers to a named
column y
. In the second case, it refers to a
constant 1. For queries that select from either view fewer than 2
columns (the number named in the ORDER BY
clause), an error occurs if the server evaluates the view using
the MERGE algorithm. Examples:
mysql>SELECT x FROM v1;
ERROR 1054 (42S22): Unknown column '2' in 'order clause'
mysql>SELECT x FROM v2;
ERROR 1054 (42S22): Unknown column '2' in 'order clause'
As of MySQL 5.7.3, to handle view definitions like this, the
server writes them differently into the .frm
file that stores the view definition. This difference is visible
with SHOW CREATE VIEW
. Previously,
the .frm
file contained this for the
ORDER BY 2
clause:
For v1: ORDER BY 2
For v2: ORDER BY 2
As of 5.7.3, the .frm
file contains this:
For v1: ORDER BY `t`.`y`
For v2: ORDER BY ''
That is, for v1
, 2 is replaced by a reference
to the name of the column referred to. For v2
,
2 is replaced by a constant string expression (ordering by a
constant has no effect, so ordering by any constant will do).
If you experience view-evaluation errors such as just described,
drop and recreate the view so that the .frm
file contains the updated view representation. Alternatively, for
views like v2
that order by a constant value,
drop and recreate the view with no ORDER BY
clause.
DROP {DATABASE | SCHEMA} [IF EXISTS] db_name
DROP DATABASE
drops all tables in
the database and deletes the database. Be
very careful with this statement! To use
DROP DATABASE
, you need the
DROP
privilege on the database.
DROP
SCHEMA
is a synonym for DROP
DATABASE
.
When a database is dropped, user privileges on the database are not automatically dropped. See Section 13.7.1.4, “GRANT Syntax”.
IF EXISTS
is used to prevent an error from
occurring if the database does not exist.
If the default database is dropped, the default database is unset
(the DATABASE()
function returns
NULL
).
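For example, the following statement (the database name is hypothetical) drops the database if it is present and otherwise produces only a warning:

DROP DATABASE IF EXISTS test_db;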
If you use DROP DATABASE
on a
symbolically linked database, both the link and the original
database are deleted.
DROP DATABASE
returns the number of
tables that were removed. This corresponds to the number of
.frm
files removed.
The DROP DATABASE
statement removes
from the given database directory those files and directories that
MySQL itself may create during normal operation:
All files with the following extensions.
.BAK | .DAT | .HSH | .MRG |
.MYD | .MYI | .TRG | .TRN |
.cfg | .db | .frm | .ibd |
.ndb | .par
The db.opt
file, if it exists.
If other files or directories remain in the database directory
after MySQL removes those just listed, the database directory
cannot be removed. In this case, you must remove any remaining
files or directories manually and issue the
DROP DATABASE
statement again.
Dropping a database does not remove any
TEMPORARY
tables that were created in that
database. TEMPORARY
tables are automatically
removed when the session that created them ends. See
Section 13.1.18.3, “CREATE TEMPORARY TABLE Syntax”.
You can also drop databases with mysqladmin. See Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
DROP EVENT [IF EXISTS] event_name
This statement drops the event named
event_name
. The event immediately
ceases being active, and is deleted completely from the server.
If the event does not exist, the error ERROR 1517 (HY000): Unknown event
'event_name' results. You can override this and cause the statement to
generate a warning for nonexistent events instead by using IF EXISTS.
This statement requires the EVENT
privilege for the schema to which the event to be dropped belongs.
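A minimal example, using a hypothetical event name:

DROP EVENT IF EXISTS purge_old_rows;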
The DROP FUNCTION
statement is used
to drop stored functions and user-defined functions (UDFs):
For information about dropping stored functions, see Section 13.1.27, “DROP PROCEDURE and DROP FUNCTION Syntax”.
For information about dropping user-defined functions, see Section 13.7.3.2, “DROP FUNCTION Syntax”.
DROP INDEX index_name ON tbl_name
    [algorithm_option | lock_option] ...

algorithm_option:
    ALGORITHM [=] {DEFAULT|INPLACE|COPY}

lock_option:
    LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}
DROP INDEX
drops the index named
index_name
from the table
tbl_name
. This statement is mapped to
an ALTER TABLE
statement to drop
the index. See Section 13.1.8, “ALTER TABLE Syntax”.
To drop a primary key, the index name is always
PRIMARY
, which must be specified as a quoted
identifier because PRIMARY
is a reserved word:
DROP INDEX `PRIMARY` ON t;
Indexes on variable-width columns of
NDB
tables are dropped online; that
is, without any table copying. The table is not locked against
access from other NDB Cluster API nodes, although it is locked
against other operations on the same API node
for the duration of the operation. This is done automatically by
the server whenever it determines that it is possible to do so;
you do not have to use any special SQL syntax or server options to
cause it to happen.
ALGORITHM
and LOCK
clauses
may be given. These influence the table copying method and level
of concurrency for reading and writing the table while its indexes
are being modified. They have the same meaning as for the
ALTER TABLE
statement. For more
information, see Section 13.1.8, “ALTER TABLE Syntax”
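For example, assuming a table t1 with an index idx_c1 (both names hypothetical), the following statement requests an in-place index drop that permits concurrent reads and writes:

DROP INDEX idx_c1 ON t1 ALGORITHM=INPLACE LOCK=NONE;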
NDB Cluster formerly supported online DROP
INDEX
operations using the ONLINE
and
OFFLINE
keywords. These keywords are no longer
supported in MySQL NDB Cluster 7.5 and later, and their use causes
a syntax error. Instead, MySQL NDB Cluster 7.5 and later support
online operations using the same
ALGORITHM=INPLACE
syntax used with the standard
MySQL Server. See Section 13.1.8.2, “ALTER TABLE Online Operations in NDB Cluster”,
for more information.
DROP LOGFILE GROUP logfile_group
    ENGINE [=] engine_name
This statement drops the log file group named
logfile_group
. The log file group must
already exist or an error results. (For information on creating
log file groups, see Section 13.1.15, “CREATE LOGFILE GROUP Syntax”.)
Before dropping a log file group, you must drop all tablespaces
that use that log file group for UNDO
logging.
The required ENGINE
clause provides the name of
the storage engine used by the log file group to be dropped.
Currently, the only permitted values for
engine_name
are
NDB
and
NDBCLUSTER
.
DROP LOGFILE GROUP
is useful only
with Disk Data storage for NDB Cluster. See
Section 21.5.13, “NDB Cluster Disk Data Tables”.
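For example, assuming a log file group named lg_1 that was created earlier for NDB Cluster Disk Data:

DROP LOGFILE GROUP lg_1 ENGINE = NDB;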
DROP {PROCEDURE | FUNCTION} [IF EXISTS] sp_name
This statement is used to drop a stored procedure or function.
That is, the specified routine is removed from the server. You
must have the ALTER ROUTINE
privilege for the routine. (If the
automatic_sp_privileges
system variable is
enabled, that privilege and EXECUTE
are granted automatically to the routine creator when the routine
is created and dropped from the creator when the routine is
dropped. See Section 23.2.2, “Stored Routines and MySQL Privileges”.)
The IF EXISTS
clause is a MySQL extension. It
prevents an error from occurring if the procedure or function does
not exist. A warning is produced that can be viewed with
SHOW WARNINGS
.
DROP FUNCTION
is also used to drop
user-defined functions (see Section 13.7.3.2, “DROP FUNCTION Syntax”).
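For example, using a hypothetical procedure name:

DROP PROCEDURE IF EXISTS account_cleanup;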
DROP SERVER [ IF EXISTS ] server_name
Drops the server definition for the server named server_name. The
corresponding row in the mysql.servers table is deleted. This statement
requires the SUPER privilege.
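For example, to drop a server definition named s (a hypothetical name), producing only a warning if no such definition exists:

DROP SERVER IF EXISTS s;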
Dropping a server for a table does not affect any
FEDERATED
tables that used this connection
information when they were created. See
Section 13.1.17, “CREATE SERVER Syntax”.
DROP SERVER
does not cause an automatic commit.
In MySQL 5.7, DROP SERVER
is not
written to the binary log, regardless of the logging format that
is in use.
In MySQL 5.7.1, gtid_next
must be
set to AUTOMATIC
before issuing this statement.
This restriction does not apply in MySQL 5.7.2 or later. (Bug
#16062608, Bug #16715809, Bug #69045)
DROP [TEMPORARY] TABLE [IF EXISTS]
    tbl_name [, tbl_name] ...
    [RESTRICT | CASCADE]
DROP TABLE
removes one or more
tables. You must have the DROP
privilege for each table. All table data and the table definition
are removed, so be
careful with this statement! If any of the tables named
in the argument list do not exist, MySQL returns an error
indicating by name which nonexisting tables it was unable to drop,
but it also drops all of the tables in the list that do exist.
When a table is dropped, user privileges on the table are not automatically dropped. See Section 13.7.1.4, “GRANT Syntax”.
For a partitioned table, DROP TABLE
permanently removes the table definition, all of its partitions,
and all of the data which was stored in those partitions. It also
removes partition definitions associated with the dropped table.
Prior to MySQL 5.7.6, DROP TABLE
removes partition definitions (.par
) files
associated with the dropped table. As of MySQL 5.7.6, partition
definition (.par
) files are no longer
created. Instead, partition definitions are stored in the
internal data dictionary.
Use IF EXISTS
to prevent an error from
occurring for tables that do not exist. A NOTE
is generated for each nonexistent table when using IF
EXISTS
. See Section 13.7.5.40, “SHOW WARNINGS Syntax”.
IF EXISTS
can be useful for dropping tables in
unusual circumstances under which there is an
.frm
file but no table managed by the storage
engine. (For example, if an abnormal server exit occurs after
removal of the table from the storage engine but before
.frm
file removal.)
RESTRICT
and CASCADE
are
permitted to make porting easier. In MySQL 5.7, they
do nothing.
DROP TABLE
automatically commits
the current active transaction, unless you use the
TEMPORARY
keyword.
The TEMPORARY
keyword has the following
effects:
The statement drops only TEMPORARY
tables.
The statement does not end an ongoing transaction.
No access rights are checked. (A TEMPORARY
table is visible only to the session that created it, so no
check is necessary.)
Using TEMPORARY
is a good way to ensure that
you do not accidentally drop a non-TEMPORARY
table.
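For example, the following statement (the table name is hypothetical) drops only a TEMPORARY table of that name and cannot touch a permanent table:

DROP TEMPORARY TABLE IF EXISTS tmp_results;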
DROP TABLE
is not supported with
all innodb_force_recovery
settings. See Section 14.21.2, “Forcing InnoDB Recovery”.
DROP TABLESPACE tablespace_name
    [ENGINE [=] engine_name]
This statement drops a tablespace that was previously created
using CREATE TABLESPACE
. It is
supported with all MySQL NDB Cluster 7.5 releases, and with
InnoDB
in the standard MySQL Server as well,
beginning with MySQL 5.7.6.
ENGINE
sets the storage engine that uses the
tablespace, where engine_name
is the
name of the storage engine. Currently, the values
InnoDB
and NDB
are
supported. If not set, the value of
default_storage_engine
is used.
If it is not the same as the storage engine used to create the
tablespace, the DROP TABLESPACE
statement
fails.
For an InnoDB
tablespace, all tables must be
dropped from the tablespace prior to a DROP
TABLESPACE
operation. If the tablespace is not empty,
DROP TABLESPACE
returns an error.
As with the InnoDB
system tablespace,
truncating or dropping InnoDB
tables stored in
a general tablespace creates free space in the tablespace
.ibd data file, which can
only be used for new InnoDB
data. Space is not
released back to the operating system by such operations as it is
for file-per-table tablespaces.
An NDB
tablespace to be dropped must not
contain any data files; in other words, before you can drop an
NDB
tablespace, you must first drop each of its
data files using
ALTER TABLESPACE
... DROP DATAFILE
.
Tablespaces are not deleted automatically. A tablespace must
be dropped explicitly using DROP
TABLESPACE
. DROP
DATABASE
has no effect in this regard, even if the
operation drops all tables belonging to the tablespace.
A DROP DATABASE
operation can
drop tables that belong to a general tablespace but it cannot
drop the tablespace, even if the operation drops all tables
that belong to the tablespace. The tablespace must be dropped
explicitly using DROP TABLESPACE
.
tablespace_name
Similar to the system tablespace, truncating or dropping
tables stored in a general tablespace creates free space
internally in the general tablespace
.ibd data file which can
only be used for new InnoDB
data. Space is
not released back to the operating system as it is for
file-per-table tablespaces.
This example demonstrates how to drop an InnoDB
general tablespace. The general tablespace ts1
is created with a single table. Before dropping the tablespace,
the table must be dropped.
mysql>CREATE TABLESPACE `ts1`
    ->ADD DATAFILE 'ts1.ibd'
    ->ENGINE=INNODB;
Query OK, 0 rows affected (0.01 sec)

mysql>CREATE TABLE t1 (c1 INT PRIMARY KEY)
    ->TABLESPACE ts1
    ->ENGINE=INNODB;
Query OK, 0 rows affected (0.02 sec)

mysql>DROP TABLE t1;
Query OK, 0 rows affected (0.01 sec)

mysql>DROP TABLESPACE ts1;
Query OK, 0 rows affected (0.01 sec)
This example shows how to drop an NDB
tablespace myts
having a data file named
mydata-1.dat
after first creating the
tablespace, and assumes the existence of a log file group named
mylg
(see
Section 13.1.15, “CREATE LOGFILE GROUP Syntax”).
mysql>CREATE TABLESPACE myts
->ADD DATAFILE 'mydata-1.dat'
->USE LOGFILE GROUP mylg
->ENGINE=NDB;
You must remove all data files from the tablespace using
ALTER TABLESPACE
, as shown here,
before it can be dropped:
mysql>ALTER TABLESPACE myts
->DROP DATAFILE 'mydata-1.dat'
->ENGINE=NDB;
mysql>DROP TABLESPACE myts;
DROP TRIGGER [IF EXISTS] [schema_name.]trigger_name
This statement drops a trigger. The schema (database) name is
optional. If the schema is omitted, the trigger is dropped from
the default schema. DROP TRIGGER
requires the TRIGGER
privilege for
the table associated with the trigger.
Use IF EXISTS
to prevent an error from
occurring for a trigger that does not exist. A
NOTE
is generated for a nonexistent trigger
when using IF EXISTS
. See
Section 13.7.5.40, “SHOW WARNINGS Syntax”.
Triggers for a table are also dropped if you drop the table.
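For example, using hypothetical schema and trigger names:

DROP TRIGGER IF EXISTS test.ins_sum;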
DROP VIEW [IF EXISTS]
    view_name [, view_name] ...
    [RESTRICT | CASCADE]
DROP VIEW
removes one or more
views. You must have the DROP
privilege for each view. If any of the views named in the argument
list do not exist, MySQL returns an error indicating by name which
nonexisting views it was unable to drop, but it also drops all of
the views in the list that do exist.
The IF EXISTS
clause prevents an error from
occurring for views that don't exist. When this clause is given, a
NOTE
is generated for each nonexistent view.
See Section 13.7.5.40, “SHOW WARNINGS Syntax”.
RESTRICT
and CASCADE
, if
given, are parsed and ignored.
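For example, using hypothetical view names:

DROP VIEW IF EXISTS v1, v2;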
RENAME TABLE tbl_name TO new_tbl_name
    [, tbl_name2 TO new_tbl_name2] ...
This statement renames one or more tables. The rename operation is done atomically, which means that no other session can access any of the tables while the rename is running.
For example, a table named old_table
can be
renamed to new_table
as shown here:
RENAME TABLE old_table TO new_table;
This statement is equivalent to the following
ALTER TABLE
statement:
ALTER TABLE old_table RENAME new_table;
If the statement renames more than one table, renaming operations
are done from left to right. If you want to swap two table names,
you can do so like this (assuming that
tmp_table
does not already exist):
RENAME TABLE old_table TO tmp_table, new_table TO old_table, tmp_table TO new_table;
MySQL checks the destination table name before checking whether
the source table exists. For example, if
new_table
already exists and
old_table
does not, the following statement
fails as shown here:
mysql>SHOW TABLES;
+----------------+
| Tables_in_mydb |
+----------------+
| table_a        |
+----------------+
1 row in set (0.00 sec)

mysql>RENAME TABLE table_b TO table_a;
ERROR 1050 (42S01): Table 'table_a' already exists
As long as two databases are on the same file system, you can use
RENAME TABLE
to move a table from
one database to another:
RENAME TABLE current_db.tbl_name TO other_db.tbl_name;
You can use this method to move all tables from one database to a different one, in effect renaming the database. (MySQL has no single statement to perform this task.)
If there are any triggers associated with a table which is moved
to a different database using RENAME TABLE
,
then the statement fails with the error Trigger in
wrong schema.
RENAME TABLE
changes internally generated
foreign key constraint names and user-defined foreign key
constraint names that contain the string
“tbl_name_ibfk_” to reflect the new table name. InnoDB interprets
foreign key constraint names that contain the string
“tbl_name_ibfk_” as internally generated names.
Foreign keys that point to the renamed table are not automatically updated. In such cases, you must drop and re-create the foreign keys in order for them to function properly.
RENAME TABLE
also works for views, as long as
you do not try to rename a view into a different database.
Any privileges granted specifically for the renamed table or view are not migrated to the new name. They must be changed manually.
When you execute RENAME TABLE
, you cannot have
any locked tables or active transactions. You must also have the
ALTER
and
DROP
privileges on the original
table, and the CREATE
and
INSERT
privileges on the new table.
If MySQL encounters any errors in a multiple-table rename, it does a reverse rename for all renamed tables to return everything to its original state.
You cannot use RENAME TABLE
to rename a
TEMPORARY
table. However, you can use
ALTER TABLE
with temporary tables.
Like RENAME TABLE
, ALTER TABLE ...
RENAME
can also be used to move a table to a different
database. Regardless of the statement used to perform the rename,
if the rename operation would move the table to a database located
on a different file system, the success of the outcome is platform
specific and depends on the underlying operating system calls used
to move the table files.
TRUNCATE [TABLE] tbl_name
TRUNCATE TABLE
empties a table
completely. It requires the DROP
privilege.
Logically, TRUNCATE TABLE
is
similar to a DELETE
statement that
deletes all rows, or a sequence of DROP
TABLE
and CREATE TABLE
statements. To achieve high performance, it bypasses the DML
method of deleting data. Thus, it cannot be rolled back, it does
not cause ON DELETE
triggers to fire, and it
cannot be performed for InnoDB
tables with
parent-child foreign key relationships.
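For example, using a hypothetical table name:

TRUNCATE TABLE t1;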
Although TRUNCATE TABLE
is similar
to DELETE
, it is classified as a
DDL statement rather than a DML statement. It differs from
DELETE
in the following ways in
MySQL 5.7:
Truncate operations drop and re-create the table, which is much faster than deleting rows one by one, particularly for large tables.
Truncate operations cause an implicit commit, and so cannot be rolled back.
Truncation operations cannot be performed if the session holds an active table lock.
TRUNCATE TABLE
fails for an
InnoDB
table or
NDB
table if there are any
FOREIGN KEY
constraints from other tables
that reference the table. Foreign key constraints between
columns of the same table are permitted.
Truncation operations do not return a meaningful value for the number of deleted rows. The usual result is “0 rows affected,” which should be interpreted as “no information.”
As long as the table format file tbl_name.frm is valid, the table can
be re-created as an empty table with TRUNCATE TABLE, even if the data
or index files have become corrupted.
Any AUTO_INCREMENT
value is reset to its
start value. This is true even for MyISAM
and InnoDB
, which normally do not reuse
sequence values.
When used with partitioned tables,
TRUNCATE TABLE
preserves the
partitioning; that is, the data and index files are dropped
and re-created, while the partition definitions
(.par
) file is unaffected.
As of MySQL 5.7.6, partition definition
(.par
) files are no longer created.
Instead, partition definitions are stored in the internal
data dictionary.
The TRUNCATE TABLE
statement
does not invoke ON DELETE
triggers.
TRUNCATE TABLE
for a table closes
all handlers for the table that were opened with
HANDLER OPEN
.
TRUNCATE TABLE
is treated for
purposes of binary logging and replication as
DROP TABLE
followed by
CREATE TABLE
—that is, as DDL
rather than DML. This is due to the fact that, when using
InnoDB
and other transactional
storage engines where the transaction isolation level does not
permit statement-based logging (READ
COMMITTED
or READ
UNCOMMITTED
), the statement was not logged and
replicated when using STATEMENT
or
MIXED
logging mode. (Bug #36763) However, it is
still applied on replication slaves using
InnoDB
in the manner described
previously.
On a system with a large InnoDB
buffer pool and
innodb_adaptive_hash_index
enabled, TRUNCATE TABLE
operations may cause a
temporary drop in system performance due to an LRU scan that
occurs when removing an InnoDB
table's adaptive
hash index entries. The problem was addressed for
DROP TABLE
in MySQL 5.5.23 (Bug
#13704145, Bug #64284) but remains a known issue for
TRUNCATE TABLE
(Bug #68184).
TRUNCATE TABLE
can be used with
Performance Schema summary tables, but the effect is to reset the
summary columns to 0 or NULL
, not to remove
rows. See Section 25.11.15, “Performance Schema Summary Tables”.
CALL sp_name([parameter[,...]])

CALL sp_name[()]
The CALL
statement invokes a stored
procedure that was defined previously with
CREATE PROCEDURE
.
Stored procedures that take no arguments can be invoked without
parentheses. That is, CALL p()
and
CALL p
are equivalent.
CALL
can pass back values to its
caller using parameters that are declared as
OUT
or INOUT
parameters.
When the procedure returns, a client program can also obtain the
number of rows affected for the final statement executed within
the routine: At the SQL level, call the
ROW_COUNT()
function; from the C
API, call the
mysql_affected_rows()
function.
To get back a value from a procedure using an
OUT
or INOUT
parameter, pass
the parameter by means of a user variable, and then check the
value of the variable after the procedure returns. (If you are
calling the procedure from within another stored procedure or
function, you can also pass a routine parameter or local routine
variable as an IN
or INOUT
parameter.) For an INOUT
parameter, initialize
its value before passing it to the procedure. The following
procedure has an OUT
parameter that the
procedure sets to the current server version, and an
INOUT
value that the procedure increments by
one from its current value:
CREATE PROCEDURE p (OUT ver_param VARCHAR(25), INOUT incr_param INT)
BEGIN
  # Set value of OUT parameter
  SELECT VERSION() INTO ver_param;
  # Increment value of INOUT parameter
  SET incr_param = incr_param + 1;
END;
Before calling the procedure, initialize the variable to be passed
as the INOUT
parameter. After calling the
procedure, the values of the two variables will have been set or
modified:
mysql>SET @increment = 10;
mysql>CALL p(@version, @increment);
mysql>SELECT @version, @increment;
+--------------+------------+
| @version     | @increment |
+--------------+------------+
| 5.5.3-m3-log |         11 |
+--------------+------------+
In prepared CALL
statements used
with PREPARE
and
EXECUTE
, placeholders can be used
for IN
parameters. For OUT
and INOUT
parameters, placeholder support is
available as of MySQL 5.5.3. These types of parameters can be used
as follows:
mysql>SET @increment = 10;
mysql>PREPARE s FROM 'CALL p(?, ?)';
mysql>EXECUTE s USING @version, @increment;
mysql>SELECT @version, @increment;
+--------------+------------+
| @version     | @increment |
+--------------+------------+
| 5.5.3-m3-log |         11 |
+--------------+------------+
Before MySQL 5.5.3, placeholder support is not available for
OUT
or INOUT
parameters. To
work around this limitation for OUT
and
INOUT
parameters, forego the use of
placeholders; instead, refer to user variables in the
CALL
statement itself and do not
specify them in the EXECUTE
statement:
mysql>SET @increment = 10;
mysql>PREPARE s FROM 'CALL p(@version, @increment)';
mysql>EXECUTE s;
mysql>SELECT @version, @increment;
+--------------+------------+
| @version     | @increment |
+--------------+------------+
| 5.5.0-m2-log |         11 |
+--------------+------------+
To write C programs that use the
CALL
SQL statement to execute
stored procedures that produce result sets, the
CLIENT_MULTI_RESULTS
flag must be enabled. This
is because each CALL
returns a
result to indicate the call status, in addition to any result sets
that might be returned by statements executed within the
procedure. CLIENT_MULTI_RESULTS
must also be
enabled if CALL
is used to execute
any stored procedure that contains prepared statements. It cannot
be determined when such a procedure is loaded whether those
statements will produce result sets, so it is necessary to assume
that they will.
CLIENT_MULTI_RESULTS
can be enabled when you
call mysql_real_connect()
, either
explicitly by passing the CLIENT_MULTI_RESULTS
flag itself, or implicitly by passing
CLIENT_MULTI_STATEMENTS
(which also enables
CLIENT_MULTI_RESULTS
). In MySQL
5.7, CLIENT_MULTI_RESULTS
is
enabled by default.
To process the result of a CALL
statement executed using
mysql_query()
or
mysql_real_query()
, use a loop
that calls mysql_next_result()
to
determine whether there are more results. For an example, see
Section 27.8.17, “C API Support for Multiple Statement Execution”.
For programs written in a language that provides a MySQL
interface, there is no native method prior to MySQL 5.5.3 for
directly retrieving the results of OUT
or
INOUT
parameters from
CALL
statements. To get the
parameter values, pass user-defined variables to the procedure in
the CALL
statement and then execute
a SELECT
statement to produce a
result set containing the variable values. To handle an
INOUT
parameter, execute a statement prior to
the CALL
that sets the
corresponding user variable to the value to be passed to the
procedure.
The following example illustrates the technique (without error
checking) for the stored procedure p
described
earlier that has an OUT
parameter and an
INOUT
parameter:
mysql_query(mysql, "SET @increment = 10"); mysql_query(mysql, "CALL p(@version, @increment)"); mysql_query(mysql, "SELECT @version, @increment"); result = mysql_store_result(mysql); row = mysql_fetch_row(result); mysql_free_result(result);
After the preceding code executes, row[0]
and
row[1]
contain the values of
@version
and @increment
,
respectively.
In MySQL 5.7, C programs can use the
prepared-statement interface to execute
CALL
statements and access
OUT
and INOUT
parameters.
This is done by processing the result of a
CALL
statement using a loop that
calls mysql_stmt_next_result()
to
determine whether there are more results. For an example, see
Section 27.8.20, “C API Support for Prepared CALL Statements”. Languages that
provide a MySQL interface can use prepared
CALL
statements to directly
retrieve OUT
and INOUT
procedure parameters.
In MySQL 5.7, metadata changes to objects referred to by stored programs are detected and cause automatic reparsing of the affected statements when the program is next executed. For more information, see Section 8.10.4, “Caching of Prepared Statements and Stored Programs”.
DELETE
is a DML statement that
removes rows from a table.
DELETE [LOW_PRIORITY] [QUICK] [IGNORE] FROM tbl_name
    [PARTITION (partition_name,...)]
    [WHERE where_condition]
    [ORDER BY ...]
    [LIMIT row_count]
The DELETE
statement deletes rows from
tbl_name
and returns the number of
deleted rows. To check the number of deleted rows, call the
ROW_COUNT()
function described in
Section 12.14, “Information Functions”.
The conditions in the optional WHERE
clause
identify which rows to delete. With no WHERE
clause, all rows are deleted.
where_condition
is an expression that
evaluates to true for each row to be deleted. It is specified as
described in Section 13.2.9, “SELECT Syntax”.
If the ORDER BY
clause is specified, the rows
are deleted in the order that is specified. The
LIMIT
clause places a limit on the number of
rows that can be deleted. These clauses apply to single-table
deletes, but not multi-table deletes.
DELETE [LOW_PRIORITY] [QUICK] [IGNORE]
    tbl_name[.*] [, tbl_name[.*]] ...
    FROM table_references
    [WHERE where_condition]
Or:
DELETE [LOW_PRIORITY] [QUICK] [IGNORE]
    FROM tbl_name[.*] [, tbl_name[.*]] ...
    USING table_references
    [WHERE where_condition]
You need the DELETE
privilege on a
table to delete rows from it. You need only the
SELECT
privilege for any columns
that are only read, such as those named in the
WHERE
clause.
When you do not need to know the number of deleted rows, the
TRUNCATE TABLE
statement is a
faster way to empty a table than a
DELETE
statement with no
WHERE
clause. Unlike
DELETE
,
TRUNCATE TABLE
cannot be used
within a transaction or if you have a lock on the table. See
Section 13.1.34, “TRUNCATE TABLE Syntax” and
Section 13.3.5, “LOCK TABLES and UNLOCK TABLES Syntax”.
The speed of delete operations may also be affected by factors discussed in Section 8.2.4.3, “Optimizing DELETE Statements”.
To ensure that a given DELETE statement does not
take too much time, the MySQL-specific LIMIT row_count
clause for DELETE specifies the maximum number of rows
to be deleted. If the number of rows to delete is larger than the
limit, repeat the DELETE statement until the number of
affected rows is less than the LIMIT value.
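For example, expired rows might be removed in batches as follows (the table and column names are hypothetical); the statement is reissued until it reports fewer than 1000 affected rows:

DELETE FROM sessions WHERE expires < NOW() LIMIT 1000;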
You cannot delete from a table and select from the same table in a subquery.
DELETE
supports explicit partition selection
using the PARTITION
option, which takes a
comma-separated list of the names of one or more partitions or
subpartitions (or both) from which to select rows to be dropped.
Partitions not included in the list are ignored. Given a
partitioned table t
with a partition named
p0
, executing the statement DELETE
FROM t PARTITION (p0)
has the same effect on the table
as executing ALTER
TABLE t TRUNCATE PARTITION (p0)
; in both cases, all rows
in partition p0
are dropped.
PARTITION
can be used along with a
WHERE
condition, in which case the condition is
tested only on rows in the listed partitions. For example,
DELETE FROM t PARTITION (p0) WHERE c < 5
deletes rows only from partition p0
for which
the condition c < 5
is true; rows in any
other partitions are not checked and thus not affected by the
DELETE
.
The PARTITION
option can also be used in
multiple-table DELETE
statements. You can use
up to one such option per table named in the
FROM
option.
See Section 22.5, “Partition Selection”, for more information and examples.
If you delete the row containing the maximum value for an
AUTO_INCREMENT
column, the value is not reused
for a MyISAM
or InnoDB
table. If you delete all rows in the table with DELETE FROM
tbl_name (without a WHERE clause) in
autocommit mode, the sequence starts over for all storage engines
except InnoDB and MyISAM. There are
some exceptions to this behavior for InnoDB
tables, as discussed in
Section 14.8.1.5, “AUTO_INCREMENT Handling in InnoDB”.
For MyISAM
tables, you can specify an
AUTO_INCREMENT
secondary column in a
multiple-column key. In this case, reuse of values deleted from
the top of the sequence occurs even for MyISAM
tables. See Section 3.6.9, “Using AUTO_INCREMENT”.
The DELETE
statement supports the
following modifiers:
If you specify LOW_PRIORITY
, the server
delays execution of the DELETE
until no other clients are reading from the table. This
affects only storage engines that use only table-level locking
(such as MyISAM
, MEMORY
,
and MERGE
).
For MyISAM
tables, if you use the
QUICK
modifier, the storage engine does not
merge index leaves during delete, which may speed up some
kinds of delete operations.
The IGNORE
modifier causes MySQL to ignore
errors during the process of deleting rows. (Errors
encountered during the parsing stage are processed in the
usual manner.) Errors that are ignored due to the use of
IGNORE
are returned as warnings. For more
information, see Comparison of the IGNORE Keyword and Strict SQL Mode.
If the DELETE
statement includes an
ORDER BY
clause, rows are deleted in the order
specified by the clause. This is useful primarily in conjunction
with LIMIT
. For example, the following
statement finds rows matching the WHERE
clause,
sorts them by timestamp_column
, and deletes the
first (oldest) one:
DELETE FROM somelog WHERE user = 'jcole' ORDER BY timestamp_column LIMIT 1;
ORDER BY
also helps to delete rows in an order
required to avoid referential integrity violations.
If you are deleting many rows from a large table, you may exceed
the lock table size for an InnoDB
table. To
avoid this problem, or simply to minimize the time that the table
remains locked, the following strategy (which does not use
DELETE
at all) might be helpful:
Select the rows not to be deleted into an empty table that has the same structure as the original table:
INSERT INTO t_copy SELECT * FROM t WHERE ... ;
Use RENAME TABLE
to atomically
move the original table out of the way and rename the copy to
the original name:
RENAME TABLE t TO t_old, t_copy TO t;
Drop the original table:
DROP TABLE t_old;
No other sessions can access the tables involved while
RENAME TABLE
executes, so the
rename operation is not subject to concurrency problems. See
Section 13.1.33, “RENAME TABLE Syntax”.
In MyISAM
tables, deleted rows are maintained
in a linked list and subsequent
INSERT
operations reuse old row
positions. To reclaim unused space and reduce file sizes, use the
OPTIMIZE TABLE
statement or the
myisamchk utility to reorganize tables.
OPTIMIZE TABLE
is easier to use,
but myisamchk is faster. See
Section 13.7.2.4, “OPTIMIZE TABLE Syntax”, and Section 4.6.3, “myisamchk — MyISAM Table-Maintenance Utility”.
The QUICK
modifier affects whether index leaves
are merged for delete operations. DELETE QUICK
is most useful for applications where index values for deleted
rows are replaced by similar index values from rows inserted
later. In this case, the holes left by deleted values are reused.
DELETE QUICK
is not useful when deleted values
lead to underfilled index blocks spanning a range of index values
for which new inserts occur again. In this case, use of
QUICK
can lead to wasted space in the index
that remains unreclaimed. Here is an example of such a scenario: rows
in a range of index values are deleted with DELETE QUICK, and new rows
are subsequently inserted only with index values outside that range.
In this scenario, the index blocks associated with the deleted
index values become underfilled but are not merged with other
index blocks due to the use of QUICK
. They
remain underfilled when new inserts occur, because new rows do not
have index values in the deleted range. Furthermore, they remain
underfilled even if you later use
DELETE
without
QUICK
, unless some of the deleted index values
happen to lie in index blocks within or adjacent to the
underfilled blocks. To reclaim unused index space under these
circumstances, use OPTIMIZE TABLE
.
If you are going to delete many rows from a table, it might be
faster to use DELETE QUICK
followed by
OPTIMIZE TABLE
. This rebuilds the
index rather than performing many index block merge operations.
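For example (table and column names hypothetical):

DELETE QUICK FROM t1 WHERE created < '2015-01-01';
OPTIMIZE TABLE t1;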
You can specify multiple tables in a
DELETE
statement to delete rows
from one or more tables depending on the condition in the
WHERE
clause. You cannot use ORDER
BY
or LIMIT
in a multiple-table
DELETE
. The
table_references
clause lists the
tables involved in the join, as described in
Section 13.2.9.2, “JOIN Syntax”.
For the first multiple-table syntax, only matching rows from the
tables listed before the FROM
clause are
deleted. For the second multiple-table syntax, only matching rows
from the tables listed in the FROM
clause
(before the USING
clause) are deleted. The
effect is that you can delete rows from many tables at the same
time and have additional tables that are used only for searching:
DELETE t1, t2 FROM t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.id=t2.id AND t2.id=t3.id;
Or:
DELETE FROM t1, t2 USING t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.id=t2.id AND t2.id=t3.id;
These statements use all three tables when searching for rows to
delete, but delete matching rows only from tables
t1
and t2
.
The preceding examples use INNER JOIN
, but
multiple-table DELETE
statements
can use other types of join permitted in
SELECT
statements, such as
LEFT JOIN
. For example, to delete rows that
exist in t1
that have no match in
t2
, use a LEFT JOIN
:
DELETE t1 FROM t1 LEFT JOIN t2 ON t1.id=t2.id WHERE t2.id IS NULL;
The syntax permits .*
after each
tbl_name
for compatibility with
Access.
If you use a multiple-table DELETE
statement involving InnoDB
tables for which
there are foreign key constraints, the MySQL optimizer might
process tables in an order that differs from that of their
parent/child relationship. In this case, the statement fails and
rolls back. Instead, you should delete from a single table and
rely on the ON DELETE
capabilities that
InnoDB
provides to cause the other tables to be
modified accordingly.
If you declare an alias for a table, you must use the alias when referring to the table:
DELETE t1 FROM test AS t1, test2 WHERE ...
Table aliases in a multiple-table
DELETE
should be declared only in
the table_references
part of the
statement. Elsewhere, alias references are permitted but not alias
declarations.
Correct:
DELETE a1, a2 FROM t1 AS a1 INNER JOIN t2 AS a2 WHERE a1.id=a2.id;
DELETE FROM a1, a2 USING t1 AS a1 INNER JOIN t2 AS a2 WHERE a1.id=a2.id;
Incorrect:
DELETE t1 AS a1, t2 AS a2 FROM t1 INNER JOIN t2 WHERE a1.id=a2.id;
DELETE FROM t1 AS a1, t2 AS a2 USING t1 INNER JOIN t2 WHERE a1.id=a2.id;
DOexpr
[,expr
] ...
DO
executes the expressions but
does not return any results. In most respects,
DO is shorthand for SELECT expr, ..., but has the
advantage that it is slightly faster when you do not care about
the result.
DO
is useful primarily with
functions that have side effects, such as
RELEASE_LOCK()
.
Example: This SELECT
statement
pauses, but also produces a result set:
mysql> SELECT SLEEP(5);
+----------+
| SLEEP(5) |
+----------+
| 0 |
+----------+
1 row in set (5.02 sec)
DO
, on the other hand, pauses
without producing a result set:
mysql> DO SLEEP(5);
Query OK, 0 rows affected (4.99 sec)
This could be useful, for example, in a stored function or trigger, which prohibit statements that produce result sets.
DO
only executes expressions. It
cannot be used in all cases where SELECT
can be
used. For example, DO id FROM t1
is invalid
because it references a table.
As of MySQL 5.7.8, DO
statement
errors that previously were converted to warnings are returned as
errors.
HANDLER tbl_name OPEN [ [AS] alias]

HANDLER tbl_name READ index_name { = | <= | >= | < | > } (value1,value2,...)
    [ WHERE where_condition ] [LIMIT ... ]
HANDLER tbl_name READ index_name { FIRST | NEXT | PREV | LAST }
    [ WHERE where_condition ] [LIMIT ... ]
HANDLER tbl_name READ { FIRST | NEXT }
    [ WHERE where_condition ] [LIMIT ... ]

HANDLER tbl_name CLOSE
The HANDLER
statement provides direct access to
table storage engine interfaces. It is available for
InnoDB
and MyISAM
tables.
The HANDLER ... OPEN
statement opens a table,
making it accessible using subsequent HANDLER ...
READ
statements. This table object is not shared by
other sessions and is not closed until the session calls
HANDLER ... CLOSE
or the session terminates.
If you open the table using an alias, further references to the
open table with other HANDLER
statements must
use the alias rather than the table name. If you do not use an
alias, but open the table using a table name qualified by the
database name, further references must use the unqualified table
name. For example, for a table opened using
mydb.mytable
, further references must use
mytable
.
The first HANDLER ... READ
syntax fetches a row
where the index specified satisfies the given values and the
WHERE
condition is met. If you have a
multiple-column index, specify the index column values as a
comma-separated list. Either specify values for all the columns in
the index, or specify values for a leftmost prefix of the index
columns. Suppose that an index my_idx
includes
three columns named col_a
,
col_b
, and col_c
, in that
order. The HANDLER
statement can specify values
for all three columns in the index, or for the columns in a
leftmost prefix. For example:
HANDLER ... READ my_idx = (col_a_val,col_b_val,col_c_val) ...
HANDLER ... READ my_idx = (col_a_val,col_b_val) ...
HANDLER ... READ my_idx = (col_a_val) ...
To employ the HANDLER
interface to refer to a
table's PRIMARY KEY
, use the quoted identifier
`PRIMARY`
:
HANDLER tbl_name READ `PRIMARY` ...
The second HANDLER ... READ
syntax fetches a
row from the table in index order that matches the
WHERE
condition.
The third HANDLER ... READ
syntax fetches a row
from the table in natural row order that matches the
WHERE
condition. It is faster than HANDLER tbl_name READ
index_name when a full table scan is desired. Natural
row order is the order in which rows are stored in a
MyISAM table data file. This
statement works for InnoDB
tables as well, but
there is no such concept because there is no separate data file.
Without a LIMIT
clause, all forms of
HANDLER ... READ
fetch a single row if one is
available. To return a specific number of rows, include a
LIMIT
clause. It has the same syntax as for the
SELECT
statement. See
Section 13.2.9, “SELECT Syntax”.
HANDLER ... CLOSE
closes a table that was
opened with HANDLER ... OPEN
.
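The following sketch (the table name is hypothetical) opens a table under an alias, reads a few rows by way of its primary key, and closes the handler:

HANDLER t1 OPEN AS h1;
HANDLER h1 READ `PRIMARY` FIRST;
HANDLER h1 READ `PRIMARY` NEXT LIMIT 3;
HANDLER h1 CLOSE;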
There are several reasons to use the HANDLER
interface instead of normal SELECT
statements:
HANDLER
is faster than
SELECT
:
A designated storage engine handler object is allocated
for the HANDLER ... OPEN
. The object is
reused for subsequent HANDLER
statements for that table; it need not be reinitialized
for each one.
There is less parsing involved.
There is no optimizer or query-checking overhead.
The handler interface does not have to provide a
consistent look of the data (for example,
dirty reads are
permitted), so the storage engine can use optimizations
that SELECT
does not
normally permit.
HANDLER
makes it easier to port to MySQL
applications that use a low-level ISAM
-like
interface. (See Section 14.20, “InnoDB memcached Plugin” for an
alternative way to adapt applications that use the key-value
store paradigm.)
HANDLER
enables you to traverse a database
in a manner that is difficult (or even impossible) to
accomplish with SELECT
. The
HANDLER
interface is a more natural way to
look at data when working with applications that provide an
interactive user interface to the database.
HANDLER
is a somewhat low-level statement. For
example, it does not provide consistency. That is,
HANDLER ... OPEN
does not
take a snapshot of the table, and does not
lock the table. This means that after a HANDLER ...
OPEN
statement is issued, table data can be modified (by
the current session or other sessions) and these modifications
might be only partially visible to HANDLER ...
NEXT
or HANDLER ... PREV
scans.
An open handler can be closed and marked for reopen, in which case the handler loses its position in the table. This occurs when both of the following circumstances are true:
Any session executes
FLUSH TABLES
or DDL statements on the handler's table.
The session in which the handler is open executes
non-HANDLER
statements that use tables.
TRUNCATE TABLE
for a table closes
all handlers for the table that were opened with
HANDLER OPEN
.
If a table is flushed with FLUSH TABLES tbl_name WITH READ LOCK
and was opened with HANDLER, the handler is implicitly
flushed and loses its position.
In previous versions of MySQL, HANDLER
was not
supported with partitioned tables. This limitation is removed
beginning with MySQL 5.7.1.
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
    [INTO] tbl_name
    [PARTITION (partition_name,...)]
    [(col_name,...)]
    {VALUES | VALUE} ({expr | DEFAULT},...),(...),...
    [ ON DUPLICATE KEY UPDATE
      col_name=expr [, col_name=expr] ... ]
Or:
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
    [INTO] tbl_name
    [PARTITION (partition_name,...)]
    SET col_name={expr | DEFAULT}, ...
    [ ON DUPLICATE KEY UPDATE
      col_name=expr [, col_name=expr] ... ]
Or:
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
    [INTO] tbl_name
    [PARTITION (partition_name,...)]
    [(col_name,...)]
    SELECT ...
    [ ON DUPLICATE KEY UPDATE
      col_name=expr [, col_name=expr] ... ]
INSERT
inserts new rows into an
existing table. The INSERT
... VALUES
and
INSERT ... SET
forms of the statement insert rows based on explicitly specified
values. The INSERT
... SELECT
form inserts rows selected from another table
or tables. INSERT
... SELECT
is discussed further in
Section 13.2.5.1, “INSERT ... SELECT Syntax”.
When inserting into a partitioned table, you can control which
partitions and subpartitions accept new rows. The
PARTITION
option takes a comma-separated list
of the names of one or more partitions or subpartitions (or both)
of the table. If any of the rows to be inserted by a given
INSERT
statement do not match one of the
partitions listed, the INSERT
statement fails
with the error Found a row not matching the given
partition set. See
Section 22.5, “Partition Selection”, for more information and
examples.
In MySQL 5.7, the DELAYED
keyword
is accepted but ignored by the server. See
Section 13.2.5.2, “INSERT DELAYED Syntax”, for the reasons for this.
You can use REPLACE
instead of
INSERT
to overwrite old rows.
REPLACE
is the counterpart to
INSERT IGNORE
in
the treatment of new rows that contain unique key values that
duplicate old rows: The new rows are used to replace the old rows
rather than being discarded. See Section 13.2.8, “REPLACE Syntax”.
tbl_name
is the table into which rows
should be inserted. The columns for which the statement provides
values can be specified as follows:
You can provide a comma-separated list of column names
following the table name. In this case, a value for each named
column must be provided by the VALUES
list
or the SELECT
statement.
If you do not specify a list of column names for
INSERT ...
VALUES
or
INSERT ...
SELECT
, values for every column in the table must be
provided by the VALUES
list or the
SELECT
statement. If you do not
know the order of the columns in the table, use
DESCRIBE tbl_name to find out.
The SET
clause indicates the column names
explicitly.
Column values can be given in several ways:
If you are not running in strict SQL mode, any column not explicitly given a value is set to its default (explicit or implicit) value. For example, if you specify a column list that does not name all the columns in the table, unnamed columns are set to their default values. Default value assignment is described in Section 11.7, “Data Type Default Values”. See also Section 1.8.3.3, “Constraints on Invalid Data”.
If you want an INSERT
statement
to generate an error unless you explicitly specify values for
all columns that do not have a default value, you should use
strict mode. See Section 5.1.8, “Server SQL Modes”.
Use the keyword DEFAULT
to set a column
explicitly to its default value. This makes it easier to write
INSERT
statements that assign
values to all but a few columns, because it enables you to
avoid writing an incomplete VALUES
list
that does not include a value for each column in the table.
Otherwise, you would have to write out the list of column
names corresponding to each value in the
VALUES
list.
You can also use DEFAULT(col_name) as a more general form that can
be used in expressions to produce a given column's default value.
If both the column list and the VALUES
list
are empty, INSERT
creates a row
with each column set to its default value:
INSERT INTO tbl_name () VALUES();
In strict mode, an error occurs if any column doesn't have a default value. Otherwise, MySQL uses the implicit default value for any column that does not have an explicitly defined default.
You can specify an expression expr
to provide a column value. This might involve type conversion
if the type of the expression does not match the type of the
column, and conversion of a given value can result in
different inserted values depending on the data type. For
example, inserting the string '1999.0e-2'
into an INT
,
FLOAT
,
DECIMAL(10,6)
, or
YEAR
column results in the
values 1999
, 19.9921
,
19.992100
, and 1999
being inserted, respectively. The reason the value stored in
the INT
and
YEAR
columns is
1999
is that the string-to-integer
conversion looks only at as much of the initial part of the
string as may be considered a valid integer or year. For the
floating-point and fixed-point columns, the
string-to-floating-point conversion considers the entire
string a valid floating-point value.
An expression expr
can refer to any
column that was set earlier in a value list. For example, you
can do this because the value for col2
refers to col1
, which has previously been
assigned:
INSERT INTO tbl_name (col1,col2) VALUES(15,col1*2);
But the following is not legal, because the value for
col1
refers to col2
,
which is assigned after col1
:
INSERT INTO tbl_name (col1,col2) VALUES(col2*2,15);
One exception involves columns that contain
AUTO_INCREMENT
values. Because the
AUTO_INCREMENT
value is generated after
other value assignments, any reference to an
AUTO_INCREMENT
column in the assignment
returns a 0
.
INSERT
statements that use
VALUES
syntax can insert multiple rows. To do
this, include multiple lists of column values, each enclosed
within parentheses and separated by commas. Example:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
The values list for each row must be enclosed within parentheses. The following statement is illegal because the number of values in the list does not match the number of column names:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3,4,5,6,7,8,9);
VALUE
is a synonym for
VALUES
in this context. Neither implies
anything about the number of values lists, and either may be used
whether there is a single values list or multiple lists.
The affected-rows value for an
INSERT
can be obtained using the
ROW_COUNT()
function (see
Section 12.14, “Information Functions”), or the
mysql_affected_rows()
C API
function (see Section 27.8.7.1, “mysql_affected_rows()”).
If you use an INSERT ...
VALUES
statement with multiple value lists or
INSERT ...
SELECT
, the statement returns an information string in
this format:
Records: 100 Duplicates: 0 Warnings: 0
Records
indicates the number of rows processed
by the statement. (This is not necessarily the number of rows
actually inserted because Duplicates
can be
nonzero.) Duplicates
indicates the number of
rows that could not be inserted because they would duplicate some
existing unique index value. Warnings
indicates
the number of attempts to insert column values that were
problematic in some way. Warnings can occur under any of the
following conditions:
Inserting NULL
into a column that has been
declared NOT NULL
. For multiple-row
INSERT
statements or
INSERT INTO ...
SELECT
statements, the column is set to the implicit
default value for the column data type. This is
0
for numeric types, the empty string
(''
) for string types, and the
“zero” value for date and time types.
INSERT INTO ...
SELECT
statements are handled the same way as
multiple-row inserts because the server does not examine the
result set from the SELECT
to
see whether it returns a single row. (For a single-row
INSERT
, no warning occurs when
NULL
is inserted into a NOT
NULL
column. Instead, the statement fails with an
error.)
Setting a numeric column to a value that lies outside the column's range. The value is clipped to the closest endpoint of the range.
Assigning a value such as '10.34 a'
to a
numeric column. The trailing nonnumeric text is stripped off
and the remaining numeric part is inserted. If the string
value has no leading numeric part, the column is set to
0
.
Inserting a string into a string column
(CHAR
,
VARCHAR
,
TEXT
, or
BLOB
) that exceeds the column's
maximum length. The value is truncated to the column's maximum
length.
Inserting a value into a date or time column that is illegal for the data type. The column is set to the appropriate zero value for the type.
If a generated column is inserted into explicitly, the only
permitted value is DEFAULT
. For information
about generated columns, see
Section 13.1.18.8, “CREATE TABLE and Generated Columns”.
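As an illustration, a minimal sketch (the table gc_demo is hypothetical):
CREATE TABLE gc_demo (a INT, b INT AS (a * 2));
INSERT INTO gc_demo (a, b) VALUES (5, DEFAULT);  -- permitted
-- INSERT INTO gc_demo (a, b) VALUES (5, 10);    -- rejected: b is a generated column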
If you are using the C API, the information string can be obtained
by invoking the mysql_info()
function. See Section 27.8.7.36, “mysql_info()”.
If INSERT
inserts a row into a
table that has an AUTO_INCREMENT
column, you
can find the value used for that column by using the SQL
LAST_INSERT_ID()
function. From
within the C API, use the
mysql_insert_id()
function.
However, you should note that the two functions do not always
behave identically. The behavior of
INSERT
statements with respect to
AUTO_INCREMENT
columns is discussed further in
Section 12.14, “Information Functions”, and
Section 27.8.7.38, “mysql_insert_id()”.
The INSERT
statement supports the
following modifiers:
INSERT DELAYED
was deprecated
in MySQL 5.6, and is scheduled for eventual
removal. In MySQL 5.7, the
DELAYED
modifier is accepted but ignored.
Use INSERT
(without
DELAYED
) instead. See
Section 13.2.5.2, “INSERT DELAYED Syntax”.
If you use the LOW_PRIORITY
modifier,
execution of the INSERT
is
delayed until no other clients are reading from the table.
This includes other clients that began reading while existing
clients are reading, and while the INSERT
LOW_PRIORITY
statement is waiting. It is possible,
therefore, for a client that issues an INSERT
LOW_PRIORITY
statement to wait for a very long time.
LOW_PRIORITY
should normally not be used
with MyISAM
tables because doing so
disables concurrent inserts. See
Section 8.11.3, “Concurrent Inserts”.
If you specify HIGH_PRIORITY
, it overrides
the effect of the
--low-priority-updates
option
if the server was started with that option. It also causes
concurrent inserts not to be used. See
Section 8.11.3, “Concurrent Inserts”.
LOW_PRIORITY
and
HIGH_PRIORITY
affect only storage engines
that use only table-level locking (such as
MyISAM
, MEMORY
, and
MERGE
).
If you use the IGNORE
modifier, errors that
occur while executing the
INSERT
statement are ignored.
For example, without IGNORE
, a row that
duplicates an existing UNIQUE
index or
PRIMARY KEY
value in the table causes a
duplicate-key error and the statement is aborted. With
IGNORE
, the row is discarded and no error
occurs. Ignored errors generate warnings instead.
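For example, a minimal sketch, assuming a hypothetical table t with a UNIQUE index on column a and an existing row with a = 1:
INSERT IGNORE INTO t (a, b) VALUES (1, 'duplicate'), (2, 'new');
-- The first row is discarded with a warning; the row (2, 'new') is inserted.
SHOW WARNINGS;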
IGNORE
has a similar effect on inserts into
partitioned tables where no partition matching a given value
is found. Without IGNORE
, such
INSERT
statements are aborted
with an error; however, when
INSERT
IGNORE
is used, the insert operation fails silently
for the row containing the unmatched value, but any rows that
are matched are inserted. For an example, see
Section 22.2.2, “LIST Partitioning”.
Data conversions that would trigger errors abort the statement
if IGNORE
is not specified. With
IGNORE
, invalid values are adjusted to the
closest values and inserted; warnings are produced but the
statement does not abort. You can determine with the
mysql_info()
C API function
how many rows were actually inserted into the table.
For more information, see Comparison of the IGNORE Keyword and Strict SQL Mode.
If you specify ON DUPLICATE KEY UPDATE
, and
a row is inserted that would cause a duplicate value in a
UNIQUE
index or PRIMARY
KEY
, an UPDATE
of the
old row is performed. The affected-rows value per row is 1 if
the row is inserted as a new row, 2 if an existing row is
updated, and 0 if an existing row is set to its current
values. If you specify the
CLIENT_FOUND_ROWS
flag to
mysql_real_connect()
when
connecting to mysqld, the affected-rows
value is 1 (not 0) if an existing row is set to its current
values. See Section 13.2.5.3, “INSERT ... ON DUPLICATE KEY UPDATE Syntax”.
Inserting into a table requires the
INSERT
privilege for the table. If
the ON DUPLICATE KEY UPDATE
clause is used and
a duplicate key causes an UPDATE
to
be performed instead, the statement requires the
UPDATE
privilege for the columns to
be updated. For columns that are read but not modified you need
only the SELECT
privilege (such as
for a column referenced only on the right hand side of a col_name=expr assignment in an ON DUPLICATE KEY UPDATE
clause).
In MySQL 5.7, an INSERT
statement
affecting a partitioned table using a storage engine such as
MyISAM
that employs table-level locks
locks only those partitions into which rows are actually inserted.
(For storage engines such as InnoDB
that employ row-level locking, no locking of partitions takes
place.) For more information, see
Section 22.6.4, “Partitioning and Locking”.
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
    [INTO] tbl_name
    [PARTITION (partition_name, ...)]
    [(col_name, ...)]
    SELECT ...
    [ON DUPLICATE KEY UPDATE col_name=expr, ...]
With INSERT ...
SELECT
, you can quickly insert many rows into a table
from one or many tables. For example:
INSERT INTO tbl_temp2 (fld_id) SELECT tbl_temp1.fld_order_id FROM tbl_temp1 WHERE tbl_temp1.fld_order_id > 100;
The following conditions hold for INSERT ... SELECT statements:
Specify IGNORE
to ignore rows that would
cause duplicate-key violations.
The target table of the
INSERT
statement may appear
in the FROM
clause of the
SELECT
part of the query.
(This was not possible in some older versions of MySQL.)
However, you cannot insert into a table and select from the
same table in a subquery.
When selecting from and inserting into a table at the same
time, MySQL creates a temporary table to hold the rows from
the SELECT
and then inserts
those rows into the target table. However, it remains true
that you cannot use INSERT INTO t ... SELECT ...
FROM t
when t
is a
TEMPORARY
table, because
TEMPORARY
tables cannot be referred to
twice in the same statement (see
Section B.5.6.2, “TEMPORARY Table Problems”).
AUTO_INCREMENT
columns work as usual.
To ensure that the binary log can be used to re-create the
original tables, MySQL does not permit concurrent inserts
for INSERT
... SELECT
statements (see
Section 8.11.3, “Concurrent Inserts”).
To avoid ambiguous column reference problems when the
SELECT
and the
INSERT
refer to the same
table, provide a unique alias for each table used in the
SELECT
part, and qualify
column names in that part with the appropriate alias.
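For example, a sketch of the aliasing approach (table and column names are hypothetical):
-- Qualifying columns with the alias src keeps references unambiguous when
-- the SELECT reads from the same table that the INSERT targets.
INSERT INTO t (a, b)
SELECT src.a, src.b + 1
FROM t AS src
WHERE src.a > 10;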
You can explicitly select which partitions or subpartitions (or
both) of the source or target table (or both) are to be used
with a PARTITION
option following the name of
the table. When PARTITION
is used with the
name of the source table in the
SELECT
portion of the statement,
rows are selected only from the partitions or subpartitions
named in its partition list. When PARTITION
is used with the name of the target table for the
INSERT
portion of the statement,
then it must be possible to insert all rows selected into the
partitions or subpartitions named in the partition list
following the option, else the INSERT ...
SELECT
statement fails. For more information and
examples, see Section 22.5, “Partition Selection”.
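A brief sketch (the table and partition names are hypothetical):
-- Read rows only from partition p0 of the source table; every selected row
-- must fit into partition p1 of the target table, or the statement fails.
INSERT INTO target_tbl PARTITION (p1) (id, val)
SELECT id, val FROM source_tbl PARTITION (p0);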
In the values part of ON DUPLICATE KEY
UPDATE
, you can refer to columns in other tables, as
long as you do not use GROUP BY
in the
SELECT
part. One side effect is
that you must qualify nonunique column names in the values part.
The order in which rows are returned by a
SELECT
statement with no
ORDER BY
clause is not determined. This means
that, when using replication, there is no guarantee that such a
SELECT
returns rows in the same order on the
master and the slave; this can lead to inconsistencies between
them. To prevent this from occurring, you should always write
INSERT ... SELECT
statements that are to be
replicated as INSERT ... SELECT ... ORDER BY
. The choice of column does not matter as long as the same order for returning the rows is enforced on both the master
and the slave. See also
Section 16.4.1.17, “Replication and LIMIT”.
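For example, a minimal sketch (the tables dst and src are hypothetical):
-- The ORDER BY clause makes the row order deterministic on both
-- the master and the slave.
INSERT INTO dst (id, val)
SELECT id, val FROM src
ORDER BY id;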
Due to this issue,
INSERT ...
SELECT ON DUPLICATE KEY UPDATE
and
INSERT IGNORE ...
SELECT
statements are flagged as unsafe for
statement-based replication. With this change, such statements
produce a warning in the log when using statement-based mode and
are logged using the row-based format when using
MIXED
mode. (Bug #11758262, Bug #50439)
See also Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”.
In MySQL 5.7, an INSERT ... SELECT statement that acts on partitioned tables using a storage engine such as MyISAM that employs table-level locks locks all partitions of the target table; however, only
those partitions that are actually read from the source table
are locked. (This does not occur with tables using storage
engines such as InnoDB
that employ
row-level locking.) See
Section 22.6.4, “Partitioning and Locking”, for more
information.
INSERT DELAYED ...
The DELAYED
option for the
INSERT
statement is a MySQL
extension to standard SQL. In previous versions of MySQL, it could be used for certain kinds of tables (such as
MyISAM
), such that when a client uses
INSERT DELAYED
, it gets an okay
from the server at once, and the row is queued to be inserted
when the table is not in use by any other thread.
DELAYED
inserts and replaces were deprecated
in MySQL 5.6.6. In MySQL 5.7,
DELAYED
is not supported. The server
recognizes but ignores the DELAYED
keyword,
handles the insert as a nondelayed insert, and generates an
ER_WARN_LEGACY_SYNTAX_CONVERTED
warning
(“INSERT DELAYED is no longer supported. The statement was
converted to INSERT”). The DELAYED
keyword is scheduled for removal in a future release.
If you specify ON DUPLICATE KEY UPDATE
, and a
row is inserted that would cause a duplicate value in a
UNIQUE
index or PRIMARY
KEY
, MySQL performs an
UPDATE
of the old row. For
example, if column a
is declared as
UNIQUE
and contains the value
1
, the following two statements have similar
effect:
INSERT INTO table (a,b,c) VALUES (1,2,3)
  ON DUPLICATE KEY UPDATE c=c+1;

UPDATE table SET c=c+1 WHERE a=1;
(The effects are not identical for an InnoDB
table where a
is an auto-increment column.
With an auto-increment column, an INSERT
statement increases the auto-increment value but
UPDATE
does not.)
The ON DUPLICATE KEY UPDATE
clause can
contain multiple column assignments, separated by commas.
With ON DUPLICATE KEY UPDATE
, the
affected-rows value per row is 1 if the row is inserted as a new
row, 2 if an existing row is updated, and 0 if an existing row
is set to its current values. If you specify the
CLIENT_FOUND_ROWS
flag to
mysql_real_connect()
when
connecting to mysqld, the affected-rows value
is 1 (not 0) if an existing row is set to its current values.
If column b
is also unique, the
INSERT
is equivalent to this
UPDATE
statement instead:
UPDATE table SET c=c+1 WHERE a=1 OR b=2 LIMIT 1;
If a=1 OR b=2
matches several rows, only
one row is updated. In general, you should
try to avoid using an ON DUPLICATE KEY UPDATE
clause on tables with multiple unique indexes.
You can use the VALUES(col_name) function in the UPDATE clause to refer to column values from the INSERT portion of the INSERT ... ON DUPLICATE KEY UPDATE statement. In other words, VALUES(col_name) in the ON DUPLICATE KEY UPDATE clause refers to the value of col_name that would be inserted, had no duplicate-key conflict occurred. This function is especially useful in multiple-row inserts. The VALUES() function is meaningful only in INSERT ... UPDATE statements and returns NULL otherwise. Example:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6) ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
That statement is identical to the following two statements:
INSERT INTO table (a,b,c) VALUES (1,2,3)
  ON DUPLICATE KEY UPDATE c=3;
INSERT INTO table (a,b,c) VALUES (4,5,6)
  ON DUPLICATE KEY UPDATE c=9;
If a table contains an AUTO_INCREMENT
column
and INSERT
... ON DUPLICATE KEY UPDATE
inserts or updates a row,
the LAST_INSERT_ID()
function
returns the AUTO_INCREMENT
value.
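A short sketch, assuming a new row is inserted (the table and its columns are hypothetical):
-- id is an AUTO_INCREMENT primary key; u has a UNIQUE index.
INSERT INTO t (u, cnt) VALUES ('new-key', 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;
-- Reports the AUTO_INCREMENT value for the affected row.
SELECT LAST_INSERT_ID();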
The DELAYED
option is ignored when you use
ON DUPLICATE KEY UPDATE
.
Because the results of
INSERT ...
SELECT
statements depend on the ordering of rows from
the SELECT
and this order cannot
always be guaranteed, it is possible when logging
INSERT ...
SELECT ON DUPLICATE KEY UPDATE
statements for the
master and the slave to diverge. Thus,
INSERT ...
SELECT ON DUPLICATE KEY UPDATE
statements are flagged
as unsafe for statement-based replication. With this change,
such statements produce a warning in the log when using
statement-based mode and are logged using the row-based format
when using MIXED
mode. In addition, an
INSERT ...
ON DUPLICATE KEY UPDATE
statement against a table
having more than one unique or primary key is also marked as
unsafe. (Bug #11765650, Bug #58637) See also
Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based
Replication”.
In MySQL 5.7, an INSERT ... ON DUPLICATE
KEY UPDATE
on a partitioned table using a storage
engine such as MyISAM
that employs
table-level locks locks any partitions of the table in which a
partitioning key column is updated. (This does not occur with
tables using storage engines such as
InnoDB
that employ row-level
locking.) See
Section 22.6.4, “Partitioning and Locking”, for more
information.
LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name, ...)]
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number {LINES | ROWS}]
    [(col_name_or_user_var, ...)]
    [SET col_name=expr, ...]
The LOAD DATA
INFILE
statement reads rows from a text file into a
table at a very high speed.
LOAD DATA
INFILE
is the complement of
SELECT ... INTO
OUTFILE
. (See Section 13.2.9.1, “SELECT ... INTO Syntax”.) To write
data from a table to a file, use
SELECT ... INTO
OUTFILE
. To read the file back into a table, use
LOAD DATA
INFILE
. The syntax of the FIELDS
and
LINES
clauses is the same for both statements.
Both clauses are optional, but FIELDS
must
precede LINES
if both are specified.
You can also load data files by using the
mysqlimport utility; it operates by sending a
LOAD DATA
INFILE
statement to the server. The
--local
option causes
mysqlimport to read data files from the client
host. You can specify the
--compress
option to get
better performance over slow networks if the client and server
support the compressed protocol. See
Section 4.5.5, “mysqlimport — A Data Import Program”.
For more information about the efficiency of
INSERT
versus
LOAD DATA
INFILE
and speeding up
LOAD DATA
INFILE
, see Section 8.2.4.1, “Optimizing INSERT Statements”.
The file name must be given as a literal string. On Windows,
specify backslashes in path names as forward slashes or doubled
backslashes. The
character_set_filesystem
system
variable controls the interpretation of the file name.
LOAD DATA
supports explicit partition selection
using the PARTITION
option with a
comma-separated list of one or more names of partitions,
subpartitions, or both. When this option is used, if any rows from
the file cannot be inserted into any of the partitions or
subpartitions named in the list, the statement fails with the
error Found a row not matching the given partition
set. For more information, see
Section 22.5, “Partition Selection”.
For partitioned tables using storage engines that employ table
locks, such as MyISAM
, LOAD
DATA
cannot prune any partition locks. This does not
apply to tables using storage engines which employ row-level
locking, such as InnoDB
. For more
information, see
Section 22.6.4, “Partitioning and Locking”.
The server uses the character set indicated by the
character_set_database
system
variable to interpret the information in the file.
SET NAMES
and the setting of
character_set_client
do not
affect interpretation of input. If the contents of the input file
use a character set that differs from the default, it is usually
preferable to specify the character set of the file by using the
CHARACTER SET
clause. A character set of
binary
specifies “no conversion.”
LOAD DATA
INFILE
interprets all fields in the file as having the
same character set, regardless of the data types of the columns
into which field values are loaded. For proper interpretation of
file contents, you must ensure that it was written with the
correct character set. For example, if you write a data file with
mysqldump -T or by issuing a
SELECT ... INTO
OUTFILE
statement in mysql, be sure
to use a --default-character-set
option so that
output is written in the character set to be used when the file is
loaded with LOAD DATA
INFILE
.
It is not possible to load data files that use the
ucs2
, utf16
,
utf16le
, or utf32
character set.
If you use LOW_PRIORITY
, execution of the
LOAD DATA
statement is delayed
until no other clients are reading from the table. This affects
only storage engines that use only table-level locking (such as
MyISAM
, MEMORY
, and
MERGE
).
If you specify CONCURRENT
with a
MyISAM
table that satisfies the condition for
concurrent inserts (that is, it contains no free blocks in the
middle), other threads can retrieve data from the table while
LOAD DATA
is executing. This option
affects the performance of LOAD
DATA
a bit, even if no other thread is using the table
at the same time.
With row-based replication, CONCURRENT
is
replicated regardless of MySQL version. With statement-based
replication CONCURRENT
is not replicated prior
to MySQL 5.5.1 (see Bug #34628). For more information, see
Section 16.4.1.18, “Replication and LOAD DATA INFILE”.
The LOCAL keyword affects the expected location of the file and error handling, as described later.
LOCAL
works only if your server and your client
both have been configured to permit it. For example, if
mysqld was started with the
local_infile
system variable
disabled, LOCAL
does not work. See
Section 6.1.6, “Security Issues with LOAD DATA LOCAL”.
The LOCAL
keyword affects where the file is
expected to be found:
If LOCAL
is specified, the file is read by
the client program on the client host and sent to the server.
The file can be given as a full path name to specify its exact
location. If given as a relative path name, the name is
interpreted relative to the directory in which the client
program was started.
When using LOCAL
with
LOAD DATA
, a copy of the file
is created in the server's temporary directory. This is
not the directory determined by the value
of tmpdir
or
slave_load_tmpdir
, but rather
the operating system's temporary directory, and is not
configurable in the MySQL Server. (Typically the system
temporary directory is /tmp
on Linux
systems and C:\WINDOWS\TEMP
on Windows.)
Lack of sufficient space for the copy in this directory can
cause the LOAD DATA
LOCAL
statement to fail.
If LOCAL
is not specified, the file must be
located on the server host and is read directly by the server.
The server uses the following rules to locate the file:
If the file name is an absolute path name, the server uses it as given.
If the file name is a relative path name with one or more leading components, the server searches for the file relative to the server's data directory.
If a file name with no leading components is given, the server looks for the file in the database directory of the default database.
In the non-LOCAL
case, these rules mean that a
file named as ./myfile.txt
is read from the
server's data directory, whereas the file named as
myfile.txt
is read from the database
directory of the default database. For example, if
db1
is the default database, the following
LOAD DATA
statement reads the file
data.txt
from the database directory for
db1
, even though the statement explicitly loads
the file into a table in the db2
database:
LOAD DATA INFILE 'data.txt' INTO TABLE db2.my_table;
Non-LOCAL
load operations read text files
located on the server. For security reasons, such operations
require that you have the FILE
privilege. See Section 6.2.1, “Privileges Provided by MySQL”. Also,
non-LOCAL
load operations are subject to the
secure_file_priv
system variable
setting. If the variable value is a nonempty directory name, the
file to be loaded must be located in that directory. If the
variable value is empty (which is insecure), the file need only be
readable by the server.
Using LOCAL
is a bit slower than letting the
server access the files directly, because the contents of the file
must be sent over the connection by the client to the server. On
the other hand, you do not need the
FILE
privilege to load local files.
LOCAL
also affects error handling:
With LOAD DATA
INFILE
, data-interpretation and duplicate-key errors
terminate the operation.
With LOAD DATA
LOCAL INFILE
, data-interpretation and duplicate-key
errors become warnings and the operation continues because the
server has no way to stop transmission of the file in the
middle of the operation. For duplicate-key errors, this is the
same as if IGNORE
is specified.
IGNORE
is explained further later in this
section.
The REPLACE
and IGNORE
keywords control handling of input rows that duplicate existing
rows on unique key values:
If you specify REPLACE, input rows replace existing rows; that is, rows that have the same value for a primary key or unique index as an existing row replace that row. See Section 13.2.8, “REPLACE Syntax”.
If you specify IGNORE
, rows that duplicate
an existing row on a unique key value are discarded. For more
information, see Comparison of the IGNORE Keyword and Strict SQL Mode.
If you do not specify either option, the behavior depends on
whether the LOCAL
keyword is specified.
Without LOCAL
, an error occurs when a
duplicate key value is found, and the rest of the text file is
ignored. With LOCAL
, the default behavior
is the same as if IGNORE
is specified; this
is because the server has no way to stop transmission of the
file in the middle of the operation.
To ignore foreign key constraints during the load operation, issue
a SET foreign_key_checks = 0
statement before
executing LOAD DATA
.
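For example, a minimal sketch (the file and table names are hypothetical):
SET foreign_key_checks = 0;
LOAD DATA INFILE '/tmp/child_rows.txt' INTO TABLE child_table;
-- Re-enable checking afterward.
SET foreign_key_checks = 1;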
If you use LOAD DATA
INFILE
on an empty MyISAM
table, all
nonunique indexes are created in a separate batch (as for
REPAIR TABLE
). Normally, this makes
LOAD DATA
INFILE
much faster when you have many indexes. In some
extreme cases, you can create the indexes even faster by turning
them off with ALTER TABLE ... DISABLE KEYS
before loading the file into the table and using ALTER
TABLE ... ENABLE KEYS
to re-create the indexes after
loading the file. See Section 8.2.4.1, “Optimizing INSERT Statements”.
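A brief sketch of this pattern (the table and file names are hypothetical):
ALTER TABLE big_table DISABLE KEYS;
LOAD DATA INFILE '/tmp/bulk_data.txt' INTO TABLE big_table;
-- Rebuilds the nonunique indexes in a batch after the load.
ALTER TABLE big_table ENABLE KEYS;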
For both the LOAD DATA
INFILE
and
SELECT ... INTO
OUTFILE
statements, the syntax of the
FIELDS
and LINES
clauses is
the same. Both clauses are optional, but FIELDS
must precede LINES
if both are specified.
If you specify a FIELDS
clause, each of its
subclauses (TERMINATED BY
,
[OPTIONALLY] ENCLOSED BY
, and ESCAPED
BY
) is also optional, except that you must specify at
least one of them.
If you specify no FIELDS
or
LINES
clause, the defaults are the same as if
you had written this:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY ''
(Backslash is the MySQL escape character within strings in SQL
statements, so to specify a literal backslash, you must specify
two backslashes for the value to be interpreted as a single
backslash. The escape sequences '\t'
and
'\n'
specify tab and newline characters,
respectively.)
In other words, the defaults cause
LOAD DATA
INFILE
to act as follows when reading input:
Look for line boundaries at newlines.
Do not skip over any line prefix.
Break lines into fields at tabs.
Do not expect fields to be enclosed within any quoting characters.
Interpret characters preceded by the escape character
\
as escape sequences. For example,
\t
, \n
, and
\\
signify tab, newline, and backslash,
respectively. See the discussion of FIELDS ESCAPED
BY
later for the full list of escape sequences.
Conversely, the defaults cause
SELECT ... INTO
OUTFILE
to act as follows when writing output:
Write tabs between fields.
Do not enclose fields within any quoting characters.
Use \
to escape instances of tab, newline,
or \
that occur within field values.
Write newlines at the ends of lines.
If you have generated the text file on a Windows system, you
might have to use LINES TERMINATED BY '\r\n'
to read the file properly, because Windows programs typically
use two characters as a line terminator. Some programs, such as
WordPad, might use \r
as a
line terminator when writing files. To read such files, use
LINES TERMINATED BY '\r'
.
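For example, a minimal sketch (file and table names are hypothetical):
-- File written on Windows, lines ending in \r\n:
LOAD DATA INFILE '/tmp/windows_export.txt' INTO TABLE t
  LINES TERMINATED BY '\r\n';

-- File written by a program that uses bare \r line endings:
LOAD DATA INFILE '/tmp/wordpad_export.txt' INTO TABLE t
  LINES TERMINATED BY '\r';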
If all the lines you want to read in have a common prefix that you want to ignore, you can use LINES STARTING BY 'prefix_string' to skip over the prefix, and anything before it. If a line does not include the prefix, the entire line is skipped. Suppose that you issue the following statement:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
If the data file looks like this:
xxx"abc",1 something xxx"def",2 "ghi",3
The resulting rows will be ("abc",1)
and
("def",2)
. The third row in the file is skipped
because it does not contain the prefix.
The IGNORE number LINES option can be used to ignore lines at the start of the file. For example, you can use IGNORE 1 LINES to skip over an initial header line containing column names:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
When you use SELECT
... INTO OUTFILE
in tandem with
LOAD DATA
INFILE
to write data from a database into a file and
then read the file back into the database later, the field- and
line-handling options for both statements must match. Otherwise,
LOAD DATA
INFILE
will not interpret the contents of the file
properly. Suppose that you use
SELECT ... INTO
OUTFILE
to write a file with fields delimited by commas:
SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ',' FROM table2;
To read the comma-delimited file back in, the correct statement would be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
If instead you tried to read in the file with the statement shown
following, it wouldn't work because it instructs
LOAD DATA
INFILE
to look for tabs between fields:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY '\t';
The likely result is that each input line would be interpreted as a single field.
LOAD DATA
INFILE
can be used to read files obtained from external
sources. For example, many programs can export data in
comma-separated values (CSV) format, such that lines have fields
separated by commas and enclosed within double quotation marks,
with an initial line of column names. If the lines in such a file
are terminated by carriage return/newline pairs, the statement
shown here illustrates the field- and line-handling options you
would use to load the file:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
If the input values are not necessarily enclosed within quotation
marks, use OPTIONALLY
before the
ENCLOSED BY
keywords.
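For example, a sketch for a typical CSV file with optionally quoted fields (the file and table names are hypothetical):
LOAD DATA INFILE '/tmp/data.csv' INTO TABLE t
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\r\n'
  IGNORE 1 LINES;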
Any of the field- or line-handling options can specify an empty
string (''
). If not empty, the FIELDS
[OPTIONALLY] ENCLOSED BY
and FIELDS ESCAPED
BY
values must be a single character. The
FIELDS TERMINATED BY
, LINES STARTING
BY
, and LINES TERMINATED BY
values
can be more than one character. For example, to write lines that
are terminated by carriage return/linefeed pairs, or to read a
file containing such lines, specify a LINES TERMINATED BY
'\r\n'
clause.
To read a file containing jokes that are separated by lines
consisting of %%, you can do this:
CREATE TABLE jokes
    (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
     joke TEXT NOT NULL);
LOAD DATA INFILE '/tmp/jokes.txt' INTO TABLE jokes
    FIELDS TERMINATED BY ''
    LINES TERMINATED BY '\n%%\n' (joke);
FIELDS [OPTIONALLY] ENCLOSED BY
controls
quoting of fields. For output
(SELECT ... INTO
OUTFILE
), if you omit the word
OPTIONALLY
, all fields are enclosed by the
ENCLOSED BY
character. An example of such
output (using a comma as the field delimiter) is shown here:
"1","a string","100.20" "2","a string containing a , comma","102.20" "3","a string containing a \" quote","102.20" "4","a string containing a \", quote and comma","102.20"
If you specify OPTIONALLY
, the
ENCLOSED BY
character is used only to enclose
values from columns that have a string data type (such as
CHAR
,
BINARY
,
TEXT
, or
ENUM
):
1,"a string",100.20 2,"a string containing a , comma",102.20 3,"a string containing a \" quote",102.20 4,"a string containing a \", quote and comma",102.20
Occurrences of the ENCLOSED BY
character within
a field value are escaped by prefixing them with the
ESCAPED BY
character. Also, if you specify an
empty ESCAPED BY
value, it is possible to
inadvertently generate output that cannot be read properly by
LOAD DATA
INFILE
. For example, the output just shown would appear as follows if the escape character is empty. Observe
that the second field in the fourth line contains a comma
following the quote, which (erroneously) appears to terminate the
field:
1,"a string",100.20 2,"a string containing a , comma",102.20 3,"a string containing a " quote",102.20 4,"a string containing a ", quote and comma",102.20
For input, the ENCLOSED BY
character, if
present, is stripped from the ends of field values. (This is true
regardless of whether OPTIONALLY
is specified;
OPTIONALLY
has no effect on input
interpretation.) Occurrences of the ENCLOSED BY
character preceded by the ESCAPED BY
character
are interpreted as part of the current field value.
If the field begins with the ENCLOSED BY
character, instances of that character are recognized as
terminating a field value only if followed by the field or line
TERMINATED BY
sequence. To avoid ambiguity,
occurrences of the ENCLOSED BY
character within
a field value can be doubled and are interpreted as a single
instance of the character. For example, if ENCLOSED BY
'"'
is specified, quotation marks are handled as shown
here:
"The ""BIG"" boss" -> The "BIG" boss The "BIG" boss -> The "BIG" boss The ""BIG"" boss -> The ""BIG"" boss
FIELDS ESCAPED BY
controls how to read or write
special characters:
For input, if the FIELDS ESCAPED BY
character is not empty, occurrences of that character are
stripped and the following character is taken literally as
part of a field value. Exceptions are certain two-character sequences in which the first character is the escape character.
These sequences are shown in the following table (using
\
for the escape character). The rules for
NULL
handling are described later in this
section.
| Escape Sequence | Character Represented by Sequence |
|---|---|
| \0 | An ASCII NUL (X'00') character |
| \b | A backspace character |
| \n | A newline (linefeed) character |
| \r | A carriage return character |
| \t | A tab character |
| \Z | ASCII 26 (Control+Z) |
| \N | NULL |
For more information about \
-escape syntax,
see Section 9.1.1, “String Literals”.
If the FIELDS ESCAPED BY
character is
empty, escape-sequence interpretation does not occur.
For output, if the FIELDS ESCAPED BY
character is not empty, it is used to prefix the following
characters on output:
The FIELDS ESCAPED BY
character
The FIELDS [OPTIONALLY] ENCLOSED BY
character
The first character of the FIELDS TERMINATED
BY
and LINES TERMINATED BY
values
ASCII 0
(what is actually written
following the escape character is ASCII
0
, not a zero-valued byte)
If the FIELDS ESCAPED BY
character is
empty, no characters are escaped and NULL
is output as NULL
, not
\N
. It is probably not a good idea to
specify an empty escape character, particularly if field
values in your data contain any of the characters in the list
just given.
In certain cases, field- and line-handling options interact:
If LINES TERMINATED BY
is an empty string
and FIELDS TERMINATED BY
is nonempty, lines
are also terminated with FIELDS TERMINATED
BY
.
If the FIELDS TERMINATED BY
and
FIELDS ENCLOSED BY
values are both empty
(''
), a fixed-row (nondelimited) format is
used. With fixed-row format, no delimiters are used between
fields (but you can still have a line terminator). Instead,
column values are read and written using a field width wide
enough to hold all values in the field. For
TINYINT
,
SMALLINT
,
MEDIUMINT
,
INT
, and
BIGINT
, the field widths are 4,
6, 8, 11, and 20, respectively, no matter what the declared
display width is.
LINES TERMINATED BY
is still used to
separate lines. If a line does not contain all fields, the
rest of the columns are set to their default values. If you do
not have a line terminator, you should set this to
''
. In this case, the text file must
contain all fields for each row.
Fixed-row format also affects handling of
NULL
values, as described later.
Fixed-size format does not work if you are using a multibyte character set.
Handling of NULL
values varies according to the
FIELDS
and LINES
options in
use:
For the default FIELDS
and
LINES
values, NULL
is
written as a field value of \N
for output,
and a field value of \N
is read as
NULL
for input (assuming that the
ESCAPED BY
character is
\
).
If FIELDS ENCLOSED BY
is not empty, a field
containing the literal word NULL
as its
value is read as a NULL
value. This differs
from the word NULL
enclosed within
FIELDS ENCLOSED BY
characters, which is
read as the string 'NULL'
.
If FIELDS ESCAPED BY
is empty,
NULL
is written as the word
NULL
.
With fixed-row format (which is used when FIELDS
TERMINATED BY
and FIELDS ENCLOSED
BY
are both empty), NULL
is
written as an empty string. This causes both
NULL
values and empty strings in the table
to be indistinguishable when written to the file because both
are written as empty strings. If you need to be able to tell
the two apart when reading the file back in, you should not
use fixed-row format.
An attempt to load NULL
into a NOT
NULL
column causes assignment of the implicit default
value for the column's data type and a warning, or an error in
strict SQL mode. Implicit default values are discussed in
Section 11.7, “Data Type Default Values”.
Some cases are not supported by
LOAD DATA
INFILE
:
Fixed-size rows (FIELDS TERMINATED BY
and
FIELDS ENCLOSED BY
both empty) and
BLOB
or
TEXT
columns.
If you specify one separator that is the same as or a prefix
of another, LOAD
DATA INFILE
cannot interpret the input properly. For
example, the following FIELDS
clause would
cause problems:
FIELDS TERMINATED BY '"' ENCLOSED BY '"'
If FIELDS ESCAPED BY
is empty, a field
value that contains an occurrence of FIELDS ENCLOSED
BY
or LINES TERMINATED BY
followed by the FIELDS TERMINATED BY
value
causes LOAD DATA
INFILE
to stop reading a field or line too early.
This happens because
LOAD DATA
INFILE
cannot properly determine where the field or
line value ends.
The following example loads all columns of the
persondata
table:
LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;
By default, when no column list is provided at the end of the
LOAD DATA
INFILE
statement, input lines are expected to contain a
field for each table column. If you want to load only some of a
table's columns, specify a column list:
LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata (col1,col2,...);
You must also specify a column list if the order of the fields in the input file differs from the order of the columns in the table. Otherwise, MySQL cannot tell how to match input fields with table columns.
The column list can contain either column names or user variables.
With user variables, the SET
clause enables you
to perform transformations on their values before assigning the
result to columns.
User variables in the SET
clause can be used in
several ways. The following example uses the first input column
directly for the value of t1.column1
, and
assigns the second input column to a user variable that is
subjected to a division operation before being used for the value
of t1.column2
:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, @var1) SET column2 = @var1/100;
The SET
clause can be used to supply values not
derived from the input file. The following statement sets
column3
to the current date and time:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, column2) SET column3 = CURRENT_TIMESTAMP;
You can also discard an input value by assigning it to a user variable and not assigning the variable to a table column:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, @dummy, column2, @dummy, column3);
Use of the column/variable list and SET
clause
is subject to the following restrictions:
Assignments in the SET
clause should have
only column names on the left hand side of assignment
operators.
You can use subqueries in the right hand side of SET assignments. A subquery that returns a value to be assigned to a column may be a scalar subquery only. Also, you cannot use a subquery to select from the table that is being loaded. (A sketch appears after this list.)
Lines ignored by an IGNORE
clause are not
processed for the column/variable list or
SET
clause.
User variables cannot be used when loading data with fixed-row format because user variables do not have a display width.
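As a sketch of the subquery restriction mentioned above (the lookup table and column names are hypothetical):
-- The scalar subquery reads from a different table (lookup), not from the
-- table being loaded, and supplies the value assigned to column2.
LOAD DATA INFILE '/tmp/file.txt' INTO TABLE t1
  (column1, @code)
  SET column2 = (SELECT name FROM lookup WHERE lookup.code = @code);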
When processing an input line, LOAD
DATA
splits it into fields and uses the values according
to the column/variable list and the SET
clause,
if they are present. Then the resulting row is inserted into the
table. If there are BEFORE INSERT
or
AFTER INSERT
triggers for the table, they are
activated before or after inserting the row, respectively.
If an input line has too many fields, the extra fields are ignored and the number of warnings is incremented.
If an input line has too few fields, the table columns for which input fields are missing are set to their default values. Default value assignment is described in Section 11.7, “Data Type Default Values”.
An empty field value is interpreted differently from a missing field:
For string types, the column is set to the empty string.
For numeric types, the column is set to 0
.
For date and time types, the column is set to the appropriate “zero” value for the type. See Section 11.3, “Date and Time Types”.
These are the same values that result if you explicitly assign an empty string to a string, numeric, or date or time type in an INSERT or UPDATE statement.
Treatment of empty or incorrect field values differs from that
just described if the SQL mode is set to a restrictive value. For
example, if sql_mode
is set to
TRADITIONAL
, conversion of an
empty value or a value such as 'x'
for a
numeric column results in an error, not conversion to 0. (With
LOCAL
or IGNORE
, warnings
occur rather than errors, even with a restrictive
sql_mode
value, and the row is
inserted using the same closest-value behavior used for
nonrestrictive SQL modes. This occurs because the server has no
way to stop transmission of the file in the middle of the
operation.)
TIMESTAMP
columns are set to the
current date and time only if there is a NULL
value for the column (that is, \N
) and the
column is not declared to permit NULL
values,
or if the TIMESTAMP
column's
default value is the current timestamp and it is omitted from the
field list when a field list is specified.
LOAD DATA
INFILE
regards all input as strings, so you cannot use
numeric values for ENUM
or
SET
columns the way you can with
INSERT
statements. All
ENUM
and
SET
values must be specified as
strings.
BIT
values cannot be loaded
directly using binary notation (for example,
b'011010'
). To work around this, use the
SET
clause to strip off the leading
b'
and trailing '
and
perform a base-2 to base-10 conversion so that MySQL loads the
values into the BIT
column
properly:
shell> cat /tmp/bit_test.txt
b'10'
b'1111111'
shell> mysql test
mysql> LOAD DATA INFILE '/tmp/bit_test.txt'
    -> INTO TABLE bit_test (@var1)
    -> SET b = CAST(CONV(MID(@var1, 3, LENGTH(@var1)-3), 2, 10) AS UNSIGNED);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> SELECT BIN(b+0) FROM bit_test;
+----------+
| BIN(b+0) |
+----------+
| 10       |
| 1111111  |
+----------+
2 rows in set (0.00 sec)
For BIT
values in
0b
binary notation (for example,
0b011010
), use this SET
clause instead to strip off the leading 0b
:
SET b = CAST(CONV(MID(@var1, 3, LENGTH(@var1)-2), 2, 10) AS UNSIGNED)
On Unix, if you need LOAD DATA
to
read from a pipe, you can use the following technique (the example
loads a listing of the /
directory into the
table db1.t1
):
mkfifo /mysql/data/db1/ls.dat
chmod 666 /mysql/data/db1/ls.dat
find / -ls > /mysql/data/db1/ls.dat &
mysql -e "LOAD DATA INFILE 'ls.dat' INTO TABLE t1" db1
Here you must run the command that generates the data to be loaded and the mysql commands either on separate terminals, or run the data generation process in the background (as shown in the preceding example). If you do not do this, the pipe will block until data is read by the mysql process.
When the LOAD DATA
INFILE
statement finishes, it returns an information
string in the following format:
Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
Warnings occur under the same circumstances as when values are
inserted using the INSERT
statement
(see Section 13.2.5, “INSERT Syntax”), except that
LOAD DATA
INFILE
also generates warnings when there are too few or
too many fields in the input row.
You can use SHOW WARNINGS
to get a
list of the first max_error_count
warnings as information about what went wrong. See
Section 13.7.5.40, “SHOW WARNINGS Syntax”.
If you are using the C API, you can get information about the
statement by calling the
mysql_info()
function. See
Section 27.8.7.36, “mysql_info()”.
LOAD XML [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
    [REPLACE | IGNORE]
    INTO TABLE [db_name.]tbl_name
    [CHARACTER SET charset_name]
    [ROWS IDENTIFIED BY '<tagname>']
    [IGNORE number {LINES | ROWS}]
    [(field_name_or_user_var, ...)]
    [SET col_name=expr, ...]
The LOAD XML
statement reads data
from an XML file into a table. The
file_name
must be given as a literal
string. The tagname
in the optional
ROWS IDENTIFIED BY
clause must also be given as
a literal string, and must be surrounded by angle brackets
(<
and >
).
LOAD XML
acts as the complement of
running the mysql client in XML output mode
(that is, starting the client with the
--xml
option). To write data from a
table to an XML file, you can invoke the mysql
client with the --xml
and
-e
options from
the system shell, as shown here:
shell> mysql --xml -e 'SELECT * FROM mydb.mytable' > file.xml
To read the file back into a table, use
LOAD XML
INFILE
. By default, the <row>
element is considered to be the equivalent of a database table
row; this can be changed using the ROWS IDENTIFIED
BY
clause.
This statement supports three different XML formats:
Column names as attributes and column values as attribute values:
<row column1="value1" column2="value2" .../>
Column names as tags and column values as the content of these tags:
<row>
  <column1>value1</column1>
  <column2>value2</column2>
</row>
Column names are the name
attributes of
<field>
tags, and values are the
contents of these tags:
<row>
  <field name='column1'>value1</field>
  <field name='column2'>value2</field>
</row>
This is the format used by other MySQL tools, such as mysqldump.
All three formats can be used in the same XML file; the import routine automatically detects the format for each row and interprets it correctly. Tags are matched based on the tag or attribute name and the column name.
Prior to MySQL 5.7.9, LOAD XML
did not handle
empty XML elements in the form <element/>
correctly. (Bug #67542, Bug #16171518)
The following clauses work essentially the same way for
LOAD XML
as they do for
LOAD DATA
:
LOW_PRIORITY
or
CONCURRENT
LOCAL
REPLACE
or IGNORE
CHARACTER SET
SET
See Section 13.2.6, “LOAD DATA INFILE Syntax”, for more information about these clauses.
(field_name_or_user_var, ...) is a comma-separated list of one or more XML fields or user variables. The name of a user variable used for this purpose must match the name of a field from the XML file, prefixed with @. You can use field names to select only desired fields. User variables can be employed to store the corresponding field values for subsequent reuse.
The IGNORE number LINES or IGNORE number ROWS clause causes the first number rows in the XML file to be skipped. It is analogous to the LOAD DATA statement's IGNORE ... LINES clause.
Suppose that we have a table named person
,
created as shown here:
USE test;

CREATE TABLE person (
    person_id INT NOT NULL PRIMARY KEY,
    fname VARCHAR(40) NULL,
    lname VARCHAR(40) NULL,
    created TIMESTAMP
);
Suppose further that this table is initially empty.
Now suppose that we have a simple XML file
person.xml
, whose contents are as shown here:
<list>
  <person person_id="1" fname="Kapek" lname="Sainnouine"/>
  <person person_id="2" fname="Sajon" lname="Rondela"/>
  <person person_id="3"><fname>Likame</fname><lname>Örrtmons</lname></person>
  <person person_id="4"><fname>Slar</fname><lname>Manlanth</lname></person>
  <person><field name="person_id">5</field><field name="fname">Stoma</field>
    <field name="lname">Milu</field></person>
  <person><field name="person_id">6</field><field name="fname">Nirtam</field>
    <field name="lname">Sklöd</field></person>
  <person person_id="7"><fname>Sungam</fname><lname>Dulbåd</lname></person>
  <person person_id="8" fname="Sraref" lname="Encmelt"/>
</list>
Each of the permissible XML formats discussed previously is represented in this example file.
To import the data in person.xml
into the
person
table, you can use this statement:
mysql> LOAD XML LOCAL INFILE 'person.xml'
    -> INTO TABLE person
    -> ROWS IDENTIFIED BY '<person>';
Query OK, 8 rows affected (0.00 sec)
Records: 8  Deleted: 0  Skipped: 0  Warnings: 0
Here, we assume that person.xml
is located in
the MySQL data directory. If the file cannot be found, the
following error results:
ERROR 2 (HY000): File '/person.xml' not found (Errcode: 2)
The ROWS IDENTIFIED BY '<person>'
clause
means that each <person>
element in the
XML file is considered equivalent to a row in the table into which
the data is to be imported. In this case, this is the
person
table in the test
database.
As can be seen by the response from the server, 8 rows were
imported into the test.person
table. This can
be verified by a simple SELECT
statement:
mysql> SELECT * FROM person;
+-----------+--------+------------+---------------------+
| person_id | fname | lname | created |
+-----------+--------+------------+---------------------+
| 1 | Kapek | Sainnouine | 2007-07-13 16:18:47 |
| 2 | Sajon | Rondela | 2007-07-13 16:18:47 |
| 3 | Likame | Örrtmons | 2007-07-13 16:18:47 |
| 4 | Slar | Manlanth | 2007-07-13 16:18:47 |
| 5 | Stoma | Nilu | 2007-07-13 16:18:47 |
| 6 | Nirtam | Sklöd | 2007-07-13 16:18:47 |
| 7 | Sungam | Dulbåd | 2007-07-13 16:18:47 |
| 8 | Sreraf | Encmelt | 2007-07-13 16:18:47 |
+-----------+--------+------------+---------------------+
8 rows in set (0.00 sec)
This shows, as stated earlier in this section, that any or all of
the 3 permitted XML formats may appear in a single file and be
read in using LOAD XML
.
The inverse of the import operation just shown—that is, dumping MySQL table data into an XML file—can be accomplished using the mysql client from the system shell, as shown here:
shell> mysql --xml -e "SELECT * FROM test.person" > person-dump.xml
shell> cat person-dump.xml
<?xml version="1.0"?> <resultset statement="SELECT * FROM test.person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <row> <field name="person_id">1</field> <field name="fname">Kapek</field> <field name="lname">Sainnouine</field> </row> <row> <field name="person_id">2</field> <field name="fname">Sajon</field> <field name="lname">Rondela</field> </row> <row> <field name="person_id">3</field> <field name="fname">Likema</field> <field name="lname">Örrtmons</field> </row> <row> <field name="person_id">4</field> <field name="fname">Slar</field> <field name="lname">Manlanth</field> </row> <row> <field name="person_id">5</field> <field name="fname">Stoma</field> <field name="lname">Nilu</field> </row> <row> <field name="person_id">6</field> <field name="fname">Nirtam</field> <field name="lname">Sklöd</field> </row> <row> <field name="person_id">7</field> <field name="fname">Sungam</field> <field name="lname">Dulbåd</field> </row> <row> <field name="person_id">8</field> <field name="fname">Sreraf</field> <field name="lname">Encmelt</field> </row> </resultset>
The --xml
option causes the
mysql client to use XML formatting for its
output; the -e
option causes the client to execute the SQL statement
immediately following the option. See Section 4.5.1, “mysql — The MySQL Command-Line Tool”.
You can verify that the dump is valid by creating a copy of the
person
table and importing the dump file into
the new table, like this:
mysql>USE test;
mysql>CREATE TABLE person2 LIKE person;
Query OK, 0 rows affected (0.00 sec) mysql>LOAD XML LOCAL INFILE 'person-dump.xml'
->INTO TABLE person2;
Query OK, 8 rows affected (0.01 sec) Records: 8 Deleted: 0 Skipped: 0 Warnings: 0 mysql>SELECT * FROM person2;
+-----------+--------+------------+---------------------+ | person_id | fname | lname | created | +-----------+--------+------------+---------------------+ | 1 | Kapek | Sainnouine | 2007-07-13 16:18:47 | | 2 | Sajon | Rondela | 2007-07-13 16:18:47 | | 3 | Likema | Örrtmons | 2007-07-13 16:18:47 | | 4 | Slar | Manlanth | 2007-07-13 16:18:47 | | 5 | Stoma | Nilu | 2007-07-13 16:18:47 | | 6 | Nirtam | Sklöd | 2007-07-13 16:18:47 | | 7 | Sungam | Dulbåd | 2007-07-13 16:18:47 | | 8 | Sreraf | Encmelt | 2007-07-13 16:18:47 | +-----------+--------+------------+---------------------+ 8 rows in set (0.00 sec)
There is no requirement that every field in the XML file be
matched with a column in the corresponding table. Fields which
have no corresponding columns are skipped. You can see this by
first emptying the person2
table and dropping
the created
column, then using the same
LOAD XML
statement we just employed previously,
like this:
mysql>TRUNCATE person2;
Query OK, 8 rows affected (0.26 sec) mysql>ALTER TABLE person2 DROP COLUMN created;
Query OK, 0 rows affected (0.52 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SHOW CREATE TABLE person2\G
*************************** 1. row *************************** Table: person2 Create Table: CREATE TABLE `person2` ( `person_id` int(11) NOT NULL, `fname` varchar(40) DEFAULT NULL, `lname` varchar(40) DEFAULT NULL, PRIMARY KEY (`person_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 1 row in set (0.00 sec) mysql>LOAD XML LOCAL INFILE 'person-dump.xml'
->INTO TABLE person2;
Query OK, 8 rows affected (0.01 sec) Records: 8 Deleted: 0 Skipped: 0 Warnings: 0 mysql>SELECT * FROM person2;
+-----------+--------+------------+ | person_id | fname | lname | +-----------+--------+------------+ | 1 | Kapek | Sainnouine | | 2 | Sajon | Rondela | | 3 | Likema | Örrtmons | | 4 | Slar | Manlanth | | 5 | Stoma | Nilu | | 6 | Nirtam | Sklöd | | 7 | Sungam | Dulbåd | | 8 | Sreraf | Encmelt | +-----------+--------+------------+ 8 rows in set (0.00 sec)
The order in which the fields are given within each row of the XML
file does not affect the operation of LOAD XML
;
the field order can vary from row to row, and is not required to
be in the same order as the corresponding columns in the table.
As mentioned previously, you can use a (field_name_or_user_var, ...) list of one or more XML fields (to select desired fields only) or user variables (to store the corresponding field values for later use). User variables can be especially useful when you want to insert data from an XML file into table columns whose names do not match those of the XML fields. To see how this works, we first create a table named individual whose structure matches that of the person table, but whose columns are named differently:
mysql> CREATE TABLE individual (
    -> individual_id INT NOT NULL PRIMARY KEY,
    -> name1 VARCHAR(40) NULL,
    -> name2 VARCHAR(40) NULL,
    -> made TIMESTAMP
    -> );
Query OK, 0 rows affected (0.42 sec)
In this case, you cannot simply load the XML file directly into the table, because the field and column names do not match:
mysql> LOAD XML INFILE '../bin/person-dump.xml' INTO TABLE test.individual;
ERROR 1263 (22004): Column set to default value; NULL supplied to NOT NULL column 'individual_id' at row 1
This happens because the MySQL server looks for field names
matching the column names of the target table. You can work around
this problem by selecting the field values into user variables,
then setting the target table's columns equal to the values
of those variables using SET
. You can perform
both of these operations in a single statement, as shown here:
mysql>LOAD XML INFILE '../bin/person-dump.xml'
->INTO TABLE test.individual (@person_id, @fname, @lname, @created)
->SET individual_id=@person_id, name1=@fname, name2=@lname, made=@created;
Query OK, 8 rows affected (0.05 sec) Records: 8 Deleted: 0 Skipped: 0 Warnings: 0 mysql>SELECT * FROM individual;
+---------------+--------+------------+---------------------+ | individual_id | name1 | name2 | made | +---------------+--------+------------+---------------------+ | 1 | Kapek | Sainnouine | 2007-07-13 16:18:47 | | 2 | Sajon | Rondela | 2007-07-13 16:18:47 | | 3 | Likema | Örrtmons | 2007-07-13 16:18:47 | | 4 | Slar | Manlanth | 2007-07-13 16:18:47 | | 5 | Stoma | Nilu | 2007-07-13 16:18:47 | | 6 | Nirtam | Sklöd | 2007-07-13 16:18:47 | | 7 | Sungam | Dulbåd | 2007-07-13 16:18:47 | | 8 | Srraf | Encmelt | 2007-07-13 16:18:47 | +---------------+--------+------------+---------------------+ 8 rows in set (0.00 sec)
The names of the user variables must match
those of the corresponding fields from the XML file, with the
addition of the required @
prefix to indicate
that they are variables. The user variables need not be listed or
assigned in the same order as the corresponding fields.
Using a ROWS IDENTIFIED BY '<tagname>' clause, it is possible to import data from the same XML file into database tables with different definitions. For this example, suppose that you have a file named address.xml which contains the following XML:
<?xml version="1.0"?> <list> <person person_id="1"> <fname>Robert</fname> <lname>Jones</lname> <address address_id="1" street="Mill Creek Road" zip="45365" city="Sidney"/> <address address_id="2" street="Main Street" zip="28681" city="Taylorsville"/> </person> <person person_id="2"> <fname>Mary</fname> <lname>Smith</lname> <address address_id="3" street="River Road" zip="80239" city="Denver"/> <!-- <address address_id="4" street="North Street" zip="37920" city="Knoxville"/> --> </person> </list>
You can again use the test.person
table as
defined previously in this section, after clearing all the
existing records from the table and then showing its structure as
shown here:
mysql> TRUNCATE person;
Query OK, 0 rows affected (0.04 sec)

mysql> SHOW CREATE TABLE person\G
*************************** 1. row *************************** Table: person Create Table: CREATE TABLE `person` ( `person_id` int(11) NOT NULL, `fname` varchar(40) DEFAULT NULL, `lname` varchar(40) DEFAULT NULL, `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`person_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 1 row in set (0.00 sec)
Now create an address
table in the
test
database using the following
CREATE TABLE
statement:
CREATE TABLE address (
    address_id INT NOT NULL PRIMARY KEY,
    person_id INT NULL,
    street VARCHAR(40) NULL,
    zip INT NULL,
    city VARCHAR(40) NULL,
    created TIMESTAMP
);
To import the data from the XML file into the
person
table, execute the following
LOAD XML
statement, which specifies that rows are to be identified by the <person> element, as shown here:
mysql>LOAD XML LOCAL INFILE 'address.xml'
->INTO TABLE person
->ROWS IDENTIFIED BY '<person>';
Query OK, 2 rows affected (0.00 sec) Records: 2 Deleted: 0 Skipped: 0 Warnings: 0
You can verify that the records were imported using a
SELECT
statement:
mysql> SELECT * FROM person;
+-----------+--------+-------+---------------------+
| person_id | fname | lname | created |
+-----------+--------+-------+---------------------+
| 1 | Robert | Jones | 2007-07-24 17:37:06 |
| 2 | Mary | Smith | 2007-07-24 17:37:06 |
+-----------+--------+-------+---------------------+
2 rows in set (0.00 sec)
Since the <address>
elements in the XML
file have no corresponding columns in the
person
table, they are skipped.
To import the data from the <address>
elements into the address
table, use the
LOAD XML
statement shown here:
mysql>LOAD XML LOCAL INFILE 'address.xml'
->INTO TABLE address
->ROWS IDENTIFIED BY '<address>';
Query OK, 3 rows affected (0.00 sec) Records: 3 Deleted: 0 Skipped: 0 Warnings: 0
You can see that the data was imported using a
SELECT
statement such as this one:
mysql> SELECT * FROM address;
+------------+-----------+-----------------+-------+--------------+---------------------+
| address_id | person_id | street | zip | city | created |
+------------+-----------+-----------------+-------+--------------+---------------------+
| 1 | 1 | Mill Creek Road | 45365 | Sidney | 2007-07-24 17:37:37 |
| 2 | 1 | Main Street | 28681 | Taylorsville | 2007-07-24 17:37:37 |
| 3 | 2 | River Road | 80239 | Denver | 2007-07-24 17:37:37 |
+------------+-----------+-----------------+-------+--------------+---------------------+
3 rows in set (0.00 sec)
The data from the <address>
element that
is enclosed in XML comments is not imported. However, since there
is a person_id
column in the
address
table, the value of the
person_id
attribute from the parent
<person>
element for each
<address>
is
imported into the address
table.
Security Considerations.
As with the LOAD DATA
statement,
the transfer of the XML file from the client host to the server
host is initiated by the MySQL server. In theory, a patched
server could be built that would tell the client program to
transfer a file of the server's choosing rather than the file
named by the client in the LOAD
XML
statement. Such a server could access any file on
the client host to which the client user has read access.
In a Web environment, clients usually connect to MySQL from a Web
server. A user that can run any command against the MySQL server
can use LOAD XML
LOCAL
to read any files to which the Web server process
has read access. In this environment, the client with respect to
the MySQL server is actually the Web server, not the remote
program being run by the user who connects to the Web server.
You can disable loading of XML files from clients by starting the
server with --local-infile=0
or
--local-infile=OFF
. This option
can also be used when starting the mysql client
to disable LOAD XML
for the
duration of the client session.
To prevent a client from loading XML files from the server, do not
grant the FILE
privilege to the
corresponding MySQL user account, or revoke this privilege if the
client user account already has it.
Revoking the FILE privilege (or
not granting it in the first place) keeps the user only from
executing the LOAD XML INFILE
statement (as well as the LOAD_FILE()
function); it does not prevent the user from executing
LOAD XML LOCAL INFILE. To disallow this statement, you must start the
server or the client with --local-infile=OFF.
In other words, the FILE
privilege affects only whether the client can read files on the
server; it has no bearing on whether the client can read files
on the local file system.
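For example, the global FILE privilege can be revoked from an account as shown here (the account name is illustrative only):

REVOKE FILE ON *.* FROM 'webuser'@'localhost';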
For partitioned tables using storage engines that employ table
locks, such as MyISAM
, any locks
caused by LOAD XML
perform locks on all
partitions of the table. This does not apply to tables using
storage engines which employ row-level locking, such as
InnoDB
. For more information, see
Section 22.6.4, “Partitioning and Locking”.
REPLACE [LOW_PRIORITY | DELAYED]
    [INTO] tbl_name
    [PARTITION (partition_name,...)]
    [(col_name,...)]
    {VALUES | VALUE} ({expr | DEFAULT},...),(...),...
Or:
REPLACE [LOW_PRIORITY | DELAYED]
    [INTO] tbl_name
    [PARTITION (partition_name,...)]
    SET col_name={expr | DEFAULT}, ...
Or:
REPLACE [LOW_PRIORITY | DELAYED]
    [INTO] tbl_name
    [PARTITION (partition_name,...)]
    [(col_name,...)]
    SELECT ...
REPLACE
works exactly like
INSERT
, except that if an old row
in the table has the same value as a new row for a
PRIMARY KEY
or a UNIQUE
index, the old row is deleted before the new row is inserted. See
Section 13.2.5, “INSERT Syntax”.
REPLACE
is a MySQL extension to the
SQL standard. It either inserts, or deletes
and inserts. For another MySQL extension to standard
SQL—that either inserts or
updates—see
Section 13.2.5.3, “INSERT ... ON DUPLICATE KEY UPDATE Syntax”.
DELAYED
inserts and replaces were deprecated in
MySQL 5.6.6. In MySQL 5.7, DELAYED
is not supported. The server recognizes but ignores the
DELAYED
keyword, handles the replace as a
nondelayed replace, and generates an
ER_WARN_LEGACY_SYNTAX_CONVERTED
warning.
(“REPLACE DELAYED is no longer supported. The statement was
converted to REPLACE.”) The DELAYED
keyword will be removed in a future release.
REPLACE
makes sense only if a
table has a PRIMARY KEY
or
UNIQUE
index. Otherwise, it becomes
equivalent to INSERT
, because
there is no index to be used to determine whether a new row
duplicates another.
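As a brief illustration (the table and column names here are hypothetical), a table with no PRIMARY KEY or UNIQUE index simply accumulates rows rather than replacing them:

CREATE TABLE t_nokey (id INT, val VARCHAR(10));   # no PRIMARY KEY or UNIQUE index
REPLACE INTO t_nokey VALUES (1, 'first');
REPLACE INTO t_nokey VALUES (1, 'second');
SELECT * FROM t_nokey;                            # both rows remain; REPLACE behaved like INSERT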
Values for all columns are taken from the values specified in the
REPLACE
statement. Any missing
columns are set to their default values, just as happens for
INSERT
. You cannot refer to values
from the current row and use them in the new row. If you use an
assignment such as SET col_name = col_name + 1, the reference
to the column name on the right hand side is treated as
DEFAULT(col_name), so the assignment is equivalent to
SET col_name = DEFAULT(col_name) + 1.
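The following sketch, using a hypothetical counters table, illustrates this rule; because hits on the right hand side is read as DEFAULT(hits), the stored value is always 1, regardless of any previous row with id = 1:

CREATE TABLE counters (id INT PRIMARY KEY, hits INT DEFAULT 0);
REPLACE INTO counters SET id = 1, hits = hits + 1;   # equivalent to SET id = 1, hits = DEFAULT(hits) + 1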
If a generated column is replaced explicitly, the only permitted
value is DEFAULT
. For information about
generated columns, see
Section 13.1.18.8, “CREATE TABLE and Generated Columns”.
To use REPLACE
, you must have both
the INSERT
and
DELETE
privileges for the table.
REPLACE
supports explicit partition selection
using the PARTITION
keyword with a
comma-separated list of names of partitions, subpartitions, or
both. As with INSERT
, if it is not
possible to insert the new row into any of these partitions or
subpartitions, the REPLACE
statement fails with
the error Found a row not matching the given partition
set. See Section 22.5, “Partition Selection”, for
more information.
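For illustration, assuming a sales table partitioned such that a partition named p0 exists (the table and partition names are hypothetical), a REPLACE restricted to that partition might be written as follows; if the new row does not belong to p0, the statement fails with the error just described:

REPLACE INTO sales PARTITION (p0) (store_id, amount)
    VALUES (3, 99.50);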
The REPLACE
statement returns a
count to indicate the number of rows affected. This is the sum of
the rows deleted and inserted. If the count is 1 for a single-row
REPLACE
, a row was inserted and no
rows were deleted. If the count is greater than 1, one or more old
rows were deleted before the new row was inserted. It is possible
for a single row to replace more than one old row if the table
contains multiple unique indexes and the new row duplicates values
for different old rows in different unique indexes.
The affected-rows count makes it easy to determine whether
REPLACE
only added a row or whether
it also replaced any rows: Check whether the count is 1 (added) or
greater (replaced).
If you are using the C API, the affected-rows count can be
obtained using the
mysql_affected_rows()
function.
You cannot replace into a table and select from the same table in a subquery.
MySQL uses the following algorithm for
REPLACE (and LOAD DATA ... REPLACE):

1. Try to insert the new row into the table.

2. While the insertion fails because a duplicate-key error occurs
   for a PRIMARY KEY or unique index:

   a. Delete from the table the conflicting row that has the
      duplicate key value.

   b. Try again to insert the new row into the table.
It is possible that in the case of a duplicate-key error, a
storage engine may perform the REPLACE
as an
update rather than a delete plus insert, but the semantics are the
same. There are no user-visible effects other than a possible
difference in how the storage engine increments
Handler_xxx status variables.
Because the results of REPLACE ... SELECT
statements depend on the ordering of rows from the
SELECT
and this order cannot always
be guaranteed, it is possible when logging these statements for
the master and the slave to diverge. For this reason,
REPLACE ... SELECT
statements are flagged as
unsafe for statement-based replication. With this change, such
statements produce a warning in the log when using the
STATEMENT
binary logging mode, and are logged
using the row-based format when using MIXED
mode. See also Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based
Replication”.
When modifying an existing table that is not partitioned to
accommodate partitioning, or, when modifying the partitioning of
an already partitioned table, you may consider altering the
table's primary key (see
Section 22.6.1, “Partitioning Keys, Primary Keys, and Unique Keys”).
You should be aware that, if you do this, the results of
REPLACE
statements may be affected, just as
they would be if you modified the primary key of a nonpartitioned
table. Consider the table created by the following
CREATE TABLE
statement:
CREATE TABLE test (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  data VARCHAR(64) DEFAULT NULL,
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);
When we create this table and run the statements shown in the mysql client, the result is as follows:
mysql> REPLACE INTO test VALUES (1, 'Old', '2014-08-20 18:47:00');
Query OK, 1 row affected (0.04 sec)

mysql> REPLACE INTO test VALUES (1, 'New', '2014-08-20 18:47:42');
Query OK, 2 rows affected (0.04 sec)

mysql> SELECT * FROM test;
+----+------+---------------------+
| id | data | ts                  |
+----+------+---------------------+
|  1 | New  | 2014-08-20 18:47:42 |
+----+------+---------------------+
1 row in set (0.00 sec)
Now we create a second table almost identical to the first, except that the primary key now covers two columns, as shown here:
CREATE TABLE test2 (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
data VARCHAR(64) DEFAULT NULL,
ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (id, ts)
);
When we run on test2
the same two
REPLACE
statements as we did on the original
test
table, we obtain a different result:
mysql> REPLACE INTO test2 VALUES (1, 'Old', '2014-08-20 18:47:00');
Query OK, 1 row affected (0.05 sec)

mysql> REPLACE INTO test2 VALUES (1, 'New', '2014-08-20 18:47:42');
Query OK, 1 row affected (0.06 sec)

mysql> SELECT * FROM test2;
+----+------+---------------------+
| id | data | ts                  |
+----+------+---------------------+
|  1 | Old  | 2014-08-20 18:47:00 |
|  1 | New  | 2014-08-20 18:47:42 |
+----+------+---------------------+
2 rows in set (0.00 sec)
This is due to the fact that, when run on
test2
, both the id
and
ts
column values must match those of an
existing row for the row to be replaced; otherwise, a row is
inserted.
In MySQL 5.7, a REPLACE
statement
affecting a partitioned table using a storage engine such as
MyISAM
that employs table-level locks
locks only those partitions containing rows that match the
REPLACE
statement's
WHERE
clause, as long as none of the
table's partitioning columns are updated; otherwise the
entire table is locked. (For storage engines such as
InnoDB
that employ row-level locking,
no locking of partitions takes place.) For more information, see
Section 22.6.4, “Partitioning and Locking”.
SELECT
    [ALL | DISTINCT | DISTINCTROW ]
      [HIGH_PRIORITY]
      [STRAIGHT_JOIN]
      [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
      [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
    select_expr [, select_expr ...]
    [FROM table_references
      [PARTITION partition_list]
    [WHERE where_condition]
    [GROUP BY {col_name | expr | position}
      [ASC | DESC], ... [WITH ROLLUP]]
    [HAVING where_condition]
    [ORDER BY {col_name | expr | position}
      [ASC | DESC], ...]
    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
    [PROCEDURE procedure_name(argument_list)]
    [INTO OUTFILE 'file_name'
        [CHARACTER SET charset_name]
        export_options
      | INTO DUMPFILE 'file_name'
      | INTO var_name [, var_name]]
    [FOR UPDATE | LOCK IN SHARE MODE]]
SELECT
is used to retrieve rows
selected from one or more tables, and can include
UNION
statements and subqueries.
See Section 13.2.9.3, “UNION Syntax”, and Section 13.2.10, “Subquery Syntax”.
The most commonly used clauses of
SELECT
statements are these:
Each select_expr
indicates a column
that you want to retrieve. There must be at least one
select_expr
.
table_references
indicates the
table or tables from which to retrieve rows. Its syntax is
described in Section 13.2.9.2, “JOIN Syntax”.
SELECT supports explicit partition
selection using the PARTITION option with a list
of partitions or subpartitions (or both) following the name of
the table in a table_reference
(see
Section 13.2.9.2, “JOIN Syntax”). In this case, rows are selected only
from the partitions listed, and any other partitions of the
table are ignored. For more information and examples, see
Section 22.5, “Partition Selection”.
SELECT ... PARTITION
from tables using
storage engines such as MyISAM
that perform table-level locks (and thus partition locks) lock
only the partitions or subpartitions named by the
PARTITION
option.
See Section 22.6.4, “Partitioning and Locking”, for more information.
The WHERE
clause, if given, indicates the
condition or conditions that rows must satisfy to be selected.
where_condition
is an expression
that evaluates to true for each row to be selected. The
statement selects all rows if there is no
WHERE
clause.
In the WHERE
expression, you can use any of
the functions and operators that MySQL supports, except for
aggregate (summary) functions. See
Section 9.5, “Expression Syntax”, and
Chapter 12, Functions and Operators.
SELECT
can also be used to retrieve
rows computed without reference to any table.
For example:
mysql> SELECT 1 + 1;
-> 2
You are permitted to specify DUAL
as a dummy
table name in situations where no tables are referenced:
mysql> SELECT 1 + 1 FROM DUAL;
-> 2
DUAL
is purely for the convenience of people
who require that all SELECT
statements should have FROM
and possibly other
clauses. MySQL may ignore the clauses. MySQL does not require
FROM DUAL
if no tables are referenced.
In general, clauses used must be given in exactly the order shown
in the syntax description. For example, a
HAVING
clause must come after any
GROUP BY
clause and before any ORDER
BY
clause. The exception is that the
INTO
clause can appear either as shown in the
syntax description or immediately following the
select_expr
list. For more information
about INTO
, see Section 13.2.9.1, “SELECT ... INTO Syntax”.
The list of select_expr
terms comprises
the select list that indicates which columns to retrieve. Terms
specify a column or expression or can use
*
-shorthand:
A select list consisting only of a single unqualified
*
can be used as shorthand to select all
columns from all tables:
SELECT * FROM t1 INNER JOIN t2 ...

tbl_name.* can
be used as a qualified shorthand to select all columns from
the named table:

SELECT t1.*, t2.* FROM t1 INNER JOIN t2 ...
Use of an unqualified * with other items in
the select list may produce a parse error. To avoid this
problem, use a qualified tbl_name.* reference:

SELECT AVG(score), t1.* FROM t1 ...
The following list provides additional information about other
SELECT
clauses:
A select_expr can be given an alias
using AS alias_name. The alias is
used as the expression's column name and can be used in
GROUP BY, ORDER BY, or
HAVING clauses. For example:
SELECT CONCAT(last_name,', ',first_name) AS full_name FROM mytable ORDER BY full_name;
The AS
keyword is optional when aliasing a
select_expr
with an identifier. The
preceding example could have been written like this:
SELECT CONCAT(last_name,', ',first_name) full_name FROM mytable ORDER BY full_name;
However, because the AS
is optional, a
subtle problem can occur if you forget the comma between two
select_expr
expressions: MySQL
interprets the second as an alias name. For example, in the
following statement, columnb
is treated as
an alias name:
SELECT columna columnb FROM mytable;
For this reason, it is good practice to be in the habit of
using AS
explicitly when specifying column
aliases.
It is not permissible to refer to a column alias in a
WHERE
clause, because the column value
might not yet be determined when the WHERE
clause is executed. See Section B.5.4.4, “Problems with Column Aliases”.
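A short sketch of the problem and a workaround, using a hypothetical orders table:

# Fails with an unknown-column error: the alias is not known when WHERE is evaluated
#   SELECT price * quantity AS total FROM orders WHERE total > 100;
# Repeat the expression in the WHERE clause (or filter with HAVING) instead:
SELECT price * quantity AS total FROM orders WHERE price * quantity > 100;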
The FROM table_references clause
indicates the table or tables from which to retrieve rows. If
you name more than one table, you are performing a join. For
information on join syntax, see Section 13.2.9.2, “JOIN Syntax”. For
each table specified, you can optionally specify an alias.

tbl_name [[AS] alias] [index_hint]
The use of index hints provides the optimizer with information about how to choose indexes during query processing. For a description of the syntax for specifying these hints, see Section 8.9.4, “Index Hints”.
You can use SET max_seeks_for_key=value
as an alternative way to force MySQL to prefer key scans
instead of table scans. See
Section 5.1.5, “Server System Variables”.
You can refer to a table within the default database as
tbl_name, or as
db_name.tbl_name
to specify a database explicitly. You can refer to a column as
col_name,
tbl_name.col_name,
or
db_name.tbl_name.col_name.

You need not specify a tbl_name or
db_name.tbl_name
prefix for a column reference unless the reference would be
ambiguous. See Section 9.2.1, “Identifier Qualifiers”, for
examples of ambiguity that require the more explicit column
reference forms.
A table reference can be aliased using
tbl_name AS alias_name or
tbl_name alias_name:
SELECT t1.name, t2.salary FROM employee AS t1, info AS t2 WHERE t1.name = t2.name; SELECT t1.name, t2.salary FROM employee t1, info t2 WHERE t1.name = t2.name;
Columns selected for output can be referred to in
ORDER BY
and GROUP BY
clauses using column names, column aliases, or column
positions. Column positions are integers and begin with 1:
SELECT college, region, seed FROM tournament ORDER BY region, seed; SELECT college, region AS r, seed AS s FROM tournament ORDER BY r, s; SELECT college, region, seed FROM tournament ORDER BY 2, 3;
To sort in reverse order, add the DESC
(descending) keyword to the name of the column in the
ORDER BY
clause that you are sorting by.
The default is ascending order; this can be specified
explicitly using the ASC
keyword.
If ORDER BY
occurs within a subquery and
also is applied in the outer query, the outermost
ORDER BY
takes precedence. For example,
results for the following statement are sorted in descending
order, not ascending order:
(SELECT ... ORDER BY a) ORDER BY a DESC;
Use of column positions is deprecated because the syntax has been removed from the SQL standard.
If you use GROUP BY
, output rows are sorted
according to the GROUP BY
columns as if you
had an ORDER BY
for the same columns. To
avoid the overhead of sorting that GROUP BY
produces, add ORDER BY NULL
:
SELECT a, COUNT(b) FROM test_table GROUP BY a ORDER BY NULL;
Relying on implicit GROUP BY
sorting (that
is, sorting in the absence of ASC
or
DESC
designators) is deprecated. To produce
a given sort order, use explicit ASC
or
DESC
designators for GROUP
BY
columns or provide an ORDER BY
clause.
When you use ORDER BY
or GROUP
BY
to sort a column in a
SELECT
, the server sorts values
using only the initial number of bytes indicated by the
max_sort_length
system
variable.
MySQL extends the GROUP BY
clause so that
you can also specify ASC
and
DESC
after columns named in the clause:
SELECT a, COUNT(b) FROM test_table GROUP BY a DESC;
MySQL extends the use of GROUP BY
to permit
selecting fields that are not mentioned in the GROUP
BY
clause. If you are not getting the results that
you expect from your query, please read the description of
GROUP BY
found in
Section 12.19, “Aggregate (GROUP BY) Functions”.
GROUP BY
permits a WITH
ROLLUP
modifier. See
Section 12.19.2, “GROUP BY Modifiers”.
The HAVING
clause is applied nearly last,
just before items are sent to the client, with no
optimization. (LIMIT
is applied after
HAVING
.)
The SQL standard requires that HAVING
must
reference only columns in the GROUP BY
clause or columns used in aggregate functions. However, MySQL
supports an extension to this behavior, and permits
HAVING
to refer to columns in the
SELECT
list and columns in
outer subqueries as well.
If the HAVING
clause refers to a column
that is ambiguous, a warning occurs. In the following
statement, col2
is ambiguous because it is
used as both an alias and a column name:
SELECT COUNT(col1) AS col2 FROM t GROUP BY col2 HAVING col2 = 2;
Preference is given to standard SQL behavior, so if a
HAVING
column name is used both in
GROUP BY
and as an aliased column in the
output column list, preference is given to the column in the
GROUP BY
column.
Do not use HAVING
for items that should be
in the WHERE
clause. For example, do not
write the following:
SELECT col_name FROM tbl_name HAVING col_name > 0;
Write this instead:
SELECT col_name FROM tbl_name WHERE col_name > 0;
The HAVING
clause can refer to aggregate
functions, which the WHERE
clause cannot:
SELECT user, MAX(salary) FROM users GROUP BY user HAVING MAX(salary) > 10;
(This did not work in some older versions of MySQL.)
MySQL permits duplicate column names. That is, there can be
more than one select_expr
with the
same name. This is an extension to standard SQL. Because MySQL
also permits GROUP BY
and
HAVING
to refer to
select_expr
values, this can result
in an ambiguity:
SELECT 12 AS a, a FROM t GROUP BY a;
In that statement, both columns have the name
a
. To ensure that the correct column is
used for grouping, use different names for each
select_expr
.
MySQL resolves unqualified column or alias references in
ORDER BY
clauses by searching in the
select_expr
values, then in the
columns of the tables in the FROM
clause.
For GROUP BY
or HAVING
clauses, it searches the FROM
clause before
searching in the select_expr
values. (For GROUP BY
and
HAVING
, this differs from the pre-MySQL 5.0
behavior that used the same rules as for ORDER
BY
.)
The LIMIT
clause can be used to constrain
the number of rows returned by the
SELECT
statement.
LIMIT
takes one or two numeric arguments,
which must both be nonnegative integer constants, with these
exceptions:
Within prepared statements, LIMIT
parameters can be specified using ?
placeholder markers.
Within stored programs, LIMIT
parameters can be specified using integer-valued routine
parameters or local variables.
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
In other words, LIMIT row_count is equivalent
to LIMIT 0, row_count.
For prepared statements, you can use placeholders. The
following statements will return one row from the
tbl
table:
SET @a=1; PREPARE STMT FROM 'SELECT * FROM tbl LIMIT ?'; EXECUTE STMT USING @a;
The following statements will return the second to sixth row
from the tbl
table:
SET @skip=1; SET @numrows=5; PREPARE STMT FROM 'SELECT * FROM tbl LIMIT ?, ?'; EXECUTE STMT USING @skip, @numrows;
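The corresponding case inside a stored program, where an integer routine parameter supplies the LIMIT value, might look like this sketch (the procedure name is hypothetical):

DELIMITER //
CREATE PROCEDURE first_rows (IN n INT)
BEGIN
  SELECT * FROM tbl LIMIT n;   # integer routine parameter used as a LIMIT argument
END//
DELIMITER ;

CALL first_rows(5);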
For compatibility with PostgreSQL, MySQL also supports the
LIMIT row_count OFFSET offset syntax.
If LIMIT
occurs within a subquery and also
is applied in the outer query, the outermost
LIMIT
takes precedence. For example, the
following statement produces two rows, not one:
(SELECT ... LIMIT 1) LIMIT 2;
A PROCEDURE
clause names a procedure that
should process the data in the result set. For an example, see
Section 8.4.2.4, “Using PROCEDURE ANALYSE”, which describes
ANALYSE
, a procedure that can be used to
obtain suggestions for optimal column data types that may help
reduce table sizes.
A PROCEDURE
clause is not permitted in a
UNION
statement.
PROCEDURE
syntax is deprecated as of
MySQL 5.7.18, and is removed in MySQL 8.0.
The SELECT ...
INTO
form of SELECT
enables the query result to be written to a file or stored in
variables. For more information, see
Section 13.2.9.1, “SELECT ... INTO Syntax”.
If you use FOR UPDATE
with a storage engine
that uses page or row locks, rows examined by the query are
write-locked until the end of the current transaction. Using
LOCK IN SHARE MODE
sets a shared lock that
permits other transactions to read the examined rows but not
to update or delete them. See
Section 14.5.2.4, “Locking Reads”.
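A minimal sketch of a locking read, assuming a hypothetical InnoDB table named accounts:

START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;   # examined row is write-locked until COMMIT
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;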
In addition, you cannot use FOR UPDATE as
part of the SELECT in a
statement such as
CREATE TABLE new_table SELECT ... FROM old_table .... (If you
attempt to do so, the statement is rejected with the error
Can't update table 'old_table' while
'new_table' is being
created.) This is a change in behavior from MySQL
5.5 and earlier, which permitted
CREATE
TABLE ... SELECT statements to make changes in
tables other than the table being created.
Following the SELECT
keyword, you
can use a number of modifiers that affect the operation of the
statement. HIGH_PRIORITY
,
STRAIGHT_JOIN
, and modifiers beginning with
SQL_
are MySQL extensions to standard SQL.
The ALL
and DISTINCT
modifiers specify whether duplicate rows should be returned.
ALL
(the default) specifies that all
matching rows should be returned, including duplicates.
DISTINCT
specifies removal of duplicate
rows from the result set. It is an error to specify both
modifiers. DISTINCTROW
is a synonym for
DISTINCT
.
HIGH_PRIORITY
gives the
SELECT
higher priority than a
statement that updates a table. You should use this only for
queries that are very fast and must be done at once. A
SELECT HIGH_PRIORITY
query that is issued
while the table is locked for reading runs even if there is an
update statement waiting for the table to be free. This
affects only storage engines that use only table-level locking
(such as MyISAM
, MEMORY
,
and MERGE
).
HIGH_PRIORITY
cannot be used with
SELECT
statements that are part
of a UNION
.
STRAIGHT_JOIN
forces the optimizer to join
the tables in the order in which they are listed in the
FROM
clause. You can use this to speed up a
query if the optimizer joins the tables in nonoptimal order.
STRAIGHT_JOIN
also can be used in the
table_references
list. See
Section 13.2.9.2, “JOIN Syntax”.
STRAIGHT_JOIN
does not apply to any table
that the optimizer treats as a
const
or
system
table. Such a table
produces a single row, is read during the optimization phase
of query execution, and references to its columns are replaced
with the appropriate column values before query execution
proceeds. These tables will appear first in the query plan
displayed by EXPLAIN
. See
Section 8.8.1, “Optimizing Queries with EXPLAIN”. This exception may not apply
to const
or
system
tables that are used
on the NULL
-complemented side of an outer
join (that is, the right-side table of a LEFT
JOIN or the left-side table of a RIGHT
JOIN).
SQL_BIG_RESULT
or
SQL_SMALL_RESULT
can be used with
GROUP BY
or DISTINCT
to
tell the optimizer that the result set has many rows or is
small, respectively. For SQL_BIG_RESULT
,
MySQL directly uses disk-based temporary tables if needed, and
prefers sorting to using a temporary table with a key on the
GROUP BY
elements. For
SQL_SMALL_RESULT
, MySQL uses fast temporary
tables to store the resulting table instead of using sorting.
This should not normally be needed.
SQL_BUFFER_RESULT
forces the result to be
put into a temporary table. This helps MySQL free the table
locks early and helps in cases where it takes a long time to
send the result set to the client. This modifier can be used
only for top-level SELECT
statements, not for subqueries or following
UNION
.
SQL_CALC_FOUND_ROWS
tells MySQL to
calculate how many rows there would be in the result set,
disregarding any LIMIT
clause. The number
of rows can then be retrieved with SELECT
FOUND_ROWS()
. See
Section 12.14, “Information Functions”.
The SQL_CACHE
and
SQL_NO_CACHE
modifiers affect caching of
query results in the query cache (see
Section 8.10.3, “The MySQL Query Cache”). SQL_CACHE
tells MySQL to store the result in the query cache if it is
cacheable and the value of the
query_cache_type
system
variable is 2
or DEMAND
.
With SQL_NO_CACHE
, the server does not use
the query cache. It neither checks the query cache to see
whether the result is already cached, nor does it cache the
query result.
These two modifiers are mutually exclusive and an error occurs
if they are both specified. Also, these modifiers are not
permitted in subqueries (including subqueries in the
FROM
clause), and
SELECT
statements in unions
other than the first SELECT
.
For views, SQL_NO_CACHE
applies if it
appears in any SELECT
in the
query. For a cacheable query, SQL_CACHE
applies if it appears in the first
SELECT
of a view referred to by
the query.
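For example, with a hypothetical customer table, the two modifiers are written as follows:

SELECT SQL_CACHE    id, name FROM customer WHERE id = 10;   # cache the result if query_cache_type is DEMAND
SELECT SQL_NO_CACHE id, name FROM customer WHERE id = 10;   # bypass the query cache entirely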
In MySQL 5.7, a SELECT
from a
partitioned table using a storage engine such as
MyISAM
that employs table-level locks
locks only those partitions containing rows that match the
SELECT
statement's
WHERE
clause. (This does not occur with storage
engines such as InnoDB
that employ
row-level locking.) For more information, see
Section 22.6.4, “Partitioning and Locking”.
The SELECT ...
INTO
form of SELECT
enables a query result to be stored in variables or written to a
file:
SELECT ... INTO
selects column
values and stores them into variables.
var_list
SELECT ... INTO OUTFILE
writes the
selected rows to a file. Column and line terminators can be
specified to produce a specific output format.
SELECT ... INTO DUMPFILE
writes a single
row to a file without any formatting.
The SELECT
syntax description
(see Section 13.2.9, “SELECT Syntax”) shows the INTO
clause near the end of the statement. It is also possible to use
INTO
immediately following the
select_expr
list.
An INTO
clause should not be used in a nested
SELECT
because such a
SELECT
must return its result to
the outer context.
The INTO
clause can name a list of one or
more variables, which can be user-defined variables, stored
procedure or function parameters, or stored program local
variables. (Within a prepared SELECT ... INTO
OUTFILE
statement, only user-defined variables are
permitted; see Section 13.6.4.2, “Local Variable Scope and Resolution”.)
The selected values are assigned to the variables. The number of
variables must match the number of columns. The query should
return a single row. If the query returns no rows, a warning
with error code 1329 occurs (No data
), and
the variable values remain unchanged. If the query returns
multiple rows, error 1172 occurs (Result consisted of
more than one row
). If it is possible that the
statement may retrieve multiple rows, you can use LIMIT
1
to limit the result set to a single row.
SELECT id, data INTO @x, @y FROM test.t1 LIMIT 1;
User variable names are not case sensitive. See Section 9.4, “User-Defined Variables”.
The SELECT ... INTO
OUTFILE 'file_name' form of
SELECT writes the selected rows
to a file. The file is created on the server host, so you must
have the FILE privilege to use
this syntax. file_name cannot be an
existing file, which among other things prevents files such as
/etc/passwd
and database tables from being
destroyed. The
character_set_filesystem
system
variable controls the interpretation of the file name.
The SELECT ... INTO
OUTFILE
statement is intended primarily to let you
very quickly dump a table to a text file on the server machine.
If you want to create the resulting file on some other host than
the server host, you normally cannot use
SELECT ... INTO
OUTFILE
since there is no way to write a path to the
file relative to the server host's file system.
However, if the MySQL client software is installed on the remote
machine, you can instead use a client command such as
mysql -e "SELECT ..." > file_name to generate the
file on the client host.
It is also possible to create the resulting file on a host other than the server host, if the location of the file on the remote host can be accessed using a network-mapped path on the server's file system. In this case, the presence of mysql (or some other MySQL client program) is not required on the target host.
SELECT ... INTO
OUTFILE
is the complement of
LOAD DATA
INFILE
. Column values are written converted to the
character set specified in the CHARACTER SET
clause. If no such clause is present, values are dumped using
the binary
character set. In effect, there is
no character set conversion. If a result set contains columns in
several character sets, the output data file will as well and
you may not be able to reload the file correctly.
The syntax for the export_options
part of the statement consists of the same
FIELDS
and LINES
clauses
that are used with the
LOAD DATA
INFILE
statement. See Section 13.2.6, “LOAD DATA INFILE Syntax”, for
information about the FIELDS
and
LINES
clauses, including their default values
and permissible values.
FIELDS ESCAPED BY
controls how to write
special characters. If the FIELDS ESCAPED BY
character is not empty, it is used when necessary to avoid
ambiguity as a prefix that precedes following characters on
output:
The FIELDS ESCAPED BY
character
The FIELDS [OPTIONALLY] ENCLOSED BY
character
The first character of the FIELDS TERMINATED
BY
and LINES TERMINATED BY
values
ASCII NUL
(the zero-valued byte; what is
actually written following the escape character is ASCII
0
, not a zero-valued byte)
The FIELDS TERMINATED BY
, ENCLOSED
BY
, ESCAPED BY
, or LINES
TERMINATED BY
characters must be
escaped so that you can read the file back in reliably. ASCII
NUL
is escaped to make it easier to view with
some pagers.
The resulting file does not have to conform to SQL syntax, so nothing else need be escaped.
If the FIELDS ESCAPED BY
character is empty,
no characters are escaped and NULL
is output
as NULL
, not \N
. It is
probably not a good idea to specify an empty escape character,
particularly if field values in your data contain any of the
characters in the list just given.
Here is an example that produces a file in the comma-separated values (CSV) format used by many programs:
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' FROM test_table;
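Such a file can later be reloaded with LOAD DATA INFILE using the same FIELDS and LINES options; the target table name here is only illustrative:

LOAD DATA INFILE '/tmp/result.txt' INTO TABLE test_table_copy
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';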
If you use INTO DUMPFILE
instead of
INTO OUTFILE
, MySQL writes only one row into
the file, without any column or line termination and without
performing any escape processing. This is useful if you want to
store a BLOB
value in a file.
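For example, a single BLOB value might be dumped to a file like this (the table and column names are hypothetical):

SELECT document FROM docs WHERE id = 1 INTO DUMPFILE '/tmp/document.bin';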
Any file created by INTO OUTFILE
or
INTO DUMPFILE
is writable by all users on
the server host. The reason for this is that the MySQL server
cannot create a file that is owned by anyone other than the
user under whose account it is running. (You should
never run mysqld as
root
for this and other reasons.) The file
thus must be world-writable so that you can manipulate its
contents.
If the secure_file_priv
system variable is set to a nonempty directory name, the file
to be written must be located in that directory.
In the context of
SELECT ...
INTO
statements that occur as part of events executed
by the Event Scheduler, diagnostics messages (not only errors,
but also warnings) are written to the error log, and, on
Windows, to the application event log. For additional
information, see Section 23.4.5, “Event Scheduler Status”.
MySQL supports the following JOIN
syntax for
the table_references
part of
SELECT
statements and
multiple-table DELETE
and
UPDATE
statements:
table_references:
    escaped_table_reference [, escaped_table_reference] ...

escaped_table_reference:
    table_reference
  | { OJ table_reference }

table_reference:
    table_factor
  | join_table

table_factor:
    tbl_name [PARTITION (partition_names)]
        [[AS] alias] [index_hint_list]
  | table_subquery [AS] alias
  | ( table_references )

join_table:
    table_reference [INNER | CROSS] JOIN table_factor [join_condition]
  | table_reference STRAIGHT_JOIN table_factor
  | table_reference STRAIGHT_JOIN table_factor ON conditional_expr
  | table_reference {LEFT|RIGHT} [OUTER] JOIN table_reference join_condition
  | table_reference NATURAL [{LEFT|RIGHT} [OUTER]] JOIN table_factor

join_condition:
    ON conditional_expr
  | USING (column_list)

index_hint_list:
    index_hint [, index_hint] ...

index_hint:
    USE {INDEX|KEY}
      [FOR {JOIN|ORDER BY|GROUP BY}] ([index_list])
  | IGNORE {INDEX|KEY}
      [FOR {JOIN|ORDER BY|GROUP BY}] (index_list)
  | FORCE {INDEX|KEY}
      [FOR {JOIN|ORDER BY|GROUP BY}] (index_list)

index_list:
    index_name [, index_name] ...
A table reference is also known as a join expression.
A table reference (when it refers to a partitioned table) may
contain a PARTITION
option, including a
comma-separated list of partitions, subpartitions, or both. This
option follows the name of the table and precedes any alias
declaration. The effect of this option is that rows are selected
only from the listed partitions or subpartitions. Any partitions
or subpartitions not named in the list are ignored. For more
information, see Section 22.5, “Partition Selection”.
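For example, assuming a table employees partitioned such that partitions p0 and p1 exist (the names are illustrative), only those partitions are read here:

SELECT * FROM employees PARTITION (p0, p1) WHERE store_id < 10;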
The syntax of table_factor
is
extended in MySQL in comparison with standard SQL. The standard
accepts only table_reference
, not a
list of them inside a pair of parentheses.
This is a conservative extension if each comma in a list of
table_reference
items is considered
as equivalent to an inner join. For example:
SELECT * FROM t1 LEFT JOIN (t2, t3, t4) ON (t2.a = t1.a AND t3.b = t1.b AND t4.c = t1.c)
is equivalent to:
SELECT * FROM t1 LEFT JOIN (t2 CROSS JOIN t3 CROSS JOIN t4) ON (t2.a = t1.a AND t3.b = t1.b AND t4.c = t1.c)
In MySQL, JOIN
, CROSS
JOIN
, and INNER JOIN
are syntactic
equivalents (they can replace each other). In standard SQL, they
are not equivalent. INNER JOIN
is used with
an ON
clause, CROSS JOIN
is used otherwise.
In general, parentheses can be ignored in join expressions containing only inner join operations. MySQL also supports nested joins. See Section 8.2.1.7, “Nested Join Optimization”.
Index hints can be specified to affect how the MySQL optimizer
makes use of indexes. For more information, see
Section 8.9.4, “Index Hints”. Optimizer hints and the
optimizer_switch
system variable are other
ways to influence optimizer use of indexes. See
Section 8.9.2, “Optimizer Hints”, and
Section 8.9.3, “Switchable Optimizations”.
The following list describes general factors to take into account when writing joins:
A table reference can be aliased using
tbl_name AS alias_name or
tbl_name alias_name:
SELECT t1.name, t2.salary FROM employee AS t1 INNER JOIN info AS t2 ON t1.name = t2.name; SELECT t1.name, t2.salary FROM employee t1 INNER JOIN info t2 ON t1.name = t2.name;
A table_subquery
is also known as
a derived table or subquery in the FROM
clause. See Section 13.2.10.8, “Derived Tables (Subqueries in the FROM Clause)”. Such
subqueries must include an alias to
give the subquery result a table name. A trivial example
follows:
SELECT * FROM (SELECT 1, 2, 3) AS t1;
INNER JOIN
and ,
(comma) are semantically equivalent in the absence of a join
condition: both produce a Cartesian product between the
specified tables (that is, each and every row in the first
table is joined to each and every row in the second table).
However, the precedence of the comma operator is less than
that of INNER JOIN
, CROSS
JOIN
, LEFT JOIN
, and so on. If
you mix comma joins with the other join types when there is
a join condition, an error of the form Unknown
column 'col_name' in 'on clause'
may occur. Information about dealing with
this problem is given later in this section.
The conditional_expr
used with
ON
is any conditional expression of the
form that can be used in a WHERE
clause.
Generally, the ON
clause serves for
conditions that specify how to join tables, and the
WHERE
clause restricts which rows to
include in the result set.
If there is no matching row for the right table in the
ON
or USING
part in a
LEFT JOIN
, a row with all columns set to
NULL
is used for the right table. You can
use this fact to find rows in a table that have no
counterpart in another table:
SELECT left_tbl.* FROM left_tbl LEFT JOIN right_tbl ON left_tbl.id = right_tbl.id WHERE right_tbl.id IS NULL;
This example finds all rows in left_tbl
with an id
value that is not present in
right_tbl
(that is, all rows in
left_tbl
with no corresponding row in
right_tbl
). See
Section 8.2.1.8, “Left Join and Right Join Optimization”.
The USING(column_list)
clause names a list of columns that must exist in both
tables. If tables a and
b both contain columns
c1, c2, and
c3, the following join compares
corresponding columns from the two tables:
a LEFT JOIN b USING (c1, c2, c3)
The NATURAL [LEFT] JOIN
of two tables is
defined to be semantically equivalent to an INNER
JOIN
or a LEFT JOIN
with a
USING
clause that names all columns that
exist in both tables.
RIGHT JOIN
works analogously to
LEFT JOIN
. To keep code portable across
databases, it is recommended that you use LEFT
JOIN
instead of RIGHT JOIN
.
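For example, the following two queries return the same rows (only the column display order differs); the sketch uses generic tables t1 and t2 joined on a column a:

SELECT * FROM t1 RIGHT JOIN t2 ON t1.a = t2.a;
# the same rows, written with LEFT JOIN by swapping the table order:
SELECT * FROM t2 LEFT JOIN t1 ON t1.a = t2.a;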
The { OJ ... }
syntax shown in the join
syntax description exists only for compatibility with ODBC.
The curly braces in the syntax should be written literally;
they are not metasyntax as used elsewhere in syntax
descriptions.
SELECT left_tbl.* FROM { OJ left_tbl LEFT OUTER JOIN right_tbl ON left_tbl.id = right_tbl.id } WHERE right_tbl.id IS NULL;
You can use other types of joins within { OJ ...
}
, such as INNER JOIN
or
RIGHT OUTER JOIN
. This helps with
compatibility with some third-party applications, but is not
official ODBC syntax.
STRAIGHT_JOIN
is similar to
JOIN
, except that the left table is
always read before the right table. This can be used for
those (few) cases for which the join optimizer processes the
tables in a suboptimal order.
Some join examples:
SELECT * FROM table1, table2; SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id; SELECT * FROM table1 LEFT JOIN table2 ON table1.id = table2.id; SELECT * FROM table1 LEFT JOIN table2 USING (id); SELECT * FROM table1 LEFT JOIN table2 ON table1.id = table2.id LEFT JOIN table3 ON table2.id = table3.id;
Natural joins and joins with USING
, including
outer join variants, are processed according to the SQL:2003
standard:
Redundant columns of a NATURAL
join do
not appear. Consider this set of statements:
CREATE TABLE t1 (i INT, j INT); CREATE TABLE t2 (k INT, j INT); INSERT INTO t1 VALUES(1, 1); INSERT INTO t2 VALUES(1, 1); SELECT * FROM t1 NATURAL JOIN t2; SELECT * FROM t1 JOIN t2 USING (j);
In the first SELECT
statement, column j
appears in both
tables and thus becomes a join column, so, according to
standard SQL, it should appear only once in the output, not
twice. Similarly, in the second SELECT statement, column
j
is named in the
USING
clause and should appear only once
in the output, not twice.
Thus, the statements produce this output:
+------+------+------+
| j    | i    | k    |
+------+------+------+
|    1 |    1 |    1 |
+------+------+------+
+------+------+------+
| j    | i    | k    |
+------+------+------+
|    1 |    1 |    1 |
+------+------+------+
Redundant column elimination and column ordering occurs according to standard SQL, producing this display order:
First, coalesced common columns of the two joined tables, in the order in which they occur in the first table
Second, columns unique to the first table, in order in which they occur in that table
Third, columns unique to the second table, in order in which they occur in that table
The single result column that replaces two common columns is
defined using the coalesce operation. That is, for two
columns t1.a and t2.a, the
resulting single join column a is defined
as a = COALESCE(t1.a, t2.a), where:
COALESCE(x, y) = (CASE WHEN x IS NOT NULL THEN x ELSE y END)
If the join operation is any other join, the result columns of the join consist of the concatenation of all columns of the joined tables.
A consequence of the definition of coalesced columns is
that, for outer joins, the coalesced column contains the
value of the non-NULL
column if one of
the two columns is always NULL
. If
neither or both columns are NULL
, both
common columns have the same value, so it doesn't matter
which one is chosen as the value of the coalesced column. A
simple way to interpret this is to consider that a coalesced
column of an outer join is represented by the common column
of the inner table of a JOIN
. Suppose
that the tables t1(a, b)
and
t2(a, c)
have the following contents:
t1    t2
----  ----
1 x   2 z
2 y   3 w
Then, for this join, column a
contains
the values of t1.a
:
mysql> SELECT * FROM t1 NATURAL LEFT JOIN t2;
+------+------+------+
| a | b | c |
+------+------+------+
| 1 | x | NULL |
| 2 | y | z |
+------+------+------+
By contrast, for this join, column a
contains the values of t2.a
.
mysql> SELECT * FROM t1 NATURAL RIGHT JOIN t2;
+------+------+------+
| a | c | b |
+------+------+------+
| 2 | z | y |
| 3 | w | NULL |
+------+------+------+
Compare those results to the otherwise equivalent queries
with JOIN ... ON
:
mysql> SELECT * FROM t1 LEFT JOIN t2 ON (t1.a = t2.a);
+------+------+------+------+
| a | b | a | c |
+------+------+------+------+
| 1 | x | NULL | NULL |
| 2 | y | 2 | z |
+------+------+------+------+
mysql> SELECT * FROM t1 RIGHT JOIN t2 ON (t1.a = t2.a);
+------+------+------+------+
| a | b | a | c |
+------+------+------+------+
| 2 | y | 2 | z |
| NULL | NULL | 3 | w |
+------+------+------+------+
A USING
clause can be rewritten as an
ON
clause that compares corresponding
columns. However, although USING
and
ON
are similar, they are not quite the
same. Consider the following two queries:
a LEFT JOIN b USING (c1, c2, c3) a LEFT JOIN b ON a.c1 = b.c1 AND a.c2 = b.c2 AND a.c3 = b.c3
With respect to determining which rows satisfy the join condition, both joins are semantically identical.
With respect to determining which columns to display for
SELECT *
expansion, the two joins are not
semantically identical. The USING
join
selects the coalesced value of corresponding columns,
whereas the ON
join selects all columns
from all tables. For the USING
join,
SELECT *
selects these values:
COALESCE(a.c1, b.c1), COALESCE(a.c2, b.c2), COALESCE(a.c3, b.c3)
For the ON
join, SELECT
*
selects these values:
a.c1, a.c2, a.c3, b.c1, b.c2, b.c3
With an inner join, COALESCE(a.c1,
b.c1)
is the same as either
a.c1
or b.c1
because
both columns will have the same value. With an outer join
(such as LEFT JOIN
), one of the two
columns can be NULL
. That column is
omitted from the result.
An ON
clause can refer only to its
operands.
Example:
CREATE TABLE t1 (i1 INT); CREATE TABLE t2 (i2 INT); CREATE TABLE t3 (i3 INT); SELECT * FROM t1 JOIN t2 ON (i1 = i3) JOIN t3;
The statement fails with an Unknown column 'i3' in
'on clause'
error because i3
is
a column in t3
, which is not an operand
of the ON
clause. To enable the join to
be processed, rewrite the statement as follows:
SELECT * FROM t1 JOIN t2 JOIN t3 ON (i1 = i3);
JOIN
has higher precedence than the comma
operator (,
), so the join expression
t1, t2 JOIN t3
is interpreted as
(t1, (t2 JOIN t3))
, not as ((t1,
t2) JOIN t3)
. This affects statements that use an
ON
clause because that clause can refer
only to columns in the operands of the join, and the
precedence affects interpretation of what those operands
are.
Example:
CREATE TABLE t1 (i1 INT, j1 INT); CREATE TABLE t2 (i2 INT, j2 INT); CREATE TABLE t3 (i3 INT, j3 INT); INSERT INTO t1 VALUES(1, 1); INSERT INTO t2 VALUES(1, 1); INSERT INTO t3 VALUES(1, 1); SELECT * FROM t1, t2 JOIN t3 ON (t1.i1 = t3.i3);
The JOIN
takes precedence over the comma
operator, so the operands for the ON
clause are t2
and t3
.
Because t1.i1
is not a column in either
of the operands, the result is an Unknown column
't1.i1' in 'on clause'
error.
To enable the join to be processed, use either of these strategies:
Group the first two tables explicitly with parentheses
so that the operands for the ON
clause are (t1, t2)
and
t3
:
SELECT * FROM (t1, t2) JOIN t3 ON (t1.i1 = t3.i3);
Avoid the use of the comma operator and use
JOIN
instead:
SELECT * FROM t1 JOIN t2 JOIN t3 ON (t1.i1 = t3.i3);
The same precedence interpretation also applies to
statements that mix the comma operator with INNER
JOIN
, CROSS JOIN
, LEFT
JOIN
, and RIGHT JOIN
, all of
which have higher precedence than the comma operator.
A MySQL extension compared to the SQL:2003 standard is that
MySQL permits you to qualify the common (coalesced) columns
of NATURAL
or USING
joins, whereas the standard disallows that.
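For example, with the tables t1 and t2 created earlier in this section (each having a common column j), MySQL accepts the qualified references that standard SQL would reject:

SELECT t1.j, t2.j FROM t1 NATURAL JOIN t2;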
SELECT ...
UNION [ALL | DISTINCT] SELECT ...
[UNION [ALL | DISTINCT] SELECT ...]
UNION
is used to combine the
result from multiple SELECT
statements into a single result set.
The column names from the first
SELECT
statement are used as the
column names for the results returned. Selected columns listed
in corresponding positions of each
SELECT
statement should have the
same data type. (For example, the first column selected by the
first statement should have the same type as the first column
selected by the other statements.)
If the data types of corresponding
SELECT
columns do not match, the
types and lengths of the columns in the
UNION
result take into account
the values retrieved by all of the
SELECT
statements. For example,
consider the following:
mysql> SELECT REPEAT('a',1) UNION SELECT REPEAT('b',10);
+---------------+
| REPEAT('a',1) |
+---------------+
| a |
| bbbbbbbbbb |
+---------------+
The SELECT
statements are normal
select statements, but with the following restrictions:
Only the last SELECT
statement can use INTO OUTFILE
. (However,
the entire UNION
result is
written to the file.)
HIGH_PRIORITY
cannot be used with
SELECT
statements that are
part of a UNION
. If you
specify it for the first
SELECT
, it has no effect. If
you specify it for any subsequent
SELECT
statements, a syntax
error results.
The default behavior for UNION
is
that duplicate rows are removed from the result. The optional
DISTINCT
keyword has no effect other than the
default because it also specifies duplicate-row removal. With
the optional ALL
keyword, duplicate-row
removal does not occur and the result includes all matching rows
from all the SELECT
statements.
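A minimal illustration of the difference:

SELECT 1 UNION     SELECT 1;   # duplicate row removed: result is one row
SELECT 1 UNION ALL SELECT 1;   # duplicates kept: result is two rows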
You can mix UNION
ALL
and UNION
DISTINCT
in the same query. Mixed
UNION
types are treated such that
a DISTINCT
union overrides any
ALL
union to its left. A
DISTINCT
union can be produced explicitly by
using UNION
DISTINCT
or implicitly by using
UNION
with no following
DISTINCT
or ALL
keyword.
To apply ORDER BY
or LIMIT
to an individual SELECT
, place
the clause inside the parentheses that enclose the
SELECT
:
(SELECT a FROM t1 WHERE a=10 AND B=1 ORDER BY a LIMIT 10) UNION (SELECT a FROM t2 WHERE a=11 AND B=2 ORDER BY a LIMIT 10);
Previous versions of MySQL may permit such statements without parentheses. In MySQL 5.7, the requirement for parentheses is enforced.
Use of ORDER BY
for individual
SELECT
statements implies nothing
about the order in which the rows appear in the final result
because UNION
by default produces
an unordered set of rows. Therefore, the use of ORDER
BY
in this context is typically in conjunction with
LIMIT
, so that it is used to determine the
subset of the selected rows to retrieve for the
SELECT
, even though it does not
necessarily affect the order of those rows in the final
UNION
result. If ORDER
BY
appears without LIMIT
in a
SELECT
, it is optimized away
because it will have no effect anyway.
To use an ORDER BY
or
LIMIT
clause to sort or limit the entire
UNION
result, parenthesize the
individual SELECT
statements and
place the ORDER BY
or
LIMIT
after the last one. The following
example uses both clauses:
(SELECT a FROM t1 WHERE a=10 AND B=1) UNION (SELECT a FROM t2 WHERE a=11 AND B=2) ORDER BY a LIMIT 10;
A statement without parentheses is equivalent to one parenthesized as just shown.
This kind of ORDER BY
cannot use column
references that include a table name (that is, names in
tbl_name
.col_name
format). Instead, provide a column alias in the first
SELECT
statement and refer to the
alias in the ORDER BY
. (Alternatively, refer
to the column in the ORDER BY
using its
column position. However, use of column positions is
deprecated.)
Also, if a column to be sorted is aliased, the ORDER
BY
clause must refer to the
alias, not the column name. The first of the following
statements will work, but the second will fail with an
Unknown column 'a' in 'order clause'
error:
(SELECT a AS b FROM t) UNION (SELECT ...) ORDER BY b; (SELECT a AS b FROM t) UNION (SELECT ...) ORDER BY a;
To cause rows in a UNION
result
to consist of the sets of rows retrieved by each
SELECT
one after the other,
select an additional column in each
SELECT
to use as a sort column
and add an ORDER BY
following the last
SELECT
:
(SELECT 1 AS sort_col, col1a, col1b, ... FROM t1) UNION (SELECT 2, col2a, col2b, ... FROM t2) ORDER BY sort_col;
To additionally maintain sort order within individual
SELECT
results, add a secondary
column to the ORDER BY
clause:
(SELECT 1 AS sort_col, col1a, col1b, ... FROM t1) UNION (SELECT 2, col2a, col2b, ... FROM t2) ORDER BY sort_col, col1a;
Use of an additional column also enables you to determine which
SELECT
each row comes from. Extra
columns can provide other identifying information as well, such
as a string that indicates a table name.
As of MySQL 5.7.5, UNION
queries
with an aggregate function in an ORDER BY
clause are rejected with an
ER_AGGREGATE_ORDER_FOR_UNION
error. Example:
SELECT 1 AS foo UNION SELECT 2 ORDER BY MAX(1);
A subquery is a SELECT
statement
within another statement.
All subquery forms and operations that the SQL standard requires are supported, as well as a few features that are MySQL-specific.
Here is an example of a subquery:
SELECT * FROM t1 WHERE column1 = (SELECT column1 FROM t2);
In this example, SELECT * FROM t1 ...
is the
outer query (or outer
statement), and (SELECT column1 FROM
t2)
is the subquery. We say that
the subquery is nested within the outer
query, and in fact it is possible to nest subqueries within other
subqueries, to a considerable depth. A subquery must always appear
within parentheses.
The main advantages of subqueries are:
They allow queries that are structured so that it is possible to isolate each part of a statement.
They provide alternative ways to perform operations that would otherwise require complex joins and unions.
Many people find subqueries more readable than complex joins or unions. Indeed, it was the innovation of subqueries that gave people the original idea of calling the early SQL “Structured Query Language.”
Here is an example statement that shows the major points about subquery syntax as specified by the SQL standard and supported in MySQL:
DELETE FROM t1
WHERE s11 > ANY
 (SELECT COUNT(*) /* no hint */ FROM t2
  WHERE NOT EXISTS
   (SELECT * FROM t3
    WHERE ROW(5*t2.s1,77)=
     (SELECT 50,11*s1 FROM t4 UNION SELECT 50,77 FROM
      (SELECT * FROM t5) AS t5)));
A subquery can return a scalar (a single value), a single row, a single column, or a table (one or more rows of one or more columns). These are called scalar, column, row, and table subqueries. Subqueries that return a particular kind of result often can be used only in certain contexts, as described in the following sections.
There are few restrictions on the type of statements in which
subqueries can be used. A subquery can contain many of the
keywords or clauses that an ordinary
SELECT
can contain:
DISTINCT
, GROUP BY
,
ORDER BY
, LIMIT
, joins,
index hints, UNION
constructs,
comments, functions, and so on.
A subquery's outer statement can be any one of:
SELECT
,
INSERT
,
UPDATE
,
DELETE
,
SET
, or
DO
.
In MySQL, you cannot modify a table and select from the same table
in a subquery. This applies to statements such as
DELETE
,
INSERT
,
REPLACE
,
UPDATE
, and (because subqueries can
be used in the SET
clause)
LOAD DATA
INFILE
.
For information about how the optimizer handles subqueries, see Section 8.2.2, “Optimizing Subqueries, Derived Tables, and View References”. For a discussion of restrictions on subquery use, including performance issues for certain forms of subquery syntax, see Section C.4, “Restrictions on Subqueries”.
In its simplest form, a subquery is a scalar subquery that
returns a single value. A scalar subquery is a simple operand,
and you can use it almost anywhere a single column value or
literal is legal, and you can expect it to have those
characteristics that all operands have: a data type, a length,
an indication that it can be NULL
, and so on.
For example:
CREATE TABLE t1 (s1 INT, s2 CHAR(5) NOT NULL); INSERT INTO t1 VALUES(100, 'abcde'); SELECT (SELECT s2 FROM t1);
The subquery in this SELECT
returns a single value ('abcde'
) that has a
data type of CHAR
, a length of 5,
a character set and collation equal to the defaults in effect at
CREATE TABLE
time, and an
indication that the value in the column can be
NULL
. Nullability of the value selected by a
scalar subquery is not copied because if the subquery result is
empty, the result is NULL
. For the subquery
just shown, if t1
were empty, the result
would be NULL
even though
s2
is NOT NULL
.
There are a few contexts in which a scalar subquery cannot be
used. If a statement permits only a literal value, you cannot
use a subquery. For example, LIMIT
requires
literal integer arguments, and
LOAD DATA
INFILE
requires a literal string file name. You cannot
use subqueries to supply these values.
When you see examples in the following sections that contain the
rather spartan construct (SELECT column1 FROM
t1)
, imagine that your own code contains much more
diverse and complex constructions.
Suppose that we make two tables:
CREATE TABLE t1 (s1 INT); INSERT INTO t1 VALUES (1); CREATE TABLE t2 (s1 INT); INSERT INTO t2 VALUES (2);
Then perform a SELECT
:
SELECT (SELECT s1 FROM t2) FROM t1;
The result is 2
because there is a row in
t2
containing a column s1
that has a value of 2
.
A scalar subquery can be part of an expression, but remember the parentheses, even if the subquery is an operand that provides an argument for a function. For example:
SELECT UPPER((SELECT s1 FROM t1)) FROM t2;
The most common use of a subquery is in the form:
non_subquery_operand comparison_operator (subquery)
Where comparison_operator
is one of
these operators:
= > < >= <= <> != <=>
For example:
... WHERE 'a' = (SELECT column1 FROM t1)
MySQL also permits this construct:
non_subquery_operand LIKE (subquery)
At one time the only legal place for a subquery was on the right side of a comparison, and you might still find some old DBMSs that insist on this.
Here is an example of a common-form subquery comparison that you
cannot do with a join. It finds all the rows in table
t1
for which the column1
value is equal to a maximum value in table
t2
:
SELECT * FROM t1 WHERE column1 = (SELECT MAX(column2) FROM t2);
Here is another example, which again is impossible with a join
because it involves aggregating for one of the tables. It finds
all rows in table t1
containing a value that
occurs twice in a given column:
SELECT * FROM t1 AS t WHERE 2 = (SELECT COUNT(*) FROM t1 WHERE t1.id = t.id);
For a comparison of the subquery to a scalar, the subquery must return a scalar. For a comparison of the subquery to a row constructor, the subquery must be a row subquery that returns a row with the same number of values as the row constructor. See Section 13.2.10.5, “Row Subqueries”.
Syntax:
operand comparison_operator ANY (subquery)
operand IN (subquery)
operand comparison_operator SOME (subquery)
Where comparison_operator
is one of
these operators:
= > < >= <= <> !=
The ANY
keyword, which must follow a
comparison operator, means “return TRUE
if the comparison is TRUE
for
ANY
of the values in the column that the
subquery returns.” For example:
SELECT s1 FROM t1 WHERE s1 > ANY (SELECT s1 FROM t2);
Suppose that there is a row in table t1
containing (10)
. The expression is
TRUE
if table t2
contains
(21,14,7)
because there is a value
7
in t2
that is less than
10
. The expression is
FALSE
if table t2
contains
(20,10)
, or if table t2
is
empty. The expression is unknown (that is,
NULL
) if table t2
contains
(NULL,NULL,NULL)
.
When used with a subquery, the word IN
is an
alias for = ANY
. Thus, these two statements
are the same:
SELECT s1 FROM t1 WHERE s1 = ANY (SELECT s1 FROM t2); SELECT s1 FROM t1 WHERE s1 IN (SELECT s1 FROM t2);
IN
and = ANY
are not
synonyms when used with an expression list.
IN
can take an expression list, but
= ANY
cannot. See