Operation Reference

This file provides documentation on Alembic migration directives.

The directives here are used within user-defined migration files, within the upgrade() and downgrade() functions, as well as any functions further invoked by those.

All directives exist as methods on a class called Operations. When migration scripts are run, this object is made available to the script via the alembic.op data member, which is a proxy to an actual instance of Operations. Currently, alembic.op is a real Python module, populated with individual proxies for each method on Operations, so that symbols can be imported safely from the alembic.op namespace.
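As an illustration of the proxying described above, both of the following import styles are equivalent within a migration script (assuming Alembic is installed):

```python
# Typical usage: import the proxy module itself.
from alembic import op

# Because alembic.op is a real module populated with proxies,
# individual directives can also be imported directly.
from alembic.op import add_column, create_table
```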

The Operations system is also fully extensible. See Operation Plugins for details on this.

A key design philosophy of the Operations methods is that, to the greatest degree possible, they internally generate the appropriate SQLAlchemy metadata, typically involving Table and Constraint objects. This is so that migration instructions can be given in terms of just the string names and/or flags involved. The exceptions to this rule include the add_column() and create_table() directives, which require full Column objects, though the table metadata is still generated here.

The functions here all require that a MigrationContext has been configured within the env.py script first, typically via EnvironmentContext.configure(). Under normal circumstances they are called from an actual migration script, which itself would be invoked by the EnvironmentContext.run_migrations() method.

class alembic.operations.Operations(migration_context, impl=None)

Define high level migration operations.

Each operation corresponds to some schema migration operation, executed against a particular MigrationContext which in turn represents connectivity to a database, or a file output stream.

While Operations is normally configured as part of the EnvironmentContext.run_migrations() method called from an env.py script, a standalone Operations instance can be made for use cases external to regular Alembic migrations by passing in a MigrationContext:

from alembic.migration import MigrationContext
from alembic.operations import Operations

conn = myengine.connect()
ctx = MigrationContext.configure(conn)
op = Operations(ctx)

op.alter_column("t", "c", nullable=True)

Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.

Construct a new Operations object.

Parameters:
  • migration_context – a MigrationContext instance.
add_column(table_name, column, schema=None)

Issue an “add column” instruction using the current migration context.

e.g.:

from alembic import op
from sqlalchemy import Column, String

op.add_column('organization',
    Column('name', String())
)

The provided Column object can also specify a ForeignKey, referencing a remote table name. Alembic will automatically generate a stub “referenced” table and emit a second ALTER statement in order to add the constraint separately:

from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey

op.add_column('organization',
    Column('account_id', INTEGER, ForeignKey('accounts.id'))
)

Note that this statement uses the Column construct as is from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, TIMESTAMP, func

# specify "DEFAULT NOW" along with the column add
op.add_column('account',
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
Parameters:
  • table_name – String name of the parent table.
  • column – a sqlalchemy.schema.Column object representing the new column.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

alter_column(table_name, column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, schema=None, **kw)

Issue an “alter column” instruction using the current migration context.

Generally, only that aspect of the column which is being changed, i.e. name, type, nullability, default, needs to be specified. Multiple changes can also be specified at once and the backend should “do the right thing”, emitting each change either separately or together as the backend allows.

MySQL has special requirements here, since MySQL cannot ALTER a column without a full specification. When producing MySQL-compatible migration files, it is recommended that the existing_type, existing_server_default, and existing_nullable parameters be present, if not being altered.

Type changes which are against the SQLAlchemy “schema” types Boolean and Enum may also add or drop constraints which accompany those types on backends that don’t support them natively. The existing_type argument is used in this case to identify and remove a previous constraint that was bound to the type object.
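As a sketch of the MySQL guidance above, a migration that only changes nullability would still restate the existing type, default, and nullability; the table and column names here are hypothetical:

```python
from alembic import op
from sqlalchemy import String

# Only nullability actually changes, but MySQL requires the full
# column specification, so the existing attributes are restated.
op.alter_column(
    "account", "name",
    nullable=False,
    existing_type=String(50),
    existing_server_default=None,
    existing_nullable=True,
)
```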

Parameters:
  • table_name – string name of the target table.
  • column_name – string name of the target column, as it exists before the operation begins.
  • nullable – Optional; specify True or False to alter the column’s nullability.
  • server_default – Optional; specify a string SQL expression, text(), or DefaultClause to indicate an alteration to the column’s default value. Set to None to have the default removed.
  • new_column_name – Optional; specify a string name here to indicate the new name within a column rename operation.
  • type_ – Optional; a TypeEngine type object to specify a change to the column’s type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is also generated.
  • autoincrement – set the AUTO_INCREMENT flag of the column; currently understood by the MySQL dialect.
  • existing_type – Optional; a TypeEngine type object to specify the previous type. This is required for all MySQL column alter operations that don’t otherwise specify a new type, as well as for when nullability is being changed on a SQL Server column. It is also used if the type is a so-called SQLAlchemy “schema” type which may define a constraint (i.e. Boolean, Enum), so that the constraint can be dropped.
  • existing_server_default – Optional; The existing default value of the column. Required on MySQL if an existing default is not being changed; else MySQL removes the default.
  • existing_nullable – Optional; the existing nullability of the column. Required on MySQL if the existing nullability is not being changed; else MySQL sets this to NULL.
  • existing_autoincrement – Optional; the existing autoincrement of the column. Used for MySQL’s system of altering a column that specifies AUTO_INCREMENT.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

  • postgresql_using

    String argument which will indicate a SQL expression to render within the Postgresql-specific USING clause within ALTER COLUMN. This string is taken directly as raw SQL which must explicitly include any necessary quoting or escaping of tokens within the expression.

    New in version 0.8.8.

batch_alter_table(*args, **kwds)

Invoke a series of per-table migrations in batch.

Batch mode allows a series of operations specific to a table to be syntactically grouped together, and allows for alternate modes of table migration, in particular the “recreate” style of migration required by SQLite.

“recreate” style is as follows:

  1. A new table is created with the new specification, based on the migration directives within the batch, using a temporary name.
  2. The data is copied from the existing table to the new table.
  3. The existing table is dropped.
  4. The new table is renamed to the existing table name.

The directive by default will only use “recreate” style on the SQLite backend, and only if directives are present which require this form, e.g. anything other than add_column(). The batch operation on other backends will proceed using standard ALTER TABLE operations.

The method is used as a context manager, which returns an instance of BatchOperations; this object is the same as Operations except that table names and schema names are omitted. E.g.:

with op.batch_alter_table("some_table") as batch_op:
    batch_op.add_column(Column('foo', Integer))
    batch_op.drop_column('bar')

The operations within the context manager are invoked at once when the context is ended. When run against SQLite, if the migrations include operations not supported by SQLite’s ALTER TABLE, the entire table will be copied to a new one with the new specification, moving all data across as well.

The copy operation by default uses reflection to retrieve the current structure of the table, and therefore batch_alter_table() in this mode requires that the migration is run in “online” mode. The copy_from parameter may be passed which refers to an existing Table object, which will bypass this reflection step.
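For instance, a batch migration can bypass reflection by handing copy_from a pre-built Table describing the table as it exists before the migration; this table definition is illustrative:

```python
from alembic import op
from sqlalchemy import MetaData, Table, Column, Integer, String

# Describe the table as it exists *before* this migration runs,
# so no reflection against a live database is required.
some_table = Table(
    "some_table", MetaData(),
    Column("id", Integer, primary_key=True),
    Column("bar", String(50)),
)

with op.batch_alter_table("some_table", copy_from=some_table) as batch_op:
    batch_op.add_column(Column("foo", Integer))
    batch_op.drop_column("bar")
```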

Note

The table copy operation will currently not copy CHECK constraints, and may not copy UNIQUE constraints that are unnamed, as is possible on SQLite. See the section Dealing with Constraints for workarounds.

Parameters:
  • table_name – name of table
  • schema – optional schema name.
  • recreate – under what circumstances the table should be recreated. At its default of "auto", the SQLite dialect will recreate the table if any operations other than add_column(), create_index(), or drop_index() are present. Other options include "always" and "never".
  • copy_from

    optional Table object that will act as the structure of the table being copied. If omitted, table reflection is used to retrieve the structure of the table.

    New in version 0.7.6: Fully implemented the copy_from parameter.

  • reflect_args

    a sequence of additional positional arguments that will be applied to the table structure being reflected / copied; this may be used to pass column and constraint overrides to the table that will be reflected, in lieu of passing the whole Table using copy_from.

    New in version 0.7.1.

  • reflect_kwargs

    a dictionary of additional keyword arguments that will be applied to the table structure being copied; this may be used to pass additional table and reflection options to the table that will be reflected, in lieu of passing the whole Table using copy_from.

    New in version 0.7.1.

  • table_args – a sequence of additional positional arguments that will be applied to the new Table when created, in addition to those copied from the source table. This may be used to provide additional constraints such as CHECK constraints that may not be reflected.
  • table_kwargs – a dictionary of additional keyword arguments that will be applied to the new Table when created, in addition to those copied from the source table. This may be used to provide for additional table options that may not be reflected.

New in version 0.7.0.

Parameters:
  • naming_convention

    a naming convention dictionary of the form described at Integration of Naming Conventions into Operations, Autogenerate which will be applied to the MetaData during the reflection process. This is typically required if one wants to drop SQLite constraints, as these constraints will not have names when reflected on this backend. Requires SQLAlchemy 0.9.4 or greater.

New in version 0.7.1.

Note

batch mode requires SQLAlchemy 0.8 or above.

bulk_insert(table, rows, multiinsert=True)

Issue a “bulk insert” operation using the current migration context.

This provides a means of representing an INSERT of multiple rows which works equally well in the context of executing on a live connection as well as that of generating a SQL script. In the case of a SQL script, the values are rendered inline into the statement.

e.g.:

from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date

# Create an ad-hoc table to use for the insert statement.
accounts_table = table('account',
    column('id', Integer),
    column('name', String),
    column('create_date', Date)
)

op.bulk_insert(accounts_table,
    [
        {'id':1, 'name':'John Smith',
                'create_date':date(2010, 10, 5)},
        {'id':2, 'name':'Ed Williams',
                'create_date':date(2007, 5, 27)},
        {'id':3, 'name':'Wendy Jones',
                'create_date':date(2008, 8, 15)},
    ]
)

When using --sql mode, some datatypes may not render inline automatically, such as dates and other special types. When this issue is present, Operations.inline_literal() may be used:

op.bulk_insert(accounts_table,
    [
        {'id':1, 'name':'John Smith',
                'create_date':op.inline_literal("2010-10-05")},
        {'id':2, 'name':'Ed Williams',
                'create_date':op.inline_literal("2007-05-27")},
        {'id':3, 'name':'Wendy Jones',
                'create_date':op.inline_literal("2008-08-15")},
    ],
    multiinsert=False
)

When using Operations.inline_literal() in conjunction with Operations.bulk_insert(), in order for the statement to work in “online” (e.g. non --sql) mode, the multiinsert flag should be set to False, which will have the effect of individual INSERT statements being emitted to the database, each with a distinct VALUES clause, so that the “inline” values can still be rendered, rather than attempting to pass the values as bound parameters.

New in version 0.6.4: Operations.inline_literal() can now be used with Operations.bulk_insert(), and the multiinsert flag has been added to assist in this usage when running in “online” mode.

Parameters:
  • table – a table object which represents the target of the INSERT.
  • rows – a list of dictionaries indicating rows.
  • multiinsert

    when at its default of True and --sql mode is not enabled, the INSERT statement will be executed using “executemany()” style, where all elements in the list of dictionaries are passed as bound parameters in a single list. Setting this to False results in individual INSERT statements being emitted per parameter set, and is needed in those cases where non-literal values are present in the parameter sets.

    New in version 0.6.4.

create_check_constraint(constraint_name, table_name, condition, schema=None, **kw)

Issue a “create check constraint” instruction using the current migration context.

e.g.:

from alembic import op
from sqlalchemy.sql import column, func

op.create_check_constraint(
    "ck_user_name_len",
    "user",
    func.len(column('name')) > 5
)

CHECK constraints are usually against a SQL expression, so ad-hoc table metadata is usually needed. The function will convert the given arguments into a sqlalchemy.schema.CheckConstraint bound to an anonymous table in order to emit the CREATE statement.

Parameters:
  • name – Name of the check constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
  • table_name – String name of the source table.
  • condition – SQL expression that’s the condition of the constraint. Can be a string or SQLAlchemy expression language structure.
  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
  • source -> table_name
create_exclude_constraint(constraint_name, table_name, *elements, **kw)

Issue an alter to create an EXCLUDE constraint using the current migration context.

Note

This method is Postgresql specific, and additionally requires at least SQLAlchemy 1.0.

e.g.:

from alembic import op

op.create_exclude_constraint(
    "user_excl",
    "user",
    ("period", '&&'),
    ("group", '='),
    where=("group != 'some group'")
)

Note that the expressions work the same way as that of the ExcludeConstraint object itself; if plain strings are passed, quoting rules must be applied manually.

Parameters:
  • name – Name of the constraint.
  • table_name – String name of the source table.
  • elements – exclude conditions.
  • where – SQL expression or SQL string with optional WHERE clause.
  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
  • schema – Optional schema name to operate within.

New in version 0.9.0.

create_foreign_key(constraint_name, source_table, referent_table, local_cols, remote_cols, onupdate=None, ondelete=None, deferrable=None, initially=None, match=None, source_schema=None, referent_schema=None, **dialect_kw)

Issue a “create foreign key” instruction using the current migration context.

e.g.:

from alembic import op
op.create_foreign_key(
    "fk_user_address", "address", "user",
    ["user_id"], ["id"])

This internally generates a Table object containing the necessary columns, then generates a new ForeignKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.
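The optional keyword arguments follow the same style; for example, a cascading constraint spanning two schemas might look like the following (the schema, table, and column names are hypothetical):

```python
from alembic import op

# FOREIGN KEY with ON DELETE CASCADE, across two schemas.
op.create_foreign_key(
    "fk_order_customer", "order", "customer",
    ["customer_id"], ["id"],
    ondelete="CASCADE",
    source_schema="sales",
    referent_schema="crm",
)
```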

Parameters:
  • name – Name of the foreign key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
  • source_table – String name of the source table.
  • referent_table – String name of the destination table.
  • local_cols – a list of string column names in the source table.
  • remote_cols – a list of string column names in the remote table.
  • onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.
  • ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.
  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • source_schema – Optional schema name of the source table.
  • referent_schema – Optional schema name of the destination table.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
  • source -> source_table
  • referent -> referent_table
create_index(index_name, table_name, columns, schema=None, unique=False, **kw)

Issue a “create index” instruction using the current migration context.

e.g.:

from alembic import op
op.create_index('ik_test', 't1', ['foo', 'bar'])

Functional indexes can be produced by using the sqlalchemy.sql.expression.text() construct:

from alembic import op
from sqlalchemy import text
op.create_index('ik_test', 't1', [text('lower(foo)')])

New in version 0.6.7: support for making use of the text() construct in conjunction with Operations.create_index() in order to produce functional expressions within CREATE INDEX.

Parameters:
  • index_name – name of the index.
  • table_name – name of the owning table.
  • columns – a list consisting of string column names and/or text() constructs.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

  • unique – If True, create a unique index.
  • quote – Force quoting of the index’s name on or off, corresponding to True or False. When left at its default of None, the identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.
  • **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> index_name
create_primary_key(constraint_name, table_name, columns, schema=None)

Issue a “create primary key” instruction using the current migration context.

e.g.:

from alembic import op
op.create_primary_key(
    "pk_my_table", "my_table",
    ["id", "version"]
)

This internally generates a Table object containing the necessary columns, then generates a new PrimaryKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters:
  • name – Name of the primary key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
  • table_name – String name of the target table.
  • columns – a list of string column names to be applied to the primary key constraint.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
  • cols -> columns
create_table(table_name, *columns, **kw)

Issue a “create table” instruction using the current migration context.

This directive receives an argument list similar to that of the traditional sqlalchemy.schema.Table construct, but without the metadata:

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

Note that create_table() accepts Column constructs directly from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, TIMESTAMP, func

# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table('account',
    Column('id', INTEGER, primary_key=True),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

The function also returns a newly created Table object, corresponding to the table specification given, which is suitable for immediate SQL operations, in particular Operations.bulk_insert():

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

account_table = op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

op.bulk_insert(
    account_table,
    [
        {"name": "A1", "description": "account 1"},
        {"name": "A2", "description": "account 2"},
    ]
)

New in version 0.7.0.

Parameters:
  • table_name – Name of the table
  • *columns – collection of Column objects within the table, as well as optional Constraint objects and Index objects.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

  • **kw – Other keyword arguments are passed to the underlying sqlalchemy.schema.Table object created for the command.
Returns:

the Table object corresponding to the parameters given.

New in version 0.7.0: - the Table object is returned.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> table_name
create_unique_constraint(constraint_name, table_name, columns, schema=None, **kw)

Issue a “create unique constraint” instruction using the current migration context.

e.g.:

from alembic import op
op.create_unique_constraint("uq_user_name", "user", ["name"])

This internally generates a Table object containing the necessary columns, then generates a new UniqueConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters:
  • name – Name of the unique constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
  • table_name – String name of the source table.
  • columns – a list of string column names in the source table.
  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
  • source -> table_name
  • local_cols -> columns
drop_column(table_name, column_name, schema=None, **kw)

Issue a “drop column” instruction using the current migration context.

e.g.:

op.drop_column('organization', 'account_id')
Parameters:
  • table_name – name of table
  • column_name – name of column
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

  • mssql_drop_check – Optional boolean. When True, on Microsoft SQL Server only, first drop the CHECK constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.check_constraints, then exec’s a separate DROP CONSTRAINT for that constraint.
  • mssql_drop_default – Optional boolean. When True, on Microsoft SQL Server only, first drop the DEFAULT constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.default_constraints, then exec’s a separate DROP CONSTRAINT for that default.
  • mssql_drop_foreign_key

    Optional boolean. When True, on Microsoft SQL Server only, first drop a single FOREIGN KEY constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.foreign_keys/sys.foreign_key_columns, then exec’s a separate DROP CONSTRAINT for that default. Only works if the column has exactly one FK constraint which refers to it, at the moment.

    New in version 0.6.2.

drop_constraint(constraint_name, table_name, type_=None, schema=None)

Drop a constraint of the given name, typically via DROP CONSTRAINT.
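As a sketch, dropping a unique constraint might look like the following; the type_ argument is only required on the MySQL backend:

```python
from alembic import op

# type_ is optional on most backends, but required by MySQL.
op.drop_constraint("uq_user_name", "user", type_="unique")
```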

Parameters:
  • constraint_name – name of the constraint.
  • table_name – table name.
  • type_ – optional, required on MySQL. can be ‘foreignkey’, ‘primary’, ‘unique’, or ‘check’.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
drop_index(index_name, table_name=None, schema=None)

Issue a “drop index” instruction using the current migration context.

e.g.:

op.drop_index("accounts")
Parameters:
  • index_name – name of the index.
  • table_name – name of the owning table. Some backends such as Microsoft SQL Server require this.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> index_name
drop_table(table_name, schema=None, **kw)

Issue a “drop table” instruction using the current migration context.

e.g.:

op.drop_table("accounts")
Parameters:
  • table_name – Name of the table
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

  • **kw – Other keyword arguments are passed to the underlying sqlalchemy.schema.Table object created for the command.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> table_name
execute(sqltext, execution_options=None)

Execute the given SQL using the current migration context.

In a SQL script context, the statement is emitted directly to the output stream. There is no return result, however, as this function is oriented towards generating a change script that can run in “offline” mode. For full interaction with a connected database, use the “bind” available from the context:

from alembic import op
connection = op.get_bind()

Also note that any parameterized statement here will not work in offline mode - INSERT, UPDATE and DELETE statements which refer to literal values would need to render inline expressions. For simple use cases, the inline_literal() function can be used for rudimentary quoting of string values. For “bulk” inserts, consider using bulk_insert().

For example, to emit an UPDATE statement which is equally compatible with both online and offline mode:

from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op

account = table('account',
    column('name', String)
)
op.execute(
    account.update().\
        where(account.c.name==op.inline_literal('account 1')).\
        values({'name':op.inline_literal('account 2')})
        )

Note above we also used the SQLAlchemy sqlalchemy.sql.expression.table() and sqlalchemy.sql.expression.column() constructs to make a brief, ad-hoc table construct just for our UPDATE statement. A full Table construct of course works perfectly fine as well, though note it’s a recommended practice to at least ensure the definition of a table is self-contained within the migration script, rather than imported from a module that may break compatibility with older migrations.

Parameters:
  • sqltext – Any legal SQLAlchemy expression, including a plain string, a sqlalchemy.sql.expression.text() construct, or an insert(), update(), or delete() construct.
  • execution_options – Optional dictionary of execution options, will be passed to sqlalchemy.engine.Connection.execution_options().
f(name)

Indicate a string name that has already had a naming convention applied to it.

This feature combines with the SQLAlchemy naming_convention feature to disambiguate constraint names that have already had naming conventions applied to them, versus those that have not. This is necessary in the case that the "%(constraint_name)s" token is used within a naming convention, so that it can be identified that this particular name should remain fixed.

If Operations.f() is used on a constraint name, the naming convention will not take effect:

op.add_column('t', Column('x', Boolean(name=op.f('ck_bool_t_x'))))

Above, the CHECK constraint generated will have the name ck_bool_t_x regardless of whether or not a naming convention is in use.

Alternatively, if a naming convention is in use and Operations.f() is not used, names will be converted according to the convention. If the target_metadata contains the naming convention {"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}, then the output of the following:

op.add_column('t', Column('x', Boolean(name='x')))

will be:

CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))

The function is rendered in the output of autogenerate when a particular constraint name is already converted, for SQLAlchemy version 0.9.4 and greater only. Even though naming_convention was introduced in 0.9.2, the string disambiguation service is new as of 0.9.4.

New in version 0.6.4.

get_bind()

Return the current ‘bind’.

Under normal circumstances, this is the Connection currently being used to emit SQL to the database.

In a SQL script context, this value is None.

get_context()

Return the MigrationContext object that’s currently in use.

classmethod implementation_for(op_cls)

Register an implementation for a given MigrateOperation.

This is part of the operation extensibility API.

See also

Operation Plugins - example of use

inline_literal(value, type_=None)

Produce an ‘inline literal’ expression, suitable for using in an INSERT, UPDATE, or DELETE statement.

When using Alembic in “offline” mode, CRUD operations aren’t compatible with SQLAlchemy’s default behavior of converting literal values into bound parameters, which are passed separately into the execute() method of the DBAPI cursor; an offline SQL script needs these values rendered inline instead. While inline literal values are an enormous security hole in an application that handles untrusted input, a schema migration does not run in that context, so literals are safe to render inline, with the caveat that advanced types like dates may not be supported directly by SQLAlchemy.

See execute() for an example usage of inline_literal().

The environment can also be configured to attempt to render “literal” values inline automatically, for those simple types that are supported by the dialect; see EnvironmentContext.configure.literal_binds for this more recently added feature.

Parameters:
  • value – The value to render. Strings, integers, and simple numerics should be supported. Other types like boolean, dates, etc. may or may not be supported yet by various backends.
  • type_ – optional - a sqlalchemy.types.TypeEngine subclass stating the type of this value. In SQLAlchemy expressions, this is usually derived automatically from the Python type of the value itself, as well as based on the context in which the value is used.
invoke(operation)

Given a MigrateOperation, invoke it in terms of this Operations instance.

New in version 0.8.0.

classmethod register_operation(name, sourcename=None)

Register a new operation for this class.

This method is normally used to add new operations to the Operations class, and possibly the BatchOperations class as well. All Alembic migration operations are implemented via this system; however, the system is also available as a public API to facilitate adding custom operations.

New in version 0.8.0.

rename_table(old_table_name, new_table_name, schema=None)

Emit an ALTER TABLE to rename a table.

Parameters:
  • old_table_name – old name.
  • new_table_name – new name.
  • schema

    Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

    New in version 0.7.0: ‘schema’ can now accept a quoted_name construct.

class alembic.operations.BatchOperations(migration_context, impl=None)

Modifies the interface of Operations for batch mode.

This basically omits the table_name and schema parameters from associated methods, as these are a given when running under batch mode.

Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.

Construct a new Operations

Parameters:migration_context – a MigrationContext instance.
add_column(column)

Issue an “add column” instruction using the current batch migration context.

alter_column(column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, **kw)

Issue an “alter column” instruction using the current batch migration context.

create_check_constraint(constraint_name, condition, **kw)

Issue a “create check constraint” instruction using the current batch migration context.

The batch form of this call omits the source and schema arguments from the call.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
create_exclude_constraint(constraint_name, *elements, **kw)

Issue a “create exclude constraint” instruction using the current batch migration context.

Note

This method is Postgresql specific, and additionally requires at least SQLAlchemy 1.0.

New in version 0.9.0.

create_foreign_key(constraint_name, referent_table, local_cols, remote_cols, referent_schema=None, onupdate=None, ondelete=None, deferrable=None, initially=None, match=None, **dialect_kw)

Issue a “create foreign key” instruction using the current batch migration context.

The batch form of this call omits the source and source_schema arguments from the call.

e.g.:

with batch_alter_table("address") as batch_op:
    batch_op.create_foreign_key(
                "fk_user_address",
                "user", ["user_id"], ["id"])

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
  • referent -> referent_table
create_index(index_name, columns, **kw)

Issue a “create index” instruction using the current batch migration context.

create_primary_key(constraint_name, columns)

Issue a “create primary key” instruction using the current batch migration context.

The batch form of this call omits the table_name and schema arguments from the call.

create_unique_constraint(constraint_name, columns, **kw)

Issue a “create unique constraint” instruction using the current batch migration context.

The batch form of this call omits the source and schema arguments from the call.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
drop_column(column_name, **kw)

Issue a “drop column” instruction using the current batch migration context.

drop_constraint(constraint_name, type_=None)

Issue a “drop constraint” instruction using the current batch migration context.

The batch form of this call omits the table_name and schema arguments from the call.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> constraint_name
drop_index(index_name, **kw)

Issue a “drop index” instruction using the current batch migration context.

Changed in version 0.8.0: The following positional argument names have been changed:

  • name -> index_name
class alembic.operations.MigrateOperation

Base class for migration command and organization objects.

This system is part of the operation extensibility API.

New in version 0.8.0.

info

A dictionary that may be used to store arbitrary information along with this MigrateOperation object.