The cursor class
class cursor
Allows Python code to execute PostgreSQL commands in a database session. Cursors are created by the connection.cursor() method: they are bound to the connection for their entire lifetime and all commands are executed in the context of the database session wrapped by the connection.
Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible to the other cursors. Cursors created from different connections may or may not be isolated, depending on the connections' isolation level. See also the rollback() and commit() methods.
Cursors are not thread safe: a multithreaded application can create many cursors from the same connection but should use each cursor from a single thread. See Thread and process safety for details.
Cursors can be used as context managers: leaving the context will close the cursor.
with conn.cursor() as curs:
    curs.execute(SQL)

# the cursor is now closed
attribute description
Read-only attribute describing the result of a query. It is a sequence of Column instances, each one describing one result column in order. The attribute is None for operations that do not return rows or if the cursor has not had an operation invoked via the |execute*|_ methods yet.
For compatibility with the DB-API, every object can be unpacked as a 7-item sequence: the attributes returned this way are the following. For further details and other attributes available check the Column documentation.
- name: the name of the column returned.
- type_code: the PostgreSQL OID of the column.
- display_size: the actual length of the column in bytes.
- internal_size: the size in bytes of the column associated to this column on the server.
- precision: total number of significant digits in columns of type NUMERIC. None for other types.
- scale: count of decimal digits in the fractional part in columns of type NUMERIC. None for other types.
- null_ok: always None as it is not easy to retrieve from the libpq.
Changed in version 2.4: if possible, column descriptions are named tuples instead of regular tuples.
Changed in version 2.8: column descriptions are instances of Column, exposing extra attributes.
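As an illustration, the 7-item sequence can be put to use without a live connection; the namedtuple below is only a stand-in for the Column instances a real cursor exposes through description after an execute() call:

```python
from collections import namedtuple

# Stand-in mimicking psycopg2's Column objects; the field order matches
# the DB-API 7-item sequence described above. A real cursor provides
# these through cur.description.
Column = namedtuple(
    'Column',
    'name type_code display_size internal_size precision scale null_ok')

description = [
    Column('num', 23, None, 4, None, None, None),    # 23 = int4 OID
    Column('data', 25, None, -1, None, None, None),  # 25 = text OID
]

# A common use of cursor.description: map column names onto a fetched row.
row = (42, 'bar')
record = dict(zip((col.name for col in description), row))
print(record)  # {'num': 42, 'data': 'bar'}
```

Because the entries are named tuples, both description[0].name and the DB-API positional form description[0][0] work.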
method close()
Close the cursor now (rather than whenever del is executed). The cursor will be unusable from this point forward; an InterfaceError will be raised if any operation is attempted with the cursor.
Changed in version 2.5: if the cursor is used in a with statement, the method is automatically called at the end of the with block.
attribute closed
Read-only boolean attribute: specifies if the cursor is closed (True) or not (False).
The closed attribute is a Psycopg extension to the |DBAPI|.
New in version 2.0.7.
attribute connection
Read-only attribute returning a reference to the connection object on which the cursor was created.
attribute name
Read-only attribute containing the name of the cursor if it was created as named cursor by connection.cursor(), or None if it is a client side cursor. See Server side cursors.
The name attribute is a Psycopg extension to the |DBAPI|.
attribute scrollable
Read/write attribute: specifies if a named cursor is declared SCROLL, hence capable of scrolling backwards (using scroll()). If True, the cursor can be scrolled backwards; if False it is never scrollable. If None (default) the cursor scroll option is not specified, usually but not always meaning no backward scroll (see the DECLARE notes_).
Note
Set the value before calling execute(), or use the connection.cursor() scrollable parameter; otherwise the value will have no effect.
New in version 2.5.
The scrollable attribute is a Psycopg extension to the |DBAPI|.
attribute withhold
Read/write attribute: specifies if a named cursor's lifetime should extend outside of the current transaction, i.e., it is possible to fetch from the cursor even after a connection.commit() (but not after a connection.rollback()). See Server side cursors.
Note
Set the value before calling execute(), or use the connection.cursor() withhold parameter; otherwise the value will have no effect.
New in version 2.4.3.
The withhold attribute is a Psycopg extension to the |DBAPI|.
Commands execution methods
method execute(query, vars=None)
Execute a database operation (query or command).
Parameters may be provided as a sequence or mapping and will be bound to variables in the operation. Variables are specified either with positional (%s) or named (%(name)s) placeholders. See Passing parameters to SQL queries.
The method returns None. If a query was executed, the returned values can be retrieved using |fetch*|_ methods.
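As a sketch of the two placeholder styles (the execute() calls are commented out because they assume a live cursor cur and an existing test table):

```python
# Positional placeholders: parameters passed as a sequence.
query_pos = "INSERT INTO test (num, data) VALUES (%s, %s)"
args_pos = (42, 'bar')
# cur.execute(query_pos, args_pos)

# Named placeholders: parameters passed as a mapping; the same name may
# be reused and the order of the keys does not matter.
query_named = "INSERT INTO test (num, data) VALUES (%(num)s, %(data)s)"
args_named = {'data': 'bar', 'num': 42}
# cur.execute(query_named, args_named)
```

Never build the query with Python string interpolation: passing vars separately lets the driver quote values safely.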
method executemany(query, vars_list)
Execute a database operation (query or command) against all parameter tuples or mappings found in the sequence vars_list.
The function is mostly useful for commands that update the database: any result set returned by the query is discarded.
Parameters are bound to the query using the same rules described in the execute() method.
>>> nums = ((1,), (5,), (10,))
>>> cur.executemany("INSERT INTO test (num) VALUES (%s)", nums)
>>> tuples = ((123, "foo"), (42, "bar"), (23, "baz"))
>>> cur.executemany("INSERT INTO test (num, data) VALUES (%s, %s)", tuples)
Warning
In its current implementation this method is not faster than calling execute() in a loop. For better performance you can use the functions described in Fast execution helpers.
method callproc(procname [, parameters])
Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. Overloaded procedures are supported. Named parameters can be used by supplying the parameters as a dictionary.
This function is, at present, not DBAPI-compliant. The return value is supposed to consist of the sequence of parameters with modified output and input/output parameters. In future versions, the DBAPI-compliant return value may be implemented, but for now the function returns None.
The procedure may provide a result set as output. This is then made available through the standard |fetch*|_ methods.
Changed in version 2.7: added support for named arguments.
Note
callproc() can only be used with PostgreSQL functions__, not with the procedures__ introduced in PostgreSQL 11, which require the CALL statement to run. Please use a normal execute() to run them.
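The distinction can be sketched as a helper; the function and procedure names below are hypothetical and the code assumes a live cursor, so nothing runs until it is called:

```python
def call_examples(cur):
    # A PostgreSQL *function*: callproc() works, and any result set
    # it produces is available through the fetch*() methods.
    cur.callproc('my_function', (10, 20))   # hypothetical function
    result = cur.fetchone()

    # A PostgreSQL 11+ *procedure*: callproc() cannot run it; use a
    # plain execute() with the CALL statement instead.
    cur.execute("CALL my_procedure(%s)", (10,))   # hypothetical procedure
    return result
```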
method mogrify(operation [, parameters])
Return a query string after arguments binding. The string returned is exactly the one that would be sent to the database running the execute() method or similar.
The returned string is always a bytes string.
>>> cur.mogrify("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
b"INSERT INTO test (num, data) VALUES (42, E'bar')"
The mogrify() method is a Psycopg extension to the |DBAPI|.
method setinputsizes(sizes)
This method is exposed in compliance with the |DBAPI|. It currently does nothing but it is safe to call it.
Results retrieval methods
The following methods are used to read data from the database after an execute() call.
Note
cursor objects are iterable, so, instead of calling explicitly fetchone() in a loop, the object itself can be used:
>>> cur.execute("SELECT * FROM test;")
>>> for record in cur:
... print(record)
...
(1, 100, "abc'def")
(2, None, 'dada')
(3, 42, 'bar')
Changed in version 2.4: iterating over a named cursor fetches itersize records at a time from the backend. Previously only one record was fetched per roundtrip, resulting in a large overhead.
method fetchone()
Fetch the next row of a query result set, returning a single tuple, or None when no more data is available:
>>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
>>> cur.fetchone()
(3, 42, 'bar')
A ProgrammingError is raised if the previous call to |execute*|_ did not produce any result set or no call was issued yet.
method fetchmany([size=cursor.arraysize])
Fetch the next set of rows of a query result, returning a list of tuples. An empty list is returned when no more rows are available.
The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned:
>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchmany(2)
[(1, 100, "abc'def"), (2, None, 'dada')]
>>> cur.fetchmany(2)
[(3, 42, 'bar')]
>>> cur.fetchmany(2)
[]
A ProgrammingError is raised if the previous call to |execute*|_ did not produce any result set or no call was issued yet.
Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next.
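The batching loop itself is pure Python; the sketch below uses a minimal stand-in cursor so it runs without a server — with psycopg2 you would use a real cursor in its place:

```python
class FakeCursor:
    """Minimal stand-in mimicking fetchmany() so the loop runs offline."""
    def __init__(self, rows, arraysize=1):
        self._rows = list(rows)
        self.arraysize = arraysize

    def fetchmany(self, size=None):
        n = size if size is not None else self.arraysize
        batch, self._rows = self._rows[:n], self._rows[n:]
        return batch

cur = FakeCursor([(1, 100), (2, None), (3, 42)])

# The canonical fetchmany() loop: keep the size constant between calls.
batches = []
while True:
    rows = cur.fetchmany(2)
    if not rows:        # empty list: no more rows
        break
    batches.append(rows)
print(batches)  # [[(1, 100), (2, None)], [(3, 42)]]
```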
method fetchall()
Fetch all (remaining) rows of a query result, returning them as a list of tuples. An empty list is returned if there are no more records to fetch.
>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchall()
[(1, 100, "abc'def"), (2, None, 'dada'), (3, 42, 'bar')]
A ProgrammingError is raised if the previous call to |execute*|_ did not produce any result set or no call was issued yet.
method scroll(value [, mode='relative'])
Scroll the cursor in the result set to a new position according to mode.
If mode is relative (default), value is taken as an offset from the current position in the result set; if set to absolute, value states an absolute target position.
If the scroll operation would leave the result set, a ProgrammingError is raised and the cursor position is not changed.
Note
According to the DBAPI, the exception raised for a cursor out of bound should have been IndexError. The best option is probably to catch both exceptions in your code:
try:
    cur.scroll(1000 * 1000)
except (ProgrammingError, IndexError) as exc:
    deal_with_it(exc)
The method can be used both for client-side cursors and server-side cursors. Server-side cursors can usually scroll backwards only if declared scrollable. Moving out-of-bound in a server-side cursor doesn't result in an exception if the backend doesn't raise any (PostgreSQL doesn't tell us in a reliable way if we went out of bound).
attribute arraysize
This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1, meaning a single row is fetched at a time.
attribute itersize
Read/write attribute specifying the number of rows to fetch from the backend at each network roundtrip during iteration on a named cursor. The default is 2000.
New in version 2.4.
The itersize attribute is a Psycopg extension to the |DBAPI|.
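The attribute is typically set on a named (server-side) cursor before iterating; a sketch as a generator function, assuming a live psycopg2 connection is passed in (the cursor name 'stream' is arbitrary):

```python
def stream_rows(conn, query, batch=500):
    # A named cursor is a server-side cursor: rows stay on the backend
    # and iteration fetches `itersize` of them per network roundtrip.
    with conn.cursor(name='stream') as cur:
        cur.itersize = batch
        cur.execute(query)
        for row in cur:
            yield row
```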
attribute rowcount
This read-only attribute specifies the number of rows that the last |execute*|_ produced (for DQL (Data Query Language) statements like SELECT) or affected (for DML (Data Manipulation Language) statements like UPDATE or INSERT).
The attribute is -1 if no |execute*|_ has been performed on the cursor or if the row count of the last operation cannot be determined by the interface.
Note
Future versions of the DBAPI specification could redefine the latter case to have the object return None instead of -1.
attribute rownumber
This read-only attribute provides the current 0-based index of the cursor in the result set or None if the index cannot be determined.
The index can be seen as the index of the cursor in a sequence (the result set). The next fetch operation will fetch the row indexed by rownumber in that sequence.
attribute lastrowid
This read-only attribute provides the OID of the last row inserted by the cursor. If the table wasn't created with OID support or the last operation is not a single record insert, the attribute is set to None.
Note
PostgreSQL currently advises not to create OIDs on tables, and the default for CREATE TABLE is to not support them. The INSERT ... RETURNING syntax available from PostgreSQL 8.3 allows more flexibility.
attribute query
Read-only attribute containing the body of the last query sent to the backend (including bound arguments) as a bytes string. None if no query has been executed yet:
>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.query
b"INSERT INTO test (num, data) VALUES (42, E'bar')"
The query attribute is a Psycopg extension to the |DBAPI|.
attribute statusmessage
Read-only attribute containing the message returned by the last command:
>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.statusmessage
'INSERT 0 1'
The statusmessage attribute is a Psycopg extension to the |DBAPI|.
method cast(oid, s)
Convert a value from the PostgreSQL string representation to a Python object.
Use the most specific of the typecasters registered by register_type().
New in version 2.4.
The cast() method is a Psycopg extension to the |DBAPI|.
attribute tzinfo_factory
The time zone factory used to handle data types such as TIMESTAMP WITH TIME ZONE. It should be a tzinfo object. Default is datetime.timezone.
Changed in version 2.9: previously the default factory was psycopg2.tz.FixedOffsetTimezone.
method nextset()
This method is not supported (PostgreSQL does not have multiple data sets) and will raise a NotSupportedError exception.
method setoutputsize(size [, column])
This method is exposed in compliance with the |DBAPI|. It currently does nothing but it is safe to call it.
COPY-related methods
Efficiently copy data from file-like objects to the database and back. See Using COPY TO and COPY FROM for an overview.
The COPY command is a PostgreSQL extension to the SQL standard. As such, its support is a Psycopg extension to the |DBAPI|.
method copy_from(file, table, sep='\t', null='\\N', size=8192, columns=None)
Read data from the file-like object file appending them to the table named table.
param file: file-like object to read data from. It must have both read() and readline() methods.
param table: name of the table to copy data into.
param sep: columns separator expected in the file. Defaults to a tab.
param null: textual representation of NULL in the file. The default is the two-character string \N.
param size: size of the buffer used to read from the file.
param columns: iterable with the names of the columns to import. The length and types should match the content of the file to read. If not specified, it is assumed that the entire table matches the file structure.
Example:
>>> from io import StringIO
>>> f = StringIO("42\tfoo\n74\tbar\n")
>>> cur.copy_from(f, 'test', columns=('num', 'data'))
>>> cur.execute("select * from test where id > 5;")
>>> cur.fetchall()
[(6, 42, 'foo'), (7, 74, 'bar')]
Changed in version 2.0.6: added the columns parameter.
Changed in version 2.4: data read from files implementing the io.TextIOBase interface are encoded in the connection encoding when sent to the backend.
Changed in version 2.9: the table and fields names are now quoted. If you need to specify a schema-qualified table please use copy_expert().
method copy_to(file, table, sep='\t', null='\\N', columns=None)
Write the content of the table named table to the file-like object file. See Using COPY TO and COPY FROM for an overview.
param file: file-like object to write data into. It must have a write() method.
param table: name of the table to copy data from.
param sep: columns separator expected in the file. Defaults to a tab.
param null: textual representation of NULL in the file. The default is the two-character string \N.
param columns: iterable with the names of the columns to export. If not specified, export all the columns.
Example:
>>> cur.copy_to(sys.stdout, 'test', sep="|")
1|100|abc'def
2|\N|dada
...
Changed in version 2.0.6: added the columns parameter.
Changed in version 2.4: data sent to files implementing the io.TextIOBase interface are decoded in the connection encoding when read from the backend.
Changed in version 2.9: the table and fields names are now quoted. If you need to specify a schema-qualified table please use copy_expert().
method copy_expert(sql, file, size=8192)
Submit a user-composed COPY statement. The method is useful to handle all the parameters that PostgreSQL makes available (see the COPY command documentation).
param sql: the COPY statement to execute.
param file: a file-like object to read or write (according to sql).
param size: size of the read buffer to be used in COPY FROM.
The sql statement should be in the form COPY {table} TO STDOUT to export {table} to the file object passed as argument or COPY {table} FROM STDIN to import the content of the file object into {table}. If you need to compose a COPY statement dynamically (because table, fields, or query parameters are in Python variables) you may use the objects provided by the psycopg2.sql module.
file must be a readable file-like object (as required by copy_from()) for sql statement COPY ... FROM STDIN or a writable one (as required by copy_to()) for COPY ... TO STDOUT.
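Composing the statement with psycopg2.sql and exporting to an in-memory buffer can be sketched as a helper (assumes psycopg2 is installed and cur is a live cursor; the import is kept inside the function so the definition itself stays inert):

```python
import io

def table_to_csv(cur, table):
    from psycopg2 import sql  # sql module objects are accepted by copy_expert()
    buf = io.StringIO()
    stmt = sql.SQL("COPY {} TO STDOUT WITH CSV HEADER").format(
        sql.Identifier(table))   # quotes the table name safely
    cur.copy_expert(stmt, buf)
    return buf.getvalue()
```

Using sql.Identifier rather than string interpolation also handles schema-qualified or oddly-named tables.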
Example:
>>> cur.copy_expert("COPY test TO STDOUT WITH CSV HEADER", sys.stdout)
id,num,data
1,100,abc'def
2,,dada
...
New in version 2.0.6.
Changed in version 2.4: files implementing the io.TextIOBase interface are dealt with using Unicode data instead of bytes.
Interoperation with other C API modules
attribute pgresult_ptr
Return the cursor's internal PGresult* as an integer. Useful to pass the libpq raw result structure to C functions, e.g. via ctypes:
>>> import ctypes
>>> import ctypes.util
>>> libpq = ctypes.pydll.LoadLibrary(ctypes.util.find_library('pq'))
>>> libpq.PQcmdStatus.argtypes = [ctypes.c_void_p]
>>> libpq.PQcmdStatus.restype = ctypes.c_char_p
>>> curs.execute("select 'x'")
>>> libpq.PQcmdStatus(curs.pgresult_ptr)
b'SELECT 1'
New in version 2.8.