<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>
<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#beginTransaction">beginTransaction(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Begins a new transaction. This step can often be skipped:</p>
<p class="toc_element">
  <code><a href="#commit">commit(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Commits a transaction. The request includes the mutations to be</p>
<p class="toc_element">
  <code><a href="#create">create(database, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new session. A session can be used to perform</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Ends a session, releasing server resources associated with it. This will</p>
<p class="toc_element">
  <code><a href="#executeBatchDml">executeBatchDml(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Executes a batch of SQL DML statements. This method allows many statements</p>
<p class="toc_element">
  <code><a href="#executeSql">executeSql(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Executes an SQL statement, returning all results in a single reply. This</p>
<p class="toc_element">
  <code><a href="#executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Like ExecuteSql, except returns the result</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
<p class="toc_element">
  <code><a href="#list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all sessions in a given database.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#partitionQuery">partitionQuery(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a query</p>
<p class="toc_element">
  <code><a href="#partitionRead">partitionRead(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a read</p>
<p class="toc_element">
  <code><a href="#read">read(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Reads rows from the database using key lookups and scans, as a</p>
<p class="toc_element">
  <code><a href="#rollback">rollback(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p>
<p class="toc_element">
  <code><a href="#streamingRead">streamingRead(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Like Read, except returns the result set as a</p>
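The `list`/`list_next` pair above follows the standard google-api-python-client pagination pattern: `list` returns a request object, and `list_next` turns the previous request/response pair into a request for the next page (or `None` when no pages remain). A minimal sketch of that loop, using a stand-in resource object rather than a live Spanner service — the session names and page tokens below are made up for illustration:

```python
# Illustrative sketch of the list/list_next pagination contract.
# FakeSessionsResource stands in for
# service.projects().instances().databases().sessions().

PAGES = {
    None: {"sessions": [{"name": "s1"}, {"name": "s2"}], "nextPageToken": "t1"},
    "t1": {"sessions": [{"name": "s3"}], "nextPageToken": None},
}

class FakeRequest:
    def __init__(self, token=None):
        self.token = token

    def execute(self):
        return PAGES[self.token]

class FakeSessionsResource:
    def list(self, database, pageSize=None):
        # First page: no pageToken yet.
        return FakeRequest()

    def list_next(self, previous_request, previous_response):
        # Next page, or None when the listing is exhausted.
        token = previous_response.get("nextPageToken")
        return FakeRequest(token) if token else None

def all_sessions(sessions, database):
    names = []
    request = sessions.list(database=database)
    while request is not None:
        response = request.execute()
        names.extend(s["name"] for s in response.get("sessions", []))
        request = sessions.list_next(request, response)
    return names

print(all_sessions(FakeSessionsResource(),
                   "projects/p/instances/i/databases/d"))
# → ['s1', 's2', 's3']
```

The same `while request is not None` loop works unchanged against the real `sessions` resource.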
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="beginTransaction">beginTransaction(session, body, x__xgafv=None)</code>
  <pre>Begins a new transaction. This step can often be skipped:
Read, ExecuteSql and
Commit can begin a new transaction as a
side-effect.

Args:
  session: string, Required. The session in which the transaction runs. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for BeginTransaction.
    "options": { # # Transactions # Required. Options for the new transaction.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        #    consistency across several reads, but does not allow
        #    writes. Snapshot read-only transactions can be configured to
        #    read at timestamps in the past. Snapshot read-only
        #    transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        #    a single Partitioned DML statement. Partitioned DML partitions
        #    the key space and runs the DML statement over each partition
        #    in parallel using separate, internal transactions that commit
        #    independently. Partitioned DML transactions do not need to be
        #    committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        # - Strong (the default).
        # - Bounded staleness.
        # - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp <=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # <= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        #   must be expressible as the union of many statements which each access only
        #   a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        #   the statement is applied atomically to partitions of the table, in
        #   independent transactions. Secondary index rows are updated atomically
        #   with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        #   against a partition. The statement will be applied at least once to each
        #   partition. It is strongly recommended that the DML statement be
        #   idempotent to avoid unexpected results. For instance, it is potentially
        #   dangerous to run a statement such as
        #   `UPDATE table SET column = column + 1` as it could be run multiple times
        #   against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        #   Commit or Rollback. If the call returns an error, or if the client issuing
        #   the ExecuteSql call dies, it is possible that some rows had the statement
        #   executed on them successfully. It is also possible that the statement was
        #   never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        #   DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #   value that cannot be stored due to schema constraints), then the
        #   operation is stopped at that point and an error is returned. It is
        #   possible that at this point, some partitions have been committed (or even
        #   committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          # transaction type has no options.
      },
      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A transaction.
      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
          # for the transaction. Not returned by default: see
          # TransactionOptions.ReadOnly.return_read_timestamp.
          #
          # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
          # Example: `"2014-10-02T15:01:23.045123456Z"`.
      "id": "A String", # `id` may be used to identify the transaction in subsequent
          # Read,
          # ExecuteSql,
          # Commit, or
          # Rollback calls.
          #
          # Single-use read-only transactions do not have IDs, because
          # single-use transactions do not support multiple requests.
    }</pre>
</div>
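The `body` argument above is a plain dict whose `options` field selects exactly one of the three documented transaction modes. A small helper that assembles the documented shapes — the field names come from the request schema above, but the helper name itself is only for illustration, not part of the client library:

```python
def transaction_options(mode, **kwargs):
    """Build a BeginTransaction request body (illustrative helper).

    mode: one of "readWrite", "readOnly", "partitionedDml".
    kwargs: for "readOnly", one timestamp bound such as strong=True,
    exactStaleness="10s", or readTimestamp="2014-10-02T15:01:23.045123456Z".
    readWrite and partitionedDml take no options.
    """
    if mode not in ("readWrite", "readOnly", "partitionedDml"):
        raise ValueError("unknown transaction mode: %s" % mode)
    return {"options": {mode: dict(kwargs)}}

# A strong read-only transaction that also asks for the chosen read timestamp:
body = transaction_options("readOnly", strong=True, returnReadTimestamp=True)
print(body)
# → {'options': {'readOnly': {'strong': True, 'returnReadTimestamp': True}}}

# Read-write transactions carry an empty options message:
rw_body = transaction_options("readWrite")
# → {'options': {'readWrite': {}}}
```

With a built `service` object this dict would be passed as `sessions.beginTransaction(session=session_name, body=body)`; the returned Transaction `id` then identifies the transaction in subsequent Read, ExecuteSql, Commit, or Rollback calls.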
<div class="method">
|
|
<code class="details" id="commit">commit(session, body, x__xgafv=None)</code>
|
|
<pre>Commits a transaction. The request includes the mutations to be
|
|
applied to rows in the database.
|
|
|
|
`Commit` might return an `ABORTED` error. This can occur at any time;
|
|
commonly, the cause is conflicts with concurrent
|
|
transactions. However, it can also happen for a variety of other
|
|
reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
|
|
the transaction from the beginning, re-using the same session.
|
|
|
|
Args:
|
|
session: string, Required. The session in which the transaction to be committed is running. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for Commit.
|
|
"transactionId": "A String", # Commit a previously-started transaction.
|
|
"mutations": [ # The mutations to be executed when this transaction commits. All
|
|
# mutations are applied atomically, in the order they appear in
|
|
# this list.
|
|
{ # A modification to one or more Cloud Spanner rows. Mutations can be
|
|
# applied to a Cloud Spanner database by sending them in a
|
|
# Commit call.
|
|
"insert": { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
|
|
# the write or transaction fails with error `ALREADY_EXISTS`.
|
|
# replace operations.
|
|
"table": "A String", # Required. The table whose rows will be written.
|
|
"values": [ # The values to be written. `values` can contain more than one
|
|
# list of values. If it does, then multiple rows are written, one
|
|
# for each entry in `values`. Each list in `values` must have
|
|
# exactly as many entries as there are entries in columns
|
|
# above. Sending multiple lists is equivalent to sending multiple
|
|
# `Mutation`s, each containing one `values` entry and repeating
|
|
# table and columns. Individual values in each list are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"columns": [ # The names of the columns in table to be written.
|
|
#
|
|
# The list of columns must contain enough columns to allow
|
|
# Cloud Spanner to derive values for all primary key columns in the
|
|
# row(s) to be modified.
|
|
"A String",
|
|
],
|
|
},
|
|
"replace": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
|
|
# deleted, and the column values provided are inserted
|
|
# instead. Unlike insert_or_update, this means any values not
|
|
# explicitly written become `NULL`.
|
|
# replace operations.
|
|
"table": "A String", # Required. The table whose rows will be written.
|
|
"values": [ # The values to be written. `values` can contain more than one
|
|
# list of values. If it does, then multiple rows are written, one
|
|
# for each entry in `values`. Each list in `values` must have
|
|
# exactly as many entries as there are entries in columns
|
|
# above. Sending multiple lists is equivalent to sending multiple
|
|
# `Mutation`s, each containing one `values` entry and repeating
|
|
# table and columns. Individual values in each list are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"columns": [ # The names of the columns in table to be written.
|
|
#
|
|
# The list of columns must contain enough columns to allow
|
|
# Cloud Spanner to derive values for all primary key columns in the
|
|
# row(s) to be modified.
|
|
"A String",
|
|
],
|
|
},
|
|
"insertOrUpdate": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then
|
|
# its column values are overwritten with the ones provided. Any
|
|
# column values not explicitly written are preserved.
|
|
# replace operations.
|
|
"table": "A String", # Required. The table whose rows will be written.
|
|
"values": [ # The values to be written. `values` can contain more than one
|
|
# list of values. If it does, then multiple rows are written, one
|
|
# for each entry in `values`. Each list in `values` must have
|
|
# exactly as many entries as there are entries in columns
|
|
# above. Sending multiple lists is equivalent to sending multiple
|
|
# `Mutation`s, each containing one `values` entry and repeating
|
|
# table and columns. Individual values in each list are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"columns": [ # The names of the columns in table to be written.
|
|
#
|
|
# The list of columns must contain enough columns to allow
|
|
# Cloud Spanner to derive values for all primary key columns in the
|
|
# row(s) to be modified.
|
|
"A String",
|
|
],
|
|
},
|
|
"update": { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
|
|
# already exist, the transaction fails with error `NOT_FOUND`.
|
|
# replace operations.
|
|
"table": "A String", # Required. The table whose rows will be written.
|
|
"values": [ # The values to be written. `values` can contain more than one
|
|
# list of values. If it does, then multiple rows are written, one
|
|
# for each entry in `values`. Each list in `values` must have
|
|
# exactly as many entries as there are entries in columns
|
|
# above. Sending multiple lists is equivalent to sending multiple
|
|
# `Mutation`s, each containing one `values` entry and repeating
|
|
# table and columns. Individual values in each list are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"columns": [ # The names of the columns in table to be written.
|
|
#
|
|
# The list of columns must contain enough columns to allow
|
|
# Cloud Spanner to derive values for all primary key columns in the
|
|
# row(s) to be modified.
|
|
"A String",
|
|
],
|
|
},
|
|
"delete": { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
|
|
# rows were present.
|
|
"table": "A String", # Required. The table whose rows will be deleted.
|
|
"keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete.
|
|
# Delete is idempotent. The transaction will succeed even if some or all
|
|
# rows do not exist.
|
|
# the keys are expected to be in the same table or index. The keys need
|
|
# not be sorted in any particular way.
|
|
#
|
|
# If the same key is specified multiple times in the set (for example
|
|
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
|
|
# behaves as if the key were only specified once.
|
|
"ranges": [ # A list of key ranges. See KeyRange for more information about
|
|
# key range specifications.
|
|
{ # KeyRange represents a range of rows in a table or index.
|
|
#
|
|
# A range has a start key and an end key. These keys can be open or
|
|
# closed, indicating if the range includes rows with that key.
|
|
#
|
|
# Keys are represented by lists, where the ith value in the list
|
|
# corresponds to the ith component of the table or index primary key.
|
|
# Individual values are encoded as described
|
|
# here.
|
|
#
|
|
# For example, consider the following table definition:
|
|
#
|
|
# CREATE TABLE UserEvents (
|
|
# UserName STRING(MAX),
|
|
# EventDate STRING(10)
|
|
# ) PRIMARY KEY(UserName, EventDate);
|
|
#
|
|
# The following keys name rows in this table:
|
|
#
|
|
# "Bob", "2014-09-23"
|
|
#
|
|
# Since the `UserEvents` table's `PRIMARY KEY` clause names two
|
|
# columns, each `UserEvents` key has two elements; the first is the
|
|
# `UserName`, and the second is the `EventDate`.
|
|
#
|
|
# Key ranges with multiple components are interpreted
|
|
# lexicographically by component using the table or index key's declared
|
|
# sort order. For example, the following range returns all events for
|
|
# user `"Bob"` that occurred in the year 2015:
|
|
#
|
|
# "start_closed": ["Bob", "2015-01-01"]
|
|
# "end_closed": ["Bob", "2015-12-31"]
|
|
#
|
|
# Start and end keys can omit trailing key components. This affects the
|
|
# inclusion and exclusion of rows that exactly match the provided key
|
|
# components: if the key is closed, then rows that exactly match the
|
|
# provided components are included; if the key is open, then rows
|
|
# that exactly match are not included.
|
|
#
|
|
# For example, the following range includes all events for `"Bob"` that
|
|
# occurred during and after the year 2000:
|
|
#
|
|
# "start_closed": ["Bob", "2000-01-01"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# The next example retrieves all events for `"Bob"`:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# To retrieve events before the year 2000:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_open": ["Bob", "2000-01-01"]
|
|
#
|
|
# The following range includes all rows in the table:
|
|
#
|
|
# "start_closed": []
|
|
# "end_closed": []
|
|
#
|
|
# This range returns all users whose `UserName` begins with any
|
|
# character from A to C:
|
|
#
|
|
# "start_closed": ["A"]
|
|
# "end_open": ["D"]
|
|
#
|
|
# This range returns all users whose `UserName` begins with B:
|
|
#
|
|
# "start_closed": ["B"]
|
|
# "end_open": ["C"]
|
|
#
|
|
# Key ranges honor column sort order. For example, suppose a table is
|
|
# defined as follows:
|
|
#
|
|
# CREATE TABLE DescendingSortedTable {
|
|
# Key INT64,
|
|
# ...
|
|
# ) PRIMARY KEY(Key DESC);
|
|
#
|
|
# The following range retrieves all rows with key values between 1
|
|
# and 100 inclusive:
|
|
#
|
|
# "start_closed": ["100"]
|
|
# "end_closed": ["1"]
|
|
#
|
|
# Note that 100 is passed as the start, and 1 is passed as the end,
|
|
# because `Key` is a descending column in the schema.
|
|
"endOpen": [ # If the end is open, then the range excludes rows whose first
|
|
# `len(end_open)` key columns exactly match `end_open`.
|
|
"",
|
|
],
|
|
"startOpen": [ # If the start is open, then the range excludes rows whose first
|
|
# `len(start_open)` key columns exactly match `start_open`.
|
|
"",
|
|
],
|
|
"endClosed": [ # If the end is closed, then the range includes all rows whose
|
|
# first `len(end_closed)` key columns exactly match `end_closed`.
|
|
"",
|
|
],
|
|
"startClosed": [ # If the start is closed, then the range includes all rows whose
|
|
# first `len(start_closed)` key columns exactly match `start_closed`.
|
|
"",
|
|
],
|
|
},
],
"keys": [ # A list of specific keys. Entries in `keys` should have exactly as
# many elements as there are columns in the primary or index key
# with which this `KeySet` is used. Individual key values are
# encoded as described here.
[
"",
],
],
"all": True or False, # For convenience `all` can be set to `true` to indicate that this
# `KeySet` matches all keys in the table or index. Note that any keys
# specified in `keys` or `ranges` are only yielded once.
},
},
},
],
"singleUseTransaction": { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
|
|
# commit of a previously-started transaction, commit with a
|
|
# temporary transaction is non-idempotent. That is, if the
|
|
# `CommitRequest` is sent to Cloud Spanner more than once (for
|
|
# instance, due to retries in the application, or in the
|
|
# transport library), it is possible that the mutations are
|
|
# executed more than once. If this is undesirable, use
|
|
# BeginTransaction and
|
|
# Commit instead.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
|
|
# Rollback request to abort the
|
|
# transaction.
|
|
#
|
|
# ### Semantics
|
|
#
|
|
# Cloud Spanner can commit the transaction if all read locks it acquired
|
|
# are still valid at commit time, and it is able to acquire write
|
|
# locks for all writes. Cloud Spanner can abort the transaction for any
|
|
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
|
|
# that the transaction has not modified any user data in Cloud Spanner.
|
|
#
|
|
# Unless the transaction commits, Cloud Spanner makes no guarantees about
|
|
# how long the transaction's locks were held for. It is an error to
|
|
# use Cloud Spanner locks for any sort of mutual exclusion other than
|
|
# between Cloud Spanner transactions themselves.
|
|
#
|
|
# ### Retrying Aborted Transactions
|
|
#
|
|
# When a transaction aborts, the application can choose to retry the
|
|
# whole transaction again. To maximize the chances of successfully
|
|
# committing the retry, the client should execute the retry in the
|
|
# same session as the original attempt. The original session's lock
|
|
# priority increases with each consecutive abort, meaning that each
|
|
# attempt has a slightly better chance of success than the previous.
|
|
#
|
|
# Under some circumstances (e.g., many transactions attempting to
|
|
# modify the same row(s)), a transaction can abort many times in a
|
|
# short period before successfully committing. Thus, it is not a good
|
|
# idea to cap the number of retries a transaction can attempt;
|
|
# instead, it is better to limit the total amount of wall time spent
|
|
# retrying.
|
|
#
|
|
# ### Idle Transactions
|
|
#
|
|
# A transaction is considered idle if it has no outstanding reads or
|
|
# SQL queries and has not started a read or SQL query within the last 10
|
|
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
|
|
# don't hold on to locks indefinitely. In that case, the commit will
|
|
# fail with error `ABORTED`.
|
|
#
|
|
# If this behavior is undesirable, periodically executing a simple
|
|
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
|
|
# transaction from becoming idle.
|
|
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
#   - Strong (the default).
#   - Bounded staleness.
#   - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
#  - The DML statement must be fully-partitionable. Specifically, the statement
#    must be expressible as the union of many statements which each access only
#    a single row of the table.
#
#  - The statement is not applied atomically to all rows of the table. Rather,
#    the statement is applied atomically to partitions of the table, in
#    independent transactions. Secondary index rows are updated atomically
#    with the base table rows.
#
#  - Partitioned DML does not guarantee exactly-once execution semantics
#    against a partition. The statement will be applied at least once to each
#    partition. It is strongly recommended that the DML statement should be
#    idempotent to avoid unexpected results. For instance, it is potentially
#    dangerous to run a statement such as
#    `UPDATE table SET column = column + 1` as it could be run multiple times
#    against some rows.
#
#  - The partitions are committed automatically - there is no support for
#    Commit or Rollback. If the call returns an error, or if the client issuing
#    the ExecuteSql call dies, it is possible that some rows had the statement
#    executed on them successfully. It is also possible that the statement was
#    never executed against other rows.
#
#  - Partitioned DML transactions may only contain the execution of a single
#    DML statement via ExecuteSql or ExecuteStreamingSql.
#
#  - If any error is encountered during the execution of the partitioned DML
#    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
#    value that cannot be stored due to schema constraints), then the
#    operation is stopped at that point and an error is returned. It is
#    possible that at this point, some partitions have been committed (or even
#    committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # The response for Commit.
"commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed.
}</pre>
</div>
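The commit documentation above warns that a `singleUseTransaction` commit is non-idempotent: a retried `CommitRequest` may apply the mutations twice. The following sketch builds such a request body as a plain Python dict; the table and column names (`UserEvents`, `UserName`, `EventDate`) are the hypothetical examples used in the KeyRange discussion, and `session_name` stands in for a real session resource.

```python
# Sketch (not an official sample): build a CommitRequest body that commits
# mutations in a temporary, single-use read-write transaction.

def build_single_use_commit_body(mutations):
    """Return a CommitRequest body committing `mutations` in a temporary
    transaction. Caution: this form is non-idempotent -- if the request is
    retried, the mutations may be applied more than once. Prefer
    beginTransaction() followed by commit(transactionId=...) when the
    transport or application may retry."""
    return {
        "singleUseTransaction": {"readWrite": {}},  # readWrite has no options
        "mutations": mutations,
    }

# A single insert mutation (hypothetical table/columns).
insert_mutation = {
    "insert": {
        "table": "UserEvents",
        "columns": ["UserName", "EventDate"],
        "values": [["Bob", "2014-09-23"]],
    }
}

body = build_single_use_commit_body([insert_mutation])
# With the generated client this would be sent as, e.g.:
# service.projects().instances().databases().sessions().commit(
#     session=session_name, body=body).execute()
```

The dict shape mirrors the request body documented above; only the surrounding call is elided.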
<div class="method">
<code class="details" id="create">create(database, body, x__xgafv=None)</code>
<pre>Creates a new session. A session can be used to perform
transactions that read and/or modify data in a Cloud Spanner database.
Sessions are meant to be reused for many consecutive
transactions.

Sessions can only execute one transaction at a time. To execute
multiple concurrent read-write/write-only transactions, create
multiple sessions. Note that standalone reads and queries use a
transaction internally, and count toward the one transaction
limit.

Cloud Spanner limits the number of sessions that can exist at any given
time; thus, it is a good idea to delete idle and/or unneeded sessions.
Aside from explicit deletes, Cloud Spanner can delete sessions for which no
operations are sent for more than an hour. If a session is deleted,
requests to it return `NOT_FOUND`.

Idle sessions can be kept alive by sending a trivial SQL query
periodically, e.g., `"SELECT 1"`.

Args:
database: string, Required. The database in which the new session is created. (required)
body: object, The request body. (required)
The object takes the form of:

{ # The request for CreateSession.
"session": { # A session in the Cloud Spanner API. # The session to create.
"labels": { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
"a_key": "A String",
},
"name": "A String", # The name of the session. This is always system-assigned; values provided
# when creating a session are ignored.
"approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
# typically earlier than the actual last use time.
"createTime": "A String", # Output only. The timestamp when the session is created.
},
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # A session in the Cloud Spanner API.
"labels": { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
"a_key": "A String",
},
"name": "A String", # The name of the session. This is always system-assigned; values provided
# when creating a session are ignored.
"approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
# typically earlier than the actual last use time.
"createTime": "A String", # Output only. The timestamp when the session is created.
}</pre>
</div>
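The create() documentation above spells out exact constraints on session labels (key/value regular expressions, length limits, at most 64 labels). A small sketch that pre-validates a label map against those documented rules before building the CreateSession body; the label values used here (`env`, `prod`) are made-up examples.

```python
import re

# Sketch: validate session labels against the rules quoted in create() above.
_KEY_RE = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")
_VALUE_RE = re.compile(r"([a-z]([-a-z0-9]*[a-z0-9])?)?")

def valid_session_labels(labels):
    """Return True if `labels` satisfies the documented session-label rules."""
    if len(labels) > 64:  # no more than 64 labels per session
        return False
    for key, value in labels.items():
        # Keys: 1-63 chars, must fully match the documented key regex.
        if not (1 <= len(key) <= 63 and _KEY_RE.fullmatch(key)):
            return False
        # Values: 0-63 chars, must fully match the documented value regex.
        if not (len(value) <= 63 and _VALUE_RE.fullmatch(value)):
            return False
    return True

labels = {"env": "prod"}
assert valid_session_labels(labels)
# name/createTime are system-assigned, so only labels need to be supplied:
body = {"session": {"labels": labels}}
```

Validating client-side avoids a round trip that would fail with `INVALID_ARGUMENT` anyway.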
<div class="method">
<code class="details" id="delete">delete(name, x__xgafv=None)</code>
<pre>Ends a session, releasing server resources associated with it. This will
asynchronously trigger cancellation of any operations that are running with
this session.

Args:
name: string, Required. The name of the session to delete. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # A generic empty message that you can re-use to avoid defining duplicated
# empty messages in your APIs. A typical example is to use it as the request
# or the response type of an API method. For instance:
#
#     service Foo {
#       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
#     }
#
# The JSON representation for `Empty` is an empty JSON object `{}`.
}</pre>
</div>
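delete() takes the full session resource name, not a bare session ID. A sketch of assembling that name from its components; the project, instance, database, and session IDs below are placeholders, not real resources.

```python
# Sketch: build the session resource name expected by delete() (and the
# other session-scoped methods on this page).

def session_name(project, instance, database, session):
    """Return the full resource name of a session."""
    return (f"projects/{project}/instances/{instance}"
            f"/databases/{database}/sessions/{session}")

name = session_name("my-project", "my-instance", "my-db", "session-123")
# service.projects().instances().databases().sessions().delete(
#     name=name).execute()   # returns {} (Empty) on success
```

Deleting a session that no longer exists fails with `NOT_FOUND`, per the session docs above.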
<div class="method">
|
|
<code class="details" id="executeBatchDml">executeBatchDml(session, body, x__xgafv=None)</code>
|
|
<pre>Executes a batch of SQL DML statements. This method allows many statements
|
|
to be run with lower latency than submitting them sequentially with
|
|
ExecuteSql.
|
|
|
|
Statements are executed in order, sequentially.
|
|
ExecuteBatchDmlResponse will contain a
|
|
ResultSet for each DML statement that has successfully executed. If a
|
|
statement fails, its error status will be returned as part of the
|
|
ExecuteBatchDmlResponse. Execution will
|
|
stop at the first failed statement; the remaining statements will not run.
|
|
|
|
ExecuteBatchDml is expected to return an OK status with a response even if
|
|
there was an error while processing one of the DML statements. Clients must
|
|
inspect response.status to determine if there were any errors while
|
|
processing the request.
|
|
|
|
See more details in
|
|
ExecuteBatchDmlRequest and
|
|
ExecuteBatchDmlResponse.
|
|
|
|
Args:
|
|
session: string, Required. The session in which the DML statements should be performed. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for ExecuteBatchDml
|
|
"seqno": "A String", # A per-transaction sequence number used to identify this request. This is
|
|
# used in the same space as the seqno in
|
|
# ExecuteSqlRequest. See more details
|
|
# in ExecuteSqlRequest.
|
|
"transaction": { # This message is used to select the transaction in which a # The transaction to use. A ReadWrite transaction is required. Single-use
|
|
# transactions are not supported (to avoid replay). The caller must either
|
|
# supply an existing transaction ID or begin a new transaction.
|
|
# Read or
|
|
# ExecuteSql call runs.
|
|
#
|
|
# See TransactionOptions for more information about transactions.
|
|
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
|
|
# Rollback request to abort the
|
|
# transaction.
|
|
#
|
|
# ### Semantics
|
|
#
|
|
# Cloud Spanner can commit the transaction if all read locks it acquired
|
|
# are still valid at commit time, and it is able to acquire write
|
|
# locks for all writes. Cloud Spanner can abort the transaction for any
|
|
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
|
|
# that the transaction has not modified any user data in Cloud Spanner.
|
|
#
|
|
# Unless the transaction commits, Cloud Spanner makes no guarantees about
|
|
# how long the transaction's locks were held for. It is an error to
|
|
# use Cloud Spanner locks for any sort of mutual exclusion other than
|
|
# between Cloud Spanner transactions themselves.
|
|
#
|
|
# ### Retrying Aborted Transactions
|
|
#
|
|
# When a transaction aborts, the application can choose to retry the
|
|
# whole transaction again. To maximize the chances of successfully
|
|
# committing the retry, the client should execute the retry in the
|
|
# same session as the original attempt. The original session's lock
|
|
# priority increases with each consecutive abort, meaning that each
|
|
# attempt has a slightly better chance of success than the previous.
|
|
#
|
|
# Under some circumstances (e.g., many transactions attempting to
|
|
# modify the same row(s)), a transaction can abort many times in a
|
|
# short period before successfully committing. Thus, it is not a good
|
|
# idea to cap the number of retries a transaction can attempt;
|
|
# instead, it is better to limit the total amount of wall time spent
|
|
# retrying.
|
|
#
|
|
# ### Idle Transactions
|
|
#
|
|
# A transaction is considered idle if it has no outstanding reads or
|
|
# SQL queries and has not started a read or SQL query within the last 10
|
|
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
|
|
# don't hold on to locks indefinitely. In that case, the commit will
|
|
# fail with error `ABORTED`.
|
|
#
|
|
# If this behavior is undesirable, periodically executing a simple
|
|
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
|
|
# transaction from becoming idle.
|
|
#
|
|
# ## Snapshot Read-Only Transactions
|
|
#
|
|
# Snapshot read-only transactions provides a simpler method than
|
|
# locking read-write transactions for doing several consistent
|
|
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time. After the
# active transaction is completed, the session can immediately be
# re-used for the next transaction. It is not necessary to create a
# new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"statements": [ # The list of statements to execute in this batch. Statements are executed
# serially, such that the effects of statement i are visible to statement
# i+1. Each statement must be a DML statement. Execution will stop at the
# first failed statement; the remaining statements will not run.
#
# REQUIRES: `statements_size()` > 0.
{ # A single DML statement.
"paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL statement parameters. See the
# definition of Type for more information
# about SQL types.
"a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
# provides type information for the struct's fields.
"code": "A String", # Required. The TypeCode for this type.
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
},
},
"params": { # The DML string can contain parameter placeholders. A parameter
# placeholder consists of `'@'` followed by the parameter
# name. Parameter names consist of any combination of letters,
# numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The
# same parameter name can be used more than once, for example:
# `"WHERE id > @msg_id AND id < @msg_id + 100"`
#
# It is an error to execute an SQL statement with unbound parameters.
#
# Parameter values are specified using `params`, which is a JSON
# object whose keys are parameter names, and whose values are the
# corresponding parameter values.
"a_key": "", # Properties of the object.
},
"sql": "A String", # Required. The DML string.
},
],
}
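Since the request body documented above is plain JSON, it can be assembled as an ordinary Python dict before being passed to the generated client. The sketch below is illustrative only: the helper name, transaction id, table, and SQL are hypothetical examples, not part of this API reference; only the field names (`transaction`, `statements`, `sql`, `params`, `paramTypes`) follow the documentation above.

```python
# Sketch: build an ExecuteBatchDml-style request body dict.
# Field names follow the request documentation above; all values here
# (transaction id, table name, SQL) are hypothetical examples.

def make_batch_dml_body(transaction_id, statements):
    """Assemble a request body from (sql, params, param_types) tuples.

    params and param_types may be None when a statement has no
    parameters. Raises if the statement list is empty, mirroring the
    documented REQUIRES: statements_size() > 0.
    """
    stmts = []
    for sql, params, param_types in statements:
        stmt = {"sql": sql}
        if params:
            stmt["params"] = params            # JSON object: name -> value
        if param_types:
            stmt["paramTypes"] = param_types   # name -> Type, e.g. {"code": "INT64"}
        stmts.append(stmt)
    if not stmts:
        raise ValueError("statements must be non-empty (statements_size() > 0)")
    return {
        # Execute in a previously-started transaction, per the "id" field above.
        "transaction": {"id": transaction_id},
        "statements": stmts,
    }

body = make_batch_dml_body(
    "txn-bytes",  # hypothetical transaction id from beginTransaction
    [("UPDATE Albums SET Title = @title WHERE AlbumId = @id",
      {"title": "New Title", "id": "42"},
      {"id": {"code": "INT64"}})],
)
```

Note that `INT64` values are passed as JSON strings (as in `"id": "42"`), which is why `paramTypes` is needed to disambiguate the SQL type.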
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for ExecuteBatchDml. Contains a list
# of ResultSet, one for each DML statement that has successfully executed.
# If a statement fails, the error is returned as part of the response payload.
# Clients can determine whether all DML statements have run successfully, or if
|
|
# a statement failed, using one of the following approaches:
|
|
#
|
|
# 1. Check if `'status'` field is `OkStatus`.
|
|
# 2. Check if `result_sets_size()` equals the number of statements in
|
|
# ExecuteBatchDmlRequest.
|
|
#
|
|
# Example 1: A request with 5 DML statements, all executed successfully.
|
|
#
|
|
# Result: A response with 5 ResultSets, one for each statement in the same
|
|
# order, and an `OkStatus`.
|
|
#
|
|
# Example 2: A request with 5 DML statements. The 3rd statement has a syntax
|
|
# error.
|
|
#
|
|
# Result: A response with 2 ResultSets, for the first 2 statements that
|
|
# run successfully, and a syntax error (`INVALID_ARGUMENT`) status. From
|
|
# `result_set_size()` client can determine that the 3rd statement has failed.
      "status": { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, status will be OK.
          # Otherwise, the error status of the first failed statement.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). The error model is designed to be:
          #
          # - Simple to use and understand for most users
          # - Flexible enough to meet unexpected needs
          #
          # # Overview
          #
          # The `Status` message contains three pieces of data: error code, error
          # message, and error details. The error code should be an enum value of
          # google.rpc.Code, but it may accept additional error codes if needed. The
          # error message should be a developer-facing English message that helps
          # developers *understand* and *resolve* the error. If a localized user-facing
          # error message is needed, put the localized message in the error details or
          # localize it in the client. The optional error details may contain arbitrary
          # information about the error. There is a predefined set of error detail types
          # in the package `google.rpc` that can be used for common error conditions.
          #
          # # Language mapping
          #
          # The `Status` message is the logical representation of the error model, but it
          # is not necessarily the actual wire format. When the `Status` message is
          # exposed in different client libraries and different wire protocols, it can be
          # mapped differently. For example, it will likely be mapped to some exceptions
          # in Java, but more likely mapped to some error codes in C.
          #
          # # Other uses
          #
          # The error model and the `Status` message can be used in a variety of
          # environments, either with or without APIs, to provide a
          # consistent developer experience across different environments.
          #
          # Example uses of this error model include:
          #
          # - Partial errors. If a service needs to return partial errors to the client,
          #   it may embed the `Status` in the normal response to indicate the partial
          #   errors.
          #
          # - Workflow errors. A typical workflow has multiple steps. Each step may
          #   have a `Status` message for error reporting.
          #
          # - Batch operations. If a client uses batch request and batch response, the
          #   `Status` message should be used directly inside batch response, one for
          #   each error sub-response.
          #
          # - Asynchronous operations. If an API call embeds asynchronous operation
          #   results in its response, the status of those operations should be
          #   represented directly using the `Status` message.
          #
          # - Logging. If some API errors are stored in logs, the message `Status` could
          #   be used directly after any stripping needed for security/privacy reasons.
        "message": "A String", # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
        "details": [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
        ],
      },
"resultSets": [ # ResultSets, one for each statement in the request that ran successfully, in
|
|
# the same order as the statements in the request. Each ResultSet will
|
|
# not contain any rows. The ResultSetStats in each ResultSet will
|
|
# contain the number of rows modified by the statement.
|
|
#
|
|
# Only the first ResultSet in the response contains a valid
|
|
# ResultSetMetadata.
|
|
{ # Results from Read or
|
|
# ExecuteSql.
|
|
"rows": [ # Each element in `rows` is a row whose format is defined by
|
|
# metadata.row_type. The ith element
|
|
# in each row matches the ith field in
|
|
# metadata.row_type. Elements are
|
|
# encoded based on type as described
|
|
# here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
|
|
# produced this result set. These can be requested by setting
|
|
# ExecuteSqlRequest.query_mode.
|
|
# DML statements always produce stats containing the number of rows
|
|
# modified, unless executed using the
|
|
# ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
|
|
# Other fields may or may not be populated, based on the
|
|
# ExecuteSqlRequest.query_mode.
|
|
"rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
|
|
# returns a lower bound of the rows modified.
|
|
"rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
|
|
"queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
|
|
"planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
|
|
# with the plan root. Each PlanNode's `id` corresponds to its index in
|
|
# `plan_nodes`.
|
|
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
|
|
"index": 42, # The `PlanNode`'s index in node list.
|
|
"kind": "A String", # Used to determine the type of node. May be needed for visualizing
|
|
# different kinds of nodes differently. For example, If the node is a
|
|
# SCALAR node, it will have a condensed representation
|
|
# which can be used to directly embed a description of the node in its
|
|
# parent.
|
|
"displayName": "A String", # The display name for the node.
|
|
"executionStats": { # The execution statistics associated with the node, contained in a group of
|
|
# key-value pairs. Only present if the plan was returned as a result of a
|
|
# profile query. For example, number of executions, number of rows/time per
|
|
# execution etc.
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
"childLinks": [ # List of child node `index`es and their relationship to this parent.
|
|
{ # Metadata associated with a parent-child relationship appearing in a
|
|
# PlanNode.
|
|
"variable": "A String", # Only present if the child node is SCALAR and corresponds
|
|
# to an output variable of the parent node. The field carries the name of
|
|
# the output variable.
|
|
# For example, a `TableScan` operator that reads rows from a table will
|
|
# have child links to the `SCALAR` nodes representing the output variables
|
|
# created for each column that is read by the operator. The corresponding
|
|
# `variable` fields will be set to the variable names assigned to the
|
|
# columns.
|
|
"childIndex": 42, # The node to which the link points.
|
|
"type": "A String", # The type of the link. For example, in Hash Joins this could be used to
|
|
# distinguish between the build child and the probe child, or in the case
|
|
# of the child being an output variable, to represent the tag associated
|
|
# with the output variable.
|
|
},
|
|
],
|
|
"shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
|
|
# `SCALAR` PlanNode(s).
|
|
"subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
|
|
# where the `description` string of this node references a `SCALAR`
|
|
# subquery contained in the expression subtree rooted at this node. The
|
|
# referenced `SCALAR` subquery may not necessarily be a direct child of
|
|
# this node.
|
|
"a_key": 42,
|
|
},
|
|
"description": "A String", # A string representation of the expression subtree rooted at this node.
|
|
},
|
|
"metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
|
|
# For example, a Parameter Reference node could have the following
|
|
# information in its metadata:
|
|
#
|
|
# {
|
|
# "parameter_reference": "param1",
|
|
# "parameter_type": "array"
|
|
# }
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
},
|
|
],
|
|
},
|
|
"queryStats": { # Aggregated statistics from the execution of the query. Only present when
|
|
# the query is profiled. For example, a query could return the statistics as
|
|
# follows:
|
|
#
|
|
# {
|
|
# "rows_returned": "3",
|
|
# "elapsed_time": "1.22 secs",
|
|
# "cpu_time": "1.19 secs"
|
|
# }
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
},
|
|
"metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
|
|
"rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
|
|
# set. For example, a SQL query like `"SELECT UserId, UserName FROM
|
|
# Users"` could return a `row_type` value like:
|
|
#
|
|
# "fields": [
|
|
# { "name": "UserId", "type": { "code": "INT64" } },
|
|
# { "name": "UserName", "type": { "code": "STRING" } },
|
|
# ]
|
|
"fields": [ # The list of fields that make up this struct. Order is
|
|
# significant, because values of this struct type are represented as
|
|
# lists, where the order of field values matches the order of
|
|
# fields in the StructType. In turn, the order of fields
|
|
# matches the order of columns in a read request, or the order of
|
|
# fields in the `SELECT` clause of a query.
|
|
{ # Message representing a single field of a struct.
|
|
"type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
|
|
# table cell or returned from an SQL query.
|
|
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
|
|
# provides type information for the struct's fields.
|
|
"code": "A String", # Required. The TypeCode for this type.
|
|
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
|
|
# is the type of the array elements.
|
|
},
|
|
"name": "A String", # The name of the field. For reads, this is the column name. For
|
|
# SQL queries, it is the column alias (e.g., `"Word"` in the
|
|
# query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
|
|
# `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
|
|
# columns might have an empty name (e.g., !"SELECT
|
|
# UPPER(ColName)"`). Note that a query result can contain
|
|
# multiple fields with the same name.
|
|
},
|
|
],
|
|
},
|
|
"transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
|
|
# information about the new transaction is yielded here.
|
|
"readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
|
|
# for the transaction. Not returned by default: see
|
|
# TransactionOptions.ReadOnly.return_read_timestamp.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"id": "A String", # `id` may be used to identify the transaction in subsequent
|
|
# Read,
|
|
# ExecuteSql,
|
|
# Commit, or
|
|
# Rollback calls.
|
|
#
|
|
# Single-use read-only transactions do not have IDs, because
|
|
# single-use transactions do not support multiple requests.
|
|
},
|
|
},
|
|
},
|
|
],
|
|
}</pre>
|
|
</div>
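The response shape documented above can be interpreted with a small helper. This is a minimal sketch, assuming `resp` is the already-deserialized response dict from an `executeBatchDml` call; the helper name `summarize_batch_dml` and the sample values are illustrative, not part of the API.

```python
def summarize_batch_dml(resp, num_statements):
    """Summarize an ExecuteBatchDml-style response dict.

    Returns (row_counts, failed_index, status), where failed_index is the
    zero-based index of the first statement that did not run, or None.
    """
    status = resp.get("status", {})
    result_sets = resp.get("resultSets", [])
    # For DML, each ResultSet carries only stats; rowCountExact is a
    # decimal string in the JSON encoding.
    row_counts = [int(rs["stats"]["rowCountExact"]) for rs in result_sets]
    # Fewer ResultSets than statements means execution stopped at the
    # first failure; its error is carried in `status`.
    failed_index = len(result_sets) if len(result_sets) < num_statements else None
    return row_counts, failed_index, status

# Illustrative example: 3 statements sent, the 3rd fails with a syntax error.
resp = {
    "status": {"code": 3, "message": "Syntax error"},  # 3 = INVALID_ARGUMENT
    "resultSets": [
        {"stats": {"rowCountExact": "1"}},
        {"stats": {"rowCountExact": "4"}},
    ],
}
counts, failed, status = summarize_batch_dml(resp, 3)
```

Comparing `len(resultSets)` against the number of statements sent is the second of the two checks the response documentation describes; checking `status` alone is equivalent when all statements succeed.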

<div class="method">
    <code class="details" id="executeSql">executeSql(session, body, x__xgafv=None)</code>
  <pre>Executes an SQL statement, returning all results in a single reply. This
method cannot be used to return a result set larger than 10 MiB;
if the query yields more data than that, the query fails with
a `FAILED_PRECONDITION` error.

Operations inside read-write transactions might return `ABORTED`. If
this occurs, the application should restart the transaction from
the beginning. See Transaction for more details.

Larger result sets can be fetched in streaming fashion by calling
ExecuteStreamingSql instead.

Args:
  session: string, Required. The session in which the SQL query should be performed. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for ExecuteSql and
    # ExecuteStreamingSql.
  "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
      # temporary read-only transaction with strong concurrency.
      #
      # The transaction to use.
      #
      # For queries, if none is provided, the default is a temporary read-only
      # transaction with strong concurrency.
      #
      # Standard DML statements require a ReadWrite transaction. Single-use
      # transactions are not supported (to avoid replay). The caller must
      # either supply an existing transaction ID or begin a new transaction.
      #
      # Partitioned DML requires an existing PartitionedDml transaction ID.
      # Read or
      # ExecuteSql call runs.
      #
      # See TransactionOptions for more information about transactions.
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
|
|
# Rollback request to abort the
|
|
# transaction.
|
|
#
|
|
# ### Semantics
|
|
#
|
|
# Cloud Spanner can commit the transaction if all read locks it acquired
|
|
# are still valid at commit time, and it is able to acquire write
|
|
# locks for all writes. Cloud Spanner can abort the transaction for any
|
|
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
|
|
# that the transaction has not modified any user data in Cloud Spanner.
|
|
#
|
|
# Unless the transaction commits, Cloud Spanner makes no guarantees about
|
|
# how long the transaction's locks were held for. It is an error to
|
|
# use Cloud Spanner locks for any sort of mutual exclusion other than
|
|
# between Cloud Spanner transactions themselves.
|
|
#
|
|
# ### Retrying Aborted Transactions
|
|
#
|
|
# When a transaction aborts, the application can choose to retry the
|
|
# whole transaction again. To maximize the chances of successfully
|
|
# committing the retry, the client should execute the retry in the
|
|
# same session as the original attempt. The original session's lock
|
|
# priority increases with each consecutive abort, meaning that each
|
|
# attempt has a slightly better chance of success than the previous.
|
|
#
|
|
# Under some circumstances (e.g., many transactions attempting to
|
|
# modify the same row(s)), a transaction can abort many times in a
|
|
# short period before successfully committing. Thus, it is not a good
|
|
# idea to cap the number of retries a transaction can attempt;
|
|
# instead, it is better to limit the total amount of wall time spent
|
|
# retrying.
|
|
#
|
|
# ### Idle Transactions
|
|
#
|
|
# A transaction is considered idle if it has no outstanding reads or
|
|
# SQL queries and has not started a read or SQL query within the last 10
|
|
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
|
|
# don't hold on to locks indefinitely. In that case, the commit will
|
|
# fail with error `ABORTED`.
|
|
#
|
|
# If this behavior is undesirable, periodically executing a simple
|
|
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
|
|
# transaction from becoming idle.
|
|
#
|
|
# ## Snapshot Read-Only Transactions
|
|
#
|
|
# Snapshot read-only transactions provides a simpler method than
|
|
# locking read-write transactions for doing several consistent
|
|
# reads. However, this type of transaction does not support writes.
|
|
#
|
|
# Snapshot transactions do not take locks. Instead, they work by
|
|
# choosing a Cloud Spanner timestamp, then executing all reads at that
|
|
# timestamp. Since they do not acquire locks, they do not block
|
|
# concurrent read-write transactions.
|
|
#
|
|
# Unlike locking read-write transactions, snapshot read-only
|
|
# transactions never abort. They can fail if the chosen read
|
|
# timestamp is garbage collected; however, the default garbage
|
|
# collection policy is generous enough that most applications do not
|
|
# need to worry about this in practice.
|
|
#
|
|
# Snapshot read-only transactions do not need to call
|
|
# Commit or
|
|
# Rollback (and in fact are not
|
|
# permitted to do so).
|
|
#
|
|
# To execute a snapshot transaction, the client specifies a timestamp
|
|
# bound, which tells Cloud Spanner how to choose a read timestamp.
|
|
#
|
|
# The types of timestamp bound are:
|
|
#
|
|
# - Strong (the default).
|
|
# - Bounded staleness.
|
|
# - Exact staleness.
|
|
#
|
|
# If the Cloud Spanner database to be read is geographically distributed,
|
|
# stale read-only transactions can execute more quickly than strong
|
|
# or read-write transaction, because they are able to execute far
|
|
# from the leader replica.
|
|
#
|
|
# Each type of timestamp bound is discussed in detail below.
|
|
#
|
|
# ### Strong
|
|
#
|
|
# Strong reads are guaranteed to see the effects of all transactions
|
|
# that have committed before the start of the read. Furthermore, all
|
|
# rows yielded by a single read are consistent with each other -- if
|
|
# any part of the read observes a transaction, all parts of the read
|
|
# see the transaction.
|
|
#
|
|
# Strong reads are not repeatable: two consecutive strong read-only
|
|
# transactions might return inconsistent results if there are
|
|
# concurrent writes. If consistency across reads is required, the
|
|
# reads should be executed within a transaction or at an exact read
|
|
# timestamp.
|
|
#
|
|
# See TransactionOptions.ReadOnly.strong.
|
|
#
|
|
# ### Exact Staleness
|
|
#
|
|
# These timestamp bounds execute reads at a user-specified
|
|
# timestamp. Reads at a timestamp are guaranteed to see a consistent
|
|
# prefix of the global transaction history: they observe
|
|
# modifications done by all transactions with a commit timestamp <=
|
|
# the read timestamp, and observe none of the modifications done by
|
|
# transactions with a larger commit timestamp. They will block until
|
|
# all conflicting transactions that may be assigned commit timestamps
|
|
# <= the read timestamp have finished.
|
|
#
|
|
# The timestamp can either be expressed as an absolute Cloud Spanner commit
|
|
# timestamp or a staleness relative to the current time.
|
|
#
|
|
# These modes do not require a "negotiation phase" to pick a
|
|
# timestamp. As a result, they execute slightly faster than the
|
|
# equivalent boundedly stale concurrency modes. On the other hand,
|
|
# boundedly stale reads usually return fresher results.
|
|
#
|
|
# See TransactionOptions.ReadOnly.read_timestamp and
|
|
# TransactionOptions.ReadOnly.exact_staleness.
|
|
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        #   must be expressible as the union of many statements which each access only
        #   a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        #   the statement is applied atomically to partitions of the table, in
        #   independent transactions. Secondary index rows are updated atomically
        #   with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        #   against a partition. The statement will be applied at least once to each
        #   partition. It is strongly recommended that the DML statement should be
        #   idempotent to avoid unexpected results. For instance, it is potentially
        #   dangerous to run a statement such as
        #   `UPDATE table SET column = column + 1` as it could be run multiple times
        #   against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        #   Commit or Rollback. If the call returns an error, or if the client issuing
        #   the ExecuteSql call dies, it is possible that some rows had the statement
        #   executed on them successfully. It is also possible that the statement was
        #   never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        #   DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #   value that cannot be stored due to schema constraints), then the
        #   operation is stopped at that point and an error is returned. It is
        #   possible that at this point, some partitions have been committed (or even
        #   committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          # transaction type has no options.
      },
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
|
|
#
|
|
# Authorization to begin a read-only transaction requires
|
|
# `spanner.databases.beginReadOnlyTransaction` permission
|
|
# on the `session` resource.
|
|
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
|
|
#
|
|
# This is useful for requesting fresher data than some previous
|
|
# read, or data that is fresh enough to observe the effects of some
|
|
# previously committed transaction whose timestamp is known.
|
|
#
|
|
# Note that this option can only be used in single-use transactions.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
|
|
# the Transaction message that describes the transaction.
|
|
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
|
|
# seconds. Guarantees that all writes that have committed more
|
|
# than the specified number of seconds ago are visible. Because
|
|
# Cloud Spanner chooses the exact timestamp, this mode works even if
|
|
# the client's local clock is substantially skewed from Cloud Spanner
|
|
# commit timestamps.
|
|
#
|
|
# Useful for reading the freshest data available at a nearby
|
|
# replica, while bounding the possible staleness if the local
|
|
# replica has fallen behind.
|
|
#
|
|
# Note that this option can only be used in single-use
|
|
# transactions.
|
|
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
|
|
# old. The timestamp is chosen soon after the read is started.
|
|
#
|
|
# Guarantees that all writes that have committed more than the
|
|
# specified number of seconds ago are visible. Because Cloud Spanner
|
|
# chooses the exact timestamp, this mode works even if the client's
|
|
# local clock is substantially skewed from Cloud Spanner commit
|
|
# timestamps.
|
|
#
|
|
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time. After the
# active transaction is completed, the session can immediately be
# re-used for the next transaction. It is not necessary to create a
# new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"seqno": "A String", # A per-transaction sequence number used to identify this request. This
# makes each request idempotent such that if the request is received multiple
# times, at most one will succeed.
#
# The sequence number must be monotonically increasing within the
# transaction. If a request arrives for the first time with an out-of-order
# sequence number, the transaction may be aborted. Replays of previously
# handled requests will yield the same response as the first execution.
#
# Required for DML statements. Ignored for queries.
"resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
# execution, `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new SQL statement execution to resume where the last one left
# off. The rest of the request parameters must exactly match the
# request that yielded this token.
"partitionToken": "A String", # If present, results will be restricted to the specified partition
# previously created using PartitionQuery(). There must be an exact
# match for the values of fields common to this message and the
# PartitionQueryRequest message used to create this partition_token.
"paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL statement parameters. See the
# definition of Type for more information
# about SQL types.
"a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
# provides type information for the struct's fields.
"code": "A String", # Required. The TypeCode for this type.
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
},
},
"queryMode": "A String", # Used to control the amount of debugging information returned in
# ResultSetStats. If partition_token is set, query_mode can only
# be set to QueryMode.NORMAL.
"sql": "A String", # Required. The SQL string.
"params": { # The SQL string can contain parameter placeholders. A parameter
# placeholder consists of `'@'` followed by the parameter
# name. Parameter names consist of any combination of letters,
# numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The same
# parameter name can be used more than once, for example:
# `"WHERE id > @msg_id AND id < @msg_id + 100"`
#
# It is an error to execute an SQL statement with unbound parameters.
#
# Parameter values are specified using `params`, which is a JSON
# object whose keys are parameter names, and whose values are the
# corresponding parameter values.
"a_key": "", # Properties of the object.
},
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # Results from Read or
# ExecuteSql.
"rows": [ # Each element in `rows` is a row whose format is defined by
# metadata.row_type. The ith element
# in each row matches the ith field in
# metadata.row_type. Elements are
# encoded based on type as described
# here.
[
"",
],
],
"stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
# produced this result set. These can be requested by setting
# ExecuteSqlRequest.query_mode.
# DML statements always produce stats containing the number of rows
# modified, unless executed using the
# ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
# Other fields may or may not be populated, based on the
# ExecuteSqlRequest.query_mode.
"rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
"rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
"queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
"planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode's `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
"index": 42, # The `PlanNode`'s index in node list.
"kind": "A String", # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
"displayName": "A String", # The display name for the node.
"executionStats": { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
"a_key": "", # Properties of the object.
},
"childLinks": [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
"variable": "A String", # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
"childIndex": 42, # The node to which the link points.
"type": "A String", # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
},
],
"shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
"subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
"a_key": 42,
},
"description": "A String", # A string representation of the expression subtree rooted at this node.
},
"metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# "parameter_reference": "param1",
# "parameter_type": "array"
# }
"a_key": "", # Properties of the object.
},
},
],
},
"queryStats": { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# "rows_returned": "3",
# "elapsed_time": "1.22 secs",
# "cpu_time": "1.19 secs"
# }
"a_key": "", # Properties of the object.
},
},
"metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
"rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `"SELECT UserId, UserName FROM
# Users"` could return a `row_type` value like:
#
# "fields": [
# { "name": "UserId", "type": { "code": "INT64" } },
# { "name": "UserName", "type": { "code": "STRING" } },
# ]
"fields": [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
"type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
# table cell or returned from an SQL query.
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
# provides type information for the struct's fields.
"code": "A String", # Required. The TypeCode for this type.
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
},
"name": "A String", # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `"Word"` in the
# query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
# `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
# columns might have an empty name (e.g., `"SELECT
# UPPER(ColName)"`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
"transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
"readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"id": "A String", # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
},
}</pre>
</div>
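The reference above notes that the `executeSql` request body carries the SQL string, its parameter values, optional `paramTypes` hints, and a transaction selector. As a minimal sketch, the body is an ordinary JSON-serializable dict; the `msg_id` parameter name, the `Messages` table, and the 10-second staleness bound below are illustrative assumptions, not part of the API:

```python
# Sketch of an executeSql request body. All names in the SQL are
# hypothetical; only the field names (sql, params, paramTypes,
# transaction) come from the reference above.
body = {
    "sql": "SELECT Id, Subject FROM Messages WHERE Id > @msg_id",
    # Parameter values go in `params`. Types that JSON cannot
    # distinguish (e.g. INT64 vs STRING both appear as JSON strings)
    # are pinned explicitly in `paramTypes`.
    "params": {"msg_id": "42"},
    "paramTypes": {"msg_id": {"code": "INT64"}},
    # A temporary single-use read-only transaction with bounded
    # staleness of at most 10 seconds (the most efficient choice for
    # a transaction consisting of a single query).
    "transaction": {
        "singleUse": {"readOnly": {"maxStaleness": "10s"}}
    },
}
```

Because bounded staleness requires up-front timestamp negotiation, it is only valid under `singleUse`; a `begin` or `id` selector would need a strong or exact-staleness bound instead.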
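The "Retrying Aborted Transactions" notes above advise bounding total wall time spent retrying rather than capping attempt counts. A minimal client-side sketch of that policy, using a stand-in `Aborted` exception and a simulated `flaky_commit` instead of real Spanner calls:

```python
import time

class Aborted(Exception):
    """Stand-in for the ABORTED error a failed commit attempt returns."""

def commit_with_retries(work, max_wall_seconds=32.0, base_delay=0.05):
    """Retry `work` on Aborted, capping total wall time, not attempts."""
    deadline = time.monotonic() + max_wall_seconds
    delay = base_delay
    while True:
        try:
            return work()
        except Aborted:
            if time.monotonic() + delay > deadline:
                raise  # Budget exhausted: surface ABORTED to the caller.
            time.sleep(delay)
            delay = min(delay * 2, 1.0)  # Exponential backoff, capped.

# Simulated transaction function that aborts twice before committing.
attempts = {"n": 0}
def flaky_commit():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise Aborted()
    return "committed"
```

Per the notes above, a real retry should also reuse the same session as the original attempt, so the session's growing lock priority improves each attempt's odds.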
<div class="method">
|
|
<code class="details" id="executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</code>
|
|
<pre>Like ExecuteSql, except returns the result
|
|
set as a stream. Unlike ExecuteSql, there
|
|
is no limit on the size of the returned result set. However, no
|
|
individual row in the result set can exceed 100 MiB, and no
|
|
column value can exceed 10 MiB.
|
|
|
|
Args:
|
|
session: string, Required. The session in which the SQL query should be performed. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for ExecuteSql and
|
|
# ExecuteStreamingSql.
|
|
"transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
|
|
# temporary read-only transaction with strong concurrency.
|
|
#
|
|
# The transaction to use.
|
|
#
|
|
# For queries, if none is provided, the default is a temporary read-only
|
|
# transaction with strong concurrency.
|
|
#
|
|
# Standard DML statements require a ReadWrite transaction. Single-use
|
|
# transactions are not supported (to avoid replay). The caller must
|
|
# either supply an existing transaction ID or begin a new transaction.
|
|
#
|
|
# Partitioned DML requires an existing PartitionedDml transaction ID.
|
|
# Read or
|
|
# ExecuteSql call runs.
|
|
#
|
|
# See TransactionOptions for more information about transactions.
|
|
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
        # writes. Snapshot read-only transactions can be configured to
        # read at timestamps in the past. Snapshot read-only
        # transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        # a single Partitioned DML statement. Partitioned DML partitions
        # the key space and runs the DML statement over each partition
        # in parallel using separate, internal transactions that commit
        # independently. Partitioned DML transactions do not need to be
        # committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        # - Strong (the default).
        # - Bounded staleness.
        # - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp <=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # <= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        # must be expressible as the union of many statements which each access only
        # a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        # the statement is applied atomically to partitions of the table, in
        # independent transactions. Secondary index rows are updated atomically
        # with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        # against a partition. The statement will be applied at least once to each
        # partition. It is strongly recommended that the DML statement should be
        # idempotent to avoid unexpected results. For instance, it is potentially
        # dangerous to run a statement such as
        # `UPDATE table SET column = column + 1` as it could be run multiple times
        # against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        # Commit or Rollback. If the call returns an error, or if the client issuing
        # the ExecuteSql call dies, it is possible that some rows had the statement
        # executed on them successfully. It is also possible that the statement was
        # never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        # DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        # value that cannot be stored due to schema constraints), then the
        # operation is stopped at that point and an error is returned. It is
        # possible that at this point, some partitions have been committed (or even
        # committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          # transaction type has no options.
      },
      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
    "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
        # This is the most efficient way to execute a transaction that
        # consists of a single SQL query.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        # to write data into Cloud Spanner. These transactions rely on
        # pessimistic locking and, if necessary, two-phase commit.
        # Locking read-write transactions may abort, requiring the
        # application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        # consistency across several reads, but does not allow
        # writes. Snapshot read-only transactions can be configured to
        # read at timestamps in the past. Snapshot read-only
        # transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        # a single Partitioned DML statement. Partitioned DML partitions
        # the key space and runs the DML statement over each partition
        # in parallel using separate, internal transactions that commit
        # independently. Partitioned DML transactions do not need to be
        # committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        # - Strong (the default).
        # - Bounded staleness.
        # - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp <=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # <= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        # must be expressible as the union of many statements which each access only
        # a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        # the statement is applied atomically to partitions of the table, in
        # independent transactions. Secondary index rows are updated atomically
        # with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        # against a partition. The statement will be applied at least once to each
        # partition. It is strongly recommended that the DML statement should be
        # idempotent to avoid unexpected results. For instance, it is potentially
        # dangerous to run a statement such as
        # `UPDATE table SET column = column + 1` as it could be run multiple times
        # against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        # Commit or Rollback. If the call returns an error, or if the client issuing
        # the ExecuteSql call dies, it is possible that some rows had the statement
        # executed on them successfully. It is also possible that the statement was
        # never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        # DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        # value that cannot be stored due to schema constraints), then the
        # operation is stopped at that point and an error is returned. It is
        # possible that at this point, some partitions have been committed (or even
        # committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          # transaction type has no options.
      },
      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
    "id": "A String", # Execute the read or SQL query in a previously-started transaction.
  },
  "seqno": "A String", # A per-transaction sequence number used to identify this request. This
      # makes each request idempotent such that if the request is received multiple
      # times, at most one will succeed.
      #
      # The sequence number must be monotonically increasing within the
      # transaction. If a request arrives for the first time with an out-of-order
      # sequence number, the transaction may be aborted. Replays of previously
      # handled requests will yield the same response as the first execution.
      #
      # Required for DML statements. Ignored for queries.
  "resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
      # execution, `resume_token` should be copied from the last
      # PartialResultSet yielded before the interruption. Doing this
      # enables the new SQL statement execution to resume where the last one left
      # off. The rest of the request parameters must exactly match the
      # request that yielded this token.
  "partitionToken": "A String", # If present, results will be restricted to the specified partition
      # previously created using PartitionQuery(). There must be an exact
      # match for the values of fields common to this message and the
      # PartitionQueryRequest message used to create this partition_token.
  "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
      # from a JSON value. For example, values of type `BYTES` and values
      # of type `STRING` both appear in params as JSON strings.
      #
      # In these cases, `param_types` can be used to specify the exact
      # SQL type for some or all of the SQL statement parameters. See the
      # definition of Type for more information
      # about SQL types.
    "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
        # table cell or returned from an SQL query.
      "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
          # provides type information for the struct's fields.
      "code": "A String", # Required. The TypeCode for this type.
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
|
|
# is the type of the array elements.
|
|
},
|
|
},
|
|
"queryMode": "A String", # Used to control the amount of debugging information returned in
|
|
# ResultSetStats. If partition_token is set, query_mode can only
|
|
# be set to QueryMode.NORMAL.
|
|
"sql": "A String", # Required. The SQL string.
|
|
"params": { # The SQL string can contain parameter placeholders. A parameter
|
|
# placeholder consists of `'@'` followed by the parameter
|
|
# name. Parameter names consist of any combination of letters,
|
|
# numbers, and underscores.
|
|
#
|
|
# Parameters can appear anywhere that a literal value is expected. The same
|
|
# parameter name can be used more than once, for example:
|
|
# `"WHERE id > @msg_id AND id < @msg_id + 100"`
|
|
#
|
|
# It is an error to execute an SQL statement with unbound parameters.
|
|
#
|
|
# Parameter values are specified using `params`, which is a JSON
|
|
# object whose keys are parameter names, and whose values are the
|
|
# corresponding parameter values.
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
}
|
|
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Partial results from a streaming read or SQL query. Streaming reads and
|
|
# SQL queries better tolerate large result sets, large rows, and large
|
|
# values, but are a little trickier to consume.
|
|
"resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
|
|
# as TCP connection loss. If this occurs, the stream of results can
|
|
# be resumed by re-sending the original request and including
|
|
# `resume_token`. Note that executing any other transaction in the
|
|
# same session invalidates the token.
|
|
"chunkedValue": True or False, # If true, then the final value in values is chunked, and must
|
|
# be combined with more values from subsequent `PartialResultSet`s
|
|
# to obtain a complete field value.
|
|
"values": [ # A streamed result set consists of a stream of values, which might
|
|
# be split into many `PartialResultSet` messages to accommodate
|
|
# large rows and/or large values. Every N complete values defines a
|
|
# row, where N is equal to the number of entries in
|
|
# metadata.row_type.fields.
|
|
#
|
|
# Most values are encoded based on type as described
|
|
# here.
|
|
#
|
|
# It is possible that the last value in values is "chunked",
|
|
# meaning that the rest of the value is sent in subsequent
|
|
# `PartialResultSet`(s). This is denoted by the chunked_value
|
|
# field. Two or more chunked values can be merged to form a
|
|
# complete value as follows:
|
|
#
|
|
# * `bool/number/null`: cannot be chunked
|
|
# * `string`: concatenate the strings
|
|
# * `list`: concatenate the lists. If the last element in a list is a
|
|
# `string`, `list`, or `object`, merge it with the first element in
|
|
# the next list by applying these rules recursively.
|
|
# * `object`: concatenate the (field name, field value) pairs. If a
|
|
# field name is duplicated, then apply these rules recursively
|
|
# to merge the field values.
|
|
#
|
|
# Some examples of merging:
|
|
#
|
|
# # Strings are concatenated.
|
|
# "foo", "bar" => "foobar"
|
|
#
|
|
# # Lists of non-strings are concatenated.
|
|
# [2, 3], [4] => [2, 3, 4]
|
|
#
|
|
# # Lists are concatenated, but the last and first elements are merged
|
|
# # because they are strings.
|
|
# ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
|
|
#
|
|
# # Lists are concatenated, but the last and first elements are merged
|
|
# # because they are lists. Recursively, the last and first elements
|
|
# # of the inner lists are merged because they are strings.
|
|
# ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
|
|
#
|
|
# # Non-overlapping object fields are combined.
|
|
# {"a": "1"}, {"b": "2"} => {"a": "1", "b": 2"}
|
|
#
|
|
# # Overlapping object fields are merged.
|
|
# {"a": "1"}, {"a": "2"} => {"a": "12"}
|
|
#
|
|
# # Examples of merging objects containing lists of strings.
|
|
# {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
|
|
#
|
|
# For a more complete example, suppose a streaming SQL query is
|
|
# yielding a result set whose rows contain a single string
|
|
# field. The following `PartialResultSet`s might be yielded:
|
|
#
|
|
# {
|
|
# "metadata": { ... }
|
|
# "values": ["Hello", "W"]
|
|
# "chunked_value": true
|
|
# "resume_token": "Af65..."
|
|
# }
|
|
# {
|
|
# "values": ["orl"]
|
|
# "chunked_value": true
|
|
# "resume_token": "Bqp2..."
|
|
# }
|
|
# {
|
|
# "values": ["d"]
|
|
# "resume_token": "Zx1B..."
|
|
# }
|
|
#
|
|
# This sequence of `PartialResultSet`s encodes two rows, one
|
|
# containing the field value `"Hello"`, and a second containing the
|
|
# field value `"World" = "W" + "orl" + "d"`.
|
|
"",
|
|
],
|
|
"stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
|
|
# streaming result set. These can be requested by setting
|
|
# ExecuteSqlRequest.query_mode and are sent
|
|
# only once with the last response in the stream.
|
|
# This field will also be present in the last response for DML
|
|
# statements.
|
|
"rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
|
|
# returns a lower bound of the rows modified.
|
|
"rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
|
|
"queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
|
|
"planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
|
|
# with the plan root. Each PlanNode's `id` corresponds to its index in
|
|
# `plan_nodes`.
|
|
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
|
|
"index": 42, # The `PlanNode`'s index in node list.
|
|
"kind": "A String", # Used to determine the type of node. May be needed for visualizing
|
|
# different kinds of nodes differently. For example, If the node is a
|
|
# SCALAR node, it will have a condensed representation
|
|
# which can be used to directly embed a description of the node in its
|
|
# parent.
|
|
"displayName": "A String", # The display name for the node.
|
|
"executionStats": { # The execution statistics associated with the node, contained in a group of
|
|
# key-value pairs. Only present if the plan was returned as a result of a
|
|
# profile query. For example, number of executions, number of rows/time per
|
|
# execution etc.
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
"childLinks": [ # List of child node `index`es and their relationship to this parent.
|
|
{ # Metadata associated with a parent-child relationship appearing in a
|
|
# PlanNode.
|
|
"variable": "A String", # Only present if the child node is SCALAR and corresponds
|
|
# to an output variable of the parent node. The field carries the name of
|
|
# the output variable.
|
|
# For example, a `TableScan` operator that reads rows from a table will
|
|
# have child links to the `SCALAR` nodes representing the output variables
|
|
# created for each column that is read by the operator. The corresponding
|
|
# `variable` fields will be set to the variable names assigned to the
|
|
# columns.
|
|
"childIndex": 42, # The node to which the link points.
|
|
"type": "A String", # The type of the link. For example, in Hash Joins this could be used to
|
|
# distinguish between the build child and the probe child, or in the case
|
|
# of the child being an output variable, to represent the tag associated
|
|
# with the output variable.
|
|
},
|
|
],
|
|
"shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
|
|
# `SCALAR` PlanNode(s).
|
|
"subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
|
|
# where the `description` string of this node references a `SCALAR`
|
|
# subquery contained in the expression subtree rooted at this node. The
|
|
# referenced `SCALAR` subquery may not necessarily be a direct child of
|
|
# this node.
|
|
"a_key": 42,
|
|
},
|
|
"description": "A String", # A string representation of the expression subtree rooted at this node.
|
|
},
|
|
"metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
|
|
# For example, a Parameter Reference node could have the following
|
|
# information in its metadata:
|
|
#
|
|
# {
|
|
# "parameter_reference": "param1",
|
|
# "parameter_type": "array"
|
|
# }
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
},
|
|
],
|
|
},
|
|
"queryStats": { # Aggregated statistics from the execution of the query. Only present when
|
|
# the query is profiled. For example, a query could return the statistics as
|
|
# follows:
|
|
#
|
|
# {
|
|
# "rows_returned": "3",
|
|
# "elapsed_time": "1.22 secs",
|
|
# "cpu_time": "1.19 secs"
|
|
# }
|
|
"a_key": "", # Properties of the object.
|
|
},
|
|
},
|
|
"metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
|
|
# Only present in the first response.
|
|
"rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
|
|
# set. For example, a SQL query like `"SELECT UserId, UserName FROM
|
|
# Users"` could return a `row_type` value like:
|
|
#
|
|
# "fields": [
|
|
# { "name": "UserId", "type": { "code": "INT64" } },
|
|
# { "name": "UserName", "type": { "code": "STRING" } },
|
|
# ]
|
|
"fields": [ # The list of fields that make up this struct. Order is
|
|
# significant, because values of this struct type are represented as
|
|
# lists, where the order of field values matches the order of
|
|
# fields in the StructType. In turn, the order of fields
|
|
# matches the order of columns in a read request, or the order of
|
|
# fields in the `SELECT` clause of a query.
|
|
{ # Message representing a single field of a struct.
|
|
"type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
|
|
# table cell or returned from an SQL query.
|
|
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
|
|
# provides type information for the struct's fields.
|
|
"code": "A String", # Required. The TypeCode for this type.
|
|
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
|
|
# is the type of the array elements.
|
|
},
|
|
"name": "A String", # The name of the field. For reads, this is the column name. For
|
|
# SQL queries, it is the column alias (e.g., `"Word"` in the
|
|
# query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
|
|
# `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
|
|
# columns might have an empty name (e.g., !"SELECT
|
|
# UPPER(ColName)"`). Note that a query result can contain
|
|
# multiple fields with the same name.
|
|
},
|
|
],
|
|
},
|
|
"transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
|
|
# information about the new transaction is yielded here.
|
|
"readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
|
|
# for the transaction. Not returned by default: see
|
|
# TransactionOptions.ReadOnly.return_read_timestamp.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"id": "A String", # `id` may be used to identify the transaction in subsequent
|
|
# Read,
|
|
# ExecuteSql,
|
|
# Commit, or
|
|
# Rollback calls.
|
|
#
|
|
# Single-use read-only transactions do not have IDs, because
|
|
# single-use transactions do not support multiple requests.
|
|
},
|
|
},
|
|
}</pre>
|
|
</div>
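The chunked-value merge rules quoted in the response documentation above are mechanical enough to implement directly. The helper below is an illustrative sketch (it is not part of the generated client); it applies the documented rules for strings, lists, and objects recursively.

```python
# Sketch: merging a chunked PartialResultSet value with its continuation,
# following the documented rules. Not part of the generated client.

def merge_chunked(prev, curr):
    """Merge chunked value `prev` with its continuation `curr`."""
    if isinstance(prev, str) and isinstance(curr, str):
        # Strings are concatenated.
        return prev + curr
    if isinstance(prev, list) and isinstance(curr, list):
        # Lists are concatenated; if the last element of `prev` and the
        # first element of `curr` are both mergeable, merge them recursively.
        if (prev and curr and type(prev[-1]) is type(curr[0])
                and isinstance(prev[-1], (str, list, dict))):
            return prev[:-1] + [merge_chunked(prev[-1], curr[0])] + curr[1:]
        return prev + curr
    if isinstance(prev, dict) and isinstance(curr, dict):
        # Objects: combine fields, merging duplicated field names recursively.
        merged = dict(prev)
        for key, value in curr.items():
            merged[key] = merge_chunked(merged[key], value) if key in merged else value
        return merged
    raise TypeError("bool/number/null values cannot be chunked")
```

Running this against the worked examples in the documentation reproduces them, e.g. `merge_chunked(["a", "b"], ["c", "d"])` yields `["a", "bc", "d"]`, and folding the three example `PartialResultSet`s together yields the values `["Hello", "World"]`.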
<div class="method">
<code class="details" id="get">get(name, x__xgafv=None)</code>
<pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
This is mainly useful for determining whether a session is still
alive.

Args:
name: string, Required. The name of the session to retrieve. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # A session in the Cloud Spanner API.
"labels": { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
"a_key": "A String",
},
"name": "A String", # The name of the session. This is always system-assigned; values provided
# when creating a session are ignored.
"approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
# typically earlier than the actual last use time.
"createTime": "A String", # Output only. The timestamp when the session was created.
}</pre>
</div>
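The session label constraints listed above (key and value patterns, length limits, and the 64-label cap) can be checked client-side before sending a request. The following is a hypothetical helper, not part of the generated client, that mirrors those documented rules:

```python
import re

# Patterns quoted from the session label documentation above.
_KEY_RE = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")
_VALUE_RE = re.compile(r"([a-z]([-a-z0-9]*[a-z0-9])?)?")

def validate_session_labels(labels):
    """Raise ValueError if `labels` violates the documented constraints."""
    if len(labels) > 64:
        raise ValueError("no more than 64 labels can be associated with a session")
    for key, value in labels.items():
        # Keys: 1-63 chars, matching the key pattern exactly.
        if not (1 <= len(key) <= 63) or not _KEY_RE.fullmatch(key):
            raise ValueError("invalid label key: %r" % key)
        # Values: 0-63 chars, matching the value pattern exactly.
        if len(value) > 63 or not _VALUE_RE.fullmatch(value):
            raise ValueError("invalid label value: %r" % value)
```

For example, `validate_session_labels({"env": "dev"})` passes, while an uppercase key such as `"Env"` raises `ValueError`.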
<div class="method">
<code class="details" id="list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</code>
<pre>Lists all sessions in a given database.

Args:
database: string, Required. The database in which to list sessions. (required)
pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
to the server's maximum allowed page size.
filter: string, An expression for filtering the results of the request. Filter rules are
case insensitive. The fields eligible for filtering are:

* `labels.key` where key is the name of a label

Some examples of using filters are:

* `labels.env:*` --> The session has the label "env".
* `labels.env:dev` --> The session has the label "env" and the value of
the label contains the string "dev".
pageToken: string, If non-empty, `page_token` should contain a
next_page_token from a previous
ListSessionsResponse.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # The response for ListSessions.
"nextPageToken": "A String", # `next_page_token` can be sent in a subsequent
# ListSessions call to fetch more of the matching
# sessions.
"sessions": [ # The list of requested sessions.
{ # A session in the Cloud Spanner API.
"labels": { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
"a_key": "A String",
},
"name": "A String", # The name of the session. This is always system-assigned; values provided
# when creating a session are ignored.
"approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
# typically earlier than the actual last use time.
"createTime": "A String", # Output only. The timestamp when the session was created.
},
],
}</pre>
</div>
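The filter semantics documented above (`labels.&lt;key&gt;:*` tests label existence; `labels.&lt;key&gt;:&lt;substr&gt;` tests that the label value contains the substring, case-insensitively) can be sketched client-side. This is a hypothetical illustration of the documented behavior, not the server's actual filter engine:

```python
# Illustrative client-side mirror of the documented session filter rules.
# `session` is a dict shaped like the Session resource above.

def session_matches(session, filter_expr):
    """Return True if `session` would match the given label filter."""
    field, _, want = filter_expr.partition(":")
    if not field.startswith("labels."):
        raise ValueError("only labels.<key> filters are illustrated here")
    key = field[len("labels."):]
    labels = session.get("labels", {})
    if key not in labels:
        return False
    # `*` means the label merely has to exist; otherwise the value must
    # contain the requested substring (filter rules are case insensitive).
    return want == "*" or want.lower() in labels[key].lower()
```

For example, a session labeled `{"env": "development"}` matches both `labels.env:*` and `labels.env:dev`, but not `labels.region:us`.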
<div class="method">
<code class="details" id="list_next">list_next(previous_request, previous_response)</code>
<pre>Retrieves the next page of results.

Args:
previous_request: The request for the previous page. (required)
previous_response: The response from the request for the previous page. (required)

Returns:
A request object that you can call 'execute()' on to request the next
page. Returns None if there are no more items in the collection.
</pre>
</div>
|
|
|
|
<div class="method">
|
|
<code class="details" id="partitionQuery">partitionQuery(session, body, x__xgafv=None)</code>
|
|
<pre>Creates a set of partition tokens that can be used to execute a query
|
|
operation in parallel. Each of the returned partition tokens can be used
|
|
by ExecuteStreamingSql to specify a subset
|
|
of the query result to read. The same session and read-only transaction
|
|
must be used by the PartitionQueryRequest used to create the
|
|
partition tokens and the ExecuteSqlRequests that use the partition tokens.
|
|
|
|
Partition tokens become invalid when the session used to create them
|
|
is deleted, is idle for too long, begins a new transaction, or becomes too
|
|
old. When any of these happen, it is not possible to resume the query, and
|
|
the whole operation must be restarted from the beginning.
|
|
|
|
Args:
|
|
session: string, Required. The session used to create the partitions. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for PartitionQuery
|
|
"paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
|
|
# from a JSON value. For example, values of type `BYTES` and values
|
|
# of type `STRING` both appear in params as JSON strings.
|
|
#
|
|
# In these cases, `param_types` can be used to specify the exact
|
|
# SQL type for some or all of the SQL query parameters. See the
|
|
# definition of Type for more information
|
|
# about SQL types.
|
|
"a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
|
|
# table cell or returned from an SQL query.
|
|
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
|
|
# provides type information for the struct's fields.
|
|
"code": "A String", # Required. The TypeCode for this type.
|
|
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
|
|
# is the type of the array elements.
|
|
},
|
|
},
|
|
"partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
|
|
# PartitionReadRequest.
|
|
"maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
|
|
# PartitionRead requests.
|
|
#
|
|
# The desired maximum number of partitions to return. For example, this may
|
|
# be set to the number of workers available. The default for this option
|
|
# is currently 10,000. The maximum value is currently 200,000. This is only
|
|
# a hint. The actual number of partitions returned may be smaller or larger
|
|
# than this maximum count request.
|
|
"partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
|
|
# PartitionRead requests.
|
|
#
|
|
# The desired data size for each partition generated. The default for this
|
|
# option is currently 1 GiB. This is only a hint. The actual size of each
|
|
# partition may be smaller or larger than this size request.
|
|
},
|
|
"transaction": { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
|
|
# transactions are not.
|
|
# Read or
|
|
# ExecuteSql call runs.
|
|
#
|
|
# See TransactionOptions for more information about transactions.
|
|
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
|
|
# Rollback request to abort the
|
|
# transaction.
|
|
#
|
|
# ### Semantics
|
|
#
|
|
# Cloud Spanner can commit the transaction if all read locks it acquired
|
|
# are still valid at commit time, and it is able to acquire write
|
|
# locks for all writes. Cloud Spanner can abort the transaction for any
|
|
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
|
|
# that the transaction has not modified any user data in Cloud Spanner.
|
|
#
|
|
# Unless the transaction commits, Cloud Spanner makes no guarantees about
|
|
# how long the transaction's locks were held for. It is an error to
|
|
# use Cloud Spanner locks for any sort of mutual exclusion other than
|
|
# between Cloud Spanner transactions themselves.
|
|
#
|
|
# ### Retrying Aborted Transactions
|
|
#
|
|
# When a transaction aborts, the application can choose to retry the
|
|
# whole transaction again. To maximize the chances of successfully
|
|
# committing the retry, the client should execute the retry in the
|
|
# same session as the original attempt. The original session's lock
|
|
# priority increases with each consecutive abort, meaning that each
|
|
# attempt has a slightly better chance of success than the previous.
|
|
#
|
|
# Under some circumstances (e.g., many transactions attempting to
|
|
# modify the same row(s)), a transaction can abort many times in a
|
|
# short period before successfully committing. Thus, it is not a good
|
|
# idea to cap the number of retries a transaction can attempt;
|
|
# instead, it is better to limit the total amount of wall time spent
|
|
# retrying.
|
|
#
|
|
# ### Idle Transactions
|
|
#
|
|
# A transaction is considered idle if it has no outstanding reads or
|
|
# SQL queries and has not started a read or SQL query within the last 10
|
|
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
|
|
# don't hold on to locks indefinitely. In that case, the commit will
|
|
# fail with error `ABORTED`.
|
|
#
|
|
# If this behavior is undesirable, periodically executing a simple
|
|
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
|
|
# transaction from becoming idle.
|
|
#
|
|
# ## Snapshot Read-Only Transactions
|
|
#
|
|
# Snapshot read-only transactions provides a simpler method than
|
|
# locking read-write transactions for doing several consistent
|
|
# reads. However, this type of transaction does not support writes.
|
|
#
|
|
# Snapshot transactions do not take locks. Instead, they work by
|
|
# choosing a Cloud Spanner timestamp, then executing all reads at that
|
|
# timestamp. Since they do not acquire locks, they do not block
|
|
# concurrent read-write transactions.
|
|
#
|
|
# Unlike locking read-write transactions, snapshot read-only
|
|
# transactions never abort. They can fail if the chosen read
|
|
# timestamp is garbage collected; however, the default garbage
|
|
# collection policy is generous enough that most applications do not
|
|
# need to worry about this in practice.
|
|
#
|
|
# Snapshot read-only transactions do not need to call
|
|
# Commit or
|
|
# Rollback (and in fact are not
|
|
# permitted to do so).
|
|
#
|
|
# To execute a snapshot transaction, the client specifies a timestamp
|
|
# bound, which tells Cloud Spanner how to choose a read timestamp.
|
|
#
|
|
# The types of timestamp bound are:
|
|
#
|
|
# - Strong (the default).
|
|
# - Bounded staleness.
|
|
# - Exact staleness.
|
|
#
|
|
# If the Cloud Spanner database to be read is geographically distributed,
|
|
# stale read-only transactions can execute more quickly than strong
|
|
# or read-write transaction, because they are able to execute far
|
|
# from the leader replica.
|
|
#
|
|
# Each type of timestamp bound is discussed in detail below.
|
|
#
|
|
# ### Strong
|
|
#
|
|
# Strong reads are guaranteed to see the effects of all transactions
|
|
# that have committed before the start of the read. Furthermore, all
|
|
# rows yielded by a single read are consistent with each other -- if
|
|
# any part of the read observes a transaction, all parts of the read
|
|
# see the transaction.
|
|
#
|
|
# Strong reads are not repeatable: two consecutive strong read-only
|
|
# transactions might return inconsistent results if there are
|
|
# concurrent writes. If consistency across reads is required, the
|
|
# reads should be executed within a transaction or at an exact read
|
|
# timestamp.
|
|
#
|
|
# See TransactionOptions.ReadOnly.strong.
|
|
#
|
|
# ### Exact Staleness
|
|
#
|
|
# These timestamp bounds execute reads at a user-specified
|
|
# timestamp. Reads at a timestamp are guaranteed to see a consistent
|
|
# prefix of the global transaction history: they observe
|
|
# modifications done by all transactions with a commit timestamp <=
|
|
# the read timestamp, and observe none of the modifications done by
|
|
# transactions with a larger commit timestamp. They will block until
|
|
# all conflicting transactions that may be assigned commit timestamps
|
|
# <= the read timestamp have finished.
|
|
#
|
|
# The timestamp can either be expressed as an absolute Cloud Spanner commit
|
|
# timestamp or a staleness relative to the current time.
|
|
#
|
|
# These modes do not require a "negotiation phase" to pick a
|
|
# timestamp. As a result, they execute slightly faster than the
|
|
# equivalent boundedly stale concurrency modes. On the other hand,
|
|
# boundedly stale reads usually return fresher results.
|
|
#
|
|
# See TransactionOptions.ReadOnly.read_timestamp and
|
|
# TransactionOptions.ReadOnly.exact_staleness.
|
|
#
|
|
# ### Bounded Staleness
|
|
#
|
|
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
|
|
# subject to a user-provided staleness bound. Cloud Spanner chooses the
|
|
# newest timestamp within the staleness bound that allows execution
|
|
# of the reads at the closest available replica without blocking.
|
|
#
|
|
# All rows yielded are consistent with each other -- if any part of
|
|
# the read observes a transaction, all parts of the read see the
|
|
# transaction. Boundedly stale reads are not repeatable: two stale
|
|
# reads, even if they use the same staleness bound, can execute at
|
|
# different timestamps and thus return inconsistent results.
|
|
#
|
|
# Boundedly stale reads execute in two phases: the first phase
|
|
# negotiates a timestamp among all replicas needed to serve the
|
|
# read. In the second phase, reads are executed at the negotiated
|
|
# timestamp.
|
|
#
|
|
# As a result of the two phase execution, bounded staleness reads are
|
|
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement should be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time. After the
# active transaction is completed, the session can immediately be
# re-used for the next transaction. It is not necessary to create a
# new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement should be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"params": { # The SQL query string can contain parameter placeholders. A parameter
# placeholder consists of `'@'` followed by the parameter
# name. Parameter names consist of any combination of letters,
# numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The same
# parameter name can be used more than once, for example:
# `"WHERE id > @msg_id AND id < @msg_id + 100"`
#
# It is an error to execute an SQL query with unbound parameters.
#
# Parameter values are specified using `params`, which is a JSON
# object whose keys are parameter names, and whose values are the
# corresponding parameter values.
"a_key": "", # Properties of the object.
},
"sql": "A String", # The query request to generate partitions for. The request will fail if
# the query is not root partitionable. The query plan of a root
# partitionable query has a single distributed union operator. A distributed
# union operator conceptually divides one or more tables into multiple
# splits, remotely evaluates a subquery independently on each split, and
# then unions all results.
#
# This must not contain DML commands, such as INSERT, UPDATE, or
# DELETE. Use ExecuteStreamingSql with a
# PartitionedDml transaction for large, partition-friendly DML operations.
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # The response for PartitionQuery
# or PartitionRead
"transaction": { # A transaction. # Transaction created by this request.
"readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"id": "A String", # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
"partitions": [ # Partitions created by this request.
{ # Information returned for each partition returned in a
# PartitionResponse.
"partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
# ExecuteStreamingSql requests to restrict the results to those identified by
# this partition token.
},
],
}</pre>
</div>

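The request body for partitionQuery documented above is plain JSON. As a hedged sketch (field names taken from the docstring above; the table name and staleness value are hypothetical placeholders, not from this document), it can be assembled in Python before being passed as the `body` argument to the generated client's `partitionQuery(session=..., body=...)` call:

```python
# Hedged sketch: build a partitionQuery request body as a plain dict,
# mirroring the fields documented above. Table name and staleness value
# are illustrative placeholders.

def read_only_selector(exact_staleness=None):
    """Return the `transaction` field for a partitionQuery request.

    Partition methods accept read-only snapshot transactions. When
    `exact_staleness` (a duration string such as '15s') is omitted,
    a strong read is requested instead.
    """
    if exact_staleness is not None:
        # Ask Cloud Spanner to report the timestamp it picked.
        opts = {'exactStaleness': exact_staleness, 'returnReadTimestamp': True}
    else:
        opts = {'strong': True}
    return {'begin': {'readOnly': opts}}


# The query must be root-partitionable and must not contain DML.
body = {
    'sql': 'SELECT SingerId, FirstName FROM Singers',  # hypothetical table
    'transaction': read_only_selector(exact_staleness='15s'),
}
```

Each `partitionToken` in the response can then be passed to ExecuteStreamingSql together with the same session and the transaction `id` returned in the response.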
<div class="method">
|
|
<code class="details" id="partitionRead">partitionRead(session, body, x__xgafv=None)</code>
|
|
<pre>Creates a set of partition tokens that can be used to execute a read
|
|
operation in parallel. Each of the returned partition tokens can be used
|
|
by StreamingRead to specify a subset of the read
|
|
result to read. The same session and read-only transaction must be used by
|
|
the PartitionReadRequest used to create the partition tokens and the
|
|
ReadRequests that use the partition tokens. There are no ordering
|
|
guarantees on rows returned among the returned partition tokens, or even
|
|
within each individual StreamingRead call issued with a partition_token.
|
|
|
|
Partition tokens become invalid when the session used to create them
|
|
is deleted, is idle for too long, begins a new transaction, or becomes too
|
|
old. When any of these happen, it is not possible to resume the read, and
|
|
the whole operation must be restarted from the beginning.
|
|
|
|
Args:
|
|
session: string, Required. The session used to create the partitions. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for PartitionRead
|
|
"index": "A String", # If non-empty, the name of an index on table. This index is
|
|
# used instead of the table primary key when interpreting key_set
|
|
# and sorting result rows. See key_set for further information.
|
|
"transaction": { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
|
|
# transactions are not.
|
|
# Read or
|
|
# ExecuteSql call runs.
|
|
#
|
|
# See TransactionOptions for more information about transactions.
|
|
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
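# Since aborts can cluster, a wall-clock budget expresses this advice better than
# an attempt cap. A minimal client-side retry sketch (the `Aborted` exception and
# the helper name are illustrative stand-ins, not part of this API):

```python
import time

class Aborted(Exception):
    """Illustrative stand-in for a commit attempt failing with `ABORTED`."""

def commit_with_time_budget(attempt, budget_seconds=30.0):
    """Retry `attempt` on Aborted, bounding total wall time, not attempt count."""
    deadline = time.monotonic() + budget_seconds
    delay = 0.01
    while True:
        try:
            return attempt()
        except Aborted:
            if time.monotonic() + delay > deadline:
                raise  # budget exhausted; surface the last abort
            time.sleep(delay)
            delay = min(delay * 2, 1.0)  # simple capped exponential backoff
```

# Running each retry in the same session as the original attempt preserves the
# increasing lock priority described above.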
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
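# In request terms, each bound is one field of the `readOnly` options object
# described later on this page; a sketch of the three shapes (the duration and
# timestamp literals are placeholders):

```python
def strong_options():
    """Strong read: sees all transactions committed before the read starts."""
    return {"readOnly": {"strong": True}}

def exact_staleness_options(staleness="10s"):
    """Exact staleness: read exactly `staleness` in the past; no negotiation."""
    return {"readOnly": {"exactStaleness": staleness}}

def bounded_staleness_options(max_staleness="15s"):
    """Bounded staleness: single-use only; Cloud Spanner picks the newest
    timestamp within the bound that avoids blocking."""
    return {"readOnly": {"maxStaleness": max_staleness}}
```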
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
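# For instance, deleting old rows could be issued as a Partitioned DML
# transaction followed by a single ExecuteSql call; a sketch of the two request
# bodies (the table, column, and parameter names are illustrative only):

```python
def begin_partitioned_dml_body():
    """Body for beginTransaction selecting the Partitioned DML mode."""
    return {"options": {"partitionedDml": {}}}

def partitioned_delete_body(transaction_id, cutoff):
    """Body for executeSql: a fully-partitionable, idempotent DELETE."""
    return {
        "transaction": {"id": transaction_id},
        "sql": "DELETE FROM Events WHERE CreateTime < @cutoff",
        "params": {"cutoff": cutoff},
        "paramTypes": {"cutoff": {"code": "TIMESTAMP"}},
    }
```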
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time. After the
# active transaction is completed, the session can immediately be
# re-used for the next transaction. It is not necessary to create a
# new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
|
|
# primary keys of the rows in table to be yielded, unless index
|
|
# is present. If index is present, then key_set instead names
|
|
# index keys in index.
|
|
#
|
|
# It is not an error for the `key_set` to name rows that do not
|
|
# exist in the database. Read yields nothing for nonexistent rows.
|
|
# All the keys are expected to be in the same table or index. The keys need
|
|
# not be sorted in any particular way.
|
|
#
|
|
# If the same key is specified multiple times in the set (for example
|
|
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
|
|
# behaves as if the key were only specified once.
|
|
"ranges": [ # A list of key ranges. See KeyRange for more information about
|
|
# key range specifications.
|
|
{ # KeyRange represents a range of rows in a table or index.
|
|
#
|
|
# A range has a start key and an end key. These keys can be open or
|
|
# closed, indicating if the range includes rows with that key.
|
|
#
|
|
# Keys are represented by lists, where the ith value in the list
|
|
# corresponds to the ith component of the table or index primary key.
|
|
# Individual values are encoded as described
|
|
# here.
|
|
#
|
|
# For example, consider the following table definition:
|
|
#
|
|
# CREATE TABLE UserEvents (
|
|
# UserName STRING(MAX),
|
|
# EventDate STRING(10)
|
|
# ) PRIMARY KEY(UserName, EventDate);
|
|
#
|
|
# The following keys name rows in this table:
|
|
#
|
|
# "Bob", "2014-09-23"
|
|
#
|
|
# Since the `UserEvents` table's `PRIMARY KEY` clause names two
|
|
# columns, each `UserEvents` key has two elements; the first is the
|
|
# `UserName`, and the second is the `EventDate`.
|
|
#
|
|
# Key ranges with multiple components are interpreted
|
|
# lexicographically by component using the table or index key's declared
|
|
# sort order. For example, the following range returns all events for
|
|
# user `"Bob"` that occurred in the year 2015:
|
|
#
|
|
# "start_closed": ["Bob", "2015-01-01"]
|
|
# "end_closed": ["Bob", "2015-12-31"]
|
|
#
|
|
# Start and end keys can omit trailing key components. This affects the
|
|
# inclusion and exclusion of rows that exactly match the provided key
|
|
# components: if the key is closed, then rows that exactly match the
|
|
# provided components are included; if the key is open, then rows
|
|
# that exactly match are not included.
|
|
#
|
|
# For example, the following range includes all events for `"Bob"` that
|
|
# occurred during and after the year 2000:
|
|
#
|
|
# "start_closed": ["Bob", "2000-01-01"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# The next example retrieves all events for `"Bob"`:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# To retrieve events before the year 2000:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_open": ["Bob", "2000-01-01"]
|
|
#
|
|
# The following range includes all rows in the table:
|
|
#
|
|
# "start_closed": []
|
|
# "end_closed": []
|
|
#
|
|
# This range returns all users whose `UserName` begins with any
|
|
# character from A to C:
|
|
#
|
|
# "start_closed": ["A"]
|
|
# "end_open": ["D"]
|
|
#
|
|
# This range returns all users whose `UserName` begins with B:
|
|
#
|
|
# "start_closed": ["B"]
|
|
# "end_open": ["C"]
|
|
#
|
|
# Key ranges honor column sort order. For example, suppose a table is
|
|
# defined as follows:
|
|
#
|
|
# CREATE TABLE DescendingSortedTable (
|
|
# Key INT64,
|
|
# ...
|
|
# ) PRIMARY KEY(Key DESC);
|
|
#
|
|
# The following range retrieves all rows with key values between 1
|
|
# and 100 inclusive:
|
|
#
|
|
# "start_closed": ["100"]
|
|
# "end_closed": ["1"]
|
|
#
|
|
# Note that 100 is passed as the start, and 1 is passed as the end,
|
|
# because `Key` is a descending column in the schema.
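As an illustrative sketch only, the ranges described above can be written as plain Python dicts mirroring the JSON field names of `KeyRange` (the table and values come from the examples in this section):

```python
# Illustrative sketch: KeyRange dicts for the examples above.
# Because `Key` is a descending column, the larger value is the start key.
descending_range = {
    "startClosed": ["100"],
    "endClosed": ["1"],
}

# Events for "Bob" before the year 2000, as a half-open range:
events_before_2000 = {
    "startClosed": ["Bob"],
    "endOpen": ["Bob", "2000-01-01"],
}
```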
|
|
"endOpen": [ # If the end is open, then the range excludes rows whose first
|
|
# `len(end_open)` key columns exactly match `end_open`.
|
|
"",
|
|
],
|
|
"startOpen": [ # If the start is open, then the range excludes rows whose first
|
|
# `len(start_open)` key columns exactly match `start_open`.
|
|
"",
|
|
],
|
|
"endClosed": [ # If the end is closed, then the range includes all rows whose
|
|
# first `len(end_closed)` key columns exactly match `end_closed`.
|
|
"",
|
|
],
|
|
"startClosed": [ # If the start is closed, then the range includes all rows whose
|
|
# first `len(start_closed)` key columns exactly match `start_closed`.
|
|
"",
|
|
],
|
|
},
|
|
],
|
|
"keys": [ # A list of specific keys. Entries in `keys` should have exactly as
|
|
# many elements as there are columns in the primary or index key
|
|
# with which this `KeySet` is used. Individual key values are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"all": True or False, # For convenience `all` can be set to `true` to indicate that this
|
|
# `KeySet` matches all keys in the table or index. Note that any keys
|
|
# specified in `keys` or `ranges` are only yielded once.
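A non-authoritative sketch of a `KeySet` dict that combines explicit keys with a range (names and dates here are illustrative placeholders); overlapping or duplicate keys are yielded at most once:

```python
# Illustrative sketch: a KeySet combining explicit keys and a key range.
key_set = {
    "keys": [
        ["Bob", "2014-09-23"],  # one full primary key: (UserName, EventDate)
    ],
    "ranges": [
        {"startClosed": ["Alice"], "endOpen": ["Alice", "2015-01-01"]},
    ],
}
```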
|
|
},
|
|
"partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
|
|
# PartitionReadRequest.
|
|
"maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
|
|
# PartitionRead requests.
|
|
#
|
|
# The desired maximum number of partitions to return. For example, this may
|
|
# be set to the number of workers available. The default for this option
|
|
# is currently 10,000. The maximum value is currently 200,000. This is only
|
|
# a hint. The actual number of partitions returned may be smaller or larger
|
|
# than this requested maximum count.
|
|
"partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
|
|
# PartitionRead requests.
|
|
#
|
|
# The desired data size for each partition generated. The default for this
|
|
# option is currently 1 GiB. This is only a hint. The actual size of each
|
|
# partition may be smaller or larger than this requested size.
|
|
},
|
|
"table": "A String", # Required. The name of the table in the database to be read.
|
|
"columns": [ # The columns of table to be returned for each row matching
|
|
# this request.
|
|
"A String",
|
|
],
|
|
}
|
|
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # The response for PartitionQuery
|
|
# or PartitionRead
|
|
"transaction": { # A transaction. # Transaction created by this request.
|
|
"readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
|
|
# for the transaction. Not returned by default: see
|
|
# TransactionOptions.ReadOnly.return_read_timestamp.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"id": "A String", # `id` may be used to identify the transaction in subsequent
|
|
# Read,
|
|
# ExecuteSql,
|
|
# Commit, or
|
|
# Rollback calls.
|
|
#
|
|
# Single-use read-only transactions do not have IDs, because
|
|
# single-use transactions do not support multiple requests.
|
|
},
|
|
"partitions": [ # Partitions created by this request.
|
|
{ # Information returned for each partition returned in a
|
|
# PartitionResponse.
|
|
"partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
|
|
# ExecuteStreamingSql requests to restrict the results to those identified by
|
|
# this partition token.
|
|
},
|
|
],
|
|
}</pre>
|
|
</div>
|
|
|
|
<div class="method">
|
|
<code class="details" id="read">read(session, body, x__xgafv=None)</code>
|
|
<pre>Reads rows from the database using key lookups and scans, as a
|
|
simple key/value style alternative to
|
|
ExecuteSql. This method cannot be used to
|
|
return a result set larger than 10 MiB; if the read matches more
|
|
data than that, the read fails with a `FAILED_PRECONDITION`
|
|
error.
|
|
|
|
Reads inside read-write transactions might return `ABORTED`. If
|
|
this occurs, the application should restart the transaction from
|
|
the beginning. See Transaction for more details.
|
|
|
|
Larger result sets can be yielded in streaming fashion by calling
|
|
StreamingRead instead.
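As a minimal, non-authoritative sketch, a request body for this method might be built as follows; the table, columns, staleness value, and session path are placeholder assumptions, and no RPC is performed here:

```python
# Hypothetical sketch: building a Read request body for sessions.read().
body = {
    "table": "UserEvents",                    # placeholder table name
    "columns": ["UserName", "EventDate"],     # placeholder columns
    "keySet": {"all": True},                  # read every row
    # Omitting "transaction" defaults to a single-use strong read-only
    # transaction; an explicit bounded-staleness one could look like:
    "transaction": {"singleUse": {"readOnly": {"maxStaleness": "10s"}}},
}

# With a discovery-built service object (not constructed here):
# result = service.projects().instances().databases().sessions().read(
#     session="projects/P/instances/I/databases/D/sessions/S", body=body
# ).execute()
```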
|
|
|
|
Args:
|
|
session: string, Required. The session in which the read should be performed. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for Read and
|
|
# StreamingRead.
|
|
"index": "A String", # If non-empty, the name of an index on table. This index is
|
|
# used instead of the table primary key when interpreting key_set
|
|
# and sorting result rows. See key_set for further information.
|
|
"transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
|
|
# temporary read-only transaction with strong concurrency.
|
|
# Read or
|
|
# ExecuteSql call runs.
|
|
#
|
|
# See TransactionOptions for more information about transactions.
|
|
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
|
|
# Rollback request to abort the
|
|
# transaction.
|
|
#
|
|
# ### Semantics
|
|
#
|
|
# Cloud Spanner can commit the transaction if all read locks it acquired
|
|
# are still valid at commit time, and it is able to acquire write
|
|
# locks for all writes. Cloud Spanner can abort the transaction for any
|
|
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
|
|
# that the transaction has not modified any user data in Cloud Spanner.
|
|
#
|
|
# Unless the transaction commits, Cloud Spanner makes no guarantees about
|
|
# how long the transaction's locks were held for. It is an error to
|
|
# use Cloud Spanner locks for any sort of mutual exclusion other than
|
|
# between Cloud Spanner transactions themselves.
|
|
#
|
|
# ### Retrying Aborted Transactions
|
|
#
|
|
# When a transaction aborts, the application can choose to retry the
|
|
# whole transaction again. To maximize the chances of successfully
|
|
# committing the retry, the client should execute the retry in the
|
|
# same session as the original attempt. The original session's lock
|
|
# priority increases with each consecutive abort, meaning that each
|
|
# attempt has a slightly better chance of success than the previous.
|
|
#
|
|
# Under some circumstances (e.g., many transactions attempting to
|
|
# modify the same row(s)), a transaction can abort many times in a
|
|
# short period before successfully committing. Thus, it is not a good
|
|
# idea to cap the number of retries a transaction can attempt;
|
|
# instead, it is better to limit the total amount of wall time spent
|
|
# retrying.
|
|
#
|
|
# ### Idle Transactions
|
|
#
|
|
# A transaction is considered idle if it has no outstanding reads or
|
|
# SQL queries and has not started a read or SQL query within the last 10
|
|
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
|
|
# don't hold on to locks indefinitely. In that case, the commit will
|
|
# fail with error `ABORTED`.
|
|
#
|
|
# If this behavior is undesirable, periodically executing a simple
|
|
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
|
|
# transaction from becoming idle.
|
|
#
|
|
# ## Snapshot Read-Only Transactions
|
|
#
|
|
# Snapshot read-only transactions provide a simpler method than
|
|
# locking read-write transactions for doing several consistent
|
|
# reads. However, this type of transaction does not support writes.
|
|
#
|
|
# Snapshot transactions do not take locks. Instead, they work by
|
|
# choosing a Cloud Spanner timestamp, then executing all reads at that
|
|
# timestamp. Since they do not acquire locks, they do not block
|
|
# concurrent read-write transactions.
|
|
#
|
|
# Unlike locking read-write transactions, snapshot read-only
|
|
# transactions never abort. They can fail if the chosen read
|
|
# timestamp is garbage collected; however, the default garbage
|
|
# collection policy is generous enough that most applications do not
|
|
# need to worry about this in practice.
|
|
#
|
|
# Snapshot read-only transactions do not need to call
|
|
# Commit or
|
|
# Rollback (and in fact are not
|
|
# permitted to do so).
|
|
#
|
|
# To execute a snapshot transaction, the client specifies a timestamp
|
|
# bound, which tells Cloud Spanner how to choose a read timestamp.
|
|
#
|
|
# The types of timestamp bound are:
|
|
#
|
|
# - Strong (the default).
|
|
# - Bounded staleness.
|
|
# - Exact staleness.
|
|
#
|
|
# If the Cloud Spanner database to be read is geographically distributed,
|
|
# stale read-only transactions can execute more quickly than strong
|
|
# or read-write transactions, because they are able to execute far
|
|
# from the leader replica.
|
|
#
|
|
# Each type of timestamp bound is discussed in detail below.
|
|
#
|
|
# ### Strong
|
|
#
|
|
# Strong reads are guaranteed to see the effects of all transactions
|
|
# that have committed before the start of the read. Furthermore, all
|
|
# rows yielded by a single read are consistent with each other -- if
|
|
# any part of the read observes a transaction, all parts of the read
|
|
# see the transaction.
|
|
#
|
|
# Strong reads are not repeatable: two consecutive strong read-only
|
|
# transactions might return inconsistent results if there are
|
|
# concurrent writes. If consistency across reads is required, the
|
|
# reads should be executed within a transaction or at an exact read
|
|
# timestamp.
|
|
#
|
|
# See TransactionOptions.ReadOnly.strong.
|
|
#
|
|
# ### Exact Staleness
|
|
#
|
|
# These timestamp bounds execute reads at a user-specified
|
|
# timestamp. Reads at a timestamp are guaranteed to see a consistent
|
|
# prefix of the global transaction history: they observe
|
|
# modifications done by all transactions with a commit timestamp <=
|
|
# the read timestamp, and observe none of the modifications done by
|
|
# transactions with a larger commit timestamp. They will block until
|
|
# all conflicting transactions that may be assigned commit timestamps
|
|
# <= the read timestamp have finished.
|
|
#
|
|
# The timestamp can either be expressed as an absolute Cloud Spanner commit
|
|
# timestamp or a staleness relative to the current time.
|
|
#
|
|
# These modes do not require a "negotiation phase" to pick a
|
|
# timestamp. As a result, they execute slightly faster than the
|
|
# equivalent boundedly stale concurrency modes. On the other hand,
|
|
# boundedly stale reads usually return fresher results.
|
|
#
|
|
# See TransactionOptions.ReadOnly.read_timestamp and
|
|
# TransactionOptions.ReadOnly.exact_staleness.
|
|
#
|
|
# ### Bounded Staleness
|
|
#
|
|
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
|
|
# subject to a user-provided staleness bound. Cloud Spanner chooses the
|
|
# newest timestamp within the staleness bound that allows execution
|
|
# of the reads at the closest available replica without blocking.
|
|
#
|
|
# All rows yielded are consistent with each other -- if any part of
|
|
# the read observes a transaction, all parts of the read see the
|
|
# transaction. Boundedly stale reads are not repeatable: two stale
|
|
# reads, even if they use the same staleness bound, can execute at
|
|
# different timestamps and thus return inconsistent results.
|
|
#
|
|
# Boundedly stale reads execute in two phases: the first phase
|
|
# negotiates a timestamp among all replicas needed to serve the
|
|
# read. In the second phase, reads are executed at the negotiated
|
|
# timestamp.
|
|
#
|
|
# As a result of the two phase execution, bounded staleness reads are
|
|
# usually a little slower than comparable exact staleness
|
|
# reads. However, they are typically able to return fresher
|
|
# results, and are more likely to execute at the closest replica.
|
|
#
|
|
# Because the timestamp negotiation requires up-front knowledge of
|
|
# which rows will be read, it can only be used with single-use
|
|
# read-only transactions.
|
|
#
|
|
# See TransactionOptions.ReadOnly.max_staleness and
|
|
# TransactionOptions.ReadOnly.min_read_timestamp.
|
|
#
|
|
# ### Old Read Timestamps and Garbage Collection
|
|
#
|
|
# Cloud Spanner continuously garbage collects deleted and overwritten data
|
|
# in the background to reclaim storage space. This process is known
|
|
# as "version GC". By default, version GC reclaims versions after they
|
|
# are one hour old. Because of this, Cloud Spanner cannot perform reads
|
|
# at read timestamps more than one hour in the past. This
|
|
# restriction also applies to in-progress reads and/or SQL queries whose
|
|
# timestamps become too old while executing. Reads and SQL queries with
|
|
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
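The three timestamp-bound families above can be sketched as `TransactionOptions.ReadOnly` dicts; this is illustrative only, and the timestamp and staleness values are placeholders:

```python
# Illustrative sketch: the three timestamp-bound families for a
# read-only transaction, as TransactionOptions dicts.
strong = {"readOnly": {"strong": True, "returnReadTimestamp": True}}
exact_staleness = {
    "readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}
}
bounded_staleness = {"readOnly": {"maxStaleness": "15s"}}  # single-use only
```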
|
|
#
|
|
# ## Partitioned DML Transactions
|
|
#
|
|
# Partitioned DML transactions are used to execute DML statements with a
|
|
# different execution strategy that provides different, and often better,
|
|
# scalability properties for large, table-wide operations than DML in a
|
|
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
|
|
# should prefer using ReadWrite transactions.
|
|
#
|
|
# Partitioned DML partitions the keyspace and runs the DML statement on each
|
|
# partition in separate, internal transactions. These transactions commit
|
|
# automatically when complete, and run independently from one another.
|
|
#
|
|
# To reduce lock contention, this execution strategy only acquires read locks
|
|
# on rows that match the WHERE clause of the statement. Additionally, the
|
|
# smaller per-partition transactions hold locks for less time.
|
|
#
|
|
# That said, Partitioned DML is not a drop-in replacement for standard DML used
|
|
# in ReadWrite transactions.
|
|
#
|
|
# - The DML statement must be fully-partitionable. Specifically, the statement
|
|
# must be expressible as the union of many statements which each access only
|
|
# a single row of the table.
|
|
#
|
|
# - The statement is not applied atomically to all rows of the table. Rather,
|
|
# the statement is applied atomically to partitions of the table, in
|
|
# independent transactions. Secondary index rows are updated atomically
|
|
# with the base table rows.
|
|
#
|
|
# - Partitioned DML does not guarantee exactly-once execution semantics
|
|
# against a partition. The statement will be applied at least once to each
|
|
# partition. It is strongly recommended that the DML statement be
|
|
# idempotent to avoid unexpected results. For instance, it is potentially
|
|
# dangerous to run a statement such as
|
|
# `UPDATE table SET column = column + 1` as it could be run multiple times
|
|
# against some rows.
|
|
#
|
|
# - The partitions are committed automatically - there is no support for
|
|
# Commit or Rollback. If the call returns an error, or if the client issuing
|
|
# the ExecuteSql call dies, it is possible that some rows had the statement
|
|
# executed on them successfully. It is also possible that statement was
|
|
# never executed against other rows.
|
|
#
|
|
# - Partitioned DML transactions may only contain the execution of a single
|
|
# DML statement via ExecuteSql or ExecuteStreamingSql.
|
|
#
|
|
# - If any error is encountered during the execution of the partitioned DML
|
|
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
|
|
# value that cannot be stored due to schema constraints), then the
|
|
# operation is stopped at that point and an error is returned. It is
|
|
# possible that at this point, some partitions have been committed (or even
|
|
# committed multiple times), and other partitions have not been run at all.
|
|
#
|
|
# Given the above, Partitioned DML is a good fit for large, database-wide
|
|
# operations that are idempotent, such as deleting old rows from a very large
|
|
# table.
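A hedged sketch of how a Partitioned DML transaction is selected: the `PartitionedDml` message currently has no options, so an empty dict selects it (the example statement in the comment is hypothetical):

```python
# Illustrative sketch: TransactionOptions selecting Partitioned DML.
# PartitionedDml has no options, so an empty message is sufficient.
txn_options = {"partitionedDml": {}}

# The statement run under it should be idempotent, e.g. (hypothetical):
# "DELETE FROM UserEvents WHERE EventDate < '2000-01-01'"
```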
|
|
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
|
|
#
|
|
# Authorization to begin a read-write transaction requires
|
|
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
|
|
# on the `session` resource.
|
|
# transaction type has no options.
|
|
},
|
|
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
|
|
#
|
|
# Authorization to begin a read-only transaction requires
|
|
# `spanner.databases.beginReadOnlyTransaction` permission
|
|
# on the `session` resource.
|
|
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
|
|
#
|
|
# This is useful for requesting fresher data than some previous
|
|
# read, or data that is fresh enough to observe the effects of some
|
|
# previously committed transaction whose timestamp is known.
|
|
#
|
|
# Note that this option can only be used in single-use transactions.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
|
|
# the Transaction message that describes the transaction.
|
|
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
|
|
# seconds. Guarantees that all writes that have committed more
|
|
# than the specified number of seconds ago are visible. Because
|
|
# Cloud Spanner chooses the exact timestamp, this mode works even if
|
|
# the client's local clock is substantially skewed from Cloud Spanner
|
|
# commit timestamps.
|
|
#
|
|
# Useful for reading the freshest data available at a nearby
|
|
# replica, while bounding the possible staleness if the local
|
|
# replica has fallen behind.
|
|
#
|
|
# Note that this option can only be used in single-use
|
|
# transactions.
|
|
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
|
|
# old. The timestamp is chosen soon after the read is started.
|
|
#
|
|
# Guarantees that all writes that have committed more than the
|
|
# specified number of seconds ago are visible. Because Cloud Spanner
|
|
# chooses the exact timestamp, this mode works even if the client's
|
|
# local clock is substantially skewed from Cloud Spanner commit
|
|
# timestamps.
|
|
#
|
|
# Useful for reading at nearby replicas without the distributed
|
|
# timestamp negotiation overhead of `max_staleness`.
|
|
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
|
|
# reads at a specific timestamp are repeatable; the same read at
|
|
# the same timestamp always returns the same data. If the
|
|
# timestamp is in the future, the read will block until the
|
|
# specified timestamp, modulo the read's deadline.
|
|
#
|
|
# Useful for large scale consistent reads such as mapreduces, or
|
|
# for coordinating many reads against a consistent snapshot of the
|
|
# data.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"strong": True or False, # Read at a timestamp where all previously committed transactions
|
|
# are visible.
|
|
},
|
|
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
|
|
#
|
|
# Authorization to begin a Partitioned DML transaction requires
|
|
# `spanner.databases.beginPartitionedDmlTransaction` permission
|
|
# on the `session` resource.
|
|
},
|
|
},
|
|
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
|
|
# This is the most efficient way to execute a transaction that
|
|
# consists of a single SQL query.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
|
|
# independently. Partitioned DML transactions do not need to be
|
|
# committed.
|
|
#
|
|
# For transactions that only read, snapshot read-only transactions
|
|
# provide simpler semantics and are almost always faster. In
|
|
# particular, read-only transactions do not take locks, so they do
|
|
# not conflict with read-write transactions. As a consequence of not
|
|
# taking locks, they also do not abort, so retry loops are not needed.
|
|
#
|
|
# Transactions may only read/write data in a single database. They
|
|
# may, however, read/write data in different tables within that
|
|
# database.
|
|
#
|
|
# ## Locking Read-Write Transactions
|
|
#
|
|
# Locking transactions may be used to atomically read-modify-write
|
|
# data anywhere in a database. This type of transaction is externally
|
|
# consistent.
|
|
#
|
|
# Clients should attempt to minimize the amount of time a transaction
|
|
# is active. Faster transactions commit with higher probability
|
|
# and cause less contention. Cloud Spanner attempts to keep read locks
|
|
# active as long as the transaction continues to do reads, and the
|
|
# transaction has not been terminated by
|
|
# Commit or
|
|
# Rollback. Long periods of
|
|
# inactivity at the client may cause Cloud Spanner to release a
|
|
# transaction's locks and abort it.
|
|
#
|
|
# Conceptually, a read-write transaction consists of zero or more
|
|
# reads or SQL statements followed by
|
|
# Commit. At any time before
|
|
# Commit, the client can send a
|
|
# Rollback request to abort the
|
|
# transaction.
|
|
#
|
|
# ### Semantics
|
|
#
|
|
# Cloud Spanner can commit the transaction if all read locks it acquired
|
|
# are still valid at commit time, and it is able to acquire write
|
|
# locks for all writes. Cloud Spanner can abort the transaction for any
|
|
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
|
|
# that the transaction has not modified any user data in Cloud Spanner.
|
|
#
|
|
# Unless the transaction commits, Cloud Spanner makes no guarantees about
|
|
# how long the transaction's locks were held for. It is an error to
|
|
# use Cloud Spanner locks for any sort of mutual exclusion other than
|
|
# between Cloud Spanner transactions themselves.
|
|
#
|
|
# ### Retrying Aborted Transactions
|
|
#
|
|
# When a transaction aborts, the application can choose to retry the
|
|
# whole transaction again. To maximize the chances of successfully
|
|
# committing the retry, the client should execute the retry in the
|
|
# same session as the original attempt. The original session's lock
|
|
# priority increases with each consecutive abort, meaning that each
|
|
# attempt has a slightly better chance of success than the previous.
|
|
#
|
|
# Under some circumstances (e.g., many transactions attempting to
|
|
# modify the same row(s)), a transaction can abort many times in a
|
|
# short period before successfully committing. Thus, it is not a good
|
|
# idea to cap the number of retries a transaction can attempt;
|
|
# instead, it is better to limit the total amount of wall time spent
|
|
# retrying.
|
|
#
|
|
# ### Idle Transactions
|
|
#
|
|
# A transaction is considered idle if it has no outstanding reads or
|
|
# SQL queries and has not started a read or SQL query within the last 10
|
|
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
|
|
# don't hold on to locks indefinitely. In that case, the commit will
|
|
# fail with error `ABORTED`.
|
|
#
|
|
# If this behavior is undesirable, periodically executing a simple
|
|
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
|
|
# transaction from becoming idle.
|
|
#
|
|
# ## Snapshot Read-Only Transactions
|
|
#
|
|
# Snapshot read-only transactions provide a simpler method than
|
|
# locking read-write transactions for doing several consistent
|
|
# reads. However, this type of transaction does not support writes.
|
|
#
|
|
# Snapshot transactions do not take locks. Instead, they work by
|
|
# choosing a Cloud Spanner timestamp, then executing all reads at that
|
|
# timestamp. Since they do not acquire locks, they do not block
|
|
# concurrent read-write transactions.
|
|
#
|
|
# Unlike locking read-write transactions, snapshot read-only
|
|
# transactions never abort. They can fail if the chosen read
|
|
# timestamp is garbage collected; however, the default garbage
|
|
# collection policy is generous enough that most applications do not
|
|
# need to worry about this in practice.
|
|
#
|
|
# Snapshot read-only transactions do not need to call
|
|
# Commit or
|
|
# Rollback (and in fact are not
|
|
# permitted to do so).
|
|
#
|
|
# To execute a snapshot transaction, the client specifies a timestamp
|
|
# bound, which tells Cloud Spanner how to choose a read timestamp.
|
|
#
|
|
# The types of timestamp bound are:
|
|
#
|
|
# - Strong (the default).
|
|
# - Bounded staleness.
|
|
# - Exact staleness.
|
|
#
|
|
# If the Cloud Spanner database to be read is geographically distributed,
|
|
# stale read-only transactions can execute more quickly than strong
|
|
# or read-write transaction, because they are able to execute far
|
|
# from the leader replica.
|
|
#
|
|
# Each type of timestamp bound is discussed in detail below.
|
|
#
|
|
# ### Strong
|
|
#
|
|
# Strong reads are guaranteed to see the effects of all transactions
|
|
# that have committed before the start of the read. Furthermore, all
|
|
# rows yielded by a single read are consistent with each other -- if
|
|
# any part of the read observes a transaction, all parts of the read
|
|
# see the transaction.
|
|
#
|
|
# Strong reads are not repeatable: two consecutive strong read-only
|
|
# transactions might return inconsistent results if there are
|
|
# concurrent writes. If consistency across reads is required, the
|
|
# reads should be executed within a transaction or at an exact read
|
|
# timestamp.
|
|
#
|
|
# See TransactionOptions.ReadOnly.strong.
|
|
#
|
|
# ### Exact Staleness
|
|
#
|
|
# These timestamp bounds execute reads at a user-specified
|
|
# timestamp. Reads at a timestamp are guaranteed to see a consistent
|
|
# prefix of the global transaction history: they observe
|
|
# modifications done by all transactions with a commit timestamp <=
|
|
# the read timestamp, and observe none of the modifications done by
|
|
# transactions with a larger commit timestamp. They will block until
|
|
# all conflicting transactions that may be assigned commit timestamps
|
|
# <= the read timestamp have finished.
|
|
#
|
|
# The timestamp can either be expressed as an absolute Cloud Spanner commit
|
|
# timestamp or a staleness relative to the current time.
|
|
#
|
|
# These modes do not require a "negotiation phase" to pick a
|
|
# timestamp. As a result, they execute slightly faster than the
|
|
# equivalent boundedly stale concurrency modes. On the other hand,
|
|
# boundedly stale reads usually return fresher results.
|
|
#
|
|
# See TransactionOptions.ReadOnly.read_timestamp and
|
|
# TransactionOptions.ReadOnly.exact_staleness.
|
|
#
|
|
# ### Bounded Staleness
|
|
#
|
|
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
|
|
# subject to a user-provided staleness bound. Cloud Spanner chooses the
|
|
# newest timestamp within the staleness bound that allows execution
|
|
# of the reads at the closest available replica without blocking.
|
|
#
|
|
# All rows yielded are consistent with each other -- if any part of
|
|
# the read observes a transaction, all parts of the read see the
|
|
# transaction. Boundedly stale reads are not repeatable: two stale
|
|
# reads, even if they use the same staleness bound, can execute at
|
|
# different timestamps and thus return inconsistent results.
|
|
#
|
|
# Boundedly stale reads execute in two phases: the first phase
|
|
# negotiates a timestamp among all replicas needed to serve the
|
|
# read. In the second phase, reads are executed at the negotiated
|
|
# timestamp.
|
|
#
|
|
# As a result of the two phase execution, bounded staleness reads are
|
|
# usually a little slower than comparable exact staleness
|
|
# reads. However, they are typically able to return fresher
|
|
# results, and are more likely to execute at the closest replica.
|
|
#
|
|
# Because the timestamp negotiation requires up-front knowledge of
|
|
# which rows will be read, it can only be used with single-use
|
|
# read-only transactions.
|
|
#
|
|
# See TransactionOptions.ReadOnly.max_staleness and
|
|
# TransactionOptions.ReadOnly.min_read_timestamp.
|
|
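The three families of timestamp bound map onto mutually exclusive fields of the `readOnly` message shown later in this document. A minimal sketch of the JSON request fragments (the duration and field values are illustrative):

```python
# Strong read (the default): sees all transactions committed before the read.
strong = {"readOnly": {"strong": True}}

# Exact staleness: read exactly 15 seconds in the past.
exact = {"readOnly": {"exactStaleness": "15s"}}

# Bounded staleness: let Cloud Spanner pick the newest timestamp within
# 10 seconds of now. Usable only in single-use transactions.
bounded = {"readOnly": {"maxStaleness": "10s"}}

def bound_kind(options):
    """Report which timestamp bound a readOnly options dict sets."""
    ro = options["readOnly"]
    for field in ("strong", "readTimestamp", "exactStaleness",
                  "minReadTimestamp", "maxStaleness"):
        if field in ro:
            return field
    raise ValueError("no timestamp bound set")
```

Exactly one bound should be set per transaction; `bound_kind` simply inspects which one a given options dict carries.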
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
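The Partitioned DML flow described above is: begin a transaction with `partitionedDml` options, then issue exactly one ExecuteSql carrying that transaction id. A minimal sketch that only assembles the two request bodies (the `seqno` field is what ExecuteSqlRequest uses to order DML; the table and statement here are illustrative):

```python
def partitioned_dml_requests(sql):
    """Build the request bodies for a Partitioned DML statement.

    The statement must be idempotent (e.g., a DELETE with a WHERE
    clause), since partitions may have it applied more than once.
    """
    begin_body = {"options": {"partitionedDml": {}}}

    def execute_body(transaction_id, seqno=1):
        # One -- and only one -- ExecuteSql per Partitioned DML transaction.
        return {
            "transaction": {"id": transaction_id},
            "sql": sql,
            "seqno": seqno,
        }

    return begin_body, execute_body
```

With the discovery client, `begin_body` would go to `sessions().beginTransaction(...)` and the result of `execute_body(...)` to `sessions().executeSql(...)`; no Commit or Rollback follows.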
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
|
|
#
|
|
# Authorization to begin a read-write transaction requires
|
|
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
|
|
# on the `session` resource.
|
|
# transaction type has no options.
|
|
},
|
|
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
|
|
#
|
|
# Authorization to begin a read-only transaction requires
|
|
# `spanner.databases.beginReadOnlyTransaction` permission
|
|
# on the `session` resource.
|
|
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
|
|
#
|
|
# This is useful for requesting fresher data than some previous
|
|
# read, or data that is fresh enough to observe the effects of some
|
|
# previously committed transaction whose timestamp is known.
|
|
#
|
|
# Note that this option can only be used in single-use transactions.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
|
|
# the Transaction message that describes the transaction.
|
|
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
|
|
# seconds. Guarantees that all writes that have committed more
|
|
# than the specified number of seconds ago are visible. Because
|
|
# Cloud Spanner chooses the exact timestamp, this mode works even if
|
|
# the client's local clock is substantially skewed from Cloud Spanner
|
|
# commit timestamps.
|
|
#
|
|
# Useful for reading the freshest data available at a nearby
|
|
# replica, while bounding the possible staleness if the local
|
|
# replica has fallen behind.
|
|
#
|
|
# Note that this option can only be used in single-use
|
|
# transactions.
|
|
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
|
|
# old. The timestamp is chosen soon after the read is started.
|
|
#
|
|
# Guarantees that all writes that have committed more than the
|
|
# specified number of seconds ago are visible. Because Cloud Spanner
|
|
# chooses the exact timestamp, this mode works even if the client's
|
|
# local clock is substantially skewed from Cloud Spanner commit
|
|
# timestamps.
|
|
#
|
|
# Useful for reading at nearby replicas without the distributed
|
|
# timestamp negotiation overhead of `max_staleness`.
|
|
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
|
|
# reads at a specific timestamp are repeatable; the same read at
|
|
# the same timestamp always returns the same data. If the
|
|
# timestamp is in the future, the read will block until the
|
|
# specified timestamp, modulo the read's deadline.
|
|
#
|
|
# Useful for large scale consistent reads such as mapreduces, or
|
|
# for coordinating many reads against a consistent snapshot of the
|
|
# data.
|
|
#
|
|
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
|
|
# Example: `"2014-10-02T15:01:23.045123456Z"`.
|
|
"strong": True or False, # Read at a timestamp where all previously committed transactions
|
|
# are visible.
|
|
},
|
|
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
|
|
#
|
|
# Authorization to begin a Partitioned DML transaction requires
|
|
# `spanner.databases.beginPartitionedDmlTransaction` permission
|
|
# on the `session` resource.
|
|
},
|
|
},
|
|
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
|
|
},
|
|
"resumeToken": "A String", # If this request is resuming a previously interrupted read,
|
|
# `resume_token` should be copied from the last
|
|
# PartialResultSet yielded before the interruption. Doing this
|
|
# enables the new read to resume where the last read left off. The
|
|
# rest of the request parameters must exactly match the request
|
|
# that yielded this token.
|
|
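The `resume_token` contract above — reissue the identical request, substituting only the last token seen — can be sketched as a small consumption loop. `fetch` is a stand-in for a streaming call that yields partial result sets and may fail mid-stream; for simplicity this sketch assumes every partial carries a `resumeToken`:

```python
def read_all_with_resume(fetch, request):
    """Consume PartialResultSets, resuming after interruptions.

    `fetch(request)` yields dicts like {"values": [...], "resumeToken": ...}
    and may raise ConnectionError mid-stream. On retry, the request is
    identical except for `resumeToken`, as the field doc requires.
    """
    values = []
    token = request.get("resumeToken", "")
    while True:
        try:
            for partial in fetch(dict(request, resumeToken=token)):
                values.extend(partial.get("values", []))
                token = partial.get("resumeToken", token)
            return values
        except ConnectionError:
            continue  # retry from the last token yielded before the failure
```

A production client additionally has to cope with partials that omit the token (buffering values until the next token arrives) so that no row is double-counted.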
"partitionToken": "A String", # If present, results will be restricted to the specified partition
|
|
# previously created using PartitionRead(). There must be an exact
|
|
# match for the values of fields common to this message and the
|
|
# PartitionReadRequest message used to create this partition_token.
|
|
"keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
|
|
# primary keys of the rows in table to be yielded, unless index
|
|
# is present. If index is present, then key_set instead names
|
|
# index keys in index.
|
|
#
|
|
# If the partition_token field is empty, rows are yielded
|
|
# in table primary key order (if index is empty) or index key order
|
|
# (if index is non-empty). If the partition_token field is not
|
|
# empty, rows will be yielded in an unspecified order.
|
|
#
|
|
# It is not an error for the `key_set` to name rows that do not
|
|
# exist in the database. Read yields nothing for nonexistent rows.
|
|
# the keys are expected to be in the same table or index. The keys need
|
|
# not be sorted in any particular way.
|
|
#
|
|
# If the same key is specified multiple times in the set (for example
|
|
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
|
|
# behaves as if the key were only specified once.
|
|
"ranges": [ # A list of key ranges. See KeyRange for more information about
|
|
# key range specifications.
|
|
{ # KeyRange represents a range of rows in a table or index.
|
|
#
|
|
# A range has a start key and an end key. These keys can be open or
|
|
# closed, indicating if the range includes rows with that key.
|
|
#
|
|
# Keys are represented by lists, where the ith value in the list
|
|
# corresponds to the ith component of the table or index primary key.
|
|
# Individual values are encoded as described
|
|
# here.
|
|
#
|
|
# For example, consider the following table definition:
|
|
#
|
|
# CREATE TABLE UserEvents (
|
|
# UserName STRING(MAX),
|
|
# EventDate STRING(10)
|
|
# ) PRIMARY KEY(UserName, EventDate);
|
|
#
|
|
# The following keys name rows in this table:
|
|
#
|
|
# "Bob", "2014-09-23"
|
|
#
|
|
# Since the `UserEvents` table's `PRIMARY KEY` clause names two
|
|
# columns, each `UserEvents` key has two elements; the first is the
|
|
# `UserName`, and the second is the `EventDate`.
|
|
#
|
|
# Key ranges with multiple components are interpreted
|
|
# lexicographically by component using the table or index key's declared
|
|
# sort order. For example, the following range returns all events for
|
|
# user `"Bob"` that occurred in the year 2015:
|
|
#
|
|
# "start_closed": ["Bob", "2015-01-01"]
|
|
# "end_closed": ["Bob", "2015-12-31"]
|
|
#
|
|
# Start and end keys can omit trailing key components. This affects the
|
|
# inclusion and exclusion of rows that exactly match the provided key
|
|
# components: if the key is closed, then rows that exactly match the
|
|
# provided components are included; if the key is open, then rows
|
|
# that exactly match are not included.
|
|
#
|
|
# For example, the following range includes all events for `"Bob"` that
|
|
# occurred during and after the year 2000:
|
|
#
|
|
# "start_closed": ["Bob", "2000-01-01"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# The next example retrieves all events for `"Bob"`:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# To retrieve events before the year 2000:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_open": ["Bob", "2000-01-01"]
|
|
#
|
|
# The following range includes all rows in the table:
|
|
#
|
|
# "start_closed": []
|
|
# "end_closed": []
|
|
#
|
|
# This range returns all users whose `UserName` begins with any
|
|
# character from A to C:
|
|
#
|
|
# "start_closed": ["A"]
|
|
# "end_open": ["D"]
|
|
#
|
|
# This range returns all users whose `UserName` begins with B:
|
|
#
|
|
# "start_closed": ["B"]
|
|
# "end_open": ["C"]
|
|
#
|
|
# Key ranges honor column sort order. For example, suppose a table is
|
|
# defined as follows:
|
|
#
|
|
# CREATE TABLE DescendingSortedTable {
|
|
# Key INT64,
|
|
# ...
|
|
# ) PRIMARY KEY(Key DESC);
|
|
#
|
|
# The following range retrieves all rows with key values between 1
|
|
# and 100 inclusive:
|
|
#
|
|
# "start_closed": ["100"]
|
|
# "end_closed": ["1"]
|
|
#
|
|
# Note that 100 is passed as the start, and 1 is passed as the end,
|
|
# because `Key` is a descending column in the schema.
|
|
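The KeyRange examples above translate directly into the JSON form of `KeySet`, using the camelCase field names this API expects. A minimal sketch against the example `UserEvents(UserName, EventDate)` key (the `Alice` row is an invented illustration):

```python
# All of Bob's events during 2015.
bob_2015 = {
    "ranges": [{
        "startClosed": ["Bob", "2015-01-01"],
        "endClosed": ["Bob", "2015-12-31"],
    }]
}

# Two specific rows plus all of Bob's events before the year 2000.
# Overlapping keys and ranges are deduplicated by Cloud Spanner.
mixed = {
    "keys": [["Bob", "2014-09-23"], ["Alice", "2014-09-24"]],
    "ranges": [{"startClosed": ["Bob"], "endOpen": ["Bob", "2000-01-01"]}],
}

# Every key in the table or index.
everything = {"all": True}
```

Note that each key is a list with one element per primary-key (or index-key) column, in declaration order.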
"endOpen": [ # If the end is open, then the range excludes rows whose first
|
|
# `len(end_open)` key columns exactly match `end_open`.
|
|
"",
|
|
],
|
|
"startOpen": [ # If the start is open, then the range excludes rows whose first
|
|
# `len(start_open)` key columns exactly match `start_open`.
|
|
"",
|
|
],
|
|
"endClosed": [ # If the end is closed, then the range includes all rows whose
|
|
# first `len(end_closed)` key columns exactly match `end_closed`.
|
|
"",
|
|
],
|
|
"startClosed": [ # If the start is closed, then the range includes all rows whose
|
|
# first `len(start_closed)` key columns exactly match `start_closed`.
|
|
"",
|
|
],
|
|
},
|
|
],
|
|
"keys": [ # A list of specific keys. Entries in `keys` should have exactly as
|
|
# many elements as there are columns in the primary or index key
|
|
# with which this `KeySet` is used. Individual key values are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"all": True or False, # For convenience `all` can be set to `true` to indicate that this
|
|
# `KeySet` matches all keys in the table or index. Note that any keys
|
|
# specified in `keys` or `ranges` are only yielded once.
|
|
},
|
|
"limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
|
|
# is zero, the default is no limit. A limit cannot be specified if
|
|
# `partition_token` is set.
|
|
"table": "A String", # Required. The name of the table in the database to be read.
|
|
"columns": [ # The columns of table to be returned for each row matching
|
|
# this request.
|
|
"A String",
|
|
],
|
|
}
|
|
|
|
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # Results from Read or
# ExecuteSql.
"rows": [ # Each element in `rows` is a row whose format is defined by
# metadata.row_type. The ith element
# in each row matches the ith field in
# metadata.row_type. Elements are
# encoded based on type as described
# here.
[
"",
],
],
"stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
# produced this result set. These can be requested by setting
# ExecuteSqlRequest.query_mode.
# DML statements always produce stats containing the number of rows
# modified, unless executed using the
# ExecuteSqlRequest.QueryMode.PLAN query mode.
# Other fields may or may not be populated, based on the
# ExecuteSqlRequest.query_mode.
"rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
"rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
"queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
"planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode's `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
"index": 42, # The `PlanNode`'s index in node list.
"kind": "A String", # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
"displayName": "A String", # The display name for the node.
"executionStats": { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
"a_key": "", # Properties of the object.
},
"childLinks": [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
"variable": "A String", # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
"childIndex": 42, # The node to which the link points.
"type": "A String", # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
},
],
"shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
"subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
"a_key": 42,
},
"description": "A String", # A string representation of the expression subtree rooted at this node.
},
"metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# "parameter_reference": "param1",
# "parameter_type": "array"
# }
"a_key": "", # Properties of the object.
},
},
],
},
"queryStats": { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# "rows_returned": "3",
# "elapsed_time": "1.22 secs",
# "cpu_time": "1.19 secs"
# }
"a_key": "", # Properties of the object.
},
},
"metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
"rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `"SELECT UserId, UserName FROM
# Users"` could return a `row_type` value like:
#
# "fields": [
# { "name": "UserId", "type": { "code": "INT64" } },
# { "name": "UserName", "type": { "code": "STRING" } },
# ]
"fields": [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
"type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
# table cell or returned from an SQL query.
"structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
# provides type information for the struct's fields.
"code": "A String", # Required. The TypeCode for this type.
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
},
"name": "A String", # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `"Word"` in the
# query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
# `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
# columns might have an empty name (e.g., `"SELECT
# UPPER(ColName)"`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
"transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
"readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"id": "A String", # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
},
}</pre>
</div>

<div class="method">
<code class="details" id="rollback">rollback(session, body, x__xgafv=None)</code>
<pre>Rolls back a transaction, releasing any locks it holds. It is a good
idea to call this for any transaction that includes one or more
Read or ExecuteSql requests and
ultimately decides not to commit.

`Rollback` returns `OK` if it successfully aborts the transaction, the
transaction was already aborted, or the transaction is not
found. `Rollback` never returns `ABORTED`.
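The rollback request body is a single field. A minimal sketch of building it; the `service`, `session_name`, and `txn_id` names in the commented call shape below are placeholders that would come from earlier beginTransaction/createSession calls:

```python
def rollback_body(transaction_id):
    """Request body for sessions.rollback.

    `transaction_id` is the `id` from the Transaction returned by
    beginTransaction (or begun as a side-effect of a read/query).
    """
    return {"transactionId": transaction_id}

# Illustrative call shape with the discovery client (needs real
# credentials and a live session; not executed here):
#
# service.projects().instances().databases().sessions().rollback(
#     session=session_name, body=rollback_body(txn_id)).execute()
```

Because `Rollback` never returns `ABORTED`, it is safe to call from cleanup paths without its own retry loop.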
Args:
session: string, Required. The session in which the transaction to roll back is running. (required)
body: object, The request body. (required)
The object takes the form of:

{ # The request for Rollback.
"transactionId": "A String", # Required. The transaction to roll back.
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # A generic empty message that you can re-use to avoid defining duplicated
# empty messages in your APIs. A typical example is to use it as the request
# or the response type of an API method. For instance:
#
# service Foo {
# rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
# }
#
# The JSON representation for `Empty` is an empty JSON object `{}`.
}</pre>
</div>

<div class="method">
<code class="details" id="streamingRead">streamingRead(session, body, x__xgafv=None)</code>
<pre>Like Read, except returns the result set as a
stream. Unlike Read, there is no limit on the
size of the returned result set. However, no individual row in
the result set can exceed 100 MiB, and no column value can exceed
10 MiB.

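A minimal sketch of assembling a Read/StreamingRead request body from the fields documented below. The defaults chosen here (read every row in a single-use strong read-only transaction) follow the field docs; the table and column names used in the test are illustrative:

```python
def streaming_read_body(table, columns, key_set=None, limit=None):
    """Assemble a request body for Read or StreamingRead.

    Omitting `transaction` selects the default: a temporary read-only
    transaction with strong concurrency. Omitting `keySet` here defaults
    to every key ({"all": True}).
    """
    body = {
        "table": table,
        "columns": list(columns),
        "keySet": key_set or {"all": True},
    }
    if limit is not None:
        # int64 fields travel as decimal strings in the JSON mapping.
        body["limit"] = str(limit)
    return body
```

Remember that `limit` cannot be combined with a `partitionToken`, per the field documentation below.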
Args:
|
|
session: string, Required. The session in which the read should be performed. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # The request for Read and
|
|
# StreamingRead.
|
|
"index": "A String", # If non-empty, the name of an index on table. This index is
|
|
# used instead of the table primary key when interpreting key_set
|
|
# and sorting result rows. See key_set for further information.
|
|
"transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
|
|
# temporary read-only transaction with strong concurrency.
|
|
# Read or
|
|
# ExecuteSql call runs.
|
|
#
|
|
# See TransactionOptions for more information about transactions.
|
|
"begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
|
|
# it. The transaction ID of the new transaction is returned in
|
|
# ResultSetMetadata.transaction, which is a Transaction.
|
|
#
|
|
#
|
|
# Each session can have at most one active transaction at a time. After the
|
|
# active transaction is completed, the session can immediately be
|
|
# re-used for the next transaction. It is not necessary to create a
|
|
# new session for each transaction.
|
|
#
|
|
# # Transaction Modes
|
|
#
|
|
# Cloud Spanner supports three transaction modes:
|
|
#
|
|
# 1. Locking read-write. This type of transaction is the only way
|
|
# to write data into Cloud Spanner. These transactions rely on
|
|
# pessimistic locking and, if necessary, two-phase commit.
|
|
# Locking read-write transactions may abort, requiring the
|
|
# application to retry.
|
|
#
|
|
# 2. Snapshot read-only. This transaction type provides guaranteed
|
|
# consistency across several reads, but does not allow
|
|
# writes. Snapshot read-only transactions can be configured to
|
|
# read at timestamps in the past. Snapshot read-only
|
|
# transactions do not need to be committed.
|
|
#
|
|
# 3. Partitioned DML. This type of transaction is used to execute
|
|
# a single Partitioned DML statement. Partitioned DML partitions
|
|
# the key space and runs the DML statement over each partition
|
|
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
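The retry guidance above (bound total wall time, retry in the same session, never cap attempt count) can be sketched as a small loop. This is an illustration only: `AbortedError` and `work` are hypothetical stand-ins for the client library's `ABORTED` error type and your transaction function.

```python
import time

class AbortedError(Exception):
    """Hypothetical stand-in for the client library's ABORTED error."""

def run_with_retries(work, timeout_seconds=60.0):
    # Limit the total wall time spent retrying, not the number of attempts.
    deadline = time.monotonic() + timeout_seconds
    attempt = 0
    while True:
        attempt += 1
        try:
            return work()  # re-run the whole transaction on each attempt
        except AbortedError:
            if time.monotonic() >= deadline:
                raise
            # Brief backoff; retrying in the same session lets the
            # transaction's lock priority grow across attempts.
            time.sleep(min(0.05 * attempt, 0.5))
```

`work` should contain the entire read-modify-write sequence ending in Commit, so that every retry replays the whole transaction.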
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
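The `SELECT 1` keep-alive above can be run on a background thread while the application does long client-side work inside the transaction. A minimal sketch, where `execute_sql` is a hypothetical stand-in for an ExecuteSql call on the open transaction:

```python
import threading

def keep_alive(execute_sql, stop_event, interval=5.0):
    # Issue a trivial query well inside the 10-second idle window until
    # the caller signals that the transaction was committed or rolled
    # back. `execute_sql` is a hypothetical stand-in for the client's
    # ExecuteSql call on the open transaction.
    while not stop_event.wait(interval):
        execute_sql("SELECT 1")
```

Typical usage would start this on a daemon thread after BeginTransaction and set `stop_event` just before Commit or Rollback.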
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
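As a quick reference, the bounds above map onto the `readOnly` options documented further down this page. These dicts are an illustrative sketch of the JSON request shapes, assuming the protobuf `Duration` string encoding (e.g. `"10s"`) for the staleness fields:

```python
# Strong (the default): see every transaction committed before the read.
strong_bound = {"readOnly": {"strong": True}}

# Exact staleness: read exactly 15 seconds in the past.
exact_bound = {"readOnly": {"exactStaleness": "15s"}}

# Exact timestamp: repeatable reads at one RFC3339 UTC "Zulu" timestamp.
at_timestamp = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}

# Bounded staleness: at most 10 seconds stale; single-use transactions only.
bounded = {"readOnly": {"maxStaleness": "10s"}}
```

Exactly one bound should be set per transaction; `strong` is used when none is specified.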
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
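A client can sanity-check a proposed read timestamp against the default one-hour window before issuing a stale read. A minimal sketch, assuming the one-hour default described above:

```python
from datetime import datetime, timedelta, timezone

def readable_under_version_gc(read_timestamp, now=None,
                              window=timedelta(hours=1)):
    # True if the proposed read timestamp is still within the default
    # one-hour version-GC window, so the read should not fail with
    # FAILED_PRECONDITION for being too old.
    now = now or datetime.now(timezone.utc)
    return now - read_timestamp <= window
```

This is advisory only: a read whose timestamp ages past the window while executing can still fail server-side.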
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
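The at-least-once caveat above is easy to simulate. `run_at_least_once` is a hypothetical helper that replays a statement against each row, showing why `column = column + 1` is unsafe under replay while an absolute assignment is not:

```python
def run_at_least_once(statement, rows, repeats=2):
    # Simulate Partitioned DML's at-least-once semantics by applying the
    # statement to every row more than once.
    for row in rows:
        for _ in range(repeats):
            statement(row)
    return rows

def increment(row):        # UPDATE t SET column = column + 1  (not idempotent)
    row["column"] += 1

def set_to_hundred(row):   # UPDATE t SET column = 100         (idempotent)
    row["column"] = 100

replayed = run_at_least_once(increment, [{"column": 1}])[0]
safe = run_at_least_once(set_to_hundred, [{"column": 1}])[0]
# replayed["column"] is 3, not the intended 2; safe["column"] stays 100.
```

The same reasoning applies to deletes: `DELETE FROM t WHERE created < @cutoff` is naturally idempotent and therefore a good Partitioned DML candidate.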
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time. After the
# active transaction is completed, the session can immediately be
# re-used for the next transaction. It is not necessary to create a
# new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp <=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# <= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"resumeToken": "A String", # If this request is resuming a previously interrupted read,
# `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new read to resume where the last read left off. The
# rest of the request parameters must exactly match the request
# that yielded this token.
"partitionToken": "A String", # If present, results will be restricted to the specified partition
# previously created using PartitionRead(). There must be an exact
# match for the values of fields common to this message and the
# PartitionReadRequest message used to create this partition_token.
"keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
# primary keys of the rows in table to be yielded, unless index
# is present. If index is present, then key_set instead names
# index keys in index.
#
# If the partition_token field is empty, rows are yielded
# in table primary key order (if index is empty) or index key order
# (if index is non-empty). If the partition_token field is not
# empty, rows will be yielded in an unspecified order.
#
# It is not an error for the `key_set` to name rows that do not
# exist in the database. Read yields nothing for nonexistent rows.
# the keys are expected to be in the same table or index. The keys need
# not be sorted in any particular way.
#
# If the same key is specified multiple times in the set (for example
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
# behaves as if the key were only specified once.
"ranges": [ # A list of key ranges. See KeyRange for more information about
# key range specifications.
{ # KeyRange represents a range of rows in a table or index.
#
# A range has a start key and an end key. These keys can be open or
# closed, indicating if the range includes rows with that key.
#
# Keys are represented by lists, where the ith value in the list
# corresponds to the ith component of the table or index primary key.
# Individual values are encoded as described
# here.
#
# For example, consider the following table definition:
#
# CREATE TABLE UserEvents (
# UserName STRING(MAX),
# EventDate STRING(10)
# ) PRIMARY KEY(UserName, EventDate);
#
# The following keys name rows in this table:
#
# "Bob", "2014-09-23"
#
# Since the `UserEvents` table's `PRIMARY KEY` clause names two
# columns, each `UserEvents` key has two elements; the first is the
|
|
# `UserName`, and the second is the `EventDate`.
|
|
#
|
|
# Key ranges with multiple components are interpreted
|
|
# lexicographically by component using the table or index key's declared
|
|
# sort order. For example, the following range returns all events for
|
|
# user `"Bob"` that occurred in the year 2015:
|
|
#
|
|
# "start_closed": ["Bob", "2015-01-01"]
|
|
# "end_closed": ["Bob", "2015-12-31"]
|
|
#
|
|
# Start and end keys can omit trailing key components. This affects the
|
|
# inclusion and exclusion of rows that exactly match the provided key
|
|
# components: if the key is closed, then rows that exactly match the
|
|
# provided components are included; if the key is open, then rows
|
|
# that exactly match are not included.
|
|
#
|
|
# For example, the following range includes all events for `"Bob"` that
|
|
# occurred during and after the year 2000:
|
|
#
|
|
# "start_closed": ["Bob", "2000-01-01"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# The next example retrieves all events for `"Bob"`:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_closed": ["Bob"]
|
|
#
|
|
# To retrieve events before the year 2000:
|
|
#
|
|
# "start_closed": ["Bob"]
|
|
# "end_open": ["Bob", "2000-01-01"]
|
|
#
|
|
# The following range includes all rows in the table:
|
|
#
|
|
# "start_closed": []
|
|
# "end_closed": []
|
|
#
|
|
# This range returns all users whose `UserName` begins with any
|
|
# character from A to C:
|
|
#
|
|
# "start_closed": ["A"]
|
|
# "end_open": ["D"]
|
|
#
|
|
# This range returns all users whose `UserName` begins with B:
|
|
#
|
|
# "start_closed": ["B"]
|
|
# "end_open": ["C"]
|
|
#
|
|
# Key ranges honor column sort order. For example, suppose a table is
|
|
# defined as follows:
|
|
#
|
|
# CREATE TABLE DescendingSortedTable {
|
|
# Key INT64,
|
|
# ...
|
|
# ) PRIMARY KEY(Key DESC);
|
|
#
|
|
# The following range retrieves all rows with key values between 1
|
|
# and 100 inclusive:
|
|
#
|
|
# "start_closed": ["100"]
|
|
# "end_closed": ["1"]
|
|
#
|
|
# Note that 100 is passed as the start, and 1 is passed as the end,
|
|
# because `Key` is a descending column in the schema.
|
|
"endOpen": [ # If the end is open, then the range excludes rows whose first
|
|
# `len(end_open)` key columns exactly match `end_open`.
|
|
"",
|
|
],
|
|
"startOpen": [ # If the start is open, then the range excludes rows whose first
|
|
# `len(start_open)` key columns exactly match `start_open`.
|
|
"",
|
|
],
|
|
"endClosed": [ # If the end is closed, then the range includes all rows whose
|
|
# first `len(end_closed)` key columns exactly match `end_closed`.
|
|
"",
|
|
],
|
|
"startClosed": [ # If the start is closed, then the range includes all rows whose
|
|
# first `len(start_closed)` key columns exactly match `start_closed`.
|
|
"",
|
|
],
|
|
},
|
|
],
|
|
"keys": [ # A list of specific keys. Entries in `keys` should have exactly as
|
|
# many elements as there are columns in the primary or index key
|
|
# with which this `KeySet` is used. Individual key values are
|
|
# encoded as described here.
|
|
[
|
|
"",
|
|
],
|
|
],
|
|
"all": True or False, # For convenience `all` can be set to `true` to indicate that this
|
|
# `KeySet` matches all keys in the table or index. Note that any keys
|
|
# specified in `keys` or `ranges` are only yielded once.
|
|
},
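As an illustration of the `keySet` schema documented above, here is a hypothetical request-body fragment (a sketch, not an official client snippet) for the `UserEvents` sample table: it reads all of Bob's 2015 events by range, plus one specific row by key.

```python
# Hypothetical request-body fragment for the UserEvents sample table above.
read_body = {
    "table": "UserEvents",
    "columns": ["UserName", "EventDate"],
    "keySet": {
        "ranges": [{
            # Closed bounds: rows whose key components exactly match the
            # listed values are included in the range.
            "startClosed": ["Bob", "2015-01-01"],
            "endClosed": ["Bob", "2015-12-31"],
        }],
        # Each entry in `keys` carries one value per primary-key column
        # (UserName, EventDate).
        "keys": [["Bob", "2014-09-23"]],
    },
}
```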
"limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
    # is zero, the default is no limit. A limit cannot be specified if
    # `partition_token` is set.
"table": "A String", # Required. The name of the table in the database to be read.
"columns": [ # The columns of table to be returned for each row matching
    # this request.
  "A String",
],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

{ # Partial results from a streaming read or SQL query. Streaming reads and
    # SQL queries better tolerate large result sets, large rows, and large
    # values, but are a little trickier to consume.
  "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
      # as TCP connection loss. If this occurs, the stream of results can
      # be resumed by re-sending the original request and including
      # `resume_token`. Note that executing any other transaction in the
      # same session invalidates the token.
  "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
      # be combined with more values from subsequent `PartialResultSet`s
      # to obtain a complete field value.
  "values": [ # A streamed result set consists of a stream of values, which might
      # be split into many `PartialResultSet` messages to accommodate
      # large rows and/or large values. Every N complete values defines a
      # row, where N is equal to the number of entries in
      # metadata.row_type.fields.
      #
      # Most values are encoded based on type as described
      # here.
      #
      # It is possible that the last value in values is "chunked",
      # meaning that the rest of the value is sent in subsequent
      # `PartialResultSet`(s). This is denoted by the chunked_value
      # field. Two or more chunked values can be merged to form a
      # complete value as follows:
      #
      #   * `bool/number/null`: cannot be chunked
      #   * `string`: concatenate the strings
      #   * `list`: concatenate the lists. If the last element in a list is a
      #     `string`, `list`, or `object`, merge it with the first element in
      #     the next list by applying these rules recursively.
      #   * `object`: concatenate the (field name, field value) pairs. If a
      #     field name is duplicated, then apply these rules recursively
      #     to merge the field values.
      #
      # Some examples of merging:
      #
      #     # Strings are concatenated.
      #     "foo", "bar" => "foobar"
      #
      #     # Lists of non-strings are concatenated.
      #     [2, 3], [4] => [2, 3, 4]
      #
      #     # Lists are concatenated, but the last and first elements are merged
      #     # because they are strings.
      #     ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
      #
      #     # Lists are concatenated, but the last and first elements are merged
      #     # because they are lists. Recursively, the last and first elements
      #     # of the inner lists are merged because they are strings.
      #     ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
      #
      #     # Non-overlapping object fields are combined.
      #     {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
      #
      #     # Overlapping object fields are merged.
      #     {"a": "1"}, {"a": "2"} => {"a": "12"}
      #
      #     # Examples of merging objects containing lists of strings.
      #     {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
      #
      # For a more complete example, suppose a streaming SQL query is
      # yielding a result set whose rows contain a single string
      # field. The following `PartialResultSet`s might be yielded:
      #
      #     {
      #       "metadata": { ... }
      #       "values": ["Hello", "W"]
      #       "chunked_value": true
      #       "resume_token": "Af65..."
      #     }
      #     {
      #       "values": ["orl"]
      #       "chunked_value": true
      #       "resume_token": "Bqp2..."
      #     }
      #     {
      #       "values": ["d"]
      #       "resume_token": "Zx1B..."
      #     }
      #
      # This sequence of `PartialResultSet`s encodes two rows, one
      # containing the field value `"Hello"`, and a second containing the
      # field value `"World" = "W" + "orl" + "d"`.
    "",
  ],
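The merge rules above can be sketched in Python. This assumes JSON-decoded value types (`str`, `list`, `dict`, and unchunked scalars) and illustrates the documented algorithm; it is not client-library code.

```python
def merge_chunks(prev, cur):
    """Merge the last value of one PartialResultSet with the first of the next."""
    if isinstance(prev, str) and isinstance(cur, str):
        return prev + cur  # strings: concatenate
    if isinstance(prev, list) and isinstance(cur, list):
        # Lists: concatenate, merging the boundary elements recursively
        # when the last element is a string, list, or object.
        if prev and cur and isinstance(prev[-1], (str, list, dict)):
            return prev[:-1] + [merge_chunks(prev[-1], cur[0])] + cur[1:]
        return prev + cur
    if isinstance(prev, dict) and isinstance(cur, dict):
        # Objects: concatenate pairs, merging duplicated field names recursively.
        merged = dict(prev)
        for key, value in cur.items():
            merged[key] = merge_chunks(merged[key], value) if key in merged else value
        return merged
    raise TypeError("bool/number/null values cannot be chunked")

# The examples from the documentation above:
assert merge_chunks("foo", "bar") == "foobar"
assert merge_chunks([2, 3], [4]) == [2, 3, 4]
assert merge_chunks(["a", "b"], ["c", "d"]) == ["a", "bc", "d"]
assert merge_chunks(["a", ["b", "c"]], [["d"], "e"]) == ["a", ["b", "cd"], "e"]
assert merge_chunks({"a": "1"}, {"b": "2"}) == {"a": "1", "b": "2"}
assert merge_chunks({"a": ["1"]}, {"a": ["2"]}) == {"a": ["12"]}
```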
  "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
      # streaming result set. These can be requested by setting
      # ExecuteSqlRequest.query_mode and are sent
      # only once with the last response in the stream.
      # This field will also be present in the last response for DML
      # statements.
    "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
        # returns a lower bound of the rows modified.
    "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
    "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
      "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
          # with the plan root. Each PlanNode's `id` corresponds to its index in
          # `plan_nodes`.
        { # Node information for nodes appearing in a QueryPlan.plan_nodes.
          "index": 42, # The `PlanNode`'s index in node list.
          "kind": "A String", # Used to determine the type of node. May be needed for visualizing
              # different kinds of nodes differently. For example, if the node is a
              # SCALAR node, it will have a condensed representation
              # which can be used to directly embed a description of the node in its
              # parent.
          "displayName": "A String", # The display name for the node.
          "executionStats": { # The execution statistics associated with the node, contained in a group of
              # key-value pairs. Only present if the plan was returned as a result of a
              # profile query. For example, number of executions, number of rows/time per
              # execution etc.
            "a_key": "", # Properties of the object.
          },
          "childLinks": [ # List of child node `index`es and their relationship to this parent.
            { # Metadata associated with a parent-child relationship appearing in a
                # PlanNode.
              "variable": "A String", # Only present if the child node is SCALAR and corresponds
                  # to an output variable of the parent node. The field carries the name of
                  # the output variable.
                  # For example, a `TableScan` operator that reads rows from a table will
                  # have child links to the `SCALAR` nodes representing the output variables
                  # created for each column that is read by the operator. The corresponding
                  # `variable` fields will be set to the variable names assigned to the
                  # columns.
              "childIndex": 42, # The node to which the link points.
              "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                  # distinguish between the build child and the probe child, or in the case
                  # of the child being an output variable, to represent the tag associated
                  # with the output variable.
            },
          ],
          "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
              # `SCALAR` PlanNode(s).
            "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
                # where the `description` string of this node references a `SCALAR`
                # subquery contained in the expression subtree rooted at this node. The
                # referenced `SCALAR` subquery may not necessarily be a direct child of
                # this node.
              "a_key": 42,
            },
            "description": "A String", # A string representation of the expression subtree rooted at this node.
          },
          "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
              # For example, a Parameter Reference node could have the following
              # information in its metadata:
              #
              #     {
              #       "parameter_reference": "param1",
              #       "parameter_type": "array"
              #     }
            "a_key": "", # Properties of the object.
          },
        },
      ],
    },
    "queryStats": { # Aggregated statistics from the execution of the query. Only present when
        # the query is profiled. For example, a query could return the statistics as
        # follows:
        #
        #     {
        #       "rows_returned": "3",
        #       "elapsed_time": "1.22 secs",
        #       "cpu_time": "1.19 secs"
        #     }
      "a_key": "", # Properties of the object.
    },
  },
  "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
      # Only present in the first response.
    "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
        # set. For example, a SQL query like `"SELECT UserId, UserName FROM
        # Users"` could return a `row_type` value like:
        #
        #     "fields": [
        #       { "name": "UserId", "type": { "code": "INT64" } },
        #       { "name": "UserName", "type": { "code": "STRING" } },
        #     ]
      "fields": [ # The list of fields that make up this struct. Order is
          # significant, because values of this struct type are represented as
          # lists, where the order of field values matches the order of
          # fields in the StructType. In turn, the order of fields
          # matches the order of columns in a read request, or the order of
          # fields in the `SELECT` clause of a query.
        { # Message representing a single field of a struct.
          "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
              # table cell or returned from an SQL query.
            "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
                # provides type information for the struct's fields.
            "code": "A String", # Required. The TypeCode for this type.
            "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
                # is the type of the array elements.
          },
          "name": "A String", # The name of the field. For reads, this is the column name. For
              # SQL queries, it is the column alias (e.g., `"Word"` in the
              # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
              # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
              # columns might have an empty name (e.g., `"SELECT
              # UPPER(ColName)"`). Note that a query result can contain
              # multiple fields with the same name.
        },
      ],
    },
    "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
        # information about the new transaction is yielded here.
      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
          # for the transaction. Not returned by default: see
          # TransactionOptions.ReadOnly.return_read_timestamp.
          #
          # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
          # Example: `"2014-10-02T15:01:23.045123456Z"`.
      "id": "A String", # `id` may be used to identify the transaction in subsequent
          # Read,
          # ExecuteSql,
          # Commit, or
          # Rollback calls.
          #
          # Single-use read-only transactions do not have IDs, because
          # single-use transactions do not support multiple requests.
    },
  },
}</pre>
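The "every N complete values defines a row" rule described above can be illustrated locally, with no API calls. This sketch groups a flat stream of `values` into rows using `metadata.row_type.fields`; `rows_from_values` is a hypothetical helper name, not part of any client library.

```python
def rows_from_values(fields, values):
    """Group streamed values into rows: one row per len(fields) values."""
    n = len(fields)
    assert len(values) % n == 0, "stream ended mid-row"
    names = [f["name"] for f in fields]
    return [dict(zip(names, values[i:i + n])) for i in range(0, len(values), n)]

# Using the row_type example from the documentation above:
fields = [{"name": "UserId", "type": {"code": "INT64"}},
          {"name": "UserName", "type": {"code": "STRING"}}]
assert rows_from_values(fields, ["1", "Bob", "2", "Alice"]) == [
    {"UserId": "1", "UserName": "Bob"},
    {"UserId": "2", "UserName": "Alice"},
]
```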
</div>

</body></html>