You can not select more than 25 topics
Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.
2305 lines
117 KiB
2305 lines
117 KiB
<html><body>
|
|
<style>
|
|
|
|
body, h1, h2, h3, div, span, p, pre, a {
|
|
margin: 0;
|
|
padding: 0;
|
|
border: 0;
|
|
font-weight: inherit;
|
|
font-style: inherit;
|
|
font-size: 100%;
|
|
font-family: inherit;
|
|
vertical-align: baseline;
|
|
}
|
|
|
|
body {
|
|
font-size: 13px;
|
|
padding: 1em;
|
|
}
|
|
|
|
h1 {
|
|
font-size: 26px;
|
|
margin-bottom: 1em;
|
|
}
|
|
|
|
h2 {
|
|
font-size: 24px;
|
|
margin-bottom: 1em;
|
|
}
|
|
|
|
h3 {
|
|
font-size: 20px;
|
|
margin-bottom: 1em;
|
|
margin-top: 1em;
|
|
}
|
|
|
|
pre, code {
|
|
line-height: 1.5;
|
|
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
|
|
}
|
|
|
|
pre {
|
|
margin-top: 0.5em;
|
|
}
|
|
|
|
h1, h2, h3, p {
|
|
font-family: Arial, sans serif;
|
|
}
|
|
|
|
h1, h2, h3 {
|
|
border-bottom: solid #CCC 1px;
|
|
}
|
|
|
|
.toc_element {
|
|
margin-top: 0.5em;
|
|
}
|
|
|
|
.firstline {
|
|
margin-left: 2 em;
|
|
}
|
|
|
|
.method {
|
|
margin-top: 1em;
|
|
border: solid 1px #CCC;
|
|
padding: 1em;
|
|
background: #EEE;
|
|
}
|
|
|
|
.details {
|
|
font-weight: bold;
|
|
font-size: 14px;
|
|
}
|
|
|
|
</style>
|
|
|
|
<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a></h1>
|
|
<h2>Instance Methods</h2>
|
|
<p class="toc_element">
|
|
<code><a href="ml_v1.projects.models.versions.html">versions()</a></code>
|
|
</p>
|
|
<p class="firstline">Returns the versions Resource.</p>
|
|
|
|
<p class="toc_element">
|
|
<code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Creates a model which will later contain one or more versions.</p>
|
|
<p class="toc_element">
|
|
<code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Deletes a model.</p>
|
|
<p class="toc_element">
|
|
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Gets information about a model, including its name, the description (if</p>
|
|
<p class="toc_element">
|
|
<code><a href="#getIamPolicy">getIamPolicy(resource, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Gets the access control policy for a resource.</p>
|
|
<p class="toc_element">
|
|
<code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
|
|
<p class="firstline">Lists the models in a project.</p>
|
|
<p class="toc_element">
|
|
<code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
|
|
<p class="firstline">Retrieves the next page of results.</p>
|
|
<p class="toc_element">
|
|
<code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Updates a specific model resource.</p>
|
|
<p class="toc_element">
|
|
<code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
|
|
<p class="toc_element">
|
|
<code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
|
|
<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
|
|
<h3>Method Details</h3>
|
|
<div class="method">
|
|
<code class="details" id="create">create(parent, body, x__xgafv=None)</code>
|
|
<pre>Creates a model which will later contain one or more versions.
|
|
|
|
You must add at least one version before you can request predictions from
|
|
the model. Add versions by calling
|
|
[projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create).
|
|
|
|
Args:
|
|
parent: string, Required. The project name. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # Represents a machine learning solution.
|
|
#
|
|
# A model can have multiple versions, each of which is a deployed, trained
|
|
# model ready to receive prediction requests. The model itself is just a
|
|
# container.
|
|
"description": "A String", # Optional. The description specified for the model when it was created.
|
|
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
|
|
# streams to Stackdriver Logging. These can be more verbose than the standard
|
|
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
|
|
# However, they are helpful for debugging. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high QPS. Estimate your
|
|
# costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your models.
|
|
# Each label is a key-value pair, where both the key and the value are
|
|
# arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"regions": [ # Optional. The list of regions where the model is going to be deployed.
|
|
# Currently only one region per model is supported.
|
|
# Defaults to 'us-central1' if nothing is set.
|
|
# See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
|
|
# for AI Platform services.
|
|
# Note:
|
|
# * No matter where a model is deployed, it can always be accessed by
|
|
# users from anywhere, both for online and batch prediction.
|
|
# * The region for a batch prediction job is set by the region field when
|
|
# submitting the batch prediction job and does not take its value from
|
|
# this field.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetModel`, and
|
|
# systems are expected to put that etag in the request to `UpdateModel` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
|
|
# handle prediction requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
#
|
|
# Each version is a trained model deployed in the cloud, ready to handle
|
|
# prediction requests. A model can have multiple versions. You can get
|
|
# information about all of the versions of a given model by calling
|
|
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
|
|
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your model
|
|
# versions. Each label is a key-value pair, where both the key and the value
|
|
# are arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
|
|
# applies to online prediction service.
|
|
# <dl>
|
|
# <dt>mls1-c1-m2</dt>
|
|
# <dd>
|
|
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
|
|
# name for this machine type is "mls1-highmem-1".
|
|
# </dd>
|
|
# <dt>mls1-c4-m2</dt>
|
|
# <dd>
|
|
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
|
|
# deprecated name for this machine type is "mls1-highcpu-4".
|
|
# </dd>
|
|
# </dl>
|
|
"description": "A String", # Optional. The description specified for the version when it was created.
|
|
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
|
|
# If not set, AI Platform uses the default stable version, 1.0. For more
|
|
# information, see the
|
|
# [runtime version list](/ml-engine/docs/runtime-version-list) and
|
|
# [how to manage runtime versions](/ml-engine/docs/versioning).
|
|
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
|
|
# model. You should generally use `auto_scaling` with an appropriate
|
|
# `min_nodes` instead, but this option is available if you want more
|
|
# predictable billing. Beware that latency and error rates will increase
|
|
# if the traffic exceeds that capability of the system to serve it based
|
|
# on the selected number of nodes.
|
|
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
|
|
# starting from the time the model is deployed, so the cost of operating
|
|
# this model will be proportional to `nodes` * number of hours since
|
|
# last billing cycle plus the cost for each prediction performed.
|
|
},
|
|
"predictionClass": "A String", # Optional. The fully qualified name
|
|
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
|
|
# the Predictor interface described in this reference field. The module
|
|
# containing this class should be included in a package provided to the
|
|
# [`packageUris` field](#Version.FIELDS.package_uris).
|
|
#
|
|
# Specify this field if and only if you are deploying a [custom prediction
|
|
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
# If you specify this field, you must set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
#
|
|
# The following code sample provides the Predictor interface:
|
|
#
|
|
# ```py
|
|
# class Predictor(object):
|
|
# """Interface for constructing custom predictors."""
|
|
#
|
|
# def predict(self, instances, **kwargs):
|
|
# """Performs custom prediction.
|
|
#
|
|
# Instances are the decoded values from the request. They have already
|
|
# been deserialized from JSON.
|
|
#
|
|
# Args:
|
|
# instances: A list of prediction input instances.
|
|
# **kwargs: A dictionary of keyword args provided as additional
|
|
# fields on the predict request body.
|
|
#
|
|
# Returns:
|
|
# A list of outputs containing the prediction results. This list must
|
|
# be JSON serializable.
|
|
# """
|
|
# raise NotImplementedError()
|
|
#
|
|
# @classmethod
|
|
# def from_path(cls, model_dir):
|
|
# """Creates an instance of Predictor using the given path.
|
|
#
|
|
# Loading of the predictor should be done in this method.
|
|
#
|
|
# Args:
|
|
# model_dir: The local directory that contains the exported model
|
|
# file along with any additional files uploaded when creating the
|
|
# version resource.
|
|
#
|
|
# Returns:
|
|
# An instance implementing this Predictor class.
|
|
# """
|
|
# raise NotImplementedError()
|
|
# ```
|
|
#
|
|
# Learn more about [the Predictor interface and custom prediction
|
|
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
|
|
# response to increases and decreases in traffic. Care should be
|
|
# taken to ramp up traffic according to the model's ability to scale
|
|
# or you will start seeing increases in latency and 429 response codes.
|
|
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
|
|
# nodes are always up, starting from the time the model is deployed.
|
|
# Therefore, the cost of operating this model will be at least
|
|
# `rate` * `min_nodes` * number of hours since last billing cycle,
|
|
# where `rate` is the cost per node-hour as documented in the
|
|
# [pricing guide](/ml-engine/docs/pricing),
|
|
# even if no predictions are performed. There is additional cost for each
|
|
# prediction performed.
|
|
#
|
|
# Unlike manual scaling, if the load gets too heavy for the nodes
|
|
# that are up, the service will automatically add nodes to handle the
|
|
# increased load as well as scale back as traffic drops, always maintaining
|
|
# at least `min_nodes`. You will be charged for the time in which additional
|
|
# nodes are used.
|
|
#
|
|
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
|
|
# to a model stops (and after a cool-down period), nodes will be shut down
|
|
# and no charges will be incurred until traffic to the model resumes.
|
|
#
|
|
# You can set `min_nodes` when creating the model version, and you can also
|
|
# update `min_nodes` for an existing version:
|
|
# <pre>
|
|
# update_body.json:
|
|
# {
|
|
# 'autoScaling': {
|
|
# 'minNodes': 5
|
|
# }
|
|
# }
|
|
# </pre>
|
|
# HTTP request:
|
|
# <pre>
|
|
# PATCH
|
|
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
|
|
# -d @./update_body.json
|
|
# </pre>
|
|
},
|
|
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
|
|
"state": "A String", # Output only. The state of a version.
|
|
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
|
|
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
|
|
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
|
|
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
|
|
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
|
|
# `XGBOOST`. If you do not specify a framework, AI Platform
|
|
# will analyze files in the deployment_uri to determine a framework. If you
|
|
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
|
|
# of the model to 1.4 or greater.
|
|
#
|
|
# Do **not** specify a framework if you're deploying a [custom
|
|
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
|
|
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
|
|
# or [scikit-learn pipelines with custom
|
|
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
|
|
#
|
|
# For a custom prediction routine, one of these packages must contain your
|
|
# Predictor class (see
|
|
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
|
|
# include any dependencies used by your Predictor or scikit-learn pipeline
|
|
# uses that are not already included in your selected [runtime
|
|
# version](/ml-engine/docs/tensorflow/runtime-version-list).
|
|
#
|
|
# If you specify this field, you must also set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetVersion`, and
|
|
# systems are expected to put that etag in the request to `UpdateVersion` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
|
|
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
|
|
# create the version. See the
|
|
# [guide to model
|
|
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
|
|
# information.
|
|
#
|
|
# When passing Version to
|
|
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
|
|
# the model service uses the specified location as the source of the model.
|
|
# Once deployed, the model version is hosted by the prediction service, so
|
|
# this location is useful only as a historical record.
|
|
# The total number of model files can't exceed 1000.
|
|
"createTime": "A String", # Output only. The time the version was created.
|
|
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
|
|
# requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
"name": "A String", # Required.The name specified for the version when it was created.
|
|
#
|
|
# The version name must be unique within the model it is created in.
|
|
},
|
|
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
|
|
# Logging. These logs are like standard server access logs, containing
|
|
# information like timestamp and latency for each request. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high queries per second rate
|
|
# (QPS). Estimate your costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"name": "A String", # Required. The name specified for the model when it was created.
|
|
#
|
|
# The model name must be unique within the project it is created in.
|
|
}
|
|
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Represents a machine learning solution.
|
|
#
|
|
# A model can have multiple versions, each of which is a deployed, trained
|
|
# model ready to receive prediction requests. The model itself is just a
|
|
# container.
|
|
"description": "A String", # Optional. The description specified for the model when it was created.
|
|
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
|
|
# streams to Stackdriver Logging. These can be more verbose than the standard
|
|
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
|
|
# However, they are helpful for debugging. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high QPS. Estimate your
|
|
# costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your models.
|
|
# Each label is a key-value pair, where both the key and the value are
|
|
# arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"regions": [ # Optional. The list of regions where the model is going to be deployed.
|
|
# Currently only one region per model is supported.
|
|
# Defaults to 'us-central1' if nothing is set.
|
|
# See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
|
|
# for AI Platform services.
|
|
# Note:
|
|
# * No matter where a model is deployed, it can always be accessed by
|
|
# users from anywhere, both for online and batch prediction.
|
|
# * The region for a batch prediction job is set by the region field when
|
|
# submitting the batch prediction job and does not take its value from
|
|
# this field.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetModel`, and
|
|
# systems are expected to put that etag in the request to `UpdateModel` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
|
|
# handle prediction requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
#
|
|
# Each version is a trained model deployed in the cloud, ready to handle
|
|
# prediction requests. A model can have multiple versions. You can get
|
|
# information about all of the versions of a given model by calling
|
|
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
|
|
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your model
|
|
# versions. Each label is a key-value pair, where both the key and the value
|
|
# are arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
|
|
# applies to online prediction service.
|
|
# <dl>
|
|
# <dt>mls1-c1-m2</dt>
|
|
# <dd>
|
|
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
|
|
# name for this machine type is "mls1-highmem-1".
|
|
# </dd>
|
|
# <dt>mls1-c4-m2</dt>
|
|
# <dd>
|
|
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
|
|
# deprecated name for this machine type is "mls1-highcpu-4".
|
|
# </dd>
|
|
# </dl>
|
|
"description": "A String", # Optional. The description specified for the version when it was created.
|
|
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
|
|
# If not set, AI Platform uses the default stable version, 1.0. For more
|
|
# information, see the
|
|
# [runtime version list](/ml-engine/docs/runtime-version-list) and
|
|
# [how to manage runtime versions](/ml-engine/docs/versioning).
|
|
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
|
|
# model. You should generally use `auto_scaling` with an appropriate
|
|
# `min_nodes` instead, but this option is available if you want more
|
|
# predictable billing. Beware that latency and error rates will increase
|
|
# if the traffic exceeds that capability of the system to serve it based
|
|
# on the selected number of nodes.
|
|
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
|
|
# starting from the time the model is deployed, so the cost of operating
|
|
# this model will be proportional to `nodes` * number of hours since
|
|
# last billing cycle plus the cost for each prediction performed.
|
|
},
|
|
"predictionClass": "A String", # Optional. The fully qualified name
|
|
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
|
|
# the Predictor interface described in this reference field. The module
|
|
# containing this class should be included in a package provided to the
|
|
# [`packageUris` field](#Version.FIELDS.package_uris).
|
|
#
|
|
# Specify this field if and only if you are deploying a [custom prediction
|
|
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
# If you specify this field, you must set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
#
|
|
# The following code sample provides the Predictor interface:
|
|
#
|
|
# ```py
|
|
# class Predictor(object):
|
|
# """Interface for constructing custom predictors."""
|
|
#
|
|
# def predict(self, instances, **kwargs):
|
|
# """Performs custom prediction.
|
|
#
|
|
# Instances are the decoded values from the request. They have already
|
|
# been deserialized from JSON.
|
|
#
|
|
# Args:
|
|
# instances: A list of prediction input instances.
|
|
# **kwargs: A dictionary of keyword args provided as additional
|
|
# fields on the predict request body.
|
|
#
|
|
# Returns:
|
|
# A list of outputs containing the prediction results. This list must
|
|
# be JSON serializable.
|
|
# """
|
|
# raise NotImplementedError()
|
|
#
|
|
# @classmethod
|
|
# def from_path(cls, model_dir):
|
|
# """Creates an instance of Predictor using the given path.
|
|
#
|
|
# Loading of the predictor should be done in this method.
|
|
#
|
|
# Args:
|
|
# model_dir: The local directory that contains the exported model
|
|
# file along with any additional files uploaded when creating the
|
|
# version resource.
|
|
#
|
|
# Returns:
|
|
# An instance implementing this Predictor class.
|
|
# """
|
|
# raise NotImplementedError()
|
|
# ```
|
|
#
|
|
# Learn more about [the Predictor interface and custom prediction
|
|
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
|
|
# response to increases and decreases in traffic. Care should be
|
|
# taken to ramp up traffic according to the model's ability to scale
|
|
# or you will start seeing increases in latency and 429 response codes.
|
|
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
|
|
# nodes are always up, starting from the time the model is deployed.
|
|
# Therefore, the cost of operating this model will be at least
|
|
# `rate` * `min_nodes` * number of hours since last billing cycle,
|
|
# where `rate` is the cost per node-hour as documented in the
|
|
# [pricing guide](/ml-engine/docs/pricing),
|
|
# even if no predictions are performed. There is additional cost for each
|
|
# prediction performed.
|
|
#
|
|
# Unlike manual scaling, if the load gets too heavy for the nodes
|
|
# that are up, the service will automatically add nodes to handle the
|
|
# increased load as well as scale back as traffic drops, always maintaining
|
|
# at least `min_nodes`. You will be charged for the time in which additional
|
|
# nodes are used.
|
|
#
|
|
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
|
|
# to a model stops (and after a cool-down period), nodes will be shut down
|
|
# and no charges will be incurred until traffic to the model resumes.
|
|
#
|
|
# You can set `min_nodes` when creating the model version, and you can also
|
|
# update `min_nodes` for an existing version:
|
|
# <pre>
|
|
# update_body.json:
|
|
# {
|
|
# 'autoScaling': {
|
|
# 'minNodes': 5
|
|
# }
|
|
# }
|
|
# </pre>
|
|
# HTTP request:
|
|
# <pre>
|
|
# PATCH
|
|
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
|
|
# -d @./update_body.json
|
|
# </pre>
|
|
},
|
|
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
|
|
"state": "A String", # Output only. The state of a version.
|
|
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
|
|
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
|
|
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
|
|
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
|
|
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
|
|
# `XGBOOST`. If you do not specify a framework, AI Platform
|
|
# will analyze files in the deployment_uri to determine a framework. If you
|
|
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
|
|
# of the model to 1.4 or greater.
|
|
#
|
|
# Do **not** specify a framework if you're deploying a [custom
|
|
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
|
|
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
|
|
# or [scikit-learn pipelines with custom
|
|
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
|
|
#
|
|
# For a custom prediction routine, one of these packages must contain your
|
|
# Predictor class (see
|
|
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
|
|
# include any dependencies used by your Predictor or scikit-learn pipeline
|
|
# uses that are not already included in your selected [runtime
|
|
# version](/ml-engine/docs/tensorflow/runtime-version-list).
|
|
#
|
|
# If you specify this field, you must also set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetVersion`, and
|
|
# systems are expected to put that etag in the request to `UpdateVersion` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
|
|
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
|
|
# create the version. See the
|
|
# [guide to model
|
|
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
|
|
# information.
|
|
#
|
|
# When passing Version to
|
|
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
|
|
# the model service uses the specified location as the source of the model.
|
|
# Once deployed, the model version is hosted by the prediction service, so
|
|
# this location is useful only as a historical record.
|
|
# The total number of model files can't exceed 1000.
|
|
"createTime": "A String", # Output only. The time the version was created.
|
|
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
|
|
# requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
"name": "A String", # Required.The name specified for the version when it was created.
|
|
#
|
|
# The version name must be unique within the model it is created in.
|
|
},
|
|
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
|
|
# Logging. These logs are like standard server access logs, containing
|
|
# information like timestamp and latency for each request. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high queries per second rate
|
|
# (QPS). Estimate your costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"name": "A String", # Required. The name specified for the model when it was created.
|
|
#
|
|
# The model name must be unique within the project it is created in.
|
|
}</pre>
|
|
</div>
|
|
|
|
<div class="method">
|
|
<code class="details" id="delete">delete(name, x__xgafv=None)</code>
|
|
<pre>Deletes a model.
|
|
|
|
You can only delete a model if there are no versions in it. You can delete
|
|
versions by calling
|
|
[projects.models.versions.delete](/ml-engine/reference/rest/v1/projects.models.versions/delete).
|
|
|
|
Args:
|
|
name: string, Required. The name of the model. (required)
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # This resource represents a long-running operation that is the result of a
|
|
# network API call.
|
|
"metadata": { # Service-specific metadata associated with the operation. It typically
|
|
# contains progress information and common metadata such as create time.
|
|
# Some services might not provide such metadata. Any method that returns a
|
|
# long-running operation should document the metadata type, if any.
|
|
"a_key": "", # Properties of the object. Contains field @type with type URL.
|
|
},
|
|
"error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
|
|
# different programming environments, including REST APIs and RPC APIs. It is
|
|
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
|
|
# three pieces of data: error code, error message, and error details.
|
|
#
|
|
# You can find out more about this error model and how to work with it in the
|
|
# [API Design Guide](https://cloud.google.com/apis/design/errors).
|
|
"message": "A String", # A developer-facing error message, which should be in English. Any
|
|
# user-facing error message should be localized and sent in the
|
|
# google.rpc.Status.details field, or localized by the client.
|
|
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
|
|
"details": [ # A list of messages that carry the error details. There is a common set of
|
|
# message types for APIs to use.
|
|
{
|
|
"a_key": "", # Properties of the object. Contains field @type with type URL.
|
|
},
|
|
],
|
|
},
|
|
"done": True or False, # If the value is `false`, it means the operation is still in progress.
|
|
# If `true`, the operation is completed, and either `error` or `response` is
|
|
# available.
|
|
"response": { # The normal response of the operation in case of success. If the original
|
|
# method returns no data on success, such as `Delete`, the response is
|
|
# `google.protobuf.Empty`. If the original method is standard
|
|
# `Get`/`Create`/`Update`, the response should be the resource. For other
|
|
# methods, the response should have the type `XxxResponse`, where `Xxx`
|
|
# is the original method name. For example, if the original method name
|
|
# is `TakeSnapshot()`, the inferred response type is
|
|
# `TakeSnapshotResponse`.
|
|
"a_key": "", # Properties of the object. Contains field @type with type URL.
|
|
},
|
|
"name": "A String", # The server-assigned name, which is only unique within the same service that
|
|
# originally returns it. If you use the default HTTP mapping, the
|
|
# `name` should be a resource name ending with `operations/{unique_id}`.
|
|
}</pre>
|
|
</div>
|
|
|
|
<div class="method">
|
|
<code class="details" id="get">get(name, x__xgafv=None)</code>
|
|
<pre>Gets information about a model, including its name, the description (if
|
|
set), and the default version (if at least one version of the model has
|
|
been deployed).
|
|
|
|
Args:
|
|
name: string, Required. The name of the model. (required)
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Represents a machine learning solution.
|
|
#
|
|
# A model can have multiple versions, each of which is a deployed, trained
|
|
# model ready to receive prediction requests. The model itself is just a
|
|
# container.
|
|
"description": "A String", # Optional. The description specified for the model when it was created.
|
|
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
|
|
# streams to Stackdriver Logging. These can be more verbose than the standard
|
|
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
|
|
# However, they are helpful for debugging. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high QPS. Estimate your
|
|
# costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your models.
|
|
# Each label is a key-value pair, where both the key and the value are
|
|
# arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"regions": [ # Optional. The list of regions where the model is going to be deployed.
|
|
# Currently only one region per model is supported.
|
|
# Defaults to 'us-central1' if nothing is set.
|
|
# See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
|
|
# for AI Platform services.
|
|
# Note:
|
|
# * No matter where a model is deployed, it can always be accessed by
|
|
# users from anywhere, both for online and batch prediction.
|
|
# * The region for a batch prediction job is set by the region field when
|
|
# submitting the batch prediction job and does not take its value from
|
|
# this field.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetModel`, and
|
|
# systems are expected to put that etag in the request to `UpdateModel` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
|
|
# handle prediction requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
#
|
|
# Each version is a trained model deployed in the cloud, ready to handle
|
|
# prediction requests. A model can have multiple versions. You can get
|
|
# information about all of the versions of a given model by calling
|
|
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
|
|
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your model
|
|
# versions. Each label is a key-value pair, where both the key and the value
|
|
# are arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
|
|
# applies to online prediction service.
|
|
# <dl>
|
|
# <dt>mls1-c1-m2</dt>
|
|
# <dd>
|
|
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
|
|
# name for this machine type is "mls1-highmem-1".
|
|
# </dd>
|
|
# <dt>mls1-c4-m2</dt>
|
|
# <dd>
|
|
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
|
|
# deprecated name for this machine type is "mls1-highcpu-4".
|
|
# </dd>
|
|
# </dl>
|
|
"description": "A String", # Optional. The description specified for the version when it was created.
|
|
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
|
|
# If not set, AI Platform uses the default stable version, 1.0. For more
|
|
# information, see the
|
|
# [runtime version list](/ml-engine/docs/runtime-version-list) and
|
|
# [how to manage runtime versions](/ml-engine/docs/versioning).
|
|
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
|
|
# model. You should generally use `auto_scaling` with an appropriate
|
|
# `min_nodes` instead, but this option is available if you want more
|
|
# predictable billing. Beware that latency and error rates will increase
|
|
# if the traffic exceeds that capability of the system to serve it based
|
|
# on the selected number of nodes.
|
|
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
|
|
# starting from the time the model is deployed, so the cost of operating
|
|
# this model will be proportional to `nodes` * number of hours since
|
|
# last billing cycle plus the cost for each prediction performed.
|
|
},
|
|
"predictionClass": "A String", # Optional. The fully qualified name
|
|
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
|
|
# the Predictor interface described in this reference field. The module
|
|
# containing this class should be included in a package provided to the
|
|
# [`packageUris` field](#Version.FIELDS.package_uris).
|
|
#
|
|
# Specify this field if and only if you are deploying a [custom prediction
|
|
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
# If you specify this field, you must set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
#
|
|
# The following code sample provides the Predictor interface:
|
|
#
|
|
# ```py
|
|
# class Predictor(object):
|
|
# """Interface for constructing custom predictors."""
|
|
#
|
|
# def predict(self, instances, **kwargs):
|
|
# """Performs custom prediction.
|
|
#
|
|
# Instances are the decoded values from the request. They have already
|
|
# been deserialized from JSON.
|
|
#
|
|
# Args:
|
|
# instances: A list of prediction input instances.
|
|
# **kwargs: A dictionary of keyword args provided as additional
|
|
# fields on the predict request body.
|
|
#
|
|
# Returns:
|
|
# A list of outputs containing the prediction results. This list must
|
|
# be JSON serializable.
|
|
# """
|
|
# raise NotImplementedError()
|
|
#
|
|
# @classmethod
|
|
# def from_path(cls, model_dir):
|
|
# """Creates an instance of Predictor using the given path.
|
|
#
|
|
# Loading of the predictor should be done in this method.
|
|
#
|
|
# Args:
|
|
# model_dir: The local directory that contains the exported model
|
|
# file along with any additional files uploaded when creating the
|
|
# version resource.
|
|
#
|
|
# Returns:
|
|
# An instance implementing this Predictor class.
|
|
# """
|
|
# raise NotImplementedError()
|
|
# ```
|
|
#
|
|
# Learn more about [the Predictor interface and custom prediction
|
|
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
|
|
# response to increases and decreases in traffic. Care should be
|
|
# taken to ramp up traffic according to the model's ability to scale
|
|
# or you will start seeing increases in latency and 429 response codes.
|
|
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
|
|
# nodes are always up, starting from the time the model is deployed.
|
|
# Therefore, the cost of operating this model will be at least
|
|
# `rate` * `min_nodes` * number of hours since last billing cycle,
|
|
# where `rate` is the cost per node-hour as documented in the
|
|
# [pricing guide](/ml-engine/docs/pricing),
|
|
# even if no predictions are performed. There is additional cost for each
|
|
# prediction performed.
|
|
#
|
|
# Unlike manual scaling, if the load gets too heavy for the nodes
|
|
# that are up, the service will automatically add nodes to handle the
|
|
# increased load as well as scale back as traffic drops, always maintaining
|
|
# at least `min_nodes`. You will be charged for the time in which additional
|
|
# nodes are used.
|
|
#
|
|
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
|
|
# to a model stops (and after a cool-down period), nodes will be shut down
|
|
# and no charges will be incurred until traffic to the model resumes.
|
|
#
|
|
# You can set `min_nodes` when creating the model version, and you can also
|
|
# update `min_nodes` for an existing version:
|
|
# <pre>
|
|
# update_body.json:
|
|
# {
|
|
# 'autoScaling': {
|
|
# 'minNodes': 5
|
|
# }
|
|
# }
|
|
# </pre>
|
|
# HTTP request:
|
|
# <pre>
|
|
# PATCH
|
|
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
|
|
# -d @./update_body.json
|
|
# </pre>
|
|
},
|
|
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
|
|
"state": "A String", # Output only. The state of a version.
|
|
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
|
|
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
|
|
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
|
|
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
|
|
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
|
|
# `XGBOOST`. If you do not specify a framework, AI Platform
|
|
# will analyze files in the deployment_uri to determine a framework. If you
|
|
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
|
|
# of the model to 1.4 or greater.
|
|
#
|
|
# Do **not** specify a framework if you're deploying a [custom
|
|
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
|
|
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
|
|
# or [scikit-learn pipelines with custom
|
|
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
|
|
#
|
|
# For a custom prediction routine, one of these packages must contain your
|
|
# Predictor class (see
|
|
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
|
|
# include any dependencies used by your Predictor or scikit-learn pipeline
|
|
# uses that are not already included in your selected [runtime
|
|
# version](/ml-engine/docs/tensorflow/runtime-version-list).
|
|
#
|
|
# If you specify this field, you must also set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetVersion`, and
|
|
# systems are expected to put that etag in the request to `UpdateVersion` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
|
|
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
|
|
# create the version. See the
|
|
# [guide to model
|
|
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
|
|
# information.
|
|
#
|
|
# When passing Version to
|
|
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
|
|
# the model service uses the specified location as the source of the model.
|
|
# Once deployed, the model version is hosted by the prediction service, so
|
|
# this location is useful only as a historical record.
|
|
# The total number of model files can't exceed 1000.
|
|
"createTime": "A String", # Output only. The time the version was created.
|
|
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
|
|
# requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
"name": "A String", # Required.The name specified for the version when it was created.
|
|
#
|
|
# The version name must be unique within the model it is created in.
|
|
},
|
|
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
|
|
# Logging. These logs are like standard server access logs, containing
|
|
# information like timestamp and latency for each request. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high queries per second rate
|
|
# (QPS). Estimate your costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"name": "A String", # Required. The name specified for the model when it was created.
|
|
#
|
|
# The model name must be unique within the project it is created in.
|
|
}</pre>
|
|
</div>
|
|
|
|
<div class="method">
|
|
<code class="details" id="getIamPolicy">getIamPolicy(resource, x__xgafv=None)</code>
|
|
<pre>Gets the access control policy for a resource.
|
|
Returns an empty policy if the resource exists and does not have a policy
|
|
set.
|
|
|
|
Args:
|
|
resource: string, REQUIRED: The resource for which the policy is being requested.
|
|
See the operation documentation for the appropriate value for this field. (required)
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Defines an Identity and Access Management (IAM) policy. It is used to
|
|
# specify access control policies for Cloud Platform resources.
|
|
#
|
|
#
|
|
# A `Policy` consists of a list of `bindings`. A `binding` binds a list of
|
|
# `members` to a `role`, where the members can be user accounts, Google groups,
|
|
# Google domains, and service accounts. A `role` is a named list of permissions
|
|
# defined by IAM.
|
|
#
|
|
# **JSON Example**
|
|
#
|
|
# {
|
|
# "bindings": [
|
|
# {
|
|
# "role": "roles/owner",
|
|
# "members": [
|
|
# "user:mike@example.com",
|
|
# "group:admins@example.com",
|
|
# "domain:google.com",
|
|
# "serviceAccount:my-other-app@appspot.gserviceaccount.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "role": "roles/viewer",
|
|
# "members": ["user:sean@example.com"]
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# **YAML Example**
|
|
#
|
|
# bindings:
|
|
# - members:
|
|
# - user:mike@example.com
|
|
# - group:admins@example.com
|
|
# - domain:google.com
|
|
# - serviceAccount:my-other-app@appspot.gserviceaccount.com
|
|
# role: roles/owner
|
|
# - members:
|
|
# - user:sean@example.com
|
|
# role: roles/viewer
|
|
#
|
|
#
|
|
# For a description of IAM and its features, see the
|
|
# [IAM developer's guide](https://cloud.google.com/iam/docs).
|
|
"bindings": [ # Associates a list of `members` to a `role`.
|
|
# `bindings` with no members will result in an error.
|
|
{ # Associates `members` with a `role`.
|
|
"role": "A String", # Role that is assigned to `members`.
|
|
# For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
|
|
"members": [ # Specifies the identities requesting access for a Cloud Platform resource.
|
|
# `members` can have the following values:
|
|
#
|
|
# * `allUsers`: A special identifier that represents anyone who is
|
|
# on the internet; with or without a Google account.
|
|
#
|
|
# * `allAuthenticatedUsers`: A special identifier that represents anyone
|
|
# who is authenticated with a Google account or a service account.
|
|
#
|
|
# * `user:{emailid}`: An email address that represents a specific Google
|
|
# account. For example, `alice@gmail.com` .
|
|
#
|
|
#
|
|
# * `serviceAccount:{emailid}`: An email address that represents a service
|
|
# account. For example, `my-other-app@appspot.gserviceaccount.com`.
|
|
#
|
|
# * `group:{emailid}`: An email address that represents a Google group.
|
|
# For example, `admins@example.com`.
|
|
#
|
|
#
|
|
# * `domain:{domain}`: The G Suite domain (primary) that represents all the
|
|
# users of that domain. For example, `google.com` or `example.com`.
|
|
#
|
|
"A String",
|
|
],
|
|
"condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
|
|
# NOTE: An unsatisfied condition will not allow user access via current
|
|
# binding. Different bindings, including their conditions, are examined
|
|
# independently.
|
|
#
|
|
# title: "User account presence"
|
|
# description: "Determines whether the request has a user account"
|
|
# expression: "size(request.user) > 0"
|
|
"description": "A String", # An optional description of the expression. This is a longer text which
|
|
# describes the expression, e.g. when hovered over it in a UI.
|
|
"expression": "A String", # Textual representation of an expression in
|
|
# Common Expression Language syntax.
|
|
#
|
|
# The application context of the containing message determines which
|
|
# well-known feature set of CEL is supported.
|
|
"location": "A String", # An optional string indicating the location of the expression for error
|
|
# reporting, e.g. a file name and a position in the file.
|
|
"title": "A String", # An optional title for the expression, i.e. a short string describing
|
|
# its purpose. This can be used e.g. in UIs which allow to enter the
|
|
# expression.
|
|
},
|
|
},
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a policy from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform policy updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `getIamPolicy`, and
|
|
# systems are expected to put that etag in the request to `setIamPolicy` to
|
|
# ensure that their change will be applied to the same version of the policy.
|
|
#
|
|
# If no `etag` is provided in the call to `setIamPolicy`, then the existing
|
|
# policy is overwritten blindly.
|
|
"version": 42, # Deprecated.
|
|
"auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
|
|
{ # Specifies the audit configuration for a service.
|
|
# The configuration determines which permission types are logged, and what
|
|
# identities, if any, are exempted from logging.
|
|
# An AuditConfig must have one or more AuditLogConfigs.
|
|
#
|
|
# If there are AuditConfigs for both `allServices` and a specific service,
|
|
# the union of the two AuditConfigs is used for that service: the log_types
|
|
# specified in each AuditConfig are enabled, and the exempted_members in each
|
|
# AuditLogConfig are exempted.
|
|
#
|
|
# Example Policy with multiple AuditConfigs:
|
|
#
|
|
# {
|
|
# "audit_configs": [
|
|
# {
|
|
# "service": "allServices"
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# "exempted_members": [
|
|
# "user:foo@gmail.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# },
|
|
# {
|
|
# "log_type": "ADMIN_READ",
|
|
# }
|
|
# ]
|
|
# },
|
|
# {
|
|
# "service": "fooservice.googleapis.com"
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# "exempted_members": [
|
|
# "user:bar@gmail.com"
|
|
# ]
|
|
# }
|
|
# ]
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
|
|
# logging. It also exempts foo@gmail.com from DATA_READ logging, and
|
|
# bar@gmail.com from DATA_WRITE logging.
|
|
"auditLogConfigs": [ # The configuration for logging of each type of permission.
|
|
{ # Provides the configuration for logging a type of permissions.
|
|
# Example:
|
|
#
|
|
# {
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# "exempted_members": [
|
|
# "user:foo@gmail.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
|
|
# foo@gmail.com from DATA_READ logging.
|
|
"exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
|
|
# permission.
|
|
# Follows the same format of Binding.members.
|
|
"A String",
|
|
],
|
|
"logType": "A String", # The log type that this config enables.
|
|
},
|
|
],
|
|
"service": "A String", # Specifies a service that will be enabled for audit logging.
|
|
# For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
|
|
# `allServices` is a special value that covers all services.
|
|
},
|
|
],
|
|
}</pre>
</div>
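<p>A minimal usage sketch of the etag read-modify-write cycle described above, using the Python client library (<code>google-api-python-client</code>) with Application Default Credentials. The project and model names are placeholders, not values taken from this reference.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource = 'projects/my-project/models/my_model'  # placeholder resource name

# Read the current policy and keep its `etag`, so that a later setIamPolicy
# call is applied against the same version of the policy.
policy = ml.projects().models().getIamPolicy(resource=resource).execute()
current_etag = policy.get('etag')
</pre>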
<div class="method">
|
|
<code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
|
|
<pre>Lists the models in a project.
|
|
|
|
Each project can contain multiple models, and each model can have multiple
|
|
versions.
|
|
|
|
If there are no models that match the request parameters, the list request
|
|
returns an empty response body: {}.
|
|
|
|
Args:
|
|
parent: string, Required. The name of the project whose models are to be listed. (required)
|
|
pageToken: string, Optional. A page token to request the next page of results.
|
|
|
|
You get the token from the `next_page_token` field of the response from
|
|
the previous call.
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there
|
|
are more remaining results than this number, the response message will
|
|
contain a valid value in the `next_page_token` field.
|
|
|
|
The default value is 20, and the maximum page size is 100.
|
|
filter: string, Optional. Specifies the subset of models to retrieve.
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Response message for the ListModels method.
|
|
"nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
|
|
# subsequent call.
|
|
"models": [ # The list of models.
|
|
{ # Represents a machine learning solution.
|
|
#
|
|
# A model can have multiple versions, each of which is a deployed, trained
|
|
# model ready to receive prediction requests. The model itself is just a
|
|
# container.
|
|
"description": "A String", # Optional. The description specified for the model when it was created.
|
|
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
|
|
# streams to Stackdriver Logging. These can be more verbose than the standard
|
|
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
|
|
# However, they are helpful for debugging. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high QPS. Estimate your
|
|
# costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your models.
|
|
# Each label is a key-value pair, where both the key and the value are
|
|
# arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"regions": [ # Optional. The list of regions where the model is going to be deployed.
|
|
# Currently only one region per model is supported.
|
|
# Defaults to 'us-central1' if nothing is set.
|
|
# See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
|
|
# for AI Platform services.
|
|
# Note:
|
|
# * No matter where a model is deployed, it can always be accessed by
|
|
# users from anywhere, both for online and batch prediction.
|
|
# * The region for a batch prediction job is set by the region field when
|
|
# submitting the batch prediction job and does not take its value from
|
|
# this field.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetModel`, and
|
|
# systems are expected to put that etag in the request to `UpdateModel` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
|
|
# handle prediction requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
#
|
|
# Each version is a trained model deployed in the cloud, ready to handle
|
|
# prediction requests. A model can have multiple versions. You can get
|
|
# information about all of the versions of a given model by calling
|
|
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
|
|
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your model
|
|
# versions. Each label is a key-value pair, where both the key and the value
|
|
# are arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
|
|
# applies to online prediction service.
|
|
# <dl>
|
|
# <dt>mls1-c1-m2</dt>
|
|
# <dd>
|
|
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
|
|
# name for this machine type is "mls1-highmem-1".
|
|
# </dd>
|
|
# <dt>mls1-c4-m2</dt>
|
|
# <dd>
|
|
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
|
|
# deprecated name for this machine type is "mls1-highcpu-4".
|
|
# </dd>
|
|
# </dl>
|
|
"description": "A String", # Optional. The description specified for the version when it was created.
|
|
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
|
|
# If not set, AI Platform uses the default stable version, 1.0. For more
|
|
# information, see the
|
|
# [runtime version list](/ml-engine/docs/runtime-version-list) and
|
|
# [how to manage runtime versions](/ml-engine/docs/versioning).
|
|
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
|
|
# model. You should generally use `auto_scaling` with an appropriate
|
|
# `min_nodes` instead, but this option is available if you want more
|
|
# predictable billing. Beware that latency and error rates will increase
|
|
# if the traffic exceeds the capability of the system to serve it based
|
|
# on the selected number of nodes.
|
|
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
|
|
# starting from the time the model is deployed, so the cost of operating
|
|
# this model will be proportional to `nodes` * number of hours since
|
|
# last billing cycle plus the cost for each prediction performed.
|
|
},
|
|
"predictionClass": "A String", # Optional. The fully qualified name
|
|
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
|
|
# the Predictor interface described in this reference field. The module
|
|
# containing this class should be included in a package provided to the
|
|
# [`packageUris` field](#Version.FIELDS.package_uris).
|
|
#
|
|
# Specify this field if and only if you are deploying a [custom prediction
|
|
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
# If you specify this field, you must set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
#
|
|
# The following code sample provides the Predictor interface:
|
|
#
|
|
# ```py
|
|
# class Predictor(object):
|
|
# """Interface for constructing custom predictors."""
|
|
#
|
|
# def predict(self, instances, **kwargs):
|
|
# """Performs custom prediction.
|
|
#
|
|
# Instances are the decoded values from the request. They have already
|
|
# been deserialized from JSON.
|
|
#
|
|
# Args:
|
|
# instances: A list of prediction input instances.
|
|
# **kwargs: A dictionary of keyword args provided as additional
|
|
# fields on the predict request body.
|
|
#
|
|
# Returns:
|
|
# A list of outputs containing the prediction results. This list must
|
|
# be JSON serializable.
|
|
# """
|
|
# raise NotImplementedError()
|
|
#
|
|
# @classmethod
|
|
# def from_path(cls, model_dir):
|
|
# """Creates an instance of Predictor using the given path.
|
|
#
|
|
# Loading of the predictor should be done in this method.
|
|
#
|
|
# Args:
|
|
# model_dir: The local directory that contains the exported model
|
|
# file along with any additional files uploaded when creating the
|
|
# version resource.
|
|
#
|
|
# Returns:
|
|
# An instance implementing this Predictor class.
|
|
# """
|
|
# raise NotImplementedError()
|
|
# ```
|
|
#
|
|
# Learn more about [the Predictor interface and custom prediction
|
|
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
|
|
# response to increases and decreases in traffic. Care should be
|
|
# taken to ramp up traffic according to the model's ability to scale
|
|
# or you will start seeing increases in latency and 429 response codes.
|
|
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
|
|
# nodes are always up, starting from the time the model is deployed.
|
|
# Therefore, the cost of operating this model will be at least
|
|
# `rate` * `min_nodes` * number of hours since last billing cycle,
|
|
# where `rate` is the cost per node-hour as documented in the
|
|
# [pricing guide](/ml-engine/docs/pricing),
|
|
# even if no predictions are performed. There is additional cost for each
|
|
# prediction performed.
|
|
#
|
|
# Unlike manual scaling, if the load gets too heavy for the nodes
|
|
# that are up, the service will automatically add nodes to handle the
|
|
# increased load as well as scale back as traffic drops, always maintaining
|
|
# at least `min_nodes`. You will be charged for the time in which additional
|
|
# nodes are used.
|
|
#
|
|
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
|
|
# to a model stops (and after a cool-down period), nodes will be shut down
|
|
# and no charges will be incurred until traffic to the model resumes.
|
|
#
|
|
# You can set `min_nodes` when creating the model version, and you can also
|
|
# update `min_nodes` for an existing version:
|
|
# <pre>
|
|
# update_body.json:
|
|
# {
|
|
# 'autoScaling': {
|
|
# 'minNodes': 5
|
|
# }
|
|
# }
|
|
# </pre>
|
|
# HTTP request:
|
|
# <pre>
|
|
# PATCH
|
|
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
|
|
# -d @./update_body.json
|
|
# </pre>
|
|
},
|
|
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
|
|
"state": "A String", # Output only. The state of a version.
|
|
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
|
|
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
|
|
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
|
|
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
|
|
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
|
|
# `XGBOOST`. If you do not specify a framework, AI Platform
|
|
# will analyze files in the deployment_uri to determine a framework. If you
|
|
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
|
|
# of the model to 1.4 or greater.
|
|
#
|
|
# Do **not** specify a framework if you're deploying a [custom
|
|
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
|
|
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
|
|
# or [scikit-learn pipelines with custom
|
|
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
|
|
#
|
|
# For a custom prediction routine, one of these packages must contain your
|
|
# Predictor class (see
|
|
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
|
|
# include any dependencies that your Predictor or scikit-learn pipeline
|
|
# uses that are not already included in your selected [runtime
|
|
# version](/ml-engine/docs/tensorflow/runtime-version-list).
|
|
#
|
|
# If you specify this field, you must also set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetVersion`, and
|
|
# systems are expected to put that etag in the request to `UpdateVersion` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
|
|
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
|
|
# create the version. See the
|
|
# [guide to model
|
|
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
|
|
# information.
|
|
#
|
|
# When passing Version to
|
|
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
|
|
# the model service uses the specified location as the source of the model.
|
|
# Once deployed, the model version is hosted by the prediction service, so
|
|
# this location is useful only as a historical record.
|
|
# The total number of model files can't exceed 1000.
|
|
"createTime": "A String", # Output only. The time the version was created.
|
|
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
|
|
# requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
"name": "A String", # Required.The name specified for the version when it was created.
|
|
#
|
|
# The version name must be unique within the model it is created in.
|
|
},
|
|
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
|
|
# Logging. These logs are like standard server access logs, containing
|
|
# information like timestamp and latency for each request. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high queries per second rate
|
|
# (QPS). Estimate your costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"name": "A String", # Required. The name specified for the model when it was created.
|
|
#
|
|
# The model name must be unique within the project it is created in.
|
|
},
|
|
],
|
|
}</pre>
</div>
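<p>A short sketch of calling <code>list</code> with the Python client; the project ID is a placeholder and Application Default Credentials are assumed.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
# Request up to 50 models per page (the documented maximum page size is 100).
response = ml.projects().models().list(
    parent='projects/my-project', pageSize=50).execute()
for model in response.get('models', []):  # an empty response body has no 'models' key
    print(model['name'])
</pre>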
<div class="method">
|
|
<code class="details" id="list_next">list_next(previous_request, previous_response)</code>
|
|
<pre>Retrieves the next page of results.
|
|
|
|
Args:
|
|
previous_request: The request for the previous page. (required)
|
|
previous_response: The response from the request for the previous page. (required)
|
|
|
|
Returns:
|
|
A request object that you can call 'execute()' on to request the next
|
|
page. Returns None if there are no more items in the collection.
|
|
</pre>
</div>
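<p>A sketch of paging through every model with <code>list</code> and <code>list_next</code>; the project ID is a placeholder.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
models = ml.projects().models()

request = models.list(parent='projects/my-project', pageSize=20)
while request is not None:
    response = request.execute()
    for model in response.get('models', []):
        print(model['name'])
    # list_next returns None once there are no more pages.
    request = models.list_next(previous_request=request,
                               previous_response=response)
</pre>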
<div class="method">
|
|
<code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code>
|
|
<pre>Updates a specific model resource.
|
|
|
|
Currently the only supported fields to update are `description` and
|
|
`default_version.name`.
|
|
|
|
Args:
|
|
name: string, Required. The name of the model. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # Represents a machine learning solution.
|
|
#
|
|
# A model can have multiple versions, each of which is a deployed, trained
|
|
# model ready to receive prediction requests. The model itself is just a
|
|
# container.
|
|
"description": "A String", # Optional. The description specified for the model when it was created.
|
|
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
|
|
# streams to Stackdriver Logging. These can be more verbose than the standard
|
|
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
|
|
# However, they are helpful for debugging. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high QPS. Estimate your
|
|
# costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your models.
|
|
# Each label is a key-value pair, where both the key and the value are
|
|
# arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"regions": [ # Optional. The list of regions where the model is going to be deployed.
|
|
# Currently only one region per model is supported.
|
|
# Defaults to 'us-central1' if nothing is set.
|
|
# See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
|
|
# for AI Platform services.
|
|
# Note:
|
|
# * No matter where a model is deployed, it can always be accessed by
|
|
# users from anywhere, both for online and batch prediction.
|
|
# * The region for a batch prediction job is set by the region field when
|
|
# submitting the batch prediction job and does not take its value from
|
|
# this field.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetModel`, and
|
|
# systems are expected to put that etag in the request to `UpdateModel` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
|
|
# handle prediction requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
#
|
|
# Each version is a trained model deployed in the cloud, ready to handle
|
|
# prediction requests. A model can have multiple versions. You can get
|
|
# information about all of the versions of a given model by calling
|
|
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
|
|
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
|
|
"labels": { # Optional. One or more labels that you can add, to organize your model
|
|
# versions. Each label is a key-value pair, where both the key and the value
|
|
# are arbitrary strings that you supply.
|
|
# For more information, see the documentation on
|
|
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
|
|
"a_key": "A String",
|
|
},
|
|
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
|
|
# applies to online prediction service.
|
|
# <dl>
|
|
# <dt>mls1-c1-m2</dt>
|
|
# <dd>
|
|
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
|
|
# name for this machine type is "mls1-highmem-1".
|
|
# </dd>
|
|
# <dt>mls1-c4-m2</dt>
|
|
# <dd>
|
|
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
|
|
# deprecated name for this machine type is "mls1-highcpu-4".
|
|
# </dd>
|
|
# </dl>
|
|
"description": "A String", # Optional. The description specified for the version when it was created.
|
|
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
|
|
# If not set, AI Platform uses the default stable version, 1.0. For more
|
|
# information, see the
|
|
# [runtime version list](/ml-engine/docs/runtime-version-list) and
|
|
# [how to manage runtime versions](/ml-engine/docs/versioning).
|
|
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
|
|
# model. You should generally use `auto_scaling` with an appropriate
|
|
# `min_nodes` instead, but this option is available if you want more
|
|
# predictable billing. Beware that latency and error rates will increase
|
|
# if the traffic exceeds the capability of the system to serve it based
|
|
# on the selected number of nodes.
|
|
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
|
|
# starting from the time the model is deployed, so the cost of operating
|
|
# this model will be proportional to `nodes` * number of hours since
|
|
# last billing cycle plus the cost for each prediction performed.
|
|
},
|
|
"predictionClass": "A String", # Optional. The fully qualified name
|
|
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
|
|
# the Predictor interface described in this reference field. The module
|
|
# containing this class should be included in a package provided to the
|
|
# [`packageUris` field](#Version.FIELDS.package_uris).
|
|
#
|
|
# Specify this field if and only if you are deploying a [custom prediction
|
|
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
# If you specify this field, you must set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
#
|
|
# The following code sample provides the Predictor interface:
|
|
#
|
|
# ```py
|
|
# class Predictor(object):
|
|
# """Interface for constructing custom predictors."""
|
|
#
|
|
# def predict(self, instances, **kwargs):
|
|
# """Performs custom prediction.
|
|
#
|
|
# Instances are the decoded values from the request. They have already
|
|
# been deserialized from JSON.
|
|
#
|
|
# Args:
|
|
# instances: A list of prediction input instances.
|
|
# **kwargs: A dictionary of keyword args provided as additional
|
|
# fields on the predict request body.
|
|
#
|
|
# Returns:
|
|
# A list of outputs containing the prediction results. This list must
|
|
# be JSON serializable.
|
|
# """
|
|
# raise NotImplementedError()
|
|
#
|
|
# @classmethod
|
|
# def from_path(cls, model_dir):
|
|
# """Creates an instance of Predictor using the given path.
|
|
#
|
|
# Loading of the predictor should be done in this method.
|
|
#
|
|
# Args:
|
|
# model_dir: The local directory that contains the exported model
|
|
# file along with any additional files uploaded when creating the
|
|
# version resource.
|
|
#
|
|
# Returns:
|
|
# An instance implementing this Predictor class.
|
|
# """
|
|
# raise NotImplementedError()
|
|
# ```
|
|
#
|
|
# Learn more about [the Predictor interface and custom prediction
|
|
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
|
|
# response to increases and decreases in traffic. Care should be
|
|
# taken to ramp up traffic according to the model's ability to scale
|
|
# or you will start seeing increases in latency and 429 response codes.
|
|
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
|
|
# nodes are always up, starting from the time the model is deployed.
|
|
# Therefore, the cost of operating this model will be at least
|
|
# `rate` * `min_nodes` * number of hours since last billing cycle,
|
|
# where `rate` is the cost per node-hour as documented in the
|
|
# [pricing guide](/ml-engine/docs/pricing),
|
|
# even if no predictions are performed. There is additional cost for each
|
|
# prediction performed.
|
|
#
|
|
# Unlike manual scaling, if the load gets too heavy for the nodes
|
|
# that are up, the service will automatically add nodes to handle the
|
|
# increased load as well as scale back as traffic drops, always maintaining
|
|
# at least `min_nodes`. You will be charged for the time in which additional
|
|
# nodes are used.
|
|
#
|
|
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
|
|
# to a model stops (and after a cool-down period), nodes will be shut down
|
|
# and no charges will be incurred until traffic to the model resumes.
|
|
#
|
|
# You can set `min_nodes` when creating the model version, and you can also
|
|
# update `min_nodes` for an existing version:
|
|
# <pre>
|
|
# update_body.json:
|
|
# {
|
|
# 'autoScaling': {
|
|
# 'minNodes': 5
|
|
# }
|
|
# }
|
|
# </pre>
|
|
# HTTP request:
|
|
# <pre>
|
|
# PATCH
|
|
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
|
|
# -d @./update_body.json
|
|
# </pre>
|
|
},
|
|
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
|
|
"state": "A String", # Output only. The state of a version.
|
|
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
|
|
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
|
|
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
|
|
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
|
|
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
|
|
# `XGBOOST`. If you do not specify a framework, AI Platform
|
|
# will analyze files in the deployment_uri to determine a framework. If you
|
|
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
|
|
# of the model to 1.4 or greater.
|
|
#
|
|
# Do **not** specify a framework if you're deploying a [custom
|
|
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
|
|
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
|
|
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
|
|
# or [scikit-learn pipelines with custom
|
|
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
|
|
#
|
|
# For a custom prediction routine, one of these packages must contain your
|
|
# Predictor class (see
|
|
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
|
|
# include any dependencies that your Predictor or scikit-learn pipeline
|
|
# uses that are not already included in your selected [runtime
|
|
# version](/ml-engine/docs/tensorflow/runtime-version-list).
|
|
#
|
|
# If you specify this field, you must also set
|
|
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
|
|
"A String",
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a model from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform model updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `GetVersion`, and
|
|
# systems are expected to put that etag in the request to `UpdateVersion` to
|
|
# ensure that their change will be applied to the model as intended.
|
|
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
|
|
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
|
|
# create the version. See the
|
|
# [guide to model
|
|
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
|
|
# information.
|
|
#
|
|
# When passing Version to
|
|
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
|
|
# the model service uses the specified location as the source of the model.
|
|
# Once deployed, the model version is hosted by the prediction service, so
|
|
# this location is useful only as a historical record.
|
|
# The total number of model files can't exceed 1000.
|
|
"createTime": "A String", # Output only. The time the version was created.
|
|
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
|
|
# requests that do not specify a version.
|
|
#
|
|
# You can change the default version by calling
|
|
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
|
|
"name": "A String", # Required.The name specified for the version when it was created.
|
|
#
|
|
# The version name must be unique within the model it is created in.
|
|
},
|
|
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
|
|
# Logging. These logs are like standard server access logs, containing
|
|
# information like timestamp and latency for each request. Note that
|
|
# [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
|
|
# your project receives prediction requests at a high queries per second rate
|
|
# (QPS). Estimate your costs before enabling this option.
|
|
#
|
|
# Default is false.
|
|
"name": "A String", # Required. The name specified for the model when it was created.
|
|
#
|
|
# The model name must be unique within the project it is created in.
|
|
}
|
|
|
|
updateMask: string, Required. Specifies the path, relative to `Model`, of the field to update.
|
|
|
|
For example, to change the description of a model to "foo" and set its
|
|
default version to "version_1", the `update_mask` parameter would be
|
|
specified as `description`, `default_version.name`, and the `PATCH`
|
|
request body would specify the new value, as follows:
|
|
{
|
|
"description": "foo",
|
|
"defaultVersion": {
|
|
"name":"version_1"
|
|
}
|
|
}
|
|
|
|
Currently the supported update masks are `description` and
|
|
`default_version.name`.
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # This resource represents a long-running operation that is the result of a
|
|
# network API call.
|
|
"metadata": { # Service-specific metadata associated with the operation. It typically
|
|
# contains progress information and common metadata such as create time.
|
|
# Some services might not provide such metadata. Any method that returns a
|
|
# long-running operation should document the metadata type, if any.
|
|
"a_key": "", # Properties of the object. Contains field @type with type URL.
|
|
},
|
|
"error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
|
|
# different programming environments, including REST APIs and RPC APIs. It is
|
|
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
|
|
# three pieces of data: error code, error message, and error details.
|
|
#
|
|
# You can find out more about this error model and how to work with it in the
|
|
# [API Design Guide](https://cloud.google.com/apis/design/errors).
|
|
"message": "A String", # A developer-facing error message, which should be in English. Any
|
|
# user-facing error message should be localized and sent in the
|
|
# google.rpc.Status.details field, or localized by the client.
|
|
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
|
|
"details": [ # A list of messages that carry the error details. There is a common set of
|
|
# message types for APIs to use.
|
|
{
|
|
"a_key": "", # Properties of the object. Contains field @type with type URL.
|
|
},
|
|
],
|
|
},
|
|
"done": True or False, # If the value is `false`, it means the operation is still in progress.
|
|
# If `true`, the operation is completed, and either `error` or `response` is
|
|
# available.
|
|
"response": { # The normal response of the operation in case of success. If the original
|
|
# method returns no data on success, such as `Delete`, the response is
|
|
# `google.protobuf.Empty`. If the original method is standard
|
|
# `Get`/`Create`/`Update`, the response should be the resource. For other
|
|
# methods, the response should have the type `XxxResponse`, where `Xxx`
|
|
# is the original method name. For example, if the original method name
|
|
# is `TakeSnapshot()`, the inferred response type is
|
|
# `TakeSnapshotResponse`.
|
|
"a_key": "", # Properties of the object. Contains field @type with type URL.
|
|
},
|
|
"name": "A String", # The server-assigned name, which is only unique within the same service that
|
|
# originally returns it. If you use the default HTTP mapping, the
|
|
# `name` should be a resource name ending with `operations/{unique_id}`.
|
|
}</pre>
</div>
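<p>A sketch of the description/default-version update shown above, using the Python client. The model name is a placeholder, and the update-mask paths are passed as a single comma-separated string.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
body = {
    'description': 'foo',
    'defaultVersion': {'name': 'version_1'},
}
operation = ml.projects().models().patch(
    name='projects/my-project/models/my_model',  # placeholder model name
    body=body,
    updateMask='description,default_version.name').execute()
# patch returns a long-running Operation; operation.get('done') reports whether
# it has finished, and its `error`/`response` fields carry the outcome.
</pre>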
<div class="method">
|
|
<code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code>
|
|
<pre>Sets the access control policy on the specified resource. Replaces any
|
|
existing policy.
|
|
|
|
Args:
|
|
resource: string, REQUIRED: The resource for which the policy is being specified.
|
|
See the operation documentation for the appropriate value for this field. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # Request message for `SetIamPolicy` method.
|
|
"policy": { # Defines an Identity and Access Management (IAM) policy. It is used to # REQUIRED: The complete policy to be applied to the `resource`. The size of
|
|
# the policy is limited to a few 10s of KB. An empty policy is a
|
|
# valid policy but certain Cloud Platform services (such as Projects)
|
|
# might reject them.
|
|
# specify access control policies for Cloud Platform resources.
|
|
#
|
|
#
|
|
# A `Policy` consists of a list of `bindings`. A `binding` binds a list of
|
|
# `members` to a `role`, where the members can be user accounts, Google groups,
|
|
# Google domains, and service accounts. A `role` is a named list of permissions
|
|
# defined by IAM.
|
|
#
|
|
# **JSON Example**
|
|
#
|
|
# {
|
|
# "bindings": [
|
|
# {
|
|
# "role": "roles/owner",
|
|
# "members": [
|
|
# "user:mike@example.com",
|
|
# "group:admins@example.com",
|
|
# "domain:google.com",
|
|
# "serviceAccount:my-other-app@appspot.gserviceaccount.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "role": "roles/viewer",
|
|
# "members": ["user:sean@example.com"]
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# **YAML Example**
|
|
#
|
|
# bindings:
|
|
# - members:
|
|
# - user:mike@example.com
|
|
# - group:admins@example.com
|
|
# - domain:google.com
|
|
# - serviceAccount:my-other-app@appspot.gserviceaccount.com
|
|
# role: roles/owner
|
|
# - members:
|
|
# - user:sean@example.com
|
|
# role: roles/viewer
|
|
#
|
|
#
|
|
# For a description of IAM and its features, see the
|
|
# [IAM developer's guide](https://cloud.google.com/iam/docs).
|
|
"bindings": [ # Associates a list of `members` to a `role`.
|
|
# `bindings` with no members will result in an error.
|
|
{ # Associates `members` with a `role`.
|
|
"role": "A String", # Role that is assigned to `members`.
|
|
# For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
|
|
"members": [ # Specifies the identities requesting access for a Cloud Platform resource.
|
|
# `members` can have the following values:
|
|
#
|
|
# * `allUsers`: A special identifier that represents anyone who is
|
|
# on the internet; with or without a Google account.
|
|
#
|
|
# * `allAuthenticatedUsers`: A special identifier that represents anyone
|
|
# who is authenticated with a Google account or a service account.
|
|
#
|
|
# * `user:{emailid}`: An email address that represents a specific Google
|
|
# account. For example, `alice@gmail.com` .
|
|
#
|
|
#
|
|
# * `serviceAccount:{emailid}`: An email address that represents a service
|
|
# account. For example, `my-other-app@appspot.gserviceaccount.com`.
|
|
#
|
|
# * `group:{emailid}`: An email address that represents a Google group.
|
|
# For example, `admins@example.com`.
|
|
#
|
|
#
|
|
# * `domain:{domain}`: The G Suite domain (primary) that represents all the
|
|
# users of that domain. For example, `google.com` or `example.com`.
|
|
#
|
|
"A String",
|
|
],
|
|
"condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
|
|
# NOTE: An unsatisfied condition will not allow user access via current
|
|
# binding. Different bindings, including their conditions, are examined
|
|
# independently.
|
|
#
|
|
# title: "User account presence"
|
|
# description: "Determines whether the request has a user account"
|
|
# expression: "size(request.user) > 0"
|
|
"description": "A String", # An optional description of the expression. This is a longer text which
|
|
# describes the expression, e.g. when hovered over it in a UI.
|
|
"expression": "A String", # Textual representation of an expression in
|
|
# Common Expression Language syntax.
|
|
#
|
|
# The application context of the containing message determines which
|
|
# well-known feature set of CEL is supported.
|
|
"location": "A String", # An optional string indicating the location of the expression for error
|
|
# reporting, e.g. a file name and a position in the file.
|
|
"title": "A String", # An optional title for the expression, i.e. a short string describing
|
|
# its purpose. This can be used e.g. in UIs which allow entering the
|
|
# expression.
|
|
},
|
|
},
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a policy from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform policy updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `getIamPolicy`, and
|
|
# systems are expected to put that etag in the request to `setIamPolicy` to
|
|
# ensure that their change will be applied to the same version of the policy.
|
|
#
|
|
# If no `etag` is provided in the call to `setIamPolicy`, then the existing
|
|
# policy is overwritten blindly.
|
|
"version": 42, # Deprecated.
|
|
"auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
|
|
{ # Specifies the audit configuration for a service.
|
|
# The configuration determines which permission types are logged, and what
|
|
# identities, if any, are exempted from logging.
|
|
# An AuditConfig must have one or more AuditLogConfigs.
|
|
#
|
|
# If there are AuditConfigs for both `allServices` and a specific service,
|
|
# the union of the two AuditConfigs is used for that service: the log_types
|
|
# specified in each AuditConfig are enabled, and the exempted_members in each
|
|
# AuditLogConfig are exempted.
|
|
#
|
|
# Example Policy with multiple AuditConfigs:
|
|
#
|
|
# {
|
|
# "audit_configs": [
|
|
# {
|
|
# "service": "allServices"
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# "exempted_members": [
|
|
# "user:foo@gmail.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# },
|
|
# {
|
|
# "log_type": "ADMIN_READ",
|
|
# }
|
|
# ]
|
|
# },
|
|
# {
|
|
# "service": "fooservice.googleapis.com"
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# "exempted_members": [
|
|
# "user:bar@gmail.com"
|
|
# ]
|
|
# }
|
|
# ]
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
|
|
# logging. It also exempts foo@gmail.com from DATA_READ logging, and
|
|
# bar@gmail.com from DATA_WRITE logging.
|
|
"auditLogConfigs": [ # The configuration for logging of each type of permission.
|
|
{ # Provides the configuration for logging a type of permissions.
|
|
# Example:
|
|
#
|
|
# {
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# "exempted_members": [
|
|
# "user:foo@gmail.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
|
|
# foo@gmail.com from DATA_READ logging.
|
|
"exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
|
|
# permission.
|
|
# Follows the same format of Binding.members.
|
|
"A String",
|
|
],
|
|
"logType": "A String", # The log type that this config enables.
|
|
},
|
|
],
|
|
"service": "A String", # Specifies a service that will be enabled for audit logging.
|
|
# For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
|
|
# `allServices` is a special value that covers all services.
|
|
},
|
|
],
|
|
},
|
|
"updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
|
|
# the fields in the mask will be modified. If no mask is provided, the
|
|
# following default mask is used:
|
|
# paths: "bindings, etag"
|
|
# This field is only used by Cloud IAM.
|
|
}
|
|
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Defines an Identity and Access Management (IAM) policy. It is used to
|
|
# specify access control policies for Cloud Platform resources.
|
|
#
|
|
#
|
|
# A `Policy` consists of a list of `bindings`. A `binding` binds a list of
|
|
# `members` to a `role`, where the members can be user accounts, Google groups,
|
|
# Google domains, and service accounts. A `role` is a named list of permissions
|
|
# defined by IAM.
|
|
#
|
|
# **JSON Example**
|
|
#
|
|
# {
|
|
# "bindings": [
|
|
# {
|
|
# "role": "roles/owner",
|
|
# "members": [
|
|
# "user:mike@example.com",
|
|
# "group:admins@example.com",
|
|
# "domain:google.com",
|
|
# "serviceAccount:my-other-app@appspot.gserviceaccount.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "role": "roles/viewer",
|
|
# "members": ["user:sean@example.com"]
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# **YAML Example**
|
|
#
|
|
# bindings:
|
|
# - members:
|
|
# - user:mike@example.com
|
|
# - group:admins@example.com
|
|
# - domain:google.com
|
|
# - serviceAccount:my-other-app@appspot.gserviceaccount.com
|
|
# role: roles/owner
|
|
# - members:
|
|
# - user:sean@example.com
|
|
# role: roles/viewer
|
|
#
|
|
#
|
|
# For a description of IAM and its features, see the
|
|
# [IAM developer's guide](https://cloud.google.com/iam/docs).
|
|
"bindings": [ # Associates a list of `members` to a `role`.
|
|
# `bindings` with no members will result in an error.
|
|
{ # Associates `members` with a `role`.
|
|
"role": "A String", # Role that is assigned to `members`.
|
|
# For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
|
|
"members": [ # Specifies the identities requesting access for a Cloud Platform resource.
|
|
# `members` can have the following values:
|
|
#
|
|
# * `allUsers`: A special identifier that represents anyone who is
|
|
# on the internet; with or without a Google account.
|
|
#
|
|
# * `allAuthenticatedUsers`: A special identifier that represents anyone
|
|
# who is authenticated with a Google account or a service account.
|
|
#
|
|
# * `user:{emailid}`: An email address that represents a specific Google
|
|
# account. For example, `alice@gmail.com` .
|
|
#
|
|
#
|
|
# * `serviceAccount:{emailid}`: An email address that represents a service
|
|
# account. For example, `my-other-app@appspot.gserviceaccount.com`.
|
|
#
|
|
# * `group:{emailid}`: An email address that represents a Google group.
|
|
# For example, `admins@example.com`.
|
|
#
|
|
#
|
|
# * `domain:{domain}`: The G Suite domain (primary) that represents all the
|
|
# users of that domain. For example, `google.com` or `example.com`.
|
|
#
|
|
"A String",
|
|
],
|
|
"condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
|
|
# NOTE: An unsatisfied condition will not allow user access via current
|
|
# binding. Different bindings, including their conditions, are examined
|
|
# independently.
|
|
#
|
|
# title: "User account presence"
|
|
# description: "Determines whether the request has a user account"
|
|
# expression: "size(request.user) > 0"
|
|
"description": "A String", # An optional description of the expression. This is a longer text which
|
|
# describes the expression, e.g. when hovered over it in a UI.
|
|
"expression": "A String", # Textual representation of an expression in
|
|
# Common Expression Language syntax.
|
|
#
|
|
# The application context of the containing message determines which
|
|
# well-known feature set of CEL is supported.
|
|
"location": "A String", # An optional string indicating the location of the expression for error
|
|
# reporting, e.g. a file name and a position in the file.
|
|
"title": "A String", # An optional title for the expression, i.e. a short string describing
|
|
# its purpose. This can be used e.g. in UIs which allow entering the
|
|
# expression.
|
|
},
|
|
},
|
|
],
|
|
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
|
|
# prevent simultaneous updates of a policy from overwriting each other.
|
|
# It is strongly suggested that systems make use of the `etag` in the
|
|
# read-modify-write cycle to perform policy updates in order to avoid race
|
|
# conditions: An `etag` is returned in the response to `getIamPolicy`, and
|
|
# systems are expected to put that etag in the request to `setIamPolicy` to
|
|
# ensure that their change will be applied to the same version of the policy.
|
|
#
|
|
# If no `etag` is provided in the call to `setIamPolicy`, then the existing
|
|
# policy is overwritten blindly.
|
|
"version": 42, # Deprecated.
|
|
"auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
|
|
{ # Specifies the audit configuration for a service.
|
|
# The configuration determines which permission types are logged, and what
|
|
# identities, if any, are exempted from logging.
|
|
# An AuditConfig must have one or more AuditLogConfigs.
|
|
#
|
|
# If there are AuditConfigs for both `allServices` and a specific service,
|
|
# the union of the two AuditConfigs is used for that service: the log_types
|
|
# specified in each AuditConfig are enabled, and the exempted_members in each
|
|
# AuditLogConfig are exempted.
|
|
#
|
|
# Example Policy with multiple AuditConfigs:
|
|
#
|
|
# {
|
|
# "audit_configs": [
|
|
# {
|
|
# "service": "allServices"
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# "exempted_members": [
|
|
# "user:foo@gmail.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# },
|
|
# {
|
|
# "log_type": "ADMIN_READ",
|
|
# }
|
|
# ]
|
|
# },
|
|
# {
|
|
# "service": "fooservice.googleapis.com"
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# "exempted_members": [
|
|
# "user:bar@gmail.com"
|
|
# ]
|
|
# }
|
|
# ]
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
|
|
# logging. It also exempts foo@gmail.com from DATA_READ logging, and
|
|
# bar@gmail.com from DATA_WRITE logging.
|
|
"auditLogConfigs": [ # The configuration for logging of each type of permission.
|
|
{ # Provides the configuration for logging a type of permissions.
|
|
# Example:
|
|
#
|
|
# {
|
|
# "audit_log_configs": [
|
|
# {
|
|
# "log_type": "DATA_READ",
|
|
# "exempted_members": [
|
|
# "user:foo@gmail.com"
|
|
# ]
|
|
# },
|
|
# {
|
|
# "log_type": "DATA_WRITE",
|
|
# }
|
|
# ]
|
|
# }
|
|
#
|
|
# This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
|
|
# foo@gmail.com from DATA_READ logging.
|
|
"exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
|
|
# permission.
|
|
# Follows the same format of Binding.members.
|
|
"A String",
|
|
],
|
|
"logType": "A String", # The log type that this config enables.
|
|
},
|
|
],
|
|
"service": "A String", # Specifies a service that will be enabled for audit logging.
|
|
# For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
|
|
# `allServices` is a special value that covers all services.
|
|
},
|
|
],
|
|
}</pre>
</div>
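<p>A sketch of the full read-modify-write cycle for <code>setIamPolicy</code>. The project, model, role, and member shown are placeholders chosen for illustration; keeping the <code>etag</code> returned by <code>getIamPolicy</code> lets the service reject conflicting concurrent updates.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource = 'projects/my-project/models/my_model'  # placeholder resource name

policy = ml.projects().models().getIamPolicy(resource=resource).execute()
policy.setdefault('bindings', []).append({
    'role': 'roles/ml.modelUser',           # assumed example role
    'members': ['user:alice@example.com'],  # placeholder member
})
# `policy` still contains the etag returned by getIamPolicy.
updated = ml.projects().models().setIamPolicy(
    resource=resource, body={'policy': policy}).execute()
</pre>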
<div class="method">
|
|
<code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code>
|
|
<pre>Returns permissions that a caller has on the specified resource.
|
|
If the resource does not exist, this will return an empty set of
|
|
permissions, not a NOT_FOUND error.
|
|
|
|
Note: This operation is designed to be used for building permission-aware
|
|
UIs and command-line tools, not for authorization checking. This operation
|
|
may "fail open" without warning.
|
|
|
|
Args:
|
|
resource: string, REQUIRED: The resource for which the policy detail is being requested.
|
|
See the operation documentation for the appropriate value for this field. (required)
|
|
body: object, The request body. (required)
|
|
The object takes the form of:
|
|
|
|
{ # Request message for `TestIamPermissions` method.
|
|
"permissions": [ # The set of permissions to check for the `resource`. Permissions with
|
|
# wildcards (such as '*' or 'storage.*') are not allowed. For more
|
|
# information see
|
|
# [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
|
|
"A String",
|
|
],
|
|
}
|
|
|
|
x__xgafv: string, V1 error format.
|
|
Allowed values
|
|
1 - v1 error format
|
|
2 - v2 error format
|
|
|
|
Returns:
|
|
An object of the form:
|
|
|
|
{ # Response message for `TestIamPermissions` method.
|
|
"permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is
|
|
# allowed.
|
|
"A String",
|
|
],
|
|
}</pre>
</div>
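<p>A sketch of checking which permissions a caller holds on a model. The permission strings are assumptions chosen for illustration, not values taken from this reference.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
response = ml.projects().models().testIamPermissions(
    resource='projects/my-project/models/my_model',  # placeholder model
    body={'permissions': ['ml.models.get', 'ml.models.predict']}).execute()
granted = set(response.get('permissions', []))  # the subset the caller is allowed
</pre>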
</body></html> |