/*
* Copyright (C) 2019 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package android.hardware.neuralnetworks@1.3;
import @1.1::ExecutionPreference;
import @1.2::Constant;
import @1.2::DeviceType;
import @1.2::Extension;
import @1.2::IDevice;
import BufferDesc;
import BufferRole;
import Capabilities;
import ErrorStatus;
import Model;
import OptionalTimePoint;
import Priority;
import IBuffer;
import IPreparedModel;
import IPreparedModelCallback;
/**
* This interface represents a device driver.
*/
interface IDevice extends @1.2::IDevice {
/**
* Gets the capabilities of a driver.
*
* @return status Error status of the call, must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* @return capabilities Capabilities of the driver.
*/
getCapabilities_1_3() generates (ErrorStatus status, Capabilities capabilities);
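/*
* Example (not part of this interface): a minimal client-side sketch, in C++,
* of querying the capabilities through the HIDL-generated proxy. The service
* lookup and the "device" variable name are assumptions for illustration.
*
*     #include <android/hardware/neuralnetworks/1.3/IDevice.h>
*
*     using ::android::sp;
*     using namespace ::android::hardware::neuralnetworks;
*
*     sp<V1_3::IDevice> device = V1_3::IDevice::getService();  // "default" instance
*     if (device != nullptr) {
*         device->getCapabilities_1_3(
*             [](V1_3::ErrorStatus status, const V1_3::Capabilities& capabilities) {
*                 if (status == V1_3::ErrorStatus::NONE) {
*                     // Use the returned Capabilities, e.g. to rank available devices.
*                 }
*             });
*     }
*/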
/**
* Gets the supported operations in a model.
*
* getSupportedOperations indicates which operations of the top-level
* subgraph are fully supported by the vendor driver. If an operation may
* not be supported for any reason, getSupportedOperations must return
* false for that operation.
*
* The {@link OperationType::IF} and {@link OperationType::WHILE}
* operations may only be fully supported if the vendor driver fully
* supports all operations in the referenced subgraphs.
*
* @param model A model whose operations--and their corresponding operands--
* are to be verified by the driver.
* @return status Error status of the call, must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if provided model is invalid
* @return supportedOperations A list of supported operations, where true
* indicates the operation is supported and false indicates the
* operation is not supported. The index of "supported" corresponds with
* the index of the operation it is describing.
*/
getSupportedOperations_1_3(Model model)
generates (ErrorStatus status, vec<bool> supportedOperations);
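/*
* Example (not part of this interface): a hedged client-side sketch, in C++,
* of querying operation support. "device" and "model" are assumed to be an
* already-obtained V1_3::IDevice proxy and an already-constructed V1_3::Model.
*
*     std::vector<bool> supported;
*     device->getSupportedOperations_1_3(
*         model,
*         [&](V1_3::ErrorStatus status,
*             const android::hardware::hidl_vec<bool>& supportedOperations) {
*             if (status == V1_3::ErrorStatus::NONE) {
*                 supported.assign(supportedOperations.begin(), supportedOperations.end());
*             }
*         });
*     // A runtime would typically partition the graph so that only the
*     // operations reported as "true" are scheduled on this driver.
*/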
/**
* Asynchronously creates a prepared model for execution and optionally
* saves it into cache files.
*
* prepareModel is used to make any necessary transformations to or
* alternative representations to a model for execution, possibly including
* transformations on the constant data, optimization on the model's graph,
* or compilation into the device's native binary format. The model itself
* is not changed.
*
* Optionally, caching information may be provided for the driver to save
* the prepared model to cache files for faster model compilation time when
* the same model preparation is requested in the future. There are two
* types of cache file handles provided to the driver: model cache and data
* cache. For more information on the two types of cache handles, refer to
* getNumberOfCacheFilesNeeded.
*
* The file descriptors must be opened with read and write permission. A
* file may have any size, and the corresponding file descriptor may have
* any offset. The driver must truncate a file to zero size before writing
* to that file. The file descriptors may be closed by the client once the
* asynchronous preparation has finished. The driver must dup a file
* descriptor if it wants to get access to the cache file later.
*
* The model is prepared asynchronously with respect to the caller. The
* prepareModel function must verify that the inputs to the prepareModel
* function related to preparing the model (as opposed to saving the
* prepared model to cache) are correct. If there is an error, prepareModel
* must immediately invoke the callback with the appropriate ErrorStatus
* value and nullptr for the IPreparedModel, then return with the same
* ErrorStatus. If the inputs to the prepareModel function that are related
* to preparing the model are valid and there is no error, prepareModel must
* launch an asynchronous task to prepare the model in the background, and
* immediately return from prepareModel with ErrorStatus::NONE. If the
* asynchronous task fails to launch, prepareModel must immediately invoke
* the callback with ErrorStatus::GENERAL_FAILURE and nullptr for the
* IPreparedModel, then return with ErrorStatus::GENERAL_FAILURE.
*
* When the asynchronous task has finished preparing the model, it must
* immediately invoke the callback function provided as an input to
* prepareModel. If the model was prepared successfully, the callback object
* must be invoked with an error status of ErrorStatus::NONE and the
* produced IPreparedModel object. If an error occurred preparing the model,
* the callback object must be invoked with the appropriate ErrorStatus
* value and nullptr for the IPreparedModel.
*
* The model is prepared with a priority. This priority is relative to other
* prepared models owned by the same client. Higher priority executions may
* use more compute resources than lower priority executions, and may
* preempt or starve lower priority executions.
*
* prepareModel_1_3 can be called with an optional deadline. If the model
* is not able to be prepared before the provided deadline, the model
* preparation may be aborted, and either {@link
* ErrorStatus::MISSED_DEADLINE_TRANSIENT} or {@link
* ErrorStatus::MISSED_DEADLINE_PERSISTENT} may be returned. The error due
* to an abort must be sent the same way as other errors, described above.
* The deadline is represented as nanoseconds since the epoch of the steady
* clock (as if from std::chrono::steady_clock::time_point), but the service
* may convert it to the nanoseconds since boot time (as if from
* clock_gettime(CLOCK_BOOTTIME, &ts) or
* android::base::boot_clock::time_point) to account for time when the
* system is suspended. This conversion can be done by finding the timeout
* duration remaining compared to the steady_clock and adding it to the
* current boot_clock time.
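*
* As an illustration only (not mandated by this interface), assuming the
* deadline has already been reinterpreted as a
* std::chrono::steady_clock::time_point named "deadline":
*
*     const auto remaining = deadline - std::chrono::steady_clock::now();
*     const auto bootDeadline = android::base::boot_clock::now() + remaining;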
*
* Optionally, the driver may save the prepared model to cache during the
* asynchronous preparation. Any error that occurs when saving to cache must
* not affect the status of preparing the model. Even if the input arguments
* related to the cache are invalid, or the driver fails to save to
* cache, the prepareModel function must finish preparing the model. The
* driver may choose not to save to cache even if the caching information is
* provided and valid.
*
* The only information that may be unknown to the model at this stage is
* the shape of the tensors, which may only be known at execution time. As
* such, some driver services may return partially prepared models, where
* the prepared model may only be finished when it is paired with a set of
* inputs to the model. Note that the same prepared model object may be used
* with different shapes of inputs on different (possibly concurrent)
* executions.
*
* Multiple threads may call prepareModel on the same model concurrently.
*
* @param model The model to be prepared for execution.
* @param preference Indicates the intended execution behavior of a prepared
* model.
* @param priority The priority of the prepared model relative to other
* prepared models owned by the client.
* @param deadline The time by which the model is expected to be prepared.
* If the model cannot be prepared by the deadline, the preparation may
* be aborted.
* @param modelCache A vector of handles with each entry holding exactly one
* cache file descriptor for the security-sensitive cache. The length of
* the vector must either be 0 indicating that caching information is
* not provided, or match the numModelCache returned from
* getNumberOfCacheFilesNeeded. The cache handles will be provided in
* the same order when retrieving the preparedModel from cache files
* with prepareModelFromCache_1_3.
* @param dataCache A vector of handles with each entry holding exactly one
* cache file descriptor for the constants' cache. The length of the
* vector must either be 0 indicating that caching information is not
* provided, or match the numDataCache returned from
* getNumberOfCacheFilesNeeded. The cache handles will be provided in
* the same order when retrieving the preparedModel from cache files
* with prepareModelFromCache_1_3.
* @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
* identifying the prepared model. The same token will be provided when
* retrieving the prepared model from the cache files with
* prepareModelFromCache_1_3. Tokens should be chosen to have a low rate of
* collision for a particular application. The driver cannot detect a
* collision; a collision will result in a failed execution or in a
* successful execution that produces incorrect output values. If both
* modelCache and dataCache are empty indicating that caching
* information is not provided, this token must be ignored.
* @param callback A callback object used to return the error status of
* preparing the model for execution and the prepared model if
* successful, nullptr otherwise. The callback object's notify function
* must be called exactly once, even if the model could not be prepared.
* @return status Error status of launching a task which prepares the model
* in the background; must be:
* - NONE if preparation task is successfully launched
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if one of the input arguments related to preparing
* the model is invalid
* - MISSED_DEADLINE_* if the preparation is aborted because the model
* cannot be prepared by the deadline
* - RESOURCE_EXHAUSTED_* if the task was aborted by the driver
*/
prepareModel_1_3(Model model, ExecutionPreference preference,
Priority priority, OptionalTimePoint deadline,
vec<handle> modelCache, vec<handle> dataCache,
uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
IPreparedModelCallback callback)
generates (ErrorStatus status);
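/*
* Example (not part of this interface): a hedged driver-side sketch, in C++,
* of the control flow described above. "Device" is an assumed implementation
* class, and validateModel() and compileInBackground() are hypothetical
* helpers; includes and using-declarations are omitted.
*
*     Return<V1_3::ErrorStatus> Device::prepareModel_1_3(
*             const V1_3::Model& model, V1_1::ExecutionPreference preference,
*             V1_3::Priority priority, const V1_3::OptionalTimePoint& deadline,
*             const hidl_vec<hidl_handle>& modelCache,
*             const hidl_vec<hidl_handle>& dataCache,
*             const hidl_array<uint8_t, 32>& token,  // Constant:BYTE_SIZE_OF_CACHE_TOKEN
*             const sp<V1_3::IPreparedModelCallback>& callback) {
*         if (callback == nullptr) {
*             return V1_3::ErrorStatus::INVALID_ARGUMENT;
*         }
*         if (!validateModel(model)) {
*             // Invalid input: notify the callback, then return the same status.
*             callback->notify_1_3(V1_3::ErrorStatus::INVALID_ARGUMENT, nullptr);
*             return V1_3::ErrorStatus::INVALID_ARGUMENT;
*         }
*         // Valid input: launch the preparation asynchronously and return at once.
*         // (Priority, deadline, and cache handling are omitted from this sketch.)
*         std::thread([model, preference, callback] {
*             sp<V1_3::IPreparedModel> prepared = compileInBackground(model, preference);
*             callback->notify_1_3(prepared == nullptr ? V1_3::ErrorStatus::GENERAL_FAILURE
*                                                      : V1_3::ErrorStatus::NONE,
*                                  prepared);
*         }).detach();
*         return V1_3::ErrorStatus::NONE;
*     }
*/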
/**
* Creates a prepared model from cache files for execution.
*
* prepareModelFromCache_1_3 is used to retrieve a prepared model directly from
* cache files to avoid slow model compilation time. There are
* two types of cache file handles provided to the driver: model cache
* and data cache. For more information on the two types of cache handles,
* refer to getNumberOfCacheFilesNeeded.
*
* The file descriptors must be opened with read and write permission. A file may
* have any size, and the corresponding file descriptor may have any offset. The
* driver must truncate a file to zero size before writing to that file. The file
* descriptors may be closed by the client once the asynchronous preparation has
* finished. The driver must dup a file descriptor if it wants to get access to
* the cache file later.
*
* The model is prepared asynchronously with respect to the caller. The
* prepareModelFromCache_1_3 function must verify that the inputs to the
* prepareModelFromCache_1_3 function are correct, and that the security-sensitive
* cache has not been modified since it was last written by the driver.
* If there is an error, or if compilation caching is not supported, or if the
* security-sensitive cache has been modified, prepareModelFromCache_1_3 must
* immediately invoke the callback with the appropriate ErrorStatus value and
* nullptr for the IPreparedModel, then return with the same ErrorStatus. If
* the inputs to the prepareModelFromCache_1_3 function are valid, the security-sensitive
* cache is not modified, and there is no error, prepareModelFromCache_1_3 must launch an
* asynchronous task to prepare the model in the background, and immediately return
* from prepareModelFromCache_1_3 with ErrorStatus::NONE. If the asynchronous task
* fails to launch, prepareModelFromCache_1_3 must immediately invoke the callback
* with ErrorStatus::GENERAL_FAILURE and nullptr for the IPreparedModel, then
* return with ErrorStatus::GENERAL_FAILURE.
*
* When the asynchronous task has finished preparing the model, it must
* immediately invoke the callback function provided as an input to
* prepareModelFromCache_1_3. If the model was prepared successfully, the
* callback object must be invoked with an error status of ErrorStatus::NONE
* and the produced IPreparedModel object. If an error occurred preparing
* the model, the callback object must be invoked with the appropriate
* ErrorStatus value and nullptr for the IPreparedModel.
*
* prepareModelFromCache_1_3 can be called with an optional deadline. If the
* model is not able to be prepared before the provided deadline, the model
* preparation may be aborted, and either {@link
* ErrorStatus::MISSED_DEADLINE_TRANSIENT}
* or {@link ErrorStatus::MISSED_DEADLINE_PERSISTENT} may be returned. The
* error due to an abort must be sent the same way as other errors,
* described above. The deadline is represented as nanoseconds since the
* epoch of the steady clock (as if from
* std::chrono::steady_clock::time_point), but the service may convert it to
* the nanoseconds since boot time (as if from
* clock_gettime(CLOCK_BOOTTIME, &ts) or
* android::base::boot_clock::time_point) to account for time when the
* system is suspended. This conversion can be done by finding the timeout
* duration remaining compared to the steady_clock and adding it to the
* current boot_clock time.
*
* The only information that may be unknown to the model at this stage is
* the shape of the tensors, which may only be known at execution time. As
* such, some driver services may return partially prepared models, where
* the prepared model may only be finished when it is paired with a set of
* inputs to the model. Note that the same prepared model object may be
* used with different shapes of inputs on different (possibly concurrent)
* executions.
*
* @param deadline The time by which the model is expected to be prepared.
* If the model cannot be prepared by the deadline, the preparation may
* be aborted.
* @param modelCache A vector of handles with each entry holding exactly one
* cache file descriptor for the security-sensitive cache. The length of
* the vector must match the numModelCache returned from getNumberOfCacheFilesNeeded.
* The cache handles will be provided in the same order as with prepareModel_1_3.
* @param dataCache A vector of handles with each entry holding exactly one
* cache file descriptor for the constants' cache. The length of the vector
* must match the numDataCache returned from getNumberOfCacheFilesNeeded.
* The cache handles will be provided in the same order as with prepareModel_1_3.
* @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
* identifying the prepared model. It is the same token provided when saving
* the cache files with prepareModel_1_3. Tokens should be chosen
* to have a low rate of collision for a particular application. The driver
* cannot detect a collision; a collision will result in a failed execution
* or in a successful execution that produces incorrect output values.
* @param callback A callback object used to return the error status of
* preparing the model for execution and the prepared model if
* successful, nullptr otherwise. The callback object's notify function
* must be called exactly once, even if the model could not be prepared.
* @return status Error status of launching a task which prepares the model
* in the background; must be:
* - NONE if preparation task is successfully launched
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if caching is not supported or if there is an
* unspecified error
* - INVALID_ARGUMENT if one of the input arguments is invalid
* - MISSED_DEADLINE_* if the preparation is aborted because the model
* cannot be prepared by the deadline
* - RESOURCE_EXHAUSTED_* if the task was aborted by the driver
*/
prepareModelFromCache_1_3(OptionalTimePoint deadline,
vec<handle> modelCache, vec<handle> dataCache,
uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
IPreparedModelCallback callback)
generates (ErrorStatus status);
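/*
* Example (not part of this interface): a hedged client-side sketch, in C++, of
* retrieving a previously cached prepared model. "modelCache", "dataCache", and
* "token" are assumed to hold exactly the handles and token passed to
* prepareModel_1_3 when the cache was written, and "callback" is assumed to be
* a client implementation of IPreparedModelCallback.
*
*     V1_3::OptionalTimePoint deadline;  // default-constructed: no deadline
*     Return<V1_3::ErrorStatus> launchStatus = device->prepareModelFromCache_1_3(
*             deadline, modelCache, dataCache, token, callback);
*     if (!launchStatus.isOk() ||
*         static_cast<V1_3::ErrorStatus>(launchStatus) != V1_3::ErrorStatus::NONE) {
*         // Fall back to a full compilation with prepareModel_1_3.
*     }
*     // On success, the prepared model is delivered later through
*     // callback->notify_1_3().
*/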
/**
* Allocates a driver-managed buffer with the properties specified by the buffer descriptor
* as well as the input and output roles.
*
* The allocate function must verify its inputs are correct. If there is an error, or if a
* certain role or property is not supported by the driver, the allocate
* function must return with an appropriate ErrorStatus, a nullptr as the IBuffer, and 0 as the
* buffer token. If the allocation is successful, this method must return with ErrorStatus::NONE
* and the produced IBuffer with a positive token identifying the allocated buffer. A successful
* allocation must accommodate all of the specified roles and buffer properties.
*
* The buffer is allocated to an uninitialized state. An uninitialized buffer may only be used
* in ways that are specified by outputRoles. A buffer is initialized after it is used as an
* output in a successful execution, or after a successful invocation of IBuffer::copyFrom on
* the buffer. An initialized buffer may be used according to all roles specified in inputRoles
* and outputRoles. A buffer will return to the uninitialized state if it is used as an output
* in a failed execution, or after a failed invocation of IBuffer::copyFrom on the buffer.
*
* The dimensions of the buffer can be deduced from the buffer descriptor as well as the
* dimensions of the corresponding model operands of the input and output roles. The dimensions
* or rank of the buffer may be unknown at this stage. As such, some driver services may only
* create a placeholder and defer the actual allocation until execution time. Note that the
* same buffer may be used for different shapes of outputs on different executions. When the
* buffer is used as an input, the input shape must be the same as the output shape from the
* last execution using this buffer as an output.
*
* The driver must apply proper validation upon every usage of the buffer, and must fail the
* execution immediately if the usage is illegal.
*
* @param desc A buffer descriptor specifying the properties of the buffer to allocate.
* @param preparedModels A vector of IPreparedModel objects. Must only contain IPreparedModel
* objects from the same IDevice as this method is being invoked on.
* @param inputRoles A vector of roles with each specifying an input to a prepared model.
* @param outputRoles A vector of roles with each specifying an output to a prepared model.
* Each role specified in inputRoles and outputRoles must be unique. The corresponding
* model operands of the roles must have the same OperandType, scale, zero point, and
* ExtraParams. The dimensions of the operands and the dimensions specified in the buffer
* descriptor must be compatible with each other. Two dimensions are incompatible if there
* is at least one axis that is fully specified in both but has different values.
* @return status Error status of the buffer allocation. Must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if a certain buffer property or a certain role is not supported,
* or if there is an unspecified error
* - INVALID_ARGUMENT if one of the input arguments is invalid
* @return buffer The allocated IBuffer object. If the buffer could not be allocated
* due to an error, nullptr must be returned.
* @return token A positive token identifying the allocated buffer. The same token will be
* provided when referencing the buffer as one of the memory pools in the request of an
* execution. The token must not collide with the tokens of other IBuffer objects that are
* currently alive in the same driver service. If the buffer could not be allocated
* due to an error, the token must be 0.
*/
allocate(BufferDesc desc, vec<IPreparedModel> preparedModels, vec<BufferRole> inputRoles,
vec<BufferRole> outputRoles)
generates (ErrorStatus status, IBuffer buffer, uint32_t token);
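/*
* Example (not part of this interface): a hedged client-side sketch, in C++, of
* allocating a driver-managed buffer for output 0 of a single prepared model and
* later referencing it by token in a Request. "device" and "preparedModel" are
* assumptions for illustration.
*
*     V1_3::BufferDesc desc;           // dimensions left empty: shape fully unspecified
*
*     V1_3::BufferRole outputRole;
*     outputRole.modelIndex = 0;       // index into the preparedModels argument
*     outputRole.ioIndex = 0;          // output index within that prepared model
*     outputRole.frequency = 1.0f;     // estimated usage frequency, in (0.0, 1.0]
*
*     hidl_vec<V1_3::BufferRole> inputRoles;  // empty: used only as an output here
*
*     sp<V1_3::IBuffer> buffer;
*     uint32_t bufferToken = 0;
*     device->allocate(desc, {preparedModel}, inputRoles, {outputRole},
*                      [&](V1_3::ErrorStatus status, const sp<V1_3::IBuffer>& buf,
*                          uint32_t token) {
*                          if (status == V1_3::ErrorStatus::NONE) {
*                              buffer = buf;
*                              bufferToken = token;  // positive on success
*                          }
*                      });
*
*     // In a later execution, the buffer is referenced through its token as one
*     // of the Request memory pools:
*     V1_3::Request::MemoryPool pool;
*     pool.token(bufferToken);
*/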
};