diff --git a/index.bs b/index.bs index 35ee539c..ccec0054 100644 --- a/index.bs +++ b/index.bs @@ -395,7 +395,7 @@ th, td { Introduction {#intro} ===================== -The Web Neural Network API defines a web-friendly hardware-agnostic abstraction layer that makes use of Machine Learning capabilities of operating systems and underlying hardware platforms without being tied to platform-specific capabilities. The abstraction layer addresses the requirements of key Machine Learning JavaScript frameworks and also allows web developers familiar with the ML domain to write custom code without the help of libraries. A complementary Model Loader API defines a higher-level abstraction targeting primarily web developers. +The Web Neural Network API defines a web-friendly hardware-agnostic abstraction layer that makes use of Machine Learning capabilities of operating systems and underlying hardware platforms without being tied to platform-specific capabilities. The abstraction layer addresses the requirements of key Machine Learning JavaScript frameworks and also allows web developers familiar with the ML domain to write custom code without the help of libraries. For an illustrated introduction, please see the explainer. @@ -671,8 +671,8 @@ array data (such as its shape). As mentioned above, the operations have a functional semantics. This allows the implementation to potentially share the array data between multiple tensors. For example, the implementation -of operations such as reshape, or slice, or squeeze may return a view of its input tensor -that shares the same buffer as the input tensor. (In the case of reshape or squeeze, +of operations such as reshape, or slice may return a view of its input tensor +that shares the same buffer as the input tensor. (In the case of reshape, the entire data is shared, while in the case of slice, a part of the input data is shared.) The implementation may use views, as above, for intermediate values. @@ -738,7 +738,7 @@ Navigator includes NavigatorML; WorkerNavigator includes NavigatorML; -## The ML interface ## {#api-ml} +## {{ML}} interface ## {#api-ml} -
-{{MLGraph}} has the following internal slots: -
- : \[[context]] of type {{MLContext}} +
+{{MLActivation}} has the following internal slots: +
+ : \[[name]] of type [=string=] :: - The context of type {{MLContext}} associated with this {{MLGraph}}. - - : \[[inputDescriptors]] of type [=record=]<{{DOMString}}, {{MLOperandDescriptor}}> + The {{MLActivation}}'s name. + : \[[builder]] of type {{MLGraphBuilder}} :: - Maps the name of an input {{MLOperand}} to its {{MLOperandDescriptor}} for all input {{MLOperand}}s of this {{MLGraph}}. - - : \[[outputDescriptors]] of type [=record=]<{{DOMString}}, {{MLOperandDescriptor}}> + The graph builder object this {{MLActivation}} belongs to. + : \[[options]] of type [=object=] :: - Maps the name of an output {{MLOperand}} to its {{MLOperandDescriptor}} for all output {{MLOperand}}s of this {{MLGraph}}. - - : \[[implementation]] + A dictionary containing {{MLActivation}} options. + : \[[operator]] of type [=object=] :: - The underlying implementation provided by the User Agent. + Reference to {{MLActivation}}'s corresponding [=implementation-defined=] platform operator object.
-### The MLOperandDescriptor dictionary ### {#api-mloperanddescriptor} - +### Creating {{MLActivation}} ### {#api-mlactivation-create} +
+The {{MLActivation}} objects (including the ones passed as input to methods) are created by the methods of {{MLGraphBuilder}} and are identified by their name. The |options| dictionary is defined by those methods. The actual creation of the activation function, e.g. [[#api-mlgraphbuilder-sigmoid-method]] or [[#api-mlgraphbuilder-relu-method]], can then be deferred until the rest of the graph is ready to connect with it, such as during the construction of [[#api-mlgraphbuilder-conv2d]]. +
- The byte length of an {{MLOperandDescriptor}} |desc| is the value returned by the following steps: + To create an MLActivation given |builder|, |name|, |options| and |init-steps|, run the following steps:
- 1. Let |elementLength| be 1. - 1. [=map/For each=] |dimension| of |desc|.{{MLOperandDescriptor/dimensions}}: - 1. Set |elementLength| to |elementLength| × |dimension|. - 1. Let |elementSize| be the [=element size=] of one of the {{ArrayBufferView}} types that matches |desc|.{{MLOperandDescriptor/dataType}} according to [this table](#appendices-mloperanddatatype-arraybufferview-compatibility). - 1. Return |elementLength| × |elementSize|. + 1. [=Assert=]: the type of |builder| is {{MLGraphBuilder}}. + 1. If |name| is empty, then [=exception/throw=] a "{{TypeError}}". + 1. Let |activation| be a new [=object=]. + 1. Set |activation|.{{MLActivation/[[builder]]}} to |builder|. + 1. Set |activation|.{{MLActivation/[[name]]}} to |name|. + 1. If |options| is an [=object=], set |activation|.{{MLActivation/[[options]]}} to |options|. + 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. + 1. Make a request to the underlying platform to: + 1. Create an [=implementation-defined=] platform operator |opImpl| for the given |name| operation. + 1. Store a reference of |opImpl| in |activation|.{{MLActivation/[[operator]]}}. + 1. If |init-steps| are defined, run |init-steps| with |options|. + 1. Otherwise, initialize |activation|.{{MLActivation/[[operator]]}} given |options| in an [=implementation-defined=] way for the given |name| operation. + 1. Return |activation|.
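+ The following non-normative sketch illustrates how an {{MLActivation}} created by a builder method could later be fused into another operation. It assumes the relu() overload that returns an {{MLActivation}} and the conv2d() activation option; the `input` and `filter` operands are illustrative.
+    // Create an activation function object and fuse it into a conv2d operation.
+    const relu = builder.relu();
+    const output = builder.conv2d(input, filter, {activation: relu});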
-### The MLOperand interface ### {#api-mloperand} +## {{MLCommandEncoder}} interface ## {#api-mlcommandencoder} +The {{MLCommandEncoder}} interface represents a method of execution that synchronously records the computational workload of a compiled {{MLGraph}} to a {{GPUCommandBuffer}} on the calling thread. Since the workload is not immediately executed, just recorded, this method allows more flexibility for the caller to determine how and when the recorded commands will be submitted for execution on the GPU relative to other GPU workload on the same or different queue. -An {{MLOperand}} represents an intermediary graph being constructed as a result of compositing parts of an operation into a fully composed operation. +
-{{MLOperand}} has the following internal slots: -
- : \[[builder]] of type {{MLGraphBuilder}} - :: - The {{MLOperand}}'s associated builder object. - - : \[[descriptor]] of type {{MLOperandDescriptor}} - :: - The {{MLOperand}}'s descriptor. - - : \[[name]] of type [=string=] - :: - The {{MLOperand}}'s name (only for input operands). - - : \[[operand]] of type [=object=] +{{MLCommandEncoder}} has the following internal slots: +
+ : \[[context]] of type {{MLContext}} :: - Reference to {{MLOperand}}'s corresponding [=implementation-defined=] platform operand object. + The context of type {{MLContext}} associated with this {{MLCommandEncoder}}. - : \[[operator]] of type [=object=] + : \[[implementation]] :: - Reference to {{MLOperand}}'s corresponding [=implementation-defined=] platform operator object. + The underlying implementation provided by the User Agent.
-
-To get the rank of an {{MLOperand}} |operand|, run the following steps: -
- 1. Return the [=list/size=] of |operand|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}. -
-
+### Graph Initialization ### {#api-mlcommandencoder-graph-initialization} +Record the initialization of the {{MLGraph}}. This is a necessary step for optimal performance during graph execution as it gives the platform an opportunity to prepare and optimize constant input data for the subsequent execution of the graph. This method should only be called once per graph. -Since the {{MLOperand/[[builder]]}} object is bound by the {{MLGraphBuilder/constructor()}} constructor to an {{MLContext}} object, an {{MLOperand}} is also always bound to the same {{MLContext}} object. + -#### Creating {{MLOperand}} #### {#api-mloperand-create} -The {{MLOperand}} objects are created by the methods of {{MLGraphBuilder}}, internally using the following algorithms. +
+ **Arguments:** + - *graph*: an {{MLGraph}}. The compiled graph to be initialized with graph constant inputs. -
- - To create an MLOperand given |builder| and |desc|, run the following steps: - -
- 1. [=Assert=]: the type of |builder| is {{MLGraphBuilder}}. - 1. [=Assert=]: the type of |desc| is {{MLOperandDescriptor}}. - 1. Let |operand| be a new [=object=]. - 1. Set |operand|.{{MLOperand/[[builder]]}} to |builder|. - 1. Set |operand|.{{MLOperand/[[descriptor]]}} to |desc|. - 1. Return |operand|. -
-
+ **Returns:** {{undefined}}. +
- To copy an MLOperand given |operand|, run the following steps: + The initializeGraph(graph) method steps are: -
- 1. [=Assert=]: the type of |operand| is {{MLOperand}}. - 1. Let |result| be a new [=object=]. - 1. Set |result|.{{MLOperand/[[builder]]}} to |operand|.{{MLOperand/[[builder]]}}. - 1. Set |result|.{{MLOperand/[[descriptor]]}} to |operand|.{{MLOperand/[[descriptor]]}}. - 1. If |operand|.{{MLOperand/[[name]]}} [=map/exists=], then set |result|.{{MLOperand/[[name]]}} to |operand|.{{MLOperand/[[name]]}}. - 1. Return |result|. +
+
+ Graph initialization stage typically involves a process known as "weight preprocessing" where all the constant inputs to the graph are preprocessed and cached at the operating system level for subsequent graph execution calls. The initializing inputs are typically the constant weight data specified through the {{MLGraphBuilder/constant(descriptor, bufferView)|MLGraphBuilder/constant(value, type)}} method as constant operands during graph construction time. +
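+ A non-normative usage sketch, assuming a graph and context created elsewhere and the createCommandEncoder() method of {{MLContext}}: the initialization is recorded once, before any dispatch of the same graph.
+    // Record graph initialization so constant weights can be preprocessed.
+    const commandEncoder = context.createCommandEncoder();
+    commandEncoder.initializeGraph(graph);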
-
- - To check dimensions given |dimensions| and |type|, run the following steps: - -
- 1. If the [=list/size=] of |dimensions| is 0, return false. - 1. If the [=list/size=] of |dimensions| is too large to be supported by the implementation, return false. - 1. If any element of |dimensions| is not a positive number, or it is too large to be supported by the implementation given |type|, return false. - 1. Return true. -
-
+### Dispatch Execution Commands ### {#api-mlcommandencoder-dispatch-commands} +Record the {{MLGraph}} execution with the inputs {{MLNamedGPUResources}} and outputs {{MLNamedGPUResources}}. + + + +
+ **Arguments:** + - *graph*: an {{MLGraph}}. The compiled graph to be executed. + - *inputs*: an {{MLNamedGPUResources}}. The resources of inputs. + - *outputs*: an {{MLNamedGPUResources}}. The pre-allocated resources of required outputs. + + **Returns:** {{undefined}}. +
- To validate MLOperand given |operand| and |builder|, run the following steps: + The dispatch(|graph|, |inputs|, |outputs|) method steps are:
- 1. [=Assert=]: the type of |operand|.{{MLOperand/[[builder]]}} is {{MLGraphBuilder}}. - 1. If |builder| is not equal to |operand|.{{MLOperand/[[builder]]}}, return false. - 1. Let |desc| be |operand|.{{MLOperand/[[descriptor]]}}. - 1. If |desc|.{{MLOperandDescriptor/dimensions}} [=map/exists=] and invoking check dimensions given |desc|.{{MLOperandDescriptor/dimensions}} and |desc|.{{MLOperandDescriptor/dataType}} returns false, then return false. - 1. Return true. + 1. If any of the following requirements are unmet, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. +
+ 1. [=map/For each=] |key| → |value| of |inputs|: + 1. |graph|.{{MLGraph/[[inputDescriptors]]}}[|key|] must [=map/exist=]. + 1. Let |inputDesc| be |graph|.{{MLGraph/[[inputDescriptors]]}}[|key|]. + 1. If |value| is a {{GPUBuffer}}, then: + 1. |value|.{{GPUBuffer/size}} must be equal to the [=byte length=] of |inputDesc|. + 1. [=map/For each=] |key| → |value| of |outputs|: + 1. |graph|.{{MLGraph/[[outputDescriptors]]}}[|key|] must [=map/exist=]. + 1. Let |outputDesc| be |graph|.{{MLGraph/[[outputDescriptors]]}}[|key|]. + 1. If |value| is a {{GPUBuffer}}, then: + 1. |value|.{{GPUBuffer/size}} must be equal to the [=byte length=] of |outputDesc|. +
+ 1. [=map/For each=] |key| → |value| of |inputs|: + 1. Set the input of |graph|.{{MLGraph/[[implementation]]}} that is associated with |key| to |value|. + 1. [=map/For each=] |key| → |value| of |outputs|: + 1. Set the output of |graph|.{{MLGraph/[[implementation]]}} that is associated with |key| to |value|. + 1. Issue a compute request of |graph|.{{MLGraph/[[implementation]]}}. + 1. If there is an error returned by |graph|.{{MLGraph/[[implementation]]}}, then: + 1. Throw an "{{OperationError}}" {{DOMException}}. + 1. Return {{undefined}}.
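+ A non-normative usage sketch: the WebGPU device, buffer usage flags and sizes, and the operand names "x" and "y" are illustrative assumptions; each buffer size must match the byte length of the corresponding operand descriptor.
+    // Allocate GPU buffers for the input and output operands.
+    const xBuffer = gpuDevice.createBuffer(
+        {size: xByteLength, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST});
+    const yBuffer = gpuDevice.createBuffer(
+        {size: yByteLength, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC});
+    // Record the graph execution with the named GPU resources.
+    commandEncoder.dispatch(graph, {'x': xBuffer}, {'y': yBuffer});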
-### The MLActivation interface ### {#api-mlactivation} - -Objects implementing the {{MLActivation}} interface represent activation function types. +### Generate GPU Command Buffer ### {#api-mlcommandencoder-generate-gpu-command-buffer} +Complete the recording of ML workload and return a WebGPU-compatible {{GPUCommandBuffer}} containing the recorded workload. -
-{{MLActivation}} has the following internal slots: -
- : \[[name]] of type [=string=] - :: - The {{MLActivation}}'s name. - : \[[builder]] of type {{MLGraphBuilder}} - :: - The graph builder object this {{MLActivation}} belongs to. - : \[[options]] of type [=object=] - :: - A dictionary containing {{MLActivation}} options. - : \[[operator]] of type [=object=] - :: - Reference to {{MLActivation}}'s corresponding [=implementation-defined=] platform operator object. -
-
- -
-These activations function types are used to create other operations. One such use of this interface is for when an activation function is fused into another operation such as [[#api-mlgraphbuilder-conv2d]] or [[#api-mlgraphbuilder-batchnorm]] during a graph construction session. Such fused activation functions can provide a significant performance improvement when supported natively by the underlying implementation. This is intended as an optimization opportunity for implementers. -
+
+ **Arguments:** + - *descriptor*: an optional {{GPUCommandBufferDescriptor}}. Descriptor of the command buffer. -#### Creating {{MLActivation}} #### {#api-mlactivation-create} -
-The {{MLActivation}} objects (including the ones passed as input to methods) are created by the methods of {{MLGraphBuilder}} and are identified by their name. The |options| dictionary is defined by those methods. The actual creation of the activation function e.g. a [[#api-mlgraphbuilder-sigmoid-method]] or [[#api-mlgraphbuilder-relu-method]] can then be deferred until when the rest of the graph is ready to connect with it such as during the construction of [[#api-mlgraphbuilder-conv2d]] for example. + **Returns:** {{GPUCommandBuffer}}.
- To create an MLActivation given |builder|, |name|, |options| and |init-steps|, run the following steps: + The finish(|descriptor|) method steps are:
- 1. [=Assert=]: the type of |builder| is {{MLGraphBuilder}}. - 1. If |name| is empty, then [=exception/throw=] a "{{TypeError}}". - 1. Let |activation| be a new [=object=]. - 1. Set |activation|.{{MLActivation/[[builder]]}} to |builder|. - 1. Set |activation|.{{MLActivation/[[name]]}} to |name|. - 1. If |options| is an [=object=], set |activation|.{{MLActivation/[[options]]}} to |options|. 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Make a request to the underlying platform to: - 1. Create an [=implementation-defined=] platform operator |opImpl| for the given |name| operation. - 1. Store a reference of |opImpl| in |activation|.{{MLActivation/[[operator]]}}. - 1. If |init-steps| are defined, run |init-steps| with |options|. - 1. Otherwise, initialize |activation|.{{MLActivation/[[operator]]}} given |options| in an [=implementation-defined=] way for the given |name| operation. - 1. Return |activation|. + 1. Make a request to the underlying platform to complete the recording of the ML workload, given |descriptor|. +
+ See the related WebGPU steps. +
+ 1. Return a {{GPUCommandBuffer}} containing the recorded workload.
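+ A non-normative usage sketch: the recorded workload is submitted to a WebGPU queue alongside other GPU work; the gpuDevice variable is an illustrative assumption.
+    // Complete the recording and submit the resulting command buffer.
+    const commandBuffer = commandEncoder.finish();
+    gpuDevice.queue.submit([commandBuffer]);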
-## The MLContext interface ## {#api-mlcontext} +## {{MLContext}} interface ## {#api-mlcontext} The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], [=device type=] and [=power preference=]. The context type is the type of the execution context that manages the resources and facilitates the compilation and execution of the neural network graph: @@ -1129,7 +1090,7 @@ interface MLContext {}; When the {{[[contextType]]}} is set to [=default-context|default=] with the {{MLContextOptions}}.{{deviceType}} set to [=device-type-gpu|gpu=], the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application. In this setting however, only {{ArrayBufferView}} inputs and outputs are allowed in and out of the graph execution since the application has no way to know what type of internal GPU device is being created on their behalf. In this case, the user agent is responsible for automatic uploads and downloads of the inputs and outputs to and from the GPU memory using this said internal device.
-### The {{MLContext}} validation algorithm ### {#api-mlcontext-validate} +### {{MLContext}} validation algorithm ### {#api-mlcontext-validate}
@@ -1270,7 +1231,7 @@ partial interface MLContext {
-### The {{MLNamedArrayBufferViews}} transfer algorithm ### {#mlnamedarraybufferviews-transfer-alg} +### {{MLNamedArrayBufferViews}} transfer algorithm ### {#mlnamedarraybufferviews-transfer-alg}
@@ -1383,136 +1344,36 @@ partial interface MLContext { **Returns:** {{MLCommandEncoder}}. The command encoder used to record ML workload on the GPU. -## The MLCommandEncoder interface ## {#api-mlcommandencoder} -The {{MLCommandEncoder}} interface represents a method of execution that synchronously records the computational workload of a compiled {{MLGraph}} to a {{GPUCommandBuffer}} on the calling thread. Since the workload is not immediately executed, just recorded, this method allows more flexibility for the caller to determine how and when the recorded commands will be submitted for execution on the GPU relative to other GPU workload on the same or different queue. +## {{MLGraph}} interface ## {#api-mlgraph} +The {{MLGraph}} interface represents a compiled computational graph. A compiled graph once constructed is immutable and cannot be subsequently changed.
-{{MLCommandEncoder}} has the following internal slots: -
+{{MLGraph}} has the following internal slots: +
: \[[context]] of type {{MLContext}} :: - The context of type {{MLContext}} associated with this {{MLCommandEncoder}}. - - : \[[implementation]] - :: - The underlying implementation provided by the User Agent. -
-
- -### Graph Initialization ### {#api-mlcommandencoder-graph-initialization} -Record the initialization of the {{MLGraph}}. This is a necessary step for optimal performance during graph execution as it gives the platform an opportunity to prepare and optimize constant input data for the subsequent execution of the graph. This method should only be called once per graph. - - - -
- **Arguments:** - - *graph*: an {{MLGraph}}. The compiled graph to be initialized with graph constant inputs. - - **Returns:** {{undefined}}. -
- -
- - The initializeGraph(graph) method steps are: - -
-
- Graph initialization stage typically involves a process known as "weight preprocessing" where all the constant inputs to the graph are preprocessed and cached at the operating system level for subsequent graph execution calls. The initializing inputs are typically the constant weight data specified through the {{MLGraphBuilder/constant(descriptor, bufferView)|MLGraphBuilder/constant(value, dataType)}} method as constant operands during graph construction time. -
-
-
- -### Dispatch Execution Commands ### {#api-mlcommandencoder-dispatch-commands} -Record the {{MLGraph}} execution with the inputs {{MLNamedGPUResources}} and outputs {{MLNamedGPUResources}}. - - - -
- **Arguments:** - - *graph*: an {{MLGraph}}. The compiled graph to be executed. - - *inputs*: an {{MLNamedGPUResources}}. The resources of inputs. - - *outputs*: an {{MLNamedGPUResources}}. The pre-allocated resources of required outputs. - - **Returns:** {{undefined}}. -
- -
- - The dispatch(|graph|, |inputs|, |outputs|) method steps are: - -
- 1. If any of the following requirements are unmet, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. -
- 1. [=map/For each=] |key| → |value| of |inputs|: - 1. |graph|.{{MLGraph/[[inputDescriptors]]}}[|key|] must [=map/exist=]. - 1. Let |inputDesc| be |graph|.{{MLGraph/[[inputDescriptors]]}}[|key|]. - 1. If |value| is a {{GPUBuffer}}, then: - 1. |value|.{{GPUBuffer/size}} must equal to [=byte length=] of |inputDesc|. - 1. [=map/For each=] |key| → |value| of |outputs|: - 1. |graph|.{{MLGraph/[[outputDescriptors]]}}[|key|] must [=map/exist=]. - 1. Let |outputDesc| be |graph|.{{MLGraph/[[outputDescriptors]]}}[|key|]. - 1. If |value| is a {{GPUBuffer}}, then: - 1. |value|.{{GPUBuffer/size}} must equal to [=byte length=] of |outputDesc|. -
- 1. [=map/For each=] |key| → |value| of |inputs|: - 1. Set the input of |graph|.{{MLGraph/[[implementation]]}} that is associated with |key| to |value|. - 1. [=map/For each=] |key| → |value| of |outputs|: - 1. Set the output of |graph|.{{MLGraph/[[implementation]]}} that is associated with |key| to |value|. - 1. Issue a compute request of |graph|.{{MLGraph/[[implementation]]}}. - 1. If there is an error returned by |graph|.{{MLGraph/[[implementation]]}}, then: - 1. Throw an "{{OperationError}}" {{DOMException}}. - 1. Return {{undefined}}. -
-
- -### Generate GPU Command Buffer ### {#api-mlcommandencoder-generate-gpu-command-buffer} -Complete the recording of ML workload and return a WebGPU-compatible {{GPUCommandBuffer}} containing the recorded workload. - - + The context of type {{MLContext}} associated with this {{MLGraph}}. -
- **Arguments:** - - *descriptor*: an optional {{GPUCommandBufferDescriptor}}. Descriptor of the command buffer. + : \[[inputDescriptors]] of type [=record=]<{{DOMString}}, {{MLOperandDescriptor}}> + :: + Maps the name of an input {{MLOperand}} to its {{MLOperandDescriptor}} for all input {{MLOperand}}s of this {{MLGraph}}. - **Returns:** {{GPUCommandBuffer}}. -
+ : \[[outputDescriptors]] of type [=record=]<{{DOMString}}, {{MLOperandDescriptor}}> + :: + Maps the name of an output {{MLOperand}} to its {{MLOperandDescriptor}} for all output {{MLOperand}}s of this {{MLGraph}}. -
- - The finish(|descriptor|) method steps are: - -
- 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Make a request to the underlying platform to complete the recording of the ML workload, given |descriptor|. -
- See the related WebGPU steps. -
- 1. Return a {{GPUCommandBuffer}} containing the recorded workload. -
-
+ : \[[implementation]] + :: + The underlying implementation provided by the User Agent. + + -## The MLGraphBuilder interface ## {#api-mlgraphbuilder} +## {{MLGraphBuilder}} interface ## {#api-mlgraphbuilder} The {{MLGraphBuilder}} interface defines a set of operations as identified by the [[#usecases]] that can be composed into a computational graph. It also represents the intermediate state of a graph building session. @@ -1539,7 +1400,7 @@ interface MLGraphBuilder { MLOperand constant(MLOperandDescriptor descriptor, MLBufferView bufferView); // Create a single-value operand from the specified number of the specified type. - MLOperand constant(double value, optional MLOperandDataType dataType = "float32"); + MLOperand constant(double value, optional MLOperandDataType type = "float32"); // Compile the graph up to the specified output operands asynchronously. Promise build(MLNamedOperands outputs); @@ -1578,7 +1439,7 @@ Both {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} and {{MLGraphBuilder}}.{{MLGr -### The {{MLGraphBuilder}} constructor ### {#api-mlgraphbuilder-constructor} +### {{MLGraphBuilder}} constructor ### {#api-mlgraphbuilder-constructor}
@@ -1591,45 +1452,191 @@ Both {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} and {{MLGraphBuilder}}.{{MLGr
-### The {{MLGraphBuilder/input()}} method ### {#api-mlgraphbuilder-input} -Create a named {{MLOperand}} based on a descriptor, that can be used as an input. +### argMin/Max ### {#api-mlgraphbuilder-argminmax} +Return the index location of the minimum or maximum values of all the input values along the axes. + + + +{{MLArgMinMaxOptions}} has the following members: +
+ : axes + :: + A sequence of {{unsigned long}}. The dimensions to reduce. The values in the sequence must be in the range [0, N-1] where N is the [=rank=] of the input tensor. If not present, all dimensions are reduced. + + : keepDimensions + :: + A {{boolean}}. If true, retains reduced dimensions with [=list/size=] 1. The default value is false. + + : selectLastIndex + :: + A {{boolean}}. If true, select the last index instead of the first found along the axes. The default value is false. +
**Arguments:** - - *name*: a [=string=] name of the input. - - *descriptor*: an {{MLOperandDescriptor}} object. - **Returns:**: an {{MLOperand}} object. + - *input*: an {{MLOperand}}. The input N-D tensor. + - *options*: an optional {{MLArgMinMaxOptions}}. The optional parameters of the operation. + + **Returns:** an {{MLOperand}}. The N-D tensor of the reduced shape. The values must be of type "int64" in the range [0, N-1] where N is the corresponding size of each of the input dimensions specified by options.axes.
- The input(|name|, |descriptor|) method steps are: + To create argMin/argMax operation given |op|, |input| and |options|, run the following steps:
-
- The permissions and context validity have been checked by [[#api-mlgraphbuilder-constructor]] steps. -
- 1. If |name| is empty, then [=exception/throw=] a {{TypeError}}. - 1. [=Assert=]: the type of |descriptor| is {{MLOperandDescriptor}}. - 1. [=Assert=]: If |descriptor|.{{MLOperandDescriptor/dimensions}} does not [=map/exist=], then |descriptor| defines a scalar input. - 1. If |descriptor|.{{MLOperandDescriptor/dimensions}} [=map/exists=]: - 1. If the [=check dimensions=] steps given |descriptor|.{{MLOperandDescriptor/dataType}} and |descriptor|.{{MLOperandDescriptor/dimensions}} return false, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. - 1. If the [=byte length=] of |descriptor| is not supported by the underlying platform, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. + 1. [=Assert=]: |op| is one of "argMin", "argMax". + 1. [=Assert=]: the type of |input| is {{MLOperand}}. + 1. If |options|.{{MLArgMinMaxOptions/axes}} [=map/exists=], if any of its elements is not in [=the range=] 0 to the [=rank=] of |input|, exclusive, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. + 1. Let |outputShape| be the result of invoking the underlying implementation for calculating reduction output dimensions, given |options|. + 1. Let |desc| be a new {{MLOperandDescriptor}}. + 1. Set |desc|.{{MLOperandDescriptor/dataType}} to "int64". + 1. Set |desc|.{{MLOperandDescriptor/dimensions}} to |outputShape|. 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |operand| be the result of creating an MLOperand given [=this=] and |descriptor|. - 1. Set |operand|.{{MLOperand/[[name]]}} to |name|. + 1. Let |output| be the result of creating an MLOperand given [=this=] and |desc|. 1. Make a request to the underlying platform to: - 1. Create an [=implementation-defined=] platform input operand |operandImpl| given |descriptor|. - 1. Store a reference of |operandImpl| in |operand|.{{MLOperand/[[operand]]}}. - 1. Register |operand| as an input. - 1. Return |operand|. + 1. Let |opImpl| be an [=implementation-defined=] platform operator for the |op| argMin or argMax operation, given |options|. + 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}. + 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|. + 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. + 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|. + 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|. + 1. Return |output|. +
+
+ +
+ + The following argMin/argMax algorithms are supported. + +
+ The argMin(|input|, |options|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/argminmax-op | create argMin/argMax operation=] given "argMin", |input| and |options|. + 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. + 1. Return |output|. +
+ +
+ The argMax(|input|, |options|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/argminmax-op | create argMin/argMax operation=] given "argMax", |input| and |options|. + 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. + 1. Return |output|. +
+
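+ A non-normative usage sketch: the scores operand and its assumed [1, 1000] shape are illustrative.
+    // Index of the largest value along axis 1, keeping the reduced dimension.
+    const indices = builder.argMax(scores, {axes: [1], keepDimensions: true});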
+ +### batchNormalization ### {#api-mlgraphbuilder-batchnorm} +Normalize the values of the input tensor using [[Batch-Normalization]]. For each input feature, the mean and variance values of that feature are computed across all the samples in the batch dimension while the model is trained. These mean and variance values are then subsequently given to this operation during model inference. + + + +{{MLBatchNormalizationOptions}} has the following members: +
+ : scale + :: + An {{MLOperand}}. Specifies the 1-D tensor of the scaling values whose [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. + + : bias + :: + An {{MLOperand}}. Specifies the 1-D tensor of the bias values whose [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. + + : axis + :: + An {{unsigned long}} scalar. Specifies the index to the feature count dimension of the input shape for which the mean and variance values are computed. Its value must be in the range [0, N-1] where N is the [=rank=] of the input tensor. The default value is 1, corresponding to the channel (*"c"*) dimension in the *"nchw"* data layout. + + : epsilon + :: + A {{float}} scalar. Specifies a small value to prevent computational error due to divide-by-zero. + + : activation + :: + An {{MLActivation}} object. Specifies the optional activation function that immediately follows the normalization operation. +
+ +
+ **Arguments:** + - *input*: an {{MLOperand}}. The input N-D tensor. + - *mean*: an {{MLOperand}}. Specifies the 1-D tensor of the mean values of the input features across the batch. Its [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. + - *variance*: an {{MLOperand}}. The 1-D tensor of the variance values of the input features across the batch whose [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. + - *options*: an optional {{MLBatchNormalizationOptions}}. Specifies the optional parameters of the operation. + + **Returns:** an {{MLOperand}}. The batch-normalized N-D tensor of the same shape as *input*. +
+ +
+ + The batchNormalization(|input|, |mean|, |variance|, |options|) method steps are: + +
+ 1. [=Assert=]: the type of |input|, |mean| and |variance| is {{MLOperand}}. + 1. If |options|.axis is not in [=the range=] 0 to the [=rank=] of |input|, exclusive, then [=exception/throw=] a {{TypeError}}. + 1. If the [=list/size=] of |mean|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} is not 1, then [=exception/throw=] a {{TypeError}}. + 1. If |mean|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. + 1. If the [=list/size=] of |variance|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} is not 1, then [=exception/throw=] a {{TypeError}}. + 1. If |variance|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. + 1. If |options|.{{MLBatchNormalizationOptions/scale}} [=map/exists=]: + 1. If its [=list/size=] is not 1, then [=exception/throw=] a {{TypeError}}. + 1. If |options|.{{MLBatchNormalizationOptions/scale}}.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. + 1. If |options|.{{MLBatchNormalizationOptions/bias}} [=map/exists=]: + 1. If its [=list/size=] is not 1, then [=exception/throw=] a {{TypeError}}. + 1. If |options|.{{MLBatchNormalizationOptions/bias}}.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. + 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. + 1. Let |output| be the result of creating an MLOperand given [=this=] and |input|.{{MLOperand/[[descriptor]]}}, that may use the same underlying data as |input|. + 1. Make a request to the underlying platform to initialize the batch normalization: + 1. Create an [=implementation-defined=] platform operator |batchNormImpl| for this method, given |input|, |mean|, |variance| and |options|. + 1. If |options|.activation [=map/exists=], register it as the activation of |batchNormImpl|. + 1. Connect |output| as output to |batchNormImpl|. + 1. Return |output|.
-### The build() method ### {#api-mlgraphbuilder-build} +
+
+ The behavior of this operation when the input tensor is 4-D of the *"nchw"* layout and the activation is of operator type *relu* can be generically emulated using other operations as follows. However, user agents typically have a more efficient implementation for it; therefore its usage is encouraged from the performance standpoint. +
+    const shape = [1,null,1,1];
+    return builder.relu(
+      builder.add(
+        builder.mul(
+          builder.reshape(options.scale, shape),
+          builder.div(
+            builder.sub(input, builder.reshape(mean, shape)),
+            builder.sqrt(builder.add(builder.reshape(variance, shape), builder.constant(options.epsilon)))
+            )),
+        builder.reshape(options.bias, shape)));
+    
+
+
+ +### build ### {#api-mlgraphbuilder-build} Build a composed graph up to a given output operand into a computational graph, asynchronously or synchronously. -#### The {{MLGraphBuilder/build(outputs)}} method #### {#api-mlgraphbuilder-build-outputs} +#### {{MLGraphBuilder/build(outputs)}} #### {#api-mlgraphbuilder-build-outputs}
@@ -1646,7 +1653,7 @@ Build a composed graph up to a given output operand into a computational graph,
-#### The {{MLGraphBuilder/buildSync(outputs)}} method #### {#api-mlgraphbuilder-buildsync-outputs} +#### {{MLGraphBuilder/buildSync(outputs)}} #### {#api-mlgraphbuilder-buildsync-outputs}
@@ -1682,174 +1689,42 @@ Build a composed graph up to a given output operand into a computational graph,
-### The constant() method ### {#api-mlgraphbuilder-constant-method} -Create a constant {{MLOperand}} that can be used in {{MLGraphBuilder}} methods. - -#### The {{MLGraphBuilder/constant(descriptor, bufferView)}} method #### {#api-mlgraphbuilder-constant} -
- **Arguments:** - - *descriptor*: an {{MLOperandDescriptor}} object - - *bufferView*: an {{MLBufferView}} - **Returns:**: an {{MLOperand}} object. -
- -
- - The constant(|descriptor|, |bufferView|) method steps are: - -
-
- The permissions and context validity have been checked by [[#api-mlgraphbuilder-constructor]] steps. -
- 1. [=Assert=]: the type of |descriptor| is {{MLOperandDescriptor}}. - 1. If the [=byte length=] of |descriptor| is not supported by the underlying platform, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. - 1. If the [=check dimensions=] steps given |descriptor|.{{MLOperandDescriptor/dataType}} and |descriptor|.{{MLOperandDescriptor/dimensions}} return false, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. - 1. If validating buffer with descriptor given |bufferView| and |descriptor| returns false, then [=exception/throw=] a {{TypeError}}. - 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |operand| be the result of creating an MLOperand given [=this=] and |descriptor|. - 1. Let |bytes| be the result of invoking the [=get a copy of the bytes held by the buffer source=] steps given |bufferView|. - 1. Make a request to the underlying platform to: - 1. Create an [=implementation-defined=] platform operand |constantImpl| to represent a constant, given |descriptor|. - 1. Store a reference of |constantImpl| in |operand|.{{MLOperand/[[operand]]}}. - 1. Register |operand| as a tensor constant with |bytes| as value. - 1. Return |operand|. -
-
- -#### The {{MLGraphBuilder/constant(value, dataType)}} method #### {#api-mlgraphbuilder-constant-value-datatype} - -
- **Arguments:** - - *value*: a number - - *dataType*: an optional {{MLOperandDataType}}, by default *"float32"*. - **Returns:**: an {{MLOperand}} object. -
- -
- - The constant(|value|, |dataType|) method steps are: - -
-
- The permissions and context validity have been checked by [[#api-mlgraphbuilder-constructor]] steps. -
- 1. Let |descriptor| be a new {{MLOperandDescriptor}}. - 1. Set |descriptor|.{{MLOperandDescriptor/dataType}} to |dataType|. - 1. Set |descriptor|.{{MLOperandDescriptor/dimensions}} to `undefined`. -
- In the case of a scalar constant, |descriptor|.{{MLOperandDescriptor/dimensions}} is ignored. -
- 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |operand| be the result of creating an MLOperand given [=this=] and |descriptor|. - 1. Make a request to the underlying platform to: - 1. Create an [=implementation-defined=] platform operand |constantImpl| to represent a constant, given |descriptor|. - 1. Store a reference of |constantImpl| in |operand|.{{MLOperand/[[operand]]}}. - 1. Register |operand| as a scalar constant with |value| as value. - 1. Return |operand|. -
-
- -### The batchNormalization() method ### {#api-mlgraphbuilder-batchnorm} -Normalize the tensor values of input features across the batch dimension using [[Batch-Normalization]]. For each input feature, the mean and variance values of that feature supplied in this calculation as parameters are previously computed across the batch dimension of the input during the model training phase of this operation. - +### cast ### {#api-mlgraphbuilder-cast} +Cast each element in the input tensor to the target data type. - -{{MLBatchNormalizationOptions}} has the following members: -
- : scale - :: - An {{MLOperand}}. Specifies the 1-D tensor of the scaling values whose is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. - - : bias - :: - An {{MLOperand}}. Specifies the 1-D tensor of the bias values whose [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. - - : axis - :: - A {{long}} scalar. Specifies the index to the feature count dimension of the input shape for which the mean and variance values are. Its value must be in the range [0, N-1] where N is the [=rank=] of the input tensor. The default value is 1, corresponding to the channel (*"c"*) dimension in the *"nchw"* data layout. - - : epsilon - :: - A {{float}} scalar. Specifies A small value to prevent computational error due to divide-by-zero. - - : activation - :: - An {{MLActivation}} object. Specifies the optional activation function that immediately follows the normalization operation. -
-
**Arguments:** - *input*: an {{MLOperand}}. The input N-D tensor. - - *mean*: an {{MLOperand}}. Specifies the 1-D tensor of the mean values of the input features across the batch. Its [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. - - *variance*: an {{MLOperand}}. The 1-D tensor of the variance values of the input features across the batch whose [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. - - *options*: an optional {{MLBatchNormalizationOptions}}. Specifies the optional parameters of the operation. + - *type*: an {{MLOperandDataType}}. The target data type. - **Returns:** an {{MLOperand}}. The batch-normalized N-D tensor of the same shape as *input*. + **Returns:** an {{MLOperand}}. The N-D tensor of the same shape as *input* with each element cast to the target data type.
- The batchNormalization(|input|, |mean|, |variance|, |options|) method steps are: + The cast(|input|, |type|) method steps are:
- 1. [=Assert=]: the type of |input|, |mean| and |variance| is {{MLOperand}}. - 1. If |options|.axis is not in [=the range=] 0 to the [=rank=] of |input|, exclusive, then [=exception/throw=] a {{TypeError}}. - 1. If the [=list/size=] of |mean|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} is not 1, then [=exception/throw=] a {{TypeError}}. - 1. If |mean|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. - 1. If the [=list/size=] of |variance|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} is not 1, then [=exception/throw=] a {{TypeError}}. - 1. If |variance|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. - 1. If |options|.{{MLBatchNormalizationOptions/scale}} [=map/exists=]: - 1. If its [=list/size=] is not 1, then [=exception/throw=] a {{TypeError}}. - 1. If |options|.{{MLBatchNormalizationOptions/scale}}.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. - 1. If |options|.{{MLBatchNormalizationOptions/bias}} [=map/exists=]: - 1. If its [=list/size=] is not 1, then [=exception/throw=] a {{TypeError}}. - 1. If |options|.{{MLBatchNormalizationOptions/bias}}.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[0] is not equal to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|options|.{{MLBatchNormalizationOptions/axis}}], then [=exception/throw=] a {{TypeError}}. + 1. [=Assert=]: the type of |input| is {{MLOperand}}. 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |output| be the result of creating an MLOperand given [=this=] and |input|.{{MLOperand/[[descriptor]]}}, that may use the same underlying data as |input|. - 1. Make a request to the underlying platform to initialize the batch normalization: - 1. Create an [=implementation-defined=] platform operator |batchNormImpl| for this method, given |input|, |mean|, |variance| and |options|. - 1. If |options|.activation [=map/exists=],register it as activation to |batchNormImpl|. - 1. Connect |output| as output to |batchNormImpl|. + 1. Let |operand| be the result of creating an MLOperand given [=this=], |input| and |type|. + 1. Let |output| be the result of copying an MLOperand given |input|. + 1. Make a request to the underlying platform to: + 1. Create an [=implementation-defined=] platform operator |castImpl| for this method, given |type|. + 1. Store a reference of |castImpl| in |output|.{{MLOperand/[[operator]]}}. + 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent an output, given |output| and |castImpl|. + 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. + 1. Connect |operand|.{{MLOperand/[[operand]]}} as input to |castImpl|. + 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |castImpl|. 1. Return |output|.
-
-
- - The behavior of this operation when the input tensor is 4-D of the *"nchw"* layout and the activation is of operator type *relu* can be generically emulated from the usage of other operations as follow. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint. - -
-    const shape = [1,null,1,1];
-    return builder.relu(
-      builder.add(
-        builder.mul(
-          builder.reshape(options.scale, shape),
-          builder.div(
-            builder.sub(input, builder.reshape(mean, shape)),
-            builder.pow(
-              builder.add(builder.reshape(variance, shape), builder.constant(options.epsilon)),
-              builder.constant(0.5))
-            )),
-        builder.reshape(options.bias, shape)));
-    
-
-
- -### The clamp() method ### {#api-mlgraphbuilder-clamp} +### clamp ### {#api-mlgraphbuilder-clamp} Clamp the input tensor element-wise within a range specified by the minimum and maximum values. @@ -1874,16 +1749,16 @@ partial interface MLGraphBuilder {
     if (options.minValue === undefined) {
       if (options.maxValue === undefined) {
-        return operand;
+        return input;
       } else {
-        return builder.min(operand, builder.constant(options.maxValue));
+        return builder.min(input, builder.constant(options.maxValue));
       }
     } else {
       if (options.maxValue === undefined) {
-        return builder.max(operand, builder.constant(options.minValue));
+        return builder.max(input, builder.constant(options.minValue));
       } else {
         return builder.min(
-            builder.max(operand, builder.constant(options.minValue)),
+            builder.max(input, builder.constant(options.minValue)),
             builder.constant(options.maxValue));
       }
     }
@@ -1901,10 +1776,10 @@ partial interface MLGraphBuilder {
   
 
-#### The {{MLGraphBuilder/clamp(operand, options)}} method #### {#api-mlgraphbuilder-clamp-operand-options} +#### {{MLGraphBuilder/clamp(input, options)}} #### {#api-mlgraphbuilder-clamp-operand-options}
**Arguments:** - - *operand*: an {{MLOperand}}. The input tensor. + - *input*: an {{MLOperand}}. The input tensor. - *options*: an optional {{MLClampOptions}}. The optional parameters of the operation. - *minValue*: a {{float}} scalar. Specifies the minimum value of the range. When it is not specified, the clamping is not performed on the lower limit of the range. - *maxValue*: a {{float}} scalar. Specifies the maximum value of the range. When it is not specified, the clamping is not performed on the upper limit of the range. @@ -1913,27 +1788,26 @@ partial interface MLGraphBuilder {
- - The clamp(|operand|, |options|) method steps are: + The clamp(|input|, |options|) method steps are:
- 1. [=Assert=]: the type of |operand| is {{MLOperand}}. + 1. [=Assert=]: the type of |input| is {{MLOperand}}. 1. If running the check clamp options steps given |options| returns false, then [=exception/throw=] a {{TypeError}}. 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |output| be the result of copying an MLOperand given |operand|. + 1. Let |output| be the result of copying an MLOperand given |input|. 1. Make a request to the underlying platform to: 1. Create an [=implementation-defined=] platform operator |clampImpl| for this method, given |options|.{{MLClampOptions/minValue}} and |options|.{{MLClampOptions/maxValue}}. 1. Store a reference of |clampImpl| in |output|.{{MLOperand/[[operator]]}}. 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent clamp output, given |output| and |clampImpl|. 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. - 1. Connect |operand|.{{MLOperand/[[operand]]}} as input to |clampImpl|. + 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |clampImpl|. 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |clampImpl|. 1. Return |output|.
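+ A non-normative usage sketch: clamping into the [0, 6] range yields the commonly used relu6 activation; the input operand is an illustrative assumption.
+    const relu6 = builder.clamp(input, {minValue: 0, maxValue: 6});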
-#### The {{MLGraphBuilder/clamp(options)}} method #### {#api-mlgraphbuilder-clamp-options} +#### {{MLGraphBuilder/clamp(options)}} #### {#api-mlgraphbuilder-clamp-options}
**Arguments:** - *options*: an optional {{MLClampOptions}}. The optional parameters of the operation. @@ -1956,7 +1830,7 @@ partial interface MLGraphBuilder {
-### The concat() method ### {#api-mlgraphbuilder-concat} +### concat ### {#api-mlgraphbuilder-concat} Concatenates the input tensors along a given axis. + +
+ **Arguments:** + - *a*: an {{MLOperand}}. The first input tensor. + - *b*: an {{MLOperand}}. The second input tensor when specified. + + **Returns:** an {{MLOperand}}. The output tensor that contains the result of element-wise comparison of the two input tensors. +
+
+ **Operation types:** + - *equal*: Compare if the values of the two input tensors are equal, element-wise. + - *greater*: Compare if the values of the first input tensor are greater, element-wise. + - *greaterOrEqual*: Compare if the values of the first input tensor are greater or equal, element-wise. + - *lesser*: Compare if the values of the first input tensor are lesser, element-wise. + - *lesserOrEqual*: Compare if the values of the first input tensor are lesser or equal, element-wise. + - *not*: Invert the values of the input tensor to values 0 or 1, element-wise. Specifically, when the input value is non-zero, invert it to a {{boolean}} value 0. Conversely, for a zero input value, invert it to a {{boolean}} value 1. +
+ +
+Although operations *greaterOrEqual* and *lesserOrEqual* can each be implemented in terms of operations *not*, *lesser*, and *greater*, in other words `greater-or-equal(a, b)` is `not(lesser(a, b))`, they are specifically defined to handle NaN cases and, for performance reasons, to avoid double comparisons. +
+ +
+ + To create element-wise logical operation given |op|, |a| and |b|, run the following steps: + +
+ 1. [=Assert=]: |op| is one of "equal", "greater", "greaterOrEqual", "lesser", "lesserOrEqual", "not". + 1. [=Assert=]: the type of |a| and |b| if available is {{MLOperand}}. + 1. If |op| is "not": + 1. If |a|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}} is not "uint8", then [=exception/throw=] a "{{DataError}}" {{DOMException}}. + 1. If |op| is anything else but "not": + 1. If |a|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}} is not equal to |b|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}}, then [=exception/throw=] a "{{DataError}}" {{DOMException}}. + 1. Let |descriptor| be a new {{MLOperandDescriptor}}. + 1. Set |descriptor|.{{MLOperandDescriptor/dataType}} to "uint8". + 1. Let |descriptor|.{{MLOperandDescriptor/dimensions}} be the result of running the [=MLGraphBuilder/broadcast-shapes=] steps given |a|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} and |b|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}. + 1. If that [=exception/throws=] an error, re-[=exception/throw=] the error. + 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. + 1. Let |output| be the result of creating an MLOperand given [=this=] and |descriptor|. + 1. Make a request to the underlying platform to: + 1. Let |opImpl| be an [=implementation-defined=] platform operator for the logical operation |op|, given |a| and |b|. + 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}. + 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|. + 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. + 1. Connect |a|.{{MLOperand/[[operand]]}} and |b|.{{MLOperand/[[operand]]}} as inputs to |opImpl|. + 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|. + 1. Return |output|.
- The element-wise binary operation algorithms invoke the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] steps as follows. + The element-wise logical operation algorithms invoke the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] steps as follows.
- The add(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "add", |a| and |b|. - 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. - 1. Return |output|. -
- -
- The sub(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "sub", |a| and |b|. + The equal(|a|, |b|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] given "equal", |a| and |b|. 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. 1. Return |output|.
- The mul(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "mul", |a| and |b|. + The greater(|a|, |b|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] given "greater", |a| and |b|. 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. 1. Return |output|.
- The div(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "div", |a| and |b|. + The greaterOrEqual(|a|, |b|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] given "greaterOrEqual", |a| and |b|. 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. 1. Return |output|.
- The max(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "max", |a| and |b|. + The lesser(|a|, |b|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] given "lesser", |a| and |b|. 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. 1. Return |output|.
- The min(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "min", |a| and |b|. + The lesserOrEqual(|a|, |b|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] given "lesserOrEqual", |a| and |b|. 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. 1. Return |output|.
- The pow(|a|, |b|) method steps are: - 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-binary-op | create element-wise binary operation=] given "pow", |a| and |b|. + The not(|a|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-logical-op | create element-wise logical operation=] given "not" and |a|. 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. 1. Return |output|.
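+ A non-normative usage sketch of the equivalence mentioned in the note above; the operands a and b are illustrative assumptions.
+    // greaterOrEqual(a, b) composed from the lesser() and not() primitives.
+    const geEmulated = builder.not(builder.lesser(a, b));
+    // The dedicated operation avoids the double comparison.
+    const ge = builder.greaterOrEqual(a, b);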
@@ -2506,11 +2610,15 @@ partial interface MLGraphBuilder { MLOperand abs(MLOperand input); MLOperand ceil(MLOperand input); MLOperand cos(MLOperand input); + MLOperand erf(MLOperand input); MLOperand exp(MLOperand input); MLOperand floor(MLOperand input); + MLOperand identity(MLOperand input); MLOperand log(MLOperand input); MLOperand neg(MLOperand input); + MLOperand reciprocal(MLOperand input); MLOperand sin(MLOperand input); + MLOperand sqrt(MLOperand input); MLOperand tan(MLOperand input); }; @@ -2523,26 +2631,30 @@ partial interface MLGraphBuilder { element-wise unary operation of the input tensor. The shape of the output tensor is the same as the shape of input tensor.
+
**Operation types:** - *abs*: Compute the absolute value of the input tensor, element-wise. - *ceil*: Compute the ceiling of the input tensor, element-wise. - *cos*: Compute the cosine of the input tensor, element-wise. + - *erf*: Compute the error function [[Error-Function]] of the input tensor, element-wise. - *exp*: Compute the exponential of the input tensor, element-wise. - *floor*: Compute the floor of the input tensor, element-wise. + - *identity*: Copy the value of the input tensor to the output tensor, element-wise. - *log*: Compute the natural logarithm of the input tensor, element-wise. - *neg*: Compute the numerical negative value of the input tensor, element-wise. + - *reciprocal*: Compute the reciprocal of the input tensor, element-wise. - *sin*: Compute the sine of the input tensor, element-wise. + - *sqrt*: Compute the square root of the input tensor, element-wise. - *tan*: Compute the tangent of the input tensor, element-wise.
- To create element-wise unary operation given |op| and |input|, run the following steps:
- 1. [=Assert=]: |op| is one of "abs", "ceil", "cos", "exp", "floor", "log", "neg", "sin", "tan". + 1. [=Assert=]: |op| is one of "abs", "ceil", "cos", "erf", "exp", "floor", "identity", "log", "neg", "reciprocal", "sin", "sqrt", "tan". 1. [=Assert=]: the type of |input| is {{MLOperand}}. 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. 1. Let |output| be the result of copying an MLOperand given |input|. @@ -2556,7 +2668,6 @@ partial interface MLGraphBuilder { 1. Return |output|.
-
The element-wise unary operation algorithms invoke the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] steps as follows. @@ -2583,6 +2694,13 @@ partial interface MLGraphBuilder { 1. Return |output|. +
+ The erf(|input|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "erf" and |input|. + 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. + 1. Return |output|. +
+
The exp(|input|) method steps are: 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "exp" and |input|. @@ -2597,6 +2715,13 @@ partial interface MLGraphBuilder { 1. Return |output|.
+
+ The identity(|input|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "identity" and |input|. + 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. + 1. Return |output|. +
+
The log(|input|) method steps are: 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "log" and |input|. @@ -2611,6 +2736,13 @@ partial interface MLGraphBuilder { 1. Return |output|.
+
+ The reciprocal(|input|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "reciprocal" and |input|. + 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. + 1. Return |output|. +
+
The sin(|input|) method steps are: 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "sin" and |input|. @@ -2618,6 +2750,13 @@ partial interface MLGraphBuilder { 1. Return |output|.
+
+ The sqrt(|input|) method steps are: + 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "sqrt" and |input|. + 1. If that [=exception/throws=] an error, then re-[=exception/throw=] the error. + 1. Return |output|. +
+
The tan(|input|) method steps are: 1. Let |output| be the result of running the [=MLGraphBuilder/element-wise-unary-op | create element-wise unary operation=] given "tan" and |input|. @@ -2627,7 +2766,7 @@ partial interface MLGraphBuilder {
-### The elu() method ### {#api-mlgraphbuilder-elu} +### elu ### {#api-mlgraphbuilder-elu} Calculate the exponential linear unit function (ELU) on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha * (exp(min(0, x)) - 1)`. +
+ **Arguments:**
+ - *input*: an {{MLOperand}}. The input tensor.
+ - *newShape*: a sequence of {{unsigned long}}. The new shape the input tensor is expanded to.
+
+ **Returns:** an {{MLOperand}}. The output tensor with its dimensions expanded to the new shape.
+ +
+ + The expand(|input|, |newShape|) method steps are: + +
+
+ The permissions and context validity have been checked by [[#api-mlgraphbuilder-constructor]] steps. +
+ 1. [=Assert=]: the type of |input| is {{MLOperand}}.
+ 1. [=Assert=]: the type of |newShape| is a `sequence of unsigned long`.
+ 1. If any of the following steps fail, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |inputDesc| be |input|.{{MLOperand/[[descriptor]]}}.
+ 1. If the sequence length of |newShape| is not equal to the [=rank=] of |inputDesc|, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |outputDesc| be a copy of |inputDesc|.
+ 1. [=map/For each=] |index| in [=the range=] 0 to the [=rank=] of |input|, exclusive:
+ 1. Let |size| be |input|.{{MLOperand/shape()}}[|index|].
+ 1. If |size| is not equal to 1 and not equal to |newShape|[|index|], then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. If |size| is equal to 1, then let |outputDesc|.{{MLOperandDescriptor/dimensions}}[|index|] be |newShape|[|index|].
+ 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
+ 1. Let |output| be the result of creating an MLOperand given [=this=] and |outputDesc|.
+ 1. Make a request to the underlying platform to:
+ 1. Create an [=implementation-defined=] platform operator |expandImpl| for this method, given |input| and |newShape|.
+ 1. Store a reference of |expandImpl| in |output|.{{MLOperand/[[operator]]}}.
+ 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |expandImpl|.
+ 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}.
+ 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |expandImpl|.
+ 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |expandImpl|.
+ 1. Return |output|.
+
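+
+ A non-normative sketch of how expand may be used; dimensions of size 1 are repeated to match the corresponding entries of *newShape*.
+
+    // input of shape [3,1]:
+    //   [[1],
+    //    [2],
+    //    [3]]
+    const input = builder.constant(
+      { dimensions: [3,1] }, new Float32Array([1, 2, 3]));
+
+    // output of shape [3,4], repeating each value along the expanded dimension:
+    //   [[1, 1, 1, 1],
+    //    [2, 2, 2, 2],
+    //    [3, 3, 3, 3]]
+    const output = builder.expand(input, [3, 4]);
+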
+ +### gather ### {#api-mlgraphbuilder-gather} +Gather values of the input tensor along an axis according to the indices. + + +{{MLGatherOptions}} has the following members: +
+ : axis + :: + An {{unsigned long}} scalar specifying the axis along which the gathered values are obtained. Its value must be in the range [0, N-1] where N is the [=rank=] of the input tensor. +
+ +
+ **Arguments:** + - *input*: an {{MLOperand}}. The input N-D tensor from which the values are gathered. + - *indices*: an {{MLOperand}}. The indices N-D tensor of the input values to gather. The values must be of type "uint32" or "int64" in the range [0, N-1] where N is the size of the input dimension indexed by *options.axis*. + - *options*: an optional {{MLGatherOptions}}. The optional parameters of the operation. + + **Returns:** an {{MLOperand}}. The output N-D tensor of [=rank=] equal to the [=rank=] of *input* + the [=rank=] of *indices* - 1. +
+ +
+ + The gather(|input|, |indices|, |options|) method steps are: + +
+ 1. [=Assert=]: the type of |input| and |indices| is {{MLOperand}}.
+ 1. If |indices|.{{MLOperand/dataType()}} is neither "uint32" nor "int64", then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |shapeInput| be |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} and |rankInput| be the [=list/size=] of |shapeInput|.
+ 1. Let |shapeIndices| be |indices|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}.
+ 1. Let |axis| be |options|.{{MLGatherOptions/axis}}.
+ 1. If |axis| is greater than or equal to |rankInput|, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |axisSize| be |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|axis|].
+ 1. [=map/For each=] |index| → |value| of |indices|:
+ 1. If |value| is greater than or equal to |axisSize|, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |dimCount| be zero.
+ 1. Let |rankOutput| be zero.
+ 1. Let |shapeOutput| be an empty list.
+ 1. [=list/For each=] |size| of |shapeInput|:
+ 1. If |dimCount| is equal to |axis| then [=break=].
+ 1. Set |shapeOutput|[|dimCount|] to |size|.
+ 1. Increment |dimCount| by one.
+ 1. Set |rankOutput| to |dimCount|.
+ 1. Let |dimCount| be zero.
+ 1. [=list/For each=] |size| of |shapeIndices|:
+ 1. Set |shapeOutput|[|rankOutput| + |dimCount|] to |size|.
+ 1. Increment |dimCount| by one.
+ 1. Set |rankOutput| to |rankOutput| + |dimCount|.
+ 1. Let |dimCount| be zero.
+ 1. [=list/For each=] |size| of |shapeInput|:
+ 1. If |dimCount| is less than or equal to |axis|, then increment |dimCount| by one and [=continue=].
+ 1. Set |shapeOutput|[|rankOutput| + |dimCount| - |axis| - 1] to |size|.
+ 1. Increment |dimCount| by one.
+ 1. Let |desc| be a new {{MLOperandDescriptor}}.
+ 1. Set |desc|.{{MLOperandDescriptor/dimensions}} to |shapeOutput|.
+ 1. Set |desc|.{{MLOperandDescriptor/dataType}} to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}}.
+ 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
+ 1. Let |output| be the result of creating an MLOperand given [=this=] and |desc|.
+ 1. Make a request to the underlying platform to:
+ 1. Let |opImpl| be an [=implementation-defined=] platform operator for the Gather operation, given |input|, |indices|, and |options|.
+ 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}.
+ 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|.
+ 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}.
+ 1. Connect |input|.{{MLOperand/[[operand]]}} and |indices|.{{MLOperand/[[operand]]}} as inputs to |opImpl|.
+ 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|.
+ 1. Return |output|.
+
+ +
+
+ + Examples of how gather works in different slicing schemes. + +
+    // input of shape [4,3]:
+    //   [[ 0,  1,  2],
+    //    [10, 11, 12], 
+    //    [20, 21, 22], 
+    //    [30, 31, 32]]
+    const input = builder.constant(
+    { dimensions: [4,3] }, new Float32Array([0,1,2,10,11,12,20,21,22,30,31,32]));
+
+    const indices1 = builder.constant(
+    { dataType: 'uint32', dimensions: [2] }, new Uint32Array([3,1]));
+
+    const indices2 = builder.constant(
+    { dataType: 'uint32', dimensions: [3] }, new Uint32Array([2,1,1]));
+
+    const indices3 = builder.constant(
+    { dataType: 'uint32', dimensions: [2,2] }, new Uint32Array([0,1,1,2]));
+
+    // axis = 0 (default)
+    // indices of shape [2]: 
+    //   [3,1]
+    // output of shape [2,3]:
+    //   [[30, 31, 32], 
+    //    [10, 11, 12]]
+    const output1 = builder.gather(input, indices1);
+
+    // axis = 1
+    // indices of shape [3]:
+    //   [2,1,1]
+    // output of shape [4,3]:
+    //   [[ 2,  1,  1],
+    //    [12, 11, 11], 
+    //    [22, 21, 21],
+    //    [32, 31, 31]]
+    const output2 = builder.gather(input, indices2, { axis: 1 });
+
+    // axis = 1
+    // indices of shape [2,2]: 
+    //   [[0, 1], 
+    //    [1, 2]]
+    // output of shape [4,2,2]:
+    //   [[[ 0,  1], [ 1,  2]],
+    //    [[10, 11], [11, 12]],
+    //    [[20, 21], [21, 22]],
+    //    [[30, 31], [31, 32]]]
+    const output3 = builder.gather(input, indices3, { axis: 1 });
+  
+
+
+ +### gemm ### {#api-mlgraphbuilder-gemm} Calculate the [general matrix multiplication of the Basic Linear Algebra Subprograms](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3). The calculation follows the expression `alpha * A * B + beta * C`, where `A` is a 2-D tensor with shape [M, K] or [K, M], `B` is a 2-D tensor with shape [K, N] or [N, K], and `C` is broadcastable to the shape [M, N]. `A` and `B` may optionally be transposed prior to the calculation. + +The {{MLLayerNormalizationOptions}} members are: +
+ : scale
+ ::
+ An {{MLOperand}}. Specifies the N-D tensor of the scaling values whose shape is determined by the |axes| member: each value in |axes| indicates a dimension of the input tensor with scaling values. For example, for an |axes| value of [1,2,3], the shape of this tensor is the list of the corresponding sizes of input dimensions 1, 2 and 3. When this member is not present, the scaling value is assumed to be 1.
+
+ : bias
+ ::
+ An {{MLOperand}}. Specifies the N-D tensor of the bias values whose shape is determined by the |axes| member: each value in |axes| indicates a dimension of the input tensor with bias values. For example, for an |axes| value of [1,2,3], the shape of this tensor is the list of the corresponding sizes of input dimensions 1, 2 and 3. When this member is not present, the bias value is assumed to be 0.
+
+ : axes
+ ::
+ A sequence of {{unsigned long}}. The indices to the input dimensions to reduce. When this member is not present, it is assumed to be [1,2,3], that is, the reductions for the mean and variance values are calculated across all the input features for each individual sample in the batch.
+
+ : epsilon
+ ::
+ A {{float}} scalar. Specifies a small value to prevent computational error due to divide-by-zero.
+ +
+ **Arguments:** + - *input*: an {{MLOperand}}. The input N-D tensor. + - *options*: an optional {{MLLayerNormalizationOptions}}. The optional parameters of the operation. + + **Returns:** an {{MLOperand}}. The layer-normalized N-D tensor of the same shape as *input*. +
+ +
+ + The layerNormalization(|input|, |options|) method steps are: + +
+ 1. [=Assert=]: the type of |input| is {{MLOperand}}.
+ 1. If |options|.{{MLLayerNormalizationOptions/scale}} [=map/exists=], then [=Assert=]: the type of |options|.{{MLLayerNormalizationOptions/scale}} is {{MLOperand}}.
+ 1. If |options|.{{MLLayerNormalizationOptions/scale}} [=map/exists=] and the [=rank=] of |options|.{{MLLayerNormalizationOptions/scale}} is not equal to the [=list/size=] of |options|.{{MLLayerNormalizationOptions/axes}}, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. If |options|.{{MLLayerNormalizationOptions/bias}} [=map/exists=], then [=Assert=]: the type of |options|.{{MLLayerNormalizationOptions/bias}} is {{MLOperand}}.
+ 1. If |options|.{{MLLayerNormalizationOptions/bias}} [=map/exists=] and the [=rank=] of |options|.{{MLLayerNormalizationOptions/bias}} is not equal to the [=list/size=] of |options|.{{MLLayerNormalizationOptions/axes}}, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. [=map/For each=] |index| in [=the range=] 0 to the [=list/size=] of |options|.{{MLLayerNormalizationOptions/axes}}, exclusive:
+ 1. Let |axis| be |options|.{{MLLayerNormalizationOptions/axes}}[|index|].
+ 1. If |axis| is greater than or equal to the [=list/size=] of |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |size| be |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|axis|].
+ 1. If |options|.{{MLLayerNormalizationOptions/scale}} [=map/exists=] and |options|.{{MLLayerNormalizationOptions/scale}}.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|index|] is not equal to |size|, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. If |options|.{{MLLayerNormalizationOptions/bias}} [=map/exists=] and |options|.{{MLLayerNormalizationOptions/bias}}.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}[|index|] is not equal to |size|, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
+ 1. Let |output| be the result of copying an MLOperand given |input|.
+ 1. Make a request to the underlying platform to:
+ 1. Let |opImpl| be an [=implementation-defined=] platform operator for the layer normalization operation, given |options|.
+ 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}.
+ 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|.
+ 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}.
+ 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|.
+ 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|.
+ 1. Return |output|.
+
+ +
+
+ The behavior of this operation when the axes parameter is set to [1,2,3] can be generically emulated from
+ the usage of other operations as follows. However, user agents typically have a more efficient implementation for it,
+ therefore its usage is encouraged from the performance standpoint.
+    // The reduction of the mean and variance values happens over the spatial dimensions 
+    // across all the input features (i.e. all channels) of the input tensor.
+    const reduceOptions = { axes: [1,2,3], keepDimensions: true };
+    const mean = builder.reduceMean(input, reduceOptions);
+    const variance = builder.reduceMean(
+      builder.pow(
+        builder.sub(input, mean),
+        builder.constant(2)),
+      reduceOptions
+      );
+
+    // The scale and bias tensors are of the shape of the input dimensions specified 
+    // by the values in the axes parameter (i.e. [1,2,3]).
+    return builder.add(
+      builder.mul(
+        options.scale,
+        builder.div(
+          builder.sub(input, mean),
+          builder.sqrt(builder.add(variance, builder.constant(options.epsilon)))
+          )
+        ),
+      options.bias
+      );
+  
+
+
+ +### leakyRelu ### {#api-mlgraphbuilder-leakyrelu} Calculate the leaky version of rectified linear function on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha ∗ min(0, x)`. - -
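+
+ A non-normative sketch: assuming *x* is the input operand and *options.alpha* holds the slope value, the behavior of this operation can be emulated with other operations as follows. User agents typically have a more efficient implementation, so direct usage is encouraged.
+
+    return builder.add(
+      builder.max(builder.constant(0), x),
+      builder.mul(
+        builder.constant(options.alpha),
+        builder.min(builder.constant(0), x)));
+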
- **Arguments:** - - *input*: an {{MLOperand}}. The input 4-D tensor. The logical shape - is interpreted according to the value of *options.layout*. - - *options*: an optional {{MLPool2dOptions}}. The optional parameters of the operation. - - **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the - result of the reduction. The logical shape is interpreted according to the - value of *layout*. More specifically, if the *options.roundingType* is *"floor"*, the spatial dimensions of the output tensor can be calculated as follow: - - *output size = floor(1 + (input size - filter size + beginning padding + ending padding) / stride)* - - or if *options.roundingType* is *"ceil"*: - - *output size = ceil(1 + (input size - filter size + beginning padding + ending padding) / stride)* -
+ sequence<unsigned long> outputSizes;
+};
-
- A *global* pooling operation such as one for the max pooling operation is a variant of pooling where the window dimensions is the spatial dimensions (last two dimensions) of the input shape, as follow. -
-    // 'global' max pooling
-    builder.maxPool2d(input);
-    
-
+partial interface MLGraphBuilder { + MLOperand averagePool2d(MLOperand input, optional MLPool2dOptions options = {}); + MLOperand l2Pool2d(MLOperand input, optional MLPool2dOptions options = {}); + MLOperand maxPool2d(MLOperand input, optional MLPool2dOptions options = {}); +}; + {{MLPool2dOptions}} has the following members:
@@ -4387,11 +4824,34 @@ partial interface MLGraphBuilder {
Specifies the sizes of the two spatial dimensions of the output tensor. When the output sizes are explicitly specified, the {{MLPool2dOptions/roundingType}} is ignored. If not specified, the output sizes are automatically computed.
-
-
+
+ **Arguments:**
+ - *input*: an {{MLOperand}}. The input 4-D tensor. The logical shape
+ is interpreted according to the value of *options.layout*.
+ - *options*: an optional {{MLPool2dOptions}}. The optional parameters of the operation.
+
+ **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the
+ result of the pooling operation. The logical shape is interpreted according to the
+ value of *options.layout*. More specifically, if the *options.roundingType* is *"floor"*, the spatial dimensions of the output tensor can be calculated as follows:
+
+ *output size = floor(1 + (input size - filter size + beginning padding + ending padding) / stride)*
+
+ or if *options.roundingType* is *"ceil"*:
+
+ *output size = ceil(1 + (input size - filter size + beginning padding + ending padding) / stride)*
+ +
+ A *global* pooling operation such as one for the max pooling operation is a variant of pooling where the window dimensions are the spatial dimensions (the last two dimensions) of the input shape, as follows.
+    // 'global' max pooling
+    builder.maxPool2d(input);
+    
+
+
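+
+ A non-normative sketch of pooling with an explicitly specified window and strides, assuming the default "nchw" layout.
+
+    // 2x2 max pooling with a stride of 2 in each spatial dimension.
+    const pooled = builder.maxPool2d(input, {
+      windowDimensions: [2, 2],
+      strides: [2, 2]
+    });
+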
To create pooling operation given |op|, |input| and |options|, run the following steps: @@ -4454,7 +4914,16 @@ partial interface MLGraphBuilder {
-### The prelu() method ### {#api-mlgraphbuilder-prelu}
+#### averagePool2d #### {#api-mlgraphbuilder-pool2d-average}
+Calculate the average value for patches of a feature map, and use it to create a pooled feature map. See [[#api-mlgraphbuilder-pool2d]] for more detail.
+
+#### l2Pool2d #### {#api-mlgraphbuilder-pool2d-l2}
+Apply the L2 norm function to a region of the input feature map. The L2 norm is the square root of the sum of the squares of its elements. See [[#api-mlgraphbuilder-pool2d]] for more detail.
+
+#### maxPool2d #### {#api-mlgraphbuilder-pool2d-max}
+Calculate the maximum value for patches of a feature map, and use it to create a pooled feature map. See [[#api-mlgraphbuilder-pool2d]] for more detail.
+
+### prelu ### {#api-mlgraphbuilder-prelu}
Calculate the parametric version of rectified linear function (Parametric ReLU) on the input tensor element-wise. Parametric ReLU is a type of leaky ReLU that, instead of having a scalar slope like 0.01, makes the slope (coefficient of leakage) a parameter that is learned during the model training phase. The calculation follows the expression `max(0, x) + slope ∗ min(0, x)`.
+{{MLReduceOptions}} has the following members:
+ : axes + :: + A sequence of {{unsigned long}}. The dimensions to reduce. The values in the sequence must be in the range [0, N-1] where N is the [=rank=] of the input tensor. If not present, all dimensions are reduced. + + : keepDimensions + :: + A {{boolean}}. If true, retains reduced dimensions with [=list/size=] 1. The default value is false. +
+
**Arguments:** - *input*: an {{MLOperand}}. The input tensor. - *options*: an optional {{MLReduceOptions}}. The optional parameters of the operation. - - *axes*: a sequence of {{unsigned long}}. The dimensions to reduce. The values in the sequence must be in the range [0, N-1] where N is the [=rank=] of the input tensor. - If not present, all dimensions are reduced. - - *keepDimensions*: a {{boolean}}. If true, retains reduced dimensions with [=list/size=] 1. - The default value is false. **Returns:** an {{MLOperand}}. The reduced output tensor.
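+
+ A non-normative sketch of a reduction over a single axis with the reduced dimension retained.
+
+    // input of shape [2,3]:
+    //   [[1, 2, 3],
+    //    [4, 5, 6]]
+    // output of shape [2,1]:
+    //   [[ 6],
+    //    [15]]
+    const output = builder.reduceSum(input, { axes: [1], keepDimensions: true });
+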
@@ -4559,7 +5035,6 @@ partial interface MLGraphBuilder {
- To create reduce operation given |op|, |input| and |options|, run the following steps:
@@ -4567,8 +5042,12 @@ partial interface MLGraphBuilder {
1. [=Assert=]: |op| is one of "reduceL1", "reduceL2", "reduceLogSum", "reduceLogSumExp", "reduceMax", "reduceMean", "reduceMin", "reduceProduct", "reduceSum", "reduceSumSquare".
1. [=Assert=]: the type of |input| is {{MLOperand}}.
1. If |options|.{{MLReduceOptions/axes}} [=map/exists=] and any of its elements is not in [=the range=] 0 to the [=rank=] of |input|, exclusive, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |outputShape| be the result of invoking the underlying implementation for calculating reduction output dimensions, given |input| and |options|.
+ 1. Let |desc| be a new {{MLOperandDescriptor}}.
+ 1. Set |desc|.{{MLOperandDescriptor/dataType}} to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}}.
+ 1. Set |desc|.{{MLOperandDescriptor/dimensions}} to |outputShape|.
1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
- 1. Let |output| be the result of copying an MLOperand given |input|.
+ 1. Let |output| be the result of creating an MLOperand given [=this=] and |desc|.
1. Make a request to the underlying platform to:
1. Let |opImpl| be an [=implementation-defined=] platform operator for the |op| reduce operation, given |options|.
1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}.
@@ -4655,7 +5134,7 @@ partial interface MLGraphBuilder {
-### The relu() method ### {#api-mlgraphbuilder-relu-method} +### relu ### {#api-mlgraphbuilder-relu-method} Compute the rectified linear function of the input tensor. + +
+
+ The behavior of this operation can be generically emulated from the usage of
+ other operations as follows. However, user agents typically have a more
+ efficient implementation for it, therefore its usage is encouraged from the
+ performance standpoint.
+    return builder.div(
+              builder.sub(builder.exp(builder.mul(builder.constant(2), x)), builder.constant(1)),
+              builder.add(builder.exp(builder.mul(builder.constant(2), x)), builder.constant(1)));
+    
+
+
+ +#### {{MLGraphBuilder/tanh(input)}} #### {#api-mlgraphbuilder-tanh-input} +
+ **Arguments:** + - *input*: an {{MLOperand}}. The input tensor. + + **Returns:** + - an {{MLOperand}}. The output tensor of the same shape as *input*. +
+ +
+ + + The tanh(|input|) method steps are: + +
+ 1. [=Assert=]: the type of |input| is {{MLOperand}}. + 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. + 1. Let |output| be the result of copying an MLOperand given |input|. + 1. Make a request to the underlying platform to: + 1. Let |opImpl| be an [=implementation-defined=] platform operator for the hyperbolic tangent operation. + 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}. + 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|. + 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. + 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|. + 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|. + 1. Return |output|. +
+
+ +#### {{MLGraphBuilder/tanh()}} #### {#api-mlgraphbuilder-tanh} +
+ **Arguments:** + - None. + + **Returns:** + - an {{MLActivation}}. The activation function representing the tanh operation. +
+ +
+ + The tanh() method steps are: + +
+ 1. Let |op| be the result of creating an MLActivation given [=this=] and `"tanh"`. + 1. If that [=exception/throws=] an error, re-[=exception/throw=] the error. + 1. Return |op|. +
+
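+
+ A non-normative sketch of using the returned {{MLActivation}}, assuming an operation such as conv2d() whose options accept an *activation* member to be fused with it.
+
+    const activation = builder.tanh();
+    // The tanh activation is fused into the convolution where the platform supports it.
+    const output = builder.conv2d(input, filter, { activation });
+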
+ +### transpose ### {#api-mlgraphbuilder-transpose} +Permute the dimensions of the input tensor according to the *permutation* argument. + + +{{MLTransposeOptions}} has the following members: +
+ : permutation
+ ::
+ A sequence of {{unsigned long}} values.
+ Specifies the values used to permute the output shape.
+ The default value is [N-1, ..., 0], where N is the [=rank=] of the input tensor, e.g. [2,1,0] for a 3-D tensor.
+ These default values cause the output to become a transposed tensor of the input. When specified, the number of values in the sequence must be the same as the [=rank=] of the input tensor, and the values in the sequence must be within the range from 0 to N-1 with no duplicate values.
+ +
+ **Arguments:** + - *input*: an {{MLOperand}}. The input N-D tensor. + - *options*: an optional {{MLTransposeOptions}}. The optional parameters of the operation. + + **Returns:** an {{MLOperand}}. The permuted or transposed N-D tensor. +
+ +
+ + The transpose(|input|, |options|) method steps are: + +
+ 1. [=Assert=]: the type of |input| is {{MLOperand}}.
+ 1. If |options|.{{MLTransposeOptions/permutation}} does not [=map/exist=], let |options|.{{MLTransposeOptions/permutation}} be the reversed sequence of all indices for |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}.
+ 1. Otherwise if |options|.{{MLTransposeOptions/permutation}} [=map/exists=]:
+ 1. If the [=list/size=] of |options|.{{MLTransposeOptions/permutation}} is not the same as the [=rank=] of |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}, then [=exception/throw=] a {{TypeError}}.
+ 1. If the values in |options|.{{MLTransposeOptions/permutation}} are not in [=the range=] 0 to the [=rank=] of |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}, exclusive, then [=exception/throw=] a {{TypeError}}.
+ 1. If the values in |options|.{{MLTransposeOptions/permutation}} contain duplicate values, then [=exception/throw=] a {{TypeError}}.
+ 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
+ 1. Let |output| be the result of copying an MLOperand given |input|.
+ 1. Make a request to the underlying platform to:
+ 1. Let |opImpl| be an [=implementation-defined=] platform operator for the transpose operation, given |options|.
+ 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}.
+ 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|.
+ 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}.
+ 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|.
+ 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|.
+ 1. Return |output|.
+
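+
+ A non-normative sketch of transpose with the default and an explicit permutation.
+
+    // input of shape [2,3]:
+    //   [[1, 2, 3],
+    //    [4, 5, 6]]
+    const input = builder.constant(
+      { dimensions: [2,3] }, new Float32Array([1, 2, 3, 4, 5, 6]));
+
+    // The default permutation [1,0] reverses the dimensions.
+    // output of shape [3,2]:
+    //   [[1, 4],
+    //    [2, 5],
+    //    [3, 6]]
+    const transposed = builder.transpose(input);
+
+    // An explicit identity permutation [0,1] leaves the shape unchanged.
+    const unchanged = builder.transpose(input, { permutation: [0, 1] });
+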
+ +### triangular ### {#api-mlgraphbuilder-triangular} +Given a 2-D tensor (matrix), return a 2-D tensor containing either the upper or lower triangular part of the input tensor. + + + +{{MLTriangularOptions}} has the following members: +
+ : upper
+ ::
+ A {{boolean}} value. Indicates whether the upper or the lower part of the input matrix is retained in the output. If not set, it is assumed to be true, indicating that the upper part is retained.
+ : diagonal
+ ::
+ A {{long}} value. Specifies how many diagonals above or below the main diagonal of the input matrix are retained or excluded. If not set, this value is assumed to be 0, which means no diagonals other than the main diagonal are affected.
+ +
+ **Arguments:** + - *input*: an {{MLOperand}}. The input 2-D tensor. + - *options*: an optional {{MLTriangularOptions}}. The optional parameters of the operation. + + **Returns:** an {{MLOperand}}. The output 2-D tensor representing a triangular matrix. +
+ +
+ + The triangular(|input|, |options|) method steps are: + +
+ 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. + 1. Let |output| be the result of copying an MLOperand given |input|. + 1. Make a request to the underlying platform to: + 1. Let |opImpl| be an [=implementation-defined=] platform operator for the triangular operation, given |options|. + 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}. + 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|. + 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. + 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|. + 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|. + 1. Return |output|. +
+
+ +
+
+ + Examples of how triangular works in different diagonal settings.
-    // This sample shows the case that the splits parameter is an array.
-    const outputs = [];
-    let starts = Array(input_rank).fill(0);
-    let sizes = input_shape;
-    let start = 0;
-    for (const size of splits) {
-      starts[options.axis] = start;
-      sizes[options.axis] = size;
-      outputs.push(builder.slice(input, starts, sizes));
-      start += size;
-    }
-    return outputs;
+    // input:
+    //   [[7, 1, 2],
+    //    [9, 4, 8],
+    //    [2, 6, 3]]
+    const input = builder.constant(
+    { dimensions: [3,3] }, new Float32Array([7,1,2,9,4,8,2,6,3]));
+
+    // upper triangular matrix:
+    //   [[7, 1, 2], 
+    //    [0, 4, 8],
+    //    [0, 0, 3]]
+    const upper = builder.triangular(input);
+
+    // upper triangular matrix with one additional set of diagonals excluded:
+    //   [[0, 1, 2], 
+    //    [0, 0, 8],
+    //    [0, 0, 0]]
+    const upperPositive = builder.triangular(input, { diagonal: 1 });
+
+    // upper triangular matrix with one additional set of diagonals retained:
+    //   [[7, 1, 2], 
+    //    [9, 4, 8],
+    //    [0, 6, 3]]
+    const upperNegative = builder.triangular(input, { diagonal: -1 });
+
+    // lower triangular matrix:
+    //   [[7, 0, 0],
+    //    [9, 4, 0],
+    //    [2, 6, 3]]
+    const lower = builder.triangular(input, { upper: false });
+
+    // lower triangular matrix with one additional set of diagonals retained:
+    //   [[7, 1, 0],
+    //    [9, 4, 8],
+    //    [2, 6, 3]]
+    const lowerPositive = builder.triangular(input, { upper: false, diagonal: 1 });
+
+    // lower triangular matrix with one additional set of diagonals excluded:
+    //   [[0, 0, 0],
+    //    [9, 0, 0],
+    //    [2, 6, 0]]
+    const lowerNegative = builder.triangular(input, { upper: false, diagonal: -1 });
   
-### The squeeze() method ### {#api-mlgraphbuilder-squeeze} -Reduce the [=rank=] of a tensor by eliminating dimensions with size 1 of the tensor shape. Squeeze only affects the tensor's logical dimensions. It does not copy or change the content in the tensor. -
**Arguments:** - - *input*: an {{MLOperand}}. The input tensor. - - *options*: an optional {{MLSqueezeOptions}}. The optional parameters of the operation. + - *condition*: an {{MLOperand}}. The condition tensor. + - *input*: an {{MLOperand}}. The input tensor from which the value is selected when the condition of the corresponding element is set to true. + - *other*: an {{MLOperand}}. The other tensor from which the value is selected when the condition of the corresponding element is set to false. - **Returns:** an {{MLOperand}}. The output tensor of the same or reduced rank with the shape dimensions of size 1 eliminated. + **Returns:** an {{MLOperand}}. The output tensor that contains the values selected element-wise from either the input or the other tensor.
-{{MLSqueezeOptions}} has the following members: -
- : axes - :: - A sequence of {{unsigned long}}. - Specifies the indices to the shape dimensions of size 1 to eliminate. The values in the sequence must be in the range [0, N-1] where N is the [=rank=] of the input tensor. - When not specified, every shape dimensions of size 1 in the tensor are eliminated. -
-
- - The squeeze(|input|, |options|) method steps are: + The where(|condition|, |input|, |other|) method steps are:
- 1. [=Assert=]: the type of |input| is {{MLOperand}}.
- 1. If |options|.{{MLSqueezeOptions/axes}} [=map/exists=], then:
- 1. Let |dimensions| be |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}.
- 1. Let |axesLength| be the [=list/size=] of |options|.{{MLSqueezeOptions/axes}}.
- 1. If |axesLength| is not smaller than the rank of |dimensions|,
- 1. For |index| in [=the range=] 0 to |axesLength|, exclusive:
- 1. Let |oneDimIndex| be |options|.{{MLSqueezeOptions/axes}}[|index|].
- 1. If |dimensions|[|oneDimIndex|] is not 1, then [=exception/throw=] a {{TypeError}}.
+ 1. [=Assert=]: the type of |condition|, |input| and |other| is {{MLOperand}}.
+ 1. If |condition|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}} is not equal to "uint8", then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. If |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}} is not equal to |other|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}}, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
+ 1. Let |descriptor| be a new {{MLOperandDescriptor}}.
+ 1. Set |descriptor|.{{MLOperandDescriptor/dataType}} to |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}}.
+ 1. Set |descriptor|.{{MLOperandDescriptor/dimensions}} to the result of running the [=MLGraphBuilder/broadcast-shapes=] steps given |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} and |other|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}.
+ 1. If that [=exception/throws=] an error, re-[=exception/throw=] the error.
+ 1. If |condition|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} is not unidirectionally broadcastable to |descriptor|.{{MLOperandDescriptor/dimensions}} according to the [[!numpy-broadcasting-rule]], then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
- 1. Let |output| be the result of copying an MLOperand given |input|.
+ 1. Let |output| be the result of creating an MLOperand given [=this=] and |descriptor|.
1. Make a request to the underlying platform to:
- 1. Let |opImpl| be an [=implementation-defined=] platform operator for the squeeze operation, given |options|.
+ 1. Let |opImpl| be an [=implementation-defined=] platform operator for the where operation, given |condition|, |input| and |other|.
1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}.
1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|.
1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}.
- 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|.
+ 1. Connect |condition|.{{MLOperand/[[operand]]}}, |input|.{{MLOperand/[[operand]]}} and |other|.{{MLOperand/[[operand]]}} as inputs to |opImpl|.
1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|.
1. Return |output|.
-### The tanh() method ### {#api-mlgraphbuilder-tanh-method} -Compute the hyperbolic tangent function of the input tensor. The calculation follows the expression `(exp(2 * x) - 1) / (exp(2 * x) + 1)`. - -
- The behavior of this operation can be generically emulated from the usage of
- other operations as follow. However, user agents typically have a more
- efficient implementation for it, therefore its usage is encouraged from the
- performance standpoint.
+ The behavior of this operation can be generically emulated from the usage of other operations as follows. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint.
-    return builder.div(
-              builder.sub(builder.exp(builder.mul(builder.constant(2), x)), builder.constant(1)),
-              builder.add(builder.exp(builder.mul(builder.constant(2), x)), builder.constant(1)));
+    const c = builder.clamp(condition, {'minValue': 0, 'maxValue': 1});
+    return builder.add(
+      builder.mul(
+        input,
+        builder.cast(c, input.dataType())),
+      builder.mul(
+        other,
+        builder.cast(builder.not(c), other.dataType())));
     
-#### The {{MLGraphBuilder/tanh(input)}} method #### {#api-mlgraphbuilder-tanh-input} -
- **Arguments:** - - *input*: an {{MLOperand}}. The input tensor. +## {{MLOperand}} interface ## {#api-mloperand} - **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. +An {{MLOperand}} represents an intermediary graph being constructed as a result of compositing parts of an operation into a fully composed operation. + +For instance, an {{MLOperand}} may represent a constant feeding to an operation or the result from combining multiple constants together into an operation. See also [[#programming-model]]. + + + +
+{{MLOperand}} has the following internal slots: +
+ : \[[builder]] of type {{MLGraphBuilder}} + :: + The {{MLOperand}}'s associated builder object. + + : \[[descriptor]] of type {{MLOperandDescriptor}} + :: + The {{MLOperand}}'s descriptor. + + : \[[name]] of type [=string=] + :: + The {{MLOperand}}'s name (only for input operands). + + : \[[operand]] of type [=object=] + :: + Reference to {{MLOperand}}'s corresponding [=implementation-defined=] platform operand object. + + : \[[operator]] of type [=object=] + :: + Reference to {{MLOperand}}'s corresponding [=implementation-defined=] platform operator object. +
-
+
+To get the rank of an {{MLOperand}} |operand|, run the following steps: +
+ 1. Return the [=list/size=] of |operand|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}. +
+
+ +Since the {{MLOperand/[[builder]]}} object is bound by the {{MLGraphBuilder/constructor()}} constructor to an {{MLContext}} object, an {{MLOperand}} is also always bound to the same {{MLContext}} object. +### Creating {{MLOperand}} ### {#api-mloperand-create} +The {{MLOperand}} objects are created by the methods of {{MLGraphBuilder}}, internally using the following algorithms. + +
- The tanh(|input|) method steps are: + To create an MLOperand given |builder| and |desc|, run the following steps:
- 1. [=Assert=]: the type of |input| is {{MLOperand}}. - 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |output| be the result of copying an MLOperand given |input|. - 1. Make a request to the underlying platform to: - 1. Let |opImpl| be an [=implementation-defined=] platform operator for the hyperbolic tangent operation. - 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}. - 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|. - 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. - 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|. - 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|. - 1. Return |output|. + 1. [=Assert=]: the type of |builder| is {{MLGraphBuilder}}. + 1. [=Assert=]: the type of |desc| is {{MLOperandDescriptor}}. + 1. Let |operand| be a new [=object=]. + 1. Set |operand|.{{MLOperand/[[builder]]}} to |builder|. + 1. Set |operand|.{{MLOperand/[[descriptor]]}} to |desc|. + 1. Return |operand|.
-#### The {{MLGraphBuilder/tanh()}} method #### {#api-mlgraphbuilder-tanh} -
- **Arguments:** - - None. +
+ + To copy an MLOperand given |operand|, run the following steps: + +
+ 1. [=Assert=]: the type of |operand| is {{MLOperand}}. + 1. Let |result| be a new [=object=]. + 1. Set |result|.{{MLOperand/[[builder]]}} to |operand|.{{MLOperand/[[builder]]}}. + 1. Set |result|.{{MLOperand/[[descriptor]]}} to |operand|.{{MLOperand/[[descriptor]]}}. + 1. If |operand|.{{MLOperand/[[name]]}} [=map/exists=], then set |result|.{{MLOperand/[[name]]}} to |operand|.{{MLOperand/[[name]]}}. + 1. Return |result|. +
+
- **Returns:** - - an {{MLActivation}}. The activation function representing the tanh operation. -
+
+ + To check dimensions given |dimensions| and |type|, run the following steps: + +
+ 1. If the [=list/size=] of |dimensions| is 0, return false. + 1. If the [=list/size=] of |dimensions| is too large to be supported by the implementation, return false. + 1. If any element of |dimensions| is not a positive number, or it is too large to be supported by the implementation given |type|, return false. + 1. Return true. +
+
- The tanh() method steps are: + To validate MLOperand given |operand| and |builder|, run the following steps:
- 1. Let |op| be the result of creating an MLActivation given [=this=] and `"tanh"`. - 1. If that [=exception/throws=] an error, re-[=exception/throw=] the error. - 1. Return |op|. + 1. [=Assert=]: the type of |operand|.{{MLOperand/[[builder]]}} is {{MLGraphBuilder}}. + 1. If |builder| is not equal to |operand|.{{MLOperand/[[builder]]}}, return false. + 1. Let |desc| be |operand|.{{MLOperand/[[descriptor]]}}. + 1. If |desc|.{{MLOperandDescriptor/dimensions}} [=map/exists=] and invoking check dimensions given |desc|.{{MLOperandDescriptor/dimensions}} and |desc|.{{MLOperandDescriptor/dataType}} returns false, then return false. + 1. Return true.
-### The transpose() method ### {#api-mlgraphbuilder-transpose}
-Permute the dimensions of the input tensor according to the *permutation* argument.
+### dataType ### {#api-mloperand-datatype}
+Return the data type of the {{MLOperand}}.
+
-partial interface MLGraphBuilder {
- MLOperand transpose(MLOperand input, optional MLTransposeOptions options = {});
+
+ **Returns:** an {{MLOperandDataType}}. The data type of the operand. +
+ +
+ + The dataType() method steps are: + +
+ 1. Return [=this=].{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dataType}}. +
+
+
+### shape ### {#api-mloperand-shape}
+Return the shape of the {{MLOperand}}.
+
+
- **Arguments:** - - *input*: an {{MLOperand}}. The input N-D tensor. - - *options*: an optional {{MLTransposeOptions}}. The optional parameters of the operation. - - **Returns:** an {{MLOperand}}. The permuted or transposed N-D tensor. + **Returns:** a sequence of {{unsigned long}}. The shape of the operand.
-{{MLTransposeOptions}} has the following members: -
- : permutation - :: - A sequence of {{unsigned long}} values. - Specifies the values used to permute the output shape. - The default value is [N-1, ..., 0], where N is the [=rank=] of the input tensor, e.g. [2,1,0] for a 3-D tensor. - These default values cause the output to become a transposed tensor of the input. When specified, the number of values in the sequence must be the same as the [=rank=] of the input tensor, and the values in the sequence must be within the range from 0 to N-1 with no two or more same values found in the sequence. -
+
+ + The shape() method steps are: + +
+ 1. Return [=this=].{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}. +
+
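+
+ A non-normative sketch of querying an operand's data type and shape, assuming *builder* is an {{MLGraphBuilder}}.
+
+    const a = builder.input('a', { dataType: 'float32', dimensions: [2, 3] });
+    a.dataType();  // 'float32'
+    a.shape();     // [2, 3]
+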
+ +## {{MLOperandDescriptor}} dictionary ## {#api-mloperanddescriptor} +
- The transpose(|input|, |options|) method steps are: + The byte length of an {{MLOperandDescriptor}} |desc| is the value returned by the following steps:
- 1. [=Assert=]: the type of |input| is {{MLOperand}}. - 1. If |options|.{{MLTransposeOptions/permutation}} does not [=map/exist=], let |options|.{{MLTransposeOptions/permutation}} be the reversed sequence of all indices for |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}. - 1. Otherwise if |options|.{{MLTransposeOptions/permutation}} [=map/exists=]: - 1. If the [=rank=] of |options|.{{MLTransposeOptions/permutation}} is not the same as the [=rank=] of |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}}, then [=exception/throw=] a {{TypeError}}. - 1. If the values in |options|.{{MLTransposeOptions/permutation}} are not in [=the range=] 0 and the [=rank=] of |input|.{{MLOperand/[[descriptor]]}}.{{MLOperandDescriptor/dimensions}} exclusive, then [=exception/throw=] a {{TypeError}}. - 1. If the values in |options|.{{MLTransposeOptions/permutation}} contain duplicate value, then [=exception/throw=] a {{TypeError}}. - 1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}. - 1. Let |output| be the result of copying an MLOperand given |input|. - 1. Make a request to the underlying platform to: - 1. Let |opImpl| be an [=implementation-defined=] platform operator for the transpose operation, given |options|. - 1. Store a reference of |opImpl| in |output|.{{MLOperand/[[operator]]}}. - 1. Create an [=implementation-defined=] platform operand |outputImpl| to represent the output, given |output| and |opImpl|. - 1. Store a reference to |outputImpl| in |output|.{{MLOperand/[[operand]]}}. - 1. Connect |input|.{{MLOperand/[[operand]]}} as input to |opImpl|. - 1. Connect |output|.{{MLOperand/[[operand]]}} as output to |opImpl|. - 1. Return |output|. + 1. Let |elementLength| be 1. + 1. [=map/For each=] |dimension| of |desc|.{{MLOperandDescriptor/dimensions}}: + 1. Set |elementLength| to |elementLength| × |dimension|. + 1. Let |elementSize| be the [=element size=] of one of the {{ArrayBufferView}} types that matches |desc|.{{MLOperandDescriptor/dataType}} according to [this table](#appendices-mloperanddatatype-arraybufferview-compatibility). + 1. Return |elementLength| × |elementSize|.
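+
+ A non-normative sketch of the byte length calculation for a "float32" descriptor with dimensions [2, 3].
+
+    const desc = { dataType: 'float32', dimensions: [2, 3] };
+    // elementLength = 2 * 3 = 6
+    const elementLength = desc.dimensions.reduce((a, b) => a * b, 1);
+    // The element size of Float32Array is 4 bytes, so the byte length is 6 * 4 = 24.
+    const byteLength = elementLength * Float32Array.BYTES_PER_ELEMENT;
+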
@@ -5657,6 +6466,8 @@ Thanks to Kaustubha Govind and Chrome privacy reviewers for feedback and privacy Thanks to Jiewei Qian for Chromium implementation review and feedback. +Thanks to Dwayne Robinson for his work investigating and providing recommendation for transformer support, and for providing reviews of operator conformance and WPT implementation. +
 {
   "Models": {
@@ -5911,6 +6722,24 @@ Thanks to Jiewei Qian for Chromium implementation review and feedback.
     ],
     "date": "July 2016"
   },
+  "Layer-Normalization": {
+    "href": "https://arxiv.org/abs/1607.06450",
+    "title": "Layer Normalization",
+    "authors": [
+      "Jimmy Lei Ba",
+      "Jamie Ryan Kiros",
+      "Geoffrey E. Hinton"
+    ],
+    "date": "July 2016"
+  },
+  "Error-Function": {
+    "href": "https://books.google.com/books?id=2CAqsF-RebgC&pg=PA110",
+    "title": "Special functions of mathematics for engineers",
+    "authors": [
+      "Larry C. Andrews"
+    ],
+    "date": "1998"
+  },
   "FaceForensics++": {
     "href": "https://github.com/ondyari/FaceForensics",
     "title": "FaceForensics++",
@@ -5951,4 +6780,4 @@ Thanks to Jiewei Qian for Chromium implementation review and feedback.
     ]
   }
 }
-
+ \ No newline at end of file