Package inference

Class ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.Builder

java.lang.Object
com.google.protobuf.AbstractMessageLite.Builder
com.google.protobuf.AbstractMessage.Builder<ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.Builder>
com.google.protobuf.GeneratedMessageV3.Builder<ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.Builder>
inference.ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.Builder
All Implemented Interfaces:
com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBufferOrBuilder, Cloneable
Enclosing class:
ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer

public static final class ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.Builder> implements ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBufferOrBuilder
Specify whether to use a pinned memory buffer when transferring data
between non-pinned system memory and GPU memory. Using a pinned
memory buffer for system-to/from-GPU transfers typically provides
increased performance. For example, in the common use case where the
request provides inputs and delivers outputs via non-pinned system
memory, if the model instance accepts GPU I/Os, the inputs will be
processed by two copies: from non-pinned system memory to pinned
memory, and from pinned memory to GPU memory. Similarly, pinned
memory will be used for delivering the outputs.
 
Protobuf type inference.ModelOptimizationPolicy.PinnedMemoryBuffer
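A minimal sketch of typical usage of this generated builder, assuming the message carries a single boolean `enable` field as in Triton's model_config.proto (the field name is an assumption; consult the .proto definition):

```java
import inference.ModelConfigOuterClass;

public class PinnedMemoryBufferExample {
    public static void main(String[] args) {
        // Build a PinnedMemoryBuffer message via the generated builder.
        // setEnable is assumed from the proto's "bool enable" field.
        ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer buffer =
            ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer.newBuilder()
                .setEnable(true) // request pinned staging buffers for GPU transfers
                .build();

        // A builder can also be seeded from an existing message and modified,
        // following the standard protobuf-java toBuilder() pattern.
        ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer disabled =
            buffer.toBuilder()
                .setEnable(false)
                .build();

        System.out.println(buffer.getEnable());
        System.out.println(disabled.getEnable());
    }
}
```

Like all protobuf-java messages, the built instances are immutable; any change requires going back through a builder.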