Package inference

Class ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer

java.lang.Object
com.google.protobuf.AbstractMessageLite
com.google.protobuf.AbstractMessage
com.google.protobuf.GeneratedMessageV3
inference.ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer
All Implemented Interfaces:
com.google.protobuf.Message, com.google.protobuf.MessageLite, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBufferOrBuilder, Serializable
Enclosing class:
ModelConfigOuterClass.ModelOptimizationPolicy

public static final class ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBuffer extends com.google.protobuf.GeneratedMessageV3 implements ModelConfigOuterClass.ModelOptimizationPolicy.PinnedMemoryBufferOrBuilder
Specifies whether to use a pinned memory buffer when transferring data between non-pinned system memory and GPU memory. Using a pinned memory buffer for system-to/from-GPU transfers typically improves performance. For example, in the common case where a request provides inputs and receives outputs via non-pinned system memory, if the model instance accepts GPU I/O the inputs are transferred in two copies: from non-pinned system memory to pinned memory, and from pinned memory to GPU memory. Pinned memory is used in the same way when delivering the outputs.
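As a sketch of where this message appears in practice (assuming, as in Triton's model_config.proto, that ModelOptimizationPolicy exposes input_pinned_memory and output_pinned_memory fields of this type, each with a boolean enable field), a model config.pbtxt fragment enabling pinned memory buffers for both input and output transfers might look like:

```
# Hypothetical config.pbtxt fragment; field names assume Triton's
# model_config.proto layout for ModelOptimizationPolicy.
optimization {
  input_pinned_memory { enable: true }
  output_pinned_memory { enable: true }
}
```

In the Java API documented here, the corresponding values would be read from a parsed ModelConfig via the generated accessors on this class.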
 
Protobuf type inference.ModelOptimizationPolicy.PinnedMemoryBuffer
See Also: