using System.Collections.Generic;

// Per-input Embed layers, one per categorical variable.
List<Layer<T>> m_rgEmbLayers = new List<Layer<T>>();

// LayerSetUp: compute the per-input shape { nDim, nSpatialDim }.
// nNumInput is the configured number of categorical inputs.
int nOffset = (colBottom[0].num_axes == 2) ? 1 : 2;
int nDim = colBottom[0].count(0, nOffset);
int nCount = colBottom[0].count(nOffset);
int nSpatialDim = nCount / nNumInput;
List<int> rgShape = new List<int>() { nDim, nSpatialDim };
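The shape arithmetic above can be sketched in plain Python (a hypothetical analogue: `count(start, end)` mirrors `Blob.count`, the product of the dimensions in `[start, end)`, and the `(8, 30, 4)` bottom shape is an illustrative assumption, not taken from the source):

```python
from functools import reduce
from operator import mul

def count(shape, start=0, end=None):
    """Product of dimensions shape[start:end], as Blob.count() computes."""
    end = len(shape) if end is None else end
    return reduce(mul, shape[start:end], 1)

shape = (8, 30, 4)        # (batch, time steps, num categorical inputs) -- assumed
num_input = 4

offset = 1 if len(shape) == 2 else 2
dim = count(shape, 0, offset)                    # 8 * 30 = 240
spatial_dim = count(shape, offset) // num_input  # 4 / 4 = 1

print(dim, spatial_dim)   # prints: 240 1
```

So with a 3-axis bottom, each categorical input is handed to its Embed layer with shape `{ 240, 1 }`.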
// Seed the single-element bottom/top collections used to drive each Embed layer.
m_rgEmbBtm.Add(blobBtm);
m_rgEmbTop.Add(colTop[0]);

// Create one Embed layer per categorical input.
for (int i = 0; i < nNumInput; i++)
{
    m_rgEmbBtm[0] = m_rgBtm[i];
    m_rgEmbTop[0] = colTop[i];
    // ... (elided: create and set up the Embed layer for input i)
    m_rgEmbLayers.Add(emb_layer);
}
// Reshape: forward the reshape to each per-input Embed layer.
for (int i = 0; i < nNumInput; i++)
{
    m_rgEmbBtm[0] = m_rgBtm[i];
    m_rgEmbTop[0] = colTop[i];
    m_rgEmbLayers[i].Reshape(m_rgEmbBtm, m_rgEmbTop);
}
// Forward: copy channel i of the packed bottom blob into its staging
// blob, then run that input's Embed layer.
for (int i = 0; i < nNumInput; i++)
{
    int nCount = m_rgBtm[i].count();
    m_cuda.channel_copy(nCount, nCount, 1, nNumInput, 1, i,
                        colBottom[0].gpu_data, m_rgBtm[i].mutable_gpu_data, DIR.FWD);

    m_rgEmbBtm[0] = m_rgBtm[i];
    m_rgEmbTop[0] = colTop[i];
    m_rgEmbLayers[i].Forward(m_rgEmbBtm, m_rgEmbTop);
}
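The `channel_copy(..., i, ..., DIR.FWD)` call above extracts categorical input `i` from the packed bottom blob. A NumPy analogue of that extraction (the shapes and `split_input` helper are illustrative assumptions, not MyCaffe API):

```python
import numpy as np

num_input = 4
# Packed bottom blob: last axis holds one column per categorical variable.
bottom = np.arange(8 * 30 * num_input).reshape(8, 30, num_input)

def split_input(bottom, i):
    """Copy channel i (one categorical variable) into its own (N*T, 1) blob,
    mirroring what channel_copy with DIR.FWD does on the GPU."""
    return bottom[:, :, i].reshape(-1, 1)

btm0 = split_input(bottom, 0)
assert btm0.shape == (240, 1)
assert (btm0.reshape(8, 30) == bottom[:, :, 0]).all()
```

Each extracted column then flows through its own Embed layer, so the per-input weights stay independent.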
// Backward: run each Embed layer's backward pass on its own top/bottom pair.
for (int i = 0; i < nNumInput; i++)
{
    m_rgEmbBtm[0] = m_rgBtm[i];
    m_rgEmbTop[0] = colTop[i];
    m_rgEmbLayers[i].Backward(m_rgEmbTop, rgbPropagateDown, m_rgEmbBtm);
}
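Putting the pieces together, the layer's forward pass amounts to a per-column embedding lookup. A minimal NumPy sketch of that overall computation (all names, sizes, and the plain weight-table lookup are assumptions standing in for the Embed layers; the real layer also supports an optional bias via bias_term):

```python
import numpy as np

rng = np.random.default_rng(0)
num_input, input_dim, num_output = 3, 10, 5   # assumed parameter values

# Packed bottom: integer category indices in [0, input_dim).
bottom = rng.integers(0, input_dim, size=(8, 30, num_input))

# One weight table per categorical input, as with the per-input Embed layers.
tables = [rng.standard_normal((input_dim, num_output)) for _ in range(num_input)]

tops = []
for i in range(num_input):
    idx = bottom[:, :, i].reshape(-1)   # the channel_copy step
    tops.append(tables[i][idx])         # the Embed forward step

# One top blob per categorical input, each (N*T, num_output).
assert all(t.shape == (240, num_output) for t in tops)
```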
The Log class provides general output in text form.
void CHECK_EQ(double df1, double df2, string str)
Test whether one number is equal to another.
The BlobCollection contains a list of Blobs.
void Add(Blob<T> b)
Add a new Blob to the collection.
int Count
Returns the number of items in the collection.
void Clear(bool bDispose=false)
Remove all items from the collection.
void Reshape(int[] rgShape)
Reshapes all blobs in the collection to the given shape.
The Blob is the main holder of data that moves through the Layers of the Net.
void Reshape(int nNum, int nChannels, int nHeight, int nWidth, bool? bUseHalfSize=null)
DEPRECATED; use the Reshape(int[] rgShape) overload instead.
The CudaDnn object is the main interface to the Low-Level Cuda C++ DLL.
An interface for the units of computation which can be composed into a Net.
Log m_log
Specifies the Log for output.
LayerParameter m_param
Specifies the LayerParameter describing the Layer.
abstract void LayerSetUp(BlobCollection<T> colBottom, BlobCollection<T> colTop)
Performs Layer-specific setup. Derived layers should override this function as well as the Reshape function.
CudaDnn<T> m_cuda
Specifies the CudaDnn connection to Cuda.
static Layer<T> Create(CudaDnn<T> cuda, Log log, LayerParameter p, CancelEvent evtCancel, IXDatabaseBase db=null, TransferInput trxinput=null)
Create a new Layer based on the LayerParameter.
LayerParameter.LayerType m_type
Specifies the Layer type.
BlobCollection<T> blobs
Returns the collection of learnable parameter Blobs for the Layer.
LayerParameter convertLayerParam(LayerParameter pChild, LayerParameter pParent)
Called to convert a parent LayerParameterEx, used in blob sharing, with a child layer parameter.
uint num_output
Specifies the number of outputs for the layer.
uint input_dim
Specifies the input given as integers to be interpreted as one-hot vector indices with dimension num_input.
bool bias_term
Specifies whether to use a bias term or not.
Specifies the base parameter for all layers.
string name
Specifies the name of this LayerParameter.
CategoricalTransformationParameter categorical_trans_param
Returns the parameter set when initialized with LayerType.CATEGORICAL_TRANS
EmbedParameter embed_param
Returns the parameter set when initialized with LayerType.EMBED
LayerType
Specifies the layer type.
The MyCaffe.basecode namespace contains all generic types used throughout MyCaffe.
The MyCaffe.common namespace contains common MyCaffe classes.
DIR
Defines the direction of data flow.
The MyCaffe.layers.tft namespace contains all TFT related layers.
The MyCaffe.param namespace contains parameters used to create models.
The MyCaffe namespace contains the main body of MyCaffe code that closely tracks the C++ Caffe open-source project.