using System.Collections.Generic;

// Constructor
m_blobDiff = new Blob<T>(cuda, log);

// LayerSetUp
base.LayerSetUp(colBottom, colTop);

// Reshape
base.Reshape(colBottom, colTop);
m_log.CHECK_EQ(colBottom[0].count(1), colBottom[1].count(1),
    "Inputs must have the same dimension.");
m_blobDiff.ReshapeLike(colBottom[0], colBottom[0].HalfSize);

// forward
int nCount = colBottom[0].count();
long hData = m_blobDiff.gpu_data;

m_cuda.sub(nCount, colBottom[0].gpu_data, colBottom[1].gpu_data, hData);

if (m_blobDiff.HalfSize)
{
    // ...
}

T fDot = m_cuda.dot(nCount, hData, hData);
double dfLoss = convertD(fDot) / colBottom[0].num / 2.0;

// backward
for (int i = 0; i < 2; i++)
{
    if (rgbPropagateDown[i])
    {
        double dfSign = (i == 0) ? 1 : -1;
        double dfTopDiff = convertD(colTop[0].GetDiff(0));
        double dfAlpha = dfSign * dfTopDiff / colBottom[i].num;
        int nCount = colBottom[i].count();

        m_cuda.axpby(nCount, convert(dfAlpha), m_blobDiff.gpu_data,
            m_tZero, colBottom[i].mutable_gpu_diff);
    }
}
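The forward and backward fragments above compute the standard Euclidean loss, E = (1/2N) Σ ||x1 - x2||², and its gradient ±(x1 - x2) · (top diff) / N, where N is the batch size. As a minimal NumPy sketch of the same arithmetic (the function names are illustrative stand-ins for the GPU calls, not part of MyCaffe):

```python
import numpy as np

def euclidean_loss_forward(x1, x2):
    """Mirrors forward(): diff = x1 - x2, loss = dot(diff, diff) / num / 2."""
    diff = x1 - x2                                          # m_cuda.sub(...)
    num = x1.shape[0]                                       # colBottom[0].num
    loss = float(diff.ravel() @ diff.ravel()) / num / 2.0   # m_cuda.dot(...)
    return loss, diff

def euclidean_loss_backward(diff, top_diff, i):
    """Mirrors backward(): bottom_diff = sign * top_diff / num * diff."""
    sign = 1.0 if i == 0 else -1.0                          # dfSign
    num = diff.shape[0]
    return (sign * top_diff / num) * diff                   # axpby with beta = 0

x1 = np.array([[1.0, 2.0], [3.0, 4.0]])
x2 = np.zeros((2, 2))
loss, diff = euclidean_loss_forward(x1, x2)     # loss == 30 / 2 / 2 == 7.5
grad0 = euclidean_loss_backward(diff, 1.0, 0)   # [[0.5, 1.0], [1.5, 2.0]]
```

Note that the gradient for the second input (i == 1) is simply the negation of the first, which is why the layer can backpropagate to both bottom blobs.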
The Log class provides general output in text form.
void CHECK_EQ(double df1, double df2, string str)
Test whether one number is equal to another.
The BlobCollection contains a list of Blobs.
void SetData(double df)
Set all blob data to the value specified.
The Blob is the main holder of data that moves through the Layers of the Net.
The CudaDnn object is the main interface to the Low-Level Cuda C++ DLL.
The EuclideanLossLayer computes the Euclidean (L2) loss for real-valued regression tasks.
override void Reshape(BlobCollection<T> colBottom, BlobCollection<T> colTop)
Reshape the bottom (input) and top (output) blobs.
override void backward(BlobCollection<T> colTop, List<bool> rgbPropagateDown, BlobCollection<T> colBottom)
Computes the Euclidean error gradient w.r.t. the inputs.
override void forward(BlobCollection<T> colBottom, BlobCollection<T> colTop)
Forward computation.
override bool AllowForceBackward(int nBottomIdx)
Unlike most loss layers, in the EuclideanLossLayer we can backpropagate to both inputs – override to return true and always allow force_backward.
EuclideanLossLayer(CudaDnn<T> cuda, Log log, LayerParameter p)
The EuclideanLossLayer constructor.
override void dispose()
Releases all GPU and host resources used by the Layer.
override void LayerSetUp(BlobCollection<T> colBottom, BlobCollection<T> colTop)
Setup the layer.
Log m_log
Specifies the Log for output.
long convert_to_full(int nCount, long hMem)
Convert half memory to full memory.
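When use_halfsize is enabled, blob memory is stored as 16-bit floats and widened to full precision before precision-sensitive operations such as the dot product in forward(). A hedged NumPy analogue of that widening step (stand-in only, not the MyCaffe API; the real convert_to_full operates on GPU memory handles):

```python
import numpy as np

def convert_to_full(h_mem):
    """Widen half-sized (float16) storage to full (float32) precision."""
    return h_mem.astype(np.float32)

h_mem = np.array([0.1, 0.2, 0.3], dtype=np.float16)  # half-sized storage
f_mem = convert_to_full(h_mem)                       # full-sized copy
```

Half-sized storage halves the memory footprint (2 bytes per element instead of 4) at the cost of reduced precision, which is why accumulations like the loss dot product are done at full precision.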
LayerParameter m_param
Specifies the LayerParameter describing the Layer.
void convert(BlobCollection<T> col)
Convert a collection of blobs from / to half size.
T m_tZero
Specifies a generic type equal to 0.0.
bool m_bConvertTopOnBwd
Specifies whether or not to convert the top on the backward pass when using half sized memory (typically ...).
bool m_bUseHalfSize
Specifies that the half size of the top (if any) should be converted to the base size.
double convertD(T df)
Converts a generic to a double value.
CudaDnn<T> m_cuda
Specifies the CudaDnn connection to Cuda.
LayerParameter.LayerType m_type
Specifies the Layer type.
The LossLayer provides an interface for Layers that take two blobs as input – usually (1) predictions and (2) ground-truth labels – and output a singleton blob representing the loss.
Specifies the base parameter for all layers.
string name
Specifies the name of this LayerParameter.
bool use_halfsize
Specifies whether or not to use half sized memory.
LayerType
Specifies the layer type.
The MyCaffe.basecode contains all generic types used throughout MyCaffe.
The MyCaffe.common namespace contains common MyCaffe classes.
The MyCaffe.layers namespace contains all layers that have a solidified code base.
The MyCaffe.param namespace contains parameters used to create models.
The MyCaffe namespace contains the main body of MyCaffe code that closely tracks the C++ Caffe open-source project.