using System.Collections.Generic;
public event EventHandler<LossArgs> OnLoss;
if (m_lossArgs == null)
Array.Copy(rgData, m_lossArgs.Data, rgData.Length);
double dfNormalizer = 0.0;

switch (normalization_mode)
{
    case LossParameter.NormalizationMode.FULL:
        m_log.CHECK_GT(nInnerNum, 0, "The inner number must be set.");
        m_log.CHECK_GT(nOuterNum, 0, "The outer number must be set.");
        dfNormalizer = nOuterNum * nInnerNum;
        break;

    case LossParameter.NormalizationMode.VALID:
        if (nValidCount == -1)
        {
            m_log.CHECK_GT(nInnerNum, 0, "The inner number must be set.");
            m_log.CHECK_GT(nOuterNum, 0, "The outer number must be set.");
            dfNormalizer = nOuterNum * nInnerNum;
        }
        else
        {
            dfNormalizer = nValidCount;
        }
        break;

    case LossParameter.NormalizationMode.BATCH_SIZE:
        m_log.CHECK_GT(nOuterNum, 0, "The outer number must be set.");
        dfNormalizer = nOuterNum;
        break;

    default:
        m_log.FAIL("Unknown normalization mode " + normalization_mode.ToString());
        break;
}

return Math.Max(dfNormalizer, 1.0);
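To make the branch arithmetic concrete, here is a minimal standalone sketch of the normalizer computation. The local `NormalizationMode` enum is a stand-in for `LossParameter.NormalizationMode`, and the log checks are replaced with a plain exception, so this is an illustration rather than the MyCaffe implementation.

```csharp
using System;

enum NormalizationMode { FULL, VALID, BATCH_SIZE }

static class NormalizerDemo
{
    public static double GetNormalizer(NormalizationMode mode, int nOuterNum, int nInnerNum, int nValidCount)
    {
        double dfNormalizer;
        switch (mode)
        {
            case NormalizationMode.FULL:
                dfNormalizer = nOuterNum * nInnerNum;   // every element counts
                break;
            case NormalizationMode.VALID:
                // Fall back to FULL behavior when no valid count was accumulated.
                dfNormalizer = (nValidCount == -1) ? nOuterNum * nInnerNum : nValidCount;
                break;
            case NormalizationMode.BATCH_SIZE:
                dfNormalizer = nOuterNum;               // one per batch item
                break;
            default:
                throw new ArgumentOutOfRangeException(nameof(mode));
        }
        // Clamp so a degenerate batch never divides the loss by zero.
        return Math.Max(dfNormalizer, 1.0);
    }

    static void Main()
    {
        // Batch of 32 items, 10 spatial positions each, 200 valid labels.
        Console.WriteLine(GetNormalizer(NormalizationMode.FULL, 32, 10, -1));       // 320
        Console.WriteLine(GetNormalizer(NormalizationMode.VALID, 32, 10, 200));     // 200
        Console.WriteLine(GetNormalizer(NormalizationMode.BATCH_SIZE, 32, 10, -1)); // 32
    }
}
```

The final clamp to 1.0 is what keeps the returned normalizer safe to divide by even when all counts are zero.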
m_log.CHECK_EQ(colBottom[0].shape(0), colBottom[1].shape(0),
    "The data and label should have the same first dimension. Data has shape '" + colBottom[0].shape_string +
    "' and label has shape '" + colBottom[1].shape_string + "'.");
List<int> rgLossShape = new List<int>();    // the loss is a scalar: 0 axes
colTop[0].Reshape(rgLossShape);
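An empty shape list describes a 0-axis (scalar) blob. The following sketch (a stand-in, not the actual Blob implementation) shows why: a blob's `count()` is the product over its axes, and an empty product is 1.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ShapeDemo
{
    // Product over all axes; the empty shape yields a single scalar element.
    public static int Count(List<int> rgShape) => rgShape.Aggregate(1, (nAcc, nDim) => nAcc * nDim);

    static void Main()
    {
        Console.WriteLine(Count(new List<int>()));          // 1: a 0-axis (scalar) loss blob
        Console.WriteLine(Count(new List<int> { 32, 10 })); // 320
    }
}
```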
The Log class provides general output in text form.
void CHECK(bool b, string str)
Test a flag for true.
void FAIL(string str)
Causes a failure which throws an exception with the descriptive text.
void CHECK_EQ(double df1, double df2, string str)
Test whether one number is equal to another.
void CHECK_GT(double df1, double df2, string str)
Test whether one number is greater than another.
The LossArgs contains the loss values for a given batch.
float[] Data
Specifies the loss values for a given batch.
The BlobCollection contains a list of Blobs.
void Reshape(int[] rgShape)
Reshapes all blobs in the collection to the given shape.
The Blob is the main holder of data that moves through the Layers of the Net.
T[] mutable_cpu_data
Get data from the GPU and bring it over to the host, or set data from the host and send it over to the GPU.
List< int > shape()
Returns an array where each element contains the shape of an axis of the Blob.
int count()
Returns the total number of items in the Blob.
The CudaDnn object is the main interface to the Low-Level Cuda C++ DLL.
An interface for the units of computation which can be composed into a Net.
Log m_log
Specifies the Log for output.
LayerParameter m_param
Specifies the LayerParameter describing the Layer.
float convertF(T df)
Converts a generic to a float value.
LayerParameter.? LayerType m_parentLayerType
Specifies the layer type of the parent.
LayerParameter.LayerType m_type
Specifies the Layer type.
The LossLayer provides an interface for Layers that take two blobs as input, usually (1) predictions and (2) ground-truth labels, and output a single loss value.
const double kLOG_THRESHOLD
Specifies the minimum threshold for loss values.
double GetNormalizer(LossParameter.NormalizationMode normalization_mode, int nOuterNum, int nInnerNum, int nValidCount)
Returns the normalizer used to normalize the loss.
override bool AutoTopBlobs
For convenience and backwards compatibility, instruct the Net to automatically allocate a single top Blob for loss layers.
EventHandler< LossArgs > OnLoss
Specifies the loss event called on each learning cycle.
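The event signature and the `LossArgs.Data` property above are all a subscriber needs. The sketch below shows the wiring pattern; `FakeLossLayer` is a stand-in so the example is self-contained, and subscribing to the real `LossLayer<T>.OnLoss` looks the same.

```csharp
using System;
using System.Linq;

class LossArgs : EventArgs
{
    public float[] Data;            // per-item loss values for the batch
}

class FakeLossLayer
{
    public event EventHandler<LossArgs> OnLoss;

    // Stands in for the layer raising OnLoss on each learning cycle.
    public void RaiseLoss(float[] rgLoss)
    {
        OnLoss?.Invoke(this, new LossArgs { Data = rgLoss });
    }
}

class Program
{
    static void Main()
    {
        var layer = new FakeLossLayer();
        // Subscribe exactly as you would on the real layer.
        layer.OnLoss += (sender, e) => Console.WriteLine("Mean loss: " + e.Data.Average());
        layer.RaiseLoss(new float[] { 0.5f, 1.5f });  // prints "Mean loss: 1"
    }
}
```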
bool m_bIgnoreLabels
Set to true when labels are to be ignored.
override int ExactNumBottomBlobs
Returns the exact number of required bottom (input) Blobs: prediction, label.
override void Reshape(BlobCollection< T > colBottom, BlobCollection< T > colTop)
Reshape the bottom (input) and top (output) blobs.
int m_nOuterNum
Specifies the outer num, such as the batch count (e.g. count(0, axis)). Each derivative class must set this value.
int m_nInnerNum
Specifies the inner num, such as the channel + height + width (e.g. count(axis + 1)).
override int ExactNumTopBlobs
Returns the exact number of required top (output) Blobs: loss
virtual double get_normalizer(LossParameter.NormalizationMode normalization_mode, int nValidCount)
Returns the normalizer used to normalize the loss.
override void LayerSetUp(BlobCollection< T > colBottom, BlobCollection< T > colTop)
Setup the layer.
LossLayer(CudaDnn< T > cuda, Log log, LayerParameter p)
The LossLayer constructor.
override bool AllowForceBackward(int nBottomIdx)
We usually cannot backpropagate to the labels; ignore force_backward for these inputs.
LossParameter.NormalizationMode m_normalization
Specifies the normalization mode used to normalize the loss.
void callLossEvent(Blob< T > blob)
This method is called by the loss layer to pass the blob data to the OnLoss event (if implemented)
Specifies the base parameter for all layers.
List< double > loss_weight
Specifies the loss weight.
LayerType
Specifies the layer type.
LossParameter loss_param
Returns the parameter set when initialized with LayerType.LOSS
Stores the parameters used by loss layers.
NormalizationMode
How to normalize the loss for loss layers that aggregate across batches, spatial dimensions, or other dimensions.
bool normalize
DEPRECATED. Ignored if normalization is specified. If normalization is not specified, setting this to false is equivalent to normalization = BATCH_SIZE.
NormalizationMode? normalization
Specifies the normalization mode (default = VALID).
The MyCaffe.basecode contains all generic types used throughout MyCaffe.
The MyCaffe.common namespace contains common MyCaffe classes.
BLOB_TYPE
Defines the type of data held by a given Blob.
The MyCaffe.layers namespace contains all layers that have a solidified code base.
The MyCaffe.param namespace contains parameters used to create models.
The MyCaffe namespace contains the main body of MyCaffe code that closely tracks the C++ Caffe open-source project.