ParallelReverseAutoDiff 1.0.5
See the version list below for details.
dotnet add package ParallelReverseAutoDiff --version 1.0.5
NuGet\Install-Package ParallelReverseAutoDiff -Version 1.0.5
<PackageReference Include="ParallelReverseAutoDiff" Version="1.0.5" />
paket add ParallelReverseAutoDiff --version 1.0.5
#r "nuget: ParallelReverseAutoDiff, 1.0.5"
// Install ParallelReverseAutoDiff as a Cake Addin
#addin nuget:?package=ParallelReverseAutoDiff&version=1.0.5

// Install ParallelReverseAutoDiff as a Cake Tool
#tool nuget:?package=ParallelReverseAutoDiff&version=1.0.5
ParallelReverseAutoDiff
Parallel Reverse Mode Automatic Differentiation in C#
ParallelReverseAutoDiff is a thread-safe C# library for reverse-mode automatic differentiation, optimized for parallel computation. It uses semaphores and locks to coordinate between threads, ensuring accuracy during gradient accumulation. Each operation in the library is implemented as a node with a forward and a backward function, facilitating efficient calculation of derivatives. A distinctive aspect of this library is its use of the visitor pattern: a specialized neural network visitor traverses the network's nodes across different threads and handles gradient accumulation on nodes shared by multiple threads. This design allows computations to be parallelized while maintaining consistency and avoiding race conditions. The result is an efficient, scalable automatic differentiation solution, well suited to machine learning applications and neural network training.
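Conceptually, each operation node pairs a forward computation with a backward rule. The sketch below illustrates the idea only (using jagged double arrays, as in the backward-pass snippet later on); it is not the library's actual interface:

// Conceptual sketch, not the library's API: a node computes its output in
// Forward and, in Backward, turns the gradient arriving from its consumers
// into one gradient per input (null where an input needs no gradient).
public abstract class OperationNodeSketch
{
    public abstract double[][] Forward(params double[][] inputs);
    public abstract double[][][] Backward(double[][] upstreamGradient);
}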
Supported Operations
AmplifiedSigmoidOperation - Used for gradient amplification
ApplyDropoutOperation
HadamardProductOperation
LayerNormalizationOperation
LeakyReLUOperation
MatrixAddOperation
MatrixAddThreeOperation
MatrixMultiplyOperation
MatrixMultiplyScalarOperation
MatrixTransposeOperation
ReLUOperation
SigmoidOperation
SoftmaxOperation
StretchedSigmoidOperation
TanhOperation
Usage
Create architecture JSON file
Here is an example:
{
  "timeSteps": [
    {
      "startOperations": [
        {
          "id": "projectedInput",
          "description": "Multiply the input with the weight matrix",
          "type": "MatrixMultiplyOperation",
          "inputs": [ "We", "inputSequence[t]" ],
          "gradientResultTo": [ "dWe", null ]
        },
        {
          "id": "embeddedInput",
          "description": "Add the bias",
          "type": "MatrixAddOperation",
          "inputs": [ "projectedInput", "be" ],
          "gradientResultTo": [ null, "dbe" ]
        }
      ],
      "layers": [
        {
          "operations": [
            {
              "id": "wf_currentInput",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wf[layerIndex]", "currentInput" ],
              "gradientResultTo": [ "dWf[layerIndex]", null ]
            },
            {
              "id": "uf_previousHiddenState",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Uf[layerIndex]", "previousHiddenState" ],
              "gradientResultTo": [ "dUf[layerIndex]", null ]
            },
            {
              "id": "f_add",
              "type": "MatrixAddThreeOperation",
              "inputs": [ "wf_currentInput", "uf_previousHiddenState", "bf[layerIndex]" ],
              "gradientResultTo": [ null, null, "dbf[layerIndex]" ]
            },
            {
              "id": "intermediate_f_1",
              "description": "Compute the forget gate",
              "type": "MatrixTransposeOperation",
              "inputs": [ "f_add" ]
            },
            {
              "id": "intermediate_f_2",
              "description": "Compute the forget gate",
              "type": "LayerNormalizationOperation",
              "inputs": [ "intermediate_f_1" ]
            },
            {
              "id": "intermediate_f_3",
              "description": "Compute the forget gate",
              "type": "MatrixTransposeOperation",
              "inputs": [ "intermediate_f_2" ]
            },
            {
              "id": "f",
              "description": "Compute the forget gate",
              "type": "AmplifiedSigmoidOperation",
              "inputs": [ "intermediate_f_3" ],
              "setResultTo": "f[t][layerIndex]"
            },
            {
              "id": "wi_currentInput",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wi[layerIndex]", "currentInput" ],
              "gradientResultTo": [ "dWi[layerIndex]", null ]
            },
            {
              "id": "ui_previousHiddenState",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Ui[layerIndex]", "previousHiddenState" ],
              "gradientResultTo": [ "dUi[layerIndex]", null ]
            },
            {
              "id": "i_add",
              "type": "MatrixAddThreeOperation",
              "inputs": [ "wi_currentInput", "ui_previousHiddenState", "bi[layerIndex]" ],
              "gradientResultTo": [ null, null, "dbi[layerIndex]" ]
            },
            {
              "id": "intermediate_i_1",
              "description": "Compute the input gate",
              "type": "MatrixTransposeOperation",
              "inputs": [ "i_add" ]
            },
            {
              "id": "intermediate_i_2",
              "description": "Compute the input gate",
              "type": "LayerNormalizationOperation",
              "inputs": [ "intermediate_i_1" ]
            },
            {
              "id": "intermediate_i_3",
              "description": "Compute the input gate",
              "type": "MatrixTransposeOperation",
              "inputs": [ "intermediate_i_2" ]
            },
            {
              "id": "i",
              "description": "Compute the input gate",
              "type": "AmplifiedSigmoidOperation",
              "inputs": [ "intermediate_i_3" ],
              "setResultTo": "i[t][layerIndex]"
            },
            {
              "id": "wc_currentInput",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wc[layerIndex]", "currentInput" ],
              "gradientResultTo": [ "dWc[layerIndex]", null ]
            },
            {
              "id": "uc_previousHiddenState",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Uc[layerIndex]", "previousHiddenState" ],
              "gradientResultTo": [ "dUc[layerIndex]", null ]
            },
            {
              "id": "cHat_add",
              "type": "MatrixAddThreeOperation",
              "inputs": [ "wc_currentInput", "uc_previousHiddenState", "bc[layerIndex]" ],
              "gradientResultTo": [ null, null, "dbc[layerIndex]" ]
            },
            {
              "id": "intermediate_cHat_1",
              "description": "Compute the candidate memory cell state",
              "type": "MatrixTransposeOperation",
              "inputs": [ "cHat_add" ]
            },
            {
              "id": "intermediate_cHat_2",
              "description": "Compute the candidate memory cell state",
              "type": "LayerNormalizationOperation",
              "inputs": [ "intermediate_cHat_1" ]
            },
            {
              "id": "intermediate_cHat_3",
              "description": "Compute the candidate memory cell state",
              "type": "MatrixTransposeOperation",
              "inputs": [ "intermediate_cHat_2" ]
            },
            {
              "id": "cHat",
              "description": "Compute the candidate memory cell state",
              "type": "TanhOperation",
              "inputs": [ "intermediate_cHat_3" ],
              "setResultTo": "cHat[t][layerIndex]"
            },
            {
              "id": "f_previousMemoryCellState",
              "type": "HadamardProductOperation",
              "inputs": [ "f[t][layerIndex]", "previousMemoryCellState" ]
            },
            {
              "id": "i_cHat",
              "type": "HadamardProductOperation",
              "inputs": [ "i[t][layerIndex]", "cHat[t][layerIndex]" ]
            },
            {
              "id": "newC",
              "description": "Compute the memory cell state",
              "type": "MatrixAddOperation",
              "inputs": [ "f_previousMemoryCellState", "i_cHat" ]
            },
            {
              "id": "newCTransposed",
              "type": "MatrixTransposeOperation",
              "inputs": [ "newC" ]
            },
            {
              "id": "newCNormalized",
              "type": "LayerNormalizationOperation",
              "inputs": [ "newCTransposed" ]
            },
            {
              "id": "c",
              "type": "MatrixTransposeOperation",
              "inputs": [ "newCNormalized" ],
              "setResultTo": "c[t][layerIndex]"
            },
            {
              "id": "wo_currentInput",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wo[layerIndex]", "currentInput" ],
              "gradientResultTo": [ "dWo[layerIndex]", null ]
            },
            {
              "id": "uo_previousHiddenState",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Uo[layerIndex]", "previousHiddenState" ],
              "gradientResultTo": [ "dUo[layerIndex]", null ]
            },
            {
              "id": "o_add",
              "type": "MatrixAddThreeOperation",
              "inputs": [ "wo_currentInput", "uo_previousHiddenState", "bo[layerIndex]" ],
              "gradientResultTo": [ null, null, "dbo[layerIndex]" ]
            },
            {
              "id": "o",
              "description": "Compute the output gate",
              "type": "LeakyReLUOperation",
              "inputs": [ "o_add" ],
              "setResultTo": "o[t][layerIndex]"
            },
            {
              "id": "c_tanh",
              "type": "TanhOperation",
              "inputs": [ "c" ]
            },
            {
              "id": "newH",
              "type": "HadamardProductOperation",
              "inputs": [ "o[t][layerIndex]", "c_tanh" ]
            },
            {
              "id": "keys",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wk[layerIndex]", "embeddedInput" ],
              "gradientResultTo": [ "dWk[layerIndex]", null ]
            },
            {
              "id": "queries",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wq[layerIndex]", "previousHiddenState" ],
              "gradientResultTo": [ "dWq[layerIndex]", null ]
            },
            {
              "id": "values",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "Wv[layerIndex]", "embeddedInput" ],
              "gradientResultTo": [ "dWv[layerIndex]", null ]
            },
            {
              "id": "queriesTranspose",
              "type": "MatrixTransposeOperation",
              "inputs": [ "queries" ]
            },
            {
              "id": "dotProduct",
              "description": "Compute the dot product of the queries and keys",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "keys", "queriesTranspose" ]
            },
            {
              "id": "scaledDotProduct",
              "description": "Scale the dot product",
              "type": "MatrixMultiplyScalarOperation",
              "inputs": [ "dotProduct", "scaledDotProductScalar" ]
            },
            {
              "id": "scaledDotProductTranspose",
              "type": "MatrixTransposeOperation",
              "inputs": [ "scaledDotProduct" ]
            },
            {
              "id": "attentionWeights",
              "type": "SoftmaxOperation",
              "inputs": [ "scaledDotProductTranspose" ]
            },
            {
              "id": "attentionOutput",
              "type": "MatrixMultiplyOperation",
              "inputs": [ "attentionWeights", "values" ]
            },
            {
              "id": "newHWithAttentionOutput",
              "type": "MatrixAddOperation",
              "inputs": [ "newH", "attentionOutput" ]
            },
            {
              "id": "newHWithAttentionOutputTranspose",
              "type": "MatrixTransposeOperation",
              "inputs": [ "newHWithAttentionOutput" ]
            },
            {
              "id": "normalizedNewH",
              "type": "LayerNormalizationOperation",
              "inputs": [ "newHWithAttentionOutputTranspose" ]
            },
            {
              "id": "h",
              "type": "MatrixTransposeOperation",
              "inputs": [ "normalizedNewH" ],
              "setResultTo": "h[t][layerIndex]"
            }
          ]
        }
      ],
      "endOperations": [
        {
          "id": "v_h",
          "type": "MatrixMultiplyOperation",
          "inputs": [ "V", "hFromCurrentTimeStepAndLastLayer" ],
          "gradientResultTo": [ "dV", null ]
        },
        {
          "id": "v_h_b",
          "type": "MatrixAddOperation",
          "inputs": [ "v_h", "b" ],
          "gradientResultTo": [ null, "db" ]
        },
        {
          "id": "output_t",
          "type": "AmplifiedSigmoidOperation",
          "inputs": [ "v_h_b" ],
          "setResultTo": "output[t]"
        }
      ]
    }
  ]
}
Instantiate the architecture
Use a JSON serialization library like Newtonsoft.Json to deserialize the JSON file to a JsonArchitecture object. In the file above, [t] refers to the current time step and [layerIndex] to the current layer.
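For example, with Newtonsoft.Json ("architecture.json" is a placeholder for your own file path; JsonArchitecture is the architecture type named above):

using Newtonsoft.Json;
using System.IO;

// Read the architecture file and deserialize it into the object model.
var json = File.ReadAllText("architecture.json");
var architecture = JsonConvert.DeserializeObject<JsonArchitecture>(json);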
Instantiate and populate the operations
Instantiate each operation based on its type, then set the Next property of each operation to the next operation in the forward pass.
For each operation, add the operations that come immediately before it in the computation graph to its BackwardAdjacentOperations property; this property is just a list of operations.
For each input of an operation, add the object that should receive the gradient to the GradientDestinations array; if an input has no gradient result, add null.
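A rough sketch of this wiring (Next, BackwardAdjacentOperations, and GradientDestinations come from the description above; forwardPassOrder and the Instantiate/Lookup helpers are hypothetical placeholders for your own setup code):

// Illustrative only: how operations are constructed and how predecessors and
// gradient destinations are looked up depends on your own network code.
IOperation previousOp = null;
foreach (var opInfo in forwardPassOrder) // operation descriptions in forward-pass order
{
    IOperation op = InstantiateOperation(opInfo); // e.g. new MatrixMultiplyOperation()
    if (previousOp != null)
    {
        previousOp.Next = op; // link the forward pass
    }
    // Operations immediately behind this one in the computation graph.
    op.BackwardAdjacentOperations.AddRange(LookupBackwardAdjacentOperations(opInfo));
    // One entry per input: the gradient object to accumulate into, or null.
    op.GradientDestinations = LookupGradientDestinations(opInfo);
    previousOp = op;
}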
Then populate the backward dependency counts by running the following code; it only has to be run once.
for (int t = numTimeSteps - 1; t >= 0; t--) // once per time step, if there are multiple
{
    IOperation backwardStartOperation = operationsMap[$"output_t_{t}"]; // the operation the backward pass starts from
    OperationGraphVisitor opVisitor = new OperationGraphVisitor(Guid.NewGuid().ToString(), backwardStartOperation, t);
    await opVisitor.TraverseAsync(); // sets the backward dependency counts
    await opVisitor.ResetVisitedCountsAsync(backwardStartOperation);
}
Run the forward pass
var op = startOperation; // the start operation
IOperation currOp = null;
do
{
    var parameters = LookupParameters(op); // look up the parameters for this operation
    op.OperationType.GetMethod("Forward").Invoke(op, parameters); // invoke the forward function via reflection
    if (op.ResultToName != null)
    {
        op.ResultTo(NameToValueFunc(op.ResultToName)); // send the result to the appropriate object
    }
    operationsMap[op.SpecificId] = op;
    currOp = op;
    if (op.HasNext)
    {
        op = op.Next;
    }
} while (currOp.Next != null);
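LookupParameters and NameToValueFunc above are user-supplied helpers, not part of the library. A minimal sketch of one possible implementation, assuming a dictionary of named values and a map from operation id to its declared input names (both hypothetical; requires System.Collections.Generic and System.Linq):

// Hypothetical plumbing: valuesByName maps an identifier from the JSON
// architecture (e.g. "Wf[layerIndex]") to its backing object, and
// inputNamesByOperationId maps an operation's id to its input names.
private readonly Dictionary<string, object> valuesByName = new Dictionary<string, object>();
private readonly Dictionary<string, string[]> inputNamesByOperationId = new Dictionary<string, string[]>();

private object NameToValueFunc(string name) => valuesByName[name];

private object[] LookupParameters(IOperation op) =>
    inputNamesByOperationId[op.SpecificId].Select(NameToValueFunc).ToArray();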
Run the backward pass utilizing inherent parallelization
for (int t = numTimeSteps - 1; t >= 0; t--)
{
    backwardStartOperation = operationsMap[$"output_t_{t}"];
    if (gradientOfLossWrtOutput[t][0] != 0.0d)
    {
        backwardStartOperation.BackwardInput = new double[][] { gradientOfLossWrtOutput[t] };
        OperationNeuralNetworkVisitor opVisitor = new OperationNeuralNetworkVisitor(Guid.NewGuid().ToString(), backwardStartOperation, t);
        await opVisitor.TraverseAsync(); // runs the backward functions, accumulating gradients across threads
        opVisitor.Reset();
        traverseCount++;
    }
}
Compatible and additional computed target framework versions:

Product | Versions
---|---
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed.
.NET Core | netcoreapp3.0 was computed. netcoreapp3.1 was computed.
.NET Standard | netstandard2.1 is compatible.
MonoAndroid | monoandroid was computed.
MonoMac | monomac was computed.
MonoTouch | monotouch was computed.
Tizen | tizen60 was computed.
Xamarin.iOS | xamarinios was computed.
Xamarin.Mac | xamarinmac was computed.
Xamarin.TVOS | xamarintvos was computed.
Xamarin.WatchOS | xamarinwatchos was computed.
Dependencies
.NETStandard 2.1
- StyleCop.Analyzers (>= 1.1.118)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated
---|---|---
1.2.17 | 82 | 10/30/2024
1.2.16 | 118 | 10/27/2024
1.2.15 | 81 | 10/22/2024
1.2.14 | 120 | 10/13/2024
1.2.13 | 89 | 10/11/2024
1.2.12 | 110 | 10/6/2024
1.2.11 | 118 | 9/22/2024
1.2.10 | 97 | 9/1/2024
1.2.9 | 143 | 8/31/2024
1.2.8 | 99 | 8/29/2024
1.2.7 | 115 | 8/28/2024
1.2.6 | 117 | 7/4/2024
1.2.5 | 118 | 7/4/2024
1.2.4 | 140 | 7/2/2024
1.2.3 | 118 | 6/30/2024
1.2.2 | 117 | 6/27/2024
1.2.1 | 136 | 4/13/2024
1.2.0 | 114 | 4/1/2024
1.1.65 | 141 | 1/20/2024
1.1.64 | 121 | 1/10/2024
1.1.63 | 122 | 1/9/2024
1.1.62 | 139 | 1/8/2024
1.1.61 | 126 | 1/7/2024
1.1.60 | 120 | 1/7/2024
1.1.59 | 110 | 1/7/2024
1.1.58 | 118 | 1/6/2024
1.1.57 | 124 | 1/6/2024
1.1.56 | 120 | 1/6/2024
1.1.55 | 110 | 1/6/2024
1.1.54 | 123 | 1/5/2024
1.1.53 | 130 | 1/4/2024
1.1.52 | 122 | 1/4/2024
1.1.51 | 117 | 1/4/2024
1.1.50 | 122 | 1/3/2024
1.1.49 | 122 | 1/3/2024
1.1.48 | 134 | 1/3/2024
1.1.47 | 123 | 1/3/2024
1.1.46 | 118 | 1/3/2024
1.1.45 | 118 | 1/3/2024
1.1.44 | 126 | 1/3/2024
1.1.43 | 121 | 1/3/2024
1.1.42 | 128 | 1/2/2024
1.1.41 | 131 | 1/2/2024
1.1.40 | 138 | 1/2/2024
1.1.39 | 145 | 1/1/2024
1.1.38 | 131 | 1/1/2024
1.1.37 | 134 | 1/1/2024
1.1.36 | 141 | 1/1/2024
1.1.35 | 128 | 1/1/2024
1.1.34 | 131 | 12/31/2023
1.1.33 | 130 | 12/25/2023
1.1.32 | 105 | 12/25/2023
1.1.31 | 131 | 12/24/2023
1.1.30 | 114 | 12/24/2023
1.1.29 | 165 | 9/25/2023
1.1.28 | 122 | 9/25/2023
1.1.27 | 133 | 9/16/2023
1.1.26 | 158 | 9/7/2023
1.1.25 | 135 | 9/7/2023
1.1.24 | 153 | 9/7/2023
1.1.23 | 128 | 9/7/2023
1.1.22 | 141 | 9/6/2023
1.1.21 | 140 | 9/6/2023
1.1.20 | 137 | 9/6/2023
1.1.19 | 150 | 9/5/2023
1.1.18 | 141 | 9/4/2023
1.1.17 | 116 | 9/4/2023
1.1.16 | 146 | 9/4/2023
1.1.15 | 140 | 9/4/2023
1.1.14 | 170 | 7/12/2023
1.1.13 | 160 | 7/11/2023
1.1.12 | 158 | 7/10/2023
1.1.11 | 151 | 7/9/2023
1.1.10 | 152 | 7/9/2023
1.1.9 | 141 | 7/9/2023
1.1.8 | 148 | 7/8/2023
1.1.7 | 177 | 7/8/2023
1.1.6 | 135 | 7/7/2023
1.1.5 | 145 | 7/7/2023
1.1.4 | 170 | 7/6/2023
1.1.3 | 152 | 7/5/2023
1.1.2 | 157 | 7/5/2023
1.1.1 | 173 | 7/3/2023
1.1.0 | 174 | 7/3/2023
1.0.61 | 181 | 7/1/2023
1.0.60 | 156 | 6/30/2023
1.0.59 | 175 | 6/29/2023
1.0.58 | 160 | 6/27/2023
1.0.57 | 160 | 6/27/2023
1.0.56 | 164 | 6/26/2023
1.0.55 | 155 | 6/26/2023
1.0.54 | 161 | 6/24/2023
1.0.53 | 165 | 6/24/2023
1.0.52 | 161 | 6/23/2023
1.0.51 | 158 | 6/21/2023
1.0.50 | 169 | 6/20/2023
1.0.49 | 157 | 6/20/2023
1.0.48 | 168 | 6/20/2023
1.0.47 | 164 | 6/19/2023
1.0.46 | 152 | 6/17/2023
1.0.45 | 158 | 6/16/2023
1.0.44 | 158 | 6/16/2023
1.0.43 | 174 | 6/14/2023
1.0.42 | 161 | 6/13/2023
1.0.41 | 170 | 6/13/2023
1.0.40 | 205 | 6/11/2023
1.0.39 | 172 | 5/30/2023
1.0.38 | 174 | 5/30/2023
1.0.37 | 172 | 5/30/2023
1.0.36 | 168 | 5/30/2023
1.0.35 | 174 | 5/29/2023
1.0.34 | 178 | 5/28/2023
1.0.33 | 167 | 5/27/2023
1.0.32 | 176 | 5/22/2023
1.0.31 | 174 | 5/18/2023
1.0.30 | 185 | 5/18/2023
1.0.29 | 170 | 5/18/2023
1.0.28 | 153 | 5/16/2023
1.0.27 | 179 | 5/16/2023
1.0.26 | 173 | 5/13/2023
1.0.25 | 154 | 5/12/2023
1.0.24 | 201 | 5/12/2023
1.0.23 | 179 | 5/12/2023
1.0.22 | 182 | 5/12/2023
1.0.21 | 184 | 5/12/2023
1.0.20 | 207 | 5/12/2023
1.0.19 | 186 | 5/12/2023
1.0.18 | 184 | 5/10/2023
1.0.17 | 188 | 5/9/2023
1.0.16 | 199 | 5/9/2023
1.0.15 | 180 | 5/9/2023
1.0.14 | 191 | 5/9/2023
1.0.13 | 177 | 5/9/2023
1.0.12 | 182 | 5/8/2023
1.0.11 | 212 | 5/8/2023
1.0.10 | 220 | 5/8/2023
1.0.9 | 192 | 5/7/2023
1.0.8 | 182 | 5/6/2023
1.0.7 | 186 | 5/5/2023
1.0.6 | 168 | 5/4/2023
1.0.5 | 168 | 5/4/2023
1.0.4 | 168 | 5/4/2023
1.0.3 | 190 | 5/3/2023
1.0.2 | 192 | 5/3/2023
1.0.1 | 184 | 5/3/2023
1.0.0 | 189 | 5/2/2023
Release Notes
Add an enumerator to the Matrix class.