Highlights of this release
- .Net Support
- Efficient group convolution.
- Improved sequential convolution.
- More operators and improvements to existing ones.
- ONNX feature update to support ONNX 1.2.2.
- More ops supported by ONNX converter.
- Bug fixes.
Efficient group convolution
The implementation of group convolution in CNTK has been updated. The updated implementation moves away from creating a sub-graph for group convolution (using slicing and splicing) and instead calls the cuDNN7 and MKL2017 APIs directly. This improves both performance and model size.
As an example, for a single group convolution op with the following attributes:
- Input tensor (C, H, W) = (32, 128, 128)
- Number of output channels = 32 (channel multiplier is 1)
- Groups = 32 (depthwise convolution)
- Kernel size = (5, 5)
The comparison numbers for this single node are as follows:
| Implementation | GPU exec. time (in millisec., 1000-run avg.) | CPU exec. time (in millisec., 1000-run avg.) | Model size (in KB, CNTK format) |
|---|---|---|---|
| Old implementation | 9.349 | 41.921 | 38 |
| New implementation | 6.581 | 9.963 | 5 |
| Speedup / savings (approx.) | 30% | 65-75% | 87% |
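With the new implementation, a depthwise convolution like the one above is expressed directly through the groups argument of the Convolution layer. The following is a minimal sketch, not part of the measurements above (the pad setting is illustrative):
>>> import cntk as C
>>> x = C.input_variable((32, 128, 128))   # (C, H, W) as above
>>> f = C.layers.Convolution((5,5), num_filters=32, groups=32, pad=True)  # groups == channels => depthwise
>>> h = f(x)
>>> h.shape
(32, 128, 128)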
Sequential Convolution
The implementation of sequential convolution in CNTK has been updated. The updated implementation creates a separate sequential convolution layer. Unlike the regular convolution layer, this operation also convolves over the dynamic (sequence) axis, with filter_shape[0] applied to that axis. The updated implementation supports broader cases, such as stride > 1 over the sequence axis.
For example, consider a sequential convolution over a batch of one-channel black-and-white images. The images all have the same fixed height of 640, but each has a variable width, which is represented by the sequence axis. Padding is enabled, and the strides for both width and height are 2.
>>> import cntk as C
>>> from cntk.layers import SequentialConvolution
>>> from cntk.layers.typing import Sequence, Tensor
>>> f = SequentialConvolution((3,3), reduction_rank=0, pad=True, strides=(2,2), activation=C.relu)
>>> x = C.input_variable(**Sequence[Tensor[640]])
>>> x.shape
(640,)
>>> h = f(x)
>>> h.shape
(320,)
>>> f.W.shape
(1, 1, 3, 3)
Operators
depth_to_space and space_to_depth
There is a breaking change in the depth_to_space and space_to_depth operators. These have been updated to match the ONNX specification; specifically, the permutation for how the depth dimension is placed as blocks in the spatial dimensions, and vice versa, has changed. Please refer to the updated doc examples for these two ops to see the change.
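The breaking change affects how elements are permuted, not the output shapes. A minimal shape-level sketch (the values produced differ between the old and new implementations; see the op docs for the exact layout):
>>> import cntk as C
>>> x = C.input_variable((4, 2, 2))
>>> C.depth_to_space(x, block_size=2).shape
(1, 4, 4)
>>> C.space_to_depth(x, block_size=2).shape
(16, 1, 1)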
Tan and Atan
Added support for trigonometric ops Tan and Atan.
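For instance (outputs rounded for display; exact printing depends on your numpy version):
>>> import numpy as np
>>> import cntk as C
>>> np.round(C.tan([0.0, 0.785398]).eval(), 3)   # tan(pi/4) ~= 1
array([0., 1.], dtype=float32)
>>> np.round(C.atan([1.0]).eval(), 3)            # atan(1) ~= pi/4
array([0.785], dtype=float32)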
ELU
Added support for alpha attribute in ELU op.
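With alpha, the negative branch of ELU becomes alpha * (exp(x) - 1). A small sketch (output rounded for display):
>>> import numpy as np
>>> import cntk as C
>>> np.round(C.elu([-1.0, 0.0, 1.0], alpha=0.5).eval(), 3)
array([-0.316,  0.   ,  1.   ], dtype=float32)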
Convolution
Updated the auto-padding algorithms of Convolution to produce symmetric padding, on a best-effort basis, on CPU, without affecting the final convolution output values. This update increases the range of cases that can be covered by the MKL API and improves performance, e.g. for ResNet50.
Default arguments order
There is a breaking change in the arguments property in the CNTK Python API. The default behavior has been updated to return arguments in Python order instead of C++ order, so they are returned in the same order as they are fed into ops. If you still wish to get arguments in C++ order, you can override the global option. This change should only affect the following ops: Times, TransposeTimes, and Gemm (internal).
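For example, with the new default the arguments of a Times node come back in the order they were fed in (a minimal sketch):
>>> import cntk as C
>>> a = C.input_variable((2, 3), name='a')
>>> b = C.input_variable((3, 4), name='b')
>>> f = C.times(a, b)
>>> [arg.name for arg in f.arguments]
['a', 'b']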
Bug fixes
- Updated doc for Convolution layer to include group and dilation arguments.
- Added improved input validation for group convolution.
- Updated LogSoftMax to use a more numerically stable implementation.
- Fixed Gather op's incorrect gradient value.
- Added validation for 'None' node in python clone substitution.
- Added validation for padding channel axis in convolution.
- Added CNTK native default lotusIR logger to fix the "Attempt to use DefaultLogger" error when loading some ONNX models.
- Added proper initialization for ONNX TypeStrToProtoMap.
- Updated python doctest to handle the different print format of newer numpy versions (version >= 1.14).
- Fixed Pooling(CPU) to produce correct output values when kernel center is on padded input cells.
ONNX
Updates
- Updated CNTK's ONNX import/export to use ONNX 1.2 spec.
- Major update to how batch and sequence axes are handled in export and import. As a result, complex scenarios and edge cases are handled accurately.
- Updated CNTK's ONNX BatchNormalization op export/import to latest spec.
- Added model domain to ONNX model export.
- Improved error reporting during import and export of ONNX models.
- Updated DepthToSpace and SpaceToDepth ops to match ONNX spec on the permutation for how the depth dimension is placed as the block dimension.
- Added support for exporting the alpha attribute in the ELU ONNX op.
- Major overhaul to Convolution and Pooling export. Unlike before, these ops do not export an explicit Pad op in any situation.
- Major overhaul to ConvolutionTranspose export and import. Attributes such as output_shape, output_padding, and pads are fully supported.
- Added support for CNTK's StopGradient as a no-op.
- Added ONNX support for TopK op.
- Added ONNX support for sequence ops: sequence.slice, sequence.first, sequence.last, sequence.reduce_sum, sequence.reduce_max, sequence.softmax. For these ops, there is no need to extend the ONNX spec; the CNTK ONNX exporter simply builds computationally equivalent graphs for them.
- Added full support for Softmax op.
- Made CNTK broadcast ops compatible with ONNX specification.
- Added handling of to_batch, to_sequence, unpack_batch, and sequence.unpack ops in the CNTK ONNX exporter.
- Added ONNX tests that export ONNX test cases for other toolkits to run and validate.
- Fixed Hardmax/Softmax/LogSoftmax import/export.
- Added support for Select op export.
- Added import/export support for several trigonometric ops.
- Updated CNTK support for ONNX MatMul op.
- Updated CNTK support for ONNX Gemm op.
- Updated CNTK's ONNX MeanVarianceNormalization op export/import to latest spec.
- Updated CNTK's ONNX LayerNormalization op export/import to latest spec.
- Updated CNTK's ONNX PRelu op export/import to latest spec.
- Updated CNTK's ONNX Gather op export/import to latest spec.
- Updated CNTK's ONNX ImageScaler op export/import to latest spec.
- Updated CNTK's ONNX Reduce ops export/import to latest spec.
- Updated CNTK's ONNX Flatten op export/import to latest spec.
- Added CNTK support for ONNX Unsqueeze op.
Bug or minor fixes:
- Updated LRN op to match ONNX 1.2 spec where the size attribute has the semantics of diameter, not radius. Added validation if LRN kernel size is larger than channel size.
- Updated Min/Max import implementation to handle variadic inputs.
- Fixed possible file corruption when resaving on top of an existing ONNX model file.
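These updates are exercised through the regular model save/load APIs. A minimal ONNX round-trip sketch (the model and file name are illustrative):
>>> import cntk as C
>>> x = C.input_variable((3,))
>>> z = C.layers.Dense(2)(x)
>>> z.save('model.onnx', format=C.ModelFormat.ONNX)
>>> z2 = C.Function.load('model.onnx', format=C.ModelFormat.ONNX)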
.Net Support
The Cntk.Core.Managed library has officially been converted to .Net Standard and supports .Net Core and .Net Framework applications on both Windows and Linux. Starting from this release, .Net developers are able to restore CNTK NuGet packages using the new .Net SDK-style project file with the package management format set to PackageReference.
The following C# code now works on both Windows and Linux:
using CNTK;

// Device for parameter allocation; the original snippet assumed a `device`
// variable was already defined (e.g. DeviceDescriptor.CPUDevice).
var device = DeviceDescriptor.UseDefaultDevice();

var weightParameterName = "weight";
var biasParameterName = "bias";
var inputName = "input";
var outputDim = 2;
var inputDim = 3;
Variable inputVariable = Variable.InputVariable(new int[] { inputDim }, DataType.Float, inputName);
var weightParameter = new Parameter(new int[] { outputDim, inputDim }, DataType.Float, 1, device, weightParameterName);
var biasParameter = new Parameter(new int[] { outputDim }, DataType.Float, 0, device, biasParameterName);

// Linear model: y = W * x + b
Function modelFunc = CNTKLib.Times(weightParameter, inputVariable) + biasParameter;
For example, simply adding an ItemGroup clause with a PackageReference in the .csproj file of a .Net Core application is sufficient:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <Platforms>x64</Platforms>
  </PropertyGroup>
  <ItemGroup>
    <!-- Package name and version are illustrative; reference the CNTK package that matches your setup. -->
    <PackageReference Include="CNTK.CPUOnly" Version="2.6.0" />
  </ItemGroup>
</Project>
Bug or minor fixes:
- Fixed C# string and char to native wstring and wchar UTF conversion issues on Linux.
- Fixed multibyte and wide character conversions across the codebase.
- Fixed NuGet package mechanism to pack for .Net Standard.
- Fixed a memory leak issue in Value class in C# API where Dispose was not called upon object destruction.