Swiftpack.co - Swift Packages by microsoft

Swiftpack.co is a collection of thousands of indexed Swift packages.

Packages published by microsoft

microsoft/onnxruntime v1.15.0
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
⭐️ 9,476
🕓 1 week ago
🔖 Release Notes

Releases

ONNX Runtime v1.15.0
1 week ago
# Announcements
Starting from the next release (ONNX Runtime 1.16.0), at the operating system level we will drop support for:
- iOS 11 and below; iOS 12 will be the minimum supported version.
- CentOS 7, Ubuntu 18.04, and any Linux distro without glibc version >= 2.28.

At the compiler level we will drop support for:
- GCC versions <= 9
- Visual Studio 2019

We will also remove the onnxruntime_DISABLE_ABSEIL build option, since we are upgrading protobuf and the new protobuf version requires abseil.

# General
- [Added support for ONNX Optional type in C# API](https://github.com/microsoft/onnxruntime/pull/15314)
- [Added collectives to support multi-GPU inferencing](https://github.com/microsoft/onnxruntime/pull/14399)
- Updated macOS build machines to macOS-12, which comes with Xcode 14.2; Xcode 12.4 is no longer used.
- Added Python 3.11 support (deprecating 3.7, supporting 3.8-3.11) in the onnxruntime CPU, onnxruntime-gpu, onnxruntime-directml, and onnxruntime-training packages.
- Updated to CUDA 11.8. ONNX Runtime source code is still compatible with CUDA 11.4 and 12.x.
- Dropped support for Windows 8.1 and below.
- Removed eager mode code and the onnxruntime_ENABLE_EAGER_MODE cmake option.
- Upgraded Mimalloc from 2.0.3 to 2.1.1.
- Upgraded protobuf from 3.18.3 to 21.12.
- New dependency: cutlass, used only in CUDA/TensorRT packages.
- Upgraded DNNL from 2.7.1 to 3.0.

# Build System
- On POSIX systems, building as the "root" user is now disallowed by default. If needed, append "--allow_running_as_root" to your build command to bypass the check.
- Added support for building the source natively on Windows ARM64 with Visual Studio 2022.
- Added a Gradle wrapper and updated Gradle from 6.8.3 to 8.0.1. (Gradle is the tool used to build the ORT Java package.)
- When cross-compiling, the build scripts now try to download a prebuilt protoc from GitHub instead of building it from source, because protobuf now has many dependencies and setting up a build environment for it is not easy.

# Performance
- [Improved string marshalling and reduced GC pressure](https://github.com/microsoft/onnxruntime/pull/15545)
- [Added a build option to allow using a lock-free queue in the threadpool for improved CPU utilization](https://github.com/microsoft/onnxruntime/pull/14834)
- [Fixed a CPU memory leak due to external weights](https://github.com/microsoft/onnxruntime/pull/15040)
- Added a fused decoder multi-head attention kernel to improve GPT and decoder models (such as T5 and Whisper)
- Added packing mode to improve encoder models with inputs that have a large padding ratio
- Improved generation algorithms (BeamSearch, TopSampling, GreedySearch)
- Improved performance for Stable Diffusion, ViT, GPT, and Whisper models

# Execution Providers
Two new execution providers: the JS EP and the QNN EP. (A Python sketch of provider selection follows these notes.)

## TensorRT EP
- Official support for TensorRT 8.6
- Explicit shape profile overrides
- Support for TensorRT plugins via ORT custom op
- Improved support for TensorRT options (heuristics, sparsity, optimization level, auxiliary stream, tactic source selection, etc.)
- Support for TensorRT timing cache
- Improvements to test coverage, specifically for opset 16-17 models and package pipeline unit test coverage
- Other miscellaneous bug fixes and improvements

## OpenVINO EP
- Support for OpenVINO 2023.0
- Dynamic shapes support for iGPU
- Changes to the OpenVINO backend to improve first-inference latency
- Deprecation of HDDL-VADM and Myriad VPU support
- Miscellaneous bug fixes

## QNN EP
- [Initial public preview release](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.QNN)

## DirectML EP
- Updated to [DirectML 1.12](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-112)
- Opset 16-17 support

## Azure EP
- Added support for the OpenAI Whisper model
- Available in a NuGet package in addition to Python

# Mobile
## New packages
- Swift Package Manager package for onnxruntime
- NuGet package for onnxruntime-extensions (supports Android/iOS for MAUI/Xamarin)
- The React Native package for onnxruntime can optionally include onnxruntime-extensions

## Pre/Post processing
- Added support for built-in pre and post processing for NLP scenarios: classification, question-answering, text-prediction
- Added support for built-in pre and post processing for speech recognition (Whisper)
- Added support for built-in post processing for object detection (YOLO): non-max suppression, drawing bounding boxes
- Additional CoreML and NNAPI kernels to support customer scenarios
  - NNAPI: BatchNormalization, LRN
  - CoreML: Div, Flatten, LeakyRelu, LRN, Mul, Pad, Pow, Sub

# Web
- [Preview] WebGPU support
- Support building the source code with "MinGW make" on Windows

# ORT Training
## On-device training
- Official packages for On-Device Training are now available. On-device training extends ORT inference solutions to enable training on edge devices.
- APIs and language bindings supported for C, C++, Python, C#, and Java.
- Packages available for desktop and Android.
- For custom builds, refer to the [build instructions](https://onnxruntime.ai/docs/build/training.html#build-for-on-device-training).

## Others
- Added [graph optimizations](https://github.com/microsoft/onnxruntime/blob/rel-1.15.0/docs/ORTModule_Training_Guidelines.md#ortmodule_enable_compute_optimizer) that leverage sparsity in the label data to improve performance. With these optimizations we see performance gains ranging from 4% to 15% for popular HF models over baseline ORT.
- Vision transformer models like ViT, BEIT, and SwinV2 see up to 44% speedup with ORT Training + DeepSpeed over PyTorch eager mode on AzureML.
- Added optimizations for SOTA models like Dolly and Whisper. ORT Training + DeepSpeed now gives ~17% speedup for Whisper and ~4% speedup for Dolly over PyTorch eager mode. Dolly optimizations on the main branch show a ~40% gain over eager mode.

# Known Issues
- The onnxruntime-training 1.15.0 packages published to pypi.org were actually built in Debug mode instead of Release mode. You can get the right one from https://download.onnxruntime.ai/ . We will fix the issue in the next patch release.
- The XNNPack EP does not work on x86 CPUs without AVX-512 instructions, because the wrong alignment was used when allocating buffers for XNNPack to use.
- The CUDA EP source code has a build error when the CUDA version is < 11.6. See #16000.
- The onnxruntime-training builds are missing the training header files.
# Contributions Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: [snnn](https://github.com/snnn), [fs-eire](https://github.com/fs-eire), [edgchen1](https://github.com/edgchen1), [wejoncy](https://github.com/wejoncy), [mszhanyi](https://github.com/mszhanyi), [PeixuanZuo](https://github.com/PeixuanZuo), [pengwa](https://github.com/pengwa), [jchen351](https://github.com/jchen351), [cloudhan](https://github.com/cloudhan), [tianleiwu](https://github.com/tianleiwu), [PatriceVignola](https://github.com/PatriceVignola), [wangyems](https://github.com/wangyems), [adrianlizarraga](https://github.com/adrianlizarraga), [chenfucn](https://github.com/chenfucn), [HectorSVC](https://github.com/HectorSVC), [baijumeswani](https://github.com/baijumeswani), [justinchuby](https://github.com/justinchuby), [skottmckay](https://github.com/skottmckay), [yuslepukhin](https://github.com/yuslepukhin), [RandyShuai](https://github.com/RandyShuai), [RandySheriffH](https://github.com/RandySheriffH), [natke](https://github.com/natke), [YUNQIUGUO](https://github.com/YUNQIUGUO), [smk2007](https://github.com/smk2007), [jslhcl](https://github.com/jslhcl), [chilo-ms](https://github.com/chilo-ms), [yufenglee](https://github.com/yufenglee), [RyanUnderhill](https://github.com/RyanUnderhill), [hariharans29](https://github.com/hariharans29), [zhanghuanrong](https://github.com/zhanghuanrong), [askhade](https://github.com/askhade), [wschin](https://github.com/wschin), [jywu-msft](https://github.com/jywu-msft), [mindest](https://github.com/mindest), [zhijxu-MS](https://github.com/zhijxu-MS), [dependabot[bot]](https://github.com/dependabot[bot]), [xadupre](https://github.com/xadupre), [liqunfu](https://github.com/liqunfu), [nums11](https://github.com/nums11), [gramalingam](https://github.com/gramalingam), [Craigacp](https://github.com/Craigacp), [fdwr](https://github.com/fdwr), [shalvamist](https://github.com/shalvamist), [jstoecker](https://github.com/jstoecker), [yihonglyu](https://github.com/yihonglyu), [sumitsays](https://github.com/sumitsays), [stevenlix](https://github.com/stevenlix), [iK1D](https://github.com/iK1D), [pranavsharma](https://github.com/pranavsharma), [georgen117](https://github.com/georgen117), [sfatimar](https://github.com/sfatimar), [MaajidKhan](https://github.com/MaajidKhan), [satyajandhyala](https://github.com/satyajandhyala), [faxu](https://github.com/faxu), [jcwchen](https://github.com/jcwchen), [hanbitmyths](https://github.com/hanbitmyths), [jeffbloo](https://github.com/jeffbloo), [souptc](https://github.com/souptc), [ytaous](https://github.com/ytaous) [kunal-vaishnavi](https://github.com/kunal-vaishnavi)
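Not specific to the new JS or QNN execution providers (the QNN EP ships as a NuGet preview), but as a minimal, hedged Python sketch of how to inspect which execution providers a given onnxruntime package was built with and request one with CPU fallback; `model.onnx` is a placeholder path:

```python
import onnxruntime as ort

# Execution providers compiled into the installed package, e.g.
# ['CUDAExecutionProvider', 'CPUExecutionProvider'] for the GPU wheel.
print(ort.get_available_providers())

# The providers list expresses preference order; ORT falls back to later
# entries (ultimately the CPU EP) for nodes the preferred EP cannot handle.
# "model.onnx" is a placeholder path.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # providers the session actually uses
```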
ONNX Runtime v1.14.1
13 weeks ago
This patch addresses packaging issues and bug fixes on top of v1.14.0:
* Mac OS Python build for x86 arch (issue: #14663)
* DirectML EP fixes: sequence ops (#14442), package naming to remove -dev suffix
* CUDA 12 build compatibility (#14659)
* Performance regression fixes: IOBinding input (#14719), Transformer models (#14732, #14517, #14699)
* ORT Training kernel fix (#14727)

Only select packages were published for this patch release; others can be found in the attachments below:
* PyPI: [onnxruntime](https://pypi.org/project/onnxruntime), [onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu), [onnxruntime-directml](https://pypi.org/project/onnxruntime-directml), [onnxruntime-training](https://pypi.org/project/onnxruntime-training/)
* Nuget: [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime), [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu), [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml), [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning)
ONNX Runtime v1.14.0
16 weeks ago
# Announcements
* Building ORT from source will require cmake version >= 3.24 instead of >= 3.18.

# General
* [ONNX 1.13](https://github.com/onnx/onnx/releases/tag/v1.13.0) support (opset 18)
* Threading
  * The ORT threadpool is now NUMA aware ([details](https://onnxruntime.ai/docs/performance/tune-performance.html#numa-support-and-performance-tuning))
  * New API to set thread affinity ([details](https://onnxruntime.ai/docs/performance/tune-performance.html#set-intra-op-thread-affinity))
* New custom operator APIs
  * Enables a custom operator to wrap an entire model that is meant to be inferenced with an external API or runtime.
  * [Details](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html#define-and-register-a-custom-operator) and [example](https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/test/testdata/custom_op_openvino_wrapper_library)
* Multi-stream Execution Provider refactoring
  * Improves GPU utilization by putting parallel inference requests on different GPU streams. Updated for the CUDA, TensorRT, and ROCm execution providers.
  * Improves memory efficiency by enabling GPU memory reuse across different streams.
  * Enables Execution Provider developers to customize their stream implementation by providing a "Stream" interface in the ExecutionProvider API.
* *[Preview]* [Rust API](https://github.com/microsoft/onnxruntime/tree/main/rust) for ORT - not part of the release branch but available to build in main.

# Performance
* Support for quantization with AMX on Sapphire Rapids processors
* CUDA EP performance improvements:
  * Improved performance of transformer models and decoding methods: beam search, greedy search, and top-p sampling.
  * Stable Diffusion model optimizations
  * Changed the cudnn_conv_use_max_workspace default value to 1
* Performance improvements to the GRU and Slice operators

# Execution Providers
* TensorRT EP
  * Adds support for TensorRT 8.5 GA versions
  * Bug fixes
* OpenVINO EP
  * Adds support for OpenVINO 2022.3
* DirectML EP
  * Updated to DML [1.10.1](https://www.nuget.org/packages/Microsoft.AI.DirectML)
  * Additional operators: [NonZero](https://github.com/microsoft/onnxruntime/pull/13768), [Shape](https://github.com/microsoft/onnxruntime/pull/13442), [Size](https://github.com/microsoft/onnxruntime/pull/13442), [Attention](https://github.com/microsoft/onnxruntime/pull/13371), [EmbedLayerNorm](https://github.com/microsoft/onnxruntime/pull/13868), [SkipLayerNorm](https://github.com/microsoft/onnxruntime/pull/13849), [BiasGelu](https://github.com/microsoft/onnxruntime/pull/13795)
  * Additional data types: [Abs](https://github.com/microsoft/onnxruntime/pull/13470), [Sign](https://github.com/microsoft/onnxruntime/pull/13470), [Where](https://github.com/microsoft/onnxruntime/pull/13443)
  * Enable SetOptimizedFilePath [export/reload](https://github.com/microsoft/onnxruntime/pull/13913)
  * Bug fixes/extensions: [allow squeeze-13 axes](https://github.com/microsoft/onnxruntime/pull/13635), [EinSum with MatMul NHCW](https://github.com/microsoft/onnxruntime/pull/13440)
* [ROCm EP](https://onnxruntime.ai/docs/execution-providers/ROCm-ExecutionProvider.html): 5.4 support and GA ready
* *[Preview]* [Azure EP](https://onnxruntime.ai/docs/execution-providers/Azure-ExecutionProvider.html) - supports AzureML-hosted models using Triton for hybrid inferencing on-device and on-cloud

# Mobile
* Pre/Post processing
  * Support for updating mobilenet and super resolution models to move the pre and post processing into the model, including usage of custom ops for conversion to/from jpg/png
  * The [onnxruntime-extensions python package](https://pypi.org/project/onnxruntime-extensions/) includes the model update script to add pre/post processing to the model (a registration sketch in Python follows these notes)
  * See [example](https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/superresolution_e2e.py) model update usage
  * *[Coming soon]* onnxruntime-extensions packages for Android and iOS with DecodeImage and EncodeImage custom ops
  * Updated the onnxruntime inference examples to demonstrate end-to-end usage with the onnxruntime-extensions package
    * [SuperResolution model](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile/examples/super_resolution)
* XNNPACK
  * Added support for additional commonly used operators
  * Added iOS build support
    * The XNNPACK EP is now included in the onnxruntime-c iOS package
  * Added support for using the ORT allocator in XNNPACK kernels to minimize memory usage

# Web
* [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) included in the default ort-web build (NLP centric)
* XNNPACK Gemm
* Improved exception handling
* New [utility functions](https://onnxruntime.ai/docs/api/js/index.html) (experimental) to help with exchanging data between images and tensors.

# Training
* Performance optimizations and bug fixes for Hugging Face models (e.g., XLNet and Bloom)
* Stable Diffusion optimizations for training, including support for Resize and InstanceNorm gradients and the addition of ORT-enabled examples to the [diffusers library](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/onnxruntime)
* FP16 optimizer exposed in torch-ort ([details](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#4-use-fp16_optimizer-to-complement-deepspeedapex))
* Bug fixes for Hugging Face models

# Known Issues
* The [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) package name includes a -dev-* suffix. This is functionally equivalent to the release branch build, and a patch is in progress.
--- # Contributions Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [tianleiwu](https://github.com/tianleiwu), [yufenglee](https://github.com/yufenglee), [guoyu-wang](https://github.com/guoyu-wang), [yuslepukhin](https://github.com/yuslepukhin), [fs-eire](https://github.com/fs-eire), [pranavsharma](https://github.com/pranavsharma), [iK1D](https://github.com/iK1D), [baijumeswani](https://github.com/baijumeswani), [tracysh](https://github.com/tracysh), [thiagocrepaldi](https://github.com/thiagocrepaldi), [askhade](https://github.com/askhade), [RyanUnderhill](https://github.com/RyanUnderhill), [wangyems](https://github.com/wangyems), [fdwr](https://github.com/fdwr), [RandySheriffH](https://github.com/RandySheriffH), [jywu-msft](https://github.com/jywu-msft), [zhanghuanrong](https://github.com/zhanghuanrong), [smk2007](https://github.com/smk2007), [pengwa](https://github.com/pengwa), [liqunfu](https://github.com/liqunfu), [shahasad](https://github.com/shahasad), [mszhanyi](https://github.com/mszhanyi), [SherlockNoMad](https://github.com/SherlockNoMad), [xadupre](https://github.com/xadupre), [jignparm](https://github.com/jignparm), [HectorSVC](https://github.com/HectorSVC), [ytaous](https://github.com/ytaous), [weixingzhang](https://github.com/weixingzhang), [stevenlix](https://github.com/stevenlix), [tiagoshibata](https://github.com/tiagoshibata), [faxu](https://github.com/faxu), [wschin](https://github.com/wschin), [souptc](https://github.com/souptc), [ashbhandare](https://github.com/ashbhandare), [RandyShuai](https://github.com/RandyShuai), [chilo-ms](https://github.com/chilo-ms), [PeixuanZuo](https://github.com/PeixuanZuo), [cloudhan](https://github.com/cloudhan), [dependabot[bot]](https://github.com/dependabot[bot]), [jeffbloo](https://github.com/jeffbloo), [chenfucn](https://github.com/chenfucn), [linkerzhang](https://github.com/linkerzhang), [duli2012](https://github.com/duli2012), [codemzs](https://github.com/codemzs), [oliviajain](https://github.com/oliviajain), [natke](https://github.com/natke), [YUNQIUGUO](https://github.com/YUNQIUGUO), [Craigacp](https://github.com/Craigacp), [sumitsays](https://github.com/sumitsays), [orilevari](https://github.com/orilevari), [BowenBao](https://github.com/BowenBao), [yangchen-MS](https://github.com/yangchen-MS), [hanbitmyths](https://github.com/hanbitmyths), [satyajandhyala](https://github.com/satyajandhyala), [MaajidKhan](https://github.com/MaajidKhan), [smkarlap](https://github.com/smkarlap), [sfatimar](https://github.com/sfatimar), [jchen351](https://github.com/jchen351), [georgen117](https://github.com/georgen117), [wejoncy](https://github.com/wejoncy), [PatriceVignola](https://github.com/PatriceVignola), [adrianlizarraga](https://github.com/adrianlizarraga), [justinchuby](https://github.com/justinchuby), [zhangxiang1993](https://github.com/zhangxiang1993), [gineshidalgo99](https://github.com/gineshidalgo99), [tlh20](https://github.com/tlh20), [xzhu1900](https://github.com/xzhu1900), [jeffdaily](https://github.com/jeffdaily), [suryasidd](https://github.com/suryasidd), [yihonglyu](https://github.com/yihonglyu), [liuziyue](https://github.com/liuziyue), [chentaMS](https://github.com/chentaMS), [jcwchen](https://github.com/jcwchen), [ybrnathan](https://github.com/ybrnathan), [ajindal1](https://github.com/ajindal1), 
[zhijxu-MS](https://github.com/zhijxu-MS), [gramalingam](https://github.com/gramalingam), [WilBrady](https://github.com/WilBrady), [garymm](https://github.com/garymm), [kkaranasos](https://github.com/kkaranasos), [ashari4](https://github.com/ashari4), [martinb35](https://github.com/martinb35), [AdamLouly](https://github.com/AdamLouly), [zhangyaobit](https://github.com/zhangyaobit), [vvchernov](https://github.com/vvchernov), [jingyanwangms](https://github.com/jingyanwangms), [wenbingl](https://github.com/wenbingl), [daquexian](https://github.com/daquexian), [sreekanth-yalachigere](https://github.com/sreekanth-yalachigere), [NonStatic2014](https://github.com/NonStatic2014), [mayavijx](https://github.com/mayavijx), [mindest](https://github.com/mindest), [jstoecker](https://github.com/jstoecker), [manashgoswami](https://github.com/manashgoswami), [Andrews548](https://github.com/Andrews548), [baowenlei](https://github.com/baowenlei), [kunal-vaishnavi](https://github.com/kunal-vaishnavi)
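The v1.14.0 Mobile and Web notes above reference models whose pre/post processing is implemented with onnxruntime-extensions custom ops. As a hedged sketch (not taken from these notes), this is the usual way to register the extensions custom-op library with an ORT session in Python; `decorated_model.onnx` is a placeholder for a model updated by the pre/post processing scripts:

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Register the onnxruntime-extensions custom-op library so the session can
# resolve the pre/post-processing ops inserted by the model update scripts.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())

# "decorated_model.onnx" is a placeholder for a model with pre/post
# processing folded into the graph.
session = ort.InferenceSession("decorated_model.onnx", sess_options=so)
```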
ONNX Runtime v1.13.1
32 weeks ago
# Announcements
* Security issues addressed by this release:
  1. A protobuf security issue (CVE-2022-1941) that impacts users who load ONNX models from untrusted sources, for example, a deep learning inference service that allows users to upload their models and then runs the inferences in a shared environment.
  2. An ONNX security vulnerability that allows reading of tensor_data outside the model directory, which allows attackers to read or write arbitrary files on an affected system that loads ONNX models from untrusted sources. (#12915)
* Deprecations
  * CUDA 10.x support at the source code level
  * Windows 8.x support in Nuget/C API prebuilt binaries. Support for Windows 7+ Desktop versions (including Windows servers) will be retained by building ONNX Runtime from source.
  * NUPHAR EP code is removed
* Dependency versioning updates
  * A C++17 compiler is now required to build ORT from source. On Linux, GCC version >= 7.0 is required.
  * Minimal numpy version bumped to 1.21.6 (from 1.21.0) for ONNX Runtime Python packages
  * Official ONNX Runtime GPU packages now require CUDA version >= 11.6 instead of 11.4.

# General
* Expose all arena configs in the Python API in an extensible way
* Fix ARM64 NuGet packaging
* Fix EP allocator setup issue affecting the TVM EP

# Performance
* Transformers CUDA improvements
  * Quantization on GPU for BERT - notebook, documentation on QAT, transformer optimization toolchain and quantized kernels.
  * Add fused attention CUDA kernels for BERT.
  * Fuse `Add` (bias) and `Transpose` of Q/K/V into one kernel for Attention and LongformerAttention.
  * Reduce GEMM computation in LongformerAttention with a new weight format.
* General quantization (tool and kernel)
  * [Quantization debugging tool](https://onnxruntime.ai/docs/performance/quantization.html#quantization-debugging) - identify sensitive nodes/layers from accuracy-drop discrepancies
  * New quantize API based on QuantConfig
  * New quantized operators: SoftMax, Split, Where

# Execution Providers
* CUDA EP
  * Official ONNX Runtime GPU packages are now built with CUDA version 11.6 instead of 11.4, but should still be backwards compatible with 11.4
* TensorRT EP
  * Build option to link against the pre-built onnx-tensorrt parser; this enables potential "no-code" TensorRT minor version upgrades and can be used to build against TensorRT 8.5 EA
  * Improved nested control flow support
  * Improved HashId generation used for uniquely identifying TRT engines. Addresses issues such as the [TRT Engine Cache Regeneration Issue](https://github.com/triton-inference-server/onnxruntime_backend/issues/145)
  * TensorRT uint8 support
* OpenVINO EP
  * OpenVINO version upgraded to 2022.2.0
  * Support for INT8 QDQ models from [NNCF](https://github.com/openvinotoolkit/nncf/tree/develop/examples/experimental/onnx/)
  * Support for Intel 13th Gen Core processors (Raptor Lake)
  * Preview support for Intel discrete graphics cards: [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html) and [Intel Arc GPU](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/arc.html)
  * Increased test coverage for the GPU plugin
* SNPE EP
  * Added support for [Windows Dev Kit 2023](https://onnxruntime.ai/winarm.html)
  * [Nuget Package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Snpe) is now available
* DirectML EP
  * Update to [DML 1.9.1](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.9.1)
  * [New ops](https://github.com/microsoft/onnxruntime/blob/main/docs/OperatorKernels.md#dmlexecutionprovider): [LayerNormalization](https://github.com/microsoft/onnxruntime/pull/12809), [Gelu](https://github.com/microsoft/onnxruntime/pull/12898/), MatMulScale, [DFT](https://github.com/microsoft/onnxruntime/pull/12710), [FusedMatMul](https://github.com/microsoft/onnxruntime/pull/12898/) (contrib)
  * Bug fixes: DML EP fix for InstanceNormalization with 3D tensors (#12693), DML EP squeeze all axes when empty (#12649), DirectML GEMM broken in opset 11 and 13 when optional tensor C not provided (#12568)
* **[new]** CANN EP - Initial integration of the CANN EP contributed by Huawei to support Ascend 310 (#11477)

# Mobile
* EP infrastructure
  * Implemented support for additional EPs that use static kernels
    * Required for EPs like XNNPACK to be supported in a minimal build
  * Removes the need for kernel hashes, reducing maintenance overhead for developers
  * NOTE: ORT format models will need to be regenerated as the format change is NOT backwards compatible. We're replacing hashes for the CPU EP kernels with operator constraint information for operators used by the model so that we can match any static kernels available at runtime.
* XNNPACK
  * Added more kernels, including QDQ format model support
    * AveragePool, Softmax
    * QLinearConv, QLinearAveragePool, QLinearSoftmax
  * Added support for XNNPACK using the threadpool
    * See the [documentation](https://onnxruntime.ai/docs/execution-providers/Xnnpack-ExecutionProvider.html) for recommendations on how to configure the XNNPACK threadpool
* ORT format model peak memory usage
  * Added the ability to use the ORT format model bytes directly for initializers to reduce peak memory usage (a Python sketch follows these notes)
  * Enabled via SessionOptions config: https://onnxruntime.ai/docs/reference/ort-format-models.html#load-ort-format-model-from-an-in-memory-byte-array
    * Set "session.use_ort_model_bytes_directly" and "session.use_ort_model_bytes_for_initializers" to "1"

# Web
* Support for 4GB memory in WebAssembly
* Upgraded emscripten to 3.1.19
* Build-from-source support for [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) and [sentencepiece](https://github.com/microsoft/onnxruntime-extensions/blob/main/docs/custom_ops.md#sentencepiecetokenizer)
* Initial support for XNNPACK optimizations for Wasm

# Training
* Training packages updated to CUDA version 11.6; removed CUDA 10.2 and 11.3
* Performance improvements via op fusions like BiasSoftmax and Dropout fusion, Gather to Split fusion, etc., targeting SOTA models
* Added Aten support for GroupNorm, InstanceNormalization, and Upsample nearest
* Bug fixes for SimplifiedLayerNorm and a segfault in alltoall

---
# Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: [snnn](https://github.com/snnn), [baijumeswani](https://github.com/baijumeswani), [edgchen1](https://github.com/edgchen1), [iK1D](https://github.com/iK1D), [skottmckay](https://github.com/skottmckay), [cloudhan](https://github.com/cloudhan), [tianleiwu](https://github.com/tianleiwu), [fs-eire](https://github.com/fs-eire), [mszhanyi](https://github.com/mszhanyi), [WilBrady](https://github.com/WilBrady), [hariharans29](https://github.com/hariharans29), [chenfucn](https://github.com/chenfucn), [fdwr](https://github.com/fdwr), [yuslepukhin](https://github.com/yuslepukhin), [wejoncy](https://github.com/wejoncy), [PeixuanZuo](https://github.com/PeixuanZuo), [pengwa](https://github.com/pengwa), [yufenglee](https://github.com/yufenglee), [jchen351](https://github.com/jchen351), [justinchuby](https://github.com/justinchuby), [dependabot[bot]](https://github.com/dependabot[bot]), [RandySheriffH](https://github.com/RandySheriffH), [sumitsays](https://github.com/sumitsays), [wschin](https://github.com/wschin), [wangyems](https://github.com/wangyems), [YUNQIUGUO](https://github.com/YUNQIUGUO), [ytaous](https://github.com/ytaous), [pranavsharma](https://github.com/pranavsharma), [vvchernov](https://github.com/vvchernov), [natke](https://github.com/natke), [Craigacp](https://github.com/Craigacp), [RandyShuai](https://github.com/RandyShuai), [smk2007](https://github.com/smk2007), [zhangyaobit](https://github.com/zhangyaobit), [jcwchen](https://github.com/jcwchen), [yihonglyu](https://github.com/yihonglyu), [georgen117](https://github.com/georgen117), [chilo-ms](https://github.com/chilo-ms), [ashbhandare](https://github.com/ashbhandare), [faxu](https://github.com/faxu), [jstoecker](https://github.com/jstoecker), [gramalingam](https://github.com/gramalingam), [garymm](https://github.com/garymm), [jeffbloo](https://github.com/jeffbloo), [xadupre](https://github.com/xadupre), [jywu-msft](https://github.com/jywu-msft),
[askhade](https://github.com/askhade), [RyanUnderhill](https://github.com/RyanUnderhill), [thiagocrepaldi](https://github.com/thiagocrepaldi), [mindest](https://github.com/mindest), [jingyanwangms](https://github.com/jingyanwangms), [wenbingl](https://github.com/wenbingl), [ashari4](https://github.com/ashari4), [sfatimar](https://github.com/sfatimar), [MaajidKhan](https://github.com/MaajidKhan), [souptc](https://github.com/souptc), [HectorSVC](https://github.com/HectorSVC), [weixingzhang](https://github.com/weixingzhang), [zhanghuanrong](https://github.com/zhanghuanrong)
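For the v1.13.1 mobile item above on ORT format model peak memory usage, here is a minimal Python sketch of the two session config entries named in the notes; `model.ort` is a placeholder path:

```python
import onnxruntime as ort

# "model.ort" is a placeholder path to an ORT format model.
with open("model.ort", "rb") as f:
    model_bytes = f.read()

so = ort.SessionOptions()
# Use the provided bytes directly instead of copying them, including for
# initializers, to reduce peak memory usage.
so.add_session_config_entry("session.use_ort_model_bytes_directly", "1")
so.add_session_config_entry("session.use_ort_model_bytes_for_initializers", "1")

# Keep model_bytes referenced while the session is in use, since the session
# reads from them directly.
session = ort.InferenceSession(model_bytes, sess_options=so)
```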
ONNX Runtime v1.12.1
43 weeks ago
This patch addresses packaging issues and bug fixes on top of v1.12.0:
- Java package: macOS M1 support folder structure fix
- Android package: enable optimizations
- GPU (TensorRT provider): bug fixes
- DirectML: package fix
- WinML: bug fixes

See #12418 for the full list of specific fixes included.
ONNX Runtime v1.12.0
45 weeks ago
# Announcements
* For Execution Provider maintainers/owners: the [lightweight compile API](https://github.com/microsoft/onnxruntime/blob/master/include/onnxruntime/core/framework/execution_provider.h#L249) is now the default compiler API for all Execution Providers (this was previously only available for the mobile build). If you have an EP using the [legacy compiler API](https://github.com/microsoft/onnxruntime/blob/master/include/onnxruntime/core/framework/execution_provider.h#L237), please migrate to the lightweight compile API as soon as possible. The legacy API will be deprecated in the next release (ORT 1.13).
* netstandard1.1 support is being deprecated in this release and will be removed in the next ORT 1.13 release.

# Key Updates
## General
* ONNX spec support
  * onnx opset 17
  * onnx-ml opset 3 (TreeEnsemble update)
* BeamSearch operator for encoder-decoder transformer models
* Support for invoking individual ops without the need to create a separate graph
  * For use with custom op development to reuse ORT code
* Support for feeding external initializers (for large models) as byte arrays for model inferencing
* Build switch to disable usage of the abseil library to remove the dependency

## Packages
* Python 3.10 support
* Mac M1 support in Python and Java packages
* .NET 6/MAUI support in the Nuget C# package
  * Additional target frameworks: net6.0, net6.0-android, net6.0-ios, net6.0-macos
  * NOTE: netstandard1.1 support is being deprecated in this release and will be removed in the 1.13 release
* [onnxruntime-openvino](https://pypi.org/project/onnxruntime-openvino/1.12.0/) package available on PyPI (from Intel)

## Performance and Quantization
* Improved C++ APIs that now utilize RAII for better memory management
* Operator performance optimizations, including GatherElements
* Memory optimizations to support compute-intensive real-time inferencing scenarios (e.g. audio inferencing scenarios)
* CPU usage savings for infrequent inference requests by reducing thread spinning
* Memory usage reduction through use of containers from the abseil library, especially inlined vectors used to store tensor shapes and inlined hash maps
* New quantized kernels for weight symmetry to improve performance on ARM64 little cores (GEMM and Conv)
* Specialized kernel to improve performance of quantized Resize by up to 2x
* Improved the thread job partition for QLinearConv, demonstrating up to ~20% perf gain for certain models
* Quantization tool: improved ONNX shape inference for large models

## Execution Providers
* TensorRT EP
  * TensorRT 8.4 support
  * Provide option to share execution context memory between TensorRT subgraphs
  * Workaround for long CI test time caused by frequent initialization/de-initialization of the TensorRT builder
  * Improve subgraph partitioning and consolidate TensorRT subgraphs when possible
  * Refactor engine cache serialization/deserialization logic
  * Miscellaneous bug fixes and performance improvements
* OpenVINO EP
  * Pre-built ONNX Runtime binaries with OpenVINO now available on PyPI: [onnxruntime-openvino](https://pypi.org/project/onnxruntime-openvino/1.12.0/)
  * Performance optimizations of existing supported models
  * New runtime configuration option 'enable_dynamic_shapes' added to enable dynamic shapes for each iteration
  * ORTModule included as part of the OVEP Python package to enable Torch ORT inference
* DirectML EP
  * Updated to [DirectML 1.9](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-190)
  * Opset 13-15 support: [#11827](https://github.com/microsoft/onnxruntime/pull/11827), [#11814](https://github.com/microsoft/onnxruntime/pull/11814), [#11782](https://github.com/microsoft/onnxruntime/pull/11782), [#11772](https://github.com/microsoft/onnxruntime/pull/11772)
  * Bug fixes: [Xbox command list reuse](https://github.com/microsoft/onnxruntime/pull/12063), [descriptor heap reset](https://github.com/microsoft/onnxruntime/pull/12059), [command allocator memory growth](https://github.com/microsoft/onnxruntime/pull/12114), [negative pad counts](https://github.com/microsoft/onnxruntime/pull/11974), [node suffix removal](https://github.com/microsoft/onnxruntime/pull/11879)
* TVM EP - [details](https://onnxruntime.ai/docs/execution-providers/TVM-ExecutionProvider.html)
  * Updated to add model .dll ingestion and execution on Windows
  * Updated documentation and CI tests
* ***[New]*** SNPE EP - [details](https://onnxruntime.ai/docs/execution-providers/SNPE-ExecutionProvider.html)
* ***[Preview]*** XNNPACK EP - initial infrastructure with limited operator support, for use with ORT Mobile and ORT Web
  * Currently supports Conv and MaxPool, with work in progress to add more kernels

## Mobile
* Binary size reductions in the Android minimal build - 12% reduction in size of the base build with no operator kernels
* Added new operator support to the NNAPI and CoreML EPs to improve the ability to run super resolution and BERT models using the NPU
  * NNAPI: DepthToSpace, PRelu, Gather, Unsqueeze, Pad
  * CoreML: DepthToSpace, PRelu
* Added a [Docker file](https://onnxruntime.ai/docs/build/custom.html#android) to simplify running a custom minimal build to create an ORT Android package
* Initial XNNPACK EP compatibility

## Web
* Memory usage optimizations
* Initial XNNPACK EP compatibility

## ORT Training
* ***[New]*** ORT Training acceleration is also natively available through [HuggingFace Optimum](https://github.com/huggingface/optimum#training)
* ***[New]*** FusedAdam Optimizer now available through the torch-ort package for easier training integration
* FP16_Optimizer support for more DeepSpeed versions
* Bfloat16 support for AtenOp
* Added gradient ops for ReduceMax and ReduceMin
* Updates to Min and Max grad ops to use distributed logic
* Optimizations
  * Optimized perf for Gelu and GeluGrad kernels for mixed precision models
  * Enabled fusions for SimplifiedLayerNorm
  * Added bitmask versions of Dropout, BiasDropout and DropoutGrad, which bring ~8x space savings for the mask output.

# Known issues
* The [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) package on Nuget has an issue and will be fixed in a patch. Fix: #12368
* The [Maven package](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) has a packaging issue for Mac M1 builds and will be fixed in a patch. Fix: #12335 / [Workaround discussion](https://github.com/microsoft/onnxruntime/issues/11054#issuecomment-1195391571)
* Windows builds are not compatible with Windows 8.x in this release. Please use v1.11 for now.

---
# Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: [snnn](https://github.com/snnn), [edgchen1](https://github.com/edgchen1), [fdwr](https://github.com/fdwr), [skottmckay](https://github.com/skottmckay), [iK1D](https://github.com/iK1D), [fs-eire](https://github.com/fs-eire), [mszhanyi](https://github.com/mszhanyi), [WilBrady](https://github.com/WilBrady), [justinchuby](https://github.com/justinchuby), [tianleiwu](https://github.com/tianleiwu), [PeixuanZuo](https://github.com/PeixuanZuo), [garymm](https://github.com/garymm), [yufenglee](https://github.com/yufenglee), [adrianlizarraga](https://github.com/adrianlizarraga), [yuslepukhin](https://github.com/yuslepukhin), [dependabot[bot]](https://github.com/dependabot[bot]), [chilo-ms](https://github.com/chilo-ms), [vvchernov](https://github.com/vvchernov), [oliviajain](https://github.com/oliviajain), [ytaous](https://github.com/ytaous), [hariharans29](https://github.com/hariharans29), [sumitsays](https://github.com/sumitsays), [wangyems](https://github.com/wangyems), [pengwa](https://github.com/pengwa), [baijumeswani](https://github.com/baijumeswani), [smk2007](https://github.com/smk2007), [RandySheriffH](https://github.com/RandySheriffH), [gramalingam](https://github.com/gramalingam), [xadupre](https://github.com/xadupre), [yihonglyu](https://github.com/yihonglyu), [zhangyaobit](https://github.com/zhangyaobit), [YUNQIUGUO](https://github.com/YUNQIUGUO), [jcwchen](https://github.com/jcwchen), [chenfucn](https://github.com/chenfucn), [souptc](https://github.com/souptc), [chandru-r](https://github.com/chandru-r), [jstoecker](https://github.com/jstoecker), [hanbitmyths](https://github.com/hanbitmyths), [RyanUnderhill](https://github.com/RyanUnderhill), [georgen117](https://github.com/georgen117), [jywu-msft](https://github.com/jywu-msft), [mindest](https://github.com/mindest), [sfatimar](https://github.com/sfatimar), [HectorSVC](https://github.com/HectorSVC), [Craigacp](https://github.com/Craigacp), [jeffdaily](https://github.com/jeffdaily), [zhijxu-MS](https://github.com/zhijxu-MS), [natke](https://github.com/natke), [stevenlix](https://github.com/stevenlix), [jeffbloo](https://github.com/jeffbloo), [guoyu-wang](https://github.com/guoyu-wang), [daquexian](https://github.com/daquexian), [faxu](https://github.com/faxu), [jingyanwangms](https://github.com/jingyanwangms),
[adtsai](https://github.com/adtsai), [wschin](https://github.com/wschin), [weixingzhang](https://github.com/weixingzhang), [wenbingl](https://github.com/wenbingl), [MaajidKhan](https://github.com/MaajidKhan), [ashbhandare](https://github.com/ashbhandare), [ajindal1](https://github.com/ajindal1), [zhanghuanrong](https://github.com/zhanghuanrong), [tiagoshibata](https://github.com/tiagoshibata), [askhade](https://github.com/askhade), [liqunfu](https://github.com/liqunfu)
ONNX Runtime v1.11.1
1 year ago
This is a patch release on 1.11.0 with the following fixes:
- Symbolic shape inference error (https://github.com/microsoft/onnxruntime/pull/10674)
- Quantization tool bug (https://github.com/microsoft/onnxruntime/pull/10940)
- Adds missing numpy type when looking for the ORT correspondence (https://github.com/microsoft/onnxruntime/pull/10943)
- Profiling tool JSON format bug (https://github.com/microsoft/onnxruntime/pull/11046)
- Function bug fix (https://github.com/microsoft/onnxruntime/pull/11148)
- Add mobile helpers to Python build (https://github.com/microsoft/onnxruntime/pull/11196)
- Scoped GIL release in run_with_iobinding (https://github.com/microsoft/onnxruntime/pull/11248)
- Fix output type mapping for JS (https://github.com/microsoft/onnxruntime/pull/11049)

All official packages are attached, and Python packages are additionally published to PyPI.
ONNX Runtime v1.11.0
1 year ago
# Key Updates
## General
* Support for ONNX 1.11 with opset 16
* Updated protobuf version to 3.18.x
* Enabled usage of Mimalloc ([details](https://onnxruntime.ai/docs/performance/tune-performance.html#mimalloc-allocator-usage))
* Transformer model helper scripts
  * [T5 conversion script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/models/t5/convert_to_onnx.py)
  * [GPT2 conversion script](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers#gpt-2-model-conversion)
* On Windows, error strings in OrtStatus are now encoded in UTF-8. When you need to print one to the screen, first convert it to a wide-char string using the MultiByteToWideChar Windows API.

## Performance
* Memory-utilization-related performance improvements (e.g. elimination of vectors for small dims)
* Performance variance stability improvement through a dynamic cost model session option ([details](https://onnxruntime.ai/docs/performance/tune-performance.html#mitigate-high-latency-variance))
* New quantization data format support: S8S8 in QDQ format
  * Added s8s8 kernels for ARM64
  * Support to convert s8s8 to u8s8 automatically for x64
* Improved performance on ARM64 for quantized CNN models through:
  * New kernels for quantized depthwise Conv
  * Improved symmetrically quantized Conv by leveraging an indirect buffer
  * New Gemm kernels for symmetric quantized Conv and MatMul
* General quantization improvements, including new quantized operators (Resize, ArgMax) and quantization tool updates

## API
* Java: Only a single OrtEnv can be created in any given execution of the JVM. Previously, the environment could be closed completely and a fresh one could be created with different parameters (e.g. global thread pool, or logging level) ([details](https://github.com/microsoft/onnxruntime/pull/10670))

## Packages
* Nuget packages
  * C# packages now tested with .NET 5. .NET Core 2.1 support is deprecated as it reached end of life on August 21, 2021. We will closely follow [.NET's support policy](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core)
  * Removed PDB files. These are attached as release artifacts below.
* Pypi packages
  * Python 3.6 is deprecated as it reached EOL in December 2021. Supported Python versions: 3.7-3.9
  * *Note: Mac M1 builds are not yet available in pypi but can be built from source*
  * OnnxRuntime with OpenVINO support available at [https://pypi.org/project/onnxruntime-openvino/1.11.0/](https://pypi.org/project/onnxruntime-openvino/1.11.0/)

## Execution Providers
* CUDA
  * Enable CUDA provider option configuration for C# to support workspace size configuration and fix binary compatibility of the CUDAProviderOptions C API
  * Preview support for CUDA Graphs ([details](https://onnxruntime.ai/docs/performance/tune-performance.html#using-cuda-graphs-in-the-cuda-ep))
* TensorRT
  * TRT 8.2.3 support
  * Memory footprint optimizations
  * Support protobuf >= 3.11
  * Updated flatbuffers version to 2.0
  * Misc bug fixes
* DirectML
  * Updated more operators to opset 13 (QuantizeLinear, DequantizeLinear, ReduceSum, Split, Squeeze, Unsqueeze).
* OpenVINO
  * OpenVINO™ version upgraded to 2022.1.0 - the biggest OpenVINO™ upgrade in 3.5 years. This provides functional bug fixes, API Change 2.0 and capability changes from the previous 2021.4.2 LTS release.
  * Performance optimizations of existing supported models.
  * Pre-built OnnxRuntime binaries with OpenVINO enabled can be downloaded from [https://github.com/intel/onnxruntime/releases/tag/v4.0](https://github.com/intel/onnxruntime/releases/tag/v4.0) and [https://pypi.org/project/onnxruntime-openvino/1.11.0/](https://pypi.org/project/onnxruntime-openvino/1.11.0/)
* OpenCL _(in preview)_
  * Introduced the EP for OpenCL to use with mobile GPUs
  * Available in the `experimental/opencl` branch for users to try. Provide feedback through Issues and Discussions in the repo.
  * The README is available [here](https://github.com/microsoft/onnxruntime/blob/experimental/opencl/onnxruntime/core/providers/opencl/README.md).

## Mobile
* Added general support for converting a model to NHWC layout at runtime
  * Execution providers set their preferred layout, and shared infrastructure in ORT ensures the nodes assigned to an execution provider are in that layout
* Added support for runtime optimization with minimal binary size impact
  * Relevant optimizations are saved in the ORT format model for replay at runtime if applicable
* Added support for QDQ format models to the NNAPI EP
  * Falls back to the CPU EP's QDQ handling if NNAPI is not available, using runtime optimizations
  * Includes updates to the ORT QDQ optimizers so they work better with mobile scenarios
* Added helpers to:
  * Analyze whether a model can be used with the pre-built ORT Mobile package
  * Update the ONNX opset so the model can be used with the pre-built package
  * Convert dynamic inputs into fixed-size inputs so that the model can be used with NNAPI/CoreML
  * Optimize a QDQ format model for use with ORT
* Added Android and iOS packages with full ORT builds
  * These packages have additional support for the full set of opsets and ops for ONNX models at the cost of a larger binary size.

## Web
* Build option to create an ONNX Runtime WebAssembly static library
* Support for concurrent creation of multiple inference sessions
* Upgraded emsdk version to 3.1.3 for more stable multi-threading and to enable LTO with multi-threaded WebAssembly builds.

# Known issues
* When using tensor sequences/sparse tensors, the generated profile is not valid JSON. (Fixed in https://github.com/microsoft/onnxruntime/pull/10974)
* There is a bug in the quantization tool for calibration when choosing the percentile algorithm (fixed in https://github.com/microsoft/onnxruntime/pull/10940). To fix this, please apply the typo fix in the Python file.
* Mac M1 builds are not yet available in pypi but can be built from source (see the note under Packages above).

# Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: [snnn](https://github.com/snnn), [edgchen1](https://github.com/edgchen1), [skottmckay](https://github.com/skottmckay), [yufenglee](https://github.com/yufenglee), [wangyems](https://github.com/wangyems), [yuslepukhin](https://github.com/yuslepukhin), [gwang-msft](https://github.com/gwang-msft), [iK1D](https://github.com/iK1D), [chilo-ms](https://github.com/chilo-ms), [fdwr](https://github.com/fdwr), [ytaous](https://github.com/ytaous), [RandySheriffH](https://github.com/RandySheriffH), [hanbitmyths](https://github.com/hanbitmyths), [chenfucn](https://github.com/chenfucn), [yihonglyu](https://github.com/yihonglyu), [ajindal1](https://github.com/ajindal1), [fs-eire](https://github.com/fs-eire), [souptc](https://github.com/souptc), [tianleiwu](https://github.com/tianleiwu), [YUNQIUGUO](https://github.com/YUNQIUGUO), [hariharans29](https://github.com/hariharans29), [oliviajain](https://github.com/oliviajain), [xadupre](https://github.com/xadupre), [ashari4](https://github.com/ashari4), [RyanUnderhill](https://github.com/RyanUnderhill), [jywu-msft](https://github.com/jywu-msft), [weixingzhang](https://github.com/weixingzhang), [baijumeswani](https://github.com/baijumeswani), [georgen117](https://github.com/georgen117), [natke](https://github.com/natke), [Craigacp](https://github.com/Craigacp), [jeffdaily](https://github.com/jeffdaily), [JingqiaoFu](https://github.com/JingqiaoFu), [zhanghuanrong](https://github.com/zhanghuanrong), [satyajandhyala](https://github.com/satyajandhyala), [smk2007](https://github.com/smk2007), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [thiagocrepaldi](https://github.com/thiagocrepaldi), [jingyanwangms](https://github.com/jingyanwangms), [pengwa](https://github.com/pengwa), [scxiao](https://github.com/scxiao), [ashbhandare](https://github.com/ashbhandare), [BowenBao](https://github.com/BowenBao), [SherlockNoMad](https://github.com/SherlockNoMad), [sumitsays](https://github.com/sumitsays), [sfatimar](https://github.com/sfatimar), [mosdav](https://github.com/mosdav), [harshithapv](https://github.com/harshithapv), [liqunfu](https://github.com/liqunfu), [tiagoshibata](https://github.com/tiagoshibata), [gineshidalgo99](https://github.com/gineshidalgo99), [pranavsharma](https://github.com/pranavsharma), [jcwchen](https://github.com/jcwchen), [nkreeger](https://github.com/nkreeger), [xkszltl](https://github.com/xkszltl), [faxu](https://github.com/faxu), [suffiank](https://github.com/suffiank), [stevenlix](https://github.com/stevenlix), [jeffbloo](https://github.com/jeffbloo), [feihugis](https://github.com/feihugis)
ONNX Runtime v1.10.0
1 year ago
# Announcements
* As noted in the [deprecation notice](https://github.com/microsoft/onnxruntime/blob/4daa14bc74b5378d5fcb0d6de063a9fa8bd42eac/onnxruntime/python/onnxruntime_inference_collection.py#L350) in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider, e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider']). (A Python sketch follows these notes.)
* Python 3.6 support removed for Mac builds. Since 3.6 reached end of life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards.
* Removed dependency on [optional-lite](https://github.com/martinmoene/optional-lite)
* Removed experimental Featurizers code

# General
* Support for plug-in custom thread creation and join functions to enable usage of external threads
* Optional type support from opset 15

# Performance
* Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter, i.e., filter type is int8 and the filter's zero point is 0. The method leverages an indirect buffer instead of memcpy'ing the original data, and doesn't need to compute the sum of each pixel of the output image for quantized Conv.
  * X64: new kernels - including avx2, avxvnni, avx512 and avx512 vnni - for general and depthwise quantized Conv.
  * ARM64: new kernels for depthwise quantized Conv.
* Tensor shape optimization to avoid allocating heap memory in most cases - [#9542](https://github.com/microsoft/onnxruntime/pull/9542)
* Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation

# API
* Python
  * Following through on the [deprecation notice](https://github.com/microsoft/onnxruntime/blob/4daa14bc74b5378d5fcb0d6de063a9fa8bd42eac/onnxruntime/python/onnxruntime_inference_collection.py#L350) in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider, e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
* C/C++
  * New API to query the CUDA stream to launch a custom kernel, for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - [#9141](https://github.com/microsoft/onnxruntime/pull/9141)
  * Updated Invalid -> OrtInvalidAllocator
  * Updated every item in OrtCudnnConvAlgoSearch to a safer global name
* WinML
  * New APIs to create OrtValues from Windows platform specific ID3D12Resources by exposing DirectML Execution Provider specific APIs. These APIs allow DML to extend the C-API and provide EP specific extensions.
    * OrtSessionOptionsAppendExecutionProviderEx_DML
    * DmlCreateGPUAllocationFromD3DResource
    * DmlFreeGPUAllocation
    * DmlGetD3D12ResourceFromAllocation
  * Bug fix: LearningModel::LoadFromFilePath in UWP apps

# Packages
* Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official Nuget packages. ([build instructions](https://onnxruntime.ai/docs/build/inferencing.html#macos))
* Windows C API symbols are now uploaded to the [Microsoft symbol server](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/microsoft-public-symbols)
* The [Nuget package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) now supports ARM64 Linux C#
* The [Python GPU package](https://pypi.org/project/onnxruntime-gpu/) now includes both TensorRT and CUDA EPs.
  * *Note: EPs need to be explicitly registered to ensure the correct provider is used, e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have appropriate [TensorRT dependencies](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements) and [CUDA dependencies](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) installed.*

# Execution Providers
* TensorRT EP
  * Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting the providers parameter when creating an InferenceSession, e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
  * Published a [quantized BERT model example](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/quantization/nlp/bert/trt)
* OpenVINO EP
  * Add support for OpenVINO 2021.4.x
  * Auto Plugin support
  * IO Buffer/Copy Avoidance Optimizations for the GPU plugin
  * Misc fixes
* DNNL EP
  * Add Softmaxgrad op
  * Add Transpose, Reshape, Pow and LeakyRelu ops
  * Add DynamicQuantizeLinear op
  * Add squeeze/unsqueeze ops
* DirectML EP
  * [Update](https://github.com/microsoft/onnxruntime/pull/9765) DirectML.dll from [1.5.1](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.5.1) to [1.8.0](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.8.0)
  * Support full precision uint64/int64 for [48](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-180) operators
  * Add 8D support for [7](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-160) more existing operators
  * Add DynamicQuantizeLinear op
  * Accept ID3D12Resources via the [C API](https://github.com/microsoft/onnxruntime/pull/9686)

# Mobile
* Added Xamarin support to the ORT C# Nuget packages
  * Updated target frameworks in the native package
  * iOS and Android binaries now included in the native package
* ORT format models now have a backwards compatibility guarantee

# Web
* Support WebAssembly SIMD for the qgemm kernel to accelerate the performance of quantized models
* Upgraded existing WebGL kernels to the latest opset
* Optimized bundle size to support various production scenarios, such as WebAssembly only or WebGL only

---
# Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: [snnn](https://github.com/snnn), [gineshidalgo99](https://github.com/gineshidalgo99), [fs-eire](https://github.com/fs-eire), [gwang-msft](https://github.com/gwang-msft), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [jeffdaily](https://github.com/jeffdaily), [baijumeswani](https://github.com/baijumeswani), [fdwr](https://github.com/fdwr), [smk2007](https://github.com/smk2007), [suffiank](https://github.com/suffiank), [souptc](https://github.com/souptc), [RyanUnderhill](https://github.com/RyanUnderhill), [iK1D](https://github.com/iK1D), [yuslepukhin](https://github.com/yuslepukhin), [chilo-ms](https://github.com/chilo-ms), [satyajandhyala](https://github.com/satyajandhyala), [hanbitmyths](https://github.com/hanbitmyths), [thiagocrepaldi](https://github.com/thiagocrepaldi), [wschin](https://github.com/wschin), [tianleiwu](https://github.com/tianleiwu), [pengwa](https://github.com/pengwa), [xadupre](https://github.com/xadupre), [zhanghuanrong](https://github.com/zhanghuanrong), [SherlockNoMad](https://github.com/SherlockNoMad),
[wangyems](https://github.com/wangyems), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [tiagoshibata](https://github.com/tiagoshibata), [yufenglee](https://github.com/yufenglee), [mindest](https://github.com/mindest), [sumitsays](https://github.com/sumitsays), [MaajidKhan](https://github.com/MaajidKhan), [gramalingam](https://github.com/gramalingam), [tracysh](https://github.com/tracysh), [georgen117](https://github.com/georgen117), [jywu-msft](https://github.com/jywu-msft), [sfatimar](https://github.com/sfatimar), [martinb35](https://github.com/martinb35), [nkreeger](https://github.com/nkreeger), [ytaous](https://github.com/ytaous), [ashari4](https://github.com/ashari4), [stevenlix](https://github.com/stevenlix), [chandru-r](https://github.com/chandru-r), [jingyanwangms](https://github.com/jingyanwangms), [mosdav](https://github.com/mosdav), [raviskolli](https://github.com/raviskolli), [faxu](https://github.com/faxu), [liqunfu](https://github.com/liqunfu), [kit1980](https://github.com/kit1980), [weixingzhang](https://github.com/weixingzhang), [pranavsharma](https://github.com/pranavsharma), [jcwchen](https://github.com/jcwchen), [chenfucn](https://github.com/chenfucn), [BowenBao](https://github.com/BowenBao), [jeffbloo](https://github.com/jeffbloo)
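As referenced in the v1.10.0 announcements and packages notes above, a minimal Python sketch of explicit provider registration; `model.onnx` is a placeholder path:

```python
import onnxruntime as ort

# Since the ORT 1.9 deprecation notice, non-default execution providers must
# be requested explicitly. Order expresses preference: TensorRT first, then
# CUDA, with CPU as the final fallback. "model.onnx" is a placeholder path.
session = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
```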
ONNX Runtime v1.9.1
1 year ago
This is a patch release on 1.9.0 with the following fixes:
- Microsoft.AI.MachineLearning NuGet package fixes
- Bug fix for GPU execution failing when the executable is on a path containing Unicode characters - [9229](https://github.com/microsoft/onnxruntime/pull/9229)
- Bug fix for the NuGet package failing to install on UWP apps with 1.9 - [9182](https://github.com/microsoft/onnxruntime/pull/9182)
- Bug fix for the OpenVINO EP Python API - [9166](https://github.com/microsoft/onnxruntime/pull/9166)
- Bump up the TVM version for the NUPHAR EP - [9159](https://github.com/microsoft/onnxruntime/pull/9159)
- Fixed a build issue for iOS 11 and earlier versions - [9036](https://github.com/microsoft/onnxruntime/pull/9036)
iOS Windows
microsoft/plcrashreporter 1.11.1
Reliable, open-source crash reporting for iOS, macOS and tvOS
โญ๏ธ 2,677
๐Ÿ•“ 5 days ago
๐Ÿ”– Release Notes

Releases

1.11.1
5 days ago
## Version 1.11.1
* **[Improvement]** Disable treating warnings as errors in code to avoid blockers when new Xcode warnings are introduced.
* **[Improvement]** Add caught exception logging to PLCrashReporter to generate reports from a specific exception.
1.11.0
33 weeks ago
## Version 1.11.0
* **[Feature]** Add Xcode 14 support. Xcode 11 and Xcode 12 are out of support now.
* **[Improvement]** Fix analyzer warnings.
1.10.2
48 weeks ago
## Version 1.10.2
* **[Fix]** Fix the constructor ignoring the `shouldRegisterUncaughtExceptionHandler` configuration parameter.
* **[Improvement]** Update `protobuf-c` to version 1.4.0.
* **[Improvement]** Fix deprecated Xcode 13 build settings that might break incremental builds (drops the workaround for an Xcode 12.0-12.4 bug). This only affects projects that use PLCrashReporter as sources.
1.10.1
1 year ago
## Version 1.10.1
* **[Improvement]** Specify the minimum CocoaPods version in the podspec as 1.10.0.
* **[Improvement]** Mark the `PLCrashReporter.crashReportPath` method as public.
1.10.0
1 year ago
## Version 1.10.0
* **[Fix]** Fix error `Undefined symbols for architecture arm64` when building PLCrashReporter for the simulator on Xcode 12.4 and higher.
* **[Fix]** Fix a "Cycle in dependencies" error when building the project from sources multiple times.
* **[Feature]** Distribute the XCFramework via CocoaPods and Carthage. The XCFramework contains static libs only.
* **[Fix]** Include plcrashutil in all release archives.
1.9.0
1 year ago
## Version 1.9.0
* **[Fix]** Fix `double-quoted` warnings in Xcode 12.
* **[Fix]** Fix memory leak during stack trace unwinding.
* **[Feature]** Add an API to customize data path.
1.8.1
2 years ago
## Version 1.8.1
* Re-build Apple Silicon binaries with the [Xcode 12.2 Release Candidate](https://developer.apple.com/news/releases/?id=11052020h) so that applications using the framework as a binary [can be submitted](https://developer.apple.com/news/releases/?id=11052020i) to the App Store.
1.8.0
2 years ago
## Version 1.8.0
* Drop support for old versions of iOS and macOS. The minimum versions are now iOS 9 and macOS 10.9.
* Add Apple Silicon support. Note that `arm64` for iOS and tvOS simulators is available only via XCFramework or SwiftPM.
* Support saving custom data in the crash report, see the `PLCrashReporter.customData` property (a sketch follows below).
* Fix the exported symbols list when applying the `PLCRASHREPORTER_PREFIX` prefix.
* Fix Xcode 12 compatibility when the framework is used from sources.
* Fix getting the subtype of the device architecture on iOS 14.
* Fix a crash when collecting register values on `arm64e` devices with iOS 14.
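As a quick illustration of the `customData` property mentioned above, here is a minimal Swift sketch (not taken from the PLCrashReporter release notes or docs); how your app creates and enables its reporter is left out, and the payload string is a made-up example:

```swift
import CrashReporter
import Foundation

// Minimal sketch: attach app-defined data to future crash reports via the
// customData property introduced in 1.8.0. `reporter` is however your app
// already creates/holds its PLCrashReporter instance.
func tagCrashReports(on reporter: PLCrashReporter, sessionID: String) {
    // The data is stored alongside the next crash report and can be read
    // back when the report is processed.
    reporter.customData = "session-id=\(sessionID)".data(using: .utf8)
}
```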
1.7.2
2 years ago
## Version 1.7.2
* Fix building on Xcode 12 beta.
* Use memory mapping to reduce live reports memory pressure.
* Remove "CrashReporter Key: TODO" from text report output.
* Add `[PLCrashReporter]` prefix for all log messages.
1.7.1
2 years ago
## Version 1.7.1
* Fix crash on old operating systems: macOS 10.11, iOS 9 and tvOS 9 (and older).
* Fix duplicate symbols in applications with `-all_load` linker flag.
* Fix exporting PLCrashReporter along with an application into `.xcarchive`.
* Fix collecting stack traces on `arm64e` devices in some cases.
iOS macOS tvOS
microsoft/FluentDarkModeKit 1.0.4
A library for backporting Dark Mode in iOS
โญ๏ธ 1,626
๐Ÿ•“ 2 years ago
๐Ÿ”– Release Notes

Releases

1.0.4: Add @objc annotation for use in Objective-C only projects (#106)
2 years ago
1.0.3: Fix missing attributes for UILabel when setting attributedText on iOS 11
2 years ago
1. Fix missing attributes for UILabel when setting attributedText on iOS 11 (#102)
1.0.2: Making project ready with minimum deployment target >= 13.0
2 years ago
1. Making project ready with minimum deployment target >= 13.0
1.0.1: Add windowThemeChangeHandler
2 years ago
1. Fix crash for CGColor-related properties on a view when called with a color created via `withAlphaComponent` (#96)
2. Add window theme change handler (#97)
1.0.0: Migrate to system APIs on iOS 13
2 years ago
0.5.2: Fix SPM import issue
2 years ago
1. Fixed SPM import issue (#65)
2. Added SwiftLint
0.5.1: Fix UIScrollView indicator display in dark mode (#59)
3 years ago
0.5.0: Rename the repo to FluentDarkModeKit
3 years ago
iOS
microsoft/AdaptiveCards v1.2.11
A new way for developers to exchange card content in a common and consistent way.
โญ๏ธ 1,522
๐Ÿ•“ 2 years ago
๐Ÿ”– Release Notes

Releases

AdaptiveCards Mobile Schema 1.6
23 hours ago
We are excited to release Adaptive Cards v1.6 for Mobile renderers!

# Latest Packages
* :tada: [Android 2.9.0](https://search.maven.org/artifact/io.adaptivecards/adaptivecards-android/2.9.0/aar)
* :tada: [iOS 2.9.0](https://cocoapods.org/pods/AdaptiveCards)

# New Features
1. [Dynamic Typeahead](https://github.com/microsoft/AdaptiveCards/blob/dipja/dynamic-type-ahead-doc-android/specs/DesignDiscussions/DynamicTypeAhead.md)

# Bug Fixes
## Android
* https://github.com/microsoft/AdaptiveCards/pull/8500
* https://github.com/microsoft/AdaptiveCards/pull/8413
## iOS
* https://github.com/microsoft/AdaptiveCards/issues/7601
* https://github.com/microsoft/AdaptiveCards/issues/7572
23.04 JS Renderer Release
8 weeks ago
# Latest Packages
🎉 [JS adaptivecards 3.0.0-beta.13](https://www.npmjs.com/package/adaptivecards/v/3.0.0-beta.13)

## New Feature
* [JS] Add role property to actions by @anna-dingler in https://github.com/microsoft/AdaptiveCards/pull/8336

## Fixes
* [Input][RevealOnHover] Fixes for reveal on hover on inputs by @baton17 in https://github.com/microsoft/AdaptiveCards/pull/8414
23.03 XAML Renderer and Object Model Release
10 weeks ago
# Latest Packages
* :tada: [XAML Renderer 3.2.0](https://www.nuget.org/packages/AdaptiveCards.Rendering.Uwp/3.2.0)
* :tada: [XAML Object Model 1.2.0](https://www.nuget.org/packages/AdaptiveCards.ObjectModel.Uwp/1.2.0)

# Renderer
## New Features
* 🎉 [Beta feature: Role property for actions](https://github.com/microsoft/AdaptiveCards/pull/8337)
## Bug Fixes
* https://github.com/microsoft/AdaptiveCards/pull/8221
* https://github.com/microsoft/AdaptiveCards/pull/8231
* https://github.com/microsoft/AdaptiveCards/pull/8232
* https://github.com/microsoft/AdaptiveCards/pull/8236
* https://github.com/microsoft/AdaptiveCards/pull/8235
* https://github.com/microsoft/AdaptiveCards/pull/8352
* https://github.com/microsoft/AdaptiveCards/pull/8233
* https://github.com/microsoft/AdaptiveCards/pull/8234

# Object Model
## New Features
* 🎉 [Beta feature: Role property for actions](https://github.com/microsoft/AdaptiveCards/pull/8337)
.NET Templating 1.4.0
13 weeks ago
# Latest Packages
* :tada: [.NET Templating 1.4.0](https://www.nuget.org/packages/AdaptiveCards.Templating/1.4.0)

# New Feature: Host parameters
Applications using the .NET Templating engine can now supply an additional `Host` data blob in `EvaluationContext`, which is accessible in templates through `$host`. This allows applications to provide contextual information to be referenced when generating a card.

PR: https://github.com/microsoft/AdaptiveCards/pull/8328 (matches behavior of the NodeJS templating engine; see https://github.com/microsoft/AdaptiveCards/pull/7199)
23.02 XAML Renderer Release
14 weeks ago
# Latest Packages
* :tada: [XAML 3.1.1](https://www.nuget.org/packages/AdaptiveCards.Rendering.Uwp/3.1.1)

# Bug Fixes
* https://github.com/microsoft/AdaptiveCards/issues/8243

# Note
* The corresponding Object Model release is [1.1.0](https://www.nuget.org/packages/AdaptiveCards.ObjectModel.Uwp/1.1.0)
23.02 JS Renderer Release
15 weeks ago
# Latest Packages
🎉 [JS adaptivecards 3.0.0-beta.11](https://www.npmjs.com/package/adaptivecards/v/3.0.0-beta.11)

# Fixes
* #8299 Fixed Carousel Keyboard issue
23.02 JS Renderer Release
16 weeks ago
# Latest Packages
🎉 [JS adaptivecards 2.11.2](https://www.npmjs.com/package/adaptivecards/v/2.11.2)

# Fixes
* #8301 Fixed Perf Issue.
* #8144 Fixed empty placeholder issue.
23.01 Designer Release
18 weeks ago
# Latest Packages
* :tada: [Designer 2.4.2](https://www.npmjs.com/package/adaptivecards-designer/v/2.4.2)

# Breaking change
* We added host container theming support in this version. Custom host containers should either:
  * Extend `SingleThemeHostContainer` or `MultiThemeHostContainer`, or
  * Implement `getCurrentStyleSheet()`, which is now an abstract method on the `HostContainer` class.

# Beta Features
* Add Widgets Board Host Container
* Schema 1.6 Preview
* Allow theming in host containers. Host containers can now support `Light` and `Dark` themes from the same container.

# Bug Fixes
* #8048 Update webpack config for xmark.svg
* #7931 Remove tab index from container picker
* #7952 Resolve undefined error in ActionPeer getBoundingRect
* #7917 Assign names to icon buttons
23.01 JS Renderer Release
18 weeks ago
# Latest Packages
* :tada: [JS adaptivecards 3.0.0-beta.10](https://www.npmjs.com/package/adaptivecards/v/3.0.0-beta.10)

# New Features
* #8242 Vertical Carousel Option Added
* #8129 Stacked Style Added to ImageSet

# Fixes
* #8260 Addressed Gif Reset Issue
* #8259 Addressed Actions Issues in Duplicated Carousel Slides
* #8250 Fixed Carousel Flicker Issue
* #8216 Updated Tooltip for Carousel Navigation Button
2023.01 XAML Object Model Release
20 weeks ago
# Latest Packages
* :tada: [XAML Object Model 1.1.0](https://www.nuget.org/packages/AdaptiveCards.ObjectModel.Uwp/1.1.0)

# New Features
## XAML Object Model
1. [RTL Card Level Support](https://github.com/microsoft/AdaptiveCards/pull/6661)
2. Beta Feature: [Closed Captions](https://github.com/microsoft/AdaptiveCards/pull/7178)
iOS
microsoft/fluentui-apple 0.17.3
UIKit and AppKit controls for building native Microsoft experiences
โญ๏ธ 791
๐Ÿ•“ 4 days ago
๐Ÿ”– Release Notes

Releases

0.17.3
4 days ago
### What's Changed
* Add new ColorProviding protocol for Mac Fluent colors by @brentpe
* Fix ShyHeaderController regression by @laminesm
* Revert changes to viewControllerNeedsWrapping by @laminesm
* Add resizingHandleViewBackgroundColor objc wrapper by @laminesm
* Add accessibility identifier to NavigationBar's back button by @laminesm
0.17.0
2 weeks ago
### What's Changed
* Multiline Contextual Command Bar by @joannaquu
* Add documentation for Multiline Command Bar by @joannaquu
* Hosted search bar styles by @edjamesmsft
* Bring tokenized BadgeView by @laminesm
* Update Label's textColor/font to use tokenSet by @joannaquu
* Add SearchBar documentation by @laminesm
* Fix Swift Package Manager builds on macOS by @markavitale
* Tokenizing Separator by @sophialee0416
* Fix Button isFocused backgroundColor by @joannaquu
* Add Floating Action Button by @joannaquu
* Fix alphabetical order of controls by @joannaquu
* Fix PillButtonBar's button sizes by @huwilkes
* Fix VoiceOver bug in DatePicker NSMenu by @joannaquu
* Update SegmentedControl over NavigationBar by @edjamesmsft
* Tokenized HeaderFooterView by @sophialee0416
* Add labelFont to BadgeView tokenSet by @joannaquu
* Update TVC's checkmark to use theme color by @joannaquu
* Move minimum iOS sdk to 15.0 by @mischreiber
* [Fluent Button] Added support for Increase Contrast Accessibility by @amoggo
* Add new stroke tokens by @joannaquu
* Add accessible stroke to BadgeView by @joannaquu
* Remove stroke from BadgeField by @joannaquu
* Improve SwiftUI theme support by @huwilkes
* Changed naming of Border in AvatarGroup demo to Activity Ring by @michaelxiao16
* Merge NavBar feature branch into main by @amgleitman
* Add gradient style to tokenized NavigationBar by @laminesm
* Fix Tooltip XCUITest by @joannaquu
* Tokenize BadgeLabel by @laminesm
* Update logic between setting vars on Label and updating tokenSet by @huwilkes
* Update Button's large text support by @huwilkes
* Hide avatar for centered titles on small phones by @amgleitman
* Update version to 0.17 by @laminesm

**Full Changelog**: https://github.com/microsoft/fluentui-apple/compare/0.16.2...0.17.0
0.16.2
3 weeks ago
### What's Changed
- Add accessibilityIdentifier to BottomCommandingController's more button @joannaquu
- Bump FluentUI Apple to 0.16.2 @joannaquu
0.16.1
3 weeks ago
### What's Changed
- Delete Divider @mischreiber
- Fix PillButtonBar's button sizes @huwilkes
- [main_0.16] Bump FluentUI Apple to 0.16.1 @joannaquu
0.15.2
3 weeks ago
### Summary
- Remove unused Divider control
- Fix SwiftLint warnings

### What's Changed
- [main_0.15] [cherry-pick] Deleting Divider + Fix new warnings @mischreiber @huwilkes
- [main_0.15] Bump FluentUI Apple to 0.15.2 @joannaquu
0.16.0
8 weeks ago
### What's Changed
* Bring tokenized PopupMenu, Drawer, TabBar, SideTabBar, PillButton/Bar & ResizingHandleView to main by @laminesm
* Fix maximumContentOffset calculation in SegmentedControl by @huwilkes
* Changing transition style for AvatarGroup by @sophialee0416
* Update the localization action by @huwilkes
* Action clean up by @huwilkes
* Add accessibility identifier to CommandingItem by @joannaquu
* Deprecate FluentUIHostingController in iOS 16 by @huwilkes
* Fix new swiftlint warnings by @huwilkes

**Full Changelog**: https://github.com/microsoft/fluentui-apple/compare/0.15.1...0.16.0
0.14.2
9 weeks ago
### Summary
### **iOS:**
**AvatarGroup:** Changed transition style from `.move()` to `.opacity` for a more intuitive animation experience (illustrated in the sketch below)

### What's Changed
* [main_0.14] Cherry-picking: Changing animation for AvatarGroup (#1676) by @sophialee0416 in https://github.com/microsoft/fluentui-apple/pull/1680
* [main_0.14] Bumping FluentUI Apple to 0.14.2 by @sophialee0416 in https://github.com/microsoft/fluentui-apple/pull/1681

**Full Changelog**: https://github.com/microsoft/fluentui-apple/compare/0.14.1...0.14.2
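For readers unfamiliar with the two transition styles named above, here is a small generic SwiftUI sketch (not FluentUI code; the view and state names are hypothetical) showing how an `.opacity` transition fades a view in and out rather than sliding it the way `.move(edge:)` does:

```swift
import SwiftUI

// Hypothetical demo view: toggling `showsBadge` fades the circle in and out.
// Swapping .opacity for .move(edge: .trailing) would slide it instead, which
// is the kind of change 0.14.2 describes for AvatarGroup.
struct TransitionDemo: View {
    @State private var showsBadge = false

    var body: some View {
        VStack(spacing: 16) {
            if showsBadge {
                Circle()
                    .frame(width: 24, height: 24)
                    .transition(.opacity)
            }
            Button("Toggle") {
                withAnimation(.easeInOut) { showsBadge.toggle() }
            }
        }
    }
}
```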
0.15.1
9 weeks ago
## What's Changed
* Cherry-picking: Changing animation for AvatarGroup (#1676) by @sophialee0416 in https://github.com/microsoft/fluentui-apple/pull/1678
* [main_0.15] Bumping FluentUI Apple Version (0.15.1) by @sophialee0416 in https://github.com/microsoft/fluentui-apple/pull/1679

**Full Changelog**: https://github.com/microsoft/fluentui-apple/compare/0.15.0...0.15.1
0.15.0
10 weeks ago
### What's Changed
* Allow HUD to grow for long texts by @joannaquu in #1667
* Tokenize SearchBar by @laminesm in #1670
* Update version to 0.15 by @laminesm in #1671

**Full Changelog**: https://github.com/microsoft/fluentui-apple/compare/0.14.1...0.15.0
v 0.14.1
10 weeks ago
### iOS update
- Improve Tooltip's Show() hostingviewcontroller param
- Remove unused Divider control from the package
- Fix Swift Package Manager issue

### What's Changed
* [main_0.14] Remove FluentUIResources import from FluentUIFramework.swift (#1664) by @laminesm in https://github.com/microsoft/fluentui-apple/pull/1666
* Cherry pick tooltip changes by @harrieshin in https://github.com/microsoft/fluentui-apple/pull/1668
* [main_0.14] Deleting `Divider` from 0.14 branch by @mischreiber in https://github.com/microsoft/fluentui-apple/pull/1662
* Release 0.14.1 by @harrieshin in https://github.com/microsoft/fluentui-apple/pull/1669

**Full Changelog**: https://github.com/microsoft/fluentui-apple/compare/0.14.0...0.14.1
iOS macOS
microsoft/appcenter-sdk-apple 5.0.2
Development repository for the App Center SDK for iOS, macOS and tvOS.
โญ๏ธ 523
๐Ÿ•“ 12 weeks ago
๐Ÿ”– Release Notes

Releases

5.0.2
12 weeks ago
## Version 5.0.2

### App Center
* **[Fix]** Fix NSLog congestion on Apple's framework thread.
* **[Improvement]** Always specify the `isDirectory` parameter for `[NSURL URLByAppendingPathComponent:]` for better performance (see the sketch below).
* **[Improvement]** Disable treating warnings as errors in code to avoid blockers when new Xcode warnings are introduced.
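To illustrate why the `isDirectory` hint matters, here is a small Swift sketch (using the Swift `URL` counterpart of the Objective-C API named above; the directory name is hypothetical). Passing `isDirectory` explicitly lets Foundation skip the file-system check it would otherwise perform to decide whether the component is a directory:

```swift
import Foundation

// Hypothetical logs directory under the temporary directory.
let baseURL = URL(fileURLWithPath: NSTemporaryDirectory(), isDirectory: true)

// Specifying isDirectory: true up front avoids an extra file-system probe.
let logsURL = baseURL.appendingPathComponent("AppCenterLogs", isDirectory: true)
print(logsURL.path)
```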
5.0.1
20 weeks ago
## Version 5.0.1

### App Center
* **[Fix]** Fix unsafe object deserialization.
* **[Fix]** Fix "Collection was mutated while being enumerated" exception in MSACChannelGroupDefault.
* **[Fix]** Fix crash in channel:didPrepareLog in MSACChannelGroupDefault.

### App Center Distribute
* **[Fix]** Fix crash in the getPresentationAnchor function if the active scene is not an instance of UIWindowScene.
5.0.0
29 weeks ago
## Version 5.0.0

### App Center
* **[Feature]** Add Xcode 14 support. Xcode 11 and Xcode 12 are out of support now. Bump the minimum supported iOS version to iOS 11.

### App Center Crashes
* **[Improvement]** Update PLCrashReporter to 1.11.0.
4.4.3
46 weeks ago
## Version 4.4.3

### App Center Crashes
* **[Improvement]** Update PLCrashReporter to 1.10.2.
4.4.2
1 year ago
## Version 4.4.2

### App Center Analytics
* **[Feature]** Support building via the command line with `swift build`.

### App Center Crashes
* **[Feature]** Support building via the command line with `swift build`.
* **[Fix]** Add exception null check for the `Crashes.trackError` API.
4.4.1
1 year ago
## Version 4.4.1

### App Center
* **[Fix]** Fix warning about broken symlink `MSACCustomProperties.h` when integrating via Swift Package Manager.
4.4.0
1 year ago
## Version 4.4.0

### App Center
* **[Breaking change]** Remove the `AppCenter.setCustomProperties` API.
* **[Fix]** Fix `Undefined symbol: OBJC_CLASS$_CTTelephonyNetworkInfo` error for the Mac Catalyst platform when integrating the SDK via Swift Package Manager with Swift 5.5 and higher.
* **[Fix]** Fix an exception thrown when authenticating the MAC value during decryption.
* **[Improvement]** Specify the minimum CocoaPods version in the podspec as 1.10.0.

### App Center Analytics
* **[Feature]** Increase the interval between sending logs from 3 to 6 seconds to reduce backend load.
* **[Feature]** Add `Analytics.enableManualSessionTracker` and `Analytics.startSession` APIs for tracking sessions manually (see the sketch below).

### App Center Crashes
* **[Feature]** Add a `(NSString *)description` method to convert `MSACErrorReport` to a string with additional useful information about the error being sent.
* **[Feature]** Save crash reports from the Xamarin.Mac platform.
* **[Fix]** Fix a build failure on Xcode 13 caused by the warning `Completion handler is never used`. Only observable when the SDK is integrated as source code. Continuation of the previous fix that addressed the issue on the beta version.
* **[Fix]** Fix sending `Crashes.trackError` logs after network requests are allowed following app launch.
* **[Improvement]** Update PLCrashReporter to 1.10.1.

### App Center Distribute
* **[Fix]** Cancel the authorization process if the application is not active; otherwise ASWebAuthenticationSession will fail to open the browser and the update flow will end up in a broken state. This only affects updating from a private distribution group.

## Known issues
* A warning about the broken symlink `MSACCustomProperties.h` appears while integrating App Center with SPM.
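A minimal sketch of the manual session-tracking APIs listed above, assuming a placeholder app secret; per the App Center documentation the manual tracker must be enabled before the SDK is started:

```swift
import AppCenter
import AppCenterAnalytics

// Placeholder app secret; use your own App Center secret here.
let appSecret = "00000000-0000-0000-0000-000000000000"

// Must be called before AppCenter.start for manual session tracking to apply.
Analytics.enableManualSessionTracker()
AppCenter.start(withAppSecret: appSecret, services: [Analytics.self])

// Later, whenever the app considers a new session to have begun:
Analytics.startSession()
```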
4.3.0
1 year ago
## Version 4.3.0

### App Center
* **[Feature]** Improved the `AES` token encryption algorithm using an `Encrypt-then-MAC` data authentication approach.

### App Center Crashes
* **[Feature]** Add support for tracking handled errors with the `Crashes.trackError` and `Crashes.trackException` APIs (see the sketch below).
* **[Fix]** Fix a build failure on Xcode 13 caused by the warning `completion handler is never used`. Only observable when the SDK is integrated as source code. Workaround: set `Treat Warnings as Errors` to `No` in the target's build settings.
* **[Improvement]** Update PLCrashReporter to 1.10.0.

### App Center Distribute
- **[Fix]** Fix a warning `'Resources/AppCenterDistribute.strings': file not found` when resolving Swift packages using Swift 5.5.
- **[Fix]** Fix the part of the script responsible for cleaning up the resource bundles inside the xcframework.
- **[Fix]** Fix `Undefined symbols for architecture x86_64` for `ASWebAuthenticationSession` for CocoaPods (v1.11) integration.
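A short sketch of tracking a handled error with the `Crashes.trackError` API mentioned above; the error type, function, and property key are hypothetical:

```swift
import AppCenterCrashes

// Hypothetical error type used only for illustration.
enum ImportError: Error {
    case fileUnreadable
}

func importPlaylist(at path: String) {
    do {
        // Stand-in for real work that can throw.
        throw ImportError.fileUnreadable
    } catch {
        // Report the handled error without crashing the app; properties and
        // attachments are optional extra context.
        Crashes.trackError(error, properties: ["path": path], attachments: nil)
    }
}
```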
4.2.0
1 year ago
## Version 4.2.0

### App Center
* **[Feature]** Add an `AppCenter.networkRequestsAllowed` API to block any network requests without disabling the SDK (see the sketch below).
* **[Fix]** Fix umbrella header warnings in Xcode 12.5.

### App Center Crashes
* **[Fix]** Fix error nullability in the crashes delegate.
* **[Fix]** Merge the device information from the crash report with the SDK's device information to fix some time-sensitive cases where the reported application information was incorrect.
* **[Improvement]** Update PLCrashReporter to 1.9.0.

### App Center Distribute
* **[Fix]** Fix linking the `AuthenticationServices` framework.
* **[Fix]** Fix a warning in the Distribute module that prevented using the SDK as source code on Xcode 12.5.
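A brief sketch of the network-gating flag described above; per the App Center docs, while requests are disallowed the SDK keeps collecting logs locally and sends them once requests are allowed again:

```swift
import AppCenter

// Block all App Center network traffic, e.g. until the user has consented.
AppCenter.networkRequestsAllowed = false

// Later, once sending data is acceptable again:
AppCenter.networkRequestsAllowed = true
```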
4.1.1
2 years ago
## Version 4.1.1

### App Center
* **[Improvement]** Use ASWebAuthenticationSession for authentication on iOS 12 or later.
* **[Fix]** Fix Objective-C property attribute warnings in MRC projects.

### App Center Distribute
* **[Fix]** Fix `kMSACUpdateTokenRequestIdKey` never getting removed.
iOS macOS tvOS
microsoft/LocalizedStringKit v0.2.4
Generate .strings files directly from your code
โญ๏ธ 290
๐Ÿ•“ 6 weeks ago
๐Ÿ”– Release Notes

Releases

v0.2.5
2 years ago
microsoft/health-data-sync 1.0.0
HealthDataSync is a Swift library that simplifies and automates the export of HealthKit data to an external store.
โญ๏ธ 22
๐Ÿ•“ 3 years ago
๐Ÿ”– Release Notes

Releases

Initial Release
3 years ago
iOS
microsoft/healthkit-to-fhir 1.0.3
The HealthKitToFhir Swift Library provides a simple way to create FHIR Resources from HKObjects.
โญ๏ธ 13
๐Ÿ•“ 2 years ago
๐Ÿ”– Release Notes

Releases

1.0.3
2 years ago
Fixed an issue with an improper OS version string.
1.0.2
2 years ago
Added support for more HealthKit types (see the sketch below for requesting one of them):
- HKQuantityTypeIdentifierRestingHeartRate
- HKQuantityTypeIdentifierHeartRateVariabilitySDNN
- HKQuantityTypeIdentifierWalkingHeartRateAverage
- HKQuantityTypeIdentifierAppleExerciseTime
- HKQuantityTypeIdentifierAppleStandTime
- HKQuantityTypeIdentifierActiveEnergyBurned
- HKQuantityTypeIdentifierEnvironmentalAudioExposure
- HKQuantityTypeIdentifierDietaryEnergyConsumed
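As a small illustration of consuming one of the newly supported types, here is a Swift sketch of requesting read access to resting heart rate with HealthKit (this is plain HealthKit code, not the HealthKitToFhir API; how samples are then passed to the library's factory is left out):

```swift
import HealthKit

let healthStore = HKHealthStore()

// Resting heart rate is one of the quantity types newly supported in 1.0.2.
if let restingHeartRate = HKObjectType.quantityType(forIdentifier: .restingHeartRate) {
    healthStore.requestAuthorization(toShare: nil, read: [restingHeartRate]) { granted, _ in
        guard granted else { return }
        // Samples of this type can now be queried and handed to the
        // library for conversion to FHIR Observations.
    }
}
```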
1.0.1
3 years ago
Support for new HealthKit types:
- Blood Oxygen Saturation
- Weight
- Height
- Temperature
- Respiratory Rate
Initial Release
3 years ago
iOS
microsoft/iomt-fhir-client 1.0.0
The IomtFhirClient Swift library simplifies sending IoMT (Internet of Medical Things) data to an IoMT FHIR Connector for Azure endpoint.
โญ๏ธ 7
๐Ÿ•“ 3 years ago
๐Ÿ”– Release Notes

Releases

Initial Release
3 years ago
iOS
microsoft/CorrelationVector-Swift 1.0.0
CorrelationVector-Swift provides the Swift implementation of the CorrelationVector protocol for tracing and correlation of events through a distributed system.
โญ๏ธ 3
๐Ÿ•“ 27 weeks ago
๐Ÿ”– Release Notes

Releases

Initial release
3 years ago

Swiftpack is being maintained by Petr Pavlik | @ptrpavlik | @swiftpackco | API | Analytics