Keeping up with the trend of recent years, Deep Learning in 2020 continued to be one of the fastest-growing fields, darting straight ahead into the Future of Work. The developments were manifold and on multiple fronts. Here’s a rundown of the prominent highlights.
OpenAI, the AI research organization, declared PyTorch its new standard Deep Learning framework, expecting it to increase research productivity at scale on GPUs. With PyTorch backing it, OpenAI cut its generative-modeling iteration time from weeks to days.
Megvii Technology, a China-based startup, said that it would make its Deep Learning framework, MegEngine, open-source. MegEngine is part of Megvii’s proprietary AI platform Brain++. It can train computer-vision models at scale and help developers the world over build AI solutions for commercial and industrial use.
A new Keras release cleared up the confusion about incompatibilities and differences between tf.keras and the standalone Keras package. Now, a single Keras implementation – tf.keras – is the one in operation.
Huawei Technologies open-sourced MindSpore, a Deep Learning training framework for mobile, edge, and cloud scenarios. The framework is lightweight and positions itself as a direct competitor to TensorFlow and PyTorch.
It is scalable across devices and requires about 20 percent less code for tasks like Natural Language Processing (NLP). It also supports parallel training, reduces training time across different hardware, and protects sensitive data.
MindSpore doesn’t process any data itself; it ingests only the pre-processed model and gradient information, maintaining the robustness of the model.
IBM’s Deep Learning framework CogMol will help researchers accelerate the search for cures for infectious diseases like COVID-19. The new framework addresses the challenges in current “generative AI models to create novel peptides, proteins, drug candidates, and materials.”
ABBYY announced the launch of NeoML, an open-source, cross-platform library for building, training, and deploying ML models. It is optimized for applications running in the cloud, on desktops, and on mobile devices, and supports both deep learning and classical machine-learning algorithms.
Engineers at ABBYY use it for computer vision and NLP tasks, including image preprocessing, classification, OCR, document layout analysis, and data extraction from structured and unstructured documents.
According to the company, “NeoML offers 15-20% faster performance for pre-trained image processing models running on any device.” The library is designed as a comprehensive tool for processing and analyzing multi-format data (video, images, etc.).
For years, network scientists have grappled with one important problem: identifying the key players – the optimal set of nodes that most influence a network’s functionality.
In June this year, researchers at the National University of Defense Technology in China, the University of California, Los Angeles (UCLA), and Harvard Medical School (HMS) published a deep reinforcement learning (DRL) framework called FINDER (FInding key players in Networks through DEep Reinforcement learning) in Nature Machine Intelligence. Trained on a small set of synthetic networks and then applied to real-world scenarios, the framework can identify key players in complex networks.
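FINDER learns its removal strategy from data, but the underlying objective can be illustrated with a much simpler greedy baseline: repeatedly remove the node whose deletion most shrinks the largest connected component. The function names below are my own; this is a sketch of the problem FINDER tackles, not the FINDER algorithm itself.

```python
def largest_component(nodes, edges):
    """Size of the largest connected component (simple DFS)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            n = stack.pop()
            size += 1
            for m in adj[n] - seen:
                seen.add(m)
                stack.append(m)
        best = max(best, size)
    return best


def greedy_key_players(nodes, edges, k):
    """Greedily pick k nodes whose removal most fragments the network."""
    nodes, edges, picked = set(nodes), set(edges), []
    for _ in range(k):
        # Choose the node minimizing the remaining largest component.
        best_node = min(
            nodes,
            key=lambda c: largest_component(
                nodes - {c},
                {(u, v) for u, v in edges if c not in (u, v)},
            ),
        )
        picked.append(best_node)
        nodes.remove(best_node)
        edges = {(u, v) for u, v in edges if best_node not in (u, v)}
    return picked
```

On a star network, the hub is correctly identified as the key player. This brute-force greedy approach scales poorly, which is precisely the gap a learned policy like FINDER aims to close.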
Scikit-learn’s new release includes some key new features and fixes bugs from the previous one. Its major features include: generalized linear models and Poisson loss for gradient boosting; a rich visual representation of estimators; scalability and stability improvements to KMeans; improvements to the histogram-based gradient boosting estimators; and sample-weight support for Lasso and ElasticNet.
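The Poisson loss mentioned above targets count data; up to constants, the quantity such estimators optimize is the mean Poisson deviance. A minimal NumPy sketch (the function name is mine, but the formula is standard):

```python
import numpy as np


def mean_poisson_deviance(y_true, y_pred):
    """Mean Poisson deviance: (2/n) * sum(y*log(y/ŷ) - y + ŷ).

    y_pred must be strictly positive; the y*log(y/ŷ) term is
    taken as 0 where y_true == 0.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Avoid log(0) for zero counts: the term vanishes there anyway.
    safe = np.where(y_true > 0, y_true, 1.0)
    log_term = np.where(y_true > 0, y_true * np.log(safe / y_pred), 0.0)
    return 2.0 * np.mean(log_term - y_true + y_pred)
```

The deviance is zero for a perfect fit and positive otherwise; scikit-learn exposes the same metric as `sklearn.metrics.mean_poisson_deviance`.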
The team at Amazon added key programming frameworks to its book, Dive into Deep Learning, which is written in Jupyter notebooks and integrates mathematics, text, and runnable code. It is a fully open-source living document, with automatically updated HTML, PDF, and notebook versions.
While the book was originally written for MXNet, its authors have also added PyTorch and TensorFlow implementations.
Amazon’s book is a great open-source resource for students, developers, and scientists interested in Deep Learning.
This October, an international research team from TU Wien (Vienna), IST Austria, and MIT (USA) announced a new artificial intelligence system. Inspired by the brains of tiny animals like threadworms, this new-age AI system can control a vehicle with just a few artificial neurons.
The solution offers remarkable benefits over previous Deep Learning models. A departure from the infamous “black box”, it can handle noisy inputs and is simple to interpret. The model was published in Nature Machine Intelligence.
MIScnn, an open-source Python framework for medical image segmentation with convolutional neural networks and Deep Learning, was announced.
Its intuitive APIs enable the setup of a medical image segmentation pipeline in just a few lines of code. MIScnn also provides data I/O, preprocessing, patch-wise analysis, data augmentation, metrics, a library of state-of-the-art deep learning models with model utilization, and automatic evaluation.
TensorFlow’s new version comes with easier data loading, faster preprocessing, and better tools for diagnosing and resolving input-pipeline bottlenecks, improving resource utilization and, for advanced users, training speed. tf.data also allows users to reuse a pipeline’s output on a different training run, which frees up additional CPU time.
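The idea behind reusing pipeline output across runs can be sketched framework-free: preprocess once, persist the result, and load it on later runs instead of recomputing. The helper below is hypothetical and only illustrates the concept; tf.data implements this natively and far more efficiently.

```python
import os
import pickle


def load_or_preprocess(cache_path, preprocess_fn, raw_data):
    """Return preprocessed data, recomputing only when no cache exists.

    First run: execute preprocess_fn and persist its output.
    Later runs: load the cached output, freeing CPU time for training.
    """
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    processed = preprocess_fn(raw_data)
    with open(cache_path, "wb") as f:
        pickle.dump(processed, f)
    return processed
```

The trade-off is disk space for CPU time, which pays off whenever preprocessing, rather than the model, is the bottleneck.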
The TF Profiler adds a memory profiler to visualize the model’s memory usage, and a Python tracer to trace Python function calls in the model. It also offers experimental support for the new Keras Preprocessing Layers API.
PyTorch’s new release includes many new APIs, among them “support for NumPy-compatible FFT operations, profiling tools, and major updates to both distributed data parallel (DDP) and remote procedure call (RPC)-based distributed training.”
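Since PyTorch’s torch.fft module is designed to mirror numpy.fft’s semantics, the behavior it matches can be demonstrated with NumPy alone: a forward transform followed by its inverse recovers the original signal, and the zero-frequency bin carries the signal’s sum.

```python
import numpy as np

# A short real-valued signal: two sine waves plus a constant offset,
# sampled at 64 evenly spaced points over one second.
t = np.linspace(0.0, 1.0, 64, endpoint=False)
signal = 1.0 + np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)

# Forward FFT, then inverse FFT: the round trip recovers the signal.
spectrum = np.fft.fft(signal)
recovered = np.fft.ifft(spectrum).real
assert np.allclose(recovered, signal)

# The DC (zero-frequency) bin holds the sum of the samples:
# the sines cancel over whole periods, leaving 64 * 1.0 from the offset.
assert np.isclose(spectrum[0].real, 64.0)
```

Code written against numpy.fft in this style ports to torch.fft with minimal changes, which is the point of the compatibility effort.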
As 2020 enters its last lap, we expect more new and impressive developments to crop up.
Mark Cuban said: “Artificial Intelligence, deep learning, machine learning — whatever you're doing if you don't understand it — learn it. Because otherwise you're going to be a dinosaur within 3 years.”
Cheers to diving deeper into Deep Learning!
Did we miss an important update? Share with us!