Advances in Deep Learning 2020
Nov 27, 2020

Keeping up with the trend of recent years, Deep Learning in 2020 continued to be one of the fastest-growing fields, darting straight ahead into the future of work. The developments were manifold and came on multiple fronts. Here’s a rundown of the prominent highlights.

January 2020

OpenAI announced PyTorch as its standard Deep Learning framework

OpenAI, the AI research organization, adopted PyTorch as its new standard Deep Learning framework, citing gains in research productivity at scale on GPUs. After switching to PyTorch, OpenAI cut its generative modeling iteration time from weeks to days.
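
As an illustration of the workflow involved (a generic sketch of the standard PyTorch pattern, not OpenAI’s code), here is a single GPU-backed training step:

```python
import torch
import torch.nn as nn

# A toy model and one optimization step: the standard PyTorch
# pattern, shown for illustration only.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64, device=device)          # dummy input batch
y = torch.randint(0, 10, (32,), device=device)  # dummy labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # autograd computes gradients
optimizer.step()  # update parameters
print(f"loss: {loss.item():.4f}")
```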

March 2020

Megvii made its Deep Learning AI framework open-source

Megvii Technology, a China-based startup, said that it would make its Deep Learning framework open-source. MegEngine is a part of Megvii’s proprietary AI platform Brain++. It can train computer vision models at scale and help developers around the world build AI solutions for commercial and industrial use.

Keras 2.4.0 was released

The new release ended the confusion caused by incompatibilities and differences between tf.keras and the standalone Keras package: the standalone package now simply redirects to tf.keras, leaving a single Keras implementation.
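
In practice, every code path now goes through tf.keras. A minimal sketch of the canonical import and model definition:

```python
import tensorflow as tf

# With Keras 2.4, the standalone "keras" package simply redirects to
# tf.keras, so this is the single supported way to build Keras models.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```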

Huawei Technologies Co Ltd open-sourced ‘MindSpore’

Huawei Technologies open-sourced MindSpore, a Deep Learning training framework for mobile, edge, and cloud scenarios. The framework is lightweight and positions itself as a direct competitor to TensorFlow and PyTorch.

It is scalable across devices and requires about 20 percent fewer lines of code for tasks like Natural Language Processing (NLP). It also supports parallel training, saves training time across different hardware, and helps preserve sensitive data.

MindSpore does not process user data directly; it ingests only pre-processed model and gradient information, which helps protect sensitive data while maintaining the robustness of the model.
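
For a feel of the Python API, here is a minimal sketch of a small MindSpore network, based on the framework’s documented nn.Cell and nn.Dense interfaces (a toy example, not Huawei’s code):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, context

# Toy MindSpore network, based on the documented nn.Cell API.
context.set_context(mode=context.PYNATIVE_MODE)  # eager-style execution

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(16, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(32, 2)

    def construct(self, x):  # the forward pass
        return self.fc2(self.relu(self.fc1(x)))

net = Net()
out = net(Tensor(np.random.randn(4, 16).astype(np.float32)))
print(out.shape)  # (4, 2)
```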

April 2020

IBM’s CogMol accelerated therapeutic developments for COVID-19

IBM’s Deep Learning framework CogMol aims to help researchers accelerate therapies for infectious diseases like COVID-19. The framework addresses the challenges that current generative AI models face in creating “novel peptides, proteins, drug candidates, and materials.”

June 2020

ABBYY open-sourced NeoML, a framework for Deep Learning and algorithms

ABBYY announced the launch of NeoML, an open-source library for building, training, and deploying ML models. NeoML is a cross-platform framework optimized for applications running in the cloud, on desktops, and on mobile devices, and it supports both deep learning and classical machine learning algorithms.

Engineers at ABBYY use it for computer vision and NLP tasks, including image preprocessing, classification, OCR, document layout analysis, and data extraction from structured and unstructured documents.

According to ABBYY, “NeoML offers 15-20% faster performance for pre-trained image processing models running on any device.” The library is designed as a comprehensive tool for processing and analyzing multi-format data (video, images, etc.).

FINDER was published

For years, network scientists had grappled with one important problem: identifying the key players, the optimal set of nodes that most influence a network's functionality.

In June this year, researchers at the National University of Defense Technology in China, the University of California, Los Angeles (UCLA), and Harvard Medical School (HMS) published a deep reinforcement learning (DRL) framework called FINDER (Finding key players in Networks through Deep Reinforcement learning) in Nature Machine Intelligence. Trained on a small set of synthetic networks, the framework can then be applied to real-world scenarios to identify key players in complex networks.
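
FINDER’s learned policy is beyond the scope of a short snippet, but the problem it solves is easy to state in code. A crude greedy baseline (degree-based, not FINDER’s method) repeatedly removes the best-connected node and watches connectivity collapse:

```python
import networkx as nx

# The problem FINDER tackles: which nodes, if removed, most damage a
# network's connectivity? A naive greedy baseline for comparison.
def greedy_key_players(G, k):
    G = G.copy()
    picked = []
    for _ in range(k):
        node = max(G.degree, key=lambda nd: nd[1])[0]  # highest-degree node
        picked.append(node)
        G.remove_node(node)
    # Size of the giant component left after removing the k "key players"
    return picked, len(max(nx.connected_components(G), key=len))

G = nx.barabasi_albert_graph(200, 2, seed=0)
players, gcc_size = greedy_key_players(G, 10)
print("key players:", players)
print("largest remaining component:", gcc_size)
```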

August 2020

scikit-learn released version 0.23

The new release adds several key features and fixes bugs from the previous version. Major additions include: generalized linear models and Poisson loss for gradient boosting; a rich visual representation of estimators; scalability and stability improvements to KMeans; improvements to the histogram-based gradient boosting estimators; and sample-weight support for Lasso and ElasticNet.
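
Two of these additions in a short sketch: fitting the new Poisson generalized linear model, and passing sample weights to Lasso:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor, Lasso

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = rng.poisson(lam=np.exp(X @ [0.3, -0.2, 0.1]))  # non-negative count target

# Generalized linear model with Poisson loss (new in 0.23)
glm = PoissonRegressor().fit(X, y)
print("Poisson GLM coefficients:", glm.coef_)

# sample_weight support for Lasso (also new in 0.23)
w = rng.uniform(0.5, 1.5, size=100)
lasso = Lasso(alpha=0.05).fit(X, y, sample_weight=w)
print("weighted Lasso coefficients:", lasso.coef_)
```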

September 2020

Amazon published Dive into Deep Learning

Team Amazon added key programming frameworks to its book, Dive into Deep Learning. Written as Jupyter notebooks, the book integrates mathematics, text, and runnable code. It is a fully open-source living document whose HTML, PDF, and notebook versions are updated automatically.

While the book was originally written for MXNet, its authors have also added PyTorch and TensorFlow implementations.

Amazon’s book is a great open-source resource for students, developers, and scientists interested in Deep Learning.

October 2020

A groundbreaking model was published in Nature Machine Intelligence

This October, an international research team from TU Wien (Vienna), IST Austria, and MIT (USA) announced a new artificial intelligence system. Modeled on the nervous systems of tiny animals like threadworms, the new AI system can control a vehicle with just a few artificial neurons.

The solution offers remarkable benefits over previous Deep Learning models: unlike the infamous “black box”, it handles noisy inputs well and its behavior is simple to understand. The model was published in Nature Machine Intelligence.

MIScnn was released

MIScnn, an open-source Python framework for medical image segmentation with convolutional neural networks and Deep Learning, was announced.

Its intuitive APIs enable the setup of a medical image segmentation pipeline in just a few lines of code. MIScnn also provides data I/O, preprocessing, patch-wise analysis, data augmentation, metrics, a library of state-of-the-art deep learning models and model utilities, and automatic evaluation.
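
Patch-wise analysis is the trick that makes large 3D scans fit into GPU memory. A generic NumPy sketch of that decomposition (illustrative only, not MIScnn’s own API):

```python
import numpy as np

# Slice a 3D volume into non-overlapping fixed-size patches, the kind
# of decomposition a patch-wise segmentation pipeline feeds to a CNN.
def extract_patches(volume, patch_shape):
    dz, dy, dx = patch_shape
    Z, Y, X = volume.shape
    patches = []
    for z in range(0, Z - dz + 1, dz):
        for y in range(0, Y - dy + 1, dy):
            for x in range(0, X - dx + 1, dx):
                patches.append(volume[z:z+dz, y:y+dy, x:x+dx])
    return np.stack(patches)

scan = np.random.rand(64, 64, 64)  # stand-in for a CT/MRI volume
patches = extract_patches(scan, (32, 32, 32))
print(patches.shape)  # (8, 32, 32, 32)
```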

TensorFlow 2.3 was released

The new version brings easier data loading, faster preprocessing, and new tf.data tooling for diagnosing and resolving input-pipeline bottlenecks, improving resource utilization. For advanced users, tf.data can also save the output of a preprocessing pipeline and reuse it in a later training run, which frees up CPU time and improves training speed.
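
The reuse mechanism is the snapshot transformation, still experimental in TF 2.3. A minimal sketch, assuming a preprocessing step expensive enough to be worth caching:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(1000)
ds = ds.map(lambda x: x * 2)  # stand-in for costly preprocessing

# Materialize the preprocessed dataset to disk; later runs read the
# snapshot instead of recomputing the map (experimental in TF 2.3).
ds = ds.apply(tf.data.experimental.snapshot("/tmp/ds_snapshot"))
ds = ds.shuffle(100).batch(32)

for batch in ds.take(1):
    print(batch.shape)  # (32,)
```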

The TF Profiler adds a memory profiler to visualize the model’s memory usage, and a Python tracer to trace Python function calls in the model. It also offers experimental support for the new Keras Preprocessing Layers API.

PyTorch 1.7.0 was released

The release brings many new features, including “support for NumPy-compatible FFT operations, profiling tools, and major updates to both distributed data parallel (DDP) and remote procedure call (RPC)-based distributed training.”
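
For example, the NumPy-compatible FFT operations live in a dedicated torch.fft module, which in 1.7 has to be imported explicitly:

```python
import math
import torch
import torch.fft  # in 1.7 the new fft module needs an explicit import

# torch.fft.fft mirrors numpy.fft.fft and returns a complex tensor.
signal = torch.sin(torch.linspace(0, 8 * math.pi, 256))
spectrum = torch.fft.fft(signal)
print(spectrum.dtype, spectrum.shape)  # torch.complex64 torch.Size([256])
```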

November and Beyond

As 2020 enters its last lap, we expect more new and impressive developments to crop up.

Mark Cuban said: “Artificial Intelligence, deep learning, machine learning — whatever you're doing if you don't understand it — learn it. Because otherwise you're going to be a dinosaur within 3 years.”

Cheers to diving deeper into Deep Learning!

Did we miss an important update? Share with us!
