Wiley Wang, John Inacay, and Mike Wang

One of the emerging concepts in the field of deep learning is Few Shot Learning. If you’ve been studying Machine Learning or Deep Learning, you’ve probably heard this term before. But what is it? How does it actually work? Below, we’ll dive into the topic and show how to build a One Shot Learning system using Twin Networks. You can find our Twin Network implementation in PyTorch here.
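As a preview of where we’re headed, here is a minimal sketch of the twin idea: a single encoder with shared weights embeds both inputs, and similarity is read off the distance between the embeddings. The layer sizes and the `TwinNetwork` name are our own illustrative choices, not the exact architecture from our implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinNetwork(nn.Module):
    """One shared encoder applied to both inputs; similarity from embedding distance."""
    def __init__(self, embedding_dim=64):
        super().__init__()
        # A single encoder whose weights are reused for both inputs
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, embedding_dim),
        )

    def forward(self, x1, x2):
        # Both branches share the same weights -- that is what makes the network a "twin"
        e1, e2 = self.encoder(x1), self.encoder(x2)
        # A small distance suggests the two inputs belong to the same class
        return F.pairwise_distance(e1, e2)
```

Trained with a contrastive or triplet loss, a network like this can compare a query image against a single labeled example per class, which is what makes one shot classification possible.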

What is Few Shot Learning?

First of all, what is and why should you want to…


John Inacay, Mike Wang, and Wiley Wang

Reinforcement learning trains models to make optimal decisions. One realm where we can put that decision-making prowess to the test is computer games, which pit humans against machines on frame-by-frame reaction tasks. When we train a good model with reinforcement learning, a machine can play like a pro. At the core of many modern reinforcement learning algorithms is the policy gradient. To understand this family of algorithms, we will dive deeper into the basic policy gradient algorithm.

OpenAI Gym

OpenAI Gym provides a set of toolkits for reinforcement learning. It has…
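To make the basic policy gradient concrete, here is a minimal REINFORCE sketch on Gym’s CartPole environment. It assumes the classic Gym API (where `env.step` returns four values); the network size, learning rate, and episode count are illustrative choices, not tuned values.

```python
import gym
import torch
import torch.nn as nn

# A simple policy network: state -> action probabilities
env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    state, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        probs = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done, _ = env.step(action.item())
        rewards.append(reward)
    # Discounted returns, computed backwards from the end of the episode
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + 0.99 * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    # Policy gradient: increase log-probability of actions that led to high returns
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```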


John Inacay, Mike Wang, and Wiley Wang (All authors contributed equally)

In the field of Machine Learning, many practitioners started out by learning and understanding the classical ML algorithms. However, lack of practice can dull your mastery of them. We would like to start a series of blog posts to help refresh these algorithms.

You may have learned linear regression in college; now that several years have passed, you only vaguely remember what it is. Meanwhile, Deep Learning is everywhere in the field when solving machine learning problems. In a traditional deep learning pipeline, we’re used to collecting large amounts of data…
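As a quick taste of the refresher flavor this series is going for, here is linear regression in a few lines via ordinary least squares, on synthetic data of our own choosing:

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 1, size=100)

# Append a bias column and solve least squares: w = argmin ||X_b w - y||^2
X_b = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(X_b, y, rcond=None)
print(f"slope ≈ {w[0]:.2f}, intercept ≈ {w[1]:.2f}")  # recovers roughly 3 and 2
```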


Mike Wang, John Inacay, and Wiley Wang (All authors contributed equally)

If you’ve been using online translation services, you may have noticed that translation quality has significantly improved in recent years. Since its introduction in 2017, the Transformer deep learning model has rapidly replaced the recurrent neural network (RNN) as the architecture of choice for Natural Language Processing (NLP), with models like OpenAI’s Generative Pre-trained Transformer (GPT) and Google’s Bidirectional Encoder Representations from Transformers (BERT) leading the way. With the Transformer’s parallelization ability…


John Inacay, Mike Wang, and Wiley Wang (All authors contributed equally)

  1. Why are Transformers Important?

Transformers have taken the worlds of Natural Language Processing (NLP) and Deep Learning by storm. This neural network architecture was introduced in the 2017 paper Attention Is All You Need as a model built entirely on attention mechanisms, and it has quickly become a dominant technique in NLP. Google’s BERT and OpenAI’s GPT-3 are both state-of-the-art language models predominantly based on the Transformer architecture.

Before the Transformer architecture was introduced, NLP practitioners used many different problem-specific models for each NLP…
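The core operation behind all of these models is scaled dot-product attention, defined in Attention Is All You Need as softmax(QKᵀ/√d_k)V. Here is a small, self-contained sketch; the tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # (..., seq_q, seq_k)
    weights = F.softmax(scores, dim=-1)            # each query attends over all keys
    return weights @ V                             # weighted sum of the values

# Self-attention over a toy sequence of 5 tokens with 64-dimensional embeddings
x = torch.randn(1, 5, 64)
out = scaled_dot_product_attention(x, x, x)  # Q = K = V = x
print(out.shape)  # torch.Size([1, 5, 64])
```

Because every position attends to every other position in a single matrix multiplication, the whole sequence can be processed in parallel, unlike an RNN’s step-by-step recurrence.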


Mike Wang and Wiley Wang

Why Neural Architecture Search

To apply deep learning in the real world, researchers constantly strive to improve the accuracy and efficiency of their neural networks. Efficiency, especially memory usage and computation speed, is important in engineering because it directly relates to the cost of the system. Recent progress has appeared in compact architectures such as MobileNet and SqueezeNet, designed for computing on mobile and edge devices. Among them, EfficientNet has been able to achieve accuracy on par with state-of-the-art complex models but with a 3–10x reduction in parameters. …
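EfficientNet gets there with compound scaling: depth, width, and input resolution are grown together from a single coefficient φ. Below is a sketch using the base coefficients reported in the EfficientNet paper (α = 1.2, β = 1.1, γ = 1.15); the official implementation rounds these per variant, so treat this as an approximation:

```python
# EfficientNet-style compound scaling (base coefficients from Tan & Le, 2019)
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution; alpha * beta^2 * gamma^2 ≈ 2

def compound_scale(phi):
    """Scale depth, width, and resolution together with one coefficient phi."""
    return {
        "depth": ALPHA ** phi,       # more layers
        "width": BETA ** phi,        # more channels per layer
        "resolution": GAMMA ** phi,  # larger input images
    }

for phi in range(4):  # roughly the B0 through B3 variants
    print(phi, compound_scale(phi))
```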


Wiley Wang and Mike Wang

Intro

Generative Adversarial Networks (GANs) are a class of deep learning models fascinating even to deep learning researchers and engineers. They have drawn mainstream media attention through deepfake images and videos, aging apps, and beautification apps. Seeing completely generated images of humans is quite an astonishing experience. GANs are so successful that their hard-to-distinguish images and voices are posing problems in computer ethics and security.

Many of these applications, though, require large amounts of training data, and several efforts have been made to reduce that dependency. A recent paper, which won Best Paper…
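For context, the adversarial setup itself fits in a few lines: a generator learns to fool a discriminator that is simultaneously learning to tell real samples from fakes. The toy 1-D data, network sizes, and learning rates below are our own illustrative choices:

```python
import torch
import torch.nn as nn

# Toy generator G (noise -> sample) and discriminator D (sample -> real/fake probability)
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data drawn from N(3, 0.5)
z = torch.randn(64, 16)                # noise input for the generator

# Discriminator step: push D(real) toward 1 and D(fake) toward 0
fake = G(z).detach()  # detach so this step does not update G
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: push D(G(z)) toward 1, i.e., fool the discriminator
loss_g = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

With limited data the discriminator tends to overfit the real samples, which is one reason reducing the training data dependency is an active research direction.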

Deep Gan Team

We’re a team of Machine Learning Engineers exploring and researching deep learning technologies.
