Title: MetaFlow: A Meta Learning With Normalizing Flows Approach For Few Shots Learning
Abstract: Humans can generalize and learn new concepts quickly: given only a few examples, they adapt to new situations by combining knowledge from prior experience with small amounts of new evidence. In contrast, most machine learning systems have two distinct phases: training, in which model parameters are updated using data, and testing, in which the model is deployed as a decision-making engine. Meta learning, or learning to learn, has attracted growing interest in recent years. Whereas traditional approaches to artificial intelligence solve a given problem from scratch using a standard learning algorithm, meta learning aims to improve the learning algorithm itself. In this paper, we consider meta learning problems in which there is a distribution of tasks, and we would like to train a model that performs well and learns quickly when presented with unseen tasks sampled from this distribution. We propose an approach that enhances gradient-based meta learning algorithms, achieving a mean accuracy of 62% with only 3 training iterations on tasks sampled from the Omniglot dataset.
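The gradient-based meta learning setup the abstract refers to (a distribution of tasks, plus an initialization that adapts in only a few gradient steps) can be sketched minimally. This is not the paper's MetaFlow method; it is an illustrative Reptile-style loop on toy 1-D linear regression tasks, and all function names, task definitions, and hyperparameters here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each toy task is a 1-D regression problem y = a * x with a
    # task-specific slope a drawn from the task distribution.
    return rng.uniform(0.5, 1.5)

def loss_grad(w, a, n=10):
    # Gradient of the mean-squared error of the model y_hat = w * x
    # on a fresh batch of data sampled from the task with slope a.
    x = rng.uniform(-1.0, 1.0, n)
    y = a * x
    return np.mean(2 * (w * x - y) * x)

def meta_train(meta_iters=200, inner_steps=3, inner_lr=0.1, meta_lr=0.5):
    w = 0.0  # the meta-learned initialization
    for _ in range(meta_iters):
        a = sample_task()
        w_task = w
        # Inner loop: a few gradient steps adapt the model to the task
        # (3 steps, echoing the abstract's "3 training iterations").
        for _ in range(inner_steps):
            w_task -= inner_lr * loss_grad(w_task, a)
        # Outer (Reptile) update: move the initialization toward the
        # task-adapted weights, so future tasks adapt faster.
        w += meta_lr * (w_task - w)
    return w
```

After meta-training, `w` sits near the center of the task distribution, so a few inner-loop steps suffice to fit any sampled task; this is the behavior gradient-based meta learners exploit.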
Publication Year: 2021
Publication Date: 2021-12-15
Language: en
Type: article
Indexed In: ['crossref']