Artificial intelligence has driven advances and breakthroughs across a wide variety of industries, including transportation, finance, healthcare, education, and sports. However, AI still faces several technological barriers:
- It carries a hefty environmental cost.
- It demands substantial computational resources.
- It consumes a large amount of bandwidth when models run in the cloud.
In this article, we will explore why AI has these disadvantages, how Tiny AI can address them, and the applications Tiny AI may have in the future.
Ecological and Technological Disadvantages of AI
A relatively unknown fact about AI is that training a model well can emit a huge amount of carbon dioxide into the atmosphere. An article in the MIT Technology Review describes how “the process can emit more than 626,000 pounds of carbon dioxide equivalent — nearly five times the lifetime emissions of the average American car”. This is because training an AI can consume a substantial amount of energy, leaving a correspondingly large carbon footprint.
Training an AI consumes so many computational resources because deep learning algorithms must process immense amounts of data. For example, for a natural language processing (NLP) model to understand the meanings of words and sentences, it may first need to be trained on billions of articles. These heavy computational demands can reduce the efficiency of AI applications and place limits on their privacy.
Cloud services such as Google Cloud, AWS, and Azure reduce the memory and computational power required locally for deep learning tasks. However, one trade-off of training an AI in the cloud is that it requires a large amount of bandwidth.
How Tiny AI can overcome these obstacles
Tiny AI aims to shrink models and their training algorithms so that they consume less computational power and memory while sacrificing as little accuracy as possible. Researchers aim to do this in part by approximating the outputs of time-consuming computations and simulations.
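To make the shrinking idea concrete, here is a minimal sketch of post-training quantization, one common way to compress a model: 32-bit floating-point weights are mapped to 8-bit integers plus a single scale factor, trading a small amount of precision for a model roughly a quarter of the size. The weight values and function names below are illustrative assumptions, not part of any particular framework.

```python
def quantize(weights, bits=8):
    """Map float weights to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax     # one scale for the whole layer
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the compact integer form."""
    return [q * scale for q in q_weights]

weights = [0.81, -0.53, 0.02, -1.27, 0.64]          # hypothetical layer weights
q, scale = quantize(weights)
approx = dequantize(q, scale)

# The integers fit in 8 bits instead of 32, and the recovered weights
# stay close to the originals.
max_err = max(abs(a - w) for a, w in zip(approx, weights))
```

Real deployments apply the same idea per layer (or per channel) across millions of weights, which is where the memory and bandwidth savings become significant.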
New hardware has also been designed to better handle computationally heavy tasks. This lets devices perform those tasks locally rather than in the cloud, reducing the strain on cloud services as well as the privacy and security concerns of transferring data to the cloud in the first place.
Techniques that allow for more efficient data usage, such as dataset compression and unsupervised learning, have also been used to increase the efficiency of AI systems.
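A related efficiency technique is magnitude pruning: weights close to zero contribute little to a model's output, so they can be dropped and the remainder stored sparsely. The sketch below is a toy illustration with made-up weights and a made-up threshold, not a production method.

```python
def prune(weights, threshold=0.1):
    """Keep only the (index, value) pairs whose magnitude exceeds the threshold."""
    return [(i, w) for i, w in enumerate(weights) if abs(w) > threshold]

weights = [0.9, 0.03, -0.7, 0.01, 0.5, -0.02, 0.08, 1.1]
sparse = prune(weights)
# Only the 4 large weights survive, halving what must be stored here.
```

In practice, pruned networks are usually fine-tuned afterward to recover any accuracy lost when the small weights are removed.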
The Applications of Tiny AI
Tiny AI will enable simpler, more compact devices, such as smartphones, to run complex models without needing to connect to the cloud. For example, in May, Google found a way to run Google Assistant locally on smartphones, and with iOS 13, Apple runs Siri’s speech recognition locally on iPhones.
In healthcare, Tiny AI can deliver faster results for personalized medicine and treatments. It can also enable quicker reactions in self-driving cars and improve image processing in cameras, and it will likely find applications in agriculture, manufacturing, and logistics as well.
Although Tiny AI is still in its early stages of development, it is bound to have a substantial impact on a wide variety of areas in the near future.
Daniel Wu is a Student Ambassador in the Inspirit AI Student Ambassadors Program. Inspirit AI is a pre-collegiate enrichment program that exposes curious high school students globally to AI through live online classes. Learn more at https://www.inspiritai.com/.