- November 21, 2018 /
- by Claudio M. Camacho
The forecast for storage solutions is not cloudy, not foggy, and not edgy. It’s hybrid.
Unlike Asimov’s setting for I, Robot in a distant future, humankind is already surrounded by Artificial Intelligence (AI) that solves very complex problems. The AI in our reality does not take the shape of a giant robot, nor does it reside in laboratories running inside supercomputers. Instead, AI is taking over the world through the most familiar technologies, the ones closest to humans: phones, cars, furniture, venues, and even clothing. Yet, at a time when cloud computing seems to be the only way forward, why are consumer brands pushing new devices to the market equipped with built-in AI-grade computing power?
The centralized-decentralized-centralized rollercoaster
Technology has always been created and upgraded in cycles. In the 1980s, so-called “dumb terminals” connected to remote servers did the heavy lifting. In the 1990s, we saw the rise of personal computing and a decentralized architecture. Since then, we’ve shifted back toward centralized processing with cloud computing, where edge devices rely on remote servers to do most of their work and to store the majority of the data they produce. Now the next wave is here: AI in all of its forms, which, for an important reason, does not run solely in the cloud.
The next wave—driven by machine learning
In 2017, Huawei introduced the Kirin 970, the first smartphone chipset with a dedicated neural processing unit (NPU) for on-device AI. Since then, all new iPhones and virtually every new flagship phone have shipped with AI and machine learning (ML) capabilities. These technologies are also starting to power new cars, smart buildings, and nearly every object that falls under the IoT umbrella.
Machine learning is the form of AI that has become pervasive, and it is a necessary evolutionary step in the computing paradigm. Instead of hand-coding huge, task-specific AI applications, machine learning requires relatively little code plus a training phase, and it can then tackle some of the most difficult and challenging tasks our world has ever seen. Thanks to ML models that “perceive and study” our reality and can be evaluated very quickly, we get statistically satisfactory answers to problems that would otherwise have taken years to solve.
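To make the “train, don’t hand-code” idea concrete, here is a toy sketch in plain Python: rather than writing the rule y = 2x + 1 by hand, we let gradient descent recover it from examples. The function name, learning rate, and data are illustrative assumptions, not anything from a specific ML library.

```python
# Toy illustration: fit y = w*x + b to examples by gradient descent,
# instead of hard-coding the rule. All numbers here are illustrative.

def train_linear(samples, lr=0.01, epochs=2000):
    """Fit y = w*x + b to (x, y) pairs with plain gradient descent."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Examples generated from the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train_linear(data)
print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

Real ML frameworks automate exactly this loop at massive scale; the principle, learning parameters from data instead of coding rules, is the same.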
Comparing the three types of computing architectures
In order to deliver a best-in-class experience on the consumer end, ML algorithms need to provide a near-immediate answer to a given task. There are three ways to architect consumer-grade AI: in the cloud, at the edge, or as a hybrid of the two. The table below summarizes the pros and cons of each.
Comparison between different ML deployment architectures: edge vs. cloud vs. hybrid.
ML data sets and models typically require a large amount of storage; some data sets run to several terabytes. This is a major challenge for both the edge and the cloud approach. On one hand, running everything on an edge device means the device needs a lot of raw computing power as well as very large storage. On the other hand, running everything in the cloud is strictly limited by bandwidth and network conditions. Even in a hybrid model, edge devices still need large local data sets and models in order to compute the necessary functions on-device.
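The bandwidth limitation above can be made concrete with a back-of-envelope calculation: for cloud inference, the input has to travel over the network before any computation starts. Every figure below (payload size, uplink speed, round-trip time, compute times) is an assumed, illustrative number, not a measurement from this article.

```python
# Back-of-envelope comparison of cloud vs. edge inference latency.
# All figures are illustrative assumptions.

def cloud_latency_ms(payload_mb, uplink_mbps, rtt_ms, server_ms):
    """Upload time + network round trip + server-side inference."""
    upload_ms = payload_mb * 8 / uplink_mbps * 1000  # MB -> Mbit -> ms
    return upload_ms + rtt_ms + server_ms

def edge_latency_ms(device_ms):
    """On-device inference has no network component at all."""
    return device_ms

# A 2 MB camera frame over a 10 Mbit/s uplink with 50 ms round trip:
cloud = cloud_latency_ms(payload_mb=2, uplink_mbps=10, rtt_ms=50, server_ms=20)
edge = edge_latency_ms(device_ms=80)
print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Under these assumptions the cloud path takes well over a second while the (slower) on-device chip answers in tens of milliseconds, which is why latency-sensitive features lean on the edge even when the cloud has more raw compute.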
Why a hybrid architecture comes out on top
With the advancement of current technology, the prevailing model will be one where cloud and edge work more tightly together: the cloud provides a baseline of computation and generalized models, while edge devices use local data sets and models to personalize and optimize results for a faster, better user experience. The future is certainly not cloudy, not foggy, and not edgy. It’s hybrid.
In 2017, Google launched a version of its ML library, TensorFlow, for edge devices. Called TensorFlow Lite, it runs compressed, quantized models on consumer devices with limited computing and storage capabilities. Since that release, Microsoft, Apple, and many other companies have begun building and releasing their own ML libraries for edge devices. The great thing about these libraries is that they let edge devices approach cloud-level ML performance by combining the best local resources with the power of the cloud.
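Why does compression matter so much here? The dominant trick, used by TensorFlow Lite’s post-training quantization among others, is storing weights as 8-bit integers instead of 32-bit floats, which shrinks a model roughly 4x. The parameter count below is an illustrative assumption for a mid-sized vision network, not a figure from the article.

```python
# Rough model-size arithmetic behind edge-friendly quantization.
# Parameter count is an illustrative assumption.

BYTES_FLOAT32 = 4  # full-precision weight
BYTES_INT8 = 1     # quantized weight

def model_size_mb(num_params, bytes_per_param):
    """Approximate on-disk size of the weights alone, in MiB."""
    return num_params * bytes_per_param / (1024 ** 2)

params = 25_000_000  # e.g. a mid-sized image-recognition network
full = model_size_mb(params, BYTES_FLOAT32)
quant = model_size_mb(params, BYTES_INT8)
print(f"float32: {full:.1f} MB, int8: {quant:.1f} MB")
```

A ~95 MB download becoming ~24 MB is the difference between a model that ships inside a phone app and one that does not, which is exactly the gap these edge libraries were built to close.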
More on-device computing means more edge storage requirements
This new hybrid architecture, where edge devices run AI and ML locally, is driving growing demand for computing and storage capabilities in consumer electronics. Already in 2018 you can buy a phone with 1 TB of storage, a capacity that was still respectable for a full desktop computer only a few years ago, and this is just the beginning. In the coming years, AI and ML will not run only on mobile phones: IoT devices, connected cars, and AR/VR vision systems will all leverage advances in ML. Each of these new use cases will compound the exponential need for local storage and computational speed.
The basis of machine learning is data: the more data processed, the better the chance of getting better results. All these new devices will therefore need to carry and process large amounts of data to refine their algorithms. In the years to come, I expect phones to move toward multi-terabyte storage. In addition, faster neural processing units (NPUs) will impose stiff requirements on on-device storage, which must perform better and more reliably than ever before. This opens a world of opportunity for storage manufacturers and software vendors to create state-of-the-art solutions that drive the machine learning revolution with scalable data engines for edge devices.
Car makers and device manufacturers: make sure your car and device storage is AI- and ML-ready.
Tuxera Flash File System ensures the highest performance for your data-heavy workloads.
- Machine Learning for Edge Devices — Western Digital: https://blog.westerndigital.com/machine-learning-edge-devices/
- Why Machine Learning on the Edge: https://towardsdatascience.com/why-machine-learning-on-the-edge-92fac32105e6
- Deploying ML models on the Edge: https://blog.algorithmia.com/machine-learning-and-mobile-deploying-models-on-the-edge/
- ML on IoT Edge by Google: https://iot.eetimes.com/machine-learning-on-edge-brings-ai-to-iot/
- ARM — State of the Art ML on edge: https://community.arm.com/processors/b/blog/posts/how-to-deploy-deep-learning-on-edge-devices-quickly
- TensorFlow for Poets: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/index.html#0
- Google EdgeTPU: https://cloud.google.com/edge-tpu/
- Machine Learning for Mobile Developers: https://firebase.google.com/products/ml-kit/