The application of AI in recent years has been nothing short of a revolution: one that has unfolded only partly and has yet to deliver the full promise of autonomy.
With customers increasingly using smart devices, businesses growing more aware of the need to embed technology in their processes, and corporate activity expanding, the use of AI around the world has increased considerably. And cloud computing has become the epicenter of this AI revolution.
Cloud computing has been positioned as a magic bullet for IT infrastructure challenges. From cost efficiency to improved accessibility, and from seamless manageability to easy implementation, cloud computing has become a vital factor in helping corporations stay competitive. However, it has its share of drawbacks too.
Vulnerability to attacks, downtime, dependence on network connectivity, bandwidth constraints, lack of redundancy, technical glitches: the list is quite long. This downside has accelerated the adoption of yet another model of computation and data storage. Edge computing, as it is called, can save bandwidth, ease constraints on data analysis, and improve response times.
According to reports, the share of enterprise-generated data created and processed at the edge is expected to rise from 10% to 75% by 2022, with digital health care, retail, and manufacturing businesses particularly likely to expand their use of edge computing. Clearly, edge computing represents a powerful paradigm shift, and it becomes even more potent when coupled with AI.
Edge AI describes an infrastructure in which AI models are processed locally, on devices at the edge of the network rather than in the cloud. In contrast to AI processes carried out in cloud-based data centers, edge AI requires little or no cloud infrastructure.
An edge AI model might be trained in the cloud and then deployed on an edge device, where it runs mostly without server infrastructure. Because such setups require only sensors and microprocessors, not an internet connection, edge AI can process data and make decisions in real time, within milliseconds.
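To make the pattern concrete, here is a minimal sketch of on-device inference in Python, assuming a model has already been trained in the cloud and converted to TensorFlow Lite. The model file name ("classifier.tflite") and the random input standing in for a sensor reading are illustrative assumptions, not details from this article.

```python
# A sketch of the train-in-the-cloud, run-at-the-edge pattern: a model
# already converted to TensorFlow Lite is loaded and run entirely on the
# device, with no network call in the inference path.
import time

import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight edge runtime

# "classifier.tflite" is a placeholder for a model trained in the cloud.
interpreter = Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for a real sensor reading (e.g., a camera frame or audio window).
shape = tuple(input_details[0]["shape"])
sample = np.random.random_sample(shape).astype(np.float32)

start = time.perf_counter()
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()  # inference runs locally on the device's CPU/NPU
prediction = interpreter.get_tensor(output_details[0]["index"])
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"prediction: {prediction}, latency: {elapsed_ms:.1f} ms")
```

The point of timing the loop is the one made above: nothing in the inference path touches the network, so latency is bounded only by the device's own compute.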
An example of edge AI is smart speakers such as Google's and Amazon's, which learn words and phrases via machine learning and store them locally on the device. When the user speaks to the assistant, the voice recording is sent to an edge network, where AI converts it to text and a response is generated. The response time here is usually under 400 milliseconds; without edge AI, a response would take a few seconds.
Other examples of edge AI include real-time traffic updates, facial recognition on smartphones, drones, security cameras, robots, video games, and wearable health-monitoring devices. The technology is valuable to many more industries as well.
It can give security cameras the intelligence to detect and process suspicious activity in real time, in contrast to the hours-long review cycles of traditional surveillance systems, making the service more efficient and less expensive.
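As a rough illustration of this kind of on-camera processing, the sketch below uses plain background subtraction from OpenCV as a stand-in for a real detection model; the camera index and the alert threshold are illustrative assumptions.

```python
# A sketch of how an edge-enabled camera might flag events locally instead
# of streaming all footage to a server for later review.
import cv2

MIN_AREA = 5000  # ignore small movements (in pixels); tune per camera

capture = cv2.VideoCapture(0)  # the device's own camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground mask: pixels that moved
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        # Only an event notification, not raw video, needs to leave the device.
        print("motion event detected; raising local alert")

capture.release()
```

The design point is bandwidth: the device decides locally what is worth reporting, so only small event messages cross the network.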
It can also increase the capacity of autonomous vehicles to process images and data in real time, detecting traffic signs, vehicles, pedestrians, and roads, thereby improving safety in transportation.
Edge AI can also be used in the industrial sector to reduce costs and improve safety, monitoring machinery for errors and defects in the production chain and compiling data on the entire process in real time. Analyzing medical images in emergency care is yet another application of edge AI.
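Returning to the industrial case, a simple way to flag machinery faults on-device is a rolling statistical check over a sensor stream. The sketch below applies a 3-sigma rule to a vibration-like reading; the window size, threshold, and simulated sensor values are illustrative assumptions.

```python
# A sketch of edge-side machinery monitoring: a rolling z-score over recent
# sensor readings flags deviations in real time, on the device itself.
import random
from collections import deque
from statistics import mean, stdev

WINDOW = 100           # number of recent readings kept on-device
THRESHOLD_SIGMA = 3.0  # flag readings more than 3 std devs from the mean

window = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True if the reading looks anomalous given recent history."""
    is_anomaly = False
    if len(window) >= 10:  # wait for some history before judging
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) > THRESHOLD_SIGMA * sigma:
            is_anomaly = True
    window.append(value)
    return is_anomaly

# Example: a noisy but steady signal with one spike, as a fake sensor feed.
random.seed(0)
readings = [1.0 + random.gauss(0, 0.05) for _ in range(50)] + [9.0] + [1.0] * 10
for i, reading in enumerate(readings):
    if check_reading(reading):
        print(f"anomaly at sample {i}: {reading}")
```

A real deployment would use a trained model rather than a fixed rule, but the structure is the same: the decision is made next to the machine, within the same production cycle.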
The deployment of 5G technology, with its very low latency and higher speeds for mobile data transmission, will make edge AI even more useful. In fact, the market for edge AI is expected to grow to $1.12 billion by 2023, up from just $355 million five years earlier.
The paradigm shift has apparently already begun, with collaborative projects under way to train professionals with STEM backgrounds in edge AI. Intel and Udacity, for instance, have partnered on the Intel Edge AI for IoT Developers Nanodegree program to train developers in deep learning and computer vision.
However, whether this technology will prove a seamless upgrade or carry its own reserve of threats and challenges is a question that remains unanswered. Whether its benefits will outweigh its shortcomings, or whether its loopholes will pave the way for yet another technological evolution, is something we will learn in the coming years.
Source: indiaai.gov.in