While other companies are racing to overhaul their businesses with artificial intelligence, Apple is using the technology to improve basic functions in its new products. The company unveiled a new line of iPhones and a new watch whose upgraded chip architectures power new AI features, without ever using the term “artificial intelligence” to describe them. The features meaningfully improve everyday tasks such as making calls and capturing better pictures.
Artificial intelligence has been quietly reshaping Apple’s main software products for months, even though the company barely mentioned it at its developer conference in June. Microsoft and Alphabet’s Google, by contrast, have set high expectations for how transformative their AI initiatives will be, and industry leaders have warned about the potential dangers of the unrestrained development of new tools such as generative AI.
The new chip inside the Series 9 Watch includes a four-core “Neural Engine” that can process machine learning tasks up to twice as fast as its predecessor. Neural Engine is Apple’s name for the AI-accelerating components of its chips.
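Developers do not program the Neural Engine directly; Apple’s Core ML framework decides where a model runs. As a rough illustration (the “Classifier” model class below is hypothetical, though the configuration API is Core ML’s real one), an app can hint that a model should use the Neural Engine like this:

```swift
import CoreML

// Ask Core ML to consider every available compute unit, including
// the Neural Engine. The OS makes the final placement decision.
let config = MLModelConfiguration()
config.computeUnits = .all  // CPU, GPU, and Neural Engine

do {
    // "Classifier" stands in for a model class generated from a .mlmodel file.
    let model = try Classifier(configuration: config)
    // Predictions on this model may now be dispatched to the Neural Engine.
} catch {
    print("Failed to load model: \(error)")
}
```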
Siri, Apple’s voice assistant, is 25% more accurate thanks to the AI components of the watch’s processor. Those machine learning components also let Apple introduce a new way for users to interact with the device: a “double tap” gesture. By tapping together the finger and thumb of their watch hand, users can answer or end phone calls, pause music, or pull up information such as the weather.
The goal is to let users operate the Apple Watch while their other hand is occupied holding a cup of coffee or walking a dog. The feature relies on the new chip and machine learning to recognize the subtle motions and changes in blood flow that occur when users tap their fingers together.
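Apple has not published how its double tap detection works; conceptually, though, it resembles streaming wrist-motion samples and handing windows of them to a trained classifier. A minimal sketch of that idea, using Apple’s public CoreMotion API (the “TapClassifier” model is hypothetical, and the real system also folds in blood-flow data from the heart-rate sensor):

```swift
import CoreMotion

let motionManager = CMMotionManager()
var window: [Double] = []  // sliding window of accelerometer magnitudes

motionManager.accelerometerUpdateInterval = 1.0 / 100.0  // sample at 100 Hz

motionManager.startAccelerometerUpdates(to: .main) { data, _ in
    guard let a = data?.acceleration else { return }
    // The magnitude of acceleration captures the tiny jolt of a finger tap.
    let magnitude = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
    window.append(magnitude)

    if window.count >= 100 {  // roughly one second of samples
        // A trained model would classify the window here, e.g.:
        // let isDoubleTap = try? TapClassifier().predict(window: window)
        window.removeAll()
    }
}
```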
The iPhone maker also demonstrated improved image capture across its phone lineup. The company has long offered a “portrait mode” that uses computational power to simulate a large camera lens and blur the background, but users had to remember to turn it on. Now the camera automatically detects when a person is in the frame and gathers the depth information needed to blur the background after the fact.
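Apple has not detailed its camera pipeline, but conceptually the behavior resembles running a person detector on the live frame and, on a match, requesting depth data alongside the photo. A sketch of that idea using Apple’s public Vision and AVFoundation APIs (the wiring below is illustrative, not Apple’s actual implementation):

```swift
import Vision
import AVFoundation

// Detect a person in a camera frame and, if one is found, capture the
// photo with depth data so the background can be blurred afterward.
func capturePortraitIfPersonPresent(pixelBuffer: CVPixelBuffer,
                                    photoOutput: AVCapturePhotoOutput) {
    let request = VNDetectHumanRectanglesRequest { request, _ in
        guard let results = request.results as? [VNHumanObservation],
              !results.isEmpty else { return }  // no person in frame

        // A person was detected: attach depth data to the capture so
        // portrait-style blur can be applied after the fact.
        let settings = AVCapturePhotoSettings()
        if photoOutput.isDepthDataDeliveryEnabled {
            settings.isDepthDataDeliveryEnabled = true
        }
        // Capture delegate omitted for brevity in this sketch:
        // photoOutput.capturePhoto(with: settings, delegate: delegate)
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try? handler.perform([request])
}
```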