The future of AI on smartphones is on-device, or at least keeping as many AI processes local as possible. Why? First, you don't need an internet connection to get the job done, whether that's asking a chatbot to proofread text and fix grammatical mistakes, doing some quick research, editing images, or explaining the world around you through the camera.
Second, none of your personal data has to leave the device to be processed on a remote server. And third, it's faster: the smaller a model gets, the quicker it can produce results. It's a bit of a give-and-take situation, though, because a lightweight AI model also means limited capabilities.
A bigger AI model, like Gemini or ChatGPT, can understand text, images, and audio, and can even generate video. These are large models, and they require a heck of a lot of processing power on custom chips in data centers. In a nutshell, you need an internet connection to make that happen. But something pretty cool is brewing, and it comes from Google.