Google is on a quest to push its Gemini AI chatbot into as many productivity tools as possible. The latest app to get a generative AI lift is Files by Google, which now automatically pulls up Gemini analysis when you open a PDF document. The feature, first shared on the r/Android Reddit community, is now live for phones running Android 15. Digital Trends tested it on a Pixel 9 running the stable build of Android 15 and the latest version of Google’s file manager app.

When users open a PDF in the Files app and summon Gemini, whether with a screen gesture or the power button shortcut, the chat overlay now shows an “Ask about this PDF” chip above it. Tapping the chip attaches the PDF to the chatbot, readying it for the AI model to process. From there, all you have to do is type in a relevant query, and Gemini will read through the document and respond in natural language. Digital Trends tried the feature with scientific research papers, and the whole system worked flawlessly.

This is one of the more thoughtful implementations of Gemini, and it saves users the hassle of importing PDF files into standalone AI apps, including the Gemini mobile app. However, if the feature has yet to appear in the Files app, users can still pull off a similar trick within the Gemini mobile app. Just open the app, tap the plus sign icon in the Ask Gemini chat bubble, and attach the requisite PDF file. Users can pick a PDF from local storage or pull one from Google Drive. Thanks to its multimodal capabilities, the AI chatbot can also access and comprehend media assets lifted from the on-device gallery.

There is, however, a caveat. The Gemini-powered PDF analysis is limited to devices running Android 15 and Google accounts that pay for Gemini Advanced access. Thankfully, you can skip the payment and still get similar AI convenience by ditching Gemini. Instead, check out NotebookLM, one of the best AI tools from Google, which is free and can parse information from files, URLs, and even YouTube videos.
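The Files app integration itself isn’t scriptable, but the same PDF question-answering is available to developers through Google’s public Gemini API. What follows is a minimal sketch of that capability, not a look at how the Files app works internally; it assumes the google-generativeai Python package, a valid API key, and a hypothetical local file named paper.pdf.

    import google.generativeai as genai

    # Authenticate with a Gemini API key (assumed to be provisioned already).
    genai.configure(api_key="YOUR_API_KEY")

    # Upload the PDF through the File API so the model can read it.
    pdf = genai.upload_file("paper.pdf")  # hypothetical file name

    # Ask a question about the document, mirroring the "Ask about this PDF" flow.
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content([pdf, "Summarize the key findings of this paper."])
    print(response.text)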
Android devices have offered a built-in screen reader called TalkBack for years. It helps people with vision problems make sense of what appears on their phone’s screen and lets them control it with their voice. In 2024, Google added its Gemini AI into the mix to give users more detailed descriptions of images. Google is now bolstering it with a whole new layer of interactive convenience. So far, Gemini has only described images; now, users can ask follow-up questions about an image and have a more detailed conversation.

Google has also announced that the Gemini AI stack is coming to Wear OS smartwatches, along with a bunch of other screens in your life, such as your car’s infotainment dashboard and smart TV. With the move, the company is bringing down the curtain on Google Assistant across its device ecosystem. Gemini is already part of the core Android experience, deeply integrated across the Workspace ecosystem of apps and even third-party platforms such as WhatsApp and Spotify. With Gemini making its way to Wear OS, Android Auto, and TV, users will have a more seamless experience and a wider variety of screens to get work done.

Google’s Chrome browser has offered a rich suite of privacy and safety features for a while now. Take, for example, Enhanced Safe Browsing, which was introduced back in 2020. It protects users against unsafe websites and files using real-time threat detection. Three years later, Google switched it from an opt-in mode to a default safety protocol to guard users against phishing attacks, bad extensions, and malicious downloads. Now, the company is deploying its Gemini Nano AI to safeguard smartphone users against potential online scams, especially those masquerading as tech security warnings on webpages.


