There are many questions we don't know the answers to, and we often google whatever we don't understand to find out more about it. Google Lens gives users a way to get answers to questions they couldn't ask before, or to dig into the details. Last year, Google introduced Lens in Photos and the Assistant. People already use Lens to get answers to questions like "what type of dog is that?" or "what's that building called?".
At Google I/O (Google's annual developer conference, held in Mountain View, California), Google announced that Lens will now be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. Google also announced three updates that enable Lens to answer more questions, about more things, more quickly.
- First Google Lens Update: Smart Text Reading
This new feature helps you find answers through smart text reading. For example, you can copy and paste text from the real world (like recipes, gift card codes, or Wi-Fi passwords) to your phone. Lens also helps you make sense of a page of words by showing you relevant information and photos. For example, if you're at a restaurant and spot the name of a dish you don't recognize, Lens will show you photos to give you a better idea of what it is. This requires recognizing not just the shapes of letters, but also the meaning and context behind the words.
- Second Google Lens Update: Style match
Now, with style match, if an outfit or home decor item catches your eye, you can open Lens and not only get info on that specific item, but also see items in a similar style that fit the look you like.
- Third Google Lens Update: Works in Real Time
Google Lens now works in real time. It's able to proactively surface information instantly and anchor it to the things you see. You can now browse the world around you just by pointing your camera. This is only possible with state-of-the-art machine learning, using both on-device intelligence and cloud TPUs to identify billions of words, phrases, places, and things in a split second.