There are plenty of questions we don't know the answers to, and we google each unknown thing to find more information about it. Google Lens gives users a way to get those answers, or to dig up much more detail. Last year, Google introduced Google Lens in Photos and the Assistant. People are already using Lens when they want answers to questions like “what type of dog is that?” or “what’s that building called?”.
At Google I/O (an annual developer conference held by Google in Mountain View, California), Google announced that Lens will now be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. It also announced three updates that enable Lens to answer more questions, about more things, and more quickly.
- First Google Lens Update: Smart Text Reading
This new feature lets you find answers with the help of smart text reading. For example, you can copy and paste text from the real world (like recipes, gift card codes, or Wi-Fi passwords) to your phone. Lens also helps you make sense of a page of words by showing you relevant information and photos. Say you are at a restaurant and see the name of a dish you don't recognize: Lens will show you photos that give you a better idea of it. This requires recognizing not just the shapes of letters, but also the meaning and context behind the words.
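To get a feel for the "meaning behind the words" step, here is a minimal Python sketch of what could happen after a text-recognition step has already turned a photo (say, of a cafe's Wi-Fi sign) into plain text. The function name, the sample sign, and the patterns are our own illustration, not Google's actual pipeline:

```python
import re

def extract_wifi_credentials(ocr_text):
    """Pull a network name and password out of raw OCR text.

    Assumes an OCR step has already produced plain text; this
    sketch only handles the 'make sense of the words' part.
    """
    ssid = re.search(r"(?:network|ssid)\s*[:=]\s*(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"(?:password|pass|pwd)\s*[:=]\s*(\S+)", ocr_text, re.IGNORECASE)
    return (
        ssid.group(1) if ssid else None,
        password.group(1) if password else None,
    )

# Text as an OCR engine might return it from a photo of a sign
sign = "Welcome!\nNetwork: CafeGuest\nPassword: espresso123"
print(extract_wifi_credentials(sign))  # ('CafeGuest', 'espresso123')
```

The point is that recognizing characters is only half the job; the system still has to decide which of those words is a password and which is decoration.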
- Second Google Lens Update: Style match
Now, with style match, if a suit or a home decor item catches your eye, you can open Lens and not only get info on that specific item, but also see things in a similar style that fit the look you like.
- Third Google Lens Update: Works in Real Time
Google Lens now works in real time. It proactively surfaces information instantly and anchors it to the things you see, letting you browse the world around you just by pointing your camera. This is only possible with state-of-the-art machine learning, using both on-device intelligence and cloud TPUs to identify billions of words, phrases, places, and things in a split second.