Google just made its Circle to Search feature considerably smarter. The company rolled out an update that lets users identify and explore multiple items within a single image at once - a significant leap from the tool's original single-item focus. Director of Product Management Harsh Kharbanda announced the enhancement, which positions Google's visual search as a more comprehensive alternative to traditional text-based queries and to competing visual search tools.
The update underscores how heavily Google is betting on visual search: an AI-powered feature that can handle multiple items in a single frame changes how users interact with the images on their phones.
The enhancement addresses a fundamental limitation of the original Circle to Search experience. Since its initial launch, users could only query one item at a time, forcing them to circle, search, back out, and repeat for each object of interest. Now they can explore an entire outfit, a room full of furniture, or a complex scene without the tedious back-and-forth.
"We've updated Circle to Search so you can now explore multiple items in a single image," Harsh Kharbanda, Director of Product Management for Search, wrote in the announcement. The brevity of the statement belies the technical complexity involved in accurately segmenting, identifying, and contextualizing multiple objects simultaneously.
The timing is strategic. Visual search has become a battleground for tech giants trying to capture shopping intent before it reaches traditional search bars or e-commerce sites. Amazon has been aggressively pushing its visual search capabilities through Alexa-enabled devices, while Pinterest built its entire Lens feature around multi-item discovery in aspirational lifestyle images.
Google's approach leverages its existing search infrastructure and massive training data advantage. The company's computer vision models can now parse complex scenes, distinguish between foreground and background objects, and understand spatial relationships - all in real time on mobile devices. This kind of on-device AI processing represents a significant advance over earlier cloud-dependent visual search implementations.
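Google has not published how Circle to Search segments a scene, but the general shape of a multi-item pipeline is well known from computer vision: detect candidate regions, suppress duplicate boxes for the same object, then issue one query per surviving region. The toy sketch below illustrates that flow with a stubbed detector and a standard greedy non-maximum-suppression step - it is an assumption-laden illustration of the technique, not Google's implementation.

```python
# Illustrative sketch only: the detector output below is a stub, and the
# pipeline shape (detect -> deduplicate -> query per item) is a generic
# multi-item visual search pattern, not Circle to Search internals.
from dataclasses import dataclass

@dataclass
class Region:
    label: str                # e.g. "sneaker", "jacket"
    box: tuple                # (x1, y1, x2, y2) in pixels
    score: float              # detector confidence

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def distinct_items(regions, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring region,
    drop any later region whose box overlaps a kept one too much."""
    kept = []
    for r in sorted(regions, key=lambda r: r.score, reverse=True):
        if all(iou(r.box, k.box) < iou_threshold for k in kept):
            kept.append(r)
    return kept

# Stubbed detector output for one photo of an outfit: two near-identical
# boxes for the same sneaker, plus a jacket.
detections = [
    Region("sneaker", (10, 200, 90, 280), 0.95),
    Region("sneaker", (12, 202, 92, 282), 0.90),  # duplicate of the first
    Region("jacket",  (40, 20, 200, 180), 0.88),
]

for item in distinct_items(detections):
    print(f"search: {item.label}")
```

Run as-is, the duplicate sneaker box is suppressed and the loop issues one search per remaining item, which is exactly the property that lets a user explore an outfit without circling each piece separately.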