ViSenze bets on vision trumping text for retail search

AI techniques have been applied to some degree in retailing for over a decade, mostly for enhanced recommendations based on preferences and in various trials designed to exploit emotion during selling. ViSenze, founded in 2012, based in Singapore and with $14 million in funding so far, was one of the first start-ups to spot an opportunity in applying AI-based visual processing to product recommendation.

Its technology now supports not just recognition of whole products but also object-level processing within an image, building associations with customers’ personal preferences. It can combine this with metadata and keywords, so it is no longer confined to image processing alone.

The firm’s opening strategy was led by the observation that ecommerce was starting to converge with social media, leading consumers towards more spontaneous purchases that could be triggered by effective recommendations with a visual element, according to ViSenze CEO and co-founder Oliver Tan. “Recognizing this shift in behavior and the impact that AI technology could have on the retail commerce landscape for both brands and online shoppers, it prompted us to further develop and enhance our technology to not only recognize images, but also analyze their content including visual attributes in order to help retailers offer recommendations and visually similar products,” said Tan.

Recommendations can be made on the basis of previous purchases or images uploaded by the user, and can cover both 2D visual products, such as photos, and physical 3D objects. “Our visual commerce platform enables the camera to be an information entry point to interact with the real-world objects,” said Tan. “The business goal of the system is to help our customers bring a ‘discovery with your camera’ experience to end users. The whole system is optimized for search and rendering in order to display results within seconds.”

The service is underpinned by a large database associating images of products with their SKUs (Stock Keeping Units), the identifiers, often encoded in bar codes, that retailers assign to individual items for inventory and stock-keeping purposes. “Today, we have indexed more than 500 global retailers and brands and more than 100 million products,” said Tan.
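To make the idea of such an index concrete, here is a minimal sketch of how product image embeddings might be associated with SKU records and queried. All names, vectors and the lookup logic are illustrative assumptions, not ViSenze's actual implementation; a production system at 100-million-product scale would use an approximate-nearest-neighbour index rather than a linear scan.

```python
import numpy as np

# Hypothetical in-memory catalog: each SKU maps to a record holding
# retailer metadata and a feature vector extracted from the product image.
catalog = {
    "SKU-0001": {"title": "red canvas sneaker", "vec": np.array([0.9, 0.1, 0.0])},
    "SKU-0002": {"title": "blue denim jacket",  "vec": np.array([0.1, 0.8, 0.3])},
    "SKU-0003": {"title": "white leather boot", "vec": np.array([0.8, 0.2, 0.1])},
}

def lookup(query_vec):
    """Return the SKU whose image embedding is closest to the query vector."""
    return min(catalog,
               key=lambda sku: np.linalg.norm(catalog[sku]["vec"] - query_vec))

# A query embedding from a shopper's photo resolves to the nearest SKU.
print(lookup(np.array([0.88, 0.12, 0.02])))  # → SKU-0001
```

The lookup is a brute-force nearest-neighbour search; the point is only that the bridge from pixels to inventory runs through a vector associated with each SKU.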

The platform can be supplied as an embedded package, or as a cloud-based Vision-as-a-Service for retailers or any enterprise selling online. Deep learning comes in by enabling the system to be tuned to recognize objects supplied by customers, and associate them effectively with similar ones. It is also at the heart of the platform’s flexibility. “We understand that strategies and techniques that perform well for one retailer may perform differently for another, so we built our solution to be easy to use and customizable,” said Tan. “ViSenze has over 150 enriched domain models trained with many years of real world data.”

The firm’s differentiator lies in its total reliance on deep learning for a vision-based ecommerce service, Tan claimed. Customers can exploit the deep learning capabilities themselves through APIs that enable them to tag images, index their visual content and perform mobile visual search operations, such as taking a photo of an object they want to find. This feeds into content recommendation based on visual similarity.

“Within ViSenze, the product manager and data analyst can easily train a model and test the performance, while advanced algorithm engineers can use the API directly to perform, train, and experiment,” said Tan. “By adopting ViSenze’s platform, our clients can quickly train deep learning models and deploy it based on their own unique requirements.”

ViSenze uses Nvidia GPUs to run its deep learning algorithms, which include supervised learning where a system is trained to recognize and distinguish between objects supplied by its customers. This lies behind some of the flexibility and adaptability but meanwhile the algorithms are being enhanced continuously by performing tasks against the increasingly large database. At the same time customers can perform tests and tune the models even if they cannot change the underlying algorithms themselves.

Close