Finnish video analytics firm Valossa is doing the rounds at Mobile World Congress (MWC) this week, one of hundreds of budding artificial intelligence (AI) companies hoping to turn customer heads in Barcelona. Its core message is that its engine can analyze live content to determine what is suitable for your advertising to run alongside.
The company’s platform is based on natural language processing, image recognition and computer vision technologies, providing media companies and broadcasters with a system that spans applications in advertising, voice search, automated content tagging, and live video discovery and recommendations.
Valossa’s CEO and CTO Mika Rautiainen spoke to Faultline Online Reporter this week, saying that he recently attended a conference filled with venture capitalists who were frustrated about the air of mystery surrounding AI technologies. There are countless analytics companies that claim to use AI or machine learning, yet give very little explanation of what specifically constitutes, defines or differentiates their supposed AI technologies from each other.
Rautiainen gave his best shot at explaining this. Many rival companies, he claimed, rely on external APIs, such as those from Microsoft, which is not truly artificial intelligence as he sees it. Valossa, by contrast, has built its own APIs from scratch, so its engineers can monitor and control exactly what the platform is learning and tweak it accordingly.
It can analyze live content using its expression and emotions engine, identifying things such as places, objects and unique topics in a service provider’s content, and we were reassured that it has been tried and tested at flagging inappropriate material such as nudity.
The platform not only makes content searchable in real time, but broadcasters can use it to automatically match live content with relevant advertisements. In a world of linear TV channels this is no great hardship, but as advertisers buy programmatically, Valossa’s software lets companies make these matches automatically, without requiring manual tags, on a highly targeted, video-specific basis. This sort of capability is essential in programmatic advertising, where there is no time to review every piece of video an advert will be shown alongside.
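The idea of tag-free ad matching can be illustrated with a minimal sketch. Everything here is invented for the example: in practice the content labels would come from a video-analysis engine such as Valossa's, and the inventory would come from an ad exchange, but the matching principle is the same.

```python
# Hypothetical sketch of tag-free ad matching for programmatic buys.
# The detected labels stand in for the output of a video-analysis
# engine; the inventory and category names are purely illustrative.

AD_INVENTORY = {
    "sports": ["energy-drink-spot", "sneaker-spot"],
    "cooking": ["kitchenware-spot"],
    "nudity": [],  # content flagged as unsafe gets no ads at all
}

def match_ads(detected_labels, inventory=AD_INVENTORY):
    """Return ad spots whose category matches any detected content label."""
    ads = []
    for label in detected_labels:
        ads.extend(inventory.get(label, []))
    return ads

print(match_ads(["sports", "cooking"]))
# -> ['energy-drink-spot', 'sneaker-spot', 'kitchenware-spot']
```

The point of the sketch is that no human has tagged the video: the matching key is whatever the analysis engine detects in the frames themselves.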
This week, Valossa rolled out an upgraded set of APIs for its content description capabilities, currently being tested by prospective customers in beta mode. It rolls out what it calls intelligence updates, which have been modified according to direct feedback from the companies using the software. Rautiainen noted that this level of flexibility when it comes to metadata is what enables Valossa to provide intelligent summarization of categories.
It can plug directly into a CMS, where it extracts metadata, automatically tags content and provides recommendations, although Rautiainen said that it currently has no personalization capabilities, so we shouldn’t put Valossa in the same bracket as the major recommendation software providers like ThinkAnalytics. Not just yet anyway, as the company is currently tinkering with personalization software in the lab.
Rautiainen went on to say that the pure content recommendations market is not Valossa’s core business, preferring to be known as an identification company rather than a recommendations firm. But where it does dabble in recommendations, it can do so for live TV on set tops and smart TVs, and has the makings of a system in the pipeline to expand this to content on mobile devices. It can provide this either on the client device or as a back-end cloud service.
Valossa, founded in 2015 at the University of Oulu in Finland, can only speak publicly about one customer to its name – Finnish broadcaster YLE, which uses Valossa’s software in its CMS for tagging content. Last year at NAB, Orange’s subsidiary Orange Silicon Valley (OSV) teamed up with Valossa to show a demonstration of identifying objects in more than 25 simultaneous HD streams, in partnership with EchoStreams’ storage technology and Exascale’s supercomputing platform, using 16 Nvidia GPUs.
The company also has a direct B2C offering with its movie discovery site Whatismymovie.com, a library of 40,000 titles which has found a home in Amazon Echo, where it is integrated with the Alexa assistant. It throws up movie suggestions based on a user’s descriptive search, for example “show me James Bond movies with Sean Connery” or “adventure movies about samurai.” This is based on research which Valossa calls Deep Content – tracking and analyzing multiple data feeds such as transcripts, audio and visual patterns, then converting this into metadata and processing it using its cognitive machine learning system.
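A descriptive search of this kind can be approximated with a toy sketch. The catalog, metadata fields and scoring below are invented for illustration; Valossa's Deep Content system builds its metadata from transcripts and audio-visual analysis, whereas here everything is hard-coded.

```python
# Illustrative sketch of descriptive movie search in the spirit of
# Whatismymovie.com: score each title by the overlap between the
# query terms and metadata keywords or cast names. The two-title
# catalog and the scoring scheme are invented for the example.

CATALOG = {
    "Dr. No": {"cast": {"sean connery"},
               "keywords": {"james", "bond", "spy"}},
    "Seven Samurai": {"cast": {"toshiro mifune"},
                      "keywords": {"adventure", "samurai"}},
}

def search(query, catalog=CATALOG):
    terms = set(query.lower().split())
    scored = []
    for title, meta in catalog.items():
        # Count query terms that hit the keyword set or a cast name
        hits = len(terms & meta["keywords"])
        hits += sum(1 for actor in meta["cast"]
                    if terms & set(actor.split()))
        if hits:
            scored.append((hits, title))
    return [title for _, title in sorted(scored, reverse=True)]

print(search("adventure movies about samurai"))       # -> ['Seven Samurai']
print(search("james bond movies with sean connery"))  # -> ['Dr. No']
```

A production system would of course use far richer metadata and a learned ranking model, but the principle is the same: free-text descriptions are matched against metadata derived from the content itself.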
With Valossa shrugging off suggestions that its rivals are the well-known recommendation software suppliers, this leaves us with the likes of Clarify, as well as a range of companies with similar AI-based video analytics software, many of which are used mainly in video surveillance applications, such as helping police forces work more efficiently.