Mobile World Congress established two significant trends concerning the video industry – the demise of virtual reality and the caution around artificial intelligence. Announcements were plentiful in both areas, yet the show floor was notably devoid of VR headsets compared with just two years ago, leaving a void for AI technologies to slide in and take center stage – an opportunity missed.
One reason for this is consumer apprehension around AI, whipped up by clickbait articles, which seems to have rubbed off on vendors at the show. There was a general feeling that developing AI for the mobile market would not reap rewards, not while the powerhouses of the communications, silicon and handset worlds are making all the noise. Vendors might even be wary of being rumbled for jumping on the AI bandwagon prematurely, amid suspicions that many are parading rigid computer algorithms as machine learning. Another reason is that telcos have been testing AI for years now in areas such as network planning and fraud detection, but with little payback, so why should they develop it for video?
A brief summary of what happened in AI at MWC includes new smartphone launches from ASUS and LG, both marketing AI as a headline feature. The Zenfone 5 series marks an AI debut for Taiwan's ASUS, using AI to adjust camera settings automatically based on surroundings, as well as to learn the types of edits users make to their photos over time, eventually making those edits itself. LG's V30S ThinQ also focused on photography, unveiling AI Cam software, although LG has licensed the technology from EyeEm, so this is by no means an in-house AI breakthrough. AI Cam likewise adjusts settings such as lighting to ensure the best quality photo, highlighting familiar keywords in the process.
Faultline Online Reporter met with AI video analytics company Valossa in Barcelona, where the Finnish firm demonstrated a new product for the first time. Building on its engine for analyzing live content for advertising and content tagging, Valossa's real-time video stream monitoring system has been updated with scene element search – identifying every recognized person, object and scene in under a microsecond.
Valossa is hoping to officially launch the software at NAB in April, aiming for sports accounts, as the technology is apparently ideal for picking out key moments from a live event and rapidly spinning them up into a highlights reel. The system currently runs in the cloud, but Valossa said the next step for its AI technology is deployment inside devices, for which the company is in talks with some unnamed TV manufacturers.
Meanwhile, augmented reality has without question surpassed virtual reality as the flavor of the month. While we didn't partake in any VR or AR demos at MWC, this sentiment was echoed by Israeli video acceleration software supplier Giraffic, which admitted that hype has died down in the VR market, meaning its VR AVA (adaptive video acceleration) technology, launched in December 2016, has not taken off as hoped.
Essentially, Giraffic's VR software is sitting on the back burner while the company plays the waiting game – which is probably true of countless other vendors hoping the time will come when their VR technologies provide a return on investment. In the meantime, Giraffic has been focusing its SDK development on Android TV, Apple TV and Amazon Fire TV. In a nutshell, operators and OTT content providers license Giraffic software for deployment inside video apps on devices, helping to reduce buffering.
Also new on the Giraffic roadmap is a foray into analytics, building a platform to examine CDN performance and provide recommendations on how to resolve any issues. Video analytics is a grey area, in the sense that unless a system can fix the problems it identifies, it is almost pointless. The benefit of Giraffic moving further into analytics, however, is that its AVA software can actually go some way toward solving those problems. Giraffic hopes to wrap up this product for launch within the next year.