
9 March 2022

Trust – a five letter word creating a buzz within energy

If you have not heard of Intertrust Technologies in the energy space, it is not a start-up, but a security company we have followed here at Rethink for 20 years. Founded in 1990, it initially focused on digital rights management (DRM) as applied to video.

We talked a few weeks back to our old friend Talal Shamoon, who has been CEO there since 2003, shortly after Sony and Philips combined to buy the company once it had been forced to resort to legal protection of its intellectual property. Intertrust is widely understood to have “invented” DRM, a huge step forward over conditional access, the first-generation technology used to protect the cable TV industry before internet distribution overtook video.

The company changed hands for $453 million in 2002, and it went on to recover most of that through the Microsoft legal action, settled in 2004 for $440 million. Others have since bought licenses to the same IPR, and even Microsoft had to renew its license after ten years. Since then Intertrust has filled out its product line in what has become a competitive and slightly less rewarding space, as companies like Microsoft and Google sell DRM solutions at record-low prices.

Yet Intertrust still holds many US and international patents covering software and hardware technologies that can be implemented in a broad range of products that might use DRM, including digital media platforms, web services and enterprise infrastructure.

When you get down to it, Intertrust is simply a master of identity and access management, and of secure execution environments where encryption must run in order to keep encryption keys secret. It is also a master of data visualization and of what are known as time series databases – stores holding vast amounts of information that changes over time, like readings from sensors, each entry arranged with a timestamp so you can look at trends over a long period.
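As a rough illustration of what a time series store does – nothing to do with Intertrust’s actual implementation, just a minimal sketch in Python using only the standard library – the idea is to keep timestamped readings in time order so a window of history can be pulled out and summarized:

```python
from bisect import bisect_left, bisect_right
from datetime import datetime, timedelta

class TimeSeries:
    """Minimal in-memory time series: timestamped readings kept in time order."""
    def __init__(self):
        self.timestamps = []   # sorted datetimes
        self.values = []       # reading at the matching index

    def append(self, ts: datetime, value: float):
        # Assumes readings arrive in time order, as sensor feeds usually do.
        self.timestamps.append(ts)
        self.values.append(value)

    def window(self, start: datetime, end: datetime):
        # Binary search keeps range queries fast even over years of readings.
        lo = bisect_left(self.timestamps, start)
        hi = bisect_right(self.timestamps, end)
        return self.values[lo:hi]

# Hypothetical usage: hourly load readings from a transformer sensor.
series = TimeSeries()
t0 = datetime(2022, 3, 1)
for hour, kw in enumerate([310, 295, 280, 400, 520, 610]):
    series.append(t0 + timedelta(hours=hour), kw)

evening = series.window(t0 + timedelta(hours=3), t0 + timedelta(hours=5))
print(f"average evening load: {sum(evening) / len(evening):.0f} kW")
```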

So far most energy engineers would say, “That’s all well and good, but what has it done for me lately?” Mostly you need a concrete example to see how data-driven energy decision-making is becoming important today and will become increasingly important in the future.

After an initial chat with Shamoon, he put us in touch with Florian Kolb, Chief Commercial Officer and General Manager, Energy, and we talked this week. He told us, “The megatrend in energy is to focus on data – as energy becomes more decentralized, data needs to be used to control its distribution. I was working at Innogy (now part of E.ON) when we realized that we needed far greater data interoperability to accommodate renewables onto the grid.”

“As E.ON moved in Europe from 40% being renewables to more like 70%, both the grid and retail utilities needed to share data, and must do it through a Chinese wall,” Kolb told us.

In grid networks supply and demand always have to match, and the decade from 2020 to 2030 is being dubbed the decade of electrification, as electric vehicles, steel, cement and aluminum manufacture, home heating switching from natural gas and the increasing use of air conditioning, not to mention a new hydrogen economy, all drive electricity usage way beyond the natural rise from industrialization, which typically aligns with GDP growth in each country.

Kolb tells a convoluted story about how he came to the US to work in Innogy’s innovation group, which headed in the direction of data-driven business models for the energy industry. As a result he helped incubate internal start-ups at Innogy, which in turn led to the creation of DigiKoo (smart EV charging) and Livisi (smart home technology). Kolb then made the connection to Intertrust, which is now working to extend the developments made at DigiKoo.

Intertrust talks about its “CleanGrid,” which is built on the established Intertrust Platform and is now a planning tool available internationally and in the US for EV charging utilities.

You can add grid assets, the electricity and gas networks, and municipal and demographic data on households, and overlay them with physical attributes like the medium and low voltage networks and the local transformer stations which convert between them. The result is a map of how each household actually gets its energy, along with the tariffs it is being charged. This has now been combined with data on who is “EV minded” – a dataset built from interviews and demographics which aims to forecast who is likely to buy an EV next and then try to charge it at home. That is something of a challenge for retail utilities right now, because if they cannot predict it accurately, they do not know how much electricity they need every day.
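As a rough sketch of the kind of overlay being described – the field names and figures here are invented for illustration, not taken from CleanGrid – the joining step amounts to rolling household-level EV adoption likelihood up to the transformer that feeds each household:

```python
from collections import defaultdict

# Hypothetical household records: which low-voltage transformer feeds them, and a
# demographics-derived probability of buying an EV and charging it at home.
households = [
    {"id": "H1", "transformer": "T-04", "ev_minded": 0.60},
    {"id": "H2", "transformer": "T-04", "ev_minded": 0.15},
    {"id": "H3", "transformer": "T-07", "ev_minded": 0.45},
    {"id": "H4", "transformer": "T-07", "ev_minded": 0.80},
]

HOME_CHARGER_KW = 7.4  # assumed single-phase home charger rating

# Expected extra evening load per transformer if every likely adopter plugs in.
expected_load = defaultdict(float)
for hh in households:
    expected_load[hh["transformer"]] += hh["ev_minded"] * HOME_CHARGER_KW

for transformer, kw in sorted(expected_load.items()):
    print(f"{transformer}: expected additional EV load of {kw:.1f} kW")
```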

If each home car charger adds a few kW, the grid operator needs to know that a particular street can handle all of them being turned on at once for three hours every evening. If this would break the distribution links, by adding maybe 500 kW, then extra throughput could be needed, or some other control. The key here is to work out where the probability of a grid issue is highest and soonest, and at the same time work out the lowest spend to upgrade that part of the grid and what the increase in revenues from it will be. Utilities also need to match any upgrades to potential new revenues, or they are forced to fund them via the rate base – regulator-controlled price increases which take forever to process.
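To put rough numbers on that – these figures are illustrative assumptions, not from Intertrust or any utility – a single street can tip a local transformer over its headroom quite quickly:

```python
# Illustrative headroom check for one street segment (all numbers assumed).
chargers         = 70      # homes on the street expected to install a charger
charger_kw       = 7.4     # per-charger draw while charging
coincidence      = 0.6     # fraction likely to charge in the same evening window
transformer_kva  = 400     # local transformer rating
existing_peak_kw = 250     # evening peak before any EV charging

ev_load_kw = chargers * charger_kw * coincidence
total_kw   = existing_peak_kw + ev_load_kw
headroom   = transformer_kva - total_kw   # treating kVA as roughly kW for simplicity

print(f"Added EV load: {ev_load_kw:.0f} kW, total peak: {total_kw:.0f} kW, headroom: {headroom:.0f} kW")
# Added EV load: 311 kW, total peak: 561 kW, headroom: -161 kW -> upgrade or demand response needed
```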

Kolb adds, “Utilities often operate on a five year cycle when reporting on their rate base to utility regulators or Commissions. If they agree that some capex is needed to support EVs, then if it gets put in the wrong place, or if it installs too little, they may have to spend that money all over again – and that will mean delay, and if they are an investor utility it may involve losses.”

There are other solutions, such as a Demand Response system in which customers cede control over large loads like EV charging to the utility. Then the utility can schedule overnight recharges to get as much as possible out of its existing transmission assets, charging the cars in a sequenced poll, one slice of charge to each car at a time, similar to the way WiFi treats messages or how a computer multitasks.
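A minimal sketch of that sequenced polling idea – a plain round-robin scheduler, not the actual scheme any utility or vendor uses – might look like this:

```python
from collections import deque

def schedule_overnight(cars, slots_per_night, cars_per_slot):
    """Round-robin one charging slot at a time across the queued cars.

    cars: dict of car id -> slots of charge still needed (all values assumed).
    Returns a list of slot assignments, e.g. slot 0 -> ["EV-1", "EV-2"].
    """
    queue = deque(cars)                     # car ids, in arrival order
    remaining = dict(cars)
    plan = []
    for _ in range(slots_per_night):
        slot, parked = [], deque()
        while queue and len(slot) < cars_per_slot:
            car = queue.popleft()
            slot.append(car)
            remaining[car] -= 1
            if remaining[car] > 0:
                parked.append(car)          # still needs charge, rejoin at the back
        queue.extend(parked)
        plan.append(slot)
    return plan

# Hypothetical fleet: three cars needing different amounts of charge,
# with only two able to draw power in any one slot.
print(schedule_overnight({"EV-1": 3, "EV-2": 2, "EV-3": 1}, slots_per_night=4, cars_per_slot=2))
# [['EV-1', 'EV-2'], ['EV-3', 'EV-1'], ['EV-2', 'EV-1'], []]
```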

The way Intertrust CleanGrid does this is by acting as a reliable clearing house for data summaries between the grid operators and the recharge vendors, where full disclosure of data would be bad, or would directly affect competitiveness. This allows day-ahead planning for grid operations and keeps capex to its lowest possible level.

If you think about it, this is really vital. Rethink Energy has pointed out that the grid operator or the DSO can cut deals with more distributed assets like solar and wind which can be used locally. The operator could then eliminate a ton of unnecessary connections, for instance if retail utilities partner with the best positioned solar assets, attaching them locally or even part-funding them if their owners promise to put them where the grid needs them. This could mean the difference between slowing the roll out of EV recharge points, because of grid balancing concerns where the cost of a fresh high voltage line would be prohibitive, and fulfilling electricity needs simply by diverting locally offered solar-plus-battery power to take up the extra demand from EV chargers in the home. In effect it uses a wide variety of data tools to exert more precise control over electricity consumption and distribution pathways instead of spending any money at all – almost as valuable as having huge amounts of battery built into the network – putting off upgrades.

The same goes for the suppliers of apps for rechargers. In many instances the apps offered to EV owners for scheduling and billing their recharges are not owned by the local recharge suppliers, but shared across a whole bunch of them. This adds another layer of players who must not see each other’s data, or one would gain a competitive advantage.

But by working with virtualized data – essentially a summary – if it turns out that one EV owner regularly does a longer trip and charges at a public charger on the highway, that could help plan both the highway fast recharger and the utility’s home recharge load, which may get some relief on one or two days of the week. The more granular this is, and the more detail that goes into it, the less new infrastructure has to be built out early – effectively keeping the rate base and electricity costs lower without damaging utility profits. It also prevents huge friction between regulators and the utilities. That data might also be shared with any authority which hands out EV subsidies, to drive sales in the most advantageous neighborhoods, where fewer grid upgrades would be needed. This would in effect accelerate the rate at which utilities can embrace EVs.
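A rough sense of what such a summary might look like – the fields and the aggregation choice here are ours, purely for illustration – is a per-feeder, per-weekday roll-up that never exposes individual drivers:

```python
from collections import defaultdict

# Hypothetical raw charging sessions held by an app provider (never shared directly).
sessions = [
    {"driver": "D1", "feeder": "F-12", "weekday": "Fri", "kwh": 38, "location": "highway"},
    {"driver": "D1", "feeder": "F-12", "weekday": "Tue", "kwh": 9,  "location": "home"},
    {"driver": "D2", "feeder": "F-12", "weekday": "Tue", "kwh": 11, "location": "home"},
    {"driver": "D3", "feeder": "F-12", "weekday": "Fri", "kwh": 10, "location": "home"},
]

# The only thing that crosses the wall: energy totals per feeder, weekday and
# location class, with driver identities stripped out.
summary = defaultdict(float)
for s in sessions:
    summary[(s["feeder"], s["weekday"], s["location"])] += s["kwh"]

for key, kwh in sorted(summary.items()):
    print(key, f"{kwh:.0f} kWh")
```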

And since no single forecast for EV sales has ever been accurate – the outcome is always higher than forecast – this type of relief is going to be needed throughout the next ten years at least, and perhaps for twice as long.

There is still huge potential for data access to be something of a maze inside such an environment, and it is not something you can just throw at your cloud provider. Sure, having all of this data in the cloud helps a lot, but the workflow of who is allowed access to what level of abstraction is a painstaking process, one a company like Intertrust has tools to help with – and it can take responsibility for it.

The first layer is identity and access management, which enforces fine-grained rules governing access to data, based on carefully defined policies applied to all data queries. It uses data virtualization to create connectors allowing queries against datasets regardless of whether they are held in a cloud or on premises. These connectors can work with numerous file systems as well as with both structured and unstructured data.
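To make the idea of fine-grained, policy-based access concrete – this is a generic sketch of attribute-based access control, not the Intertrust Platform’s actual API – a query is checked against declared policies before any connector fetches data:

```python
from dataclasses import dataclass

@dataclass
class Query:
    requester: str       # e.g. "recharge-app-vendor"
    dataset: str         # e.g. "feeder_load"
    granularity: str     # "raw" or "summary"

# Hypothetical policies: who may see which dataset, and at what level of abstraction.
POLICIES = [
    {"requester": "grid-operator",       "dataset": "feeder_load", "granularity": {"raw", "summary"}},
    {"requester": "recharge-app-vendor", "dataset": "feeder_load", "granularity": {"summary"}},
]

def allowed(query: Query) -> bool:
    """Return True only if an explicit policy grants this requester this view."""
    return any(
        p["requester"] == query.requester
        and p["dataset"] == query.dataset
        and query.granularity in p["granularity"]
        for p in POLICIES
    )

print(allowed(Query("recharge-app-vendor", "feeder_load", "summary")))  # True
print(allowed(Query("recharge-app-vendor", "feeder_load", "raw")))      # False: summaries only
```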

In many cases, data needs to be analyzed within third-party software to answer queries. To ensure that the underlying data stays secure, the Intertrust Platform has to set up an appropriate secure execution environment (Trusted Execution Environment, or TEE). Part of the reason for this is that if a piece of malware gets into the system, it cannot change the rules or send data to an unauthorized destination, because it cannot witness the encryption and key handling going on inside a TEE.

The original video protection technology from Intertrust, Marlin DRM, which was used to protect paid-for video access, had to meet MovieLabs requirements brought in almost a decade ago. These included running in a TEE – a place where software can execute without the results being watched by the device operating system – as well as a Secure Video Path, which ensures that any messages leaving the TEE cannot be intercepted as they are sent to another part of the system, so they must be in an encrypted format.

Chip core designer ARM, which dominates mobile communications, is perhaps best known for initiating a TEE on its chip cores some years back, but by now other architectures all have something similar, and the Intertrust suite runs on each of them. Getting to know such a wide range of TEE implementations, along with the file type transformations, is another reason for bringing on board someone who has done this before. TEEs effectively have their own secure OS (or middleware, as it is sometimes called) running in parallel with, but separated from, the general OS.

Only trusted applications can run in the TEE, which relies ultimately on a hardware root of trust to distinguish itself from, say, external software attempting to masquerade as a trusted app. A hardware root of trust is hard-wired into a chip and is the first layer of cryptographic keys, which only the manufacturer has access to and can therefore sign with.

We talk about signing operations, which when it comes down to it really means using asymmetric key pairs – in RSA, derived from two large prime numbers – with the private key used to create a signature and the public key used to verify it (or, for confidentiality, the public key used to encrypt and the private key to decrypt). If you want to spend hours trying to get to grips with this process and have your eyes cross with complexity, you can read up on PKI (public key infrastructure) – but it is the benefit to the data here that is important, not how it is achieved.
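For the curious, a minimal sketch of the sign-and-verify half of this – here using the third-party Python cryptography package and RSA keys, purely as an illustration of the principle, not of anything Intertrust ships:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the private key stays with the signer (on a real device it is
# ultimately rooted in hardware), the public key is handed out for verification.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"feeder F-12 summary, week 10"  # hypothetical payload
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# verify() raises InvalidSignature if the message or signature has been tampered with.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```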

The upshot of being really good at this technology is that it is not economic for companies like Amazon Web Services or Microsoft to get down into this much detail when traffic moves from one cloud to another or to an outside server farm, and the cloud players are more than happy to have security services layered on top of their cloud offerings by outside partners like Intertrust. It is also the last thing a utility wants to deal with on a day-to-day basis, hence the need for a trusted partner – and while Intertrust is not the only company with this know-how, it seems to have a head start in addressing the utility space.

When you look at potential futures for the energy market – for instance a high level of distributed resources – it would be impossible for the grid to co-exist with them without broad access to a large number of virtualized data sources.

Take the simple instance of a home which buys enough solar and battery to go off grid, needing no more than 5% of the electricity it was previously taking from the grid. The homeowner does not want to show the utility the detail of what has been installed, but what happens when the battery packs up and needs new lithium cells, or when the temperature falls or rises enough for a day that the home pulls significantly more electricity from the grid than usual? And what happens if 10,000 off-grid homes all make a call on the grid at once, based on the weather?

The same is true for PPAs, where a solar farm sells all of its electricity to a few data centers which stay connected to the grid and take their shortfall from there. If the retail utility has to provision for the entire data center for the periods when solar performs badly because of cloud, then the utility saves nothing, but it has less revenue. In fact, everywhere you look at the transition to distributed forms of energy you can see a basis for data-driven forecasting for utilities – what if a solar farm can simply make more money feeding a hydrogen electrolyzer at some point in the future, and just pulls its supply from the wholesale market? So the system will need insights from the grid settlement system and real-time auctions as well.

In the bad old days for Intertrust, when its customers were only the larger video operators such as cable firms, it was tough to keep data pricing above $1 per home, and we understand it has since fallen to about a third of that. Within energy there is now a multiplicity of data constituencies, including customers, requiring newly combined datasets – and in an industry whose leading concern is cyber-attacks on centralized energy installations, it may be that the time has come for this type of data-handling Chinese wall to proliferate.