Comments on the CIM Network Model Management (NMM) concept – from a distribution perspective

At the CIM User Group conference in Amsterdam in June 2016, there was an interesting presentation on a “Network Model Management Improvement Program at American Electric Power (AEP)”, describing a CIM-based Network Model Management (NMM) approach to improving processes across a TSO enterprise.

While this particular project is focused on transmission, the concept seems to fit well in distribution too.

In particular, after a chat with Pat Brown from EPRI, it became clear that many of the challenges addressed in this project are the same kind of many-headed beast several Danish DSOs are fighting at the moment.

In this blog post, I will try to describe the new reality Danish DSOs are facing, from my perspective, and how NMM might help.

For more in-depth information on NMM, please read the following EPRI papers:

Using the Common Information Model for Network Analysis Data Management:

Network Model Manager Technical Market Requirements: The Transmission Perspective

Motivation to change

As a consequence of regulation (i.e. benchmarking, retail market liberalisation and an increased focus on quality of supply), the changing nature of the distribution grid itself (i.e. DERs and smart metering) and the vast improvements in processing power and analytical capabilities, a new era of competition has lately been unleashed in the Danish DSO world.

Ten years ago there was a lot of hype around smart grid systems, big data, and how DSO processes were soon to be revolutionized using state-of-the-art technology – at least if you believed the vendors. It was mostly talk and not much action, though.

Today, however, we see some DSOs that invest in fundamentally rethinking their processes, not just duplicating paper-based processes as digital ones. It is now widely believed that an enterprise-wide commitment to change is required to stay ahead of the game.

Moreover, an increasing number of DSOs now believe that shoehorning existing processes into IT-systems with little or no knowledge of distribution and power grid information modelling will not save cost and boost innovation as much as required to stay competitive.

Actually, challenges such as DERs have long shown the need for e.g. timely and accurate as-built network models, which in turn has shown the need to change processes to support such a vision. However, the real pressure to actually change things inside the DSOs seems to have come from regulation and competition.

Well, that’s at least my view on it.

Building a Chinese assembly line in Denmark

Recently, I’ve been involved in a project where information that only exists in some old paper archives had to be registered electronically – i.e. installation dates, type information etc. This is because some of the information in these archives is needed to do condition-based asset management – another result of competition and an increased focus on operating costs.

Anyway, going through these old archives was really fascinating and exciting. The way projects were done from the 50s to the 80s – i.e. the detail they put into the drawings and documentation from engineering, construction etc. – is really amazing. You can really see the passion for the craft, and how each step was carefully executed and QA-checked with stamps, signatures etc.

Don’t get me wrong. I’m not saying there’s no passion for the craft today. Not at all. However, looking at the resources a DSO has today compared to then, and the number of projects that must go through the pipeline, we are facing a completely new world!

It is, of course, an exaggeration to compare this new world to an assembly line in a Chinese electronic factory, but nevertheless I like to use this metaphor to fuel a discussion.

DSOs also do bigger projects involving analytical processes comparable to those of a TSO. However, the vast majority of work in DSOs consists of smaller projects that have to be executed as fast and efficiently as possible. I’m talking about projects such as the planning and construction of distribution substations, cable boxes, and installations/meters in the LV network.

Since top management cannot move these labor-intensive projects to China, even though they probably would like to, we have to come up with a concept that can optimize them, freeing engineering resources to work on the bigger, innovative projects.

I believe that not only will a well-oiled NMM assembly line get the smaller projects executed way more efficiently, but it will also be an invaluable tool and information historian for the bigger projects – i.e. long term planning and smart grid optimization projects – to be based on.

Beyond analytical processes

The EPRI documents talk mostly about the analytical processes in planning and operation from a TSO perspective, and how these two worlds have traditionally been silos doing their own modelling and processes.

Well, DSOs, much like TSOs, also have a tradition of running processes in silos and putting the same information into multiple systems used by different departments. However, that is changing really fast now, and the reason is quite simple: more and more DSO processes depend on the LV network – i.e. operation, outage management, asset management and network analysis.

Having different business units manually entering and maintaining the same LV network data in different systems is frankly insane. No DSO has the resources to do that, and even if they did, it is almost impossible to keep such a large amount of data synchronized between systems manually. We’re talking around one million assets per 100,000 consumer metering points.

So, does it mean that Danish DSOs have already integrated GIS, DMS, OMS, metering and network analysis software? 

Well, yes and no. More and more systems get integrated, but processes are still far from running efficiently, to be honest. Experience from integrating the aforementioned systems has shown that the real challenge is not technical in nature.

Actually, systems can be integrated quite easily using modern technology. The real challenge is maintaining data consistency and semantic interoperability across systems and processes over time. The infamous principle of garbage in, garbage out quickly hits the fan when you start integrating systems and optimizing processes for real.

An area where a lot of problems are experienced right now is when planning projects go into the construction phase and a lot of people begin to alter project data and the as-built network using various systems and tools.

For example, it seems that many processes, and the IT tools supporting them, do not really help people to register changes correctly and migrate from project to as-built state effectively. In other words, the as-built model gets messed up all the time, and this is a big issue when systems are tightly integrated and processes are becoming more data-driven.

To make a long story short, this is where I think NMM could be one of the biggest benefits to the DSO initially. That is, to function as a proactive helper and guardian for changes going into projects, and from there into the as-built network during the construction and documentation processes.

I believe a key to really start optimizing processes is to treat all project work as incremental changes to the as-built model, in a consistent way across all basic processes in the DSO, as NMM advocates. That is, let an architecture validate and keep track of all changes. No process or system should be allowed to mess up the as-built network, or the data in projects, anymore.
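To make the idea concrete, here is a minimal, runnable sketch in Python of what "project work as incremental changes to the as-built model" could look like. This is my own illustration, not EPRI's design: the class names, the Change structure and the validation rule are all invented, loosely inspired by the forward/reverse differences found in CIM difference models.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: every project is a set of forward/reverse
# differences against the as-built model, validated before it is
# allowed to merge. All names here are invented for illustration.

@dataclass
class Change:
    object_id: str      # e.g. the CIM mRID of the asset
    attribute: str
    old_value: object   # reverse difference (for conflict detection and audit)
    new_value: object   # forward difference

@dataclass
class Project:
    name: str
    changes: list = field(default_factory=list)

class AsBuiltModel:
    def __init__(self, objects):
        self.objects = objects  # {mRID: {attribute: value}}

    def validate(self, project):
        """Reject a change set whose reverse differences no longer match
        the as-built model (i.e. someone changed the data underneath)."""
        errors = []
        for c in project.changes:
            current = self.objects.get(c.object_id, {}).get(c.attribute)
            if current != c.old_value:
                errors.append(f"{c.object_id}.{c.attribute}: expected "
                              f"{c.old_value!r}, found {current!r}")
        return errors

    def commit(self, project):
        """Apply a project's changes only if validation passes."""
        errors = self.validate(project)
        if errors:
            raise ValueError("; ".join(errors))
        for c in project.changes:
            self.objects.setdefault(c.object_id, {})[c.attribute] = c.new_value

# Usage: a substation upgrade project changes a transformer rating.
model = AsBuiltModel({"TR-4711": {"ratedS": "400kVA"}})
project = Project("Substation upgrade",
                  [Change("TR-4711", "ratedS", "400kVA", "630kVA")])
model.commit(project)
print(model.objects["TR-4711"]["ratedS"])  # -> 630kVA
```

The point of keeping the old value in every change is that a stale project – one built against data that has since been altered – is caught at validation time instead of silently overwriting the as-built network.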

Final thoughts and an invitation to discuss

As Fred Brooks argues, there is no silver bullet in software engineering. I believe the same is true when it comes to optimizing processes in a DSO, or any other businesses as well.

However, I believe it’s possible to scale things up using CIM and NMM as envisioned by EPRI and the experts in the CIM User Group.

I believe that NMM is an enabler to create a new set of intelligent tools that understand the domain, and that proactively help the users to visualize, analyze, change and validate data about the network before it is saved in various systems and used by other processes.

However, I also believe that to successfully create such tools, a crisp (in terms of domain modelling / semantics) and scalable (in terms of cross-functional team access and development) architecture is needed.

I envision an NMM architecture with an open and easy-to-use API, so that user-friendly back office and front-end applications can be developed and tested quickly together with the domain experts (the users in the DSO).
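As a sketch of what such an API could feel like to an application developer, consider the following. Everything here is hypothetical: the resource paths, field names and the NmmClient class are invented for illustration, and an in-memory store stands in for real HTTP calls so the sketch is runnable.

```python
# Hypothetical sketch of an "open and easy to use" NMM API, seen from
# an application developer's side. Paths and field names are invented.

class NmmClient:
    def __init__(self):
        # A tiny slice of an imagined as-built model, keyed by resource path.
        self._store = {
            "/as-built/feeders/F-01": {
                "mRID": "F-01", "name": "Feeder 01", "baseVoltage": "10kV"},
            "/as-built/feeders/F-02": {
                "mRID": "F-02", "name": "Feeder 02", "baseVoltage": "10kV"},
        }

    def get(self, path):
        """Read a single resource from the as-built model."""
        return self._store[path]

    def list(self, prefix):
        """List resources under a path, e.g. all feeders."""
        return [r for p, r in sorted(self._store.items())
                if p.startswith(prefix)]

# Usage: a front-end app fetching data for a feeder overview screen.
client = NmmClient()
feeders = client.list("/as-built/feeders/")
for f in feeders:
    print(f["mRID"], f["name"], f["baseVoltage"])
```

The design point is that a front-end developer and a domain expert should be able to build and iterate on such a screen in days, without needing to understand the internals of GIS, DMS or any other backing system.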

More than ten years of experience has taught me that trying to shoehorn processes into centralized and/or general-purpose systems or architectures without a well-defined domain model underneath can only optimize processes to a certain level.

Moreover, systems that are only understandable and/or accessible to specialized IT people – e.g. because the system is a silo or monolith – can likewise only optimize processes to a certain level.

If you really want to stay ahead of the game, you need to give employees and dedicated expert teams access to a distributed, scalable and flexible architecture, so they can easily access, analyze and manipulate data across the DSO enterprise by using crisp domain models (which I believe CIM provides) and technologies of their choice.

If you have any thoughts or ideas on this topic, don’t hesitate to contact me. You can also use the blog comment function below.

In the next blog post I will try to be more concrete about how an NMM-inspired architecture for DSOs could look.


The CIM conference, 1-3 June 2016


Remember the CIM User Group conference in Amsterdam, 1-3 June 2016

The CIM User Group is an El Dorado for nerds working on smart grid problems, and the participants typically also enjoy a beer or two. In other words, there is no excuse not to attend 🙂

Drop me a line if you are planning to attend!

This time the topic is: “Using CIM to Create and Support the Data-Driven Utility”.

Read more on the website:

So far, CIM has primarily been used as an exchange format between specific systems – e.g. between SCADA, GIS, smart meter systems and others.

That is, one system, e.g. SCADA, subscribes to data owned by another system – e.g. the static network data modelled in GIS – and through an integration gets these data synchronized into its own database.

But it actually becomes much more interesting when CIM is used to work with data across the systems, which is the focus area of this conference.

The classic use case is condition-based asset management, where data from e.g. GIS, SCADA and the smart meter system are combined, so that maintenance visits can be optimized according to the asset’s loading, lifetime curve, location and so on. I expect there will be some good talks on this, although I don’t know for sure.
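To illustrate the combination of data sources, here is a deliberately simplified, runnable sketch in Python. The scoring formula, the field names and the numbers are all invented for illustration; a real condition model would of course be far more sophisticated.

```python
# Hypothetical sketch of the condition-based asset management idea:
# combine static data (GIS / asset register), loading (SCADA / smart
# meters) and an age-based life estimate to rank assets for maintenance.
# The heuristic below is invented purely for illustration.

def condition_score(age_years, expected_life_years, avg_loading_pct):
    """Higher score = more urgent. Fraction of expected life consumed,
    weighted up or down by the asset's average loading."""
    life_used = age_years / expected_life_years
    return life_used * (0.5 + avg_loading_pct / 100.0)

transformers = [
    # (mRID, age from asset register, expected life, avg loading from meters)
    ("TR-01", 45, 50, 85.0),
    ("TR-02", 10, 50, 95.0),
    ("TR-03", 35, 50, 40.0),
]

# Rank the maintenance candidates, most urgent first.
ranked = sorted(transformers,
                key=lambda t: condition_score(t[1], t[2], t[3]),
                reverse=True)
for mrid, age, life, load in ranked:
    print(mrid, round(condition_score(age, life, load), 2))
```

Even this toy version shows the point: the ranking only makes sense if the age, type and loading data from the different systems actually refer to the same physical asset, which is exactly the kind of cross-system consistency CIM is meant to provide.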

There is also a CIM University on the first day, which is really good if you want to get up to speed on CIM quickly. There are normally two tracks: a beginner track and a more advanced track. The latter is typically focused on network analysis.