Webinar Series about Digital Services

Managing Complexity as a Service

How to help your clients reduce complexity and drive performance through data-driven services

Join our next Executive Service Roundtable on "Advancing Your Service Sales Approach"

INTRODUCTION – BY JAN VAN VEEN

Welcome to this best-practices exchange event of the moreMomentum Services Community. This is one of the kinds of meetings we hold in the services community alongside the peer group sessions.

It's a pleasure to have Yvo Saanen as our guest. Yvo is founder and commercial director of TBA Group, which is part of the Konecranes group. TBA's core business is helping its clients - predominantly port operations, but not only port operations - to become more data driven in their decisions, their process management and their continuous improvement programmes.

For quite a while now, we have seen increasing pressure from different kinds of forces, making it more and more important for manufacturers to grow their digital services so they can thrive and grow in these disruptive times.

Last year, we conducted global research amongst service leaders in manufacturing companies all over the world, to get a better understanding of their views, concerns and challenges around services, partly related to the COVID pandemic. The research shows:

  • A massive acceleration in the adoption of digital technologies, including remote support. This is not only for maintaining equipment, but also for continuous operational improvement and operational management. Companies are starting to use more and more digital tools, data and support from service providers.
  • Manufacturing companies are increasingly changing their view of their future business model.
    • They are shifting from being a pure product and equipment provider - with, of course, good maintenance services - to being an integrated business solution provider.
    • They envision expanding the scope of their value and relevance for their clients.
    • Business innovation capabilities are becoming an increasing priority.

These are two quick snapshots from the global research we did. If you want to have the report, you can download it from our website.

Today we will be talking about data-driven capabilities, data-driven decision-making and data-driven improvement programmes. The main topics we'll be discussing are:

  • What is really the value of becoming more data driven?
  • What does it take to become more data driven?
  • What do we see as common mistakes and pitfalls?
  • And what are some examples of how you can provide more data-driven and advanced services with these capabilities?

Today, we will start with a presentation of about an hour, in which Yvo will share his experience, views and practices - quite insightful for all of us.

After that, the members and a few invited guests will continue with peer discussions in smaller groups. After the break, we will have a panel discussion with Yvo, in which we will discuss the questions raised during the peer discussions.

Yvo, if you could introduce yourself a bit further and share your thoughts and experience.

Thank you for the introduction. Today, I'll be talking about something that seems so obvious: using the data that is available to us to improve our decision making - decision making in operations, and decision making on a much longer, strategic time horizon. But how does that play out in reality? We see that the data that is actually available to us is barely used, so we are only scratching the surface in getting the full value out of the data being gathered at large scale. One of the specific topics I will focus on today is the use of models. Models contain a lot of data, and they help us understand the complex reality we are in today.

ABOUT YVO AND TBA GROUP

Before I start, a short background on myself. I have been working in the logistics sector for 25 years, as a consultant and as a supplier of software, mainly in the area of ports: designing container terminals and improving them, and assisting clients in their journey to become more efficient, achieve higher productivity and use data in a much more effective manner.

I'm married, I'm the proud father of two kids who are meanwhile moving out of the house, and I'm a part-time lecturer at Rotterdam School of Management, on maritime logistics.

TBA Group, which I founded together with a partner 25 years ago, operates globally, focusing on helping customers in the logistics area. Our clients are mainly terminals - container terminals, dry bulk terminals, general cargo terminals - but also large-scale warehouses. We provide solutions such as terminal operating systems, equipment control systems and warehouse management systems.

We support about 150 sites globally with software that we have delivered, making sure that they get the best out of that software.

OVERVIEW

What am I going to talk about today? As Jan already said, we're going to talk about data, and about the use of data. Critical here is that out of this data we actually create insight and knowledge, because data by itself is basically useless. We need to be able to get value out of the data - and how do we do that?

One of the big hypes at the moment is artificial intelligence. I'll talk a little bit about that area, because it is in essence a method, a technique, to actually get value out of data. But how do we do that, and how do we get the most value out of it? I'll try to discuss that.

Then we'll tap into the reasons for doing data-driven decision making. Why not sail on our gut feeling? Why is data-driven decision making so much better? One way of incorporating data in our decision making is the use of modelling. Creating models of real-life systems helps us understand them, and these models are typically very data hungry: we can put real data into them so that the models help us in our decision making.

And then, as a final topic, I will also explore how we are using these models in a broader sense than just decision making:

  • In enhancing our software, especially software that controls complex systems - think of manufacturing plants and transportation systems, systems that are too big to fail and on which the working of our business heavily relies.
  • Another area is serious gaming. Coming from the traditional gaming sector, which is more about fun, we use more and more games to train people to become better at their job - jobs that involve interacting with complex systems, the same systems that we also try to model to get high-quality decision making.
  • And finally - in my view another hype - digital twinning. I'll place digital twinning in the context of the modelling discussion, and I hope afterwards you can see the relationship between digital twinning and the modelling practice that has been around since the 1960s.

And I'll end with some takeaways for you to remember. So that's the topic for this morning. 

THE DATA PARADOX: DATA BUT NO INSIGHT

Increasing volume of data

So, without further ado: data is available in heaps. There is so much data available that we actually don't know how to find our way around it, nor how to really get something out of it. The data itself is not what provides us with value; we need to create structures, ways of analysing data, to actually get value out of it. And that is not happening all the time, even though the amount of data we gather is rapidly increasing - here you see some overview numbers without going into much detail. Having data doesn't mean we have information, and it certainly doesn't mean we have knowledge. There is a whole process that brings us from data to a level of understanding of what is happening. We also see that, even with all this data we are gathering, we struggle to make predictions: predictions about stock market developments, predictions about identifying terrorists, but also, much closer to home, predicting when a machine is going to fail, or predicting when a rain sensor sees rain and has to wipe the windscreen. All of that remains quite difficult.

Nevertheless, it is very important to make good use of the data you are gathering. What we see these days in the operations of many companies is that they still make very poor use of the data they gather. A lot of time is spent gathering data, even creating all kinds of graphs and KPIs (key performance indicators) from it, but actually using the data and turning it into actions is still a big struggle.

One of the sources of data is that all our devices, our equipment, but also our infrastructure, are becoming connected; they produce data in real time. They have sensors connected to the internet, sensors that are relatively inexpensive, which is an important part of it. This also applies when we look at enhancing the machines you are using, the machines you are selling: enhancing them with sensors and with devices that communicate over the internet, so that the data those sensors collect can be used to gain insight. Of course, it may be harder to see what a remotely accessible dishwasher will do for you - after all, you still have to put the dishes in, and you need to be there to turn it on. But you could imagine that when it senses a water overflow because a valve broke, you have to get home as quickly as possible, and you want a signal from that device that there is a problem. The same applies to sensors in factories and in all kinds of complex systems: knowing what is going on, what they see at a local level.

The need to bring it together

We need to bring the data together at a central level and turn it into insight that we can use to make those systems more efficient: to get more productivity out of our factories, more throughput on our highways, and so on. The scale is enormous, so we need to have systems in place to actually deal with all this data. And that is clearly today's struggle - not only the fact that we are collecting a lot, but also structuring it so that it is really understood what this data means. And when we understand what it means, turning it into actions: actions to enhance maintenance practices, actions to fine-tune a whole factory production line so that the machines all produce at the same speed and together create a more reliable, more productive system.

Artificial intelligence - hoax or panacea?

Is artificial intelligence, then, the answer to this? Or is it just hype? Well, I can already tell you: artificial intelligence is not a hype. It has been around for many, many years. However, today's computing power, combined with the availability of data, enables a much broader reach for artificial intelligence algorithms.

Today, we will discuss specific areas in which artificial intelligence is quite successful, and increasingly so. Look, for instance, at the improvements in voice recognition and in interpreting people speaking to automated devices - say, call centres that use automation to categorise what a call is for, or ordering a ticket to travel somewhere. That has improved dramatically.

In some areas, AI still struggles

Where we see artificial intelligence still very much struggling is in more complex tasks - tasks that humans are actually extremely well equipped for, but where we find it very difficult to create artificial brains that can deal with the situation. This can clearly be seen in autonomous cars, which have been predicted to be available for quite a long time already. But even today, we are not where the manufacturers told us five, six, seven years ago we would be; we are still struggling to deal with a very complex reality.

There are many problems that are so complex, containing so many variables and so many circumstantial parameters, that it is very hard to beat a human in judging what is going on. A very simple example you see here, the dog-to-muffin comparison: if you just look as a computer would look, it has great difficulty distinguishing the Chihuahua from the muffin. To us, it is very obvious which ones are the dogs and which ones are the cakes, but for a computer it is very hard to tell them apart. And despite Elon Musk claiming for many years that his cars would drive autonomously from the West Coast to the East Coast, this is what ends up in the news: failing artificial intelligence, and drivers who really think their car is already capable of dealing with all this complex reality and end up in bad shape, so to say. So we need to keep that in mind; we should not be too optimistic about the capability of computers to predict what is going to happen in this very complex world.

VUCA

A very important notion is VUCA, a term used to describe how complex our reality is: it is volatile, uncertain, complex and ambiguous. It is not so clear what is really happening, so we need to be able to deal with that uncertainty and complexity.

A few examples: you can imagine that if you show this type of traffic sign to a computer, it easily gets confused about what is really meant, and we have already seen accidents with automated cars confronted with confusing situations like this. Or look at this one - how would a car get away here? For you as a driver, a piece of cake; but for a computerised brain, it takes a lot more to deal with strange situations as they appear everywhere in the world. So, dealing with VUCA means dealing with this complex reality.

The essence - Modelling 

And this brings us to the second topic of today: modelling. What is the essence of modelling? How does modelling help address this complexity? One word here is key: reduction. Reduction in the context of modelling means that we leave out details that are not relevant for the decision we want to make. Think about that: leaving out details that are not necessary for the decision we want to make. That implies that we do take into account all the details that are necessary to make the right decision. In other words, we try to make valid models of reality - valid meaning that the model contains enough realism to represent a good basis for decision making.

Sometimes the model can be incredibly simple, because all the surrounding details are not necessary to come to the right decision. In other cases, it needs to be very detailed, very complex - even the model can be very complex, needing a lot of detail to come to the right decisions. That is the art of modelling: deciding which details need to be included and which details can be left out. The more details can be left out without sacrificing model validity, the better the model gets: the simpler it gets, the easier it is to understand, and the easier it is to understand, the easier it is to use. Because if models also become very complex, we almost end up in the same situation as we are in with reality: we really don't understand all the complexity anymore.

So modelling is about reduction: creating a simpler representation of reality which still allows us to come to sound conclusions about the topic we want to decide on. That is a very important reason why we use models: we understand them better and they are easier to use than reality. If we want to try out a new solution - one that supposedly addresses a particular problem we want to solve - trying it out in reality can be done, but finding out whether it really works is another matter. We'll go a little further into the benefits of modelling.

WHY DATA-DRIVEN DECISION MAKING

So why are we looking at data-driven decision making? Why are we not just taking decisions based on trial and error, based on gut feel? Isn't that good enough? No, it isn't. It is risk prone, it leads to overspending, and it doesn't allow us to do comprehensive testing. In the end, it leads to solutions that are too expensive and too slow. We need to focus on data-driven decision making to come to efficient, productive, lean solutions - solutions that can actually be used by humans, so that people can already get used to what is being implemented in a controlled environment. Reducing risk is the overarching focus of data-driven decision making: the quality of the decisions simply gets better, risk is reduced and wrong decisions are avoided. So there is a very clear reason why we should bother.

Dynamic modelling

Now, one specific area that addresses this VUCA reality very well is dynamic modelling. Instead of building static representations of reality, we try to capture part of the dynamic behaviour - the fact that some things sometimes take longer than other things. Think of going to the airport. You live in a particular place - I guess you all live in different places - and you need to go to the airport, and of course you want to be on time for your plane. When should you leave? Is that always the same time, or should you consider the circumstances, the weather, all kinds of variables that need to be taken into account to come to the right decision - in this case quite a simple decision: when should you leave to be in time for your plane? We will come back to that when we look at the certainty of decision making. In any case, what we need to realise is that a model is always an approximation of the real system; a model is not the real system. That is also where we need to realise that absolute model validity does not exist. Even stronger: it is not even desired. If we were to search for it, the model would be as complex as reality - we would have to copy into the model all the specific aspects and behaviours we see in reality. We actually try to do the opposite: we try to leave out as much as we can while keeping the model fit for purpose, so that it still gives valid answers to the questions we ask.

All models are wrong

So, as George Box said: all models are wrong, but some models are useful. Validity of the model is of course essential, because otherwise the model puts us on the wrong foot. Now, the use of these models is always a process - a process that starts with a VUCA reality, here indicated by this strange shape. By looking at and analysing the starting situation, the current situation, we try to identify the problems we face, because we don't want to implement solutions if there is no problem. And there is only a problem when the current situation doesn't deliver the desired outcomes: the desired revenues, production levels, efficiency or failure rate - as mentioned, it depends which situation you are actually analysing. There is only a problem if your KPIs are not met, if you are not achieving your goals.

 

THE MODELLING PROCESS

4 step process

So, the first step in the modelling process is to build a model of the current situation you are in today.

Before you start looking at any solution, any improvement, any new technology to apply, you should make sure that your model is valid - a valid representation of today's reality. Then you start the cycle of diagnosis: what are really the root causes, why are you not achieving your goals?

Only when that is complete and you have prioritised your problems do you start looking for solutions.

Those solutions you then implement in the model, as you see them, and you start evaluating which solution contributes the most to solving the identified problems.

This is the only way in which you can properly prioritise improvement measures.
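To make the evaluate-and-prioritise step concrete, here is a minimal, purely illustrative Python sketch. The single-station queue, the made-up numbers and the two candidate improvements are all assumptions for this example - this is not TBA's tooling - but it shows the pattern: model the baseline, implement candidate solutions in the model, and rank them by the KPI improvement they deliver.

```python
import random

def average_wait(mean_service, service_spread, n_jobs=50_000, mean_interarrival=12.0):
    """Average waiting time (the KPI) at a single station, via Lindley's recursion:
    next job's wait = max(0, previous wait + service time - interarrival time)."""
    random.seed(42)                      # common random numbers: fair comparison
    wait, total = 0.0, 0.0
    for _ in range(n_jobs):
        service = max(0.1, random.gauss(mean_service, service_spread))
        interarrival = random.expovariate(1.0 / mean_interarrival)
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_jobs

# Step 1: model the current situation (numbers are made up for illustration)
baseline = average_wait(mean_service=10.0, service_spread=4.0)
print(f"baseline average wait: {baseline:.1f} minutes")

# Steps 3-4: implement candidate solutions in the model, evaluate and rank them
candidates = {
    "faster machine (service time -10%)": average_wait(9.0, 4.0),
    "more stable process (half the spread)": average_wait(10.0, 2.0),
}
for name, kpi in sorted(candidates.items(), key=lambda item: item[1]):
    print(f"{name}: average wait {kpi:.1f} minutes (improvement {baseline - kpi:.1f})")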

Jumping to conclusions and symptom fighting

A typical pitfall is that people immediately jump to solutions without knowing the real problem. They try to change all kinds of things without knowing what it will actually solve and how it will contribute to the objectives. The problem-solving process described in this graph forces you to focus first on problem identification and diagnosis, and only then start searching for solutions. And only when you have proof - data supporting that these solutions will address your problems - do you start implementing them in reality. You make a selection: these are the improvement measures I am going to implement. And when you have implemented them, you compare the new situation with the former situation and determine to what extent you have addressed and solved your problems.

This is the crucial cycle of data-driven decision making:

  • Modelling the starting situation.
  • Finding the bottlenecks, finding the problems.
  • Looking for solutions and assessing those solutions.
  • And based on those outcomes - those quantitative outcomes - selecting which ones you are going to implement, and comparing the result to your former situation to determine: was I successful? Did I achieve the goals I was after?

Example: Hospital logistics

I'll give you some examples from various areas where we have been active, because we are also active beyond the port sector. This is an example where we have been looking at the logistics of using all kinds of resources in hospitals: surgery rooms, doctors and surgeons, but also the other capabilities you need to perform surgery, for instance assistance and equipment that you might need in a surgery room. So how do you make the whole logistic flow of patients, in relation to the resources you have in a hospital, as efficient as possible? Well, I can already tell you, a hospital is not a very optimised logistical process if you compare it to a typical factory. There is a lot to gain, just from the sheer fact that hospitals only operate a very limited amount of time, while they have super expensive equipment that is used only a very low percentage of the time.

Example: Traffic flow

Another example, from a bit longer ago: this was, let's say, data-driven decision support we did before the extended A4 highway between Rotterdam and The Hague was built. This is the highway that took the longest to build - about 50 years; the part between Delft and Rotterdam took extremely long, although it is only five kilometres. Some 15 years ago the Ministry of Transport came to us. They wanted to know what would happen if they made it a toll road - a toll road where, at the access points, at the decision points (you are at this intersection, and also here at the intersection at Rijswijk), the price you pay would depend on the travel time gain. So if there is no congestion on either road, it would be relatively inexpensive, because there is no gain. But if, say, the A13 were highly congested and the A4 free flowing, you would have a major gain in travel time, so you would pay a lot to use that road. How would that work? Again, something that was never implemented, so it stayed a study - but it was quite interesting to see what the resulting mechanisms would be, depending on the algorithms you would use for pricing the gain in travel time.
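Purely as a hypothetical illustration of such a gain-based pricing rule - the algorithms actually studied are not detailed here, and the rates below are invented - it could be sketched like this:

```python
def toll_price(travel_time_a13_min: float, travel_time_a4_min: float,
               rate_per_minute_saved: float = 0.50, minimum_toll: float = 0.50) -> float:
    """Charge in proportion to the travel time gained by taking the tolled A4
    instead of the congested A13 (illustrative rule, made-up rates)."""
    minutes_saved = max(0.0, travel_time_a13_min - travel_time_a4_min)
    return max(minimum_toll, rate_per_minute_saved * minutes_saved)

print(toll_price(25, 24))   # both roads free flowing: little gain, cheap toll
print(toll_price(55, 20))   # A13 congested, A4 free flowing: big gain, expensive toll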

Example: New container facility

And then a third example: the planning of a new container facility, in this case in Singapore, where the Port Authority of Singapore is actually moving all its terminals out of the downtown area. Ports are traditionally close to where people live, so the old ports in Singapore are very close to residential areas, and obviously it makes much more money to turn that land into high-value real estate. So they are investing huge amounts of money - billions and billions - to build new artificial islands and construct fully automated container terminals there. Here you see a representation of such a model, which we used to validate whether the concepts being considered actually deliver on the targets. Based on these models, this is now actually being built. So years before the first island is even built, we already create these kinds of models to determine how the actual operation will go.

WHY DYNAMIC MODELLING

So, going into a bit more detail: why are we using this dynamic modelling? Well, there are a couple of reasons.

Address dynamic behaviour

First of all, to actually address dynamic behaviour: the fact that processes are stochastic - a step sometimes takes 10 seconds, sometimes 20 seconds, sometimes five minutes - and to take those behaviours into account, because they largely affect the output of a production line. If everything were static, with everything producing at exactly the same process time, there would be a lot less loss in these complex systems.
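As a small illustration of how variability alone eats capacity, here is a hedged Python sketch. The two-station line with no buffer, and the numbers, are assumptions made up for this example, not a model of any real plant: both stations have the same average process time, yet once the times vary, each station occasionally waits for the other and throughput drops.

```python
import random

def makespan(times_a, times_b):
    """Time to push all items through two stations in series with no buffer:
    station A must hold a finished item until station B is free (blocking)."""
    finish_b = 0.0
    start_b_prev = 0.0
    for i, (ta, tb) in enumerate(zip(times_a, times_b)):
        start_a = start_b_prev if i > 0 else 0.0   # A starts when B takes the previous item
        finish_a = start_a + ta
        start_b = max(finish_a, finish_b)          # B waits for the item and for itself
        finish_b = start_b + tb
        start_b_prev = start_b
    return finish_b

random.seed(1)
n = 100_000
deterministic = [10.0] * n
stochastic_a = [random.expovariate(1 / 10.0) for _ in range(n)]  # same 10 s average
stochastic_b = [random.expovariate(1 / 10.0) for _ in range(n)]

print("throughput, fixed 10 s steps :", n / makespan(deterministic, deterministic))
print("throughput, variable steps   :", n / makespan(stochastic_a, stochastic_b))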

Inexpensive trial and error

The second reason: it is a safe and inexpensive trial-and-error environment. Instead of trying out all kinds of solutions in reality - which takes a lot of time to find out what really works, and also carries a lot of risk, because if you implement something wrong it may cause harm and may actually reduce your output - you want to try things in a safe environment and see what works.

Prepare for extreme and non-repetitive situations

Moreover, you want to analyse systems in extreme situations. Think of the flower auction in Aalsmeer - one of the biggest flower auctions in the world - on Mother's Day or Valentine's Day. That is when those systems get tested to the extreme, and that happens only a few days a year. If you can only test your algorithms, the control mechanisms of that complex system, on those few days of the year, it is very difficult to really try things out.

In a model environment, which is a controlled environment, you can play that particular scenario time after time until you are satisfied with your solutions. Again data driven, in this case by creating the scenarios that could make the system break.

Visualisation of process

From the examples you may already have seen the visualisation. These kinds of models typically come with a visualisation of the process, which helps people understand what is going on. Especially when you are changing things, people find it difficult to understand from descriptions, process models and drawings what the interaction between all the system components will be. Modelling allows you to visualise it and to create a much better understanding.

Quantify and prioritise

The most important part, though, is the quantified results these models produce: performance, cost, breakdown rates, reliability numbers. Those numbers allow you to prioritise - to really say, is this improvement measure, this solution, worth the money I need to invest, worth the time it takes? Because it is not just money; it is typically also effort, or disturbance of production sites.

Avoid guessing

Lastly, it also avoids guessing. These models are data hungry: they require a lot of input about reality, and a lot of analysis of how system components are behaving, which requires a lot of measurement before you start the modelling cycle. Modelling avoids guessing, and I have seen too many times that people think a behaviour is such and such, and it turns out to be very different - so different that the decision would go a completely different way. An additional benefit we see is that throughout the creation of the model, so many questions need to be asked about the real system that not only the modeller gains insight, but the owner of the real process does too, and starts to understand his process, his operation, better than he did before. So it comes with some very nice benefits.

CONFIDENCE AND ACCURACY OF ESTIMATES

Now, people still come to me and say: well, if I can, I'll just make the decision on the back of an envelope. And I would say: if the complexity allows for that - meaning it is a very simple question - surely, I would do the same. But many of the systems we are working with, and that you are working with, are most likely not so simple. If you turn a knob here, what will be the outcome five steps further down your supply chain, five steps further down your production line? We typically don't know. And we see that these overconfident decisions people make - thinking they can base them on a back-of-an-envelope calculation - are wrong, lead to overspending and lead to negative consequences, potentially with high financial impact. We want to avoid those.

The other thing is - and I'm coming back to my example of going to the airport - if I, living here in Delft, asked you how much time it is going to take me to get to the airport, people typically answer 45 minutes, one hour, half an hour. Some will ask: do you take your bike? Do you go by train? - and give an answer depending on the circumstances.

But what you rarely hear is people saying: well, it will take you between 30 minutes at best and two hours in the worst case. They would then actually mention the level of uncertainty, which is essential for modelling. It is also essential to come to the right conclusion, because if you tell me it takes 30 minutes, and it happens that I leave in the morning rush hour and spend an hour and a half in a traffic jam, it is going to take me two hours and I will miss my plane. Here it is visualised a little bit.

Now, one of the things you are probably familiar with is that a lot of the behaviour of processes and phenomena in reality closely resembles the normal distribution. You see here two representations of the same process - in this case the behaviour of a crane in a container terminal - and around the average there is a spread of outcomes. The larger the spread, the less reliable the outcome is going to be. So when we estimate, we try to estimate with a certain reliability, and typically we create what we call a 95% confidence interval.

So, for instance, coming back to our example of travelling to the airport: we say, I want to make sure that with 95% confidence you will arrive within the time I have indicated. If you make this interval too small while the variation is much larger, the likelihood of ending up outside the interval is very high. So the more uncertainty there is, the larger you have to make your interval. If you really have no knowledge of the situation, you would even create a gigantic interval; you would simply say, I have no clue, it takes you between one minute and one day. Well, you are probably right, but it is a useless answer, so I would find somebody else to answer the question for me. At least I know that you were so unsure that you couldn't give me any certainty.

One rule of thumb - and it is extremely difficult for people to do - is to make the uncertainty interval larger rather than risk having the real value fall outside the interval, in this particular case missing my plane. So remember: when people ask you to estimate something, better a large interval if you are uncertain than having the real value outside the interval, because that could have very negative consequences.
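As a minimal sketch of such an interval - with made-up travel times, purely for illustration - one could estimate a 95% range like this in Python. The point is simply that stating an average plus a 95% range communicates the uncertainty, where a single number would not.

```python
import statistics

# Hypothetical door-to-airport travel times from past trips, in minutes
travel_times = [38, 45, 52, 41, 95, 47, 60, 44, 50, 73, 39, 55, 48, 66, 42]

mean = statistics.mean(travel_times)
stdev = statistics.stdev(travel_times)

# If the spread were roughly normal, about 95% of trips fall within mean +/- 1.96 sigma
low, high = mean - 1.96 * stdev, mean + 1.96 * stdev
print(f"point estimate: {mean:.0f} min, 95% range: {low:.0f} to {high:.0f} min")

# Distribution-free alternative: estimate the 2.5th and 97.5th percentiles of the data
cuts = statistics.quantiles(travel_times, n=40, method='inclusive')  # 2.5% steps
print(f"empirical 95% range: {cuts[0]:.0f} to {cuts[-1]:.0f} min")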


PITFALLS

The last thing about modelling is its pitfalls.

Garbage in - Garbage out

There are a number of them, and one is captured by a very well-known phrase: garbage in, garbage out. We need to spend enough time gathering accurate data, so that we have a very solid base layer of assumptions and know how reality is behaving - so that we can actually create a valid model of reality and get this pitfall out of the way.

Too detailed models

Another pitfall is putting too many details into the model. Engineers - unfortunately, I have to admit, I am one of them - tend to put too much detail in their models; we are in love with details. But a true modeller tries to limit the level of detail, making the model as simple and as easy to understand as possible. The subsequent decision making also becomes more trustworthy.

Humans change behaviour 

One of the other complex things in modelling is representing the behaviour of humans, especially humans confronted with new situations, because how are they going to react? I have seen situations where, in theory, with unchanged human behaviour, the implemented solution worked perfectly - but with the new solution in place, people changed their behaviour. I'll give you an example. We were analysing a garbage collection system - a set of garbage collection trucks - and we implemented a new, optimised routing scheme. Very quickly, the drivers found out that it didn't give them enough time to plan an intermediate coffee stop, so instead of saving overall collection time, they built in their own coffee stop somewhere in the middle of the route. The overall result after implementing this routing was zero. Of course, the guys had a good time, because they could take some time for coffee, but there was no result.

Poor validation and trust

Validation is something we cannot take too lightly. We need to make sure that our model is valid; otherwise, our decision making is in danger. Related to validation - and at least as important - is accreditation: creating trust with the decision-makers that the model can be used, so that they have confidence in the model. If they don't, they won't use it even if it is valid, and they will still rely on gut feeling in making the decisions.

 

NEW USE CASES FOR MODELLING

So apart from traditional decision making, we see more and more applications of models for different purposes.

Emulation-based testing

One of the purposes we have been exploring for 15 years already is testing complex control software - think of a warehouse management system or warehouse control system, or a traffic management system - in a laboratory environment, running against a virtual representation of reality: a model that is, again, suitable for testing the system, creating all those circumstances in which the system may fail. So we can play around with the system until the level of confidence is high enough to roll it out into live operations.

We call this emulation-based testing. We take a real piece of software - typically complex software - and link it to an emulated system, letting it run as if it were in live operations. To the control software there is no difference: the interactions with the equipment, all the inputs and outputs, the arriving information, the decisions by humans are exactly as they would be in live operation, but it is actually only moving virtual machines and virtual containers. So we can analyse whether the software works as it should.
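A minimal sketch of the idea, with hypothetical names and behaviour (this is not TBA's software, just an illustration of the pattern): the control logic talks to an interface, and the emulator implements that same interface with virtual equipment, stochastic timings and injected failures, so the control logic cannot tell the difference.

```python
import random
from typing import Protocol

class Plant(Protocol):
    """The interface the control software talks to - real terminal or emulator."""
    def move_container(self, container_id: str, destination: str) -> float: ...

class EmulatedTerminal:
    """Virtual stand-in for the real terminal: it only moves virtual containers."""
    def __init__(self, failure_rate: float = 0.05) -> None:
        self.failure_rate = failure_rate

    def move_container(self, container_id: str, destination: str) -> float:
        if random.random() < self.failure_rate:
            raise RuntimeError(f"simulated breakdown while moving {container_id}")
        return random.uniform(60, 180)          # stochastic handling time in seconds

def control_software(plant: Plant, jobs: list[tuple[str, str]]) -> None:
    """Stand-in for the control logic under test; it cannot tell it runs against an emulator."""
    for container_id, destination in jobs:
        try:
            seconds = plant.move_container(container_id, destination)
            print(f"{container_id} -> {destination} in {seconds:.0f} s")
        except RuntimeError as problem:
            print(f"recovering from: {problem}")   # exercise the error-handling path

random.seed(7)
control_software(EmulatedTerminal(), [("CTR001", "stack A"), ("CTR002", "quay crane 3"),
                                      ("CTR003", "rail head")])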

We always use the metaphor: power is nothing without control. So we need to make sure that our control systems actually work.

And it is a high necessity, because the success rate of complex software implementations is actually very low. Typically, they end up with big problems, and we have seen some big problems in real life. For instance, the last time Heathrow went live with a new baggage handling system, it resulted in 50,000 passengers stranded at Heathrow who couldn't get their baggage and couldn't continue flying. Major problems and major damage.

Here we see some statistics from typical complex IT projects, which overall have a very low success rate: overruns in time and budget, less functionality delivered, and even a large percentage that are considered a total disaster.

So emulation-based testing offers a much richer environment to validate whether the software actually works under extreme circumstances - Valentine's Day circumstances, the circumstances after Chinese New Year, all those exceptional scenarios we see coming. It is hard to test software under those circumstances in real life; we would rather do that in a controlled laboratory environment using emulation. This is one of the use cases we have already developed, especially where we look at large automated systems with a lot of complexity and continuous operation. Such a system doesn't allow for standstill; therefore, it has to work like clockwork.

Serious gaming

Another area, which I already mentioned in the introduction, is serious gaming: actually training operators in what we call near-to-life circumstances, even in a multi-actor environment, as you can see here in this little movie, on the system they will be operating - in this case a fully automated container terminal, where we still have humans going, for instance, into the automated area to repair an automated guided vehicle. We want to make sure that when they do this in reality, they are already trained and follow the procedures so precisely that they are never at risk. We do that by training them interactively, letting them play in a kind of shooter-game reality, while at the same time one of their colleagues works on the control tower systems to make sure that they, too, follow the right steps, so that the person going into the automated area is always safe.

What we find from these training sessions is that the learning is way better than textbook learning. It is much more fun, so people are much more engaged, and we can actually test them across the full range of circumstances that could happen. Again, it is the same type of model: the models that were used for decision making and for emulation-based testing are now used in a gaming environment.

 

Digital twinning

The last one I want to mention - and this, too, is a true hype - is what we call the digital twin. Here we see an example of a combination of real vehicles and simulated vehicles.

The blue vehicles here are not real; they exist only in the system environment. You can imagine that if only a very few vehicles have been built yet, and we still want to analyse the whole complexity of the entire fleet, we scale up from a system point of view: in reality, of course, only a few real vehicles are driving, but in the system we work with an entire fleet to really analyse how they behave in direct interaction with each other. We also use this in a way where we try to learn from the patterns in the sensor data and the events generated by the machines - partially real events from real machines, and partially simulated events from models that resemble the behaviour of the real machines.

From that output, we try to recognise patterns - patterns that indicate that something is about to happen. It is very important in all kinds of artificial learning processes that we try to see what is coming from the data we are receiving, for instance so that we can predict in advance that something is going to break. Anticipating a breakdown, we can already order a spare part. Those patterns need plenty of examples, and, as I said at the start, we are gathering a lot of data - we are just not so good yet at understanding the data, structuring it and using it in a systematic way.

But we are sitting on this huge amount of data, so we all need to work with that data and translate it into insight. Models of reality will help us understand it: the models are trained so that they start behaving like the real machines. That is the key to digital twins. They will bring us sooner from data to insight, to real knowledge of how the systems are behaving, so we can achieve our goals much faster.
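As a small, hedged illustration of the kind of pattern recognition meant here (the sensor trace and the threshold rule are made up for this example), a rolling-baseline drift check might look like this:

```python
import statistics
from collections import deque

def detect_drift(readings, window=20, threshold=3.0):
    """Flag readings that drift more than `threshold` standard deviations away
    from the recent rolling baseline - a trigger to inspect or order a spare part."""
    baseline = deque(maxlen=window)
    alerts = []
    for index, value in enumerate(readings):
        if len(baseline) == window:
            mean = statistics.mean(baseline)
            spread = statistics.stdev(baseline) or 1e-9   # guard against zero spread
            if abs(value - mean) > threshold * spread:
                alerts.append((index, value))
        baseline.append(value)
    return alerts

# Hypothetical vibration trace: stable behaviour, then a slow ramp-up towards failure
trace = [1.0 + 0.05 * (i % 3) for i in range(60)] + [1.3 + 0.1 * i for i in range(10)]
for index, value in detect_drift(trace):
    print(f"reading {index}: {value:.2f} deviates from the recent baseline")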

So those are three examples of a similar type of model, used for different purposes, to further enrich and enhance the use of data.

IN CONCLUSION

So, the takeaways from what we discussed today:

  • It's not the data we are after; it's the insight, or maybe even the knowledge. But we do need the data, and we need to understand it. We also need to be aware that data alone doesn't get us anywhere; we need to turn it into insight, and that insight should be used to enhance our processes - maybe even to enable new business models, for instance automated ordering of spare parts.
  • Only when data is really well understood, and collected in a systematic, structured way, can insight be created. Systematic, structured collection is a very important prerequisite for successful data-driven decision making: ensuring that we know what we are collecting and understand what we are collecting.
  • Modelling helps us understand complex problems much better, because models are simplified representations of reality: on purpose, we leave out irrelevant details. Models also allow us to put data in context.
  • And finally, these days we also see models used beyond the decision making for which they were originally intended. We see great purpose in software testing, in interactive training using gaming technologies, and in digital twinning, for instance to enhance predictive maintenance.

That's it.

WHAT KIND OF SERVICES CAN WE OFFER AS A MANUFACTURER?

Jan van Veen

Yvo, one question you could perhaps elaborate on a bit. As a reference: most people listening to this work in service at a manufacturing company. One very obvious point is to become more data driven in the maintenance services approach - that is already a focus point for a lot of companies. But there is also another opportunity: as I mentioned in the beginning, more and more clients of manufacturers are becoming more data driven in running and optimising their operations and processes. You have given some examples in port operations, hospitals and even in traffic. So maybe you can paint a bit of a picture: what kind of services do you provide to your clients? Do they revolve around very specific product problems? Are you helping clients develop the capabilities to become more data driven in their daily practice? Or are you also offering, let's say, an ongoing, managed or recurring service, where you constantly play a part in their data-driven capabilities? Maybe you can paint a bit of a picture; I think that would be interesting for a lot of the listeners.

 

Yvo Saanen  

What we see with a lot of customers is that the actual use of the data is quite limited. Although they are gathering a lot of data, it is not really used for day-to-day decision making. There are daily meetings with most of our customers where they look at some KPIs, but they hardly get to the level of asking: why is the KPI as it is? What explains why yesterday's performance was better or worse than the day before? And then take action: we identified this and this problem yesterday, and we could have solved it by doing this, this and this.

So this is typically something we help a lot of customers with: creating a culture of continuous improvement using data. You look at the data of yesterday's operation, last week's operation or last month's operation, and by looking at that data you identify where you could have done better, where you missed opportunities, where your overall strategies for deploying your equipment, deploying your manpower or placing your orders could have been executed more efficiently - by setting parameters in your control systems differently, or by deciding differently where to put a piece of equipment or in which deployment state. We really try to instil that mindset.

Now, that is not easy. What we have seen is that as long as we are engaged - typically between three and six, sometimes nine months - the customer is in this mode of analysing, coming up with a diagnosis of what is wrong, identifying solutions and implementing those solutions, in that cycle. After that, when our engagement stops, they very often stop this process as well, and when you come back some time later, it is often gone. That is not everywhere - some keep the process going - but you see that people are still struggling, not because they cannot do it, but because they have no time; there is not enough capacity in terms of resources to actually look at the data and really analyse what is going on: why is it as it is, what are the root causes of, let's say, the disappointing results, or the results that are not meeting the objectives. So it is a matter of availability. It is also the way data is presented, which is still very much lacking.

So I think there is a long way to go in our sector to really reach a continuous learning curve - continuous improvement based on data from the past, let alone using data to look into the future, come to predictions and, based on those predictions, take the right course of action.

So that's another step forward.