As the technology industry races to improve and perfect the digital twin to optimise performance, McLaren Applied’s F1-derived simulation techniques are helping to drive game-changing products which are delivering competitive advantage to clients of the McLaren and Deloitte alliance.
Simulation is a technique used to study the behaviour and performance of an actual or theoretical system. McLaren develops simulations during the design process of its Formula 1 car. This is integral to creating a thoroughbred racing machine. At McLaren Applied we apply this expertise as part of our work with Deloitte, in industries such as aviation.
The McLaren and Deloitte alliance has developed a suite of solutions, starting with the Airport Operations Performance Predictor (AOPP). It allows airport operators, including Manchester Airport Group, to deliver improved and robust On Time Performance (OTP) by enabling coordinated and focused delivery of the day’s flight schedule.
The AOPP runs simulations to support airports’ critical KPIs and on-time performance targets. Achieving those on-time targets involves operations such as turning around aircraft within a specific timeframe, and ensuring that aircraft ground movements and stand allocation adhere to schedule.
Methods of operational simulation include system dynamics, discrete-event, agent-based and random-sample simulation.
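Of these, discrete-event simulation is a natural fit for airport operations: the system only changes state at discrete events, such as an aircraft arriving or a stand becoming free. The sketch below is a minimal illustrative example, with entirely made-up numbers rather than real airport data:

```python
import heapq
import random

def simulate_turnarounds(num_flights, num_stands, seed=42):
    """Minimal discrete-event simulation: aircraft arrive at an airport
    with a limited number of stands, and each needs a turnaround slot.
    All durations here are illustrative, not real airport figures."""
    rng = random.Random(seed)
    # Arrival times in minutes: roughly one flight every 10 minutes.
    arrivals = sorted(rng.uniform(0, num_flights * 10) for _ in range(num_flights))
    # Min-heap of the times at which each stand next becomes free.
    stand_free_at = [0.0] * num_stands
    heapq.heapify(stand_free_at)
    total_delay = 0.0
    for arrival in arrivals:
        free_at = heapq.heappop(stand_free_at)  # earliest-available stand
        start = max(arrival, free_at)           # wait if no stand is free yet
        total_delay += start - arrival
        turnaround = rng.uniform(30, 50)        # turnaround duration in minutes
        heapq.heappush(stand_free_at, start + turnaround)
    return total_delay / num_flights            # average delay per flight
```

With the same schedule, fewer stands means more queueing delay per flight, which is exactly the kind of relationship a discrete-event model makes visible.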
There are six key steps we take to build a process simulation which underpin our aviation products:
STEP 1: Map the process
The first step is to map the process. The systems that call for this type of simulation are often complex and not easily distilled into mathematical equations, so we endeavour to acquire a detailed view of the system, which is a crucial element of our products.
For an airport simulation, we look at the historical data of arrivals, the average number and length of delays, the factors that cause the delays, as well as the impact of those delays.
Unsurprisingly, the higher the quality of data provided, the quicker this step will be.
A set of assumptions is made when mapping the process, because it’s not possible to map every relationship or understand every single decision point in a complex system; the focus must be on the principal sources of variation. Once the client has approved this set of assumptions and agreed on the model to be created, we can commence building it.
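As an illustration of the kind of mapping this step produces, the sketch below summarises a hypothetical set of historical delay records and resamples them as an empirical distribution. The figures are invented for the example, not drawn from any airport:

```python
import random

# Hypothetical historical delay records in minutes for one route; a real
# build would draw these from the airport's operational data feeds.
historical_delays = [0, 0, 5, 3, 0, 12, 7, 0, 25, 4, 0, 9]

def summarise(delays):
    """The kind of summary produced when mapping the process:
    how often delays occur and how long they last on average."""
    delayed = [d for d in delays if d > 0]
    return {
        "delay_rate": len(delayed) / len(delays),
        "mean_delay": sum(delayed) / len(delayed),
    }

def sample_delay(delays, rng=None):
    """Resample the historical record to feed a simulation, i.e. a simple
    empirical-distribution assumption about future delays."""
    rng = rng or random.Random(0)
    return rng.choice(delays)
```

Summaries like the delay rate and mean delay length feed directly into the assumptions the client signs off before modelling begins.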
STEP 2: Model selection
The right modelling technique is sometimes obvious, but on other occasions there may be several options, and choosing between them requires considerable investigation and research.
White-box modelling relies on knowing the equations or logic that drive system behaviour. This approach allows users to understand how the system behaves internally, and to interact with it to see how changes will affect its behaviour.
In contrast, black-box modelling defines outputs as a function of a set of inputs using mathematical and statistical methodologies – for example a neural network, a computer system modelled on the human brain and nervous system. The internal behaviour of the model is unknown, because the modelling relies not on the equations or logic that drive system behaviour but on observation of historical data.
Grey-box modelling combines the physical and logical modelling techniques used in white-box modelling, with an input/output based representation of a system that is used in black-box modelling.
This combined approach enjoys the benefits offered by both. It can solve problems where system behaviour is unknown by using historical data; equally, it affords the application of system equations where system behaviour is known.
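A toy grey-box sketch, with illustrative speeds and observations rather than real data: the white-box part predicts taxi time from distance and speed using a known physical relationship, and the black-box part learns a correction from historical observations.

```python
def taxi_time_white_box(distance_m, speed_mps=8.0):
    """White-box part: taxi time from simple physics (distance / speed).
    The speed value is illustrative."""
    return distance_m / speed_mps

def fit_residual(observed_times, distances):
    """Black-box part: learn the average gap between the physical model
    and observation from historical data (a one-parameter correction)."""
    residuals = [obs - taxi_time_white_box(d)
                 for obs, d in zip(observed_times, distances)]
    return sum(residuals) / len(residuals)

def taxi_time_grey_box(distance_m, residual):
    """Grey-box prediction: physics plus the data-driven correction."""
    return taxi_time_white_box(distance_m) + residual

# Hypothetical observations: actual taxi times exceed the physical
# estimate because of holds and congestion.
distances = [1600, 2400, 3200]
observed = [230, 330, 430]
bias = fit_residual(observed, distances)
```

Here the physical equation supplies the known part of the behaviour, and the fitted residual stands in for the part that is only visible in historical data.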
STEP 3: Choose the right tools for the job
Along with model selection, toolset selection is another important consideration.
The tools selected depend on factors such as the type of simulation to be performed, the functionality offered, and the cost.
Some simulation packages include built-in functionality that enables more efficient modelling, but at a higher cost. Knowing who the end user of the simulation will be is also a relevant factor: do they want something highly visual that they can interact with, in which case a front-end is required?
STEP 4: Build the model
Once key decisions have been made regarding appropriate modelling techniques, assumptions and tools, a first model framework can be created. This contains the basic functionality required to model the system, and serves as the basis for future development.
The model is continuously improved using an agile approach – an iterative method of planning and guiding project processes – in which a full development cycle, or what we call a ‘sprint’, is completed periodically. Regular feedback on the model from the client is encouraged, as this helps the team to prioritise tasks when planning.
Once the plan is complete, it is implemented, and the development team works collaboratively to ensure the desired functionality is present. The model is then tested and deployed, ready for the next iteration to follow. After each development cycle or ‘sprint’, we aim to have a fully working version of the model which contains the latest functionality.
STEP 5: Verification and validation
With the model created, verification and validation testing begins. Simulation engineers and data scientists are responsible for this, with input from the quality team and software testers.
Verification is about checking whether the code that has been written is doing what it is designed to do. For example, if a piece of code states that a plane takes off at 10:00 in the morning, is that plane actually taking off at that time?
Validation is the comparison of the model with the real system. It’s establishing how close the model is to the real system. Is it representative of the real world, or is it entirely useless? Are the differences acceptable?
There is a famous quote from renowned statistician George Box: “All models are wrong, but some are useful.”
A model will never be 100% accurate, and it doesn’t have to be entirely representative of the real world; the level of accuracy required depends on how the model will be used. If it is operational, supporting important strategic decisions in real time, a very high level of accuracy is necessary.
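The distinction between verification and validation can be sketched in code. The schedule, KPI values and tolerance below are illustrative, not taken from a real deployment:

```python
def simulated_departure(scheduled_minute, delay):
    """Toy model under test: departure time = schedule + delay."""
    return scheduled_minute + delay

# Verification: does the code do what it was designed to do?
# If the schedule says 10:00 (minute 600 of the day) with zero delay,
# the model must emit exactly 600.
assert simulated_departure(600, 0) == 600

# Validation: is the model close enough to the real system?
# Compare a simulated KPI against the historical record and check the
# gap is within an agreed tolerance (values are illustrative).
historical_otp = 0.82   # observed on-time performance
simulated_otp = 0.80    # the same KPI produced by the model
tolerance = 0.05
assert abs(simulated_otp - historical_otp) <= tolerance
```

Verification checks the code against its design; validation checks the model against reality, with "close enough" agreed in advance.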
STEP 6: Find the answer
The beauty of simulation is that it allows you to explore ‘what if’ scenarios and understand the impact of a specific change. For example, if a new taxiway layout were implemented at an airport, how would it affect the airport’s OTP?
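One way to sketch such a what-if comparison, using an invented exponential delay model rather than the AOPP’s actual simulations:

```python
import random

def on_time_performance(mean_delay, threshold=15, flights=1000, seed=1):
    """Fraction of flights whose delay stays under the OTP threshold,
    for a given average delay in minutes (all values illustrative)."""
    rng = random.Random(seed)
    on_time = 0
    for _ in range(flights):
        delay = rng.expovariate(1 / mean_delay)  # toy delay model
        if delay <= threshold:
            on_time += 1
    return on_time / flights

baseline = on_time_performance(mean_delay=10)  # current layout
scenario = on_time_performance(mean_delay=8)   # proposed layout, shorter taxi
```

Running the same simulation under both assumptions quantifies the change in OTP before anything is built on the ground.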
This capability supports operational supervisors and the wider operations teams at Air Navigation Service Providers. By accurately predicting the impact of a range of tactical decisions, the tool enables them to run ‘what-if’ scenarios, understand the effect of their decisions, and see how those decisions could have been improved.
AOPP is designed to be the core of an integrated airport operations hub and is based upon hundreds of thousands of simulations, conducted during every minute of the dynamic airfield environment. The software-enabled system navigates airport operators through better decision making and towards the best possible OTP and operational efficiency.
It can determine which arrivals and departures are most likely to operate out of schedule, which flights should be focused on to recover OTP, and which actions best mitigate the impact of delay when there is disruption.
Fuelling data driven insight
Currently, there is a frequent disconnect between planning and operational reality in aviation, which leaves the industry operating reactively rather than proactively.
McLaren Deloitte is at the forefront of bridging this gap; moving the industry ahead of the curve by using cutting-edge simulation technology to determine outcomes and performance, as well as quantify the impact of factors beyond the operational plan.
Explore McLaren Deloitte aviation products.
If you would like to speak to our team about our products please contact us here.