The qualification of components in terms of durability and reliability relies on the analysis of large volumes of sensor, CAN, and IIoT data, which requires a data management infrastructure to understand customer usage and its variability. Such big-data infrastructures are often called "data lakes" and may store huge amounts of data. This infrastructure must be generic yet test-data-oriented, reflecting the structure of the data and the analyses it requires, and must be optimized for this application. The data may come from connected equipment, instrumented fleets, test bench or proving ground measurements, digital twins, and multi-body dynamics simulations; it must be managed for quality and traceability, and indexed so that it can be retrieved through query tags (customer, vehicle, measurement site, engine specification, road condition, usage conditions).
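As a minimal sketch of the tag-based retrieval described above, the following Python snippet builds an in-memory index of measurement records and queries it by tag. The record fields, tag names, and paths are illustrative assumptions; a real data lake would back this with a database or object-store catalog.

```python
from dataclasses import dataclass, field

@dataclass
class Measurement:
    """Hypothetical record: one measurement file plus its traceability tags."""
    path: str
    tags: dict = field(default_factory=dict)

class DataLakeIndex:
    """Minimal in-memory tag index (a sketch, not a production catalog)."""
    def __init__(self):
        self._records = []

    def add(self, record: Measurement):
        self._records.append(record)

    def query(self, **tags):
        """Return records whose tags match every requested key/value pair."""
        return [r for r in self._records
                if all(r.tags.get(k) == v for k, v in tags.items())]

# Illustrative entries only; tag names mirror those listed in the text.
index = DataLakeIndex()
index.add(Measurement("s3://lake/run_001.mf4",
                      {"customer": "A", "road_condition": "gravel"}))
index.add(Measurement("s3://lake/run_002.mf4",
                      {"customer": "B", "road_condition": "highway"}))

hits = index.query(road_condition="gravel")
```

Each new data source (fleet, bench, simulation) only needs to supply its tags at ingestion time for its records to become retrievable through the same queries.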
Once this step is achieved, the research and development department gains a better understanding of customer usage and input variabilities across different environments and conditions, which must be taken into account in the mission profile. From this ad-hoc mission profile, realistic and iso-damage usage scenarios are derived for proving ground tests, test flights, or test bench specifications.
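One common way to derive an iso-damage scenario is to equate pseudo-damage, for instance via Miner's rule with a Basquin-type exponent, between the field usage and a candidate test block. The sketch below assumes rainflow-counted (amplitude, count) pairs and an illustrative exponent; the numbers are not measured data.

```python
BASQUIN_EXPONENT = 5.0  # assumed S-N slope; component-specific in practice

def pseudo_damage(cycles, exponent=BASQUIN_EXPONENT):
    """Relative damage of a load spectrum: sum of count * amplitude**m.

    cycles: list of (amplitude, count) pairs, e.g. from rainflow counting.
    The proportionality constant of Basquin's law cancels out when
    comparing two spectra, so it is omitted here.
    """
    return sum(count * amplitude ** exponent for amplitude, count in cycles)

# Illustrative spectra only (hypothetical values).
field_history = [(100.0, 2.0e5), (150.0, 3.0e4), (200.0, 2.0e3)]  # customer usage
lap_history   = [(180.0, 5.0e2), (220.0, 1.0e2)]                  # one proving-ground lap

# Number of laps that reproduces the field damage (iso-damage condition).
target = pseudo_damage(field_history)
per_lap = pseudo_damage(lap_history)
laps_for_iso_damage = target / per_lap
```

Because the test lap concentrates higher amplitudes, the iso-damage lap count is far smaller than the field mileage would suggest, which is precisely what makes accelerated proving-ground schedules viable.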
Understanding customer usage and input variabilities makes a probabilistic fatigue analysis possible. The uncertainties on the inputs (geometry, material, and loading) can be propagated to the life results, given their probability distribution functions, using a Monte Carlo analysis. The infrastructure launches multiple runs on cloud-oriented servers, which automates and streamlines the whole process. A use case illustrating the approach and its benefits will be presented.
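The Monte Carlo propagation can be sketched as follows: sample each uncertain input from its distribution, evaluate a fatigue life model per sample, and read percentiles off the resulting life distribution. The Basquin-type life model, the distributions, and every parameter below are illustrative assumptions, not the actual qualification model.

```python
import random

random.seed(0)  # reproducible sketch

def fatigue_life(stress_amplitude, fatigue_strength, exponent):
    """Simple Basquin-type life model N = (S_f / S)**m (an assumption)."""
    return (fatigue_strength / stress_amplitude) ** exponent

N_RUNS = 10_000
lives = []
for _ in range(N_RUNS):
    # Sample the uncertain inputs (loading, material, geometry/model effect).
    stress = random.gauss(150.0, 15.0)     # loading amplitude, MPa (assumed)
    strength = random.gauss(900.0, 45.0)   # material fatigue strength, MPa (assumed)
    exponent = random.uniform(4.5, 5.5)    # slope scatter standing in for geometry
    if stress <= 0:                        # guard against non-physical samples
        continue
    lives.append(fatigue_life(stress, strength, exponent))

lives.sort()
b10_life = lives[int(0.10 * len(lives))]  # 10th-percentile (B10) life estimate
```

In the infrastructure described above, each sample would become an independent cloud job rather than a loop iteration, since the runs are embarrassingly parallel.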