

The size of the output originating from large-scale numerical simulations poses major bottlenecks in high-performance, parallel computing. It has recently become more and more evident that a radical change must take place in the way scientists and engineers handle numerical simulations. Squeezing more computational horsepower out of supercomputers is a trend that simply hits the data wall long before it gets a chance to start the ExaFLOP race.

This thesis proposes a new concept for dealing with large-scale numerical simulation data. Supercomputing today is like riding a barouche with horses that travel orders of magnitude faster than the storage; long-distance runs add hills and valleys to the landscape. High-performance computing facilities have become high-tech aquaria, where one can build the most advanced and expensive submarines, and then be limited to only staring at them through the windows. The new concept is called space-time window reconstruction, and it introduces a new style in high-performance computing.

A space-time window is an independent numerical simulation, based on a large-scale version, that captures a subdomain of analysis in both time and space. A regular computational fluid dynamics simulation is always shifted in time, and sometimes also in space: it is common practice to analyse a domain larger than what is really of interest, just to get the experimental data to match against a smaller region of the space-time domain. It is thus natural to propose a style of analysis that extracts and decouples the interesting parts of the simulation from the large-scale version, which is normally tied to expensive and scarce hardware installations. The concept is implemented using two different solutions: the first focuses on providing maximum flexibility to the user, while still retaining the flow features of the global simulation; the second concentrates on reconstructing the very same floating-point bits. Both provide substantial data reduction and alleviate supercomputing data bottlenecks; they are, however, more powerful when used together, in a compact, stand-alone procedure. The obtained results lay the foundations for a new way of doing numerical analysis, which can be extended to many other scientific fields, such as electrotechnics and magnetohydrodynamics. The overall data reduction varies from 2x down to 6% of the original size or better, although the proposed methods can only be applied with certain restrictions in place.
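The core idea of a space-time window, a subdomain of the global simulation restricted in both time and space, can be illustrated with a toy sketch. The function name, data layout, and shapes below are illustrative assumptions, not the thesis's actual implementation:

```python
# Hypothetical sketch: extracting a space-time window from the stored
# history of a global simulation. Here the "simulation" is a 1-D field
# sampled at discrete time steps; real CFD data would be 3-D in space.

def extract_window(history, t_range, x_range):
    """Return the part of `history[t][x]` covered by the window.

    history : list of time snapshots, each a list of cell values
    t_range, x_range : half-open (start, stop) index ranges
    """
    t0, t1 = t_range
    x0, x1 = x_range
    return [snapshot[x0:x1] for snapshot in history[t0:t1]]

# A toy global run: 4 time steps over 6 spatial cells, value = 10*t + x.
history = [[10 * t + x for x in range(6)] for t in range(4)]

# The window covers time steps 1-2 and cells 2-4 only.
window = extract_window(history, t_range=(1, 3), x_range=(2, 5))
# window == [[12, 13, 14], [22, 23, 24]]
```

The point of the sketch is the data-reduction argument: only the window (here 2 of 4 time steps and 3 of 6 cells) needs to leave the supercomputer, instead of the full history.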

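The second solution's goal of reproducing the very same floating-point bits is stricter than ordinary numerical closeness, and the difference can be made concrete. A minimal sketch, assuming IEEE-754 double precision; `bit_exact` is a hypothetical helper, not part of the thesis:

```python
import struct

def bit_exact(a: float, b: float) -> bool:
    """True iff a and b have identical IEEE-754 double bit patterns."""
    return struct.pack("<d", a) == struct.pack("<d", b)

# Numerically equal values can still differ at the bit level:
print(bit_exact(0.0, -0.0))       # False, even though 0.0 == -0.0
print(bit_exact(0.1 + 0.2, 0.3))  # False: the low-order bits differ
```

A reconstruction validated with `bit_exact`-style comparison is indistinguishable from the original output, whereas a tolerance-based check would accept results that merely retain the flow features.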