Understanding the behaviour of colloids – mixtures in which insoluble particles are finely dispersed throughout another substance – remains a challenging task of immense practical importance. We find these systems all around us: in seemingly mundane products such as milk, mayonnaise or shampoo, in everyday weather phenomena like mist, clouds or (alas!) pollution, and in large industrial processes in chemical engineering. These mixtures often exhibit surprising behaviours where the whole is more than just the sum of its parts. Scientific research in this area has a long and illustrious history, but the recent addition of HPC to our toolkit allows us to explore cases which are either too difficult for classical analysis or too expensive for experimentation.
Researchers at IBM and the Hartree Centre are working collaboratively to build mathematical models and computer programs for studying colloidal dynamics in detail. The complexity of modelling these flows comes from the presence of multiple interfaces between immiscible phases (as they do not form a single fluid), a broad range of scales and, in the case of liquid or gas dispersions, constantly deforming shapes. In a liquid jet breakup, for instance, the jet may be many times larger than the drops produced by primary and secondary breakup, and at the same time many times smaller than the geometrical dimensions of the mixer unit. Resolving all of these scales directly would incur an excessive computational cost, which leads us to seek alternatives in the form of various multi-scale modelling strategies.
A full view of a detailed simulation of liquid sheet breakup (top) with a contour plot of a mid-plane slice (bottom).
Therefore, together with my colleagues, I am working on scalable techniques to accurately resolve both the details of such flows and their effective macroscopic characteristics, such as mixture viscosities, mean drop sizes and interfacial drag. Direct numerical simulations are performed only for small portions of the complete domain, and automatic post-processing extracts information on predefined characteristic features and identifies the overall flow regime. The model of the full system can then employ the identified relationships as closure laws or boundary conditions. The intuitive justification for this strategy is that the characteristic features are essentially repetitive and do not need to be resolved everywhere.
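As a minimal illustration of the kind of automatic post-processing involved (the function and field names here are hypothetical, and we assume a phase-indicator field on a uniform grid), the sketch below labels connected liquid regions in a snapshot and estimates a mean drop diameter of the sort that could later feed a closure law:

```python
import numpy as np
from scipy import ndimage

def mean_drop_diameter(phase, dx):
    """Estimate the mean drop diameter from a 3-D phase-indicator field.

    phase : array of floats in [0, 1], ~1 inside the dispersed liquid.
    dx    : uniform grid spacing in metres.
    """
    # Threshold the indicator field and label connected liquid regions.
    labels, n_drops = ndimage.label(phase > 0.5)
    if n_drops == 0:
        return 0.0
    # Volume of each drop = number of cells in the region * cell volume.
    volumes = ndimage.sum(np.ones_like(phase), labels,
                          index=np.arange(1, n_drops + 1)) * dx**3
    # Equivalent spherical diameter per drop, then the arithmetic mean.
    diameters = (6.0 * volumes / np.pi) ** (1.0 / 3.0)
    return diameters.mean()

# Example: a synthetic snapshot containing two spherical drops.
dx = 1e-4                                      # 0.1 mm grid spacing
x, y, z = np.mgrid[0:64, 0:64, 0:64] * dx
phase = ((np.sqrt((x - 2e-3)**2 + (y - 3e-3)**2 + (z - 3e-3)**2) < 8e-4) |
         (np.sqrt((x - 5e-3)**2 + (y - 3e-3)**2 + (z - 3e-3)**2) < 5e-4)
         ).astype(float)
print(f"mean drop diameter ~ {mean_drop_diameter(phase, dx):.2e} m")
```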
Chemical engineers may, depending on the context, want to minimise or maximise the mixing efficiency of their process plants. To answer their questions, we need to be able to apply our findings from the detailed studies in models of much larger systems. This is why we are also working on novel code-coupling methods that enable data exchange between simulation codes operating at different spatial scales.
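One simple pattern for this kind of coupling is to place the micro-scale results behind an interface that the macro-scale solver can query at each step. The sketch below is purely illustrative (the class, correlation and coefficients are hypothetical, not those of our production codes) but shows the shape of the exchange:

```python
from dataclasses import dataclass

@dataclass
class DragClosure:
    """Interfacial drag correlation fitted from resolved micro-scale runs."""
    a: float   # fitted prefactor (illustrative value)
    b: float   # fitted Reynolds-number exponent (illustrative value)

    def drag_coefficient(self, reynolds: float) -> float:
        # Power-law correlation of the kind extracted by post-processing.
        return self.a * reynolds ** self.b

def macro_step(velocities, closure: DragClosure, dt=0.01):
    """One schematic macro-scale step: rather than resolving every drop,
    the solver looks up the effective drag supplied by the closure."""
    return [v - dt * closure.drag_coefficient(abs(v) * 1e3) * v
            for v in velocities]

closure = DragClosure(a=24.0, b=-0.8)          # illustrative fit
print(macro_step([0.05, 0.10], closure))
```

In a real coupled run the query would cross a process boundary between two simulation codes, but the contract is the same: the macro-scale code asks, the micro-scale model (or a fit derived from it) answers.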
The final component is a visualisation framework that implements data-centric principles to avoid excessive strain on disk input/output while providing the responsiveness of a desktop-like application. Running multiple simulations with concurrent visualisation is well suited to the capabilities of modern heterogeneous computer clusters.
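The data-centric idea can be sketched in a few lines: instead of dumping full 3-D fields to disk and visualising them afterwards, the simulation renders compact views (such as the mid-plane slice in the figure above) as it runs. The loop below is a toy stand-in, with a hypothetical `advance` step in place of a real solver:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # render off-screen, no display needed
import matplotlib.pyplot as plt

def advance(phase):
    """Stand-in for one solver step; shifts the field to mimic advection."""
    return np.roll(phase, shift=1, axis=0)

phase = np.zeros((64, 64, 64))
phase[16:32, 24:40, 24:40] = 1.0      # a synthetic liquid blob

for step in range(100):
    phase = advance(phase)
    # In-situ output: every 25 steps, write a small mid-plane image
    # (a few kB) instead of the full 3-D field (megabytes per step).
    if step % 25 == 0:
        plt.imshow(phase[:, :, 32], cmap="viridis")
        plt.title(f"mid-plane slice, step {step}")
        plt.savefig(f"slice_{step:04d}.png", dpi=80)
        plt.close()
```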
Overall, science and technology working in tandem can deliver a much more comprehensive way of studying multi-scale phenomena related to colloidal dispersions. The main benefits are the ability to refine models used at the engineering device level with results from detailed simulations and the possibility of exploring new flow regimes. So next time you wash your hair, try to appreciate how science, experimentation, maths and HPC contribute to making the perfect mixture.