984 results for Development of numerical thinking


Relevance:

100.00%

Publisher:

Abstract:

The objective of this work is to build a rapid, automated numerical design method that makes the optimal design of robots possible. Two classes of optimal robot design problems are specifically addressed: (1) optimizing a pre-designed robot, and (2) designing an optimal robot from scratch. In the first case, some of the critical dimensions or specific measures to be optimized (design parameters) are varied within an established range, and the stress is calculated as a function of the design parameter(s); the design parameter value(s) that optimize a pre-determined performance index then provide the optimum design. In the second case, the work focuses on an automated procedure for the optimal design of robotic systems. For this purpose, the Pro/Engineer© and MatLab© software packages are integrated to draw the robot parts, optimize them, and then re-draw the optimal system parts.
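The first design approach — sweeping a design parameter, evaluating stress, and selecting the value that optimizes a performance index — can be sketched as follows. This is an illustrative stand-in, not the thesis's Pro/Engineer©/MatLab© implementation: the beam-bending stress model, load values, and mass-based performance index are all hypothetical assumptions.

```python
import numpy as np

# Hypothetical example: choose the thickness t of a robot link so that
# peak stress stays below an allowable limit while mass is minimized.
# The analytic bending-stress formula stands in for the FEA results the
# thesis would obtain from the CAD/analysis toolchain.
def peak_stress(t, load=500.0, length=0.3, width=0.02):
    # bending stress at the root of a rectangular-section beam: 6FL/(b t^2)
    return 6.0 * load * length / (width * t**2)

def mass(t, length=0.3, width=0.02, density=2700.0):
    return density * length * width * t

def sweep(thicknesses, stress_limit=250e6):
    # keep only feasible designs, then take the lightest one
    feasible = [t for t in thicknesses if peak_stress(t) <= stress_limit]
    return min(feasible, key=mass) if feasible else None

ts = np.linspace(0.002, 0.02, 50)
best = sweep(ts)
```

Here the sweep returns the thinnest (lightest) link thickness that still keeps peak stress below the allowable limit.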

Relevance:

100.00%

Publisher:

Abstract:

Total knee arthroplasty (TKA) has revolutionized the lives of millions of patients and is the most effective treatment for osteoarthritis. The increase in life expectancy has lowered the average age of the patient, which demands a more durable, better-performing prosthesis. To improve implant designs and satisfy patients' needs, a deep understanding of knee biomechanics is required. To overcome the uncertainties of numerical models, instrumented knee prostheses have recently been spreading. The aim of the thesis was to design and manufacture a new prototype of instrumented implant, able to measure the kinetics and kinematics (in terms of medial, lateral and patellofemoral forces) of different interchangeable prosthesis designs during experimental tests on a robotic knee simulator in a research laboratory. Unlike previous prototypes, it was not aimed at industrial applications but focused purely on research. After a careful study of the literature and a preliminary analytical study, the device was created by modifying the structure of a commercial prosthesis and transforming it into a load cell. To monitor the kinematics of the femoral component, a three-layer piezoelectric position sensor was manufactured using a Velostat foil. This sensor responded well in pilot tests. Once completed, the device can be used to validate existing numerical models of the knee and of TKA and to create new, more accurate ones. It can lead to refined surgical techniques and enhanced prosthetic designs and, once validated and properly modified, it could also be used intraoperatively.
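Transforming a prosthesis into a load cell implies a calibration step: finding a matrix that maps raw sensor readings to medial and lateral forces. The sketch below illustrates the standard least-squares calibration idea with purely hypothetical numbers; it is not the thesis's actual procedure or sensor layout.

```python
import numpy as np

# Hedged sketch: with known applied loads F (n trials x 2 components:
# medial, lateral) and raw readings S (n x m channels), a calibration
# matrix C with forces ≈ C @ readings is recovered by least squares.
# The sensitivity matrix and noise level below are hypothetical.
rng = np.random.default_rng(0)
C_true = np.array([[2.0, 0.5, -0.3],
                   [0.1, 1.8, 0.7]])            # hypothetical sensitivities
S = rng.normal(size=(40, 3))                    # 40 calibration trials
F = S @ C_true.T + rng.normal(scale=1e-3, size=(40, 2))  # measured loads

# Solve F ≈ S @ C.T for C in the least-squares sense
C_est, *_ = np.linalg.lstsq(S, F, rcond=None)
C_est = C_est.T

# applying the calibration to a unit reading on channel 1
medial, lateral = C_est @ np.array([1.0, 0.0, 0.0])
```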

Relevance:

100.00%

Publisher:

Abstract:

UK engineering standards are regulated by the Engineering Council (EC) using a set of generic threshold competence standards which all professionally registered Chartered Engineers in the UK must demonstrate, underpinned by a separate academic qualification at Masters level. As part of an EC-led national project for the development of work-based learning (WBL) courses leading to Chartered Engineer registration, Aston University has started an MSc Professional Engineering programme, a development of a model originally designed by Kingston University, built around a set of generic modules which map onto the competence standards. The learning pedagogy of these modules conforms to a widely recognised experiential learning model, with refinements incorporated from a number of other learning models. In particular, the use of workplace mentoring to support the development of critical reflection and to overcome barriers to learning is being incorporated into the learning space. This discussion paper explains the work that was done in collaboration with the EC and a number of Professional Engineering Institutions to design a course structure and curricular framework that optimises the engineering learning process for engineers already working across a wide range of industries, and to address issues of engineering sustainability. It also explains the thinking behind the work that has been started to provide an international version of the course, built around a set of globalised engineering competences. © 2010 W J Glew, E F Elsworth.

Relevance:

100.00%

Publisher:

Abstract:

The focus of this work is to develop and employ numerical methods for characterizing granular microstructures and for simulating the dynamic fragmentation of brittle materials and the dynamic fracture of three-dimensional bodies.

We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
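A second-order fabric tensor of the kind described above can be computed directly from the inter-particle contact normals. The sketch below is a minimal illustration on synthetic contact sets (not tomography data): the tensor's trace is 1 by construction, and the spread of its eigenvalues away from 1/3 measures the anisotropy of the contact distribution.

```python
import numpy as np

# Minimal fabric tensor: N = (1/Nc) * sum over contacts of n ⊗ n,
# where n are unit contact normals.
def fabric_tensor(normals):
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # ensure unit vectors
    return np.einsum('ci,cj->ij', n, n) / len(n)    # (1/Nc) sum n ⊗ n

# Isotropic set: contacts spread evenly over the coordinate directions
iso = fabric_tensor([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                     [0, -1, 0], [0, 0, 1], [0, 0, -1]])

# Anisotropic set: contacts concentrated along z (e.g. after compaction)
aniso = fabric_tensor([[0, 0, 1], [0, 0, -1], [0.1, 0, 1], [0, 0.1, -1]])
```

For the isotropic set the tensor is the identity scaled by 1/3; for the compacted set the zz component dominates, signalling a directional contact distribution.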

We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.
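The role of randomized material properties in breaking the artificial symmetry of uniform fragmentation can be illustrated with a toy weakest-link model (not the thesis's cohesive-element code): with identical strengths every element reaches failure at the same load, so floating-point round-off effectively chooses the outcome, while a small random perturbation yields a well-defined failure sequence.

```python
import numpy as np

# Toy illustration: in a uniformly loaded 1-D bar, identical element
# strengths make every element fail at the same load step; perturbing
# the strengths by a fraction of a percent breaks the tie.
rng = np.random.default_rng(42)
n = 1000
uniform = np.full(n, 100.0)                        # identical strengths
perturbed = uniform * (1 + 1e-3 * rng.standard_normal(n))

def first_failure_count(strengths):
    # number of elements that fail together at the first failure load
    return int(np.sum(strengths == strengths.min()))
```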

The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.

Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that using an asymmetrical damage model to represent tensile damage is important for producing the expected results in brittle fracture problems.

The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.

Relevance:

100.00%

Publisher:

Abstract:

Participation usually sets off from the bottom up, taking the form of more or less enduring forms of collective action with varying degrees of influence. However, a number of projects have been launched by political institutions in the last decades with a view to engaging citizens in public affairs and developing their democratic habits, as well as those of the administration. This paper analyses the political qualifying capacity of the said projects, i.e. whether participating in them qualifies individuals to behave as active citizens; whether these projects foster greater orientation towards public matters, intensify (or create) political will, and provide the necessary skills and expertise to master this will. To answer these questions, data from the comparative analysis of five participatory projects in France and Spain are used, shedding light on which features of these participatory projects contribute to the formation of political subjects and in which way. Finally, in order to better understand this formative dimension, the formative capacity of institutional projects is compared with the formative dimension of other forms of participation spontaneously developed by citizens.

Relevance:

100.00%

Publisher:

Abstract:

The power of computer game technology is currently being harnessed to produce “serious games”. These “games” are targeted at the education and training marketplace, and employ key game-engine components such as the graphics and physics engines to produce realistic “digital-world” simulations of the real “physical world”. Many approaches are driven by the technology and often lack a firm pedagogical underpinning. The authors believe that the technological and pedagogical dimensions should be analysed and deployed together, with the pedagogical dimension providing the lead. This chapter explores the relationship between these two dimensions and how “pedagogy may inform the use of technology”, i.e. how various learning theories may be mapped onto the affordances of computer game engines. Autonomous and collaborative learning approaches are discussed. The design of a serious game is broken down into spatial and temporal elements. The spatial dimension is related to theories of knowledge structures, especially “concept maps”. The temporal dimension is related to “experiential learning”, especially the approach of Kolb. The multi-player aspect of serious games is related to theories of “collaborative learning”, discussed in terms of “discourse” versus “dialogue”. Several general guiding principles are explored, such as the use of “metaphor” (including metaphors of space, embodiment, systems thinking, the internet and emergence). The topological design of a serious game is also highlighted. The discussion of pedagogy is related to various serious games we have recently produced and researched, and is presented in the hope of informing the “serious game community”.

Relevance:

100.00%

Publisher:

Abstract:

This report summarises research that began in March 2014 and was completed in October 2015 by an experienced inter-disciplinary research team from the Centre for Social Justice and Change and Psycho-Social Research Group, School of Social Sciences, the University of East London (UEL), which included Dr Yang Li from the Centre for Geo-Information Studies, UEL, for the first phase of the study. Tottenham ‘Thinking Space’ (TTS) was a pilot therapeutic initiative based in local communities, delivered by the Tavistock & Portman NHS Foundation Trust and funded by the London Borough of Haringey Directorate of Public Health. TTS aimed to improve mental health and to enable and empower local communities. TTS was situated within a mental health agenda that was integral to Haringey’s Health and Wellbeing Strategy 2012-2015 and aimed to encourage people to help themselves and each other and to develop confident communities. On the one hand TTS was well suited to this agenda; on the other, participants were resistant to, and were trying to free themselves from, labelling that implied ‘mental health difficulties’. A total of 243 meetings were held and 351 people attended 1,716 times. The majority of participants attended four times or fewer, while 33 people attended between 5 and 10 times and 39 people attended over 10 times. Attending a small number of times does not necessarily mean that the attendee was not helped. Attendees reflected the ethnic diversity of Tottenham; 29 different ethnic groups attended. The opportunity to meet with people from different cultural backgrounds in a safe space was highly valued by attendees. Similarly, participants valued the wide age range represented and felt that they benefited from listening to inter-generational experiences.
The majority of participants were women (72%) and they were instrumental in initiating further Thinking Spaces, topic-specific meetings, the summer programme of activities for mothers and young children, and training to meet their needs. The community development worker had a key role in implementing the initiative and sustaining its growth throughout the pilot period. We observed that TTS attracted those whose life experiences were marked by personal struggle and trauma. Many participants felt safe enough to disclose mental health difficulties (85% of those who completed a questionnaire). Participants also came seeking a stronger sense of community in their local area. Key features of the meetings are that they are democratic, non-judgemental, respectful, and focused on encouraging everyone to listen and to try to understand. We found that the therapeutic method was put in place by high-quality facilitators, and health and personal outcomes for participants were consistent with those predicted by the underpinning psychoanalytic and systemic theories. Outcomes included a reduction in anxieties and improved personal and social functioning; approximately two thirds of those who completed a questionnaire felt better understood, more motivated and more hopeful for the future. The overwhelming majority of survey respondents also felt good about contributing to their community, said that they were more able to cooperate with others and more accepting of other cultures, and had made new friends. Participants typically gained a better understanding of their current situation and how to take positive action; of those who completed a questionnaire, over half felt more confident about seeking support for a personal issue and contacting services. Members of TTS supported each other, instilled hope and built community-mindedness that reduced social isolation.

Relevance:

100.00%

Publisher:

Abstract:

Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail with experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has generally been motivated by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence-free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh; that study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes.
A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed in \cite{Juric1998} for the computation of a range of phase change problems, including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge for methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem; the approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set, volume of fluid method and the diffused interface approach were used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}, analyzing the effect of superheat and saturation pressure on the frequency of bubble formation. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations. A similar approach was adopted in \cite{Son2008} to study various boiling problems, including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}.
The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux; to avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}: although that method is based on the VOF, the large pressure peaks associated with the sharp mass source were observed to be similar to those of the interface tracking method. Such spurious fluctuations in pressure are fundamentally undesirable because their effect is globally transmitted in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain, suggested by Nguyen et al.
in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet, but it is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose a general (iii) mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations. Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase change method.
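As a concrete example of the verification problems mentioned above, the classical one-phase Stefan problem has a closed-form similarity solution against which a phase-change algorithm can be checked. The sketch below evaluates that analytical solution; the nondimensionalization and parameter values are illustrative.

```python
import math

# One-phase Stefan problem (melting of a semi-infinite solid held at the
# fusion temperature): the interface position follows
#     X(t) = 2 * lam * sqrt(alpha * t),
# where lam solves  lam * exp(lam^2) * erf(lam) = St / sqrt(pi)
# and St = c_p * (T_wall - T_m) / L is the Stefan number.
def stefan_lambda(St, lo=1e-9, hi=5.0, iters=100):
    f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - St / math.sqrt(math.pi)
    for _ in range(iters):              # bisection on the transcendental root
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, St, alpha=1.0):
    return 2.0 * stefan_lambda(St) * math.sqrt(alpha * t)
```

A numerical phase-change solver can be verified by comparing its computed interface history against this square-root-of-time law.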

Relevance:

100.00%

Publisher:

Abstract:

Matrix converters convert a three-phase alternating-current power supply to a power supply of a different peak voltage and frequency, and are an emerging technology in a wide variety of applications. However, they are susceptible to an instability, whose behaviour is examined herein. The desired “steady-state” mode of operation of the matrix converter becomes unstable in a Hopf bifurcation as the output/input voltage transfer ratio, q, is increased through some threshold value, qc. Through weakly nonlinear analysis and direct numerical simulation of an averaged model, we show that this bifurcation is subcritical for typical parameter values, leading to hysteresis in the transition to the oscillatory state: there may thus be undesirable large-amplitude oscillations in the output voltages even when q is below the linear stability threshold value qc.
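The subcritical character and the resulting hysteresis can be illustrated with the standard radial normal form of a subcritical Hopf bifurcation; this is a generic model sketch, not the averaged matrix-converter equations themselves.

```python
# Radial normal form of a subcritical Hopf bifurcation:
#     dr/dt = mu*r + r**3 - r**5
# For -1/4 < mu < 0 the origin (the steady state) and a large-amplitude
# oscillation coexist, so below the linear threshold (mu = 0, analogous
# to q < q_c) a large disturbance can still trigger large oscillations.
def settle(r0, mu, dt=1e-3, steps=200_000):
    r = r0
    for _ in range(steps):              # forward Euler is adequate here
        r += dt * (mu * r + r**3 - r**5)
    return r

small = settle(0.10, mu=-0.1)   # below the unstable branch: decays to 0
large = settle(0.60, mu=-0.1)   # above it: jumps to the large-amplitude state
```

The two initial conditions end on different attractors at the same parameter value, which is exactly the bistability underlying the reported hysteresis.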

Relevance:

100.00%

Publisher:

Abstract:

A classical approach to handling two-stage and multistage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite its complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to follow the progress of the algorithm more closely. Numerical experiments on examples of multistage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although we have yet to test this method, we expect it to reduce certain numerical and theoretical difficulties of the progressive hedging method.
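The progressive hedging iteration discussed above can be sketched on a toy two-stage quadratic problem in which each scenario subproblem has a closed-form minimizer; the scenario data, probabilities and fixed penalty parameter below are illustrative, and this is not the thesis's adaptive strategy.

```python
import numpy as np

# Toy progressive hedging: scenario s has cost (x - a_s)^2 with
# probability p_s, so the optimal nonanticipative decision is the
# probability-weighted mean of the a_s. Each augmented-Lagrangian
# subproblem  min_x (x - a_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
# is solved in closed form.
a = np.array([1.0, 3.0, 8.0])        # scenario data (hypothetical)
p = np.array([0.5, 0.3, 0.2])        # scenario probabilities
rho = 1.0                            # fixed penalty parameter
w = np.zeros(3)                      # scenario multipliers
xbar = 0.0

for _ in range(200):
    x = (2 * a - w + rho * xbar) / (2 + rho)   # per-scenario minimizer
    xbar = p @ x                     # implementable (nonanticipative) point
    w += rho * (x - xbar)            # multiplier update
```

For this convex problem the iterates converge to the weighted mean of the scenario targets; the sensitivity to `rho` that motivates adaptive penalty strategies can be seen by re-running with very large or very small values.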

Relevance:

100.00%

Publisher:

Abstract:

Back-pressure on a diesel engine equipped with an aftertreatment system is a function of the pressure drop across the individual components of the aftertreatment system: typically a diesel oxidation catalyst (DOC), catalyzed particulate filter (CPF) and selective catalytic reduction (SCR) catalyst. Pressure drop across the CPF depends on the mass flow rate and temperature of the exhaust flowing through it, as well as on the mass of particulate matter (PM) retained in the substrate wall and in the cake layer that forms on the substrate wall. Therefore, in order to keep the back-pressure on the engine low and to minimize fuel consumption, it is important to control the PM mass retained in the CPF. Chemical reactions involving the oxidation of PM under passive oxidation and active regeneration conditions can be exploited, together with numerical models in the engine control unit (ECU), to control the pressure drop across the CPF. Hence, understanding and predicting the filtration and oxidation of PM in the CPF, and the effect of these processes on the pressure drop across it, are necessary for developing aftertreatment control strategies that reduce back-pressure on the engine and, in turn, fuel consumption, particularly from active regeneration. Numerical modeling of CPFs has been shown to reduce the development time and cost of production aftertreatment systems, as well as to facilitate understanding of the internal processes occurring under the different operating conditions to which the particulate filter is subjected. In this research, a numerical model of the CPF was developed and calibrated to data from passive oxidation and active regeneration experiments in order to determine the kinetic parameters for the oxidation of PM and nitrogen oxides, along with the model filtration parameters.
The research results include comparisons between the model and the experimental data for pressure drop, PM mass retained, filtration efficiencies, CPF outlet gas temperatures and species (NO2) concentrations out of the CPF. Comparisons of PM oxidation reaction rates obtained from the model calibration to the experimental data for ULSD and 10% and 20% biodiesel-blended fuels are presented.
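The dependence of CPF pressure drop on retained PM mass described above is commonly modeled with Darcy's law applied to the substrate wall and the growing cake layer. The sketch below uses that simplified textbook form with illustrative parameter values, not the calibrated model of this work.

```python
# Darcy-law contributions to particulate filter pressure drop:
# exhaust passing through the PM cake and the porous wall loses pressure
# in proportion to viscosity, through-wall velocity, and layer thickness
# over permeability. All parameter values are illustrative, in SI units.
def cpf_pressure_drop(m_pm, Q=0.05, A=2.0, mu=3e-5,
                      w_wall=4e-4, k_wall=1e-12,
                      rho_cake=100.0, k_cake=1e-14):
    """m_pm: retained PM mass forming the cake layer [kg];
    Q: volumetric exhaust flow [m^3/s]; A: filtration area [m^2]."""
    u = Q / A                               # wall-through velocity [m/s]
    w_cake = m_pm / (rho_cake * A)          # cake layer thickness [m]
    return mu * u * (w_wall / k_wall + w_cake / k_cake)   # Darcy [Pa]
```

The model reproduces the qualitative behavior that motivates regeneration control: pressure drop grows with retained PM mass, so oxidizing the cake brings back-pressure down.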

Relevance:

100.00%

Publisher:

Abstract:

The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. To this end, all the elements and components of the energy conversion system are modeled numerically and combined to obtain a behavioral model of the whole system. Previously proposed high frequency (HF) models of power converters are based on circuit models that only capture the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain physics-based models for power conversion systems that not only represent the steady-state behavior of the components but also predict their high frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition, and enables the accurate design of components such as effective EMI filters, switching algorithms and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best-matching topology with an inherent reduction of switching losses, which can be utilized to improve overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints.
This includes physical characteristics such as decreasing the size and weight of the package, optimizing interactions with neighboring components and achieving higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions, together with the design of attenuation measures and enclosures.
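One reason steady-state circuit models fail at high frequency, as noted above, is that real filter components have parasitics. A minimal sketch with hypothetical component values: a capacitor with equivalent series resistance (ESR) and inductance (ESL) stops behaving capacitively above its self-resonant frequency, which directly limits EMI filter performance.

```python
import math

# Series R-L-C model of a "real" capacitor: impedance magnitude
# |Z| = sqrt(ESR^2 + (w*ESL - 1/(w*C))^2). Component values are
# hypothetical, chosen only to illustrate the self-resonance effect.
C = 10e-6      # 10 uF nominal capacitance
ESL = 20e-9    # 20 nH parasitic series inductance
ESR = 5e-3     # 5 mOhm parasitic series resistance

def impedance(f):
    w = 2 * math.pi * f
    return math.hypot(ESR, w * ESL - 1.0 / (w * C))

f_res = 1.0 / (2 * math.pi * math.sqrt(ESL * C))   # self-resonant frequency
```

At `f_res` the impedance collapses to the ESR; above it the part is effectively an inductor, which is why a physics-based HF model predicts EMI behavior that an ideal-capacitor model cannot.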

Relevance:

100.00%

Publisher:

Abstract:

In the presented thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The work involved the development and implementation of numerical algorithms, data structures, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated, and the convergence and accuracy of the methodology were validated against analytical solutions. The research focused on fluid-structure interaction solutions in complex, mesh-resistant domains, as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting-edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows exact satisfaction of boundary conditions, making it possible to capture exactly the effects of the fluid field on the solid structure.
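The exact satisfaction of boundary conditions by the meshfree method with distance fields rests on a solution structure in which a distance-like function vanishes on the boundary. A one-dimensional sketch of that idea (illustrative, not the thesis's implementation):

```python
# Solution-structure (Kantorovich/Rvachev-style) ansatz on [0, 1]:
#     u(x) = g(x) + phi(x) * u_h(x)
# With phi vanishing exactly on the boundary, u matches the Dirichlet
# data g at the boundary for ANY free field u_h — the boundary condition
# is satisfied exactly, not approximately.
def phi(x):
    return x * (1.0 - x)          # distance-like field, zero at x=0 and x=1

def g(x):
    return 2.0 + 3.0 * x          # prescribed boundary data: g(0)=2, g(1)=5

def u(x, u_h):
    return g(x) + phi(x) * u_h(x)

# even a wildly oscillating free field cannot disturb the boundary values
vals = [u(x, lambda t: 1e6 * t**2 - 7.0) for x in (0.0, 1.0)]
```

In the full method the free field is the numerical approximation being solved for, and `phi` is an actual distance field to the domain boundary; the same structural property carries over.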

Relevance:

100.00%

Publisher:

Abstract:

Besides increasing the share of electric and hybrid vehicles, in order to comply with more stringent environmental protection limits, the auto industry must in the mid-term improve the efficiency of the internal combustion engine and the well-to-wheel efficiency of the employed fuel. Achieving this target requires a deeper knowledge of the phenomena that influence mixture formation and of the chemical reactions involving new synthetic fuel components, which is complex and time-intensive to acquire purely by experimentation. Numerical simulations therefore play an important role in this development process, but their use can be effective only if they are accurate enough to capture these variations. The models most relevant to simulating reacting mixture formation and the subsequent chemical reactions have been investigated in the present work with a critical approach, in order to identify the most suitable approaches for the industrial context, which is limited by time constraints and budget considerations. To overcome these limitations, new methodologies have been developed that combine detailed and simplified modelling techniques for the phenomena involving chemical reactions and mixture formation in non-traditional conditions (e.g. water injection, biofuels, etc.). Thanks to the extensive use of machine learning and deep learning algorithms, several applications have been revised or implemented, with the target of reducing the computing time of some traditional tasks by orders of magnitude. Finally, a complete workflow leveraging these new models has been defined and used to evaluate the effects of different surrogate formulations of the same experimental fuel on a proof-of-concept GDI engine model.
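The surrogate idea behind replacing expensive simulation tasks with learned models can be sketched with a simple regression example; the "expensive" model below is a stand-in analytic function, and the polynomial fit stands in for the machine learning and deep learning models actually used.

```python
import numpy as np

# Surrogate-modeling sketch: sample an expensive response a few times,
# fit a cheap regression to the samples, then evaluate the regression
# instead of the expensive model in downstream sweeps.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x**2        # pretend each call takes hours

x_train = np.linspace(0.0, 1.0, 20)
y_train = expensive_model(x_train)

# cubic polynomial surrogate fitted by least squares
coeffs = np.polyfit(x_train, y_train, deg=3)
surrogate = np.poly1d(coeffs)

# surrogate accuracy over the sampled design range
x_test = np.linspace(0.05, 0.95, 7)
max_err = float(np.max(np.abs(surrogate(x_test) - expensive_model(x_test))))
```

Once fitted, the surrogate is essentially free to evaluate, which is how orders-of-magnitude reductions in computing time are obtained for tasks that would otherwise call the full model repeatedly.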