22 results for methods: N-body simulations
Abstract:
Besides increasing the share of electric and hybrid vehicles, in order to comply with increasingly stringent environmental regulations the auto industry must, in the mid-term, improve the efficiency of the internal combustion engine and the well-to-wheel efficiency of the fuel it employs. Achieving this target demands a deeper knowledge of the phenomena that influence mixture formation and of the chemical reactions involving new synthetic fuel components, which is complex and time-intensive to acquire purely by experimentation. Numerical simulations therefore play an important role in this development process, but they are effective only if they are accurate enough to capture these variations. The present work critically investigates the most relevant models for simulating reacting mixture formation and the subsequent chemical reactions, in order to provide instruments for defining the most suitable approaches in an industrial context constrained by time and budget. To overcome these limitations, new methodologies have been developed that combine detailed and simplified modelling techniques for phenomena involving chemical reactions and mixture formation under non-traditional conditions (e.g. water injection, biofuels). Through extensive use of machine learning and deep learning algorithms, several applications have been revised or implemented with the goal of reducing the computing time of some traditional tasks by orders of magnitude. Finally, a complete workflow leveraging these new models has been defined and used to evaluate the effects of different surrogate formulations of the same experimental fuel on a proof-of-concept GDI engine model.
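The abstract names machine learning as a way to cut the computing time of traditional simulation tasks by orders of magnitude. The sketch below is a generic illustration of that idea, not the thesis workflow: a hypothetical expensive routine (here a placeholder function, slow_chemistry) is sampled once and its input-output map is learned by a neural-network regressor that can then be queried cheaply.

```python
# Generic illustration: train a neural-network surrogate for an
# expensive simulation step, then query it far faster than the
# original routine. `slow_chemistry` is a hypothetical stand-in
# for a costly call such as detailed kinetics evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def slow_chemistry(x):
    """Placeholder for an expensive computation (not from the thesis)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] ** 2

X = rng.uniform(0.0, 1.0, size=(2000, 3))   # sampled operating conditions
y = slow_chemistry(X)                        # costly ground truth, run once

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)
print("surrogate R^2 on held-out data:", surrogate.score(X_test, y_test))
```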
Abstract:
Astrocytes are the most numerous glial cell type in the mammalian brain; they permeate the entire CNS and interact with neurons, vasculature, and other glial cells. Astrocytes display intracellular calcium signals that encode information about local synaptic function, distributed network activity, and high-level cognitive functions. Several studies have investigated the calcium dynamics of astrocytes in sensory areas and have shown that these cells can encode sensory stimuli. Nevertheless, only recently has the neuroscientific community focused its attention on the role and functions of astrocytes in associative areas such as the hippocampus. In our first study, we used the information-theory formalism to show that astrocytes in the CA1 area of the hippocampus, recorded with 2-photon fluorescence microscopy during spatial navigation, encode spatial information that is complementary and synergistic to the information encoded by nearby "place cell" neurons. In our second study, we investigated computational aspects of applying the information-theory formalism to astrocytic calcium data: we generated realistic simulations of calcium signals in astrocytes to determine optimal hyperparameters and procedures for the information measures, and then applied them to real astrocytic calcium imaging data. Astrocytic calcium signals are characterized by complex spatiotemporal dynamics occurring in subcellular parcels of the astrocytic domain, which makes studying these cells in 2-photon calcium imaging recordings difficult, and current analytical tools for identifying astrocytic subcellular regions are time-consuming and rely extensively on user-defined parameters. Here, we present Rapid Astrocytic calcium Spatio-Temporal Analysis (RASTA), a novel machine learning algorithm for spatiotemporal semantic segmentation of 2-photon calcium imaging recordings of astrocytes which operates without human intervention. We found that RASTA provides fast and accurate identification of astrocytic somata, processes, and cellular domains, extracting calcium signals from identified regions of interest across individual cells and populations of hundreds of astrocytes recorded in awake mice.
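As a minimal sketch of the information-theoretic formalism the abstract refers to, the code below computes a plug-in estimate of the mutual information between a discretized calcium trace and the animal's spatial position. The data are synthetic and the binning scheme is arbitrary; this is an illustration of the quantity being estimated, not the thesis pipeline or its hyperparameters.

```python
# Plug-in estimate of I(calcium; position) in bits from the empirical
# joint distribution, on synthetic position-tuned calcium data.
import numpy as np

rng = np.random.default_rng(1)

position = rng.integers(0, 8, size=5000)            # discretized spatial bins
calcium = position + rng.normal(0, 2.0, 5000)       # noisy position-tuned signal
edges = np.quantile(calcium, np.linspace(0, 1, 9)[1:-1])
calcium_bins = np.digitize(calcium, edges)          # equal-occupancy bins

def mutual_information(x, y):
    """I(X;Y) in bits from the empirical joint histogram of x and y."""
    joint = np.histogram2d(x, y, bins=(len(np.unique(x)), len(np.unique(y))))[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)             # marginal of X
    py = pxy.sum(axis=0, keepdims=True)             # marginal of Y
    nz = pxy > 0                                    # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print(f"I(calcium; position) = {mutual_information(calcium_bins, position):.3f} bits")
```

In practice, plug-in estimators like this one are biased for limited data, which is one reason the thesis calibrates hyperparameters and procedures on realistic simulated signals before applying the measures to real recordings.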
Abstract:
The presence of multiple stellar populations in globular clusters (GCs) is now well accepted; however, very little is known about their origin. In this Thesis, I study how multiple populations formed and evolved by means of customized 3D numerical simulations, in light of the most recent spectroscopic and photometric observations of the Local and high-redshift Universe. Numerical simulations are the perfect tool to interpret these data: hydrodynamic simulations are suited to the early phases of GC formation, following the gas behavior in great detail, while N-body codes permit tracing the stellar component. First, we study the formation of second-generation stars in a rotating massive GC. We assume that second-generation stars form out of the ejecta of asymptotic giant branch stars (AGBs), diluted by external pristine gas. We find that, for low pristine gas density, stars formed mainly out of AGB ejecta rotate faster than stars formed out of more diluted gas, in qualitative agreement with current observations. Then, assuming a similar setup, we explored whether Type Ia supernovae affect second-generation star formation and chemical composition. We show that the evolution depends on the density of the infalling gas but that, in general, an iron spread develops, which may explain the spread observed in some massive GCs. Finally, we focused on the long-term evolution of a GC composed of two populations and orbiting in the Milky Way disk. We derived that, for an extended first population and a low-mass second one, the cluster loses almost 98 percent of its initial first-population mass, and the GC mass can be as much as 20 times lower after a Hubble time. Under these conditions, the derived fraction of second-population stars reproduces the observed value, which is one of the strongest constraints on GC mass loss.
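To convey what the N-body side of such simulations computes, here is a toy direct-summation integrator with softened gravity and a leapfrog scheme. It is a pedagogical sketch only: real GC models use dedicated, highly optimized N-body codes, and the particle numbers, units, and softening here are arbitrary.

```python
# Toy direct-summation N-body integrator (leapfrog, softened gravity).
import numpy as np

G, EPS = 1.0, 1e-2           # gravitational constant, softening length (toy units)

def accelerations(pos, mass):
    """Pairwise softened gravitational accelerations, O(N^2)."""
    dx = pos[None, :, :] - pos[:, None, :]            # (N, N, 3) separations
    r2 = (dx ** 2).sum(-1) + EPS ** 2                 # softened squared distance
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                     # exclude self-force
    return G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog(pos, vel, mass, dt, n_steps):
    """Kick-drift-kick leapfrog: time-reversible and symplectic."""
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc                         # half kick
        pos += dt * vel                               # full drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                         # half kick
    return pos, vel

rng = np.random.default_rng(2)
N = 256
pos = rng.normal(0.0, 1.0, (N, 3))                    # toy initial cluster
vel = rng.normal(0.0, 0.3, (N, 3))
mass = np.full(N, 1.0 / N)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, n_steps=100)
print("kinetic energy after 100 steps:", 0.5 * (mass * (vel ** 2).sum(-1)).sum())
```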
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics, and electronics are all key assets that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it hard to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), design is still a trial-and-error process. This becomes even more complex for advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities in both the experimental and computational realms, and scientists strive to adopt cutting-edge technologies to make progress. However, the field is strongly affected by unorganized file management and a proliferation of custom data formats and storage procedures, in experimental and computational research alike. Results are difficult to find, interpret, and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing heterogeneous materials data; automating the use of device simulations to train machine learning models; and dealing with scattered experimental data to discover new patterns.
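As a generic sketch of the feature-based workflow mentioned above (not the thesis's actual features or data), the snippet below turns each compound into a fixed-length descriptor vector and fits a regressor to predict a target property; the descriptor names and the synthetic property are placeholders.

```python
# Descriptor-based property prediction: each compound becomes a
# fixed-length feature vector; a regressor learns the property map.
# Descriptors and target values here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical descriptors per compound, e.g. molecular weight,
# polar surface area, ring count, dipole moment (illustrative only).
n_compounds = 500
features = rng.uniform(size=(n_compounds, 4))
property_values = (features @ np.array([0.8, -0.4, 0.2, 0.6])
                   + rng.normal(0, 0.05, n_compounds))   # synthetic target

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, property_values, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```

Once trained, such a model can screen candidate compounds in milliseconds, reserving expensive first-principles calculations for the most promising ones.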
Abstract:
In medicine, innovation depends on a better knowledge of the mechanisms of the human body, a complex system of multi-scale constituents. Unraveling the complexity underlying diseases proves challenging: a deep understanding of the inner workings requires dealing with much heterogeneous information. Exploring the molecular status and organization of genes, proteins, and metabolites provides insight into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of disease. This is why attention is now growing towards the joint exploitation of multi-scale information, and holistic methods are drawing interest to address the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene regulatory signatures in onco-hematology and showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. The third study demonstrated, as a proof of concept, the impact of network intelligibility when modeling heterogeneous data for atherosclerosis, which was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types into a unique latent heterogeneous representation that facilitated the selection of important data types for predicting the tumour stage of invasive ductal carcinoma. The results of these four studies lay the groundwork for easing the detection of new biomarkers, ultimately benefiting medical practice and the ever-growing field of Personalized Medicine.
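To give the flavor of latent-space integration across heterogeneous data types, here is a deliberately simple sketch: standardize each data block, rescale so no block dominates, concatenate, and project into one shared latent space. The thesis introduces a more sophisticated method; this only conveys the underlying idea, and all block names and data are placeholders.

```python
# Simple joint latent space across heterogeneous data types:
# per-block standardization and scaling, concatenation, then PCA.
# A stand-in for more sophisticated integration methods.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_patients = 200

blocks = {                                     # hypothetical data types
    "expression": rng.normal(size=(n_patients, 1000)),
    "methylation": rng.normal(size=(n_patients, 300)),
    "mutations": rng.integers(0, 2, size=(n_patients, 50)).astype(float),
}

# Divide each standardized block by sqrt(width) so larger blocks
# do not dominate the joint decomposition.
scaled = [StandardScaler().fit_transform(b) / np.sqrt(b.shape[1])
          for b in blocks.values()]

latent = PCA(n_components=10).fit_transform(np.hstack(scaled))
print("joint latent representation:", latent.shape)   # (200, 10)
```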
Abstract:
The cation chloride cotransporters (CCCs) are a vital family of ion transporters, with several members implicated in significant neurological disorders: conditions such as cerebrospinal fluid accumulation, epilepsy, Down's syndrome, Asperger's syndrome, and certain cancers have been attributed to various CCCs. This thesis investigates these pharmacological targets using advanced computational methodologies. I primarily employed GPU-accelerated all-atom molecular dynamics simulations, deep learning-based collective variables, enhanced sampling methods, and custom Python scripts for comprehensive simulation analyses. Our research predominantly centered on the KCC1 and NKCC1 transporters. For KCC1, I examined its equilibrium dynamics in the presence and absence of an inhibitor and assessed the functional implications of different ion loading states. For NKCC1, our work revealed its unique alternating-access mechanism, termed the rocking-bundle mechanism; I identified a previously unobserved occluded state, demonstrated the transporter's potential for water permeability under specific conditions, and confirmed actual water flow through its permeable states. In essence, this thesis leverages cutting-edge computational techniques to deepen our understanding of the CCCs, a family of ion transporters with profound clinical significance.
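One analysis named above is confirming water flow through the transporter's permeable states. A common way to quantify this in MD trajectories is to count full permeation events, i.e. waters that cross the channel region from one bulk side to the other. The sketch below implements that counting logic on synthetic z-coordinate traces; in a real analysis z(t) would come from the simulation trajectory, and the region boundaries are arbitrary placeholders.

```python
# Count water permeation events: a water permeates when it crosses
# the channel region [Z_LOW, Z_HIGH] from one bulk side to the other.
# Synthetic random-walk trajectories stand in for real MD frames.
import numpy as np

Z_LOW, Z_HIGH = -15.0, 15.0       # channel region boundaries (placeholder, angstrom)

def count_permeations(z):
    """Count full crossings of the channel region for one water, given z(t)."""
    # Label each frame: -1 below the channel, 0 inside, +1 above.
    side = np.where(z < Z_LOW, -1, np.where(z > Z_HIGH, 1, 0))
    events, last_bulk = 0, 0
    for s in side:
        if s != 0:                            # water is in bulk
            if last_bulk != 0 and s != last_bulk:
                events += 1                   # emerged on the opposite side
            last_bulk = s
    return events

rng = np.random.default_rng(5)
z_traj = np.cumsum(rng.normal(0, 1.5, size=(100, 5000)), axis=1)  # 100 waters
total = sum(count_permeations(z) for z in z_traj)
print("permeation events counted:", total)
```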
Abstract:
Protected crop production is a modern and innovative approach to cultivating plants in a controlled environment to optimize growth, yield, and quality. The method uses structures such as greenhouses or tunnels to create a sheltered environment, with careful regulation of variables like temperature, humidity, light, and ventilation, which together create an optimal microclimate for plant growth. Heating, cooling, and ventilation systems maintain optimal growing conditions regardless of external weather fluctuations. Protected crop production plays a crucial role in addressing the challenges posed by climate variability, population growth, and food security. Similarly, animal husbandry involves providing adequate nutrition, housing, medical care, and environmental conditions to ensure animal welfare. Sustainability is a critical consideration in all forms of agriculture, including protected crop and animal production; in animal production it means producing animal products in a way that minimizes negative environmental impacts, promotes animal welfare, and ensures the long-term viability of the industry. The research activities performed during this PhD therefore fall within the field of Precision Agriculture and Livestock Farming. The focus is on computational fluid dynamics (CFD) and environmental assessment applied to improve yield, resource efficiency, environmental sustainability, and cost savings, representing a significant shift from traditional farming methods to a more technology-driven, data-driven, and environmentally conscious approach to crop and animal production. On one side, CFD, a powerful and precise technique for computer modeling and simulation of airflows and thermo-hygrometric parameters, has been applied to optimize the growth environment of crops and the efficiency of ventilation in pig barns. On the other side, the sustainability aspect has been investigated through Life Cycle Assessment analyses.
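To convey the kind of field equation CFD packages solve for such environments, here is a toy finite-difference solver for a steady temperature field on a 2D cross-section. It is only a pedagogical sketch: real greenhouse and barn studies solve the full Navier-Stokes equations with turbulence models in dedicated CFD software, and the boundary temperatures below are arbitrary.

```python
# Toy illustration of the field equations behind CFD: solve the steady
# heat equation (Laplace) on a 2D cross-section by Jacobi relaxation.
import numpy as np

nx, ny = 60, 40
T = np.zeros((ny, nx))
T[0, :] = 30.0               # warm roof boundary (deg C, placeholder)
T[-1, :] = 15.0              # cool floor boundary
T[:, 0] = T[:, -1] = 20.0    # side walls / vents

for _ in range(5000):        # iterate until the interior field relaxes
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                            T[1:-1, :-2] + T[1:-1, 2:])

print("interior temperature range: "
      f"{T[1:-1, 1:-1].min():.1f} to {T[1:-1, 1:-1].max():.1f} deg C")
```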