991 results for Physics simulation
Abstract:
One of the core tasks of the virtual-manufacturing environment is to characterise the transformation of the state of material during each of the unit processes. This transformation in shape, material properties, etc. can only be reliably achieved through the use of models in a simulation context. Unfortunately, many manufacturing processes involve the material being treated in both the liquid and solid state, the transformation between which may be achieved by heat transfer and/or electromagnetic fields. The computational modelling of such processes, involving interactions among multiple phenomena, is a considerable challenge. However, it must be addressed effectively if Virtual Manufacturing Environments are to become a reality. This contribution focuses upon one attempt to develop such a multi-physics computational toolkit. The approach uses a single discretisation procedure and provides for direct interaction amongst the component phenomena. The need to exploit parallel high-performance hardware is addressed so that simulation elapsed times can be brought within the realms of practicality. Examples of multi-physics modelling in relation to shape casting and solder-joint formation reinforce the motivation for this work.
Abstract:
This paper describes a new 2D model for the photospheric evolution of the magnetic carpet. It is the first in a series of papers working towards constructing a realistic 3D non-potential model for the interaction of small-scale solar magnetic fields. In the model, the basic evolution of the magnetic elements is governed by a supergranular flow profile. In addition, magnetic elements may evolve through the processes of emergence, cancellation, coalescence and fragmentation. Model parameters for the emergence of bipoles are based upon the results of observational studies. Using this model, several simulations are considered, where the range of flux with which bipoles may emerge is varied. In all cases the model quickly reaches a steady state where the rates of emergence and cancellation balance. Analysis of the resulting magnetic field shows that we reproduce observed quantities such as the flux distribution, mean field, cancellation rates, photospheric recycle time and a magnetic network. As expected, the simulation matches observations more closely when a larger, and consequently more realistic, range of emerging flux values is allowed (4×10^16 - 10^19 Mx). The model best reproduces the current observed properties of the magnetic carpet when we take the minimum absolute flux for emerging bipoles to be 4×10^16 Mx. In future, this 2D model will be used as an evolving photospheric boundary condition for 3D non-potential modeling.
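A toy sketch of two of the element processes above, emergence with a log-uniform flux draw and cancellation of nearby opposite-polarity elements, might look as follows; the function names, the 1D geometry, the log-uniform distribution and the distance criterion are illustrative assumptions, not the model's actual rules:

```python
import math
import random

def emerge_flux(rng, fmin=4e16, fmax=1e19):
    # Draw an absolute flux (Mx) for an emerging bipole from the range quoted
    # in the abstract; the log-uniform distribution is an assumption.
    return math.exp(rng.uniform(math.log(fmin), math.log(fmax)))

def cancel_pairs(elements, d_cancel):
    # Cancellation: opposite-polarity elements closer than d_cancel annihilate
    # equal amounts of flux; elements reduced to zero flux are removed.
    out = [list(e) for e in elements]  # each element: [position, signed flux]
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            (xi, fi), (xj, fj) = out[i], out[j]
            if fi * fj < 0 and abs(xi - xj) < d_cancel:
                c = min(abs(fi), abs(fj))
                out[i][1] -= math.copysign(c, fi)
                out[j][1] -= math.copysign(c, fj)
    return [e for e in out if e[1] != 0.0]
```

In the full model these processes compete with coalescence, fragmentation and supergranular advection until the emergence and cancellation rates balance.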
Abstract:
The protein folding problem has been one of the most challenging subjects in biological physics due to its complexity. Energy landscape theory based on statistical mechanics provides a thermodynamic interpretation of the protein folding process. We have been working to answer fundamental questions about protein-protein and protein-water interactions, which are very important for describing the energy landscape surface of proteins correctly. First, we present a new method for computing protein-protein interaction potentials of solvated proteins directly from SAXS data. An ensemble of proteins was modeled by Metropolis Monte Carlo and Molecular Dynamics simulations, and the global X-ray scattering of the whole model ensemble was computed at each snapshot of the simulation. The interaction potential model was optimized iteratively by a Levenberg-Marquardt algorithm. Second, we report that terahertz spectroscopy directly probes hydration dynamics around proteins and determines the size of the dynamical hydration shell. We also present the sequence and pH dependence of the hydration shell and the effect of hydrophobicity. In addition, kinetic terahertz absorption (KITA) spectroscopy is introduced to study the refolding kinetics of ubiquitin and its mutants. KITA results are compared to small-angle X-ray scattering, tryptophan fluorescence, and circular dichroism results. We propose that KITA monitors the rearrangement of hydrogen bonding during secondary structure formation. Finally, we present the development of the automated single molecule operating system (ASMOS) for a high-throughput single molecule detector, which levitates a single protein molecule in a 10 µm diameter droplet by laser guidance. I have also performed supporting calculations and simulations with my own program codes.
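The Levenberg-Marquardt optimisation mentioned above can be illustrated with a minimal hand-rolled loop; the `guinier` model, the finite-difference Jacobian and all parameter values are assumptions for the sketch, not the actual SAXS fitting machinery used in the thesis:

```python
import numpy as np

def lm_fit(f, p0, q, y, n_iter=50, lam=1e-3):
    # Minimal Levenberg-Marquardt loop with a central-difference Jacobian.
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y - f(q, p)                      # residuals at current parameters
        J = np.empty((q.size, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * max(1.0, abs(p[k]))
            J[:, k] = (f(q, p + dp) - f(q, p - dp)) / (2 * dp[k])
        A = J.T @ J + lam * np.eye(p.size)   # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p + step
        if np.sum((y - f(q, p_new)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5        # accept: move toward Gauss-Newton
        else:
            lam *= 10.0                      # reject: move toward gradient descent
    return p

def guinier(q, p):
    # Illustrative two-parameter scattering model, I(q) = a * exp(-b q^2).
    a, b = p
    return a * np.exp(-b * q ** 2)
```

The damping parameter interpolates between Gauss-Newton steps (small λ) and short gradient-descent steps (large λ), which is what makes the iteration robust far from the optimum.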
Abstract:
Digital rock physics combines modern imaging with advanced numerical simulations to analyze the physical properties of rocks. In this paper we suggest a special segmentation procedure, which is applied to a carbonate rock from Switzerland. The starting point is a CT scan of a specimen of Hauptmuschelkalk. The first step applied to the raw image data is a non-local means filter. We then apply different thresholds to identify pore and solid phases. Because we are aware of a non-negligible amount of unresolved microporosity, we also define intermediate phases. Based on this segmentation, we determine porosity-dependent values for the p-wave velocity and for the permeability. The porosity measured in the laboratory is then used to compare our numerical data with experimental data, and we observe good agreement. Future work includes an analytic validation of the numerical results for the p-wave velocity against an upper bound, employing different filters for the image segmentation, and using data with higher resolution.
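The multi-threshold segmentation step could be sketched as follows; the threshold values and the assumed microporosity fraction of the intermediate phase are placeholders, not the values used for the Hauptmuschelkalk specimen:

```python
import numpy as np

def segment(ct, t_pore, t_solid):
    # Three-phase segmentation of (filtered) CT grey values:
    # below t_pore -> pore (0); at or above t_solid -> solid (2);
    # in between -> intermediate phase carrying unresolved microporosity (1).
    labels = np.full(ct.shape, 1, dtype=np.uint8)
    labels[ct < t_pore] = 0
    labels[ct >= t_solid] = 2
    return labels

def porosity(labels, micro_phi=0.5):
    # Total porosity = resolved pore fraction plus an assumed microporosity
    # fraction micro_phi inside the intermediate phase.
    n = labels.size
    return (np.sum(labels == 0) + micro_phi * np.sum(labels == 1)) / n
```

The intermediate phase is what lets the laboratory porosity be matched even though individual micropores are below the image resolution.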
Abstract:
A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: The scale of individual atoms, the scale of collections of atoms forming crystal defects, and macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, which is a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of links between models capturing different scales, and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. 
The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
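As a minimal illustration of the atomistic end of this hierarchy, a 1D solid-on-solid surface can be evolved with Metropolis dynamics; the energy (proportional to the summed nearest-neighbour height differences) and the single-column move set are a deliberately simplified stand-in for the 1+1-dimensional models treated in the thesis:

```python
import math
import random

def sos_sweep(h, beta, rng):
    # One Metropolis sweep of a periodic 1D solid-on-solid surface.
    # Energy = sum over bonds of |h_i - h_{i+1}|; beta is inverse temperature.
    n = len(h)
    for _ in range(n):
        i = rng.randrange(n)
        d = rng.choice((-1, 1))  # propose moving one column up or down
        left, right = h[(i - 1) % n], h[(i + 1) % n]
        dE = (abs(h[i] + d - left) + abs(h[i] + d - right)
              - abs(h[i] - left) - abs(h[i] - right))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            h[i] += d
    return h
```

Long-lived steps between flat terraces are exactly the mesoscale defects whose collective motion a BCF-type step-flow description coarse-grains.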
Abstract:
Statistically stationary and homogeneous shear turbulence (SS-HST) is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and compact finite differences in the direction of the shear. No remeshing is used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are “minimal” in the sense of containing on average only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the two other box dimensions should be sufficiently large (Lx ≳ 2Lz, Ly ≳ Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx ≳ 2Ly, couple with the passing period of the shear-periodic boundary condition and develop strong unphysical linearized bursts. Within those limits, the flow shows interesting similarities and differences with other shear flows, and in particular with the logarithmic layer of wall-bounded turbulence, which are explored in some detail. They include a self-sustaining process for large-scale streaks and quasi-periodic bursting. The bursting time scale is approximately universal, ∼20S⁻¹, and the availability of two different bursting systems allows the growth of the bursts to be related with some confidence to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the proper computational parameters, is a very promising system to study shear turbulence in general.
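The box-geometry constraints identified above can be collected into a simple sanity check; treating the “≳” relations as hard inequalities is of course a simplification:

```python
def box_ok(Lx, Ly, Lz):
    # Lz sets the turbulence scales; the other two dimensions must not
    # constrain the flow (Lx >= 2*Lz, Ly >= Lz), while overly long boxes
    # (Lx > 2*Ly) couple with the passing period of the shear-periodic
    # boundary condition and develop unphysical linearized bursts.
    return Lx >= 2 * Lz and Ly >= Lz and Lx <= 2 * Ly
```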
Abstract:
Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model, including all the details of the component, was required. Therefore, advances in EMC modeling were studied, classifying analytical and numerical models. The selected approach was finite element (FE) modeling, coupled with the distributed network method, to generate the model of the converter's components and obtain the frequency behavioral model of the converter. The method has the ability to reveal the behavior of parasitic elements and higher resonances, which have a critical impact in studying EMI problems. For the EMC and signature studies of the machine drives, equivalent source modeling was studied. Considering the details of the multi-machine environment, including actual models, some innovations in equivalent source modeling were made to decrease the simulation time dramatically. Several models were designed in this study, and the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used as the optimization process. Superposition and suppression of the fields in coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model. All tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. The identification was implemented using an ANN for seventy different fault cases.
The simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of components, as well as the faulty components, by comparing the amplitudes of their stray-field harmonics. Identification using the stray fields is nondestructive and can be used for setups that cannot be taken offline and dismantled.
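The optimisation step can be illustrated with a plain particle-swarm loop; the thesis uses a GA-based PSO hybrid, whose genetic operators are omitted here, and all parameter values below are generic defaults rather than the ones used in the study:

```python
import random

def pso(f, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    # Plain particle-swarm minimisation of f over box bounds [(lo, hi), ...].
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]            # per-particle best positions
    pval = [f(x) for x in xs]
    gval = min(pval)
    g = list(pbest[pval.index(gval)])        # global best position
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = list(xs[i]), v
                if v < gval:
                    g, gval = list(xs[i]), v
    return g, gval
```

Each particle is pulled toward its own best point and the swarm's best point; a GA-based variant would additionally apply crossover/mutation to the swarm between iterations.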
Abstract:
The time-dependent CP asymmetries of the $B^0\to\pi^+\pi^-$ and $B^0_s\to K^+K^-$ decays and the time-integrated CP asymmetries of the $B^0\to K^+\pi^-$ and $B^0_s\to\pi^+K^-$ decays are measured, using the $pp$ collision data collected with the LHCb detector and corresponding to the full Run 2. The results are compatible with previous determinations of these quantities from LHCb, except for the CP-violation parameters of the $B^0_s\to K^+K^-$ decays, which show a discrepancy exceeding 3 standard deviations between different data-taking periods. The investigations being conducted to understand the discrepancy are documented. The measurement of the CKM matrix element $|V_{cb}|$ using $B^0_{s}\to D^{(*)-}_s\mu^+ \nu_\mu$ decays is also reported, using the $pp$ collision data collected with the LHCb detector and corresponding to the full Run 1. The measurement leads to $|V_{cb}| = (41.4\pm0.6\pm0.9\pm1.2)\times 10^{-3}$, where the first uncertainty is statistical, the second is systematic, and the third is due to external inputs. This measurement is compatible with the world averages and constitutes the first measurement of $|V_{cb}|$ at a hadron collider and the first ever with decays of the $B^0_s$ meson. The analysis also provides the very first measurements of the branching ratio and form-factor parameters of the signal decay modes. The study of the characteristics governing the response of an electromagnetic calorimeter (ECAL) required to operate profitably in the high-luminosity regime foreseen for the Upgrade 2 of LHCb is reported in the final part of this thesis. A fast and flexible simulation framework is developed for this purpose. The physics performance of different configurations of the ECAL is evaluated using samples of fully simulated $B^0\to \pi^+\pi^-\pi^0$ and $B^0\to K^{*0}e^+e^-$ decays. The results are used to guide the development of the future ECAL and are reported in the Framework Technical Design Report of the LHCb Upgrade 2 detector.
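For reference, the time-dependent CP asymmetry measured in such decays is conventionally written as follows (standard phenomenological form, not quoted from this thesis; $C$ and $S$ are the CP-violation parameters, and $\Delta m$, $\Delta\Gamma$ are the mass and decay-width differences of the $B^0_{(s)}$ mass eigenstates):

```latex
A_{CP}(t)
  = \frac{\Gamma(\bar{B}{}^0_{(s)}(t)\to f) - \Gamma(B^0_{(s)}(t)\to f)}
         {\Gamma(\bar{B}{}^0_{(s)}(t)\to f) + \Gamma(B^0_{(s)}(t)\to f)}
  = \frac{-C\cos(\Delta m\, t) + S\sin(\Delta m\, t)}
         {\cosh(\Delta\Gamma\, t/2) + A^{\Delta\Gamma}\sinh(\Delta\Gamma\, t/2)}
```

For $B^0$ decays $\Delta\Gamma \approx 0$ and the denominator reduces to unity, leaving the familiar $S\sin(\Delta m\, t) - C\cos(\Delta m\, t)$ oscillation.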
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources which efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the hardware of the detectors and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and consequently with the huge amount of data that will be produced. In recent years, the role of Artificial Intelligence has become prominent in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project regarding an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated by adding new features to the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contribution made with this work.
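The model-agnostic pipeline idea can be sketched in a few lines; the class and method names are illustrative and are not the MLaaS4HEP API, with the injected reader, preprocessor and model standing in for the ROOT reading and training machinery of the real service:

```python
class MLPipeline:
    # Minimal sketch of a model-agnostic pipeline: the reader, preprocessor
    # and model are injected, so the pipeline never depends on a specific
    # ML framework or file format.
    def __init__(self, reader, preprocess, model):
        self.reader, self.preprocess, self.model = reader, preprocess, model

    def train(self, source):
        X, y = self.preprocess(self.reader(source))
        self.model.fit(X, y)
        return self

    def serve(self, source):
        X, _ = self.preprocess(self.reader(source))
        return self.model.predict(X)

class MeanModel:
    # Trivial stand-in model: predicts the mean training label for every event.
    def fit(self, X, y):
        self.mean = sum(y) / len(y)

    def predict(self, X):
        return [self.mean] * len(X)
```

Because every stage is injected, swapping in a ROOT-file reader or a different ML backend changes no pipeline code, which is the essence of the "as a Service", model-agnostic design.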
Abstract:
Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10^6 nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy ETH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give similar SB yields to the definition based on energy deposit when ETH ≈ 10.79 eV, but deviate significantly for higher ETH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and their implementations.
The authors show that, for the four studied models, yields differ by up to 54% for SSBs and up to 32% for DSBs, depending on the incident electron energy and on the models being compared. MCTS simulations allow comparison of the direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms such as: the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
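The sensitivity of SSB/DSB classification to its definitions is easy to demonstrate with a toy clusterer; the (strand, position) representation, the single distance criterion and the pairing bookkeeping are simplifications of the algorithms compared in the paper:

```python
def classify_breaks(breaks, max_sep=10):
    # Toy clustering of strand breaks into SSBs and DSBs: two SBs on
    # opposite strands within max_sep base pairs count as one DSB;
    # every unpaired SB counts as an SSB.
    breaks = sorted(breaks, key=lambda b: b[1])  # each break: (strand, position)
    used = [False] * len(breaks)
    ssb = dsb = 0
    for i, (si, pi) in enumerate(breaks):
        if used[i]:
            continue
        for j in range(i + 1, len(breaks)):
            sj, pj = breaks[j]
            if pj - pi > max_sep:
                break                       # sorted: no closer partner exists
            if not used[j] and sj != si:
                used[i] = used[j] = True
                dsb += 1
                break
        if not used[i]:
            used[i] = True
            ssb += 1
    return ssb, dsb
```

Changing `max_sep` (commonly taken as 10 base pairs) directly shifts breaks between the SSB and DSB counts, which is exactly why the paper insists the clustering rules be reported explicitly.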
Abstract:
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo (MC) simulation in the energy range from 5 to 160 keV, using the PENELOPE code. The response calculations included the carrier transport features and the detector resolution. The computed energy response function was validated through comparison with experimental results obtained with ²⁴¹Am and ¹⁵²Eu sources. In order to investigate the influence of the correction by the detector response in the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe detector exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 80 keV, the contribution of the escape of Cd and Te K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are the detector efficiency and the carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by a theoretical model from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for obtaining more accurate spectra from which quality parameters (i.e., half-value layer and homogeneity coefficient) can be determined.
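A stripping correction of the kind referred to above can be sketched for a discretised spectrum; the normalisation convention and the tiny response matrix in the test are illustrative, not the measured CdTe response:

```python
def strip(measured, response):
    # Stripping sketch. response[j] lists, over all channels, the detector
    # response to a monoenergetic photon belonging in channel j, with
    # response[j][j] the full-energy-peak fraction. Working from the highest
    # channel down, each channel's tail (escape/trapping counts spread into
    # lower channels) is subtracted before the peak fraction is divided out.
    n = len(measured)
    s = list(measured)
    true = [0.0] * n
    for j in range(n - 1, -1, -1):
        true[j] = s[j] / response[j][j]
        for i in range(j):
            s[i] -= true[j] * response[j][i]
    return true
```

Processing channels from high to low energy works because, for a detector, counts can only be displaced downward in energy, so the highest channel is already tail-free.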
Abstract:
The purpose of this study was to evaluate the influence of intrapulpal pressure simulation on the bonding effectiveness of etch-and-rinse and self-etch adhesives to dentin. Eighty sound human molars were distributed into eight groups according to the permeability level of each sample, measured with an apparatus that assesses hydraulic conductance (Lp); thus, a similar mean permeability was achieved in each group. Three etch-and-rinse adhesives (Prime & Bond NT - PB, Single Bond - SB, and Excite - EX) and one self-etch system (Clearfil SE Bond - SE) were employed, varying the presence or absence of an intrapulpal pressure (IPP) simulation of 15 cmH2O. After the adhesive and restorative procedures were carried out, the samples were stored in distilled water for 24 hours at 37°C and then subjected to tensile bond strength (TBS) testing. Fracture analysis was performed using a light microscope at 40× magnification. The data, obtained in MPa, were submitted to the Kruskal-Wallis test (α = 0.05). The results revealed that the TBS of SB and EX was significantly reduced under IPP simulation, differing from the TBS of PB and SE. Moreover, SE obtained the highest bond strength values in the presence of IPP. It can be concluded that IPP simulation can influence the bond strength of certain adhesive systems to dentin and should be considered when in vitro studies are conducted.
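The Kruskal-Wallis statistic used for the TBS comparison can be computed by hand; this sketch (with midranks for ties but no tie-correction factor, and illustrative data in the test) returns the H statistic, which would then be compared with a chi-square critical value at α = 0.05 with k−1 degrees of freedom:

```python
def kruskal_h(*groups):
    # Kruskal-Wallis H statistic: rank all observations jointly (midranks
    # for ties), then H = 12/(N(N+1)) * sum_g R_g^2/n_g - 3(N+1),
    # where R_g is the rank sum of group g and N the total sample size.
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    rank_sum = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1                       # run of tied values at positions i..j-1
        avg = (i + 1 + j) / 2            # midrank of ranks i+1 .. j
        for k in range(i, j):
            rank_sum[data[k][1]] += avg
        i = j
    return (12.0 / (n * (n + 1))
            * sum(rank_sum[g] ** 2 / len(groups[g]) for g in range(len(groups)))
            - 3 * (n + 1))
```

Under the null hypothesis of equal group distributions, H follows approximately a chi-square distribution with k−1 degrees of freedom, which is what the α = 0.05 decision in the study is based on.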