984 results for Simulation Experiment


Relevance:

30.00%

Abstract:

Ultrafine-grained materials produced by severe plastic deformation methods possess attractive mechanical properties, such as high strength compared with traditional coarse-grained counterparts, together with reasonable ductility. Among existing severe plastic deformation methods, Equal Channel Angular Pressing (ECAP) is the most promising for future industrial applications and can produce a variety of ultrafine-grained microstructures in materials, depending on the route, temperature, and number of passes during processing. Driven by the rising trend towards miniaturisation of parts, these materials are promising candidates for microforming processes. Since bi-axial deformation of sheet (foil) is the major operation in microforming, the influence of the number of ECAP passes on bi-axial ductility in a micro deep drawing test has been examined by experiments and FE simulation in this study. The experiments showed that a higher force was required to draw the samples processed by ECAP than the coarse-grained material. The limiting drawing ratio of the ultrafine-grained samples remained in the range 1.9–2.0 as the number of ECAP passes increased from 1 to 16, while a higher value of 2.2 was obtained for coarse-grained copper. Thus the notable decrease in tensile ductility that accompanies the increase in strength was much less pronounced for bi-axial ductility. FE simulation using a standard isotropic hardening model and the von Mises yield criterion confirmed these findings.
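
For reference, the limiting drawing ratio quoted above is the standard measure of drawability in deep drawing: $\mathrm{LDR} = D_{\max}/d_p$, where $D_{\max}$ is the largest blank diameter that can be drawn without failure and $d_p$ is the punch diameter, so a drop from 2.2 to 1.9–2.0 means a smaller maximum blank for the same punch.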

Relevance:

30.00%

Abstract:

It is well known that gas–solid systems play a significant role in many industrial processes. Such processes are physically and chemically complex, generally involving heat transfer, mass transfer, species diffusion, and chemical reactions. In this paper, the reaction of methane with air at a low air factor and the gas flow in a fluidized bed with 0.1 mm solid particles are simulated computationally to study the effect of the inert particles on species diffusion and on the chemical reactions. The reaction of methane and air is modeled by a two-step reaction mechanism, giving a continuous fluid phase composed of six gases (CH4, CO, O2, CO2, H2O, and N2) and discrete solid particles in the reactor. The simulation results are compared with experiment and show that the finite-rate model and the eddy dissipation model describe the gas reactions in high-density gas–solid systems well. The distribution of each gas and the particle behavior are analyzed for incomplete combustion at different concentrations of loaded solid particles. The inert particles change the reactions by enhancing both the chemical kinetics and the species diffusion dynamics.
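
As an illustration of how the two rate closures named above are typically combined in CFD practice, the sketch below takes the effective reaction rate as the slower of an Arrhenius (finite-rate) term and a turbulence-mixing (eddy dissipation) term; all rate constants and model parameters here are illustrative assumptions, not values from the paper.

    import numpy as np

    R_GAS = 8.314  # universal gas constant, J/(mol K)

    def arrhenius_rate(T, c_fuel, c_ox, A=2.0e11, Ea=2.0e5):
        # Finite-rate (chemistry-limited) volumetric rate; A and Ea are illustrative.
        return A * np.exp(-Ea / (R_GAS * T)) * c_fuel * c_ox

    def eddy_dissipation_rate(rho, eps, k, Y_fuel, Y_ox, s, A_edm=4.0):
        # Mixing-limited rate, proportional to the turbulence frequency eps/k;
        # s is the stoichiometric oxidizer-to-fuel mass ratio.
        return A_edm * rho * (eps / k) * min(Y_fuel, Y_ox / s)

    def effective_rate(T, rho, eps, k, c_fuel, c_ox, Y_fuel, Y_ox, s):
        # The slower of chemistry and mixing controls the overall conversion.
        return min(arrhenius_rate(T, c_fuel, c_ox),
                   eddy_dissipation_rate(rho, eps, k, Y_fuel, Y_ox, s))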

Relevance:

30.00%

Abstract:

The understanding of cell manipulation, for example in microinjection, requires an accurate model of the cells. Motivated by this requirement, a 3D particle-based mechanical model is derived for simulating the deformation of the fish egg membrane and the corresponding cellular forces during microrobotic cell injection. The model is formulated from the kinematics and dynamics of a spring–damper configuration with multi-particle joints, accounting for the cell's viscoelastic fluidic properties. It simulates the indentation force feedback as well as the visual deformation of the cell during microinjection. A preliminary simulation study is conducted with different parameter configurations. The results indicate that the proposed particle-based model provides deformation profiles similar to those observed in a real microinjection experiment on the zebrafish embryo published in the literature. As a generic modelling approach is adopted, the proposed model also has potential in other types of manipulation, such as micropipette cell aspiration.
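
A minimal sketch of the basic element of such a particle-based model is given below: the force on one particle from a spring–damper link to a neighbour. The stiffness, damping, and rest-length values are illustrative assumptions, not the parameters identified in the study.

    import numpy as np

    def spring_damper_force(x_i, x_j, v_i, v_j, k=0.5, c=0.05, rest_len=1.0):
        # Kelvin-Voigt link: elastic spring in parallel with a viscous damper.
        d = x_j - x_i
        dist = np.linalg.norm(d)
        n = d / dist                      # unit vector from particle i to j
        stretch = dist - rest_len         # elongation relative to rest length
        rel_vel = np.dot(v_j - v_i, n)    # closing speed along the link
        return (k * stretch + c * rel_vel) * n   # force acting on particle i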

Relevance:

30.00%

Abstract:

Simulation is a very powerful tool for developing more efficient systems, and hence it has been widely used with the goal of improving productivity. Compared with other methods, its results are not always optimal; however, if the experiment is properly designed, the results represent the real situation and can be used with a good level of reliability. This work used simulation (through the ProModel® software) to study, understand, model, and improve the expenditure system of an enterprise, with the premise of keeping the production–delivery flow quick, controlled, and reliable.

Relevance:

30.00%

Abstract:

Polyampholyte copolymers containing both positive and negative monomers regularly dispersed along the chain were studied. The Monte Carlo method was used to simulate chains whose charged monomers interact through a screened Coulomb potential. The neutral polyampholyte chains collapse owing to the attractive electrostatic interactions. The non-neutral chains adopt extended conformations because the repulsive polyelectrolyte effects dominate the attractive polyampholyte interactions. The results are in good agreement with experiment.
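
A minimal sketch of the pair interaction behind such a simulation, assuming a Debye–Hückel (screened Coulomb) form in reduced units; the Bjerrum length and screening parameter below are illustrative, not the values used in the study.

    import numpy as np

    def pair_energy(r, q_i, q_j, lB=1.0, kappa=0.5):
        # u(r)/kT = lB * q_i * q_j * exp(-kappa * r) / r  (reduced units)
        return lB * q_i * q_j * np.exp(-kappa * r) / r

    def chain_energy(positions, charges, lB=1.0, kappa=0.5):
        # Total electrostatic energy (in kT) summed over all monomer pairs.
        E = 0.0
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                r = np.linalg.norm(positions[i] - positions[j])
                E += pair_energy(r, charges[i], charges[j], lB, kappa)
        return E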

Relevance:

30.00%

Abstract:

A methodology for the identification and characterization of coherent structures, commonly known as clusters, is applied to hydrodynamic results of a numerical simulation of the riser of a circulating fluidized bed. The simulation is performed with the MICEFLOW code, which includes IIT's two-fluid hydrodynamic model B. The cluster characterization is based on the determination of four characteristics: average lifetime, average volumetric solid fraction, existence time fraction, and frequency of occurrence. Clusters are identified by applying a criterion related to the time-averaged volumetric solid fraction. A qualitative rather than quantitative analysis is performed, mainly owing to the unavailability of the operational data used in the experiments considered. Qualitatively, the simulation results are in good agreement with the literature. Some quantitative comparisons between predictions and experiment are also presented to emphasize the capability of the modeling procedure for the analysis of macroscopic-scale coherent structures.
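
As a sketch of how such a criterion can be applied to a simulated time series of local solid fraction, the snippet below flags clusters where the instantaneous value exceeds a level tied to the time average, then extracts the four characteristics listed above. The threshold form and its factor are illustrative assumptions, not the exact criterion of the paper.

    import numpy as np

    def cluster_statistics(eps_s, dt, n_sigma=2.0):
        # Threshold tied to the time-averaged solid fraction (illustrative form).
        threshold = eps_s.mean() + n_sigma * eps_s.std()
        in_cluster = eps_s > threshold
        # Locate contiguous runs of samples above the threshold.
        edges = np.diff(in_cluster.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if in_cluster[0]:
            starts = np.insert(starts, 0, 0)
        if in_cluster[-1]:
            ends = np.append(ends, in_cluster.size)
        durations = (ends - starts) * dt   # individual cluster lifetimes
        return {
            "mean_lifetime": durations.mean() if durations.size else 0.0,
            "mean_solid_fraction": eps_s[in_cluster].mean() if in_cluster.any() else 0.0,
            "existence_time_fraction": in_cluster.mean(),
            "frequency": durations.size / (eps_s.size * dt),
        }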

Relevance:

30.00%

Abstract:

The objective of this work was to parameterize, calibrate, and validate a new version of the soybean growth and yield model developed by Sinclair under natural field conditions in the northeastern Amazon. The meteorological data and the soybean growth and leaf area measurements were obtained in an agrometeorological experiment carried out in Paragominas, PA, from 2006 to 2009. Climatic conditions during the experiment differed markedly, with a slight reduction in precipitation in 2007 due to the El Niño phenomenon. There was a reduction in leaf area index (LAI) and in biomass production in that year, which was reproduced by the model. The LAI simulation showed a root mean square error (RMSE) of 0.55 to 0.82 m² m⁻² from 2006 to 2009. The simulation of soybean yield for the independent data showed an RMSE of 198 kg ha⁻¹, i.e., an overestimate of 3%. The model is calibrated and validated for the climatic conditions of the Amazon and can contribute positively to improving simulations of the impacts of land-use change in the Amazon region. The modified version of the Sinclair model adequately simulates leaf area formation, total biomass, and soybean yield under the climatic conditions of the northeastern Amazon.
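
For reference, the RMSE reported above is the standard root mean square error between simulated and observed values, $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2}$, with $\hat{y}_i$ the simulated and $y_i$ the observed values.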

Relevance:

30.00%

Abstract:

Meteorological data and high-resolution numerical simulations were used to estimate spatial fields in the eastern Amazon region containing the Caxiuanã Forest and Bay, in the State of Pará. The study covered November 2006, when the COBRA-PARÁ field experiment was carried out. Analyses of MODIS sensor images show the occurrence of several local phenomena, such as cloud streets and precipitating convective systems, and an important influence of the interfaces between the forest and the water surfaces. Numerical simulations for 7 November 2006 showed that the model represented the main meteorological variables well. The results show that Caxiuanã Bay has an important impact on the adjacent meteorological fields, mainly through advection by the northeasterly winds, which induces cooler canopy temperatures west of the bay. High-resolution (LES) simulations produced spatial patterns of temperature and humidity aligned with the winds during the daytime, and nocturnal changes caused mainly by the presence of the bay and by convective rainfall. Spatial correlations between mid-level winds and vertical latent heat fluxes showed a shift from negative correlations in the early hours of the day to positive correlations in the afternoon and early evening.

Relevance:

30.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Abstract:

The concept of metacontingency was taught to undergraduate Psychology students using a "game" simulation originally proposed by Vichi, Andery and Glenn (2009). Twenty-five students, distributed into three groups, were exposed to six experimental sessions in which they had to place bets and divide the amounts won. The three groups competed against each other for photocopy quotas. Two contingencies alternated over the sessions: under Contingency B, the group won points only if in the previous round each member had received the same amount of points, while under Contingency A, winning was contingent on an unequal distribution of the points. We observed that proportional divisions predominated regardless of the contingency in effect. The manipulation of cultural consequences (winning or losing points) produced consistent modifications in two response categories: 1) choices of the amount bet in each round, and 2) divisions of the points among group members. Controlling relations between cultural consequences and the behavior of dividing were statistically significant in one of the groups, whereas in the other two groups controlling relations were observed only under Contingency B. A revision of the reinforcement criteria used in the original experiment is suggested.

Relevance:

30.00%

Abstract:

The aim of this PhD thesis is to investigate the orientational and dynamical properties of liquid crystalline systems at the molecular level, using atomistic computer simulations, in order to reach a better understanding of material behavior from a microscopic point of view. In perspective this should clarify the relation between microscopic and macroscopic properties, with the objective of predicting or confirming experimental results on these systems. In this context, four lines of work were developed in the thesis. The first concerns the orientational order and alignment mechanism of small rigid solutes dissolved in a nematic phase formed by the liquid crystal 4-pentyl-4'-cyanobiphenyl (5CB). The orientational distributions of the solutes were obtained with Molecular Dynamics (MD) simulations and compared with experimental data reported in the literature. We also verified the agreement between order parameters and the dipolar coupling values measured in NMR experiments. The MD-determined effective orientational potentials were compared with the predictions of the Maier–Saupe and surface tensor models. The second line concerns the development of a parametrization able to reproduce the phase transition properties of a prototype of the oligothiophene semiconductor family: sexithiophene (T6). T6 forms two extensively studied crystalline polymorphs and possesses liquid crystalline phases that are still not well characterized. From the simulations we detected a phase transition from crystal to liquid crystal at about 580 K, in agreement with available experiments, and in particular we found two LC phases, smectic and nematic. The crystal–smectic transition is associated with a considerable density variation and with strong conformational changes of T6: the molecules in the liquid crystal phase easily assume a bent shape, deviating from the planar structure typical of the crystal. The third line explores a new approach to calculating the viscosity of a nematic through a virtual experiment resembling the classical falling-sphere experiment. The falling sphere is replaced by a spherical hydrogenated silicon nanoparticle suspended in 5CB, and gravity is replaced by a constant force applied to the nanoparticle in a selected direction. Once the nanoparticle reaches a constant velocity, the viscosity of the medium can be evaluated using Stokes' law. With this method we successfully reproduced the experimental viscosities and viscosity anisotropy of the solvent 5CB. The last line deals with the study of the order induced in nematic molecules by a hydrogenated silicon surface. Predictive power for the anchoring behavior of liquid crystals at surfaces would be very desirable, as many device-related properties depend on the molecular organization close to surfaces. Here we studied, by means of atomistic MD simulations, the flat interface between a hydrogenated (001) silicon surface and a sample of 5CB molecules. We found planar anchoring of the first layers of 5CB, where the surface interactions dominate over the mesogen intermolecular interactions. We also analyzed the 5CB–vacuum interface, finding a homeotropic orientation of the nematic at this interface.
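
A minimal sketch of the Stokes'-law step behind the virtual falling-sphere experiment described above: once the dragged nanoparticle reaches terminal velocity $v$ under a constant applied force $F$, the viscosity follows as $\eta = F/(6\pi r v)$. The numbers below are illustrative, not those of the thesis.

    import math

    def stokes_viscosity(force, radius, velocity):
        # eta = F / (6 * pi * r * v) for a sphere moving at terminal velocity.
        return force / (6.0 * math.pi * radius * velocity)

    # Illustrative values: a 2 nm particle pulled with 10 pN reaching 0.01 m/s.
    eta = stokes_viscosity(10e-12, 2e-9, 0.01)   # ~0.027 Pa*s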

Relevance:

30.00%

Abstract:

This thesis addresses three major aspects of the identification of top quarks. The first is the understanding of their production mechanism and decay channels, and how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator into the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC; this is shown in chapter one. The second chapter illustrates the ATLAS detector and its sub-units, such as the calorimeters and muon chambers. Evaluating their efficiency is very important in order to fully understand what happens during the passage of radiation through the detector, and to use this knowledge in the calculation of final quantities such as the ttbar production cross-section. The last part of this thesis concerns the evaluation of this quantity using the so-called "golden channel" of ttbar decays, which yields one energetic charged lepton, four particle jets, and a significant amount of missing transverse energy due to the neutrino. The most important systematic uncertainties arising from the various parts of the calculation are studied in detail: the jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms, and the luminosity measurement are examples of contributions to the uncertainty on the cross-section.
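
For orientation, a counting analysis of this kind extracts the cross-section, in its simplest generic form (stated here as a standard textbook relation, not necessarily the thesis' exact expression), as $\sigma_{t\bar{t}} = (N_{obs} - N_{bkg}) / (\epsilon \cdot \int L\,dt)$, where $N_{obs}$ is the number of selected events, $N_{bkg}$ the estimated background, $\epsilon$ the selection efficiency, and $\int L\,dt$ the integrated luminosity; the systematic sources listed above enter through $\epsilon$, $N_{bkg}$, and the luminosity.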

Relevance:

30.00%

Abstract:

In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and shielding power of LVD, the idea being to evaluate the feasibility of hosting a dark matter detector in its innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that if LVD is used as an active muon veto, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for Monte Carlo simulations of neutron production in the heavy materials that are often used as shields in low-background experiments.

Relevance:

30.00%

Abstract:

The radiative decay of a hyperon into a lighter hyperon and a photon allows a study of the structure of the electroweak interaction of hadrons. For this purpose the decay asymmetry $\alpha$ is considered. It describes the distribution of the daughter hyperon with respect to the polarization $\vec{P}$ of the parent hyperon through $dN/d\cos\Theta \propto 1 + \alpha\,|\vec{P}|\cos\Theta$, where $\Theta$ is the angle between $\vec{P}$ and the momentum of the daughter hyperon. Of particular interest is the radiative decay $\Xi^0 \to \Lambda\gamma$, for which all quark-level calculations predict a positive asymmetry, whereas a negative asymmetry of $\alpha_{\Lambda\gamma} = -0.73 \pm 0.17$ had been measured so far. The aim of this work was to check the previous measurements and to determine the asymmetry with considerably higher precision. In addition, the decay asymmetry of the radiative decay $\Xi^0 \to \Sigma^0\gamma$ was determined, and the well-known decay $\Xi^0 \to \Lambda\pi^0$ was used to test the analysis method. During the 2002 data-taking period, the NA48/1 experiment at CERN specifically recorded rare $K_S$ and hyperon decays. This yielded the world's largest data set of $\Xi^0$ decays, from which about 52,000 $\Xi^0 \to \Lambda\gamma$ decays, 15,000 $\Xi^0 \to \Sigma^0\gamma$ decays, and 4 million $\Xi^0 \to \Lambda\pi^0$ decays were extracted with only little background. The corresponding $\bar{\Xi}^0$ decays were also recorded, with about one tenth of the above event numbers. The decay asymmetries were determined by comparing the measured data with a detailed detector simulation, leading to the following results of this work: $\alpha_{\Lambda\gamma} = -0.701 \pm 0.019_{stat} \pm 0.064_{sys}$, $\alpha_{\Sigma^0\gamma} = -0.683 \pm 0.032_{stat} \pm 0.077_{sys}$, $\alpha_{\Lambda\pi^0} = -0.439 \pm 0.002_{stat} \pm 0.056_{sys}$, $\alpha_{\bar{\Lambda}\gamma} = 0.772 \pm 0.064_{stat} \pm 0.066_{sys}$, $\alpha_{\bar{\Sigma}^0\gamma} = 0.811 \pm 0.103_{stat} \pm 0.135_{sys}$, $\alpha_{\bar{\Lambda}\pi^0} = 0.451 \pm 0.005_{stat} \pm 0.057_{sys}$. The uncertainty of the $\Xi^0 \to \Lambda\gamma$ decay asymmetry was thus reduced to about one third of its previous value. Its negative sign, and hence the contradiction with the predictions of quark-model calculations, is confirmed beyond doubt. With the $\bar{\Xi}^0$ asymmetries, measured here for the first time, limits could additionally be placed on a possible CP violation in $\Xi^0$ decays, which would imply $\alpha_{\Xi^0} \neq -\alpha_{\bar{\Xi}^0}$.

Relevance:

30.00%

Abstract:

The Standard Model of particle physics, which describes three of the four fundamental interactions, has so far agreed very well with the measurements of the experiments at CERN, Fermilab, and other research facilities. However, not all questions of particle physics can be answered within this model. For example, the fourth fundamental force, gravity, cannot be incorporated into the Standard Model. Furthermore, the Standard Model offers no candidate for dark matter, which according to cosmological measurements makes up about 25% of our universe. Supersymmetry, which introduces a symmetry between fermions and bosons, is regarded as one of the most promising solutions to these open questions. This model gives rise to so-called supersymmetric particles, each of which has a Standard Model particle assigned as its partner. If supersymmetry is realized in nature, one possible model of this symmetry is the R-parity-conserving mSUGRA model. In this model the lightest supersymmetric particle (LSP) is neutral and weakly interacting, so it cannot be detected directly in the detector but must be inferred from the energy it carries away, the missing transverse energy (etmiss).

The ATLAS experiment will begin the search for new physics in 2010 at the pp collider LHC, with a centre-of-mass energy of sqrt(s) = 7–10 TeV and a luminosity of 10^32 cm^-2 s^-1. Because of the very high data rate resulting from the roughly 10^8 readout channels of the ATLAS detector at a bunch-crossing rate of 40 MHz, a trigger system is required to reduce the amount of data to be stored. A compromise must be found between the available trigger rate and a very high trigger efficiency for the interesting events, since only about one event in 10^8 is of interest for the search for new physics. To meet these requirements, the experiment uses a three-level trigger system, in which by far the largest data reduction takes place at the first trigger level.

Within this work, a substantial contribution has been made, on the one hand, to the fundamental understanding of the properties of the missing transverse energy at the first trigger level. On the other hand, methods are presented with which the etmiss trigger efficiency for Standard Model processes and possible mSUGRA scenarios can be determined from data. In the optimization of the etmiss trigger thresholds for the first trigger level, the trigger rate at a luminosity of 10^33 cm^-2 s^-1 was fixed at 100 Hz. Several simulations, which include the author's own development work, were required for the trigger optimization. Using these simulations and the optimization algorithms developed here, it is shown that, despite the low trigger rate, the discovery potential (for a signal significance of at least 5 sigma) is increased by up to 66% relative to the existing first-level ATLAS trigger menu by combining the etmiss threshold with lepton or jet trigger thresholds.
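
As a point of reference for the significance figure mentioned above, a common simple estimator (an assumption here, not necessarily the definition used in the thesis) is $Z = S/\sqrt{B}$ for $S$ expected signal and $B$ expected background events, so a 5 sigma discovery requires $S \ge 5\sqrt{B}$ after the trigger and offline selections.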