932 results for Smoothed Particle Hydrodynamics (SPH)


Relevance:

20.00%

Publisher:

Abstract:

During the last decade, advances in sensor design and improved base materials have pushed the radiation hardness of current silicon detector technology to impressive performance, which should allow operation of the tracking systems of the Large Hadron Collider (LHC) experiments at nominal luminosity (10^34 cm^-2 s^-1) for about 10 years. Beyond that, however, current silicon detectors are unable to cope with such an environment. Silicon carbide (SiC), which has recently been recognized as potentially radiation hard, is now being studied. In this work, the effect of high-energy neutron irradiation on 4H-SiC particle detectors was analyzed: Schottky and junction particle detectors were irradiated with 1 MeV neutrons up to a fluence of 10^16 cm^-2. It is well known that the degradation of such detectors with irradiation, independently of the structure used for their realization, is caused by lattice defects, such as the creation of point-like defects, dopant deactivation, and dead-layer formation, and that a crucial aspect for understanding the defect kinetics at a microscopic level is the correct identification of the crystal defects in terms of their electrical activity. In order to clarify the defect kinetics, thermal transient spectroscopy (DLTS and PICTS) analyses were carried out on samples irradiated at increasing fluences. The defect evolution was correlated with the transport properties of the irradiated detectors, always in comparison with the un-irradiated one. The charge collection efficiency degradation of Schottky detectors induced by neutron irradiation was related to the increasing concentration of defects as a function of the neutron fluence.
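
As an illustration of that last relation, here is a minimal sketch of how a charge collection efficiency (CCE) curve could be fitted against neutron fluence; the 1/(1 + k*phi) trapping-style parameterization and all numbers are illustrative assumptions, not the model or data actually used in the work.

    import numpy as np
    from scipy.optimize import curve_fit

    def cce_model(phi, cce0, k):
        """Illustrative first-order trapping ansatz: CCE falls as the defect
        concentration (assumed proportional to the fluence phi) grows."""
        return cce0 / (1.0 + k * phi)

    # hypothetical measurements: 1 MeV neutron fluence (cm^-2) vs. measured CCE
    phi = np.array([0.0, 1e14, 1e15, 5e15, 1e16])
    cce = np.array([1.00, 0.95, 0.70, 0.40, 0.28])

    popt, pcov = curve_fit(cce_model, phi, cce, p0=[1.0, 1e-16])
    print(f"CCE0 = {popt[0]:.2f}, damage constant k = {popt[1]:.2e} cm^2")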

Relevance:

20.00%

Publisher:

Abstract:

Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this functionality would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the configuration memory of the FPGA required the integration of a flash ISP (In System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board and we succeeded in driving the acquisition with the FPGA: PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. Concerning the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN in Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinning were tested during the test beam: those thinned to 100 and 300 um showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual distribution, taking into account the multiple scattering effect.
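
A minimal sketch of the resolution estimate mentioned above: the binary-readout expectation pitch/sqrt(12), and the quadrature subtraction of the telescope pointing term (including multiple scattering) from the measured residual width. The pitch and the residual/telescope numbers below are placeholders, not the actual test-beam figures.

    import math

    def binary_resolution(pitch_um):
        """Expected resolution of a digital (hit/no-hit) pixel: pitch / sqrt(12)."""
        return pitch_um / math.sqrt(12.0)

    def intrinsic_resolution(residual_width_um, telescope_term_um):
        """Unfold the telescope pointing term from the measured residual width,
        assuming the two contributions add in quadrature."""
        return math.sqrt(residual_width_um**2 - telescope_term_um**2)

    pitch = 50.0                                   # um, placeholder value
    print(f"pitch/sqrt(12)      = {binary_resolution(pitch):.1f} um")
    print(f"intrinsic (example) = {intrinsic_resolution(16.0, 8.0):.1f} um")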

Relevance:

20.00%

Publisher:

Abstract:

The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation in which the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), the reservoirs being prepared close to thermal equilibrium states, each at a different temperature. We analyze the interacting time evolution of such an initial configuration and show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. As long as the reservoirs have been prepared at different temperatures, this stationary state exhibits thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguish it from a thermal equilibrium state at any temperature. Therefore, we refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator-algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects used to describe matter in interaction with radiation. The C*-theory is adapted to the concrete setup. The algebraic description of the system is lifted into a Hilbert space framework; the appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space. The first part of the present work concludes with the derivation of a spectral theory which connects the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. In this way, the question about thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type. The spectral analysis of the generator K follows. This task is the core part of the work and employs various functional-analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the strength of the coupling is sufficiently small. The extraction of dynamical features of the system from properties of K requires, in particular, knowledge about the spectrum of K in the nearest vicinity of eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions are only suited to studying the perturbed spectrum in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need to apply a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrary scales requires an iteration of the Feshbach map. This procedure leads to an operator-theoretic renormalization group.
The reader is introduced to the Feshbach technique, and the renormalization procedure based on it is discussed in full detail. Further, it is explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends a recent research contribution by Jakšić and Pillet to a similar physical setup in two respects: firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach-map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
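
For orientation, here is the Feshbach map in its standard (Schur complement) form, for a projection P and a spectral parameter z; the notation is generic and not quoted from the thesis.

    \[
      \mathcal{F}_P(K - z)
      \;=\; P (K - z) P
      \;-\; P K \bar{P}\,\bigl(\bar{P} (K - z) \bar{P}\bigr)^{-1} \bar{P} K P ,
      \qquad \bar{P} = 1 - P ,
    \]

defined on the range of P whenever the restriction of (K - z) to the range of the complementary projection is invertible there. Its key property is isospectrality: z is an eigenvalue of K if and only if 0 is an eigenvalue of the mapped operator on Ran P, which is what allows the spectral analysis to be transferred to a smaller spectral subspace and then iterated.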

Relevance:

20.00%

Publisher:

Abstract:

The fundamental aim of our investigation of the interaction of a polymer film with a nanoparticle is the extraction of information on the dynamics of the liquid using a single tracking particle. In this work two theoretical methods were used: a passive one, where the motion of the particle measures the dynamics of the liquid, and an active one, where perturbations are introduced into the system through the particle. In the first part of this investigation, a thin polymeric film on a substrate is studied using molecular dynamics simulations. The polymer is modeled via a 'bead-spring' model. The particle is spherical and unstructured and interacts with the monomers via a Lennard-Jones potential. The system is micro-canonical, and simulations were performed for average temperatures between the glass transition temperature of the film and its dewetting temperature. It is shown that the stability of the nanoparticle on the polymer film in the absence of gravity depends strongly on the form of the chosen interaction potential between nanoparticle and polymer. The position of the tracking particle relative to the liquid-vapor interface of the polymer film reveals the glass transition of the latter. The velocity correlation function and the mean square displacement of the particle show that it is caged when the temperature is close to the glass transition temperature. The analysis of the dynamics at long times shows the coupling of the nanoparticle to the center of mass of the polymer chains. The use of the Stokes-Einstein formula, which relates the diffusion coefficient to the viscosity, makes it possible to use the nanoparticle as a probe for determining the bulk viscosity of the melt, the so-called 'microrheology'. It is shown that for low frequencies the result obtained using microrheology coincides with the results of the Rouse model applied to the polymer dynamics. In the second part of this investigation, the equations of linear hydrodynamics are solved for a nanoparticle oscillating above the film. It is shown that compressible liquids have a mechanical response to external perturbations induced with the nanoparticle. These solutions show strong velocity and pressure profiles of the liquid near the interface, as well as a mechanical response of the liquid-vapor interface. The results obtained with these calculations can be employed for the interpretation of experimental results of non-contact AFM microscopy.
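
A minimal sketch of the passive microrheology step described above: extract the diffusion coefficient from the long-time mean square displacement and invert the Stokes-Einstein relation for the viscosity. The 3D diffusive form <dr^2> = 6 D t, the stick-boundary factor 6*pi, and all numbers are generic textbook assumptions, not the parameters of the simulations themselves.

    import numpy as np

    def diffusion_coefficient(t, msd):
        """Fit <dr^2(t)> = 6 D t in the long-time (diffusive) regime, 3D case."""
        slope, _ = np.polyfit(t, msd, 1)
        return slope / 6.0

    def stokes_einstein_viscosity(D, radius, kT):
        """Stokes-Einstein with stick boundary conditions: D = kT / (6 pi eta R)."""
        return kT / (6.0 * np.pi * D * radius)

    # illustrative data in Lennard-Jones-style reduced units
    t = np.linspace(10.0, 100.0, 50)
    msd = 0.03 * t                         # fake diffusive MSD with slope 6D = 0.03
    D = diffusion_coefficient(t, msd)
    eta = stokes_einstein_viscosity(D, radius=1.0, kT=1.0)
    print(f"D = {D:.4f}, eta = {eta:.2f} (reduced units)")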

Relevance:

20.00%

Publisher:

Abstract:

Particle concentration is a principal factor affecting the erosion rate of solid surfaces under particle impact, such as pipe bends in pneumatic conveyors; it is well known that a reduction in the specific erosion rate occurs under high particle concentrations, a phenomenon referred to as the "shielding effect". The cause of shielding is believed to be an increased likelihood of inter-particulate collisions, the high collision probability between incoming and rebounding particles reducing the frequency and the severity of particle impacts on the target surface. In this study, the effects of particle concentration on the erosion of a mild steel bend surface have been investigated in detail using three different particulate materials on an industrial-scale pneumatic conveying test rig. The materials were selected so that two had the same particle density but very different particle sizes, whereas two had very similar particle sizes but very different particle densities. Experimental results confirm the shielding effect due to high particle concentration and show that the particle density has a far more significant influence than the particle size on the magnitude of the shielding effect. A new method of correcting for the change in erosivity of the particles under repeated handling, so as to take this factor out of the data, has been established and appears to be successful. Moreover, a novel empirical model of the shielding effect has been used, in terms of an erosion resistance that appears to decrease linearly as the particle concentration decreases. With the model it is possible to find the specific erosion rate as the particle concentration tends to zero and, conversely, to predict how the specific erosion rate changes at finite values of particle concentration; this is critical for predicting component life from erosion tester results, as the variation of the shielding effect with concentration differs between these two scenarios. In addition, a previously unreported phenomenon has been recorded: a particulate material whose erosivity steadily increased during repeated impacts.
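
A minimal sketch under the assumption stated above, namely that the erosion resistance (taken here as the reciprocal of the specific erosion rate) varies linearly with particle concentration: fit the linear relation, extrapolate to zero concentration, or evaluate it at a finite concentration. The data and the exact form of the published model are illustrative, not taken from the thesis.

    import numpy as np

    # hypothetical data: particle concentration vs. measured specific erosion rate
    # (mass removed from the bend per mass of particles conveyed)
    conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    erosion = np.array([5.0, 4.2, 3.2, 2.1, 1.3])

    # assumed model: erosion resistance R = 1/erosion is linear in concentration
    b, a = np.polyfit(conc, 1.0 / erosion, 1)      # R = a + b * conc

    print(f"specific erosion rate extrapolated to zero concentration: {1.0 / a:.2f}")
    print(f"predicted specific erosion rate at concentration 10:      {1.0 / (a + b * 10.0):.2f}")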

Relevance:

20.00%

Publisher:

Abstract:

One of the main goals of the CMS experiment is the search for the Standard Model Higgs boson. The 4-lepton channel (from the Higgs decay h->ZZ->4l, l = e,mu) is one of the most promising. The analysis is based on the identification of two opposite-sign, same-flavor lepton pairs: the leptons are required to be isolated and to come from the same primary vertex. The Higgs boson would be statistically revealed by the presence of a resonance peak in the 4-lepton invariant mass distribution. The 4-lepton analysis at CMS is presented, covering its most important aspects: lepton identification, isolation variables, impact parameter, kinematics, event selection, background control, and statistical analysis of the results. The search leads to evidence for the presence of a signal with a statistical significance of more than four standard deviations. The excess of data with respect to the background-only prediction indicates the presence of a new boson, with a mass of about 126 GeV/c^2, decaying to two Z bosons, whose characteristics are compatible with those of the SM Higgs.
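
A minimal sketch of the quantity the peak search is built on, the 4-lepton invariant mass computed from the lepton four-momenta; the (pt, eta, phi) values are placeholders and the lepton masses are neglected, both being assumptions made only for this illustration.

    import math

    def four_vector(pt, eta, phi):
        """Return (E, px, py, pz) from collider kinematics, massless-lepton approximation."""
        px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
        return math.sqrt(px**2 + py**2 + pz**2), px, py, pz

    def invariant_mass(leptons):
        """m^2 = (sum E)^2 - |sum p|^2 for the selected lepton quadruplet."""
        E, px, py, pz = (sum(c) for c in zip(*(four_vector(*l) for l in leptons)))
        return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

    # placeholder 4-lepton candidate: (pt [GeV], eta, phi)
    leptons = [(45.0, 0.3, 0.1), (30.0, -0.5, 2.8), (25.0, 1.1, -1.9), (20.0, 0.2, 1.0)]
    print(f"m_4l = {invariant_mass(leptons):.1f} GeV")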

Relevance:

20.00%

Publisher:

Abstract:

The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows. An important range of applications for the lattice Boltzmann method is formed by microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations which are able to complement the achievements of theory and experiment. Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for the lattice Boltzmann method is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced-symmetry model. In the case of a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
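
To make the basic scheme referred to above concrete, here is a minimal single-relaxation-time (BGK) D2Q9 collide-and-stream step with periodic boundaries; it illustrates the generic method only, and none of the thesis's extensions (thermal fluctuations, tunable-slip or reduced-symmetry boundary conditions) are included.

    import numpy as np

    # D2Q9 lattice: discrete velocities and weights (lattice units, c_s^2 = 1/3)
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

    def equilibrium(rho, u):
        """Second-order expansion of the Maxwell-Boltzmann distribution."""
        cu = np.einsum('qa,xya->xyq', c, u)
        usq = np.einsum('xya,xya->xy', u, u)
        return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

    def step(f, tau):
        """One lattice Boltzmann update: BGK collision followed by streaming."""
        rho = f.sum(axis=-1)
        u = np.einsum('xyq,qa->xya', f, c) / rho[..., None]
        f = f - (f - equilibrium(rho, u)) / tau            # collide
        for q, (cx, cy) in enumerate(c):                   # stream (periodic)
            f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
        return f

    # usage: quiescent fluid on a 64 x 32 lattice, relaxation time tau = 0.8
    nx, ny = 64, 32
    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
    f = step(f, tau=0.8)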

Relevance:

20.00%

Publisher:

Abstract:

Most ocean-atmosphere exchanges take place in polar environments, owing to the low temperatures that favor the absorption of atmospheric gases, in particular CO2. For this reason, alterations of the biogeochemical cycles in these areas can have a strong impact on the global climate. With the aim of contributing to the definition of the mechanisms regulating biogeochemical fluxes, we analyzed the particles collected in the Ross Sea in different years (ROSSMIZE, BIOSESO 1 and 2, ROAVERRS and ABIOCLEAR projects) at two sites (moorings A and B). To this end, a more efficient method of preparing sediment trap samples for analysis was developed. We also processed satellite data on sea ice, chlorophyll-a, and diatom concentration. At both sites, in each year considered, there was a high seasonal and inter-annual variability of biogeochemical fluxes, closely correlated with sea-ice cover and primary productivity. The comparison between the samples collected at moorings A and B in 2008 highlighted the main differences between the two sites. Particle fluxes at mooring A, located in a polynya area, are higher than those at mooring B and occur about a month earlier. In the mooring B area it was possible to correlate the particle fluxes with sea-ice concentration anomalies and with atmospheric changes in response to the El Niño Southern Oscillation. In 1996 and 1999, La Niña years, sea-ice concentrations in this area were lower than in 1998, an El Niño year. The inverse correlation was found for 2005 and 2008. In the mooring A area, significant differences in mass and biogenic fluxes were recorded between 2005 and 2008. This highlighted the high variability of lateral advection processes and allowed them to be connected to the physical forcing.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and graph theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph theory has been applied by exploiting consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as consensus.
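
For reference, a minimal sketch of the textbook PSO update that such a navigation layer builds on: velocities are pulled toward each particle's personal best and the swarm's global best, with random weighting. The parameters (inertia w, coefficients c1, c2) and the toy objective are generic assumptions, not the thesis's navigation variant, and the consensus-based formation control is not included here.

    import numpy as np

    rng = np.random.default_rng(0)

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        """One canonical PSO update for all particles (rows of x)."""
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v

    # toy usage: minimize distance to a goal point (stand-in for a navigation objective)
    goal = np.array([5.0, -3.0])
    cost = lambda p: np.linalg.norm(p - goal, axis=1)

    x = rng.uniform(-10, 10, size=(20, 2))        # 20 agents in 2D
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), cost(x)

    for _ in range(100):
        x, v = pso_step(x, v, pbest, pbest[np.argmin(pbest_cost)])
        improved = cost(x) < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost(x)[improved]

    print("best position found:", pbest[np.argmin(pbest_cost)])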

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as were found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are in qualitative agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, it is demonstrated that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D, through a comparison between the experimental partial structure factors and those from computer simulations. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e. a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. The predicted plasticization effect is shown to be in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
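
For context, the generic structure of the MCT equations of motion from which glass transition lines are computed, written schematically with species and wave-vector indices suppressed in the memory kernel; this is the standard textbook form, not a formula quoted from the thesis.

    \[
      \ddot{\Phi}_q(t) \;+\; \Omega_q^2\, \Phi_q(t)
      \;+\; \Omega_q^2 \int_0^t m_q(t-t')\, \dot{\Phi}_q(t')\, \mathrm{d}t' \;=\; 0 ,
      \qquad
      m_q(t) \;=\; \mathcal{F}_q\bigl[\Phi(t), \Phi(t)\bigr],
    \]

where Phi_q(t) denotes the (matrix of) normalized partial density correlators and F_q is a bilinear functional whose coefficients are determined by the static structure factors. Glass transition lines are the loci in control-parameter space (packing fraction, composition, size or dipole disparity) at which the long-time limit of Phi_q(t) jumps from zero to a nonzero value.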