995 results for i-particle
Abstract:
The effect of event background fluctuations on charged particle jet reconstruction in Pb-Pb collisions at √s_NN = 2.76 TeV has been measured with the ALICE experiment. The main sources of non-statistical fluctuations are characterized based purely on experimental data with an unbiased method, as well as by using single high-p_T particles and simulated jets embedded into real Pb-Pb events and reconstructed with the anti-k_T jet finder. The influence of a low transverse momentum cut-off on particles used in the jet reconstruction is quantified by varying the minimum track p_T between 0.15 GeV/c and 2 GeV/c. For embedded jets reconstructed from charged particles with p_T > 0.15 GeV/c, the uncertainty in the reconstructed jet transverse momentum due to the heavy-ion background is measured to be 11.3 GeV/c (standard deviation) for the 10% most central Pb-Pb collisions, slightly larger than the value of 11.0 GeV/c measured using the unbiased method. For a higher particle transverse momentum threshold of 2 GeV/c, which will generate a stronger bias towards hard fragmentation in the jet finding process, the standard deviation of the fluctuations in the reconstructed jet transverse momentum is reduced to 4.8-5.0 GeV/c for the 10% most central events. A non-Gaussian tail of the momentum uncertainty is observed and its impact on the reconstructed jet spectrum is evaluated for varying particle momentum thresholds, by folding the measured fluctuations with steeply falling spectra.
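The folding procedure mentioned in the last sentence can be illustrated with a short numerical sketch: a steeply falling power-law spectrum is convolved with a Gaussian fluctuation of the quoted widths. The spectral index, momentum range, and purely Gaussian smearing shape are assumptions for demonstration only (the measured fluctuations notably have a non-Gaussian tail).

```python
import numpy as np

# Illustrative sketch: fold a pt**-n truth spectrum with Gaussian
# background fluctuations of width sigma (GeV/c). Index n and the
# momentum range are invented; the sigma values are those quoted above.
def folded_over_truth(pt_reco, sigma, n=5.0, pt_min=5.0, pt_max=300.0):
    """Ratio of smeared to true yield at pt_reco for a pt**-n spectrum."""
    pt_true = np.linspace(pt_min, pt_max, 4000)
    truth = pt_true ** (-n)
    gauss = (np.exp(-0.5 * ((pt_reco - pt_true) / sigma) ** 2)
             / (sigma * np.sqrt(2.0 * np.pi)))
    smeared = np.sum(truth * gauss) * (pt_true[1] - pt_true[0])
    return smeared / pt_reco ** (-n)

ratio_wide = folded_over_truth(40.0, 11.3)   # 0.15 GeV/c track cut case
ratio_narrow = folded_over_truth(40.0, 5.0)  # ~2 GeV/c track cut case
```

Wider fluctuations feed more yield from the abundant low-p_T region up to a given reconstructed p_T, so the wide-sigma ratio exceeds the narrow-sigma one.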
Abstract:
We present STAR measurements of azimuthal anisotropy by means of the two- and four-particle cumulants v_2 (v_2{2} and v_2{4}) for Au + Au and Cu + Cu collisions at center-of-mass energies √s_NN = 62.4 and 200 GeV. The difference between v_2{2}^2 and v_2{4}^2 is related to v_2 fluctuations (σ_v2) and nonflow (δ_2). We present an upper limit on σ_v2/v_2. Under the assumption that eccentricity fluctuations σ_ε dominate the v_2 fluctuations, i.e. v_2/σ_v2 ≈ ε/σ_ε, we deduce the nonflow implied for several models of eccentricity fluctuations that would be required for consistency with v_2{2} and v_2{4}. We also present results on the ratio of v_2 to eccentricity.
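The cumulant relations behind the quoted difference can be sketched numerically: in the small-fluctuation limit, v_2{2}^2 = ⟨v_2⟩^2 + σ_v2^2 + δ_2 while v_2{4}^2 ≈ ⟨v_2⟩^2 − σ_v2^2, so their difference isolates 2σ_v2^2 + δ_2. The numerical inputs below are invented for illustration only.

```python
import math

# Small-fluctuation cumulant relations (sigma_v2 << v2 assumed):
#   v2{2}^2 = <v2>^2 + sigma^2 + delta2
#   v2{4}^2 ~ <v2>^2 - sigma^2
def cumulants(v2_mean, sigma_v2, delta2):
    """Return (v2{2}, v2{4}) from mean flow, flow fluctuations, nonflow."""
    v2_2 = math.sqrt(v2_mean ** 2 + sigma_v2 ** 2 + delta2)
    v2_4 = math.sqrt(v2_mean ** 2 - sigma_v2 ** 2)
    return v2_2, v2_4

v22, v24 = cumulants(v2_mean=0.06, sigma_v2=0.02, delta2=0.0005)
fluct_plus_nonflow = v22 ** 2 - v24 ** 2  # recovers 2*sigma^2 + delta2
```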
Abstract:
The ALICE Collaboration reports the measurement of the relative J/ψ yield as a function of charged particle pseudorapidity density dN_ch/dη in pp collisions at √s = 7 TeV at the LHC. J/ψ particles are detected for p_T > 0, in the rapidity interval |y| < 0.9 via decay into e+e−, and in the interval 2.5 < y < 4.0 via decay into μ+μ− pairs. An approximately linear increase of the J/ψ yields normalized to their event average, (dN_J/ψ/dy)/⟨dN_J/ψ/dy⟩, with (dN_ch/dη)/⟨dN_ch/dη⟩ is observed in both rapidity ranges, where dN_ch/dη is measured within |η| < 1 and p_T > 0. In the highest multiplicity interval, with ⟨dN_ch/dη⟩_bin = 24.1, corresponding to four times the minimum bias multiplicity density, an enhancement relative to the minimum bias J/ψ yield by a factor of about 5 at 2.5 < y < 4 (8 at |y| < 0.9) is observed. (C) 2012 CERN. Published by Elsevier B.V. All rights reserved.
Abstract:
Events of new particle formation (NPF) in the tropical boundary layer followed by consecutive growth towards the Aitken mode size range are sparse compared to mid-latitudes (Kulmala et al., 2004). This is also the case for the rainforest environment. More often, short episodes of elevated ultrafine and Aitken mode aerosol particle concentrations are observed; their origin and the processes governing these episodes do, however, remain unclear. Based on observations performed in the Amazonian rainforest environment, combined with statistical analysis, we present a mechanism explaining the erratic appearance of ultrafine aerosol in the tropical boundary layer of the rainforest.
Abstract:
Several activities were conducted during my PhD work. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the characterization of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has also been integrated on the board, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this functionality will probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the configuration memory of the FPGA required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment on loan from the Roma University group and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase-2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations, meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface towards and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinning were tested during the test beam. Those thinned to 100 and 300 μm showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution has been extracted from the width of the residual plot, taking into account the multiple scattering effect.
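The pitch/sqrt(12) formula cited above follows from binary readout: a hit uniformly distributed across a pixel is reconstructed at the pixel centre, giving an RMS residual of pitch/√12. A minimal sketch, where the 50 μm pitch is an assumed illustration value, not the actual APSEL4D pitch:

```python
import math

# Binary-readout resolution: RMS of a uniform distribution of width
# `pitch` about the pixel centre is pitch / sqrt(12).
def binary_resolution(pitch):
    return pitch / math.sqrt(12)

# Numerical check: RMS of residuals sampled uniformly over one pitch.
pitch = 50.0  # um (assumed value for illustration)
n = 100001
rms = math.sqrt(
    sum(((i / (n - 1) - 0.5) * pitch) ** 2 for i in range(n)) / n
)
```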
Abstract:
The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation where the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), with the reservoirs prepared close to thermal equilibrium states, each at a different temperature. We analyze the interacting time evolution of such an initial configuration and show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. Provided the reservoirs have been prepared at different temperatures, this stationary state features thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguish it from a thermal equilibrium state at any temperature. Therefore, we refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects used to describe matter in interaction with radiation. The C*-theory is adapted to the concrete setup. The algebraic description of the system is lifted into a Hilbert space framework. The appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space.
The first part of the present work is concluded by the derivation of a spectral theory which connects the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. That way, the question about thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type. The spectral analysis of the generator K follows. This task is the core part of the work and employs various kinds of functional analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the strength of the coupling is sufficiently small. The extraction of dynamical features of the system from properties of K requires, in particular, knowledge of the spectrum of K in the nearest vicinity of eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions only allow the perturbed spectrum to be studied in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need to apply a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrary scales requires an iteration of the Feshbach map. This procedure leads to an operator-theoretic renormalization group. The reader is introduced to the Feshbach technique, and the renormalization procedure based on it is discussed in full detail. Further, it is explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends, in two respects, a recent research contribution by Jakšić and Pillet to a similar physical setup.
Firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
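For orientation, the Feshbach map referred to above acts, in its basic textbook form, as follows; here P is a spectral projection, Q = 1 − P, and the thesis employs an iterated (and possibly smoothed) variant of this construction:

```latex
\mathcal{F}_P(K - z) \;=\; P (K - z) P \;-\; P K Q \,\bigl( Q (K - z) Q \bigr)^{-1} Q K P ,
\qquad Q = \mathbf{1} - P .
```

Its key property is isospectrality: under suitable invertibility assumptions on Q(K − z)Q, the operator K − z is invertible if and only if F_P(K − z) is invertible on Ran P, so spectral information about K near an unperturbed eigenvalue can be read off from the much smaller operator F_P(K − z).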
Abstract:
Most ocean-atmosphere exchanges take place in polar environments, where low temperatures favor the absorption of atmospheric gases, in particular CO2. For this reason, alterations of the biogeochemical cycles in these areas can have a strong impact on the global climate. With the aim of contributing to the definition of the mechanisms regulating biogeochemical fluxes, we have analyzed the particles collected in the Ross Sea in different years (ROSSMIZE, BIOSESO 1 and 2, ROAVERRS and ABIOCLEAR projects) at two sites (moorings A and B). To this end, a more efficient method was developed to prepare sediment trap samples for analysis. We have also processed satellite data of sea ice, chlorophyll a and diatom concentration. At both sites, in each year considered, there was a high seasonal and inter-annual variability of biogeochemical fluxes, closely correlated with sea ice cover and primary productivity. The comparison between the samples collected at moorings A and B in 2008 highlighted the main differences between these two sites. Particle fluxes at mooring A, located in a polynya area, are higher than those at mooring B and occur about a month earlier. In the mooring B area it was possible to correlate the particle fluxes with the ice concentration anomalies and with the atmospheric changes in response to the El Niño Southern Oscillation. In 1996 and 1999, La Niña years, the sea ice concentrations in this area were lower than in 1998, an El Niño year. An inverse correlation was found for 2005 and 2008. In the mooring A area, significant differences in mass and biogenic fluxes during 2005 and 2008 were recorded. This highlighted the high variability of lateral advection processes and allowed them to be connected to the physical forcing.
Abstract:
The growing traffic volumes on road pavements cause stress states of considerable magnitude that produce permanent damage to the superstructure. Such damage reduces the pavement's service life and entails high maintenance costs. Asphalt concrete is a multiphase material composed of aggregates, bitumen and air voids. The physical properties and performance of the mixture depend on the characteristics of the aggregate and the binder, and on their interaction. The approach traditionally used for numerical modeling of asphalt concrete is based on a macroscopic study of its mechanical response through continuum constitutive models which, by their nature, do not account for the mutual interaction between the heterogeneous phases composing it, and instead use equivalent homogeneous schematizations. To advance these methodologies it is necessary to overcome this simplification, considering the discrete character of the system and adopting a microscopic approach capable of representing the actual physical-mechanical processes on which the overall macroscopic response depends. In the present work, after a general review of the main numerical methods traditionally employed for the study of asphalt concrete, the theory of the Particle Discrete Element Method (DEM-P) is examined in depth; it schematizes the granular material as a set of independent particles that interact with one another at their mutual contact points according to appropriate constitutive laws. The influence of aggregate shape and size on the macroscopic (maximum deviatoric stress) and microscopic (normal and tangential contact forces, number of contacts, void ratio, porosity, packing, internal friction angle) characteristics of the mixture is evaluated. This is made possible by comparing numerical and experimental results of triaxial tests performed on specimens made of three different mixtures composed of spheres and elements of generic shape.
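The particle-contact interaction at the heart of a DEM-P scheme of the kind described can be sketched minimally as a linear elastic normal contact between two spheres. Real DEM codes add tangential springs, damping and friction; the stiffness, radii and positions below are invented illustration values.

```python
import math

# Minimal DEM-style contact law: a linear spring in the normal
# direction, active only when the two spheres overlap.
def normal_contact_force(x1, x2, r1, r2, kn=1.0e6):
    """Normal force magnitude (N) between two spheres; 0 if no contact."""
    dist = math.dist(x1, x2)
    overlap = (r1 + r2) - dist
    return kn * overlap if overlap > 0.0 else 0.0

# Two 5 mm spheres with centres 9 mm apart: 1 mm overlap -> ~1 kN.
f = normal_contact_force((0.0, 0.0, 0.0), (0.0, 0.0, 0.009), 0.005, 0.005)
```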
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. Particularly, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as have been found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are on a qualitative level in agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, it is demonstrated that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D through a comparison between the experimental partial structure factors and those from computer simulations. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e.
a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. It is demonstrated that the predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
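For orientation, the structure of the MCT equations generalized in the thesis can be indicated by the schematic scalar F12 model (the actual work uses the full matrix-valued multicomponent equations in arbitrary dimension):

```latex
\partial_t^2 \phi(t) + \nu\, \partial_t \phi(t) + \Omega^2 \phi(t)
  + \Omega^2 \int_0^t m(t - t')\, \partial_{t'} \phi(t')\, \mathrm{d}t' = 0 ,
\qquad m(t) = v_1 \phi(t) + v_2 \phi(t)^2 ,
```

where φ(t) is a normalized density correlator and the glass transition line is the locus of coupling parameters (v1, v2) at which the long-time limit of φ jumps discontinuously from zero to a finite value.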
Abstract:
The aim of this work is to present various aspects of numerical simulation of particle and radiation transport for industrial and environmental protection applications, to enable the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a sizable reduction of iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
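The direct-quadrature step for the Legendre moments of the transfer kernel can be sketched as follows. The kernel here is a made-up smooth function of the scattering cosine μ, and the (2l+1)/2 normalization is one common convention; the thesis kernel is far more involved.

```python
import numpy as np

# Legendre moments by Gauss-Legendre quadrature:
#   f_l = (2l+1)/2 * integral_{-1}^{1} P_l(mu) kernel(mu) dmu
def legendre_moments(kernel, lmax, npts=64):
    """Return the moments f_0 .. f_lmax of kernel(mu) on [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(npts)
    vals = kernel(nodes)
    moments = []
    for l in range(lmax + 1):
        pl = np.polynomial.legendre.Legendre.basis(l)(nodes)
        moments.append((2 * l + 1) / 2.0 * np.sum(weights * pl * vals))
    return moments

# Linearly anisotropic test kernel: 1 + 0.3*mu.
mom = legendre_moments(lambda mu: 1.0 + 0.3 * mu, lmax=3)
```

For this linear kernel the quadrature (exact for polynomials up to degree 2·npts − 1) recovers f0 = 1, f1 = 0.3, and vanishing higher moments.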
Abstract:
http://www.ncbi.nlm.nih.gov/pubmed/20554796
Abstract:
This Letter presents the first search for a heavy particle decaying into an e±μ∓ final state in √s = 7 TeV pp collisions at the LHC. The data were recorded by the ATLAS detector during 2010 and correspond to a total integrated luminosity of 35 pb−1. No excess above the standard model background expectation is observed. Exclusions at 95% confidence level are placed on two representative models. In an R-parity violating supersymmetric model, tau sneutrinos with a mass below 0.75 TeV are excluded, assuming all R-parity violating couplings are zero except λ′_311 = 0.11 and λ_312 = 0.07. In a lepton flavor violating model, a Z′-like vector boson with masses of 0.70-1.00 TeV and corresponding cross sections times branching ratios of 0.175-0.183 pb is excluded. These results extend to higher masses than previous constraints from the Tevatron, for both R-parity violating sneutrinos and lepton flavor violating Z′ bosons.
Abstract:
A particle system is a family of i.i.d. stochastic processes with values translated by Poisson points. We obtain conditions that ensure the stationarity in time of the particle system in R^d and in some cases provide a full characterisation of the stationarity property. In particular, a full characterisation of stationary multivariate Brown–Resnick processes is given.
Optimized method for black carbon analysis in ice and snow using the Single Particle Soot Photometer
Abstract:
Attractive business cases in various application fields contribute to the sustained long-term interest in indoor localization and tracking by the research community. Location tracking is generally treated as a dynamic state estimation problem, consisting of two steps: (i) location estimation through measurement, and (ii) location prediction. For the estimation step, one of the most efficient and low-cost solutions is Received Signal Strength (RSS)-based ranging. However, various challenges - unrealistic propagation models, non-line-of-sight (NLOS) conditions, and multipath propagation - are yet to be addressed. Particle filters are a popular choice for dealing with the inherent non-linearities in both location measurements and motion dynamics. While such filters have been successfully applied to accurate, time-based ranging measurements, dealing with the more error-prone RSS-based ranging is still challenging. In this work, we address the above issues with a novel, weighted likelihood, bootstrap particle filter for tracking via RSS-based ranging. Our filter weights the individual likelihoods from different anchor nodes exponentially, according to the ranging estimates. We also employ an improved propagation model for more accurate RSS-based ranging, which we suggested in recent work. We implemented and tested our algorithm in a passive localization system with IEEE 802.15.4 signals, showing that our proposed solution largely outperforms a traditional bootstrap particle filter.
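A minimal sketch of a weighted-likelihood bootstrap particle filter of the kind described, in 2D with a log-distance RSS model. All numerical values (path-loss parameters, noise levels, per-anchor exponents, anchor layout) are invented for illustration, and the paper's exact weighting rule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_from_distance(d, p0=-40.0, n=2.5):
    """Log-distance path-loss model: RSS in dBm at distance d (m)."""
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1e-6))

def pf_step(particles, rss_meas, anchors, anchor_exp, sigma=4.0):
    """One predict/update/resample step of the bootstrap filter."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, 0.3, particles.shape)
    # Update: product of per-anchor Gaussian likelihoods, each raised
    # to an exponent encoding confidence in that anchor's ranging.
    loglik = np.zeros(len(particles))
    for (ax, ay), z, w in zip(anchors, rss_meas, anchor_exp):
        d = np.hypot(particles[:, 0] - ax, particles[:, 1] - ay)
        loglik += w * (-0.5 * ((z - rss_from_distance(d)) / sigma) ** 2)
    weights = np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # Resample (multinomial) to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
particles = rng.uniform(0.0, 10.0, (2000, 2))
for _ in range(20):
    meas = [rss_from_distance(np.hypot(*(true_pos - a))) + rng.normal(0.0, 1.0)
            for a in anchors]
    particles = pf_step(particles, meas, anchors, anchor_exp=[1.0, 1.0, 1.0])
estimate = particles.mean(axis=0)
```

In the real system the exponents would be set per anchor from the estimated ranging quality rather than fixed at 1.0; setting all exponents to 1 recovers the plain bootstrap filter the paper compares against.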