981 results for Particle Filter
Abstract:
We present three measurements of the top-quark mass in the lepton plus jets channel with approximately 1.9 fb-1 of integrated luminosity collected with the CDF II detector using quantities with minimal dependence on the jet energy scale. One measurement exploits the transverse decay length of b-tagged jets to determine a top-quark mass of 166.9+9.5-8.5 (stat) +/- 2.9 (syst) GeV/c2, and another the transverse momentum of electrons and muons from W-boson decays to determine a top-quark mass of 173.5+8.8-8.9 (stat) +/- 3.8 (syst) GeV/c2. These quantities are combined in a third, simultaneous mass measurement to determine a top-quark mass of 170.7 +/- 6.3 (stat) +/- 2.6 (syst) GeV/c2.
Abstract:
We report a set of measurements of particle production in inelastic p-pbar collisions collected with a minimum-bias trigger at the Tevatron Collider with the CDF II experiment. The inclusive charged particle transverse momentum differential cross section is measured, with improved precision, over a range about ten times wider than in previous measurements. The former modeling of the spectrum appears to be incompatible with the high particle momenta observed. The dependence of the charged particle transverse momentum on the event particle multiplicity is analyzed to study the various components of hadron interactions. This is one of the observable variables most poorly reproduced by the available Monte Carlo generators. A first measurement of the event transverse energy sum differential cross section is also reported. A comparison with a Pythia prediction at the hadron level is performed. The inclusive charged particle differential production cross section is fairly well reproduced only in the transverse momentum range available from previous measurements. At higher momentum the agreement is poor. The transverse energy sum is poorly reproduced over the whole spectrum. The dependence of the charged particle transverse momentum on the particle multiplicity requires the introduction of more sophisticated particle production mechanisms, such as multiple parton interactions, to be better explained.
Abstract:
The matched filter method for detecting a periodic structure on a surface hidden behind randomness is known to detect up to (r(0)/Lambda) >= 0.11, where r(0) is the coherence length of light on scattering from the rough part and Lambda is the wavelength of the periodic part of the surface; this limit is much lower than what is allowed by conventional detection methods. The primary goal of this technique is the detection and characterization of the periodic structure hidden behind randomness without the use of any complicated experimental or computational procedures. This paper examines this detection procedure for various values of the amplitude a of the periodic part, beginning from a = 0 up to small finite values of a. We thus address the importance of the following quantities in determining the detectability of the intensity peaks: (a/lambda), which scales the amplitude of the periodic part with the wavelength of light, and (r(0)/Lambda).
Abstract:
The role of oxide surface chemical composition and of the solvent in ion solvation and ion transport of ``soggy sand'' electrolytes is discussed here. A ``soggy sand'' electrolyte system comprising dispersions of hydrophilic/hydrophobic functionalized aerosil silica in a lithium perchlorate methoxy polyethylene glycol solution was employed for the study. Static and dynamic rheology measurements show formation of an attractive particle network in the case of the composite with unmodified aerosil silica (i.e., with surface silanol groups) as well as in composites with hydrophobic alkane groups. While the particle network in the composite with hydrophilic (unmodified) aerosil silica was due to hydrogen bonding, hydrophobic aerosil silica particles were held together via van der Waals forces. The network strength in the latter case (i.e., for hydrophobic composites) was weaker than in the composite with unmodified aerosil silica. Both unmodified silica and hydrophobic silica composites displayed solid-like mechanical strength. No enhancement in ionic conductivity compared to the liquid electrolyte was observed in the case of the unmodified silica. This was attributed to the existence of a very strong particle network, which led to the ``expulsion'' of all conducting entities from the interfacial region between adjacent particles. Composites with hydrophobic aerosil particles displayed ionic conductivity dependent on the size of the hydrophobic chemical moiety. No spanning attractive particle network was observed for aerosil particles with surfaces modified with stronger hydrophilic groups (than silanol). The composite resembled a sol, and no percolation in ionic conductivity was observed.
Abstract:
We present the result of a search for a massive color-octet vector particle (e.g., a massive gluon) decaying to a pair of top quarks in proton-antiproton collisions with a center-of-mass energy of 1.96 TeV. This search is based on 1.9 fb$^{-1}$ of data collected using the CDF detector during Run II of the Tevatron at Fermilab. We study $t\bar{t}$ events in the lepton+jets channel with at least one $b$-tagged jet. A massive gluon is characterized by its mass, decay width, and the strength of its coupling to quarks. These parameters are determined from the observed invariant mass distribution of top quark pairs. We set limits on the massive gluon coupling strength for masses between 400 and 800 GeV$/c^2$ and width-to-mass ratios between 0.05 and 0.50. The coupling strength of the hypothetical massive gluon to quarks is consistent with zero within the explored parameter space.
Abstract:
Atmospheric particles affect the radiation balance of the Earth and thus the climate. New particle formation from nucleation has been observed in diverse atmospheric conditions, but the actual formation path is still unknown. The prevailing conditions can be exploited to evaluate proposed formation mechanisms. This study aims to improve our understanding of new particle formation from the viewpoint of atmospheric conditions. The role of atmospheric conditions in particle formation was studied by atmospheric measurements, theoretical model simulations and simulations based on observations. Two separate column models were further developed for aerosol and chemical simulations. Model simulations allowed us to expand the study from local conditions to varying conditions in the atmospheric boundary layer, while the long-term measurements described especially the characteristic mean conditions associated with new particle formation. The observations show a statistically significant difference in meteorological and background aerosol conditions between observed event and non-event days. New particle formation above boreal forest is associated with strong convective activity, low humidity and a low condensation sink. The probability of a particle formation event is predicted by an equation formulated for upper boundary layer conditions. The model simulations call into question whether kinetic sulphuric acid induced nucleation is the primary particle formation mechanism in the presence of organic vapours. Simultaneously, the simulations show that ignoring spatial and temporal variation in new particle formation studies may lead to faulty conclusions. On the other hand, the theoretical simulations indicate that short-scale variations in temperature and humidity are unlikely to have a significant effect on the mean binary water sulphuric acid nucleation rate. The study emphasizes the significance of mixing and fluxes in particle formation studies, especially in the atmospheric boundary layer. The further developed models allow extensive aerosol physical and chemical studies in the future.
Abstract:
The problem of identifying the parameters of a beam-moving oscillator system, based on measured time histories of beam strains and displacements, is considered. The governing equations of motion here have time varying coefficients. The parameters to be identified are, however, time invariant and consist of the mass, stiffness and damping characteristics of the beam and oscillator subsystems. A strategy based on dynamic state estimation methods that employ particle filtering algorithms is proposed to tackle the identification problem. The method can take into account measurement noise, guideway unevenness, spatially incomplete measurements, finite element models for the supporting structure and moving vehicle, and imperfections in the formulation of the mathematical models. Numerical illustrations based on synthetic data for the beam-oscillator system are presented to demonstrate the satisfactory performance of the proposed procedure.
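The dynamic state estimation strategy in this abstract rests on particle filtering. As a minimal illustration of that idea (not the authors' implementation; a scalar state, Gaussian process and measurement noise, and multinomial resampling are all assumptions here), a bootstrap particle filter can be sketched as:

```python
import numpy as np

def bootstrap_particle_filter(y, f, h, q_std, r_std, x0, n_particles=500, rng=None):
    """Minimal bootstrap particle filter: propagate, weight, resample.

    y      : sequence of scalar measurements
    f, h   : state transition and measurement functions (vectorized)
    q_std  : process noise std, r_std : measurement noise std (assumed Gaussian)
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = np.full(n_particles, x0, dtype=float)
    estimates = []
    for yk in y:
        # Propagate each particle through the (possibly time-varying) dynamics.
        particles = f(particles) + rng.normal(0.0, q_std, n_particles)
        # Weight by the Gaussian measurement likelihood and normalize.
        w = np.exp(-0.5 * ((yk - h(particles)) / r_std) ** 2)
        w /= w.sum()
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(np.dot(w, particles))
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)
```

In the paper's setting the state would instead collect the unknown (time-invariant) mass, stiffness and damping parameters alongside the response variables; the sketch only shows the filtering recursion itself.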
Abstract:
This paper describes a method of automated segmentation of speech, assuming the signal is continuously time varying rather than following the traditional short-time stationary model. It has been shown that this representation gives comparable, if not marginally better, results than the other techniques for automated segmentation. A formulation of the 'Bach' (musical semitone) frequency scale filter-bank is proposed. A comparative study has been made of the performance using Mel, Bark and Bach scale filter banks under this model. The preliminary results show up to 80% matches within 20 ms of the manually segmented data, without any information about the content of the text and without any language dependence. 'Bach' filters are seen to marginally outperform the other filters.
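The abstract does not give the filter-bank design, but a semitonal ('Bach') scale implies center frequencies spaced by musical semitones, i.e. successive frequencies in the ratio 2^(1/12). A minimal sketch of that spacing (the base frequency f0 and the filter count are assumptions, not values from the paper):

```python
import numpy as np

def bach_center_frequencies(f0=110.0, n_filters=48):
    """Center frequencies spaced by musical semitones: f_k = f0 * 2**(k/12).

    Every 12th filter doubles in frequency (one octave), unlike the Mel and
    Bark scales, which follow psychoacoustic rather than musical spacing.
    """
    return f0 * 2.0 ** (np.arange(n_filters) / 12.0)
```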
Abstract:
Neural networks find application in many image denoising tasks because of inherent characteristics such as nonlinear mapping and self-adaptiveness. The design of filters largely depends on a priori knowledge about the type of noise; because of this, standard filters are application and image specific. Widely used filtering algorithms reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated general approach to designing a finite impulse response filter based on a principal component neural network (PCNN) is proposed in this study for image filtering, optimized in the sense of visual inspection and an error metric. The algorithm exploits the inter-pixel correlation by iteratively updating the filter coefficients using the PCNN, and performs optimal smoothing of the noisy image while preserving high and low frequency features. Evaluation results show that the proposed filter is robust under various noise distributions. Further, the number of unknown parameters is small, and most of them are adaptively obtained from the processed image.
Abstract:
This correspondence describes a method for automated segmentation of speech. The method uses a specially designed filter-bank, called the Bach filter-bank, which makes use of 'music'-related perception criteria. The speech signal is treated as a continuously time-varying signal, as against a short-time stationary model. A comparative study has been made of the performance using Mel, Bark and Bach scale filter banks. The preliminary results show up to 80% matches within 20 ms of the manually segmented data, without any information about the content of the text and without any language dependence. The Bach filters are seen to marginally outperform the other filters.
Abstract:
Image filtering techniques have potential applications in biomedical image processing, such as image restoration and image enhancement. The effectiveness of traditional filters largely depends on a priori knowledge about the type of noise corrupting the image, which makes standard filters application specific. For example, the well-known median filter and its variants can remove salt-and-pepper (or impulse) noise at low noise levels. Each of these methods has its own advantages and disadvantages. In this paper, we introduce a new finite impulse response (FIR) filter for image restoration in which the filter undergoes a learning procedure. The filter coefficients are adaptively updated based on correlated Hebbian learning. The algorithm exploits the inter-pixel correlation in the form of Hebbian learning and hence performs optimal smoothing of the noisy images. Applying the proposed filter to images corrupted with Gaussian noise results in restorations that are better in quality than those produced by average and Wiener filters. The restored image is found to be visually appealing and artifact-free.
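As a rough sketch of the general idea behind this abstract and the PCNN abstract above, i.e. adapting FIR filter taps through Hebbian learning on pixel neighbourhoods (this is not the paper's actual algorithm; the 3x3 kernel, Oja's normalized Hebbian rule, and the learning rate are all assumptions here):

```python
import numpy as np

def hebbian_fir_denoise(image, ksize=3, eta=1e-3, epochs=2):
    """Sketch: a smoothing FIR kernel whose taps adapt via Oja-style Hebbian
    learning on local patches, so directions of inter-pixel correlation
    (signal) come to dominate over uncorrelated noise."""
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode='reflect')
    w = np.ones(ksize * ksize) / (ksize * ksize)      # start from a mean filter
    h, wd = image.shape
    for _ in range(epochs):
        for i in range(h):
            for j in range(wd):
                x = padded[i:i + ksize, j:j + ksize].ravel()
                y = w @ x
                w += eta * y * (x - y * w)            # Oja's rule bounds ||w||
    w /= w.sum()                                      # unit-gain smoothing kernel
    # Apply the learned kernel as an ordinary FIR convolution.
    out = np.empty_like(image, dtype=float)
    for i in range(h):
        for j in range(wd):
            out[i, j] = w @ padded[i:i + ksize, j:j + ksize].ravel()
    return out
```

On an image corrupted with additive Gaussian noise, the learned kernel stays close to an averaging filter and reduces the noise variance, which is the qualitative behaviour the abstract describes.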
Abstract:
The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements, as well as in the postulated model for the structural behaviour, are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. 
The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
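For the linear-system case, the discrete Kalman filter mentioned above follows the standard predict/update recursion. A minimal textbook sketch of that recursion (the reliability-updating machinery and the level-crossing statistics of the paper are omitted; matrices A, H, Q, R are generic):

```python
import numpy as np

def kalman_filter(y, A, H, Q, R, x0, P0):
    """Discrete Kalman filter over a measurement sequence y.

    A, H : state transition and measurement matrices
    Q, R : process and measurement noise covariances
    Returns the sequence of posterior state means.
    """
    x, P = np.array(x0, dtype=float), np.array(P0, dtype=float)
    means = []
    for yk in y:
        # Predict step: propagate mean and covariance through the dynamics.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: fold in the new measurement via the Kalman gain.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(yk) - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        means.append(x.copy())
    return np.array(means)
```

In the paper, the filtered response statistics would then feed the level-crossing reliability estimate; this sketch stops at the state estimate itself.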
Abstract:
Perfectly hard particles are those which experience an infinite repulsive force when they overlap, and no force when they do not overlap. In the hard-particle model, the only static state is the isostatic state, where the forces between particles are statically determinate. In the flowing state, the interactions between particles are instantaneous because the time of contact approaches zero in the limit of infinite particle stiffness. Here, we discuss the development of a hard-particle model for a realistic granular flow down an inclined plane, and examine its utility for predicting the salient features both qualitatively and quantitatively. Using Discrete Element simulations, we first show that even very dense flows of sand or glass beads with volume fraction between 0.5 and 0.58 are in the rapid flow regime, due to the very high particle stiffness. An important length scale in the shear flow of inelastic particles is the 'conduction length' delta = d/(1 - e^2)^(1/2), where d is the particle diameter and e is the coefficient of restitution. When the macroscopic scale h (height of the flowing layer) is larger than the conduction length, the rates of shear production and inelastic dissipation are nearly equal in the bulk of the flow, while the rate of conduction of energy is O((delta/h)^2) smaller than the rate of dissipation of energy. Energy conduction is important in boundary layers of thickness delta at the top and bottom, and the flow in these boundary layers is examined using asymptotic analysis. We derive an exact relationship showing that a boundary layer solution exists only if the volume fraction in the bulk decreases as the angle of inclination is increased. In the opposite case, where the volume fraction increases as the angle of inclination is increased, there is no boundary layer solution.
The boundary layer theory also provides a way of understanding the cessation of flow at a given angle of inclination when the height of the layer is decreased below a value h(stop), which is a function of the angle of inclination. There is dissipation of energy due to particle collisions within the flow as well as due to particle collisions with the base, and the fraction of energy dissipated at the base increases as the thickness decreases. When the shear production in the flow cannot compensate for the additional energy drawn out of the flow by the wall collisions, the temperature decreases to zero and the flow stops. Scaling relations can be derived for h(stop) as a function of the angle of inclination.
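The conduction length defined in this abstract, delta = d/(1 - e^2)^(1/2), is straightforward to evaluate. A small helper (illustrative only; the numbers below are not from the paper) makes the scaling explicit: as the coefficient of restitution e approaches 1, collisions become nearly elastic, delta diverges, and energy conduction matters over an ever larger fraction of the layer.

```python
import math

def conduction_length(d, e):
    """Conduction length delta = d / sqrt(1 - e**2) for particle diameter d
    and coefficient of restitution e (0 <= e < 1)."""
    return d / math.sqrt(1.0 - e * e)
```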