964 results for threshold detector
Abstract:
This letter presents signal processing techniques to detect a passive thermal threshold detector based on a chipless time-domain ultrawideband (UWB) radio frequency identification (RFID) tag. The tag is composed of a UWB antenna connected to a transmission line, in turn loaded with a biomorphic thermal switch. The working principle consists of detecting the impedance change of the thermal switch, which occurs when the temperature exceeds a threshold. A UWB radar is used as the reader. The difference between the current time sample and a reference signal obtained by averaging previous samples is used to determine the switch transition and to mitigate interference from clutter reflections. A gain compensation function is applied to equalize the attenuation due to propagation loss. An improved method based on the continuous wavelet transform with a Morlet wavelet is used to overcome detection problems associated with a low signal-to-noise ratio at the receiver. The average delay profile is used to detect the tag delay. Experimental measurements at distances up to 5 m are presented.
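As an illustration of the processing chain the abstract describes (clutter removal by averaging earlier acquisitions, gain compensation of propagation loss, and a Morlet-based continuous wavelet transform), the following Python sketch shows one possible arrangement; the reference window length, the gain law and the wavelet scales are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def detect_tag_response(frames, fs, n_ref=20, scales=np.arange(2, 64)):
    """
    frames: 2D array (num_acquisitions, num_time_samples) of UWB radar returns.
    Returns the CWT magnitude of the background-subtracted, gain-compensated
    most recent acquisition; the peak over scales and time indicates the tag delay.
    """
    reference = frames[:n_ref].mean(axis=0)   # clutter estimate from earlier acquisitions
    residual = frames[-1] - reference         # highlights the thermal-switch transition

    t = np.arange(residual.size) / fs
    gain = (1.0 + t / t[-1]) ** 2             # assumed law compensating propagation loss
    residual = residual * gain

    # Continuous wavelet transform with a real Morlet-like wavelet, by direct convolution.
    cwt = np.empty((scales.size, residual.size))
    for i, s in enumerate(scales):
        n = min(int(10 * s), residual.size)
        x = (np.arange(n) - n / 2) / s
        wavelet = np.cos(5 * x) * np.exp(-x ** 2 / 2) / np.sqrt(s)
        cwt[i] = np.abs(np.convolve(residual, wavelet, mode="same"))
    return cwt
```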
Data acquisition system for a real-time thrust reverser noise detection application
Abstract:
Among all noise sources, the activation of reverse thrust to slow an aircraft after landing is regarded by airport authorities as a major cause of acoustic impact, annoyance and complaints in the communities near airports. For this reason, many airports around the world have restricted the use of reverse thrust, especially during night hours. One way to reduce the acoustic impact of airport operations is to implement effective tools for detecting reverse-thrust noise. Applying the TREND (Thrust Reverser Noise Detection) methodology, this final-year project aims to develop a software system capable of determining, in real time, whether an aircraft landing on the runway activates reverse-thrust braking.
The application is designed around a software model composed of two modules: • The acoustic signal acquisition module, which simulates an audio capture system. This module obtains stereo signal samples from “.WAV” audio files or from the capture system, conditions the audio samples and sends them to the next module. The capture system (a microphone array) is located close to the landing runway. • The processing module, which searches for detection events in the acoustic samples received from the acquisition module by applying the TREND methodology. The TREND methodology describes the search for two sound events, called event 1 (EV1) and event 2 (EV2). The first is triggered when an aircraft lands, discriminating against other sound events such as aircraft take-offs and background noise; the second occurs after event 1 only when the aircraft uses reverse thrust to brake. To detect event 1, signals unrelated to the landing are discriminated by filtering the captured signal, a sound pressure level threshold detector is then applied, and finally the direction of the sound source with respect to the capture system is determined. Detection of event 2 is based on thresholds applied to the temporal evolution of the acoustic power level using the inverse propagation model, aided by an estimate of the aircraft's distance at each time instant as it travels along the runway. For each detected landing a recording is made and stored in a dedicated folder, and all acquired data are logged by the software application in a text file.
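A minimal sketch of the EV1 sound pressure level threshold stage described above, assuming framed RMS levels converted to dB SPL; the frame length and the threshold value are illustrative assumptions, not values from the project.

```python
import numpy as np

P_REF = 20e-6          # reference pressure in Pa (standard for dB SPL)
FRAME_LEN = 4096       # assumed frame length in samples
SPL_THRESHOLD_DB = 85  # illustrative EV1 threshold, not the project's value

def frame_spl(signal_pa, frame_len=FRAME_LEN):
    """Compute the sound pressure level (dB SPL) of consecutive frames."""
    n_frames = len(signal_pa) // frame_len
    frames = signal_pa[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return 20 * np.log10(np.maximum(rms, 1e-12) / P_REF)

def detect_ev1(signal_pa, threshold_db=SPL_THRESHOLD_DB):
    """Return the indices of frames whose SPL exceeds the threshold."""
    spl = frame_spl(signal_pa)
    return np.nonzero(spl > threshold_db)[0]
```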
Abstract:
A search for new physics using three-lepton (trilepton) data collected with the CDF II detector and corresponding to an integrated luminosity of 976 pb−1 is presented. The standard model predicts a low rate of trilepton events, which makes some supersymmetric processes, such as chargino-neutralino production, measurable in this channel. The mu+mu+l signature is investigated, where l is an electron or a muon, with the additional requirement of large missing transverse energy. In this analysis, the lepton transverse momenta with respect to the beam direction (pT) are as low as 5 GeV/c, a selection that improves the sensitivity both to light particles and to those whose decays produce leptonically decaying tau leptons. At the same time, this low-pT selection presents additional challenges due to the non-negligible heavy-quark background at low lepton momenta. This background is measured with an innovative technique using experimental data. Several dimuon and trilepton control regions are investigated, and good agreement between experimental results and standard-model predictions is observed. In the signal region, we observe one three-muon event and expect 0.4 ± 0.1 mu+mu+l events.
Abstract:
The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, no present-day stand-alone Intrusion Detection System can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show a sensor fusion design technique that best utilizes the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of Intrusion Detection Systems to prove the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
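A minimal sketch of how a threshold bound could be set from the Chebyshev inequality, which bounds P(|X − μ| ≥ kσ) by 1/k² for any distribution; the score statistics and the target alarm rate below are assumptions, and the paper's exact fusion rule is not reproduced.

```python
import numpy as np

def chebyshev_threshold(scores, target_rate):
    """
    Upper threshold bound from the Chebyshev inequality:
    P(|X - mu| >= k*sigma) <= 1/k**2, so choosing k = 1/sqrt(target_rate)
    bounds the fraction of benign scores exceeding mu + k*sigma by target_rate,
    regardless of the score distribution.
    """
    mu, sigma = np.mean(scores), np.std(scores)
    k = 1.0 / np.sqrt(target_rate)
    return mu + k * sigma

# Illustrative use: fused anomaly scores observed on benign traffic (assumed data).
benign_scores = np.random.default_rng(0).normal(0.2, 0.05, 10_000)
threshold = chebyshev_threshold(benign_scores, target_rate=0.01)
alarms = benign_scores > threshold   # fires on at most ~1% of benign samples by the bound
```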
Abstract:
Evidence is reported for a narrow structure near the $J/\psi\,\phi$ threshold in exclusive $B^+\to J/\psi\,\phi K^+$ decays produced in $\bar{p}p$ collisions at $\sqrt{s}=1.96~\mathrm{TeV}$. A signal of $14\pm5$ events, with statistical significance in excess of 3.8 standard deviations, is observed in a data sample corresponding to an integrated luminosity of $2.7~\mathrm{fb}^{-1}$, collected by the CDF II detector. The mass and natural width of the structure are measured to be $4143.0\pm2.9(\mathrm{stat})\pm1.2(\mathrm{syst})~\mathrm{MeV}/c^2$ and $11.7^{+8.3}_{-5.0}(\mathrm{stat})\pm3.7(\mathrm{syst})~\mathrm{MeV}/c^2$.
Abstract:
The multi-hit three-layer delay-line anode (Hexanode) has an increased ability to detect multi-hit events in a collision experiment. Coupled with a pair of micro-channel plates, it can provide position information for the particles even if they arrive at the same time or within a small time interval. However, it suffers from ambiguous outputs and signal losses due to timing order, triggering thresholds, and other effects. We have developed a signal reconstruction program to correct such events. After the correction, dead time remains only when two particles arrive at the same time and at the same position within a much smaller range. With the combination of the Hexanode and the program, the experimental efficiency will be greatly improved for near-threshold double ionization in He collisions.
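For context, a typical delay-line reconstruction validates each layer with its time sum and can recover a single lost end signal from it; the following Python sketch illustrates that idea for one layer, with the propagation constant, time sum and tolerance as assumed placeholder values rather than the actual reconstruction program.

```python
# Per-layer delay-line reconstruction: the position along a wire layer is
# proportional to the difference of the arrival times at the two wire ends,
# while the time sum (relative to the MCP signal) is a constant that can be
# used to validate a hit or to recover one missing end signal.

V_PERP = 0.5        # assumed effective signal speed along the layer (mm/ns)
TIME_SUM = 80.0     # assumed characteristic time sum of the layer (ns)
TOLERANCE = 1.0     # assumed acceptance window around the time sum (ns)

def layer_coordinate(t1, t2, t_mcp):
    """Return the coordinate along one layer, or None if the hit is inconsistent
    or cannot be recovered from the available signals."""
    if t1 is not None and t2 is not None:
        if abs((t1 - t_mcp) + (t2 - t_mcp) - TIME_SUM) > TOLERANCE:
            return None                      # inconsistent pairing of signals
    elif t1 is not None:                     # recover the lost end-2 signal
        t2 = TIME_SUM - (t1 - t_mcp) + t_mcp
    elif t2 is not None:                     # recover the lost end-1 signal
        t1 = TIME_SUM - (t2 - t_mcp) + t_mcp
    else:
        return None                          # both ends lost: unrecoverable
    return 0.5 * V_PERP * (t1 - t2)
```

With three layers at 60° to one another, any two layers already determine the hit position, which is the redundancy a Hexanode reconstruction exploits for multi-hit events.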
Abstract:
In this study, the PTW 1000SRS array with the Octavius 4D phantom was characterised for FF and FFF beams. MU linearity, field size, dose rate, dose per pulse (DPP) response and dynamic conformal arc treatment accuracy of the 1000SRS array were assessed for 6 MV, 6 FFF and 10 FFF beams using a Varian TrueBeam STx linac. The measurements were compared with a PinPoint ionization chamber, a microDiamond detector and EBT3 Gafchromic film. Measured dose profiles and FWHMs were compared with film measurements. Verification of FFF volumetric modulated arc therapy (VMAT) clinical plans was assessed using gamma analysis with 3%/3 mm and 2%/2 mm tolerances (10% threshold). To assess the effect of the cross-calibration dose rate, clinical plans with different dose rates were delivered and analysed. Output factors agreed with film measurements to within 4.5% for fields between 0.5 and 1 cm and within 2.7% for field sizes between 1.5 and 10 cm, and were highly correlated with the microDiamond detector. Field sizes measured with the 1000SRS array were within 0.5 mm of film measurements. A drop in response of up to 1.8%, 2.4% and 5.2% for 6 MV, 6 FFF and 10 FFF beams respectively was observed with increasing nominal dose rate. With an increase in DPP, a drop of up to 1.7%, 2.4% and 4.2% was observed for 6 MV, 6 FFF and 10 FFF respectively. The differences in dose following dynamic conformal arc deliveries were less than 1% (all energies) from calculated values. Delivered VMAT plans showed average pass percentages of 99.5(±0.8)% and 98.4(±3.4)% with the 2%/2 mm criterion for 6 FFF and 10 FFF respectively. Drops to 97.7(±2.2)% and 88.4(±9.6)% were observed for 6 FFF and 10 FFF respectively when plans were delivered at the minimum dose rate and calibrated at the maximum dose rate. Calibration using a beam with the average dose rate of the plan may be an efficient way to overcome the dose rate effects observed with the 1000SRS array.
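As a reference for the plan verification step, the following Python sketch implements a simple global 1D gamma analysis with a dose-difference criterion, a distance-to-agreement criterion and a low-dose cutoff; it is a generic illustration of the gamma test, not the software used in the study.

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0,
                low_dose_cutoff=0.10):
    """
    Global 1D gamma analysis: for each reference point, search all evaluated
    points for the minimum combined dose-difference / distance-to-agreement
    metric. Points below the low-dose cutoff (fraction of max dose) are skipped.
    """
    ref_max = ref_dose.max()
    gammas = []
    for x_r, d_r in zip(positions, ref_dose):
        if d_r < low_dose_cutoff * ref_max:
            continue
        dose_term = (eval_dose - d_r) / (dose_tol * ref_max)   # global normalisation
        dist_term = (positions - x_r) / dist_tol_mm
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    gammas = np.array(gammas)
    pass_rate = 100.0 * np.mean(gammas <= 1.0)                 # percentage of points passing
    return gammas, pass_rate
```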
Abstract:
The photothermal deflection technique was used for determining the laser damage threshold of polymer samples of Teflon (PTFE) and nylon. The experiment was conducted using a Q-switched Nd:YAG laser operating at its fundamental wavelength (1.06 μm, pulse width 10 ns FWHM) as the irradiation source and a He-Ne laser as the probe beam, along with a position-sensitive detector. The damage threshold values determined by the photothermal deflection method were in good agreement with those determined by other methods.
Abstract:
A new approach is presented to identify the number of incoming signals in antenna array processing. The new method exploits the inherent properties of the noise eigenvalues of the covariance matrix of the array output. A single threshold is established that incorporates information about the signal and noise strength, the data length, and the array size. When subspace-based algorithms are adopted, the computational cost of the signal number detector is almost negligible. The performance of the threshold is robust against low SNR and short data lengths.
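A minimal Python sketch of eigenvalue-threshold source enumeration in the spirit of the abstract: the sample covariance of the array output is eigendecomposed and the eigenvalues exceeding a threshold placed over the noise floor are counted. The particular threshold rule shown (smallest eigenvalue scaled by a finite-sample margin) is an illustrative assumption, not the detector derived in the paper.

```python
import numpy as np

def estimate_num_signals(snapshots, factor=None):
    """
    snapshots: complex array of shape (num_sensors, num_snapshots) of array outputs.
    Counts the covariance eigenvalues that rise above a threshold placed over
    the noise eigenvalues.
    """
    m, n = snapshots.shape
    cov = snapshots @ snapshots.conj().T / n           # sample covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # real eigenvalues, descending
    if factor is None:
        factor = 1.0 + 4.0 * np.sqrt(m / n)            # assumed finite-sample margin
    threshold = factor * eigvals[-1]                   # noise floor from smallest eigenvalue
    return int(np.sum(eigvals > threshold))
```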
Abstract:
The upgrade of the Mainz Mikrotron (MAMI) electron accelerator facility in 2007, which raised the beam energy up to 1.5 GeV, gives the opportunity to study strangeness production channels through electromagnetic processes. The Kaon Spectrometer (KAOS), managed by the A1 Collaboration, enables the efficient detection of the kaons associated with strangeness electroproduction. Used as a single-arm spectrometer, it can be combined with the existing high-resolution spectrometers for exclusive measurements in the kinematic domain accessible to them.

For studying hypernuclear production in the $^{A}Z(e,e'K^{+})\,^{A}_{\Lambda}(Z-1)$ reaction, the detection of electrons at very forward angles is needed. Therefore, the use of KAOS as a double-arm spectrometer for the simultaneous detection of kaons and electrons is mandatory. The electron arm thus had to be equipped with a new detector package offering high counting-rate capability and high granularity for good spatial resolution. To this end, a new state-of-the-art scintillating fiber hodoscope has been developed as the electron detector.

The hodoscope is made of two planes with a total of 18432 scintillating double-clad fibers of 0.83 mm diameter. Each plane is formed by 72 modules, and each module is formed from a 60° slanted multi-layer bundle in which 4 fibers of a tilted column are connected to a common readout. The readout uses 32 channels of linear-array multianode photomultipliers. Signal processing makes use of newly developed double-threshold discriminators. The discriminated signal is sent in parallel to dead-time-free time-to-digital modules and to logic modules for triggering purposes.

Two fiber modules were tested with a carbon beam at GSI, showing a time resolution of 220 ps (FWHM) and a position residual of 270 µm (FWHM) with a detection efficiency ε > 99%.

The characterization of the spectrometer arm has been achieved through simulations calculating the transfer matrix of track parameters from the fiber detector focal plane to the primary vertex. This transfer matrix has been calculated to first order using beam transport optics and has been checked by quasielastic scattering off a carbon target, where the full kinematics is determined by measuring the recoil proton momentum. The reconstruction accuracy for the emission parameters at the quasielastic vertex was found to be on the order of 0.3% in the first tests performed.

The design, construction, commissioning, testing and characterization of the fiber hodoscope are presented in this work, which has been carried out at the Institut für Kernphysik of the Johannes Gutenberg-Universität Mainz.
Abstract:
This Letter presents a search for quantum black-hole production using 20.3 fb⁻¹ of data collected with the ATLAS detector in pp collisions at the LHC at √s = 8 TeV. The quantum black holes are assumed to decay into a final state characterized by a lepton (electron or muon) and a jet. In either channel, no event with a lepton-jet invariant mass of 3.5 TeV or more is observed, consistent with the expected background. Limits are set on the product of cross sections and branching fractions for the lepton + jet final states of quantum black holes produced in a search region for invariant masses above 1 TeV. The combined 95% confidence level upper limit on this product for quantum black holes with threshold mass above 3.5 TeV is 0.18 fb. This limit constrains the threshold quantum black-hole mass to be above 5.3 TeV in the model considered.
Abstract:
A search for an excess of events with multiple high transverse momentum objects including charged leptons and jets is presented, using 20.3 fb⁻¹ of proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider in 2012 at a centre-of-mass energy of √s = 8 TeV. No excess of events beyond Standard Model expectations is observed. Using extra-dimensional models for black hole and string ball production and decay, exclusion contours are determined as a function of the mass threshold for production and the fundamental gravity scale for two, four and six extra dimensions. For six extra dimensions, mass thresholds of 4.8–6.2 TeV are excluded at 95% confidence level, depending on the fundamental gravity scale and model assumptions. Upper limits on the fiducial cross-sections for non-Standard-Model production of these final states are set.
Abstract:
A novel approach is presented whereby gold nanostructured screen-printed carbon electrodes (SPCnAuEs) are combined with in-situ ionic liquid formation dispersive liquid–liquid microextraction (in-situ IL-DLLME) and microvolume back-extraction for the determination of mercury in water samples. In-situ IL-DLLME is based on a simple metathesis reaction between a water-miscible IL and a salt, which forms a water-immiscible IL in the sample solution. The mercury complex with ammonium pyrrolidinedithiocarbamate is extracted from the sample solution into the water-immiscible IL formed in situ. An ultrasound-assisted procedure is then employed to back-extract the mercury into 10 µL of a 4 M HCl aqueous solution, which is finally analyzed using SPCnAuEs. The sample preparation methodology was optimized using a multivariate optimization strategy. Under optimized conditions, a linear range between 0.5 and 10 µg L⁻¹ was obtained with a correlation coefficient of 0.997 for six calibration points. The limit of detection was 0.2 µg L⁻¹, which is lower than the threshold values established by the Environmental Protection Agency and the European Union (2 µg L⁻¹ and 1 µg L⁻¹, respectively). The repeatability of the proposed method was evaluated at two spiking levels (3 and 10 µg L⁻¹), and a coefficient of variation of 13% was obtained in both cases. The performance of the proposed methodology was evaluated in real-world water samples including tap water, bottled water, river water and industrial wastewater. Relative recoveries between 95% and 108% were obtained.
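For illustration of the calibration figures quoted above, the following Python sketch fits a least-squares calibration line over the stated 0.5–10 µg L⁻¹ range and computes a detection limit with the common 3·(blank standard deviation)/slope convention; the signal values and blank noise are invented placeholders, not data from the paper.

```python
import numpy as np

# Illustrative calibration data over the stated 0.5-10 µg/L range; the signal
# values are invented for the sketch, not taken from the paper.
conc = np.array([0.5, 1, 2, 4, 7, 10])                   # µg/L
signal = np.array([0.9, 1.8, 3.5, 7.2, 12.4, 17.9])      # e.g. stripping peak current, a.u.

slope, intercept = np.polyfit(conc, signal, 1)            # least-squares calibration line
r = np.corrcoef(conc, signal)[0, 1]                       # correlation coefficient of the fit

# One common convention (an assumption here): LOD = 3 * (std of blank signal) / slope.
blank_std = 0.1                                            # assumed blank noise, signal units
lod = 3 * blank_std / slope

def relative_recovery(found, spiked):
    """Relative recovery in percent for a spiked sample."""
    return 100.0 * found / spiked
```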
Abstract:
The volatile chemicals which comprise the odor of the illicit drug cocaine have been analyzed by adsorption onto activated charcoal followed by solvent elution and GC/MS analysis. A series of field tests has been performed to determine the dominant odor compound to which dogs alert. All of our data to date indicate that the dominant odor is due to the presence of methyl benzoate which is associated with the cocaine, rather than the cocaine itself. When methyl benzoate and cocaine are spiked onto U.S. currency, the threshold level of methyl benzoate required for a canine to signal an alert is typically 1–10 µg. Humans have been shown to have a sensitivity similar to dogs for methyl benzoate but with poorer selectivity/reliability. The dominant decomposition pathway for cocaine has been evaluated at elevated temperatures (up to 280 °C). Benzoic acid, but no detectable methyl benzoate, is formed. Solvent extraction and SFE were used to study the recovery of cocaine from U.S. currency. The amount of cocaine which could be recovered was found to decrease with time.