997 results for Line segment detector
Abstract:
A search for a sidereal modulation in the MINOS near detector neutrino data was performed. If present, this signature could be a consequence of Lorentz and CPT violation as predicted by the effective field theory called the standard-model extension. No evidence for a sidereal signal in the data set was found, implying that there is no significant change in neutrino propagation that depends on the direction of the neutrino beam in a sun-centered inertial frame. Upper limits on the magnitudes of the Lorentz and CPT violating terms in the standard-model extension lie between $10^{-4}$ and $10^{-2}$ of the maximum expected, assuming a suppression of these signatures by a factor of $10^{-17}$.
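As an illustration of the kind of analysis described above, here is a minimal sketch, not the MINOS code, of searching for a sidereal modulation by fitting sidereal harmonics to a binned event rate; the bin count, rate model, and toy data are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

SIDEREAL_DAY_S = 86164.1  # length of one sidereal day in seconds
OMEGA = 2 * np.pi / SIDEREAL_DAY_S

def harmonic_model(t, p):
    """Event-rate model: constant plus first and second sidereal harmonics."""
    c0, a1, b1, a2, b2 = p
    return (c0 + a1 * np.sin(OMEGA * t) + b1 * np.cos(OMEGA * t)
               + a2 * np.sin(2 * OMEGA * t) + b2 * np.cos(2 * OMEGA * t))

def chi2(p, t, rate, err):
    return np.sum(((rate - harmonic_model(t, p)) / err) ** 2)

# Toy data: a flat rate with Gaussian fluctuations and no injected signal.
rng = np.random.default_rng(0)
t = np.linspace(0, SIDEREAL_DAY_S, 24, endpoint=False)  # 24 sidereal-phase bins
rate = 100 + rng.normal(0, 10, size=t.size)
err = np.full(t.size, 10.0)

fit = minimize(chi2, x0=[rate.mean(), 0, 0, 0, 0], args=(t, rate, err))
flat = chi2([rate.mean(), 0, 0, 0, 0], t, rate, err)
# A large improvement over the flat fit would hint at a sidereal signal.
print(f"Delta chi2 (flat - harmonics) = {flat - fit.fun:.2f}")
```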
Abstract:
We investigate a neutrino mass model in which the neutrino data are accounted for by bilinear R-parity violating supersymmetry with anomaly mediated supersymmetry breaking. We focus on the CERN Large Hadron Collider (LHC) phenomenology, studying the reach of generic supersymmetry search channels with leptons, missing energy, and jets. A special feature of this model is the existence of long-lived neutralinos and charginos which decay inside the detector, leading to detached vertices. We demonstrate that the largest reach is obtained in the displaced vertices channel and that practically all of the reasonable parameter space will be covered with an integrated luminosity of $10\ \mathrm{fb}^{-1}$. We also compare the displaced vertex reaches of the LHC and Tevatron.
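To make the "decay inside the detector" criterion concrete, here is a back-of-the-envelope sketch; the mass, momentum, proper decay length, and tracker radius are invented numbers, not values from the paper.

```python
import math

# Illustrative numbers only (not from the paper): the lab-frame decay length
# of a long-lived particle is L = beta * gamma * c * tau = (p / m) * c * tau.
C_TAU_CM = 1.0    # assumed proper decay length c*tau in cm
MASS_GEV = 100.0  # assumed particle mass in GeV
P_GEV = 300.0     # assumed lab-frame momentum in GeV

mean_L = (P_GEV / MASS_GEV) * C_TAU_CM  # mean lab decay length in cm

# Fraction of decays inside a tracking volume of radius R, from the
# exponential decay law P(decay < R) = 1 - exp(-R / L).
R_CM = 30.0
frac_inside = 1.0 - math.exp(-R_CM / mean_L)
print(f"mean decay length = {mean_L:.1f} cm, "
      f"fraction decaying within {R_CM:.0f} cm = {frac_inside:.3f}")
```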
Abstract:
Yields, correlation shapes, and mean transverse momenta $p_T$ of charged particles associated with intermediate- to high-$p_T$ trigger particles ($2.5 < p_T < 10\ \mathrm{GeV}/c$) in d + Au and Au + Au collisions at $\sqrt{s_{NN}} = 200$ GeV are presented. For associated particles at higher $p_T \gtrsim 2.5\ \mathrm{GeV}/c$, narrow correlation peaks are seen in d + Au and Au + Au, indicating that the main production mechanism is jet fragmentation. At lower associated particle $p_T < 2\ \mathrm{GeV}/c$, a large enhancement of the near-side ($\Delta\phi \sim 0$) and away-side ($\Delta\phi \sim \pi$) associated yields is found, together with a strong broadening of the away-side azimuthal distributions in Au + Au collisions compared to d + Au measurements, suggesting that other particle production mechanisms play a role. This is further supported by the observed significant softening of the away-side associated particle yield distribution at $\Delta\phi \sim \pi$ in central Au + Au collisions.
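The correlation observable used above can be illustrated with a toy two-particle $\Delta\phi$ analysis; this is a generic sketch with invented event content, not the STAR analysis code.

```python
import numpy as np

# Minimal sketch: build the Delta-phi correlation between "trigger" and
# "associated" particles from toy (pt, phi) lists for one event.
rng = np.random.default_rng(1)

# Toy event: uniform-phi background plus a crude back-to-back dijet.
phi_jet = rng.uniform(0, 2 * np.pi)
phi = np.concatenate([
    rng.uniform(0, 2 * np.pi, 200),       # underlying event
    rng.normal(phi_jet, 0.2, 20),         # near-side jet fragments
    rng.normal(phi_jet + np.pi, 0.4, 20), # away-side jet fragments
]) % (2 * np.pi)
pt = np.concatenate([rng.exponential(0.5, 200),
                     rng.exponential(2.0, 20),
                     rng.exponential(2.0, 20)])

trig = phi[pt > 2.5]                    # trigger particles: high pt
assoc = phi[(pt > 1.0) & (pt <= 2.5)]   # associated particles

# All trigger-associated pairs, folded into (-pi/2, 3pi/2) as is conventional.
dphi = (assoc[None, :] - trig[:, None] + np.pi / 2) % (2 * np.pi) - np.pi / 2
hist, edges = np.histogram(dphi.ravel(), bins=24, range=(-np.pi / 2, 3 * np.pi / 2))
print(hist)
```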
Abstract:
We present a measurement of $\pi^+\pi^-\pi^+\pi^-$ photonuclear production in ultraperipheral Au-Au collisions at $\sqrt{s_{NN}} = 200$ GeV from the STAR experiment. The $\pi^+\pi^-\pi^+\pi^-$ final states are observed at low transverse momentum and are accompanied by mutual nuclear excitation of the beam particles. The strong enhancement of the production cross section at low transverse momentum is consistent with coherent photoproduction. The $\pi^+\pi^-\pi^+\pi^-$ invariant mass spectrum of the coherent events exhibits a broad peak around $1540 \pm 40\ \mathrm{MeV}/c^2$ with a width of $570 \pm 60\ \mathrm{MeV}/c^2$, in agreement with the photoproduction data for the $\rho^0(1700)$. We do not observe a corresponding peak in the $\pi^+\pi^-$ final state and measure an upper limit for the ratio of the branching fractions of the $\rho^0(1700)$ to $\pi^+\pi^-$ and $\pi^+\pi^-\pi^+\pi^-$ of 2.5% at 90% confidence level. The ratio of $\rho^0(1700)$ and $\rho^0(770)$ coherent production cross sections is measured to be $(13.4 \pm 0.8\,\mathrm{(stat.)} \pm 4.4\,\mathrm{(syst.)})\%$.
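For reference, the invariant mass plotted in such a spectrum is computed from the track momenta, assigning the pion mass to each track; below is a generic sketch with made-up momenta.

```python
import numpy as np

PION_MASS = 0.13957  # charged pion mass in GeV/c^2

def invariant_mass(momenta):
    """Invariant mass of a set of particles from their 3-momenta (GeV/c),
    assuming each is a charged pion. Returns the mass in GeV/c^2."""
    momenta = np.asarray(momenta)
    E = np.sqrt(np.sum(momenta**2, axis=1) + PION_MASS**2)  # on-shell energies
    E_tot = E.sum()
    p_tot = momenta.sum(axis=0)
    return np.sqrt(E_tot**2 - np.dot(p_tot, p_tot))

# Toy four-track candidate, (px, py, pz) in GeV/c -- values are invented.
tracks = [(0.30, 0.10, 0.50), (-0.25, 0.05, 0.45),
          (0.15, -0.20, 0.40), (-0.18, 0.07, 0.35)]
print(f"m(4pi) = {invariant_mass(tracks):.3f} GeV/c^2")
```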
Sensitivity to noise and ergodicity of an assembly line of cellular automata that classifies density
Abstract:
We investigate the sensitivity of the composite cellular automaton of H. Fuks [Phys. Rev. E 55, R2081 (1997)] to noise and assess the density classification performance of the resulting probabilistic cellular automaton (PCA) numerically. We conclude that the composite PCA performs the density classification task reliably only up to very small levels of noise. In particular, it cannot outperform the noisy Gacs-Kurdyumov-Levin automaton, an imperfect classifier, for any level of noise. While the original composite CA is nonergodic, analyses of relaxation times indicate that its noisy version is an ergodic automaton, with the relaxation times decaying algebraically over an extended range of parameters with an exponent very close (possibly equal) to the mean-field value.
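A minimal sketch of the noisy Gacs-Kurdyumov-Levin classifier mentioned above (this is not the authors' code, and the lattice size, noise level, and number of steps are arbitrary choices):

```python
import numpy as np

def gkl_step(s, noise, rng):
    """One synchronous update of the Gacs-Kurdyumov-Levin rule on a ring,
    followed by independent state flips with probability `noise`."""
    n = s.size
    idx = np.arange(n)
    left1, left3 = s[(idx - 1) % n], s[(idx - 3) % n]
    right1, right3 = s[(idx + 1) % n], s[(idx + 3) % n]
    maj0 = (s + left1 + left3) >= 2     # majority of {self, left-1, left-3}
    maj1 = (s + right1 + right3) >= 2   # majority of {self, right+1, right+3}
    new = np.where(s == 0, maj0, maj1).astype(np.int8)
    flips = rng.random(n) < noise       # noise: flip each cell independently
    return new ^ flips

rng = np.random.default_rng(42)
N, T, NOISE = 149, 300, 0.01  # ring size, steps, flip probability (illustrative)
s = (rng.random(N) < 0.55).astype(np.int8)  # initial density 0.55 -> should converge to all 1s
for _ in range(T):
    s = gkl_step(s, NOISE, rng)
print("final density:", s.mean())
```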
Abstract:
The contribution of the detector dynamics to the weak measurement is analyzed. According to the usual theory [Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988)], the outcome of a weak measurement with preselection and postselection can be expressed as the real part of a complex number: the weak value. By accounting for the Hamiltonian evolution of the detector, here we find that there is a contribution to the outcome of the weak measurement proportional to the imaginary part of the weak value. This contribution arises because the coherence of the probe is essential for the concept of a complex weak value to be meaningful. As a particular example, we consider the measurement of a spin component and find that the contribution of the imaginary part of the weak value is sizable.
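For context, the standard weak-value definition and the lowest-order pointer shifts (textbook weak-coupling results, not the paper's specific detector-dynamics correction) are:

```latex
% Weak value as defined by Aharonov, Albert, and Vaidman:
A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle},
\qquad |\psi\rangle \text{ preselected}, \quad |\phi\rangle \text{ postselected}.
% To lowest order in the coupling g, the pointer position and momentum shift as
\delta q \propto g \,\operatorname{Re} A_w,
\qquad
\delta p \propto g \,\operatorname{Im} A_w,
% so the imaginary part enters through the conjugate pointer variable,
% which is why the probe's coherence matters for the measured outcome.
```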
Abstract:
Data collected at the Pierre Auger Observatory are used to establish an upper limit on the diffuse flux of tau neutrinos in the cosmic radiation. Earth-skimming $\nu_\tau$ may interact in the Earth's crust and produce a tau lepton by means of charged-current interactions. The tau lepton may emerge from the Earth and decay in the atmosphere to produce a nearly horizontal shower with a typical signature, a persistent electromagnetic component even at very large atmospheric depths. The search procedure to select events induced by tau decays against the background of normal showers induced by cosmic rays is described. The method used to compute the exposure for a detector continuously growing with time is detailed. Systematic uncertainties in the exposure from the detector, the analysis, and the involved physics are discussed. No tau neutrino candidates have been found. For neutrinos in the energy range $2\times10^{17}\ \mathrm{eV} < E_\nu < 2\times10^{19}\ \mathrm{eV}$, assuming a diffuse spectrum of the form $E_\nu^{-2}$, data collected between 1 January 2004 and 30 April 2008 yield a 90% confidence-level upper limit of $E_\nu^2\, \mathrm{d}N_{\nu_\tau}/\mathrm{d}E_\nu < 9\times10^{-8}\ \mathrm{GeV\,cm^{-2}\,s^{-1}\,sr^{-1}}$.
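The zero-candidate limit quoted above can be illustrated schematically: with no observed events and negligible background, the Feldman-Cousins 90% CL upper limit is 2.44 expected events, and the flux normalization follows from dividing by an exposure integral. The exposure shape below is invented purely for illustration.

```python
import numpy as np

# Illustrative limit-setting sketch (not the Auger analysis code). For an
# assumed flux dN/dE = k * E^-2, the expected event count is
# N = k * integral( E^-2 * Exposure(E) dE ), so k < 2.44 / integral at 90% CL.
FC90_ZERO_EVENTS = 2.44

def toy_exposure(E):
    """Stand-in exposure in cm^2 s sr as a function of energy in GeV.
    The shape and normalization here are invented for illustration."""
    return 1e16 * (E / 1e9) ** 0.5

E = np.logspace(np.log10(2e8), np.log10(2e10), 500)  # GeV (2x10^17 - 2x10^19 eV)
integrand = E ** (-2) * toy_exposure(E)
integral = np.trapz(integrand, E)  # expected events per unit flux normalization

k_limit = FC90_ZERO_EVENTS / integral  # limit on E^2 dN/dE in GeV cm^-2 s^-1 sr^-1
print(f"90% CL limit on E^2 dN/dE: {k_limit:.2e} GeV cm^-2 s^-1 sr^-1")
```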
Abstract:
We report the discovery with XMM-Newton of hard thermal ($T \sim 130$ MK), variable X-ray emission from the Be star HD 157832, a new member of the puzzling class of gamma-Cas-like Be/X-ray systems. Recent optical spectroscopy reveals the presence of a large, dense circumstellar disk seen at intermediate to high inclination. With a B1.5V spectral type, HD 157832 is the coolest gamma-Cas analog known. In addition, its non-detection in the ROSAT all-sky survey shows that its average soft X-ray luminosity varied by a factor larger than $\sim 3$ over a time interval of 14 yr. These two remarkable features, a "low" effective temperature and likely high X-ray variability, make HD 157832 a promising object for understanding the origin of the unusually high-temperature X-ray emission in these systems.
Abstract:
This work describes the coupling of a biomimetic sensor to a flow injection system for the sensitive determination of paracetamol. The sensor was prepared as previously described in the literature (M. D. P. T. Sotomayor, A. Sigoli, M. R. V. Lanza, A. A. Tanaka and L. T. Kubota, J. Braz. Chem. Soc., 2008, 19, 734) by modifying a glassy carbon electrode surface with a Nafion® membrane doped with iron tetrapyridinoporphyrazine (FeTPyPz), a biomimetic catalyst of the P450 enzyme. The performance of the sensor for paracetamol detection was investigated and optimized in a flow injection analysis (FIA) system using a wall-jet electrochemical cell. Under optimized conditions a wide linear response range ($1.0\times10^{-5}$ to $5.0\times10^{-2}$ mol L$^{-1}$) was obtained, with a sensitivity of $2579 \pm 129\ \mu$A L $\mu$mol$^{-1}$. The detection and quantification limits of the sensor for paracetamol in the FIA system were 1.0 and 3.5 $\mu$mol L$^{-1}$, respectively. The analytical frequency was 51 samples h$^{-1}$, and over a period of five days (320 determinations) the biosensor maintained practically the same response. The system was successfully applied to paracetamol quantification in seven pharmaceutical formulations and in water samples from six rivers in São Paulo State, Brazil.
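As a generic illustration of how detection and quantification limits like those above are typically obtained (the common 3σ/slope and 10σ/slope conventions, not necessarily the authors' exact procedure; the calibration points are invented):

```python
import numpy as np

# Generic calibration sketch: fit a straight line to current vs. concentration
# and estimate limits of detection/quantification from the residual scatter.
conc = np.array([10, 50, 100, 500, 1000.0])      # micromol/L, invented points
current = np.array([26, 130, 255, 1290, 2580.0]) # microamps, invented points

slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # residual standard deviation of the fit

lod = 3 * sigma / slope    # limit of detection
loq = 10 * sigma / slope   # limit of quantification
print(f"sensitivity = {slope:.2f} uA L/umol, "
      f"LOD = {lod:.2f} umol/L, LOQ = {loq:.2f} umol/L")
```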
Abstract:
A multi-pumping flow system exploiting prior assay is proposed for the sequential turbidimetric determination of sulphate and chloride in natural waters. Both methods are implemented in the same manifold, which provides facilities for: in-line sample clean-up with a Bio-Rex 70 mini-column with fluidized beads; addition of low amounts of sulphate or chloride ions to the reaction medium for improving supersaturation; analyte precipitation with Ba$^{2+}$ or Ag$^{+}$; and real-time decision on the need for the next assay. The sample is initially run for chloride determination, and the analytical signal is compared with a preset value. If higher, the sample is run again, now for sulphate determination. The strategy may lead to an increased sample throughput. The proposed system is computer-controlled and presents enhanced figures of merit. About 10 samples are run per hour (about 60 measurements), and results are reproducible and unaffected by the presence of potentially interfering ions at concentration levels usually found in natural waters. Accuracy was assessed against ion chromatography. (C) 2008 Elsevier B.V. All rights reserved.
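The real-time "prior assay" decision step lends itself to a short sketch; the threshold value and function names below are hypothetical stand-ins for the instrument control logic.

```python
# Sketch of the decision logic described above: run chloride first, and
# trigger the sulphate assay only when the chloride signal exceeds a preset
# value. The threshold and measurement callables are invented.
CHLORIDE_THRESHOLD = 50.0  # preset decision value, arbitrary units

def run_sample(sample, measure_chloride, measure_sulphate):
    """Run the chloride assay; run the sulphate assay only if needed."""
    results = {"chloride": measure_chloride(sample)}
    if results["chloride"] > CHLORIDE_THRESHOLD:
        results["sulphate"] = measure_sulphate(sample)
    return results

# Toy stand-ins for the detector readings.
print(run_sample("river-1", lambda s: 72.0, lambda s: 18.5))
print(run_sample("river-2", lambda s: 31.0, lambda s: 0.0))
```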
Abstract:
This investigation presents a comprehensive characterization of the magnetic and transport properties of an interesting superconducting wire, Nb-Ti-Ta, obtained through solid-state diffusion between a Nb-12 at.% Ta alloy and pure Ti. The physical properties obtained from magnetic and transport measurements, related to the microstructure, unambiguously confirmed a previous proposition that the superconducting currents flow in the center of the diffusion layer, which has a steep composition variation. The determination of the critical field also confirmed that the flux-line core size is not constant; in addition, it was possible to determine that, in the center of the layer, the flux-line core is smaller than at the borders. A possible core shape design is proposed. Among the wires studied, the best critical current density was achieved for a diffusion layer with a composition of about Nb-32% Ti-10% Ta, obtained with a heat treatment at 700 °C for 120 h, in agreement with previous studies. This wire also has the highest upper critical field, indicating that the optimization of the superconducting behavior is related to an intrinsic property of the ternary alloy.
Abstract:
A modular superconducting fault current limiter (SFCL) consisting of 16 elements was constructed and tested in a 220 V line for fault currents between 1 kA and 7.4 kA. The elements are made of second-generation (2G) YBCO-coated conductor tapes with stainless steel reinforcement. For each element, four tapes were electrically connected in parallel, with an effective length of 0.4 m per element, and the 16 elements were connected in series. The performance of the SFCL was evaluated under DC and AC tests. The DC test was performed through pulsed current tests, and the recovery characteristics under load current were analysed by changing the shunt resistor value. The AC test, performed using a 3 MVA/220 V/60 Hz transformer, showed that the current-limiting ratio reached a factor higher than 10 during faults of up to five cycles without conductor degradation. The measurement of the voltage across each element during the AC test showed that in this modular SFCL the quench is homogeneous and the transition occurs similarly in all the elements.
Abstract:
The objective of this research is to identify the benefits of ergonomic improvements in workstations and in planned parts supply in an automotive assembly line. Another aim is to verify to what extent it is possible to create competitive advantages in the manufacturing area, with a reduction in vehicle assembly time, by using technological investments in ergonomics with benefits to the worker and to the company. The Methods-Time Measurement (MTM) methodology is chosen to measure the process time differences. To ensure a reliable comparison, a company in Brazil that has two different types of assembly line installations in the same plant was observed, and both assembly lines were under the same influences in terms of human resources, wages, food, and educational level of the staff. In this article, the first assembly line is called "new" and was built 6 years ago, with high investments in ergonomic solutions, in the supply system, and in the process. The other is called "traditional" and was built 23 years ago with few investments in the area. (C) 2010 Wiley Periodicals, Inc.
Abstract:
In this paper, a supervisor system able to diagnose different types of faults during the operation of a proton exchange membrane fuel cell is introduced. The diagnosis is developed by applying Bayesian networks, which qualify and quantify the cause-effect relationships among the variables of the process. The fault diagnosis is based on the on-line monitoring of variables that are easy to measure in the machine, such as voltage, electric current, and temperature. The equipment is a fuel cell system that can operate even when a fault occurs. The fault effects are based on experiments on the fault-tolerant fuel cell, which are reproduced in a fuel cell model. A database of fault records is constructed from the fuel cell model, improving the generation time and avoiding permanent damage to the equipment. (C) 2007 Elsevier B.V. All rights reserved.
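As a toy illustration of Bayesian-network-based fault diagnosis (the structure and probabilities below are invented, and the paper's network is certainly richer): a single fault node with conditionally independent observable symptoms can be inferred by enumeration.

```python
# Minimal Bayesian-network sketch: one fault node, three monitored symptoms,
# exact computation of P(fault | observations). All numbers are hypothetical.
P_FAULT = 0.05  # prior probability of the fault

# P(symptom abnormal | fault), P(symptom abnormal | no fault)
CPT = {
    "low_voltage":  (0.90, 0.05),
    "high_current": (0.60, 0.10),
    "high_temp":    (0.70, 0.08),
}

def posterior(observations):
    """P(fault | observed symptom states); symptoms are assumed conditionally
    independent given the fault (a naive-Bayes-like network structure)."""
    like_f, like_nf = P_FAULT, 1.0 - P_FAULT
    for name, abnormal in observations.items():
        p_f, p_nf = CPT[name]
        like_f *= p_f if abnormal else 1.0 - p_f
        like_nf *= p_nf if abnormal else 1.0 - p_nf
    return like_f / (like_f + like_nf)

print(posterior({"low_voltage": True, "high_temp": True, "high_current": False}))
```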
Abstract:
Simulated annealing (SA) is an optimization technique that can handle cost functions with arbitrary degrees of nonlinearity, discontinuity, and stochasticity, as well as arbitrary boundary conditions and constraints imposed on these cost functions. The SA technique is applied to the problem of robot path planning. Three situations are considered here: the path is represented as a polyline, as a Bézier curve, or as a spline-interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate. (C) 2010 Elsevier Ltd. All rights reserved.
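A generic SA path-planning sketch over a polyline follows. This is a plain Metropolis-style annealer with a fixed proposal width, not the paper's sensitivity-adapted algorithm, and the obstacle and cooling schedule are invented.

```python
import math
import random

random.seed(0)

def path_cost(path):
    """Toy cost: path length plus a penalty near a circular obstacle at (0.5, 0.5)."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cost += math.hypot(x1 - x0, y1 - y0)
    for x, y in path:
        if math.hypot(x - 0.5, y - 0.5) < 0.2:
            cost += 10.0
    return cost

def anneal(path, steps=20000, t0=1.0, alpha=0.9995):
    cur = [list(p) for p in path]
    cost = path_cost(cur)
    temp = t0
    step = 0.05  # fixed proposal width; the paper adapts this per parameter
    for _ in range(steps):
        cand = [list(p) for p in cur]
        i = random.randrange(1, len(cand) - 1)  # perturb one interior waypoint
        cand[i][0] += random.gauss(0, step)
        cand[i][1] += random.gauss(0, step)
        c = path_cost(cand)
        # Metropolis acceptance: always accept improvements, sometimes accept worse.
        if c < cost or random.random() < math.exp((cost - c) / temp):
            cur, cost = cand, c
        temp *= alpha  # geometric cooling schedule
    return cur, cost

start = [(0, 0), (0.25, 0.25), (0.5, 0.5), (0.75, 0.75), (1, 1)]
path, cost = anneal(start)
print(f"final cost: {cost:.3f}")
```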