984 results for ENERGY COMPONENT
Abstract:
We report the first three-particle coincidence measurement in pseudorapidity (Δη) between a high transverse momentum (p⊥) trigger particle and two lower-p⊥ associated particles within azimuth |Δφ| < 0.7 in √s_NN = 200 GeV d+Au and Au+Au collisions. Charge ordering properties are exploited to separate the jetlike component and the ridge (long-range Δη correlation). The results indicate that the correlation of ridge particles is uniform not only with respect to the trigger particle but also between themselves, event by event, within our measured Δη. In addition, the production of the ridge appears to be uncorrelated with the presence of the narrow jetlike component.
Abstract:
The STAR Collaboration at the Relativistic Heavy Ion Collider presents a systematic study of high-transverse-momentum charged di-hadron correlations at small azimuthal pair separation Δφ in d+Au and central Au+Au collisions at √s_NN = 200 GeV. Significant correlated yield for pairs with large longitudinal separation Δη is observed in central Au+Au collisions, in contrast to d+Au collisions. The associated yield distribution in Δη × Δφ can be decomposed into a narrow jet-like peak at small angular separation, whose shape is similar to that found in d+Au collisions, and a component that is narrow in Δφ and depends only weakly on Δη, the "ridge." Using two systematically independent determinations of the background normalization and shape, a finite ridge yield is found to persist for trigger p_t > 6 GeV/c, indicating that it is correlated with jet production. The transverse-momentum spectrum of hadrons comprising the ridge is found to be similar to that of bulk particle production in the measured range (2 < p_t < 4 GeV/c).
Abstract:
Data collected at the Pierre Auger Observatory are used to establish an upper limit on the diffuse flux of tau neutrinos in the cosmic radiation. Earth-skimming ν_τ may interact in the Earth's crust and produce a tau lepton by means of charged-current interactions. The tau lepton may emerge from the Earth and decay in the atmosphere to produce a nearly horizontal shower with a typical signature: a persistent electromagnetic component even at very large atmospheric depths. The search procedure to select events induced by tau decays against the background of normal showers induced by cosmic rays is described. The method used to compute the exposure for a detector continuously growing with time is detailed. Systematic uncertainties in the exposure from the detector, the analysis, and the physics involved are discussed. No tau neutrino candidates have been found. For neutrinos in the energy range 2×10^17 eV < E_ν < 2×10^19 eV, assuming a diffuse spectrum of the form E_ν^−2, data collected between 1 January 2004 and 30 April 2008 yield a 90% confidence-level upper limit of E_ν² dN_ντ/dE_ν < 9×10^−8 GeV cm^−2 s^−1 sr^−1.
Abstract:
This study investigated the energy system contributions of rowers under three conditions: rowing on an ergometer without and with the slide, and rowing on the water. For this purpose, eight rowers performed 2,000-m race simulations in each of the situations defined above. The fractions of the aerobic (W_AER), anaerobic alactic (W_PCR) and anaerobic lactic (W_[La−]) systems were calculated from the oxygen uptake, the fast component of excess post-exercise oxygen uptake, and the change in net blood lactate, respectively. On the water, the metabolic work was significantly higher [851 (82) kJ] than on both the ergometer [674 (60) kJ] and the ergometer with slide [663 (65) kJ] (P ≤ 0.05). The time on the water [515 (11) s] was longer (P < 0.001) than on the ergometers with [398 (10) s] and without the slide [402 (15) s], resulting in no difference when relative energy expenditure was considered: water [99 (9) kJ min⁻¹], ergometer without the slide [99.6 (9) kJ min⁻¹] and ergometer with the slide [100.2 (9.6) kJ min⁻¹]. The respective contributions of the W_AER, W_PCR and W_[La−] systems were: water = 87 (2), 7 (2) and 6 (2)%; ergometer = 84 (2), 7 (2) and 9 (2)%; ergometer with the slide = 84 (2), 7 (2) and 9 (1)%. V̇O2, HR and lactate did not differ among conditions. These results seem to indicate that the ergometer braking system simulates the conditions of a bigger and faster boat, not a single scull. A 2,500-m ergometer test should probably be used to properly simulate an on-water single-scull race.
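A minimal sketch of how such fractions can be derived from gas exchange and lactate data, assuming the standard O2 energy equivalent (~20.9 kJ per litre of O2) and O2-lactate equivalent (3 ml O2 per kg body mass per mmol/L of net lactate); the function name and example values are illustrative, not the study's data:

```python
# Hedged sketch: estimating energy system fractions as in the rowing study.
# Numbers are illustrative placeholders, not the paper's measurements.
O2_ENERGY_KJ_PER_L = 20.9  # approximate energy equivalent of 1 L of O2

def system_fractions(vo2_net_l, epoc_fast_l, d_lactate_mmol_l, body_mass_kg):
    """Return (aerobic, alactic, lactic) percentage contributions."""
    w_aer = vo2_net_l * O2_ENERGY_KJ_PER_L        # exercise VO2 above rest
    w_pcr = epoc_fast_l * O2_ENERGY_KJ_PER_L      # fast EPOC component
    o2_eq_lactate_l = 3.0 * d_lactate_mmol_l * body_mass_kg / 1000.0
    w_la = o2_eq_lactate_l * O2_ENERGY_KJ_PER_L   # net blood lactate accumulation
    total = w_aer + w_pcr + w_la
    return tuple(round(100 * w / total, 1) for w in (w_aer, w_pcr, w_la))

# Example with values roughly in the range of an on-water 2,000-m effort:
print(system_fractions(vo2_net_l=35.0, epoc_fast_l=3.0,
                       d_lactate_mmol_l=10.0, body_mass_kg=80.0))
# -> roughly (86.6, 7.4, 5.9), close to the 87/7/6% split reported above
```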
Abstract:
Molecular dynamics simulations are used to study energy and momentum transfer of low-energy Ar atoms scattered from the Ni(001) surface. The investigation concentrates on the dependence of these processes on incident energy, angle of incidence and surface temperature. Energy transfer exhibits a strong dependence on the surface temperature at incident energies below 500 meV and incident angles close to specular incidence. Above 500 meV, the surface temperature dependence vanishes, and a limiting value in the amount of energy transferred to the surface is attained. Momentum exchange is investigated in terms of tangential and normal components. Both components exhibit a weak surface temperature dependence, but they have opposite behaviours at all incidence angles. In each component, momentum can be lost or gained following the interaction with the surface.
Abstract:
Objective: The purpose of this study was to compare the energy cost of standardized physical activity (ECA) between patients with cystic fibrosis (CF) and healthy control subjects. Design: Cross-sectional study using patients with CF and volunteers from the community. Setting: University laboratory. Subjects: Fifteen patients (age 24.6 ± 4.6 y) recruited with consent from their treating physician and 16 healthy control subjects (age 25.3 ± 3.2 y) recruited via local advertisement. Interventions: Patients and controls walked on a computerised treadmill at 1.5 km/h for 60 min, followed by a 60-min recovery period, and, on a second occasion, cycled at 0.5 kp (kilopond) and 30 rpm, followed by a 60-min recovery. The ECA was measured via indirect calorimetry. Resting energy expenditure (REE), nutritional status, pulmonary function and genotype were determined. Results: The REE in patients was significantly greater than the REE measured in controls (P = 0.03) and was not related to the severity of lung disease or genotype. There was a significant difference between groups when comparing the ECA per √FFM for walking (P = 0.001) and cycling (P = 0.04). The ECA for each activity was adjusted (ECA_adj) for the contribution of REE (ECA kJ·√FFM⁻¹·120 min⁻¹ − REE kJ·√FFM⁻¹·120 min⁻¹). ECA_adj revealed a significant difference between groups for the walking protocol (P = 0.001) but no difference for the cycling protocol (P = 0.45). This finding may be related to the fact that the work rate during walking was more highly regulated than during cycling. Conclusions: ECA in CF is increased and is likely to be explained by an additional energy-requiring component related to the exercise itself and not by an increased REE. Sponsorship: The Prince Charles Hospital Foundation; MLR was in receipt of a QUTPRA Scholarship.
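For clarity, the REE adjustment is a direct subtraction over the same 120-minute measurement window; a sketch in the abstract's own units, taking its "per √FFM" body-size scaling at face value (the bracketed units are reconstructed, not quoted):

```latex
\mathrm{ECA_{adj}}
  = \mathrm{ECA}\;\bigl[\mathrm{kJ}\cdot\sqrt{\mathrm{FFM}}^{-1}\cdot 120\,\mathrm{min}^{-1}\bigr]
  - \mathrm{REE}\;\bigl[\mathrm{kJ}\cdot\sqrt{\mathrm{FFM}}^{-1}\cdot 120\,\mathrm{min}^{-1}\bigr]
```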
Abstract:
Recent observations of type Ia supernovae and of cosmic microwave background (CMB) anisotropies have revealed that most of the matter-energy content of the Universe interacts in a repulsive manner, composing the so-called dark energy constituent of the Universe. Determining the properties of dark energy is one of the most important tasks of modern cosmology, and this is the main motivation for this work. The analysis of cosmic gravitational waves (GW) represents, besides the CMB temperature and polarization anisotropies, an additional approach to determining the parameters that may constrain dark energy models and their consistency. In recent work, a generalized Chaplygin gas model was considered in a flat universe and the corresponding spectrum of gravitational waves was obtained. In the present work we have added a massless gas component to that model and compared the new spectrum to the previous one. The Chaplygin gas is also used to simulate a Λ-CDM model by means of a particular combination of parameters, so that the Chaplygin gas and Λ-CDM models can be easily distinguished in the theoretical scenarios established here. We find that the models are strongly degenerate in the range of frequencies studied. This degeneracy is in part expected, since the models must converge to each other when particular combinations of parameters are considered.
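For reference, the generalized Chaplygin gas behind both spectra obeys a simple equation of state; a sketch of the standard relations, where A, B and α are model parameters and a is the scale factor:

```latex
p = -\frac{A}{\rho^{\alpha}}, \qquad
\rho(a) = \left[ A + \frac{B}{a^{3(1+\alpha)}} \right]^{\frac{1}{1+\alpha}}
```

For α → 0 the density evolves as a dust term plus a constant, i.e., a Λ-CDM-like background, which is why suitable parameter choices mimic Λ-CDM and why the degeneracy noted above is expected.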
Abstract:
The paper proposes a methodology to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The proposed methodology is based on statistical failure and repair data of distribution components and uses fuzzy-probabilistic modeling of the components' outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A mixed-integer nonlinear programming optimization model is developed to identify the adequate investments in distribution energy system components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
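As an illustration of the fuzzy modelling step (a sketch, not the paper's exact functions): a component's failure rate λ can be encoded as a triangular fuzzy number built from the statistical record, with breakpoints a ≤ b ≤ c taken, for example, from the minimum, mode and maximum of the observed rates:

```latex
\mu_{\lambda}(x) =
\begin{cases}
\dfrac{x-a}{b-a}, & a \le x \le b,\\[4pt]
\dfrac{c-x}{c-b}, & b < x \le c,\\[4pt]
0, & \text{otherwise.}
\end{cases}
```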
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
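Since the passage leans on the simplex picture of the linear mixing model and on PPI's skewer projections, here is a minimal, self-contained sketch of both ideas; all sizes, signatures and abundances are synthetic stand-ins, not data from the chapter:

```python
# Hedged sketch: linear mixing model + a PPI-style purity score.
import numpy as np

rng = np.random.default_rng(0)
bands, n_endmembers, n_pixels = 50, 3, 1000

# Linear mixing model: X = M A + noise, abundances on the simplex
M = rng.random((bands, n_endmembers))                   # endmember signatures
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T  # columns sum to 1
X = M @ A + 0.001 * rng.standard_normal((bands, n_pixels))

# PPI-style scoring: project all pixels onto many random "skewers" and
# count how often each pixel is an extreme of a projection.
n_skewers = 500
scores = np.zeros(n_pixels, dtype=int)
for _ in range(n_skewers):
    skewer = rng.standard_normal(bands)
    proj = skewer @ X
    scores[proj.argmin()] += 1
    scores[proj.argmax()] += 1

purest = np.argsort(scores)[-n_endmembers:]   # candidate pure pixels
print("highest-scoring pixels:", purest)
```

The highest-scoring pixels cluster near the simplex vertices, which is exactly the pure-pixel assumption that PPI, N-FINDR and VCA share.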
Abstract:
This paper proposes a PSO-based approach to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. Statistical failure and repair data of distribution components form the main basis of the proposed methodology, which uses fuzzy-probabilistic modeling of the components' outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A Modified Discrete PSO optimization model is developed to identify the adequate investments in distribution energy system components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
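To make the discrete-PSO idea concrete, here is a minimal binary-PSO sketch in which each bit of a particle marks whether a candidate component reinforcement is purchased; the objective, costs and reliability gains are invented placeholders, not the paper's Modified Discrete PSO model:

```python
# Hedged sketch of binary PSO for investment selection (placeholder model).
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_particles, n_iters = 12, 20, 100
cost = rng.uniform(1.0, 5.0, n_candidates)    # investment cost per component
gain = rng.uniform(0.01, 0.05, n_candidates)  # delivery-probability gain
budget_weight = 0.5                           # cost/reliability trade-off

def fitness(x):
    # Maximize reliability gain, penalize cost (illustrative objective only).
    return gain @ x - budget_weight * (cost @ x) / cost.sum()

pos = rng.integers(0, 2, (n_particles, n_candidates))   # bit vectors
vel = np.zeros((n_particles, n_candidates))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_candidates))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))         # sigmoid maps velocity to bit prob
    pos = (rng.random(pos.shape) < prob).astype(int)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("selected investments:", np.flatnonzero(gbest))
```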
Abstract:
In this study, energy production for autonomous underwater vehicles is investigated. This project is part of a bigger project called TURTLE. The autonomous vehicles perform oceanic research at the seabed, for which they are intended to remain operational underwater for several months. In order to fulfil this long-term underwater condition, powerful batteries are combined with "micro-scale" energy production on the spot. This work aims to develop a system that generates up to a maximum of 30 W. The energy harvesting structure consists basically of a turbine combined with a generator and low-power electronics to adjust the achieved voltage to the required battery charger voltage. Every component is examined separately, so that an optimum can be defined for each and subsequently an overall optimum. Different design parameters, e.g., number of blades, solidity ratio and cross-section area, are compared for different turbines in order to determine the most feasible type. Further, a generator is chosen by studying how flux distributions might be adjusted to low velocities and how cogging torque can be excluded by adapted designs. Low-power electronics are configured to convert and stabilize heavily varying three-phase voltages to a constant, rectified voltage which is usable for battery storage. Different component parameters, such as maximum power and torque, are matched here to increase the overall power generation. Furthermore, an overall maximum power is set up for achieving maximum power flow at the load side. Due to, among other factors, the typical low velocities of about 0.1 to 0.5 m/s and the construction limits of the prototype, the vast range of components is restricted to only a few that could be used. Hence, a helical turbine is combined in direct-drive mode with a coreless-stator axial-flux permanent-magnet generator, whose output voltage is subsequently adjusted by a rectifier, an impedance-matching unit, an up-converter circuit and an overall control unit to regulate different component parameters. All these electronics are combined in a closed-loop design to involve positive feedback signals. Furthermore, a theoretical configuration for the TURTLE vehicle is described in this work and a solution that might be implemented is proposed, for which several design tests are performable in a future study.
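As a back-of-envelope cross-check of the 30 W target at the stated current speeds, the available hydrokinetic power follows P = ½·ρ·A·C_p·v³; the swept area and power coefficient below are assumptions, not values from the thesis:

```python
# Hedged sketch: hydrokinetic power available to the turbine.
RHO_SEAWATER = 1025.0  # kg/m^3

def turbine_power(v_mps, area_m2=1.0, cp=0.3):
    """Mechanical power (W) extracted from a current of speed v_mps.
    area_m2 and cp are assumed placeholder values."""
    return 0.5 * RHO_SEAWATER * area_m2 * cp * v_mps ** 3

for v in (0.1, 0.3, 0.5):
    print(f"v = {v} m/s -> {turbine_power(v):.2f} W")
# v = 0.1 m/s ->  0.15 W ; v = 0.3 m/s -> 4.15 W ; v = 0.5 m/s -> 19.22 W
```

The cubic dependence on velocity shows why the 0.1 to 0.5 m/s range so tightly constrains the component choices described above.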
Abstract:
The electrohysterogram (EHG) is a new instrument for pregnancy monitoring. It measures the uterine muscle electrical signal, which is closely related to uterine contractions. The EHG is described as a viable alternative and a more precise instrument than the currently most widely used method for the description of uterine contractions: the external tocogram. The EHG has also been indicated as a promising tool in the assessment of preterm delivery risk. This work intends to contribute towards the EHG characterization through the inventory of its components, which are:
• Contractions;
• Labor contractions;
• Alvarez waves;
• Fetal movements;
• Long duration low frequency waves.
The instruments used for cataloguing were spectral analysis (parametric and non-parametric), energy estimators, time-frequency methods and the tocogram annotated by expert physicians. The EHG and the respective tocograms were obtained from the Icelandic 16-electrode Electrohysterogram Database. 288 components were classified. There is no component database of this type available for consultation. The spectral analysis and power estimation module was added to Uterine Explorer, an EHG analysis software package developed at FCT-UNL. The importance of this component database is related to the need to improve the understanding of the EHG, which is a relatively complex signal, as well as to contribute towards the detection of preterm birth. Preterm birth accounts for 10% of all births and is one of the most relevant obstetric conditions. Despite the technological and scientific advances in perinatal medicine, in developed countries prematurity is the major cause of neonatal death. Although various risk factors, such as previous preterm births, infection, uterine malformations, multiple gestation and short uterine cervix in the second trimester, have been associated with this condition, its etiology remains unknown [1][2][3].
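A minimal sketch of the non-parametric branch of such an analysis, using Welch's power spectral density estimate on a synthetic segment; the sampling rate, band edges and toy signal are assumptions for illustration, not properties of the Icelandic database:

```python
# Hedged sketch: Welch PSD of a synthetic EHG-like segment.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 20.0                            # Hz, an assumed sampling rate
t = np.arange(0, 600, 1 / fs)        # 10-minute segment
# Toy signal: a slow oscillation standing in for contraction activity + noise
ehg = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.standard_normal(t.size)

f, pxx = welch(ehg, fs=fs, nperseg=1024)   # non-parametric PSD estimate
band = (f >= 0.1) & (f <= 1.0)             # assumed band of interest
print("in-band energy fraction:", pxx[band].sum() / pxx.sum())
```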
Abstract:
PURPOSE: The origin of the slow component is not fully understood. The mechanical hypothesis is one of the potential factors, because an increase in external mechanical work with fatigue was previously reported for a constant-velocity run. The purpose of this study was to determine whether a change in mechanical work could occur during the development of the VO2 slow component under the effect of fatigue. METHODS: Twelve regional-level competitive runners performed a square-wave transition corresponding to 95% of the speed associated with peak VO2 obtained during an incremental test. The VO2 response was fitted with a classical model including two exponential functions. A specific treadmill with three-dimensional force transducers was used to measure the ground reaction force. Kinetic work (W_kin), potential work (W_pot), external work (W_ext), and an index of internal work (W_int) per unit of distance were quantified continuously. RESULTS: During the slow component of VO2, a significant increase in W_kin (P < 0.01), no change in W_pot, and significant decreases in W_ext and the W_int index (P < 0.05 and P < 0.001, respectively) were observed. CONCLUSION: The present study showed that the slow component of VO2 did not result, even partly, from a change in mechanical work under the effect of fatigue. Nevertheless, the decreases in stride frequency (P < 0.001) and contact time (P < 0.001) suggest an alternative mechanical explanation. The slow component during running may be due to the cost of generating force or to alterations in the storage and recoil of elastic energy, and not to the external mechanical work.
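The "classical model including two exponential functions" is standardly written as a delayed double exponential; a sketch using common notation (baseline VO2 plus a primary and a slow-component term, each with its own amplitude A, delay δ and time constant τ, and each term switched on only for t ≥ its delay):

```latex
\dot{V}O_2(t) = \dot{V}O_{2,\mathrm{base}}
  + A_1\left(1 - e^{-(t-\delta_1)/\tau_1}\right)
  + A_2\left(1 - e^{-(t-\delta_2)/\tau_2}\right)
```

The amplitude A_2 of the second, slower term quantifies the slow component whose mechanical correlates the study examines.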