986 results for Energy flux distributions
Abstract:
This paper reports measurements of atmospheric neutrino and antineutrino interactions in the MINOS Far Detector, based on 2553 live-days (37.9 kton-years) of data. A total of 2072 candidate events are observed. These are separated into 905 contained-vertex muons and 466 neutrino-induced rock-muons, both produced by charged-current ν_μ and ν̄_μ interactions, and 701 contained-vertex showers, composed mainly of charged-current ν_e and ν̄_e interactions and neutral-current interactions. The curvature of muon tracks in the magnetic field of the MINOS Far Detector is used to select separate samples of ν_μ and ν̄_μ events. The observed ratio of ν̄_μ to ν_μ events is compared with the Monte Carlo (MC) simulation, giving a double ratio of R(ν̄/ν)_data / R(ν̄/ν)_MC = 1.03 ± 0.08 (stat) ± 0.08 (syst). The ν_μ and ν̄_μ data are separated into bins of L/E resolution, based on the reconstructed energy and direction of each event, and a maximum likelihood fit to the observed L/E distributions is used to determine the atmospheric neutrino oscillation parameters. This fit returns 90% confidence limits of |Δm²| = (1.9 ± 0.4) × 10⁻³ eV² and sin²(2θ) > 0.86. The fit is extended to incorporate separate ν_μ and ν̄_μ oscillation parameters, returning a 90% confidence limit of |Δm²| − |Δm̄²| = 0.6 (+2.4/−0.8) × 10⁻³ eV² on the difference between the squared-mass splittings for neutrinos and antineutrinos.
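For illustration, the statistical error on a double ratio such as R(ν̄/ν)_data / R(ν̄/ν)_MC can be propagated from Poisson counting errors on the data samples. A minimal sketch, with hypothetical event counts and the MC statistical uncertainty neglected (the paper's actual error treatment is not reproduced here):

```python
import math

def double_ratio(nbar_data, nu_data, nbar_mc, nu_mc):
    """Double ratio R(nubar/nu)_data / R(nubar/nu)_MC with simple
    Poisson error propagation on the two data counts (MC error neglected)."""
    r_data = nbar_data / nu_data
    r_mc = nbar_mc / nu_mc
    r = r_data / r_mc
    # relative statistical error from the two independent data counts
    rel_err = math.sqrt(1.0 / nbar_data + 1.0 / nu_data)
    return r, r * rel_err
```

With equal data and MC ratios the double ratio is 1 by construction, and the error shrinks as the data counts grow.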
Abstract:
Background The application and better understanding of traditional and new breast tumor biomarkers and prognostic factors are increasingly important, because they can identify individuals at high risk of breast cancer who may benefit from preventive interventions. Biomarkers can also make it possible for physicians to design an individualized treatment for each patient. Previous studies showed that trace elements (TEs) determined by X-Ray Fluorescence (XRF) techniques are found in significantly higher concentrations in neoplastic breast tissues (malignant and benign) than in normal tissues. The aim of this work was to evaluate the potential of TEs, determined by the Energy Dispersive X-Ray Fluorescence (EDXRF) technique, as biomarkers and prognostic factors in breast cancer. Methods Using EDXRF, we determined Ca, Fe, Cu, and Zn trace element concentrations in 106 samples of normal and breast cancer tissues. Cut-off values for each TE were determined through Receiver Operating Characteristic (ROC) analysis of the TE distributions. These values were used to define positive or negative expression, which was subsequently correlated with clinical prognostic factors through Fisher's exact test and the chi-square test. Kaplan-Meier survival curves were also evaluated to assess the effect of TE expression on overall patient survival. Results Concentrations of TEs are higher in neoplastic tissues (malignant and benign) than in normal tissues. Results from the ROC analysis showed that TEs can be considered tumor biomarkers because, after establishing a cut-off value, it was possible to classify different tissues as normal or neoplastic, as well as to distinguish different types of cancer. The expression of TEs was found to be statistically correlated with age and menstrual status.
The survival curves estimated by the Kaplan-Meier method showed that patients with positive expression of Cu had poorer overall survival (p < 0.001). Conclusions This study suggests that TE expression has great potential as a tumor biomarker, since it proved to be an effective tool to distinguish different types of breast tissue and to identify the difference between malignant and benign tumors. The expression of all TEs was found to be statistically correlated with well-known prognostic factors for breast cancer. The element copper also showed a statistical correlation with overall survival.
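One common way to derive a cut-off from a ROC curve is to pick the threshold maximizing Youden's J (sensitivity + specificity − 1). The abstract does not state which criterion the authors used, so both the criterion and the concentrations below are illustrative assumptions, not the study's data:

```python
def youden_cutoff(values_pos, values_neg):
    """Pick the cut-off maximizing sensitivity + specificity - 1
    (Youden's J) over all candidate thresholds; 'positive' means
    the value is at or above the cut-off."""
    candidates = sorted(set(values_pos) | set(values_neg))
    best_cut, best_j = None, -1.0
    for cut in candidates:
        sens = sum(v >= cut for v in values_pos) / len(values_pos)
        spec = sum(v < cut for v in values_neg) / len(values_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

On perfectly separated hypothetical concentrations (e.g. neoplastic tissue values all above normal ones), the returned J reaches its maximum of 1.0.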
Abstract:
The observation of ultrahigh-energy neutrinos (UHE νs) has become a priority in experimental astroparticle physics. UHE νs can be detected with a variety of techniques. In particular, neutrinos can interact in the atmosphere (downward-going ν) or in the Earth's crust (Earth-skimming ν), producing air showers that can be observed with arrays of detectors at the ground. With the surface detector array of the Pierre Auger Observatory we can detect these types of cascades. The distinguishing signature of neutrino events is the presence of very inclined showers produced close to the ground (i.e., after having traversed a large amount of atmosphere). In this work we review the procedure and criteria established to search for UHE νs in the data collected with the ground array of the Pierre Auger Observatory. This includes Earth-skimming as well as downward-going neutrinos. No neutrino candidates have been found, which allows us to place competitive limits on the diffuse flux of UHE νs in the EeV range and above.
Abstract:
The relation between intercepted light and orchard productivity has generally been considered linear, although this dependence seems to be governed more by the planting system than by light intensity. At the whole-plant level, an increase in irradiance does not always improve productivity. One reason may be the plant's intrinsic inefficiency in using energy: in full light, generally only 5–10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency is therefore pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex, multilevel machinery able to dissipate excess energy as heat (non-photochemical quenching), to move electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and to scavenge the reactive species generated. The price plants pay for this equipment is the consumption of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but raises the share of energy routed into the alternative dissipation pathways, along with ROS production and photoinhibition risk. The wide photo-protective apparatus, however, is not always able to cope with the excessive incoming energy, and photodamage therefore occurs. Any event that increases photon pressure and/or decreases the efficiency of these photo-protective mechanisms (e.g. thermal stress, water or nutritional deficiency) can exacerbate photoinhibition.
In nature, only a small fraction of damaged photosystems is typically found, thanks to an effective, efficient, but energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve our knowledge of the several strategies employed to manage incoming energy, and of the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section, a new rapid, non-intrusive, whole-tissue and universal technique for determining functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, and monocots and dicots. In the second unit, using a singular experimental orchard named the "Asymmetric orchard", the relation between light environment, photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of varying photon pressure on energy management was considered at the single-leaf level. In the third section, the quenching-analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach and then applied in the field, where the influence of moderate light and water reduction on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied. Using solar energy as fuel for life is intrinsically risky for a plant, given the constantly high risk of photodamage. This dissertation attempts to highlight the complex relation between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage incoming light so as to derive the maximum possible benefit while minimizing the risks.
First, the new method proposed for determining functional PSII, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, not least because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition, the energy-quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model of von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water and temperature, but above all light. The "Asymmetric" training system is a good way to study the relations among light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light promotes not an improvement in net carboxylation but PSII damage: in the most light-exposed plants, about 50–60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000–1200 μmol m⁻² s⁻¹), and excess light is dissipated by non-photochemical quenching and by non-net-carboxylative electron transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-to-low irradiance, NPQ seems to be limited by lumen pH, because the incoming photon pressure is not sufficient to generate the optimal lumen pH for full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with excess light by increasing the non-net-carboxylative transports.
As PPFD rises, the xanthophyll cycle becomes progressively more activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, cyclic transport around PSI and the glutathione-ascorbate cycle, can generate additional H⁺ in the lumen to support VDE activation when light is limiting. Moreover, the alternative transports appear to act as an important dissipative route when high temperature and sub-optimal conductance increase the risk of photoinhibition. In peach, a moderate reduction in water and light does not decrease net carboxylation; rather, by diminishing the incoming light and the environmental evapo-transpirative demand, it lowers stomatal conductance and thereby improves water-use efficiency. Hence, by lowering light intensity to levels that are still non-limiting, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light. In this experiment too, under over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-to-low irradiance it seems to be pH-limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The observation that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day.
At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Light in excess of the photosynthetic optimum therefore does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be further explored is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on light, water and photosynthate management in peach.
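The saturating Pn/PPFD behaviour described above is often modelled with a non-rectangular hyperbola. A sketch with purely illustrative parameter values (not fitted peach data from this dissertation):

```python
import math

def net_photosynthesis(ppfd, phi=0.05, pmax=20.0, theta=0.7, rd=1.0):
    """Non-rectangular hyperbola light-response model:
    Pn = [phi*Q + Pmax - sqrt((phi*Q + Pmax)^2 - 4*theta*phi*Q*Pmax)]
         / (2*theta) - Rd
    phi: apparent quantum yield, pmax: light-saturated gross rate,
    theta: curvature, rd: dark respiration (all illustrative values)."""
    a = phi * ppfd + pmax
    gross = (a - math.sqrt(a * a - 4.0 * theta * phi * ppfd * pmax)) / (2.0 * theta)
    return gross - rd
```

Pn rises roughly linearly at low light and plateaus below Pmax − Rd; past the saturation point, extra PPFD adds essentially nothing to carboxylation, matching the curves described above.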
Abstract:
The reactions 32S+58,64Ni are studied at 14.5 AMeV. From this energy onward, fragmentation begins to be a dominant process, although evaporation and fission are still present. After a selection of the collision mechanism, we show that important even-odd effects are present in the isotopic fragment distributions when the excitation energy is small. The staggering effect appears to be a universal feature of fragment production, slightly enhanced when the emission source is neutron-poor. A closer look at the behavior of isotopic chains reveals that odd-even effects cannot be explained by pairing effects in the nuclear mass alone, but depend in a more complex way on the de-excitation chain.
Abstract:
Since its discovery, the top quark has been one of the most investigated fields in particle physics. The aim of this thesis is the reconstruction of hadronic tops with high transverse momentum (boosted) using the Template Overlap Method (TOM). Because of the high energy, the decay products of boosted tops are partially or totally overlapping and are thus contained in a single large-radius jet (fat-jet). TOM compares the internal energy distribution of the candidate fat-jet to a sample of tops obtained from MC simulation (templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat-jet and the template, allowing an efficient discrimination of signal from background contributions. A working point was chosen to obtain a signal efficiency close to 90% with a corresponding background rejection of 70%. TOM performance has been tested on MC samples in the muon channel and compared with previous methods in the literature. All the methods will be merged in a multivariate analysis to give a global top tagging, which will be included in the ttbar-production differential cross-section measurement performed on the data acquired in 2012 at sqrt(s) = 8 TeV in the high-pT phase-space region, where new physics processes could appear. Since its performance improves with pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s) = 13 TeV, where almost all tops will be produced at high energy, making the standard reconstruction methods inefficient.
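The overlap function is only described qualitatively above. A hedged sketch of one plausible form, a Gaussian penalty on the mismatch between template parton energies and the energy deposits found around them in the fat-jet; both the functional form and the resolution parameter here are assumptions, not the published TOM definition:

```python
import math

def overlap(template_energies, fatjet_energies, sigma_frac=0.33):
    """Toy overlap score: exp(-chi2/2) where chi2 sums the squared,
    resolution-normalized mismatch between each template parton energy
    and the corresponding energy deposit in the fat-jet. sigma_frac is
    an illustrative per-parton energy resolution."""
    chi2 = sum(
        ((e_meas - e_tmpl) / (sigma_frac * e_tmpl)) ** 2
        for e_tmpl, e_meas in zip(template_energies, fatjet_energies)
    )
    return math.exp(-0.5 * chi2)
```

A perfect match gives an overlap of 1, and the score decreases smoothly as the fat-jet's internal energy distribution departs from the template, which is what allows a cut on the overlap to separate signal from background.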
Abstract:
In the year 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope, constructed at the geographic South Pole, was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum-likelihood stacking approach, analyzing the combined emission from uniform groups of objects. The stacking enhances the sensitivity with respect to the still unsuccessful single-source searches. The analysis utilizes four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis, and of any other future IceCube point-source search, was enhanced by the development of a new angular reconstruction method based on a detailed simulation of photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
Abstract:
The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high-energy electron beams with high accuracy. However, there are limitations for low-energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented in the eMC include (1) improved determination of the initial electron energy spectrum through increased resolution of the mono-energetic depth-dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; and (3) reduction of the maximum size of the sphere selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams of a Varian Clinac 2300C/D, at source-to-surface distances (SSD) of 100 and 110 cm and with applicators ranging from 6 x 6 to 25 x 25 cm², with the corresponding measurements. Dose differences between calculated and measured absolute depth-dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at an SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d_max and R50 in water show dose differences of up to 8% for applicators larger than 15 x 15 cm² at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At an SSD of 110 cm, the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC.
In this work several enhancements were made in the eMC algorithm leading to significant improvements in the accuracy of the dose calculation for 4 and 6 MeV electron beams of Varian linear accelerators.
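The "2% or 2 mm" agreement criterion quoted above is the gamma-index style of dose comparison used in radiotherapy QA. A simplified 1D, brute-force sketch with global dose normalization (not the evaluation tool actually used in the study):

```python
def gamma_1d(positions, dose_ref, dose_eval, dose_tol=0.02, dist_tol=2.0):
    """Simplified 1D gamma index. dose_tol is a fraction of the maximum
    reference dose (global normalization); dist_tol is in mm.
    gamma <= 1 at a point means the evaluated dose agrees with the
    reference within the combined dose/distance criterion."""
    dmax = max(dose_ref)
    gammas = []
    for xr, dr in zip(positions, dose_ref):
        g2 = min(
            ((xe - xr) / dist_tol) ** 2
            + ((de - dr) / (dose_tol * dmax)) ** 2
            for xe, de in zip(positions, dose_eval)
        )
        gammas.append(g2 ** 0.5)
    return gammas
```

Identical distributions give gamma = 0 everywhere; a pass criterion such as "2%/2 mm" corresponds to requiring gamma <= 1 at (nearly) all points.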
Abstract:
Land surface temperature (LST) plays a key role in governing the land-surface energy budget, and measurements or estimates of LST are an integral part of many land-surface models and methods for estimating land-surface sensible heat (H) and latent heat fluxes. In particular, the LST anchors the potential temperature profile in Monin-Obukhov similarity theory, from which H can be derived. Brutsaert has made important contributions to our understanding of the nature of surface temperature measurements, as well as to the practical but theoretically sound use of LST in this framework. His work has coincided with the widespread availability of remotely sensed LST measurements. Use of remotely sensed LST estimates inevitably involves complicating factors, such as: varying spatial and temporal scales in measurements, theory, and models; spatial variability of LST and H; the relationship between measurements of LST and the temperature felt by the atmosphere; and the need to correct satellite-based radiometric LST measurements for the radiative effects of the atmosphere. This paper reviews the progress made in research in these areas by tracing and commenting on Brutsaert's contributions.
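As a toy illustration of deriving H from LST, a bulk-transfer form is often used. A full Monin-Obukhov treatment would make the transfer coefficient stability-dependent; the constant C_H below is a placeholder assumption:

```python
def sensible_heat_flux(lst_k, air_temp_k, wind_speed, ch=2.0e-3,
                       rho=1.2, cp=1005.0):
    """Bulk-transfer estimate of sensible heat flux H (W m^-2):
    H = rho * cp * C_H * U * (T_s - T_a).
    ch is an illustrative bulk transfer coefficient; in Monin-Obukhov
    similarity theory it would depend on atmospheric stability and
    roughness lengths. rho (kg m^-3) and cp (J kg^-1 K^-1) are for air."""
    return rho * cp * ch * wind_speed * (lst_k - air_temp_k)
```

The sign convention makes H positive (upward) when the surface is warmer than the air, which is why an accurate LST is critical: errors in T_s propagate linearly into H.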
Abstract:
The 1s-2s interval has been measured in the muonium (μ⁺e⁻) atom by Doppler-free two-photon pulsed laser spectroscopy. The frequency separation of the states was determined to be 2 455 528 941.0(9.8) MHz, in good agreement with quantum electrodynamics. The result may be interpreted as a measurement of the muon-electron charge ratio as −1 − 1.1(2.1) × 10⁻⁹. We expect significantly higher accuracy from future high-flux muon sources and from cw laser technology.
Abstract:
The similarity measure is one of the main factors affecting the accuracy of intensity-based 2D/3D registration of X-ray fluoroscopy to CT images. Information theory has been used to derive similarity measures for image registration, leading to the introduction of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, the standard mutual information measure takes only intensity values into account, without considering spatial information, and its robustness is questionable. Previous attempts to incorporate spatial information into mutual information either require computing the entropy of higher-dimensional probability distributions, or are not robust to outliers. In this paper, we show how to incorporate spatial information into mutual information without suffering from these problems. Using a variational approximation derived from the Kullback-Leibler bound, spatial information can be effectively incorporated into mutual information via energy minimization. The resulting similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on datasets from two applications: (a) intra-operative patient pose estimation from a few (e.g. 2) calibrated fluoroscopic images, and (b) post-operative cup alignment estimation from a single X-ray radiograph with gonadal shielding.
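For reference, the standard intensity-histogram mutual information that the paper extends can be sketched as follows (discrete intensities, no spatial term — this is the baseline measure, not the paper's spatially-extended variant):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Histogram-based mutual information between two equally sized
    intensity lists: I(A;B) = sum_{a,b} p(a,b) * log(p(a,b) / (p(a)p(b)))."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa = Counter(img_a)
    pb = Counter(img_b)
    mi = 0.0
    for (a, b), c in joint.items():
        pab = c / n
        # p(a,b)/(p(a)*p(b)) = c*n / (count_a * count_b)
        mi += pab * math.log(pab * n * n / (pa[a] * pb[b]))
    return mi
```

MI is maximal when one image's intensities determine the other's (perfect registration of identical modalities) and zero when they are statistically independent, which is what makes it usable across modalities.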
Measuring energy spectra of TeV gamma-ray emission from the Cygnus region of our galaxy with Milagro
Abstract:
High-energy gamma rays can provide fundamental clues to the origins of cosmic rays. In this thesis, TeV gamma-ray emission from the Cygnus region is studied. Previously, the Milagro experiment detected five TeV gamma-ray sources in this region and a significant excess of TeV gamma rays whose origin is still unclear. To better understand the diffuse excess, the separation of sources and diffuse emission is studied using the latest and most sensitive data set of the Milagro experiment. In addition, a newly developed technique is applied that allows the energy spectrum of the TeV gamma rays to be reconstructed from Milagro data. No conclusive statement can be made about the spectrum of the diffuse emission from the Cygnus region because of its low significance of 2.2 σ above the background in the studied data sample. The emission from the entire Cygnus region is best fit by a power law with a spectral index of α = 2.40 (68% confidence interval: 1.35-2.92) and an exponential cutoff energy of 31.6 TeV (10.0-251.2 TeV). Assuming a simple power law without a cutoff energy, the best fit yields a spectral index of α = 2.97 (68% confidence interval: 2.83-3.10). Neither of these best fits is in good agreement with the data. The best spectral fit to the TeV emission from MGRO J2019+37, the brightest source in the Cygnus region, yields a spectral index of α = 2.30 (68% confidence interval: 1.40-2.70) with a cutoff energy of 50.1 TeV (68% confidence interval: 17.8-251.2 TeV), and a spectral index of α = 2.75 (68% confidence interval: 2.65-2.85) when no exponential cutoff is assumed. According to the present analysis, MGRO J2019+37 contributes 25% of the differential flux from the entire Cygnus region at 15 TeV.
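The spectral shape fitted above is a power law with an exponential cutoff, dN/dE = N0 (E/E0)^(-α) exp(-E/Ecut). A sketch of this model using the quoted Cygnus-region best-fit index and cutoff; the normalization N0 and pivot energy E0 are arbitrary placeholders, not values from the thesis:

```python
import math

def flux(energy_tev, norm=1.0, index=2.40, cutoff_tev=31.6, pivot_tev=1.0):
    """Differential flux dN/dE for a power law with exponential cutoff:
    dN/dE = N0 * (E/E0)^(-alpha) * exp(-E/Ecut).
    index and cutoff default to the quoted Cygnus-region best fit;
    norm (N0) and pivot (E0) are illustrative placeholders."""
    return (norm * (energy_tev / pivot_tev) ** (-index)
            * math.exp(-energy_tev / cutoff_tev))
```

Setting a very large cutoff recovers the simple power law, which is how the two fitted hypotheses (with and without cutoff) relate.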
Abstract:
Fatal falls from great height are a frequently encountered setting in forensic pathology. Because the energy transmitted to the body can be calculated, they present an ideal model for assessing the effects of blunt trauma on the human body. As multislice computed tomography (MSCT) has proven invaluable not only in clinical examinations but also as a viable tool in post-mortem imaging, especially in the field of osseous injuries, we performed MSCT scans on 20 victims of falls from great height. We detected the fractures and compared their distributions with the impact energy. Our study suggests a marked increase in extensive damage to different body regions at about 20 kJ and above. The thorax was most often affected, regardless of the amount of impact energy and the primary impact site. Cranial fracture frequency displayed a biphasic distribution with respect to the impact energy: fractures were more frequent at energies below 10 kJ and above 20 kJ, but rarer in the intermediate group of 10-20 kJ.
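The "calculable energy transmission" above is simply the free-fall kinetic energy at impact, E = m·g·h, neglecting air resistance. A minimal sketch:

```python
def impact_energy_kj(mass_kg, height_m, g=9.81):
    """Kinetic energy at impact for a free fall from rest, neglecting
    air resistance: E = m * g * h, returned in kilojoules."""
    return mass_kg * g * height_m / 1000.0
```

For example, a 75 kg body falling from 20 m arrives with roughly 14.7 kJ, between the 10 kJ and 20 kJ thresholds discussed above.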