Abstract:
This work examines the sources of moisture affecting the semi-arid Brazilian Northeast (NEB) during its pre-rainy and rainy season (JFMAM) through a Lagrangian diagnosis method. The FLEXPART model identifies the humidity contributions to the moisture budget over a region through the continuous computation of changes in specific humidity along back or forward trajectories over periods of up to 10 days. The numerical experiments were carried out for the period 2000-2004, and the results were aggregated on a monthly basis. Results show that, besides a minor local recycling component, the vast majority of moisture reaching the NEB originates in the South Atlantic basin, and that the nearby wet Amazon basin has almost no impact. Moreover, although the maximum precipitation in the "Polígono das Secas" region (PS) occurs in March, and the maximum precipitation associated with air parcels emanating from the South Atlantic towards the PS is observed from January to March, the highest moisture contribution from this oceanic region occurs slightly later (April). A dynamical analysis suggests that the maximum precipitation observed in the PS sector does not coincide with the maximum moisture supply, probably due to the combined effect of the Walker and Hadley cells in inhibiting rising motions over the region in the months following April.
Abstract:
Differences in the phase and wavelength of the quasi-stationary waves over South America generated by El Niño (EN) and La Niña (LN) events seem to affect the daily evolution of the South American Low Level Jet east of the Andes (SALLJ). For the austral summer periods of 1977-2004, the SALLJ episodes detected according to Bonner criterion 1 show normal to above-normal frequency in EN years and normal to below-normal frequency in LN years. During both EN and LN years the SALLJ episodes were associated with positive rainfall anomalies over the La Plata Basin, but these were more intense during LN years. During EN years, the increase in SALLJ cases was associated with the intensification of the Subtropical Jet (SJ) around 30 degrees S and with positive Sea Level Pressure (SLP) anomalies over the western equatorial Atlantic and tropical South America, particularly over central Brazil. This favored the intensification of the northeasterly trade winds over the northern part of the continent, which were channeled by the Andes mountains towards the La Plata Basin region, where negative SLP anomalies are found. The SALLJ cases identified during LN events were weaker and less frequent than those in EN years. In this case the SJ was weaker than in EN years, and the negative SLP anomalies over the tropical continent contributed to the reversal of the northeasterly trade winds. In addition, a southerly flow anomaly was generated by geostrophic balance due to the anomalous blocking over the southeast Pacific and the intense cyclonic transient over the southern tip of South America. As a result, the warm tropical air brought by the SALLJ encounters the cold extratropical air from the southerly winds over the La Plata Basin. This configuration can increase the conditional instability over the La Plata Basin and may explain why the positive rainfall anomalies in SALLJ cases are more intense during LN years than in EN years.
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h(-1) (PR) and -0.157 mm h(-1) (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to that of the PR and S-Pol but with a bimodal shape.
Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind-flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but underestimated rainfall rates above 5 mm h(-1) during the westerly period. NESDIS(1) overestimated in both wind regimes but presented the best representation of the westerly regime. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closest to the observations during the easterly flow.
Abstract:
Based on previous results obtained from observations and linear wave theory analysis, it has been suggested that large-scale patterns can generate extreme cold events in southeastern South America through the propagation of remotely excited Rossby waves. This work confirms those findings and extends their analysis through a series of numerical experiments using a primitive equation model in which waves are excited by a thermal forcing placed at positions chosen according to observed convection anomalies over the equatorial region. The basic state used for these experiments is a composite of austral winters with maximum and minimum frequency of occurrence of generalized frosts affecting a large area known as the Wet Pampas, located in the central and eastern part of Argentina. The results suggest that stationary Rossby waves may be one important mechanism linking anomalous tropical convection with extreme cold events in the Wet Pampas. The combination of tropical convection and a specific basic state can generate the right environment to guide the Rossby waves triggered by the tropical forcing towards South America. Depending on the phase of the waves entering the South American continent, they can favour the advection of anomalous low-level winds from the south, carrying cold and dry air over the whole southern extreme of the continent and producing a generalized frost in the Wet Pampas region. On the other hand, when a basic state based on the composites of minimum frosts is used, an anomalous anticyclone over the southern part of the continent generates a circulation with a south-southeasterly wind which brings maritime air, and therefore humidity, over the Wet Pampas region, creating negative temperature anomalies only over its northeastern part. Under these conditions, even if frosts occur they would not be generalized, as observed for the other basic state with maximum frequency of occurrence of generalized frosts.
Abstract:
Context. Dwarf irregular galaxies are relatively simple, unevolved objects in which it is easy to test models of galactic chemical evolution. Aims. We attempt to determine the star formation and gas accretion history of IC 10, a local dwarf irregular for which abundance, gas, and mass determinations are available. Methods. We apply detailed chemical evolution models to predict the evolution of several chemical elements (He, O, N, S) and compare our predictions with the observational data. We consider additional constraints such as the present-time gas fraction, the star formation rate (SFR), and the total estimated mass of IC 10. We assume a dark matter halo for this galaxy and study the development of a galactic wind. We consider different star formation regimes, bursting and continuous, and explore different wind situations: i) a normal wind, where all the gas is lost at the same rate, and ii) a metal-enhanced wind, where metals produced by supernovae are preferentially lost. We also study a case without wind. We vary the star formation efficiency (SFE), the wind efficiency, and the timescale of the gas infall, which are the most important parameters in our models. Results. We find that only models with metal-enhanced galactic winds can reproduce the properties of IC 10. The star formation must have proceeded in bursts rather than continuously, and the bursts must have been fewer than similar to 10 over the whole galactic lifetime. Finally, IC 10 must have formed by a slow process of gas accretion, with a timescale of the order of 8 Gyr.
Abstract:
Context. We present spectroscopic ground-based observations of the early Be star HD 49330 obtained simultaneously with the CoRoT-LRA1 run, just before the outburst observed in the CoRoT data. Aims. Ground-based spectroscopic observations of the early Be star HD 49330 obtained during the precursor phase and just before the start of an outburst allow us to disentangle stellar and circumstellar contributions and to identify modes of stellar pulsation in this rapidly rotating star. Methods. Time series analysis (TSA) is performed on photospheric line profiles of He I and Si III by means of the least-squares method. Results. We find two main frequencies, f1 = 11.86 c d(-1) and f2 = 16.89 c d(-1), which can be associated with high-order p-mode pulsations. We also detect a frequency f3 = 1.51 c d(-1), which can be associated with a low-order g-mode. Moreover, we show that the stellar line profile variability changed over the spectroscopic run. These results are in agreement with the results of the CoRoT data analysis, as shown in Huat et al. (2009). Conclusions. Our study of mid- and short-term spectroscopic variability allows the identification of p- and g-modes in HD 49330. It also reveals changes in the line profile variability before the start of an outburst. This brings new constraints for the seismic modelling of this star.
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJK(s) photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMD) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis thanks to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than similar to 0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars due to a decrease in completeness and the gain of stars due to an increase in stellar density. For a spatial resolution of similar to 0.1 deg(2), the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower statistics for stars younger than similar to 0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t).
If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Delta(m-M)(0) similar to 0.02 mag and Delta E(B-V) similar to 0.01 mag, for each individual field over the LMC. The propagation of these errors into SFR(t) implies systematic errors below 30%. This level of accuracy in SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
Abstract:
Over the last decade, X-ray observations have revealed the existence of several classes of isolated neutron stars (INSs) which are radio-quiet or exhibit radio emission with properties much at variance with those of ordinary radio pulsars. The identification of new sources is crucial in order to understand the relations among the different classes and to compare observational constraints with theoretical expectations. A recent analysis of the 2XMMp catalogue provided fewer than 30 new thermally emitting INS candidates. Among these, the source 2XMM J104608.7-594306 appears particularly interesting because of the softness of its X-ray spectrum, kT = 117 +/- 14 eV and N(H) = (3.5 +/- 1.1) x 10(21) cm(-2) (3 sigma), and of the present upper limits in the optical, m(B) greater than or similar to 26, m(V) greater than or similar to 25.5 and m(R) greater than or similar to 25 (98.76% confidence level), which imply a logarithmic X-ray-to-optical flux ratio log(F(X)/F(V)) greater than or similar to 3.1, corrected for absorption. We present the X-ray and optical properties of 2XMM J104608.7-594306 and discuss its nature in the light of two possible scenarios invoked to explain the X-ray thermal emission from INSs: the release of residual heat in a cooling neutron star, as in the seven radio-quiet ROSAT-discovered INSs, and accretion from the interstellar medium. We find that the present observational picture of 2XMM J104608.7-594306 is consistent with a distant cooling INS with properties in agreement with the most up-to-date expectations of population synthesis models: it is fainter, hotter and more absorbed than the seven ROSAT sources and possibly located in the Carina Nebula, a region likely to harbour unidentified cooling neutron stars. The accretion scenario, although not entirely ruled out by observations, would require a very slow (similar to 10 km s(-1)) INS accreting at the Bondi-Hoyle rate.
Abstract:
Mercury (Hg) pollution is one of the most serious environmental problems. Owing to public concern prompted by the symptoms displayed by people who consumed contaminated fish in Minamata, Japan, in 1956, Hg pollution has since been kept under constant surveillance. However, despite considerable accumulation of knowledge on the noxious effects of ingested or inhaled Hg, especially in humans, virtually nothing is known about its genotoxic effects. Because increased mitotic crossing-over is assumed to be the first step leading to carcinogenesis, we used a sensitive short-term test (the homozygotization index) to look for DNA alterations induced by Hg fumes. In one Aspergillus nidulans diploid strain (UT448//UT184), the effects of the Hg fumes appeared scattered all over the DNA, producing recombination frequencies 3.05 times higher than the mean of the other strains. Another diploid (Dp II-I//UT184) was little affected by Hg. This led us to hypothesize that a genetic factor present in the UT184 master strain genome, close to the nicB8 genetic marker, is responsible for this behavior. These findings corroborate our previous finding that the homozygotization index can be used as a bioassay for rapid and efficient assessment of ecotoxicological hazards.
Abstract:
Size-resolved vertical aerosol number fluxes of particles in the diameter range 0.25-2.5 mu m were measured with the eddy covariance method from a 53 m high tower over the Amazon rain forest, 60 km NNW of Manaus, Brazil. This study focuses on data measured during the relatively clean wet season, but a shorter measurement period from the more polluted dry season is used as a comparison. Size-resolved net particle fluxes in the five lowest size bins, representing 0.25-0.45 mu m in diameter, were in general dominated by deposition in more or less all wind sectors in the wet season. This is an indication that the source of primary biogenic aerosol particles may be small in this particle size range. Transfer velocities within this particle size range were observed to increase linearly with increasing friction velocity and increasing particle diameter. In the diameter range 0.5-2.5 mu m, vertical particle fluxes were highly dependent on wind direction. In wind sectors where anthropogenic influence was low, net upward fluxes were observed. However, in wind sectors associated with higher anthropogenic influence, deposition fluxes dominated. The net upward fluxes were interpreted as a result of primary biogenic aerosol emission, but deposition of anthropogenic particles seems to have masked this emission in wind sectors with higher anthropogenic influence. The net emission fluxes were at a maximum in the afternoon, when the mixed layer is well developed, and were best correlated with horizontal wind speed according to the equation log(10)F = 0.48 U + 2.21, where F is the net emission number flux of 0.5-2.5 mu m particles [m(-2) s(-1)] and U is the horizontal wind speed [m s(-1)] at the top of the tower.
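The wind-speed fit above can be evaluated directly, since log(10)F = 0.48 U + 2.21 implies F = 10^(0.48 U + 2.21). A minimal sketch (the function name is ours, not from the study):

```python
def net_emission_flux(wind_speed_ms):
    """Net emission number flux of 0.5-2.5 um particles [m^-2 s^-1],
    from the empirical fit log10(F) = 0.48*U + 2.21, with U the
    horizontal wind speed [m s^-1] at the top of the tower."""
    return 10.0 ** (0.48 * wind_speed_ms + 2.21)

# At U = 0 the fit extrapolates to 10^2.21, i.e. roughly 160 particles m^-2 s^-1;
# each additional ~2.08 m/s of wind multiplies the flux by ten.
```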
Abstract:
Through long-range transport of dust, the North African desert supplies essential minerals to the Amazon rain forest. Since North African dust reaches South America mostly during the Northern Hemisphere winter, the dust sources active during winter are the main contributors to the forest. Given that the Bodélé depression area in southwestern Chad is the main winter dust source, a close link is expected between the Bodélé emission patterns and volumes and the mineral supply flux to the Amazon. Until now, the particular link between the Bodélé and the Amazon forest was based on sparse satellite measurements and modeling studies. In this study, we combine a detailed analysis of space-borne and ground data with reanalysis model data and surface measurements taken in the central Amazon during the Amazonian Aerosol Characterization Experiment (AMAZE-08) in order to explore the validity and the nature of the proposed link between the Bodélé depression and the Amazon forest. This case study follows the dust events of 11-16 and 18-27 February 2008, from the emission in the Bodélé over West Africa (most likely with contributions from other dust sources in the region), through the crossing of the Atlantic Ocean, to the observed effects above the Amazon canopy about 10 days after the emission. The dust was lifted by surface winds stronger than 14 m s(-1), usually starting early in the morning. The lofted dust, mixed with biomass burning aerosols over Nigeria, was transported over the Atlantic Ocean and arrived over the South American continent. The top of the aerosol layer reached above 3 km, and the bottom merged with the boundary layer. The arrival of the dusty air parcel over the Amazon forest increased the average concentration of aerosol crustal elements by an order of magnitude.
Abstract:
In-situ measurements in convective clouds (up to the freezing level) over the Amazon basin show that smoke from deforestation fires prevents clouds from precipitating until they acquire a vertical development of at least 4 km, compared to only 1-2 km in clean clouds. The average cloud depth required for the onset of warm rain increased by similar to 350 m for each additional 100 cloud condensation nuclei per cm(3) at a super-saturation of 0.5% (CCN0.5%). In polluted clouds, the diameter of modal liquid water content grows much slower with cloud depth (at least by a factor of similar to 2), due to the large number of droplets that compete for available water and to the suppressed coalescence processes. Contrary to what other studies have suggested, we did not observe this effect to reach saturation at 3000 or more accumulation mode particles per cm(3). The CCN0.5% concentration was found to be a very good predictor for the cloud depth required for the onset of warm precipitation and other microphysical factors, leaving only a secondary role for the updraft velocities in determining the cloud drop size distributions. The effective radius of the cloud droplets (r(e)) was found to be a quite robust parameter for a given environment and cloud depth, showing only a small effect of partial droplet evaporation from the cloud's mixing with its drier environment. This supports one of the basic assumptions of satellite analysis of cloud microphysical processes: the ability to look at different cloud top heights in the same region and regard their r(e) as if they had been measured inside one well developed cloud. The dependence of r(e) on the adiabatic fraction decreased higher in the clouds, especially for cleaner conditions, and disappeared at r(e) >= similar to 10 mu m.
We propose that droplet coalescence, which is at its peak when warm rain forms in the cloud at r(e) = similar to 10 mu m, continues to be significant during the cloud's mixing with the entrained air, cancelling out the decrease in r(e) due to evaporation.
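The reported trend, a warm-rain onset depth rising by similar to 350 m per additional 100 CCN0.5% per cm(3) above a clean-cloud baseline of 1-2 km, can be sketched as a simple linear relation. The function name, the clean reference concentration, and the baseline depth below are illustrative assumptions, not values given in the study:

```python
def warm_rain_onset_depth_m(ccn_per_cm3, ccn_clean=300.0, clean_depth_m=1500.0):
    """Cloud depth (m) required for warm-rain onset, rising ~350 m per
    additional 100 CCN/cm^3 (i.e. 3.5 m per CCN/cm^3) above a clean
    baseline. ccn_clean and clean_depth_m are illustrative assumptions."""
    extra_ccn = max(ccn_per_cm3 - ccn_clean, 0.0)
    return clean_depth_m + 3.5 * extra_ccn
```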
Abstract:
We searched for a sidereal modulation in the MINOS far detector neutrino rate. Such a signal would be a consequence of Lorentz and CPT violation as described by the standard-model extension framework. It would also be the first detection of a perturbative effect on conventional neutrino mass oscillations. We found no evidence for this sidereal signature, and the upper limits placed on the magnitudes of the Lorentz- and CPT-violating coefficients describing the theory are an improvement by factors of 20-510 over the current best limits, obtained using the MINOS near detector.
Abstract:
PHENIX has measured the e(+)e(-) pair continuum in root s(NN) = 200 GeV Au+Au and p+p collisions over a wide range of mass and transverse momenta. The e(+)e(-) yield is compared to the expectations from hadronic sources, based on PHENIX measurements. In the intermediate-mass region, between the masses of the phi and the J/psi meson, the yield is consistent with expectations from correlated c-cbar production, although other mechanisms are not ruled out. In the low-mass region, below the phi, the p+p inclusive mass spectrum is well described by known contributions from light meson decays. In contrast, the Au+Au minimum bias inclusive mass spectrum in this region shows an enhancement by a factor of 4.7 +/- 0.4(stat) +/- 1.5(syst) +/- 0.9(model). At low mass (m(ee) < 0.3 GeV/c(2)) and high p(T) (1 < p(T) < 5 GeV/c) an enhanced e(+)e(-) pair yield is observed that is consistent with production of virtual direct photons. This excess is used to infer the yield of real direct photons. In central Au+Au collisions, the excess of the direct photon yield over the p+p yield is exponential in p(T), with inverse slope T = 221 +/- 19(stat) +/- 19(syst) MeV. Hydrodynamical models with initial temperatures ranging from T(init) similar or equal to 300-600 MeV at times of 0.6-0.15 fm/c after the collision are in qualitative agreement with the direct photon data in Au+Au. For low p(T) < 1 GeV/c the low-mass region shows a further significant enhancement that increases with centrality and has an inverse slope of T similar or equal to 100 MeV. Theoretical models underpredict the low-mass, low-p(T) enhancement.
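An exponential p(T) spectrum with inverse slope T means the excess yield falls as exp(-p(T)/T). A minimal sketch of this fit form (the function name and the unit normalization are ours, for illustration only):

```python
import math

def direct_photon_excess(pt_gev, T_gev=0.221, norm=1.0):
    """Exponential fit form for the direct-photon excess over p+p:
    dN/dpT proportional to exp(-pT / T), with inverse slope T = 221 MeV.
    norm is an arbitrary illustrative normalization."""
    return norm * math.exp(-pt_gev / T_gev)

# Each increase of pT by one inverse slope (0.221 GeV/c) reduces the
# excess yield by a factor of e.
```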
Abstract:
The solvent effects on the low-lying absorption spectrum and on the (15)N chemical shielding of pyrimidine in water are calculated using combined and sequential Monte Carlo simulations and quantum mechanical calculations. Special attention is devoted to the solute polarization. This is included by a previously developed iterative procedure in which the solute is electrostatically equilibrated with the solvent. In addition, we verify the simple yet unexplored alternative of combining the polarizable continuum model (PCM) and the hybrid QM/MM method. We use PCM to obtain the average solute polarization and include this in the MM part of the sequential QM/MM methodology, PCM-MM/QM. These procedures are compared and further used in the discrete and the explicit solvent models. The use of the PCM polarization implemented in the MM part seems to generate a very good description of the average solute polarization, leading to very good results for the n-pi* excitation energy and the (15)N nuclear chemical shielding of pyrimidine in aqueous environment. The best results obtained here, using the solute pyrimidine surrounded by 28 explicit water molecules embedded in the electrostatic field of the remaining 472 molecules, give statistically converged values for the low-lying n-pi* absorption transition in water of 36 900 +/- 100 (PCM polarization) and 36 950 +/- 100 cm(-1) (iterative polarization), in excellent agreement with one another and with the experimentally observed band maximum at 36 900 cm(-1). For the (15)N nuclear shielding, the corresponding gas-to-water chemical shifts obtained using the solute pyrimidine surrounded by 9 explicit water molecules embedded in the electrostatic field of the remaining 491 molecules give statistically converged values of 24.4 +/- 0.8 and 28.5 +/- 0.8 ppm, compared with the inferred experimental value of 19 +/- 2 ppm.
Considering the simplicity of the PCM relative to the iterative polarization, this is an important aspect, and the computational savings point to the possibility of dealing with larger solute molecules. This PCM-MM/QM approach reconciles the simplicity of the PCM model with the reliability of combined QM/MM approaches.
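As a quick check on the scale of the transition energies quoted above, a wavenumber can be converted to a wavelength via lambda [nm] = 10^7 / nu [cm(-1)]. A minimal sketch (the function name is ours):

```python
def wavenumber_to_nm(wavenumber_cm1):
    """Convert a transition energy in cm^-1 to a wavelength in nm,
    using lambda[nm] = 1e7 / nu[cm^-1]."""
    return 1e7 / wavenumber_cm1

# The n-pi* band maximum near 36 900 cm^-1 corresponds to ~271 nm,
# i.e. an ultraviolet transition.
```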