135 results for Volatile signature
Abstract:
The quantification of sources of carbonaceous aerosol is important to understand their atmospheric concentrations and regulating processes and to study possible effects on climate and air quality, in addition to developing mitigation strategies. In the framework of the European Integrated Project on Aerosol Cloud Climate Interactions (EUCAARI), fine (D(p) < 2.5 mu m) and coarse (2.5 mu m < D(p) < 10 mu m) aerosol particles were sampled from February to June (wet season) and from August to September (dry season) 2008 in the central Amazon basin. The mass of fine particles averaged 2.4 mu g m(-3) during the wet season and 4.2 mu g m(-3) during the dry season. The average coarse aerosol mass concentration during the wet and dry periods was 7.9 and 7.6 mu g m(-3), respectively. The overall chemical composition of fine and coarse mass did not show any seasonality, with the largest fraction of fine and coarse aerosol mass explained by organic carbon (OC); the average OC to mass ratio was 0.4 and 0.6 in the fine and coarse aerosol modes, respectively. The mass absorbing cross section of soot was determined by comparison of elemental carbon and light absorption coefficient measurements and was equal to 4.7 m(2) g(-1) at 637 nm. Carbon aerosol sources were identified by Positive Matrix Factorization (PMF) analysis of thermograms: 44% of fine total carbon mass was assigned to biomass burning, 43% to secondary organic aerosol (SOA), and 13% to volatile species that are difficult to apportion. In the coarse mode, primary biogenic aerosol particles (PBAP) dominated the carbonaceous aerosol mass. The results confirmed the importance of PBAP in forested areas. The source apportionment results were employed to evaluate the ability of global chemistry transport models to simulate carbonaceous aerosol sources at a regional tropical background site. The comparison showed an overestimation of elemental carbon (EC) by the TM5 model during the dry season and of OC during both the dry and wet periods, likely due to overestimated biomass burning emission inventories and SOA production over tropical areas.
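As a minimal illustration of how such a mass absorbing cross section is obtained, the sketch below regresses a light-absorption coefficient against collocated elemental-carbon concentrations (MAC is the slope of b_abs versus EC). All numbers are hypothetical placeholders, not the EUCAARI data.

```python
import numpy as np

# Hypothetical paired measurements (not the EUCAARI data set):
# light-absorption coefficient at 637 nm and elemental-carbon (EC)
# mass concentration, in SI units.
b_abs = np.array([1.8e-6, 3.1e-6, 4.4e-6, 2.6e-6])      # m^-1
ec = np.array([0.40e-6, 0.65e-6, 0.95e-6, 0.55e-6])     # g m^-3

# The mass absorbing cross section is the slope of b_abs versus EC;
# a zero-intercept least-squares fit gives it directly in m^2 g^-1.
mac = np.sum(b_abs * ec) / np.sum(ec * ec)
print(f"MAC at 637 nm ~ {mac:.1f} m^2 g^-1")
```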
Abstract:
We searched for a sidereal modulation in the MINOS far detector neutrino rate. Such a signal would be a consequence of Lorentz and CPT violation as described by the standard-model extension framework. It would also be the first detection of a perturbative effect on conventional neutrino mass oscillations. We found no evidence for this sidereal signature, and the upper limits placed on the magnitudes of the Lorentz and CPT violating coefficients describing the theory improve on the current best limits, obtained using the MINOS near detector, by factors of 20-510.
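A sidereal-modulation search of this kind generically amounts to fitting the event rate, binned in local sidereal phase, to a constant plus sidereal harmonics. The sketch below illustrates that idea on synthetic counts; it is not the MINOS analysis code, and the binning and the harmonics retained are assumptions.

```python
import numpy as np

# Synthetic event counts binned in local sidereal phase (0..1).  Under the
# standard-model extension a Lorentz-violating signal would appear as a
# modulation at the sidereal frequency and its second harmonic.
rng = np.random.default_rng(0)
phase = (np.arange(24) + 0.5) / 24.0            # bin centres, in sidereal days
counts = rng.poisson(lam=100.0, size=24)        # flat rate: the null hypothesis

# Least-squares fit to a constant plus sin/cos terms at omega and 2*omega.
w = 2.0 * np.pi * phase
X = np.column_stack([np.ones_like(w),
                     np.sin(w), np.cos(w),
                     np.sin(2.0 * w), np.cos(2.0 * w)])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)

amp1 = np.hypot(coef[1], coef[2])   # amplitude at the sidereal frequency
amp2 = np.hypot(coef[3], coef[4])   # amplitude at twice the sidereal frequency
print(f"first harmonic: {amp1:.2f} events/bin, second harmonic: {amp2:.2f} events/bin")
```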
Abstract:
A search for depletion of the combined flux of active neutrino species over a 735 km baseline is reported using neutral-current interaction data recorded by the MINOS detectors in the NuMI neutrino beam. Such a depletion is not expected according to conventional interpretations of neutrino oscillation data involving the three known neutrino flavors. A depletion would be a signature of oscillations or decay to postulated noninteracting sterile neutrinos, scenarios not ruled out by existing data. From an exposure of 3.18 x 10(20) protons on target, in which neutrinos of energies between approximately 500 MeV and 120 GeV are produced predominantly as nu(mu), the visible energy spectrum of candidate neutral-current reactions in the MINOS far detector is reconstructed. Comparison of this spectrum to that inferred from a similarly selected near-detector sample shows that, of the portion of the nu(mu) flux observed to disappear in charged-current interaction data, the fraction that could be converting to a sterile state is less than 52% at 90% confidence level (C.L.). The hypothesis that active neutrinos mix with a single sterile neutrino via oscillations is tested by fitting the data to various models. In the particular four-neutrino models considered, the mixing angles theta(24) and theta(34) are constrained to be less than 11 degrees and 56 degrees at 90% C.L., respectively. The possibility that active neutrinos may decay to sterile neutrinos is also investigated. Pure neutrino decay without oscillations is ruled out at 5.4 standard deviations. For the scenario in which active neutrinos decay into sterile states concurrently with neutrino oscillations, a lower limit is established for the neutrino decay lifetime tau(3)/m(3) > 2.1 x 10(-12) s/eV at 90% C.L.
Abstract:
A search for a sidereal modulation in the MINOS near detector neutrino data was performed. If present, this signature could be a consequence of Lorentz and CPT violation as predicted by the effective field theory called the standard-model extension. No evidence for a sidereal signal in the data set was found, implying that there is no significant change in neutrino propagation that depends on the direction of the neutrino beam in a sun-centered inertial frame. Upper limits on the magnitudes of the Lorentz and CPT violating terms in the standard-model extension lie between 10(-4) and 10(-2) of the maximum expected, assuming a suppression of these signatures by a factor of 10(-17).
Abstract:
Cosmological analyses based on currently available observations are unable to rule out a sizeable coupling between dark energy and dark matter. However, the signature of the coupling is not easy to grasp, since the coupling is degenerate with other cosmological parameters, such as the dark energy equation of state and the dark matter abundance. We discuss possible ways to break such degeneracy. Based on the perturbation formalism, we carry out a global fit using the latest observational data and obtain a tight constraint on the interaction between the dark sectors. We find that an appropriate interaction can alleviate the coincidence problem.
Abstract:
Cross sections of (120)Sn(alpha,alpha)(120)Sn elastic scattering have been extracted from the alpha-particle-beam contamination of a recent (120)Sn((6)He,(6)He)(120)Sn experiment. Both reactions are analyzed using systematic double-folding potentials in the real part and smoothly varying Woods-Saxon potentials in the imaginary part. The potential extracted from the (120)Sn((6)He,(6)He)(120)Sn data may be used as the basis for the construction of a simple global (6)He optical potential. The comparison of the (6)He and alpha data shows that the halo nature of the (6)He nucleus leads to a clear signature in the reflection coefficients eta(L): the relevant angular momenta L with eta(L) >> 0 and eta(L) << 1 are shifted to larger L with a broader distribution. This signature is not present in the alpha-scattering data and can thus be used as a new criterion for the definition of a halo nucleus.
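For reference, the Woods-Saxon form mentioned above is W(r) = -W0 / (1 + exp((r - R)/a)). The short sketch below simply evaluates it; the depth, radius and diffuseness are illustrative placeholders, not the fitted values of this analysis.

```python
import numpy as np

# Woods-Saxon form used for the imaginary part of the optical potential:
# W(r) = -W0 / (1 + exp((r - R) / a)).  The parameters below are
# illustrative placeholders, not fitted values.
def woods_saxon(r, W0=20.0, R=6.6, a=0.6):      # MeV, fm, fm
    return -W0 / (1.0 + np.exp((r - R) / a))

r = np.linspace(0.0, 15.0, 7)                   # fm
print(np.round(woods_saxon(r), 3))
```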
Abstract:
We present the results of an elliptic flow, v(2), analysis of Cu + Cu collisions recorded with the solenoidal tracker detector (STAR) at the BNL Relativistic Heavy Ion Collider at root s(NN) = 62.4 and 200 GeV. Elliptic flow as a function of transverse momentum, v(2)(p(T)), is reported for different collision centralities for charged hadrons h(+/-) and strangeness-containing hadrons K(S)(0), Lambda, Xi, and phi in the midrapidity region |eta| < 1.0. A significant reduction in the systematic uncertainty of the measurement due to nonflow effects has been achieved by correlating particles at midrapidity, |eta| < 1.0, with those at forward rapidity, 2.5 < |eta| < 4.0. We also present azimuthal correlations in p + p collisions at root s = 200 GeV to help in estimating nonflow effects. To study the system-size dependence of elliptic flow, we present a detailed comparison with previously published results from Au + Au collisions at root s(NN) = 200 GeV. We observe that v(2)(p(T)) of strange hadrons has similar scaling properties as were first observed in Au + Au collisions, that is, (i) at low transverse momenta, p(T) < 2 GeV/c, v(2) scales with transverse kinetic energy, m(T) - m, and (ii) at intermediate p(T), 2 < p(T) < 4 GeV/c, it scales with the number of constituent quarks, n(q). We have found that ideal hydrodynamic calculations fail to reproduce the centrality dependence of v(2)(p(T)) for K(S)(0) and Lambda. Eccentricity-scaled v(2) values, v(2)/epsilon, are larger in more central collisions, suggesting that stronger collective flow develops in more central collisions. The comparison with Au + Au collisions, which reach higher densities, shows that v(2)/epsilon depends on the system size, that is, on the number of participants N(part). This indicates that the ideal hydrodynamic limit is not reached in Cu + Cu collisions, presumably because full thermalization is not attained.
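The scaling variables quoted above are straightforward to form: KE_T = m_T - m with m_T = sqrt(p_T^2 + m^2), and the constituent-quark scaling compares v2/n_q against KE_T/n_q. The sketch below does this for two species using a toy v2(p_T) curve; every value is synthetic, not STAR data.

```python
import numpy as np

# Toy v2(pT) curves for two strange hadrons, used only to show how the
# scaling variables quoted above are formed; none of these numbers are data.
species = {
    # name: (mass in GeV/c^2, number of constituent quarks n_q)
    "K0S": (0.498, 2),
    "Lambda": (1.116, 3),
}
pt = np.linspace(0.5, 4.0, 8)                       # GeV/c

for name, (m, nq) in species.items():
    mt = np.sqrt(pt**2 + m**2)                      # transverse mass
    ke_t = mt - m                                   # transverse kinetic energy m_T - m
    v2 = 0.10 * np.tanh(ke_t)                       # toy flow coefficient, not measured v2
    # Constituent-quark scaling: compare v2/n_q as a function of KE_T/n_q.
    print(name, np.round(ke_t / nq, 2), np.round(v2 / nq, 3))
```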
Abstract:
We show that bifurcations in chaotic scattering manifest themselves through the appearance of an infinitely fine-scale structure of singularities in the cross section. These "rainbow singularities" are created in a cascade, which is closely related to the bifurcation cascade undergone by the set of trapped orbits (the chaotic saddle). This cascade provides a signature in the differential cross section of the complex pattern of bifurcations of orbits underlying the transition to chaotic scattering. We show that there is a power law with a universal coefficient governing the sequence of births of rainbow singularities and we verify this prediction by numerical simulations.
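A power law governing such a sequence of births can be checked numerically with a log-log linear fit, as in the sketch below; the data and the exponent there are synthetic, chosen only to illustrate the fitting step.

```python
import numpy as np

# Synthetic stand-in for the sequence of parameter values at which successive
# rainbow singularities are born.  A power law x_n ~ A * n**(-gamma) is a
# straight line in log-log coordinates, so gamma follows from a linear fit.
rng = np.random.default_rng(1)
n = np.arange(1, 21)
gamma_true = 1.5                                   # arbitrary, for illustration only
x = 2.0 * n**(-gamma_true) * np.exp(0.02 * rng.normal(size=n.size))

slope, intercept = np.polyfit(np.log(n), np.log(x), 1)
print(f"fitted exponent: {-slope:.2f} (input value {gamma_true})")
```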
Abstract:
We report on the experimental observation of vortex tangles in an atomic Bose-Einstein condensate (BEC) of (87)Rb atoms when an external oscillatory perturbation is introduced in the trap. The vortex tangle configuration is a signature of the presence of a turbulent regime in the cloud. We also show that this turbulent cloud suppresses the aspect ratio inversion typically observed in quantum degenerate bosonic gases during free expansion. Instead, the cloud expands keeping the ratio between its axes constant. Turbulence in atomic superfluids may constitute an alternative system in which to investigate decay mechanisms as well as to test fundamental theoretical aspects of this field.
Abstract:
Data collected at the Pierre Auger Observatory are used to establish an upper limit on the diffuse flux of tau neutrinos in the cosmic radiation. Earth-skimming nu(tau) may interact in the Earth's crust and produce a tau lepton by means of charged-current interactions. The tau lepton may emerge from the Earth and decay in the atmosphere to produce a nearly horizontal shower with a typical signature: a persistent electromagnetic component even at very large atmospheric depths. The search procedure to select events induced by tau decays against the background of normal showers induced by cosmic rays is described. The method used to compute the exposure for a detector continuously growing with time is detailed. Systematic uncertainties in the exposure from the detector, the analysis, and the involved physics are discussed. No tau neutrino candidates have been found. For neutrinos in the energy range 2x10(17) eV < E(nu) < 2x10(19) eV, assuming a diffuse spectrum of the form E(nu)(-2), data collected between 1 January 2004 and 30 April 2008 yield a 90% confidence-level upper limit of E(nu)(2)dN(nu tau)/dE(nu) < 9x10(-8) GeV cm(-2) s(-1) sr(-1).
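For a null search of this kind, the standard counting argument gives the limit on an E^-2 diffuse flux Phi(E) = k E^-2 as k_90 = N_90 / Integral[E^-2 Exposure(E) dE], with N_90 = 2.44 events for zero candidates and zero expected background (Feldman-Cousins). The sketch below evaluates this simplified version with a purely hypothetical exposure model, not the Auger exposure, and ignores the systematic-uncertainty treatment of the actual analysis.

```python
import numpy as np

# For a diffuse flux Phi(E) = k * E**-2 and zero observed candidates with zero
# expected background, the 90% C.L. Feldman-Cousins limit is N_90 = 2.44 events,
# so k_90 = N_90 / Integral(E**-2 * Exposure(E) dE).  The exposure model below
# is a hypothetical toy, not the Pierre Auger Observatory exposure.
N_90 = 2.44

def exposure(E):                                   # cm^2 s sr, toy shape
    return 1.0e16 * (E / 1.0e18) ** 0.5

E = np.linspace(2.0e17, 2.0e19, 200_000)           # eV, the quoted energy range
integrand = exposure(E) * E ** -2.0
integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(E))

k_90 = N_90 / integral                             # eV cm^-2 s^-1 sr^-1
print(f"toy limit: E^2 dN/dE < {k_90:.2e} eV cm^-2 s^-1 sr^-1")
```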
Abstract:
Background: Feature selection is a pattern recognition approach to choose important variables according to some criteria in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). There are many genomic and proteomic applications that rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g., normal tissues and several types of cancer; or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source, multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphic visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphic visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although its main focus is on bioinformatics tasks.
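As one concrete example of the kind of algorithm such an environment supports, the sketch below implements plain sequential forward selection with a least-squares residual-variance criterion. It is a generic illustration under assumed data, not code from the described software, and the criterion function is a stand-in for the information-theoretic criteria it offers.

```python
import numpy as np

def forward_selection(X, y, criterion, k):
    """Greedy sequential forward selection: grow the feature set one variable
    at a time, each step adding the feature that maximises the criterion."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        scores = [(criterion(X[:, selected + [j]], y), j) for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

def neg_residual_variance(Xs, y):
    """Toy criterion: the (negated) variance left after a least-squares fit on Xs."""
    A = np.column_stack([Xs, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return -np.var(y - A @ coef)

# Synthetic data: y depends only on features 0 and 3.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
print(forward_selection(X, y, neg_residual_variance, k=2))   # expected: [0, 3]
```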
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test if two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum over the space of trees of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
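The max-flow step can be illustrated with a plain Ford-Fulkerson implementation using BFS augmenting paths (Edmonds-Karp), as sketched below on a toy graph; the construction of the actual graph from the two context-tree samples is specific to the paper and is not reproduced here.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).
    `capacity` is a dict of dicts: capacity[u][v] = capacity of edge u -> v."""
    # Build a residual graph in which every edge has its reverse present.
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    residual = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck capacity along the path, then push that much flow.
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Small example: the maximum flow from 's' to 't' below is 5.
g = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(g, "s", "t"))
```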
Abstract:
Lignin phenols were measured in the sediments of Sepetiba Bay, Rio de Janeiro, Brazil, and in bedload sediments and suspended sediments of the four major fluvial inputs to the bay: the Sao Francisco and Guandu Channels and the Guarda and Cacao Rivers. Fluvial suspended lignin yields (Sigma 8 3.5-14.6 mgC 10 g dw(-1)) vary little between the wet and dry seasons and are poorly correlated with fluvial chlorophyll concentrations (0.8-50.2 mu gC L(-1)). Despite current land use practices that favor grassland agriculture or industrial uses, fluvial lignin compositions are dominated by degraded leaf-sourced material. The exception is the Guarda River, which has a slight influence from grasses. The Lignin Phenol Vegetation Index (LPVI), coupled with acid/aldehyde and 3.5 Db/V ratios, indicates that degraded leaf-derived phenols are also the primary preserved lignin component in the bay. The presence of fringe Typha sp. and Spartina sp. grass beds surrounding portions of the bay is not reflected in the lignin signature. Instead, lignin entering the bay appears to reflect the erosion of soils carrying a degraded signature from the former Atlantic rain forest that once dominated the watershed, rather than a significant signature derived from current agricultural uses. A three-component mixing model using the LPVI, atomic N:C ratios, and stable carbon isotopes (which range between -26.8 and -21.8 parts per thousand) supports the hypothesis that fluvial inputs to the bay are dominated by planktonic matter (78% of the input), with lignin dominated by leaf (14% of the input) over grass (6%). Sediments are composed of a roughly 50-50 mixture of autochthonous and terrigenous material, with the lignin being primarily leaf-sourced. (C) 2010 Elsevier Ltd. All rights reserved.
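A three-component mixing model of this kind reduces to a small linear system: two tracer balances (delta13C and N:C) plus the constraint that the three fractions sum to one. The sketch below solves such a system with purely illustrative end-member and sample values, not those of this study.

```python
import numpy as np

# Three end members (plankton, leaf, grass) characterised by two tracers,
# delta13C and atomic N:C, plus the constraint f1 + f2 + f3 = 1.  All end-member
# and sample values below are illustrative placeholders, not values from this study.
#                 plankton  leaf   grass
d13c = np.array([-21.0, -28.5, -13.0])      # per mil (PDB)
nc = np.array([0.15, 0.04, 0.06])           # atomic N:C

sample_d13c, sample_nc = -23.0, 0.10        # hypothetical bulk sample

A = np.vstack([d13c, nc, np.ones(3)])       # two tracer balances + mass balance
b = np.array([sample_d13c, sample_nc, 1.0])
f_plankton, f_leaf, f_grass = np.linalg.solve(A, b)
print(np.round([f_plankton, f_leaf, f_grass], 2))   # fractions summing to 1
```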
Abstract:
Sedimentary organic matter (SOM) is a good tool for evaluating the environments where sediments are deposited. We determined the elemental and C- and N-isotopic compositions of 211 sub-surface sediment samples from 13 cores (ranging from 18 to 46 cm), collected in the Cananeia-Iguape estuarine-lagoonal system. The aim of this research is to evaluate the environmental variations of this tropical coastal micro-tidal system over the last decades through the SOM distribution. The studied parameters show differences between the cores located in the northern (sandy-silt sediments) and southern (sand and silty-sand) portions. The whole area presents a mixed organic matter origin signature (local mangrove plants: delta(13)C < -25.6 parts per thousand PDB; phytoplankton: -19.4 parts per thousand PDB). The northern cores, which experienced higher sediment deposition rates (1.46 cm year(-1)), are more homogeneous, presenting lower delta(13)C values (< -25.2 parts per thousand PDB) and higher C/N values (in general > 14), directly related to the terrestrial input from the Ribeira de Iguape River (24,000 km(2) basin). The southern portion presents lower sedimentation rates (0.38 cm year(-1)) and is associated with a small river basin (1,340 km(2)), with delta(13)C values of -25.0 to -23.0 parts per thousand PDB and C/N ratios of 11 to 15. In general, the elemental contents in the cores may be considered low to medium (< 2.0% C, < 0.1% N) compared to similar environments. Although a greater marine influence is observed in the southern portion of the system, the majority of the cores show a marked increase in continental deposition, most likely related to the strong silting that the area has undergone since the 1850s, when an artificial channel was built linking the Ribeira River directly to the estuarine-lagoonal system.
Abstract:
The exploitation of aqueous biphasic extraction is proposed for the first time in flow analysis. This extraction strategy stands out as environmentally attractive since it is based on the use of two immiscible phases that are intrinsically aqueous. The organic solvents of traditional liquid-liquid extractions are no longer used, being replaced by non-toxic, non-flammable and non-volatile ones. A single interface flow analysis (SIFA) system was implemented to carry out the extraction process due to its favourable operational characteristics, which include a high level of automation and simplicity of operation, the establishment of a dynamic interface where mass transfer occurs between the two immiscible aqueous phases, and versatile control over the extraction process, namely the extraction time. The application selected to demonstrate the feasibility of SIFA for performing this aqueous biphasic extraction was the pre-concentration of lead. After extraction, lead reacted with 8-hydroxyquinoline-5-sulfonic acid and the resulting product was determined by a fluorimetric detector included in the flow manifold. Therefore, the SIFA single interface was used both as an extraction (enrichment) and a reaction interface. (C) 2010 Elsevier B.V. All rights reserved.