893 results for data analysis: algorithms and implementation
Abstract:
Enriquillo and Azuei are saltwater lakes located in a closed water basin in the southwestern region of the island of La Hispaniola; they have been experiencing dramatic changes in total lake-surface area during the period 1980-2012. Lake Enriquillo had a surface area of approximately 276 km2 in 1984, gradually decreasing to 172 km2 in 1996. The lake's surface area reached its lowest point in the satellite observation record in 2004, at 165 km2; the lake then began growing again, reaching its 1984 size by 2006. Based on surface-area measurements for June and July 2013, Lake Enriquillo now covers ~358 km2. Lake Azuei's sizes at both ends of the record are 116 km2 in 1984 and 134 km2 in 2013, an overall 15.8% increase in 30 years. Determining the causes of these lake-surface-area changes is of great importance due to their environmental, social, and economic impacts. The overall goal of this study is to quantify the changing water balance in these lakes and their catchment area using satellite and ground observations and a regional atmospheric-hydrologic modeling approach. Data analyses of environmental variables in the region reflect a hydrological imbalance of the lakes due to changing regional hydro-climatic conditions. Historical data show precipitation, land surface temperature and humidity, and sea surface temperature (SST) increasing over the region during the past decades. Salinity levels have also decreased by more than 30% from previously reported baseline levels. Here we present a summary of the historical data obtained, the new sensors deployed in the surrounding sierras and the lakes, and the integrated modeling exercises, as well as the challenges of gathering, storing, sharing, and analyzing this large volume of data from such a diverse number of sources in a remote location.
Abstract:
The aim of this article is to assess the role of real effective exchange rate (RER) volatility in long-run economic growth for a set of 82 advanced and emerging economies, using a panel data set ranging from 1970 to 2009. With an accurate measure of exchange rate volatility, the results for the two-step system GMM panel growth models show that a more (less) volatile RER has a significant negative (positive) impact on economic growth, and the results are robust across different model specifications. In addition, exchange rate stability seems to be more important for fostering long-run economic growth than exchange rate misalignment.
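As a rough illustration of how an exchange rate volatility measure can be constructed (the article's exact measure is not specified in this abstract), here is a minimal sketch in Python: volatility as the rolling standard deviation of log changes in an RER index. The series, window length, and frequency are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def rer_volatility(rer: pd.Series, window: int = 12) -> pd.Series:
    """Rolling std of log RER changes -- one common volatility proxy.

    `rer` is a monthly real effective exchange rate index for one country;
    the 12-month window is an illustrative choice, not the article's measure.
    """
    log_changes = np.log(rer).diff()
    return log_changes.rolling(window).std()

# Hypothetical usage: a synthetic monthly RER index for one country, 1970-2009.
idx = pd.date_range("1970-01-01", "2009-12-01", freq="MS")
rer = pd.Series(100 * np.exp(np.random.randn(len(idx)).cumsum() * 0.01), index=idx)
vol = rer_volatility(rer)
print(vol.dropna().head())
```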
Abstract:
Housing is an important component of wealth for a typical household in many countries. The objective of this paper is to investigate the effect of real-estate price variation on welfare, trying to close a gap between the welfare literature in Brazil and that in the U.S., the U.K., and other developed countries. Our first motivation relates to the fact that real estate is probably more important here than elsewhere as a proportion of wealth, which potentially makes the impact of a price change bigger here. Our second motivation relates to the fact that real-estate prices boomed in Brazil in the last five years. Prime real estate in Rio de Janeiro and São Paulo has tripled in value in that period, and a smaller but generalized increase has been observed throughout the country. Third, we have also seen a consumption boom in Brazil in the last five years. Indeed, the recent rise of some of the poor to middle-income status is well documented, not only for Brazil but for other emerging countries as well. Regarding consumption and real-estate prices in Brazil, one cannot infer causality from correlation, but one can do causal inference with an appropriate structural model, or with proper inference in a reduced-form setup. Our last motivation is the complete absence of studies of this kind in Brazil, which makes ours a pioneering study. We assemble a panel-data set for the determinants of non-durable consumption growth by Brazilian states, merging the techniques and ideas in Campbell and Cocco (2007) and in Case, Quigley and Shiller (2005). With appropriate controls, and panel-data methods, we investigate whether house-price variation has a positive effect on non-durable consumption. The results show a non-negligible significant impact of the change in the price of real estate on welfare (consumption), although smaller than what Campbell and Cocco have found. Our findings support the view that the channel through which house prices affect consumption is a financial one.
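As a hedged sketch of the kind of state-level panel regression described (not the authors' exact specification), the following uses the linearmodels package to regress non-durable consumption growth on house-price growth with state fixed effects; all variable names and the synthetic data are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: consumption growth and house-price growth by state/year.
rng = np.random.default_rng(0)
states = [f"state_{i}" for i in range(27)]
years = range(2003, 2013)
idx = pd.MultiIndex.from_product([states, years], names=["state", "year"])
df = pd.DataFrame({
    "dc": rng.normal(size=len(idx)),   # non-durable consumption growth
    "dhp": rng.normal(size=len(idx)),  # real house-price growth
    "dy": rng.normal(size=len(idx)),   # income growth (control)
}, index=idx)

# State fixed effects absorb time-invariant heterogeneity;
# standard errors are clustered by state.
model = PanelOLS(df["dc"], df[["dhp", "dy"]], entity_effects=True)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```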
Abstract:
What can we learn from solar neutrino observations? Is there any solution to the solar neutrino anomaly that is favored by the present experimental panorama? After the SNO results, is it possible to affirm that neutrinos have mass? To answer such questions, we analyze the currently available data from the solar neutrino experiments, including the recent SNO result, in view of several acceptable solutions to the solar neutrino problem based on different conversion mechanisms, for the first time using the same statistical procedure. This allows a direct comparison of the goodness of fit among the different solutions, from which we can discuss and draw conclusions on the current status of each proposed dynamical mechanism. These solutions are based on different assumptions: (a) neutrino mass and mixing, (b) a nonvanishing neutrino magnetic moment, (c) the existence of nonstandard flavor-changing and nonuniversal neutrino interactions, and (d) a tiny violation of the equivalence principle. We investigate the quality of the fit provided by each of these solutions not only to the total rates measured by all the solar neutrino experiments but also to the recoil-electron energy spectrum measured at different zenith angles by the Super-Kamiokande Collaboration. We conclude that several nonstandard neutrino flavor conversion mechanisms provide a very good fit to the experimental data, comparable with (or even slightly better than) that of the most famous solution to the solar neutrino anomaly, based on neutrino oscillations induced by mass.
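A minimal sketch of the shared statistical procedure the abstract emphasizes: scoring every candidate conversion mechanism with the same chi-square statistic against the measured rates, so their goodness of fit can be compared directly. The rates, uncertainties, and model predictions below are hypothetical placeholders, not experimental values.

```python
import numpy as np
from scipy import stats

def chi2_gof(observed, predicted, sigma):
    """Chi-square of a model's predicted rates against measured rates."""
    return float(np.sum(((observed - predicted) / sigma) ** 2))

# Hypothetical measured rates (arbitrary units) and two candidate mechanisms.
observed = np.array([2.56, 0.55, 0.35])
sigma = np.array([0.23, 0.05, 0.03])
models = {
    "mass-induced oscillation": np.array([2.60, 0.52, 0.36]),
    "magnetic moment":          np.array([2.40, 0.58, 0.33]),
}
for name, pred in models.items():
    chi2 = chi2_gof(observed, pred, sigma)
    dof = len(observed) - 1  # illustrative; real analyses count fitted parameters
    print(f"{name}: chi2 = {chi2:.2f}, p = {stats.chi2.sf(chi2, dof):.3f}")
```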
Abstract:
In this paper, a set of representative Brazilian commercial gasoline samples from São Paulo State, selected by HCA, plus six samples obtained directly from refineries, were analysed by a high-sensitivity gas chromatographic (GC) method (ASTM D6733). The levels of saturated hydrocarbons and anhydrous ethanol obtained by GC were correlated with the quality criteria of the Brazilian Government Petroleum, Natural Gas and Biofuels Agency (ANP) specifications through exploratory analysis (HCA and PCA). This correlation showed that the GC method, together with HCA and PCA, could be employed as a screening technique to determine the compliance of Brazilian gasoline with the prescribed legal standards.
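For readers unfamiliar with the exploratory step, here is a minimal sketch of HCA and PCA applied to a sample-by-variable matrix, using scikit-learn and SciPy; the data are synthetic and the choice of two clusters (e.g., conforming vs. non-conforming samples) is an illustrative assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = gasoline samples, columns = GC-derived variables
# (e.g., saturated hydrocarbon and anhydrous ethanol contents).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))

X_std = StandardScaler().fit_transform(X)

# PCA: project the samples onto the first two principal components.
scores = PCA(n_components=2).fit_transform(X_std)

# HCA: Ward linkage on the standardized data, cut into two groups.
Z = linkage(X_std, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(scores[:3], labels[:10])
```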
Abstract:
In this work, initial crystallographic studies of human haemoglobin (Hb) crystallized in an isoionic, oxygen-free PEG solution are presented. Under these conditions, functional measurements of the O2-linked binding of water molecules and release of protons have shown that Hb assumes an unforeseen new allosteric conformation. The determination of the high-resolution structure of the crystal of human deoxy-Hb fully stripped of anions may provide a structural explanation for the role of anions in the allosteric properties of Hb and, particularly, for the influence of chloride on the Bohr effect, the mechanism by which Hb oxygen affinity is regulated by pH. X-ray diffraction data were collected to 1.87 Å resolution using a synchrotron-radiation source. Crystals belong to the space group P2(1)2(1)2, and preliminary analysis revealed the presence of one tetramer in the asymmetric unit. The structure is currently being refined using maximum-likelihood protocols.
Abstract:
Hemoglobin remains, despite the enormous amount of research involving this molecule, a prototype for allosteric models and new conformations. Functional studies carried out on Hemoglobin-I from the South American catfish Liposarcus anisitsi [1] suggest the existence of conformational states beyond those already described for human hemoglobin, which could be confirmed crystallographically. The present work represents the initial steps toward that goal.
Abstract:
Until mid-2006, SCIAMACHY data processors for the operational retrieval of nitrogen dioxide (NO2) column data were based on the historical version 2 of the GOME Data Processor (GDP). On top of known problems inherent to GDP 2, ground-based validations of SCIAMACHY NO2 data revealed issues specific to SCIAMACHY, such as a large cloud-dependent offset occurring at northern latitudes. In 2006, the GDOAS prototype algorithm of the improved GDP version 4 was transferred to the off-line SCIAMACHY Ground Processor (SGP) version 3.0. In parallel, the calibration of SCIAMACHY radiometric data was upgraded. Before the operational switch-on of SGP 3.0 and the public release of upgraded SCIAMACHY NO2 data, we investigated the accuracy of the algorithm transfer: (a) by checking the consistency of SGP 3.0 with the prototype algorithms; and (b) by comparing SGP 3.0 NO2 data with ground-based observations reported by the WMO/GAW NDACC network of UV-visible DOAS/SAOZ spectrometers. This delta-validation study concludes that SGP 3.0 is a significant improvement over the previous processor, IPF 5.04. For three particular SCIAMACHY states, the study reveals unexplained features in the slant columns and air mass factors, although their quantitative impact on SGP 3.0 vertical columns is not significant.
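As an illustration of how such satellite-versus-ground comparisons are commonly summarized (an assumption about general validation practice, not this study's protocol), here is a minimal sketch computing the mean and spread of the relative difference between coincident column measurements:

```python
import numpy as np

def relative_difference(sat, ground):
    """Mean and std of the relative difference (sat - ground) / ground,
    a common summary metric in satellite column validation."""
    rel = (sat - ground) / ground
    return rel.mean(), rel.std()

# Hypothetical coincident NO2 vertical columns (10^15 molec cm^-2).
ground = np.array([3.2, 5.1, 2.8, 7.4, 4.0])
sat = np.array([3.0, 5.5, 2.9, 6.8, 4.3])
mean_rel, std_rel = relative_difference(sat, ground)
print(f"bias = {100*mean_rel:+.1f}% +/- {100*std_rel:.1f}%")
```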
Abstract:
This paper reports the use of chromatographic profiles of volatiles to determine disease markers in plants - in this case, leaves of Eucalyptus globulus infected by the necrotrophic fungus Teratosphaeria nubilosa. The volatile fraction was isolated by headspace solid-phase microextraction (HS-SPME) and analyzed by comprehensive two-dimensional gas chromatography with fast quadrupole mass spectrometry (GC×GC-qMS). To correlate the metabolic profile described by the chromatograms with the presence of the infection, unfolded partial least squares discriminant analysis (U-PLS-DA) with orthogonal signal correction (OSC) was employed. The proposed method was shown to be independent of factors such as the age of the harvested plants. Manipulation of the resulting mathematical model also yielded graphic representations similar to real chromatograms, which allowed the tentative identification of more than 40 compounds potentially useful as disease biomarkers for this plant/pathogen pair. The proposed methodology can be considered highly reliable, since the diagnosis is based on the whole chromatographic profile rather than on the detection of a single analyte.
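A minimal sketch of the U-PLS-DA classification step, assuming scikit-learn's PLSRegression as the PLS engine: each two-dimensional GC×GC chromatogram is unfolded into a row vector and regressed against class labels. The OSC preprocessing used in the paper is omitted here, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical GCxGC chromatograms: 20 samples, each a 50 x 40 plane,
# unfolded to one row vector per sample (hence "unfolded" PLS-DA).
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 50, 40)).reshape(20, -1)
y = np.repeat([0, 1], 10)  # 0 = healthy leaves, 1 = infected leaves

# PLS-DA = PLS regression against class labels, thresholded at 0.5.
pls = PLSRegression(n_components=2)
pls.fit(X, y)
pred = (pls.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```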
Abstract:
We sequenced part of the 16S rRNA mitochondrial gene in 17 extant taxa of Pilosa (sloths and anteaters) and used these sequences, along with GenBank sequences of both extant and extinct sloths, to perform phylogenetic analyses based on parsimony, maximum-likelihood, and Bayesian methods. By increasing the taxon density for anteaters and sloths, we were able to clarify some points of the Pilosa phylogenetic tree. Our mitochondrial 16S results show Bradypodidae as a monophyletic and robustly supported clade in all analyses. However, the Pleistocene fossil Mylodon darwinii does not group significantly with either Bradypodidae or Megalonychidae, which indicates that a trichotomy best represents the relationship between the families Mylodontidae, Bradypodidae, and Megalonychidae. Divergence times also allowed us to discuss the taxonomic status of Cyclopes and of the three species of three-toed sloths, Bradypus tridactylus, Bradypus variegatus, and Bradypus torquatus. In the Bradypodidae, the split between Bradypus torquatus and the proto-Bradypus tridactylus/B. variegatus lineage was estimated at about 7.7 million years ago (MYA), while in the Myrmecophagidae the first offshoot was Cyclopes, at about 31.8 MYA, followed by the split between Myrmecophaga and Tamandua at 12.9 MYA. We estimate the split between sloths and anteaters to have occurred at about 37 MYA.
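The paper's analyses rely on parsimony, maximum-likelihood, and Bayesian methods, which typically require dedicated phylogenetics software. As a much simpler stand-in that still shows the alignment-to-tree workflow, here is a distance-based neighbor-joining sketch with Biopython; the toy sequences are invented and the method is not the one used in the paper.

```python
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Toy aligned 16S fragments (hypothetical); a real analysis would use the
# full alignment and model-based methods as in the paper.
aln = MultipleSeqAlignment([
    SeqRecord(Seq("ACGTACGTACGTACGT"), id="Bradypus_tridactylus"),
    SeqRecord(Seq("ACGTACGTACGAACGT"), id="Bradypus_variegatus"),
    SeqRecord(Seq("ACGAACGTACCAACGT"), id="Bradypus_torquatus"),
    SeqRecord(Seq("TCGAACCTACCAAGGT"), id="Tamandua_tetradactyla"),
])

dist = DistanceCalculator("identity").get_distance(aln)
tree = DistanceTreeConstructor().nj(dist)  # neighbor joining
Phylo.draw_ascii(tree)
```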
Abstract:
The increase in new electronic devices has generated a considerable increase in the acquisition of spatial data; hence, these data are becoming more and more widely used. As with conventional data, spatial data need to be analyzed so that interesting information can be retrieved from them. Data clustering techniques can therefore be used to extract clusters from a set of spatial data. However, current approaches do not consider the implicit semantics that exist between a region and an object's attributes. This paper presents an approach that enhances the spatial data mining process so that it can use the semantics that exist within a region. A framework, OntoSDM, was developed that enables spatial data mining algorithms to communicate with ontologies in order to enhance the algorithms' results. The experiments demonstrated a semantically improved result, generating more interesting clusters and therefore reducing the manual analysis work of an expert.
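OntoSDM's ontology layer is not specified in this abstract, so the sketch below shows only the generic spatial clustering baseline (DBSCAN on point coordinates) that such semantic enhancement would build on; the data and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical spatial objects: (x, y) coordinates only. The OntoSDM idea is
# to let an ontology inform the algorithm about non-spatial attributes too;
# here we show just the plain spatial clustering step.
rng = np.random.default_rng(3)
pts = np.vstack([
    rng.normal(loc=[0, 0], scale=0.3, size=(40, 2)),
    rng.normal(loc=[5, 5], scale=0.3, size=(40, 2)),
])
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(pts)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```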
Abstract:
Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from R's CRAN site. The authors' title is not misleading, but I was very pleasantly surprised by how deep the word "applied" is here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on the spatial objects? A few years ago I would have said "no," especially to the "want" part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize the effort needed to get all of our spatial analyses accomplished. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and learned a lot.
Abstract:
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for the identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex-system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, consisting of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10 × 10^-3, 1.11 × 10^-7, and 5.50 × 10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, and this suggests a potential use for chaotic-system analysis. [http://dx.doi.org/10.1063/1.4758815]
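A minimal sketch of the entropy machinery described, under two stated assumptions: SampEn is computed in its standard template-matching form, and the q-generalization is obtained by replacing the natural logarithm with the Tsallis q-logarithm, which is our reading of the abstract rather than the paper's exact qSampEn definition. qSDiff at a single q is then the difference between the original series' and a shuffled surrogate's qSampEn.

```python
import numpy as np

def _match_count(x, m, r):
    """Count pairs of length-m templates within Chebyshev distance r
    (self-matches excluded); simplified relative to strict SampEn."""
    n = len(x) - m + 1
    templates = np.array([x[i:i + m] for i in range(n)])
    count = 0
    for i in range(n - 1):
        d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
        count += np.sum(d <= r)
    return count

def q_log(x, q):
    """Tsallis q-logarithm; recovers ln(x) as q -> 1."""
    return np.log(x) if np.isclose(q, 1.0) else (x ** (1 - q) - 1) / (1 - q)

def q_samp_en(x, m=2, r=0.2, q=1.0):
    """Sample entropy with ln replaced by the q-logarithm (assumed form)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    b = _match_count(x, m, tol)      # matches of length m
    a = _match_count(x, m + 1, tol)  # matches of length m + 1
    return -q_log(a / b, q)

# qSDiff at one q: original-series qSampEn minus surrogate-series qSampEn.
rng = np.random.default_rng(4)
rr = np.cumsum(rng.normal(size=500)) * 0.01 + 0.8  # toy RR-interval series
surrogate = rng.permutation(rr)                     # shuffled surrogate
print(q_samp_en(rr, q=0.5) - q_samp_en(surrogate, q=0.5))
```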