975 results for data consistency
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Animal Science and Technology - FEIS
Abstract:
Graduate Program in Collective Health - FMB
Abstract:
We provide a nonparametric 'revealed preference' characterization of rational household behavior in terms of the collective consumption model, while accounting for general (possibly non-convex) individual preferences. We establish a Collective Axiom of Revealed Preference (CARP), which provides a necessary and sufficient condition for data consistency with collective rationality. Our main result takes the form of a 'collective' version of the Afriat Theorem for rational behavior in terms of the unitary model. This theorem has some interesting implications. With only a finite set of observations, the nature of consumption externalities (positive or negative) in the intra-household allocation process is non-testable. The same non-testability conclusion holds for privateness (with or without externalities) or publicness of consumption. By contrast, concavity of individual utility functions (representing convex preferences) turns out to be testable. In addition, monotonicity is testable for the model that assumes all household consumption is public.
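As a concrete illustration of what a revealed-preference consistency check on a finite dataset looks like, the sketch below tests the classical Generalized Axiom of Revealed Preference (GARP) for the unitary model that the Afriat Theorem characterizes; it is not the collective CARP test of the paper, and the price/quantity example is hypothetical.

```python
import numpy as np

def satisfies_garp(prices, quantities):
    """Check the Generalized Axiom of Revealed Preference (GARP) for a
    finite set of observed price/quantity pairs (unitary model).

    prices, quantities: arrays of shape (T, n): T observations of n goods.
    """
    p = np.asarray(prices, dtype=float)
    q = np.asarray(quantities, dtype=float)
    T = p.shape[0]

    # expenditure[i, j] = cost of bundle j at the prices of observation i
    expenditure = p @ q.T
    own_cost = np.diag(expenditure)

    # Direct revealed preference: bundle i is revealed preferred to j
    # if bundle j was affordable when i was chosen.
    directly_preferred = expenditure <= own_cost[:, None] + 1e-12

    # Transitive closure (Warshall's algorithm) gives the full revealed
    # preference relation.
    preferred = directly_preferred.copy()
    for k in range(T):
        preferred |= preferred[:, [k]] & preferred[[k], :]

    # GARP violation: i is revealed preferred to j, yet bundle i was
    # strictly cheaper than bundle j at observation j's prices.
    strictly_cheaper = expenditure < own_cost[:, None] - 1e-12
    return not np.any(preferred & strictly_cheaper.T)

# Hypothetical two-good example: two observations consistent with GARP.
prices = [[1.0, 2.0], [2.0, 1.0]]
quantities = [[3.0, 1.0], [1.0, 3.0]]
print(satisfies_garp(prices, quantities))  # True
```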
Abstract:
The purpose of this thesis is to analyse the spatial and temporal variability of the aragonite saturation state (ΩAR), commonly used as an indicator of ocean acidification, in the North-East Atlantic. When the aragonite saturation state decreases below a certain threshold, ΩAR < 1, calcifying organisms (i.e. molluscs, pteropods, foraminifera, crabs, etc.) are subject to dissolution of their shells and aragonite structures. This objective agrees with the challenge 'Ocean, climate change and acidification' of the EU COST Ocean Governance for Sustainability project, which aims to combine the information collected on the state of health of the oceans. Two open-source data products, EMODnet and GLODAPv2, have been integrated and analysed for the first time in the North-East Atlantic region. The integrated dataset contains 1038 ΩAR vertical profiles whose time distribution spans from 1970 to 2014. ΩAR has been computed with the CO2SYS software considering different combinations of the input parameters pH, Total Alkalinity (TAlk) and Dissolved Inorganic Carbon (DIC), associated with temperature, salinity and pressure at in situ conditions. A sensitivity analysis has been performed to better understand the data consistency of ΩAR computed from the different combinations of pH, TAlk and DIC and to verify the differences between the observed TAlk and DIC values and their output values from the CO2SYS tool. Maps of ΩAR have been computed, with the best data coverage obtained from the two datasets, at different depth levels in the area of investigation, and they have been compared with the work of Jiang et al. (2015). The results are consistent and show similar horizontal and vertical patterns. The study highlights some aragonite-undersaturated values (ΩAR < 1) below 500 m depth, suggesting a potential effect of acidification over the considered time period. This thesis aims to be a preliminary work for future studies that will be able to describe the ΩAR variability at decadal scale based on the extended time series assembled in this work.
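For illustration, the sketch below spells out the definition of the saturation state, ΩAR = [Ca2+][CO3^2-]/Ksp(aragonite). The thesis computes ΩAR with the CO2SYS software from pairs of carbonate-system parameters; the constants used here (salinity-scaled calcium and a surface-ocean Ksp of roughly 10^-6.19 mol^2/kg^2) are illustrative assumptions and ignore the pressure dependence that matters at depth.

```python
# Minimal sketch of the aragonite saturation state definition,
# Omega_AR = [Ca2+][CO3^2-] / Ksp(aragonite).
# The thesis uses the CO2SYS software; the constants below are
# illustrative surface-ocean values, not the ones used there.

def omega_aragonite(co3_umol_kg, salinity, ksp_aragonite=6.5e-7):
    """Aragonite saturation state from the carbonate ion concentration.

    co3_umol_kg   : carbonate ion concentration in umol/kg
    salinity      : practical salinity, used to scale calcium
    ksp_aragonite : stoichiometric solubility product in mol^2/kg^2
                    (placeholder value for ~25 degC, S = 35, surface pressure)
    """
    # Calcium scales nearly conservatively with salinity (~10.28 mmol/kg at S = 35).
    ca_mol_kg = 0.01028 * salinity / 35.0
    co3_mol_kg = co3_umol_kg * 1e-6
    return ca_mol_kg * co3_mol_kg / ksp_aragonite

# Example: a typical surface carbonate value (~200 umol/kg) is supersaturated,
# while a low deep-water value (~60 umol/kg) can fall below Omega_AR = 1.
print(round(omega_aragonite(200.0, 35.0), 2))
print(round(omega_aragonite(60.0, 35.0), 2))
```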
Abstract:
The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern, but that the models underestimate the magnitude of some observed changes and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data-assimilation method based on a particle filter. In one simulation, all 50 proxy-based records are used, while in the other two only the continental or oceanic proxy-based records constrain the model results. As expected, data assimilation leads to improving the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at midlatitude that warms up northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is incompatible either with the signal recorded by other proxy-based records or with the model physics.
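For readers unfamiliar with the assimilation method, the sketch below implements a generic sequential importance resampling (SIR) particle filter, in which an ensemble of model states is weighted by its fit to each observation and then resampled. It is not the LOVECLIM implementation; the toy propagation model, the error scales and the synthetic 'proxy' series are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, model_noise=0.1, obs_error=0.2):
    """Generic sequential importance resampling (SIR) particle filter.

    Each 'particle' is one model state; at every assimilation step the
    particles are weighted by their fit to the observation and resampled,
    so the ensemble is nudged toward the reconstructions.
    """
    particles = rng.normal(0.0, 1.0, n_particles)   # initial ensemble
    analysis_mean = []
    for y in observations:
        # Propagate each particle with a (toy) model step.
        particles = 0.9 * particles + rng.normal(0.0, model_noise, n_particles)
        # Weight particles by the likelihood of the observation.
        weights = np.exp(-0.5 * ((y - particles) / obs_error) ** 2)
        weights /= weights.sum()
        # Resample proportionally to the weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        analysis_mean.append(particles.mean())
    return np.array(analysis_mean)

# Hypothetical 'proxy' series drifting warm; the filtered mean tracks it.
proxies = np.linspace(0.0, 1.0, 20) + rng.normal(0.0, 0.1, 20)
print(particle_filter(proxies)[-5:])
```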
Abstract:
We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is typically of order (log(n)/n)^(2/5). This entails rates of convergence for the estimator's derivative. Moreover, we discuss the impact of additional constraints on F, such as monotonicity and pointwise bounds. We then apply these results to the analysis of current status data, where the distribution function of the event times is assumed to be concave.
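A minimal sketch of the concave least squares estimator is given below, formulated as a quadratic program with cvxpy (our choice of tool, not the paper's): for equally spaced design points, concavity of the fitted values is equivalent to non-positive second differences. The simulated data are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Simulated data from a concave function plus noise.
n = 100
x = np.linspace(0.0, 1.0, n)
y = np.sqrt(x) + rng.normal(0.0, 0.1, n)

# Least squares fit subject to concavity: for equally spaced x, concavity
# of the fitted values is equivalent to non-positive second differences.
f = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(y - f))
constraints = [f[2:] - 2 * f[1:-1] + f[:-2] <= 0]
cp.Problem(objective, constraints).solve()

fitted = f.value
print(fitted[:5])
```

Additional constraints of the kind discussed in the paper, such as monotonicity or pointwise bounds, would enter simply as further linear constraints (e.g. f[1:] >= f[:-1]).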
Abstract:
The thermodynamic consistency of almost 90 VLE data series, including isothermal and isobaric conditions for systems of both total and partial miscibility in the liquid phase, has been examined by means of the area and point-to-point tests. In addition, the Gibbs energy of mixing function calculated from these experimental data has been inspected, with some rather surprising results: certain data sets exhibiting high dispersion, or leading to Gibbs energy of mixing curves inconsistent with the total or partial miscibility of the liquid phase, nevertheless pass the tests. Several possible inconsistencies in the tests themselves or in their application are discussed. Related to this is a very interesting and ambitious initiative that arose within NIST: the development of an algorithm to assess the quality of experimental VLE data. The present paper questions the applicability of two of the five tests that are combined in the algorithm. It further shows that the deviation of the experimental VLE data from the correlation obtained by a given model, which is the basis of some point-to-point tests, should not be used to evaluate the quality of these data.
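As a reference point for the area test mentioned above, the sketch below evaluates the Redlich-Kister criterion for isothermal data: thermodynamic consistency requires the integral of ln(γ1/γ2) over x1 from 0 to 1 to vanish. The activity-coefficient series and the 10% acceptance tolerance are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def area_test(x1, gamma1, gamma2, tolerance=10.0):
    """Redlich-Kister area test for isothermal VLE data.

    Consistency requires the integral of ln(gamma1/gamma2) over x1 to
    vanish; the statistic D compares the positive and negative areas.
    The 10% tolerance is a commonly quoted, but not universal, criterion.
    """
    integrand = np.log(np.asarray(gamma1) / np.asarray(gamma2))
    pos = np.trapz(np.clip(integrand, 0.0, None), x1)
    neg = -np.trapz(np.clip(integrand, None, 0.0), x1)
    d = 100.0 * abs(pos - neg) / (pos + neg)
    return d, d <= tolerance

# Illustrative (made-up) activity coefficients for a symmetric Margules system,
# which satisfies the area criterion exactly.
x1 = np.linspace(0.01, 0.99, 25)
gamma1 = np.exp(0.8 * (1.0 - x1) ** 2)
gamma2 = np.exp(0.8 * x1 ** 2)
print(area_test(x1, gamma1, gamma2))
```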
Abstract:
Thermodynamics Conference 2013 (Statistical Mechanics and Thermodynamics Group of the Royal Society of Chemistry), The University of Manchester, 3-6 September 2013.
Abstract:
"UILU-ENG 83-1724."--Cover.
Abstract:
Acknowledgments: The authors wish to thank the crews, fishermen and scientists who conducted the various surveys from which data were obtained, and Mark Belchier and Simeon Hill for their contributions. This work was supported by the Government of South Georgia and the South Sandwich Islands. Additional logistical support was provided by the South Atlantic Environmental Research Institute, with thanks to Paul Brickle. Thanks to Stephen Smith of Fisheries and Oceans Canada (DFO) for help in constructing bootstrap confidence limits. Paul Fernandes receives funding from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland), and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. We also wish to thank two anonymous referees for their helpful suggestions on earlier versions of this manuscript.
Abstract:
Effective decision making uses various databases, including both micro- and macro-level datasets. In many cases it is a big challenge to ensure the consistency of the two levels. Different types of problems can occur, and several methods can be used to solve them. The paper concentrates on the input alignment of households' income for microsimulation, which refers to improving the elements of a micro data survey (EU-SILC) by using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this improvement. We also produced model projections until 2015, which is important because the official EU-SILC micro database will only be available in Hungary in the summer of 2017. The paper presents our estimates of the dynamics of income elements and the changes in income inequalities. Results show that the aligned data provides a different level of income inequality but does not affect the direction of change from year to year. However, when we analyzed policy change, the use of aligned data caused larger differences both in income levels and in their dynamics.
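Since ECONS-TAX itself is not described here, the sketch below shows only the simplest form of input alignment: proportionally scaling a weighted micro-level income variable so that its aggregate matches a macro (administrative) total. The survey values, weights and macro total are hypothetical, and in practice alignment is usually performed separately by income element and population group.

```python
import numpy as np

def align_income(micro_income, weights, macro_total):
    """Proportionally align a weighted micro-level income variable to a
    macro (administrative) aggregate.

    Returns the adjusted incomes and the scaling factor applied.
    """
    micro_income = np.asarray(micro_income, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weighted_total = np.sum(weights * micro_income)
    factor = macro_total / weighted_total
    return micro_income * factor, factor

# Hypothetical survey: four households with survey weights, and a macro
# total from administrative sources that the survey underestimates.
incomes = [12000.0, 25000.0, 40000.0, 9000.0]
weights = [1000.0, 800.0, 500.0, 1200.0]
aligned, factor = align_income(incomes, weights, macro_total=8.0e7)
print(factor, aligned)
```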
Abstract:
Collecting ground truth data is an important step to be accomplished before performing a supervised classification. However, its quality depends on human, financial and time resources. It is therefore important to apply a validation process to assess the reliability of the acquired data. In this study, agricultural information was collected in the Brazilian Amazonian State of Mato Grosso in order to map crop expansion based on MODIS EVI temporal profiles. The field work was carried out through interviews for the years 2005-2006 and 2006-2007. This work presents a methodology to validate the training data quality and determine the optimal sample to be used according to the classifier employed. The technique is based on the detection of outlier pixels for each class and is carried out by computing Mahalanobis distances for each pixel: the higher the distance, the further the pixel is from the class centre. Preliminary observations based on the coefficient of variation confirm the efficiency of the technique in detecting outliers. Various subsamples are then defined by applying different thresholds to exclude outlier pixels from the classification process. The classification results prove the robustness of the Maximum Likelihood and Spectral Angle Mapper classifiers; indeed, those classifiers were insensitive to outlier exclusion. In contrast, the decision tree classifier showed better results when 7.5% of the pixels were deleted from the training data. The technique managed to detect outliers for all classes. In this study, few outliers were present in the training data, so the classification quality was not deeply affected by them.
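A minimal sketch of the per-class screening step is given below: squared Mahalanobis distances to the class centre are computed for every pixel and compared with a cutoff. The chi-square quantile used as the threshold and the synthetic sample are illustrative assumptions; the study instead varied the exclusion threshold empirically.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(pixels, quantile=0.975):
    """Flag outlier pixels of one class by their Mahalanobis distance
    to the class centre.

    pixels   : array of shape (n_pixels, n_features), e.g. EVI time series
    quantile : chi-square quantile used as the cutoff (illustrative choice)
    """
    x = np.asarray(pixels, dtype=float)
    mean = x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
    diff = x - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances
    cutoff = chi2.ppf(quantile, df=x.shape[1])
    return d2, d2 > cutoff

# Hypothetical class sample with one injected outlier.
rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=(200, 5))
sample[0] += 6.0
d2, is_outlier = mahalanobis_outliers(sample)
print(int(is_outlier.sum()), bool(is_outlier[0]))
```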
Abstract:
Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10^6 nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy E_TH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give SB yields similar to the energy deposit-based definition when E_TH ≈ 10.79 eV, but deviate significantly for higher E_TH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and on their implementation. The authors show that, for the four studied models, yields differ by up to 54% for SSBs and by up to 32% for DSBs, depending on the incident electron energy and on the models being compared. MCTS simulations allow the comparison of direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms such as the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
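To make the dependence on clustering definitions concrete, the sketch below groups strand breaks, given as (strand, base-pair position) pairs, into SSBs and DSBs using a simple proximity rule. The 10 bp separation and the two-strand criterion are common conventions used here as assumptions; as the abstract stresses, other definitions yield different SSB/DSB counts.

```python
def classify_breaks(breaks, max_bp_separation=10):
    """Group strand breaks (strand, base-pair index) into SSB/DSB clusters.

    A cluster is a run of breaks whose consecutive positions differ by at
    most `max_bp_separation`; a cluster touching both strands is counted
    as one DSB, otherwise as one SSB. The 10 bp rule is a common
    convention, not the only definition in use.
    """
    if not breaks:
        return {"SSB": 0, "DSB": 0}
    breaks = sorted(breaks, key=lambda b: b[1])
    counts = {"SSB": 0, "DSB": 0}
    cluster_strands = {breaks[0][0]}
    last_pos = breaks[0][1]
    for strand, pos in breaks[1:]:
        if pos - last_pos <= max_bp_separation:
            cluster_strands.add(strand)
        else:
            counts["DSB" if len(cluster_strands) == 2 else "SSB"] += 1
            cluster_strands = {strand}
        last_pos = pos
    counts["DSB" if len(cluster_strands) == 2 else "SSB"] += 1
    return counts

# Hypothetical breaks: (strand, base-pair position).
breaks = [(1, 100), (2, 105), (1, 400), (1, 950), (2, 1200)]
print(classify_breaks(breaks))  # {'SSB': 3, 'DSB': 1}
```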