980 results for Data errors


Relevance:

30.00%

Publisher:

Abstract:

People are always at risk of making errors when they attempt to retrieve information from memory. An important question is how to create the optimal learning conditions so that, over time, the correct information is learned and the number of mistakes declines. Feedback is a powerful tool, both for reinforcing new learning and correcting memory errors. In 5 experiments, I sought to understand the best procedures for administering feedback during learning. First, I evaluated the popular recommendation that feedback is most effective when given immediately, and I showed that this recommendation does not always hold when correcting errors made with educational materials in the classroom. Second, I asked whether immediate feedback is more effective in a particular case: when correcting false memories, or strongly held errors that may be difficult to notice even when the learner is confronted with the feedback message. Third, I examined whether varying levels of learner motivation might help to explain cross-experimental variability in feedback timing effects: Are unmotivated learners less likely to benefit from corrective feedback, especially when it is administered at a delay? Overall, the results revealed that there is no best “one-size-fits-all” recommendation for administering feedback; the optimal procedure depends on various characteristics of learners and their errors. As a package, the data are consistent with the spacing hypothesis of feedback timing, although this theoretical account does not successfully explain all of the data in the larger literature.

Relevance:

30.00%

Publisher:

Abstract:

Despite its importance in the global climate system, age-calibrated marine geologic records reflecting the evolution of glacial cycles through the Pleistocene are largely absent from the central Arctic Ocean. This is especially true for sediments older than 200 ka. Three sites cored during the Integrated Ocean Drilling Program's Expedition 302, the Arctic Coring Expedition (ACEX), provide a 27 m continuous sedimentary section from the Lomonosov Ridge in the central Arctic Ocean. Two key biostratigraphic datums and constraints from the magnetic inclination data are used to anchor the chronology of these sediments back to the base of the Cobb Mountain subchron (1215 ka). Beyond 1215 ka, two best-fitting geomagnetic models are used to investigate the nature of cyclostratigraphic change. Within this chronology we show that bulk and mineral magnetic properties of the sediments vary on predicted Milankovitch frequencies. These cyclic variations record "glacial" and "interglacial" modes of sediment deposition on the Lomonosov Ridge, as evident in studies of ice-rafted debris and of stable isotopic and faunal assemblages for the last two glacial cycles, and were used to tune the age model. Potential errors, which largely arise from uncertainties in the nature of downhole paleomagnetic variability and in the choice of a tuning target, are handled by defining an error envelope based on the best-fitting cyclostratigraphic and geomagnetic solutions.
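
As a loose illustration of checking for Milankovitch-band cyclicity in an age series (a simple periodogram of a synthetic magnetic-property record; purely illustrative, not the paper's tuning procedure):

```python
import numpy as np

# Synthetic "magnetic property" series sampled every 2 kyr over 800 kyr, with power at
# the ~41 kyr obliquity and ~100 kyr eccentricity periods plus noise (placeholder data).
dt_kyr = 2.0
age_kyr = np.arange(0.0, 800.0, dt_kyr)
series = (np.sin(2 * np.pi * age_kyr / 41.0)
          + 0.6 * np.sin(2 * np.pi * age_kyr / 100.0)
          + 0.3 * np.random.default_rng(0).normal(size=age_kyr.size))

# Periodogram: spectral power against frequency (cycles per kyr).
freqs = np.fft.rfftfreq(series.size, d=dt_kyr)
power = np.abs(np.fft.rfft(series - series.mean())) ** 2

# Report the strongest periods (skipping the zero-frequency bin).
top = np.argsort(power[1:])[::-1][:3] + 1
print("dominant periods (kyr):", np.round(1.0 / freqs[top], 1))
```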

Relevance:

30.00%

Publisher:

Abstract:

In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power produced by another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to conduct an analysis of the uncertainty associated with it. It was found that this method can lead to large errors when the set of reference plants has characteristics or weather conditions that differ from those of the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which the power production is calculated at each PV plant from the corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35,000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
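
A minimal sketch of the frequency-weighted averaging step described above, assuming a hypothetical power model and illustrative parameter-frequency values (none of the numbers or function names come from the thesis):

```python
import numpy as np

def pv_power(tilt_deg, azimuth_deg, irradiance, nominal_kw):
    """Toy PV power model: nominal power scaled by irradiance and a crude
    orientation factor (azimuth 0 = due south); illustrative only."""
    orientation = np.cos(np.radians(tilt_deg - 35.0)) * np.cos(np.radians(azimuth_deg))
    return nominal_kw * (irradiance / 1000.0) * max(float(orientation), 0.0)

# Hypothetical most-frequent parameter sets for one capacity class and region,
# with their relative frequencies of occurrence (frequencies sum to 1).
parameter_sets = [
    {"tilt_deg": 30, "azimuth_deg": 0,   "freq": 0.5},
    {"tilt_deg": 20, "azimuth_deg": 20,  "freq": 0.3},
    {"tilt_deg": 45, "azimuth_deg": -30, "freq": 0.2},
]

def expected_power(irradiance, nominal_kw, parameter_sets):
    """Most probable power: the power computed for each frequent parameter set,
    averaged with weights equal to the set's frequency of occurrence."""
    return sum(p["freq"] * pv_power(p["tilt_deg"], p["azimuth_deg"], irradiance, nominal_kw)
               for p in parameter_sets)

print(expected_power(irradiance=800.0, nominal_kw=10.0, parameter_sets=parameter_sets))
```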

Relevance:

30.00%

Publisher:

Abstract:

SOUZA, Anderson A. S.; SANTANA, André M.; BRITTO, Ricardo S.; GONÇALVES, Luiz Marcos G.; MEDEIROS, Adelardo A. D. Representation of Odometry Errors on Occupancy Grids. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To determine the prevalence of refractive errors in the public and private school system in the city of Natal, Northeastern Brazil. Methods: Refractometry was performed on both eyes of 1,024 randomly selected students enrolled in the 2001 school year, and the data were evaluated with the SPSS Data Editor 10.0. Ametropia was divided into: 1- from 0.1 to 0.99 diopter (D); 2- 1.0 to 2.99 D; 3- 3.00 to 5.99 D; and 4- 6 D or greater. Astigmatism was grouped as: I- with-the-rule (axis from 0 to 30 and 150 to 180 degrees), II- against-the-rule (axis between 60 and 120 degrees) and III- oblique (axis between >30 and <60 and between >120 and <150 degrees). The age groups were categorized as: 1- 5 to 10 years, 2- 11 to 15 years, 3- 16 to 20 years, 4- over 21 years. Results: Among the refractive errors, hyperopia was the most common (71%), followed by astigmatism (34%) and myopia (13.3%). Of the students with myopia and hyperopia, 48.5% and 34.1%, respectively, also had astigmatism. With respect to diopters, 58.1% of myopic students were in group 1, and 39% were distributed between groups 2 and 3. Hyperopia was mostly found in group 1 (61.7%), as was astigmatism (70.6%). The association of the astigmatism axes of both eyes showed 92.5% with a with-the-rule axis in both eyes, while the percentage for the against-the-rule axis was 82.1% and even lower for the oblique axis (50%). Conclusion: The results differ from those of most international studies, mainly from Asian countries, which point to myopia as the most common refractive error, and corroborate national studies, in which hyperopia predominates.
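
A small sketch of the axis grouping used above (the degree boundaries follow the abstract's scheme; the function itself is illustrative):

```python
def classify_astigmatism_axis(axis_deg):
    """Group an astigmatism axis (0-180 degrees) using the abstract's scheme:
    with-the-rule (0-30 or 150-180), against-the-rule (60-120), oblique otherwise."""
    if 0 <= axis_deg <= 30 or 150 <= axis_deg <= 180:
        return "with-the-rule"
    if 60 <= axis_deg <= 120:
        return "against-the-rule"
    if 30 < axis_deg < 60 or 120 < axis_deg < 150:
        return "oblique"
    raise ValueError("axis must be between 0 and 180 degrees")

print(classify_astigmatism_axis(10))   # with-the-rule
print(classify_astigmatism_axis(90))   # against-the-rule
print(classify_astigmatism_axis(45))   # oblique
```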

Relevance:

30.00%

Publisher:

Abstract:

The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for the Sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method that has been used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that, for data with non-i.i.d. errors, the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
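
As an illustration of the kind of sigmoidal response model referred to above, here is a generic four-parameter logistic fitted by least squares (the parameterization, data, and fitting routine are placeholders, not the thesis's multi-step estimator):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(log_conc, lower, upper, slope, log_ec50):
    """Generic 4-parameter logistic: signal as a sigmoidal function of log concentration."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (log_conc - log_ec50)))

# Placeholder dilution-series data (log2 dilution step vs. measured spot intensity).
log_conc = np.arange(8, dtype=float)
intensity = np.array([0.11, 0.14, 0.22, 0.41, 0.72, 0.95, 1.08, 1.12])

params, _ = curve_fit(four_param_logistic, log_conc, intensity, p0=[0.1, 1.1, 1.0, 3.5])
print(dict(zip(["lower", "upper", "slope", "log_ec50"], params)))
```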

Relevance:

30.00%

Publisher:

Abstract:

Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which, respectively, all Argo data and half of the Argo data are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in the analyzed fields, as estimated from the differences with respect to in situ observations. For example, in the 0–300 m layer, RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation–model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are thus reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most impacted regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation is observed when only half of the data are assimilated. Argo observations therefore matter for constraining the model solution, even in an eddy-permitting model configuration. The impact of assimilating the Argo float data on other model variables is briefly assessed: the improvement of the fit to Argo profiles does not globally lead to unphysical corrections to sea surface temperature or sea surface height. The main conclusion is that the performance of the Mercator Ocean 0.25° global data assimilation system is heavily dependent on the availability of Argo data.
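
A minimal sketch of how RMS analysis-minus-observation differences like those quoted above can be computed (the arrays are placeholders, not Mercator Ocean output):

```python
import numpy as np

def rms_difference(analysis, observations):
    """Root-mean-square difference between co-located analysis values and in situ observations."""
    analysis = np.asarray(analysis, dtype=float)
    observations = np.asarray(observations, dtype=float)
    return np.sqrt(np.mean((analysis - observations) ** 2))

# Placeholder co-located temperature values (deg C) in the 0-300 m layer.
obs       = np.array([14.2, 13.8, 12.9, 11.5, 10.7])
with_argo = np.array([14.0, 13.9, 13.1, 11.3, 10.9])
no_argo   = np.array([13.1, 14.6, 13.8, 10.4, 11.8])

rms_with, rms_without = rms_difference(with_argo, obs), rms_difference(no_argo, obs)
print(f"RMS with Argo assimilated:    {rms_with:.2f} degC")
print(f"RMS with Argo withheld:       {rms_without:.2f} degC")
print(f"Reduction from assimilation:  {100 * (rms_without - rms_with) / rms_without:.0f} %")
```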

Relevance:

30.00%

Publisher:

Abstract:

The thermal and aerial environment conditions inside animal housing facilities change during the day because of the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points distributed spatially over the facility area must be monitored. This work proposes that the variation in time of the environmental variables of interest for animal production, monitored inside animal housing facilities, can be modeled accurately from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations become independent of the time spent taking the measurements. The proposed method approximates the values recorded with time delays to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the environmental parameter air temperature; the values corrected by the method did not differ, by Tukey's test at 5% probability, from the actual values recorded with dataloggers.
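
A minimal sketch of one way such a time correction can be implemented (linear interpolation of a point's repeated readings to a common reference time; purely illustrative, not the paper's specific numerical method):

```python
import numpy as np

def correct_to_reference_time(read_times, readings, reference_time):
    """Estimate, for one monitored point, the value expected at reference_time
    from readings taken at staggered times (linear interpolation in time)."""
    return float(np.interp(reference_time, read_times, readings))

# Placeholder: air temperature (deg C) at one point, read on three walk-through passes.
read_times = np.array([0.0, 12.0, 24.0])   # minutes after the start of the survey
readings   = np.array([24.1, 24.8, 25.6])  # values measured at this point on each pass

print(correct_to_reference_time(read_times, readings, reference_time=6.0))
```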

Relevance:

30.00%

Publisher:

Abstract:

Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied:
1. Format checks, which ensure that the file formats match the Argo standards precisely.
2. Data consistency checks, which are performed on a file after it passes the format checks. These checks do not duplicate any of the quality control checks performed elsewhere; they can be thought of as “sanity checks” to ensure that the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the “data standard” checks are the “mandatory parameters” defined for meta-data files and the technical parameter names in technical data files.
Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC. Reference tables and data standards: many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User’s Manual. (The FileChecker implements “text versions” of these tables.)
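
As a rough illustration of the kind of “mandatory parameter” consistency check described above (the parameter names and check logic here are assumptions for the sketch, not the FileChecker's actual implementation):

```python
# Hypothetical sketch of a mandatory-parameter check on a meta-data record.
MANDATORY_META_PARAMETERS = {"PLATFORM_NUMBER", "PROJECT_NAME", "DATA_CENTRE", "LAUNCH_DATE"}

def check_mandatory_parameters(meta_record):
    """Return (errors, warnings) for a meta-data record given as a dict of parameter -> value."""
    errors, warnings = [], []
    for name in sorted(MANDATORY_META_PARAMETERS):
        if name not in meta_record or meta_record[name] in ("", None):
            errors.append(f"mandatory parameter missing or empty: {name}")
    for name in meta_record:
        if name not in MANDATORY_META_PARAMETERS:
            warnings.append(f"unexpected parameter: {name}")
    return errors, warnings

errors, warnings = check_mandatory_parameters({"PLATFORM_NUMBER": "6902746", "PROJECT_NAME": ""})
print(errors)    # a non-empty error list would mean the file is rejected
print(warnings)  # warnings alone would still allow distribution
```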

Relevance:

30.00%

Publisher:

Abstract:

We provide a comprehensive study of out-of-sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. In addition to standard loss-minimization measures, we use profit-maximization measures based on directional accuracy and trading strategies. When comparing predictive accuracy and profit measures, tests that are free of data-snooping bias are used. The results indicate that forecast combinations, in particular those based on principal components of the individual forecasts, help to improve on benchmark trading strategies, although the excess return per unit of deviation is limited.
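
A small sketch of the directional-accuracy idea mentioned above, using an equal-weight combination of two forecasts (all series are placeholders, and equal weighting stands in for the principal-component combinations used in the study):

```python
import numpy as np

def directional_accuracy(actual_returns, forecast_returns):
    """Share of periods in which the forecast predicts the sign of the exchange-rate change."""
    return float(np.mean(np.sign(actual_returns) == np.sign(forecast_returns)))

# Placeholder EUR/USD log-returns and two competing model forecasts.
actual  = np.array([ 0.004, -0.002,  0.001, -0.005,  0.003])
model_a = np.array([ 0.002,  0.001,  0.002, -0.003,  0.001])
model_b = np.array([-0.001, -0.004,  0.001, -0.002,  0.004])

combined = (model_a + model_b) / 2.0  # simple equal-weight forecast combination
for name, forecast in [("model A", model_a), ("model B", model_b), ("combination", combined)]:
    print(name, directional_accuracy(actual, forecast))
```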

Relevance:

30.00%

Publisher:

Abstract:

Agroforestry has large potential for carbon (C) sequestration while providing many economic, social, and ecological benefits via its diversified products. Airborne lidar is considered the most accurate technology for mapping aboveground biomass (AGB) at landscape levels. However, little research has been done in the past to study the AGB of agroforestry systems using airborne lidar data. Focusing on an agroforestry system in the Brazilian Amazon, this study first predicted plot-level AGB using fixed-effects regression models that assume the regression coefficients to be constants. The model prediction errors were then analyzed from the perspectives of tree DBH (diameter at breast height)-height relationships and plot-level wood density, which suggested the need to stratify agroforestry fields to improve plot-level AGB modeling. We separated teak plantations from other agroforestry types and predicted AGB using mixed-effects models that can incorporate the variation of the AGB-height relationship across agroforestry types. We found that, at the plot scale, mixed-effects models led to better prediction performance (based on leave-one-out cross-validation) than the fixed-effects models, with the coefficient of determination (R2) increasing from 0.38 to 0.64. At the landscape level, the difference between AGB densities from the two types of models was ~10% on average and up to ~30% at the pixel level. This study suggests the importance of stratification based on tree AGB allometry and the utility of mixed-effects models in modeling and mapping the AGB of agroforestry systems.
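
For instance, the contrast between a fixed-effects AGB-height model and a mixed-effects model with group-varying intercepts can be sketched with statsmodels (the data frame, column names, and values are invented for illustration, not the study's plots):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plot-level table: field AGB (Mg/ha), lidar canopy height (m), agroforestry type.
plots = pd.DataFrame({
    "agb":     [50.0, 70.0, 74.0, 95.0, 128.0, 149.0, 151.0, 172.0, 80.0, 93.0, 97.0, 118.0],
    "height":  [9.0, 11.0, 12.0, 14.0, 15.0, 17.0, 18.0, 20.0, 11.0, 12.0, 13.0, 15.0],
    "ag_type": ["mixed"] * 4 + ["teak"] * 4 + ["cacao"] * 4,
})

# Fixed-effects baseline: one AGB-height relationship shared by all plots.
fixed = smf.ols("agb ~ height", data=plots).fit()

# Mixed-effects model: the intercept is allowed to vary by agroforestry type.
mixed = smf.mixedlm("agb ~ height", data=plots, groups=plots["ag_type"]).fit()

print(fixed.params)
print(mixed.params)
```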

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Since 2005, the workload of community pharmacists in England has increased, with a concomitant increase in stress and work pressure. However, it is unclear how these factors affect the ability of community pharmacists to ensure accuracy during the dispensing process. This research seeks to extend our understanding of the nature, outcomes, and predictors of dispensing errors. Methodology: A retrospective analysis of a purposive sample of incident report forms (IRFs) from the database of a pharmacist indemnity insurance provider was conducted. Data collected included: type of error, degree of harm caused, pharmacy and pharmacist demographics, and possible contributory factors. Results: In total, 339 files from UK community pharmacies were retrieved from the database. The files dated from June 2006 to November 2011. Incorrect item (45.1%, n = 153/339) followed by incorrect strength (24.5%, n = 83/339) were the most common forms of error. Almost half (41.6%, n = 147/339) of the patients suffered some form of harm, ranging from minor harm (26.7%, n = 87/339) to death (0.3%, n = 1/339). Insufficient staff (51.6%, n = 175/339), similar packaging (40.7%, n = 138/339) and the pharmacy being busier than normal (39.5%, n = 134/339) were identified as key contributory factors. Cross-tabular analysis against the final-accuracy-check variable revealed significant associations with the pharmacy location (P < 0.024), dispensary layout (P < 0.025), insufficient staff (P < 0.019), and busier-than-normal (P < 0.005) variables. Conclusion: The results provide an overview of some of the individual, organisational and technical factors at play at the time of a dispensing error and highlight the need to examine further the relationships between these factors and the occurrence of dispensing errors.
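
A minimal sketch of the kind of cross-tabulation test of association reported above, here as a chi-square test on a hypothetical 2x2 table (the counts are placeholders, not the study data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation of incident reports:
# rows: final accuracy check performed (yes / no)
# cols: pharmacy busier than normal (yes / no)
table = np.array([[52, 95],
                  [82, 110]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```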

Relevance:

30.00%

Publisher:

Abstract:

The uncertainty about a firm's future has to be modelled and incorporated into the valuation of companies beyond their explicit period of analysis, i.e., in the continuing or terminal value considered within valuation models. However, there is a multiplicity of factors that influence the continuing value of businesses which are not currently considered within valuation models. In fact, ignoring these factors may cause significant errors of judgment, which can lead the models to goodwill or badwill values far from the substantive value of the underlying assets. Consequently, the values provided will be markedly different from market values. So, why not consider alternative models that incorporate the life expectancy of companies, as well as the influence of other attributes of the company, in order to obtain a smoother adjustment between market prices and valuation methods? This study aims to contribute to this area, having as its main objective the analysis of potential determinants of firm value in the long term. Using a sample of 714 listed companies belonging to 15 European countries and panel data for the period between 1992 and 2011, our results show that continuing value cannot be regarded as the present value of a constant or growing perpetuity of a particular attribute of the company, but should instead reflect a set of attributes such as free cash flow, net income, the average life expectancy of the company, investment in R&D, the capabilities and quality of management, liquidity, and financing structure.
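
For reference, the constant-growth perpetuity that the abstract argues is insufficient on its own is the familiar terminal-value expression (the symbols follow the usual valuation convention and are not taken from the study):

```latex
TV_T = \frac{FCF_T \,(1+g)}{k - g}, \qquad k > g
```

where FCF_T is the capitalized attribute (for example, free cash flow in the last explicit forecast year), k is the discount rate, and g is the perpetual growth rate. The study's point is that continuing value built on a single capitalized attribute of this form omits factors such as firm life expectancy, R&D investment, management quality, liquidity and financing structure.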