914 results for Analyses errors


Relevance: 20.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 20.00%

Abstract:

Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and occlusion analyses using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). On the plaster models, measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif.). For the digital images, the measurement tools used were those of the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were identical.
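The Dahlberg formula referenced above estimates random (double-determination) error from paired repeated measurements: d = sqrt(Σ(x1 − x2)² / 2n). A minimal sketch in Python, using hypothetical caliper readings rather than the study's data:

```python
import math

def dahlberg_error(first_pass, second_pass):
    """Dahlberg's double-determination error for paired repeated measurements:
    d = sqrt(sum((x1 - x2)^2) / (2 * n))."""
    if len(first_pass) != len(second_pass):
        raise ValueError("paired measurement lists must be the same length")
    n = len(first_pass)
    squared_diffs = sum((a - b) ** 2 for a, b in zip(first_pass, second_pass))
    return math.sqrt(squared_diffs / (2 * n))

# Hypothetical repeated caliper readings (mm) of the same tooth widths.
run1 = [8.42, 7.10, 9.85, 6.50]
run2 = [8.40, 7.15, 9.80, 6.55]
print(round(dahlberg_error(run1, run2), 4))
```

A value near zero indicates good intra-examiner reproducibility; the formula divides by 2n because each object was measured twice.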

Relevance: 20.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Abstract:

Micro-electromechanical systems (MEMS) are micro-scale devices that are able to convert electrical energy into mechanical energy or vice versa. In this paper, the mathematical model of an electronic circuit of a resonant MEMS mass sensor, with time-periodic parametric excitation, was analyzed and controlled by Chebyshev polynomial expansion of the Picard iteration and Lyapunov-Floquet transformation, and by Optimal Linear Feedback Control (OLFC). Both approaches combine feedback and feedforward control. The feedback control obtained by Picard iteration and Lyapunov-Floquet transformation is the first strategy, and optimal control theory the second. Numerical simulations show the efficiency of the two control methods, as well as the sensitivity of each control strategy to parametric errors. Without parametric errors, both control strategies were effective in maintaining the system in the desired orbit. In the presence of parametric errors, however, the OLFC technique was more robust.
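The setting above can be illustrated with a toy example: a parametrically excited (Mathieu-type) oscillator stabilized by simple linear state feedback. This is only a sketch with made-up parameters and gains, not the paper's MEMS model or its OLFC/Lyapunov-Floquet designs:

```python
import math

def simulate(k1, k2, a=1.0, b=0.4, w=2.0, dt=1e-3, T=50.0):
    """Integrate x'' + (a + b*cos(w*t)) x = u with feedback u = -k1*x - k2*v
    by explicit Euler; returns the final state (x, v)."""
    x, v, t = 1.0, 0.0, 0.0
    while t < T:
        u = -k1 * x - k2 * v                      # linear state feedback
        acc = -(a + b * math.cos(w * t)) * x + u  # parametric stiffness term
        x += v * dt
        v += acc * dt
        t += dt
    return x, v

# With the illustrative gains below, the feedback adds damping and shifts the
# stiffness away from the principal parametric resonance, so the state decays.
x_ctrl, v_ctrl = simulate(k1=2.0, k2=2.0)
print(abs(x_ctrl), abs(v_ctrl))
```

Uncontrolled (k1 = k2 = 0), this oscillator sits in the principal parametric-resonance tongue and its response grows; the feedback drives it to the origin, the simplest case of the "desired orbit" mentioned above.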

Relevance: 20.00%

Abstract:

In this action research study, I investigated the careless errors made by my seventh-grade mathematics students on their homework and tests. Beyond analyzing the types of careless errors and the frequency at which they were made, I also analyzed my students' attitudes toward reviewing their work before turning it in and their self-reflection on the quality of the work they were producing. I found that many students did not know how to review a test before turning it in; no one had ever taught them how to do so. However, when students were given tools to help them with this task, they were able to make strides toward reducing the number of careless errors they made and began to turn in high-quality work that demonstrated their understanding of the content that had been taught. As a result of this research, I plan to teach my students how to go back over their homework and tests before turning them in. I also intend to continue using the tools I have produced to encourage students to self-reflect on the work they have done. Assessment is an important piece of educating my students, and the careless errors made on these assessments needed to be addressed.

Relevance: 20.00%

Abstract:

Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
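As a concrete, deliberately simplified illustration of the layering described above, the sketch below simulates a latent ecological process with process error and then observes it through measurement error; all values are illustrative, not drawn from any real study:

```python
import random

random.seed(42)  # reproducible illustration

# Parameter level (here fixed; in a Bayesian analysis these would get priors).
T = 50
growth_rate = 1.02     # latent growth parameter
process_sd = 5.0       # process-error scale
measurement_sd = 10.0  # measurement-error scale

# Process model: N_t = r * N_{t-1} + eps_t, eps_t ~ Normal(0, process_sd).
true_state = [100.0]
for _ in range(T - 1):
    true_state.append(growth_rate * true_state[-1] + random.gauss(0, process_sd))

# Data model: y_t = N_t + delta_t, delta_t ~ Normal(0, measurement_sd).
observed = [n + random.gauss(0, measurement_sd) for n in true_state]

print(len(observed))
```

Inference would run the other way: condition on `observed` to recover the latent states and parameters, with each level of the hierarchy contributing its own uncertainty.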

Relevance: 20.00%

Abstract:

Dynamic analysis is an increasingly important means of supporting software validation and maintenance. To date, developers of dynamic analyses have used low-level instrumentation and debug interfaces to realize their analyses. Many dynamic analyses, however, share common high-level requirements, e.g., capture of program data state as well as events, and efficient and accurate event capture in the presence of threading. We present SOFYA, an infrastructure designed to provide high-level, efficient, concurrency-aware support for building analyses that reason about rich observations of program data and events. It provides a layered, modular architecture, which has been successfully used to rapidly develop and evaluate a variety of demanding dynamic program analyses. In this paper, we describe the SOFYA framework, the challenges it addresses, and survey several such analyses.
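The kind of event-and-data capture described above can be sketched in a few lines using Python's interpreter tracing hook; this is a generic illustration of the idea, not SOFYA's actual interface:

```python
import sys

events = []  # recorded (function name, argument state) observations

def tracer(frame, event, arg):
    """Global trace hook: record each Python function call together with
    its local (argument) state at entry."""
    if event == "call":
        events.append((frame.f_code.co_name, dict(frame.f_locals)))
    return None  # no per-line tracing needed for call-level events

def square(x):
    return x * x

sys.settrace(tracer)  # enable event capture
square(3)
square(4)
sys.settrace(None)    # disable it again

print(events)
```

A real infrastructure must additionally handle threads, filtering, and efficient dispatch, which is precisely the gap the abstract says SOFYA fills.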

Relevance: 20.00%

Abstract:

Case analyses of fair practices in hiring in academia.

Relevance: 20.00%

Abstract:

Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) under the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
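Type I error rates of the kind tabulated in such simulation studies are empirical rejection frequencies under a true null. A stripped-down sketch of the procedure, using a simple z-test on normal data rather than the study's multiple-group CFA (all settings illustrative):

```python
import math
import random

random.seed(1)  # reproducible illustration

def z_test_rejects(sample, sigma=1.0, z_crit=1.96):
    """Two-sided z-test of H0: mu = 0 with known sigma at alpha = .05."""
    n = len(sample)
    z = (sum(sample) / n) / (sigma / math.sqrt(n))
    return abs(z) > z_crit

# Simulate many null-true datasets and count how often H0 is wrongly rejected.
reps, n = 2000, 30
rejections = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(n)]) for _ in range(reps)
)
type_i_rate = rejections / reps
print(type_i_rate)  # should land near the nominal 0.05
```

A well-calibrated test statistic keeps this rate near the nominal alpha; the study's point is that the wrong estimator/measurement-scale combination inflates it.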

Relevance: 20.00%

Abstract:

White Rock Lake reservoir in Dallas, Texas contains a 150-cm sediment record of silty clay that documents land-use changes since its construction in 1912. Pollen analysis corroborates historical evidence that between 1912 and 1950 the watershed was primarily agricultural. Land disturbance by plowing coupled with strong and variable spring precipitation caused large amounts of sediment to enter the lake during this period. Diatoms were not preserved at this time probably because of low productivity compared to diatom dissolution by warm, alkaline water prior to burial in the sediments. After 1956, the watershed became progressively urbanized. Erosion decreased, land stabilized, and pollen of riparian trees increased as the lake water became somewhat less turbid. By 1986 the sediment record indicates that diatom productivity had increased beyond rates of diatom destruction. Neither increased nutrients nor reduced pesticides can account for increased diatom productivity, but grain size studies imply that before 1986 diatoms were light limited by high levels of turbidity. This study documents how reservoirs may relate to land-use practices and how watershed management could extend reservoir life and improve water quality.