970 results for Errors analysis
Abstract:
The aim of this study is to determine the magnitude and distribution of analysts' forecast errors. A central goal is to establish how much a company's industry and market capitalisation affect the magnitude of the forecast error and thereby distort share pricing. The study assumes that share prices are based on return expectations, i.e. analysts' earnings forecasts, and on risk. Quantitative methods are used. The data consist of the companies listed on the main list of the Helsinki Stock Exchange in 1998–1999 for which earnings forecasts were issued. The earnings forecasts were collected manually from the Reuters database. The forecasts for 1998–1999 are not clearly positive or negative, nor are the forecast errors clearly distributed by industry. However, in 1999 the forecasts issued for the "industrials" and "services" sectors were clearly too pessimistic. Nor does a company's market capitalisation show a clear relationship with the magnitude of the forecast error, although in 1999 the earnings of large companies were estimated too pessimistically and those of small companies too optimistically. When regression analysis was used to search for explanators of the forecast error, the strongest explanators turned out to be the number of analysts per company and the standard deviation of the analysts' forecasts, which together reached an explanatory power of 40 per cent.
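The two steps described, measuring the forecast error and regressing it on candidate explanators, can be sketched as follows. This is a minimal illustration, not the study's actual specification: the relative-error definition and the function names are assumptions.

```python
import numpy as np

# Hypothetical definition: forecast error as the realised figure's relative
# deviation from the consensus forecast (the study's exact definition may differ).
def forecast_error(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return (actual - forecast) / np.abs(actual)

# Ordinary least squares with an intercept; returns the coefficients and R^2,
# the "explanatory power" reported in the abstract.
def ols_r2(X, y):
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, 1.0 - resid.var() / y.var()
```

In the study's setting, `X` would hold the two strongest explanators found (analysts per company and the dispersion of their forecasts), and `y` the forecast errors.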
Abstract:
This work falls within the field of applied linguistics of Spanish as a foreign language (ELE), and more specifically the teaching of grammatical gender in ELE. Our interest as teachers is to establish a reliable method based on the criteria of the Plan curricular of the Instituto Cervantes and on the consciousness-raising technique, and to design activities for teaching grammatical gender in the ELE classroom. ELE teaching in Quebec follows the same methods as in Europe. Regarding the teaching of grammatical gender, the studies consulted confirm the lack of relevant instruction on grammatical gender, as well as persistent difficulties with agreement even at advanced levels. However, an analysis of the ELE textbooks used in various Montreal institutions shows that they do not follow the guidelines established by the Plan curricular for the teaching of gender. To verify these findings, a field study was carried out with 84 students from six Montreal institutions over two months and two weeks. The results of the research and the error analysis show that intermediate-level students have problems with grammatical gender and that the errors do not disappear with the reinforcement activities created. It is therefore necessary to adopt a method better suited to learning grammatical gender in the ELE classroom, with the teacher present to introduce it. Indeed, carrying out the activities created is not sufficient: although the results show slight progress in the experimental group B compared with the control group A, formal instruction would have produced a better and more complete learning of grammatical gender in our students; hence the need to establish a reliable method for teaching it.
Abstract:
The present study investigates the growth of error in baroclinic waves. It is found that stable or neutral waves are particularly sensitive to errors in the initial condition: short stable waves are mainly sensitive to phase errors and the ultra-long waves to amplitude errors. Analysis-simulation experiments have indicated that the amplitudes of the very long waves usually become too small in the free atmosphere, owing to the sparse and very irregular distribution of upper-air observations. This also applies to the four-dimensional data assimilation experiments, since the amplitudes of the very long waves are usually underpredicted. The numerical experiments reported here show that if the very long waves have these kinds of amplitude errors in the upper troposphere or lower stratosphere, the error is rapidly propagated (within a day or two) to the surface and the lower troposphere.
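The distinction between phase and amplitude errors can be illustrated on a toy single-wave example, entirely separate from the study's baroclinic model; the 0.2 rad shift and the 20% amplitude deficit below are arbitrary choices.

```python
import numpy as np

# Toy illustration: RMS error of a pure phase shift versus a pure
# amplitude deficit, for a single sinusoidal wave over one full period.
def rms(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

x = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
truth = np.sin(x)

phase_rms = rms(truth, np.sin(x + 0.2))   # 0.2 rad phase error
ampl_rms = rms(truth, 0.8 * truth)        # 20% amplitude deficit
```

Analytically the phase-shift RMS is sqrt(2)·sin(0.1) and the amplitude-deficit RMS is 0.2/sqrt(2); the two error types are of comparable size here, yet they distort the wave in very different ways, which is why the study distinguishes them.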
Abstract:
Graduate Program in Letters - FCLAS
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Dr. Rossi discusses the common errors made when fitting statistical models to data. The discussion focuses on the planning, data analysis, and interpretation phases of a statistical analysis and highlights the errors commonly made by researchers at each of these phases. The implications of these errors are discussed, along with methods that can be used to prevent them. A prescription for carrying out a correct statistical analysis is also presented.
Abstract:
The recession of coastal cliffs is a widespread phenomenon on rocky shores exposed to the combined incidence of the marine and meteorological processes that occur along the shoreline. The phenomenon manifests itself violently and sporadically as gravitational ground movements and can cause material or human losses. Although knowledge of these erosion risks is vital for the proper management of the coast, the development of predictive cliff-erosion models is limited by the complex interactions between environmental processes and material properties over a range of temporal and spatial scales. Published prediction models are scarce and have important drawbacks: extrapolation models, which extend historical records into the future; empirical models, which use historical records to study the system's response to a change in a single parameter; stochastic models, which represent cliff behaviour through assumptions regarding the magnitude and frequency of events in a probabilistic framework based on historical records; process-response models, whose stability and error propagation remain unexplored; and PDE-based models, which are computationally expensive and not very accurate. The first part of this thesis describes the main features of the latest models of each type and, for the most commonly used, gives their ranges of application, advantages and disadvantages. Finally, as a synthesis of the most relevant processes considered by the reviewed models, a conceptual diagram of coastal recession is presented. This conceptual model includes the most influential processes that must be taken into account when using or creating a coastal recession model to evaluate the hazard (timing/frequency) of the phenomenon over the short to medium term. A new process-response coastal recession model developed in this thesis is designed to incorporate the behavioural and mechanical characteristics of coastal cliffs composed of materials whose compressive strength is less than 5 MPa.
The model simulates the spatial and temporal evolution of a 2D cliff profile that can consist of heterogeneous materials. To do so, the marine dynamics (mean sea level, waves, tides, seasonal lake-level changes) are coupled with the evolution of the land: erosion, cliff-face failure and the associated protective colluvial wedge. The model in its different variants can include analysis of the geomechanical stability of the materials, the effect of debris at the cliff foot, groundwater effects, beach and run-up effects, and changes in the mean sea level or (seasonal or inter-annual) changes in the mean lake level. The computational implementation is described, and the errors produced by different numerical resolution techniques, in both time and space, are analysed against the exact solutions for the first two tidal periods. The results of this error analysis allow the model to be operated with a configuration that minimises the error of the approximation methods. The model is validated against profile evolution at various locations of coastline retreat on the Holderness Coast, Yorkshire, UK, and on the north coast of Lake Erie, Ontario, Canada. The results represent an important step forward in linking material properties to the processes of cliff recession, in considering the effect of groundwater and slope oversteepening, and in capturing the response to changing conditions caused by climate change (e.g. sea level, changes in lake levels).
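The kind of discretisation-error study described, comparing numerical approximations in time against exact solutions, can be sketched generically. This uses a stand-in problem (y' = -y), not the thesis model or its tidal forcing: the scheme, step sizes and test equation are all assumptions for illustration.

```python
import numpy as np

# Generic convergence study: integrate with forward Euler at two step
# sizes and estimate the observed order of convergence against the
# exact solution (close to 1 for a first-order scheme).
def euler(f, y0, t_end, n_steps):
    t, y = 0.0, y0
    h = t_end / n_steps
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y                        # stand-in dynamics
exact = np.exp(-1.0)                       # y(1) for y' = -y, y(0) = 1
e1 = abs(euler(f, 1.0, 1.0, 100) - exact)  # error with 100 steps
e2 = abs(euler(f, 1.0, 1.0, 200) - exact)  # error with 200 steps
order = np.log2(e1 / e2)
```

Running such comparisons for each candidate scheme and step size is what lets one pick the approximation that minimises the propagated error, as the thesis does for its tidal periods.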
Abstract:
It is known that distillation tray efficiency depends on the liquid flow pattern, particularly for large-diameter trays. Scale-up failures due to liquid channelling have occurred, and it is known that fitting flow control devices to trays sometimes improves tray efficiency. Several theoretical models which explain these observations have been published. Further progress in understanding is at present blocked by a lack of experimental measurements of the pattern of liquid concentration over the tray. Flow pattern effects are expected to be significant only on commercial-size trays of large diameter, and the lack of data is a result of the costs, risks and difficulty of making these measurements on full-scale production columns. This work presents a new experiment which simulates distillation by water cooling and provides a means of testing commercial-size trays in the laboratory. Hot water is fed onto the tray and cooled by air forced through the perforations. The analogy between heat and mass transfer shows that the water temperature at any point is analogous to liquid concentration and the enthalpy of the air is analogous to vapour concentration. The effect of the liquid flow pattern on mass transfer is revealed by the temperature field on the tray. The experiment was implemented and evaluated in a column of 1.2 m diameter. The water temperatures were measured by thermocouples interfaced to an electronic computerised data-logging system. The "best surface" through the experimental temperature measurements was obtained by the mathematical technique of B-splines and presented in terms of lines of constant temperature. The results revealed that in general liquid channelling is more important in the bubbly "mixed" regime than in the spray regime. However, it was observed that severe channelling also occurred for intense spray at incipient flood conditions. This is an unexpected result.
A computer program was written to calculate point efficiency as well as tray efficiency, and the results were compared with distillation efficiencies for similar loadings. The theoretical model of Porter and Lockett for predicting distillation was modified to predict water cooling, and the theoretical predictions were shown to be similar to the experimental temperature profiles. A comparison of the repeatability of the experiments with an error analysis revealed that accurate tray efficiency measurements require temperature measurements to better than ±0.1 °C, which is achievable with conventional techniques. This was not achieved in this work, and resulted in considerable scatter in the efficiency results. Nevertheless it is concluded that the new experiment is a valuable tool for investigating the effect of the liquid flow pattern on tray mass transfer.
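The B-spline surface step can be sketched with SciPy's bivariate spline on a small synthetic grid. This is a minimal sketch, assuming gridded data and an invented linear temperature field; the real measurements were scattered across the tray, and the thesis's "best surface" fitting details are its own.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic temperature field on a 1.2 m tray, sampled on a 7x7 grid.
x = np.linspace(0.0, 1.2, 7)            # across-tray position [m]
y = np.linspace(0.0, 1.2, 7)            # along-tray position [m]
X, Y = np.meshgrid(x, y, indexing="ij")
temps = 60.0 - 15.0 * X - 3.0 * Y       # invented cooling field [degC]

# Cubic B-spline surface through the grid; lines of constant temperature
# (isotherms) would be contour lines of this surface.
surface = RectBivariateSpline(x, y, temps, kx=3, ky=3)
mid_temp = surface(0.6, 0.6)[0, 0]      # interpolated value at tray centre
```

A cubic spline reproduces this linear field exactly; on real noisy thermocouple data one would use the smoothing parameter of the fit to trade fidelity against noise.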
Abstract:
This study aimed to establish the developmental profile of executive components in typical child development. It is a correlational, cross-sectional, predominantly quantitative study. The instruments for data collection were the subtests of the NEPSY-II Attention and Executive Functioning domain. Eighty children between 5 and 8 years of age, of both sexes, students at public and private schools in the city of Natal, were evaluated. The sample was divided into six-month intervals for the subsequent analysis of strategies and types of errors. Analyses of variance (univariate and multivariate) and Tukey and Games-Howell post hoc tests were conducted to verify the effect of age on test performance. Subsequent correlations indicate the strength and direction of the relationships between variables. Two developmental peaks were identified, at the six-month resolution adopted, for the skills of selective attention and inhibitory control. The results indicate no significant influence of sex or type of school on the performance of the sample. The performance of preschool children (5 and 6 years) was lower than that of the other subgroups in most tests. The results highlight the role of self-regulatory speech among preschool children during activities of greater executive demand, and the use of abstraction as a solution strategy among the older children. Similar developmental trajectories were identified for selective attention and inhibitory control. In general, there is a decrease in the number of errors and an increase in successes as age progresses. Future longitudinal research could extend the age range encompassed in this study, investigating the developmental course of executive abilities.
Abstract:
CHAPTER 1 - Gummy stem blight, caused by the fungus D. bryoniae, is a disease commonly found in watermelon crops in several countries. In Brazil there are numerous studies related to the disease, but no uniform methods for quantifying disease severity in the field. We therefore developed a diagrammatic scale based on scanned photos of watermelon leaves infected with D. bryoniae. The scale comprises severity levels of 0, 10, 20, 45, 65 and 90%. The scale validation was divided into two parts: initially, 10 evaluators (half experienced, half inexperienced) estimated disease severity from the observation of 100 photos of symptomatic watermelon leaves at different severity levels. Afterwards, the same evaluators estimated the severity with the support of the scale, prepared with the Quant program. Data were analysed using linear regression, and the slope (angular coefficient), intercept (linear coefficient) and correlation coefficients were obtained; based on these, we determined the accuracy and precision of the evaluations. The correlation coefficients (R2) ranged from 0.88 to 0.97 for the experienced evaluators and from 0.55 to 0.95 for the inexperienced evaluators. The average slope for inexperienced evaluators was 20.42 and 8.61 with and without the support of the diagrammatic scale, respectively. Experienced evaluators showed average intercepts of 5.30 and 1.68 with and without the support of the diagrammatic scale, respectively. The analysis of absolute errors indicated that the use of the diagrammatic scale helped minimise flaws in the estimation of severity levels. The proposed diagrammatic scale proved adequate for evaluating gummy stem blight severity in watermelon. CHAPTER 2 - Gummy stem blight (Didymella bryoniae) is a disease that affects watermelon productivity, leading to losses of over 40%.
This study aimed to evaluate the efficiency of different production systems in controlling gummy stem blight in watermelon, in order to establish efficient methods to combat the disease. The following treatments were applied: conventional tillage (T1), integrated management (T2) and organic management (T3). Mineral fertilisation was applied in T1 and T2, while bovine manure was used in T3. Fungicides and insecticides were applied at commercial doses in T1 and T2, in T2 following soil chemical analysis. Disease severity was assessed with a grading scale. The experimental design was randomised blocks. The severity of gummy stem blight increased substantially during fruit formation. Watermelon plants grown under integrated management (T2) showed the lowest levels of disease severity, while plants under organic management (T3) exhibited the highest. We conclude that management based on careful field monitoring is the best way to achieve phytosanitary conditions adequate for watermelon cultivation in Tocantins.
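The Chapter 1 validation statistics (slope, intercept and R² of estimated versus actual severity, plus absolute errors) can be sketched as follows. The rater data below are invented for illustration; the study's actual estimates differ.

```python
import numpy as np

# One rater's estimates against the true severities (values made up):
# intercept near 0, slope near 1 and a high R^2 indicate an accurate,
# precise rater.
actual = np.array([0.0, 10.0, 20.0, 45.0, 65.0, 90.0])     # severity levels [%]
estimated = np.array([2.0, 12.0, 18.0, 47.0, 63.0, 88.0])  # rater's estimates [%]

slope, intercept = np.polyfit(actual, estimated, 1)        # linear regression
r2 = np.corrcoef(actual, estimated)[0, 1] ** 2             # correlation coefficient^2
absolute_errors = np.abs(estimated - actual)               # per-photo absolute error
```

Repeating this per evaluator, with and without the scale, yields exactly the comparison of accuracy (slope, intercept) and precision (R², absolute errors) reported in the abstract.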
Abstract:
Potential errors in the application of mixture theory to the analysis of multiple-frequency bioelectrical impedance data for the determination of body fluid volumes are assessed. Potential sources of error include conductive length, tissue fluid resistivity, body density, weight, and technical errors of measurement. Inclusion of inaccurate estimates of body density and weight introduces errors of typically < ±3%, but incorrect assumptions regarding conductive length or fluid resistivities may each incur errors of up to 20%.
Abstract:
In a few rare diseases, specialised studies in cerebrospinal fluid (CSF) are required to identify the underlying metabolic disorder. We aimed to explore the possibility of detecting key synaptic proteins in the CSF, in particular dopaminergic and GABAergic proteins, as new procedures that could be useful for both pathophysiological and diagnostic purposes in the investigation of inherited disorders of neurotransmission. Dopamine receptor type 2 (D2R), dopamine transporter (DAT) and vesicular monoamine transporter type 2 (VMAT2) were analysed by western blot in CSF samples from 30 healthy controls (aged 11 days to 17 years). Because VMAT2 was the only one of these proteins with an intracellular localisation, the vesicular GABA transporter, another intracellular protein, was also studied for comparison. Spearman's correlation and Student's t tests were applied to compare optical density signals between the different proteins. All these synaptic proteins could be easily detected and quantified in the CSF. DAT, D2R and GABA VT expression decreases with age, particularly in the first months of life, reflecting the expected intense synaptic activity and formation of neuronal circuitry. A statistically significant relationship was found between D2R and DAT expression, reinforcing previous evidence of DAT regulation by D2R. To our knowledge, there are no previous studies on human CSF reporting a reliable analysis of these proteins. Such studies could help elucidate new causes of disturbed dopaminergic and GABAergic transmission, as well as aid understanding of the different responses to L-dopa in inherited disorders affecting dopamine metabolism. Moreover, this approach to assessing synaptic activity in vivo can be extended to different groups of proteins and diseases.
Abstract:
The evolution of continuous traits is the central component of comparative analyses in phylogenetics, and the comparison of alternative models of trait evolution has greatly improved our understanding of the mechanisms driving phenotypic differentiation. Several factors influence the comparison of models, and we explore the effects of random errors in trait measurement on the accuracy of model selection. We simulate trait data under a Brownian motion model (BM) and introduce different magnitudes of random measurement error. We then evaluate the resulting statistical support for this model against two alternative models: Ornstein-Uhlenbeck (OU) and accelerating/decelerating rates (ACDC). Our analyses show that even small measurement errors (10%) consistently bias model selection towards erroneous rejection of BM in favour of more parameter-rich models (most frequently the OU model). Fortunately, methods that explicitly incorporate measurement errors in phylogenetic analyses considerably improve the accuracy of model selection. Our results call for caution in interpreting the results of model selection in comparative analyses, especially when complex models garner only modest additional support. Importantly, as measurement errors occur in most trait data sets, we suggest that estimation of measurement errors should always be performed during comparative analysis to reduce chances of misidentification of evolutionary processes.
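The inflating effect of measurement error is visible even in a non-phylogenetic toy example: a single Brownian path rather than trait values on a tree. The σ, noise level and seed below are arbitrary choices, and this sketch does not perform the model selection itself.

```python
import numpy as np

# Toy illustration: independent measurement error added to BM-generated
# values inflates the variance of the observed increments, the kind of
# distortion that can mislead model selection towards richer models.
rng = np.random.default_rng(0)
n, sigma, noise_sd = 20000, 1.0, 0.5

path = np.cumsum(rng.normal(0.0, sigma, n))      # Brownian motion values
observed = path + rng.normal(0.0, noise_sd, n)   # + independent measurement error

true_inc_var = np.var(np.diff(path))             # expected: sigma^2 = 1.0
obs_inc_var = np.var(np.diff(observed))          # expected: sigma^2 + 2*noise_sd^2 = 1.5
```

The observed increments no longer look like pure BM (their variance is inflated and adjacent increments become negatively correlated), which is why methods that model the measurement error explicitly recover the correct process more often.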
Abstract:
INTRODUCTION: Video records are widely used to analyse performance in alpine skiing at professional and amateur levels. Part of these analyses requires the labelling of movements (i.e. determining when specific events occur). Although differences among coaches, and for the same coach between dates, are to be expected, they have never been quantified. Knowing these differences is essential to determine which parameters are reliable enough to be used. This study aimed to quantify the precision and repeatability of alpine skiing coaches of various levels, as has been done in other fields (Koo et al, 2005). METHODS: Software similar to commercial products was designed to allow video analyses. 15 coaches divided into 3 groups (5 amateur coaches (G1), 5 professional instructors (G2) and 5 semi-professional coaches (G3)) were enrolled. They were asked to label 15 timing parameters (TP) per curve according to the Swiss ski manual (Terribilini et al, 2001). TP included phases (initiation, steering I-II) and body and ski movements (e.g. rotation, weighting, extension, balance). Three video sequences sampled at 25 Hz were used, and one curve per video was labelled. The first video was used to familiarise the analyser with the software. The two other videos, corresponding to slalom and giant slalom, were considered for the analysis. G1 performed the analysis twice (A1 and A2) on different dates, and TP were randomised between the two analyses. Reference TP were defined as the median of G2 and G3 at A1. Precision was defined as the RMS difference between individual TP and the reference TP, whereas repeatability was calculated as the RMS difference between individual TP at A1 and at A2. RESULTS AND DISCUSSION: For G1, G2 and G3, precisions of ±5.6, ±3.0 and ±2.0 frames were obtained, respectively. These results, showing that G2 was more precise than G1 and G3 more precise than G2, are in accordance with the group levels.
The repeatability for G1 was ±3.1 frames. Furthermore, differences in precision among TP were observed for G2 and G3, ranging from ±5.9 frames for "body counter-rotation movement in steering phase II" down to 0.8 frames for "ski unweighting in initiation phase". CONCLUSION: This study quantified coaches' ability to label video in terms of precision and repeatability. The best precision, obtained for G3, was ±0.08 s, which corresponds to ±6.5% of the curve cycle. Regarding repeatability, we obtained ±0.12 s for G1, corresponding to ±12% of the curve cycle. The repeatabilities of G2 and G3 are expected to be lower than the precision of G1 and will be assessed soon. In conclusion, our results indicate that the labelling of video records is reliable for some TP, whereas caution is required for others. REFERENCES Koo S, Gold MD, Andriacchi TP. (2005). Osteoarthritis, 13, 782-789. Terribilini M, et al. (2001). Swiss Ski manual, 29-46. IASS, Lucerne.
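The two metrics defined in the Methods can be sketched directly. The frame labels below are invented; at the 25 Hz sampling used, dividing frame values by 25 converts them to seconds.

```python
import numpy as np

# Precision: RMS deviation of a coach's labels from the reference labels.
# Repeatability: RMS deviation between the same coach's two sessions.
def rms_diff(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))

reference = np.array([12, 40, 77, 103])   # reference TP (frames), invented
coach_a1 = np.array([14, 38, 80, 101])    # coach's first analysis (A1)
coach_a2 = np.array([13, 41, 78, 104])    # coach's second analysis (A2)

precision = rms_diff(coach_a1, reference)
repeatability = rms_diff(coach_a1, coach_a2)
```

Computing these per TP, rather than pooled, is what reveals which timing parameters are labelled reliably and which require caution.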
Abstract:
Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden-target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5 mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, with combined uncertainties of 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, with combined uncertainties of 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used.