912 results for Error Correction Models
Abstract:
During Legs 127 and 128, we found a systematic error in the index property measurements, in that the wet bulk density, grain density, and porosity did not satisfy well-established interrelationships. We have found that an almost constant difference exists between the weight of water lost during drying and the volume of water lost. This discrepancy is independent of the volume or water content of the sample. The two water losses should be equal because the density of water is close to 1.0 g/cm³. The pycnometer wet volume measurement has been identified as the source of the systematic error: the wet volume is on average 0.2 cm³ too low. For the rare cases when the water content is negligible, there is no offset. The wet volume error results from the partial vapor pressure of water in the pycnometer cell. Newly corrected tables of index properties measured during Legs 127 and 128 are included. The corrected index properties are internally consistent, and the data are in better agreement with theoretical models that relate the index properties to other physical properties, such as thermal conductivity and acoustic velocity. In the future, a standard volume sampler should be used, or the wet volume should be calculated from the dry volume and the water loss by weight.
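A minimal sketch of the recommended correction, assuming a water density of 1.0 g/cm³ (function and variable names are illustrative, not from the report):

    # Recompute the wet volume from the dry volume plus the water lost on
    # drying (by weight), instead of trusting the biased pycnometer value.
    RHO_WATER = 1.0  # g/cm^3

    def corrected_wet_volume(dry_volume_cm3, wet_weight_g, dry_weight_g):
        water_weight_g = wet_weight_g - dry_weight_g   # weight of water lost
        return dry_volume_cm3 + water_weight_g / RHO_WATER

    # Example: a sample that lost 1.5 g of water on drying
    print(corrected_wet_volume(10.0, 25.0, 23.5))      # 11.5 cm^3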
Abstract:
Vol. 2 has imprint: New York: Printed and published by Isaac Riley, Wall-street, 1807.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
There has been an abundance of literature on the modelling of hydrocyclones over the past 30 years. However, in the comminution area at least, the more popular commercially available packages (e.g. JKSimMet, Limn, MODSIM) use the models developed by Nageswararao and Plitt in the 1970s, either as published at that time or with minor modification. With the benefit of 30 years of hindsight, this paper discusses the assumptions and approximations used in developing these models. Differences in model structure and the choice of dependent and independent variables are also considered. Redundancies are highlighted, and an assessment is made of the general applicability of each of the models, their limitations and the sources of error in their predictions. This paper provides the latest version of the Nageswararao model based on the above analysis, in a form that can readily be implemented in any suitable programming language, or within a spreadsheet. The Plitt model is also presented in similar form.
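For orientation, the reduced-efficiency curve usually associated with the Plitt model takes the form sketched below; the exponent m and corrected cut size d50c are fitted parameters, and the updated parameter values discussed in the paper are not reproduced here:

    import math

    def plitt_recovery(d, d50c, m):
        """Corrected recovery to underflow of a particle of size d (Plitt form)."""
        return 1.0 - math.exp(-0.693 * (d / d50c) ** m)

    print(plitt_recovery(50.0, 50.0, 2.5))   # ~0.5 at the corrected cut size d50c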
Abstract:
An investigation was conducted to evaluate the impact of experimental designs and spatial analyses (single-trial models) on the response to selection for grain yield in the northern grains region of Australia (Queensland and northern New South Wales). Two sets of multi-environment experiments were considered. One set, based on 33 trials conducted from 1994 to 1996, was used to represent the testing system of the wheat breeding program and is referred to as the multi-environment trial (MET). The second set, based on 47 trials conducted from 1986 to 1993, sampled a more diverse set of years and management regimes and was used to represent the target population of environments (TPE). There were 18 genotypes in common between the MET and TPE sets of trials. From indirect selection theory, the phenotypic correlation coefficient between the MET and TPE single-trial adjusted genotype means, r_p(MT), was used to determine the effect of the single-trial model on the expected indirect response to selection for grain yield in the TPE based on selection in the MET. Five single-trial models were considered: randomised complete block (RCB), incomplete block (IB), spatial analysis (SS), spatial analysis with a measurement error (SSM) and a combination of spatial analysis and experimental design information to identify the preferred (PF) model. Bootstrap-resampling methodology was used to construct multiple MET data sets, ranging in size from 2 to 20 environments per MET sample. The size and environmental composition of the MET and the single-trial model influenced the r_p(MT). On average, the PF model resulted in a higher r_p(MT) than the IB, SS and SSM models, which were in turn superior to the RCB model for MET sizes based on fewer than ten environments. For METs based on ten or more environments, the r_p(MT) was similar for all single-trial models.
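A schematic of the bootstrap step, as a sketch under stated assumptions: a matrix of single-trial adjusted genotype means (genotypes by environments) for the MET and a fixed vector of TPE means, with all names hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_rpMT(met_means, tpe_means, met_size, n_boot=1000):
        """Mean Pearson correlation r_p(MT) over bootstrap-resampled MET sets."""
        n_env = met_means.shape[1]
        rs = []
        for _ in range(n_boot):
            env = rng.integers(0, n_env, met_size)    # resample environments
            met_avg = met_means[:, env].mean(axis=1)  # genotype means in the MET
            rs.append(np.corrcoef(met_avg, tpe_means)[0, 1])
        return float(np.mean(rs))

    met = rng.normal(size=(18, 33))                 # 18 genotypes x 33 MET trials
    tpe = met.mean(axis=1) + rng.normal(scale=0.5, size=18)
    print(bootstrap_rpMT(met, tpe, met_size=10))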
Abstract:
An existing capillarity correction for free-surface groundwater flow, as modelled by the Boussinesq equation, is re-investigated. Existing solutions, based on the shallow-flow expansion, have considered only the zeroth-order approximation. Here, a second-order capillarity correction to tide-induced water-table fluctuations in a coastal aquifer adjacent to a sloping beach is derived. A new definition of the capillarity correction is proposed for small capillary fringes, and a simplified solution is derived. Comparisons of the two models show that the simplified model can be used in most cases. The significant effects of higher-order capillarity corrections on tidal fluctuations in a sloping beach are also demonstrated.
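For reference, the zeroth-order (capillarity-free) Boussinesq equation for the free-surface elevation h(x, t), with hydraulic conductivity K and effective porosity n_e, reads

    n_e \frac{\partial h}{\partial t} = K \frac{\partial}{\partial x}\left( h \frac{\partial h}{\partial x} \right);

the capillarity corrections examined here add terms that scale with the thickness of the capillary fringe (the exact corrected form is derived in the paper and is not reproduced here).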
Abstract:
The Wet Tropics World Heritage Area in Far North Queensland, Australia consists predominantly of tropical rainforest and wet sclerophyll forest in areas of variable relief. Previous maps of vegetation communities in the area were produced by a labor-intensive combination of field survey and air-photo interpretation. Thus, the aim of this work was to develop a new vegetation mapping method based on imaging radar that incorporates topographic corrections, which could be repeated frequently, and which would reduce the need for detailed field assessments and associated costs. The method employed a topographic correction and mapping procedure that was developed to enable vegetation structural classes to be mapped from satellite imaging radar. Eight JERS-1 scenes covering the Wet Tropics area for 1996 were acquired from NASDA under the auspices of the Global Rainforest Mapping Project. The JERS scenes were geometrically corrected for topographic distortion using an 80 m DEM and a combination of polynomial warping and radar viewing geometry modeling. An image mosaic was created to cover the Wet Tropics region, and a new technique for image smoothing was applied to the JERS texture bands and DEM before a Maximum Likelihood classification was applied to identify major land-cover and vegetation communities. Despite these efforts, dominant vegetation community classes could only be classified to low levels of accuracy (57.5 percent), which was partly explained by the significantly larger pixel size of the DEM in comparison to the JERS image (12.5 m). In addition, the spatial and floristic detail contained in the classes of the original validation maps was much finer than the JERS classification product was able to distinguish. In comparison to field and aerial photo-based approaches for mapping the vegetation of the Wet Tropics, appropriately corrected SAR data provide a more regional-scale, all-weather mapping technique for broader vegetation classes. Further work is required to establish an appropriate combination of imaging radar with elevation data and other environmental surrogates to accurately map vegetation communities across the entire Wet Tropics.
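A minimal sketch of the Maximum Likelihood classification step, assuming per-class Gaussian statistics estimated from training pixels (the abstract does not give implementation details; all names are illustrative):

    import numpy as np
    from scipy.stats import multivariate_normal

    def ml_classify(pixels, class_means, class_covs):
        """Assign each pixel to the class with the highest Gaussian log-likelihood."""
        scores = np.stack([
            multivariate_normal(mean=m, cov=c).logpdf(pixels)
            for m, c in zip(class_means, class_covs)
        ])
        return scores.argmax(axis=0)

    pixels = np.array([[0.1, 0.2], [0.9, 1.1]])      # two pixels, two bands
    means = [np.zeros(2), np.ones(2)]
    covs = [np.eye(2) * 0.1, np.eye(2) * 0.1]
    print(ml_classify(pixels, means, covs))          # [0 1]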
Abstract:
Very few empirically validated interventions for improving metacognitive skills (i.e., self-awareness and self-regulation) and functional outcomes have been reported. This single-case experimental study presents JM, a 36-year-old man with a very severe traumatic brain injury (TBI) who demonstrated long-term awareness deficits. Treatment at four years post-injury involved a metacognitive contextual intervention based on a conceptualization of the neuro-cognitive, psychological, and socio-environmental factors contributing to his awareness deficits. The 16-week intervention targeted error awareness and self-correction in two real-life settings: (a) cooking at home and (b) volunteer work. Outcome measures included behavioral observation of error behavior and standardized awareness measures. Relative to baseline performance in the cooking setting, JM demonstrated a 44% reduction in error frequency and increased self-correction. Although no spontaneous generalization was evident in the volunteer work setting, specific training in this environment led to a 39% decrease in errors. JM later gained paid employment and received brief metacognitive training in his work environment. JM's global self-knowledge of deficits, as assessed by self-report, was unchanged after the program. Overall, the study provides preliminary support for a metacognitive contextual approach to improving error awareness and functional outcome in real-life settings.
Abstract:
Software simulation models are computer programs that need to be verified and debugged like any other software. In previous work, a method for error isolation in simulation models was proposed. The method relies on a set of feature matrices that can be used to determine which part of the model implementation is responsible for deviations in the output of the model. Currently, these feature matrices have to be generated by hand from the model implementation, which is a tedious and error-prone task. In this paper, a method based on mutation analysis, together with prototype tool support, is presented for verifying the manually generated feature matrices. The application of the method and tool to a model for wastewater treatment shows that the feature matrices can be verified effectively using a minimal number of mutants.
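An illustrative sketch of the mutation-analysis idea (all names hypothetical, not the paper's tool): mutate one model part at a time, rerun the model, and flag output deviations that the feature matrix does not attribute to the mutated part.

    # model(params) -> sequence of output features; feature_matrix[i][j] is True
    # if model part i is claimed to influence output feature j.
    def verify_feature_matrix(model, base_params, mutants, feature_matrix, tol=1e-9):
        base = model(base_params)
        ok = True
        for part, mutated_params in mutants:          # one mutant per model part
            out = model(mutated_params)
            for j, (a, b) in enumerate(zip(base, out)):
                if abs(a - b) > tol and not feature_matrix[part][j]:
                    print(f"part {part} unexpectedly affects feature {j}")
                    ok = False
        return ok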
Abstract:
Surgically assisted rapid maxillary expansion (SARME) is one of the procedures of choice for correcting transverse maxillary deficiency in adult patients. This study evaluated the changes produced in the upper and lower dental arches of 18 patients (six male and 12 female, mean age 23.3 years) who underwent SARME. For each patient, three plaster casts were prepared and digitized with a 3D scanner, obtained at different stages: initial, before surgery (T1); three months post-expansion (T2); and six months post-expansion (T3). The transverse distances of the upper and lower dental arches, the inclination of the upper posterior teeth, and the clinical crown height of the upper posterior teeth were evaluated, and it was assessed whether the amount of dental inclination correlated with the development of gingival recession. Analysis of variance, Tukey's test and Pearson's correlation test were used to analyze the results; the paired t-test was used to assess intra-examiner systematic error, and casual (random) error was determined using Dahlberg's formula. Based on the methodology used and the results obtained, it can be concluded that: 1. regarding transverse changes in the upper arch, all variables increased from T1 to T2 and held their values from T2 to T3, demonstrating the effectiveness and stability of the procedure; 2. in the lower arch there were no statistically significant transverse changes, except for the first molars; 3. regarding dental inclination, an increase from T1 to T2 was observed for all teeth, but with statistical significance only for the second molar and the first and second premolars on the right side, and the first molar and second premolar on the left side; 4. SARME did not lead to the development of gingival recession at any time point; 5. there was no correlation between the amount of dental inclination and the development of gingival recession.
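For reference, Dahlberg's formula for the casual (random) error between duplicate measurements, as cited in the abstract, is commonly written

    D = \sqrt{ \frac{\sum_{i=1}^{n} d_i^2}{2n} },

where d_i is the difference between the first and second measurement of case i and n is the number of duplicated cases.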
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
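As standard background (not quoted from the abstract): for a generalized linear model y(x) = w^T \phi(x) with a Gaussian weight prior of precision \alpha and noise precision \beta, the Bayesian error bar at input x is commonly written

    \sigma^2(x) = \beta^{-1} + \phi(x)^\top A^{-1} \phi(x), \qquad A = \alpha I + \beta \sum_n \phi(x_n) \phi(x_n)^\top,

where the second term carries the dependence on the location of the data points that the paper analyzes.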
Abstract:
We investigate the dependence of Bayesian error bars on the distribution of data in input space. For generalized linear regression models we derive an upper bound on the error bars which shows that, in the neighbourhood of the data points, the error bars are substantially reduced from their prior values. For regions of high data density we also show that the contribution to the output variance due to the uncertainty in the weights can exhibit an approximate inverse proportionality to the probability density. Empirical results support these conclusions.
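A minimal numerical sketch of this effect, assuming the standard error-bar expression given above (basis, parameters, and data layout are all illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    centers = np.linspace(0, 1, 10)                  # Gaussian basis on [0, 1]

    def phi(x):
        return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.1) ** 2)

    alpha, beta = 1.0, 25.0                          # prior and noise precision
    x_train = rng.uniform(0.0, 0.5, 30)              # data concentrated in [0, 0.5]
    Phi = phi(x_train)
    A = alpha * np.eye(len(centers)) + beta * Phi.T @ Phi

    x_test = np.linspace(0, 1, 5)
    Pt = phi(x_test)
    var = 1.0 / beta + np.einsum("ij,jk,ik->i", Pt, np.linalg.inv(A), Pt)
    print(np.c_[x_test, var])   # the variance is smallest where the data lie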
Abstract:
Based on a statistical mechanics approach, we develop a method for approximately computing average case learning curves and their sample fluctuations for Gaussian process regression models. We give examples for the Wiener process and show that universal relations (that are independent of the input distribution) between error measures can be derived.
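An illustrative Monte Carlo sketch (not the authors' statistical-mechanics calculation): the average learning curve can be estimated by averaging the Gaussian process posterior variance over random training sets, here using the Wiener-process covariance k(x, x') = min(x, x').

    import numpy as np

    rng = np.random.default_rng(0)
    k = lambda a, b: np.minimum(a[:, None], b[None, :])   # Wiener kernel
    noise = 0.01
    x_test = np.linspace(0.01, 1, 100)

    def avg_error(n, trials=200):
        """Average posterior variance over random size-n training sets."""
        errs = []
        for _ in range(trials):
            x = rng.uniform(0.01, 1, n)
            K = k(x, x) + noise * np.eye(n)
            Ks = k(x_test, x)
            # prior variance k(x, x) = x for the Wiener process
            v = x_test - np.einsum("ij,jk,ik->i", Ks, np.linalg.inv(K), Ks)
            errs.append(v.mean())
        return float(np.mean(errs))

    for n in (2, 4, 8, 16, 32):
        print(n, avg_error(n))     # the average error decays as n grows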
Abstract:
We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits by employing two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how a finite system size affects the error probability.
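A toy parity-check demonstration (a sketch only: for brevity this uses the small (7,4) Hamming code, whereas real LDPC codes use large, randomly constructed sparse matrices and iterative decoding):

    import numpy as np

    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])   # column i is the binary code of i+1

    t = np.array([1, 0, 1, 0, 1, 0, 1])     # a valid codeword: H t = 0 (mod 2)
    assert not (H @ t % 2).any()

    r = t.copy()
    r[4] ^= 1                                # the channel flips bit 4

    s = H @ r % 2                            # syndrome = binary index of the error
    pos = 4 * s[0] + 2 * s[1] + s[2] - 1
    r[pos] ^= 1                              # correct the flipped bit
    print((r == t).all())                    # True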
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods, whereby random pollution events are generated to the same distribution as historically occurring events or an a priori potential probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and for the presentation of the results. The functionality of the method, as well as its sensitivity to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effects on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. Also, the migration of the contaminant plume advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in nonlinear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
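A minimal sketch of the Monte Carlo source-term logic described above (all numbers hypothetical; the thesis couples this step to MODFLOW-2000 and MT3DMS rather than to the crude decay stand-in used here):

    import numpy as np

    rng = np.random.default_rng(0)
    n_real, n_days = 500, 365
    p_event = 0.01        # daily probability of a pollution event at a cell
    threshold = 100.0     # user-defined concentration limit (e.g. chloride, mg/L)

    exceedances = 0
    for _ in range(n_real):
        conc = 0.0
        for _ in range(n_days):
            if rng.random() < p_event:       # event occurs: add a source term
                conc += rng.lognormal(3.0, 1.0)
            conc *= 0.99                     # crude stand-in for transport/decay
        if conc > threshold:
            exceedances += 1

    print("risk =", exceedances / n_real)    # fraction of realisations exceeding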