965 results for "Error in substance"


Relevance: 90.00%

Abstract:

We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network with an arbitrary number of hidden units, whose output may be corrupted by Gaussian noise. We examine the effect of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in various phases of the learning process, in both noiseless and noisy scenarios.
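The ingredients above (on-line gradient descent, a noisy teacher, weight decay) can be illustrated with a deliberately simplified sketch. It uses a linear student/teacher pair rather than the two-layer soft-committee networks of the paper, and the dimension, learning rate, and decay constant are arbitrary choices for the demonstration:

```python
import numpy as np

# Toy stand-in for the paper's setting: a *linear* student/teacher pair
# (not a two-layer soft-committee machine), trained on-line with weight decay.
rng = np.random.default_rng(0)
N = 50                               # input dimension (arbitrary)
w_teacher = rng.standard_normal(N)   # teacher weights
w_student = np.zeros(N)              # student starts from zero
eta, lam, sigma = 0.05, 0.01, 0.1    # learning rate, weight decay, output noise

for t in range(5000):
    x = rng.standard_normal(N) / np.sqrt(N)            # random input example
    y = w_teacher @ x + sigma * rng.standard_normal()  # noisy teacher label
    err = (w_student @ x) - y
    # on-line gradient step on the squared error, plus the weight-decay term
    w_student -= eta * (err * x + lam * w_student)

# weight decay biases the student toward zero, so the asymptotic
# generalization error does not vanish even after many examples
gen_err = np.linalg.norm(w_student - w_teacher) / np.linalg.norm(w_teacher)
```

Even in this linear caricature the qualitative effect survives: the decay term shrinks the student toward the origin, trading a residual bias for robustness to the output noise.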

Relevance: 90.00%

Abstract:

Purpose: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. Methods: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS and astigmatism as greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. Results: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old schoolchildren. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and a specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. Conclusions: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programmes should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver it.
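The sensitivity and specificity figures quoted above come from a standard 2×2 screening table. A minimal sketch of the arithmetic follows; the counts below are hypothetical, chosen only to roughly reproduce the 50%/92% figures for 6-7-year-olds, and are not the NICER data:

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)   # fraction of true refractive error detected
    specificity = tn / (tn + fp)   # fraction of normals correctly passed
    return sensitivity, specificity

# hypothetical counts (NOT the NICER data) chosen to roughly reproduce
# the 50% sensitivity / 92% specificity reported for 6-7-year-olds
sens, spec = screening_stats(tp=20, fp=28, fn=20, tn=324)
```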

Relevance: 90.00%

Abstract:

This thesis addresses the viability of automatic speech recognition for control room systems; with careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to possible problems of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to performance superior to that achieved with instructions alone. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstration by experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied as text; the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command-repetition practices, rather than by error-handling dialogues. This method of error correction is held to be non-intrusive to primary command and control operations. This thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for its reduction.

Relevance: 90.00%

Abstract:

Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
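The simpler of the two estimators discussed, case (i), is a moment-based (centroid) estimate of the extremum position computed from intensities at fixed frequencies. The sketch below uses illustrative values (frequency grid, linewidth, noise level), not parameters of any particular instrument; the finite, asymmetric measurement window also makes the statistical bias mentioned above visible:

```python
import numpy as np

rng = np.random.default_rng(1)
# illustrative values only, not parameters of a particular instrument
f = np.linspace(-500.0, 500.0, 101)   # fixed measurement frequencies (MHz)
f0, gamma = 37.0, 35.0                # true extremum position and half-width
curve = 1.0 / (1.0 + ((f - f0) / gamma) ** 2)            # Lorentzian resonance
intensity = curve + 0.005 * rng.standard_normal(f.size)  # noisy Stokes data

# case (i): non-parametric moment (centroid) estimate of the extremum
# position; truncating the Lorentzian's heavy tails over an asymmetric
# window makes this estimator biased toward the window centre
f_hat = np.sum(f * intensity) / np.sum(intensity)
```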

Relevance: 90.00%

Abstract:

The problem of learning by examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism we calculate the average generalization error in UCMs with L hidden layers and for a large enough number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that when L>1 a number of teacher networks with the same number of hidden layers and different overlaps induce learning processes with the same critical points.


Relevance: 90.00%

Abstract:

Background: It has been estimated that medication error harms 1-2% of patients admitted to general hospitals. There has been no previous systematic review of the incidence, cause or type of medication error in mental healthcare services. Methods: A systematic literature search was undertaken for studies that examined the incidence or cause of medication error in one or more stages of the medication-management process in the setting of a community- or hospital-based mental healthcare service. The results were examined in the context of the design of the study and the denominator used. Results: All studies examined medication-management processes, as opposed to outcomes. The reported rate of error was highest in studies that retrospectively examined drug charts, intermediate in those that relied on reporting by pharmacists to identify error, and lowest in those that relied on organisational incident-reporting systems. Only a few of the errors identified by the studies caused actual harm, mostly because they were detected and remedial action was taken before the patient received the drug. The focus of the research was on inpatients and on prescriptions dispensed by mental health pharmacists. Conclusion: Research on medication error in mental healthcare is limited. In particular, very little is known about the incidence of error in non-hospital settings or about the harm it causes. Evidence is available from other sources that a substantial number of adverse drug events are caused by psychotropic drugs. Some of these are preventable and may therefore be due to medication error. On the basis of this, and of features of the organisation of mental healthcare that might predispose to medication error, priorities for future research are suggested.

Relevance: 90.00%

Abstract:

Purpose: To investigate the relationship between pupil diameter and refractive error and how refractive correction, target luminance, and accommodation modulate this relationship. Methods: Sixty emmetropic, myopic, and hyperopic subjects (age range, 18 to 35 years) viewed an illuminated target (luminance: 10, 100, 200, 400, 1000, 2000, and 4100 cd/m2) within a Badal optical system, at 0 diopters (D) and −3 D vergence, with and without refractive correction. Refractive error was corrected using daily disposable contact lenses. Pupil diameter and accommodation were recorded continuously using a commercially available photorefractor. Results: No significant difference in pupil diameter was found between the refractive groups at 0 D or −3 D target vergence, in the corrected or uncorrected conditions. As expected, pupil diameter decreased with increasing luminance. Target vergence had no significant influence on pupil diameter. In the corrected condition, at 0 D target vergence, the accommodation response was similar in all refractive groups. At −3 D target vergence, the emmetropic and myopic groups accommodated significantly more than the hyperopic group at all luminance levels. There was no correlation between accommodation response and pupil diameter or refractive error in any refractive group. In the uncorrected condition, the accommodation response was significantly greater in the hyperopic group than in the myopic group at all luminance levels, particularly for near viewing. In the hyperopic group, the accommodation response was significantly correlated with refractive error but not pupil diameter. In the myopic group, accommodation response level was not correlated with refractive error or pupil diameter. Conclusions: Refractive error has no influence on pupil diameter, irrespective of refractive correction or accommodative demand. This suggests that the pupil is controlled by the pupillary light reflex and is not driven by retinal blur.

Relevance: 90.00%

Abstract:

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately and to what degree of flexibility parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling their product and process verification systems. The aim of this research is to develop a system for capturing the company’s knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict the trend in the manufacture of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems. Measurement is like any other process in terms of variability, and reducing this variation via an automated machine-probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, due to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity due to the high amount of nickel in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product.

Like every process, material variations have a significant impact on machining quality. The main causes of variation originate from chemical composition and mechanical hardness. The non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat-treatment standards are designed for processing the material to the desired hardness levels based on application. In order to take corrective actions, a study on the material aspects of superalloys has been conducted. In this study, samples from different batches of material have been analysed. This involved material preparation for microscopy analysis, and examination of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
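The in-line SPC step described above amounts to tracking subgroup statistics against control limits. A minimal X-bar chart sketch with simulated measurements follows; the part dimension, subgroup scheme, and the constant A2 for subgroups of five are textbook choices, not data from this study:

```python
import numpy as np

rng = np.random.default_rng(2)
# 25 subgroups of 5 simulated in-line measurements (hypothetical part
# dimension in mm; not data from this study)
data = rng.normal(loc=12.00, scale=0.02, size=(25, 5))

xbar = data.mean(axis=1)                             # subgroup means
rbar = (data.max(axis=1) - data.min(axis=1)).mean()  # mean subgroup range
A2 = 0.577                                           # Shewhart constant, n = 5
ucl = xbar.mean() + A2 * rbar                        # upper control limit
lcl = xbar.mean() - A2 * rbar                        # lower control limit
out_of_control = np.flatnonzero((xbar < lcl) | (xbar > ucl))
```

Points flagged in `out_of_control` would prompt investigation of the process rather than adjustment of individual parts, which is the sense in which SPC "predicts the trend" instead of reacting to single measurements.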

Relevance: 90.00%

Abstract:

The aim of this paper is to explore engineering lecturers' experiences of generic skills assessment within an active learning context in Malaysia. Using a case-study methodology, lecturers' assessment approaches were investigated with regard to three generic skills: verbal communication, problem solving and teamwork. Because of the importance to learning of the assessment of such skills, it is this assessment that is discussed. The findings show the lecturers' initial feedback to have been generally lacking in substance, since they have limited knowledge and experience of assessing generic skills. Typical barriers identified during the study included: generic skills not being well defined; inadequate alignment across the engineering curricula and teaching approaches; assessment practices that were too flexible, particularly those to do with implementation; and a failure to keep up to date with industrial requirements. The emerging findings of the interviews reinforce the argument that there is clearly much room for improvement in the present state of generic skills assessment.

Relevance: 90.00%

Abstract:

The current study was designed to build on and extend the existing knowledge base of factors that cause, maintain, and influence child molestation. Theorized links among the type of offender and the offender's levels of moral development and social competence in the perpetration of child molestation were investigated. The conceptual framework for the study is based on the cognitive developmental stages of moral development as proposed by Kohlberg, the unified theory, or Four-Preconditions Model, of child molestation as proposed by Finkelhor, and the Information-Processing Model of Social Skills as proposed by McFall. The study sample consisted of 127 adult male child molesters participating in outpatient group therapy. All subjects completed a Self-Report Questionnaire which included questions designed to obtain relevant demographic data, questions similar to those used by the researchers for the Massachusetts Treatment Center: Child Molester Typology 3's social competency dimension, the Defining Issues Test (DIT) short form, the Social Avoidance and Distress Scale (SADS), the Rathus Assertiveness Schedule (RAS), and the Questionnaire Measure of Empathic Tendency (Empathy Scale). Data were analyzed utilizing confirmatory factor analysis, t-tests, and chi-square statistics. Partial support was found for the hypothesis that moral development is a separate but correlated construct from social competence. As predicted, although the actual mean score differences were small, a statistically significant difference was found in the current study between the mean DITP scores of the subject sample and that of the general male population, suggesting that child molesters, as a group, function at a lower level of moral development than does the general male population, and the situational offenders in the study sample demonstrated a statistically significantly higher level of moral development than the preferential offenders. 
The data did not support the hypothesis that situational offenders would demonstrate lower levels of social competence than preferential offenders. Relatively little significance is placed on this finding, however, because the measure for the social competency variable was likely subject to considerable measurement error, in that the items used as indicators were not clearly defined. The last hypothesis, which involved the potential differences in social anxiety, assertion skills, and empathy between the situational and preferential offender types, was not supported by the data.

Relevance: 90.00%

Abstract:

Optical imaging is an emerging technology for non-invasive breast cancer diagnostics. In recent years, portable and patient-comfortable hand-held optical imagers have been developed for two-dimensional (2D) tumor detection. However, these imagers are not capable of three-dimensional (3D) tomography because they cannot register the positional information of the hand-held probe onto the imaged tissue. A hand-held optical imager with 3D tomography capabilities has been developed in our Optical Imaging Laboratory, as demonstrated in tissue phantom studies. The overall goal of my dissertation is the translation of our imager to the clinical setting for 3D tomographic imaging in human breast tissues. A systematic experimental approach was designed and executed as follows: (i) fast 2D imaging, (ii) coregistered imaging, and (iii) 3D tomographic imaging studies. (i) Fast 2D imaging was initially demonstrated in tissue phantoms (1% Liposyn solution) and in vitro (minced chicken breast and 1% Liposyn). A 0.45 cm3 fluorescent target at 1:0 contrast ratio was detectable up to 2.5 cm deep. Fast 2D imaging experiments performed in vivo with healthy female subjects also detected a 0.45 cm3 fluorescent target superficially placed ∼2.5 cm under the breast tissue. (ii) Coregistered imaging was automated and validated in phantoms with ∼0.19 cm error in the probe’s positional information. Coregistration also improved the target depth detection to 3.5 cm using a multi-location imaging approach. Coregistered imaging was further validated in vivo, although the error in the probe’s positional information increased to ∼0.9 cm (subject to soft tissue deformation and movement). (iii) Three-dimensional tomography studies were successfully demonstrated in vitro using 0.45 cm3 fluorescent targets. The feasibility of 3D tomography was demonstrated for the first time in breast tissues using the hand-held optical imager, wherein a 0.45 cm3 fluorescent target (superficially placed) was recovered, along with artifacts. Diffuse optical imaging studies were performed in two breast cancer patients with invasive ductal carcinoma. The images showed greater absorption at the tumor sites (as observed from x-ray mammography, ultrasound, and/or MRI). In summary, my dissertation demonstrated the potential of a hand-held optical imager for 2D breast tumor detection and 3D breast tomography, holding promise for extensive clinical translational efforts.

Relevance: 90.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
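The link between the two notions of dimensionality reduction can already be seen in a 2×2 table: a rank-1 (single latent class) factorization is exactly the log-linear model with the interaction term set to zero. A minimal sketch, with arbitrary example margins:

```python
import numpy as np

# a 2x2 probability table with independent margins: nonnegative rank 1
# (a single latent class) -- the margins below are arbitrary examples
p_row = np.array([0.3, 0.7])
p_col = np.array([0.6, 0.4])
table = np.outer(p_row, p_col)

# the corresponding log-linear model has a zero interaction term,
# i.e. the log odds ratio vanishes
log_odds_ratio = (np.log(table[0, 0]) + np.log(table[1, 1])
                  - np.log(table[0, 1]) - np.log(table[1, 0]))
```

For larger tables and higher ranks the correspondence is far less direct, which is precisely the gap the results of Chapter 2 address.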

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population-structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
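The idea of replacing an awkward posterior with a Gaussian can be sketched in one dimension. Note the caveat: the chapter derives the *optimal* (KL-minimizing) Gaussian approximation for log-linear models, whereas the sketch below is the simpler Laplace (Gaussian-at-the-mode) approximation applied to a Beta posterior, used only to convey the general idea:

```python
import numpy as np

# Beta(a, b) posterior for a proportion (a stand-in target, not a
# log-linear model), approximated by a Gaussian centred at the mode
a, b = 40.0, 60.0
mode = (a - 1) / (a + b - 2)

# curvature of the negative log-density at the mode gives the precision
precision = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2
sigma = 1.0 / np.sqrt(precision)

exact_mean = a / (a + b)   # for comparison with the Gaussian centre
```

With moderate counts the Gaussian centre already sits very close to the exact posterior mean, which is the kind of finite-sample accuracy the chapter quantifies with explicit KL bounds.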

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
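The premise that waiting times between threshold exceedances encode both the rate and the temporal structure of extremes can be illustrated with a simple serially dependent series. The sketch below uses an AR(1) process as a stand-in (not a max-stable velocity process); short waits reveal clustering of extremes:

```python
import numpy as np

rng = np.random.default_rng(4)
# AR(1) series as a simple stand-in process (not a max-stable velocity
# process); serial dependence makes exceedances of a high threshold cluster
x = np.empty(20_000)
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

u = np.quantile(x, 0.98)                   # high threshold
exceed_times = np.flatnonzero(x > u)
waits = np.diff(exceed_times)              # waiting times between exceedances

mean_wait = waits.mean()                   # encodes the exceedance rate
clustered = (waits == 1).mean()            # short waits reveal clustering
```

An independent series with the same exceedance rate would show far fewer unit waits, which is exactly the distinction the waiting-time-based definition of tail dependence is built to capture.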

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
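A minimal sketch of the first of these, an approximating kernel built from random subsets of data, is given below for a toy Gaussian-mean posterior. The subsample size, proposal scale, and chain length are arbitrary, and this naive subsampled Metropolis step is exactly the kind of perturbed kernel whose estimation error such a framework is meant to quantify:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=10_000)  # data; posterior for mu is ~N(ybar, 1/n)
m = 1_000                              # random subset size per iteration

def loglik(mu, batch):
    # subset log-likelihood rescaled to the full sample size
    return -0.5 * (len(y) / len(batch)) * np.sum((batch - mu) ** 2)

mu, chain = 0.0, []
for _ in range(4000):
    batch = rng.choice(y, size=m, replace=False)   # fresh subset each step
    prop = mu + 0.02 * rng.standard_normal()       # random-walk proposal
    # approximate Metropolis accept step: both terms use the same subset,
    # so the exact kernel is perturbed rather than sampled
    if np.log(rng.random()) < loglik(prop, batch) - loglik(mu, batch):
        mu = prop
    chain.append(mu)

est = float(np.mean(chain[2000:]))   # posterior-mean estimate from the chain
```

The noisy acceptance step inflates the stationary variance relative to the exact chain even though the estimate stays close to the true posterior mean; trading that bias against the per-iteration savings is the kind of decision the framework formalizes.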

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.

Relevance: 90.00%

Abstract:

The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, which results in physically distinct flat surfaces being represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to creating accurate stereonets safely, rapidly, and inexpensively compared to established methods.
The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
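The heading-comparison step behind a LiDAR-compass-style estimator can be sketched with the standard trick for axial data (orientations modulo 180 degrees): double the angles, average on the circle, and halve the result. The thesis's actual algorithm is not reproduced here; the two-axis example and its numbers are hypothetical:

```python
import numpy as np

def axis_heading_offset(map_axes_deg, measured_axes_deg):
    """Heading offset from axis measurements vs. an axis map.

    Axes are orientations modulo 180 degrees, so the angles are doubled
    before circular averaging and halved afterwards (the standard trick
    for axial data).
    """
    diffs = np.deg2rad(2.0 * (np.asarray(measured_axes_deg)
                              - np.asarray(map_axes_deg)))
    mean_angle = np.arctan2(np.sin(diffs).mean(), np.cos(diffs).mean())
    return np.rad2deg(mean_angle) / 2.0

# hypothetical example: the map holds wall axes at 0 and 90 degrees, and
# the sensor observes them at 3.1 and 92.9 degrees in the body frame
offset = axis_heading_offset([0.0, 90.0], [3.1, 92.9])
```

Averaging on the doubled circle is what keeps physically distinct but parallel (or perpendicular) surfaces from pulling the heading estimate in opposite directions.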

Relevance: 90.00%

Abstract:

Clear and distinct perception is the element on which Descartes's metaphysical certainty rests. However, the sceptical arguments raised against the Cartesian method of doubt have made evident the need to find a justification for the criterion of clear and distinct perception itself. Against attempts based on the indubitability of perception, or on the guarantee provided by divine goodness, an alternative pragmatist justification will be defended.