985 results for 1 Sigma error


Relevance:

30.00%

Publisher:

Abstract:

The recognition of objects with smooth bounding surfaces from their contour images is considerably more complicated than that of objects with sharp edges, since in the former case the set of object points that generates the silhouette contours changes from one view to another. The "curvature method", developed by Basri and Ullman [1988], approximates the appearance of such objects from different viewpoints. In this paper we analyze the curvature method. We apply the method to ellipsoidal objects and compute analytically the error obtained for different rotations of the objects. The error depends on the exact shape of the ellipsoid (namely, the relative lengths of its axes), and it increases as the ellipsoid becomes "deep" (elongated in the Z-direction). We show that the errors are usually small and that, in general, a small number of models is required to predict the appearance of an ellipsoid from all possible views. Finally, we show experimentally that the curvature method applies as well to objects with hyperbolic surface patches.
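
The effect the abstract describes is easy to reproduce numerically. The sketch below is not the Basri-Ullman curvature method itself; it measures the error of the naive prediction that the curvature method improves on, namely rotating the frontal rim rigidly, and compares it with the exact silhouette of the rotated ellipsoid (obtained from the Schur complement of the quadric matrix). The axis lengths and rotation angle are illustrative assumptions.

```python
import numpy as np

def true_silhouette_radius(axes, R, phi):
    """Exact orthographic silhouette (viewed along z) of a rotated ellipsoid.

    The ellipsoid x^T A x = 1 with A = diag(1/a^2, 1/b^2, 1/c^2), rotated by R,
    becomes x^T M x = 1 with M = R A R^T.  Eliminating z via the Schur
    complement S = B - b b^T / m_zz (with M = [[B, b], [b^T, m_zz]]) gives the
    silhouette ellipse u^T S u = 1, whose radius along unit direction d is
    1 / sqrt(d^T S d).
    """
    A = np.diag(1.0 / np.asarray(axes, float) ** 2)
    M = R @ A @ R.T
    S = M[:2, :2] - np.outer(M[:2, 2], M[2, :2]) / M[2, 2]
    d = np.array([np.cos(phi), np.sin(phi)])
    return 1.0 / np.sqrt(d @ S @ d)

def rigid_rim_error(axes, angle_deg, n=720):
    """Max relative radial error of rigidly rotating the frontal rim (z = 0)."""
    a, b, c = axes
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])        # rotation about y
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rim = np.stack([a * np.cos(theta), b * np.sin(theta), np.zeros(n)])
    q = (R @ rim)[:2]                     # rotate the old rim rigidly, project along z
    phi = np.arctan2(q[1], q[0])
    r_pred = np.hypot(q[0], q[1])
    r_true = np.array([true_silhouette_radius(axes, R, p) for p in phi])
    return np.max(np.abs(r_pred - r_true) / r_true)

for c in (0.5, 1.0, 2.0, 4.0):            # increasing depth along the Z-direction
    print(f"c = {c}: max relative error at 30 deg = {rigid_rim_error((1, 1, c), 30):.3f}")
```

Running this shows the error of the rigid prediction growing with c, i.e. as the ellipsoid becomes "deep", which is the regime where a curvature-based correction (or additional stored models) is needed.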

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well-defined function learning tasks, in terms of the number of parameters and the number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of its finite size) and partly due to insufficient information about the target function (because of the finite number of samples). We make several observations about generalization error which are valid irrespective of the approximation scheme. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem.
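
The two-part decomposition referred to here is an approximation/estimation split. A schematic form of bounds of this type (this paraphrases the general shape of such results with constants and exact log factors omitted; it is not the paper's precise statement), with n basis functions, N examples, and input dimension d:

```latex
% Schematic generalization bound for an RBF network with n units trained on N samples:
\mathbb{E}\!\left[(f_0 - \hat f_{n,N})^2\right]
  \;\le\;
  \underbrace{O\!\left(\frac{1}{n}\right)}_{\substack{\text{approximation error}\\ \text{(finite network size)}}}
  \;+\;
  \underbrace{O\!\left(\sqrt{\frac{n\,d\,\ln(nN)}{N}}\right)}_{\substack{\text{estimation error}\\ \text{(finite sample size)}}}
```

Balancing the two terms is what drives the architecture choice the abstract mentions: the optimal number of units n should grow with the number of examples N, but only as a fractional power of it.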

Relevance:

30.00%

Publisher:

Abstract:

Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions.

Relevance:

30.00%

Publisher:

Abstract:

N.W. Hardy and M.H. Lee. The effect of the product cost factor on error handling in industrial robots. In Maria Gini, editor, Detecting and Resolving Errors in Manufacturing Systems. Papers from the 1994 AAAI Spring Symposium Series, pages 59-64, Menlo Park, CA, March 1994. The AAAI Press. Technical Report SS-94-04, ISBN 0-929280-60-1.

Relevance:

30.00%

Publisher:

Abstract:

This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
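
The core VITE kinematics described above reduce to a simple integration loop: the difference vector DV = TPC - PPC is gated by the GO signal and integrated into the present-position command until it vanishes. A minimal numerical sketch of just this loop (the step size, GO ramp, and target values are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def vite_reach(tpc, ppc, dt=0.01, t_max=2.0):
    """Integrate the VITE dynamics: d(PPC)/dt = GO(t) * (TPC - PPC).

    TPC: target position command; PPC: present position command.
    The difference vector DV is continuously recomputed and gated by a
    scalar GO signal; when DV reaches zero, the PPC equals the TPC.
    """
    tpc, ppc = np.asarray(tpc, float), np.asarray(ppc, float)
    trajectory = [ppc.copy()]
    for step in range(int(t_max / dt)):
        go = 4.0 * (step * dt)            # slowly ramping GO signal (assumed profile)
        dv = tpc - ppc                    # difference vector
        ppc += dt * go * dv               # PPC integrates the (DV)·(GO) product
        trajectory.append(ppc.copy())
    return np.array(trajectory)

traj = vite_reach(tpc=[0.6, 0.4], ppc=[0.0, 0.0])
print(traj[-1])   # -> approximately [0.6, 0.4]: the PPC has converged to the TPC
```

The same DV quantity doubles as the VAM learning signal: during the postural phase, weights mapping TPC to PPC are adjusted until the DV is zeroed.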

Relevance:

30.00%

Publisher:

Abstract:

The ground state structure of C(4N+2) rings is believed to exhibit a geometric transition from angle alternation (N ≤ 2) to bond alternation (N > 2). All previous density functional theory (DFT) studies on these molecules have failed to reproduce this behavior, predicting either that the transition occurs at too large a ring size or that the transition leads to a higher-symmetry cumulene. Employing the recently proposed perspective of delocalization error within DFT, we rationalize this failure of common density functional approximations (DFAs) and present calculations with the rCAM-B3LYP exchange-correlation functional that show an angle-to-bond-alternation transition between C(10) and C(14). The behavior exemplified here manifests itself more generally as the well-known tendency of DFAs to bias toward delocalized electron distributions as favored by Hückel aromaticity, of which the C(4N+2) rings provide a quintessential example. Additional examples are the relative energies of the C(20) bowl, cage, and ring isomers; we show that the results from functionals with minimal delocalization error are in good agreement with CCSD(T) results, in contrast to other commonly used DFAs. An unbiased DFT treatment of electron delocalization is key to the reliable prediction of relative stability, and hence the structures, of complex molecules where many structure stabilization mechanisms exist.

Relevance:

30.00%

Publisher:

Abstract:

In the presence of a chemical potential, the physics of level crossings leads to singularities at zero temperature, even when the spatial volume is finite. These singularities are smoothed out at a finite temperature but leave behind nontrivial finite size effects which must be understood in order to extract thermodynamic quantities using Monte Carlo methods, particularly close to critical points. We illustrate some of these issues using the classical nonlinear O(2) sigma model with a coupling β and chemical potential μ on a 2+1-dimensional Euclidean lattice. In the conventional formulation this model suffers from a sign problem at nonzero chemical potential and hence cannot be studied with the Wolff cluster algorithm. However, when formulated in terms of the worldline of particles, the sign problem is absent, and the model can be studied efficiently with the "worm algorithm." Using this method we study the finite size effects that arise due to the chemical potential and develop an effective quantum mechanical approach to capture the effects. As a side result we obtain energy levels of up to four particles as a function of the box size and uncover a part of the phase diagram in the (β,μ) plane. © 2010 The American Physical Society.
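
The sign problem and its worldline resolution can be stated compactly. In the conventional formulation the chemical potential enters the O(2) (XY) action as an imaginary twist on temporal links, making the Boltzmann weight complex; a character (Fourier) expansion of each link factor trades the angles for integer worldline currents with manifestly positive weights. Schematically (the conventions and normalizations here are a standard paraphrase, not necessarily the paper's exact ones):

```latex
% Conventional formulation: mu appears as an imaginary twist on temporal links,
% so the Boltzmann weight e^{-S} is complex (the sign problem):
S[\theta] \;=\; -\beta \sum_{x,\nu} \cos\!\big(\theta_{x+\hat\nu} - \theta_x - i\,\mu\,\delta_{\nu,t}\big)

% Worldline (character-expansion) form: integrating out the angles leaves
% integer link currents k_{x,\nu} with positive weights and a conservation constraint:
Z \;=\; \sum_{\{k_{x,\nu}\}} \prod_{x,\nu} I_{k_{x,\nu}}(\beta)\, e^{\mu\,k_{x,t}}
        \;\prod_x \delta_{\,\sum_\nu \left(k_{x,\nu} - k_{x-\hat\nu,\nu}\right),\,0}
```

Here the I_k are modified Bessel functions, so every term in the worldline sum is nonnegative; this is why the worm algorithm can sample the current configurations directly, while the Wolff cluster algorithm cannot be applied to the complex-action form.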

Relevance:

30.00%

Publisher:

Abstract:

Droplet-based digital microfluidics technology has now come of age, and software-controlled biochips for healthcare applications are starting to emerge. However, today's digital microfluidic biochips suffer from the drawback that there is no feedback to the control software from the underlying hardware platform. Due to the lack of precision inherent in biochemical experiments, errors are likely during droplet manipulation; error recovery based on the repetition of experiments leads to wastage of expensive reagents and hard-to-prepare samples. By exploiting recent advances in the integration of optical detectors (sensors) into a digital microfluidics biochip, we present a physical-aware system reconfiguration technique that uses sensor data at intermediate checkpoints to dynamically reconfigure the biochip. A cyberphysical resynthesis technique is used to recompute electrode-actuation sequences, thereby deriving new schedules, module placement, and droplet routing pathways, with minimum impact on the time-to-response. © 2012 IEEE.
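
At the system level, the technique is a sense-and-resynthesize control loop: run the bioassay to the next checkpoint, read the on-chip optical sensor, and recompute the remaining electrode-actuation schedule only when the droplet fails the check. A minimal control-flow sketch under assumed interfaces (`run_until_checkpoint`, `read_sensor`, and `resynthesize` are hypothetical placeholders, not the paper's API; here they are stubbed so the sketch runs):

```python
from dataclasses import dataclass
import random

@dataclass
class Checkpoint:
    op_index: int       # position in the actuation schedule
    expected: float     # expected droplet property (e.g. volume) at this point

# --- hypothetical placeholders standing in for the chip interface ---
def run_until_checkpoint(schedule, cp):
    pass                                        # actuate electrodes up to cp

def read_sensor(cp):
    return cp.expected * random.gauss(1.0, 0.08)  # noisy optical detector (toy model)

def resynthesize(schedule, cp, measured):
    return schedule     # stand-in: real tool re-solves scheduling, placement, routing

def execute_bioassay(schedule, checkpoints, tolerance=0.10):
    """Cyberphysical loop: sense at checkpoints, re-synthesize only on error,
    avoiding full repetition of the experiment (and waste of reagents)."""
    for cp in checkpoints:
        run_until_checkpoint(schedule, cp)
        measured = read_sensor(cp)
        if abs(measured - cp.expected) / cp.expected > tolerance:
            schedule = resynthesize(schedule, cp, measured)
    return schedule

execute_bioassay(schedule=["mix", "split", "dilute"],
                 checkpoints=[Checkpoint(1, 0.50), Checkpoint(2, 0.25)])
```

The design point is that recovery is local: only the operations downstream of the failed checkpoint are recomputed, which is what keeps the impact on time-to-response minimal.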

Relevance:

30.00%

Publisher:

Abstract:

To think that there are solutions that close the gap between school and university is utopian. Nevertheless, work on the gap problem is worthwhile, in order to understand and bring closer the ideals and expectations held by the different educational institutions. At the Universidad de los Andes it became evident that such work could be oriented in different directions, emphasizing the institution, the teachers, or the students. Possible topics included curriculum design, teachers' and students' beliefs and attitudes, teaching methods, conceptions of teaching and learning, learning difficulties and errors, among others. After several false starts in choosing the research topic, we finally decided to explore learning, and to focus the study on first-year students, since they are the ones who actually live through the transition from school to university. We further restricted ourselves to precalculus, motivated in part by the higher failure rate in that course. Concretely, the general objective proposed was to describe a profile of the mathematical learning of precalculus students at the moment they enter the university. From this objective we derived the project's main problem: defining the conceptual elements with which to articulate the description of that profile. The presentation is divided into four parts: the first presents a conceptual framework with the elements used to describe the profile, the second and third deal respectively with the research methodology and the results obtained, and the last with the conclusions of the work.

Relevance:

30.00%

Publisher:

Abstract:

The research has two phases: 1) First-year students at the Universidad Panamericana, Guadalajara, Mexico, are asked to simplify the algebraic expression; the incorrect answers are analyzed together with their possible origin. 2) A study is carried out with 7 lower- and upper-secondary teachers, who are shown the erroneous simplification (at left) and asked to identify the origin of the error and explain how they would help the student. Students make errors of very diverse origins, and the teachers surveyed do not always analyze in depth the origin of the error committed by the student.

Relevance:

30.00%

Publisher:

Abstract:

During the 1970s and 1980s, the late Dr Norman Holme undertook extensive towed sledge surveys in the English Channel and some in the Irish Sea. Only a minority of the resulting images were analysed and reported before his death in 1989, but logbooks, video and film material have been archived in the National Marine Biological Library (NMBL) in Plymouth. A scoping study was therefore commissioned by the Joint Nature Conservation Committee, as part of the Mapping European Seabed Habitats (MESH) project, to identify the value of the archived material and the procedure and cost of undertaking further work. The results of the scoping study are:

1. NMBL archives hold 106 videotapes (reel-to-reel Sony HD format) and 59 video cassettes (including 15 from the Irish Sea) in VHS format, together with 90 rolls of 35 mm colour transparency film (various lengths, up to about 240 frames per film). These are stored in the Archive Room, either in a storage cabinet or in the original film canisters.

2. Reel-to-reel material is extensive and had already been selectively copied to VHS cassettes. The cost of transferring it to an accepted 'long-life' medium (Betamax) would be approximately £15,000. It was not possible to view the tapes as a suitable machine was not located. The value of the tapes is uncertain, but they are likely to become beyond salvation within one to two years.

3. Video cassette material is in good condition and is expected to remain so for at least several more years. Images viewed were generally of poor quality, and the speed of tow often makes pictures blurred. No immediate action is required.

4. Colour transparency films are in good condition and the images are very clear. They provide the best source of information for mapping seabed biotopes. They should be scanned to digital format, but inexpensive fast copying is problematic because there are no between-frame breaks between images and scanning machines centre each image based on between-frame breaks. The minimum cost to scan all of the images commercially is approximately £6,000, and could be as much as £40,000 on some quotations. There is a further cost in coding and databasing each image and, all in all, it would seem most economic to purchase a 'continuous film' scanner and undertake the work in-house.

5. Positional information in ships' logs has been matched to films and to video tapes. Decca Chain co-ordinates recorded in the logbooks have been converted to latitude and longitude (degrees, minutes and seconds), and a further routine was developed to convert these to degrees and decimal degrees as required for GIS mapping (a conversion sketch follows this list). However, it is unclear whether corrections to Decca positions were applied at the time the position was noted. Tow tracks have been mapped onto an electronic copy of a Hydrographic Office chart.

6. The positions of the start and end of each tow were entered into a spreadsheet so that they can be displayed in GIS or on a Hydrographic Office chart backdrop. The cost of the Hydrographic Office chart backdrop at a scale of 1:75,000 for the whole area was £458 incl. VAT.

7. Viewing all of the video cassettes to note habitats and biological communities, even by an experienced marine biologist, would take at least in the order of 200 hours and is not recommended.

8. Once colour transparencies are scanned and indexed, viewing them to identify seabed habitats and biological communities would probably take about 100 hours for an experienced marine biologist and is recommended.

9. It is expected that identifying biotopes along approximately 1 km lengths of each tow would be feasible, although uncertainties about Decca co-ordinate corrections and the exact positions of images most likely give a ±250 m position error. More work to locate each image accurately and to resolve the Decca correction question would improve the accuracy of image location.

10. Using codings (produced by Holme to identify different seabed types), and some viewing of video and transparency material, 10 biotopes have been identified, although more would be added as a result of full analysis.

11. Using the data available from the Holme archive, it is possible to populate various fields within the Marine Recorder database. The overall 'survey' will be 'English Channel towed video sled survey'. The 'events' become the 104 tows. Each tow could be described as four samples, i.e. the start and end of the tow and two areas in the middle to give examples along the length of the tow. These samples would have their own latitude/longitude co-ordinates. The four samples would link to a GIS map.

12. Stills and video clips, together with text information, could be incorporated into a multimedia presentation to demonstrate the range of level seabed types found along part of the northern English Channel. More recent images, taken during SCUBA diving of reef habitats in the same area as the towed sledge surveys, could be added to the Holme images.
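
The second conversion step in point 5 (degrees, minutes and seconds to the decimal degrees required by GIS) is simple arithmetic; a minimal sketch is below. The sign convention for south/west hemispheres is an assumption of this sketch, and it does not address the separate, harder question of Decca Chain position corrections.

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to the signed decimal degrees used in GIS.

    hemisphere: one of 'N', 'S', 'E', 'W'; south and west are returned negative
    (assumed convention).
    """
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if hemisphere in ("S", "W") else decimal

# Example: an arbitrary position in the English Channel, west of Greenwich.
print(dms_to_decimal(50, 12, 30, "N"))   # 50.2083...
print(dms_to_decimal(3, 45, 0, "W"))     # -3.75
```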

Relevance:

30.00%

Publisher:

Abstract:

Raman spectroscopy has been used for the first time to predict the FA composition of unextracted adipose tissue of pork, beef, lamb, and chicken. It was found that the bulk unsaturation parameters could be predicted successfully [R² = 0.97, root mean square error of prediction (RMSEP) = 4.6% of 4σ], with cis unsaturation, which accounted for the majority of the unsaturation, giving similar correlations. The combined abundance of all measured PUFA (≥ 2 double bonds per chain) was also well predicted, with R² = 0.97 and RMSEP = 4.0% of 4σ. Trans unsaturation was not as well modeled (R² = 0.52, RMSEP = 18% of 4σ); this reduced prediction ability can be attributed to the low levels of trans FA found in adipose tissue (0.035 times the cis unsaturation level). For the individual FA, the average partial least squares (PLS) regression coefficient of the 18 most abundant FA (relative abundances ranging from 0.1 to 38.6% of the total FA content) was R² = 0.73; the average RMSEP = 11.9% of 4σ. Regression coefficients and prediction errors for the five most abundant FA were all better than the average value (in some cases as low as RMSEP = 4.7% of 4σ). Cross-correlation between the abundances of the minor FA and the more abundant acids could be determined by principal component analysis methods, and the resulting groups of correlated compounds were also well predicted using PLS. The accuracy of the prediction of individual FA was at least as good as that of other spectroscopic methods, and the extremely straightforward sampling method meant that very rapid analysis of samples at ambient temperature was easily achieved. This work shows that Raman profiling of hundreds of samples per day is easily achievable with an automated sampling system.
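
The "% of 4σ" convention used above presumably normalizes the RMSEP by four standard deviations of the reference values, i.e. roughly the full spread of the calibration data, so that errors are comparable across analytes of very different abundance. A minimal sketch of the metric (the arrays are illustrative placeholders, not data from the study):

```python
import numpy as np

def rmsep_pct_of_4sigma(y_ref, y_pred):
    """RMSEP expressed as a percentage of 4 standard deviations of the
    reference values (~ the full range of the data)."""
    y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
    rmsep = np.sqrt(np.mean((y_ref - y_pred) ** 2))
    return 100.0 * rmsep / (4.0 * np.std(y_ref))

y_ref = np.array([10.2, 14.8, 20.1, 25.5, 31.0])   # e.g. reference FA abundances (made up)
y_pred = np.array([10.9, 14.1, 20.8, 24.9, 31.6])  # corresponding PLS predictions (made up)
print(f"RMSEP = {rmsep_pct_of_4sigma(y_ref, y_pred):.1f}% of 4 sigma")
```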

Relevance:

30.00%

Publisher:

Abstract:

The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λ_ex = 785 nm, 3 µm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 µm in diameter. Spectra collected with a microscope from eight points on a 200 µm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, µ_r, was 1.19 with an unacceptably high standard deviation, σ_r, of 1.20. In contrast, with a conventional macro-Raman system (150 µm spot diameter), combined eight-grid-point data gave µ_r = 1.47 with σ_r = 0.16. A simple statistical model which could be used to predict σ_r under the various conditions used was developed. The model showed that the decrease in σ_r on moving to a 150 µm spot was too large to be due entirely to the increased spot diameter, but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
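
The intuition behind the σ_r model can be illustrated with a toy simulation: treat each measured spectrum as sampling a finite number of ~40 µm crystals, so the measured component ratio fluctuates less as the number of crystals per sampled volume and the number of grid points grow. This is a schematic binomial sampling model under assumed particle counts, not the paper's actual statistical model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_sd(crystals_per_spot, n_points, frac_mdma=0.5, trials=4000):
    """Spread (std dev across simulated tablets) of the MDMA:caffeine ratio
    measured from a combined spectrum of n_points grid points, when each
    point samples a finite number of crystals (toy binomial model)."""
    ratios = []
    for _ in range(trials):
        # number of MDMA crystals sampled at each of the n_points positions
        mdma = rng.binomial(crystals_per_spot, frac_mdma, size=n_points).sum()
        caffeine = crystals_per_spot * n_points - mdma
        ratios.append(mdma / max(caffeine, 1))   # ratio from the combined spectrum
    return np.std(ratios)

# Few crystals per point (small micro-Raman focal volume) versus many
# (larger macro spot plus depth of focus), and the effect of more grid points:
print("micro, 8 points :", ratio_sd(crystals_per_spot=3, n_points=8))
print("macro, 8 points :", ratio_sd(crystals_per_spot=50, n_points=8))
print("macro, 64 points:", ratio_sd(crystals_per_spot=50, n_points=64))
```

The simulated spread drops sharply from the first case to the second, by far more than the spot-area increase alone would suggest, and drops again with 64 points, mirroring the qualitative conclusions of the study.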