991 results for Error-resilient Applications


Relevance:

20.00%

Publisher:

Abstract:

A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
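The moving least-squares approximation at the heart of the method can be sketched in one dimension: fit a weighted local polynomial to nearby nodal values and evaluate it at the query point. This is a minimal illustration, not the paper's implementation; the Gaussian weight function and support radius are assumptions, and the exact-reproduction property for data lying in the polynomial basis is what underlies the circle/sphere result quoted above.

```python
import numpy as np

def mls_fit(x_eval, nodes, values, radius, basis="quadratic"):
    """Moving least-squares approximation at x_eval (1-D sketch).

    Fits a local polynomial in d = nodes - x_eval, weighted by a Gaussian
    centred on x_eval (weight choice is an assumption), and returns the
    fitted polynomial's value at d = 0, i.e. at x_eval itself.
    """
    d = nodes - x_eval
    w = np.exp(-(d / radius) ** 2)  # Gaussian weight, hypothetical choice
    if basis == "quadratic":
        P = np.vstack([np.ones_like(d), d, d ** 2]).T
    else:  # linear basis
        P = np.vstack([np.ones_like(d), d]).T
    # Weighted normal equations: (P^T W P) a = P^T W v
    A = P.T @ (w[:, None] * P)
    b = P.T @ (w * values)
    coeffs = np.linalg.solve(A, b)
    return coeffs[0]  # local polynomial evaluated at d = 0

# Data sampled from f(x) = x^2: a quadratic basis reproduces it exactly,
# mirroring the exact reproduction of circles/spheres reported above.
nodes = np.linspace(-1.0, 1.0, 9)
vals = nodes ** 2
approx = mls_fit(0.3, nodes, vals, radius=0.5)
```

Because x^2 lies in the span of the quadratic basis, the weighted least-squares residual is zero and `approx` equals 0.09 to machine precision; with a linear basis the reproduction would only be approximate.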

Relevance:

20.00%

Publisher:

Abstract:

Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third order statistics of these points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
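The ultrametric structure referred to above is a condition on triples of distances: in an ultrametric space every triangle is isosceles with its two largest sides equal. A minimal sketch of such a third-order check follows (the tolerance and the toy distance matrices are illustrative assumptions, not the networks analysed in the paper):

```python
import itertools

def ultrametric_fraction(dist, tol=1e-9):
    """Fraction of point triples whose pairwise distances satisfy the
    ultrametric inequality d(a,c) <= max(d(a,b), d(b,c)); equivalently,
    the two largest sides of every triangle are (almost) equal."""
    n = len(dist)
    ok = total = 0
    for i, j, k in itertools.combinations(range(n), 3):
        sides = sorted([dist[i][j], dist[j][k], dist[i][k]])
        if sides[2] - sides[1] <= tol:  # isosceles with large equal sides
            ok += 1
        total += 1
    return ok / total

# Distances between the 4 leaves of a balanced binary tree are exactly
# ultrametric: within-pair distance 1, across-pair distance 2.
tree_dist = [[0, 1, 2, 2],
             [1, 0, 2, 2],
             [2, 2, 0, 1],
             [2, 2, 1, 0]]
frac = ultrametric_fraction(tree_dist)
```

For `tree_dist` every triple passes, so `frac` is 1.0; three collinear points with distances 1, 2, 3 would fail the check, illustrating that generic Euclidean configurations are not ultrametric.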

Relevance:

20.00%

Publisher:

Abstract:

The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency estimation and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
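The error-detection advantage of family data rests on Mendelian consistency checks. A minimal sketch for a biallelic SNP in a parent-child trio can illustrate the idea (genotypes coded as minor-allele counts 0/1/2; this is a generic check, not one of the analysis methods used in the study):

```python
def mendelian_consistent(child, mother, father):
    """True if the child's genotype at a biallelic SNP (coded as the
    count of one allele: 0, 1, or 2) can arise from the parental
    genotypes under Mendelian transmission."""
    def gametes(g):
        # alleles a parent with genotype g can transmit
        return {0: {0}, 1: {0, 1}, 2: {1}}[g]
    return any(m + f == child
               for m in gametes(mother)
               for f in gametes(father))

# A genotyping error turning a heterozygote into the wrong homozygote
# is flagged by the trio check:
assert mendelian_consistent(1, 0, 2)      # het child of 0 x 2 parents: fine
assert not mendelian_consistent(2, 0, 0)  # impossible without an error
```

Note that when both parents are heterozygous any child genotype is Mendelian-consistent, so many errors remain invisible to a trio; this is exactly the limited error-detection power of trios that the abstract contrasts with large families.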

Relevance:

20.00%

Publisher:

Abstract:

The isotope composition of Pb is difficult to determine accurately due to the lack of a stable normalisation ratio. Double and triple-spike addition techniques provide one solution and presently yield the most accurate measurements. A number of recent studies have claimed that improved accuracy and precision could also be achieved by multi-collector ICP-MS (MC-ICP-MS) Pb-isotope analysis using the addition of Tl of known isotope composition to Pb samples. In this paper, we verify whether the known isotope composition of Tl can be used for correction of mass discrimination of Pb with an extensive dataset for the NIST standard SRM 981, comparison of MC-ICP-MS with TIMS data, and comparison with three isochrons from different geological environments. When all our NIST SRM 981 data are normalised with one constant Tl-205/Tl-203 of 2.38869, the following averages and reproducibilities were obtained: Pb-207/Pb-206 = 0.91461+/-18; Pb-208/Pb-206 = 2.1674+/-7; and Pb-206/Pb-204 = 16.941+/-6. These two sigma standard deviations of the mean correspond to 149, 330, and 374 ppm, respectively. Accuracies relative to triple-spike values are 149, 157, and 52 ppm, respectively, and thus well within uncertainties. The largest component of the uncertainties stems from the Pb data alone and is not caused by differential mass discrimination behaviour of Pb and Tl. In routine operation, variation of sample introduction memory and production of isobaric molecular interferences in the spectrometer's collision cell currently appear to be the ultimate limitation to better reproducibility. Comparative study of five different datasets from actual samples (bullets, international rock standards, carbonates, metamorphic minerals, and sulphide minerals) demonstrates that in most cases geological scatter of the sample exceeds the achieved analytical reproducibility.
We observe good agreement between TIMS and MC-ICP-MS data for international rock standards but find that such comparison does not constitute the ultimate test for the validity of the MC-ICP-MS technique. Two attempted isochrons resulted in geological scatter (in one case small) in excess of analytical reproducibility. However, in one case (leached Great Dyke sulphides) we obtained a true isochron (MSWD = 0.63) age of 2578.3 +/- 0.9 Ma, which is identical to and more precise than a recently published U-Pb zircon age (2579 +/- 3 Ma) for a Great Dyke websterite [Earth Planet. Sci. Lett. 180 (2000) 1-12]. We regard reproduction of this age by means of an isochron as a robust test of accuracy over a wide dynamic range. We show that reliable and accurate Pb-isotope data can be obtained by careful operation of second-generation magnetic sector MC-ICP-MS instruments. (C) 2002 Elsevier Science B.V. All rights reserved.
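The Tl-normalisation described above is commonly implemented with the exponential mass-fractionation law: the bias factor inferred from the known Tl-205/Tl-203 is applied to the measured Pb ratios via the ratio of the isotopic masses. A minimal sketch follows (the measured ratios in the example are invented for illustration; the atomic masses are standard values, and this is a generic textbook correction, not the paper's full data-reduction procedure):

```python
import math

# Atomic masses in u (standard values, rounded)
M = {"Tl203": 202.972345, "Tl205": 204.974428,
     "Pb204": 203.973044, "Pb206": 205.974466,
     "Pb207": 206.975897, "Pb208": 207.976653}

TL_TRUE = 2.38869  # Tl-205/Tl-203 used for normalisation in the abstract

def correct_pb(ratio_meas, iso_num, iso_den, tl_meas):
    """Exponential-law mass-bias correction of a measured Pb isotope
    ratio, using the known Tl-205/Tl-203 of the added Tl to infer the
    per-mass-unit fractionation factor beta."""
    beta = math.log(TL_TRUE / tl_meas) / math.log(M["Tl205"] / M["Tl203"])
    return ratio_meas * (M[iso_num] / M[iso_den]) ** beta

# Hypothetical measurement: raw Pb-208/Pb-206 of 2.1745 with the spiked
# Tl measured at 2.3960 (both values invented for illustration).
corrected = correct_pb(2.1745, "Pb208", "Pb206", tl_meas=2.3960)
```

The key assumption, which the paper tests, is that Pb and Tl fractionate with the same beta; the abstract's finding that the residual uncertainty stems from the Pb data rather than from differential Pb/Tl behaviour supports that assumption in routine use.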

Relevance:

20.00%

Publisher:

Abstract:

Motion study is an engineering technology that analyzes human body motions. During the past decade (1990-1999) a series of studies investigated the role of motion study in developmental disabilities. This article reviews the literature on the applications of motion study in the field. A historical and conceptual review of motion study leading to the current status of studies is presented followed by a review of the research literature. Two main eras of research focus were identified. The first era (1990-1995) of studies established the superior effectiveness and efficiency of tasks designed with motion study or motion study-related principles over traditional site-based task designs. The second era (1995-1999) of studies examined the interaction between motion study-based task designs and other variables such as choice, preference, and functionally equivalent and competing task designs and communicative alternatives. Our review found that applying motion study principles as an antecedent guide and practice to eliminating or reducing ineffective motions and simplifying effective motions resulted in positive task outcomes with most of the participants.

Relevance:

20.00%

Publisher:

Abstract:

A range of lasers is now available for use in dentistry. This paper summarizes key current and emerging applications for lasers in clinical practice. A major diagnostic application of low power lasers is the detection of caries, using fluorescence elicited from hydroxyapatite or from bacterial by-products. Laser fluorescence is an effective method for detecting and quantifying incipient occlusal and cervical carious lesions, and with further refinement could be used in the same manner for proximal lesions. Photoactivated dye techniques have been developed which use low power lasers to elicit a photochemical reaction. Photoactivated dye techniques can be used to disinfect root canals, periodontal pockets, cavity preparations and sites of peri-implantitis. Using similar principles, more powerful lasers can be used for photodynamic therapy in the treatment of malignancies of the oral mucosa. Laser-driven photochemical reactions can also be used for tooth whitening. In combination with fluoride, laser irradiation can improve the resistance of tooth structure to demineralization, and this application is of particular benefit for susceptible sites in high caries risk patients. Laser technology for caries removal, cavity preparation and soft tissue surgery is at a high state of refinement, having had several decades of development up to the present time. Used in conjunction with or as a replacement for traditional methods, specific laser technologies are expected to become an essential component of contemporary dental practice over the next decade.

Relevance:

20.00%

Publisher:

Abstract:

In spite of their wide application in comminution circuits, hydrocyclones have at least one significant disadvantage in that their operation inherently tends to return the fine denser liberated minerals to the grinding mill. This results in unnecessary overgrinding which adds to the milling cost and can adversely affect the efficiency of downstream processes. In an attempt to solve this problem, a three-product cyclone has been developed at the Julius Kruttschnitt Mineral Research Centre (JKMRC) to generate a second overflow in which the fine dense liberated minerals can be selectively concentrated for further treatment. In this paper, the design and operation of the three-product cyclone are described. The influence of the length of the second vortex finder on the performance of a 150-mm unit treating a mixture of magnetite and silica is investigated. Conventional cyclone tests were also conducted under similar conditions. Using the operational performance data of the three-product and conventional cyclones, it is shown that by optimising the length of the second vortex finder, the amount of fine dense mineral particles that reports to the three-product cyclone underflow can be reduced. In addition, the three-product cyclone can be used to generate a middlings stream that may be more suitable for flash flotation than the conventional cyclone underflow, or alternatively, could be classified with a microscreen to separate the valuables from the gangue. At the same time, a fines stream having similar properties to those of the conventional overflow can be obtained. Hence, if the middlings stream was used as feed for flash flotation or microscreening, the fines stream could be used in lieu of the conventional overflow without compromising the feed requirements for the conventional flotation circuit. Some of the other potential applications of the new cyclone are described. (C) 2003 Elsevier Science B.V. All rights reserved.
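Performance comparisons of the kind reported above are usually reconciled stream-by-stream with the classical two-product mass balance, which recovers the mass split and component recovery from three assays. A minimal sketch (the assay values are invented for illustration and are not data from the paper):

```python
def two_product_balance(f, c, t):
    """Classical two-product mass balance.

    Given the assay of a component (e.g. % magnetite) in the feed (f),
    the underflow/concentrate (c), and the overflow/tailing (t), return
    the mass fraction of feed reporting to the concentrate stream and
    the recovery of the assayed component to that stream.
    """
    mass_to_conc = (f - t) / (c - t)
    recovery = mass_to_conc * c / f
    return mass_to_conc, recovery

# Hypothetical assays: 30% in feed, 60% in underflow, 10% in overflow.
split, rec = two_product_balance(30.0, 60.0, 10.0)
```

With these invented assays, 40% of the feed mass reports to the underflow carrying 80% of the dense mineral; reducing that recovery of fine dense particles to the underflow is precisely the goal of the second vortex finder optimisation described above.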

Relevance:

20.00%

Publisher:

Abstract:

Background and Purpose. This study evaluated an electromyographic technique for the measurement of muscle activity of the deep cervical flexor (DCF) muscles. Electromyographic signals were detected from the DCF, sternocleidomastoid (SCM), and anterior scalene (AS) muscles during performance of the craniocervical flexion (CCF) test, which involves performing 5 stages of increasing craniocervical flexion range of motion (the anatomical action of the DCF muscles). Subjects. Ten volunteers without known pathology or impairment participated in this study. Methods. Root-mean-square (RMS) values were calculated for the DCF, SCM, and AS muscles during performance of the CCF test. Myoelectric signals were recorded from the DCF muscles using bipolar electrodes placed over the posterior oropharyngeal wall. Reliability estimates of normalized RMS values were obtained by evaluating intraclass correlation coefficients and the normalized standard error of the mean (SEM). Results. A linear relationship was evident between the amplitude of DCF muscle activity and the incremental stages of the CCF test (F=239.04, df=36, P<.0001). Normalized SEMs in the range 6.7% to 10.3% were obtained for the normalized RMS values for the DCF muscles, providing evidence of reliability for these variables. Discussion and Conclusion. This approach for obtaining a direct measure of the DCF muscles, which differs from those previously used, may be useful for the examination of these muscles in future electromyographic applications.
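The RMS and normalisation steps named in the Methods can be sketched as follows (the synthetic epoch and the unit reference RMS are illustrative assumptions, not the study's recordings; normalising an epoch's RMS to a reference contraction is a common EMG convention):

```python
import numpy as np

def normalized_rms(signal, reference_rms):
    """Root-mean-square amplitude of an EMG epoch, expressed as a
    percentage of a reference contraction's RMS amplitude."""
    rms = np.sqrt(np.mean(np.square(signal)))
    return 100.0 * rms / reference_rms

# Synthetic example: a constant-amplitude epoch has RMS equal to |amplitude|,
# so 0.5 against a reference RMS of 1.0 normalises to 50%.
epoch = np.full(1000, 0.5)
pct = normalized_rms(epoch, reference_rms=1.0)
```

Computing such normalized RMS values per test stage is what allows reliability statistics (ICCs and normalized SEMs) to be compared across subjects, as in the Results above.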

Relevance:

20.00%

Publisher:

Abstract:

One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics, the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-system simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Not much consideration has been given to the problem of persistent storage for these simulations. Current procedures for querying data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems the research in this paper presents a generic process for data-modeling tools (L-DBM) between L-systems and database systems. This paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. This paper further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation. A method to allow a correspondence between biologists' terms and compiler-generated terms in a biologist computing environment is supplied. Given any specific set of L-system productions and their declarations, the L-DBM can generate the corresponding schema, covering both simple correspondence terminology and complex recursive-structure data attributes and relationships.
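The string-rewriting core of an L-system, whose derivation steps a tool like the L-DBM would persist, can be sketched briefly (the comment about storing each generation as table rows is a hypothetical illustration, not the paper's actual schema):

```python
def lsystem(axiom, rules, iterations):
    """Iteratively apply L-system production rules to an axiom string.

    Each intermediate string is one derivation step; a persistence layer
    could, for example, store each generation as rows keyed by
    (generation, position, symbol) -- a hypothetical mapping.
    """
    s = axiom
    for _ in range(iterations):
        # all symbols are rewritten in parallel; symbols without a
        # production rule are copied unchanged
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae model: A -> AB, B -> A
algae = lsystem("A", {"A": "AB", "B": "A"}, 4)  # -> "ABAABABA"
```

The string lengths grow as Fibonacci numbers (1, 2, 3, 5, 8, ...), which illustrates why pre-computing recursive structure into derived attributes, as proposed above, matters for querying large simulations.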

Relevance:

20.00%

Publisher:

Abstract:

Some results are obtained for non-compact cases in topological vector spaces for the existence problem of solutions for some set-valued variational inequalities with quasi-monotone and lower hemi-continuous operators, and with quasi-semi-monotone and upper hemi-continuous operators. Some applications are given in non-reflexive Banach spaces for these existence problems of solutions and for perturbation problems for these set-valued variational inequalities with quasi-monotone and quasi-semi-monotone operators.

Relevance:

20.00%

Publisher:

Abstract:

Let X and Y be Hausdorff topological vector spaces, K a nonempty, closed, and convex subset of X, and C : K --> 2^Y a point-to-set mapping such that for any x in K, C(x) is a pointed, closed, and convex cone in Y with int C(x) nonempty. Given a mapping g : K --> K and a vector-valued bifunction f : K x K --> Y, we consider the implicit vector equilibrium problem (IVEP) of finding x* in K such that f(g(x*), y) is not an element of -int C(x*) for all y in K. This problem generalizes the (scalar) implicit equilibrium problem and the implicit variational inequality problem. We propose the dual of the implicit vector equilibrium problem (DIVEP) and establish the equivalence between (IVEP) and (DIVEP) under certain assumptions. Also, we give characterizations of the set of solutions for (IVEP) in the cases of nonmonotonicity, weak C-pseudomonotonicity, C-pseudomonotonicity, and strict C-pseudomonotonicity, respectively. Under these assumptions, we conclude that the sets of solutions are nonempty, closed, and convex. Finally, we give some applications of (IVEP) to vector variational inequality problems and vector optimization problems. (C) 2003 Elsevier Science Ltd. All rights reserved.
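A compact restatement in standard notation (following the definitions above) may help, together with the scalar special case that the problem generalizes:

```latex
% Implicit vector equilibrium problem (IVEP), as defined above
\[
\text{(IVEP)}\qquad \text{find } x^{*}\in K \text{ such that }
f\bigl(g(x^{*}),\,y\bigr)\notin -\operatorname{int}C(x^{*})
\quad\text{for all } y\in K.
\]
% Scalar special case: taking Y = \mathbb{R}, C(x)\equiv[0,\infty),
% and g = \mathrm{id}, the condition f(x^{*},y)\notin(-\infty,0)
% recovers the classical scalar equilibrium problem:
\[
\text{find } x^{*}\in K \text{ such that }
f(x^{*},y)\ge 0 \quad\text{for all } y\in K.
\]
```

The reduction works because when C(x) is the nonnegative half-line, its negative interior is exactly the open negative half-line, so exclusion from it is the familiar inequality f(x*, y) >= 0.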