315 results for DNA Error Correction


Relevance: 20.00%

Abstract:

Regardless of the benefits of technology, safety planners still face difficulties in explaining the errors that arise from the use of different technologies and in evaluating how those errors affect the performance of safety decision making. This paper presents a preliminary error-impact analysis testbed to model object identification and tracking errors caused by image-based devices and algorithms, and to analyze the impact of these errors on spatial safety assessment of earthmoving and surface mining activities. More specifically, this research designed a testbed to model workspaces for earthmoving operations, to simulate safety-related violations, and to apply different object identification and tracking errors to the data collected and processed for spatial safety assessment. Three cases were analyzed, based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of the errors were investigated for safety planning purposes.
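The kind of error-impact analysis described above can be illustrated with a small Monte Carlo sketch (purely illustrative, not the authors' testbed): Gaussian noise is added to the tracked equipment positions, and the resulting false alarms and missed detections of proximity violations are counted. All names, coordinates, and parameters here are hypothetical.

```python
import random

def count_detection_errors(true_positions, worker, danger_radius,
                           noise_sd, trials=1000, seed=0):
    """Monte Carlo sketch: perturb each tracked equipment position with
    Gaussian noise and count false alarms / missed detections of
    worker-equipment proximity violations. Illustrative only."""
    rng = random.Random(seed)
    false_alarms = missed = 0
    for _ in range(trials):
        for (x, y) in true_positions:
            true_violation = ((x - worker[0])**2 + (y - worker[1])**2) ** 0.5 < danger_radius
            # simulated identification/tracking error on the sensed position
            mx = x + rng.gauss(0, noise_sd)
            my = y + rng.gauss(0, noise_sd)
            measured_violation = ((mx - worker[0])**2 + (my - worker[1])**2) ** 0.5 < danger_radius
            if measured_violation and not true_violation:
                false_alarms += 1
            elif true_violation and not measured_violation:
                missed += 1
    return false_alarms, missed
```

With zero noise the sketch reports no detection errors, and error counts grow with the noise level — the qualitative behaviour such a testbed is designed to quantify.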

Relevance: 20.00%

Abstract:

The interaction of 10-hydroxycamptothecine (HCPT) with DNA under pseudo-physiological conditions (Tris-HCl buffer of pH 7.4), using ethidium bromide (EB) dye as a probe, was investigated with the use of spectrofluorimetry, UV-vis spectrometry and viscosity measurement. The binding constant and binding number for HCPT with DNA were evaluated as (7.1 ± 0.5) × 10⁴ M⁻¹ and 1.1, respectively, by multivariate curve resolution-alternating least squares (MCR-ALS). Moreover, parallel factor analysis (PARAFAC) was applied to resolve the three-way fluorescence data obtained from the interaction system, and the concentration information for the three components of the system at equilibrium was simultaneously obtained. It was found that there was a cooperative interaction between the HCPT-DNA complex and EB, which produced a ternary complex of HCPT-DNA-EB.
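As a loose illustration of what "evaluating a binding constant" involves, the sketch below fits a simple 1:1 binding isotherm f = K·L / (1 + K·L) to titration data by grid-search least squares. This is not the MCR-ALS or PARAFAC procedure used in the study — those resolve full multi-way spectra — and the data and grid are synthetic.

```python
def fit_binding_constant(ligand_conc, fraction_bound, k_grid):
    """Grid-search least-squares fit of a 1:1 binding isotherm
    f = K*L / (1 + K*L); returns the K in k_grid with the smallest
    sum of squared residuals. Illustrative sketch only."""
    def sse(K):
        return sum((K * L / (1 + K * L) - f) ** 2
                   for L, f in zip(ligand_conc, fraction_bound))
    return min(k_grid, key=sse)
```

Applied to noise-free synthetic data generated with K = 7.1 × 10⁴ M⁻¹, the grid search recovers that value exactly when it is in the grid.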

Relevance: 20.00%

Abstract:

Background: Fusionless scoliosis surgery is an early-stage treatment for idiopathic scoliosis which claims potential advantages over current fusion-based surgical procedures. Anterior vertebral stapling using a shape memory alloy staple is one such approach. Despite increasing interest in this technique, little is known about the effects on the spine following insertion, or the mechanism of action of the staple. The purpose of this study was to investigate the biomechanical consequences of staple insertion in the anterior thoracic spine, using in vitro experiments on an immature bovine model. Methods: Individual calf spine thoracic motion segments were tested in flexion, extension, lateral bending and axial rotation. Changes in motion segment rotational stiffness following staple insertion were measured on a series of 14 specimens. Strain gauges were attached to three of the staples in the series to measure forces transmitted through the staple during loading. A micro-CT scan of a single specimen was performed after loading to qualitatively examine damage to the vertebral bone caused by the staple. Findings: Small but statistically significant decreases in bending stiffness occurred in flexion, extension, lateral bending away from the staple, and axial rotation away from the staple. Each strain-gauged staple showed a baseline compressive loading following insertion which was seen to gradually decrease during testing. Post-test micro-CT showed substantial bone and growth plate damage near the staple. Interpretation: Based on our findings it is possible that growth modulation following staple insertion is due to tissue damage rather than sustained mechanical compression of the motion segment.
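Rotational stiffness in tests like these is typically summarised as the slope of the moment-rotation curve. A minimal least-squares sketch of that computation (illustrative; the authors' actual data-reduction protocol is not described in the abstract):

```python
def rotational_stiffness(rotation_deg, moment_nm):
    """Least-squares slope of a moment-rotation curve (Nm per degree),
    a common summary of motion-segment bending stiffness.
    Illustrative sketch, not the study's protocol."""
    n = len(rotation_deg)
    mx = sum(rotation_deg) / n
    my = sum(moment_nm) / n
    num = sum((x - mx) * (y - my) for x, y in zip(rotation_deg, moment_nm))
    den = sum((x - mx) ** 2 for x in rotation_deg)
    return num / den
```

Comparing the slope before and after staple insertion gives the percentage stiffness change reported for each loading direction.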

Relevance: 20.00%

Abstract:

Introduction. Surgical treatment of scoliosis is assessed in the spine clinic by the surgeon making numerous measurements on X-rays, as well as measuring the rib hump. It is important, however, to understand which of these measures correlate with self-reported improvements in patients' quality of life following surgery. The objective of this study was to examine the relationship between patient satisfaction after thoracoscopic (keyhole) anterior scoliosis surgery and standard deformity correction measures, using the Scoliosis Research Society (SRS) adolescent questionnaire. Methods. A series of 100 consecutive adolescent idiopathic scoliosis patients received a single anterior rod via a keyhole approach at the Mater Children's Hospital, Brisbane. Patients completed SRS outcomes questionnaires before surgery and again 24 months after surgery. Multiple regression and t-tests were used to investigate the relationship between SRS scores and the deformity correction achieved after surgery. Results. There were 94 females and 6 males with a mean age of 16.1 years. The mean Cobb angle improved from 52° pre-operatively to 21° for the instrumented levels post-operatively (59% correction), and the mean rib hump improved from 16° to 8° (51% correction). The mean total SRS score for the cohort was 99.4/120, indicating a high level of satisfaction with the results of scoliosis surgery. None of the deformity-related parameters in the multiple regressions was significant. However, the twenty patients with the smallest Cobb angles after surgery reported significantly higher SRS scores than the twenty patients with the largest Cobb angles after surgery; there was no corresponding difference on the basis of rib hump correction. Discussion. Patients undergoing thoracoscopic (keyhole) anterior scoliosis correction report good SRS scores, comparable to those in previous studies. We suggest that the absence of any statistically significant difference in SRS scores between patients with and without rod or screw complications reflects the fact that these complications were not associated with any clinically significant loss of correction in our patient group. The Cobb angle after surgery was the only significant predictor of patient satisfaction when comparing subgroups of patients with the largest and smallest Cobb angles after surgery.
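The subgroup comparison above rests on a two-sample t-test. A minimal pure-Python sketch of Welch's t statistic (illustrative; the abstract does not state which statistical software or t-test variant was used):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate (Welch-Satterthwaite) degrees
    of freedom for two independent samples, e.g. SRS scores of the
    smallest-Cobb vs largest-Cobb subgroups. Illustrative sketch."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

A positive t with a small associated p-value would correspond to the significantly higher SRS scores reported for the smallest-Cobb-angle subgroup.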

Relevance: 20.00%

Abstract:

With the identification of common single locus point mutations as risk factors for thrombophilia, many DNA testing methodologies have been described for detecting these variations. Traditionally, functional or immunological testing methods have been used to investigate quantitative anticoagulant deficiencies. However, with the emergence of the genetic variations factor V Leiden, prothrombin 20210 and, to a lesser extent, the methylene tetrahydrofolate reductase (MTHFR677) and factor V HR2 haplotype, traditional testing methodologies have proved to be less useful and instead DNA technology is more commonly employed in diagnostics. This review considers many of the DNA techniques that have proved to be useful in the detection of common genetic variants that predispose to thrombophilia. Techniques involving gel analysis are used to detect the presence or absence of restriction sites, electrophoretic mobility shifts, as in single strand conformation polymorphism or denaturing gradient gel electrophoresis, and product formation in allele-specific amplification. Such techniques may be sensitive, but are unwieldy and often need to be validated objectively. In order to overcome some of the limitations of gel analysis, especially when dealing with larger sample numbers, many alternative detection formats, such as closed tube systems, microplates and microarrays (minisequencing, real-time polymerase chain reaction, and oligonucleotide ligation assays) have been developed. In addition, many of the emerging technologies take advantage of colourimetric or fluorescence detection (including energy transfer) that allows qualitative and quantitative interpretation of results. With the large variety of DNA technologies available, the choice of methodology will depend on several factors including cost and the need for speed, simplicity and robustness.
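Restriction-site analysis of the kind described works by checking whether a variant creates or destroys an enzyme recognition sequence in the amplified fragment, which changes the fragment-length pattern seen on a gel. A toy sketch of that logic, with a made-up recognition site and sequences (no real enzyme or locus is modelled here):

```python
def digest(amplicon, site):
    """Toy restriction digest: cut the amplicon at the start of each
    occurrence of the recognition site and return fragment lengths.
    The site and cut position are illustrative, not a real enzyme."""
    fragments, start = [], 0
    pos = amplicon.find(site)
    while pos != -1:
        if pos > start:
            fragments.append(pos - start)
        start = pos
        pos = amplicon.find(site, pos + 1)
    fragments.append(len(amplicon) - start)
    return fragments
```

A wild-type amplicon containing the site yields two fragments; a variant that destroys the site yields one uncut fragment — the presence/absence pattern such gel-based assays read out.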

Relevance: 20.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
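For a finite hypothesis class, the maximal discrepancy described above can be computed directly as the largest gap between the empirical errors on the two halves of the data. A minimal sketch (illustrative; for rich classes it is the flipped-label ERM formulation mentioned above that makes the computation tractable):

```python
def maximal_discrepancy(hypotheses, X, y):
    """Maximal discrepancy over a finite hypothesis class: the largest
    difference between empirical error on the first half and on the
    second half of the sample. For general classes this maximum is
    found by empirical risk minimization with second-half labels
    flipped; here we enumerate. Illustrative sketch."""
    n = len(X) // 2
    def err(h, xs, ys):
        return sum(h(x) != t for x, t in zip(xs, ys)) / len(xs)
    return max(err(h, X[:n], y[:n]) - err(h, X[n:], y[n:]) for h in hypotheses)
```

The returned value serves directly as a data-based complexity penalty in the penalized model selection schemes the abstract describes.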

Relevance: 20.00%

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
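Once empirical risks and complexity penalties are in hand, the selection rule itself is simple: minimize their sum over the model sequence. A minimal sketch (the substance of the paper lies in how tight a penalty can be justified for nested models, not in this step):

```python
def select_model(empirical_risks, penalties):
    """Complexity-penalized model selection over models ordered by
    inclusion: return the index minimizing empirical risk + penalty.
    Illustrative sketch; inputs are assumed precomputed."""
    scores = [r + p for r, p in zip(empirical_risks, penalties)]
    return min(range(len(scores)), key=scores.__getitem__)
```

With risks that decrease along the nested sequence and penalties that grow with model complexity, the rule trades off approximation against estimation error exactly as the abstract describes.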

Relevance: 20.00%

Abstract:

We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
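Since φ(z) = [exp(z) - 1]/z has the Taylor series Σ_{k≥0} z^k/(k+1)!, a truncated series gives a small-scale reference value for φ(tA)b against which a Krylov approximation can be checked. The sketch below is only such a reference computation for small dense A — it is not the Krylov or Harmonic Ritz approximant discussed above, and it is impractical for the large stiff systems the paper targets.

```python
def phi_times_vector(A, b, t=1.0, terms=25):
    """Reference evaluation of phi(tA) b, where
    phi(z) = (exp(z) - 1)/z = sum_{k>=0} z^k / (k+1)!,
    by truncated Taylor series. Small dense A only; illustrative."""
    n = len(b)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    term = b[:]              # holds (tA)^k b, starting at k = 0
    result = list(b)         # k = 0 contribution: b / 1!
    fact = 1.0               # running (k+1)!
    for k in range(1, terms):
        term = [t * x for x in matvec(A, term)]   # (tA)^k b
        fact *= (k + 1)                           # (k+1)!
        result = [r + x / fact for r, x in zip(result, term)]
    return result
```

For A = I the result is (e - 1)·b, which gives a quick sanity check of either this series or a Krylov implementation.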