143 results for iterative error correction
Abstract:
High-Order Co-Clustering (HOCC) methods have attracted considerable attention in recent years because of their ability to cluster multiple types of objects simultaneously using all available information. During the clustering process, HOCC methods exploit object co-occurrence information, i.e., inter-type relationships amongst different types of objects, as well as object affinity information, i.e., intra-type relationships amongst objects of the same type. However, it is difficult to learn accurate intra-type relationships in the presence of noise and outliers. Existing HOCC methods consider the p nearest neighbours based on Euclidean distance for the intra-type relationships, which leads to incomplete and inaccurate intra-type relationships. In this paper, we propose a novel HOCC method that incorporates multiple subspace learning with a heterogeneous manifold ensemble to learn complete and accurate intra-type relationships. Multiple subspace learning reconstructs the similarity between any pair of objects that belong to the same subspace. The heterogeneous manifold ensemble is created from two types of intra-type relationships, learnt using a p-nearest-neighbour graph and multiple subspace learning. Moreover, to ensure the robustness of the clustering process, we introduce a sparse error matrix into the matrix decomposition and develop a novel iterative algorithm. Empirical experiments show that the proposed method improves on state-of-the-art HOCC methods in terms of FScore and NMI.
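The abstract does not spell out the update rules, but the robustness device it mentions (a sparse error matrix absorbed into a matrix decomposition and updated iteratively) can be sketched generically. The function names, the alternating least-squares updates, and the soft-thresholding step below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def robust_factorize(X, rank=2, lam=0.1, n_iter=100, seed=0):
    """Alternate least-squares updates of W and H with a sparse error matrix E
    absorbing outliers, so that X is approximated by W @ H + E."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.standard_normal((m, rank))
    H = rng.standard_normal((rank, n))
    E = np.zeros_like(X)
    for _ in range(n_iter):
        R = X - E                            # data with outliers removed
        W = R @ np.linalg.pinv(H)            # exact least-squares update of W
        H = np.linalg.pinv(W) @ R            # exact least-squares update of H
        E = soft_threshold(X - W @ H, lam)   # sparse residual captures outliers
    return W, H, E
```

Each step exactly minimises the penalised objective over one block, so the objective decreases monotonically; large residuals (outliers) migrate into E while W @ H fits the clean low-rank structure.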
Abstract:
Managing spinal deformities in young children is challenging, particularly early onset scoliosis (EOS). Surgical intervention is often required if EOS has been unresponsive to conservative treatment, particularly with rapidly progressive curves. An emerging treatment option for EOS is fusionless scoliosis surgery. Like bracing, this surgical option potentially harnesses growth, motion and function of the spine while correcting spinal deformity. Dual growing rods are one such fusionless treatment, which aims to modulate growth of the vertebrae. The aim of this study was to ascertain the extent to which semi-constrained growing rods (Medtronic Sofamor Danek, Memphis, TN, USA) with a telescopic sleeve component reduce rotational constraint on the spine compared with standard rigid rods, and hence potentially provide a more physiological mechanical environment for the growing spine. This study found that semi-constrained growing rods would be expected to allow growth via the telescopic rod components while maintaining the axial flexibility of the spine and an improved capacity for final correction.
Abstract:
A new mesh adaptivity algorithm that combines a posteriori error estimation with a bubble-type local mesh generation (BLMG) strategy for elliptic differential equations is proposed. The size function used in the BLMG is defined at each vertex during the adaptive process based on the obtained error estimator. To avoid excessive coarsening and refining in each iterative step, two factor thresholds are introduced in the size function. The advantages of the BLMG-based adaptive finite element method, compared with other known methods, are as follows: refining and coarsening are handled smoothly within the same framework; the local a posteriori error estimation is easy to implement through the adjacency list of the BLMG method; and at all levels of refinement the updated triangles remain very well shaped, even if the mesh size at any particular refinement level varies by several orders of magnitude. Several numerical examples with singularities for elliptic problems, where explicit error estimators are used, verify the efficiency of the algorithm. The analysis of the parameters introduced in the size function shows that the algorithm has good flexibility.
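The abstract's size function with two factor thresholds is not specified in detail; a minimal sketch of the clamping idea, with assumed names and an assumed square-root error-decay heuristic, might look like:

```python
def target_size(current_size, error, tol, f_refine=0.5, f_coarsen=2.0):
    """Vertex-wise size for the next mesh iteration: shrink the local mesh
    size where the error indicator exceeds the tolerance and grow it where
    the indicator is well below the tolerance, clamping the change with two
    factor thresholds so no single step refines or coarsens excessively."""
    if error <= 0.0:
        ratio = f_coarsen                    # no measured error: coarsen maximally
    else:
        ratio = (tol / error) ** 0.5         # assumed error-decay heuristic
    ratio = max(f_refine, min(f_coarsen, ratio))   # the two factor thresholds
    return current_size * ratio
```

The clamp is what prevents the oscillation between over-refinement and over-coarsening that the abstract warns about.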
Abstract:
Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
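The analytical bounds themselves are not reproduced in the abstract, but the quantities they bound (mean and variance of the output error under small, independent perturbations of weights and inputs) are easy to estimate empirically. The following is an illustrative Monte Carlo check, not the paper's analysis; the `mlp` helper and the Gaussian perturbation model are assumptions:

```python
import numpy as np

def mlp(x, Ws, act=np.tanh):
    """Small feedforward network: hidden layers use `act`, output is linear."""
    h = x
    for W in Ws[:-1]:
        h = act(W @ h)
    return Ws[-1] @ h

def perturbed_output_error_stats(x, Ws, sigma=1e-3, n_samples=2000, seed=0):
    """Monte Carlo estimate of the mean and variance of the output error when
    small, independent, zero-mean Gaussian perturbations of scale `sigma`
    are applied to every weight and every input component."""
    rng = np.random.default_rng(seed)
    y0 = mlp(x, Ws)
    errs = []
    for _ in range(n_samples):
        xp = x + sigma * rng.standard_normal(x.shape)
        Wp = [W + sigma * rng.standard_normal(W.shape) for W in Ws]
        errs.append(mlp(xp, Wp) - y0)
    errs = np.array(errs)
    return errs.mean(axis=0), errs.var(axis=0)
```

For small sigma the error is approximately linear in the perturbations, so its mean is near zero and its variance scales with sigma squared, which is the regime the paper's independence and smallness assumptions describe.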
Abstract:
Background The use of dual growing rods is a fusionless surgical approach to the treatment of early onset scoliosis (EOS), which aims to harness potential growth in order to correct spinal deformity. The purpose of this study was to compare the in-vitro biomechanical response of two different dual rod designs under axial rotation loading. Methods Six porcine spines were dissected into seven-level thoracolumbar multi-segmental units. Each specimen was mounted and tested in a biaxial Instron machine, undergoing nondestructive left/right axial rotation to peak moments of 4 Nm at a constant rotation rate of 8 deg/s. A motion tracking system (Optotrak) measured 3D displacements of individual vertebrae. Each spine was tested in an un-instrumented state first and then with appropriately sized semi-constrained growing rods and 'rigid' rods in alternating sequence. Range of motion, neutral zone size and stiffness were calculated from the moment-rotation curves, and intervertebral ranges of motion were calculated from Optotrak data. Findings Irrespective of test sequence, rigid rods significantly reduced total rotation across all instrumented levels (with increased stiffness), whilst semi-constrained rods exhibited rotation behaviour similar to the un-instrumented spine (P < 0.05). Increases in stiffness of 11% and 8% for left and right axial rotation, respectively, and a 15% reduction in total range of motion were recorded with dual rigid rods compared with semi-constrained rods. Interpretation Based on these findings, semi-constrained growing rods do not increase axial rotation stiffness compared with un-instrumented spines. This is thought to provide a more physiological environment for the growing spine compared to dual rigid rod constructs.
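The study's exact formulas for extracting range of motion and stiffness from the moment-rotation curves are not given in the abstract; a generic sketch of the usual definitions (assumed names, assumed top-fraction fitting window) is:

```python
import numpy as np

def rom_and_stiffness(angle, moment, frac=0.25):
    """Range of motion and stiffness from a moment-rotation curve: ROM is the
    angle span between the rotation extremes, and stiffness is the slope of a
    least-squares line fitted over the top `frac` of the loading moments.
    Illustrative definitions only, not the study's documented calculations."""
    angle = np.asarray(angle, dtype=float)
    moment = np.asarray(moment, dtype=float)
    rom = angle.max() - angle.min()
    high = moment >= (1.0 - frac) * moment.max()    # upper portion of loading
    stiffness = np.polyfit(angle[high], moment[high], 1)[0]
    return rom, float(stiffness)
```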
Abstract:
Purpose To evaluate the influence of cone location and corneal cylinder on RGP-corrected visual acuities and residual astigmatism in patients with keratoconus. Methods In this prospective study, 156 eyes from 134 patients were enrolled. A complete ophthalmologic examination, including manifest refraction, best spectacle-corrected visual acuity (BSCVA) and slit-lamp biomicroscopy, was performed, and corneal topography analysis was done. According to the cone location on the topographic map, the patients were divided into central and paracentral cone groups. Trial RGP lenses were selected based on the flat Sim K readings, and a 'three-point touch' fitting approach was used. Over-contact-lens refraction was performed, residual astigmatism (RA) was measured, and best-corrected RGP visual acuities (RGPVA) were recorded. Results The mean age (±SD) was 22.1 ± 5.3 years. 76 eyes (48.6%) had central and 80 eyes (51.4%) had paracentral cones. Prior to RGP lens fitting, the mean (±SD) subjective refraction spherical equivalent (SRSE), subjective refraction astigmatism (SRAST) and BSCVA (logMAR) were −5.04 ± 2.27 D, −3.51 ± 1.68 D and 0.34 ± 0.14, respectively. There were statistically significant differences between the central and paracentral cone groups in the mean values of SRSE, SRAST, flat meridian (Sim K1), steep meridian (Sim K2), mean K and corneal cylinder (p-values < 0.05). Comparison of BSCVA with RGPVA shows that vision improved by 0.3 logMAR with RGP lenses (p < 0.0001). Mean (±SD) RA was −0.72 ± 0.39 D. There were no statistically significant differences between the RGPVAs and RAs of the central and paracentral cone groups (p = 0.22 and p = 0.42, respectively). Pearson's correlation analysis shows a statistically significant relationship between corneal cylinder and both BSCVA and RGPVA; however, the relationship between corneal cylinder and residual astigmatism was not significant.
Conclusions Cone location has no effect on RGP-corrected visual acuities or residual astigmatism in patients with keratoconus. Corneal cylinder and Sim K values influence RGP-corrected visual acuities but do not influence residual astigmatism.
Abstract:
Study design Retrospective validation study. Objectives To propose a method to evaluate, from a clinical standpoint, the ability of a finite-element model (FEM) of the trunk to simulate orthotic correction of spinal deformity and to apply it to validate a previously described FEM. Summary of background data Several FEMs of the scoliotic spine have been described in the literature. These models can prove useful in understanding the mechanisms of scoliosis progression and in optimizing its treatment, but their validation has often been lacking or incomplete. Methods Three-dimensional (3D) geometries of 10 patients before and during conservative treatment were reconstructed from biplanar radiographs. The effect of bracing was simulated by modeling displacements induced by the brace pads. Simulated clinical indices (Cobb angle, T1–T12 and T4–T12 kyphosis, L1–L5 lordosis, apical vertebral rotation, torsion, rib hump) and vertebral orientations and positions were compared to those measured in the patients' 3D geometries. Results Errors in clinical indices were of the same order of magnitude as the uncertainties due to 3D reconstruction; for instance, Cobb angle was simulated with a root mean square error of 5.7°, and rib hump error was 5.6°. Vertebral orientation was simulated with a root mean square error of 4.8° and vertebral position with an error of 2.5 mm. Conclusions The methodology proposed here allowed in-depth evaluation of subject-specific simulations, confirming that FEMs of the trunk have the potential to accurately simulate brace action. These promising results provide a basis for ongoing 3D model development, toward the design of more efficient orthoses.
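The validation metric quoted above, root-mean-square error between simulated and measured clinical indices, is a simple computation; a minimal helper (illustrative, not from the paper) is:

```python
import numpy as np

def rms_error(simulated, measured):
    """Root-mean-square error between simulated and measured index values,
    e.g. Cobb angles or vertebral orientations across a patient cohort."""
    d = np.asarray(simulated, dtype=float) - np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```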
Abstract:
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers near equivalent answers compared with analyses of the full dataset under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
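The design-based subsampling idea can be illustrated with a greedy D-optimality heuristic for a linear model: repeatedly add the data point that most increases the determinant of the information matrix. This is a generic sketch under an assumed linear-model utility, not the authors' procedure:

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily pick k rows of the design matrix X that approximately
    maximise det(X_S' X_S), the classic D-optimality criterion, yielding
    an informative subsample of a large dataset."""
    n, p = X.shape
    chosen = []
    M = ridge * np.eye(p)                     # regularised information matrix
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # Matrix determinant lemma: det(M + x x') = det(M) * (1 + x' M^-1 x),
        # so the best addition maximises the quadratic form x' M^-1 x.
        gains = np.einsum('ij,jk,ik->i', X, Minv, X)
        gains[chosen] = -np.inf               # no repeated rows
        i = int(np.argmax(gains))
        chosen.append(i)
        M = M + np.outer(X[i], X[i])
    return chosen
```

On informative-versus-redundant rows the greedy rule behaves as the abstract suggests a principled design should: it spends the budget on points that span the design space rather than on near-duplicates.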
Abstract:
Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness observed during ageing in rodent models.
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability for the treatment falls within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
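For any such two-stage design, the frequentist operating characteristics reduce to binomial tail computations. The sketch below evaluates the rejection probability of a generic Simon-type design; the particular design parameters (n1 = 10, r1 = 2, n = 29, r = 9) and response rates (0.20, 0.40) are illustrative assumptions, not values from the article:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def reject_prob(p, n1, r1, n, r):
    """Probability that a Simon-type two-stage design rejects the null when
    the true response rate is p: continue past stage 1 only if more than r1
    responses are seen in the first n1 patients, and reject the null at the
    end only if the total responses among all n patients exceed r."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):                       # pass stage 1
        stage2 = sum(binom_pmf(x2, n - n1, p)
                     for x2 in range(max(0, r - x1 + 1), n - n1 + 1))
        total += binom_pmf(x1, n1, p) * stage2
    return total

# Illustrative only: type I error under a null rate p0 and power under p1.
alpha = reject_prob(0.20, n1=10, r1=2, n=29, r=9)
power = reject_prob(0.40, n1=10, r1=2, n=29, r=9)
```

Evaluating `reject_prob` under the null and alternative rates is exactly how a design is tuned so that the frequentist Type I and Type II errors sit at the desired levels.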
Abstract:
We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
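In the linear-model special case mentioned above, the procedure reduces to iteratively reweighted least squares: alternate a weighted fit with re-estimation of the error variances. The toy sketch below assumes a group-wise heteroscedastic error model; the function name and model are illustrative, not the paper's general semiparametric setting:

```python
import numpy as np

def iterative_gls(X, y, group, n_iter=20):
    """Iteratively reweighted least squares for a linear model with unknown
    per-group error variances: fit by weighted least squares, re-estimate
    each group's residual variance, update the weights, and repeat."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        XtW = X.T * w                          # scale columns of X.T by weights
        beta = np.linalg.solve(XtW @ X, XtW @ y)
        resid = y - X @ beta
        for g in np.unique(group):
            mask = group == g
            w[mask] = 1.0 / max(float(resid[mask].var()), 1e-8)
    return beta
```

With well-separated noise levels the iteration settles in a few passes, down-weighting the noisy group, which mirrors the fast (exponential-rate) convergence the abstract claims for the general procedure.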
Abstract:
A 'pseudo-Bayesian' interpretation of standard errors yields a natural induced smoothing of statistical estimating functions. When applied to rank estimation, the lack of smoothness which prevents standard error estimation is remedied. Efficiency and robustness are preserved, while the smoothed estimation has excellent computational properties. In particular, convergence of the iterative equation for standard error is fast, and standard error calculation becomes asymptotically a one-step procedure. This property also extends to covariance matrix calculation for rank estimates in multi-parameter problems. Examples, and some simple explanations, are given.
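The induced-smoothing idea can be shown on the simplest non-smooth ingredient of a rank estimating function, the sign function: averaging sign(x + hZ) over Z ~ N(0, 1) yields the smooth function 2Φ(x/h) − 1, which equals erf(x/(h√2)). A minimal illustration (the bandwidth h plays the role of the n^(−1/2) perturbation scale):

```python
from math import erf, sqrt

def smoothed_sign(x, h):
    """Induced-smoothing surrogate for sign(x): the expectation of
    sign(x + h*Z) over Z ~ N(0, 1) is 2*Phi(x/h) - 1 = erf(x/(h*sqrt(2))),
    a smooth, monotone function that restores differentiability."""
    return erf(x / (h * sqrt(2.0)))
```

Because the surrogate is differentiable, the usual sandwich formulas for standard errors become available, which is exactly the gap in raw rank estimation that the abstract says the smoothing remedies.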
Abstract:
For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides some results on the accuracy of such approximations and, in particular, gives error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
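The calibration idea, approximating a Gittins index by finite-horizon dynamic programming against a standard arm of known reward λ, can be sketched for the Bernoulli reward process with a Beta posterior. The discount factor, horizon, and bisection tolerance below are illustrative assumptions, and the code is a generic sketch rather than the paper's construction:

```python
def gittins_bernoulli(a, b, gamma=0.9, horizon=100, tol=1e-4):
    """Approximate the Gittins index of a Bernoulli reward process with a
    Beta(a, b) posterior: for a candidate standard-arm reward lam, solve a
    finite-horizon dynamic program in which each step chooses between
    pulling the risky arm and retiring to lam forever, then bisect on lam
    to find the indifference point (the index)."""
    def value(a, b, lam, t, cache):
        if t == horizon:
            return 0.0
        key = (a, b, t)
        if key in cache:
            return cache[key]
        p = a / (a + b)                        # posterior mean success probability
        cont = p * (1.0 + gamma * value(a + 1, b, lam, t + 1, cache)) \
             + (1.0 - p) * gamma * value(a, b + 1, lam, t + 1, cache)
        retire = lam * (1.0 - gamma ** (horizon - t)) / (1.0 - gamma)
        v = max(cont, retire)
        cache[key] = v
        return v

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        root_retire = lam * (1.0 - gamma ** horizon) / (1.0 - gamma)
        if value(a, b, lam, 0, {}) > root_retire + 1e-12:
            lo = lam                           # continuing beats retiring: index > lam
        else:
            hi = lam
    return 0.5 * (lo + hi)
```

Truncating the horizon is precisely the approximation whose accuracy the paper bounds: the neglected tail is damped by gamma to the power of the horizon, so the error shrinks geometrically as the horizon grows.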