194 results for Posterior cruciate ligamen
Abstract:
Introduction. Ideally, after selective thoracic fusion for Lenke Class IC (i.e. major thoracic / secondary lumbar) curves, the lumbar spine will spontaneously accommodate to the corrected position of the thoracic curve, thereby achieving a balanced spine and avoiding the need for fusion of lumbar spinal segments [1]. The purpose of this study was to evaluate the behaviour of the lumbar curve in Lenke Class IC adolescent idiopathic scoliosis (AIS) following video-assisted thoracoscopic spinal fusion and instrumentation (VATS) of the major thoracic curve. Methods. A retrospective review of 22 consecutive patients with AIS who underwent VATS by a single surgeon was conducted. The results were compared with published literature examining the behaviour of the secondary lumbar curve where other surgical approaches were employed. Results. Twenty-two patients (all female) with AIS underwent VATS. All major thoracic curves were right convex. The mean age at surgery was 14 years (range 10 to 22 years). On average, 6.7 levels (6 to 8) were instrumented. The mean follow-up was 25.1 months (6 to 36). The mean pre-operative major thoracic Cobb angle was 53.8° (40° to 75°) and the mean pre-operative secondary lumbar Cobb angle was 43.9° (34° to 55°). On bending radiographs, the secondary curve corrected to 11.3° (0° to 35°). The mean rib hump measurement was 15.0° (7° to 21°). At latest follow-up, the major thoracic Cobb angle measured 27.2° on average (20° to 41°) (p<0.001, univariate ANOVA) and the mean secondary lumbar curve was 27.3° (15° to 42°) (p<0.001), representing an uninstrumented secondary curve correction factor of 37.8%. The mean rib hump measurement was 6.5° (2° to 15°) (p<0.001). These results were comparable to published series in which open surgery was performed. Discussion. VATS is an effective method of correcting major thoracic curves with secondary lumbar curves. The behaviour of the secondary lumbar curve is consistent with published series in which open surgery, whether anterior or posterior, was performed.
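As a check on the reported figure, the uninstrumented correction factor follows directly from the mean secondary lumbar Cobb angles quoted above:

$$\frac{43.9^\circ - 27.3^\circ}{43.9^\circ} \approx 0.378,$$

i.e. the 37.8% correction reported.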
Abstract:
Introduction. Ovine models are widely used in orthopaedic research. To better understand the impact of orthopaedic procedures, computer simulations are necessary. 3D finite element (FE) models of bones allow implant designs to be investigated mechanically, thereby reducing mechanical testing. Hypothesis. We present the development and validation of an ovine tibia FE model for use in the analysis of tibial fracture fixation plates. Material & Methods. Mechanical testing of the tibia consisted of an offset three-point bend test with three repetitions of loading to 350 N and return to 50 N. Tri-axial stacked strain gauges were applied to the anterior and posterior surfaces of the bone, and two rigid bodies, consisting of eight infrared active markers, were attached to the ends of the tibia. Positional measurements were taken with a FARO arm 3D digitiser. The FE model was constructed with both geometry and material properties derived from CT images of the bone. The elasticity-density relationship used for material property determination was validated separately using mechanical testing. The model was then transformed into the same coordinate system as the in vitro mechanical test and the loads applied. Results. Comparison between the mechanical testing and the FE model showed good correlation in surface strains (difference: anterior 2.3%, posterior 3.2%). Discussion & Conclusion. This method of model creation provides a simple means of generating subject-specific FE models from CT scans. Using the CT data set for both the geometry and the material properties ensures a more accurate representation of the specific bone, which is reflected in the similarity of the surface strain results.
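The abstract does not state the elasticity-density relationship used. As an illustration only, a minimal Python sketch of the common power-law approach to CT-based material mapping (the calibration slope/intercept and power-law coefficients here are hypothetical placeholders, not the study's values):

```python
import numpy as np

# Hypothetical calibration: Hounsfield units -> apparent density (g/cm^3).
def hu_to_density(hu, slope=0.0007, intercept=0.05):
    return slope * np.asarray(hu, dtype=float) + intercept

# Hypothetical power law: density -> Young's modulus (MPa), E = c * rho**p.
def density_to_modulus(rho, c=6850.0, p=1.49):
    return c * np.power(np.clip(rho, 1e-6, None), p)

hu = np.array([200.0, 800.0, 1400.0])      # example voxel intensities
E = density_to_modulus(hu_to_density(hu))  # element-wise moduli for the FE mesh
print(E)
```

In this scheme each mesh element inherits a modulus from the CT voxels it overlaps, which is what ties the material model to the specific bone.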
Abstract:
This paper introduces a novel technique to directly optimise the Figure of Merit (FOM) for phonetic spoken term detection. The FOM is a popular measure of STD accuracy, making it an ideal candidate for use as an objective function. A simple linear model is introduced to transform the phone log-posterior probabilities output by a phone classifier, producing enhanced log-posterior features that are more suitable for the STD task. Direct optimisation of the FOM is then performed by training the parameters of this model using a non-linear gradient descent algorithm. Substantial relative FOM improvements of 11% are achieved on held-out evaluation data, demonstrating the generalisability of the approach.
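A minimal sketch of the idea, not the authors' implementation: a linear transform of phone log-posteriors trained by gradient ascent on a smooth stand-in objective (the surrogate margin objective and all sizes here are assumptions; the paper optimises the FOM itself):

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 500, 40                              # frames and phone classes (toy sizes)
logpost = rng.standard_normal((T, P))       # stand-in phone log-posteriors
labels = rng.integers(0, P, size=T)         # stand-in reference phone labels

W = np.eye(P)                               # linear model, initialised to identity
lr = 0.01
for step in range(100):
    enhanced = logpost @ W                  # enhanced log-posterior features
    # Surrogate objective: margin of the correct phone's score over the best
    # competitor (a differentiable stand-in, not the true FOM).
    masked = enhanced.copy()
    masked[np.arange(T), labels] = -np.inf
    comp_idx = masked.argmax(axis=1)
    # Sub-gradient of the mean margin with respect to the columns of W.
    grad = np.zeros_like(W)
    for t in range(T):
        grad[:, labels[t]] += logpost[t]
        grad[:, comp_idx[t]] -= logpost[t]
    W += lr * grad / T                      # gradient ascent step
    if step % 25 == 0:
        margin = enhanced[np.arange(T), labels] - masked.max(axis=1)
        print(step, margin.mean())
```

The output of indexing is then `logpost @ W`, a matrix of enhanced posterior features tailored to the detection task.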
Abstract:
This paper describes the formalization and application of a methodology to evaluate the safety benefit of countermeasures in the face of uncertainty. To illustrate the methodology, 18 countermeasures for improving the safety of at-grade railroad crossings (AGRXs) in the Republic of Korea are considered. Akin to "stated preference" methods in travel survey research, the methodology applies random selection and the laws of large numbers to derive accident modification factor (AMF) densities from expert opinions. In a full Bayesian analysis framework, the collective opinions in the form of AMF densities (data likelihood) are combined with prior knowledge (AMF density priors) for the 18 countermeasures to obtain 'best' estimates of AMFs (AMF posterior credible intervals). The countermeasures are then compared and recommended based on the largest safety returns with minimum risk (uncertainty). To the author's knowledge, the complete methodology is new and has not previously been applied or reported in the literature. The results demonstrate that the methodology is able to discern anticipated safety benefit differences across candidate countermeasures. For the 18 countermeasures considered in this analysis, the top three for reducing crashes were found to be in-vehicle warning systems, obstacle detection systems, and constant warning time systems.
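Schematically, and with notation assumed rather than taken from the paper, the framework combines the expert-derived likelihood with the prior for each countermeasure's AMF $\theta_j$:

$$p(\theta_j \mid \text{expert opinions}) \propto p(\text{expert opinions} \mid \theta_j)\, p(\theta_j), \qquad j = 1, \dots, 18,$$

with posterior credible intervals for $\theta_j$ providing the 'best' AMF estimates referred to above.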
Abstract:
Purpose: To investigate the influence of soft contact lenses on regional variations in corneal thickness and shape while taking into account natural diurnal variations in these corneal parameters. Methods: Twelve young, healthy subjects wore 4 different types of soft contact lenses on 4 different days. The lenses were of two different materials (silicone hydrogel, hydrogel), designs (spherical, toric) and powers (–3.00, –7.00 D). Corneal thickness and topography measurements were taken before and after 8 hours of lens wear, and on two days without lens wear, using the Pentacam HR system. Results: The hydrogel toric contact lens caused the greatest corneal thickening in both the central (20.3 ± 10.0 microns) and peripheral cornea (24.1 ± 9.1 microns) (p < 0.001), with obvious regional swelling of the cornea beneath the stabilizing zones. The anterior corneal surface generally showed slight flattening. All contact lenses resulted in central posterior corneal steepening, and this was weakly correlated with central corneal swelling (p = 0.03) and peripheral corneal swelling (p = 0.01). Conclusions: There was obvious regional corneal swelling apparent after wear of the hydrogel soft toric lenses, due to the location of the thicker stabilization zones of these lenses. However, with the exception of the hydrogel toric lens, the magnitude of corneal swelling induced by the contact lenses over the 8 hours of wear was less than the natural diurnal thinning of the cornea over the same period.
Abstract:
For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: firstly, accommodating phone recognition errors and, secondly, modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for increasing the speed of DMLS search by at least an order of magnitude with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework are proposed for discriminatively training such a system to directly maximise the Figure of Merit, resulting in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state of the art in phonetic STD by improving the utility of such systems in a wide range of applications.
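As an illustration of the approximate phone sequence matching underlying this approach (not the DMLS implementation; the cost values are hypothetical), a weighted edit distance driven by a phone error cost model might look like:

```python
def min_cost_match(query, sequence, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Weighted edit distance between a query phone string and an indexed
    phone sequence; sub_cost(a, b) encodes the phone error cost model."""
    m, n = len(query), len(sequence)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + sub_cost(query[i - 1], sequence[j - 1]),
                d[i - 1][j] + del_cost,      # query phone deleted
                d[i][j - 1] + ins_cost,      # phone inserted by recogniser
            )
    return d[m][n]

# Hypothetical cost model: confusable phone pairs are cheap to substitute.
confusable = {("p", "b"), ("t", "d"), ("s", "z")}
cost = lambda a, b: 0.0 if a == b else (
    0.3 if (a, b) in confusable or (b, a) in confusable else 1.0)
print(min_cost_match(list("bat"), list("pad"), cost))  # 0.6: two cheap substitutions
```

Sequences in the index whose match cost falls below a threshold are emitted as putative term occurrences, which is how recognition errors are accommodated.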
Abstract:
Background: Falls are a major health and injury problem for people with Parkinson disease (PD). Despite the severe consequences of falls, a major unresolved issue is the identification of factors that predict the risk of falls in individual patients with PD. The primary aim of this study was to prospectively determine an optimal combination of functional and disease-specific tests to predict falls in individuals with PD. Methods: A total of 101 people with early-stage PD undertook a battery of neurologic and functional tests in their optimally medicated state. The tests included the Tinetti, Berg, Timed Up and Go, and Functional Reach tests, and the Physiological Profile Assessment of Falls Risk; the latter includes physiologic tests of visual function, proprioception, strength, cutaneous sensitivity, reaction time, and postural sway. Falls were recorded prospectively over 6 months. Results: Forty-eight percent of participants reported a fall and 24% reported more than one fall. In the multivariate model, a combination of the Unified Parkinson's Disease Rating Scale (UPDRS) total score, total freezing-of-gait score, occurrence of symptomatic postural orthostasis, Tinetti total score, and extent of postural sway in the anterior-posterior direction produced the best sensitivity (78%) and specificity (84%) for predicting falls. Of the UPDRS items, only the rapid alternating task category was an independent predictor of falls. Reduced peripheral sensation and knee extension strength in fallers contributed to increased postural instability. Conclusions: Falls are a significant problem in optimally medicated early-stage PD. A combination of disease-specific and balance- and mobility-related measures can accurately predict falls in individuals with PD.
Abstract:
In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time-invariant safety. Since the time-invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempt has been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking of how accurately the modified EB method estimates safety in the presence of time-variant safety and regression-to-the-mean (RTM) effects. This study derives the closed-form marginal distribution and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method, by simultaneously accounting for the RTM and time-variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of time-variant safety and RTM effects.
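As a sketch of the stated equivalence (notation assumed, not taken from the paper): if the crash counts at site $i$ over $T$ periods are conditionally Poisson given a shared gamma-distributed site effect, integrating out the effect yields a negative multinomial marginal:

$$y_{it} \mid \lambda_i \sim \mathrm{Poisson}(\lambda_i \mu_{it}), \qquad \lambda_i \sim \mathrm{Gamma}(\phi, \phi),$$

$$p(y_{i1}, \dots, y_{iT}) = \frac{\Gamma\!\left(\phi + \sum_t y_{it}\right)}{\Gamma(\phi)\, \prod_t y_{it}!} \left(\frac{\phi}{\phi + \sum_t \mu_{it}}\right)^{\phi} \prod_t \left(\frac{\mu_{it}}{\phi + \sum_t \mu_{it}}\right)^{y_{it}},$$

which is the NM likelihood underlying the random effects Poisson model.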
Abstract:
We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms by using indirect inference. ABC methods are useful for posterior inference in the presence of an intractable likelihood function. In the indirect inference approach to ABC, the parameters of an auxiliary model fitted to the data become the summary statistics. Although applicable to any ABC technique, we embed this approach within a sequential Monte Carlo algorithm that is completely adaptive and requires very little tuning. This methodological development was motivated by an application involving data on macroparasite population evolution modelled by a trivariate stochastic process for which there is no tractable likelihood function. The auxiliary model here is based on a beta-binomial distribution. The main objective of the analysis is to determine which parameters of the stochastic model are estimable from the observed data on mature parasite worms.
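A minimal sketch of the indirect inference idea in its simplest (rejection ABC) form, rather than the adaptive SMC algorithm of the paper; the simulator, prior, and tolerance are all assumptions made for illustration:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

def fit_auxiliary(counts, n_trials):
    """Fit a beta-binomial auxiliary model by maximum likelihood; the fitted
    (alpha, beta) become the ABC summary statistics."""
    def nll(params):
        a, b = np.exp(params)                       # enforce positivity
        return -stats.betabinom.logpmf(counts, n_trials, a, b).sum()
    res = optimize.minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)

def simulate(theta, n_hosts=200, n_trials=50):
    """Hypothetical stand-in for the intractable trivariate simulator."""
    p = rng.beta(theta[0], theta[1], size=n_hosts)
    return rng.binomial(n_trials, p)

observed = simulate(np.array([2.0, 5.0]))           # pretend field data
s_obs = fit_auxiliary(observed, 50)

# Rejection ABC: keep draws whose auxiliary-model fit is close to s_obs.
accepted = []
for _ in range(500):
    theta = rng.uniform(0.5, 8.0, size=2)           # prior draw (assumed prior)
    s_sim = fit_auxiliary(simulate(theta), 50)
    if np.linalg.norm(s_sim - s_obs) < 0.5:         # tolerance (assumed)
        accepted.append(theta)
print(len(accepted), "accepted posterior draws")
```

The key design point is that the auxiliary model need only fit the data well, not be the true model; its fitted parameters act as low-dimensional, informative summaries.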
Abstract:
This work proposes to improve spoken term detection (STD) accuracy by optimising the Figure of Merit (FOM). In this article, the index takes the form of a phonetic posterior-feature matrix. Accuracy is improved by formulating STD as a discriminative training problem and directly optimising the FOM, through its use as an objective function to train a transformation of the index. The outcome of indexing is then a matrix of enhanced posterior-features that are directly tailored for the STD task. The technique is shown to improve the FOM by up to 13% on held-out data. Additional analysis explores the effect of the technique on phone recognition accuracy, examines the actual values of the learned transform, and demonstrates that using an extended training data set results in further improvement in the FOM.
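For reference, a common convention for the FOM in word spotting and STD (the abstract does not restate its definition) is the detection rate averaged over operating points up to ten false alarms per term per hour:

$$\mathrm{FOM} = \frac{1}{10}\int_{0}^{10} p(f)\, df,$$

where $p(f)$ is the proportion of term occurrences detected when the decision threshold yields $f$ false alarms per term per hour.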
Abstract:
This article explores the use of probabilistic classification, namely finite mixture modelling, for identification of complex disease phenotypes, given cross-sectional data. In particular, it focuses on posterior probabilities of subgroup membership, a standard output of finite mixture modelling, and how the quantification of uncertainty in these probabilities can lead to more detailed analyses. Using a Bayesian approach, we describe two practical uses of this uncertainty: (i) as a means of describing a person's membership to a single or multiple latent subgroups and (ii) as a means of describing identified subgroups by patient-centred covariates not included in model estimation. These proposed uses are demonstrated on a case study in Parkinson's disease (PD), where latent subgroups are identified using multiple symptoms from the Unified Parkinson's Disease Rating Scale (UPDRS).
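As a generic illustration of posterior membership probabilities from a fitted mixture (not the case-study model; the data here are synthetic), using scikit-learn:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical symptom scores for 300 patients drawn from two latent subgroups.
X = np.vstack([rng.normal(0.0, 1.0, (150, 4)),
               rng.normal(2.5, 1.0, (150, 4))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
post = gmm.predict_proba(X)     # posterior subgroup membership per person

# Uncertainty in membership: people near 0.5/0.5 belong weakly to both subgroups.
uncertain = np.sum(np.abs(post[:, 0] - 0.5) < 0.2)
print(f"{uncertain} of {len(X)} patients have ambiguous subgroup membership")
```

It is exactly this per-person posterior, rather than a hard cluster label, that supports the two uses of uncertainty described above.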
Abstract:
Purpose: To examine the impact of different endotracheal tube (ETT) suction techniques on regional end-expiratory lung volume (EELV) and tidal volume (VT) in an animal model of surfactant-deficient lung injury. Methods: Six 2-week-old piglets were intubated (4.0 mm ETT), muscle-relaxed and ventilated, and lung injury was induced with repeated saline lavage. In each animal, open suction (OS) and two methods of closed suction (CS) were performed in random order using both 5 and 8 French gauge (FG) catheters. The pre-suction volume state of the lung was standardised on the inflation limb of the pressure-volume relationship. Regional EELV and VT, expressed as a proportion of the impedance change at vital capacity (%ZVCroi) within the anterior and posterior halves of the chest, were measured during and for 60 s after suction using electrical impedance tomography. Results: During suction, 5 FG CS resulted in preservation of EELV in the anterior (non-dependent) and posterior (dependent) lung compared with the other permutations, although this reached significance only in the anterior regions (p < 0.001, repeated-measures ANOVA). VT within the anterior, but not the posterior, lung was significantly greater during 5 FG CS than during 8 FG CS; the mean difference was 15.1 %ZVCroi [95% CI 5.1, 25.1]. Neither catheter size nor suction technique influenced post-suction regional EELV or VT compared with pre-suction values (repeated-measures ANOVA). Conclusions: ETT suction causes transient loss of EELV and VT throughout the lung. Catheter size exerts a greater influence than suction method, with CS protecting against derecruitment only when a small catheter is used, especially in the non-dependent lung.
Abstract:
We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for incorporating safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this amalgamation of methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be included into our approach.
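A minimal sketch of the importance sampling step described above, under an assumed one-parameter logistic dose-toxicity model (all names and values are hypothetical): existing posterior draws are re-weighted by the likelihood of a new observation rather than re-sampled by MCMC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Draws representing the current posterior for a dose-toxicity slope, under
# an assumed model P(toxicity | dose) = logistic(theta * dose).
theta = rng.normal(0.5, 0.2, size=5000)
weights = np.full(theta.shape, 1.0 / len(theta))

def p_tox(theta, dose):
    return 1.0 / (1.0 + np.exp(-theta * dose))

# A new patient is dosed and the toxicity outcome observed; re-weight the
# existing draws by the new likelihood instead of re-running MCMC.
dose, outcome = 2.0, 1                      # hypothetical observation
lik = stats.bernoulli.pmf(outcome, p_tox(theta, dose))
weights = weights * lik
weights /= weights.sum()

# Weighted posterior summaries drive the next dose recommendation.
post_mean = np.sum(weights * theta)
ess = 1.0 / np.sum(weights**2)              # effective sample size diagnostic
print(post_mean, ess)
```

In sequential designs this keeps each interim update cheap; once the effective sample size degrades, the sample would need refreshing.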
Abstract:
We present a novel method and instrument for in vivo imaging and measurement of human corneal dynamics during an air puff. The instrument is based on high-speed swept source optical coherence tomography (ssOCT) combined with a custom adapted air puff chamber from a non-contact tonometer, which uses an air stream to deform the cornea in a non-invasive manner. During the short period of time over which the deformation takes place, the ssOCT acquires multiple A-scans in time (an M-scan) at the center of the air puff, allowing observation of the dynamics of the anterior and posterior corneal surfaces as well as the anterior lens surface. The measured dynamics are driven by the biomechanical properties of the human eye as well as its intraocular pressure. Thus, analysis of the M-scan may provide useful information about the biomechanical behavior of the anterior segment during the applanation caused by the air puff. An initial set of controlled clinical experiments is presented to assess the performance of the instrument and its potential applicability to further understanding of eye biomechanics and intraocular pressure measurements. Limitations and possibilities of the new apparatus are discussed.
Abstract:
Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of the pest absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model using informative priors for invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimated probability that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can provide a dramatic reduction in the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
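The imperfect-detection logic of the observation model can be made concrete with a schematic update (notation assumed, not taken from the paper): if a district is infested with prior probability $\pi$ and each survey detects a present infestation with sensitivity $s$, then after $n$ surveys all returning negatives,

$$P(\text{infested} \mid n \text{ negatives}) = \frac{\pi (1-s)^n}{\pi (1-s)^n + (1-\pi)},$$

which quantifies how accumulating absence data reduces the estimated probability of infestation.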