908 results for Error Correction Model
Abstract:
This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient, one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is utilized in our system to calibrate both images. Starting from bone contours that are interactively identified in the X-ray images, our system constructs a patient-specific surface model of the proximal femur using a statistical-model-based 2D/3D reconstruction algorithm. We present the design of the system and its validation on 25 bones. An average reconstruction error of 0.95 mm was observed.
Abstract:
In this paper I consider the impact of a noisy indicator of a manager's manipulative behavior on optimal effort incentives and the extent of earnings management. The analysis extends a two-task, single-performance-measure LEN model by including a binary random variable. I show that contracting on the noisy indicator variable is not always useful. More specifically, the principal uses the indicator variable to prevent earnings management only when manipulative behavior is not excessive. Thus, under conditions of excessive earnings management, accounting adjustments that yield a more congruent overall performance measure can be more effective at mitigating the earnings management problem than an appraisal of whether earnings management has occurred.
Abstract:
BACKGROUND: A fixed cavovarus foot deformity can be associated with anteromedial ankle arthrosis due to elevated medial joint contact stresses. Supramalleolar valgus osteotomies (SMOT) and lateralizing calcaneal osteotomies (LCOT) are commonly used to treat symptoms by redistributing joint contact forces. In a cavovarus model, the effects of SMOT and LCOT on the lateralization of the center of force (COF) and the reduction of peak pressure in the ankle joint were compared. METHODS: A previously published cavovarus model with fixed hindfoot varus was simulated in 10 cadaver specimens. Closing wedge supramalleolar valgus osteotomies 3 cm above the ankle joint level (6 and 11 degrees) and lateral sliding calcaneal osteotomies (5 and 10 mm displacement) were analyzed at 300 N axial static load (half body weight). The COF migration and peak pressure decrease in the ankle were recorded using high-resolution TekScan pressure sensors. RESULTS: A significant lateral COF shift was observed for each osteotomy: 2.1 mm for the 6 degrees (P = .014) and 2.3 mm for the 11 degrees SMOT (P = .010). The 5 mm LCOT led to a lateral shift of 2.0 mm (P = .042) and the 10 mm LCOT to a shift of 3.0 mm (P = .006). Comparing the different osteotomies among themselves, no significant differences were recorded. No significant anteroposterior COF shift was seen. A significant peak pressure reduction was recorded for each osteotomy: the SMOT led to a reduction of 29% (P = .033) for the 6 degrees and 47% (P = .003) for the 11 degrees osteotomy, and the LCOT to a reduction of 41% (P = .003) for the 5 mm and 49% (P = .002) for the 10 mm osteotomy. Similar to the COF lateralization, no significant differences between the osteotomies were seen. CONCLUSION: LCOT and SMOT significantly reduced anteromedial ankle joint contact stresses in this cavovarus model. The unloading effects of both osteotomies were equivalent. More correction did not lead to significantly more lateralization of the COF or a greater reduction of peak pressure, although a trend was seen. CLINICAL RELEVANCE: In patients with fixed cavovarus feet, both SMOT and LCOT provided equally good redistribution of elevated ankle joint contact forces. Increasing the amount of displacement did not appear to yield a corresponding improvement in joint pressures. The site of the osteotomy could therefore be chosen on the basis of the surgeon's preference, simplicity, or local factors in the case of more complex reconstructions.
Abstract:
If change over time is compared in several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem was recently provided, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether co-infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.
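For illustration, a minimal sketch of the kind of longitudinal mixed-effects model this approach builds on, fitted to all observations including the baseline visit; the simulated data and the names (`cd4`, `coinfected`, `patient`) are hypothetical, and the paper's correction for baseline measurement error and the conditional-change calculation are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 200, 5

rows = []
for pid in range(n_patients):
    coinfected = rng.integers(0, 2)                    # hypothetical HIV/HCV co-infection flag
    true_baseline = rng.normal(350, 80)                # latent baseline CD4 count
    slope = 40 - 15 * coinfected + rng.normal(0, 10)   # slower increase if co-infected
    for t in range(n_visits):
        y = true_baseline + slope * t + rng.normal(0, 30)  # measurement error on every visit
        rows.append({"patient": pid, "time": t, "coinfected": coinfected, "cd4": y})
data = pd.DataFrame(rows)

# Fit a mixed model to ALL observations, including the (error-prone) baseline visit,
# with random intercepts and slopes per patient.
model = smf.mixedlm("cd4 ~ time * coinfected", data,
                    groups=data["patient"], re_formula="~time")
result = model.fit()

# The time:coinfected fixed effect estimates the difference in CD4 increase per visit
# between co-infected and mono-infected patients.
print("estimated slope difference:", result.params["time:coinfected"])
```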
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational cost consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge is to adequately approximate the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
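A minimal sketch of the underlying idea, fitting a Gaussian process in the joint (design variable, computational time) space with plain numpy; an anisotropic squared-exponential kernel with fixed hyperparameters stands in for the paper's nonstationary, error-tailored covariance, and the toy response below is made up.

```python
import numpy as np

def rbf_kernel(A, B, length_scales, variance=1.0):
    """Anisotropic squared-exponential kernel over joint (design, time) inputs."""
    d = (A[:, None, :] - B[None, :, :]) / length_scales
    return variance * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

rng = np.random.default_rng(1)

# Hypothetical training data: design variable x, convergence time t, partially
# converged response y (longer t -> closer to the exact response).
x = rng.uniform(0, 1, 40)
t = rng.uniform(0.2, 1.0, 40)
y = np.sin(2 * np.pi * x) + (1.0 - t) * rng.normal(0, 0.3, 40)   # error shrinks as t -> 1

X_train = np.column_stack([x, t])
length_scales = np.array([0.2, 0.5])   # fixed here; estimated in practice
noise = 1e-6

K = rbf_kernel(X_train, X_train, length_scales) + noise * np.eye(len(y))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Predict the fully converged response: query at t = 1 for new design points.
x_new = np.linspace(0, 1, 5)
X_new = np.column_stack([x_new, np.ones_like(x_new)])
y_pred = rbf_kernel(X_new, X_train, length_scales) @ alpha
print(np.column_stack([x_new, y_pred, np.sin(2 * np.pi * x_new)]))
```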
Abstract:
Localized short-echo-time ¹H-MR spectra of the human brain contain contributions from many low-molecular-weight metabolites as well as baseline contributions from macromolecules. Two approaches to model such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. The potential gain from basis parameterization, i.e., describing basis spectra as sums of parametric lineshapes, was investigated, as were the effects of basis composition and the addition of experimentally measured macromolecular baselines. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small but significantly different tissue content estimates for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
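A minimal sketch of the linear-combination modeling idea, assuming synthetic Gaussian lineshapes as stand-ins for basis metabolite spectra and fitting a measured spectrum by non-negative least squares; the lineshapes, chemical-shift positions, and concentrations are invented, and no macromolecular baseline is included.

```python
import numpy as np
from scipy.optimize import nnls

ppm = np.linspace(0.5, 4.5, 1024)

def lineshape(center, width):
    """Synthetic Gaussian lineshape standing in for a measured/simulated basis spectrum."""
    return np.exp(-0.5 * ((ppm - center) / width) ** 2)

# Hypothetical basis set: three 'metabolites', each reduced here to a single line.
basis = np.column_stack([
    lineshape(2.01, 0.03),   # e.g. an NAA-like singlet
    lineshape(3.03, 0.03),   # e.g. a creatine-like singlet
    lineshape(3.20, 0.03),   # e.g. a choline-like singlet
])

rng = np.random.default_rng(2)
true_conc = np.array([1.0, 0.8, 0.3])
spectrum = basis @ true_conc + rng.normal(0, 0.02, ppm.size)

# Fit the measured spectrum as a non-negative linear combination of the basis spectra.
conc, residual_norm = nnls(basis, spectrum)
print("estimated concentrations:", conc)
```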
Abstract:
PURPOSE Segmentation of the proximal femur in digital antero-posterior (AP) pelvic radiographs is required to create a three-dimensional model of the hip joint for use in planning and treatment. However, manually extracting the femoral contour is tedious and prone to subjective bias, while automatic segmentation must accommodate poor image quality, overlap of anatomical structures, and femur deformity. A new method was developed for femur segmentation in AP pelvic radiographs. METHODS Using manual annotations on 100 AP pelvic radiographs, a statistical shape model (SSM) and a statistical appearance model (SAM) of the femur contour were constructed. The SSM and SAM were used to segment new AP pelvic radiographs with a three-stage approach. At initialization, the mean SSM shape is coarsely registered to the femur in the AP radiograph through a scaled rigid registration. The Mahalanobis distance defined on the SAM is employed as the search criterion for candidate locations of each annotated landmark. Dynamic programming is used to eliminate ambiguities. After all landmarks are assigned, a regularized non-rigid registration method deforms the current mean shape of the SSM to produce a new segmentation of the proximal femur. The second and third stages are executed iteratively until convergence. RESULTS A set of 100 clinical AP pelvic radiographs (not used for training) was evaluated. The mean segmentation error was [Formula: see text], requiring [Formula: see text] s per case when implemented in Matlab. The influence of the initialization on segmentation results was tested by six clinicians, demonstrating no significant difference. CONCLUSIONS A fast, robust and accurate method for femur segmentation in digital AP pelvic radiographs was developed by combining SSM and SAM with dynamic programming. This method can be extended to the segmentation of other bony structures such as the pelvis.
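A minimal sketch of the Mahalanobis-distance search criterion for a single landmark, with simulated intensity profiles standing in for the SAM appearance statistics; the profile model and candidate positions are hypothetical, and the dynamic-programming disambiguation step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical appearance statistics for one landmark: mean intensity profile and
# covariance learned from training radiographs (here just simulated).
profile_len = 15
train_profiles = rng.normal(0, 1, (100, profile_len)) + np.linspace(-1, 1, profile_len)
mean_profile = train_profiles.mean(axis=0)
cov = np.cov(train_profiles, rowvar=False) + 1e-6 * np.eye(profile_len)
cov_inv = np.linalg.inv(cov)

def mahalanobis_sq(profile):
    """Squared Mahalanobis distance of a candidate profile to the learned appearance model."""
    d = profile - mean_profile
    return float(d @ cov_inv @ d)

# Candidate profiles sampled at different positions along the search line
# around the current landmark estimate (scale factor s mimics the position change).
candidates = [rng.normal(0, 1, profile_len) + np.linspace(-1, 1, profile_len) * s
              for s in (0.2, 0.8, 1.0, 1.3)]
costs = [mahalanobis_sq(p) for p in candidates]
best = int(np.argmin(costs))
print("costs:", np.round(costs, 1), "-> selected candidate", best)
```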
Abstract:
Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality for head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC's remote audit using the IMRT H&N phantom. The purpose of this project was to evaluate possible causes of the H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not. Eight H&N IMRT plans with a range of complexity defined by total MU (1460-3466), number of segments (54-225), and modulation complexity scores (MCS) (0.181-0.609) were created in Pinnacle v.8m. These plans were delivered to the RPC's H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, and MCS = 0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This average IMRT plan was also delivered on four matched Varian Clinac machines, and the dose distribution was calculated using a different 6 MV beam model. Radiochromic film and TLD within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated doses were compared to evaluate the dosimetric accuracy. All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria. Increasing the treatment plan complexity by varying the MU, the number of segments, or the MCS resulted in no clear trend toward an increase in dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machine as well as the beam model (a Clinac 6EX 6 MV beam model vs. a Clinac 21EX 6 MV model) also did not show any clear trend toward increased dosimetric error using the same criteria.
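A minimal brute-force sketch of a global gamma-index evaluation on 1-D dose profiles under criteria of the ±7%/4 mm type; real film analysis works on 2-D dose planes with interpolation and normalization choices that are omitted here, and the profiles below are synthetic.

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_tol=0.07, dta_mm=4.0):
    """Brute-force global gamma index for 1-D dose profiles.

    Gamma is reported at each point of (eval_pos, eval_dose), searching over the
    reference profile. dose_tol is a fraction of the reference maximum; dta_mm is
    the distance-to-agreement criterion in millimetres.
    """
    dose_norm = dose_tol * ref_dose.max()
    gammas = np.empty_like(eval_dose)
    for i, (x_e, d_e) in enumerate(zip(eval_pos, eval_dose)):
        dist2 = ((ref_pos - x_e) / dta_mm) ** 2
        dose2 = ((ref_dose - d_e) / dose_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Hypothetical calculated and measured (film) profiles across a field edge.
x = np.linspace(-50, 50, 201)                        # position in mm
calc = 1.0 / (1.0 + np.exp(-(25 - np.abs(x)) / 3))   # idealized calculated profile
meas = 1.02 / (1.0 + np.exp(-(24 - np.abs(x)) / 3))  # slightly shifted/scaled measurement

gamma = gamma_1d(x, calc, x, meas)
print("gamma pass rate (<= 1):", np.mean(gamma <= 1.0))
```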
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from randomly assigning individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates. However, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates how various sampling strategies for a 3-level repeated measures design affect the parameter estimates when the outcome variable of interest follows a Poisson distribution. Results of the study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level 2 and level 3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level 1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large. Reference: Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
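A minimal simulation sketch of a 3-level clustered Poisson outcome with normally distributed random effects at levels 2 and 3 on the log scale; the cluster counts, variance components, and treatment effect are arbitrary, and the PQL/MQL estimators compared in the study are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)

n_level3 = 20   # e.g. sites
n_level2 = 10   # e.g. classrooms per site
n_level1 = 25   # e.g. observations per classroom

beta0, beta1 = 1.0, 0.3            # fixed intercept and treatment effect (log scale)
sd_level3, sd_level2 = 0.3, 0.2    # random-effect SDs at levels 3 and 2

records = []
for k in range(n_level3):
    u3 = rng.normal(0, sd_level3)           # level-3 random effect
    treated = k % 2                         # cluster-level treatment assignment
    for j in range(n_level2):
        u2 = rng.normal(0, sd_level2)       # level-2 random effect
        for i in range(n_level1):
            mu = np.exp(beta0 + beta1 * treated + u3 + u2)
            records.append((k, j, treated, rng.poisson(mu)))

counts = np.array([r[3] for r in records])
print("simulated observations:", counts.size, "mean count:", round(float(counts.mean()), 2))
```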
Abstract:
A geometrical force balance that links stresses to ice-bed coupling along a flow band of an ice sheet was developed in 1988 for longitudinal tension in ice streams and published 4 years later. It remains a work in progress. Now gravitational forces balanced by forces producing tensile, compressive, basal shear, and side shear stresses are all linked to ice-bed coupling by the floating fraction phi of ice that produces the concave surface of ice streams. These lead inexorably to a simple formula showing how phi varies along these flow bands where surface and bed topography are known: phi = h_O/h_I, where h_O is the ice thickness h_I at x = 0, with x horizontal and positive upslope from the grounded ice margin. This captures the basic fact of glaciology: the height of ice depends on how strongly the ice couples to its bed. It shows how far a high convex ice sheet (phi = 0) has gone in collapsing into a low flat ice shelf (phi = 1). Here phi captures ice-bed coupling under an ice stream, and h_O captures ice-bed coupling beyond ice streams.
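A small numeric sketch of the flotation-fraction formula phi = h_O/h_I along a flow band, using a made-up ice-thickness profile.

```python
import numpy as np

# Hypothetical ice-thickness profile h_I(x) along a flow band, with x = 0 at the
# grounded ice margin and x increasing upslope (thickness increasing inland).
x = np.linspace(0.0, 100e3, 11)    # metres upslope
h_I = 400.0 + 0.02 * x             # made-up thickness profile in metres
h_O = h_I[0]                       # thickness at x = 0

phi = h_O / h_I                    # floating fraction along the flow band
for xi, p in zip(x / 1e3, phi):
    # phi near 1 is shelf-like (weak ice-bed coupling); phi near 0 is sheet-like.
    print(f"x = {xi:5.0f} km  phi = {p:.2f}")
```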
Abstract:
The ability of the one-dimensional lake model FLake to represent the mixolimnion temperatures under tropical conditions was tested for three locations in East Africa: Lake Kivu and Lake Tanganyika's northern and southern basins. Meteorological observations from surrounding automatic weather stations were corrected and used to drive FLake, whereas a comprehensive set of water temperature profiles served to evaluate the model at each site. Careful forcing-data correction and model configuration made it possible to reproduce the observed mixed layer seasonality at Lake Kivu and Lake Tanganyika (northern and southern basins), with correct representation of both the mixed layer depth and water temperatures. At Lake Kivu, mixolimnion temperatures predicted by FLake were found to be sensitive both to minimal variations in the external parameters and to small changes in the meteorological driving data, in particular wind velocity. In each case, small modifications may lead to a regime switch, from the correctly represented seasonal mixed layer deepening to either completely mixed or permanently stratified conditions from approximately 10 m downwards. In contrast, model temperatures were found to be robust close to the surface, with acceptable predictions of near-surface water temperatures even when the seasonal mixing regime is not reproduced. FLake can thus be a suitable tool to parameterise tropical lake water surface temperatures within atmospheric prediction models. Finally, FLake was used to attribute the seasonal mixing cycle at Lake Kivu to variations in the near-surface meteorological conditions. It was found that the annual mixing down to 60 m during the main dry season is primarily due to enhanced lake evaporation and secondarily to decreased incoming longwave radiation, both causing a significant heat loss from the lake surface and associated mixolimnion cooling.
Abstract:
A multi-model analysis of Atlantic multidecadal variability is performed with the following aims: to investigate the similarities to observations; to assess the strength and relative importance of the different elements of the mechanism proposed by Delworth et al. (J Clim 6:1993–2011, 1993) (hereafter D93) among coupled general circulation models (CGCMs); and to relate model differences to mean systematic error. The analysis is performed with long control simulations from ten CGCMs, with lengths ranging between 500 and 3600 years. In most models the variations of sea surface temperature (SST) averaged over the North Atlantic show considerable power on multidecadal time scales, but with different periodicity. The SST variations are largest in the mid-latitude region, consistent with the short instrumental record. Despite large differences in model configurations, we find considerable consistency among the models in terms of processes. In eight of the ten models the mid-latitude SST variations are significantly correlated with fluctuations in the Atlantic meridional overturning circulation (AMOC), suggesting a link to northward heat transport changes. Consistent with this link, the three models with the weakest AMOC have the largest cold SST bias in the North Atlantic. There is no linear relationship on decadal timescales between the AMOC and the North Atlantic Oscillation in the models. Analysis of the key elements of the D93 mechanism revealed the following: most models present strong evidence that high-latitude winter mixing precedes AMOC changes, although the regions of wintertime convection differ among models. In most models salinity-induced density anomalies in the convective region tend to lead the AMOC, while temperature-induced density anomalies lead the AMOC in only one model. However, the analysis shows that salinity may play an overly important role in most models because of cold temperature biases in their relevant convective regions. In most models subpolar gyre variations tend to lead AMOC changes, and this relation is strong in more than half of the models.
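A minimal sketch of the kind of lead-lag correlation analysis used to test whether one index (e.g., the AMOC) leads another (e.g., mid-latitude SST); the annual index series below are synthetic, and the sign convention is stated in the docstring.

```python
import numpy as np

rng = np.random.default_rng(5)
n_years = 500

# Synthetic annual indices: 'amoc' leads 'sst' by ~5 years, plus noise.
amoc = np.convolve(rng.normal(0, 1, n_years + 20), np.ones(10) / 10, mode="same")[:n_years]
sst = np.roll(amoc, 5) + rng.normal(0, 0.3, n_years)

def lag_correlation(a, b, max_lag=20):
    """Approximate corr(a(t), b(t + lag)); a positive peak lag means a leads b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag >= 0:
            corrs.append(np.mean(a[: len(a) - lag] * b[lag:]))
        else:
            corrs.append(np.mean(a[-lag:] * b[: len(b) + lag]))
    return lags, np.array(corrs)

lags, corrs = lag_correlation(amoc, sst)
print("AMOC leads SST by", int(lags[np.argmax(corrs)]), "years (peak correlation",
      round(float(corrs.max()), 2), ")")
```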
Abstract:
Antisense oligonucleotides (AONs) hold promise for therapeutic correction of many genetic diseases via exon skipping, and the first AON-based drugs have entered clinical trials for neuromuscular disorders [1, 2]. However, despite advances in AON chemistry and design, systemic use of AONs is limited because of poor tissue uptake, and recent clinical reports confirm that sufficient therapeutic efficacy has not yet been achieved. Here we present a new class of AONs made of tricyclo-DNA (tcDNA), which displays unique pharmacological properties and unprecedented uptake by many tissues after systemic administration. We demonstrate these properties in two mouse models of Duchenne muscular dystrophy (DMD), a neurogenetic disease typically caused by frame-shifting deletions or nonsense mutations in the gene encoding dystrophin [3, 4] and characterized by progressive muscle weakness, cardiomyopathy, respiratory failure [5] and neurocognitive impairment [6]. Although current naked AONs do not enter the heart or cross the blood-brain barrier to any substantial extent, we show that systemic delivery of tcDNA-AONs promotes a high degree of rescue of dystrophin expression in skeletal muscles, the heart and, to a lesser extent, the brain. Our results demonstrate for the first time a physiological improvement of cardio-respiratory functions and a correction of behavioral features in DMD model mice. This makes tcDNA-AON chemistry particularly attractive as a potential future therapy for patients with DMD and other neuromuscular disorders or with other diseases that are eligible for exon-skipping approaches requiring whole-body treatment.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
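A minimal numpy sketch of the proxy-to-exact error-model idea: approximate FPCA by an SVD-based PCA of the discretized curves on the learning set, then fit a linear map from proxy scores to exact scores and reconstruct predicted exact curves; the curves are synthetic, and the paper's choice of machine-learning regressor and its diagnostics are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n_learn, n_new, n_t = 50, 5, 100
t = np.linspace(0, 1, n_t)

# Synthetic 'exact' and 'proxy' response curves per realization:
# the proxy is a biased, broadened version of the exact response.
amp = rng.uniform(0.5, 1.5, n_learn + n_new)
shift = rng.uniform(-0.2, 0.2, n_learn + n_new)
exact = amp[:, None] * np.exp(-(t - 0.5 - shift[:, None]) ** 2 / 0.02)
proxy = 0.8 * amp[:, None] * np.exp(-(t - 0.45 - shift[:, None]) ** 2 / 0.03)

def pca_fit(curves, n_comp):
    """PCA of discretized curves (a simple stand-in for FPCA)."""
    mean = curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    comps = Vt[:n_comp]
    return mean, comps, (curves - mean) @ comps.T

n_comp = 3
mean_p, comps_p, scores_p = pca_fit(proxy[:n_learn], n_comp)
mean_e, comps_e, scores_e = pca_fit(exact[:n_learn], n_comp)

# Error model: linear map from proxy scores to exact scores (least squares).
A = np.column_stack([scores_p, np.ones(n_learn)])
coef, *_ = np.linalg.lstsq(A, scores_e, rcond=None)

# Predict the exact response of new realizations from their proxy response only.
scores_new = (proxy[n_learn:] - mean_p) @ comps_p.T
pred_exact = mean_e + np.column_stack([scores_new, np.ones(n_new)]) @ coef @ comps_e
print("RMSE of predicted exact curves:",
      float(np.sqrt(np.mean((pred_exact - exact[n_learn:]) ** 2))))
```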
Abstract:
Purpose: Proper delineation of ocular anatomy in 3D imaging is a major challenge, particularly when developing treatment plans for ocular diseases. Magnetic Resonance Imaging (MRI) is nowadays used in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information complementary to fundus or ultrasound imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D Active Shape Models (ASM), validate the results, and present a proof of concept for automatically segmenting pathological eyes. Material and Methods: Manual and automatic segmentations were performed on 24 images of healthy children's eyes (3.29±2.15 years). Imaging was performed using a 3T MRI scanner. The ASM comprises the lens, the vitreous humor, the sclera and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens and the optic nerve, then aligning the model and fitting it to the patient. We validated our segmentation method using a leave-one-out cross validation. The segmentation results were evaluated by measuring the overlap using the Dice Similarity Coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90±2.12% for the sclera and the cornea, 94.72±1.89% for the vitreous humor and 85.16±4.91% for the lens. The mean distance error was 0.26±0.09 mm. The entire process took 14 s on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor and the lens using MRI. We additionally present a proof of concept for fully automatically segmenting pathological eyes. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
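A minimal sketch of the Dice Similarity Coefficient used for the evaluation, computed on two hypothetical binary 3-D masks.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Hypothetical automatic vs. manual segmentation masks on a small 3-D grid
# (two slightly offset spheres standing in for, e.g., the vitreous humor).
z, y, x = np.ogrid[:64, :64, :64]
manual = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 20 ** 2
auto = (z - 33) ** 2 + (y - 31) ** 2 + (x - 32) ** 2 <= 19 ** 2

print(f"DSC = {100 * dice_coefficient(auto, manual):.2f}%")
```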