36 results for Error correction model


Relevance: 30.00%

Abstract:

Reconstruction of patient-specific 3D bone surfaces from 2D calibrated fluoroscopic images and a point distribution model is discussed. We present a 2D/3D reconstruction scheme combining statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence-establishing algorithm, and show its application to reconstructing the surface of the proximal femur. The image-to-model correspondence is established using a non-rigid 2D point-matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformation to find a fraction of best-matched 2D point pairs between features detected in the fluoroscopic images and those extracted from the 3D model. The obtained 2D point pairs are then used to set up a set of 3D point pairs, turning the 2D/3D reconstruction problem into a 3D/3D one. We designed and conducted experiments on 11 cadaveric femurs to validate the present reconstruction scheme. An average mean reconstruction error of 1.2 mm was found when two fluoroscopic images were used for each bone; it decreased to 1.0 mm when three fluoroscopic images were used.
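As an illustration of the symmetric injective nearest-neighbor mapping operator described above, here is a minimal Python sketch of the mutual nearest-neighbor pairing step (the thin-plate-spline deformation is omitted, and the function and parameter names are ours, not the authors'):

```python
import numpy as np

def symmetric_nn_pairs(P, Q, keep_frac=0.5):
    """Mutual (symmetric, injective) nearest-neighbor matching between two
    2D point sets P (n, 2) and Q (m, 2). A pair (i, j) is kept only if
    Q[j] is the nearest neighbor of P[i] AND P[i] is the nearest neighbor
    of Q[j]; the best-matched fraction is then retained."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    p_to_q = d.argmin(axis=1)                 # nearest Q index for each P point
    q_to_p = d.argmin(axis=0)                 # nearest P index for each Q point
    pairs = [(i, int(j)) for i, j in enumerate(p_to_q) if q_to_p[j] == i]
    pairs.sort(key=lambda ij: d[ij])          # closest matches first
    k = max(1, int(keep_frac * len(pairs)))
    return pairs[:k]
```

Mutual matching guarantees injectivity: no two points of P can claim the same point of Q.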

Relevance: 30.00%

Abstract:

Awake hamsters equipped with the dorsal window chamber preparation were subjected to hemorrhage of 50% of the estimated blood volume. Initial resuscitation (25% of estimated blood volume) with polymerized bovine hemoglobin (PBH) or 10% hydroxyethyl starch (HES) occurred in concert with an equivolumetric bleeding to simulate the early, prehospital setting (exchange transfusion). Resuscitation (25% of estimated blood volume) without bleeding was performed with PBH, HES, or autologous red blood cells (HES-RBCs). Peripheral microcirculation, tissue oxygenation, and systemic hemodynamic and blood gas parameters were assessed. After exchange transfusion, base deficit was -8.6 +/- 3.7 mmol/L (PBH) and -5.1 +/- 5.3 mmol/L (HES) (not significant). Functional capillary density was 17% +/- 6% of baseline (PBH) and 31% +/- 11% (HES) (P < 0.05), and arteriolar diameter was 73% +/- 3% of baseline (PBH) and 90% +/- 5% (HES) (P < 0.01). At the end, hemoglobin levels were 3.7 +/- 0.3 g/dL with HES, 8.2 +/- 0.6 g/dL with PBH, and 10.4 +/- 0.8 g/dL with HES-RBCs (P < 0.01 HES vs. PBH and HES-RBCs, P < 0.05 PBH vs. HES-RBCs). Base excess was restored to baseline with PBH and HES-RBCs, but not with HES (P < 0.05). Functional capillary density was 46% +/- 5% of baseline (PBH), 62% +/- 20% (HES-RBCs), and 36% +/- 19% (HES) (P < 0.01 HES-RBCs vs. HES). Peripheral oxygen delivery and consumption were highest with HES-RBCs, followed by PBH (P < 0.05 HES-RBCs vs. PBH, P < 0.01 HES-RBCs and PBH vs. HES). In conclusion, PBH led to a correction of base deficit comparable to blood transfusion. However, oxygenation of the peripheral tissue was inferior with PBH. This was attributed to its negative impact on the peripheral microcirculation caused by arteriolar vasoconstriction.

Relevance: 30.00%

Abstract:

BACKGROUND: Assessment of lung volume (FRC) and ventilation inhomogeneities with an ultrasonic flowmeter and multiple-breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors, and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher, and variations of deadspace volumes larger, than previously assumed. Both showed considerable impact on FRC and LCI results, with high variability when obtained with the previously used analysis model. Using the measured temperature, we optimized the model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between both analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method, applicable to the only currently available commercial ultrasonic flowmeter for infants, may improve the stability of the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.

Relevance: 30.00%

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data, both without and with added noise. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
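The least trimmed squares idea used in all three stages can be sketched in Python for a simple 1D line fit (an illustration only, with hypothetical names; the paper applies LTS to surface registration and deformation, not line fitting):

```python
import numpy as np

def lts_line_fit(x, y, outlier_rate=0.2, n_iter=20, seed=0):
    """Least trimmed squares fit of y = a*x + b: repeatedly fit ordinary
    least squares on the h points with the smallest residuals, where h
    excludes a given fraction of the data as presumed outliers."""
    rng = np.random.default_rng(seed)
    n = len(x)
    h = n - int(outlier_rate * n)          # number of inliers to keep
    idx = rng.choice(n, size=h, replace=False)  # random initial subset
    for _ in range(n_iter):
        A = np.column_stack([x[idx], np.ones(h)])
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        resid = np.abs(y - (coef[0] * x + coef[1]))
        idx = np.argsort(resid)[:h]        # re-select the h best-fitting points
    return coef
```

With one gross outlier in ten points and an outlier rate of 0.2, the trimmed fit recovers the underlying line essentially exactly.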

Relevance: 30.00%

Abstract:

This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient, one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is utilized in our system to calibrate both images. Starting from bone contours that are interactively identified in the X-ray images, our system constructs a patient-specific surface model of the proximal femur based on a statistical-model-based 2D/3D reconstruction algorithm. In this paper, we present the design and validation of the system on 25 bones. An average reconstruction error of 0.95 mm was observed.

Relevance: 30.00%

Abstract:

BACKGROUND: A fixed cavovarus foot deformity can be associated with anteromedial ankle arthrosis due to elevated medial joint contact stresses. Supramalleolar valgus osteotomies (SMOT) and lateralizing calcaneal osteotomies (LCOT) are commonly used to treat symptoms by redistributing joint contact forces. In a cavovarus model, the effects of SMOT and LCOT on lateralization of the center of force (COF) and reduction of peak pressure in the ankle joint were compared. METHODS: A previously published cavovarus model with fixed hindfoot varus was simulated in 10 cadaver specimens. Closing-wedge supramalleolar valgus osteotomies 3 cm above the ankle joint level (6 and 11 degrees) and lateral sliding calcaneal osteotomies (5 and 10 mm displacement) were analyzed at 300 N axial static load (half body weight). COF migration and peak pressure decrease in the ankle were recorded using high-resolution TekScan pressure sensors. RESULTS: A significant lateral COF shift was observed for each osteotomy: 2.1 mm for the 6-degree (P = .014) and 2.3 mm for the 11-degree SMOT (P = .010). The 5 mm LCOT led to a lateral shift of 2.0 mm (P = .042) and the 10 mm LCOT to a shift of 3.0 mm (P = .006). When the osteotomies were compared with one another, no significant differences were recorded. No significant anteroposterior COF shift was seen. A significant peak pressure reduction was recorded for each osteotomy: the SMOT led to a reduction of 29% (P = .033) for the 6-degree and 47% (P = .003) for the 11-degree osteotomy, and the LCOT to a reduction of 41% (P = .003) for the 5 mm and 49% (P = .002) for the 10 mm osteotomy. As with the COF lateralization, no significant differences between the osteotomies were seen. CONCLUSION: LCOT and SMOT significantly reduced anteromedial ankle joint contact stresses in this cavovarus model, and the unloading effects of both osteotomies were equivalent. More correction did not lead to significantly more lateralization of the COF or more reduction of peak pressure, although a trend was seen. CLINICAL RELEVANCE: In patients with fixed cavovarus feet, both SMOT and LCOT provided equally good redistribution of elevated ankle joint contact forces. Increasing the amount of displacement did not appear to improve joint pressures proportionally. The site of osteotomy can therefore be chosen on the basis of the surgeon's preference, simplicity, or local factors in case of more complex reconstructions.

Relevance: 30.00%

Abstract:

If change over time is compared across several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. Because the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has recently been provided, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether coinfection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.
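A minimal sketch of the underlying idea, assuming a simple normal measurement-error model: the observed baseline is shrunk toward the group mean by the reliability ratio before conditioning. This is regression calibration, a simplified stand-in for the mixed-model conditioning the article actually develops; the names and variances are illustrative:

```python
import numpy as np

def regression_calibration(obs_baseline, var_true, var_err, mu=None):
    """Estimate the underlying (error-free) baseline from noisy observations:
    E[X | W] = mu + k * (W - mu), with reliability k = var_x / (var_x + var_e).
    Naively adjusting for the observed baseline W instead of this shrunken
    estimate biases group comparisons (regression to the mean)."""
    obs_baseline = np.asarray(obs_baseline, dtype=float)
    if mu is None:
        mu = obs_baseline.mean()
    k = var_true / (var_true + var_err)   # reliability ratio in (0, 1)
    return mu + k * (obs_baseline - mu)
```

With a reliability of 0.75, an observation 5 units above the mean is attributed only 3.75 units of true deviation.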

Relevance: 30.00%

Abstract:

In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time is at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of the adequate approximation of the error due to partial convergence, which is correlated in both design variables and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that reflects accurately the actual structure of the error. Practical solutions are proposed for solving parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
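A toy version, under our own simplifying assumptions, of a covariance defined on the joint space of design variables and computational time, whose amplitude decays as the solver runs longer (mimicking partial-convergence error shrinking with time). The actual nonstationary kernel constructed in the paper is more elaborate; all names here are illustrative:

```python
import numpy as np

def joint_kernel(x1, t1, x2, t2, l=1.0, tau=5.0, sigma2=1.0):
    """Covariance between responses at (design x1, compute time t1) and
    (x2, t2): a stationary squared exponential in the design variable,
    multiplied by amplitude factors exp(-t/tau) that decay with time,
    so longer-run simulations carry smaller residual-error variance."""
    sqexp = np.exp(-((x1 - x2) ** 2) / (2.0 * l ** 2))  # design correlation
    amp = np.exp(-t1 / tau) * np.exp(-t2 / tau)         # time-decaying amplitude
    return sigma2 * sqexp * amp
```

The kernel is nonstationary in time by construction: the prior variance k((x, t), (x, t)) depends on t itself, not on a time difference.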

Relevance: 30.00%

Abstract:

Localized short-echo-time ¹H-MR spectra of human brain contain contributions from many low-molecular-weight metabolites and baseline contributions from macromolecules. Two approaches to model such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., description of basis spectra as sums of parametric lineshapes. Effects of basis composition and the addition of experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
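The linear-combination step can be sketched as an ordinary least-squares fit of metabolite amplitudes to an observed spectrum (prior-knowledge constraints, lineshape parameterization, and the macromolecular baseline are omitted; the names are ours):

```python
import numpy as np

def fit_metabolite_amplitudes(spectrum, basis):
    """Fit an observed spectrum as a linear combination of metabolite basis
    spectra (one column of `basis` per metabolite) by least squares,
    returning one amplitude per metabolite."""
    amps, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return amps
```

In practice the fit is constrained (non-negativity, lineshape priors), but the amplitudes returned here are exact when the spectrum truly is a linear combination of the basis columns.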

Relevance: 30.00%

Abstract:

PURPOSE: Segmentation of the proximal femur in digital antero-posterior (AP) pelvic radiographs is required to create a three-dimensional model of the hip joint for use in planning and treatment. However, manually extracting the femoral contour is tedious and prone to subjective bias, while automatic segmentation must accommodate poor image quality, anatomical structure overlap, and femur deformity. A new method was developed for femur segmentation in AP pelvic radiographs. METHODS: Using manual annotations on 100 AP pelvic radiographs, a statistical shape model (SSM) and a statistical appearance model (SAM) of the femur contour were constructed. The SSM and SAM were used to segment new AP pelvic radiographs with a three-stage approach. At initialization, the mean SSM model is coarsely registered to the femur in the AP radiograph through a scaled rigid registration. The Mahalanobis distance defined on the SAM is employed as the search criterion for each annotated landmark's suggested location. Dynamic programming is used to eliminate ambiguities. After all landmarks are assigned, a regularized non-rigid registration method deforms the current mean shape of the SSM to produce a new segmentation of the proximal femur. The second and third stages are iteratively executed to convergence. RESULTS: A set of 100 clinical AP pelvic radiographs (not used for training) was evaluated. The mean segmentation error was [Formula: see text], requiring [Formula: see text] s per case when implemented in Matlab. The influence of initialization on segmentation results was tested by six clinicians, demonstrating no significant difference. CONCLUSIONS: A fast, robust, and accurate method for femur segmentation in digital AP pelvic radiographs was developed by combining SSM and SAM with dynamic programming. This method can be extended to the segmentation of other bony structures such as the pelvis.
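The per-landmark search criterion, picking the candidate with the smallest Mahalanobis distance to the appearance model, can be sketched as follows (illustrative names; the dynamic-programming disambiguation across landmarks is omitted):

```python
import numpy as np

def best_candidate(candidates, mean, cov):
    """Given candidate appearance profiles (one per row) and a statistical
    appearance model (mean vector, covariance matrix), return the index of
    the candidate with the smallest squared Mahalanobis distance, along
    with all the distances."""
    cov_inv = np.linalg.inv(cov)
    d2 = [float((c - mean) @ cov_inv @ (c - mean)) for c in candidates]
    return int(np.argmin(d2)), d2
```

Unlike plain Euclidean distance, the Mahalanobis distance down-weights directions in which the training appearances vary a lot, so "unusual but plausible" profiles are not penalized.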

Relevance: 30.00%

Abstract:

The ability of the one-dimensional lake model FLake to represent mixolimnion temperatures under tropical conditions was tested for three locations in East Africa: Lake Kivu and Lake Tanganyika's northern and southern basins. Meteorological observations from surrounding automatic weather stations were corrected and used to drive FLake, whereas a comprehensive set of water temperature profiles served to evaluate the model at each site. Careful forcing-data correction and model configuration made it possible to reproduce the observed mixed layer seasonality at Lake Kivu and Lake Tanganyika (northern and southern basins), with correct representation of both the mixed layer depth and water temperatures. At Lake Kivu, mixolimnion temperatures predicted by FLake were found to be sensitive both to minimal variations in the external parameters and to small changes in the meteorological driving data, in particular wind velocity. In each case, small modifications may lead to a regime switch, from the correctly represented seasonal mixed layer deepening to either completely mixed or permanently stratified conditions from approximately 10 m downward. In contrast, model temperatures were found to be robust close to the surface, with acceptable predictions of near-surface water temperatures even when the seasonal mixing regime is not reproduced. FLake can thus be a suitable tool to parameterise tropical lake water surface temperatures within atmospheric prediction models. Finally, FLake was used to attribute the seasonal mixing cycle at Lake Kivu to variations in the near-surface meteorological conditions. It was found that the annual mixing down to 60 m during the main dry season is due primarily to enhanced lake evaporation and secondarily to decreased incoming longwave radiation, both causing a significant heat loss from the lake surface and associated mixolimnion cooling.

Relevance: 30.00%

Abstract:

A multi-model analysis of Atlantic multidecadal variability is performed with the following aims: to investigate the similarities to observations; to assess the strength and relative importance of the different elements of the mechanism proposed by Delworth et al. (J Clim 6:1993–2011, 1993) (hereafter D93) among coupled general circulation models (CGCMs); and to relate model differences to mean systematic error. The analysis is performed with long control simulations from ten CGCMs, with lengths ranging between 500 and 3600 years. In most models the variations of sea surface temperature (SST) averaged over the North Atlantic show considerable power on multidecadal time scales, but with different periodicity. The SST variations are largest in the mid-latitude region, consistent with the short instrumental record. Despite large differences in model configurations, we find a fair degree of consistency among the models in terms of processes. In eight of the ten models the mid-latitude SST variations are significantly correlated with fluctuations in the Atlantic meridional overturning circulation (AMOC), suggesting a link to northward heat transport changes. Consistent with this link, the three models with the weakest AMOC have the largest cold SST bias in the North Atlantic. There is no linear relationship on decadal timescales between the AMOC and the North Atlantic Oscillation in the models. Analysis of the key elements of the D93 mechanism revealed the following: most models present strong evidence that high-latitude winter mixing precedes AMOC changes, although the regions of wintertime convection differ among models. In most models salinity-induced density anomalies in the convective region tend to lead the AMOC, while temperature-induced density anomalies lead the AMOC in only one model. However, analysis shows that salinity may play an overly important role in most models because of cold temperature biases in their relevant convective regions. In most models subpolar gyre variations tend to lead AMOC changes, and this relation is strong in more than half of the models.

Relevance: 30.00%

Abstract:

Antisense oligonucleotides (AONs) hold promise for therapeutic correction of many genetic diseases via exon skipping, and the first AON-based drugs have entered clinical trials for neuromuscular disorders [1,2]. However, despite advances in AON chemistry and design, systemic use of AONs is limited because of poor tissue uptake, and recent clinical reports confirm that sufficient therapeutic efficacy has not yet been achieved. Here we present a new class of AONs made of tricyclo-DNA (tcDNA), which displays unique pharmacological properties and unprecedented uptake by many tissues after systemic administration. We demonstrate these properties in two mouse models of Duchenne muscular dystrophy (DMD), a neurogenetic disease typically caused by frame-shifting deletions or nonsense mutations in the gene encoding dystrophin [3,4] and characterized by progressive muscle weakness, cardiomyopathy, respiratory failure [5] and neurocognitive impairment [6]. Although current naked AONs do not enter the heart or cross the blood-brain barrier to any substantial extent, we show that systemic delivery of tcDNA-AONs promotes a high degree of rescue of dystrophin expression in skeletal muscles, the heart and, to a lesser extent, the brain. Our results demonstrate for the first time a physiological improvement of cardio-respiratory functions and a correction of behavioral features in DMD model mice. This makes tcDNA-AON chemistry particularly attractive as a potential future therapy for patients with DMD and other neuromuscular disorders, or with other diseases that are eligible for exon-skipping approaches requiring whole-body treatment.

Relevance: 30.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
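A compact sketch of the idea, assuming plain PCA in place of FPCA and a linear score-to-score map in place of the machine-learning step (all names are ours): project proxy and exact curves from the learning set onto their leading principal components, learn a map between the score spaces, then predict the exact response of a new realization from its proxy response alone.

```python
import numpy as np

def fit_error_model(proxy_curves, exact_curves, n_comp=2):
    """Learn a predictor of the exact response from the proxy response.
    Rows of proxy_curves / exact_curves are paired curves from the
    learning set (both solvers run on the same realizations)."""
    pm, em = proxy_curves.mean(0), exact_curves.mean(0)
    _, _, Vp = np.linalg.svd(proxy_curves - pm, full_matrices=False)
    _, _, Ve = np.linalg.svd(exact_curves - em, full_matrices=False)
    Vp, Ve = Vp[:n_comp], Ve[:n_comp]             # leading components
    Sp = (proxy_curves - pm) @ Vp.T               # proxy scores
    Se = (exact_curves - em) @ Ve.T               # exact scores
    M, *_ = np.linalg.lstsq(Sp, Se, rcond=None)   # linear score-to-score map
    def predict(new_proxy):
        # proxy curve -> proxy scores -> exact scores -> exact curve
        return em + ((new_proxy - pm) @ Vp.T) @ M @ Ve
    return predict
```

When the exact response really is an affine function of the proxy response within the retained component span, the prediction is exact; in general, the residual in score space gives the diagnostic of model quality mentioned above.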

Relevance: 30.00%

Abstract:

Purpose: Proper delineation of ocular anatomy in 3D imaging is a significant challenge, particularly when developing treatment plans for ocular diseases. Magnetic Resonance Imaging (MRI) is nowadays utilized in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information complementary to Fundus or Ultrasound imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D Active Shape Models (ASM); we validate the results and present a proof of concept for automatically segmenting pathological eyes. Material and Methods: Manual and automatic segmentation were performed on 24 images of healthy children's eyes (3.29 ± 2.15 years). Imaging was performed using a 3T MRI scanner. The ASM comprises the lens, the vitreous humor, the sclera and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens and the optic nerve, then aligning the model and fitting it to the patient. We validated our segmentation method using leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap using the Dice Similarity Coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 s on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor and the lens using MRI. We additionally present a proof of concept for fully automatically segmenting pathological eyes. This tool reduces the time needed for eye shape delineation and can thus help clinicians when planning eye treatment and confirming the extent of the tumor.
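The Dice Similarity Coefficient used for evaluation above is a standard overlap measure between two binary masks, DSC = 2|A ∩ B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary segmentation masks:
    1.0 for identical masks, 0.0 for disjoint ones."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

A DSC around 0.95, as reported for the sclera and cornea, means the automatic and manual masks overlap almost entirely.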