14 results for methods : numerical
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
BACKGROUND: Functional magnetic resonance imaging (fMRI) of fluorine-19 allows for the mapping of oxygen partial pressure within perfluorocarbons in the alveolar space (PAO2). In theory, fMRI-detected PAO2 can be combined with the Fick principle, i.e., a mass balance of oxygen uptake by ventilation and delivery by perfusion, to quantify the ventilation-perfusion ratio (VA/Q) of a lung region: the mixed venous blood and the inspiratory oxygen fraction, which are equal for all lung regions, are measured. In addition, the local expiratory oxygen fraction and the end-capillary oxygen content, both of which may differ between lung regions, are calculated using the fMRI-detected PAO2. We investigated this approach by numerical simulations and applied it to quantify local VA/Q in the perfluorocarbons during partial liquid ventilation. METHODS: Numerical simulations were performed to analyze the sensitivity of the VA/Q calculation and to compare this approach with the one proposed by Rizi et al. (Magn Reson Med 2004;52:65-72). Experimentally, the method was used during partial liquid ventilation in 7 anesthetized pigs. The PAO2 distribution in intraalveolar perflubron was measured by fluorine-19 MRI. Respiratory gas fractions together with arterial and mixed venous blood samples were taken to quantify oxygen partial pressure and content. Using the Fick principle, the local VA/Q was estimated. The impact of gravity (nondependent versus dependent regions), of perflubron dose (10 vs 20 mL/kg body weight), and of inspired oxygen fraction (FIO2; 0.4-1.0) on VA/Q was examined. RESULTS: In numerical simulations, the Fick principle proved appropriate over the VA/Q range from 0.02 to 2.5. VA/Q values were in acceptable agreement with the method published by Rizi et al. In the experimental setting, low mean VA/Q values were found in perflubron (confidence interval [CI] 0.08-0.29 with 20 mL/kg perflubron).
At this dose, VA/Q in the nondependent lung was higher (CI 0.18-0.39) than in the dependent lung regions (CI 0.06-0.16; P = 0.006; Student t test). Differences depending on FIO2 or perflubron dose were, however, small. CONCLUSION: The results show that deriving VA/Q from local PO2 measurements using fMRI in perflubron is feasible. The low detected VA/Q suggests that oxygen transport into the perflubron-filled alveolar space is significantly restrained.
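The Fick-principle mass balance described above can be sketched in a few lines: regional oxygen uptake on the gas side, VA x (FIO2 - FEO2), must equal uptake on the blood side, Q x (Cc'O2 - CvO2), so their ratio gives VA/Q. The function below is an illustrative sketch of that balance only, not the authors' implementation; the variable names and example values are assumptions.

```python
def va_q_ratio(fio2, feo2, cc_o2, cv_o2):
    """Fick-principle VA/Q estimate for a single lung region.

    fio2, feo2   -- inspired and local expired O2 fractions (dimensionless)
    cc_o2, cv_o2 -- end-capillary and mixed-venous O2 content (mL O2 / mL blood)

    Mass balance: VA * (fio2 - feo2) = Q * (cc_o2 - cv_o2)
    """
    gas_side = fio2 - feo2        # O2 removed per mL of alveolar ventilation
    blood_side = cc_o2 - cv_o2    # O2 added per mL of perfusion
    if gas_side <= 0:
        raise ValueError("expired O2 fraction must lie below the inspired fraction")
    return blood_side / gas_side

# illustrative numbers only: FIO2 0.4, regional FEO2 0.3,
# contents 0.18 and 0.14 mL O2 per mL blood
print(va_q_ratio(0.4, 0.3, 0.18, 0.14))  # ~ 0.4
```

The mixed-venous quantities are global inputs, while the expired fraction and end-capillary content vary per region, which is exactly why a regional PAO2 map makes a regional VA/Q map possible.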
Abstract:
The optical quality of the human eye depends mainly on the refractive performance of the cornea. The shape of the cornea is a mechanical balance between intraocular pressure and the intrinsic stiffness of the tissue. Several surgical procedures in ophthalmology alter the biomechanics of the cornea to provoke local or global curvature changes for vision correction. Given the large number of surgical interventions performed every day, the demand for a deeper understanding of corneal biomechanics is rising, so as to improve the safety of procedures and medical devices. The aim of our work is to propose a numerical model of corneal biomechanics based on the stromal microstructure. Our novel anisotropic constitutive material law features a probabilistic weighting approach to model the collagen fiber distribution as observed on the human cornea by X-ray scattering analysis (Aghamohammadzadeh et al., Structure, February 2004). Furthermore, collagen cross-linking was explicitly included in the strain energy function. Results showed that the proposed model successfully reproduces both inflation and extensiometry experimental data (Elsheikh et al., Curr Eye Res, 2007; Elsheikh et al., Exp Eye Res, May 2008). In addition, the mechanical properties calculated for patients of different age groups (Group A: 65-79 years; Group B: 80-95 years) demonstrate increased collagen cross-linking and a decrease in collagen fiber elasticity from younger to older specimens. These findings correspond to what is known about maturing fibrous biological tissue. Since the presented model can handle different loading situations and includes the anisotropic distribution of collagen fibers, it has the potential to simulate clinical procedures involving nonsymmetrical tissue interventions. In the future, such a mechanical model can be used to improve surgical planning and the design of next-generation ophthalmic devices.
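A probabilistic weighting of fiber orientations of the kind mentioned above is often built from a pi-periodic von Mises density. The sketch below is an assumption-laden illustration, not the paper's identified distribution: the two orthogonal preferred families echo the X-ray scattering picture of the central cornea, while the concentration parameter b = 2.0 is an arbitrary placeholder.

```python
import numpy as np

def fiber_density(theta, mu, b):
    """pi-periodic von Mises orientation density, discretely normalized on [0, pi)."""
    rho = np.exp(b * np.cos(2.0 * (theta - mu)))
    dtheta = theta[1] - theta[0]
    return rho / (rho.sum() * dtheta)   # integral over the grid equals 1

theta = np.linspace(0.0, np.pi, 720, endpoint=False)
# two orthogonal fiber families (placeholder concentration b = 2.0);
# the mixture weights each family equally
rho = 0.5 * (fiber_density(theta, 0.0, 2.0) + fiber_density(theta, np.pi / 2, 2.0))
```

In a constitutive law, such a density would weight the contribution of each fiber direction to the anisotropic part of the strain energy.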
Abstract:
Purpose The accuracy, efficiency, and efficacy of four commonly recommended medication safety assessment methodologies were systematically reviewed. Methods Medical literature databases were systematically searched for any comparative study conducted between January 2000 and October 2009 in which at least two of the four methodologies—incident report review, direct observation, chart review, and trigger tool—were compared with one another. Any study that compared two or more methodologies for quantitative accuracy (adequacy of the assessment of medication errors and adverse drug events), efficiency (effort and cost), and efficacy, and that provided numerical data, was included in the analysis. Results Twenty-eight studies were included in this review. Of these, 22 compared two of the methodologies, and 6 compared three methods. Direct observation identified the greatest number of reports of drug-related problems (DRPs), while incident report review identified the fewest. However, incident report review generally showed a higher specificity compared with the other methods and most effectively captured severe DRPs. In contrast, the sensitivity of incident report review was lower than that of the trigger tool. While the trigger tool was the least labor-intensive of the four methodologies, incident report review appeared to be the least expensive, but only when linked with concomitant automated reporting systems and targeted follow-up. Conclusion All four medication safety assessment techniques—incident report review, chart review, direct observation, and trigger tool—have different strengths and weaknesses. Overlap between different methods in identifying DRPs is minimal. While the trigger tool appeared to be the most effective and labor-efficient method, incident report review best identified high-severity DRPs.
Abstract:
AIMS: A registry mandated by the European Society of Cardiology collects data on trends in interventional cardiology within Europe. Special interest focuses on relative increases in, and ratios of, new techniques and their distribution across Europe. We report the data through 2004 and give an overview of the development of coronary interventions since the first data collection in 1992. METHODS AND RESULTS: Questionnaires were distributed yearly to delegates of all national societies of cardiology represented in the European Society of Cardiology. The goal was to collect the case numbers of all local institutions and operators. The overall number of coronary angiographies increased from 684 000 in 1992 to 2 238 000 in 2004 (from 1250 to 3930 per million inhabitants). The respective numbers for percutaneous coronary interventions (PCIs) and coronary stenting procedures increased from 184 000 to 885 000 (from 335 to 1550) and from 3000 to 770 000 (from 5 to 1350). Germany was the most active country, with 712 000 angiographies (8600), 249 000 angioplasties (3000), and 200 000 stenting procedures (2400) in 2004. The indication has shifted towards acute coronary syndromes, as demonstrated by rising rates of interventions for acute myocardial infarction over the last decade. The procedures are more readily performed and perceived as safer, as shown by the increasing rate of "ad hoc" PCIs and the decreasing need for emergency coronary artery bypass grafting (CABG). In 2004, the use of drug-eluting stents continued to rise; however, enormous variability is reported, with the highest rate in Switzerland (70%). If the rate of progression remains constant until 2010, the projected number of coronary angiographies will be over three million, and the number of PCIs about 1.5 million, with a stenting rate of almost 100%. CONCLUSION: Interventional cardiology in Europe is ever expanding.
New coronary revascularization procedures, alternative or complementary to balloon angioplasty, have come and gone. Only stenting has stood the test of time and matured into the default technique. Facilitated access to PCI and more complete and earlier detection of coronary artery disease promise continued growth of the procedure, despite the uncontested success of prevention.
Abstract:
OBJECTIVE: In a prospective study we investigated whether numerical and functional changes of CD4+CD25high regulatory T cells (Treg) were associated with the changes of disease activity observed during pregnancy and post partum in patients with rheumatoid arthritis (RA). METHODS: The frequency of CD4+CD25high T cells was determined by flow cytometry in 12 patients with RA and 14 healthy women during and after pregnancy. Fluorescence-activated cell sorting (FACS) was used to sort CD4+CD25high and CD4+CD25− T cells, which were stimulated with anti-CD3 and anti-CD28 monoclonal antibodies alone or in co-culture to investigate proliferation and cytokine secretion. RESULTS: Frequencies of CD4+CD25high Treg were significantly higher in the third trimester compared with 8 weeks post partum in both patients and controls. Numbers of CD4+CD25high Treg correlated inversely with disease activity in the third trimester and post partum. In co-culture experiments, significantly higher amounts of interleukin (IL)-10 and lower levels of tumour necrosis factor (TNF)α and interferon (IFN)γ were found in supernatants of third-trimester samples compared with postpartum samples. These findings were independent of health or disease status in pregnancy; however, postpartum TNFα and IFNγ levels were higher in patients with disease flares. CONCLUSION: The amelioration of disease activity in the third trimester corresponded to the increased number of Treg, which induced a pronounced anti-inflammatory cytokine milieu. The pregnancy-related quantitative and qualitative changes of Treg suggest a beneficial effect of Treg on disease activity.
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies for which transfer functions cannot be established and calibration must be based on the relation of sedimentary records to instrumental data. High-precision dating of the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained for two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with the following steps: (i) sampling at irregularly spaced intervals for 226Ra, 210Pb and 137Cs, depending on the stratigraphy and microfacies; (ii) systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT); (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964; and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of the CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
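Of the 210Pb models compared above, the constant rate of supply (CRS) model has a particularly compact closed form: the age at depth z is t(z) = (1/λ) ln(A(0)/A(z)), where λ is the 210Pb decay constant and A(z) is the cumulative unsupported 210Pb inventory below z. The sketch below is a minimal illustration of that formula only (layer bookkeeping and names are assumptions); it also assumes the measured profile reaches 210Pb background at its base, which in practice must be checked or the missing inventory estimated.

```python
import numpy as np

LAMBDA_PB210 = np.log(2.0) / 22.3   # 210Pb decay constant in 1/yr (half-life 22.3 yr)

def crs_ages(excess_pb210, dry_mass):
    """CRS ages (years before coring) at the base of each sediment layer.

    excess_pb210 -- unsupported 210Pb activity per unit dry mass (e.g. Bq/kg)
    dry_mass     -- dry mass per unit area of each layer (e.g. kg/m^2)

    Assumes excess 210Pb has reached background below the deepest layer,
    so that the summed inventory equals the total inventory A(0).
    """
    inventory = np.asarray(excess_pb210, float) * np.asarray(dry_mass, float)
    total = inventory.sum()                    # A(0): total unsupported inventory
    below = total - np.cumsum(inventory)       # A(z) remaining below each layer base
    with np.errstate(divide="ignore"):
        return np.log(total / below) / LAMBDA_PB210
```

The CFCS model, by contrast, is just a linear regression of ln(excess 210Pb) against cumulative dry mass, and the CIC model applies the decay law sample by sample; the abstract's point is that these models can disagree strongly at the bottom of the profile, which is why independent chronomarkers such as the 137Cs AD 1964 peak are used as constraints.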
Abstract:
We investigate a class of optimal control problems that exhibit constant, exogenously given delays in the control within the equation of motion of the differential states. We formulate an exemplary optimal control problem with one stock and one control variable and review some analytic properties of an optimal solution. However, analytical considerations are quite limited in the case of delayed optimal control problems. In order to overcome these limits, we reformulate the problem and apply direct numerical methods to calculate approximate solutions that give a better understanding of this class of optimization problems. In particular, we present two possibilities for reformulating the delayed optimal control problem as an instantaneous optimal control problem, and we show how these can be solved numerically with a state-of-the-art direct method by applying Bock's direct multiple shooting algorithm. We further demonstrate the strength of our approach with two economic examples.
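One way to see how a constant control delay becomes an instantaneous problem: on a uniform time grid with step h, a delay τ = d·h in the control is a pure index shift, so a direct method can treat the shifted controls as ordinary optimization variables. The forward-Euler rollout below illustrates only this index-shift idea, not Bock's direct multiple shooting itself; the convention of freezing pre-history controls at u[0] is an assumption.

```python
import numpy as np

def rollout(x0, u, h, d, f):
    """Forward-Euler simulation of x'(t) = f(x(t), u(t - tau)) with tau = d*h.

    x0 -- initial state, u -- control values on the grid, h -- step size,
    d  -- delay in grid steps, f -- right-hand side f(x, u).
    On the uniform grid, step k simply sees the shifted control u[k - d];
    controls before t = 0 are frozen at u[0] (a modelling choice).
    """
    u = np.asarray(u, float)
    x = np.empty(len(u) + 1)
    x[0] = x0
    for k in range(len(u)):
        u_delayed = u[k - d] if k >= d else u[0]
        x[k + 1] = x[k] + h * f(x[k], u_delayed)
    return x
```

A multiple shooting method would additionally split the horizon into shooting intervals with matching conditions, but the delayed control enters each interval through the same index shift.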
Abstract:
Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to be able to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. A FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated; instead, a minimization yields the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves.
Their influence on indentation modulus, hardness, their ratio, as well as the elastic-to-total work ratio, was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the found material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, allowing a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties. Indentations were performed in dry human and ovine bone in axial and transverse directions, and their topography was measured by AFM. Statistical shape modeling of the residual imprint allowed defining a mean shape and describing the variability with 21 principal components related to imprint depth, surface curvature and roughness.
The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations with variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces such as cement lines.
It was possible to obtain a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
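A yield surface of "any convex quadratic shape", as described above, can be written generically as Y(σ) = sqrt(σᵀFσ) + f·σ − 1, where positive semi-definiteness of the matrix F guarantees convexity and the vector f produces tension/compression asymmetry. The sketch below evaluates such a criterion in Voigt notation; the spherical example parameters are placeholders, not bone properties identified in the thesis.

```python
import numpy as np

def quadric_yield(stress, F, f):
    """Generalized quadric yield function Y(s) = sqrt(s^T F s) + f.s - 1.

    stress -- 6-component stress in Voigt notation
    F      -- symmetric 6x6 matrix; positive semi-definite => convex surface
    f      -- 6-vector producing tension/compression asymmetry
    Y < 0 is elastic; Y = 0 lies on the yield surface.
    """
    s = np.asarray(stress, float)
    F = 0.5 * (F + F.T)                          # enforce symmetry
    if np.linalg.eigvalsh(F).min() < -1e-12:     # convexity check
        raise ValueError("F must be positive semi-definite")
    return float(np.sqrt(s @ F @ s) + f @ s - 1.0)

# degenerate check: with f = 0 and F = I / sy^2 the criterion reduces to a
# spherical surface of radius sy in Voigt stress space (sy = 100 is assumed)
sy = 100.0
F = np.eye(6) / sy**2
f = np.zeros(6)
```

During material identification, the entries of F and f become optimization variables, so the minimization can select the best surface among all convex quadrics rather than committing to an elliptical Hill-type shape in advance.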
Abstract:
Phase-sensitive X-ray imaging shows a high sensitivity to electron density variations, making it well suited for imaging soft tissue. However, open questions remain about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both the particle- and wave-like properties of X-rays into account: in a split approach, a Monte Carlo (MC) based sample part is combined with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods, such as grating interferometry and propagation-based imaging, and has been validated through comparisons with experiments for both. The validation shows that the combination of wave optics and MC has been implemented successfully and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled sufficiently accurately.
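The wave-optics half of such a split scheme typically amounts to multiplying the wavefront's angular spectrum by a free-space transfer function. Below is a minimal paraxial (Fresnel) propagator along those lines; the grid spacing, wavelength, and distance are placeholder values, and the actual kernel used in the framework may differ.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pixel, z):
    """Free-space propagation of a complex wavefront (angular-spectrum method).

    field      -- 2-D complex array sampled with spacing `pixel` (m)
    wavelength -- X-ray wavelength (m)
    z          -- propagation distance (m)
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel)            # spatial frequencies
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    # paraxial (Fresnel) transfer function; |H| = 1, so energy is conserved
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

In a split MC/wave-optics pipeline, the MC part would supply the exit wavefront (or its statistics) behind the sample, and a propagator of this kind carries it to the detector or through the grating planes.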
Abstract:
Numerical simulations of the magnetic properties of extended three-dimensional networks containing M(II) ions with an S = 5/2 ground-state spin have been carried out within the framework of the isotropic Heisenberg model. Analytical expressions fitting the numerical simulations for the primitive cubic, diamond, and (10-3) cubic networks have been derived. With these empirical formulas in hand, the interaction between the magnetic ions can now be extracted from the experimental data for these networks. In the case of the primitive cubic network, these expressions are compared directly with those from the high-temperature expansions of the partition function. A fit of the experimental data for three complexes, namely [N(CH3)4][Mn(N3)] 1, [Mn(CN4)]n 2, and [FeII(bipy)3][MnII2(ox)3] 3, has been carried out. The best fits were obtained with the following parameters: J = −3.5 cm−1, g = 2.01 (1); J = −8.3 cm−1, g = 1.95 (2); and J = −2.0 cm−1, g = 1.95 (3).
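As a rough illustration of how (g, J) pairs like those above are extracted from susceptibility data, the sketch below fits a mean-field Curie–Weiss law. This is only a textbook stand-in, not the empirical high-temperature expressions actually derived in the paper: the shortcut C ≈ g²S(S+1)/8 (emu K mol⁻¹) and θ = 2zJS(S+1)/(3k_B) are standard mean-field relations, with z = 6 assumed for the primitive cubic network.

```python
import numpy as np

KB_CM = 0.695   # Boltzmann constant in cm^-1 K^-1
S = 2.5         # S = 5/2 ground-state spin
Z = 6           # nearest neighbours in the primitive cubic network

def fit_curie_weiss(T, chi):
    """Extract (g, J) from chi(T) via a linear fit of 1/chi against T.

    Mean-field stand-in:  chi = C / (T - theta),
      C     = g^2 S(S+1) / 8           (emu K mol^-1, cgs shortcut)
      theta = 2 z J S(S+1) / (3 k_B)   (J in cm^-1)
    """
    slope, intercept = np.polyfit(T, 1.0 / np.asarray(chi, float), 1)
    C = 1.0 / slope
    theta = -intercept * C
    g = np.sqrt(8.0 * C / (S * (S + 1.0)))
    J = 3.0 * KB_CM * theta / (2.0 * Z * S * (S + 1.0))
    return g, J
```

The empirical formulas fitted to the full Heisenberg simulations play the same role as the Curie–Weiss expression here, but remain accurate well below the high-temperature regime where the mean-field picture breaks down.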
Abstract:
In this paper we develop an adaptive procedure for the numerical solution of general semilinear elliptic problems with possible singular perturbations. Our approach combines prediction-type adaptive Newton methods with a linear adaptive finite element discretization (based on a robust a posteriori error analysis), thereby leading to a fully adaptive Newton–Galerkin scheme. Numerical experiments underline the robustness and reliability of the proposed approach for various examples.
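A damped Newton iteration of the kind combined with adaptive Galerkin discretization above can be illustrated on a toy problem: the finite-difference discretization of the semilinear equation −u″ + u³ = 1 on (0, 1) with homogeneous Dirichlet conditions. The problem, mesh, and residual-halving damping below are assumptions for illustration only; the paper's prediction-type strategy instead couples the Newton damping to the a posteriori error analysis of the finite element discretization.

```python
import numpy as np

def newton_semilinear(n=63, tol=1e-9, max_iter=50):
    """Damped Newton for the FD discretization of -u'' + u^3 = 1 on (0, 1),
    u(0) = u(1) = 0 (toy semilinear elliptic problem, uniform mesh)."""
    h = 1.0 / (n + 1)
    # tridiagonal second-difference operator for -u''
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.ones(n)
    residual = lambda v: A @ v + v**3 - f
    u = np.zeros(n)
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        J = A + np.diag(3.0 * u**2)              # Jacobian of the semilinear term
        du = np.linalg.solve(J, -F)
        # simple damping: halve the step until the residual norm decreases
        t = 1.0
        while np.linalg.norm(residual(u + t * du)) >= np.linalg.norm(F) and t > 1e-8:
            t *= 0.5
        u = u + t * du
    return u
```

In the fully adaptive scheme, each such Newton step is interleaved with mesh refinement, so that the linearization error and the discretization error are reduced in a balanced way.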
Abstract:
This article centers on the computational performance of the continuous and discontinuous Galerkin time stepping schemes for general first-order initial value problems in R^n with continuous nonlinearities. We briefly review a recent existence result for discrete solutions from [6] and provide a numerical comparison of the two time discretization methods.