988 results for RECONSTRUCTIONS


Relevance:

10.00%

Publisher:

Abstract:

In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and the AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed during the past years. These methods are point-, line-, plane- or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet applied routinely to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose from transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (within 3%/3 mm locally), which is inside the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect certain set-up errors and interfraction geometry modifications. 
We also discussed the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the built-in onboard tomotherapy MVCT detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as the EPIDs also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2. 
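The iterative scheme described above can be pictured in a few lines. This is a hypothetical, much-simplified sketch: the thesis uses a convolution/superposition dose engine and measured transmitted signals, whereas here `forward_model` is a stand-in linear operator and the multiplicative ratio update is only one common choice for such iterations.

```python
import numpy as np

# Hypothetical sketch of an iterative dose reconstruction: scale a
# dose estimate until the simulated transmitted signal matches the
# measurement. forward_model is a placeholder for the real
# convolution/superposition transmitted-signal computation.

def forward_model(dose):
    return 0.8 * dose  # placeholder linear attenuation model

def reconstruct(measured, n_iter=20):
    dose = np.ones_like(measured)          # initial guess
    for _ in range(n_iter):
        simulated = forward_model(dose)
        dose *= measured / simulated       # multiplicative ratio update
    return dose

true_dose = np.array([1.0, 2.0, 3.0])
measured = forward_model(true_dose)        # noise-free "measurement"
reconstructed = reconstruct(measured)      # recovers true_dose
```

With a linear forward model the ratio update converges essentially immediately; in practice, convergence would be judged against a clinical tolerance such as the 3%/3 mm criterion mentioned above.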
- In vivo dosimetry is a technique used to verify the dose delivered to the patient by taking a measurement, generally during the first treatment session. It is the only technique for controlling the delivered dose that is based on a measurement performed during the patient's irradiation. The dose to the patient is calculated with 3D algorithms using volumetric images of the patient. Because of the high accuracy required in radiotherapy treatments, national and international bodies such as the ICRU and the AAPM recommend the use of in vivo dosimetry, which has become mandatory in some countries, including France. Various in vivo dosimetry methods exist. They can be classified into point, planar and three-dimensional dosimetry. 3D dosimetry is the one that provides the most information about the delivered dose. However, to our knowledge, it is generally not applied in clinical routine. The goal of this research was to determine whether it is possible to reconstruct the delivered 3D dose from measurements of the transmitted dose, in the context of narrow beams. An iterative dose reconstruction method was described and implemented. The iterative algorithm contains a simple dose calculation algorithm based on the convolution/superposition principle. The transmitted dose was measured using a row of aligned ionization chambers, in order to simulate the linear nature of the tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the delivered dose with good accuracy (within 3%/3 mm locally). Moreover, we demonstrated that this method can detect certain patient set-up errors, as well as geometric changes that may occur between treatment sessions. 
We discussed the limits of this method for detecting certain irradiation errors. Subsequently, stability tests of the MVCT detector built into the tomotherapy unit were carried out, in order to determine whether it can be used for in vivo dosimetry. This detector showed short- and long-term stability comparable to that of other detectors, such as the EPIDs also used for imaging and in vivo dosimetry. Finally, an adaptation of the dose reconstruction method was proposed so that it can be implemented on a tomotherapy installation. This manuscript consists of two articles and a script providing additional information on this work. In the latter, the first chapter reviews the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in 3D reconstruction of the delivered dose. Chapter 2 describes the 3D dose calculation algorithm implemented for this work, together with the main physical parameters needed for the dose calculation. The characteristics of the tomotherapy MVCT detector used for transit measurements are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the reconstruction method and presents tests of the methodology on phantoms irradiated with narrow beams. Chapter 5 contains a second article, entitled 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and the perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy concept is described in appendix 1. 
Finally, 3D conformal radiotherapy and intensity-modulated radiotherapy are presented in appendix 2.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Medialization of the cup with a corresponding increase in femoral offset has been proposed in THA to increase the abductor moment arms. Insofar as there are potential disadvantages to cup medialization, it is important to ascertain whether its purported biomechanical benefits are large enough to warrant the downsides; to date, studies regarding this question have disagreed. QUESTIONS/PURPOSES: The purpose of this study was to quantify the effect of cup medialization with a compensatory increase in femoral offset, compared with anatomic reconstruction, for patients undergoing THA. We tested the hypothesis that there is a (linear) correlation between preoperative anatomic parameters and the muscle moment arm increase caused by cup medialization. METHODS: Fifteen patients undergoing THA were selected, covering a typical range of preoperative femoral offsets. For each patient, a finite element model was built based on a preoperative CT scan. The model included the pelvis, femur, and gluteus minimus, medius, and maximus. Two reconstructions were compared: (1) anatomic position of the acetabular center of rotation, and (2) cup medialization compensated by an increase in the femoral offset. Passive abduction-adduction and flexion-extension were simulated in the range of normal gait. Muscle moment arms were evaluated and correlated to the preoperative femoral offset, acetabular offset, height of the greater trochanter (relative to the femoral center of rotation), and femoral antetorsion angle. RESULTS: The increase in muscle moment arms caused by cup medialization varied among patients. Muscle moment arms increased by 10% to 85% of the amount of cup medialization for abduction-adduction, and from -35% (a decrease) to 50% for flexion-extension. The change in moment arm was inversely correlated (R² = 0.588, p = 0.001) with femoral antetorsion (anteversion), such that patients with less femoral antetorsion gained more in terms of hip muscle moments. 
No linear correlation was observed between changes in moment arm and the other preoperative parameters in this series. CONCLUSIONS: The benefit of cup medialization is variable and depends on the individual anatomy. CLINICAL RELEVANCE: Cup medialization with a compensatory increase of the femoral offset may be particularly effective in patients with less femoral antetorsion. However, cup medialization must be balanced against its tradeoffs, including the additional loss of medial acetabular bone stock, possible proprioceptive implications of the nonanatomic center of rotation, and perhaps altered joint reaction forces. Clinical studies should better determine the relevance of small changes in moment arms for function and joint reaction forces.
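The inverse linear correlation reported above (R² = 0.588) can be illustrated with a short computation. The values below are synthetic placeholders, not the study's data; the snippet only shows how a slope and coefficient of determination are obtained from paired measurements.

```python
import numpy as np

# Synthetic illustration of an inverse linear relationship between
# femoral antetorsion (degrees) and moment-arm gain (% of cup
# medialization); invented numbers, not the study's data.
antetorsion = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
moment_gain = 60.0 - 1.8 * antetorsion     # exactly linear by construction

slope, intercept = np.polyfit(antetorsion, moment_gain, 1)
pred = slope * antetorsion + intercept
ss_res = np.sum((moment_gain - pred) ** 2)
ss_tot = np.sum((moment_gain - moment_gain.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot          # 1.0 here; 0.588 in the study
```

The negative slope encodes the study's finding that patients with less antetorsion gain more moment arm; real data would of course scatter around the fitted line.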

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION: Lumbar spinal stenosis (LSS) treatment is based primarily on clinical criteria, provided that imaging confirms radiological stenosis. The radiological measurement most commonly used is the dural sac cross-sectional area (DSCA). It has recently been shown that grading stenosis based on the morphology of the dural sac, as seen on axial T2 MRI images, better reflects the severity of stenosis than DSCA and is of prognostic value. This prospective radiological study investigates the variability of surface measurements and morphological grading of stenosis for varying degrees of angulation of the T2 axial images relative to the disc space, as observed in clinical practice. MATERIALS AND METHODS: Lumbar spine TSE T2 three-dimensional (3D) MRI sequences were obtained from 32 consecutive patients presenting with either suspected spinal stenosis or low back pain. Axial reconstructions using the OsiriX software at 0°, 10°, 20° and 30° relative to the disc space orientation were obtained for a total of 97 levels. For each level, the DSCA was digitally measured and stenosis was graded according to the 4-point (A-D) morphological grading by two observers. RESULTS: Good interobserver agreement was found in the grade evaluation of stenosis (k = 0.71). DSCA varied significantly as the slice orientation increased from 0° to +10°, +20° and +30° at each level examined (P < 0.0001) (-15 to +32% at 10°, -24 to +143% at 20° and -29 to +231% at 30° of slice orientation). The stenosis definition based on the surface measurements changed in 39 of the 97 levels studied, whereas the morphology grade was modified in only two levels (P < 0.01). DISCUSSION: The need to obtain continuous slices using the classical 2D MRI acquisition technique often entails at least a 10° slice inclination relative to one of the studied discs. Even at this low angulation, we found a statistically significant difference between changes in surface measurements and changes in morphological grading. 
In clinical practice, given the above findings, it might therefore not be necessary to align the axial cuts to each individual disc level, which could be more time-consuming than obtaining a single series of axial cuts perpendicular to the middle of the lumbar spine or to the most stenotic level. In conclusion, morphological grading seems to offer an alternative means of assessing the severity of spinal stenosis that is little affected by the image acquisition technique.
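The interobserver agreement quoted above (k = 0.71) is a Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch, with invented gradings for two observers on the A-D scale:

```python
# Minimal Cohen's kappa for two observers grading stenosis A-D;
# the gradings are made up for illustration (the study found k = 0.71).
obs1 = ["A", "B", "B", "C", "D", "A", "C", "B"]
obs2 = ["A", "B", "C", "C", "D", "A", "C", "B"]
labels = ["A", "B", "C", "D"]

n = len(obs1)
p_obs = sum(a == b for a, b in zip(obs1, obs2)) / n       # observed agreement
p_exp = sum((obs1.count(l) / n) * (obs2.count(l) / n)     # chance agreement
            for l in labels)
kappa = (p_obs - p_exp) / (1 - p_exp)
```

With these toy gradings the observers agree on 7 of 8 levels, giving a kappa around 0.83; values above roughly 0.6 are conventionally read as good agreement.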

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Despite the improvements achieved in antibiotic therapy, severe aortic infection resulting in mycotic aneurysms is still a highly lethal disease, and surgical management remains a challenging task. PATIENTS AND METHODS: A total of 43 patients with severe aortic infections were analyzed and separated into four groups. (1) Infections of the aortic root: ventriculo-aortic disconnection due to deep aortic infection (6 patients). Two patients were operated on using homo-composite grafts. Of the 6 patients, one died early and two died late during a mean follow-up of 6 years. The two patients with homografts are still alive. (2) Infections of the ascending aorta and the aortic arch: in situ repair for mycotic aneurysmal lesions of the ascending aorta was performed in 6 patients, using synthetic graft material in 4/6, biological material in 1/6 and direct suture in 1/6. Two patients had to be reoperated; one of them died early. There was no recurrent infection during a mean follow-up of 6 years. (3) Infections of the descending thoracic and thoraco-abdominal aorta: in situ repair for mycotic aneurysmal lesions of the descending and thoraco-abdominal aorta was performed in 12 patients, using homografts in five. Two patients died early and two other patients died late during a mean follow-up of 6 years. (4) Infections of the infrarenal abdominal aorta: in this series of 19 patients with mycotic infrarenal aortic aneurysms, in situ reconstruction was performed in 12 (5/12 with homografts) and extra-anatomic reconstruction (axillo-femoral bypass) was performed in 7. Hospital mortality was 5/19 patients, and another 5/19 patients died during a mean follow-up of 6 years. One of the early deaths was due to aortic stump rupture. Two patients with axillo-femoral reconstructions were later converted to descending-thoracic-aortic-bifemoral bypasses. Five thromboses of axillo-femoral bypasses were observed in three of the seven patients with extra-anatomic repairs. 
RESULTS: Infections of the aortic root, the ascending aorta and the aortic arch are approached with total cardio-pulmonary bypass, using cardioplegic myocardial protection and deep hypothermia with circulatory arrest if necessary. Proximal unloading and distal support using partial cardiopulmonary bypass is preferred for repair of infected descending and thoracoabdominal aortic lesions, whereas no such adjuncts are required for repair of infected infrarenal aortic lesions. CONCLUSIONS: The anatomical location of the aortic infection and the availability of homologous graft material are the main factors determining the surgical strategy.

Relevance:

10.00%

Publisher:

Abstract:

Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) in the lower frequency of terminal dispersal events inferred by DEC. 
Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.

Relevance:

10.00%

Publisher:

Abstract:

For a wide range of environmental, hydrological and engineering applications, there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain, despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. 
To this end, the general motivation of my thesis is to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure, rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. 
This is crucial since, in reality, these parameters are known to be frequency dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency dependent over the GPR frequency range, owing to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
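The deconvolution-based source-wavelet estimation discussed above can be illustrated with a frequency-domain water-level deconvolution, a standard building block of such schemes. All signals below are synthetic stand-ins, and this single-step estimate omits the iteration with the waveform inversion described in the text.

```python
import numpy as np

# Sketch of water-level deconvolution: given a trace d = g * w
# (Green's function g convolved with wavelet w), recover w in the
# frequency domain, with a small epsilon stabilizing the division.

def estimate_wavelet(trace, green, eps=1e-6):
    D = np.fft.rfft(trace)
    G = np.fft.rfft(green)
    W = D * np.conj(G) / (np.abs(G) ** 2 + eps)  # stabilized spectral division
    return np.fft.irfft(W, n=len(trace))

n = 64
true_wavelet = np.zeros(n)
true_wavelet[:4] = [1.0, -0.5, 0.25, -0.1]       # short synthetic wavelet
green = np.zeros(n)
green[0], green[10] = 1.0, 0.6                    # direct arrival + later event
trace = np.fft.irfft(np.fft.rfft(green) * np.fft.rfft(true_wavelet), n=n)
wavelet_est = estimate_wavelet(trace, green)      # ~ true_wavelet
```

In a real scheme the Green's function is itself only approximately known from the current model, which is why the wavelet estimate and the inversion must be alternated iteratively.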

Relevance:

10.00%

Publisher:

Abstract:

Mathematical methods combined with measurements of single-cell dynamics provide a means to reconstruct intracellular processes that are only partly or indirectly accessible experimentally. To obtain reliable reconstructions, the pooling of measurements from several cells of a clonal population is mandatory. However, cell-to-cell variability originating from diverse sources poses computational challenges for such process reconstruction. We introduce a scalable Bayesian inference framework that properly accounts for population heterogeneity. The method allows inference of inaccessible molecular states and kinetic parameters; computation of Bayes factors for model selection; and dissection of intrinsic, extrinsic and technical noise. We show how additional single-cell readouts such as morphological features can be included in the analysis. We use the method to reconstruct the expression dynamics of a gene under an inducible promoter in yeast from time-lapse microscopy data.
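As a toy illustration of one ingredient named above, a Bayes factor compares the marginal likelihoods of two models; with two fully specified models there is nothing to integrate and the ratio reduces to a likelihood ratio. The Poisson models and observed count below are invented for illustration and are unrelated to the authors' yeast data.

```python
import math

# Toy Bayes factor between two fully specified Poisson models for a
# single observed count. BF > 1 favours model 1; invented numbers.
def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

k_obs = 7
bayes_factor = poisson_pmf(k_obs, 8.0) / poisson_pmf(k_obs, 3.0)
# here bayes_factor > 1, so the rate-8 model is favoured for k_obs = 7
```

Real model selection over stochastic kinetic models requires integrating over unknown parameters and latent states, which is what the scalable inference framework above is designed to do.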

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to compare the diagnostic efficiency of plain film and spiral CT examinations with 3D reconstructions in 42 tibial plateau fractures, and to assess the accuracy of these two techniques for the pre-operative surgical plan in 22 cases. Forty-two tibial plateau fractures were examined with plain film (anteroposterior, lateral, two obliques) and spiral CT with surface-shaded-display 3D reconstructions. The Swiss AO-ASIF classification system of bone fractures from Muller was used. In 22 cases, the surgical plans and the sequence of reconstruction of the fragments were prospectively determined with both techniques successively, and then correlated with the surgical reports and post-operative plain film. The fractures were underestimated with plain film in 18 of 42 cases (43%). Thanks to the spiral CT 3D reconstructions and the precise pre-operative information they provide, the surgical plans based on plain film were modified and adjusted in 13 of 22 cases (59%). Spiral CT 3D reconstructions give a better and more accurate demonstration of the tibial plateau fracture and allow a more precise pre-operative surgical plan.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Vascular reconstructions are becoming challenging due to the comorbidity of the aging population and since the introduction of minimally invasive approaches. Many sutureless anastomosis devices have been designed to facilitate the cardiovascular surgeon's work, and the vascular join (VJ) is one of these. We designed an animal study to assess its reliability and long-term efficacy. METHODS: The VJ allows the construction of end-to-end and end-to-side anastomoses. It consists of two metallic crowns fixed to the extremities of the two conduits so that the vessel edges are joined layer by layer. There is no foreign material exposed to blood. In adult sheep, both carotid arteries were prepared and severed. End-to-end anastomoses were performed using the VJ device on one side and the classical running suture technique on the other side. Animals were followed up with Duplex scan every 3 months and sacrificed after 12 months. Histopathological analysis was carried out. RESULTS: In 20 animals, all 22 sutureless anastomoses were successfully completed in less than 2 min, versus 6 ± 3 min for the running suture. Duplex showed occlusion of three control and one sutureless anastomoses. Two control and one sutureless anastomoses had stenosis >50%. Histology showed a very thin layer of myointimal hyperplasia (50 ± 10 μm) in the sutureless group versus 300 ± 27 μm in the control group. No significant inflammatory reaction was detected. CONCLUSIONS: The VJ provides edge-to-edge vascular repair, which can be considered the most physiological way to restore vessel continuity. For the first time, in healthy sheep, an anastomotic device provided better results than the suture technique.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: The reconstruction of the right ventricular outflow tract (RVOT) with valved conduits remains a challenge. The reoperation rate at 5 years can be as high as 25% and depends on age, type of conduit, conduit diameter and principal heart malformation. The aim of this study is to provide a bench model with computational fluid dynamics to analyse the haemodynamics of the RVOT, the pulmonary artery, its bifurcation, and the left and right pulmonary arteries, which in the future may serve as a tool for analysis and prediction of outcome following RVOT reconstruction. METHODS: Pressure, flow and diameter at the RVOT, the pulmonary artery, the bifurcation of the pulmonary artery, and the left and right pulmonary arteries were measured in five normal pigs with a mean weight of 24.6 ± 0.89 kg. The data obtained were used for a 3D computational fluid-dynamics simulation of flow conditions, focusing on the pressure, flow and shear stress profile from the pulmonary trunk to the level of the left and right pulmonary arteries. RESULTS: Three inlet steady flow profiles were obtained at 0.2, 0.29 and 0.36 m/s, corresponding to flow rates of 1.5, 2.0 and 2.5 l/min at the RVOT. The flow velocity profile was constant from the RVOT down to the bifurcation and decreased at the left and right pulmonary arteries. For all three inlet velocity profiles, low shear stress and low-velocity areas were detected along the left wall of the pulmonary artery, at the pulmonary artery bifurcation and at the ostia of both pulmonary arteries. CONCLUSIONS: This real-time computational fluid model provides a realistic picture of the fluid dynamics in the pulmonary tract area. These low shear stress areas correspond to a turbulent flow profile, which is a predictive factor for the development of vessel wall arteriosclerosis. We believe that this bench model may be a useful tool for further evaluation of RVOT pathology following surgical reconstructions.
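As a back-of-envelope consistency check on the inlet conditions quoted above (not a computation from the paper), the continuity relation Q = vA links flow rate and mean velocity; an assumed circular cross-section of roughly 12.6 mm diameter reproduces 0.2 m/s at 1.5 l/min.

```python
import math

# Mean velocity implied by a volumetric flow Q through a circular
# cross-section of diameter d, via v = Q / (pi d^2 / 4). The 12.6 mm
# diameter is an assumption chosen to match the reported values.

def inlet_velocity(q_l_per_min, diameter_m):
    q = q_l_per_min / 1000.0 / 60.0        # l/min -> m^3/s
    area = math.pi * diameter_m ** 2 / 4.0
    return q / area

v = inlet_velocity(1.5, 0.0126)            # ~0.2 m/s, matching the text
```

The same relation shows why the velocity drops in the branch pulmonary arteries: the combined cross-sectional area downstream of the bifurcation exceeds that of the trunk.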

Relevance:

10.00%

Publisher:

Abstract:

A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is, consequently, imperative to precisely estimate the value of LVEF, which can be done with myocardial perfusion scintigraphy. The present study therefore aimed to establish and compare the estimation performance of the quantitative parameters of the reconstruction methods filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). METHODS: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. RESULTS: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is observed with the OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations of 2 iterations × 10 subsets and 2 iterations × 12 subsets. CONCLUSION: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders 5, 10, or 15 for FBP, as giving the best estimations of the left ventricular volumes and ejection fraction in myocardial perfusion scintigraphy.
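The Butterworth window used in the FBP reconstructions above has, in one common convention, the magnitude response |H(f)| = 1/sqrt(1 + (f/f_c)^(2n)), with cutoff f_c in cycles per pixel and order n; at the cutoff the response drops to 1/sqrt(2).

```python
import numpy as np

# Butterworth low-pass window as commonly applied in FBP; cutoff in
# cycles per pixel (one common convention, assumed here).
def butterworth(freqs, cutoff, order):
    return 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))

freqs = np.linspace(0.0, 0.5, 6)           # up to the Nyquist frequency
h = butterworth(freqs, cutoff=0.5, order=5)
# h[0] is 1.0 (no attenuation at DC); h[-1] is 1/sqrt(2) at the cutoff
```

Raising the cutoff passes more high frequencies (sharper but noisier images), which is consistent with the volume and LVEF trends reported above; the order controls how abruptly the roll-off occurs.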

Relevance:

10.00%

Publisher:

Abstract:

Genome-scale metabolic network reconstructions are now routinely used in the study of metabolic pathways, their evolution and design. The development of such reconstructions involves the integration of information on reactions and metabolites from the scientific literature as well as public databases and existing genome-scale metabolic models. The reconciliation of discrepancies between data from these sources generally requires significant manual curation, which constitutes a major obstacle in efforts to develop and apply genome-scale metabolic network reconstructions. In this work, we discuss some of the major difficulties encountered in the mapping and reconciliation of metabolic resources and review three recent initiatives that aim to accelerate this process, namely BKM-react, MetRxn and MNXref (presented in this article). Each of these resources provides a pre-compiled reconciliation of many of the most commonly used metabolic resources. By reducing the time required for manual curation of metabolite and reaction discrepancies, these resources aim to accelerate the development and application of high-quality genome-scale metabolic network reconstructions and models.
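The kind of identifier reconciliation performed by resources such as MNXref can be pictured as a cross-reference table from database-specific metabolite IDs to a common namespace, so that the same metabolite is recognized across sources. All identifiers below are invented placeholders, not actual MNXref, KEGG or ChEBI entries.

```python
# Sketch of namespace reconciliation: map (database, id) pairs to a
# shared identifier. All IDs here are invented placeholders.
xref = {
    ("kegg", "C_glucose"): "COMMON:glucose",
    ("chebi", "CH_glucose"): "COMMON:glucose",
    ("kegg", "C_atp"): "COMMON:atp",
}

def reconcile(source, source_id):
    return xref.get((source, source_id))   # None if unmapped

# two database-specific entries resolve to the same common metabolite
same = reconcile("kegg", "C_glucose") == reconcile("chebi", "CH_glucose")
```

The hard part, which such pre-compiled resources take on, is curating this table at scale despite naming conflicts, protonation-state differences, and outright errors in the source databases.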

Strepsirhines comprise 10 living or recently extinct families, ≥50% of extant primate families. Their phylogenetic relationships have been intensively studied, but common topologies have only recently emerged; e.g. all recent reconstructions link the Lepilemuridae and Cheirogaleidae. The position of the indriids, however, remains uncertain, and molecular studies have placed them as the sister to every clade except Daubentonia, the preferred sister group of morphologists. The node subtending Afro-Asian lorisids has been similarly elusive. We probed these phylogenetic inconsistencies using a test data set of 20 strepsirhine taxa and 2 outgroups, represented by 3,543 mtDNA base pairs and 43 selected morphological characters. We subjected the data to maximum parsimony, maximum likelihood and Bayesian inference analyses, and reconstructed topology and node ages jointly from the molecular data using relaxed molecular clock analyses. Our permutations yielded compatible but not identical evolutionary histories, and currently popular techniques seem unable to deal adequately with morphological data. We investigated the influence of morphological characters on tree topologies and examined the effect of taxon sampling in two experiments: (1) we removed the molecular data only for 5 endangered Malagasy taxa to simulate 'extinction leaving a fossil record'; (2) we removed both the sequence and morphological data for these taxa. Topologies were affected more by the inclusion of morphological data only, indicating that palaeontological studies that insert a partial morphological data set into a combined data matrix of extant species should be interpreted with caution. The gap of approximately 10 million years between the daubentoniid divergence and those of the other Malagasy families deserves more study. The apparently contemporaneous divergence of African and non-daubentoniid Malagasy families 40-30 million years ago may be related to regional plume-induced uplift followed by a global period of cooling and drying. © 2013 S. Karger AG, Basel.

Calceology is the study of recovered archaeological leather footwear and comprises the conservation, documentation and identification of leather shoe components and shoe styles. Recovered leather shoes are complex artefacts that present technical, stylistic and personal information about the culture and people that used them. The current method in calceological research for typology and chronology is comparison with parallel examples, though its use is hampered by the absence of basic definitions and of a taxonomic hierarchy. The research findings on the primary cutting patterns, used for making all leather footwear, are integrated with the named style method and the Goubitz notation, resulting in a combined methodology as a basis for typological organisation of recovered footwear and a chronology for named shoe styles. The history of calceological research is examined in chapter two and is accompanied by a review of methodological problems as seen in the literature. Through the examination of the various documentation and research techniques used during the history of calceological studies, the reasons why a standard typology and methodology failed to develop are investigated. The variety and continual invention of a new research method for each publication of a recovered leather assemblage hindered the development of a single standard methodology. Chapter three covers the initial research with the database, through which the primary cutting patterns were identified and the named styles were defined. The chronological span of each named style was established through iterative cross-site seriation and named style comparisons. The consistent use of the primary cutting patterns is explained technically by constraints imposed by the leather and the forms needed to cover the foot. Basic parts of the shoe patterns and the foot are defined, and terms are provided for identifying the key points for pattern making.
Chapter four presents the seventeen primary cutting patterns and their sub-types; these are divided into three main groups: six integral soled patterns, four hybrid soled patterns and seven separately soled patterns. Descriptions of the letter codes, pattern layout, construction principle, closing seam placement and list of sub-types are included for each primary cutting pattern. The named shoe styles and their relative chronology are presented in chapter five. Nomenclature for the named styles is based on the find location of the first published example plus the primary cutting pattern code letter. The named styles are presented in chronological order from Prehistory through to the late 16th century. Short descriptions of the named styles are given and illustrated with examples of recovered archaeological leather footwear, reconstructions of archaeological shoes and iconographical sources. Chapter six presents the documentation of recovered archaeological leather using the Goubitz notation, an inventory and description of style elements and fastening methods used for defining named shoe styles, technical information about sole/upper constructions, and the consequences of the use of lasts and sewing forms for style identification and fastening placement in relation to the instep point. The chapter concludes with further technical information for researchers about shoemaking, pattern making and reconstructive archaeology. The conclusion restates the original research question of why a group of primary cutting patterns appears to have been used consistently throughout the European archaeological record. The quantitative and qualitative results from the database show the use of these patterns, but it is the properties of the leather that impose the use of the primary cutting patterns.
The combined methodology of primary pattern identification, named style and artefact registration provides a framework for calceological research.

Phylogenetic reconstructions are a major component of many studies in evolutionary biology, but their accuracy can be reduced under certain conditions. Recent studies showed that the convergent evolution of some phenotypes resulted from recurrent amino acid substitutions in genes belonging to distant lineages. It has been suggested that these convergent substitutions could bias phylogenetic reconstruction toward grouping convergent phenotypes together, but such an effect has never been appropriately tested. We used computer simulations to determine the effect of convergent substitutions on the accuracy of phylogenetic inference. We show that, in some realistic conditions, even a relatively small proportion of convergent codons can strongly bias phylogenetic reconstruction, especially when amino acid sequences are used as characters. The strength of this bias does not depend on the reconstruction method but varies as a function of how much divergence had occurred among the lineages prior to any episodes of convergent substitutions. While the occurrence of this bias is difficult to predict, the risk of spurious groupings is strongly decreased by considering only 3rd codon positions, which are less subject to selection, as long as saturation problems are not present. Therefore, we recommend that, whenever possible, topologies obtained with amino acid sequences and 3rd codon positions be compared to identify potential phylogenetic biases and avoid evolutionarily misleading conclusions.
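Restricting an alignment to 3rd codon positions, as recommended above, is a simple slicing operation. The sketch below illustrates it on two toy sequences; the taxon names and sequences are invented for demonstration, not data from the study.

```python
# Sketch: restrict a coding-sequence alignment to 3rd codon positions,
# which are less subject to selection and hence to convergence bias.
def third_positions(seq):
    """Return the 3rd position of each codon in a coding sequence."""
    if len(seq) % 3:
        raise ValueError("sequence length must be a multiple of 3")
    return seq[2::3]  # every 3rd base, starting from index 2

# Hypothetical 3-codon alignment for two toy taxa.
aln = {
    "taxonA": "ATGGCTAAA",
    "taxonB": "ATGGCAAAG",
}
aln3 = {name: third_positions(s) for name, s in aln.items()}
print(aln3)  # each sequence reduced to its 3rd codon positions
```

Comparing the topology inferred from `aln3` with the one inferred from the full alignment (or from translated amino acids) is the consistency check the abstract proposes for flagging convergence-driven groupings.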