985 results for Conjugate gradient methods
Abstract:
Aim: Specialized mutualistic clades may revert and thus increase their autonomy and generalist characteristics. However, our understanding of the drivers that trigger reductions in mutualistic traits and of the consequences for the tolerance of these species to various environmental conditions remains limited. This study investigates the relationship between the environmental niche and the degree of myrmecophily (i.e. the ability to interact with ants) among members of the Lycaenidae. Location: The western Swiss Alps. Methods: We measured the tolerance of Lycaenidae species to low temperatures by comparing observations from a random stratified field sampling with climatic maps. We then compared the species-specific degree of myrmecophily with the species range limits at colder temperatures while controlling for phylogenetic dependence. We further evaluated whether the community-averaged degree of myrmecophily increases with temperature, as would be expected in the case of environmental filters acting on myrmecophilous species. Results: Twenty-nine Lycaenidae species were found during sampling. Ancestral state reconstruction indicated that the 24 species of Polyommatinae displayed both strong myrmecophily and secondary loss of mutualism; these species were used in the subsequent statistical analyses. Species with a higher degree of ant interaction were, on average, more likely to inhabit warmer sites. Species inhabiting the coldest environments displayed little or no interaction with ants. Main conclusions: Colder climates at high elevations filter out species with a high degree of myrmecophily and may have been the direct evolutionary force that promoted the loss of mutualism. A larger taxon sampling across the Holarctic may help to distinguish between the ecological and evolutionary effects of climate.
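As a rough sketch of the community-level test this abstract describes (community-averaged myrmecophily versus site temperature), with entirely invented species scores, temperatures, and occurrences:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_species = 40, 24
myrmecophily = rng.uniform(0, 3, size=n_species)   # per-species score (assumed 0-3 scale)
temperature = np.linspace(2, 16, n_sites)          # site mean temperature, degrees C

# Presence/absence: high-score species made slightly more likely at warm sites.
prob = 0.3 + 0.4 * np.outer((temperature - 2.0) / 14.0, myrmecophily / 3.0)
presence = rng.random((n_sites, n_species)) < prob

# Community-averaged myrmecophily per site, then its relation to temperature.
community_mean = (presence * myrmecophily).sum(axis=1) / presence.sum(axis=1)
r = np.corrcoef(temperature, community_mean)[0, 1]
print(f"Pearson r between temperature and community myrmecophily: {r:.2f}")
```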
Abstract:
Objectives: Existing VADs are single-ventricle pumps requiring anticoagulation. We developed a bi-ventricular external assist device that partially reproduces the physiological muscle function of the heart. This artificial muscle could wrap the heart and improve its contractile force. Methods: The device has a carbon fiber skeleton fitting a 30-40 kg patient's heart, to which a Nitinol-based artificial muscle is connected. The artificial muscle wraps both ventricles. The Nitinol fibers are woven on a Kevlar mesh surrounding each ventricle. The fibers are electrically driven with a dedicated control unit developed for this purpose. We assessed the hemodynamic performance of this device using a previously described dedicated bench test. Ejected volume and pressure gradient were measured with afterload ranging from 10 to 50 mmHg. Results: With an afterload of 50 mmHg the system has an ejection fraction of 4% on the right side and 5% on the left side. The system is able to generate a systolic ejection of 2.2 mL on the right side and 3.25 mL on the left side. With an afterload of 25 mmHg the results are reduced by about 20%. The activation frequency can reach 80/minute, resulting in a total volume displacement of 176 mL/minute on the right side and 260 mL/minute on the left side. Conclusions: These preliminary studies confirmed the possibility of improving the ejection fraction of a failing heart using an artificial muscle for external cardiac compression, avoiding anticoagulation therapy. This device could be helpful in weaning from cardiopulmonary bypass and/or for short-term cardio-circulatory support in the pediatric population with cardiac failure.
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
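As a rough illustration of the kind of kernel method the paper applies to spatial mapping, here is a minimal NumPy sketch of kernel ridge regression on invented monitoring data; the coordinates, measurements, length scale, and regularization are placeholders, not values from the study.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Gaussian RBF kernel between coordinate sets A (n, 2) and B (m, 2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))               # station coordinates (invented)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # surrogate pollution values

lam = 1e-2                                          # ridge regularization strength
K = rbf_kernel(X, X, length_scale=1.5)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual weights

# Predict a pollution map on a regular grid from the fitted dual weights.
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), -1).reshape(-1, 2)
y_map = rbf_kernel(grid, X, 1.5) @ alpha
```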
Abstract:
The TGF-β homolog Decapentaplegic (Dpp) acts as a secreted morphogen in the Drosophila wing disc, and spreads through the target tissue in order to form a long range concentration gradient. Despite extensive studies, the mechanism by which the Dpp gradient is formed remains controversial. Two opposing mechanisms have been proposed: receptor-mediated transcytosis (RMT) and restricted extracellular diffusion (RED). In these scenarios the receptor for Dpp plays different roles. In the RMT model it is essential for endocytosis, re-secretion, and thus transport of Dpp, whereas in the RED model it merely modulates Dpp distribution by binding it at the cell surface for internalization and subsequent degradation. Here we analyzed the effect of receptor mutant clones on the Dpp profile in quantitative mathematical models representing transport by either RMT or RED. We then, using novel genetic tools, experimentally monitored the actual Dpp gradient in wing discs containing receptor gain-of-function and loss-of-function clones. Gain-of-function clones reveal that Dpp binds in vivo strongly to the type I receptor Thick veins, but not to the type II receptor Punt. Importantly, results with the loss-of-function clones then refute the RMT model for Dpp gradient formation, while supporting the RED model in which the majority of Dpp is not bound to Thick veins. Together our results show that receptor-mediated transcytosis cannot account for Dpp gradient formation, and support restricted extracellular diffusion as the main mechanism for Dpp dispersal. The properties of this mechanism, in which only a minority of Dpp is receptor-bound, may facilitate long-range distribution.
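The restricted extracellular diffusion picture has a simple quantitative core: a morphogen that diffuses and is degraded at a constant rate settles into an exponential steady-state profile. A minimal sketch, with illustrative (not measured) parameter values:

```python
import numpy as np

D = 0.1     # effective diffusion coefficient, um^2/s (assumed value)
k = 2e-4    # degradation rate, 1/s (assumed value)
C0 = 1.0    # concentration at the source boundary, arbitrary units

# Steady state of dC/dt = D * C'' - k * C with a source at x = 0:
# C(x) = C0 * exp(-x / lam), with decay length lam = sqrt(D / k).
lam = np.sqrt(D / k)
x = np.linspace(0, 100, 201)        # distance from the Dpp source, um
C = C0 * np.exp(-x / lam)

print(f"decay length = {lam:.1f} um")  # ~22 um with these assumed values
```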
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes (ICM) algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
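For concreteness, here is a hedged toy version of the ICM baseline the paper compares against: each pixel's label greedily minimizes a Gaussian data term plus a Potts smoothness penalty over its 4-neighbors. This is a simplified 2-D illustration (synchronous updates, periodic boundaries), not the paper's implementation.

```python
import numpy as np

def icm(img, means, sigmas, beta=1.0, iters=10):
    """Label a 2-D image with K Gaussian classes by iterated conditional modes."""
    K = len(means)
    # Data term: negative Gaussian log-likelihood of each class at each pixel.
    data = np.stack([0.5 * ((img - m) / s) ** 2 + np.log(s)
                     for m, s in zip(means, sigmas)], axis=-1)
    labels = data.argmin(-1)                  # maximum-likelihood initialization
    for _ in range(iters):
        # Potts prior: count 4-neighbors disagreeing with each candidate label
        # (np.roll gives periodic boundaries; real code would handle edges).
        smooth = np.zeros_like(data)
        for k in range(K):
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                smooth[..., k] += np.roll(labels, shift, axis=axis) != k
        labels = (data + beta * smooth).argmin(-1)  # greedy simultaneous update
    return labels

# Toy usage: recover a two-class pattern from a noisy image.
rng = np.random.default_rng(1)
truth = (np.arange(64)[:, None] + np.arange(64)[None, :]) > 64
img = truth.astype(float) + 0.4 * rng.normal(size=(64, 64))
seg = icm(img, means=[0.0, 1.0], sigmas=[0.4, 0.4], beta=1.5)
```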
Abstract:
PURPOSE: To suppress the noise, by sacrificing some of the signal homogeneity for numerical stability, in uniform T1-weighted (T1w) images obtained with the magnetization-prepared 2 rapid gradient echoes sequence (MP2RAGE), and to compare the clinical utility of these robust T1w images against the uniform T1w images. MATERIALS AND METHODS: 8 healthy subjects (29.0±4.1 years; 6 male), who provided written consent, underwent two scan sessions within a 24-hour period on a 7T head-only scanner. The uniform and robust T1w image volumes were calculated inline on the scanner. Two experienced radiologists qualitatively rated the images for general image quality, 7T-specific artefacts, and local structure definition. Voxel-based and volume-based morphometry packages were used to compare the segmentation quality between the uniform and robust images. Statistical differences were evaluated using a positive-sided Wilcoxon rank test. RESULTS: The robust image suppresses background noise inside and outside the skull. The inhomogeneity introduced was rated as mild. The robust image was ranked significantly higher than the uniform image by both observers (observer 1/2, p-value = 0.0006/0.0004). In particular, improved delineation of the pituitary gland and cerebellar lobes was observed in the robust versus the uniform T1w image. The reproducibility of the segmentation results between repeat scans improved (p-value = 0.0004) from an average volumetric difference across structures of ≈6.6% for the uniform image to ≈2.4% for the robust T1w image. CONCLUSIONS: The robust T1w image enables MP2RAGE to produce clinically familiar T1w images, in addition to T1 maps, which can be readily used in standard morphometry packages.
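The trade-off the abstract describes (background noise suppression at the cost of some signal homogeneity) is usually implemented by adding a regularization term to the standard MP2RAGE ratio image. A sketch of that commonly cited combination follows; the beta value is arbitrary and the scanner's exact inline computation is an assumption here.

```python
import numpy as np

def mp2rage_combine(inv1, inv2, beta=0.0):
    """Combine the two complex MP2RAGE inversion images into a T1w ratio image.

    beta = 0 gives the standard 'uniform' image; beta > 0 suppresses the noisy
    background at the cost of mild signal inhomogeneity (the 'robust' image).
    """
    num = np.real(np.conj(inv1) * inv2) - beta
    den = np.abs(inv1) ** 2 + np.abs(inv2) ** 2 + 2.0 * beta
    return num / den          # bounded in [-0.5, 0.5] for any beta >= 0

# Toy usage on random complex volumes standing in for the two GRE readouts.
rng = np.random.default_rng(2)
shape = (8, 8, 8)
inv1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
inv2 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
uniform = mp2rage_combine(inv1, inv2)
robust = mp2rage_combine(inv1, inv2, beta=5.0)  # beta chosen arbitrarily
```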
Abstract:
PURPOSE: To assess the value of adding axial traction to direct MR arthrography of the shoulder, in terms of subacromial and glenohumeral joint space widths, and coverage of the superior labrum-biceps tendon complex and articular cartilage by contrast material. MATERIALS AND METHODS: Twenty-one patients investigated by direct MR arthrography of the shoulder were prospectively included. Studies were performed with a 3 Tesla (T) unit and included a three-dimensional isotropic fat-suppressed T1-weighted gradient-recalled echo sequence, without and with axial traction (4 kg). Two radiologists independently measured the width of the subacromial, superior, and inferior glenohumeral joint spaces. They subsequently rated the amount of contrast material around the superior labrum-biceps tendon complex and between glenohumeral cartilage surfaces, using a three-point scale: 0 = none, 1 = partial, 2 = full. RESULTS: Under traction, the subacromial (Δ = 2.0 mm, P = 0.0003), superior (Δ = 0.7 mm, P = 0.0001) and inferior (Δ = 1.4 mm, P = 0.0006) glenohumeral joint space widths were all significantly increased, and both readers noted significantly more contrast material around the superior labrum-biceps tendon complex (P = 0.014), and between the superior (P = 0.001) and inferior (P = 0.025) glenohumeral cartilage surfaces. CONCLUSION: Direct MR arthrography of the shoulder under axial traction increases subacromial and glenohumeral joint space widths, and prompts better coverage of the superior labrum-biceps tendon complex and articular cartilage by contrast material.
Abstract:
The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period 1920s-1980s, are used to contrast the results of several methods. These are the present value method, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to different scenarios of closed and open economies. The results prove that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restricted scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
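The net price and user cost methods mentioned here differ in how much of the annual resource rent counts as depreciation. A worked toy comparison, using El Serafy's standard user cost formula; all numbers are invented, and whether the paper uses exactly this formulation is an assumption:

```python
# Net price method: the whole resource rent counts as depreciation.
# User cost (El Serafy): only the capital element, R / (1 + r)**(n + 1),
# where R is the annual rent, r the discount rate, and n the remaining
# years of extraction at the current rate.
price, unit_cost = 30.0, 18.0   # USD per barrel (invented)
extraction = 100.0              # barrels extracted this year (invented)
reserves = 2000.0               # remaining reserves (invented)
r = 0.05                        # discount rate (invented)

rent = (price - unit_cost) * extraction   # R = 1200.0
n = reserves / extraction                 # 20 years of extraction left

net_price_depreciation = rent                        # 1200.0
user_cost_depreciation = rent / (1 + r) ** (n + 1)   # ~430.7

print(net_price_depreciation, round(user_cost_depreciation, 1))
```

With constant rents the two measures stay proportional, which is why the paper notes that the textbook biases only hold in that restricted scenario.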
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
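As background for the stepdown machinery discussed, here is a minimal sketch of Holm's (1979) procedure, the classical baseline: order the p-values, compare the i-th smallest against alpha / (k - i), and stop at the first failure, which illustrates the critical-value monotonicity the paper builds on. The example p-values are invented.

```python
import numpy as np

def holm_stepdown(pvals, alpha=0.05):
    """Boolean rejection mask controlling the familywise error rate at alpha."""
    p = np.asarray(pvals, dtype=float)
    k = len(p)
    order = np.argsort(p)                 # step down from the smallest p-value
    reject = np.zeros(k, dtype=bool)
    for i, idx in enumerate(order):
        if p[idx] <= alpha / (k - i):     # Holm's critical value at step i
            reject[idx] = True
        else:
            break                         # monotone: stop at the first failure
    return reject

print(holm_stepdown([0.001, 0.2, 0.008, 0.04]))  # [ True False  True False]
```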
Abstract:
Bacteria are generally difficult specimens to prepare for conventional resin-section electron microscopy, and mycobacteria, with their thick and complex cell envelope layers, are especially prone to artefacts. Here we made a systematic comparison of different methods for preparing Mycobacterium smegmatis for thin-section electron microscopy analysis. These methods were: (1) conventional preparation with fixatives and epoxy resins at ambient temperature; (2) Tokuyasu cryo-sectioning of chemically fixed bacteria; (3) rapid freezing followed by freeze substitution and embedding in epoxy resin at room temperature, or (4) combined with Lowicryl HM20 embedding and ultraviolet (UV) polymerization at low temperature; and (5) CEMOVIS, or cryo-electron microscopy of vitreous sections. The best preservation of bacteria was obtained with the CEMOVIS method, as expected, especially with respect to the preservation of the cell envelope and lipid bodies. By comparison with CEMOVIS, both the conventional and Tokuyasu methods produced different, undesirable artefacts. The two different types of freeze-substitution protocols showed variable preservation of the cell envelope but gave acceptable preservation of the cytoplasm (though not of lipid bodies) and bacterial DNA. In conclusion, although CEMOVIS must be considered the 'gold standard' among sectioning methods for electron microscopy, because it avoids solvents and stains, the use of optimally prepared freeze substitution also offers some advantages for ultrastructural analysis of bacteria.
Abstract:
The question of where retroviral DNA becomes integrated in chromosomes is important for (i) understanding the mechanisms of viral growth, (ii) devising new anti-retroviral therapies, (iii) understanding how genomes evolve, and (iv) developing safer methods for gene therapy. With the completion of genome sequences for many organisms, it has become possible to study integration targeting by cloning and sequencing large numbers of host-virus DNA junctions, then mapping the host DNA segments back onto the genomic sequence. This allows statistical analysis of the distribution of integration sites relative to the myriad types of genomic features that are also being mapped onto the sequence scaffold. Here we present methods for recovering and analyzing integration site sequences.
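A toy version of the downstream statistics described, comparing the fraction of mapped integration sites that land inside annotated features against matched random positions; the genome, annotation, and site coordinates are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
genome_len = 1_000_000
# Toy annotation: twenty 5 kb 'genes' evenly spaced along the genome.
genes = [(s, s + 5_000) for s in range(0, genome_len, 50_000)]

def frac_in_features(positions, features):
    """Fraction of positions falling inside any (start, end) feature."""
    return float(np.mean([any(s <= p < e for s, e in features)
                          for p in positions]))

sites = rng.integers(0, genome_len, size=500)     # mapped host-virus junctions
controls = rng.integers(0, genome_len, size=500)  # matched random positions

print(frac_in_features(sites, genes), frac_in_features(controls, genes))
```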
Abstract:
BACKGROUND: Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim's epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim's DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim's fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim's fraction, and then digest the residual victim's DNA with a nuclease. METHODS: The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. RESULTS: For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. CONCLUSIONS: In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods.
Abstract:
BACKGROUND: Clinical studies suggest that transmyocardial laser revascularization may improve regional blood flow of the subendocardial layer. The vascular growth pattern of laser channels was analyzed. METHODS: Twenty pigs were randomized to undergo ligation of left marginal arteries (n = 5), to undergo transmyocardial laser revascularization of the left lateral wall (n = 5), to undergo both procedures (n = 5), or to a control group (n = 5). All the animals were sacrificed after 1 month. Computed morphometric analysis of vascular density of the involved area was expressed as the number of vascular structures per square millimeter (±1 standard deviation). RESULTS: The vascular density of the scar tissue of the laser channel was significantly increased in comparison with myocardial infarction alone: 49.6±12.8/mm² versus 25.5±8.6/mm² (p < 0.0001). The vascular densities of subendocardial and subepicardial channel areas were similar: 52.9±16.8/mm² versus 46.3±13.6/mm² (p = 0.41). The area immediately adjacent to the channels showed a vascular density similar to that of normal tissue: 6.02±1.7/mm² versus 5.2±1.9/mm² (p = 0.08). In the infarction + transmyocardial laser revascularization group, the channels were indistinguishable from infarction scar. CONCLUSIONS: Scars of transmyocardial laser revascularization channels exhibit an increased vascular density in comparison with scar tissue of myocardial infarction, which does not extend into their immediate vicinity. There was no vascular density gradient along the longitudinal axis of the channels.