32 results for "the SIMPLE algorithm"


Relevance:

90.00%

Publisher:

Abstract:

The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in extracting the knowledge embedded in these data. However, the special characteristics of such data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables set in a temporal context, high dimensionality, and large variability in cluster shapes.

The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization that assist knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Map (SOM) Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, integrate space and time seamlessly and simultaneously in order to extract knowledge embedded in a temporal context.

The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability in cluster shapes, variances, densities, and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a priori setting of the number of clusters; (2) it executes several self-organizing processes in parallel, so that with large datasets the processes can be distributed, reducing the computational cost; and (3) only three parameters are needed to set up the algorithm.

In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that supports visual exploratory analysis of large, high-dimensional datasets. The algorithm builds a hierarchical structure of SOM component planes, arranging the projections of similar variables in the same branches of the tree. Hence, similarities in variables' behavior (e.g., local correlations, maxima and minima, and outliers) can be easily detected. Both the FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving very efficient for the exploratory analysis and clustering of spatio-temporal datasets.

In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised algorithms, the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although these algorithms have been used in several areas, to my knowledge there is no prior work applying and comparing their performance on spatio-temporal geospatial data as presented in this thesis.

I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. The approach uses time windows to capture temporal similarities and variations by means of the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find similar agroecozones through time; in the second, to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production.

Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical-user-interface tool that integrates the FGHSON algorithm with Google Earth to show zones with similar agroecological characteristics.
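The FGHSON and the Tree-structured SOM Component Planes both build on the self-organizing map. As background only, here is a minimal sketch of a plain SOM training loop; the grid size, decay schedules, and sample data below are illustrative assumptions, not the thesis's FGHSON algorithm:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D self-organizing map: each epoch, every sample pulls its
    best-matching unit (and its neighbours, Gaussian-weighted) towards itself."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # grid coordinates of every unit, used by the neighbourhood function
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # best-matching unit = unit whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# two well-separated toy clusters; after training, units specialize to one or the other
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)), rng.normal(0.8, 0.05, (50, 2))])
som = train_som(data)
```

Hierarchical variants such as the GHSOM and the thesis's FGHSON grow further maps where a single unit represents its data too coarsely; the update rule above is the shared core.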

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from the literature and expert opinion. The validity of the diagnostic strategies was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56, 1.44, and 1.17 times higher harm, respectively. Findings were corroborated by the sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not be as good in other disease settings.
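The serial-versus-parallel trade-off described in the conclusion can be illustrated with a small Monte Carlo simulation. The prevalence and per-test accuracies below are made-up illustrative numbers, not the study's data, and the two-test setup is a deliberate simplification:

```python
import random

def simulate(strategy, n=100_000, prevalence=0.23,
             test_sens=0.90, test_spec=0.80, seed=42):
    """Monte Carlo estimate of a diagnostic strategy's overall sensitivity
    and specificity, simulating two imperfect tests per patient."""
    rng = random.Random(seed)
    tp = fn = tn = fp = 0
    for _ in range(n):
        sick = rng.random() < prevalence
        # each test is positive with probability sens if sick, (1 - spec) if healthy
        t1 = rng.random() < (test_sens if sick else 1 - test_spec)
        t2 = rng.random() < (test_sens if sick else 1 - test_spec)
        if strategy == "parallel":       # parallel workup: positive if ANY test fires
            positive = t1 or t2
        else:                            # serial workup: ALL tests must fire
            positive = t1 and t2
        if sick and positive:   tp += 1
        elif sick:              fn += 1
        elif positive:          fp += 1
        else:                   tn += 1
    return tp / (tp + fn), tn / (tn + fp)

sens_par, spec_par = simulate("parallel")
sens_ser, spec_ser = simulate("serial")
# the parallel workup trades specificity for sensitivity; serial does the opposite
```

With these toy numbers the parallel rule approaches a sensitivity of 1 − (1 − sens)² at the cost of specificity spec², mirroring the qualitative pattern the study reports.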

Relevance:

90.00%

Publisher:

Abstract:

The instrumental-variable method (referred to as Mendelian randomization when the instrument is a genetic variant) was initially developed to infer a causal effect of a risk factor on some outcome of interest in a linear model. Adapting this method to nonlinear models, however, is known to be problematic. In this paper, we consider the simple case in which the genetic instrument, the risk factor, and the outcome are all binary. We compare via simulations the usual two-stage estimate of a causal odds ratio and its adjusted version with a recently proposed estimate in the context of a clinical trial with noncompliance. In contrast to the former two, we confirm that the latter is (under some conditions) a valid estimate of a causal odds ratio defined in the subpopulation of compliers, and we propose its use in the context of Mendelian randomization. By analogy with a clinical trial with noncompliance, compliers are those individuals for whom the presence or absence of the risk factor X is determined by the presence or absence of the genetic variant Z (i.e., for whom we would observe X = Z whatever the alleles randomly received at conception). We also recall and illustrate the huge variability of instrumental-variable estimates when the instrument is weak (i.e., with a low percentage of compliers, as is typically the case with genetic instruments, for which this proportion is frequently smaller than 10%): the interquartile range of our simulated estimates was up to 18 times higher than with a conventional (e.g., intention-to-treat) approach. We thus conclude that the need to find stronger instruments is probably as important as the need for a methodology that consistently estimates a causal odds ratio.
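A toy simulation makes the complier logic concrete. All numbers below are hypothetical, and for simplicity the sketch uses a Wald-type risk-difference contrast rather than the odds-ratio estimators compared in the paper; it shows why rescaling the instrument-outcome association by the instrument-exposure association removes confounding that inflates the naive contrast:

```python
import random

def simulate_mr(n=200_000, p_complier=0.3, causal_rd=0.10, seed=7):
    """Toy Mendelian-randomization simulation with binary instrument Z,
    exposure X and outcome Y.  Among compliers, X = Z; for everyone else,
    X is driven by an unobserved confounder U that also raises risk of Y."""
    rng = random.Random(seed)
    z_list, x_list, y_list = [], [], []
    for _ in range(n):
        z = rng.random() < 0.5                     # "genotype", randomized
        u = rng.random() < 0.5                     # unobserved confounder
        complier = rng.random() < p_complier
        x = z if complier else u                   # non-compliers: X follows U
        p_y = 0.10 + 0.20 * u + causal_rd * x      # X truly adds causal_rd risk
        y = rng.random() < p_y
        z_list.append(z); x_list.append(x); y_list.append(y)

    def mean_given(ys, cond):
        sel = [y for y, c in zip(ys, cond) if c]
        return sum(sel) / len(sel)

    # Wald-type IV estimate: effect of Z on Y, rescaled by effect of Z on X
    num = mean_given(y_list, z_list) - mean_given(y_list, [not z for z in z_list])
    den = mean_given(x_list, z_list) - mean_given(x_list, [not z for z in z_list])
    naive = mean_given(y_list, x_list) - mean_given(y_list, [not x for x in x_list])
    return num / den, naive

iv_estimate, naive_estimate = simulate_mr()
# the naive exposure-outcome contrast is inflated by U; the IV estimate
# recovers the causal risk difference in the complier subpopulation
```

Note also that the denominator of the IV estimate equals the complier proportion here; shrinking `p_complier` (a weak instrument) leaves the estimate unbiased but makes it far noisier, which is the variability issue the abstract stresses.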

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Since the emergence of diffusion tensor imaging, a lot of work has been done to better understand the properties of diffusion MRI tractography. However, the validation of the reconstructed fiber connections remains problematic in many respects. For example, it is difficult to assess whether a connection is the result of the diffusion coherence contrast itself or simply the result of other uncontrolled parameters such as noise, brain geometry, and algorithmic characteristics. METHODOLOGY/PRINCIPAL FINDINGS: In this work, we propose a method to estimate the respective contributions of diffusion coherence versus other effects to a tractography result by comparing data sets with and without diffusion coherence contrast. We use this methodology to assign a confidence level to every gray-matter-to-gray-matter connection and add this new information directly to the connectivity matrix. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that, whereas we can have strong confidence in mid- and long-range connections obtained by a tractography experiment, it is difficult to distinguish short connections traced due to diffusion coherence contrast from those produced by chance due to the other uncontrolled factors of the tractography methodology.

Relevance:

90.00%

Publisher:

Abstract:

Introduction: Diffuse large B-cell lymphomas (DLBCL) represent a heterogeneous disease with variable clinical outcome. Identifying phenotypic biomarkers of tumor cells on paraffin sections that predict different clinical outcomes remains an important goal, one that may also help to better understand the biology of this lymphoma. Differentiating non-germinal-centre B-cell-like (non-GCB) from germinal-centre B-cell-like (GCB) DLBCL according to the Hans algorithm has been considered an important immunohistochemical biomarker with prognostic value among patients treated with R-CHOP, although this has not been reproducibly found by all groups. Gene expression studies have also shown that IgM expression might be used as a surrogate for the GCB and ABC subtypes, with strong preferential expression of IgM in the ABC DLBCL subtype. An ImmunoFISH index based on the differential expression of MUM1 and FOXP1 by immunohistochemistry and on BCL6 rearrangement by FISH has previously been reported (C Copie-Bergman, J Clin Oncol. 2009;27:5573-9) as prognostic in a homogeneous series of DLBCL treated with R-CHOP. In addition, oncogenic MYC protein overexpression by immunohistochemistry may represent an easy tool to identify the consequences of MYC deregulation in DLBCL. Our aim was to analyse by immunohistochemistry the prognostic relevance of MYC, IgM, GCB/non-GCB subtype, and the ImmunoFISH index in a large series of de novo DLBCL treated with rituximab (R)-chemotherapy (anthracycline based) included in the 2003 program of the Groupe d'Etude des Lymphomes de l'Adulte (GELA) trials. Methods: The 2003 program included patients with de novo CD20+ DLBCL enrolled in 6 different LNH-03 GELA trials (LNH-03-1B, -B, -3B, 39B, -6B, 7B) stratifying patients according to age and age-adjusted IPI. Tumor samples were analyzed by immunohistochemistry using CD10, BCL6, MUM1, FOXP1 (according to the Barrans threshold), MYC, and IgM antibodies on tissue microarrays, and by FISH using BCL6 split-signal DNA probes. Considering evaluable Hans scores, 670 patients were included in the study, with 237 (35.4%) receiving the intensive R-ACVBP regimen and 433 (64.6%) R-CHOP/R-mini-CHOP. Results: 304 (45.4%) DLBCL were classified as GCB and 366 (54.6%) as non-GCB according to the Hans algorithm. 337/567 cases (59.4%) were positive for the ImmunoFISH index (i.e., two of the three markers positive: MUM1 protein positive; FOXP1 protein variable or strong; BCL6 rearrangement). The ImmunoFISH index was preferentially positive in the non-GCB subtype (81.3%) compared to the GCB subtype (31.2%) (p<0.001). IgM was recorded as positive in tumor cells in 351/637 (52.4%) DLBCL cases, with preferential expression in the non-GCB (195, 53.3%) vs the GCB subtype (100, 32.9%) (p<0.001). MYC was positive in 170/577 (29.5%) cases with a 40% cut-off and in 44/577 (14.2%) cases with a 70% cut-off. There was no preferential expression of MYC in the GCB or non-GCB subtype (p>0.4) for either cut-off. Progression-free survival (PFS) was significantly worse among patients with a high IPI score (p<0.0001), an IgM-positive tumor (p<0.0001), a MYC-positive tumor at the 40% threshold (p<0.001), a positive ImmunoFISH index (p<0.002), or the non-GCB DLBCL subtype (p<0.0001). Overall survival (OS) was also significantly worse among patients with a high IPI score (p<0.0001), an IgM-positive tumor (p=0.02), a MYC-positive tumor at the 40% threshold (p<0.01), a positive ImmunoFISH index (p=0.02), or the non-GCB DLBCL subtype (p<0.0001). All significant parameters were included in a multivariate analysis using a Cox model and, in addition to IPI, only the GCB/non-GCB subtype according to the Hans algorithm significantly predicted a worse PFS in the non-GCB subgroup (HR 1.9 [1.3-2.8], p=0.002) as well as a worse OS (HR 2.0 [1.3-3.2], p=0.003). This strong prognostic value of non-GCB subtyping was confirmed considering only patients treated with R-CHOP, for PFS (HR 2.1 [1.4-3.3], p=0.001) and for OS (HR 2.3 [1.3-3.8], p=0.002). Conclusion: Our study of a large series of patients included in trials confirmed the relevance of immunohistochemistry as a useful tool to identify significant prognostic biomarkers for clinical use. We show here that IgM and MYC might be useful prognostic biomarkers. In addition, we confirmed in this series the prognostic value of the ImmunoFISH index. Above all, we fully validated the strong and independent prognostic value of the Hans algorithm, used daily by pathologists to subtype DLBCL.
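The Hans algorithm referenced throughout is a short immunohistochemical decision tree over three markers, each scored positive at a ≥30%-of-tumour-cells cut-off (Hans et al., Blood 2004). A minimal sketch of that published decision tree:

```python
def hans_subtype(cd10: bool, bcl6: bool, mum1: bool) -> str:
    """Hans immunohistochemical algorithm: classify DLBCL as GCB or non-GCB
    from CD10, BCL6 and MUM1 positivity (each at the >=30% cut-off)."""
    if cd10:
        return "GCB"               # CD10+ -> GCB regardless of the other markers
    if not bcl6:
        return "non-GCB"           # CD10- / BCL6- -> non-GCB
    return "GCB" if not mum1 else "non-GCB"  # CD10- / BCL6+: MUM1 decides

assert hans_subtype(True, False, True) == "GCB"
assert hans_subtype(False, False, False) == "non-GCB"
assert hans_subtype(False, True, False) == "GCB"
assert hans_subtype(False, True, True) == "non-GCB"
```

Its simplicity is the point the conclusion makes: three routine stains yield a subtype with independent prognostic value.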

Relevance:

90.00%

Publisher:

Abstract:

Rheumatoid arthritis is the only secondary cause of osteoporosis that is considered independent of bone density in the FRAX® algorithm. Although the input for rheumatoid arthritis in FRAX® is a dichotomous variable, intuitively one would expect more severe or active disease to be associated with a greater risk of fracture. We reviewed the literature to determine whether specific disease parameters or medication use could be used to better characterize fracture risk in individuals with rheumatoid arthritis. Although many studies document a correlation between various parameters of disease activity or severity and decreased bone density, fewer have associated these variables with fracture risk. We reviewed these studies in detail and concluded that disability measures such as the HAQ (Health Assessment Questionnaire) and functional class do correlate with clinical fractures, but not with morphometric vertebral fractures. One large study found a strong correlation between duration of disease and fracture risk, but additional studies are needed to confirm this. There was little evidence linking other measures of disease, such as the DAS (disease activity score), VAS (visual analogue scale), acute-phase reactants, or use of non-glucocorticoid medications, with increased fracture risk. We concluded that FRAX® calculations may underestimate fracture probability in patients with impaired functional status from rheumatoid arthritis, but that this could not be quantified at this time. For now, other disease measures cannot be used for fracture prediction; however, only a few, mostly small, studies addressed other disease parameters, and further research is needed. Additional questions for future research are suggested.

Relevance:

90.00%

Publisher:

Abstract:

The success of anatomic repair of a Bankart lesion diminishes in the presence of capsular stretching and/or attenuation, which is reported in a variable percentage of patients with chronic glenohumeral instability. We introduce a new arthroscopic stitch, the MIBA stitch, designed with a twofold aim: to improve tissue grip and reduce the risk of soft-tissue tear, particularly cutting through capsulolabral tissue, and to address capsulolabral detachment and capsular attenuation using a double-loaded suture anchor. This stitch is a combination of a horizontal mattress stitch passing through the capsulolabral complex in a "south-to-north" direction and an overlapping single vertical suture passing through the capsule and labrum in an "east-to-west" direction. The mattress stitch is tied before the vertical stitch in order to reinforce the simple vertical stitch, improving grip and the contact force between capsulolabral tissue and glenoid bone.

Relevance:

90.00%

Publisher:

Abstract:

Cross-hole radar tomography is a useful tool for mapping shallow subsurface electrical properties, namely dielectric permittivity and electrical conductivity. Common practice is to invert cross-hole radar data with ray-based tomographic algorithms using first-arrival traveltimes and first-cycle amplitudes. However, the resolution of conventional ray-based inversion schemes for cross-hole ground-penetrating radar (GPR) is limited because only a fraction of the information contained in the radar data is used. The resolution can be improved significantly by using a full-waveform inversion that considers the entire waveform, or significant parts thereof. A recently developed 2D time-domain vectorial full-waveform cross-hole radar inversion code was modified in the present study to allow optimized acquisition setups that reduce the acquisition time and computational costs significantly. This is achieved by minimizing the number of transmitter points and maximizing the number of receiver positions. The improved algorithm was employed to invert cross-hole GPR data acquired within a gravel aquifer (4-10 m depth) in the Thur valley, Switzerland. The simulated traces of the final model obtained by the full-waveform inversion fit the observed traces very well in the lower part of the section and reasonably well in the upper part. Compared to the ray-based inversion, the results from the full-waveform inversion show significantly higher-resolution images. Borehole logs were acquired on either side, 2.5 m away from the cross-hole plane. There is a good correspondence between the conductivity tomograms and the natural gamma logs at the boundary between the gravel layer and the underlying lacustrine clay deposits. Using existing petrophysical models, the inversion results and neutron-neutron logs were converted to porosity. Without any additional calibration, the values obtained from the converted neutron-neutron logs and the permittivity results are very close, and similar vertical variations can be observed. In both cases, the full-waveform inversion provides additional information about the subsurface. Due to the presence of the water table and the associated refracted/reflected waves, the upper traces are not well fitted, and the upper 2 m of the permittivity and conductivity tomograms are not reliably reconstructed because the unsaturated zone is not incorporated into the inversion domain.

Relevance:

90.00%

Publisher:

Abstract:

The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. This algorithm, in consequence, shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux reconstruction step, which delivers a fine-scale mass conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporation of appropriate coarse-scale mass-balance equations.
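The link to domain decomposition drawn above can be made concrete on a small block system: eliminating the "fine" unknowns leaves a Schur-complement problem for the remaining ones, which is the algebraic operation the MSFV coarse operator approximates. A toy dense example (not the MSFV implementation itself):

```python
import numpy as np

# Block system  [A B; C D] [x1; x2] = [f1; f2].  Eliminating x1 yields the
# Schur-complement system  (D - C A^{-1} B) x2 = f2 - C A^{-1} f1.
rng = np.random.default_rng(0)
n1, n2 = 6, 3
# diagonally dominant random matrix, so all the solves below are well posed
M = rng.random((n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)
A, B = M[:n1, :n1], M[:n1, n1:]
C, D = M[n1:, :n1], M[n1:, n1:]
f = rng.random(n1 + n2)
f1, f2 = f[:n1], f[n1:]

S = D - C @ np.linalg.solve(A, B)                       # Schur complement of A
x2 = np.linalg.solve(S, f2 - C @ np.linalg.solve(A, f1))  # coarse-scale solve
x1 = np.linalg.solve(A, f1 - B @ x2)                    # back-substitute fine unknowns
x_full = np.linalg.solve(M, f)                          # reference: direct solve

assert np.allclose(np.concatenate([x1, x2]), x_full)
```

In the MSFV setting the exact Schur complement is never formed; it is replaced by a sparse approximation built from local solutions on the overlapping coarse grids, and the flux-reconstruction step restores fine-scale mass conservation.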

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. The current literature proposes manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for classification of major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases (ICD-10) codes was programmed, further developed, and implemented for one year's data (2004) from 25 registries. The group of cases classified as potential multiple congenital anomalies was manually reviewed by three geneticists to reach a final agreement on classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as "potentially multiple congenital anomalies". After manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching 35% of cases for bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve classification of congenital anomalies for surveillance and research.
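The flow described (chromosomal anomalies first, then syndromes, then counting major anomalies, with "potentially multiple" cases sent to manual review) can be sketched as a rule-based classifier over ICD-10 codes. The specific three-character ranges and the tiny syndrome list below are simplified placeholders for illustration, not EUROCAT's actual rule set:

```python
def classify_case(icd_codes):
    """Greatly simplified sketch of an EUROCAT-style case classification
    from ICD-10 codes; the real algorithm has many more rules and exclusions.
    Order matters: chromosomal first, then syndromes, then counting majors."""
    # Q90-Q99: chromosomal abnormalities take precedence
    if any("Q90" <= code[:3] <= "Q99" for code in icd_codes):
        return "chromosomal"
    # hypothetical set of codes treated as monogenic syndromes
    syndromes = {"Q87.0", "Q87.1", "Q87.4"}
    if any(code in syndromes for code in icd_codes):
        return "monogenic syndrome"
    # count major anomalies in distinct three-character blocks of Q00-Q89
    majors = {code[:3] for code in icd_codes
              if code.startswith("Q") and code[:3] < "Q90"}
    if len(majors) >= 2:
        return "potentially multiple"   # flagged for geneticist review
    return "isolated"

assert classify_case(["Q90.9"]) == "chromosomal"
assert classify_case(["Q05.9", "Q21.0"]) == "potentially multiple"
assert classify_case(["Q21.0"]) == "isolated"
```

The key design point the paper highlights survives even in this toy version: the algorithm only has to be sensitive enough to funnel candidate multiples to manual review, which then settles the final classification.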

Relevance:

90.00%

Publisher:

Abstract:

The world-wide loss of biodiversity at all scales has become a matter of urgent concern, and improving our understanding of the local drivers of biodiversity in natural and anthropogenic ecosystems is now crucial for conservation. The main objective of this study was to further our comprehension of the driving forces controlling biodiversity patterns in a complex and diverse ecosystem of high conservation value, the wooded pastures of the Jura mountains. Spatial pattern and scale are central to several ecological theories, and it is increasingly recognized that they must be taken into consideration when studying biodiversity patterns. However, few hypotheses developed from simulations or theoretical studies have been tested using field data, and how biodiversity patterns change with different scale components remains largely unknown. We test several such hypotheses and explore spatial patterns of biodiversity in a multi-scale context, using different measures of biodiversity (species richness and composition) and field data collected with a hierarchical sampling design.

We first tested the simple hypothesis that species richness, the number of species in a given area, is related to environmental heterogeneity at all scales. We decomposed environmental heterogeneity into two parts: the variability of environmental conditions and its spatial configuration. We showed that species richness generally increased with environmental heterogeneity: it increased with the number of habitat types and with decreasing spatial aggregation of those habitats. These effects occurred at all scales, but their nature changed with scale, suggesting a change in the underlying mechanisms.

We then decomposed the spatial structure of species composition in relation to environmental variables and species traits, using variation partitioning and a recently developed spatial descriptor that allowed us to capture a wide range of spatial scales. We showed that the spatial structure of plant species composition was related to topography at the coarsest scales and to insolation at finer scales. The non-environmental fraction of the spatial variation in species composition had a complex relationship with several species traits, suggesting a scale-dependent link to biological processes, particularly dispersal.

Finally, we tested, at different spatial scales, the relationships between three components of biodiversity: total sample species richness (gamma diversity), mean species richness measured in nested subsamples (alpha diversity), and differences in species composition between subsamples (beta diversity). The pairwise relationships between alpha, beta, and gamma diversity did not follow the expected patterns, at least at certain scales. Our results indicated a strong scale dependency of several relationships and highlighted the importance of the scale ratio (the ratio between sample and subsample size) when studying biodiversity patterns.

This study thus brings new insights into the spatial patterns of plant biodiversity and the possible mechanisms allowing species coexistence. Our results suggest that biodiversity patterns cannot be explained by any single theory proposed in the literature, but rather by a combination of theories. Spatial structure plays a crucial role for all components of biodiversity, and multiple spatial scales and multiple scale components must be considered when studying species diversity.
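The alpha/beta/gamma decomposition tested in the last part can be stated compactly. A sketch using Whittaker's multiplicative convention (one of several definitions of beta diversity; the quadrat data are invented for illustration):

```python
def diversity_components(subsamples):
    """Alpha, beta and gamma diversity of a set of subsamples (species sets),
    using Whittaker's multiplicative decomposition: beta = gamma / mean alpha."""
    gamma = len(set().union(*subsamples))                      # total richness
    alpha = sum(len(set(s)) for s in subsamples) / len(subsamples)  # mean richness
    beta = gamma / alpha                                       # compositional turnover
    return alpha, beta, gamma

# three quadrats from a hypothetical pasture survey
quadrats = [{"Festuca", "Trifolium", "Plantago"},
            {"Festuca", "Carex", "Plantago"},
            {"Carex", "Nardus", "Festuca"}]
alpha, beta, gamma = diversity_components(quadrats)
# gamma = 5 species overall, alpha = 3 per quadrat, beta = 5/3
```

The scale-ratio point made above corresponds here to the choice of quadrat size relative to the whole sample: the same community yields different alpha and beta values as that ratio changes, which is why the pairwise relations between the three components are scale-dependent.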

Relevance:

90.00%

Publisher:

Abstract:

Context: Ovarian tumor (OT) typing is a competency expected from pathologists, with significant clinical implications. OT, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OT. Method and Results: Representative slides of 20 less common OT were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), to include: recognition of morphological pattern(s); shortlisting of differential diagnoses; and proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as being of utmost importance for a novice to become an expert. This project relies on virtual-slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: Gadolinium-enhanced pulmonary magnetic resonance angiography (MRA) can be an option in patients with a history of adverse reactions to iodinated contrast material or with renal insufficiency; radiation is also avoided. The aim of this study was to prospectively compare the diagnostic value of MRA with that of a diagnostic strategy taking into account catheter angiography, computed tomography angiography (CTA), and ventilation-perfusion (VQ) lung scintigraphy. MATERIAL AND METHODS: MRA was performed in 48 patients with clinically suspected pulmonary embolism (PE) using fast gradient-echo coronal acquisition with gadolinium. Interpretation was based on native coronal images and multiplanar maximum-intensity-projection reconstructions. Results were compared to catheter angiography (n=15), CTA (n=34), and VQ scintigraphy (n=45), as well as 6-12 months of clinical follow-up, according to a sequenced reference tree. RESULTS: The final diagnosis of PE was retained in 11 patients (23%). There were two false-negative and no false-positive results with MRA; CTA produced neither false negatives nor false positives. MRA had a sensitivity of 82% and a specificity of 100%. CONCLUSION: In our study, pulmonary MRA had a sensitivity of 82% and a specificity of 100% for the diagnosis of PE, making it slightly less sensitive than CTA. In the diagnostic algorithm for PE, pulmonary MRA should be considered as an alternative to CTA when iodinated contrast injection or radiation is a significant concern.
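The reported MRA accuracy follows directly from the study's counts: 11 patients with PE of whom 2 were missed, and 37 patients without PE, none falsely positive. A quick check of the arithmetic:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 11 PE patients, 2 missed by MRA (so 9 true positives);
# the 37 patients without PE produced no false positives
sens, spec = sens_spec(tp=9, fn=2, tn=37, fp=0)
# sens = 9/11 ≈ 0.818 -> the reported 82%; spec = 37/37 = 100%
```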

Relevância:

90.00%

Publicador:

Resumo:

Despite its importance in our everyday lives, some properties of water remain unexplained. The study of the interactions between water and organic particles occupies research groups worldwide and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are essential to life. To this end I used a simple model of water to describe aqueous solutions of different particles. Recently, liquid water has been described as a structure formed by a random network of hydrogen bonds. Introducing a hydrophobic particle into this structure at low temperature destroys some hydrogen bonds, which is energetically unfavorable. The water molecules then arrange themselves around the particle, forming a cage that restores even stronger hydrogen bonds (between water molecules): the particles are therefore soluble in water. At higher temperatures, the thermal agitation of the molecules becomes significant and breaks the hydrogen bonds. Dissolving the particles then becomes energetically unfavorable, and the particles separate from the water, forming aggregates that minimize their surface exposed to water. At very high temperature, however, entropic effects become so strong that the particles mix with the water molecules again. Using a model based on these changes in the hydrogen-bond structure, I was able to reproduce the main phenomena associated with hydrophobicity. I found a two-phase coexistence region between the lower and upper critical solution temperatures, in which the hydrophobic particles aggregate. Outside this region, the particles are dissolved in water.
I showed that the hydrophobic interaction is described by a model that takes into account only the changes in the structure of liquid water in the presence of a hydrophobic particle, rather than direct interactions between the particles. Encouraged by these promising results, I studied aqueous solutions of hydrophobic particles in the presence of kosmotropic and chaotropic cosolvents. These are substances that stabilize or destabilize aggregates of hydrophobic particles. Their presence can be included in the model by describing their effect on the structure of water. I was able to reproduce the elevated concentration of chaotropic cosolvents in the immediate vicinity of the particle, and the opposite effect for kosmotropic cosolvents. This change in cosolvent concentration near hydrophobic particles is the main cause of its effect on the solubility of the hydrophobic particles. I showed that the adapted model correctly predicts the implicit effects of cosolvents on many-body interactions between hydrophobic particles. Furthermore, I extended the model to describe amphiphilic particles such as lipids, and found the formation of different types of micelles depending on the distribution of hydrophobic regions on the particle surface. Hydrophobicity also remains a controversial topic in protein science. I defined a new hydrophobicity scale for the amino acids that form proteins, based on their water-exposed surfaces in native proteins. This scale allows a better comparison between experiments and theoretical results. The model developed in my work thus contributes to a better understanding of aqueous solutions of hydrophobic particles.
I believe that the analytical and numerical results obtained partly clarify the physical processes underlying the hydrophobic interaction.<br/><br/>Despite the importance of water in our daily lives, some of its properties remain unexplained. Indeed, the interactions of water with organic particles are investigated in research groups all over the world, but controversy still surrounds many aspects of their description. In my work I have tried to understand these interactions on a molecular level using both analytical and numerical methods. Recent investigations describe liquid water as a random network formed by hydrogen bonds. The insertion of a hydrophobic particle at low temperature breaks some of the hydrogen bonds, which is energetically unfavorable. The water molecules, however, rearrange into a cage-like structure around the solute particle, forming even stronger hydrogen bonds among themselves, and thus the solute particles are soluble. At higher temperatures this strict ordering is disrupted by thermal motion, and dissolving the particles becomes unfavorable; they minimize their water-exposed surface by aggregating. At even higher temperatures, entropic effects become dominant and water and solute particles mix again. Using a model based on these changes in water structure, I have reproduced the essential phenomena connected to hydrophobicity. These include an upper and a lower critical solution temperature, which define the temperature and density ranges in which aggregation occurs. Outside this region the solute particles are soluble in water. Because I was able to demonstrate that the simple mixture model implicitly contains many-body interactions between the solute molecules, I feel that the study contributes an important advance in the qualitative understanding of the hydrophobic effect. I have also studied the aggregation of hydrophobic particles in aqueous solutions in the presence of cosolvents.
Here I have demonstrated that the important features of the destabilizing effect of chaotropic cosolvents on hydrophobic aggregates may be described within the same two-state model, adapted to focus on the ability of such substances to alter the structure of water. The relevant phenomena include a significant enhancement of the solubility of non-polar solute particles and the preferential binding of chaotropic substances to solute molecules. In a similar fashion, I have analyzed the stabilizing effect of kosmotropic cosolvents in these solutions. Including the ability of kosmotropic substances to enhance the structure of liquid water leads to reduced solubility, a larger aggregation regime, and the preferential exclusion of the cosolvent from the hydration shell of hydrophobic solute particles. I have further adapted the MLG model to include the solvation of amphiphilic solute particles in water by allowing different distributions of hydrophobic regions at the molecular surface. I have found aggregation of the amphiphiles and the formation of various types of micelle as a function of the hydrophobicity pattern, and have demonstrated that certain features of micelle formation may be reproduced by the adapted model through alterations of water structure near different surface regions of the dissolved amphiphiles. Hydrophobicity also remains a controversial quantity in protein science. Based on the surface exposure of the 20 amino acids in native proteins, I have defined a new hydrophobicity scale, which may improve the comparison of experimental data with the results of theoretical HP models. Overall, I have shown that the primary features of the hydrophobic interaction in aqueous solutions may be captured within a model that focuses on alterations in water structure around non-polar solute particles.
The results obtained within this model may illuminate the processes underlying the hydrophobic interaction.<br/><br/>Life on our planet began in water and could not exist without it: animal and plant cells contain up to 95% water. Despite its importance in our everyday lives, some properties of water remain unexplained. In particular, the study of the interactions between water and organic particles occupies research groups worldwide and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are essential to life. To this end I used a simple model of water to describe aqueous solutions of different particles. Although water is generally a good solvent, a large group of molecules, called hydrophobic molecules (from the Greek "hydro" = "water" and "phobia" = "fear"), do not dissolve easily in water. These hydrophobic particles try to avoid contact with water and therefore form aggregates to minimize their water-exposed surface. This force between the particles is called the hydrophobic interaction, and the physical mechanisms behind it are currently not well understood. In my study I described the effect of hydrophobic particles on liquid water. The objective was to clarify the mechanism of the hydrophobic interaction, which is fundamental to the formation of membranes and the functioning of biological processes in our bodies. Recently, liquid water has been described as a random network formed by hydrogen bonds. Introducing a hydrophobic particle into this structure destroys some hydrogen bonds, while the water molecules arrange themselves around the particle, forming a cage that restores even stronger hydrogen bonds (between water molecules): the particles are then soluble in water.
At higher temperatures, the thermal agitation of the molecules becomes significant and breaks the cage structure around the hydrophobic particles. Dissolving the particles then becomes unfavorable, and the particles separate from the water, forming two phases. At very high temperature, the thermal motion in the system becomes so strong that the particles mix with the water molecules again. Using a model that describes the system in terms of restructuring in liquid water, I succeeded in reproducing the physical phenomena associated with hydrophobicity. I showed that the hydrophobic interactions between several particles can be expressed in a model that takes into account only the hydrogen bonds between water molecules. Encouraged by these promising results, I included in my model substances frequently used to stabilize or destabilize aqueous solutions of hydrophobic particles, and was able to reproduce the effects due to their presence. Moreover, I was able to describe the formation of micelles by amphiphilic particles such as lipids, whose surface is partly hydrophobic and partly hydrophilic ("hydro-phile" = "loves water"), as well as the hydrophobicity-driven folding of proteins, which guarantees the correct functioning of the biological processes in our bodies. In my future studies I will pursue the investigation of aqueous solutions of different particles using the techniques acquired during my thesis work, trying to understand the physical properties of the liquid most important for our lives: water.
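The re-entrant solubility described above (soluble at low temperature, aggregated between a lower and an upper critical solution temperature, mixed again at high temperature) can be illustrated with a minimal two-state sketch in the spirit of the MLG picture the abstract refers to. The energies and degeneracies below are illustrative assumptions, not the thesis's parameters: hydration-shell water is given slightly stronger intact hydrogen bonds, a higher energy cost for broken ones, and more broken-state configurations.

```python
import math

# Two-state water model (MLG-style sketch): each water molecule is either
# "intact" (hydrogen-bonded) or "broken", in the bulk or in the hydration
# shell of a hydrophobic particle. Parameters are illustrative assumptions.
BULK = [(-1.0, 1), (0.0, 5)]    # (energy, degeneracy): intact, broken
SHELL = [(-1.05, 1), (0.5, 6)]  # shell: stronger intact bonds, costlier broken ones

def free_energy(states, t):
    """F = -kT ln Z for one molecule (units with k_B = 1)."""
    z = sum(g * math.exp(-e / t) for e, g in states)
    return -t * math.log(z)

def transfer_free_energy(t):
    """Cost of moving a water molecule from bulk into the hydration shell.
    Negative -> cage formation (solvation) is favorable; positive -> aggregation."""
    return free_energy(SHELL, t) - free_energy(BULK, t)

# Low T: cage with stronger bonds wins -> soluble. Intermediate T: broken
# shell bonds are costly -> aggregation. High T: shell entropy wins -> mixed.
for t in (0.1, 0.5, 4.0):
    print(f"T={t}: dG={transfer_free_energy(t):+.3f}")
```

With these parameters the transfer free energy is negative at low and high temperature and positive in between, reproducing qualitatively the coexistence window between the two critical solution temperatures.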

Relevância:

90.00%

Publicador:

Resumo:

Abstract Lipid-derived signals mediate many stress and defense responses in multicellular eukaryotes. Among these are the jasmonates, potently active signaling compounds in plants. Jasmonic acid (JA) and 12-oxo-phytodienoic acid (OPDA) are the two best known members of the large jasmonate family. This thesis further investigates their roles as signals using genomic and proteomic approaches. The study is based on a simple genetic model involving two key genes. The first is ALLENE OXIDE SYNTHASE (AOS), encoding the most important enzyme in generating jasmonates. The second is CORONATINE INSENSITIVE 1 (COI1), a gene involved in all currently documented canonical signaling responses. We asked a simple question: do null mutations in AOS and COI1 have analogous effects on the transcriptome? We found that they do not. Although most COI1-dependent genes were also AOS-dependent, the expression of a zinc-finger protein gene was AOS-dependent but unaffected by the coi1-1 mutation. We therefore supposed that a jasmonate family member, most probably OPDA, can alter gene expression partially independently of COI1. Conversely, the expression of at least three genes, one of which encodes a protein kinase, was shown to be COI1-dependent but did not require a functional AOS protein. We conclude that a non-jasmonate signal might alter gene expression through COI1. A proteomic comparison of coi1-1 and aos plants confirmed these observations and highlighted probable protein-degradation processes controlled by jasmonates and COI1 in the wounded leaf. This thesis revealed new functions for COI1 and for AOS-generated oxylipins in the jasmonate signaling pathway. Résumé: Lipid-derived signals mediate stress and defense responses in multicellular eukaryotes. Among them, the jasmonates are potent signaling compounds in plants.
Jasmonic acid (JA) and 12-oxo-phytodienoic acid (OPDA) are the two best characterized members of the large jasmonate family. This thesis investigates their signaling roles in greater depth using genomic and proteomic approaches. The study is based on a simple genetic model involving only two genes. The first is ALLENE OXIDE SYNTHASE (AOS), which encodes the most important enzyme for jasmonate production. The second is CORONATINE INSENSITIVE 1 (COI1), which is involved in all jasmonate responses known to date. We asked the following question: do null mutations in the AOS and COI1 genes have analogous effects on the transcriptome? We found that this was not the case. While the majority of COI1-dependent genes are also AOS-dependent, the expression of a gene encoding a zinc-finger protein is unaffected by the COI1 mutation while being AOS-dependent. We therefore supposed that a member of the jasmonate family, probably OPDA, can modify the expression of certain genes independently of COI1. Conversely, we showed that, while being COI1-dependent, the expression of at least three genes, including one encoding a protein kinase, was not affected by the absence of a functional AOS protein. We concluded that a signal other than a jasmonate must modify the expression of certain genes through COI1. The proteomic comparison of aos and coi1-1 plants confirmed these observations and revealed a probable protein-degradation process controlled by the jasmonates and COI1. This thesis has brought to light new functions for COI1 and for AOS-generated oxylipins in jasmonate signaling.
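The transcriptome comparison above amounts to classifying each wound-induced gene by two independent flags, AOS-dependence and COI1-dependence, with the two off-diagonal classes carrying the thesis's conclusions. A minimal sketch, with hypothetical gene names standing in for the real gene lists:

```python
# Hypothetical example genes; each is flagged by whether its wound-induced
# expression requires functional AOS and/or COI1 (names are placeholders).
genes = {
    "gene_a":         {"aos": True,  "coi1": True},   # canonical jasmonate response
    "zinc_finger":    {"aos": True,  "coi1": False},  # AOS- but not COI1-dependent
    "protein_kinase": {"aos": False, "coi1": True},   # COI1- but not AOS-dependent
}

def classify(flags):
    """Map the two dependence flags to the signaling interpretation used above."""
    if flags["aos"] and flags["coi1"]:
        return "canonical jasmonate signaling (AOS and COI1)"
    if flags["aos"]:
        return "COI1-independent jasmonate signal (e.g. OPDA)"
    if flags["coi1"]:
        return "non-jasmonate signal acting through COI1"
    return "jasmonate-independent"

categories = {name: classify(flags) for name, flags in genes.items()}
```

The zinc-finger-like case falls in the "AOS-dependent, COI1-independent" class (the OPDA hypothesis), while the kinase-like case falls in the "COI1-dependent, AOS-independent" class (the non-jasmonate signal through COI1).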