128 results for Parallel numerical algorithms
Abstract:
An epidemic model is formulated by a reaction–diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
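As a minimal sketch of the kind of system described above (not the paper's scheme), the following discretizes a 1-D susceptible-infected model with a cross-diffusion term using an explicit conservative finite-volume update. All coefficients and the local dynamics are illustrative assumptions.

```python
# Explicit conservative finite-volume step for a toy SI system with
# cross-diffusion: the flux of S contains a term driven by gradients of I.
# Coefficients (dS, dI, dc, beta, gamma) are illustrative, not the paper's.
import numpy as np

def step(S, I, dx, dt, dS=1.0, dI=0.1, dc=0.05, beta=0.5, gamma=0.1):
    """One explicit finite-volume step; fluxes at cell faces, zero-flux walls."""
    def flux(u):                      # face gradients of a cell-centered field
        g = np.diff(u) / dx
        return np.concatenate(([0.0], g, [0.0]))   # zero-flux boundaries
    FS = dS * flux(S) + dc * flux(I)  # self-diffusion plus cross-diffusion
    FI = dI * flux(I)
    reaction = beta * S * I           # local infection pressure
    S_new = S + dt * (np.diff(FS) / dx - reaction)
    I_new = I + dt * (np.diff(FI) / dx + reaction - gamma * I)
    return S_new, I_new

# Usage: a small infected perturbation of a uniform susceptible state
n = 50
S = np.ones(n)
I = np.zeros(n)
I[n // 2] = 0.1
for _ in range(100):
    S, I = step(S, I, dx=1.0, dt=0.01)
```

The explicit time step must respect the usual diffusive stability limit (dt ≲ dx²/2dS); an adaptive multiresolution scheme as in the paper would refine the mesh only where patterns form.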
Abstract:
ABSTRACT: BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis, we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities showed no significant effect. Results were similar among 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
Abstract:
We are interested in the development, implementation and testing of an orthotropic model for cardiac contraction based on an active strain decomposition. Our model addresses the coupling of a transversely isotropic mechanical description at the cell level, with an orthotropic constitutive law for incompressible tissue at the macroscopic level. The main differences with the active stress model are addressed in detail, and a finite element discretization using Taylor-Hood and MINI elements is proposed and illustrated with numerical examples.
Abstract:
In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy to mitigate global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those at the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and CO2 saturation play key roles in the observed wave-induced fluid flow effects. The latter can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, modifying significantly the main seismic attributes usually employed in the characterization of these environments. Consequently, the determination of the nature of the fluid distributions as well as the proper modeling of the seismic data constitute important aspects that should not be ignored in the seismic monitoring of CO2 sequestration problems.
Abstract:
This article analyses and discusses issues that pertain to the choice of relevant databases for assigning values to the components of evaluative likelihood ratio procedures at source level. Although several formal likelihood ratio developments currently exist, both case practitioners and recipients of expert information (such as the judiciary) may be reluctant to consider them as a framework for evaluating scientific evidence in context. The recent ruling R v T and the ensuing discussions in many forums provide illustrative examples of this. In particular, it is often felt that likelihood ratio-based reasoning amounts to an application that requires extensive quantitative information along with means for dealing with technicalities related to the algebraic formulation of these approaches. With regard to this objection, this article proposes two distinct discussions. In the first part, it is argued that, from a methodological point of view, there are additional levels of qualitative evaluation that are worth considering prior to focusing on particular numerical probability assignments. Analyses are proposed that intend to show that, under certain assumptions, relative numerical values, as opposed to absolute values, may be sufficient to characterize a likelihood ratio for practical and pragmatic purposes. The feasibility of such qualitative considerations shows that the availability of hard numerical data is not a necessary requirement for implementing a likelihood ratio approach in practice. It is further argued that, even if numerical evaluations can be made, qualitative considerations may be valuable because they can further the understanding of the logical underpinnings of an assessment. In the second part, the article draws a parallel to R v T by concentrating on a practical footwear mark case received at the authors' institute.
This case will serve the purpose of exemplifying the possible usage of data from various sources in casework and help to discuss the difficulty associated with reconciling the depth of theoretical likelihood ratio developments and limitations in the degree to which these developments can actually be applied in practice.
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in the exploratory data analysis and the extraction of knowledge embedded in these data. However, new challenges in visualization and clustering are posed when dealing with the special characteristics of this data, for instance its complex structures, large quantity of samples, variables involved in a temporal context, high dimensionality and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist the knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering: the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and the second for exploratory visual data analysis: the Tree-structured Self-organizing Maps Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, allow the integration of space and time seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and number of clusters. The most important characteristics of the FGHSON include: (1) It does not require an a-priori setup of the number of clusters. (2) The algorithm executes several self-organizing processes in parallel. Hence, when dealing with large datasets the processes can be distributed, reducing the computational cost.
(3) Only three parameters are necessary to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty of this algorithm lies in its ability to create a structure that allows the visual exploratory data analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values and outliers). Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and a third that was our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge there is no work applying and comparing the performance of those techniques when dealing with spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by using the FGHSON clustering algorithm. The developed methodologies are used in two case studies.
In the first, the objective was to find similar agroecozones through time, and in the second it was to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool which integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
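The FGHSON and the Tree-structured Component Planes both build on the classic Self-Organizing Map. The following is a minimal SOM sketch (not the thesis algorithms): a 1-D lattice of codebook vectors pulled toward the data, with a neighborhood that shrinks over training. All parameters are illustrative.

```python
# Minimal 1-D Self-Organizing Map: the shared building block behind
# SOM-based clustering and component-plane visualization. Illustrative
# hyperparameters only; the thesis algorithms add fuzzy hierarchical growth.
import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, data.shape[1]))      # codebook vectors
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)    # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)             # pull neighbors toward x
    return W

# Usage: two well-separated 2-D blobs; trained units settle near the data
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
W = train_som(data)
```

A component plane, in this picture, is simply one column of `W` displayed over the lattice; the thesis arranges such planes hierarchically by similarity.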
Abstract:
Rock slope instabilities such as rock slides, rock avalanches or deep-seated gravitational slope deformations are widespread in Alpine valleys. These phenomena are at once a main factor controlling the erosion of mountain belts and a significant natural hazard that causes important losses to mountain communities. However, the potential geometrical and dynamic connections linking outcrop- and slope-scale instabilities are often unknown. A more detailed definition of these potential links is essential to improve the comprehension of destabilization processes and to obtain a more complete hazard characterization of rock instabilities at different spatial scales. In order to propose an integrated approach to the study of rock slope instabilities, three main themes were analysed in this PhD thesis: (1) the inventory and the spatial distribution of rock slope deformations at regional scale and their influence on landscape evolution, (2) the influence of brittle and ductile tectonic structures on the development of rock slope instabilities and (3) the characterization of the hazard posed by potential rock slope instabilities through the development of conceptual instability models. To address these topics in an integrated way, several techniques were adopted. In particular, high-resolution digital elevation models proved to be fundamental tools employed during the different stages of the rock slope instability assessment. Particular attention was paid to the application of digital elevation models for detailed geometrical modelling of past and potential instabilities and for rock slope monitoring at different spatial scales. Detailed field analyses and numerical models were performed to complete and verify the remote sensing approach.
In the first part of this thesis, large slope instabilities in the Rhone valley (Switzerland) were mapped in order to obtain a first overview of the tectonic and climatic factors influencing their distribution and characteristics. Our analyses demonstrate the key influence of neotectonic activity and glacial conditioning on the spatial distribution of rock slope deformations. Moreover, the volumes of the rock instabilities identified along the main Rhone valley were used to propose a first estimate of the postglacial denudation and filling of the Rhone valley associated with large gravitational movements. In the second part of the thesis, detailed structural analyses of the Frank slide and the Sierre rock avalanche were performed to characterize the influence of brittle and ductile tectonic structures on the geometry and failure mechanism of large instabilities. Our observations indicate that the geometric characteristics and the variation of rock mass quality associated with ductile tectonic structures, which are often ignored in landslide studies, are important factors that can drastically influence the extent and failure mechanism of rock slope instabilities. In the last part of the thesis, the failure mechanisms and the hazard associated with five potential instabilities were analysed in detail. These case studies clearly highlight the importance of incorporating different analysis and monitoring techniques to obtain reliable hazard scenarios. This information, together with the development of a conceptual instability model, represents the primary input for an integrated risk management of rock slope instabilities. - Slope movements such as rockfalls, rock avalanches, or slower phenomena such as deep-seated gravitational slope deformations are common occurrences in mountainous regions.
Slope movements are at once one of the main factors controlling the progressive destruction of orogenic chains and a concrete natural hazard that can cause significant damage. Yet gravitational phenomena are rarely analysed in their entirety, and the geometrical and mechanical relationships linking slope-scale instabilities to local instabilities remain poorly defined. A better characterization of these links could nevertheless contribute substantially to understanding slope destabilization processes and improve the characterization of gravitational hazards at all spatial scales. With the aim of proposing a more global approach to the problem of gravitational movements, this thesis pursues three main research axes: (1) the inventory and analysis of the spatial distribution of large rock instabilities at the regional scale, (2) the analysis of brittle and ductile tectonic structures in relation to the failure mechanisms of large rock instabilities, and (3) the characterization of rock hazards through a multidisciplinary approach aimed at developing a conceptual model of the instability and a better appreciation of the danger. Various techniques were used to address the problems treated in this thesis. In particular, the digital elevation model proved to be an indispensable tool for most of the analyses carried out, from the identification of an instability to the monitoring of its movements. Field analyses and numerical modelling then complemented the information derived from the digital elevation model. In the first part of this thesis, rock slope movements in the Rhone valley (Switzerland) were mapped in order to study their distribution as a function of regional geological and morphological variables.
In particular, the analyses highlighted the influence of neotectonic activity and of the glacial phases on the distribution of zones with a high density of rock instabilities. The volumes of the rock instabilities identified along the main valley were then used to estimate the postglacial denudation rate and the filling of the Rhone valley linked to the large gravitational movements. In the second part, the study of the structural arrangement of the Sierre (Switzerland) and Frank (Canada) rock avalanches made it possible to better characterize the passive influence of tectonic structures on the geometry of instabilities. In particular, structures inherited from ductile tectonics, often ignored in the study of gravitational instabilities, were identified as very important structures that control the failure mechanisms of instabilities at different scales. In the last part of the thesis, five different rock instabilities were studied through a multidisciplinary approach aimed at better characterizing the hazard and at developing a three-dimensional conceptual model of these instabilities. These analyses highlighted the need to incorporate different analysis and monitoring techniques for a more objective management of the risk associated with large rock instabilities.
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). 
CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
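The signal-to-noise ratio used above is the mean attenuation (in Hounsfield units) in an aortic region of interest divided by its standard deviation. A small sketch with synthetic ROI samples (illustrative values only, not the study's measurements) shows how a noisier reduced-dose acquisition lowers SNR at the same mean attenuation.

```python
# SNR = mean(HU) / std(HU) over a region of interest. The two ROIs below
# are synthetic draws standing in for full-dose and reduced-dose protocols.
import numpy as np

def snr(roi_hu):
    roi = np.asarray(roi_hu, dtype=float)
    return roi.mean() / roi.std()

rng = np.random.default_rng(0)
full_dose = rng.normal(300.0, 15.0, size=1000)     # low image noise
reduced_dose = rng.normal(300.0, 60.0, size=1000)  # high image noise
```

Iterative reconstruction earns its dose reduction by suppressing the noise term in this ratio, which is why it remained diagnostic down to 0.54 mGy in the study.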
Abstract:
The subdivisions of the human inferior colliculus are currently based on Golgi- and Nissl-stained preparations. We have investigated the distribution of calcium-binding protein immunoreactivity in the human inferior colliculus and found complementary or mutually exclusive localisations of parvalbumin versus calbindin D-28k and calretinin staining. The central nucleus of the inferior colliculus, but not the surrounding regions, contained parvalbumin-positive neuronal somata and fibres. Calbindin-positive neurons and fibres were concentrated in the dorsal aspect of the central nucleus and in structures surrounding it: the dorsal cortex, the lateral lemniscus, the ventrolateral nucleus, and the intercollicular region. In the dorsal cortex, labelling of calbindin and calretinin revealed four distinct layers. Thus, calcium-binding protein reactivity reveals distinct, anatomically segregated neuronal populations in the human inferior colliculus. The different calcium-binding protein-defined subdivisions may belong to parallel auditory pathways that were previously demonstrated in non-human primates, and they may constitute a first indication of parallel processing in human subcortical auditory structures.
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from literature and expert opinion. Diagnostic strategies' validity was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows for all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not be as good in other disease settings.
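A textbook analogue (not the study's simulation) makes the serial-versus-parallel trade-off concrete: with two independent tests, calling the workup positive if either test is positive raises sensitivity and lowers specificity, while requiring both to be positive does the opposite. The sensitivity and specificity values below are illustrative.

```python
# Combined sensitivity/specificity of two independent diagnostic tests
# under a parallel ("either positive") versus serial ("both positive")
# decision rule. Input values are illustrative, not the study's data.
def parallel_rule(sens_a, spec_a, sens_b, spec_b):
    # positive if either test is positive (independence assumed)
    return 1 - (1 - sens_a) * (1 - sens_b), spec_a * spec_b

def serial_rule(sens_a, spec_a, sens_b, spec_b):
    # positive only if both tests are positive (independence assumed)
    return sens_a * sens_b, 1 - (1 - spec_a) * (1 - spec_b)

sens_par, spec_par = parallel_rule(0.90, 0.80, 0.70, 0.90)  # (0.97, 0.72)
sens_ser, spec_ser = serial_rule(0.90, 0.80, 0.70, 0.90)    # (0.63, 0.98)
```

With several life-threatening diseases in play, the harm of a false negative dominates, which is why the parallel threshold-based strategy minimized expected harm in the study.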
Abstract:
Modeling the concentration-response function became extremely popular in ecotoxicology during the last decade. Indeed, modeling makes it possible to determine the total response pattern of a given substance. However, reliable modeling is demanding in terms of data, which is in contradiction with the current trend in ecotoxicology, which aims to reduce, for cost and ethical reasons, the number of data points produced during an experiment. It is therefore crucial to determine experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs to estimate the toxicity of the herbicide dinoseb on daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and often related to their meaning, i.e. the concentrations are located close to the parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in the case of long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
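The local D-optimality idea can be sketched in a few lines: score a candidate set of concentrations by the determinant of the Fisher information matrix evaluated at nominal parameter values, and prefer the design with the larger determinant. The saturation model and nominal values below are illustrative assumptions, not the paper's dinoseb models.

```python
# D-criterion for a two-parameter saturation model E[y] = e_max*c/(ec50+c)
# with homoscedastic Gaussian noise: det(J^T J) at nominal (e_max, ec50),
# where J stacks the gradient of the mean response at each concentration.
import numpy as np

def fisher_det(concs, e_max=1.0, ec50=2.0):
    rows = []
    for c in concs:
        d_emax = c / (ec50 + c)                    # dE[y]/d e_max
        d_ec50 = -e_max * c / (ec50 + c) ** 2      # dE[y]/d ec50
        rows.append([d_emax, d_ec50])
    J = np.array(rows)
    return np.linalg.det(J.T @ J)                  # D-criterion (up to noise variance)

# Two support points spanning the informative region beat a clustered design:
good = fisher_det([2.0, 20.0])   # near ec50 and near saturation
poor = fisher_det([0.1, 0.2])    # both far below ec50
```

Note the support-point count matches the parameter count, as the paper observes for its designs.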
Abstract:
River bifurcations are key nodes within braided river systems, controlling the flow and sediment partitioning and therefore the dynamics of the river braiding process. Recent research has shown that certain geometrical configurations induce instabilities that lead to downstream mid-channel bar formation and the formation of bifurcations. However, we currently have a poor understanding of the flow division process within bifurcations and of the flow dynamics in the downstream bifurcates, both of which are needed to understand bifurcation stability. This paper presents results of a numerical sensitivity experiment undertaken using computational fluid dynamics (CFD) with the purpose of understanding the flow dynamics of a series of idealized bifurcations. A geometric sensitivity analysis is undertaken for a range of channel slopes (0.005 to 0.03), bifurcation angles (22 degrees to 42 degrees) and a restricted set of inflow conditions, based upon simulating flow through meander bends with different curvature, to assess their effects on the flow field dynamics through the bifurcation. The results demonstrate that the overall slope of the bifurcation affects the velocity of flow through the bifurcation, and that when slope asymmetry is introduced, the flow structures in the bifurcation are modified. In terms of bifurcation evolution, the most important observation appears to be that once slope asymmetry is greater than 0.2 the flow within the steep bifurcate shows potential instability and the potential for alternate channel bar formation. Bifurcation angle also defines the flow structures within the bifurcation, with an increase in bifurcation angle increasing the flow velocity down both bifurcates. However, redistributive effects of secondary circulation caused by upstream curvature can very easily counter the effects of local bifurcation characteristics. Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
MOTIVATION: Analysis of millions of pyro-sequences is currently playing a crucial role in the advance of environmental microbiology. Taxonomy-independent, i.e. unsupervised, clustering of these sequences is essential for the definition of Operational Taxonomic Units. For this application, reproducibility and robustness should be the most sought after qualities, but have thus far largely been overlooked. RESULTS: More than 1 million hyper-variable internal transcribed spacer 1 (ITS1) sequences of fungal origin have been analyzed. The ITS1 sequences were first properly extracted from 454 reads using generalized profiles. Then, otupipe, cd-hit-454, ESPRIT-Tree and DBC454, a new algorithm presented here, were used to analyze the sequences. A numerical assay was developed to measure the reproducibility and robustness of these algorithms. DBC454 was the most robust, closely followed by ESPRIT-Tree. DBC454 features density-based hierarchical clustering, which complements the other methods by providing insights into the structure of the data. AVAILABILITY: An executable is freely available for non-commercial users at ftp://ftp.vital-it.ch/tools/dbc454. It is designed to run under MPI on a cluster of 64-bit Linux machines running Red Hat 4.x, or on a multi-core OSX system. CONTACT: dbc454@vital-it.ch or nicolas.guex@isb-sib.ch.
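The reproducibility assay above compares the cluster assignments an algorithm produces on perturbed or resampled inputs. A minimal sketch of one standard agreement metric for that purpose is the Rand index, the fraction of point pairs on which two clusterings agree; the paper's numerical assay is its own construction, so this is only an analogue.

```python
# Rand index: for every pair of items, check whether the two clusterings
# agree on "same cluster" vs "different cluster", and report the fraction
# of agreeing pairs (1.0 = identical partitions up to label permutation).
from itertools import combinations

def rand_index(labels_a, labels_b):
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```

Because it compares co-membership rather than label values, the index is invariant to relabeling, which is exactly what a reproducibility comparison of OTU clusterings needs.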
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether the performance levels were affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed a poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data.
Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
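The matrix representation step behind MRP can be sketched directly: every internal clade of every input tree becomes a binary character (1 = taxon inside the clade, 0 = sampled in that tree but outside, ? = not sampled in that tree), and the resulting matrix is then analyzed with parsimony. Toy trees only; real MRP implementations also handle the coding-scheme variants compared above.

```python
# Baum/Ragan-style matrix representation of input trees for MRP:
# one binary character per internal clade, with "?" for taxa absent
# from the source tree. Toy input, not the study's Sapindaceae data.
def mrp_matrix(trees, all_taxa):
    # each tree is (sampled_taxa, list_of_clades), clades as frozensets
    matrix = {t: [] for t in all_taxa}
    for sampled, clades in trees:
        for clade in clades:
            for t in all_taxa:
                if t not in sampled:
                    matrix[t].append("?")
                elif t in clade:
                    matrix[t].append("1")
                else:
                    matrix[t].append("0")
    return {t: "".join(chars) for t, chars in matrix.items()}

# Usage: two overlapping toy trees on taxa A-D
trees = [
    ({"A", "B", "C"}, [frozenset({"A", "B"})]),   # ((A,B),C)
    ({"B", "C", "D"}, [frozenset({"C", "D"})]),   # (B,(C,D))
]
m = mrp_matrix(trees, ["A", "B", "C", "D"])
```

The "?" entries are exactly the missing data whose effect on the algorithms the study probes with its reduced data set.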