128 results for Parallel numerical algorithms
Abstract:
The unstable rock slope, Stampa, above the village of Flåm, Norway, shows signs of both active and postglacial gravitational deformation over an area of 11 km2. Detailed structural field mapping, annual differential Global Navigation Satellite System (GNSS) surveys, as well as geomorphic analysis of high-resolution digital elevation models based on airborne and terrestrial laser scanning indicate that slope deformation is complex and spatially variable. Numerical modeling was used to investigate the influence of former rockslide activity and to better understand the failure mechanism. Field observations, kinematic analysis and numerical modeling indicate a strong structural control of the unstable area. Based on the integration of the above analyses, we propose that the failure mechanism is dominated by (1) a toppling component, (2) subsiding bilinear wedge failure and (3) planar sliding along the foliation at the toe of the unstable slope. Using differential GNSS, 18 points were measured annually over a period of up to 6 years. Two of these points have an average yearly movement of around 10 mm/year. They are located at the frontal cliff on almost completely detached blocks with volumes smaller than 300,000 m3. Large fractures indicate deep-seated gravitational deformation of volumes reaching several hundred million m3, but the movement rates in these areas are below 2 mm/year. Two different lobes of prehistoric rock slope failures were dated with terrestrial cosmogenic nuclides. While the northern lobe gave an average age of 4,300 years BP, the southern one yielded two different ages (2,400 and 12,000 years BP), which most likely represent multiple rockfall events. This reflects the currently observable deformation style, with unstable blocks in the northern part between Joasete and Furekamben, and no distinct blocks but high rockfall activity around Ramnanosi in the south. A relative susceptibility analysis concludes that small collapses of blocks along the frontal cliff will be the most frequent events. Larger collapses of free-standing blocks along the cliff with volumes > 100,000 m3, and thus large enough to reach the fjord, cannot be ruled out. A larger collapse involving several million m3 is presently considered very unlikely.
Abstract:
In this work we present numerical simulations of continuous-flow left ventricular assist device implantation with the aim of comparing differences in flow rates and pressure patterns depending on the location of the anastomosis and the rotational speed of the device. Although the descending aorta anastomosis approach is less invasive, since it does not require a sternotomy or a cardiopulmonary bypass, its benefits are still controversial. Moreover, the device rotational speed should be chosen correctly to avoid anomalous flow rates and pressure distributions in specific locations of the cardiovascular tree. With the aim of assessing the differences between these two approaches and device rotational speeds in terms of flow rate and pressure waveforms, we set up numerical simulations of a network of one-dimensional models in which we account for the presence of an outflow cannula anastomosed to different locations of the aorta. We then use the resulting network to compare the results of the two different cannulations for several stages of heart failure and different rotational speeds of the device. The inflow boundary data for the heart and the cannulas are obtained from a lumped-parameter model of the entire circulatory system with an assist device, which is validated with clinical data. The results show that ascending and descending aorta cannulations lead to similar waveforms and mean flow rates in all the considered cases. Moreover, regardless of the anastomosis region, the rotational speed of the device has an important impact on wave profiles; this effect is more pronounced at high RPM.
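For orientation, a minimal sketch of the kind of lumped-parameter circulation model mentioned in this abstract is given below. It is not the authors' validated model: the Windkessel parameters, the residual ventricular ejection waveform and the linear pump curve are all assumptions chosen only to illustrate how the device speed shifts mean pressure and damps pulsatility.

```python
# Minimal sketch (not the authors' model): a two-element Windkessel of the
# systemic circulation fed by a residual pulsatile ventricular flow plus a
# continuous-flow pump. All parameter values below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0      # peripheral resistance [mmHg*s/mL] (assumed)
C = 1.5      # arterial compliance [mL/mmHg] (assumed)
HR = 75.0    # heart rate [beats/min]

def q_heart(t, stroke=30.0):
    """Residual pulsatile ejection of a failing ventricle [mL/s] (assumed waveform)."""
    T = 60.0 / HR
    phase = (t % T) / T
    if phase < 0.3:  # systolic ejection during the first 30% of the cycle
        return stroke * np.pi / (2 * 0.3 * T) * np.sin(np.pi * phase / 0.3)
    return 0.0

def q_pump(p_art, rpm, a=0.01, b=0.05, p_lv=10.0):
    """Hypothetical linear pump curve: flow rises with speed, falls with pressure head."""
    return max(a * rpm - b * (p_art - p_lv), 0.0)

def windkessel(t, y, rpm):
    p = y[0]                                  # arterial pressure [mmHg]
    dpdt = (q_heart(t) + q_pump(p, rpm) - p / R) / C
    return [dpdt]

for rpm in (8000.0, 10000.0):                 # two illustrative device speeds
    sol = solve_ivp(windkessel, (0.0, 10.0), [70.0], args=(rpm,),
                    max_step=1e-3, dense_output=True)
    t = np.linspace(8.0, 10.0, 2000)          # keep the last two seconds (near-periodic regime)
    p = sol.sol(t)[0]
    print(f"rpm={rpm:.0f}  mean P={p.mean():6.1f} mmHg  pulse pressure={p.max()-p.min():5.1f} mmHg")
```

In a full study such as the one described above, a 0D block of this kind would supply the inflow boundary data for the one-dimensional arterial network, as the abstract indicates.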
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of these models concerns taking into account real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
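As a rough illustration of the SVM-based geo-spatial mapping workflow described above (this is not code from the report; the monitoring data, terrain features and hyper-parameters are invented), a scikit-learn sketch might look like this:

```python
# Sketch of SVM regression on geo-features for environmental mapping.
# Data, features and hyper-parameters are synthetic assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "monitoring network": coordinates plus two terrain-derived features
# (elevation and a slope proxy, as if extracted from a DEM).
n = 400
xy = rng.uniform(0.0, 10.0, size=(n, 2))                        # easting, northing [km]
elevation = 300.0 + 40.0 * np.sin(xy[:, 0]) + 25.0 * np.cos(0.7 * xy[:, 1])
slope = np.hypot(40.0 * np.cos(xy[:, 0]), 17.5 * np.sin(0.7 * xy[:, 1]))  # |grad(elevation)|
features = np.column_stack([xy, elevation, slope])

# Target: a nonlinear, noisy function of the geo-features (a stand-in for, e.g.,
# measured soil contamination or mean wind speed).
y = 0.05 * elevation + 2.0 * np.sin(xy[:, 0] * xy[:, 1] / 5.0) + rng.normal(0.0, 0.5, n)

# Feature scaling matters for kernel methods; the RBF kernel captures the nonlinearity.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, features, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))

# Fitted on all data, the model can then predict on a regular grid to produce a map.
model.fit(features, y)
```

The same pipeline structure applies to the classification tasks mentioned in the report (e.g. digital soil mapping) by swapping SVR for a classifier such as SVC.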
Abstract:
The aim of this study was to make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques and to compare the results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head-scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared to conventional techniques and may offer the most favourable plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).
Abstract:
BACKGROUND: Gemcitabine, oxaliplatin and 5-fluorouracil (5-FU) are active in biliary tract cancer and have a potentially synergistic mode of action and non-overlapping toxicity. The objective of these trials was to determine response, survival and toxicity separately in patients with bile duct cancer (BDC) and gallbladder cancer (GBC) treated with gemcitabine/oxaliplatin/5-FU chemotherapy. METHODS: Eligible patients with histologically proven, advanced or metastatic BDC (n=37) or GBC (n=35) were treated with gemcitabine (900 mg m⁻² over 30 min), oxaliplatin (65 mg m⁻²) and 5-FU (1500 mg m⁻² over 24 h) on days 1 and 8 of a 21-day cycle. Tumour response was the primary outcome measure. RESULTS: Response rates were 19% (95% CI: 6-32%) and 23% (95% CI: 9-37%) for BDC and GBC, respectively. Median survivals were 10.0 months (95% CI: 8.6-12.4) and 9.9 months (95% CI: 7.5-12.2) for BDC and GBC, respectively, and 1- and 2-year survival rates were 40% and 23% in BDC and 34% and 6% in GBC (intention-to-treat analysis). The major grade III and IV adverse events were neutropenia, thrombocytopenia, elevated bilirubin and anorexia. CONCLUSION: Triple-drug chemotherapy achieves response and survival results comparable to previously reported regimens, but with more toxicity.
Abstract:
In the parallel map theory, the hippocampus encodes space with two mapping systems. The bearing map is constructed primarily in the dentate gyrus from directional cues such as stimulus gradients. The sketch map is constructed within the hippocampus proper from positional cues. The integrated map emerges when data from the bearing and sketch maps are combined. Because the component maps work in parallel, the impairment of one can reveal residual learning by the other. Such parallel function may explain paradoxes of spatial learning, such as learning after partial hippocampal lesions, taxonomic and sex differences in spatial learning, and the function of hippocampal neurogenesis. By integrating evidence from physiology to phylogeny, the parallel map theory offers a unified explanation for hippocampal function.
Abstract:
Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, relevant data are often dispersed across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that, despite the lack of data, this procedure can perform better than standard matching procedures.
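A much-simplified sketch of model-based data fusion is shown below. It omits the Expectation-Maximization step of the proposed procedure and only illustrates the basic idea of fitting a logistic regression on the small file and imputing the missing categorical variable in the large file; all data and variable names are synthetic.

```python
# Simplified data-fusion sketch (not the authors' EM procedure): a small "donor"
# file observes both the shared covariates X and a categorical variable Z; the
# large "recipient" file observes only X. A logistic regression fitted on the
# donor file is used to impute Z in the recipient file.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

def draw_covariates(n):
    """Shared survey covariates (hypothetical: age and income)."""
    age = rng.uniform(18.0, 80.0, n)
    income = rng.lognormal(mean=10.0, sigma=0.4, size=n)
    return np.column_stack([age, income])

X_donor = draw_covariates(300)        # small file: X and Z both observed
X_recipient = draw_covariates(5000)   # large file: only X observed

# Simulate the categorical variable Z in the donor file (mechanism unknown in practice).
logit = -6.0 + 0.05 * X_donor[:, 0] + 5e-5 * X_donor[:, 1]
z_donor = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit on the donor file, then impute Z in the recipient file by drawing from the
# predicted probabilities (stochastic imputation).
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_donor, z_donor)
z_prob = model.predict_proba(X_recipient)[:, 1]
z_imputed = rng.binomial(1, z_prob)

print("imputed prevalence of Z in the recipient file: %.3f" % z_imputed.mean())
```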
Abstract:
In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) develop a standard methodology for analysing ink samples by high-performance thin layer chromatography (HPTLC) in a reproducible way, when ink samples are analysed at different times, in different locations and by different examiners; (b) compare ink samples automatically and objectively; and (c) define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied for various chemical and forensic conditions using standard performance tests commonly used in biometric studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
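To give a flavour of what such automatic, objective comparison and its biometrics-style evaluation can look like (this is an illustration, not one of the report's algorithms; the synthetic profiles and the correlation score are assumptions), consider the following sketch:

```python
# Compare ink samples represented as HPTLC intensity profiles with a
# correlation-based similarity score, then evaluate the score with an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)

def make_profile(peaks, n_points=200, noise=0.02):
    """Synthetic densitometric profile: Gaussian peaks at given (position, height)."""
    x = np.linspace(0.0, 1.0, n_points)
    y = sum(h * np.exp(-((x - p) ** 2) / (2 * 0.02 ** 2)) for p, h in peaks)
    return y + rng.normal(0.0, noise, n_points)

def similarity(a, b):
    """Pearson correlation between two profiles as a simple comparison score."""
    return np.corrcoef(a, b)[0, 1]

# Two hypothetical ink formulations and repeated analyses of each.
ink_A = [(0.25, 1.0), (0.55, 0.6), (0.80, 0.3)]
ink_B = [(0.20, 0.8), (0.50, 0.7), (0.85, 0.4)]

same_scores = [similarity(make_profile(ink_A), make_profile(ink_A)) for _ in range(200)]
diff_scores = [similarity(make_profile(ink_A), make_profile(ink_B)) for _ in range(200)]

scores = np.array(same_scores + diff_scores)
labels = np.array([1] * 200 + [0] * 200)          # 1 = same source, 0 = different source
fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC of the correlation score: %.3f" % roc_auc_score(labels, scores))
```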
Abstract:
We examined the spatial and temporal variation of species diversity and genetic diversity in a metacommunity comprising 16 species of freshwater gastropods. We monitored species abundance at five localities of the Ain river floodplain in southeastern France over a period of four years. Using 190 AFLP loci, we monitored the genetic diversity of Radix balthica, one of the most abundant gastropod species of the metacommunity, twice during that period. An exceptionally intense drought occurred during the last two years and differentially affected the study sites. This allowed us to test the effect of natural disturbances on changes in both genetic and species diversity. Overall, local (alpha) diversity declined, as reflected by lower values of gene diversity H_S and evenness. In parallel, among-site (beta) diversity increased at both the genetic (F_ST) and species (F_STC) levels. These results suggest that disturbances can lead to similar changes in genetic and community structure through the combined effects of selective and neutral processes.
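For readers unfamiliar with these statistics, the sketch below computes them under standard textbook definitions (this is not the authors' analysis pipeline, and the allele frequencies and abundances are synthetic placeholders):

```python
# Per-site gene diversity H_S, Wright's F_ST from biallelic (AFLP-like) allele
# frequencies, and Pielou's evenness for species abundances. Inputs are synthetic.
import numpy as np

rng = np.random.default_rng(3)

# Allele frequencies: sites x loci matrix of the "band present" allele frequency.
freq = rng.beta(2.0, 2.0, size=(5, 190))          # 5 sites, 190 AFLP loci

h_per_site = 2.0 * freq * (1.0 - freq)            # expected heterozygosity per site/locus
H_S = h_per_site.mean(axis=1)                     # mean gene diversity within each site
p_total = freq.mean(axis=0)                       # pooled allele frequencies
H_T = (2.0 * p_total * (1.0 - p_total)).mean()    # total gene diversity
F_ST = (H_T - H_S.mean()) / H_T                   # among-site differentiation

# Species abundances at one site (counts for 16 gastropod species, synthetic).
abundance = rng.integers(0, 50, size=16).astype(float)
p = abundance[abundance > 0] / abundance.sum()
shannon = -(p * np.log(p)).sum()
evenness = shannon / np.log(len(p))               # Pielou's J

print("H_S per site:", np.round(H_S, 3))
print("F_ST = %.3f, Shannon evenness J = %.3f" % (F_ST, evenness))
```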
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution makes it possible to increase availability and to reduce the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that allows us to obtain an approximate view of the system or part of it. This approximate view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays that maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of quotas of messages reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take into account the available memory at each process by limiting the view it has to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends toward the global tree overlay and adapts to some constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
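As an illustration of the tree-overlay idea described in this abstract (not the thesis protocols themselves; the topology and link reliabilities are made up), maximizing the product of per-link delivery probabilities along each source-to-node path is equivalent to minimizing the sum of their negative logarithms, so a Dijkstra-style search builds a maximum-reliability broadcast tree:

```python
# Build a broadcast tree that maximizes the reliability of source-to-node paths,
# given per-link message delivery probabilities (assumed example values).
import heapq
import math

# Undirected links: (node_a, node_b) -> delivery probability.
links = {
    ("s", "a"): 0.95, ("s", "b"): 0.80, ("a", "b"): 0.99,
    ("a", "c"): 0.90, ("b", "c"): 0.70, ("c", "d"): 0.85,
}

adj = {}
for (u, v), p in links.items():
    adj.setdefault(u, []).append((v, p))
    adj.setdefault(v, []).append((u, p))

def max_reliability_tree(source):
    """Return parent pointers and path reliabilities of the best broadcast tree."""
    best = {source: 1.0}          # best path reliability found so far
    parent = {source: None}
    heap = [(0.0, source)]        # heap cost = -log(path reliability)
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > -math.log(best[u]) + 1e-12:
            continue              # stale heap entry
        for v, p in adj[u]:
            rel = best[u] * p
            if rel > best.get(v, 0.0):
                best[v] = rel
                parent[v] = u
                heapq.heappush(heap, (-math.log(rel), v))
    return parent, best

parent, best = max_reliability_tree("s")
for node in sorted(best):
    via = parent[node] or "itself (source)"
    print(f"{node}: reached via {via}, path reliability {best[node]:.3f}")
```

Resource quotas of the kind mentioned in the abstract could then be layered on top of such a construction, for example by bounding the number of children each node accepts while the tree is built.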