70 results for Ephemeral Computation
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing dosage regimens based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking study was to assess and compare computer tools designed to support TDM clinical activities.
Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.
Results: 12 software tools were identified, tested, and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.
Conclusions: Although 2 software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been made in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability, and automated report generation.
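The Bayesian a posteriori adaptation these tools implement can be illustrated with a minimal sketch: combine a population PK prior with one measured concentration to obtain a maximum a posteriori (MAP) estimate of an individual parameter. The one-compartment IV-bolus model, the grid search, and every numeric value below are illustrative assumptions of ours, not taken from any of the benchmarked programs.

```python
import math

# Hypothetical one-compartment IV-bolus model: C(t) = (dose / V) * exp(-(CL / V) * t)
def predicted_conc(dose, t, cl, v):
    return (dose / v) * math.exp(-(cl / v) * t)

def map_clearance(dose, t_obs, c_obs, cl_pop, omega, sigma, v):
    """MAP (a posteriori) estimate of clearance by dense grid search.

    Minimises ((log cl - log cl_pop) / omega)^2 + ((c_pred - c_obs) / sigma)^2,
    i.e. a log-normal prior on CL plus an additive-error likelihood term.
    """
    best_cl, best_obj = cl_pop, float("inf")
    for i in range(1, 2000):
        cl = cl_pop * 0.1 + i * (cl_pop * 3.0) / 2000  # grid over plausible CL values
        prior = ((math.log(cl) - math.log(cl_pop)) / omega) ** 2
        resid = ((predicted_conc(dose, t_obs, cl, v) - c_obs) / sigma) ** 2
        obj = prior + resid
        if obj < best_obj:
            best_cl, best_obj = cl, obj
    return best_cl
```

A measured concentration higher than the population prediction pulls the MAP clearance below the population value, which is exactly the individualization step the benchmarked tools automate.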
Abstract:
False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features has been built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated into the developed forensic intelligence model.
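The two evaluation approaches named above can be sketched generically: a fixed threshold on similarity scores gives binary type I/II error rates, and a likelihood ratio compares the score's density under the same-source and different-source distributions. The Gaussian density model and all data below are our own illustration, not the study's actual scoring metrics.

```python
import math

def error_rates(intra, inter, threshold):
    """Binary classification of similarity scores at a fixed threshold.

    Type I (false positive): different-source pairs scoring >= threshold.
    Type II (false negative): same-source pairs scoring < threshold.
    """
    fp = sum(s >= threshold for s in inter) / len(inter)
    fn = sum(s < threshold for s in intra) / len(intra)
    return fp, fn

def likelihood_ratio(score, intra, inter):
    """LR = p(score | same source) / p(score | different sources),
    each distribution modelled here as a normal fitted to the scores."""
    def normal_pdf(x, data):
        m = sum(data) / len(data)
        var = sum((d - m) ** 2 for d in data) / len(data)
        return math.exp(-(x - m) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return normal_pdf(score, intra) / normal_pdf(score, inter)
```

An LR well above 1 supports a common source; well below 1, different sources. The binary view compresses the same information into two error rates at a chosen operating point.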
Abstract:
Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zones. Sudden and rather unexpected impacts not only damage buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies should be carried out. Since data availability for larger regions is mostly the key limiting factor, empirical models with low data requirements are suitable for first overviews. In this study, a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. By means of a methodology based on empirical rules for source identification and the empirical angle-of-reach concept for the 2-D runout computation, a worst-case scenario was first modelled. In a second step, scenarios for high-, medium- and low-frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This fact emphasises the need for careful interpretation of the results while remaining conscious of the inherent assumptions of the model used and the quality of the input data.
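The angle-of-reach concept mentioned above reduces to one relation: the horizontal runout length L follows from the elevation drop H and an empirical reach angle α via tan(α) = H / L. A minimal sketch (the numbers in the usage note are illustrative, not values from the Barcelonnette study):

```python
import math

def runout_length(drop_height_m, reach_angle_deg):
    """Empirical angle-of-reach (Fahrboeschung) concept: horizontal runout
    length L of a mass movement from total elevation drop H and reach
    angle alpha, via tan(alpha) = H / L."""
    return drop_height_m / math.tan(math.radians(reach_angle_deg))
```

Lower reach angles (more mobile flows) yield longer runouts for the same drop, which is why the choice of angle controls the footprint of each frequency scenario.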
Abstract:
A combined strategy based on the computation of absorption energies, using the ZINDO/S semiempirical method, for a statistically relevant number of thermally sampled configurations extracted from QM/MM trajectories is used to establish a one-to-one correspondence between the structures of the different early intermediates (dark, batho, BSI, lumi) involved in the initial steps of the rhodopsin photoactivation mechanism and their optical spectra. A systematic analysis of the results based on a correlation-based feature selection algorithm shows that the origin of the color shifts among these intermediates can be mainly ascribed to alterations in intrinsic properties of the chromophore structure, which are tuned by several residues located in the protein binding pocket. In addition to the expected electrostatic and dipolar effects caused by the charged residues (Glu113, Glu181) and to strong hydrogen bonding with Glu113, other interactions such as π-stacking with Ala117 and Thr118 backbone atoms, van der Waals contacts with Gly114 and Ala292, and CH/π weak interactions with Tyr268, Ala117, Thr118, and Ser186 side chains are found to make non-negligible contributions to the modulation of the color tuning among the different rhodopsin photointermediates.
Abstract:
BACKGROUND: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations, without requiring likelihood computations. RESULTS: Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates, and to find that males show smaller population sizes but much higher levels of migration than females. CONCLUSION: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice and validation of the estimation procedure, to visualization of the results.
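The rejection-sampling variant listed above is the simplest ABC algorithm and fits in a few lines. The toy usage (estimating the mean of a normal from its sample mean) is our own illustration of the principle, not ABCtoolbox code.

```python
import random

def abc_rejection(observed_stat, prior_sample, simulate, distance, eps, n_sims):
    """Plain ABC rejection: draw a parameter from the prior, simulate data
    under it, and keep the draw whenever its summary statistic lands within
    eps of the observed statistic. The accepted draws approximate the
    posterior without any likelihood evaluation."""
    accepted = []
    for _ in range(n_sims):
        theta = prior_sample()
        if distance(simulate(theta), observed_stat) <= eps:
            accepted.append(theta)
    return accepted

# Toy usage: infer the mean of a normal distribution from its sample mean.
random.seed(0)
posterior = abc_rejection(
    observed_stat=2.0,
    prior_sample=lambda: random.uniform(-5, 5),
    simulate=lambda th: sum(random.gauss(th, 1.0) for _ in range(50)) / 50,
    distance=lambda a, b: abs(a - b),
    eps=0.3,
    n_sims=3000,
)
```

The accepted draws cluster around the true mean; shrinking `eps` sharpens the approximation at the cost of a lower acceptance rate, which is the basic trade-off all ABC samplers refine.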
Abstract:
Motivation: Genome-wide association studies have become widely used tools to study the effects of genetic variants on complex diseases. While it is of great interest to extend existing analysis methods by considering interaction effects between pairs of loci, the large number of possible tests presents a significant computational challenge. The number of computations is further multiplied in the study of gene expression quantitative trait mapping, in which tests are performed for thousands of gene phenotypes simultaneously. Results: We present FastEpistasis, an efficient parallel solution extending the PLINK epistasis module, designed to test for epistasis effects when analyzing continuous phenotypes. Our results show that the algorithm scales with the number of processors and offers a reduction in computation time when several phenotypes are analyzed simultaneously. FastEpistasis is capable of testing the association of a continuous trait with all single nucleotide polymorphism (SNP) pairs from 500 000 SNPs, totaling 125 billion tests, in a population of 5000 individuals in 29, 4 or 0.5 days using 8, 64 or 512 processors, respectively.
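The scale of the exhaustive pairwise scan is easy to reproduce: 500 000 SNPs give the ~125 billion unordered pairs quoted above, and splitting contiguous ranges of pair indices across processors is one simple way to picture the parallelization. This is a sketch of ours, not the FastEpistasis scheduling code.

```python
def pair_count(n_snps):
    """Number of unordered SNP pairs in an exhaustive epistasis scan."""
    return n_snps * (n_snps - 1) // 2

def chunk_ranges(total, n_workers):
    """Split `total` test indices into contiguous, near-equal half-open
    ranges [start, end), one per worker."""
    base, rem = divmod(total, n_workers)
    ranges, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < rem else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

With 512 workers each range holds roughly 244 million tests, which is why near-linear scaling with processor count translates directly into the day-scale runtimes reported.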
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed that the TTF depends on image contrast and noise level for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
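The modified figure of merit can be written down directly. The discretised form below uses the standard NPW expression with the TTF substituted for the MTF, as described above; the variable names (signal spectrum S, transfer function T, noise power spectrum W) and the toy values in the test are our own illustration, not the study's data.

```python
def npw_snr2(signal_spec, ttf, nps, df):
    """Discretised NPW model-observer figure of merit:

        SNR^2 = ( sum |S|^2 T^2 df )^2 / sum |S|^2 T^4 W df

    with the TTF used as the transfer function T in place of the MTF,
    S the task (signal) spectrum and W the noise power spectrum, all
    sampled on the same frequency grid with spacing df."""
    num = sum(s * s * t * t for s, t in zip(signal_spec, ttf)) * df
    den = sum(s * s * t ** 4 * w for s, t, w in zip(signal_spec, ttf, nps)) * df
    return num * num / den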
Abstract:
When decommissioning a nuclear facility, it is important to be able to estimate the activity levels of potentially radioactive samples and compare them with the clearance values defined by regulatory authorities. This paper presents a method for calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation to experimental data obtained with a simple point source permits the computation of absolute calibration factors for more complex geometries with an accuracy of slightly more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is supposed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, such as a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct to within about 20%, provided sample density is taken as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that, if the sample is assumed to be homogeneously contaminated, activity could be largely underestimated in the event of a centrally located hotspot and overestimated for a peripherally located hotspot. This demonstrates the usefulness of complementing experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be measured directly because of a lack of available material or specific geometries.
Abstract:
The M-Coffee server is a web server that computes multiple sequence alignments (MSAs) by running several MSA methods and combining their output into a single model. This allows users to run all the methods of their choice simultaneously without having to choose one of them arbitrarily. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or protein) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is free, open-source software distributed under the GPL license and is available either as a standalone package or as a web service from www.tcoffee.org.
Abstract:
A workshop recently held at the Ecole Polytechnique Federale de Lausanne (EPFL, Switzerland) was dedicated to understanding the genetic basis of adaptive change, taking stock of the different approaches developed in theoretical population genetics and landscape genomics and bringing together knowledge accumulated in both research fields. An important challenge in theoretical population genetics is to incorporate the effects of demographic history and population structure, but significant design problems (e.g. the focus on populations as units, the focus on hard selective sweeps, and the lack of a hypothesis-based framework in the design of the statistical tests) reduce its capability of detecting adaptive genetic variation. In parallel, landscape genomics offers a solution to several of these problems and provides a number of advantages (e.g. fast computation, integration of landscape heterogeneity). However, the approach makes several implicit assumptions that should be carefully considered (e.g. that selection has had enough time to create a functional relationship between the allele distribution and the environmental variable, or that this functional relationship is constant). To address the respective strengths and weaknesses mentioned above, the workshop brought together a panel of experts from both disciplines to present their work and discuss the relevance of combining these approaches, possibly resulting in a joint software solution in the future.
Abstract:
Two-way alternating automata were introduced by Vardi in order to study the satisfiability problem for the modal μ-calculus extended with backwards modalities. In this paper, we present a very simple proof, by way of Wadge games, of the strictness of the hierarchy of Mostowski indices of two-way alternating automata over trees.
Abstract:
We have constructed a forward modelling code in Matlab capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires, frequency and transient soundings, and present new solutions for the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the Fast Hankel Transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. The modelling results are in agreement with those of other authors, but the computation time is longer than that of other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as the quasi-static approximation. Thus, modelling of effects based on changes in the magnetic permeability and the permittivity is also possible.
Abstract:
Understanding the emplacement and growth of intrusive bodies in terms of mechanism, duration, thermal evolution and rates is fundamental to crustal evolution. Recent studies show that many plutons grow over several Ma by in situ accretion of discrete magma pulses, which constitute small-scale magmatic reservoirs. The residence time of magmas, and hence their capacity to interact and differentiate, is controlled by the local thermal environment. The latter is highly dependent on 1) the emplacement depth, 2) the composition of the magmas and country rock, 3) the thermal conductivity of the country rock, 4) the rate of magma injection and 5) the geometry of the intrusion. In shallow-level plutons, where magmas solidify quickly, evidence for magma mixing and/or differentiation processes is considered by many authors to be inherited from deeper levels. This work shows, however, that in-situ differentiation and magma interactions occurred within basaltic and felsic sills at shallow depth (0.3 GPa) in the St-Jean-du-Doigt (SJDD) bimodal intrusion, France. This intrusion was emplaced ca. 347 Ma ago (ID-TIMS U/Pb on zircon) in the Precambrian crust of the Armorican massif and preserves remarkable sill-like emplacement processes of bimodal mafic-felsic magmas. Field evidence coupled with high-precision zircon U-Pb dating documents progressive thermal maturation within the incrementally built lopolith. Early m-thick mafic sills (eastern part) form the roof of the intrusion and are homogeneous and fine-grained, with planar contacts with neighboring felsic sills; within a minimal 0.8 Ma time span, the system gets warmer (western part). Sills are emplaced by under-accretion beneath the older eastern part, where they interact and mingle. A striking feature of this younger, warmer part is in-situ differentiation of the mafic sills in the top 40 cm of the layer, which suggests liquid survival in the shallow crust.
Rheological and thermal models were performed in order to determine the parameters required to allow these observed in-situ differentiation-accumulation processes. Strong constraints such as the total emplacement duration (ca. 0.8 Ma, TIMS dates) and pluton thickness (1.5 km, gravity model) allow a quantitative estimation of the various parameters required (injection rates, incubation time, ...). The results show that in-situ differentiation may be achieved in less than 10 years at such shallow depth, provided that: (1) The differentiating sills are injected beneath consolidated, yet still warm, basalt sills, which act as low-conductivity insulating screens (eastern part formation in the SJDD intrusion). The latter are emplaced in a very short time (800 years) at a high injection rate (0.5 m/y) in order to create a "hot zone" in the shallow crust (incubation time). This implies that nearly 1/3 of the pluton (400 m) is emplaced by sustained magmatic activity occurring on a short time scale at the very beginning of the system. (2) Once the incubation time is achieved, the calculations show that a small hot zone is created at the base of the sill pile, where new injections stay above their solidus temperature and may interact and differentiate. Extraction of differentiated residual liquids might eventually take place, and these may mix with newly injected magma, as documented in active syn-emplacement shear zones within the "warm" part of the pluton. (3) Finally, the models show that in order to maintain a permanent hot zone at shallow level, the injection rate must be 0.03 m/y, with injection of 5 m thick basaltic sills every 130 yr, implying the formation of a 15 km thick pluton. As this thickness contradicts that calculated for SJDD (1.5 km) and greatly exceeds the average thickness observed for many shallow-level plutons, I infer that there is no permanent hot zone (or magma chamber) at such shallow levels.
I rather propose the formation of small, ephemeral (10-15 yr) reservoirs, which represent only small portions of the final size of the pluton. Thermal calculations show that, in the case of SJDD, 5 m thick basaltic sills emplaced every 1500 yr allow the formation of such ephemeral reservoirs. The latter are formed by several sills, which are in a mushy state and may interact and differentiate for a short time. The mineralogical, chemical and isotopic data presented in this study suggest a signature intermediate between E-MORB-like and arc-like for the SJDD mafic sills and feeder dykes. The mantle source involved produced hydrated magmas and may be asthenosphere modified by "arc-type" components, probably related to a subducting slab. Combined fluid-mobile/immobile trace elements and Sr-Nd isotopes suggest that such subduction components are mainly fluids derived from altered oceanic crust, with a minor effect from the subducted sediments. The close match between the SJDD compositions and BABB may point to a continental back-arc setting with little crustal contamination. If so, the SJDD intrusion is a major witness of an extensional tectonic regime during the Early Carboniferous, linked to the subduction of the Rheno-Hercynian Ocean beneath the Variscan terranes. Also of interest is the unusual association of cogenetic (same isotopic compositions) K-feldspar A-type granite and albite-granite. A-type granites may form by magma mixing between the mafic magma and crustal melts. Alternatively, they might derive from the melting of a biotite-bearing quartz-feldspathic crustal protolith triggered by early mafic injections at low crustal levels. Albite-granite may form by remelting of plagioclase cumulates issued from A-type magma differentiation.
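The rate arithmetic underlying these scenarios is simple enough to check directly. The two helpers below merely restate the quoted figures (e.g. 400 m emplaced at 0.5 m/y in 800 years; 5 m sills every 1500 yr in the ephemeral-reservoir scenario) and are our own illustration, not the thermal model itself.

```python
def injection_rate(sill_thickness_m, recurrence_yr):
    """Mean magma injection rate from sill thickness and recurrence interval."""
    return sill_thickness_m / recurrence_yr

def emplacement_duration(thickness_m, rate_m_per_yr):
    """Time needed to build a given thickness at a constant injection rate."""
    return thickness_m / rate_m_per_yr
```

The contrast between the two scenarios is an order of magnitude in rate, which is what separates a sustained hot zone from short-lived mushy reservoirs.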
Abstract:
Many eukaryote organisms are polyploid. However, despite their importance, evolutionary inference of polyploid origins and modes of inheritance has been limited by a need for analyses of allele segregation at multiple loci using crosses. The increasing availability of sequence data for nonmodel species now allows the application of established approaches for the analysis of genomic data in polyploids. Here, we ask whether approximate Bayesian computation (ABC), applied to realistic traditional and next-generation sequence data, allows correct inference of the evolutionary and demographic history of polyploids. Using simulations, we evaluate the robustness of evolutionary inference by ABC for tetraploid species as a function of the number of individuals and loci sampled, and the presence or absence of an outgroup. We find that ABC adequately retrieves the recent evolutionary history of polyploid species on the basis of both old and new sequencing technologies. The application of ABC to sequence data from diploid and polyploid species of the plant genus Capsella confirms its utility. Our analysis strongly supports an allopolyploid origin of C. bursa-pastoris about 80 000 years ago. This conclusion runs contrary to previous findings based on the same data set but using an alternative approach and is in agreement with recent findings based on whole-genome sequencing. Our results indicate that ABC is a promising and powerful method for revealing the evolution of polyploid species, without the need to attribute alleles to a homeologous chromosome pair. The approach can readily be extended to more complex scenarios involving higher ploidy levels.
Abstract:
The decision-making process regarding drug dose, regularly used in everyday medical practice, is critical to patients' health and recovery. It is a challenging process, especially for drugs with narrow therapeutic ranges, in which a medical doctor decides the quantity (dose amount) and frequency (dose interval) on the basis of a set of available patient features and the doctor's clinical experience (a priori adaptation). Computer support in drug dose administration makes the prescription procedure faster, more accurate, more objective, and less expensive, with a tendency to reduce the number of invasive procedures. This paper presents an advanced integrated Drug Administration Decision Support System (DADSS) to help clinicians and patients with dose computation. Based on a support vector machine (SVM) algorithm enhanced with the random sample consensus (RANSAC) technique, this system is able to predict drug concentration values and computes the ideal dose amount and dose interval for a new patient. With an extension combining the SVM method and an explicit analytical model, the advanced integrated DADSS system is able to compute drug concentration-to-time curves for a patient under different conditions. A feedback loop is enabled to update the curve with a newly measured concentration value to make it more personalized (a posteriori adaptation).
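The random sample consensus step used to harden the regression can be illustrated on a toy line-fitting problem. The code below is a generic RANSAC sketch of ours (minimal two-point samples, an inlier count, and a least-squares refit on the consensus set), not the DADSS implementation, which pairs the technique with an SVM.

```python
import random

def ransac_fit(points, n_iter, inlier_tol, min_inliers):
    """RANSAC sketch: repeatedly fit a line to a random 2-point sample,
    keep the model with the most inliers, then refit by least squares on
    that consensus set. Returns (slope, intercept) or None."""
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if len(best_inliers) < min_inliers:
        return None
    # Least-squares refit on the consensus set only, so gross outliers
    # never influence the final model.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

The design point this illustrates: erratic concentration measurements act like the gross outliers here, and the consensus step keeps them from biasing the fitted dose-response model.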