948 results for distributed model
Abstract:
The Computational Biophysics Group at the Universitat Pompeu Fabra (GRIB-UPF) hosts two unique computational resources dedicated to the execution of large-scale molecular dynamics (MD) simulations: (a) the ACEMD molecular-dynamics software, used on standard personal computers with graphics processing units (GPUs); and (b) the GPUGRID.net computing network, supported by users distributed worldwide who volunteer GPUs for biomedical research. We leveraged these resources and developed studies, protocols and open-source software to elucidate energetics and pathways of a number of biomolecular systems, with a special focus on flexible proteins with many degrees of freedom. First, we characterized ion permeation through the bactericidal model protein Gramicidin A, conducting one of the largest studies to date with the steered-MD biasing methodology. Next, we addressed an open problem in structural biology, the determination of drug-protein association kinetics; we reconstructed the binding free energy, association and dissociation rates of a drug-like model system through a spatial decomposition and a Markov-chain analysis. The work was published in the Proceedings of the National Academy of Sciences and became one of the few landmark papers elucidating a ligand-binding pathway. Furthermore, we investigated the unstructured Kinase Inducible Domain (KID), a 28-residue peptide central to signalling and transcriptional response; the kinetics of this challenging system was modelled with a Markovian approach in collaboration with Frank Noé's group at the Freie Universität Berlin. The impact of the funding includes three peer-reviewed publications in high-impact journals; three more papers under review; four MD analysis components released as open-source software; MD protocols; didactic material; and code for the hosting group.
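As an illustration of the Markov-chain analysis mentioned above, the following is a minimal sketch, assuming MD trajectories have already been discretized into states by a spatial decomposition; the toy state sequences, lag time and timescale extraction are hypothetical stand-ins, not the published protocol.

```python
import numpy as np

def transition_matrix(dtrajs, n_states, lag):
    """Row-normalized transition-count matrix estimated at a given lag time."""
    counts = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for t in range(len(traj) - lag):
            counts[traj[t], traj[t + lag]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def slowest_timescale(T, lag_dt):
    """Slowest relaxation timescale from the second-largest eigenvalue of T."""
    eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
    return -lag_dt / np.log(eigvals[1])

# toy example: two short trajectories discretized into 3 states
dtrajs = [np.array([0, 0, 1, 1, 2, 2, 1, 0]), np.array([2, 2, 1, 0, 0, 1, 2, 2])]
T = transition_matrix(dtrajs, n_states=3, lag=1)
print(T)
print(slowest_timescale(T, lag_dt=1.0))  # relaxation time in units of the trajectory timestep
```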
Abstract:
In a distributed key distribution scheme, a set of servers helps a set of users in a group to securely obtain a common key. Security means that an adversary who corrupts some servers and some users has no information about the key of a non-corrupted group. In this work, we formalize the security analysis of one such scheme, an analysis that was not considered in the original proposal. We prove that the scheme is secure in the random oracle model, assuming that the Decisional Diffie-Hellman (DDH) problem is hard to solve. We also detail a possible modification of that scheme and of a previous one, which allows us to prove the security of the schemes without assuming that a specific hash function behaves as a random oracle. As usual, this improvement in the security of the schemes comes at the cost of an efficiency loss.
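For readers unfamiliar with the setting, here is a minimal toy sketch of the general idea of distributed key distribution: servers hold Shamir-style shares of a master value, a threshold of server contributions reconstructs it, and a hash (standing in for the random oracle) derives the group key. The field modulus, threshold and key-derivation step are illustrative assumptions, not the scheme analysed in the paper.

```python
import hashlib
import random

P = 2**127 - 1  # toy prime field modulus (illustrative only)

def make_shares(secret, threshold, n_servers):
    """Shamir shares of a master secret, one per server."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n_servers + 1)}

def reconstruct(shares):
    """Lagrange interpolation at zero from a threshold of server contributions."""
    secret = 0
    for i, y in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret

def group_key(value, group_id):
    """Derive the group key with a hash, modelling the random oracle."""
    return hashlib.sha256(f"{value}:{group_id}".encode()).hexdigest()

shares = make_shares(secret=123456789, threshold=3, n_servers=5)
subset = {i: shares[i] for i in (1, 3, 5)}
print(group_key(reconstruct(subset), group_id="conference-42"))
```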
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source-area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choices of datasets and algorithms are open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source-area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
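The following is a minimal sketch of a Holmgren-type multiple-flow-direction weighting of the kind referred to above; the exponent x, the central-cell height offset dh used to damp small DEM variations, and the toy DEM are assumptions for illustration, not Flow-R's exact implementation.

```python
import numpy as np

def holmgren_weights(dem, row, col, cellsize, x=4.0, dh=2.0):
    """Distribute flow from a cell to its lower 8 neighbours.

    Weights follow Holmgren (1994): p_i ~ (tan beta_i)^x over downslope
    neighbours; dh raises the central cell before computing slopes, one
    simple way of smoothing DEM noise (an assumption, not Flow-R's rule).
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    weights = {}
    z0 = dem[row, col] + dh
    for dr, dc in offsets:
        r, c = row + dr, col + dc
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            continue
        dist = cellsize * (2 ** 0.5 if dr and dc else 1.0)
        tan_beta = (z0 - dem[r, c]) / dist
        if tan_beta > 0:                      # only downslope directions receive flow
            weights[(r, c)] = tan_beta ** x
    total = sum(weights.values())
    return {cell: w / total for cell, w in weights.items()} if total else {}

# toy 3x3 DEM, 10 m cells
dem = np.array([[10., 9., 8.], [9., 8., 6.], [8., 7., 5.]])
print(holmgren_weights(dem, 1, 1, cellsize=10.0))
```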
Abstract:
Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information; and (2) in the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that are outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
Abstract:
The integrity of the cornea, the most anterior part of the eye, is indispensable for vision. Forty-five million individuals worldwide are bilaterally blind and another 135 million have severely impaired vision in both eyes because of loss of corneal transparency; treatments range from local medications to corneal transplants, and more recently to stem cell therapy. The corneal epithelium is a squamous epithelium that is constantly renewing, with a vertical turnover of 7 to 14 days in many mammals. Identification of slow-cycling cells (label-retaining cells) in the limbus of the mouse has led to the notion that the limbus is the niche for the stem cells responsible for the long-term renewal of the cornea; hence, the corneal epithelium is supposedly renewed by cells generated at and migrating from the limbus, in marked opposition to other squamous epithelia in which each resident stem cell is responsible for a limited area of epithelium. Here we show that the corneal epithelium of the mouse can be serially transplanted, is self-maintained and contains oligopotent stem cells with the capacity to generate goblet cells if provided with a conjunctival environment. Furthermore, the entire ocular surface of the pig, including the cornea, contains oligopotent stem cells (holoclones) with the capacity to generate individual colonies of corneal and conjunctival cells. Therefore, the limbus is not the only niche for corneal stem cells and corneal renewal is not different from other squamous epithelia. We propose a model that unifies our observations with the literature and explains why the limbal region is enriched in stem cells.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the complexity of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite-sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo-dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
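A minimal sketch of the two-part selection idea follows, under simplifying assumptions: fixed finite candidate grids stand in for the data-driven empirical covers, and a generic sqrt(log|cover|/n) term stands in for the paper's complexity penalty.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_risk(f, X, y):
    """Mean squared prediction error of rule f on the sample (X, y)."""
    return np.mean((f(X) - y) ** 2)

def select_model(X, y, model_classes, penalty_scale=1.0):
    """Two-part selection: the first half of the data picks one candidate per
    class by empirical risk; the second half compares the candidates with a
    penalty growing with the (log) size of each class's candidate set."""
    n = len(X) // 2
    X1, y1, X2, y2 = X[:n], y[:n], X[n:], y[n:]
    best, best_score = None, np.inf
    for candidates in model_classes:
        f = min(candidates, key=lambda g: empirical_risk(g, X1, y1))
        penalty = penalty_scale * np.sqrt(np.log(len(candidates)) / n)
        score = empirical_risk(f, X2, y2) + penalty
        if score < best_score:
            best, best_score = f, score
    return best

# toy data and candidate grids (random polynomials of increasing degree)
X = rng.uniform(-1, 1, 200)
y = np.sin(np.pi * X) + 0.1 * rng.normal(size=200)
classes = [[np.polynomial.Polynomial(rng.normal(size=d + 1)) for _ in range(100 * (d + 1))]
           for d in (1, 3, 5, 7)]
print(select_model(X, y, classes))
```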
Abstract:
In this paper we analyze the time of ruin in a risk process with the interclaim times being Erlang(n) distributed and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.
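As a numerical companion to the analytic results, here is a minimal Monte Carlo sketch of the time of ruin in a risk process with Erlang(n) interclaim times and a constant dividend barrier; exponential claim amounts (a special case of distributions with rational Laplace transform) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_time(u, b, c, lam, n, claim_mean, horizon=1_000.0):
    """Simulate one path of a risk process with Erlang(n, lam) interclaim times,
    premium rate c, exponential claims, initial surplus u and dividend barrier b.
    Returns the time of ruin, or None if ruin does not occur before `horizon`."""
    t, surplus = 0.0, u
    while t < horizon:
        wait = rng.gamma(shape=n, scale=1.0 / lam)   # Erlang(n) interclaim time
        # premiums accrue until the barrier b, above which dividends are paid out
        surplus = min(surplus + c * wait, b)
        t += wait
        surplus -= rng.exponential(claim_mean)        # claim amount
        if surplus < 0:
            return t
    return None

times = [ruin_time(u=5.0, b=15.0, c=1.2, lam=2.0, n=2, claim_mean=1.0) for _ in range(10_000)]
ruined = [t for t in times if t is not None]
print(f"ruin frequency ~ {len(ruined) / len(times):.3f}, mean ruin time ~ {np.mean(ruined):.2f}")
```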
Abstract:
Background and Objectives: Precursor lesions of oesophagus adenocarcinoma constitute a clinical dilemma. Photodynamic therapy (PDT) is an effective treatment for this indication, but it is difficult to optimise without an appropriate animal model. For this reason, we assessed the sheep model for PDT in the oesophagus with the photosensitiser meta-(tetra-hydroxyphenyl)chlorin (mTHPC). Materials and Methods: Twelve sheep underwent intravenous mTHPC injection, blood sampling and fluorescence measurements. mTHPC's pharmacokinetics was measured in vivo and in plasma by fluorescence spectroscopy. Biopsies of sheep oesophagus were compared to corresponding human tissue, and mTHPC's biodistribution was studied under fluorescence microscopy. Finally, the sheep oesophageal mucosa was irradiated 4 days after mTHPC injection. Results: Histologically, the sheep and human oesophagus were closely comparable, with the exception of additional fatty tissue in the sheep oesophagus. mTHPC's pharmacokinetics in sheep and human plasma were similar, with a maximum concentration in the sheep 10 hours after i.v. injection. mTHPC's pharmacokinetics in vivo reached its maximum after 30-50 hours, then decreased to background levels, as in humans under similar conditions. Two days after injection, mTHPC was mainly distributed in the lamina propria, followed by penetration into the epithelium. The sheep and human tissue sensitivity to mTHPC PDT was similar. Conclusion: This model showed many similarities with humans as to mTHPC's plasma and tissue pharmacokinetics and tissue PDT response, making it suitable for optimising oesophagus PDT. Lasers Surg. Med. 41:643-652, 2009.
Abstract:
Neuropathic pain is a major health issue and is frequently accompanied by allodynia (painful sensations in response to normally non-painful stimulations) and unpleasant paresthesia/dysesthesia, pointing to alterations in sensory pathways normally dedicated to the processing of non-nociceptive information. Interestingly, mounting evidence indicates that central glial cells are key players in allodynia, partly due to changes in the astrocytic capacity to scavenge extracellular glutamate and gamma-aminobutyric acid (GABA), through changes in their respective transporters (EAAT and GAT). In the present study, we investigated the glial changes occurring in the dorsal column nuclei, the major target of normally innocuous sensory information, in the rat spared nerve injury (SNI) model of neuropathic pain. We report that, together with a robust microglial and astrocytic reaction in the ipsilateral gracile nucleus, the GABA transporter GAT-1 is upregulated with no change in GAT-3 or glutamate transporters. Furthermore, [(3)H]GABA reuptake on a crude synaptosome preparation shows that transporter activity is functionally increased ipsilaterally in SNI rats. This GAT-1 upregulation appears evenly distributed in the gracile nucleus and colocalizes with astrocytic activation. Neither glial activation nor GAT-1 modulation was detected in the cuneate nucleus. Together, the present results point to GABA transport in the gracile nucleus as a putative therapeutic target against abnormal sensory perceptions related to neuropathic pain.
Abstract:
An Actively Heated Fiber Optics (AHFO) method to estimate soil moisture is tested and the analysis technique improved upon. The measurements were performed in a lysimeter uniformly packed with loam soil with variable water content profiles. In the first meter of the soil profile, 30 m of fiber optic cable were installed in a coil of 12 loops. The metal sheath armoring the fiber cable was used as an electrical resistance heater to generate a heat pulse, and the soil response was monitored with a Distributed Temperature Sensing (DTS) system. We study the cooling following three continuous heat pulses of 120 s at 36 W m(-1) by means of a long-time approximation of radial heat conduction. The soil volumetric water contents were then inferred from the estimated thermal conductivities through a specifically calibrated model relating thermal conductivity and volumetric water content. To use the pre-asymptotic data we employed a time correction that allowed the volumetric water content to be estimated with a precision of 0.01-0.035 (m(3) m(-3)). A comparison of the AHFO measurements with soil-moisture measurements obtained with calibrated capacitance-based probes gave good agreement for wetter soils [discrepancy between the two methods was less than 0.04 (m(3) m(-3))]. In the shallow, drier soils, the AHFO method underestimated the volumetric water content due to the longer time required for the temperature increment to become asymptotic in less thermally conductive media [discrepancy between the two methods was larger than 0.1 (m(3) m(-3))]. The present work suggests that future applications of the AHFO method should include longer heat pulses, that longer heating and cooling events be analyzed, and that temperature increments ideally be measured at higher frequency.
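To make the analysis concrete, below is a minimal sketch of the long-time line-source approximation applied to the cooling after a finite heat pulse, followed by inversion of a hypothetical linear lambda(theta) calibration; the study uses its own laboratory-calibrated relation, so the calibration coefficients here are placeholders.

```python
import numpy as np

def thermal_conductivity_from_cooling(t_s, delta_T, q_w_per_m, t_heat_s):
    """Long-time line-source approximation for cooling after a finite pulse:
    dT ~ (q / 4 pi lambda) * ln(t / (t - t_heat)), t measured from pulse start."""
    xi = np.log(t_s / (t_s - t_heat_s))
    slope = np.polyfit(xi, delta_T, 1)[0]
    return q_w_per_m / (4.0 * np.pi * slope)

def water_content(lam, a=0.5, b=2.0):
    """Invert a hypothetical linear calibration lambda = a + b * theta."""
    return (lam - a) / b

# synthetic cooling curve: 120 s pulse at 36 W/m, true lambda = 1.0 W m-1 K-1
t = np.linspace(200.0, 600.0, 50)
dT = 36.0 / (4 * np.pi * 1.0) * np.log(t / (t - 120.0)) + 0.05
lam = thermal_conductivity_from_cooling(t, dT, 36.0, 120.0)
print(lam, water_content(lam))  # recovers ~1.0 W m-1 K-1 and theta ~ 0.25 m3 m-3
```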
Abstract:
Application of semi-distributed hydrological models to large, heterogeneous watersheds poses several problems. On one hand, the spatial and temporal variability in catchment features should be adequately represented in the model parameterization, while maintaining the model complexity at an acceptable level to take advantage of state-of-the-art calibration techniques. On the other hand, model complexity enhances uncertainty in adjusted model parameter values, therefore increasing uncertainty in the water routing across the watershed. This is critical for water quality applications, where not only streamflow, but also a reliable estimation of the surface versus subsurface contributions to the runoff, is needed. In this study, we show how a regularized inversion procedure combined with a multiobjective function calibration strategy successfully solves the parameterization of a complex application of a water quality-oriented hydrological model. The final values of several optimized parameters showed significant and consistent differences across geological and landscape features. Although the number of optimized parameters was significantly increased by the spatial and temporal discretization of adjustable parameters, the uncertainty in water routing results remained at reasonable values. In addition, a stepwise numerical analysis showed that the effects on calibration performance due to the inclusion of different data types in the objective function can be inextricably linked. Thus, caution should be taken when adding or removing data from an aggregated objective function.
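As an illustration of combining a multiobjective calibration target with regularization, here is a minimal sketch: normalized misfits for streamflow and a water-quality variable are summed with weights, and a Tikhonov term pulls parameters toward prior values. The function names, weights and toy "model" are assumptions, not the study's actual inversion setup.

```python
import numpy as np

def aggregated_objective(params, prior, sim_fn, obs_q, obs_c, w_q=1.0, w_c=1.0, alpha=0.1):
    """Weighted, normalized misfits for streamflow (q) and a water-quality
    variable (c), plus a Tikhonov term pulling parameters toward prior values."""
    sim_q, sim_c = sim_fn(params)
    misfit_q = np.sum((sim_q - obs_q) ** 2) / np.sum((obs_q - obs_q.mean()) ** 2)
    misfit_c = np.sum((sim_c - obs_c) ** 2) / np.sum((obs_c - obs_c.mean()) ** 2)
    regularization = alpha * np.sum((params - prior) ** 2)
    return w_q * misfit_q + w_c * misfit_c + regularization

# toy usage with a hypothetical two-parameter "model"
rng = np.random.default_rng(3)
obs_q = rng.gamma(2.0, 1.0, 100)                  # synthetic streamflow observations
obs_c = 0.5 * obs_q + rng.normal(0, 0.1, 100)     # synthetic concentration observations
sim_fn = lambda p: (p[0] * obs_q, p[1] * obs_q)   # placeholder simulator
prior = np.array([1.0, 0.5])
print(aggregated_objective(np.array([0.9, 0.45]), prior, sim_fn, obs_q, obs_c))
```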
Abstract:
This work extends previously developed research concerning the use of local model predictive control in differential-drive mobile robots. Experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance. In this sense, the cost function and the prediction horizon are important aspects to be considered. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for integration with the perception system and path planning are briefly introduced; here, monocular image data can be used to plan safe trajectories using goal-attraction potential fields.
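A minimal sketch of a local MPC step for a differential-drive robot follows: unicycle kinematics predict poses over the horizon, and a quadratic cost trades tracking error against control effort; a coarse grid search over (v, w) stands in for a proper optimizer, and all gains, bounds and the reference path are illustrative assumptions.

```python
import numpy as np

def rollout(state, v, w, horizon, dt=0.1):
    """Predict (x, y) poses over the horizon with unicycle (differential-drive) kinematics."""
    x, y, th = state
    poses = []
    for _ in range(horizon):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        poses.append((x, y))
    return np.array(poses)

def mpc_step(state, reference, horizon=10, q_track=1.0, r_effort=0.05):
    """Pick (v, w) minimising tracking error over the prediction horizon plus a
    control-effort term; grid search stands in for a QP or nonlinear solver."""
    best, best_cost = (0.0, 0.0), np.inf
    for v in np.linspace(0.0, 0.5, 11):
        for w in np.linspace(-1.0, 1.0, 21):
            poses = rollout(state, v, w, horizon)
            cost = q_track * np.sum((poses - reference[:horizon]) ** 2) + r_effort * (v**2 + w**2)
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

# toy reference: a straight line along x
reference = np.column_stack([np.linspace(0.1, 1.0, 10), np.zeros(10)])
print(mpc_step(state=(0.0, 0.0, 0.0), reference=reference))
```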
Abstract:
The hyperpolarization-activated cyclic nucleotide-gated (HCN) channels are expressed in pacemaker cells very early during cardiogenesis. This work aimed at determining to what extent these channels are implicated in the electromechanical disturbances induced by a transient oxygen lack, which may occur in utero. Spontaneously beating hearts or isolated ventricles and outflow tracts dissected from 4-day-old chick embryos were exposed to a selective inhibitor of HCN channels (ivabradine, 0.1-10 microM) to establish a dose-response relationship. The effects of ivabradine on the electrocardiogram, excitation-contraction coupling and contractility of hearts submitted to anoxia (30 min) and reoxygenation (60 min) were also determined. The distribution of the predominant channel isoform, HCN4, was established in atria, ventricle and outflow tract by immunoblotting. Intrinsic beating rates of atria, ventricle and outflow tract were 164 +/- 22 (n=10), 78 +/- 24 (n=8) and 40 +/- 12 bpm (n=23, mean +/- SD), respectively. In the whole heart, ivabradine (0.3 microM) slowed the firing rate of atria by 16% and stabilized the PR interval. These effects persisted throughout anoxia-reoxygenation, whereas the variations of QT duration, excitation-contraction coupling and contractility, as well as the types and duration of arrhythmias, were not altered. Ivabradine (10 microM) reduced the intrinsic rate of atria and isolated ventricle by 27% and 52%, respectively, whereas it abolished activity of the isolated outflow tract. Protein expression of HCN4 channels was higher in atria and ventricle than in the outflow tract. Thus, HCN channels are specifically distributed and finely control atrial, ventricular and outflow-tract pacemakers, as well as conduction in the embryonic heart, under normoxia and throughout anoxia-reoxygenation.
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
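To illustrate the classical trusted-third-party solution recalled in the text (not the decentralized protocol developed in this work), here is a minimal sketch in which items are released only once both deposits are present, so either each party obtains the other's item or neither learns anything.

```python
from dataclasses import dataclass, field

@dataclass
class TrustedThirdParty:
    """Minimal TTP-mediated exchange: items are released only once both
    deposits are in, so no participant can obtain the other's item and abort."""
    deposits: dict = field(default_factory=dict)

    def deposit(self, party: str, item: str) -> None:
        self.deposits[party] = item

    def release(self, party_a: str, party_b: str):
        if party_a in self.deposits and party_b in self.deposits:
            # each party receives the other's item
            return self.deposits[party_b], self.deposits[party_a]
        # fairness: either both obtain an item or neither does
        return None, None

ttp = TrustedThirdParty()
ttp.deposit("alice", "signed-contract.pdf")
ttp.deposit("bob", "payment-token-123")
print(ttp.release("alice", "bob"))
```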