847 results for Routing queries
Abstract:
In this paper, we perform a societal and economic risk assessment for debris flows at the regional scale for lower Valtellina, Northern Italy. We apply a simple empirical debris-flow model, FLOW-R, which couples a probabilistic flow-routing algorithm with an energy-line approach, providing the relative probability of transit and the maximum kinetic energy for each cell. By assessing the vulnerability of people and of other exposed elements (buildings, public facilities, crops, woods, communication lines), and their economic value, we calculated the expected annual losses both in terms of lives (societal risk) and goods (direct economic risk). For the societal risk assessment, we distinguish between day and night scenarios. The distribution of people at different moments of the day was considered, accounting for occupational and recreational activities, to provide a more realistic assessment of risk. Market studies were performed in order to assign a realistic economic value to goods, structures, and lifelines. As terrain unit, a 20 m x 20 m cell was used, in accordance with data availability and the spatial resolution required for a risk assessment at this scale. Societal risk for the whole area amounts to 1.98 and 4.22 deaths/year for the day and night scenarios, respectively, with a maximum of 0.013 deaths/year/cell. Economic risk for goods amounts to 1,760,291 €/year, with a maximum of 13,814 €/year/cell.
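The expected annual loss described above combines, per cell, a transit probability, a vulnerability, and an exposed economic value. A minimal sketch of that computation is below; all figures and element types are illustrative, not the paper's data.

```python
# Hypothetical sketch of per-cell expected annual loss:
# risk = P(transit) x vulnerability x exposed value, summed over cells.
# The three example cells below are made up for illustration.

def expected_annual_loss(cells):
    """Sum P(transit) * vulnerability * value over all 20 m x 20 m cells."""
    return sum(p * v * value for p, v, value in cells)

# (annual transit probability, vulnerability in [0, 1], value in EUR)
cells = [
    (0.01, 0.8, 250_000),  # residential building
    (0.05, 0.3, 40_000),   # crop parcel
    (0.02, 0.5, 120_000),  # road segment
]

total = expected_annual_loss(cells)
```

Summing this quantity over day and night exposure scenarios separately is what yields the two societal risk totals quoted in the abstract.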
Abstract:
This article presents the work carried out in creating a free-software application that graphically represents the generated routes and the distribution of the transported items inside a cargo vehicle.
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks. Typical P2P applications are video streaming, file sharing, etc. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading some file, the P2P application is in parallel serving that file to other users. Such peers could have limited hardware resources, e.g., CPU, bandwidth, and memory, or the end-user could decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer.
Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that gives us an approximate view of the system or part of it. This approximate view includes the topology and the reliability of components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize the broadcast reliability. Here, the broadcast reliability is expressed as a function of the selected paths' reliability and of the use of available resources. These resources are modeled as quotas of messages reflecting the receiving and sending capacities at each node. To allow deployment in a large-scale system, we take the available memory at processes into account by limiting the view they must maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that tends toward the global tree overlay and adapts to some constraints of the underlying system. At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
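The thesis expresses broadcast reliability as a function of the selected paths' reliability. One standard way to find maximum-reliability paths, shown below as a sketch (not necessarily the thesis's exact algorithm), is Dijkstra's algorithm on negated log-reliabilities: maximizing a product of per-link reliabilities is equivalent to minimizing the sum of their negative logarithms.

```python
import heapq
from math import exp, log

def most_reliable_paths(graph, root):
    """Dijkstra on -log(link reliability): maximizing the product of link
    reliabilities equals minimizing the sum of their negative logs.
    graph: {node: [(neighbour, reliability in (0, 1]), ...]} (directed).
    Returns {node: best path reliability from root}."""
    best = {root: 0.0}           # cost = -log(path reliability)
    heap = [(0.0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue             # stale heap entry
        for nbr, rel in graph.get(node, []):
            new_cost = cost - log(rel)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr))
    return {n: exp(-c) for n, c in best.items()}

# Toy overlay: the two-hop route A->B->C (0.9 * 0.9 = 0.81) beats the
# direct but lossy link A->C (0.5).
reliability = most_reliable_paths(
    {"A": [("B", 0.9), ("C", 0.5)], "B": [("C", 0.9)]}, "A")
```

A tree overlay built from these best paths (a shortest-path tree in the transformed metric) is one way to realize the reliability-maximizing propagation structure the abstract describes.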
Abstract:
Development of a system capable of processing natural-language queries entered by the user via the keyboard. The system can answer queries in Spanish related to an application domain represented by a relational database.
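A natural-language front end over a relational database, in its simplest form, maps recognized question shapes to parameterized SQL. The sketch below illustrates the idea with a single hypothetical pattern and table; it is not the described system, whose grammar for Spanish would be far richer.

```python
import re
import sqlite3

# Illustrative table; both the schema and the question pattern are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, city TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ana", "Madrid"), ("Luis", "Sevilla")])

def answer(question):
    """Map one question shape, 'who works in <city>?', to a SQL query."""
    m = re.match(r"who works in (\w+)\??$", question, re.IGNORECASE)
    if not m:
        return None  # question shape not understood
    rows = conn.execute("SELECT name FROM employees WHERE city = ?",
                        (m.group(1),)).fetchall()
    return [r[0] for r in rows]
```

Using parameterized queries (the `?` placeholder) rather than string interpolation keeps the translation step safe against SQL injection.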
Abstract:
BACKGROUND: Up to now, the different uptake pathways and the subsequent intracellular trafficking of plasmid DNA have been largely explored. By contrast, the mode of internalization and the intracellular routing of an exogenous mRNA in transfected cells are poorly investigated and remain to be elucidated. The bioavailability of internalized mRNA depends on its intracellular routing and its potential accumulation in dynamic sorting sites for storage: stress granules and processing bodies. This question is of particular significance when a secure transposon-based system able to integrate a therapeutic transgene into the genome is used. Transposon vectors usually require two components: a plasmid DNA carrying the gene of interest, and a source of transposase allowing the integration of the transgene. The principal drawback is the lasting presence of the transposase, which could remobilize the transgene once it has been inserted. Our study focused on the pharmacokinetics of the transposition process mediated by piggyBac transposase mRNA transfection. Exogenous mRNA internalization and trafficking were investigated toward a better understanding and finer control of piggyBac transposase bioavailability. RESULTS: The mRNA prototype designed in this study provides a very narrow expression window for the transposase, which allows high-efficiency transposition with no cytotoxicity. Our data reveal that exogenous transposase mRNA enters cells by clathrin- and caveolae-mediated endocytosis before ending up in late endosomes 3 h after transfection. At this point, the mRNA is dissociated from its carrier and localized in stress granules, but not in cytoplasmic processing bodies. Some weaker signals were observed in stress granules at 18 h and 48 h without causing prolonged production of the transposase. We therefore designed an mRNA that is efficiently translated, with a peak of transposase production 18 h post-transfection and no additional release of the molecule. This confines the integration of the transgene to a very small time window. CONCLUSION: Our results shed light on the processes of exogenous mRNA trafficking, which are crucial for estimating mRNA bioavailability, and increase the biosafety of transgene integration mediated by transposition. This approach provides a new way of limiting the number of transgene copies in the genome and their remobilization through mRNA engineering and trafficking.
Abstract:
Background: To enhance our understanding of complex biological systems such as diseases, we need to put all of the available data into context and use this to detect relations, patterns, and rules that allow predictive hypotheses to be defined. Life science has become a data-rich science, with information about the behaviour of millions of entities such as genes, chemical compounds, diseases, cell types, and organs, organised in many different databases and/or spread throughout the literature. Existing knowledge, such as genotype-phenotype relations or signal transduction pathways, must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist, but so far none has proven entirely satisfactory. Results: To address this challenge we previously developed a generic knowledge management framework, BioXM™, which allows the dynamic, graphical generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results, and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. Conclusions: We generate the first semantically integrated, COPD-specific public knowledge base and find that, for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks including protein-protein interaction, pathway, gene-disease, and gene-compound data, which are used for subsequent data analysis, modelling, and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface, which is currently under development.
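Sub-network retrieval of the kind described above amounts to expanding a typed relation graph outward from a seed entity. The sketch below illustrates this with a breadth-first expansion; the entities, edge types, and two-hop limit are made up for illustration and are not BioXM's data model.

```python
from collections import deque

# Hypothetical typed edges between biological entities.
edges = {
    ("TNF", "IL6"): "protein-protein interaction",
    ("TNF", "COPD"): "gene-disease",
    ("IL6", "dexamethasone"): "gene-compound",
    ("EGFR", "lung cancer"): "gene-disease",
}

def subnetwork(seed, max_hops=2):
    """Breadth-first expansion from a seed entity over typed edges.
    Returns the set of (node pair, edge type) within max_hops of seed."""
    adj = {}
    for (a, b), kind in edges.items():
        adj.setdefault(a, []).append((b, kind))
        adj.setdefault(b, []).append((a, kind))
    depth = {seed: 0}
    picked = set()
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if depth[node] >= max_hops:
            continue
        for nbr, kind in adj.get(node, []):
            picked.add((frozenset((node, nbr)), kind))
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    return picked
```

Starting from TNF, the expansion collects the interaction, disease, and compound edges in its two-hop neighbourhood while leaving the disconnected EGFR component out.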
Abstract:
Winter maintenance, particularly snow removal and the stress of snow-removal materials on public structures, is an enormous budgetary burden on municipalities and non-governmental maintenance organizations in cold climates. Lately, geospatial technologies such as remote sensing, geographic information systems (GIS), and decision support tools have been providing valuable tools for planning snow removal operations. A few researchers have recently used geospatial technologies to develop winter maintenance tools. However, most of these winter maintenance tools, while having the potential to address some of these information needs, are not typically placed in the hands of planners and other interested stakeholders. Most tools are not constructed with a non-technical user in mind and lack an easy-to-use, easily understood interface. A major goal of this project was to implement a web-based Winter Maintenance Decision Support System (WMDSS) that enhances the capacity of stakeholders (city/county planners, resource managers, transportation personnel, citizens, and policy makers) to evaluate different procedures for managing snow removal assets optimally. This was accomplished by integrating geospatial analytical techniques (GIS and remote sensing), the existing snow removal asset management system, and web-based spatial decision support systems. The web-based system was implemented using the ESRI ArcIMS ActiveX Connector and related web technologies, such as Active Server Pages, JavaScript, HTML, and XML. Expert knowledge on snow removal procedures was gathered and integrated into the system in the form of encoded business rules using Visual Rule Studio. The system developed not only manages the resources but also provides expert advice to assist complex decision making, such as routing, optimal resource allocation, and monitoring live weather information. This system was developed in collaboration with Black Hawk County, IA, the city of Columbia, MO, and the Iowa Department of Transportation. The product was also demonstrated to these agencies to improve the usability and applicability of the system.
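Encoding expert snow-removal knowledge as business rules, as the abstract describes, boils down to an ordered list of condition-action pairs. The sketch below shows the pattern in miniature; the thresholds and advice strings are hypothetical, not the WMDSS rules, which were authored in Visual Rule Studio.

```python
# Minimal rule-engine sketch: ordered (predicate, advice) pairs, first
# match wins. Thresholds and advice are illustrative only.
RULES = [
    (lambda c: c["snow_cm"] >= 10 and c["temp_c"] <= -10,
     "plow now; salt is ineffective below -10 C, use sand/abrasives"),
    (lambda c: c["snow_cm"] >= 10,
     "dispatch plows and apply salt on priority routes"),
    (lambda c: c["snow_cm"] >= 2,
     "pre-treat bridges and hills with brine"),
]

def advise(conditions):
    """Return the first matching rule's advice, else no action."""
    for predicate, advice in RULES:
        if predicate(conditions):
            return advice
    return "no action required"
```

Keeping the rules as data, separate from the evaluation loop, is what lets domain experts maintain them without touching application code, which is the point of a rule-studio approach.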
Abstract:
This project evaluates Google in order to find the application's weak points and propose solutions and/or improvements. We begin by introducing the history of Google, as a reference for how and where it emerged; the PageRank algorithm, which is the core of the search engine; and the hardware and software the company has developed with its own technology. We then introduce the background needed to understand how the questionnaires will be evaluated; that is, we explain the Likert scale and the two applications developed to analyse the collected queries. Next, we detail how the evaluation will be carried out and propose a questionnaire for this purpose. Once the questionnaire has been distributed, we obtain the data needed to evaluate Google. After completing the evaluation, five improvements are proposed to give the user more control; to evaluate them, another questionnaire is created. With the data obtained from it, the improvements are evaluated and we analyse whether they are well received by users. To conclude the project, overall conclusions are drawn from all the analysed data and the proposed improvements.
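The PageRank algorithm named above as the core of the search engine can be sketched with standard power iteration; the toy link graph below is made up for illustration.

```python
# Power-iteration PageRank: each page's rank is split among its outgoing
# links; dangling pages spread their rank evenly; the damping factor
# models a random jump. Toy three-page web for illustration.
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [outgoing links]}; returns {page: rank}, summing to 1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                  # dangling page
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Here C, linked from both A and B, ends up with a higher rank than B, which is linked only once: rank flows to pages that many well-ranked pages point at.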
Abstract:
Nearly full-length Circumsporozoite protein (CSP) from Plasmodium falciparum, the C-terminal fragments from both P. falciparum and P. yoelii CSP, and a fragment comprising 351 amino acids of P. vivax MSP1 were expressed in the slime mold Dictyostelium discoideum. Discoidin-tag expression vectors allowed both high yields of these proteins and their purification by a nearly single-step procedure. We exploited the galactose-binding activity of Discoidin Ia to separate the fusion proteins by affinity chromatography on Sepharose-4B columns. Inclusion of a thrombin recognition site allowed cleavage of the Discoidin-tag from the fusion protein. Partial secretion of the protein was obtained via an ER-independent pathway, whereas routing the recombinant proteins to the ER resulted in glycosylation and retention. Yields of proteins ranged from 0.08 to 3 mg l(-1) depending on the protein sequence and the purification conditions. The recognition of purified MSP1 by sera from P. vivax malaria patients was used to confirm the native conformation of the protein expressed in Dictyostelium. The simple purification procedure described here, based on Sepharose-4B, should facilitate the expression and large-scale purification of various Plasmodium polypeptides.
Abstract:
Final-year project on the development of a mobile application for Android platforms, including fragments, activities, and database handling. The Google Maps API is also used to show the user's position and points preloaded from the database.
Abstract:
The problems arising in commercial distribution are complex and involve several players and decision levels. One important decision is related to the design of the routes used to distribute the products in an efficient and inexpensive way. This article deals with a complex vehicle routing problem that can be seen as a new extension of the basic vehicle routing problem. The proposed model is a multi-objective combinatorial optimization problem that considers three objectives and multiple periods, which models real distribution problems more closely. The first objective is cost minimization, the second is balancing work levels, and the third is a marketing objective. An application of the model to a small example, with 5 clients and 3 days, is presented. The results show the complexity of solving multi-objective combinatorial optimization problems and the conflicts among the several distribution management objectives.
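Evaluating one candidate multi-period plan against several objectives at once, as the model above does, can be sketched as follows. The distance matrix, the tiny two-day plan, and the choice of workload imbalance as max-minus-min daily cost are illustrative assumptions, not the article's exact formulation.

```python
# Evaluate a multi-period routing plan against cost- and balance-style
# objectives. Depot is client id 0; distances are illustrative.
def evaluate(plan, dist):
    """plan: {day: route as a list of ids starting and ending at depot 0}.
    Returns (total cost, workload imbalance, visits per client)."""
    day_cost = {}
    visits = {}
    for day, route in plan.items():
        day_cost[day] = sum(dist[a][b] for a, b in zip(route, route[1:]))
        for client in route[1:-1]:          # skip the depot at both ends
            visits[client] = visits.get(client, 0) + 1
    total = sum(day_cost.values())
    imbalance = max(day_cost.values()) - min(day_cost.values())
    return total, imbalance, visits

dist = {0: {0: 0, 1: 2, 2: 3}, 1: {0: 2, 1: 0, 2: 1}, 2: {0: 3, 1: 1, 2: 0}}
plan = {"mon": [0, 1, 2, 0], "tue": [0, 2, 0]}
total, imbalance, visits = evaluate(plan, dist)
```

A visit-frequency tally like `visits` is one natural ingredient for a marketing objective, e.g. rewarding plans that call on key clients on several days.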
Abstract:
Introduction: We launched an investigator-initiated study (ISRCTN31181395) to evaluate the potential benefit of pharmacokinetic-guided dosage individualization of imatinib for leukaemia patients followed in the public and private sectors. Following approval by the research ethics committee (REC) of the coordinating centre, recruitment throughout Switzerland necessitated submitting the protocol to 11 cantonal RECs. Materials and Methods: We analysed the requirements and evaluation procedures of the 12 RECs, with the associated costs. Results: 1-18 copies of the dossier, in total 4300 printed pages, were required (printing/posting costs: ~300 CHF) to meet the initial requirements. Meeting frequencies of the RECs ranged between 2 weeks and 2 months; the time from submission to first feedback took 2-75 days. Study approval was obtained from a chairman, a sub-committee, or the full committee, the evaluation work being invoiced at 0-1000 CHF (median: 750 CHF, total: 9200 CHF). While 5 RECs gave immediate approval, the other 6 raised in total 38 queries before study release, mainly related to wording in the patient information, leading to 7 different final versions being approved. Submission tasks employed an investigator half-time over about 6 months. Conclusion: While the necessity of clinical research evaluation by independent RECs is undisputed, there is a need for further harmonization and cooperation in evaluation procedures. The current administrative burden is complex, time-consuming, and costly. A harmonized electronic application form, preferably compatible with other regulatory bodies and European countries, could increase transparency, improve communication, and encourage academic multi-centre clinical research in Switzerland.
Abstract:
The problems arising in the logistics of commercial distribution are complex and involve several players and decision levels. One important decision is related to the design of the routes used to distribute the products in an efficient and inexpensive way. This article explores three different distribution strategies: the first strategy corresponds to the classical vehicle routing problem; the second is a master-route strategy with daily adaptations; and the third is a strategy that takes cross-functional planning into account through a multi-objective model with two objectives. All strategies are analyzed in a multi-period scenario. A metaheuristic based on Iterated Local Search is used to solve the models related to each strategy. A computational experiment is performed to evaluate the three strategies with respect to the two objectives. The cross-functional planning strategy leads to solutions that put into practice the coordination between functional areas and better meet business objectives.
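Iterated Local Search, the metaheuristic named above, alternates a local search with a perturbation of the incumbent solution. The skeleton below uses 2-opt local search and a double-bridge perturbation, which are standard ILS choices for routing problems but not necessarily the article's exact operators; the 4-city instance is made up.

```python
import random

def route_cost(route, dist):
    """Cost of a closed tour, including the return to the start."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def two_opt(route, dist):
    """Local search: reverse segments while doing so improves the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j][::-1] + route[j:]
                if route_cost(cand, dist) < route_cost(route, dist):
                    route, improved = cand, True
    return route

def iterated_local_search(route, dist, restarts=20, seed=0):
    """ILS: local optimum, perturb, re-optimize, keep the best."""
    rng = random.Random(seed)
    best = two_opt(route, dist)
    for _ in range(restarts):
        # Double-bridge perturbation: cut the tour and reconnect the parts.
        a, b, c = sorted(rng.sample(range(1, len(best)), 3))
        cand = best[:a] + best[c:] + best[b:c] + best[a:b]
        cand = two_opt(cand, dist)
        if route_cost(cand, dist) < route_cost(best, dist):
            best = cand
    return best

# Unit square: the crossing tour [0, 2, 1, 3] costs 6; the optimum costs 4.
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
best = iterated_local_search([0, 2, 1, 3], dist)
```

The perturbation step is what lets ILS escape the local optima where 2-opt alone would stop, which is the mechanism the abstract relies on across all three strategies.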
Abstract:
In this final-year project we study semantic wikis in depth, first attempting to capture their state of the art and their real-world adoption and use, beyond their theoretical or prototypical conception. We then use one of them for a practical exercise, loading a new ontology developed for it and testing the expressive power of queries over it.