13 results for Optimistic data replication system in Universit


Relevance:

100.00%

Publisher:

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application in which peers contribute resources in exchange for their ability to use the application. For example, in a P2P file-sharing application, while the user is downloading a file, the application is simultaneously serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth, and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively.

To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations.

The adaptiveness of our solutions lies in the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximate view of the system or of part of it, covering the topology and the reliability of its components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays that maximize broadcast reliability, where reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we account for the memory available at each process by limiting the view it must maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that converges toward the global tree overlay while adapting to the constraints of the underlying system.

At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
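To make the tree-overlay idea concrete, here is a minimal sketch (not the thesis's actual protocol) of how a maximum-reliability broadcast tree can be derived once an environment approximation has produced per-link delivery probabilities. The graph, probabilities, and function name are illustrative assumptions; the key observation is that maximizing a product of link probabilities reduces to Dijkstra's algorithm on -log weights.

```python
import heapq
import math

def max_reliability_tree(links, root):
    """Build a broadcast tree maximizing path reliability from `root`.

    `links` maps node -> list of (neighbour, p) pairs, where p is the
    estimated probability that a message sent on that link is delivered.
    Maximizing the product of probabilities along each path is the same
    as running Dijkstra with edge weights -log(p) >= 0.
    """
    best = {root: 0.0}          # -log of best path reliability found so far
    parent = {root: None}       # tree edges: child -> parent
    heap = [(0.0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, math.inf):
            continue            # stale heap entry, skip
        for neighbour, p in links.get(node, []):
            new_cost = cost - math.log(p)
            if new_cost < best.get(neighbour, math.inf):
                best[neighbour] = new_cost
                parent[neighbour] = node
                heapq.heappush(heap, (new_cost, neighbour))
    return parent               # broadcast messages along these edges

# Example: 'c' is reached via 'b' because 0.9 * 0.8 > 0.5 direct.
links = {'a': [('b', 0.9), ('c', 0.5)], 'b': [('c', 0.8)]}
tree = max_reliability_tree(links, 'a')   # {'a': None, 'b': 'a', 'c': 'b'}
```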

Relevance:

100.00%

Publisher:

Abstract:

The in vitro adenovirus (Ad) DNA replication system provides an assay to study the interaction of viral and host replication proteins with the DNA template in the formation of the preinitiation complex. This initiation system requires, in addition to the origin DNA sequences: 1) Ad DNA polymerase (Pol); 2) Ad preterminal protein (pTP), the covalent acceptor for protein-primed DNA replication; and 3) nuclear factor I (NFI), a host-cell protein identical to the CCAAT box-binding transcription factor. The interactions of these proteins were studied by coimmunoprecipitation and Ad origin DNA binding assays. The Ad Pol can bind to origin sequences only in the presence of another protein, which can be either pTP or NFI. While NFI alone can bind to its origin recognition sequence, pTP does not specifically recognize DNA unless Ad Pol is present. Thus, protein-protein interactions are necessary for the targeting of either Ad Pol or pTP to the preinitiation complex. DNA footprinting demonstrated that the Ad DNA site recognized by the pTP.Pol complex lies within the first 18 bases at the end of the template, which constitute the minimal origin of replication. Mutagenesis studies have defined the Ad Pol interaction site on NFI between amino acids 68 and 150, which overlaps the DNA-binding and replication activation domain of this factor. Mutation of a putative zinc finger on the Ad Pol yields a product that fails to bind the Ad origin sequences but still interacts with pTP. These results indicate that both protein-protein and protein-DNA interactions mediate specific recognition of the replication origin by Ad DNA polymerase.

Relevance:

100.00%

Publisher:

Abstract:

Efficient initiation of SV40 DNA replication requires transcription factors that bind auxiliary sequences flanking the minimally required origin. To evaluate the possibility that transcription factors may activate SV40 replication by acting on the chromatin structure of the origin, we used an in vivo replication system in which we targeted GAL4 fusion proteins to the minimally required origin. We found that the proline-rich transcriptional activation domain of nuclear factor I (NF-I), which has been previously shown to interact with histone H3, specifically activates replication. Evaluation of a series of deletion and point mutants of NF-I indicates that the H3-binding domain and the replication activity coincide perfectly. Assays with other transcription factors, such as Sp1, confirmed the correlation between the interaction with H3 and the activation of replication. These findings imply that transcription factors such as NF-I can activate SV40 replication via direct interaction with chromatin components, thereby contributing to the relief of nucleosomal repression at the SV40 origin.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: Quality assurance (QA) in clinical trials is essential to ensure treatment is safely and effectively delivered. As QA requirements have increased in complexity in parallel with the evolution of radiation therapy (RT) delivery, a need to facilitate digital data exchange emerged. Our objective is to present the platform developed for the integration and standardization of QA in RT (QART) activities across all EORTC trials involving RT. METHODS: The following essential requirements were identified: secure and easy access without on-site software installation; integration within the existing EORTC clinical remote data capture system; and the ability to both customize the platform to specific studies and adapt to future needs. After retrospective testing within several clinical trials, the platform was introduced in phases to participating sites and QART study reviewers. RESULTS: The resulting QA platform, integrating RT analysis software installed at EORTC Headquarters, permits timely, secure, and fully digital central DICOM-RT-based data review. Participating sites submit data through a standard secure upload webpage. Supplemental information is submitted in parallel through web-based forms. An internal quality check by the QART office verifies data consistency, formatting, and anonymization. QART reviewers have remote access through a terminal server. Reviewers evaluate submissions for protocol compliance through an online evaluation matrix. Comments are collected by the coordinating centre and institutions are informed of the results. CONCLUSIONS: This web-based central review platform facilitates rapid, extensive, and prospective QART review. This reduces the risk that trial outcomes are compromised through inadequate radiotherapy and facilitates correlation of results with clinical outcomes.
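As an illustration of the kind of internal quality check described above (consistency, formatting, anonymization), here is a hypothetical sketch using the pydicom library. The function, the required-modality set, and the anonymization rule are assumptions for illustration; the platform's actual checks are not described in the abstract.

```python
import pydicom

# Illustrative assumption: a submission must contain these DICOM-RT objects.
REQUIRED_MODALITIES = {"CT", "RTSTRUCT", "RTPLAN", "RTDOSE"}

def check_submission(paths, expected_patient_code):
    """Hypothetical consistency/anonymization check on submitted DICOM files."""
    problems, seen = [], set()
    for path in paths:
        # Headers are enough for these checks; skip loading pixel data.
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        seen.add(ds.get("Modality", "?"))
        # Anonymization rule (assumed): identity tags hold only the study code.
        for tag in ("PatientName", "PatientID"):
            if str(ds.get(tag, "")) != expected_patient_code:
                problems.append(f"{path}: {tag} not anonymized to study code")
    missing = REQUIRED_MODALITIES - seen
    if missing:
        problems.append(f"missing modalities: {sorted(missing)}")
    return problems   # empty list means the submission passed this check
```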

Relevance:

100.00%

Publisher:

Abstract:

Statistics of causes of death remain an important source of epidemiological data for the evaluation of various medical and health problems. The improvement of analytical techniques and, above all, the transformation of the demographic structures and morbidity patterns of populations have prompted researchers in the field to pay closer attention to the quality of death certificates. After describing the data collection system presently used in Switzerland, the paper discusses various indirect estimates of the quality of the Swiss data and reviews the corresponding international literature.

Relevance:

100.00%

Publisher:

Abstract:

A traditional photonic-force microscope (PFM) produces huge sets of data, which require tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us in interactively guiding the bead inside living cells and collecting information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
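For reference, the classical offline stiffness analysis against which such real-time measurements are validated can be sketched as follows: for a harmonic trap, the equipartition theorem relates the full 3x3 stiffness matrix K (including off-diagonal cross-terms) to the covariance C of the recorded bead positions via K = kB T C^-1. This is a generic sketch of that textbook analysis, not the authors' analog processor; the array shapes and numbers are illustrative.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_matrix(positions, temperature=300.0):
    """Estimate the 3D trap stiffness matrix from bead displacements.

    `positions` is an (N, 3) array of bead displacements in metres.
    For a harmonic trap, equipartition gives K = kB * T * inv(C),
    where C is the position covariance matrix; off-diagonal terms
    of K capture cross-coupling between axes.
    """
    cov = np.cov(positions, rowvar=False)   # 3x3 covariance matrix
    return kB * temperature * np.linalg.inv(cov)

# Example with synthetic data: different stiffness along each axis.
rng = np.random.default_rng(0)
pos = rng.normal(scale=[20e-9, 10e-9, 40e-9], size=(100_000, 3))
K = stiffness_matrix(pos)   # diagonal entries approx. kB*T / variance
```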

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: Beyond the Framingham Stroke Risk Score, prediction of future stroke may improve with a genetic risk score (GRS) based on single-nucleotide polymorphisms associated with stroke and its risk factors. METHODS: The study includes 4 population-based cohorts with 2047 first incident strokes from 22,720 initially stroke-free European origin participants aged ≥55 years, who were followed for up to 20 years. GRSs were constructed with 324 single-nucleotide polymorphisms implicated in stroke and 9 risk factors. The association of the GRS to first incident stroke was tested using Cox regression; the GRS predictive properties were assessed with area under the curve statistics comparing the GRS with age and sex, Framingham Stroke Risk Score models, and reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke. RESULTS: In the meta-analysis, adding the GRS to the Framingham Stroke Risk Score, age and sex model resulted in a significant improvement in discrimination (all stroke: Δjoint area under the curve=0.016, P=2.3×10^-6; ischemic stroke: Δjoint area under the curve=0.021, P=3.7×10^-7), although the overall area under the curve remained low. In all the studies, there was a highly significantly improved net reclassification index (P<10^-4). CONCLUSIONS: The single-nucleotide polymorphisms associated with stroke and its risk factors result only in a small improvement in prediction of future stroke compared with the classical epidemiological risk factors for stroke.
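The abstract does not spell out how the 324-SNP GRS was built, but a weighted genetic risk score conventionally takes the form of a weighted sum of risk-allele counts. The sketch below shows that standard form with invented numbers; the weights would typically be published per-SNP effect sizes (e.g., log odds ratios).

```python
import numpy as np

def genetic_risk_score(genotypes, weights):
    """Weighted genetic risk score, one value per subject.

    `genotypes` is an (n_subjects, n_snps) array of risk-allele
    counts (0, 1, or 2); `weights` holds one per-SNP effect size.
    The score for each subject is the weighted sum over SNPs.
    """
    return np.asarray(genotypes) @ np.asarray(weights)

# Example: 3 subjects, 4 SNPs (all values illustrative)
g = np.array([[0, 1, 2, 1],
              [2, 0, 1, 0],
              [1, 1, 1, 1]])
w = np.array([0.05, 0.12, 0.03, 0.08])
scores = genetic_risk_score(g, w)   # e.g., fed into a Cox regression
```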

Relevance:

100.00%

Publisher:

Abstract:

The pharmacokinetics (PK) of efavirenz (EFV) is characterized by marked interpatient variability that correlates with its pharmacodynamics (PD). In vitro-in vivo extrapolation (IVIVE) is a "bottom-up" approach that combines drug data with system information to predict PK and PD. The aim of this study was to simulate EFV PK and PD after dose reductions. At the standard dose, the simulated probability was 80% for viral suppression and 28% for central nervous system (CNS) toxicity. After a dose reduction to 400 mg, the probabilities of viral suppression were reduced to 69, 75, and 82%, and those of CNS toxicity were 21, 24, and 29% for the 516 GG, 516 GT, and 516 TT genotypes, respectively. With reduction of the dose to 200 mg, the probabilities of viral suppression decreased to 54, 62, and 72% and those of CNS toxicity decreased to 13, 18, and 20% for the 516 GG, 516 GT, and 516 TT genotypes, respectively. These findings indicate how dose reductions might be applied in patients with favorable genetic characteristics.

Relevance:

100.00%

Publisher:

Abstract:

A medical and scientific multidisciplinary consensus meeting on Anti-Doping in Sport was held from 29 to 30 November 2013 at the Home of FIFA in Zurich, Switzerland, to create a roadmap for the implementation of the 2015 World Anti-Doping Code. The consensus statement and accompanying papers set out the priorities for the antidoping community in research, science and medicine. The participants achieved consensus on a strategy for the implementation of the 2015 World Anti-Doping Code. Key components of this strategy include: (1) sport-specific risk assessment, (2) prevalence measurement, (3) sport-specific test distribution plans, (4) storage and reanalysis, (5) analytical challenges, (6) forensic intelligence, (7) psychological approaches to achieve the strongest deterrent effect, (8) the Athlete Biological Passport (ABP) and confounding factors, (9) data management systems (Anti-Doping Administration & Management System, ADAMS), (10) education, (11) research needs and necessary advances, (12) inadvertent doping and (13) management and ethics: biological data. True implementation of the 2015 World Anti-Doping Code will depend largely on the ability to align thinking around these core concepts and strategies. FIFA, jointly with all other engaged International Federations of sports (IFs), the International Olympic Committee (IOC) and the World Anti-Doping Agency (WADA), is ideally placed to lead transformational change with the unwavering support of the wider antidoping community. The outcome of the consensus meeting was the creation of an ad hoc Working Group charged with the responsibility of moving this agenda forward.

Relevance:

100.00%

Publisher:

Abstract:

The LHCb experiment is being built at the future LHC accelerator at CERN. It is a forward single-arm spectrometer dedicated to precision measurements of CP violation and rare decays in the b-quark sector. It is presently finishing its R&D and final design stage; construction has already started for the magnet and the calorimeters. In the Standard Model, CP violation arises via the complex phase of the 3x3 CKM (Cabibbo-Kobayashi-Maskawa) quark mixing matrix. The LHCb experiment will test the unitarity of this matrix by measuring, in several theoretically unrelated ways, all angles and sides of the so-called "unitarity triangle". This will make it possible to over-constrain the model and, hopefully, to expose inconsistencies that would signal physics beyond the Standard Model. Vertex reconstruction is a fundamental requirement for the LHCb experiment: displaced secondary vertices are a distinctive feature of b-hadron decays, and this signature is used in the LHCb topological trigger. The Vertex Locator (VeLo) has to provide precise measurements of track coordinates close to the interaction region. These are used to reconstruct production and decay vertices of beauty hadrons and to provide accurate measurements of their decay lifetimes. The VeLo electronics is an essential part of the data acquisition system and must conform to the overall LHCb electronics specification. The design of the electronics must maximize the signal-to-noise ratio in order to achieve the best track reconstruction performance in the detector. The electronics was designed in parallel with the silicon detector development and went through several prototyping phases, which are described in this thesis.

Relevance:

40.00%

Publisher:

Abstract:

Urotensin-II controls ion/water homeostasis in fish and vascular tone in rodents. We hypothesised that common genetic variants in urotensin-II pathway genes are associated with human blood pressure or renal function. We performed family-based analysis of association between blood pressure, glomerular filtration and genes of the urotensin-II pathway (urotensin-II, urotensin-II related peptide, urotensin-II receptor) saturated with 28 tagging single nucleotide polymorphisms in 2024 individuals from 520 families; followed by an independent replication in 420 families and 7545 unrelated subjects. The expression studies of the urotensin-II pathway were carried out in 97 human kidneys. Phylogenetic evolutionary analysis was conducted in 17 vertebrate species. One single nucleotide polymorphism (rs531485 in urotensin-II gene) was associated with adjusted estimated glomerular filtration rate in the discovery cohort (p = 0.0005). It showed no association with estimated glomerular filtration rate in the combined replication resource of 8724 subjects from 6 populations. Expression of urotensin-II and its receptor showed strong linear correlation (r = 0.86, p<0.0001). There was no difference in renal expression of urotensin-II system between hypertensive and normotensive subjects. Evolutionary analysis revealed accumulation of mutations in urotensin-II since the divergence of primates and weaker conservation of urotensin-II receptor in primates than in lower vertebrates. Our data suggest that urotensin-II system genes are unlikely to play a major role in genetic control of human blood pressure or renal function. The signatures of evolutionary forces acting on urotensin-II system indicate that it may have evolved towards loss of function since the divergence of primates.

Relevance:

40.00%

Publisher:

Abstract:

Lassa virus (LASV) causing hemorrhagic Lassa fever in West Africa, Mopeia virus (MOPV) from East Africa, and lymphocytic choriomeningitis virus (LCMV) are the main representatives of the Old World arenaviruses. Little is known about how the components of the arenavirus replication machinery, i.e., the genome, nucleoprotein (NP), and L protein, interact. In addition, it is unknown whether these components can function across species boundaries. We established minireplicon systems for MOPV and LCMV in analogy to the existing LASV system and exchanged the components among the three systems. The functional and physical integrity of the resulting complexes was tested by reporter gene assay, Northern blotting, and coimmunoprecipitation studies. The minigenomes, NPs, and L proteins of LASV and MOPV could be exchanged without loss of function. LASV and MOPV L protein was also active in conjunction with LCMV NP, while the LCMV L protein required homologous NP for activity. Analysis of LASV/LCMV NP chimeras identified a single LCMV-specific NP residue (Ile-53) and the C terminus of NP (residues 340 to 558) as being essential for LCMV L protein function. The defect of LASV and MOPV NP in supporting transcriptional activity of LCMV L protein was not caused by a defect in physical NP-L protein interaction. In conclusion, components of the replication complex of Old World arenaviruses have the potential to functionally and physically interact across species boundaries. Residue 53 and the C-terminal domain of NP are important for function of L protein during genome replication and transcription.

Relevance:

40.00%

Publisher:

Abstract:

Automated genome sequencing and annotation, as well as large-scale gene expression measurement methods, generate a massive amount of data for model organisms such as human or mouse. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often results in fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data can greatly improve search speed as well as the quality of the results, by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique gene names and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons.

A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms, such as human and mouse. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly-built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality-control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text-string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven to be very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.
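As a rough illustration of the indexing scheme described above (permanent target identifiers periodically remapped to official gene names, then gathered into a gene index), here is a hypothetical sketch. The identifiers, records, and function name are invented for illustration and do not reflect the actual CleanEx schema.

```python
from collections import defaultdict

# Hypothetical remapping produced by the periodic mapping step
# (in CleanEx this relies partly on resources such as UniGene and RefSeq).
target_to_gene = {
    "TARGET:AFFX_1001": "TP53",
    "TARGET:CLONE_77":  "TP53",
    "TARGET:AFFX_2002": "BRCA1",
}

# Each measurement stays tied to its permanent target identifier:
# (dataset, target id, expression value) -- all values invented.
experiments = [
    ("chip_A", "TARGET:AFFX_1001", 8.2),
    ("sage_B", "TARGET:CLONE_77",  5.1),
    ("chip_A", "TARGET:AFFX_2002", 3.3),
]

def build_gene_index(experiments, target_to_gene):
    """Gene index: official gene name -> all linked expression records."""
    index = defaultdict(list)
    for dataset, target, value in experiments:
        gene = target_to_gene.get(target)
        if gene is not None:        # unmapped targets stay out of the index
            index[gene].append((dataset, target, value))
    return index

index = build_gene_index(experiments, target_to_gene)
# index["TP53"] now joins measurements from two different technologies,
# which is what enables cross-dataset comparison by gene name.
```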