19 results for routing protocols
at Université de Lausanne, Switzerland
Abstract:
OBJECTIVE: To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. DESIGN: Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING: Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES: 894 protocols of randomised controlled trials involving patients, approved by participating research ethics committees between 2000 and 2003, and 515 subsequent full journal publications. RESULTS: Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry sponsored trials more often planned subgroup analyses than investigator sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, the authors stated that the subgroup analyses were prespecified, but this was not supported by the corresponding protocol in 28 cases (34.6%). In 86 publications, the authors claimed a subgroup effect, but only 36 (41.9%) of the corresponding protocols reported a planned subgroup analysis. CONCLUSIONS: Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements about subgroup prespecification in publications of randomised controlled trials had no documentation in the corresponding protocols. Definitive judgments about the credibility of claimed subgroup effects are not possible without access to the protocols and analysis plans of randomised controlled trials.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has kept growing since it first came into use in the early 1970s. In the clinical environment, this imaging system has become a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient care can be attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure that the benefit-risk balance works in favor of the patient, it is important to balance image quality and dose and so avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases that require several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a cancer whose latency period may exceed 20 years is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce as much as possible the doses delivered during CT examinations of children and young adults while maintaining an image quality that allows diagnosis, in order to propose optimized protocols.

Optimizing a CT protocol requires evaluating both the delivered dose and the image quality needed for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this work was to use two radically different approaches to evaluate image quality. The first, "physical" approach computes physical metrics (SD, MTF, NPS, etc.) measured under well-defined conditions, most often on phantoms. Although this approach is limited because it does not take the radiologists' perception into account, it allows certain image properties to be characterized in a simple and rapid way. The second, "clinical" approach is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step were asked to score the quality of these structures from a diagnostic point of view using a simple rating scale. This approach is relatively complicated to implement and time-consuming, but it has the advantage of being very close to the radiologists' practice and can be considered a reference method.

Among the main results of this work, the statistical iterative algorithms studied in clinical conditions (ASIR and VEO) were shown to have a strong potential to reduce CT dose (by up to 90%). However, by the way they operate, they modify the appearance of the image, introducing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also shows that integrating these new reconstruction techniques into clinical practice cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work and the image quality tools developed can also guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
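As an aside for readers unfamiliar with the "physical" metrics mentioned above, the noise power spectrum (NPS) is the quantity whose shape captures the texture change introduced by iterative reconstructions. The following sketch shows one common way to estimate a 2D NPS from noise-only regions of interest (ROIs) taken from repeated scans of a uniform phantom; the function name, the simple mean-detrending and the simulated input are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Estimate a 2D noise power spectrum (NPS) from noise-only ROIs.

    rois          : array of shape (n_rois, ny, nx), e.g. ROIs cut from
                    repeated scans of a uniform phantom region.
    pixel_size_mm : (dy, dx) pixel spacing in mm.
    Returns the ensemble-averaged 2D NPS (units: HU^2 * mm^2).
    """
    rois = np.asarray(rois, dtype=float)
    n_rois, ny, nx = rois.shape
    dy, dx = pixel_size_mm
    # Subtract each ROI's mean (simple detrending) so only noise remains.
    noise = rois - rois.mean(axis=(1, 2), keepdims=True)
    # |DFT|^2 of each noise ROI, averaged over the ensemble of ROIs.
    power = np.abs(np.fft.fft2(noise)) ** 2
    nps = power.mean(axis=0) * (dx * dy) / (nx * ny)
    # Put the zero frequency in the centre for display or radial averaging.
    return np.fft.fftshift(nps)

# Illustrative use with simulated white noise: 50 ROIs of 64x64 pixels, 0.5 mm pixels.
rois = np.random.normal(0.0, 10.0, size=(50, 64, 64))
print(nps_2d(rois, (0.5, 0.5)).shape)  # -> (64, 64)
```

A shift of the NPS towards lower spatial frequencies at constant noise magnitude is one quantitative signature of the texture change discussed above.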
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While attractive because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application in which peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some of the underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution makes it possible to increase availability and to reduce the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximate view of the system or part of it; this view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays built so as to maximize the broadcast reliability. Here, the broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of the available resources. These resources are modeled in terms of message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the available memory at processes into account by limiting the view they have to maintain about the system. Using this partial view, we propose three scalable broadcast algorithms, based on a propagation overlay that tends towards the global tree overlay and adapts to some constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
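Since the broadcast reliability above is expressed as a function of the reliability of the selected paths, the underlying selection problem can be illustrated with the classical reduction of "most reliable path" to a shortest-path computation on -log(p) link weights. The sketch below, with made-up link data, only illustrates that reduction; it is not the overlay-construction algorithm of the thesis and ignores the message-quota constraints mentioned above.

```python
import heapq
import math

def most_reliable_paths(links, root):
    """For every reachable node, compute the most reliable path from `root`,
    i.e. the path maximising the product of per-link delivery probabilities.

    links : dict mapping a node to a list of (neighbour, probability) pairs.
    Returns (reliability, parent) dictionaries; `parent` defines a tree
    overlay rooted at `root`.
    """
    # Maximising a product of probabilities == minimising the sum of -log(p).
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, p in links.get(u, []):
            w = -math.log(p)  # p must be in (0, 1]
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    reliability = {v: math.exp(-d) for v, d in dist.items()}
    return reliability, parent

links = {
    "A": [("B", 0.99), ("C", 0.80)],
    "B": [("C", 0.95), ("D", 0.90)],
    "C": [("D", 0.60)],
}
rel, parent = most_reliable_paths(links, "A")
print(round(rel["D"], 3), parent["D"])  # -> 0.891 B
```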
Abstract:
OBJECTIVES: Renal tubular sodium handling was measured in healthy subjects submitted to acute and chronic salt-repletion/salt-depletion protocols. The goal was to compare the changes in proximal and distal sodium handling induced by the two procedures using the lithium clearance technique. METHODS: In nine subjects, acute salt loading was obtained with a 2 h infusion of isotonic saline, and salt depletion was induced with a low-salt diet and furosemide. In the chronic protocol, 15 subjects randomly received a low-, a regular- and a high-sodium diet for 1 week. In both protocols, renal and systemic haemodynamics and urinary electrolyte excretion were measured after an acute water load. In the chronic study, sodium handling was also determined, based on 12 h day- and night-time urine collections. RESULTS: The acute and chronic protocols induced comparable changes in sodium excretion, renal haemodynamics and hormonal responses. Yet, the relative contribution of the proximal and distal nephrons to sodium excretion in response to salt loading and depletion differed in the two protocols. Acutely, subjects appeared to regulate sodium balance mainly by the distal nephron, with little contribution of the proximal tubule. In contrast, in the chronic protocol, changes in sodium reabsorption could be measured both in the proximal and distal nephrons. Acute water loading was an important confounding factor which increased sodium excretion by reducing proximal sodium reabsorption. This interference of water was particularly marked in salt-depleted subjects. CONCLUSION: Acute and chronic salt loading/salt depletion protocols investigate different renal mechanisms of control of sodium balance. The endogenous lithium clearance technique is a reliable method to assess proximal sodium reabsorption in humans. However, to investigate sodium handling in diseases such as hypertension, lithium should be measured preferably on 24 h or overnight urine collections to avoid the confounding influence of water.
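For readers unfamiliar with the technique, the usual lithium clearance relations used to separate proximal from distal sodium handling are summarised in the sketch below; the variable names and example values are illustrative, and this is the textbook formulation rather than the study's own analysis code.

```python
def lithium_clearance_indices(u_li, p_li, u_na, p_na, urine_flow, gfr):
    """Standard lithium clearance relations (illustrative only).

    u_li, p_li : urinary and plasma lithium concentrations (same units)
    u_na, p_na : urinary and plasma sodium concentrations (same units)
    urine_flow : urine flow rate (mL/min)
    gfr        : glomerular filtration rate (mL/min), e.g. creatinine clearance
    """
    c_li = u_li * urine_flow / p_li   # lithium clearance ~ fluid delivered out of the proximal tubule
    c_na = u_na * urine_flow / p_na   # sodium clearance
    fe_li = c_li / gfr                # fractional excretion of lithium
    return {
        "C_Li (mL/min)": c_li,
        "FE_Li": fe_li,
        "proximal fractional reabsorption": 1.0 - fe_li,
        "distal fractional Na reabsorption": (c_li - c_na) / c_li,
    }

# Example with plausible values: U_Li 0.25 and P_Li 0.01 mmol/L, U_Na 100 and
# P_Na 140 mmol/L, urine flow 1.5 mL/min, GFR 120 mL/min.
print(lithium_clearance_indices(0.25, 0.01, 100, 140, 1.5, 120))
```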
Abstract:
This PhD thesis addresses the issue of scalable media streaming in large-scale networking environments. Multimedia streaming is one of the largest sinks of network resources, and this trend is still growing, as testified by the success of services such as Skype, Netflix, Spotify and Popcorn Time (BitTorrent-based). In traditional client-server solutions, when the number of consumers increases, the server becomes the bottleneck. To overcome this problem, the Content-Delivery Network (CDN) model was invented: the server copies the media content to CDN servers located at strategic locations in the network. However, CDNs require heavy infrastructure investment around the world, which is expensive. Peer-to-peer (P2P) solutions are another way to achieve the same result. These solutions are naturally scalable, since each peer can act as both a receiver and a forwarder. Most of the streaming solutions proposed for P2P networks focus on routing scenarios to achieve scalability. However, these solutions cannot work properly for video-on-demand (VoD) streaming when the resources of the media server are not sufficient. Replication is a solution that can be used in these situations. This thesis provides a family of replication-based media streaming protocols that are scalable, efficient and reliable in P2P networks. First, it provides SCALESTREAM, a replication-based streaming protocol that adaptively replicates media content on different peers to increase the number of consumers that can be served in parallel. The adaptiveness of this solution relies on the fact that it takes into account constraints such as the bandwidth capacity of peers to decide when to add or remove replicas. SCALESTREAM routes media blocks to consumers over a tree topology, assuming a reliable network composed of peers that are homogeneous in terms of bandwidth. Second, this thesis proposes RESTREAM, an extended version of SCALESTREAM that addresses the issues raised by unreliable networks composed of heterogeneous peers. Third, this thesis proposes EAGLEMACAW, a multiple-tree replication streaming protocol in which two distinct trees, named EAGLETREE and MACAWTREE, are built in a decentralized manner on top of an underlying mesh network. These two trees collaborate to serve consumers in an efficient and reliable manner: the EAGLETREE is in charge of improving efficiency, while the MACAWTREE guarantees reliability. Finally, this thesis provides TURBOSTREAM, a hybrid replication-based streaming protocol in which a tree overlay is built on top of a mesh overlay network. Both overlays cover all peers of the system and collaborate to improve efficiency and reduce latency in streaming media to consumers. This protocol is implemented and tested in a real networking environment, using the PlanetLab Europe testbed composed of peers distributed across Europe.
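The adaptive decision at the heart of SCALESTREAM, adding or removing replicas according to the bandwidth capacity of peers, can be caricatured as a simple capacity check. The policy, thresholds and data structures below are invented for illustration; they are not the protocol's actual specification.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    upload_capacity: float      # number of streams this peer can serve in parallel
    holds_replica: bool = False

def adapt_replicas(peers, active_consumers, low_watermark=0.5):
    """Add a replica when demand exceeds the serving capacity of current
    replica holders; drop one when utilisation falls below the watermark."""
    holders = [p for p in peers if p.holds_replica]
    capacity = sum(p.upload_capacity for p in holders)
    if active_consumers > capacity:
        # Place a new replica on the non-holder with the most spare bandwidth.
        candidates = [p for p in peers if not p.holds_replica]
        if candidates:
            best = max(candidates, key=lambda p: p.upload_capacity)
            best.holds_replica = True
            return f"add replica on {best.name}"
    elif len(holders) > 1 and active_consumers < low_watermark * capacity:
        worst = min(holders, key=lambda p: p.upload_capacity)
        worst.holds_replica = False
        return f"remove replica from {worst.name}"
    return "no change"

peers = [Peer("p1", 4, holds_replica=True), Peer("p2", 6), Peer("p3", 2)]
print(adapt_replicas(peers, active_consumers=7))  # -> add replica on p2
print(adapt_replicas(peers, active_consumers=3))  # -> remove replica from p1
```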
Abstract:
Sleep-wake disturbances are frequently observed in stroke patients and are associated with poorer functional outcome. Until now, the effects of sleep on stroke evolution have been unknown. The purpose of the present study was to evaluate the effects of three sleep deprivation (SD) protocols on brain damage after focal cerebral ischemia in a rat model. Permanent occlusion of distal branches of the middle cerebral artery was induced in adult rats. The animals were then subjected to 6 h SD, 12 h SD or sleep disturbances (SDis), in which 3 x 12 h of sleep deprivation was performed by gentle handling. Infarct size and brain swelling were assessed by Cresyl violet staining, and the number of damaged cells was measured by terminal deoxynucleotidyl transferase mediated dUTP nick end labeling (TUNEL) staining. Behavioral tests, namely the tape removal and cylinder tests, were performed to assess sensorimotor function. In the 6 h SD protocol, no significant difference (P > 0.05) was found in infarct size (42.5 ± 30.4 mm3 in sleep-deprived animals vs. 44.5 ± 20.5 mm3 in controls, mean ± s.d.), brain swelling (10.2 ± 3.8% in sleep-deprived animals vs. 11.3 ± 2.0% in controls) or number of TUNEL-positive cells (21.7 ± 2.0/mm2 in sleep-deprived animals vs. 23.0 ± 1.1/mm2 in controls). In contrast, 12 h sleep deprivation increased infarct size by 40% (82.8 ± 10.9 mm3 in the SD group vs. 59.2 ± 13.9 mm3 in the control group, P = 0.008) and the number of TUNEL-positive cells by 137% (46.8 ± 15/mm2 in the SD group vs. 19.7 ± 7.7/mm2 in the control group, P = 0.003), with no significant difference (P > 0.05) in brain swelling (12.9 ± 6.3% in sleep-deprived animals vs. 11.6 ± 6.0% in controls). The SDis protocol also increased infarct size, by 76% (3 x 12 h SD 58.8 ± 20.4 mm3 vs. no SD 33.8 ± 6.3 mm3, P = 0.017), and the number of TUNEL-positive cells, by 219% (32.9 ± 13.2/mm2 vs. 10.3 ± 2.5/mm2, P = 0.008). Brain swelling did not differ between the two groups (24.5 ± 8.4% in the SD group vs. 16.7 ± 8.9% in the control group, P > 0.05). Neither behavioral test showed conclusive results. In summary, we demonstrate that sleep deprivation aggravates brain damage in a rat model of stroke. Further experiments are needed to unveil the mechanisms underlying these effects.
Abstract:
Classical cryptography is based on mathematical functions, and the robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. There is no mathematical proof establishing that the inverse of a given one-way function cannot be found, so such schemes remain at the mercy of growing computing power and of the discovery of algorithms that invert certain functions in a reasonable time. For critical exchanges (banking systems, governments, etc.), it is therefore desirable to use a cryptosystem whose security is scientifically proven. Quantum cryptography answers this need: its security can be formally demonstrated, since it rests on the laws of quantum physics that assure unconditionally secure operation. The remaining question is how to use and integrate quantum cryptography into existing solutions. This thesis argues that the cost incurred by deploying quantum cryptography is justified and proposes a simple, practical mechanism for integrating it into widely used communication protocols such as PPP, IPSec and the 802.11i protocol. Application scenarios illustrate the feasibility of these solutions and allow their cost to be estimated. Finally, directives and checkpoints are given to help certify quantum cryptography solutions according to the Common Criteria.
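Part of the appeal of quantum key distribution (QKD) is that it can, in principle, supply the truly random, never-reused key material that the one-time pad requires for unconditional security. The sketch below illustrates only that textbook relationship; `fetch_qkd_key` is a hypothetical stand-in for a QKD device interface (simulated here with random bytes), and this is not the PPP/IPSec/802.11i integration mechanism proposed in the thesis.

```python
import os

def fetch_qkd_key(num_bytes: int) -> bytes:
    """Hypothetical QKD interface: a real deployment would read secret bits
    produced over the quantum channel; here they are simulated."""
    return os.urandom(num_bytes)

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """One-time-pad encryption: unconditionally secure provided the key is
    truly random, as long as the message, and never reused -- exactly the
    kind of key material a QKD link is meant to provide."""
    key = fetch_qkd_key(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

msg = b"rekey material for an IPSec-like tunnel"
ct, key = otp_encrypt(msg)
assert otp_decrypt(ct, key) == msg
```

In practice the thesis targets existing protocols, where QKD-derived keys would more likely refresh the symmetric session keys of conventional ciphers; the one-time pad is shown only because it makes the link between key quality and security guarantees explicit.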
Abstract:
The oleaginous yeast Yarrowia lipolytica possesses six acyl-CoA oxidase (Aox) isoenzymes encoded by the genes POX1-POX6. The respective roles of these multiple Aox isoenzymes were studied in recombinant Y. lipolytica strains that express the heterologous polyhydroxyalkanoate (PHA) synthase (phaC) of Pseudomonas aeruginosa in varying POX genetic backgrounds, thus allowing assessment of the impact of specific Aox enzymes on the routing of carbon flow to β-oxidation or to PHA biosynthesis. Analysis of PHA production yields during growth on fatty acids of different chain lengths revealed that the POX genotype significantly affects the PHA levels, but not the monomer composition of PHA. Aox3p was found to be responsible for 90% and 75% of the total PHA produced from C9:0 and C13:0 fatty acids, respectively, whereas Aox5p was the main Aox involved in the biosynthesis of 70% of PHA from C9:0 fatty acid. The other Aoxs (Aox1p, Aox2p, Aox4p and Aox6p) were not found to play a significant role in PHA biosynthesis, independent of the chain length of the fatty acid used. Finally, three known models of β-oxidation are discussed and it is shown that a 'leaky-hose pipe model' of the cycle can be applied to Y. lipolytica.
Abstract:
BACKGROUND: Many clinical studies are ultimately not fully published in peer-reviewed journals. Underreporting of clinical research is wasteful and can result in biased estimates of treatment effect or harm, leading to recommendations that are inappropriate or even dangerous. METHODS: We assembled a cohort of clinical studies approved in 2000-2002 by the Research Ethics Committee of the University of Freiburg, Germany. Corresponding full articles were searched for in electronic databases and investigators were contacted. Data on study characteristics were extracted from protocols and corresponding publications. We characterized the cohort, quantified its publication outcome and compared protocols and publications for selected aspects. RESULTS: Of 917 approved studies, 807 were started and 110 were not, either locally or as a whole. Of the started studies, 576 (71%) were completed according to protocol, 128 (16%) were discontinued and 42 (5%) are still ongoing; for 61 (8%) there was no information about their course. We identified 782 full publications corresponding to 419 of the 807 initiated studies; the publication proportion was 52% (95% CI 48% to 55%). Study design was not significantly associated with subsequent publication. Multicentre status, international collaboration, large sample size and commercial or non-commercial funding were positively associated with subsequent publication. Commercial funding was mentioned in 203 (48%) protocols and in 205 (49%) publications; in most published studies (339; 81%) this information corresponded between protocol and publication. Most studies were published in English (367; 88%), some in German (25; 6%) or in both languages (27; 6%). The local investigators were listed as (co-)authors in the publications corresponding to 259 (62%) studies. CONCLUSION: Half of the clinical research conducted at a large German university medical centre remains unpublished; future research is built on an incomplete database. Research resources are likely wasted as neither health care professionals nor patients nor policy makers can use the results when making decisions.