183 results for "Key exchange protocols"
Abstract:
Classical cryptography is based on mathematical functions, and the robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. No mathematical proof establishes that inverting a given one-way function is impossible, so this kind of encryption remains at the mercy of growing computing power and of the discovery of algorithms that invert certain functions in a "reasonable" time. It is therefore essential, especially for critical exchanges (banking systems, governments, etc.), to use a cryptosystem whose security is scientifically proven. Quantum cryptography meets this need: its security rests on the laws of quantum physics, which guarantee unconditionally secure operation. However, applying and integrating quantum cryptography remains a concern for developers of such solutions. This thesis justifies the need for quantum cryptography, shows that the cost of deploying it is warranted, and proposes a simple, practical mechanism for integrating quantum cryptography into widely used communication protocols such as PPP, IPSec, and 802.11i. Application scenarios illustrate the feasibility of these solutions and allow their cost to be estimated. The thesis also proposes a methodology, with guidelines and checkpoints, for evaluating and certifying quantum-cryptography-based solutions according to the Common Criteria.
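As a rough illustration of the integration idea, and not the thesis's actual mechanism: a QKD link delivers shared secret bits, which a classical protocol can then consume as a pre-shared key. The sketch below (all names illustrative; `qkd_raw_key` is a stand-in for key material delivered by QKD hardware) expands such a secret into session keys with HKDF (RFC 5869), in the spirit of how PPP, IPSec, or 802.11i key hierarchies derive traffic keys from a negotiated secret.

```python
# Sketch: consume a QKD-delivered secret as the pre-shared key of a
# classical protocol, expanding it into session keys via HKDF (RFC 5869).
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract: concentrate the input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand: derive `length` bytes of output keying material."""
    okm, block = b"", b""
    for counter in range(1, -(-length // 32) + 1):  # ceil(length / hash_len)
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Placeholder for the raw key bits delivered by the QKD layer.
qkd_raw_key = os.urandom(32)

prk = hkdf_extract(salt=b"qkd-session-salt", ikm=qkd_raw_key)
encryption_key = hkdf_expand(prk, b"ipsec-esp-enc", 32)   # illustrative labels
integrity_key = hkdf_expand(prk, b"ipsec-esp-auth", 32)
```

The point of the sketch is that the quantum layer only has to replace the key agreement step; everything downstream of the derived keys can stay unchanged.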
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the application. For example, in a P2P file-sharing application, while the user is downloading a file, the application is in parallel serving that file to other users. Such peers may have limited hardware resources (CPU, bandwidth, memory), or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links are subject to message losses and processes to crashes. To support P2P applications, this thesis proposes a set of services that address some of the underlying constraints of P2P networks: a set of adaptive broadcast solutions and an adaptive data replication solution that can serve as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications, broadcast, and typically offer reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. Our solutions are adaptive in that they take the constraints of the underlying system into account proactively. To model these constraints, we define an environment approximation algorithm that yields an approximate view of the system or part of it, covering the topology and the reliability of components expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays chosen to maximize broadcast reliability, expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the memory available at each process into account by limiting the view it must maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that tends toward the global tree overlay and adapts to constraints of the underlying system (a sketch of the path-selection idea follows this abstract).
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, the solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes communication cost.
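To make the reliability-maximizing tree routing concrete, here is a minimal sketch (the graph and probabilities are illustrative, not the thesis's algorithm): with independent link delivery probabilities, a path's reliability is the product of its links' probabilities, so the most reliable paths can be found as shortest paths under weights -log(p), and the resulting parent pointers form a tree overlay.

```python
# Sketch: select maximum-reliability paths in an overlay via Dijkstra
# on -log(link_reliability); the parent map is the resulting tree overlay.
import heapq
import math

def most_reliable_paths(graph, source):
    """graph: {node: [(neighbor, link_reliability), ...]}"""
    best = {source: 0.0}      # accumulated -log(reliability)
    parent = {source: None}   # tree overlay rooted at `source`
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, math.inf):
            continue  # stale heap entry
        for neighbor, p in graph[node]:
            new_cost = cost - math.log(p)
            if new_cost < best.get(neighbor, math.inf):
                best[neighbor] = new_cost
                parent[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    return {n: math.exp(-c) for n, c in best.items()}, parent

overlay = {"s": [("a", 0.9), ("b", 0.8)],
           "a": [("b", 0.99), ("t", 0.7)],
           "b": [("t", 0.95)],
           "t": []}
reliability, tree = most_reliable_paths(overlay, "s")
print(reliability["t"])  # best end-to-end delivery probability: s -> a -> b -> t
```

A full solution would additionally respect the per-node message quotas described above; the sketch only shows the path-reliability objective.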
Abstract:
The mitochondrial 70-kDa heat shock protein (mtHsp70), known in humans as mortalin, is a central component of the mitochondrial protein import motor and plays a key role in the folding of matrix-localized mitochondrial proteins. MtHsp70 is assisted by a member of the 40-kDa heat shock protein co-chaperone family named Tid1 and by a nucleotide exchange factor. Whereas yeast mtHsp70 has been extensively studied in the context of protein import into mitochondria, and the bacterial 70-kDa heat shock protein was recently shown to act as an ATP-fuelled unfolding enzyme capable of detoxifying stably misfolded polypeptides into harmless, natively refolded proteins, little is known about the molecular functions of human mortalin in protein homeostasis. Here, we developed novel and efficient purification protocols for mortalin and the two splice variants of Tid1, Tid1-S and Tid1-L, and showed that mortalin can mediate the in vitro ATP-dependent reactivation of stable, preformed, heat-denatured model aggregates with the assistance of Mge1 and either the Tid1-L or Tid1-S co-chaperone, or yeast Mdj1. Thus, in addition to being a central component of the protein import machinery, human mortalin, together with Tid1, may serve as a protein-disaggregating machine which, for lack of Hsp100/ClpB disaggregating co-chaperones, may alone carry out the scavenging of toxic protein aggregates in stressed, diseased, or aging human mitochondria.
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to the risk assessment of ENMs: the key parameters for characterising ENMs, appropriate methods of analysis, and the best way to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Due to the high batch-to-batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Alongside the OECD priority list of ENMs, other selection criteria could be helpful, such as relevance for mechanistic (scientific) or risk-assessment-based studies, widespread availability (and thus high expected volumes of use), or consumer concern (route of consumer exposure depending on application). The OECD priority list focuses on the validity of OECD tests, so source material will be first in scope for testing; for risk assessment, however, it is much more relevant to have toxicity data for the material as present in the products and matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods are available, though not necessarily validated. These methods are generally able to determine only a single characteristic, and some can be rather expensive, so fully characterising ENMs is currently not feasible in practice. Many techniques available to measure the same nanomaterial characteristic produce contrasting results (e.g., reported sizes of ENMs); it was therefore recommended that at least two complementary techniques be employed to determine a given metric. The first great challenge is to prioritise the metrics that are relevant for assessing biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that a single metric is not sufficient to fully describe an ENM. 3) Characterisation of ENMs in biological matrices starts with sample preparation. There is currently no standard approach or protocol for sample preparation that controls agglomeration/aggregation and (re)dispersion; it was recommended that harmonization be initiated and that protocols be exchanged. The precise methods used to disperse ENMs should be specifically, yet succinctly, described in the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g., biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
We present a new lab-on-a-chip system for electrophysiological measurements on Xenopus oocytes. Xenopus oocytes are widely used host cells in pharmacological studies and drug development. We developed a novel non-invasive technique using immobilized, non-devitellinized cells that replaces the traditional two-electrode voltage-clamp (TEVC) method. In particular, rapid fluidic exchange was implemented on-chip to allow recording of the fast kinetic events of exogenous ion channels expressed in the cell membrane. Reducing the fluidic exchange times of extracellular reagent solutions is a great challenge with these large, millimetre-sized cells. Fluidic switching is obtained by shifting the laminar flow interface in a perfusion channel under the cell by means of integrated polydimethylsiloxane (PDMS) microvalves. Reagent solution exchange times down to 20 ms have been achieved. An on-chip purging system allows complex pharmacological protocols to be performed, making the system suitable for screening ion channel ligand libraries. The performance of the integrated rapid fluidic exchange system was demonstrated by investigating the self-inhibition of human epithelial sodium channels (ENaC). Our results show that the response time of this ion channel to a specific reactant is about an order of magnitude faster than could be estimated with the traditional TEVC technique.
Abstract:
This work investigates the solvability of the fair exchange problem in a synchronous system subject to Byzantine failures. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner: either each process obtains the item it was expecting, or no process obtains any information on the inputs of the others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving the problem in a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, we recall a well-known solution to fair exchange relying on a trusted third party. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by a general solution to fair exchange relying on a set of trusted processes. The focus then turns to a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. The application, which also implements a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of a trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, we propose a comparison that clarifies their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
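For intuition about the trusted-third-party solution recalled in this abstract, here is a toy sketch (not the thesis's optimized protocol with tamperproof modules): both parties deposit their items with the process they trust a priori, which releases them only once every expected deposit is in, so no honest party can end up giving without receiving.

```python
# Toy sketch of fair exchange via a trusted third party (TTP).
class TrustedThirdParty:
    def __init__(self):
        self.deposits = {}  # party -> (item, expected counterpart)

    def deposit(self, party, item, counterpart):
        """A party escrows its item, naming who should receive it."""
        self.deposits[party] = (item, counterpart)

    def settle(self):
        """Release items only if every named counterpart has deposited too;
        otherwise return each item to its owner (the fairness guarantee)."""
        parties = set(self.deposits)
        expected = {cp for _, cp in self.deposits.values()}
        if expected <= parties and len(parties) >= 2:
            return {cp: item for (item, cp) in self.deposits.values()}
        return {party: item for party, (item, _) in self.deposits.items()}

ttp = TrustedThirdParty()
ttp.deposit("alice", item="contract-signature-A", counterpart="bob")
ttp.deposit("bob", item="contract-signature-B", counterpart="alice")
print(ttp.settle())  # both deposited, so both receive; one-sided deposits abort
```

The thesis's contribution is precisely about weakening this single-trusted-process assumption toward a set of trusted processes under the reachable majority condition.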
Abstract:
A patent foramen ovale (PFO), present in ∼40% of the general population, is a potential source of right-to-left shunt that can impair pulmonary gas exchange efficiency [i.e., increase the alveolar-to-arterial Po2 difference (A-aDO2)]. Prior studies investigating human acclimatization to high altitude with A-aDO2 as a key parameter have not investigated differences between subjects with (PFO+) or without a PFO (PFO-). We hypothesized that in PFO+ subjects A-aDO2 would not improve (i.e., decrease) after acclimatization to high altitude compared with PFO- subjects. Twenty-one (11 PFO+) healthy sea-level residents were studied at rest and during cycle ergometer exercise at the highest iso-workload achieved at sea level (SL), after acute transport to 5,260 m (ALT1), and again at 5,260 m after 16 days of high-altitude acclimatization (ALT16). In contrast to PFO- subjects, PFO+ subjects had 1) no improvement in A-aDO2 at rest and during exercise at ALT16 compared with ALT1, 2) no significant increase in resting alveolar ventilation, or alveolar Po2, at ALT16 compared with ALT1, and consequently 3) an increased arterial Pco2 and decreased arterial Po2 and arterial O2 saturation at rest at ALT16. Furthermore, PFO+ subjects had an increased incidence of acute mountain sickness (AMS) at ALT1, concomitant with significantly lower peripheral O2 saturation (SpO2). These data suggest that PFO+ subjects have increased susceptibility to AMS when not taking prophylactic treatments, that right-to-left shunt through a PFO impairs pulmonary gas exchange efficiency even after acclimatization to high altitude, and that PFO+ subjects have blunted ventilatory acclimatization after 16 days at altitude compared with PFO- subjects.
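For readers unfamiliar with the metric: A-aDO2 compares measured arterial Po2 with alveolar Po2 estimated from the alveolar gas equation, PAO2 = FiO2 × (Pb − PH2O) − PaCO2/RER. A worked sketch with illustrative (non-study) numbers:

```python
# Worked sketch of the A-aDO2 metric; all values below are illustrative.
def alveolar_po2(fio2, pb_mmhg, paco2_mmhg, rer=0.8, ph2o_mmhg=47.0):
    """Alveolar gas equation: PAO2 = FiO2*(Pb - PH2O) - PaCO2/RER."""
    return fio2 * (pb_mmhg - ph2o_mmhg) - paco2_mmhg / rer

def a_a_do2(fio2, pb_mmhg, paco2_mmhg, arterial_po2_mmhg, rer=0.8):
    """Alveolar-to-arterial Po2 difference (mmHg)."""
    return alveolar_po2(fio2, pb_mmhg, paco2_mmhg, rer) - arterial_po2_mmhg

# Illustrative high-altitude numbers (barometric pressure around 410 mmHg):
print(round(a_a_do2(fio2=0.2093, pb_mmhg=410,
                    paco2_mmhg=22, arterial_po2_mmhg=40), 1))  # ~8.5 mmHg
```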
Abstract:
Different therapeutic options for prosthetic joint infections exist, but surgery remains key. With a two-stage exchange procedure, a success rate above 90% can be expected. Currently, there is no consensus regarding the optimal duration between explantation and reimplantation in a two-stage procedure. The aim of this study was to retrospectively compare treatment outcomes between short-interval and long-interval two-stage exchanges. Patients having a two-stage exchange of a hip or knee prosthetic joint infection at Lausanne University Hospital (Switzerland) between 1999 and 2013 were included. Patient satisfaction, joint function, and eradication of infection were compared between patients having a short (2 to 4 weeks) versus a long (4 weeks or more) interval during a two-stage procedure. Patient satisfaction was defined as good if the patient did not have pain and bad if the patient had pain. Functional outcome was defined as good if the patient had a prosthesis in place and could walk, medium if the prosthesis was in place but the patient could not walk, and bad if the prosthesis was no longer in place. Infection outcome was considered good if there had been no re-infection and bad if there had been a re-infection of the prosthesis. 145 patients (100 hips, 45 knees) were identified, with a median age of 68 years (range 19-103). The median hospital stay was 58 days (range 10-402) and the median follow-up 12.9 months (range 0.5-152). 28% and 72% of the patients had a short-interval and a long-interval exchange of the prosthesis, respectively. Patient satisfaction, functional outcome, and infection outcome for patients having a short versus a long interval are reported in the Table. Patient satisfaction was higher when a long interval was used, whereas the functional and infection outcomes were better when a short interval was used. According to this study, a short-interval exchange appears preferable to a long-interval one, especially in view of treatment effectiveness and functional outcome.
Abstract:
The Teggiolo zone is the sedimentary cover of the Antigorio nappe, one of the lowest tectonic units of the Penninic Central Alps. Detailed mapping, stratigraphic and structural analyses, and comparisons with less metamorphic series in several well-studied domains of the Alps provide a new stratigraphic interpretation. The Teggiolo zone comprises several sedimentary cycles, separated by erosive surfaces and large stratigraphic gaps, which cover the time span from the Triassic to the Eocene. In Mid-Jurassic times it appears as an uplifted, partially emergent block marking the southern limit of the main Helvetic basin (the Limiting South-Helvetic Rise, LSHR). The main mass of the Teggiolo calcschists, whose base truncates the Triassic-Jurassic cycles and can erode the Antigorio basement, consists of fine-grained clastic sediments analogous to the deep-water flyschoid deposits of Late Cretaceous to Eocene age in the North-Penninic (or Valais s.l.) basins. Thus the Antigorio-Teggiolo domain occupies a crucial paleogeographic position on the boundary between the Helvetic and Penninic realms: from the Triassic to the Early Cretaceous its affinity is with the Helvetic realm; at the end of the Cretaceous it is incorporated into the North-Penninic basins. An unexpected result is the discovery of the important role played by complex formations of wildflysch type at the top of the Teggiolo zone. They contain blocks of various sizes; according to their nature, three different associations are distinguished, with specific vertical and lateral distributions. These blocks give clues to the existence of territories that have disappeared from the present-day level of observation and impose constraints on the kinematics of early folding and embryonic nappe emplacement. Tectonics produced several phases of superimposed folds and schistosities, more in the metasediments than in the gneissic basement. Older deformations that predate the amplification of the frontal hinge of the nappe generated the dominant schistosity and the km-wide Vanzèla isoclinal fold.
Abstract:
PURPOSE: The Cancer Vaccine Consortium of the Cancer Research Institute (CVC-CRI) conducted a multicenter HLA-peptide multimer proficiency panel (MPP) with a group of 27 laboratories to assess the performance of the assay. EXPERIMENTAL DESIGN: Participants used commercially available HLA-peptide multimers and a well-characterized common source of peripheral blood mononuclear cells (PBMC). The frequency of CD8+ T cells specific for two HLA-A2-restricted model antigens was measured by flow cytometry. The panel design allowed participants to use their preferred staining reagents and locally established protocols for cell labeling, data acquisition, and analysis. RESULTS: We observed significant differences across laboratories both in the performance characteristics of the assay and in the reported frequencies of specific T cells. These results emphasize the need to identify the variables most responsible for the observed variability in order to harmonize the technique across institutions. CONCLUSIONS: Three key recommendations emerged that would likely reduce assay variability and thus move toward harmonization of this assay: (1) use more than two colors for staining, (2) collect at least 100,000 CD8 T cells, and (3) use a background control sample to appropriately set the analytical gates. We also provide more insight into the limitations of the assay and identify additional protocol steps that potentially impact the quality of the data generated and should therefore serve as primary targets for systematic analysis in future panels. Finally, we propose initial guidelines for harmonizing assay performance, including the introduction of standard operating protocols to allow for adequate training of technical staff and auditing of test analysis procedures.
Abstract:
OBJECTIVE: To provide an update to the original Surviving Sepsis Campaign clinical management guidelines, "Surviving Sepsis Campaign Guidelines for Management of Severe Sepsis and Septic Shock," published in 2004. DESIGN: Modified Delphi method with a consensus conference of 55 international experts, several subsequent meetings of subgroups and key individuals, teleconferences, and electronic-based discussion among subgroups and among the entire committee. This process was conducted independently of any industry funding. METHODS: We used the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations. A strong recommendation (1) indicates that an intervention's desirable effects clearly outweigh its undesirable effects (risk, burden, cost) or clearly do not. Weak recommendations (2) indicate that the tradeoff between desirable and undesirable effects is less clear. The grade of strong or weak is considered of greater clinical importance than a difference in letter level of quality of evidence. In areas without complete agreement, a formal process of resolution was developed and applied. Recommendations are grouped into those directly targeting severe sepsis, recommendations targeting general care of the critically ill patient that are considered high priority in severe sepsis, and pediatric considerations. RESULTS: Key recommendations, listed by category, include early goal-directed resuscitation of the septic patient during the first 6 hrs after recognition (1C); blood cultures before antibiotic therapy (1C); imaging studies performed promptly to confirm potential source of infection (1C); administration of broad-spectrum antibiotic therapy within 1 hr of diagnosis of septic shock (1B) and severe sepsis without septic shock (1D); reassessment of antibiotic therapy with microbiology and clinical data to narrow coverage, when appropriate (1C); a usual 7-10 days of antibiotic therapy guided by clinical response (1D); source control with attention to the balance of risks and benefits of the chosen method (1C); administration of either crystalloid or colloid fluid resuscitation (1B); fluid challenge to restore mean circulating filling pressure (1C); reduction in rate of fluid administration with rising filling pressures and no improvement in tissue perfusion (1D); vasopressor preference for norepinephrine or dopamine to maintain an initial target of mean arterial pressure ≥65 mm Hg (1C); dobutamine inotropic therapy when cardiac output remains low despite fluid resuscitation and combined inotropic/vasopressor therapy (1C); stress-dose steroid therapy given only in septic shock after blood pressure is identified to be poorly responsive to fluid and vasopressor therapy (2C); recombinant activated protein C in patients with severe sepsis and clinical assessment of high risk for death (2B, except 2C for postoperative patients).
In the absence of tissue hypoperfusion, coronary artery disease, or acute hemorrhage, target a hemoglobin of 7-9 g/dL (1B); a low tidal volume (1B) and limitation of inspiratory plateau pressure strategy (1C) for acute lung injury (ALI)/acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure in acute lung injury (1C); head of bed elevation in mechanically ventilated patients unless contraindicated (1B); avoiding routine use of pulmonary artery catheters in ALI/ARDS (1A); to decrease days of mechanical ventilation and ICU length of stay, a conservative fluid strategy for patients with established ALI/ARDS who are not in shock (1C); protocols for weaning and sedation/analgesia (1B); using either intermittent bolus sedation or continuous infusion sedation with daily interruptions or lightening (1B); avoidance of neuromuscular blockers, if at all possible (1B); institution of glycemic control (1B), targeting a blood glucose < 150 mg/dL after initial stabilization (2C); equivalency of continuous veno-venous hemofiltration or intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1A); use of stress ulcer prophylaxis to prevent upper gastrointestinal bleeding using H2 blockers (1A) or proton pump inhibitors (1B); and consideration of limitation of support where appropriate (1D). Recommendations specific to pediatric severe sepsis include greater use of physical examination therapeutic end points (2C); dopamine as the first drug of choice for hypotension (2C); steroids only in children with suspected or proven adrenal insufficiency (2C); and a recommendation against the use of recombinant activated protein C in children (1B). CONCLUSIONS: There was strong agreement among a large cohort of international experts regarding many level 1 recommendations for the best current care of patients with severe sepsis. Evidence-based recommendations regarding the acute management of sepsis and septic shock are the first step toward improved outcomes for this important group of critically ill patients.
Abstract:
Led by key opinion leaders in the field, the Cancer Immunotherapy Consortium of the Cancer Research Institute 2012 Scientific Colloquium included 179 participants who exchanged cutting-edge information on basic, clinical, and translational cancer immunology and immunotherapy. The meeting revealed how rapidly this field is advancing. The keynote talk, given by Wolf H. Fridman, described the microenvironment of primary and metastatic human tumors. Participants interacted through oral presentations and panel discussions on topics that included host reactions in tumors, advances in imaging, monitoring therapeutic immune modulation, the benefits and risks of immunotherapy, and immune monitoring activities. In summary, the annual meeting gathered clinicians and scientists from academia, industry, and regulatory agencies around the globe to interact and exchange important scientific advances related to tumor immunobiology and cancer immunotherapy.
Abstract:
We propose a new method, based on inertial sensors, to automatically measure at high frequency the durations of the main phases of ski jumping (i.e., take-off release, take-off, and early flight). The kinematics of the ski jumping movement were recorded by four inertial sensors, attached to the thigh and shank of junior athletes, for 40 jumps performed in indoor conditions and 36 jumps in field conditions. An algorithm was designed to detect temporal events from the recorded signals and to estimate the duration of each phase. These durations were evaluated against a reference camera-based motion capture system and against video observations made by trainers. The precision for the take-off release and take-off durations (indoor < 39 ms, outdoor = 27 ms) can be considered technically valid for performance assessment. The errors for early flight duration (indoor = 22 ms, outdoor = 119 ms) were comparable to the trainers' variability and should be interpreted with caution. No significant changes in the error were noted between indoor and outdoor conditions, and individual jumping technique did not influence the error of take-off release and take-off. The proposed system can therefore provide valuable information for the performance evaluation of ski jumpers during training sessions.
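As a simplified illustration of this kind of temporal event detection (a sketch only; the thresholds, 500 Hz sampling rate, and synthetic signal are assumptions, not the paper's algorithm), a movement phase can be segmented by threshold crossings on an angular-velocity trace:

```python
# Sketch: segment one phase of a movement by threshold crossings
# on an inertial (angular velocity) signal.
def detect_phase(signal, fs_hz, onset_thr, offset_thr):
    """Return (start_s, end_s, duration_s) of the first supra-threshold burst."""
    start = next(i for i, v in enumerate(signal) if v > onset_thr)
    end = next(i for i, v in enumerate(signal[start:], start) if v < offset_thr)
    return start / fs_hz, end / fs_hz, (end - start) / fs_hz

fs = 500  # Hz, assumed sampling rate
gyro = [0.1] * 100 + [3.0] * 20 + [0.1] * 100  # rad/s, synthetic burst
start_s, end_s, duration_s = detect_phase(gyro, fs, onset_thr=1.0, offset_thr=0.5)
print(f"phase duration: {duration_s * 1000:.0f} ms")  # 40 ms on this trace
```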
Abstract:
OBJECTIVE: To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. DESIGN: Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING: Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES: 894 protocols of randomised controlled trials involving patients, approved by participating research ethics committees between 2000 and 2003, and 515 subsequent full journal publications. RESULTS: Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry-sponsored trials planned subgroup analyses more often than investigator-sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, the authors stated that the subgroup analyses were prespecified, but this was not supported by 28 (34.6%) of the corresponding protocols. In 86 publications the authors claimed a subgroup effect, but only 36 (41.9%) of the corresponding protocols reported a planned subgroup analysis. CONCLUSIONS: Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements about subgroup prespecification in publications of randomised controlled trials had no documentation in the corresponding protocols. Definitive judgments regarding the credibility of claimed subgroup effects are not possible without access to the protocols and analysis plans of randomised controlled trials.
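For context on the "statistical test for interaction" counted above: a prespecified subgroup analysis typically fits one model with a treatment-by-subgroup interaction term and tests that coefficient, rather than comparing the treatment effect within each subgroup separately. A minimal sketch with simulated data (illustrative only, not the study's analysis):

```python
# Sketch: prespecified test for treatment-by-subgroup interaction
# in a logistic regression on simulated trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "subgroup": rng.integers(0, 2, n),  # e.g., 1 = hypothetical subgroup member
})
# Simulate a binary outcome whose treatment effect differs by subgroup.
logit_p = -1.0 + 0.5 * df.treatment - 0.7 * df.treatment * df.subgroup
df["event"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("event ~ treatment * subgroup", data=df).fit(disp=False)
print(model.pvalues["treatment:subgroup"])  # the prespecified interaction test
```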