55 results for Protocols of the wise men of Zion.
Abstract:
Background: Changing perspectives on the natural history of celiac disease (CD), new serology and genetic tests, and amended histological criteria for diagnosis cast doubt on past prevalence estimates for CD. We set out to establish a more accurate prevalence estimate for CD using a novel serogenetic approach. Methods: The human leukocyte antigen (HLA)-DQ genotype was determined in 356 patients with 'biopsy-confirmed' CD, and in two age-stratified, randomly selected community cohorts of 1,390 women and 1,158 men. Sera were screened for CD-specific serology. Results: Only five 'biopsy-confirmed' patients with CD did not possess the susceptibility alleles HLA-DQ2.5, DQ8, or DQ2.2, and four of these were misdiagnoses. HLA-DQ2.5, DQ8, or DQ2.2 was present in 56% of all women and men in the community cohorts. Transglutaminase (TG)-2 IgA and composite TG2/deamidated gliadin peptide (DGP) IgA/IgG were abnormal in 4.6% and 5.6%, respectively, of the community women and 6.9% and 6.9%, respectively, of the community men, but in the screen-positive group, only 71% and 75%, respectively, of women and 65% and 63%, respectively, of men possessed HLA-DQ2.5, DQ8, or DQ2.2. Medical review was possible for 41% of seropositive women and 50% of seropositive men, and led to biopsy-confirmed CD in 10 women (0.7%) and 6 men (0.5%), but based on relative risk for HLA-DQ2.5, DQ8, or DQ2.2 in all TG2 IgA or TG2/DGP IgA/IgG screen-positive subjects, CD affected 1.3% or 1.9%, respectively, of women and 1.3% or 1.2%, respectively, of men. Serogenetic data from these community cohorts indicated that testing screen positives for HLA-DQ, or carrying out HLA-DQ and further serology, could have reduced unnecessary gastroscopies due to false-positive serology by at least 40% and by over 70%, respectively. Conclusions: Screening with TG2 IgA serology and requiring biopsy confirmation caused the community prevalence of CD to be substantially underestimated.
Testing for HLA-DQ genes and confirmatory serology could reduce the numbers of unnecessary gastroscopies. © 2013 Anderson et al.; licensee BioMed Central Ltd.
Abstract:
Background Guidelines and clinical practice for the prevention of complications associated with central venous catheters (CVC) around the world vary greatly. Most institutions recommend the use of heparin to prevent occlusion; however, there is debate regarding the need for heparin, and evidence to suggest 0.9% sodium chloride (normal saline) may be as effective. The use of heparin is not without risk, may be unnecessary and is also associated with increased cost. Objectives To assess the clinical effects (benefits and harms) of intermittent flushing of heparin versus normal saline to prevent occlusion in long-term central venous catheters in infants and children. Search Methods The Cochrane Vascular Trials Search Co-ordinator searched the Specialised Register (last searched April 2015) and the Cochrane Register of Studies (Issue 3, 2015). We also searched the reference lists of retrieved trials. Selection Criteria Randomised controlled trials that compared the efficacy of normal saline with heparin to prevent occlusion of long-term CVCs in infants and children aged up to 18 years were included. We excluded temporary CVCs and peripherally inserted central catheters (PICC). Data Collection and Analysis Two review authors independently assessed trial inclusion criteria and trial quality, and extracted data. Rate ratios were calculated for two outcome measures - occlusion of the CVC and central line-associated blood stream infection. Other outcome measures included duration of catheter placement, inability to withdraw blood from the catheter, use of urokinase or recombinant tissue plasminogen activator, incidence of removal or re-insertion of the catheter, or both, and other CVC-related complications such as dislocation of CVCs, other CVC site infections and thrombosis. Main Results Three trials with a total of 245 participants were included in this review.
The three trials directly compared normal saline with heparin; however, the studies used different protocols for the standard and experimental arms, with different heparin concentrations and different flush frequencies. In addition, not all studies reported on all outcomes. The quality of the evidence ranged from low to very low because there was no blinding, heterogeneity and inconsistency between studies were high, and the confidence intervals were wide. CVC occlusion was assessed in all three trials (243 participants). We were able to pool the results of two trials for the outcomes of CVC occlusion and CVC-associated blood stream infection. The estimated rate ratio for CVC occlusion per 1000 catheter days between the normal saline and heparin groups was 0.75 (95% CI 0.10 to 5.51, two studies, 229 participants, very low quality evidence). The estimated rate ratio for CVC-associated blood stream infection was 1.48 (95% CI 0.24 to 9.37, two studies, 231 participants; low quality evidence). The duration of catheter placement was reported to be similar between the two study arms in one study (203 participants). Authors' Conclusions The review found that there was not enough evidence to determine the effects of intermittent flushing of heparin versus normal saline to prevent occlusion in long-term central venous catheters in infants and children. Ultimately, if this evidence were available, it would facilitate the development of evidence-based clinical practice guidelines and consistency of practice.
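For readers unfamiliar with the rate-ratio outcome measure used above, here is a minimal sketch of the calculation, using hypothetical counts rather than the trial data:

```python
def rate_ratio(events_a, days_a, events_b, days_b):
    """Incidence rate ratio: (events_a / days_a) / (events_b / days_b),
    rearranged as a single division to avoid intermediate rounding."""
    return (events_a * days_b) / (events_b * days_a)

# Hypothetical counts for illustration only (not the trial data):
# 3 occlusions over 4000 catheter-days (saline) vs 4 over 4000 (heparin).
print(rate_ratio(3, 4000, 4, 4000))  # 0.75
```

A rate ratio below 1 favours the first group; the wide confidence interval reported in the review (0.10 to 5.51) is what makes the pooled estimate uninformative.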
Abstract:
Aim Frail older people typically suffer several chronic diseases, receive multiple medications and are more likely to be institutionalized in residential aged care facilities. In such patients, optimizing prescribing and avoiding use of high-risk medications might prevent adverse events. The present study aimed to develop a pragmatic, easily applied algorithm for medication review to help clinicians identify and discontinue potentially inappropriate high-risk medications. Methods The literature was searched for robust evidence of the association of adverse effects related to potentially inappropriate medications in older patients to identify high-risk medications. Prior research into the cessation of potentially inappropriate medications in older patients in different settings was synthesized into a four-step algorithm for incorporation into clinical assessment protocols for patients, particularly those in residential aged care facilities. Results The algorithm comprises several steps leading to individualized prescribing recommendations: (i) identify a high-risk medication; (ii) ascertain the current indications for the medication and assess their validity; (iii) assess if the drug is providing ongoing symptomatic benefit; and (iv) consider withdrawing, altering or continuing medications. Decision support resources were developed to complement the algorithm in ensuring a systematic and patient-centered approach to medication discontinuation. These include a comprehensive list of high-risk medications and the reasons for inappropriateness, lists of alternative treatments, and suggested medication withdrawal protocols. Conclusions The algorithm captures a range of different clinical scenarios in relation to potentially inappropriate medications, and offers an evidence-based approach to identifying and, if appropriate, discontinuing such medications. Studies are required to evaluate algorithm effects on prescribing decisions and patient outcomes.
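The four-step algorithm can be sketched as a simple decision function; the field names and returned recommendations below are illustrative assumptions, since the published algorithm is a clinical decision aid rather than software:

```python
def review_medication(med):
    """Sketch of the four-step medication review for one medication.

    `med` is a dict with hypothetical boolean fields; the strings
    returned are illustrative summaries of each step's outcome.
    """
    if not med["is_high_risk"]:
        return "continue"                           # step (i): not high-risk
    if not med["indication_valid"]:
        return "consider withdrawal"                # step (ii): no valid indication
    if not med["ongoing_symptomatic_benefit"]:
        return "consider withdrawal or alteration"  # step (iii): no ongoing benefit
    return "continue, with periodic re-review"      # step (iv)

print(review_medication({"is_high_risk": True,
                         "indication_valid": True,
                         "ongoing_symptomatic_benefit": False}))
# prints "consider withdrawal or alteration"
```

In practice each step would draw on the decision support resources the study describes (the high-risk medication list, alternative treatments, and withdrawal protocols).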
Abstract:
Over recent decades, efforts have been made to reduce human exposure to atmospheric pollutants including polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs) through emission control and abatement. Along with the potential changes in their concentrations resulting from these efforts, profiles of emission sources may have also changed over such extended timeframes. However, relevant data are quite limited in the Southern Hemisphere. We revisited two sampling sites in an Australian city, where concentration data for atmospheric PAHs and PCBs were available from 1994/5. Monthly air samples from July 2013 to June 2014 at the two sites were collected and analysed for these compounds, using similar protocols to the original study. A prominent seasonal pattern was observed for PAHs with elevated concentrations in cooler months, whereas PCB levels showed little seasonal variation. Compared to two decades ago, atmospheric concentrations of ∑13 PAHs (gaseous + particle-associated) in this city have decreased by approximately one order of magnitude and the apparent halving time (t1/2) was estimated as 6.2 ± 0.56 years. ∑6 iPCBs concentrations (median value; gaseous + particle-associated) have decreased by 80% with an estimated t1/2 of 11 ± 2.9 years. These trends and values are similar to those reported for comparable sites in the Northern Hemisphere. To characterise emission source profiles, samples were also collected from a bushfire event and within a vehicular tunnel. Emissions from bushfires are suggested to be an important contributor to the current atmospheric concentrations of PAHs in this city. This contribution is more important in cooler months, i.e. June, July and August, and its importance may have increased over the last two decades.
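The reported halving times follow from assuming an exponential (first-order) decline; a minimal sketch of the arithmetic, with illustrative concentration values rather than the measured data:

```python
import math

def halving_time(c_start, c_end, years_elapsed):
    """Apparent halving time, assuming first-order (exponential) decline."""
    decay_rate = math.log(c_start / c_end) / years_elapsed
    return math.log(2) / decay_rate

# A one-order-of-magnitude drop over the roughly 19 years between the
# 1994/5 and 2013/14 campaigns implies a halving time close to the
# reported 6.2 years for PAHs.
print(round(halving_time(10.0, 1.0, 19), 1))  # 5.7
```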
Abstract:
This thesis evaluates the security of Supervisory Control and Data Acquisition (SCADA) systems, which are one of the key foundations of many critical infrastructures. Specifically, it examines one of the standardised SCADA protocols, the Distributed Network Protocol Version 3 (DNP3), which attempts to provide a security mechanism ensuring that messages transmitted between devices are adequately secured from rogue applications. To achieve this, the thesis applies formal methods from theoretical computer science to formally analyse the correctness of the protocol.
Abstract:
Identity-based cryptography has become extremely fashionable in the last few years. As a consequence many proposals for identity-based key establishment have emerged, the majority in the two party case. We survey the currently proposed protocols of this type, examining their security and efficiency. Problems with some published protocols are noted.
Abstract:
Denial-of-service attacks (DoS) and distributed denial-of-service attacks (DDoS) attempt to temporarily disrupt users or computer resources to cause service unavailability to legitimate users in the internetworking system. The most common type of DoS attack occurs when adversaries flood a large amount of bogus data to interfere with or disrupt the service on the server. The attack can be either a single-source attack, which originates at only one host, or a multi-source attack, in which multiple hosts coordinate to flood a large number of packets to the server. Cryptographic mechanisms in authentication schemes are an example approach to help the server to validate malicious traffic. Since authentication in key establishment protocols requires the verifier to spend some resources before successfully detecting the bogus messages, adversaries might be able to exploit this flaw to mount an attack to overwhelm the server resources. The attacker is able to perform this kind of attack because many key establishment protocols incorporate strong authentication at the beginning phase, before they can identify the attacks. This is an example of DoS threats in most key establishment protocols because they have been implemented to support confidentiality and data integrity, but do not carefully consider other security objectives, such as availability. The main objective of this research is to design denial-of-service resistant mechanisms in key establishment protocols. In particular, we focus on the design of cryptographic protocols related to key establishment protocols that implement client puzzles to protect the server against resource exhaustion attacks. Another objective is to extend formal analysis techniques to include DoS-resistance. Basically, the formal analysis approach is used not only to analyse and verify the security of a cryptographic scheme carefully but also to help in the design stage of new protocols with a high level of security guarantee.
In this research, we focus on an analysis technique based on Meadows' cost-based framework, and we implement a DoS-resistance model using Coloured Petri Nets. Meadows' cost-based framework was proposed specifically to assess denial-of-service vulnerabilities in cryptographic protocols using mathematical proof, while Coloured Petri Nets are used to model and verify communication protocols using interactive simulations. In addition, Coloured Petri Nets can help the protocol designer to clarify and reduce inconsistency in the protocol specification. Therefore, the second objective of this research is to explore vulnerabilities in existing DoS-resistant protocols, as well as extend a formal analysis approach to our new framework for improving DoS-resistance and evaluating the performance of the new proposed mechanism. In summary, the specific outcomes of this research include the following results: 1. A taxonomy of denial-of-service resistant strategies and techniques used in key establishment protocols; 2. A critical analysis of existing DoS-resistant key exchange and key establishment protocols; 3. An implementation of Meadows' cost-based framework using Coloured Petri Nets for modelling and evaluating DoS-resistant protocols; and 4. A development of new efficient and practical DoS-resistant mechanisms to improve the resistance to denial-of-service attacks in key establishment protocols.
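The client-puzzle mechanism mentioned above can be sketched as a hash-based proof of work, where solving costs the client many hash evaluations but verification costs the server only one. This is a generic illustration, not the specific constructions analysed in the thesis:

```python
import hashlib
import itertools
import os

def make_puzzle(difficulty_bits):
    """Server side: issue a fresh random nonce and a difficulty level."""
    return os.urandom(16), difficulty_bits

def solve(nonce, difficulty_bits):
    """Client side: brute-force x so that SHA-256(nonce || x) falls below
    the target. Expected work grows as 2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for x in itertools.count():
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return x

def verify(nonce, difficulty_bits, x):
    """Server side: a single hash suffices to check the solution."""
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce, bits = make_puzzle(12)   # roughly 4096 hashes of expected client work
assert verify(nonce, bits, solve(nonce, bits))
```

The asymmetry between solving and verifying is what lets the server force attackers to commit resources before it does.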
Abstract:
This paper explores the way men are represented in present-day advertising. Most gender-related studies have concentrated on studying women in advertising and claim that men are still represented as the dominant gender and in more active, independent and functional roles than women. This paper asks whether this still holds for advertising at the beginning of the 21st century. Many cultural changes may have broken the earlier stereotypes, for example changes in family life, attitudes toward various sexual identities, concepts of masculinity and femininity, and changes in cultural style.
Abstract:
Introduction: Evaluating the effectiveness of interventions designed to increase physical activity in communities is often a difficult and complex task, requiring considerable expertise and investment, and often constrained by methodological limitations. These limitations, in turn, create additional challenges when these studies are used in systematic reviews as they hinder the confidence, precision and interpretation of results. The objective of this paper is to summarise the methodological challenges posed in conducting a systematic review of community-wide physical activity interventions, to help inform those conducting future primary research and systematic reviews. Methods: We conducted a Cochrane systematic review of community-wide interventions to increase physical activity. We assessed the methodological quality of the included studies. We will investigate these in greater detail, particularly in relation to the potential impact on measures of effect, confidence in results, generalizability of results and general interpretation. Results: The systematic review was conducted and has been published in the Cochrane Library. A logic model was helpful in defining and interpreting the studies. Many studies of unsuitable study design were excluded; however, several important methodological limitations of the primary studies evaluating community-wide physical activity interventions emerged. These included: the failure to use validated tools to measure physical activity; issues associated with pre- and post-test designs; inadequate sampling of populations; poor control groups; and intervention and measurement protocols of inadequate duration. Although it is challenging to undertake rigorous evaluations of complex interventions, these issues result in significant uncertainty over the effectiveness of these interventions, and the possible factors required for a community-wide intervention to be successful.
In particular, the consequence of combining several of these limitations (e.g. unvalidated tools, inadequate sampling, and short duration) is that studies may lack the sensitivity to detect any meaningful change. Multiple publications of findings for the same study also made interpretation difficult; however, interventions with parallel qualitative publications were helpful. Discussion: Evaluating community-wide interventions to increase physical activity in a rigorous way is incredibly challenging. These findings reflect these challenges but have important ramifications for researchers conducting primary studies to determine the efficacy of such interventions, as well as for researchers conducting systematic reviews. This new review shows that the inadequacies of design and evaluation are continuing. It is hoped that the adoption of these suggestions may aid in the development of systematic reviews, but more importantly, in enabling translation of such findings into policy and practice.
Abstract:
We conducted on-road and simulator studies to explore the mechanisms underpinning driver-rider crashes. In Study 1, the verbal protocols of 40 drivers and riders were assessed at intersections as part of a 15 km on-road route in Melbourne. Network analysis of the verbal transcripts highlighted key differences in the situation awareness of drivers and riders at intersections. In Study 2, using a driving simulator, we examined the influence of acute exposure to motorcyclists on car drivers. In a 15 min simulated drive, 40 drivers saw either no motorcycles or a high number of motorcycles in the surrounding traffic. In a subsequent 45-60 min drive, drivers were asked to detect motorcycles in traffic. The proportion of motorcycles was manipulated so that there was either a high (120) or low (6) number of motorcycles during the drive. Those drivers exposed to a high number of motorcycles were significantly faster at detecting motorcycles. Fundamentally, the incompatible situation awareness of drivers and riders at intersections underpins the conflicts. Study 2 offers some suggestion for a countermeasure here, although more research around schema and exposure training to support safer interactions is needed.
Abstract:
Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
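The additive secret sharing that the protocol converts from can be illustrated in a few lines; the prime below is a toy parameter, and the paper's novel additive-to-multiplicative conversion subprotocol is not shown:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime, standing in for the field's modulus

def share_additive(secret, n):
    """Split `secret` into n additive shares over GF(P): any n-1 shares
    are uniformly random, but all n sum to the secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

s = 123456789
shares = share_additive(s, 5)
assert reconstruct(shares) == s
```

A multiplicative sharing would instead have the shares multiply to the secret, which is why the paper restricts inputs to non-zero field elements.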
Abstract:
Plant small RNAs are a class of 19- to 25-nucleotide (nt) RNA molecules that are essential for genome stability, development and differentiation, disease, cellular communication, signaling, and adaptive responses to biotic and abiotic stress. Small RNAs comprise two major RNA classes, short interfering RNAs (siRNAs) and microRNAs (miRNAs). Efficient and reliable detection and quantification of small RNA expression has become an essential step in understanding their roles in specific cells and tissues. Here we provide protocols for the detection of miRNAs by stem-loop RT-PCR. This method enables fast and reliable miRNA expression profiling from as little as 20 pg of total RNA extracted from plant tissue and is suitable for high-throughput miRNA expression analysis. In addition, this method can be used to detect other classes of small RNAs, provided the sequence is known and their GC contents are similar to those specific for miRNAs.
Abstract:
A key exchange protocol allows a set of parties to agree upon a secret session key over a public network. Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for the case of GKE protocols. We first model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure even against outsider KCI attacks. The attacks on these protocols demonstrate the necessity of considering KCI resilience for GKE protocols. Finally, we give a new proof of security for an existing GKE protocol under the revised model assuming random oracles.
Abstract:
China has been the focus of much academic and business scrutiny of late. Its economic climate is changing and its huge new market opportunities seem quite tantalizing to the would-be 'technology entrepreneur'. But China's market is a relatively immature one; it is still in the process of being opened up to real competition. The corollary of this is that, at this stage of the transitional process, there is still significant State control of market function. This article discusses Chinese competition law, the technology transfer system, how the laws are being reformed and how the technology entrepreneur fares under them. The bottom line is that while opportunities beckon, the wise entrepreneur will nevertheless continue to exercise caution.
Abstract:
Literally, the word compliance suggests conformity in fulfilling official requirements. The thesis presents the results of the analysis and design of a class of protocols called compliant cryptologic protocols (CCP). The thesis presents a notion for compliance in cryptosystems that is conducive as a cryptologic goal. CCP are employed in security systems used by at least two mutually mistrusting sets of entities. The individuals in the sets of entities only trust the design of the security system and any trusted third party the security system may include. Such a security system can be thought of as a broker between the mistrusting sets of entities. In order to provide confidence in operation for the mistrusting sets of entities, CCP must provide compliance verification mechanisms. These mechanisms are employed either by all the entities or a set of authorised entities in the system to verify the compliance of the behaviour of various participating entities with the rules of the system. It is often stated that confidentiality, integrity and authentication are the primary interests of cryptology. It is evident from the literature that authentication mechanisms employ confidentiality and integrity services to achieve their goal. Therefore, the fundamental services that any cryptographic algorithm may provide are confidentiality and integrity only. Since controlling the behaviour of the entities is not a feasible cryptologic goal, the verification of the confidentiality of any data is a futile cryptologic exercise. For example, there exists no cryptologic mechanism that would prevent an entity from willingly or unwillingly exposing its private key corresponding to a certified public key. The confidentiality of the data can only be assumed. Therefore, any verification in cryptologic protocols must take the form of integrity verification mechanisms. Thus, compliance verification must take the form of integrity verification in cryptologic protocols.
A definition of compliance that is conducive as a cryptologic goal is presented as a guarantee on the confidentiality and integrity services. The definitions are employed to provide a classification mechanism for various message formats in a cryptologic protocol. The classification assists in the characterisation of protocols, which assists in providing a focus for the goals of the research. The resulting concrete goal of the research is the study of those protocols that employ message formats to provide restricted confidentiality and universal integrity services to selected data. The thesis proposes an informal technique to understand, analyse and synthesise the integrity goals of a protocol system. The thesis contains a study of key recovery, electronic cash, peer-review, electronic auction, and electronic voting protocols. All these protocols contain message formats that provide restricted confidentiality and universal integrity services to selected data. The study of key recovery systems aims to achieve robust key recovery relying only on the certification procedure and without the need for tamper-resistant system modules. The result of this study is a new technique for the design of key recovery systems called hybrid key escrow. The thesis identifies a class of compliant cryptologic protocols called secure selection protocols (SSP). The uniqueness of this class of protocols is the similarity in the goals of the member protocols, namely peer-review, electronic auction and electronic voting. The problem statement describing the goals of these protocols contains a tuple, (I, D), where I usually refers to an identity of a participant and D usually refers to the data selected by the participant. SSP are interested in providing a confidentiality service to the tuple for hiding the relationship between I and D, and an integrity service to the tuple after its formation to prevent the modification of the tuple.
The thesis provides a schema to solve the instances of SSP by employing the electronic cash technology. The thesis makes a distinction between electronic cash technology and electronic payment technology. It will treat electronic cash technology to be a certification mechanism that allows the participants to obtain a certificate on their public key, without revealing the certificate or the public key to the certifier. The thesis abstracts the certificate and the public key as the data structure called anonymous token. It proposes design schemes for the peer-review, e-auction and e-voting protocols by employing the schema with the anonymous token abstraction. The thesis concludes by providing a variety of problem statements for future research that would further enrich the literature.
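The anonymous-token idea above, obtaining a certifier's signature without revealing the certified value to the certifier, resembles an RSA blind signature. As a hedged illustration of that underlying resemblance (not the thesis's actual construction), here is a toy sketch; the key size, message, and blinding factor are purely illustrative:

```python
import hashlib

# Toy RSA parameters; a real deployment needs a modulus of >= 2048 bits.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg):
    """Hash a message into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"anonymous token public key"
r = 123457                                # blinding factor, coprime to n

blinded = (h(msg) * pow(r, e, n)) % n     # certifier never sees h(msg)
blind_sig = pow(blinded, d, n)            # certifier signs blindly
sig = (blind_sig * pow(r, -1, n)) % n     # user removes the blinding

assert pow(sig, e, n) == h(msg)           # the token verifies publicly
```

The unblinding works because blind_sig = h(msg)^d · r^(ed) = h(msg)^d · r (mod n), so multiplying by r⁻¹ leaves a valid signature on h(msg) that the certifier has never seen.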