798 results for cryptographic protocol
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service that a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis that gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
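The relationship DIP exploits can be illustrated with a minimal sketch. Assuming a simple slotted-channel model in which each of n neighbors transmits in any slot independently with probability p (an assumption of this sketch, not necessarily the paper's exact model), the probability that a slot is busy is 1 - (1 - p)^n, so a node can invert its observed busy fraction to estimate n without exchanging any identifiers. The function names below are illustrative.

```python
import math
import random

def estimate_density(busy_fraction: float, p: float) -> float:
    """Invert Pr[slot busy] = 1 - (1 - p)^n to estimate neighborhood size n."""
    if busy_fraction >= 1.0:
        raise ValueError("channel saturated; estimate undefined")
    return math.log(1.0 - busy_fraction) / math.log(1.0 - p)

def simulate_busy_fraction(n_neighbors: int, p: float, slots: int = 2000) -> float:
    """Fraction of slots in which at least one of n neighbors transmits."""
    busy = sum(
        1 for _ in range(slots)
        if any(random.random() < p for _ in range(n_neighbors))
    )
    return busy / slots

if __name__ == "__main__":
    random.seed(1)
    b = simulate_busy_fraction(n_neighbors=30, p=0.02)
    print(f"observed busy fraction: {b:.3f}")
    print(f"estimated neighborhood size: {estimate_density(b, p=0.02):.1f}")
```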
Abstract:
We leverage the buffering capabilities of end-systems to achieve scalable, asynchronous delivery of streams in a peer-to-peer environment. Unlike existing cache-and-relay schemes, we propose a distributed prefetching protocol where peers prefetch and store portions of the streaming media ahead of their playout time, thus not only turning themselves into possible sources for other peers but also allowing their prefetched data to carry them through the departure of their source-peer. This stands in sharp contrast to existing cache-and-relay schemes, where the departure of the source-peer forces its peer children to go to the original server, thus disrupting their service and increasing server and network load. Through mathematical analysis and simulations, we show the effectiveness of maintaining such asynchronous multicasts from several source-peers to other child peers, and the efficacy of prefetching in the face of peer departures. We confirm the scalability of our dPAM protocol, as it is shown to significantly reduce server load.
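A minimal sketch of why prefetching helps, assuming a departed source-peer can be replaced within a known reconnect time; the class and field names are illustrative, not dPAM's actual design.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    playout_pos: float   # seconds of media already played out
    buffered_to: float   # media buffered up to this point (prefetched ahead)

    def survival_window(self) -> float:
        """Seconds of uninterrupted playback available if the source-peer departs."""
        return max(0.0, self.buffered_to - self.playout_pos)

def needs_server_fallback(peer: Peer, reconnect_time: float) -> bool:
    """A child must fall back to the origin server only if its prefetched
    window is shorter than the time needed to find a new source-peer."""
    return peer.survival_window() < reconnect_time

if __name__ == "__main__":
    cache_and_relay = Peer(playout_pos=120.0, buffered_to=120.0)  # relays, never prefetches
    prefetching = Peer(playout_pos=120.0, buffered_to=135.0)      # 15 s prefetched ahead
    for name, p in [("cache-and-relay", cache_and_relay), ("prefetching", prefetching)]:
        print(name, "-> server fallback:", needs_server_fallback(p, reconnect_time=5.0))
```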
Abstract:
The popularity of TCP/IP coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology have prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented using ABR or UBR, leads to serious performance degradations, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques that attempt to patch up TCP/IP over ATM (switch-level enhancements, for example) have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATM has been consistently attributed to packet fragmentation, which is the result of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, thus ensuring a graceful degradation of TCP/IP's performance when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATM. In particular, we show that TCP Boston improves TCP/IP's performance over ATM for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
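AIDA itself is an encoding algorithm; the redundancy-control decision it enables can be sketched with binomial arithmetic. Assuming independent cell losses (a simplification made for this sketch), a packet dispersed into n cells, any m of which suffice to reconstruct it, survives with probability equal to the sum over k from m to n of C(n, k)(1 - loss)^k loss^(n - k), so a sender can pick the smallest n meeting a reconstruction target.

```python
from math import comb

def reconstruction_prob(n: int, m: int, loss: float) -> float:
    """Probability that at least m of n dispersed cells survive independent loss."""
    return sum(comb(n, k) * (1 - loss) ** k * loss ** (n - k) for k in range(m, n + 1))

def choose_redundancy(m: int, loss: float, target: float = 0.999, n_max: int = 64) -> int:
    """Smallest n >= m such that reconstruction succeeds with probability >= target."""
    for n in range(m, n_max + 1):
        if reconstruction_prob(n, m, loss) >= target:
            return n
    raise ValueError("target unreachable within n_max")

if __name__ == "__main__":
    for loss in (0.001, 0.01, 0.05):
        n = choose_redundancy(m=10, loss=loss)
        print(f"cell loss {loss:.3f}: disperse into n={n} cells (any 10 reconstruct)")
```

As congestion feedback raises the estimated loss rate, the sender increases n; as the network drains, it dials the redundancy back down.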
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
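The reduction idea can be sketched as term rewriting: each reduction replaces a sequence of components with a shorter sequence claimed to be causally indistinguishable, so arbitrarily long compositions rewrite into members of a finite set that can each be model-checked. The component names and rules below are hypothetical placeholders, not the paper's case studies.

```python
# Hypothetical reductions: each maps a component sequence to a shorter,
# claimed-indistinguishable one (the claims themselves must be proven).
REDUCTIONS = {
    ("cache", "cache"): ("cache",),
    ("proxy", "proxy"): ("proxy",),
}

def reduce_composition(comp):
    """Apply reductions until a fixed point, yielding a canonical short form."""
    comp = tuple(comp)
    changed = True
    while changed:
        changed = False
        for pattern, repl in REDUCTIONS.items():
            for i in range(len(comp) - len(pattern) + 1):
                if comp[i:i + len(pattern)] == pattern:
                    comp = comp[:i] + repl + comp[i + len(pattern):]
                    changed = True
                    break
            if changed:
                break
    return comp

if __name__ == "__main__":
    chain = ("client", "proxy", "proxy", "cache", "cache", "cache", "server")
    print(reduce_composition(chain))  # ('client', 'proxy', 'cache', 'server')
```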
Abstract:
We present a transport protocol whose goal is to reduce power consumption without compromising delivery requirements of applications. To meet its goal of energy efficiency, our transport protocol (1) contains mechanisms to balance end-to-end vs. local retransmissions; (2) minimizes acknowledgment traffic using receiver-regulated rate-based flow control combined with selective acknowledgments and in-network caching of packets; and (3) aggressively seeks to avoid any congestion-based packet loss. Within a recently developed ultra low-power multi-hop wireless network system, extensive simulations and experimental results demonstrate that our transport protocol meets its goal of preserving the energy efficiency of the underlying network.
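A minimal sketch of the acknowledgment-minimization idea, assuming a receiver that batches feedback into one cumulative-plus-gaps report per fixed packet interval; the class and report format are illustrative, not the protocol's actual wire format.

```python
class SackReceiver:
    """Receiver-regulated feedback: rather than acknowledging every packet,
    emit one compact report (cumulative ack + gap list) every `interval` packets."""

    def __init__(self, interval: int = 16):
        self.interval = interval
        self.received = set()
        self.since_report = 0

    def on_packet(self, seq: int):
        self.received.add(seq)
        self.since_report += 1
        if self.since_report >= self.interval:
            self.since_report = 0
            return self.report()
        return None  # stay silent; saves radio energy on the reverse path

    def report(self):
        cum = 0
        while cum + 1 in self.received:
            cum += 1
        gaps = sorted(s for s in self.received if s > cum)
        return {"cumulative": cum, "sack": gaps}

if __name__ == "__main__":
    rx = SackReceiver(interval=4)
    for seq in (1, 2, 4, 5):           # packet 3 was lost
        fb = rx.on_packet(seq)
        if fb:
            print(fb)                  # {'cumulative': 2, 'sack': [4, 5]}
```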
Abstract:
Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without trading off delivery requirements of applications. JTP has the following features: it is lightweight, with end-nodes controlling in-network actions by encoding delivery requirements in packet headers; JTP enables applications to specify a range of reliability requirements, thus allocating the right energy budget to packets; JTP minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and stability of the network; JTP minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and JTP fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols.
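A minimal sketch of the header-encoding idea: the sender packs its delivery requirement into the packet header, and an in-network node reads that field to decide whether caching the packet is worth the energy. The 4-byte layout is a hypothetical stand-in, not JTP's actual header format.

```python
import struct

# Hypothetical 4-byte header: version, reliability level (0-255), flags, TTL.
HEADER_FMT = "!BBBB"

def make_header(reliability: int, ttl: int, version: int = 1, flags: int = 0) -> bytes:
    """Sender side: encode the delivery requirement directly in the header."""
    return struct.pack(HEADER_FMT, version, reliability, flags, ttl)

def should_cache(header: bytes, threshold: int = 128) -> bool:
    """In-network node: cache only packets whose sender asked for enough
    reliability to justify the energy cost of storing them."""
    _, reliability, _, _ = struct.unpack(HEADER_FMT, header)
    return reliability >= threshold

if __name__ == "__main__":
    best_effort = make_header(reliability=32, ttl=8)
    critical = make_header(reliability=240, ttl=8)
    print(should_cache(best_effort), should_cache(critical))  # False True
```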
Abstract:
With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms, ranging from large servers to small mobile devices and smart cards, has necessitated research into low cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general purpose processors. Each of the designs proposed is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method to secure data, offering similar security levels to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates for dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy are measured on an FPGA. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 hash competition, are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs are examined, and their subsequent performance is evaluated in terms of throughput, area and energy costs using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is the development of generic interfaces used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms when operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application for performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system.
The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
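The Double-and-Add algorithm mentioned above can be sketched over a toy curve. This is the plain, non-constant-time version, whose key-dependent add steps are exactly what the SPA-secure variants studied in the thesis are designed to hide; the curve parameters are tiny and for illustration only.

```python
# Toy short-Weierstrass curve y^2 = x^3 + A*x + B over GF(P).
P, A, B = 97, 2, 3
INF = None  # point at infinity

def inv_mod(x: int) -> int:
    return pow(x, P - 2, P)  # Fermat inverse, valid since P is prime

def point_add(p1, p2):
    if p1 is INF:
        return p2
    if p2 is INF:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                    # p1 + (-p1) = O
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % P
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return (x3, y3)

def scalar_mult(k: int, pt):
    """Left-to-right double-and-add: one doubling per key bit, plus an
    extra addition whenever the bit is 1 (the SPA-visible difference)."""
    acc = INF
    for bit in bin(k)[2:]:
        acc = point_add(acc, acc)        # double
        if bit == "1":
            acc = point_add(acc, pt)     # add
    return acc

if __name__ == "__main__":
    G = (0, 10)  # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
    print(scalar_mult(5, G))
```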
Abstract:
Background: The eliciting dose (ED) for a peanut allergic reaction in 5% of the peanut allergic population, the ED05, is 1.5 mg of peanut protein. This ED05 was derived from oral food challenges (OFC) that use graded, incremental doses administered at fixed time intervals. Individual patients' threshold doses were used to generate population dose-distribution curves using probability distributions, from which the ED05 was then determined. It is important to clinically validate that this dose is predictive of the allergenic response in a further unselected group of peanut-allergic individuals. Methods/Aims: This is a multi-centre study involving three national-level referral and teaching centres (Cork University Hospital, Ireland; Royal Children's Hospital, Melbourne, Australia; and Massachusetts General Hospital, Boston, U.S.A.). The study is now in progress and will continue until each centre has recruited 125 participants. A total of 375 participants, aged 1–18 years, will be recruited during routine allergy appointments at the centres. The aim is to assess the precision of the predicted ED05 using a single dose (6 mg peanut = 1.5 mg of peanut protein) in the form of a cookie. Validated Food Allergy Quality of Life Questionnaires (FAQLQ) will be self-administered prior to OFC and 1 month after challenge to assess the impact of a single-dose OFC on FAQL. Serological and cell-based in vitro studies will be performed. Conclusion: The validation of the ED05 threshold for allergic reactions in peanut-allergic subjects has potential value for public health measures. The single-dose OFC, based upon the statistical dose-distribution analysis of past challenge trials, promises an efficient approach to identify the most highly sensitive patients within any given food-allergic population.
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart-card, microprocessor, dedicated hardware or personal computer. Attacks based on the power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, an identical device to that which is under attack, allowing the attacker to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters practically affects the efficiency of secret key recovery, as well as the underlying assumption of profiling attacks, namely that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, and their respective merits examined with regard to attack efficiency.
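A minimal sketch of the two phases of a template attack, using per-class means with a pooled diagonal covariance (a "reduced template") and synthetic traces standing in for measured power consumption; the leakage model and all names here are assumptions of the sketch.

```python
import numpy as np

def build_templates(traces, labels):
    """Profiling phase (on the controlled device): per-class mean vector
    plus a pooled, diagonal noise variance."""
    classes = np.unique(labels)
    means = {c: traces[labels == c].mean(axis=0) for c in classes}
    var = traces.var(axis=0) + 1e-12  # shared diagonal covariance
    return means, var

def classify(trace, means, var):
    """Attack phase (on the target device): maximum-likelihood class
    under the Gaussian templates."""
    def loglik(mu):
        return -0.5 * np.sum((trace - mu) ** 2 / var + np.log(2 * np.pi * var))
    return max(means, key=lambda c: loglik(means[c]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic leakage: each class (e.g., a key-dependent intermediate value)
    # has a fixed signature; observed traces are signature plus Gaussian noise.
    sigs = {c: rng.normal(size=20) for c in range(4)}
    labels = rng.integers(0, 4, size=400)
    traces = np.array([sigs[int(c)] + 0.5 * rng.normal(size=20) for c in labels])
    means, var = build_templates(traces, labels)
    target_trace = sigs[2] + 0.5 * rng.normal(size=20)
    print("recovered class:", classify(target_trace, means, var))
```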
Abstract:
Efficient early identification of primary immunodeficiency disease (PID) is important for prognosis, but is not an easy task for non-immunologists. The Clinical Working Party of the European Society for Immunodeficiencies (ESID) has composed a multi-stage diagnostic protocol that is based on expert opinion, in order to increase the awareness of PID among doctors working in different fields. The protocol starts from the clinical presentation of the patient; immunological skills are not needed for its use. The multi-stage design allows cost-effective screening for PID within the large pool of potential cases in all hospitals in the early phases, while more expensive tests are reserved for definitive classification in collaboration with an immunologist at a later stage. Although many PIDs present in childhood, others may present at any age. The protocols presented here are therefore aimed at both adult physicians and paediatricians. While designed for use throughout Europe, there will be national differences which may make modification of this generic algorithm necessary.
Abstract:
BACKGROUND: Stroke is one of the most disabling and costly impairments of adulthood in the United States. Stroke patients clearly benefit from intensive inpatient care, but due to the high cost, there is considerable interest in implementing interventions to reduce hospital lengths of stay. Early discharge rehabilitation programs require coordinated, well-organized home-based rehabilitation, yet lack of sufficient information about the home setting impedes successful rehabilitation. This trial examines a multifaceted telerehabilitation (TR) intervention that uses telehealth technology to simultaneously evaluate the home environment, assess the patient's mobility skills, initiate rehabilitative treatment, prescribe exercises tailored for stroke patients and provide periodic goal-oriented reassessment, feedback and encouragement. METHODS: We describe an ongoing Phase II, 2-arm, 3-site randomized controlled trial (RCT) that determines primarily the effect of TR on physical function and secondarily the effect on disability, falls-related self-efficacy, and patient satisfaction. Fifty participants with a diagnosis of ischemic or hemorrhagic stroke will be randomly assigned to one of two groups: (a) TR; or (b) Usual Care. The TR intervention uses a combination of three videotaped visits and five telephone calls, an in-home messaging device, and additional telephonic contact as needed over a 3-month study period, to provide a progressive rehabilitative intervention with a treatment goal of safe functional mobility of the individual within an accessible home environment. Dependent variables will be measured at baseline, 3, and 6 months and analyzed with a linear mixed-effects model across all time points. DISCUSSION: For patients recovering from stroke, the use of TR to provide home assessments and follow-up training in prescribed equipment has the potential to effectively supplement existing home health services, assist transition to home and increase efficiency. This may be particularly relevant when patients live in remote locations, as is the case for many veterans. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT00384748.
Abstract:
BACKGROUND: Many patients with diabetes have poor blood pressure (BP) control. Pharmacological therapy is the cornerstone of effective BP treatment, yet there are high rates both of poor medication adherence and failure to intensify medications. Successful medication management requires an effective partnership between providers who initiate and increase doses of effective medications and patients who adhere to the regimen. METHODS: In this cluster-randomized controlled effectiveness study, primary care teams within sites were randomized to a program led by a clinical pharmacist trained in motivational interviewing-based behavioral counseling approaches and authorized to make BP medication changes, or to usual care. This study involved the collection of data during a 14-month intervention period in three Department of Veterans Affairs facilities and two Kaiser Permanente Northern California facilities. The clinical pharmacist was supported by clinical information systems that enabled proactive identification of, and outreach to, eligible patients identified on the basis of poor BP control and either medication refill gaps or lack of recent medication intensification. The primary outcome is the relative change in systolic blood pressure (SBP) measurements over time. Secondary outcomes are changes in Hemoglobin A1c, low-density lipoprotein cholesterol (LDL), medication adherence determined from pharmacy refill data, and medication intensification rates. DISCUSSION: Integration of the three intervention elements (proactive identification, adherence counseling and medication intensification) is essential to achieve optimal levels of control for high-risk patients. Testing the effectiveness of this intervention at the team level allows us to study the program as it would typically be implemented within a clinic setting, including how it integrates with other elements of care. TRIAL REGISTRATION: The ClinicalTrials.gov registration number is NCT00495794.
Abstract:
BACKGROUND: Despite the impact of hypertension and widely accepted target values for blood pressure (BP), interventions to improve BP control have had limited success. OBJECTIVES: We describe the design of a 'translational' study that examines the implementation, impact, sustainability, and cost of an evidence-based nurse-delivered tailored behavioral self-management intervention to improve BP control as it moves from a research context to healthcare delivery. The study addresses four specific aims: assess the implementation of an evidence-based behavioral self-management intervention to improve BP levels; evaluate the clinical impact of the intervention as it is implemented; assess organizational factors associated with the sustainability of the intervention; and assess the cost of implementing and sustaining the intervention. METHODS: The project involves three geographically diverse VA intervention facilities and nine control sites. We first conduct an evaluation of barriers and facilitators for implementing the intervention at intervention sites. We examine the impact of the intervention by comparing 12-month pre/post changes in BP control between patients in intervention sites versus patients in the matched control sites. Next, we examine the sustainability of the intervention and organizational factors facilitating or hindering the sustained implementation. Finally, we examine the costs of intervention implementation. Key outcomes are acceptability and costs of the program, as well as changes in BP. Outcomes will be assessed using mixed methods (e.g., qualitative analyses such as pattern matching; quantitative methods such as linear mixed models). DISCUSSION: The study results will provide information about the challenges and costs to implement and sustain the intervention, and what clinical impact can be expected.
Abstract:
BACKGROUND: Implementing new practices, such as health information technology (HIT), is often difficult due to the disruption of the highly coordinated, interdependent processes (e.g., information exchange, communication, relationships) of providing care in hospitals. Thus, HIT implementation may occur slowly as staff members observe and make sense of unexpected disruptions in care. As a critical organizational function, sensemaking, defined as the social process of searching for answers and meaning which drive action, leads to unified understanding, learning, and effective problem solving, strategies that studies have linked to successful change. Project teamwork is a change strategy increasingly used by hospitals that facilitates sensemaking by providing a formal mechanism for team members to share ideas, construct the meaning of events, and take next actions. METHODS: In this longitudinal case study, we aim to examine project teams' sensemaking and action as the team prepares to implement new information technology in a tertiary care hospital. Based on management and healthcare literature on HIT implementation and project teamwork, we chose sensemaking as an alternative to traditional models for understanding organizational change and teamwork. Our methods choices are derived from this conceptual framework. Data on project team interactions will be prospectively collected through direct observation and organizational document review. Through qualitative methods, we will identify sensemaking patterns and explore variation in sensemaking across teams. Participant demographics will be used to explore variation in sensemaking patterns. DISCUSSION: Outcomes of this research will be new knowledge about sensemaking patterns of project teams, such as: the antecedents and consequences of the ongoing, evolutionary, social process of implementing HIT; the internal and external factors that influence the project team, including team composition, team member interaction, and interaction between the project team and the larger organization; the ways in which internal and external factors influence project team processes; and the ways in which project team processes facilitate team task accomplishment. These findings will lead to new methods of implementing HIT in hospitals.