117 results for "Incremental protocol in treadmill"
Abstract:
This study compared the effects of low-frequency electrical stimulation (LFES; Veinoplus® Sport, Ad Rem Technology, Paris, France), low-frequency electrical stimulation combined with a cooling vest (LFESCR) and active recovery combined with a cooling vest (ACTCR) as recovery strategies on performance (racing time and pacing strategies) and on physiological and perceptual responses between two simulated sprint kayak races in a hot environment (∼32 °C wet-bulb globe temperature). Eight elite male kayakers performed two successive 1000-m kayak time trials (TT1 and TT2), separated by a short-term recovery period including 30 min of the respective recovery intervention protocol, in a randomized crossover design. Racing time, power output and stroke rate were recorded for each time trial. Blood lactate concentration, pH, and core, skin and body temperatures were measured before and after both TT1 and TT2 and at mid- and post-recovery intervention. Perceptual ratings of thermal sensation were also collected. LFESCR was associated with a very likely beneficial effect on performance restoration compared with the ACTCR (99/0/1%) and LFES (98/0/2%) conditions. LFESCR induced a significant decrease in body temperature and thermal sensation at post-recovery intervention, which was not observed in the ACTCR condition. In conclusion, combining LFES with a cooling vest (LFESCR) improves performance restoration between two 1000-m kayak time trials performed by elite athletes in the heat.
Abstract:
We present an approach to automating computationally sound proofs of key exchange protocols based on public-key encryption. We show that satisfying the property called occultness in the Dolev–Yao model guarantees the security of a related key exchange protocol in a simple computational model. Security in this simpler model has been shown to imply security in a Bellare–Rogaway-like model. Furthermore, occultness in the Dolev–Yao model can be checked automatically by a mechanisable procedure, so automated proofs for key exchange protocols in the computational model can be achieved. We illustrate the method using the well-known Lowe–Needham–Schroeder protocol.
Abstract:
The theory of nonlinear dynamic systems provides new methods for handling complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The electrocardiogram (ECG) is an important biosignal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test.
An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects covering five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%, and the system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onsets would help patients and observers take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by higher order spectra (HOS) has been reported to be a promising approach for differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers were able to achieve 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study. This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns that are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners.
It includes original contributions in extracting features from HRV and EEG signals using HOS and entropy, in analyzing the statistical properties of such features on real data and in automated classification using these features with GMM and SVM classifiers.
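The HOS analysis described in this abstract centres on the bispectrum, a third-order spectral statistic. As a minimal pure-Python sketch of a direct bispectrum estimate (segment-averaged triple products; windowing, bicoherence normalization and the thesis's actual feature set are omitted, and the scalar feature below is only an illustrative stand-in):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (O(N^2), fine for a sketch)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def bispectrum(segments):
    # Direct estimate: average X(f1) * X(f2) * conj(X(f1 + f2))
    # over segments; quadratic phase coupling appears as peaks.
    N = len(segments[0])
    half = N // 2
    B = [[0j] * half for _ in range(half)]
    for seg in segments:
        X = dft(seg)
        for f1 in range(half):
            for f2 in range(half):
                B[f1][f2] += X[f1] * X[f2] * X[f1 + f2].conjugate()
    return [[b / len(segments) for b in row] for row in B]

def mean_bispectral_magnitude(B):
    # One simple scalar feature of the kind fed to ANOVA/SVM
    vals = [abs(b) for row in B for b in row]
    return sum(vals) / len(vals)
```

Features such as this would then be computed per signal class and compared statistically, as the thesis does with ANOVA.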
Abstract:
Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles for which this linear scaling fails, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than is required to solve a single puzzle. A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner. We describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
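As a concrete illustration of the kind of object being analysed, here is a minimal hash-preimage client puzzle in the Hashcash style (a generic sketch, not the specific constructions from the cited papers): the server issues a fresh nonce, the client brute-forces a counter until the hash falls below a difficulty threshold, and verification costs a single hash.

```python
import hashlib
import os

def make_puzzle(difficulty_bits: int):
    # Server side: a fresh random nonce defines the puzzle instance.
    return os.urandom(16), difficulty_bits

def solve(nonce: bytes, difficulty_bits: int) -> int:
    # Client side: brute-force a counter until the hash has the
    # required number of leading zero bits (~2^d expected attempts).
    target = 1 << (256 - difficulty_bits)
    counter = 0
    while True:
        h = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return counter
        counter += 1

def verify(nonce: bytes, difficulty_bits: int, solution: int) -> bool:
    # Server side: one hash to check the claimed solution.
    h = hashlib.sha256(nonce + solution.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))
```

Because each nonce is fresh and independent, solving n such puzzles costs roughly n times the work of solving one, which is exactly the scaling property the stronger difficulty definitions formalize.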
Abstract:
We present an automated verification method for security of Diffie–Hellman–based key exchange protocols. The method includes a Hoare-style logic and syntactic checking. The method is applied to protocols in a simplified version of the Bellare–Rogaway–Pointcheval model (2000). The security of the protocol in the complete model can be established automatically by a modular proof technique of Kudla and Paterson (2005).
Abstract:
In this paper we analyze the performance degradation of the slotted amplify-and-forward protocol in wireless environments with high node density, where the number of relays grows asymptotically large. Channel gains between source-destination pairs in such networks can no longer be assumed independent. We analyze the degradation of performance in wireless environments where channel gains are exponentially correlated by looking at the capacity per channel use. Theoretical results for the eigenvalue distribution and the capacity are derived and compared with simulation results. Both analytical and simulated results show that the capacity given by the asymptotic mutual information decreases with network density.
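The qualitative effect, capacity per channel use falling as channel gains become correlated, can be reproduced with a toy Monte Carlo. This is only an illustrative simulation, not the paper's asymptotic eigenvalue analysis; the AR(1) model for exponentially decaying correlation and the SNR value are assumptions made here.

```python
import math
import random

def avg_capacity(n_relays, rho, snr=10.0, trials=2000):
    # Average log2(1 + SNR * g) over Rayleigh branch gains whose
    # underlying Gaussians follow an AR(1) process with coefficient
    # rho, giving correlation rho^|i-j| between branches i and j
    # (rho = 0 recovers independent gains).
    total = 0.0
    s = math.sqrt(1.0 - rho * rho)
    for _ in range(trials):
        re, im = random.gauss(0, 1), random.gauss(0, 1)
        g = 0.0
        for _ in range(n_relays):
            g += re * re + im * im
            re = rho * re + s * random.gauss(0, 1)
            im = rho * im + s * random.gauss(0, 1)
        total += math.log2(1.0 + snr * g / n_relays)
    return total / trials
```

With strongly correlated branches the combined gain has higher variance, and by Jensen's inequality the average of log2(1 + SNR·g) drops, mirroring the capacity loss the paper derives analytically.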
Abstract:
The possibility of a surface inner sphere electron transfer mechanism leading to the coating of gold, via the surface reduction of gold(I) chloride on metal and semi-metal oxide nanoparticles, was investigated. Silica and zinc oxide nanoparticles are known to have very different surface chemistry, potentially leading to a new class of gold-coated nanoparticles. Monodisperse silica nanoparticles were synthesised by the well-known Stöber protocol in conjunction with sonication, with the nanoparticle size regulated solely by varying the amount of ammonia solution added. The presence of surface hydroxyl groups was investigated by liquid proton NMR, and the resultant nanoparticle size was measured directly by TEM. The synthesised silica nanoparticles were dispersed in acetonitrile (MeCN) and added to a bis-acetonitrile gold(I) co-ordination complex, [Au(MeCN)2]+, in MeCN. The silica hydroxyl groups were deprotonated in the presence of MeCN, generating a formal negative charge on the siloxy groups. This allowed the [Au(MeCN)2]+ complex to undergo ligand exchange with the silica nanoparticles, forming a surface co-ordination complex that was reduced to gold(0) via a surface inner sphere electron transfer mechanism. The residual [Au(MeCN)2]+ complex was allowed to react with water, disproportionating into gold(0) and gold(III), with the gold(0) adding to the reduced gold already bound on the silica surface. The metallic gold seed surface so formed was found to be suitable for the conventional reduction of gold(III) to gold(0) by ascorbic acid, generating a thin and uniform gold coating on the silica nanoparticles. This process was then modified to produce uniformly gold-coated composite zinc oxide nanoparticles (Au@ZnO NPs) using surface co-ordination chemistry. AuCl dissolved in acetonitrile (MeCN) supplied chloride ions, which were adsorbed onto the ZnO NPs.
The co-ordinated gold(I) was reduced on the ZnO surface to gold(0) by the inner sphere electron transfer mechanism. Addition of water disproportionated the remaining gold(I) to gold(0) and gold(III); the gold(0) bonded to gold(0) on the NP surface, while the gold(III) was reduced to gold(0) by ascorbic acid (ASC), completing the gold coating process. This gold coating process for Au@ZnO NPs was then modified to incorporate iodide instead of chloride. ZnO NPs were synthesised using sodium oxide, zinc iodide and potassium iodide in refluxing basic ethanol, with iodide controlling the presence of chemisorbed oxygen. These ZnO NPs were treated by the addition of gold(I) chloride dissolved in acetonitrile, leaving chloride anions co-ordinated on the ZnO NP surface. This allowed the acetonitrile ligands in the added [Au(MeCN)2]+ complex to exchange with the chloride adsorbed from the dissolved AuCl on the ZnO NP surface. Gold(I) was then reduced by the surface inner sphere electron transfer mechanism. The presence of the reduced gold on the ZnO NPs allowed adsorption of iodide, generating a uniform deposition of gold onto the ZnO NP surface without the use of additional reducing agents or heat.
Abstract:
Denial-of-service (DoS) attacks are a growing concern for networked services like the Internet. In recent years, major Internet e-commerce and government sites have been disabled by various DoS attacks. A common form of DoS attack is the resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool for thwarting DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing client puzzles. We revisit a few key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model for analysing client puzzles. We demonstrate the utility of our new security definitions by constructing two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. In other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework for analysing DoS-resilience properties. We also prove that the original security claim of JFK does not hold.
We then apply an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which significantly reduces the computation cost of the server, and we employ the technique in one of the most important network protocols, TLS, analysing the security of the resultant protocol. We also observe that the cost-shifting technique can be incorporated into any Diffie–Hellman-based key exchange protocol to reduce the Diffie–Hellman exponentiation cost of a party by one multiplication and one addition.
Abstract:
Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
Abstract:
Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
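The two sharing forms involved in the conversion subprotocol can be sketched directly. The conversion procedure itself, and the role of the trusted-setup auxiliary information, are the paper's contribution and are not reproduced here; the prime field below is an illustrative choice.

```python
import math
import random

P = 2**61 - 1  # a Mersenne prime, standing in for the field F

def additive_share(secret, n):
    # n random shares whose sum is the secret mod P
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def additive_open(shares):
    return sum(shares) % P

def multiplicative_share(secret, n):
    # n random non-zero shares whose product is the secret mod P;
    # the secret must be non-zero, matching the protocol's
    # restriction of inputs to non-zero field elements.
    assert secret % P != 0
    shares = [random.randrange(1, P) for _ in range(n - 1)]
    inv = pow(math.prod(shares), P - 2, P)  # Fermat inverse
    shares.append(secret * inv % P)
    return shares

def multiplicative_open(shares):
    out = 1
    for s in shares:
        out = out * s % P
    return out
```

Multiplicative sharings make products of shared values local to compute, which is why converting an additive sharing into this form helps when evaluating a polynomial monomial by monomial.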
Abstract:
The sum of k mins protocol was proposed by Hopper and Blum as a protocol for secure human identification. The goal of the protocol is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem. The difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-Complete and W[1]-Hard. This latter notion relates to fixed parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
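For illustration, the prover's response computation can be sketched as follows. The details assumed here (decimal challenge digits and a secret consisting of k pairs of positions) follow common presentations of Hopper and Blum's scheme and should be treated as assumptions rather than the paper's exact formulation.

```python
def hb_response(challenge_digits, secret_pairs, modulus=10):
    # The shared secret is k pairs of positions into the challenge.
    # The response is the sum, over the pairs, of the smaller of the
    # two challenge digits at each pair's positions, reduced mod 10.
    return sum(min(challenge_digits[i], challenge_digits[j])
               for i, j in secret_pairs) % modulus
```

An eavesdropper trying to recover the secret pairs from observed challenge/response transcripts must effectively solve the sum of k mins problem, whose NP-completeness and W[1]-hardness are what this paper establishes.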
Abstract:
We study the multicast stream authentication problem when an opponent can drop, reorder and inject data packets into the communication channel. In such a model, packet overhead and computational efficiency are two parameters to be taken into account when designing a multicast stream authentication protocol. In this paper, we propose to use two families of erasure codes to deal with this problem, namely rateless codes and maximum distance separable codes. Our constructions have the following advantages. First, the packet overhead is small. Second, the number of signature verifications to be performed at the receiver is O(1). Third, every receiver is able to recover all the original data packets emitted by the sender despite the losses and injections that occur during transmission.
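The maximum distance separable property the construction relies on (any k of n code symbols suffice to recover the k data symbols) can be illustrated with a toy systematic Reed–Solomon-style code over a small prime field. Real deployments use GF(2^8) arithmetic and far more efficient decoders; this is only a sketch.

```python
P = 257  # prime field, large enough to hold byte-valued symbols

def _interpolate_at(points, x0):
    # Lagrange interpolation over GF(P): evaluate at x0 the unique
    # polynomial passing through the given (x, y) points.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def mds_encode(data, n):
    # Systematic encoding: treat the k data symbols as evaluations
    # at x = 0..k-1 and extend the polynomial to x = k..n-1.
    k = len(data)
    pts = list(enumerate(data))
    return [(x, data[x] if x < k else _interpolate_at(pts, x))
            for x in range(n)]

def mds_decode(shares, k):
    # Any k received (x, y) shares recover all k data symbols.
    return [_interpolate_at(shares[:k], x) for x in range(k)]
```

A receiver that collects any k of the n transmitted symbols, in any order, can reconstruct the sender's data, which is what lets the scheme tolerate both packet loss and reordering.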
Abstract:
Aortic root replacement is a complex procedure, though subsequent modifications of the original Bentall procedure have made the surgery more reproducible. The aim of this study was to examine the outcomes of a modified Bentall procedure using the Medtronic Open Pivot™ valved conduit. Whilst short-term data on the conduit and long-term data on the valve itself are available, little is known of the long-term results with the valved conduit. Patients undergoing aortic root replacement between February 1999 and February 2010 using the Medtronic Open Pivot valved conduit were identified from the prospectively collected Cardiothoracic Register at The Prince Charles Hospital, Brisbane, Australia. All patients were followed up echocardiographically and clinically. The primary end-point was death, and a Cox proportional hazards model was used to identify factors associated with survival. Secondary end-points were valve-related morbidity (as defined by STS guidelines) and postoperative morbidity. Predictors of morbidity were identified using logistic regression. A total of 246 patients (mean age 50 years) was included in the study. Overall mortality was 12%, with an actuarial 10-year survival of 79% and a 10-year estimate of valve-related death of 0.04 (95% CI: 0.004, 0.07). Preoperative myocardial infarction (p = 0.004, HR 4.74), urgency of operation (p = 0.038, HR 2.8) and 10% incremental decreases in ejection fraction (p = 0.046, HR 0.69) were predictive of mortality. Survival was also affected by valve gradients, with a unit increase in peak gradient reducing mortality (p = 0.021, HR 0.93). Valve-related morbidity occurred in 11 patients. Urgent surgery (p < 0.001, OR 4.12), aortic dissection (p = 0.015, OR 3.35), calcific aortic stenosis (p = 0.016, OR 2.35) and Marfan syndrome (p = 0.009, OR 3.75) were predictive of postoperative morbidity. The reoperation rate was 1.2%.
The Medtronic Open Pivot valved conduit is a safe and durable option for aortic root replacement, and is associated with low morbidity and 10-year survival of 79%. However, further studies are required to determine the effect of valve gradient on survival.
Abstract:
We show the first deterministic construction of an unconditionally secure multiparty computation (MPC) protocol in the passive adversarial model over black-box non-Abelian groups which is both optimal (secure against an adversary who possesses any t
Abstract:
Osteoblast lineage cells are direct effectors of osteogenesis and are, therefore, commonly used to evaluate the in vitro osteogenic capacity of bone substitute materials. This method has served its purposes when testing novel bone biomaterials; however, inconsistent results between in vitro and in vivo studies suggest that the mechanisms governing a material's capacity to mediate osteogenesis are not well understood. The emerging field of osteoimmunology and immunomodulation has informed a paradigm shift in our view of bone biomaterials, from an inert to an osteoimmunomodulatory material, highlighting the importance of immune cells in materials-mediated osteogenesis. Neglecting the immune response during this process is a major shortcoming of the current evaluation protocol. In this study we evaluated a potential angiogenic bone substitute material, cobalt-incorporated β-tricalcium phosphate (CCP), comparing the traditional “one cell type” approach with a “multiple cell types” approach to assessing osteogenesis, the latter including the use of immune cells. We found that CCP extract by itself was sufficient to enhance osteogenic differentiation of bone marrow stem cells (BMSCs), whereas this effect was cancelled out when macrophages were involved. In response to CCP, the macrophage phenotype switched to the M1 extreme, releasing pro-inflammatory cytokines and bone catabolic factors. When the CCP materials were implanted into a rat femoral condyle defect model, there was a significant increase in inflammatory markers and bone destruction, coupled with fibrous encapsulation rather than new bone formation. These findings demonstrate that the inclusion of immune cells (macrophages) in the in vitro assessment matched the in vivo tissue response, and that this method provides a more accurate indication of the essential role of immune cells when assessing materials-stimulated osteogenesis in vitro.