925 results for Cube attack
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, aim to extract secret-key information from the leakage of the device on which the cipher is implemented, be it a smart card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been demonstrated in practice on a range of devices, revealing partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume that a potential attacker has access to, or can control, a device identical to the one under attack, which allows the attacker to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used after profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret-key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters affects the efficiency of secret-key recovery in practice, as well as the underlying assumption of profiling attacks: that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks examined from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
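The abstract gives no implementation detail; as a hedged illustration of the machine-learning approach it mentions, the sketch below trains a support vector machine to classify synthetic power traces by a key-dependent binary label. The trace dimensions, points of interest and SVM parameters are illustrative assumptions, not the thesis's actual setup.

# Illustrative sketch only: profiling-style binary classification of power
# traces with a support vector machine. Synthetic traces stand in for real
# measurements; all sizes and parameters below are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 500           # profiling traces x points per trace

# Toy leakage model: the label (e.g. one bit of an S-box output) shifts the
# mean of a few "points of interest" in an otherwise noisy trace.
labels = rng.integers(0, 2, n_traces)
traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
poi = [120, 121, 122]                     # hypothetical points of interest
traces[:, poi] += labels[:, None] * 0.8   # key-dependent leakage component

X_train, X_test, y_train, y_test = train_test_split(
    traces, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # binary classifier
clf.fit(X_train, y_train)
print("held-out classification accuracy:", clf.score(X_test, y_test))

A multi-class variant (e.g. one class per key-byte hypothesis) would use the same interface with integer labels over a larger range.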
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location in the network. The magnitude of the vector field at every location represents the density of the data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of this vector-field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on setting the permittivity coefficient to a higher value in parts of the network where nodes have high residual energy, and to a low value where nodes have little energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector-field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector-field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
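As a compact, hedged sketch of the formulation described above (the symbols are assumptions, not the authors' notation): D denotes the information-flow vector field, ρ the net source density (sensors positive, destinations negative), and ε(x) the permittivity-like coefficient that the energy-aware scheme ties to residual energy.

% Sketch of the stated electrostatics analogy (notation assumed).
\begin{align}
  \min_{\mathbf{D}}\ & \int_{A} \frac{|\mathbf{D}(x)|^{2}}{2\,\varepsilon(x)}\,dA
    \qquad \text{subject to} \qquad \nabla\cdot\mathbf{D} = \rho
    && \text{(total communication cost; flow conservation)} \\
  \text{at the optimum:}\quad & \mathbf{D} = -\varepsilon\,\nabla\phi
    \qquad\Longrightarrow\qquad \nabla\cdot\bigl(\varepsilon\,\nabla\phi\bigr) = -\rho
    && \text{(a potential field, as in Poisson's equation)}
\end{align}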
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, and we use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find suitable signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
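The abstract describes orthogonal, CDMA-style drop signatures assigned to different routers; the sketch below illustrates only the orthogonality argument with Walsh-Hadamard codes and a correlation detector. It is not the CAPM algorithm itself; signature length, gains and noise level are assumptions.

# Illustrative sketch of orthogonal drop signatures and correlation detection.
# Not CAPM itself; all numbers below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)      # 8x8 Walsh-Hadamard matrix; rows are orthogonal
sig_a, sig_b = H8[1], H8[2]            # signatures for two simultaneously testing routers

# A responsive aggregate's rate change tracks the sum of both routers'
# perturbations, plus measurement noise.
gain_a, gain_b = 0.9, 0.4              # per-router responsiveness (unknown to the testers)
observed = gain_a * sig_a + gain_b * sig_b + rng.normal(0, 0.1, 8)

est_a = observed @ sig_a / 8           # correlate with own signature; orthogonality
est_b = observed @ sig_b / 8           # removes the bias from the other router's test
print(f"estimated responsiveness: A ~ {est_a:.2f}, B ~ {est_b:.2f}")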
Abstract:
We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints to estimate the data cube from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by incorporating a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten-bead-type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
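A generic form of the reconstruction problem described above, with symbols assumed rather than taken from the paper (f is the 3D spectral data cube, H the CASSI projection operator, g the 2D snapshot, and τ the total-variation budget), is:

% Generic TV-constrained least-squares reconstruction (symbols assumed).
\begin{equation}
  \hat{f} \;=\; \arg\min_{f}\ \tfrac{1}{2}\,\lVert g - Hf \rVert_{2}^{2}
  \quad \text{subject to} \quad \mathrm{TV}(f) \le \tau .
\end{equation}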
Abstract:
In the event of a terrorist-mediated attack in the United States using radiological or improvised nuclear weapons, it is expected that hundreds of thousands of people could be exposed to life-threatening levels of ionizing radiation. We have recently shown that genome-wide expression analysis of the peripheral blood (PB) can generate gene expression profiles that can predict radiation exposure and distinguish the dose level of exposure following total body irradiation (TBI). However, in the event of a radiation mass-casualty scenario, many victims will have heterogeneous exposure due to partial shielding, and it is unknown whether PB gene expression profiles would be useful in predicting the status of partially irradiated individuals. Here, we identified gene expression profiles in the PB that were characteristic of anterior hemibody, posterior hemibody and single-limb irradiation at 0.5 Gy, 2 Gy and 10 Gy in C57Bl6 mice. These PB signatures predicted the radiation status of partially irradiated mice with a high level of accuracy (range 79-100%) compared to non-irradiated mice. Interestingly, PB signatures of partial body irradiation were poorly predictive of radiation status by site of injury (range 16-43%), suggesting that the PB molecular response to partial body irradiation is anatomic-site specific. Importantly, PB gene signatures generated from TBI-treated mice failed completely to predict the radiation status of partially irradiated animals or non-irradiated controls. These data demonstrate that partial body irradiation, even to a single limb, generates a characteristic PB signature of radiation injury, and thus multiple signatures, both partial body and total body, may be needed to accurately assess the status of an individual exposed to radiation.
Abstract:
Our long-term goal is the detection and characterization of vulnerable plaque in the coronary arteries of the heart using intravascular ultrasound (IVUS) catheters. Vulnerable plaque, characterized by a thin fibrous cap and a soft, lipid-rich necrotic core, is a precursor to heart attack and stroke. Early detection of such plaques may potentially alter the course of treatment of the patient to prevent ischemic events. We have previously described the characterization of carotid plaques using external linear arrays operating at 9 MHz. In addition, we previously modified circular-array IVUS catheters by short-circuiting several neighboring elements to produce fixed beamwidths for intravascular hyperthermia applications. In this paper, we modified Volcano Visions 8.2 French, 9 MHz catheters and Volcano Platinum 3.5 French, 20 MHz catheters by short-circuiting portions of the array for acoustic radiation force impulse (ARFI) imaging applications. The catheters had effective transmit aperture sizes of 2 mm and 1.5 mm, respectively. The catheters were connected to a Verasonics scanner and driven with pushing pulses of 180 V p-p to acquire ARFI data from a soft gel phantom with a Young's modulus of 2.9 kPa. The dynamic response of the tissue-mimicking material demonstrates typical ARFI motion of 1 to 2 microns as the gel phantom displaces away and recovers back to its normal position. The hardware modifications applied to our IVUS catheters mimic potential beamforming modifications that could be implemented on IVUS scanners. Our results demonstrate that the generation of radiation force from IVUS catheters and the development of intravascular ARFI imaging may be feasible.
Abstract:
In this work we present an activity for high-school students that involves various mathematical concepts of plane and spatial geometry. The final objective of the proposed tasks is to construct a particular polyhedron, the cube, using a form of origami known as modular origami.
Abstract:
An M/M/1 queue is subject to mass exodus at rate β and mass immigration at rate {α_r; r ≥ 1} when idle. A general resolvent approach is used to derive occupation probabilities and high-order moments. This powerful technique is not only considerably easier to apply than a standard direct attack on the forward p.g.f. equation, but it also implicitly yields necessary and sufficient conditions for recurrence, positive recurrence and transience.
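For context, one plausible form of the forward equations for such a model, assuming an arrival rate λ and service rate μ in addition to the quantities named in the abstract, and assuming ordinary arrivals are suspended while the queue is idle, is:

% Plausible forward equations (lambda, mu and the idle-state behaviour are assumptions).
\begin{align}
  p_{0}'(t) &= \mu\,p_{1}(t) + \beta\bigl(1 - p_{0}(t)\bigr)
               - \Bigl(\textstyle\sum_{r\ge 1}\alpha_{r}\Bigr)\,p_{0}(t), \\
  p_{1}'(t) &= \alpha_{1}\,p_{0}(t) + \mu\,p_{2}(t) - (\lambda + \mu + \beta)\,p_{1}(t), \\
  p_{n}'(t) &= \lambda\,p_{n-1}(t) + \alpha_{n}\,p_{0}(t) + \mu\,p_{n+1}(t)
               - (\lambda + \mu + \beta)\,p_{n}(t), \qquad n \ge 2,
\end{align}

with the resolvent approach working with the Laplace transforms \( r_{n}(s) = \int_{0}^{\infty} e^{-st} p_{n}(t)\,dt \) rather than attacking these equations directly.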
Abstract:
Fractal image compression is a relatively recent image compression method. Its extension to a sequence of motion images is important in video compression applications. There are two basic fractal compression methods, namely the cube-based and the frame-based methods, commonly used in industry. However, both methods have advantages and disadvantages. This paper proposes a hybrid algorithm that combines the advantages of the two methods in order to produce a good compression algorithm for the video industry. Experimental results show that the hybrid algorithm improves the compression ratio and the quality of decompressed images.
Abstract:
Fractal image compression is a relatively recent image compression method which is simple to use and often leads to a high compression ratio. These advantages make it suitable for situations involving a single encoding and many decodings, as required in video on demand, archive compression, etc. There are two fundamental fractal compression methods, namely the cube-based and the frame-based methods, which are commonly studied. However, both methods have advantages and disadvantages. This paper gives an extension of the fundamental compression methods based on the concept of adaptive partition. Experimental results show that algorithms based on adaptive partition can obtain a much higher compression ratio than algorithms based on fixed partition while maintaining the quality of decompressed images.
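The abstract does not detail how the adaptive partition is formed; the sketch below illustrates one common generic approach, recursively splitting blocks whose pixel variance exceeds a threshold (quadtree style). It is an illustration of the general idea, not the paper's algorithm; the threshold and minimum block size are arbitrary.

# Generic adaptive-partition sketch (quadtree split on block variance).
# Illustrative only; not the paper's algorithm.
import numpy as np

def adaptive_partition(img, x, y, size, threshold=50.0, min_size=4):
    """Return a list of (x, y, size) blocks covering img[y:y+size, x:x+size]."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= threshold:
        return [(x, y, size)]                 # homogeneous enough: keep as one block
    half = size // 2                          # otherwise split into four sub-blocks
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += adaptive_partition(img, x + dx, y + dy, half, threshold, min_size)
    return blocks

# Toy usage on a synthetic 64x64 image with a detailed region in one corner.
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[:16, :16] = rng.normal(128, 40, (16, 16))  # high-variance corner gets finer blocks
print(len(adaptive_partition(img, 0, 0, 64)), "blocks in the adaptive partition")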
Abstract:
The acorn barnacle Chthamalus montagui can show strong variation in shell morphology, ranging from a flat conic form to a highly bent form caused by substantial overgrowth of the rostrum plate. Shell shape distribution was investigated between January and May 2004, from geographical to microhabitat spatial scales, along the western coast of Britain. Populations studied in the north (Scotland and Isle of Man) showed a higher degree of shell variation than those in the south (Wales and south-west England). In the north, C. montagui living at lower tidal levels and in proximity to the predatory dogwhelk, Nucella lapillus, were more bent in profile. Laboratory experiments were conducted to examine behavioural responses, and the vulnerability of bent and conic barnacles to predation by N. lapillus. Dogwhelks did not attack one morphotype more than the other, but only 15% of attacks on bent forms were successful compared to 75% on conic forms. Dogwhelk effluent reduced the time C. montagui spent feeding (by 11%), but there was no significant difference between conic and bent forms. Examination of barnacle morphology indicated a trade-off between investment in shell structure and in feeding appendages associated with being bent, but none with egg or somatic tissue mass. These results are consistent with C. montagui showing an induced defence comparable to that found in its congeners Chthamalus anisopoma and Chthamalus fissus on the Pacific coast of North America, but further work to demonstrate inducibility is required.
Abstract:
It is almost a tradition that celluloid (or digital) villains are given characteristics that remind us of the real political enemies of the country producing the film, or even of enemies within that country, according to the particular ideology that underpins the film. The case of Christopher Nolan's The Dark Knight trilogy, analyzed here, is representative of this trend for two reasons. First, because it is marked by the political radicalization pursued by the US government after the attack of September 11, 2001. Second, because it offers a profuse gallery of villains who fall outside the circle of friends defined by the new doctrine "either with us or against us" inaugurated by George Bush for the twenty-first century. This gallery ranges from the very terrorists who justify the War on Terror (Ra's al Ghul, the Joker) to the "radical left" (Bane, Talia al Ghul), and includes liberal politicians (Harvey Dent) and corrupt figures who exploit the softness of the law to commit crimes with impunity (Dr. Crane, the Scarecrow).
Abstract:
Aminolevulinic acid (ALA) stability within topical formulations intended for photodynamic therapy (PDT) is poor due to dimerisation to pyrazine-2,5-dipropionic acid (PY). Most strategies to improve stability use low-pH vehicles, which can cause cutaneous irritancy. To overcome this problem, a novel approach is investigated that uses a non-aqueous vehicle to retard proton-induced charge separation across the 4-carbonyl group on ALA and lessen the nucleophilic attack that leads to condensation dimerisation. Bioadhesive anhydrous vehicles based on methylvinylether-maleic anhydride copolymer patches and poly(ethylene glycol)- or glycerol-thickened poly(acrylic acid) gels were formulated. ALA stability fell below pharmaceutically acceptable levels after 6 months, with bioadhesive patches stored at 5°C demonstrating the best stability by maintaining 86.2% of their original loading. Glycerol-based gels maintained 40.2% under similar conditions. However, ALA loss did not correspond to the expected increases in PY, indicating the presence of another degradative process that prevented dimerisation. Nuclear magnetic resonance (NMR) analysis was inconclusive with respect to the mechanism observed in the patch system, but showed clearly that an esterification reaction involving ALA and both glycerol and poly(ethylene glycol) was occurring. This was especially marked in the glycerol gels, where only 2.21% of the total expected PY was detected after 204 days at 5°C. Non-specific esterase hydrolysis demonstrated that ALA was recoverable from the gel systems, further supporting esterified binding within the gel matrices. It is conceivable that skin esterases could duplicate this finding upon topical application of the gel and convert these derivatives back to ALA in situ, provided skin penetration is not affected adversely.
Abstract:
Objective: To determine the epidemiology of out-of-hospital sudden cardiac death (OHSCD) in Belfast from 1 August 2003 to 31 July 2004.
Design: Prospective examination of out-of-hospital cardiac arrests using the Utstein style and necropsy reports. World Health Organization criteria were applied to determine the number of sudden cardiac deaths.
Results: Of 300 OHSCDs, 197 (66%) occurred in men; the mean (SD) age was 68 (14) years, and 234 (78%) occurred at home. The emergency medical services (EMS) attended 279 (93%). The rhythm on EMS arrival was ventricular fibrillation (VF) in 75 (27%). The call to response interval (CRI) was a mean (SD) of 8 (3) minutes. Among patients attended by the EMS, 9.7% were resuscitated and 7.2% survived to leave hospital alive. The CRI for survivors was a mean (SD) of 5 (2) minutes and for non-survivors 8 (3) minutes (p < 0.001). Ninety-one (30%) OHSCDs were witnessed; of these 91 patients, 48 (53%) had VF on EMS arrival. The survival rate for witnessed VF arrests was 20 of 48 (41.7%): all 20 survivors had VF as the presenting rhythm and a CRI ≤ 7 minutes. The European age-standardised incidence of OHSCD was 122/100 000 (95% confidence interval 111 to 133) for men and 41/100 000 (95% confidence interval 36 to 46) for women.
Conclusion: Despite a 37% reduction in heart attack mortality in Ireland over the past 20 years, the incidence of OHSCD in Belfast has not fallen. In this study, 78% of OHSCDs occurred at home.
Abstract:
This paper presents a study of the residual strength of Pinus sylvestris timber that has been subject to attack by the furniture beetle (Anobium punctatum). It is relatively easy to stop the infestation, but difficult to assess the structural soundness of the remaining timber. Removal and replacement of affected structural elements is usually difficult and expensive, particularly in buildings of historic interest, and current on-site assessment procedures are limited. The main objective of the study was to develop an on-site test of timber quality: a test which can be carried out on the surface and also at varying depths into the timber. It is based on a probe pull-out technique using a portable load-measuring device. Pull-out force values have been correlated with both strength and energy absorbed, as measured by compression testing of laboratory samples of both sound and infested timber. These two relationships are significant and could be used to assess whether remedial work is needed. In addition, work on the use of artificial borings to simulate the natural worming of timber is presented and the findings discussed.
Abstract:
A full-scale, seven-story, reinforced concrete building frame, encompassing a range of different concrete mixtures and advanced construction techniques, was constructed in place at the Building Research Establishment's Cardington Laboratory. This provided an opportunity to assess, on a systematic basis during construction of the building, an in-place nondestructive test method, namely the pullout test, and more specifically its Danish version, known as the Lok test. It was used in conjunction with both standard and temperature-matched cube specimens to assess its practicality and accuracy under site conditions. Strength correlations were determined using linear and power-function regression analysis. Strength predictions from these were found to be in very good agreement with the compressive strengths of temperature-matched cube specimens. When a general correlation is used, however, estimates of compressive strength are likely to have 95% confidence limits of around ±20% of the mean value of four results.
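The abstract mentions linear and power-function regression for the strength correlations; the sketch below shows how such fits are typically computed. The data points are synthetic placeholders, not the Cardington results.

# Illustrative fit of linear and power-function correlations between Lok-test
# pullout force and cube compressive strength. Data are hypothetical.
import numpy as np

pullout_kN = np.array([15.0, 20.0, 25.0, 30.0, 35.0, 40.0])   # hypothetical forces
cube_MPa   = np.array([18.0, 26.0, 33.0, 41.0, 47.0, 55.0])   # hypothetical strengths

# Linear model: strength = a * force + b
a, b = np.polyfit(pullout_kN, cube_MPa, 1)

# Power model: strength = c * force**d (fitted as a straight line in log-log space)
d, log_c = np.polyfit(np.log(pullout_kN), np.log(cube_MPa), 1)
c = np.exp(log_c)

print(f"linear: strength ~ {a:.2f}*F + {b:.2f} MPa")
print(f"power : strength ~ {c:.2f}*F^{d:.2f} MPa")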