963 results for Hardware


Relevance:

10.00%

Publisher:

Abstract:

The analysis of energy detector systems is a well studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in a closed form and so their evaluation requires the use of numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as in the optimisation of device parameters on low cost hardware. The problem becomes acute in situations where the signal to noise ratio is small and reliable detection is to be ensured or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, further insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation based approach is taken towards the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement the regions of applicability of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal to noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi square, noncentral chi square and gamma distributions towards the normal distribution is derived. The theorem describes a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem, providing an upper bound on the maximum error resulting from the use of the central limit theorem to approximate the noncentral chi square distribution where the noncentrality parameter is a multiple of the number of degrees of freedom, is also derived.
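To give a rough sense of the kind of approximation the abstract describes, the sketch below compares the exact noncentral chi-square model of an energy detector's detection probability with a Gaussian (central-limit-theorem) approximation obtained by moment matching. This is a minimal illustration, not the thesis's actual derivation or error bound; the sample count, SNR and false-alarm target are invented example values.

```python
# Hypothetical illustration: exact noncentral chi-square energy-detector model
# versus a CLT (Gaussian) approximation. N, snr and pfa are made-up values.
import numpy as np
from scipy import stats

N = 500          # number of complex received samples (assumed)
snr = 0.05       # per-sample signal-to-noise ratio, about -13 dB (assumed)
pfa = 0.01       # target probability of false alarm (assumed)

df = 2 * N       # degrees of freedom of the test statistic under H0
nc = 2 * N * snr # noncentrality parameter under H1

# Threshold chosen from the exact H0 (central chi-square) distribution.
thresh = stats.chi2.isf(pfa, df)

# Exact detection probability: noncentral chi-square tail.
pd_exact = stats.ncx2.sf(thresh, df, nc)

# CLT approximation: match the first two moments of the noncentral chi-square.
mean_h1 = df + nc
var_h1 = 2 * (df + 2 * nc)
pd_clt = stats.norm.sf(thresh, loc=mean_h1, scale=np.sqrt(var_h1))

print(f"exact Pd = {pd_exact:.4f}, CLT Pd = {pd_clt:.4f}, "
      f"abs. error = {abs(pd_exact - pd_clt):.2e}")
```

For large sample counts the two curves agree closely, which is exactly the regime in which the exact model becomes expensive to evaluate on low-cost hardware.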

Relevance:

10.00%

Publisher:

Abstract:

In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extensions of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementations of recent hash function designs from the SHA-3 competition are discussed, and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for example keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed; a novel aspect of this design is the secure method in which private-key data is handled.
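The core ECC operation that such coprocessors and instruction set extensions accelerate is scalar multiplication. The sketch below shows a plain software double-and-add implementation, purely for orientation; the toy curve parameters are invented, far too small to be secure, and this is not the architecture developed in the thesis.

```python
# Illustrative software sketch of elliptic-curve scalar multiplication
# (double-and-add), the core operation ECC coprocessors accelerate in hardware.
# Toy curve y^2 = x^3 + 2x + 3 over GF(97): for illustration only, not secure.
P_MOD, A, B = 97, 2, 3

def ec_add(p, q):
    """Add two points on the curve; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, point):
    """Left-to-right double-and-add: computes k * point."""
    result = None
    for bit in bin(k)[2:]:
        result = ec_add(result, result)        # double
        if bit == "1":
            result = ec_add(result, point)     # add
    return result

print(scalar_mult(13, (3, 6)))   # (3, 6) lies on the toy curve
```

In hardware, the field arithmetic inside `ec_add` (modular multiplication and inversion) is where most of the speed-up comes from.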

Relevance:

10.00%

Publisher:

Abstract:

Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, a device identical to the one under attack, which allows him to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret-key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters practically affects the efficiency of secret-key recovery, as well as the underlying assumption of profiling attacks, namely that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
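As a concrete point of reference for the non-profiling attacks the thesis contrasts with template attacks, the sketch below simulates a basic correlation power analysis (CPA). For brevity the leakage model is the Hamming weight of (plaintext XOR key byte) plus Gaussian noise rather than a real cipher's S-box output, and the traces are simulated, not measured; it is a minimal illustration, not the thesis's attack.

```python
# Minimal correlation power analysis (CPA) sketch on simulated traces.
# Leakage model: Hamming weight of (plaintext ^ key byte) + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
true_key = 0x3C
n_traces = 2000

plaintexts = rng.integers(0, 256, n_traces)
hw = np.array([bin(v).count("1") for v in range(256)])   # Hamming-weight table

# Simulated power traces: leakage of the targeted intermediate plus noise.
traces = hw[plaintexts ^ true_key] + rng.normal(0.0, 1.0, n_traces)

# For each key guess, correlate the hypothetical leakage with the traces.
scores = np.empty(256)
for guess in range(256):
    hypothesis = hw[plaintexts ^ guess]
    scores[guess] = abs(np.corrcoef(hypothesis, traces)[0, 1])

print(f"recovered key byte: {int(np.argmax(scores)):#04x}")   # expect 0x3c
```

A template attack differs in that the adversary first builds multivariate noise models per intermediate value on a profiling device, then classifies traces from the target device, which is why so few target encryptions are needed.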

Relevance:

10.00%

Publisher:

Abstract:

My original contribution to knowledge is the creation of a wireless sensor network (WSN) system that further improves the functionality of existing technology, whilst achieving improved power consumption and reliability. This thesis concerns the development of industrially applicable wireless sensor networks that are low-power, reliable and latency-aware. This work aims to improve upon the state of the art in networking protocols for low-rate multi-hop wireless sensor networks. Presented is an application-driven co-design approach to the development of such a system. Starting with the physical layer, hardware was designed to meet industry-specified requirements. The end system required further investigation of communications protocols that could achieve the derived application-level system performance specifications. A CSMA/TDMA hybrid MAC protocol was developed, leveraging numerous techniques from the literature as well as novel optimisations. It extends the current art with respect to power consumption for radio duty-cycled applications and reliability in dense wireless sensor networks, whilst respecting latency bounds. Specifically, it provides 100% packet delivery for 11 concurrent senders transmitting towards a single radio duty-cycled sink node. This represents an order of magnitude improvement over the comparable art, considering MAC-only mechanisms. A novel latency-aware routing protocol was developed to exploit the developed hardware and MAC protocol. It is based on a new weighted objective function with multiple fail-safe mechanisms to ensure extremely high reliability and robustness. The system was empirically evaluated on two hardware platforms: an application-specific custom 868 MHz node and the de facto community-standard TelosB. Extensive empirical comparative performance analyses were conducted against the relevant art to demonstrate the advances made. The resultant system is capable of exceeding 10-year battery life and exhibits reliability in excess of 99.9%.
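The abstract does not spell out the thesis's weighted objective function, so the sketch below only illustrates the general shape of a parent-selection metric that trades off link reliability against a latency bound. Every field name, weight and threshold here is invented for the example and should not be read as the thesis's actual routing metric.

```python
# Hypothetical latency-aware routing objective (illustrative only; the weights
# and fields are invented, not taken from the thesis).
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: int
    etx: float          # expected transmissions to reach the sink via this parent
    latency_ms: float   # estimated end-to-end latency via this parent

def objective(c: Candidate, w_etx: float = 0.7, w_lat: float = 0.3,
              latency_bound_ms: float = 500.0) -> float:
    """Lower is better; candidates violating the latency bound are rejected."""
    if c.latency_ms > latency_bound_ms:
        return float("inf")
    return w_etx * c.etx + w_lat * (c.latency_ms / latency_bound_ms)

candidates = [Candidate(1, etx=1.2, latency_ms=320.0),
              Candidate(2, etx=2.8, latency_ms=90.0),
              Candidate(3, etx=1.1, latency_ms=640.0)]   # violates the bound

best = min(candidates, key=objective)
print(f"selected parent: node {best.node_id}")
```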

Relevance:

10.00%

Publisher:

Abstract:

We explore the possibilities of obtaining compression in video through modified sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low-power and low-complexity video sensors. The sampling strategies, as well as the associated reconstruction algorithms, are discussed. These compressive sampling schemes could be implemented in the focal-plane readout hardware, resulting in a drastic reduction in data bandwidth and computational complexity.
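The sketch below illustrates the compressive-sampling idea in its simplest form: a sparse signal is measured through a random projection with far fewer measurements than samples, then recovered with orthogonal matching pursuit. A real video sensor would exploit temporal redundancy across frames and implement the projection in the focal-plane readout; here a 1-D sparse signal stands in, and all sizes are illustrative assumptions.

```python
# Minimal compressive sampling + orthogonal matching pursuit (OMP) sketch.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5             # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)    # k-sparse signal

phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))                # measurement matrix
y = phi @ x                                                  # compressed samples

def omp(phi, y, k):
    """Greedy OMP: recover a k-sparse signal from y = phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(phi, y, k)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```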

Relevance:

10.00%

Publisher:

Abstract:

Droplet-based digital microfluidics technology has now come of age, and software-controlled biochips for healthcare applications are starting to emerge. However, today's digital microfluidic biochips suffer from the drawback that there is no feedback to the control software from the underlying hardware platform. Due to the lack of precision inherent in biochemical experiments, errors are likely during droplet manipulation; error recovery based on the repetition of experiments leads to wastage of expensive reagents and hard-to-prepare samples. By exploiting recent advances in the integration of optical detectors (sensors) into a digital microfluidics biochip, we present a physical-aware system reconfiguration technique that uses sensor data at intermediate checkpoints to dynamically reconfigure the biochip. A cyberphysical resynthesis technique is used to recompute electrode-actuation sequences, thereby deriving new schedules, module placement, and droplet routing pathways, with minimum impact on the time-to-response.
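The schematic sketch below conveys the cyberphysical feedback loop the abstract describes: run a droplet operation, check an on-chip sensor reading at a checkpoint, and resynthesize the remaining schedule when the reading falls outside tolerance. It is not the paper's resynthesis algorithm; the sensor values, tolerance and "redo" recovery policy are simulated stand-ins.

```python
# Schematic checkpoint-and-reconfigure loop (illustrative stand-in, not the
# paper's cyberphysical resynthesis algorithm).
import random

random.seed(7)
TARGET_VOLUME, TOLERANCE = 1.0, 0.15

def run_operation(step: str) -> float:
    """Simulated droplet operation returning the sensed droplet volume."""
    return TARGET_VOLUME + random.gauss(0, 0.1)

def resynthesize(remaining: list) -> list:
    """Placeholder for recomputing actuation sequences: redo the failed step."""
    return ["redo-" + remaining[0]] + remaining[1:]

schedule = ["dispense", "mix", "dilute", "detect"]
while schedule:
    step, *rest = schedule
    sensed = run_operation(step)
    if abs(sensed - TARGET_VOLUME) > TOLERANCE:          # checkpoint failed
        schedule = resynthesize([step] + rest)           # dynamic reconfiguration
        print(f"{step}: out of tolerance ({sensed:.2f}), rescheduling")
    else:
        print(f"{step}: ok ({sensed:.2f})")
        schedule = rest
```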

Relevance:

10.00%

Publisher:

Abstract:

The ability to manipulate small fluid droplets, colloidal particles and single cells with the precision and parallelization of modern-day computer hardware has profound applications for biochemical detection, gene sequencing, chemical synthesis and highly parallel analysis of single cells. Drawing inspiration from general circuit theory and magnetic bubble technology, here we demonstrate a class of integrated circuits for executing sequential and parallel, timed operations on an ensemble of single particles and cells. The integrated circuits are constructed from lithographically defined, overlaid patterns of magnetic film and current lines. The magnetic patterns passively control particles similar to electrical conductors, diodes and capacitors. The current lines actively switch particles between different tracks similar to gated electrical transistors. When combined into arrays and driven by a rotating magnetic field clock, these integrated circuits have general multiplexing properties and enable the precise control of magnetizable objects.

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems.
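The sketch below illustrates, in the simplest possible terms, how a temporal compression ratio might be adapted from the compressed data itself: the change between consecutive compressed measurement vectors stands in for scene "temporal complexity", and the number of frames integrated per measurement is raised for static scenes and lowered for dynamic ones. The thresholds and window sizes are invented for the example and are not the paper's algorithm.

```python
# Illustrative adaptive temporal-CS rule: pick an integration window from the
# normalized change between consecutive compressed measurements (thresholds
# are invented, not from the paper).
import numpy as np

def choose_window(prev_y: np.ndarray, curr_y: np.ndarray,
                  low: float = 0.05, high: float = 0.25) -> int:
    """Map normalized measurement change to a temporal integration window."""
    change = np.linalg.norm(curr_y - prev_y) / (np.linalg.norm(prev_y) + 1e-12)
    if change < low:
        return 16      # nearly static scene: integrate many frames
    if change < high:
        return 8
    return 2           # fast motion: short integration window

rng = np.random.default_rng(2)
static = rng.normal(0, 1, 128)
print(choose_window(static, static + rng.normal(0, 0.01, 128)))   # -> 16
print(choose_window(static, rng.normal(0, 1, 128)))               # -> 2
```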

Relevance:

10.00%

Publisher:

Abstract:

Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for recognition of geometric primitives (e.g., planes and natural quadrics) from a scanned point cloud. One of the most important and most frequently occurring geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthonormality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with a mesh-triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
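For the "fitting" half of the pipeline the abstract describes, a standard least-squares plane fit can be sketched in a few lines: the plane normal is the direction of least variance of the centred points, i.e. the singular vector with the smallest singular value. The wavelet-based segmentation step is omitted, and the point cloud below is synthetic.

```python
# Least-squares plane fit to a (synthetic) planar point-cloud segment via SVD.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic planar patch: z = 0.5x - 0.2y + 1, plus small sensor noise.
xy = rng.uniform(-1, 1, (500, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.005, 500)
points = np.column_stack([xy, z])

centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid)
normal = vt[-1]                      # direction of least variance
d = -normal @ centroid               # plane equation: normal . p + d = 0

print("unit normal:", np.round(normal, 3))
print("offset d:   ", round(float(d), 3))
```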

Relevance:

10.00%

Publisher:

Abstract:

SUMMARY: Fracture stabilization in the diabetic patient is associated with higher complication rates, particularly infection and impaired wound healing, which can lead to major tissue damage, osteomyelitis, and higher amputation rates. With an increasing prevalence of diabetes and an aging population, the risks of infection of internal fixation devices are expected to grow. Although numerous retrospective clinical studies have identified a relationship between diabetes and infection, currently there are few animal models that have been used to investigate postoperative surgical-site infections associated with internal fixator implantation and diabetes. The authors therefore refined the protocol for inducing hyperglycemia and compared the bacterial burden in controls to pharmacologically induced type 1 diabetic rats after undergoing internal fracture plate fixation and Staphylococcus aureus surgical-site inoculation. Using an initial series of streptozotocin doses, followed by optional additional doses to reach a target blood glucose range of 300 to 600 mg/dl, the authors reliably induced diabetes in 100 percent of the rats (n = 16), in which a narrow hyperglycemic range was maintained 14 days after onset of diabetes (mean ± SEM, 466 ± 16 mg/dl; coefficient of variation, 0.15). With respect to their primary endpoint, the authors quantified a significantly higher infectious burden in inoculated diabetic animals (median, 3.2 10 colony-forming units/mg dry tissue) compared with inoculated nondiabetic animals (7.2 10 colony-forming units/mg dry tissue). These data support the authors' hypothesis that uncontrolled diabetes adversely affects the immune system's ability to clear Staphylococcus aureus associated with internal hardware.

Relevance:

10.00%

Publisher:

Abstract:

Our long-term goal is the detection and characterization of vulnerable plaque in the coronary arteries of the heart using intravascular ultrasound (IVUS) catheters. Vulnerable plaque, characterized by a thin fibrous cap and a soft, lipid-rich necrotic core is a precursor to heart attack and stroke. Early detection of such plaques may potentially alter the course of treatment of the patient to prevent ischemic events. We have previously described the characterization of carotid plaques using external linear arrays operating at 9 MHz. In addition, we previously modified circular array IVUS catheters by short-circuiting several neighboring elements to produce fixed beamwidths for intravascular hyperthermia applications. In this paper, we modified Volcano Visions 8.2 French, 9 MHz catheters and Volcano Platinum 3.5 French, 20 MHz catheters by short-circuiting portions of the array for acoustic radiation force impulse imaging (ARFI) applications. The catheters had an effective transmit aperture size of 2 mm and 1.5 mm, respectively. The catheters were connected to a Verasonics scanner and driven with pushing pulses of 180 V p-p to acquire ARFI data from a soft gel phantom with a Young's modulus of 2.9 kPa. The dynamic response of the tissue-mimicking material demonstrates a typical ARFI motion of 1 to 2 microns as the gel phantom displaces away and recovers back to its normal position. The hardware modifications applied to our IVUS catheters mimic potential beamforming modifications that could be implemented on IVUS scanners. Our results demonstrate that the generation of radiation force from IVUS catheters and the development of intravascular ARFI may be feasible.

Relevance:

10.00%

Publisher:

Abstract:

Software-based control of life-critical embedded systems has become increasingly complex, and to a large extent now determines the safety of the people who depend on it. For example, implantable cardiac pacemakers have over 80,000 lines of code which are responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) used to automatically generate modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation and testing process for complex software-controlled embedded systems.

Relevance:

10.00%

Publisher:

Abstract:

The outcomes for both (i) radiation therapy and (ii) preclinical small animal radiobiology studies are dependent on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or low, and can also result from an incorrect spatial distribution in which nearby normal healthy tissue can be undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high dose rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, considering commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.

In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.

Hardware design changes were implemented to reduce the overall fiber diameter to <0.9 mm for the nano-crystalline scintillator based fiber optic detector (NanoFOD) system. Lower limits of device sensitivity were found to be approximately 0.05 cGy/s. Herein, this detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range ratio to the TPS dose value of (1.02 - 0.94 = 0.08). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, energy response changed by a factor of 2.25 over SDD values of 3 to 9 cm; however, a cap made of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.

For preclinical measurements, dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom at 225 kV for x-ray irradiation at angles of 0°, 90°, 180°, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with use of the 4 x 4 cm collimator, and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy of the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half maximum lateral dimension of the MRT system were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences have been explained mostly as an artifact due to the geometry used and volumetric effects in the scintillator material. Characterization of the energy dependence for the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements were carried out on a small animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% as compared to a MOSFET dosimeter over the range of 2.5 to 33 cm depth in tissue equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without the need for replacement or recalibration over its lifetime.

For small animal irradiator systems, researchers can deliver a given dose to a target organ by controlling exposure time. Currently, researchers calculate this exposure time by dividing the total dose that they wish to deliver by a single provided dose rate value. This method is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom was comprised of 256 x 256 x 800 voxels of size 0.145 x 0.145 x 0.145 mm³. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated, and methods for alleviating these errors during whole-body irradiation of mice were suggested, by utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations.

Monte Carlo analysis was used on 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-type irradiators. Results and findings indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM within close proximity to mineral bone (within about 60 µm) as compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four aforementioned radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of the BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV, 137Cs beams.
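The dose-prescription point above reduces to simple arithmetic: exposure time is dose divided by dose rate, so using a single global dose rate instead of an organ-specific one directly mis-delivers dose in proportion to the rate mismatch. The sketch below works one such example; the numeric dose rates are invented for illustration and are not values from the dissertation.

```python
# Worked example (invented numbers): exposure time from a single global dose
# rate versus from a hypothetical organ-specific dose rate.
target_dose_gy = 6.0

global_dose_rate_gy_per_min = 2.0    # single value supplied with the irradiator
liver_dose_rate_gy_per_min = 1.6     # hypothetical organ/filter-specific rate

t_global = target_dose_gy / global_dose_rate_gy_per_min
t_liver = target_dose_gy / liver_dose_rate_gy_per_min

delivered_to_liver = t_global * liver_dose_rate_gy_per_min
error_pct = 100 * (delivered_to_liver - target_dose_gy) / target_dose_gy

print(f"exposure time from the global rate:         {t_global:.2f} min")
print(f"exposure time from the organ-specific rate: {t_liver:.2f} min")
print(f"dose delivered to the organ using the global rate: "
      f"{delivered_to_liver:.2f} Gy ({error_pct:+.0f}%)")
```

With these example numbers the global-rate prescription underdoses the organ by 20%, the same order of discrepancy the simulations report for soft-tissue target organs.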

Relevance:

10.00%

Publisher:

Abstract:

Although the definition of single-program benchmarks is relatively straightforward (a benchmark is a program plus a specific input), the definition of multi-program benchmarks is more complex. Each program may have a different runtime, and they may have different interactions depending on how they align with each other. While prior work has focused on sampling multi-program benchmarks, little attention has been paid to defining the benchmarks in their entirety. In this work, we propose a four-tuple that formally defines multi-program benchmarks in a well-defined way. We then examine how four different classes of benchmarks, created by varying the elements of this tuple, align with real-world use cases. We evaluate the impact of these variations on real hardware, and see drastic variations in results between different benchmarks constructed from the same programs. Notable differences include significant speedups versus slowdowns (e.g., +57% vs. -5%, or +26% vs. -18%), and large differences in magnitude even when the results are in the same direction (e.g., 67% versus 11%).