962 results for hardware deskribapen lengoaiak
Abstract:
The ability to manipulate small fluid droplets, colloidal particles and single cells with the precision and parallelization of modern-day computer hardware has profound applications for biochemical detection, gene sequencing, chemical synthesis and highly parallel analysis of single cells. Drawing inspiration from general circuit theory and magnetic bubble technology, here we demonstrate a class of integrated circuits for executing sequential and parallel, timed operations on an ensemble of single particles and cells. The integrated circuits are constructed from lithographically defined, overlaid patterns of magnetic film and current lines. The magnetic patterns passively control particles similar to electrical conductors, diodes and capacitors. The current lines actively switch particles between different tracks similar to gated electrical transistors. When combined into arrays and driven by a rotating magnetic field clock, these integrated circuits have general multiplexing properties and enable the precise control of magnetizable objects.
Abstract:
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm that adapts the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is achieved by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems.
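A minimal sketch of the adaptive idea described above, in Python, assuming a simple frame-difference proxy for temporal complexity and illustrative thresholds; the paper's actual estimator operates on the compressed measurements and controls the camera integration time directly.

```python
import numpy as np

def temporal_complexity(meas_prev, meas_curr):
    """Proxy for scene motion: normalized change between consecutive
    compressed measurement vectors (hypothetical estimator)."""
    return np.linalg.norm(meas_curr - meas_prev) / (np.linalg.norm(meas_prev) + 1e-9)

def adapt_compression_ratio(ratio, complexity, low=0.05, high=0.20,
                            ratio_min=4, ratio_max=32):
    """Lower the temporal compression ratio (shorter integration time)
    when the scene is busy, raise it when the scene is static.
    Thresholds and bounds are illustrative, not from the paper."""
    if complexity > high:
        ratio = max(ratio_min, ratio // 2)
    elif complexity < low:
        ratio = min(ratio_max, ratio * 2)
    return ratio
```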
Abstract:
Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online use. However, the bottleneck is the lack of real-time algorithms for recognizing geometric primitives (e.g., planes and natural quadrics) in a scanned point cloud. One of the most important and most frequently occurring geometric primitives in engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To segment planar regions effectively, we exploit the orthonormality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmenting the planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with mesh triangles represented in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
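As a concrete illustration of the fitting step (not the wavelet-based segmentation), a least-squares plane fit to a segmented point set can be computed with an SVD; this is a generic sketch of a standard fitting procedure, not the authors' implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of 3-D points.
    Returns (centroid, unit normal); the plane is n . (x - c) = 0."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered cloud is the direction of least variance, i.e. the normal.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[-1]

# Example: points scattered near the plane z = 0.5x + 0.2y + 1
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + 0.01 * rng.standard_normal(200)
centroid, normal = fit_plane(np.column_stack([xy, z]))
```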
Abstract:
SUMMARY: Fracture stabilization in the diabetic patient is associated with higher complication rates, particularly infection and impaired wound healing, which can lead to major tissue damage, osteomyelitis, and higher amputation rates. With an increasing prevalence of diabetes and an aging population, the risks of infection of internal fixation devices are expected to grow. Although numerous retrospective clinical studies have identified a relationship between diabetes and infection, there are currently few animal models that have been used to investigate postoperative surgical-site infections associated with internal fixator implantation and diabetes. The authors therefore refined the protocol for inducing hyperglycemia and compared the bacterial burden in control rats with that in pharmacologically induced type 1 diabetic rats after internal fracture plate fixation and Staphylococcus aureus surgical-site inoculation. Using an initial series of streptozotocin doses, followed by optional additional doses to reach a target blood glucose range of 300 to 600 mg/dl, the authors reliably induced diabetes in 100 percent of the rats (n = 16), in which a narrow hyperglycemic range was maintained 14 days after onset of diabetes (mean ± SEM, 466 ± 16 mg/dl; coefficient of variation, 0.15). With respect to their primary endpoint, the authors quantified a significantly higher infectious burden in inoculated diabetic animals (median, 3.2 × 10 colony-forming units/mg dry tissue) compared with inoculated nondiabetic animals (7.2 × 10 colony-forming units/mg dry tissue). These data support the authors' hypothesis that uncontrolled diabetes adversely affects the immune system's ability to clear Staphylococcus aureus associated with internal hardware.
Abstract:
Our long-term goal is the detection and characterization of vulnerable plaque in the coronary arteries of the heart using intravascular ultrasound (IVUS) catheters. Vulnerable plaque, characterized by a thin fibrous cap and a soft, lipid-rich necrotic core, is a precursor to heart attack and stroke. Early detection of such plaques may potentially alter the course of treatment of the patient to prevent ischemic events. We have previously described the characterization of carotid plaques using external linear arrays operating at 9 MHz. In addition, we previously modified circular array IVUS catheters by short-circuiting several neighboring elements to produce fixed beamwidths for intravascular hyperthermia applications. In this paper, we modified Volcano Visions 8.2 French, 9 MHz catheters and Volcano Platinum 3.5 French, 20 MHz catheters by short-circuiting portions of the array for acoustic radiation force impulse (ARFI) imaging applications. The catheters had effective transmit aperture sizes of 2 mm and 1.5 mm, respectively. The catheters were connected to a Verasonics scanner and driven with pushing pulses of 180 V p-p to acquire ARFI data from a soft gel phantom with a Young's modulus of 2.9 kPa. The dynamic response of the tissue-mimicking material demonstrates a typical ARFI motion of 1 to 2 microns as the gel phantom displaces away and recovers back to its normal position. The hardware modifications applied to our IVUS catheters mimic potential beamforming modifications that could be implemented on IVUS scanners. Our results demonstrate that the generation of radiation force from IVUS catheters and the development of intravascular ARFI may be feasible.
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex and, to a large extent, now determines the safety of the people who depend on these systems. For example, implantable cardiac pacemakers have over 80,000 lines of code responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this end, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation, and testing process for complex software-controlled embedded systems.
Abstract:
The outcomes of both (i) radiation therapy and (ii) preclinical small-animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution in which nearby normal healthy tissue is undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pinpoint accuracy. When dealing with the steep spatial dose gradients found in (i) high-dose-rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, since commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.
In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.
Hardware design changes were implemented to reduce the overall fiber diameter to <0.9 mm for the nano-crystalline scintillator based fiber optic detector (NanoFOD) system. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. Herein, this detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to those of thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an interquartile range of the measured-to-TPS dose ratio of 0.94 to 1.02 (a width of 0.08). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over source-to-detector distances (SDD) of 3 to 9 cm; however, a cap made of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.
For preclinical measurements, the dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom at 225 kV for x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4 × 4 cm collimator and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy for the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half-maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences are explained mostly as an artifact of the geometry used and volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements were carried out on a small animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% compared to a MOSFET dosimeter over the range of 2.5 to 33 cm depth in tissue-equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without replacement or recalibration over its lifetime.
For small animal irradiator systems, researchers can deliver a given dose to a target organ by controlling the exposure time. Currently, researchers calculate this exposure time by dividing the total dose they wish to deliver by a single provided dose-rate value, a method that is independent of the target organ. The studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom comprised 256 × 256 × 800 voxels of size 0.145 × 0.145 × 0.145 mm³. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated during whole-body irradiation of mice, and methods for alleviating these errors by utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations were suggested.
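The proposed prescription change reduces to simple arithmetic: divide the prescribed dose by an organ- and filter-specific dose rate rather than by a single machine-wide value. A sketch with placeholder numbers (the dose rates below are hypothetical, not results from this work):

```python
# Hypothetical dose-rate table (Gy/min) indexed by (organ, tube filter);
# the values are placeholders, not data from this dissertation.
DOSE_RATE = {
    ("liver", "0.3 mm Cu"): 1.10,
    ("lung",  "0.3 mm Cu"): 1.35,
}

def exposure_time(target_dose_gy, organ, tube_filter):
    """Exposure time (min) = prescribed dose / organ-specific dose rate,
    replacing the single machine-wide dose-rate value."""
    return target_dose_gy / DOSE_RATE[(organ, tube_filter)]

t = exposure_time(6.0, "liver", "0.3 mm Cu")  # ~5.5 min for this placeholder rate
```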
Monte Carlo analysis was used on 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-based irradiators. The results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.
Abstract:
Although the definition of single-program benchmarks is relatively straightforward (a benchmark is a program plus a specific input), the definition of multi-program benchmarks is more complex: each program may have a different runtime, and the programs may interact differently depending on how they align with each other. While prior work has focused on sampling multi-program benchmarks, little attention has been paid to defining the benchmarks in their entirety. In this work, we propose a four-tuple that formally defines multi-program benchmarks in a well-defined way. We then examine how four different classes of benchmarks, created by varying the elements of this tuple, align with real-world use cases. We evaluate the impact of these variations on real hardware and see drastic variations in results between different benchmarks constructed from the same programs. Notable differences include significant speedups versus slowdowns (e.g., +57% vs. -5%, or +26% vs. -18%), and large differences in magnitude even when the results are in the same direction (e.g., 67% versus 11%).
Abstract:
We present a novel system to be used in the rehabilitation of patients with forearm injuries. The system uses surface electromyography (sEMG) recordings from a wireless sleeve to control video games designed to provide engaging biofeedback to the user. An integrated hardware/software system uses a neural net to classify the signals from a user’s muscles as they perform one of a number of common forearm physical therapy exercises. These classifications are used as input for a suite of video games that have been custom-designed to hold the patient’s attention and decrease the risk of noncompliance with the physical therapy regimen necessary to regain full function in the injured limb. The data is transmitted wirelessly from the on-sleeve board to a laptop computer using a custom-designed signal-processing algorithm that filters and compresses the data prior to transmission. We believe that this system has the potential to significantly improve the patient experience and efficacy of physical therapy using biofeedback that leverages the compelling nature of video games.
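A rough sketch of the classification pipeline described above, assuming windowed RMS features per sEMG channel and scikit-learn's MLPClassifier as a stand-in for the neural net; the sleeve firmware, wireless link, filtering/compression step, and game interface are not shown.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def rms_features(emg_window):
    """Root-mean-square per channel for one (samples, channels) window."""
    return np.sqrt(np.mean(np.square(emg_window), axis=0))

def windows(emg, size=200, step=100):
    """Slide a fixed-size window over a (samples, channels) recording."""
    for start in range(0, emg.shape[0] - size + 1, step):
        yield emg[start:start + size]

# Train on labeled recordings of each therapy exercise (placeholder data).
rng = np.random.default_rng(1)
X = np.vstack([rms_features(w) for w in windows(rng.standard_normal((5000, 8)))])
y = rng.integers(0, 4, size=X.shape[0])          # 4 hypothetical exercise classes
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# At runtime, each new window is classified and the label drives the game.
label = clf.predict(rms_features(rng.standard_normal((200, 8))).reshape(1, -1))
```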
Abstract:
An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC), Intel Development Center, Haifa, Israel, on November 23, 2015.
Abstract:
Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required of the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured-grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.
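The "affine global-to-local index mappings" mentioned for the PETSc/SPMD approach amount to subtracting a per-subdomain offset. A minimal illustrative sketch for a one-dimensional block decomposition (illustrative only, not PETSc's actual API):

```python
def block_partition(n_global, n_procs, rank):
    """Contiguous block owned by `rank`: returns (start, count).
    Remainder points are spread over the lowest-numbered ranks."""
    base, extra = divmod(n_global, n_procs)
    start = rank * base + min(rank, extra)
    count = base + (1 if rank < extra else 0)
    return start, count

def global_to_local(i_global, start, count):
    """Affine map: local index = global index - subdomain offset."""
    i_local = i_global - start
    if 0 <= i_local < count:
        return i_local
    return None  # index owned by another processor

start, count = block_partition(1000, 8, rank=3)
assert global_to_local(start + 5, start, count) == 5
```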
Abstract:
The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements both to the performance of parallel programs built with compiler directives and to portability, with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
Abstract:
In a mesh-based computational mechanics simulation process, the stages between model geometry creation and model analysis, such as mesh generation, are the most manpower-intensive phases, whereas the model analysis itself is the most computing-intensive phase. Advanced computational hardware and software have significantly reduced the computing time, and more importantly the trend is downward. For the kinds of models envisaged, which are larger, more complex in geometry and modelling, and multiphysics, there is no clear trend that the manpower-intensive phase will decrease significantly; in the present way of operation it is more likely to increase with model complexity. In this paper we address this dilemma through collaborating components for models in electronic packaging applications.
Abstract:
Many Web applications walk the thin line between the need for dynamic data and the need to meet user performance expectations. In environments where funds are not available to constantly upgrade hardware in line with user demand, alternative approaches need to be considered. This paper introduces a 'Data farming' model whereby dynamic data, which is 'grown' in operational applications, is 'harvested' and 'packaged' for various consumer markets. Like any well-managed agricultural operation, crops are harvested according to historical and perceived demand as inferred by a self-optimising process. This approach aims to make enhanced use of available resources through better utilisation of system downtime, thereby improving application performance and increasing the availability of key business data.