887 results for Hard real-time distributed systems
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
OBJECTIVES We sought to determine whether assessment of left ventricular (LV) function with real-time (RT) three-dimensional echocardiography (3DE) could reduce the variation of sequential LV measurements and provide greater accuracy than two-dimensional echocardiography (2DE). BACKGROUND Real-time 3DE has become feasible as a standard clinical tool, but its accuracy for LV assessment has not been validated. METHODS Unselected patients (n = 50; 41 men; age, 64 +/- 8 years) presenting for evaluation of LV function were studied with 2DE and RT-3DE. Test-retest variation was performed by a complete restudy by a separate sonographer within 1 h without alteration of hemodynamics or therapy. Magnetic resonance imaging (MRI) images were obtained during a breath-hold, and measurements were made off-line. RESULTS The test-retest variation showed similar measurements for volumes but wider scatter of LV mass measurements with M-mode and 2DE than 3DE. The average MRI end-diastolic volume was 172 +/- 53 ml; LV volumes were underestimated by 2DE (mean difference, -54 +/- 33; p < 0.01) but only slightly by RT-3DE (-4 +/- 29; p = 0.31). Similarly, end-systolic volume by MRI (91 +/- 53 ml) was underestimated by 2DE (mean difference, -28 +/- 28; p < 0.01) and by RT-3DE (mean difference, -3 +/- 18; p = 0.23). Ejection fraction by MRI was similar by 2DE (p = 0.76) and RT-3DE (p = 0.74). Left ventricular mass (183 +/- 50 g) was overestimated by M-mode (mean difference, 68 +/- 86 g; p < 0.01) and 2DE (16 +/- 57; p = 0.04) but not RT-3DE (0 +/- 38 g; p = 0.94). There was good inter- and intra-observer correlation between RT-3DE by two sonographers for volumes, ejection fraction, and mass. CONCLUSIONS Real-time 3DE is a feasible approach to reduce test-retest variation of LV volume, ejection fraction, and mass measurements in follow-up LV assessment in daily practice. (C) 2004 by the American College of Cardiology Foundation.
Abstract:
Objectives: Left atrial (LA) volume (LAV) is a prognostically important biomarker for diastolic dysfunction, but its reproducibility on repeated testing is not well defined. LA assessment with 3-dimensional (3D) echocardiography (3DE) has been validated against magnetic resonance imaging, and we sought to assess whether this was superior to existing measurements for sequential echocardiographic follow-up. Methods: Patients (n = 100; 81 men; age 56 +/- 14 years) presenting for LA evaluation were studied with M-mode (MM) echocardiography, 2-dimensional (2D) echocardiography, and 3DE. Test-retest variation was performed by a complete restudy by a separate sonographer within 1 hour without alteration of hemodynamics or therapy. In all, 20 patients were studied for interobserver and intraobserver variation. LAVs were calculated by using M-mode diameter and planimetered atrial area in the apical 4-chamber view to calculate an assumed sphere, as were prolate ellipsoid, Simpson's biplane, and biplane area-length methods. All were compared with 3DE. Results: The average LAV was 72 +/- 27 mL by 3DE. There was significant underestimation of LAV by M-mode (35 +/- 20 mL, r = 0.66, P < .01). The 3DE and various 2D echocardiographic techniques were well correlated: LA planimetry (85 +/- 38 mL, r = 0.77, P < .01), prolate ellipsoid (73 +/- 36 mL, r = 0.73, P = .04), area-length (64 +/- 30 mL, r = 0.74, P < .01), and Simpson's biplane (69 +/- 31 mL, r = 0.78, P = .06). Test-retest variation for 3DE was most favorable (r = 0.98, P < .01), with the prolate ellipsoid method showing the most variation. Interobserver agreement between measurements was best for 3DE (r = 0.99, P < .01), with M-mode the worst (r = 0.89, P < .01). Intraobserver results were similar to interobserver, with the best correlation for 3DE (r = 0.99, P < .01) and LA planimetry the worst (r = 0.91, P < .01). Conclusions: The 2D measurements correlate closely with 3DE. Follow-up assessment in daily practice appears feasible and reliable with both 2D and 3D approaches.
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration techniques. Recent software and hardware developments have greatly eased the development burden of real-time image analysis, leading to portable systems built from cheap PC hardware and software exploiting the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognizing hand gestures, using efficient coding and MMX acceleration to achieve real-time performance on low-cost hardware.
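As a rough illustration of the kind of data-parallel, per-pixel operation that MMX-style SIMD accelerates, the sketch below segments candidate hand pixels in a single RGB frame using whole-array NumPy comparisons. The colour thresholds, frame size, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def segment_hand(frame_rgb):
    """Vectorised per-pixel skin-colour threshold on an HxWx3 uint8 RGB frame.

    Illustrative thresholds only; a real system would tune these per camera.
    """
    r = frame_rgb[..., 0].astype(np.int16)
    g = frame_rgb[..., 1].astype(np.int16)
    b = frame_rgb[..., 2].astype(np.int16)
    # Whole-frame boolean operations stand in for the packed 8-bit SIMD
    # comparisons that MMX performs on blocks of pixels.
    return (r > 95) & (g > 40) & (b > 20) & (r - g > 15) & (r - b > 15)

# Example: process one synthetic 240x320 frame.
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
mask = segment_hand(frame)
print(mask.sum(), "candidate hand pixels")
```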
Abstract:
We propose a method for the timing analysis of concurrent real-time programs with hard deadlines. We divide the analysis into a machine-independent and a machine-dependent task. The latter takes into account the execution times of the program on a particular machine. Therefore, our goal is to make the machine-dependent phase of the analysis as simple as possible. We succeed in the sense that the machine-dependent phase remains the same as in the analysis of sequential programs. We shift the complexity introduced by concurrency completely to the machine-independent phase.
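A minimal sketch of the proposed split, under invented operation names and cycle costs: the machine-independent phase yields a description of program paths in terms of abstract operations, and the machine-dependent phase reduces to substituting a per-operation timing table for the target machine, exactly as it would for a sequential program.

```python
# Machine-independent result: each path described as counts of abstract operations.
# Names and counts are illustrative assumptions, not taken from the paper.
machine_independent_paths = {
    "sensor_read_branch":    {"load": 4, "store": 2, "mul": 1, "branch": 2},
    "actuator_write_branch": {"load": 6, "store": 3, "mul": 2, "branch": 1},
}

# Machine-dependent input: per-operation execution times for one target (microseconds).
machine_dependent_times_us = {"load": 0.10, "store": 0.12, "mul": 0.25, "branch": 0.05}

def worst_case_time(paths, op_times):
    """Combine the two phases: the timing table can be swapped per target
    machine without redoing the path analysis."""
    return max(
        sum(count * op_times[op] for op, count in ops.items())
        for ops in paths.values()
    )

print(f"WCET estimate: {worst_case_time(machine_independent_paths, machine_dependent_times_us):.2f} us")
```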
Abstract:
This paper presents results from the first use of neural networks for the real-time feedback control of high-temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
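The following sketch shows only the numerical core implied by the abstract: one forward pass of a small multi-layer perceptron mapping magnetic measurements to control demands. Layer sizes, weights, and signal names are placeholders; the actual system evaluates this mapping in microseconds on hybrid digital/analogue hardware rather than in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: magnetic probe signals in, boundary/shape control demands out.
n_inputs, n_hidden, n_outputs = 16, 8, 4
W1 = rng.normal(size=(n_hidden, n_inputs))   # placeholder "trained" weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_outputs, n_hidden))
b2 = np.zeros(n_outputs)

def mlp_control(probe_signals):
    """One forward pass of a small multi-layer perceptron."""
    hidden = np.tanh(W1 @ probe_signals + b1)   # hidden layer with tanh activation
    return W2 @ hidden + b2                     # linear output layer -> control demands

measurements = rng.normal(size=n_inputs)        # simulated probe readings
print(mlp_control(measurements))
```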
Abstract:
Liposomes have been imaged using a plethora of techniques. However, few of these methods offer the ability to study these systems in their natural hydrated state without drying, staining, and fixation of the vesicles. Imaging a liposome in its hydrated state is the ideal scenario for visualizing these dynamic lipid structures, and environmental scanning electron microscopy (ESEM), with its ability to image wet systems without prior sample preparation, offers potential advantages over the above methods. In our studies, we have used ESEM not only to investigate the morphology of liposomes and niosomes but also to dynamically follow the changes in structure of lipid films and liposome suspensions as water condenses on to or evaporates from the sample. In particular, changes in liposome morphology were studied using ESEM in real time to investigate the resistance of liposomes to coalescence during dehydration, thereby providing an alternative assay of liposome formulation and stability. Based on this protocol, we have also studied niosome-based systems and cationic liposome/DNA complexes. Copyright © Informa Healthcare.
Abstract:
Requirements for systems to continue to operate satisfactorily in the presence of faults have led to the development of techniques for the construction of fault-tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems which consist of a set of communicating sequential processes. A method is presented for the 'a priori' design of conversations for this class of distributed system. Petri nets are used to represent the state and to solve state reachability problems for concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table. Relating the state-change table to process attributes ensures that all necessary processes are included in the conversation. The method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed. The structure of the conversation is defined using the special features of occam. The proposed implementation gives a structure which is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message-passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri net model, a boundary for a time-out mechanism is proposed which will increase the integrity of a system that involves inter-process communications.
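A minimal sketch of the reachability-tree construction that underlies such a state-change table, for an invented two-process net; the place and transition names are assumptions for illustration only.

```python
from collections import deque

# transition -> (places consumed, places produced), one token per arc.
transitions = {
    "t_send":    ({"p_ready_A"}, {"p_wait_A", "p_msg"}),
    "t_receive": ({"p_ready_B", "p_msg"}, {"p_done_B"}),
    "t_ack":     ({"p_wait_A", "p_done_B"}, {"p_ready_A", "p_ready_B"}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= 1 for p in pre)

def fire(marking, pre, post):
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return {p: n for p, n in m.items() if n}   # drop empty places

def reachability_tree(initial):
    """Breadth-first enumeration of reachable markings (finite, safe nets)."""
    seen = {frozenset(initial.items())}
    queue = deque([initial])
    edges = []
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((m, name, m2))
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
    return edges

for before, t, after in reachability_tree({"p_ready_A": 1, "p_ready_B": 1}):
    print(t, "->", after)
```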
Abstract:
This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment, yet performance data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition for capture latency and how to measure it does not exist in the field; we create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance is at the expense of CPU resources.
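The sketch below illustrates the push-CDC idea behind such a service-based approach, under stated assumptions: a data-access function executes the OLTP write and immediately publishes a change event stamped with the commit time, so capture latency can be measured as the delay from commit to consumption. Table, function, and field names are invented; this is not the thesis's actual DAS or TAAR interface.

```python
import json
import queue
import sqlite3
import time

change_bus = queue.Queue()          # stand-in for a message broker / event channel

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")

def create_order(item, qty):
    """Execute the OLTP transaction, then push the change event (push CDC)."""
    cur = db.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
    db.commit()
    change_bus.put({
        "table": "orders",
        "op": "insert",
        "key": cur.lastrowid,
        "data": {"item": item, "qty": qty},
        "committed_at": time.monotonic(),     # timestamp for capture-latency measurement
    })
    return cur.lastrowid

create_order("widget", 3)
event = change_bus.get()
capture_latency = time.monotonic() - event["committed_at"]
print(json.dumps(event["data"]), f"captured after {capture_latency * 1e3:.3f} ms")
```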
Abstract:
We investigate the problem of obtaining a dense reconstruction in real-time, from a live video stream. In recent years, multi-view stereo (MVS) has received considerable attention and a number of methods have been proposed. However, most methods operate under the assumption of a relatively sparse set of still images as input and unlimited computation time. Video based MVS has received less attention despite the fact that video sequences offer significant benefits in terms of usability of MVS systems. In this paper we propose a novel video based MVS algorithm that is suitable for real-time, interactive 3d modeling with a hand-held camera. The key idea is a per-pixel, probabilistic depth estimation scheme that updates posterior depth distributions with every new frame. The current implementation is capable of updating 15 million distributions/s. We evaluate the proposed method against the state-of-the-art real-time MVS method and show improvement in terms of accuracy. © 2011 Elsevier B.V. All rights reserved.
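A minimal sketch of the per-pixel idea, assuming a discretised depth posterior and a Gaussian-plus-outlier likelihood in place of the paper's exact parametric model: every pixel maintains a distribution over candidate depths that is multiplied by the likelihood from each new frame and renormalised.

```python
import numpy as np

H, W, D = 120, 160, 64                     # image size and number of depth bins (illustrative)
depths = np.linspace(0.5, 5.0, D)          # candidate depths in metres
posterior = np.full((H, W, D), 1.0 / D)    # start from a uniform prior per pixel

def update(posterior, observed_depth, sigma=0.05, inlier_ratio=0.8):
    """Fuse one frame: observed_depth is an HxW map of per-pixel depth matches."""
    diff = depths[None, None, :] - observed_depth[..., None]
    likelihood = (
        inlier_ratio * np.exp(-0.5 * (diff / sigma) ** 2)          # inlier: Gaussian around the match
        + (1.0 - inlier_ratio) / (depths[-1] - depths[0])          # outlier: uniform floor
    )
    posterior = posterior * likelihood
    return posterior / posterior.sum(axis=-1, keepdims=True)       # renormalise per pixel

rng = np.random.default_rng(1)
for _ in range(15):                                     # simulate 15 incoming frames
    noisy = 2.0 + 0.05 * rng.standard_normal((H, W))    # true depth ~2 m plus noise
    posterior = update(posterior, noisy)

depth_map = depths[posterior.argmax(axis=-1)]
print(f"median estimated depth: {np.median(depth_map):.3f} m")
```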
Abstract:
We report high-resolution real-time measurements of spectrum evolution in a fibre. The proposed method combines optical heterodyning with a technique of spatio-temporal intensity measurements revealing fast spectral dynamics of cavity-based systems.
Abstract:
We present a complex neural network model of user behavior in distributed systems. The model reflects both dynamical and statistical features of user behavior and consists of three components: an on-line model, an off-line model, and a change detection module. The on-line model reflects dynamical features by predicting user actions on the basis of previous ones. The off-line model is based on the analysis of statistical parameters of user behavior. In both cases neural networks are used to reveal uncharacteristic activity of users. The change detection module is intended for trend analysis in user behavior. The efficiency of the complex model is verified on real data from users of the Space Research Institute of NASU-NSAU.
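As a hedged sketch of the on-line component's role, the code below predicts a user's next action from the previous one and flags low-probability actions as uncharacteristic. A simple first-order frequency model stands in for the paper's neural network, and the action names and threshold are invented.

```python
from collections import Counter, defaultdict

def train(history):
    """Count observed transitions between consecutive user actions."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def action_probability(counts, prev, action):
    total = sum(counts[prev].values())
    return counts[prev][action] / total if total else 0.0

# Training data: repeated "normal" sessions (illustrative action names).
normal_session = ["login", "read_mail", "edit_doc", "read_mail", "edit_doc", "logout"] * 20
model = train(normal_session)

# Score new activity: the second transition is unusual and should be flagged.
observed = [("login", "read_mail"), ("read_mail", "drop_table")]
for prev, action in observed:
    p = action_probability(model, prev, action)
    flag = "UNCHARACTERISTIC" if p < 0.05 else "ok"
    print(f"{prev} -> {action}: p={p:.2f} [{flag}]")
```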