908 results for Real systems
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
OBJECTIVES We sought to determine whether assessment of left ventricular (LV) function with real-time (RT) three-dimensional echocardiography (3DE) could reduce the variation of sequential LV measurements and provide greater accuracy than two-dimensional echocardiography (2DE). BACKGROUND Real-time 3DE has become feasible as a standard clinical tool, but its accuracy for LV assessment has not been validated. METHODS Unselected patients (n = 50; 41 men; age, 64 +/- 8 years) presenting for evaluation of LV function were studied with 2DE and RT-3DE. Test-retest variation was assessed by a complete restudy by a separate sonographer within 1 h without alteration of hemodynamics or therapy. Magnetic resonance imaging (MRI) was performed during a breath-hold, and measurements were made off-line. RESULTS The test-retest variation showed similar measurements for volumes but wider scatter of LV mass measurements with M-mode and 2DE than with 3DE. The average MRI end-diastolic volume was 172 +/- 53 ml; LV volumes were underestimated by 2DE (mean difference, -54 +/- 33 ml; p < 0.01) but only slightly by RT-3DE (-4 +/- 29 ml; p = 0.31). Similarly, end-systolic volume by MRI (91 +/- 53 ml) was underestimated by 2DE (mean difference, -28 +/- 28 ml; p < 0.01) and only slightly by RT-3DE (mean difference, -3 +/- 18 ml; p = 0.23). Ejection fraction by MRI was similar to that by 2DE (p = 0.76) and RT-3DE (p = 0.74). Left ventricular mass (183 +/- 50 g) was overestimated by M-mode (mean difference, 68 +/- 86 g; p < 0.01) and 2DE (16 +/- 57 g; p = 0.04) but not RT-3DE (0 +/- 38 g; p = 0.94). There was good interobserver and intraobserver correlation between RT-3DE measurements by two sonographers for volumes, ejection fraction, and mass. CONCLUSIONS Real-time 3DE is a feasible approach to reduce test-retest variation of LV volume, ejection fraction, and mass measurements in follow-up LV assessment in daily practice. (C) 2004 by the American College of Cardiology Foundation.
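As a rough illustration of the bias statistics quoted above, here is a minimal sketch of a mean-difference analysis with a paired test. The data are synthetic, drawn to match the reported means and standard deviations; this is not the study's analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mri = rng.normal(172, 53, size=50)           # MRI end-diastolic volumes (ml), reference
two_de = mri - rng.normal(54, 33, size=50)   # 2DE with a systematic underestimate
three_de = mri - rng.normal(4, 29, size=50)  # RT-3DE nearly unbiased

for name, est in [("2DE", two_de), ("RT-3DE", three_de)]:
    diff = est - mri
    _, p = stats.ttest_rel(est, mri)         # paired test for systematic bias
    print(f"{name}: bias {diff.mean():+.1f} +/- {diff.std(ddof=1):.1f} ml, p = {p:.3f}")
```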
Abstract:
The recent summary report of a Department of Energy Workshop on Plant Systems Biology (P.V. Minorsky [2003] Plant Physiol 132: 404-409) offered welcome advocacy for systems analysis as essential to understanding plant development, growth, and production. The goal of the Workshop was to consider methods for relating the results of molecular research to real-world challenges in plant production for increased food supplies, alternative energy sources, and environmental improvement. The rather surprising feature of this report, however, was that the Workshop largely overlooked the rich history of plant systems analysis, extending over nearly 40 years (Sinclair and Seligman, 1996), that has considered exactly those challenges targeted by the Workshop. Past systems research has explored and incorporated biochemical and physiological knowledge into plant simulation models from a number of perspectives. The research has resulted in considerable understanding and insight about how to simulate plant systems and the relative contribution of various factors in influencing plant production. These past activities have contributed directly to research focused on solving the problems of increasing biomass production and crop yields. These modeling approaches are also now providing an avenue to enhance integration of molecular genetic technologies in plant improvement (Hammer et al., 2002).
Abstract:
Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Objectives: Left atrial (LA) volume (LAV) is a prognostically important biomarker for diastolic dysfunction, but its reproducibility on repeated testing is not well defined. LA assessment with 3-dimensional (3D) echocardiography (3DE) has been validated against magnetic resonance imaging, and we sought to assess whether this was superior to existing measurements for sequential echocardiographic follow-up. Methods: Patients (n = 100; 81 men; age 56 +/- 14 years) presenting for LA evaluation were studied with M-mode (MM) echocardiography, 2-dimensional (2D) echocardiography, and 3DE. Test-retest variation was assessed by a complete restudy by a separate sonographer within 1 hour without alteration of hemodynamics or therapy. In all, 20 patients were studied for interobserver and intraobserver variation. LAVs were calculated by using the M-mode diameter and the planimetered atrial area in the apical 4-chamber view to calculate an assumed sphere, as well as by the prolate ellipsoid, Simpson's biplane, and biplane area-length methods. All were compared with 3DE. Results: The average LAV was 72 +/- 27 mL by 3DE. There was significant underestimation of LAV by M-mode (35 +/- 20 mL, r = 0.66, P < .01). The 3DE and various 2D echocardiographic techniques were well correlated: LA planimetry (85 +/- 38 mL, r = 0.77, P < .01), prolate ellipsoid (73 +/- 36 mL, r = 0.73, P = .04), area-length (64 +/- 30 mL, r = 0.74, P < .01), and Simpson's biplane (69 +/- 31 mL, r = 0.78, P = .06). Test-retest variation for 3DE was most favorable (r = 0.98, P < .01), with the prolate ellipsoid method showing the most variation. Interobserver agreement between measurements was best for 3DE (r = 0.99, P < .01), with M-mode the worst (r = 0.89, P < .01). Intraobserver results were similar to interobserver, with the best correlation for 3DE (r = 0.99, P < .01) and LA planimetry the worst (r = 0.91, P < .01). Conclusions: The 2D measurements correlate closely with 3DE. Follow-up assessment in daily practice appears feasible and reliable with both 2D and 3D approaches.
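For reference, the geometric methods named above reduce to standard formulas; a minimal sketch follows (the formulas are the conventional echocardiographic ones, the function names are ours, and inputs in cm give volumes in ml since 1 cm^3 = 1 ml):

```python
import numpy as np

def lav_sphere_from_diameter(d):
    """Assumed sphere from the M-mode LA diameter d (cm); returns ml."""
    return np.pi / 6 * d ** 3

def lav_sphere_from_area(a4c):
    """Assumed sphere from the planimetered apical 4-chamber area (cm^2)."""
    r = np.sqrt(a4c / np.pi)
    return 4 / 3 * np.pi * r ** 3

def lav_prolate_ellipsoid(l, d1, d2):
    """Prolate ellipsoid from the long axis l and two minor diameters (cm)."""
    return np.pi / 6 * l * d1 * d2

def lav_area_length(a4c, a2c, l):
    """Biplane area-length from 4- and 2-chamber areas and the long-axis length l."""
    return 8 * a4c * a2c / (3 * np.pi * l)

def lav_simpson_biplane(d4c, d2c, l):
    """Simpson's biplane: orthogonal disk diameters from the two views, length l."""
    d4c, d2c = np.asarray(d4c), np.asarray(d2c)
    return np.pi / 4 * np.sum(d4c * d2c) * (l / len(d4c))

print(lav_prolate_ellipsoid(5.0, 3.8, 3.6))  # ~35.8 ml for illustrative dimensions
```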
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration techniques. Recent software and hardware developments have greatly eased the burden of real-time image analysis, leading to portable systems built from cheap PC hardware and software exploiting the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognizing hand gestures, using efficient coding and MMX acceleration to achieve real-time performance on low-cost hardware.
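MMX packs eight 8-bit pixels into a 64-bit register and processes them in a single instruction; the NumPy sketch below imitates that whole-frame, branch-free style for a simple skin-tone segmentation step of the kind a gesture recognizer might use. The thresholds and the segmentation rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def segment_hand(frame_rgb, r_min=95, g_max=160, b_max=150):
    """Binary mask of candidate hand pixels from an HxWx3 uint8 frame."""
    r = frame_rgb[..., 0].astype(np.int16)
    g = frame_rgb[..., 1].astype(np.int16)
    b = frame_rgb[..., 2].astype(np.int16)
    # Branch-free comparisons over the whole frame, as packed SIMD code would do.
    mask = (r > r_min) & (g < g_max) & (b < b_max) & (r > g) & (r > b)
    return (mask * 255).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
print(segment_hand(frame).mean())  # fraction of pixels flagged, scaled by 255
```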
Abstract:
The challenge of Research, Development and Extension (R,D&E) is to apply agricultural science to serve the real needs of production systems. The ideal is to have community partnerships involving a variety of stakeholders with equal representation and a shared role in the design of R,D&E actions. R,D&E policy in Australia stresses the participation of industry in new projects. The Dairy Research and Development Corporation (DRDC) in Australia and the Brazilian Agricultural Research Corporation for Dairy (Embrapa Dairy) have developed initiatives to identify priorities for R,D&E design with participation of the industry. However, weaknesses in the methods have been identified. The present study describes the results of a strategy to involve a broader range of stakeholders in the identification of regional dairy industry needs. The findings identify communication, finance, and marketing as the three major priorities across the three study regions, meaning that the industry's primary needs are not in production technologies. This apparently contradicts what some stakeholders considered valuable for dairy farms, namely pasture, genetics, and nutrition technologies. The results reflect the large amount of research activity into production technology and the relative success of R,D&E. However, it is necessary to consider issues beyond production technologies before developing R,D&E projects or presenting technologies.
Abstract:
Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments, such as phase noise, are the key bottlenecks in next generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches operate on well-designed sum-product algorithms, which involve calculating probabilistic messages iteratively passed between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity because its probabilistic model may involve continuous random variables. The detailed knowledge of channel statistics required by these algorithms makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the considered modulation/coding scheme, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of slight performance degradation. With the aim of improving the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, both exploiting reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI), followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operational principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
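To make the separate APP-based approach concrete, here is a minimal sketch of a soft decision-directed phase estimator: decoder APPs yield soft symbol estimates, and the phase estimate is the angle that best aligns them with the received samples. The QPSK alphabet, the single-block constant-phase model, and all names are illustrative simplifications, not the thesis's algorithms.

```python
import numpy as np

QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy alphabet

def estimate_phase(rx, app):
    """rx: N received samples; app: Nx4 symbol posteriors from the decoder."""
    soft_syms = app @ QPSK                   # expected symbol under the APPs
    # Correlate and take the angle: the ML estimate of a common phase offset.
    return np.angle(np.sum(rx * np.conj(soft_syms)))

# Toy usage: a 0.3 rad rotation recovered using degenerate (one-hot) APPs.
rng = np.random.default_rng(1)
sym_idx = rng.integers(4, size=200)
noise = 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
rx = QPSK[sym_idx] * np.exp(1j * 0.3) + noise
app = np.eye(4)[sym_idx]                     # perfectly confident posteriors
print(estimate_phase(rx, app))               # ~0.3
```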
Abstract:
This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
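For orientation, the forward pass such hardware evaluates in parallel is a small multi-layer perceptron; a sketch follows. The layer sizes, tanh nonlinearity, and random weights are assumptions for illustration; a trained network would replace the random weights.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(0, 0.1, size=(16, 32))   # 32 magnetic signals -> 16 hidden units
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, size=(4, 16))    # 16 hidden units -> 4 boundary parameters
b2 = np.zeros(4)

def mlp(x):
    """One forward pass; the hybrid digital/analogue circuit computes all units at once."""
    h = np.tanh(W1 @ x + b1)             # nonlinear hidden stage
    return W2 @ h + b2                   # linear output stage

print(mlp(rng.normal(size=32)))          # e.g. position and shape estimates
```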
Abstract:
This paper introduces responsive systems: systems that are real-time, event-based, or time-dependent. There are a number of trends that are accelerating the adoption of responsive systems: timeliness requirements for business information systems are becoming more prevalent, embedded systems are increasingly integrated into soft real-time command-and-control systems, improved message-oriented middleware is facilitating growth in event-processing applications, and advances in service-oriented and component-based techniques are lowering the costs of developing and deploying responsive applications. The use of responsive systems is illustrated here in two application areas: the defense industry and online gaming. The papers in this special issue of the IBM Systems Journal are then introduced. The paper concludes with a discussion of the key remaining challenges in this area and ideas for further work.
Abstract:
This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represent the principal motivation behind the material in this work. The underlying system is the human brain and the generative process of the data is based on modern electromagnetic neuroimaging methods. In this thesis, the underlying functional brain mechanisms are described using the recent mathematical formalism of dynamical systems on complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerning the way the neural elements communicate with each other using the brain's anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which refers to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply, and discuss novel functional connectivity algorithms, which are designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow for indirect assessment of synchronisation in the local network from a single time series. This approach is useful for inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The thesis discussion is developed alongside examples of synthetic and experimental real data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies. They are helpful for testing the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterize underlying spatiotemporal dynamics before, during, and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs.
The methodology is also applied to an MEG dataset containing healthy, Parkinson's, and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
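As a concrete reference point for the directionality analysis, here is a minimal linear Granger causality sketch using ordinary least-squares autoregressive fits. The thesis's method extends this with machine learning, so this is the baseline idea rather than the thesis algorithm; all names are ours.

```python
import numpy as np

def _residual_var(A, tgt):
    coef, *_ = np.linalg.lstsq(A, tgt, rcond=None)
    return np.var(tgt - A @ coef)

def granger_xy(x, y, p=5):
    """Log ratio of residual variances: > 0 suggests past x helps predict y."""
    n = len(y)
    A_own = np.array([y[t - p:t] for t in range(p, n)])                 # y's own past
    A_full = np.array([np.r_[y[t - p:t], x[t - p:t]] for t in range(p, n)])
    return np.log(_residual_var(A_own, y[p:]) / _residual_var(A_full, y[p:]))

# Toy check: y is driven by x lagged two steps, so the x -> y score dominates.
rng = np.random.default_rng(3)
x = rng.normal(size=2000)
y = 0.8 * np.roll(x, 2) + 0.5 * rng.normal(size=2000)
print(granger_xy(x, y), granger_xy(y, x))
```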
Abstract:
We report on teaching Information Systems Analysis (ISA) in a way that takes the classroom into the real world to enrich students' understanding of the broader role of being an IS professional. Through exposure to less controllable and more uncomfortable issues (e.g., client deadlines, unclear scope, client expectations, unhelpful colleagues, and complexity about what the problem is, never mind the solution), we aim to better prepare students to respond to the complex issues surrounding deployment of systems analysis methodologies in the real world. In this paper we provide enough detail on what these classes involve to allow a reader to replicate appealing elements in their own teaching. This paper is a reflection on integrating the real world into the teaching of ISA – a reflection from the standpoint of students who face an unstructured and complex world, and of lecturers who aim to prepare students to hit the ground running when they encounter that world.