902 results for Computational time
Abstract:
In this work, we present a multi-camera surveillance system based on self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and motion analysis and tracking. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems that monitor and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
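To make the parallel, multi-source design concrete, the following minimal Python sketch dispatches one processing job per camera stream; the function names (process_frame, process_cycle) are hypothetical, and the system described above runs its pipelines on GPUs rather than CPU thread pools.

```python
# Hypothetical sketch: dispatching frames from several cameras to parallel workers.
# Function and camera names are illustrative, not part of the described system.
from concurrent.futures import ThreadPoolExecutor

def process_frame(camera_id, frame):
    """Placeholder for the per-camera pipeline (segmentation, representation,
    motion analysis) that the described system would run on a GPU."""
    return {"camera": camera_id, "events": []}  # dummy result

def process_cycle(frames_by_camera):
    # One worker per camera so all video streams are handled in parallel.
    with ThreadPoolExecutor(max_workers=len(frames_by_camera)) as pool:
        futures = {cam: pool.submit(process_frame, cam, frame)
                   for cam, frame in frames_by_camera.items()}
        return {cam: fut.result() for cam, fut in futures.items()}

print(process_cycle({"cam_entrance": None, "cam_lobby": None}))
```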
Abstract:
The development of applications and services for mobile systems must cope with a wide range of devices with very heterogeneous capabilities, whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise-computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, imprecise-computation scheduling of the embedded system's workload is used to decide when to move computation to the cloud, according to the priority and response time of the tasks to be executed, and thereby meet the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is also illustrated. An application example, in which this technique is tested in execution contexts with heterogeneous workloads to check the validity of the proposed model, is described.
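As a rough illustration of the offloading decision described above, the sketch below offloads a task to the cloud only if its priority allows it and the estimated network delay plus cloud execution time still meets the deadline; all names and numbers are assumptions, not the paper's scheduling method.

```python
# Hedged sketch of the offloading idea: keep mandatory work local and move optional
# (imprecise) parts to the cloud only when the estimated remote response time still
# meets the task's deadline. Names, priorities and times are illustrative.
def should_offload(task, est_network_delay, est_cloud_time, est_local_time):
    if task["priority"] <= 1:          # mandatory / highest-priority parts stay local
        return False
    remote_response = est_network_delay + est_cloud_time
    # Offload when the cloud finishes sooner and the deadline is still met.
    return remote_response < est_local_time and remote_response <= task["deadline"]

task = {"name": "image_filter", "deadline": 0.200, "priority": 2}  # times in seconds
print(should_offload(task, est_network_delay=0.040,
                     est_cloud_time=0.050, est_local_time=0.180))
```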
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease in which the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests because of the cost and time involved in having an expert cardiologist interpret the results of these tests. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia, as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges we address include missing ECG waves in one or more leads and the high dimensionality of the feature set, which we handle by proposing imputation and feature-selection methods. We develop heartbeat classifiers using Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus, the two contributions of this thesis are the use of computational and statistical methods to uncover shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
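The proportion-based classification of full 12-lead ECGs can be sketched as follows; the feature extraction, threshold and toy data are placeholders rather than the dissertation's actual pipeline.

```python
# Minimal sketch of the "proportion of HCM heartbeats" idea: classify every beat of a
# recording with a heartbeat-level model, then label the whole ECG as HCM if the HCM
# fraction exceeds a threshold. Threshold and features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_ecg(beat_features, beat_clf, threshold=0.5):
    """beat_features: (n_beats, n_features) array for one patient."""
    beat_labels = beat_clf.predict(beat_features)   # 1 = HCM beat, 0 = non-HCM beat
    hcm_fraction = np.mean(beat_labels == 1)
    return "HCM" if hcm_fraction >= threshold else "non-HCM"

# Toy usage with random data standing in for real ECG-derived beat features.
rng = np.random.default_rng(0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(rng.normal(size=(200, 10)), rng.integers(0, 2, size=200))
print(classify_ecg(rng.normal(size=(40, 10)), clf))
```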
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessing stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantifying neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessing sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessing neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM, organized around two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities in subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second objective is to address the emerging problem of long assessment time, which can lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on shortening the two longest individual KINARM tasks. Both strategies are shown to provide significant time savings, ranging from 30% to 90% for task selection and 50% for individual task reductions, thereby establishing a framework for reducing assessment time on a broader set of KINARM tasks. All in all, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
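A minimal sketch of the hierarchical task-selection strategy might look like the following; the task names, scores and follow-up rule are illustrative assumptions, not the thesis' actual KINARM battery.

```python
# Hedged sketch: run an early task first and only administer the remaining tasks if
# its result calls for them, saving assessment time otherwise.
def run_assessment(subject, early_task, later_tasks, needs_follow_up):
    results = {"early": early_task(subject)}
    # Only administer the remaining tasks if the early result calls for them.
    if needs_follow_up(results["early"]):
        for task in later_tasks:
            results[task.__name__] = task(subject)
    return results

def early_reaching_task(subject):      # placeholder scoring functions
    return 0.7
def later_object_hit_task(subject):
    return 0.4

print(run_assessment("subject01", early_reaching_task, [later_object_hit_task],
                     needs_follow_up=lambda score: score > 0.5))
```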
Abstract:
"UILU-ENG 79 1716."
Abstract:
The paper presents a computational system based upon formal principles to run spatial models for environmental processes. The simulator is named SimuMap because it is typically used to simulate spatial processes over a mapped representation of terrain. A model is formally represented in SimuMap as a set of coupled sub-models. The paper considers the situation where spatial processes operate at different time levels but must still be integrated. Such a situation commonly occurs in watershed hydrology, where overland flow and stream channel flow have very different flow rates but are highly related because they are subject to the same terrain runoff processes. SimuMap is able to run a network of sub-models that express different time-space derivatives for water flow processes. Sub-models may be coded generically with a map algebra programming language that uses a surface data model. To address the problem of differing time levels in simulation, the paper: (i) reviews general approaches for numerical solvers, (ii) considers the constraints that need to be enforced to use more adaptive time steps in simulations specified in discrete time, and (iii) examines the scaling of transfer rates in equations that use different time bases for their time-space derivatives. A multistep scheme is proposed for SimuMap and is presented along with a description of its visual programming interface, its modelling formalisms and future plans. (C) 2003 Elsevier Ltd. All rights reserved.
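A minimal sketch of coupling two processes with different time bases by subcycling, in the spirit of the multistep scheme mentioned above; the variable names and rates are illustrative and do not come from SimuMap.

```python
# Illustrative sketch (not SimuMap code): the fast process (channel flow) takes several
# small steps for every coarse step of the slow process (overland flow), and each
# transfer rate is scaled by its own step length before being applied.
def advance(overland, channel, k_overland, k_channel, dt_coarse, substeps):
    inflow = k_overland * overland * dt_coarse        # rate per hour * hours
    overland -= inflow
    channel += inflow
    dt_fine = dt_coarse / substeps
    for _ in range(substeps):                         # fast process is subcycled
        channel -= k_channel * channel * dt_fine
    return overland, channel

print(advance(overland=100.0, channel=10.0,
              k_overland=0.05, k_channel=0.8, dt_coarse=1.0, substeps=10))
```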
Abstract:
Changes in arterial distensibility have been widely used to identify the presence of cardiovascular abnormalities such as hypertension. Pulse wave velocity (PWV) has been shown to be related to arterial distensibility. However, the lack of suitable techniques to measure PWV non-intrusively has impeded its clinical usefulness. Pulse transit time (PTT) is a noninvasive technique derived from the principle of PWV. PTT has demonstrated its value in cardiovascular and cardiorespiratory studies in adults; however, no known study has been conducted to understand the suitability and utility of PTT for estimating PWV in children. Two computational methods were used to derive PWV from PTT values obtained at the finger and toe of 23 normotensive Caucasian children (19 males, aged 5-12 years). Furthermore, the effects of adopting different postures on the PWV derivations were investigated. Statistical analyses were performed in comparison with two previous PWV studies conducted on children. Results revealed that PWV derived from the upper limb correlated significantly (P
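The underlying relation exploited by PTT-based estimation is simply velocity = path length / transit time; the toy calculation below illustrates that arithmetic and is not one of the two specific computational methods evaluated in the study.

```python
# Hedged illustration of the PWV/PTT principle; the path length and transit time
# values are made up for the example.
def pulse_wave_velocity(path_length_m, ptt_s):
    """Estimate PWV (m/s) from a pulse path length and pulse transit time."""
    return path_length_m / ptt_s

# Example: a 0.60 m heart-to-toe path traversed in 0.12 s gives 5.0 m/s.
print(pulse_wave_velocity(0.60, 0.12))
```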
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques are examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also discussed. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed. © 2004 Elsevier Inc. All rights reserved.
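For orientation only, the following normalized 1D Yee-style staggered leapfrog update illustrates the kind of time integration analysed above; the paper's cell-centred finite-volume formulation differs from this toy example, and the grid size, Courant number and source are arbitrary.

```python
# Toy 1D staggered leapfrog (Yee) update in normalized units: H is advanced half a
# step between E nodes, then E is advanced using the updated H.
import numpy as np

nz, steps = 200, 400
ez = np.zeros(nz)        # electric field at cell edges
hy = np.zeros(nz - 1)    # magnetic field between edges (staggered)
c = 0.5                  # Courant number, <= 1 for stability in 1D

for n in range(steps):
    hy += c * (ez[1:] - ez[:-1])
    ez[1:-1] += c * (hy[1:] - hy[:-1])
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

print(float(np.max(np.abs(ez))))
```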
Abstract:
This paper presents a finite-difference time-domain (FDTD) simulator for electromagnetic analysis and design applications in MRI. It is intended to be a complete FDTD model of an MRI system, including all RF and low-frequency field-generating units and electrical models of the patient. The program has been constructed in an object-oriented framework. The design procedure is detailed, and the numerical solver has been verified against analytical solutions for simple cases and applied to various field calculation problems. In particular, the simulator is demonstrated for inverse RF coil design, optimized source profile generation, and parallel imaging in high-frequency situations. The examples show new developments enabled by the simulator and demonstrate that the proposed FDTD framework can be used to analyze large-scale computational electromagnetic problems in modern MRI engineering. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
A one-dimensional computational model of pilling of a fibre assembly has been created. The model follows a set of individual fibres as free ends and loops appear as fuzz, are progressively withdrawn from the body of the assembly, and entangle to form pills, which eventually break off or are pulled out. The time dependence of the computation is given by ticks, which correspond to cycles of a wear and laundering process. The movement of the fibres is treated as a reptation process. A set of standard values is used as inputs to the computation. Predictions are given of the change, over a number of cycles, in the mass of fuzz, the mass of pills, and the mass removed from the assembly. Changes in the standard values allow sensitivity studies to be carried out.
Abstract:
Collaborative filtering is regarded as one of the most promising recommendation algorithms. Item-based approaches to collaborative filtering identify the similarity between two items by comparing users' ratings of them. In these approaches, ratings produced at different times are weighted equally; that is, changes in user purchase interest are not taken into consideration. For example, an item that was rated recently by a user should have a bigger impact on the prediction of future user behaviour than an item that was rated a long time ago. In this paper, we present a novel algorithm to compute time weights for different items such that older data receive a decreasing weight. Moreover, users' purchase habits vary, and even the same user has quite different attitudes towards different items. Our proposed algorithm therefore uses clustering to discriminate between different kinds of items; within each item cluster, we trace each user's change in purchase interest and introduce a personalized decay factor according to the user's own purchase behaviour. Empirical studies have shown that the new algorithm substantially improves the precision of item-based collaborative filtering without introducing higher-order computational complexity.
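A minimal sketch of the time-weighting idea, assuming an exponential decay applied to ratings before computing an item-item cosine similarity; in the proposed algorithm the decay factor is personalized per user and per item cluster, whereas here it is a single illustrative half-life.

```python
# Hedged sketch: older ratings contribute less to the item-item similarity.
import math, time

def time_weight(rating_timestamp, now=None, half_life_days=90.0):
    now = now or time.time()
    age_days = (now - rating_timestamp) / 86400.0
    return 0.5 ** (age_days / half_life_days)      # halves every `half_life_days`

def decayed(ratings):
    """ratings: dict user -> (rating, timestamp); returns decayed rating per user."""
    return {u: time_weight(t) * r for u, (r, t) in ratings.items()}

def weighted_similarity(ratings_a, ratings_b):
    a, b = decayed(ratings_a), decayed(ratings_b)
    num = sum(a[u] * b[u] for u in a.keys() & b.keys())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

now = time.time()
item_a = {"u1": (5, now - 5 * 86400), "u2": (3, now - 400 * 86400)}
item_b = {"u1": (4, now - 10 * 86400), "u2": (4, now - 390 * 86400)}
print(weighted_similarity(item_a, item_b))
```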
Abstract:
The ability to grow microscopic spherical birefringent crystals of vaterite, a calcium carbonate mineral, has allowed the development of an optical microrheometer based on optical tweezers. However, since these crystals are birefringent, and worse, are expected to have non-uniform birefringence, computational modeling of the microrheometer is a highly challenging task. Modeling the microrheometer - and optical tweezers in general - typically requires large numbers of repeated calculations for the same trapped particle. This places strong demands on the efficiency of the computational methods used. While our usual method of choice for computational modelling of optical tweezers - the T-matrix method - meets this requirement of efficiency, it is restricted to homogeneous isotropic particles. General methods that can model complex structures such as the vaterite particles, for example finite-difference time-domain (FDTD) or finite-difference frequency-domain (FDFD) methods, are inefficient. Therefore, we have developed a hybrid FDFD/T-matrix method that combines the generality of volume-discretisation methods such as FDFD with the efficiency of the T-matrix method. We have used this hybrid method to calculate optical forces and torques on model vaterite spheres in optical traps. We present and compare the results of computational modelling and experimental measurements.
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration techniques. Recent software and hardware developments have greatly eased the development burden of real-time image analysis, leading to portable systems built on cheap PC hardware and software exploiting the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognizing hand gestures, using efficient coding and MMX acceleration to achieve real-time performance on low-cost hardware.
Abstract:
We propose a method for the timing analysis of concurrent real-time programs with hard deadlines. We divide the analysis into a machine-independent and a machine-dependent task. The latter takes into account the execution times of the program on a particular machine. Therefore, our goal is to make the machine-dependent phase of the analysis as simple as possible. We succeed in the sense that the machine-dependent phase remains the same as in the analysis of sequential programs. We shift the complexity introduced by concurrency completely to the machine-independent phase.