899 results for Real-Time Decision Support System
Abstract:
As part of the AMAZE-08 campaign during the wet season in the rainforest of central Amazonia, an ultraviolet aerodynamic particle sizer (UV-APS) was operated for continuous measurements of fluorescent biological aerosol particles (FBAP). In the coarse particle size range (> 1 µm), the campaign median and quartiles of FBAP number and mass concentration were 7.3 × 10⁴ m⁻³ (4.0-13.2 × 10⁴ m⁻³) and 0.72 µg m⁻³ (0.42-1.19 µg m⁻³), respectively, accounting for 24% (11-41%) of total particle number and 47% (25-65%) of total particle mass. During the five-week campaign in February-March 2008, the concentration of coarse-mode Saharan dust particles was highly variable. In contrast, FBAP concentrations remained fairly constant over the course of weeks and had a consistent daily pattern, peaking several hours before sunrise, suggesting that the observed FBAP were dominated by nocturnal spore emission. This conclusion was supported by the consistent FBAP number size distribution peaking at 2.3 µm, also attributed to fungal spores and mixed biological particles by scanning electron microscopy (SEM), light microscopy and biochemical staining. A second primary biological aerosol particle (PBAP) mode between 0.5 and 1.0 µm was also observed by SEM, but exhibited little fluorescence and no true fungal staining. This mode may have consisted of single bacterial cells, brochosomes, various fragments of biological material, and small Chromalveolata (Chromista) spores. Particles liquid-coated with mixed organic-inorganic material constituted a large fraction of observations, and these coatings contained salts likely of primary biological origin. We provide key support for the suggestion that real-time laser-induced fluorescence (LIF) techniques using 355 nm excitation provide size-resolved concentrations of FBAP as a lower limit for the atmospheric abundance of biological particles in a pristine environment. We also show some limitations of using the instrument for ambient monitoring of weakly fluorescent particles < 2 µm. Our measurements confirm that primary biological particles, fungal spores in particular, are an important fraction of supermicron aerosol in the Amazon and may contribute significantly to hydrological cycling, especially when coated with mixed inorganic material.
Abstract:
The main objective of this work is to present an efficient method for phasor estimation based on a compact Genetic Algorithm (cGA) implemented in a Field Programmable Gate Array (FPGA). To validate the proposed method, an Electrical Power System (EPS) simulated in the Alternative Transients Program (ATP) provides the data used by the cGA; these data are as close as possible to the actual data provided by an EPS. Real-life situations such as islanding, sudden load increase and permanent faults were considered. The implementation aims to take advantage of the inherent parallelism of Genetic Algorithms in a compact and optimized way, making them an attractive option for practical application to the real-time estimation performed by Phasor Measurement Units (PMUs).
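As a rough illustration of the technique named here, the following Python sketch estimates a phasor's amplitude and phase with a compact GA, in which a probability vector replaces the population (the property that makes cGAs cheap to realize in FPGA logic). All signal parameters, encodings and loop constants are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical test signal: a 60 Hz phasor sampled at 16 samples per cycle.
FS, F0, N = 960.0, 60.0, 16
t = np.arange(N) / FS
samples = 1.2 * np.cos(2 * np.pi * F0 * t + 0.7)  # true amplitude 1.2, phase 0.7

BITS = 12  # assumed bits per parameter (amplitude, phase)

def decode(bits):
    """Map two 12-bit fields to amplitude in [0, 2] and phase in [-pi, pi]."""
    a = int("".join(map(str, bits[:BITS])), 2) / (2**BITS - 1) * 2.0
    p = int("".join(map(str, bits[BITS:])), 2) / (2**BITS - 1) * 2 * np.pi - np.pi
    return a, p

def fitness(bits):
    """Negative squared error between the candidate phasor and the samples."""
    a, p = decode(bits)
    return -np.sum((samples - a * np.cos(2 * np.pi * F0 * t + p)) ** 2)

L, POP = 2 * BITS, 64        # chromosome length, simulated population size
prob = np.full(L, 0.5)       # the probability vector IS the population
rng = np.random.default_rng(0)
for _ in range(5000):
    x = (rng.random(L) < prob).astype(int)   # sample two competitors
    y = (rng.random(L) < prob).astype(int)
    win, lose = (x, y) if fitness(x) >= fitness(y) else (y, x)
    prob += (win - lose) / POP               # shift toward the winner
    prob = np.clip(prob, 1 / POP, 1 - 1 / POP)

print(decode((prob > 0.5).astype(int)))      # should approach (1.2, 0.7)
```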
Abstract:
Motion control is a sub-field of automation, in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles, in networks adopting an event-triggered communication system are shown. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is extended to cope with the other configurations.
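As a hypothetical, stripped-down sketch of the general idea (a slave reconstructing a master profile from irregular, event-triggered updates with no shared clock, using a software phase-locked loop; the gains, timing and master profile are invented for illustration and are not the thesis' actual design):

```python
import random

KP, KI = 0.8, 0.2        # assumed PLL loop gains (proportional, integral)
DT = 0.001               # slave's local control period (s)

pos_est, vel_est = 0.0, 0.0
next_event = 0.0
for k in range(5000):
    t = k * DT
    if t >= next_event:                 # event-triggered master update
        master_pos = 0.5 * t            # assumed profile: constant velocity
        err = master_pos - pos_est      # "phase" error seen by the loop
        vel_est += KI * err             # integral path locks onto velocity
        pos_est += KP * err             # proportional path pulls position in
        next_event = t + random.uniform(0.005, 0.02)  # irregular arrivals
    pos_est += vel_est * DT             # free-run between events
```

Between events the slave free-runs on its velocity estimate, so the reconstructed profile stays smooth even though update arrival times carry no global timestamp.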
Abstract:
In recent years of research, I have focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients.

I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model performed in a clinically realistic manner given inputs from published data on traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal fluid outflow). Based on the results, we believe the model would be useful to teach the complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could also be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres).

II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory, with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays a major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved but also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions.

III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and intracranial pressure dynamics (the same model as in point I). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to the vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Then, patients with severe head injury were simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings.

IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, the simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments. In the latter case they suggest experiments to be performed to gather the missing data.
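As an illustration of the kind of published relationship such models embed, one classical (Marmarou-type) mono-exponential form of the craniospinal pressure-volume curve, not necessarily the exact formulation used in these models, is:

```latex
% Marmarou-type mono-exponential craniospinal pressure-volume relationship:
% intracranial pressure rises exponentially with added volume, so the
% elastance dP/dV grows in proportion to the pressure itself.
\[
  P_\mathrm{ic}(\Delta V) = P_0 \, e^{k_E\, \Delta V},
  \qquad
  \frac{dP_\mathrm{ic}}{dV} = k_E \, P_\mathrm{ic}
\]
```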
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential for shattering the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of these constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards (as opposed to becoming so only when the system is final) and is more easily amenable to advanced timing analysis by construction, regardless of the system's scale and complexity.
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework, which is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric multiprocessors, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
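The abstract gives no implementation details, so the following Python sketch is only a guess at what a pointer-based addressable binary heap for timer management might look like: every timer handle tracks its current node, so cancelling an arbitrary high-resolution timer costs O(log n) with no search.

```python
class _Node:
    __slots__ = ("item", "parent", "left", "right")
    def __init__(self, item):
        self.item, self.parent, self.left, self.right = item, None, None, None

class Timer:
    """Handle given to the caller; always points at its current heap node."""
    __slots__ = ("deadline", "callback", "_node")
    def __init__(self, deadline, callback):
        self.deadline, self.callback, self._node = deadline, callback, None

class ABH:
    """Pointer-based min-heap on deadlines: O(log n) insert, extract-min,
    and cancellation of an arbitrary timer through its handle."""
    def __init__(self):
        self._root, self._n = None, 0

    def _node_at(self, i):
        # Follow the bits of the 1-based index i from the root, exactly as
        # in the implicit array layout of a binary heap.
        node = self._root
        for b in bin(i)[3:]:
            node = node.left if b == "0" else node.right
        return node

    def _swap(self, a, b):
        # Exchange payloads and fix backpointers, so handles stay valid.
        a.item, b.item = b.item, a.item
        a.item._node, b.item._node = a, b

    def _sift_up(self, n):
        while n.parent and n.item.deadline < n.parent.item.deadline:
            self._swap(n, n.parent)
            n = n.parent

    def _sift_down(self, n):
        while True:
            c = n.left
            if c is None:
                return
            if n.right and n.right.item.deadline < c.item.deadline:
                c = n.right
            if c.item.deadline >= n.item.deadline:
                return
            self._swap(n, c)
            n = c

    def insert(self, timer):
        node = _Node(timer)
        timer._node = node
        self._n += 1
        if self._n == 1:
            self._root = node
        else:
            p = self._node_at(self._n // 2)   # parent of slot n is slot n//2
            node.parent = p
            if self._n % 2 == 0:
                p.left = node
            else:
                p.right = node
            self._sift_up(node)

    def _detach_last(self):
        last = self._node_at(self._n)
        self._n -= 1
        if last.parent:
            if last.parent.left is last:
                last.parent.left = None
            else:
                last.parent.right = None
        else:
            self._root = None
        return last

    def cancel(self, timer):
        node = timer._node                    # O(1) lookup via the handle
        last = self._detach_last()
        if last is not node:
            self._swap(node, last)            # move last element into the hole
            self._sift_up(node)
            self._sift_down(node)

    def pop_min(self):
        top = self._root.item
        self.cancel(top)
        return top
```

In the tick-less scenario described above, pop_min would drive the programming of a one-shot hardware timer, while cancel serves timers that are rearmed or destroyed before expiry.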
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, with unknown calibration, and are not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast, in real time, the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and the predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit the model parameters. Model validation for the eastern United States shows considerable improvement of our fully inferential approach compared with the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain spatially varying uncertainty associated with numerical model output. We show how we can learn about such uncertainty through suitable stochastic data fusion modelling using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
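For concreteness, the 8-hour average defined above is straightforward arithmetic; a small sketch with hypothetical hourly values (in ppb, not taken from the paper):

```python
def current_8h_ozone(obs_last5, fcst_next3):
    """8-h average as defined above: 4 past hourly observations,
    the current hour, and 3 forecast hours."""
    assert len(obs_last5) == 5 and len(fcst_next3) == 3
    return (sum(obs_last5) + sum(fcst_next3)) / 8.0

# Hypothetical hourly ozone values in ppb:
print(current_8h_ozone([48, 52, 55, 60, 63], [61, 58, 54]))  # 56.375
```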
Abstract:
The aim of this dissertation is the experimental characterization and quantitative description of the hybridization of complementary nucleic acid strands with surface-bound capture molecules, for the development of integrated biosensors. In contrast to solution-based methods, microarray substrates allow many nucleic acid combinations to be investigated in parallel. As a biologically relevant evaluation system, the actin gene, universally expressed in eukaryotes, from different plant species was used. This test system makes it possible to characterize closely related plant species on the basis of small differences in the gene sequence (SNPs). Building on this well-studied model of a housekeeping gene, a comprehensive microarray system consisting of short and long oligonucleotides (with incorporated LNA molecules), cDNAs, and DNA and RNA targets was realized. This allowed the development of a test system optimized for online measurement, with high signal intensities. Based on the results, the entire signal path from nucleic acid concentration to digital value was modelled. The insights into the kinetics and thermodynamics of hybridization gained from the development work and the experiments are summarized in three publications, which form the backbone of this dissertation. The first publication describes the improvement in the reproducibility and specificity of microarray results achieved by online measurement of kinetics and thermodynamics, compared with endpoint-based measurements on standard microarrays. To evaluate the huge amounts of data, two algorithms were developed: a reaction-kinetic modelling of the isotherms and a description of the melting transition based on Fermi-Dirac statistics. These algorithms are described in the second publication. By realizing identical sequences in the chemically different nucleic acids (DNA, RNA and LNA), it is possible to investigate defined differences in the conformation of the ribose ring and in the C5 methyl group of the pyrimidines. The competitive interaction of these different nucleic acids of identical sequence, and its effects on kinetics and thermodynamics, is the subject of the third publication. Beyond the molecular-biological and technological developments in sensing hybridization reactions of surface-bound nucleic acid molecules, the automated evaluation and modelling of the resulting data volumes, and the associated improved quantitative description of the kinetics and thermodynamics of these reactions, the results contribute to a better understanding of the physico-chemical structure of this most elementary biological molecule and of its still incompletely understood specificity.
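To make the two algorithms concrete, plausible textbook forms of the two descriptions mentioned (not necessarily the publications' exact equations) are a Langmuir-type hybridization kinetics for the isotherms and a Fermi-Dirac-shaped melting transition:

```latex
% Langmuir-type surface hybridization kinetics: theta is the fractional
% occupancy of surface-bound capture probes at target concentration C,
% with association and dissociation rate constants k_a and k_d.
\[
  \frac{d\theta}{dt} = k_a\, C\,(1-\theta) - k_d\,\theta
\]
% Fermi-Dirac-shaped melting transition: occupancy versus temperature T,
% with melting temperature T_m and transition width w.
\[
  \theta(T) = \frac{1}{1 + e^{(T - T_m)/w}}
\]
```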
Abstract:
BACKGROUND: The adequacy of thromboprophylaxis prescriptions in acutely ill hospitalized medical patients needs improvement. OBJECTIVE: To prospectively assess the effect of various clinical decision support systems (CDSS) on thromboprophylaxis adequacy, with the aim of increasing the use of explicit criteria for thromboprophylaxis prescription in nine Swiss medical services. METHODS: We randomly assigned medical services to a pocket digital assistant program (PDA), pocket cards (PC), or no CDSS (controls). In centers using an electronic chart, an e-alert system (eAlerts) was developed. After 4 months, we compared post-CDSS with baseline thromboprophylaxis adequacy for the various CDSS and control groups. RESULTS: Overall, 1085 patients were included (395 controls, 196 PC, 168 PDA, 326 eAlerts), 651 pre- and 434 post-CDSS implementation: 472 (43.5%) presented a risk of VTE justifying thromboprophylaxis (31.8% pre, 61.1% post) and 556 (51.2%) received thromboprophylaxis (54.2% pre, 46.8% post). The overall adequacy (percentage of patients with an adequate prescription) pre- and post-CDSS implementation was 56.2 and 50.7 for controls (P = 0.29), 67.3 and 45.3 for PC (P = 0.002), 66.0 and 64.9 for PDA (P = 0.99), and 50.5 and 56.2 for eAlerts (P = 0.37), respectively. eAlerts limited overprescription (56% pre, 31% post, P = 0.01). CONCLUSION: While pocket cards and handhelds did not improve thromboprophylaxis adequacy, eAlerts had a modest effect, particularly on the reduction of overprescription. This effect only partially contributes to the improvement of patient safety, and more work is needed towards institution-tailored tools.
Abstract:
ROTEM(®) is considered a helpful point-of-care device for monitoring blood coagulation. Centrally performed analysis is desirable, but rapid transport of blood samples and real-time transmission of the graphic results are important prerequisites. The effect of sample transport through a pneumatic tube system on ROTEM(®) results is unknown. The aims of the present work were (i) to determine the influence of blood sample transport through a pneumatic tube system on ROTEM(®) parameters compared with manual transport, and (ii) to verify whether the graphic results can be transmitted online via virtual network computing over a local area network to the physician in charge of the patient.
Abstract:
Much research has focused on desertification and land degradation assessments without putting sufficient emphasis on prevention and mitigation, although the concept of sustainable land management (SLM) is increasingly being acknowledged. A variety of SLM measures have already been applied at the local level, but they are rarely adequately recognised, evaluated, shared or used for decision support. WOCAT (World Overview of Conservation Approaches and Technologies) has developed an internationally recognised, standardised methodology to document and evaluate SLM technologies and approaches, including their spatial distribution, allowing the sharing of SLM knowledge worldwide. The recent methodological integration into a participatory process now allows this knowledge to be analysed and used for decision support at the local and national level. The use of the WOCAT tools stimulates evaluation (self-evaluation as well as learning from comparing experiences) within SLM initiatives, where all too often there is not only insufficient monitoring but also a lack of critical analysis. The comprehensive questionnaires and database system make it possible to document, evaluate and disseminate local experiences of SLM technologies and their implementation approaches. This evaluation process, carried out in a team of experts and together with land users, greatly enhances understanding of the reasons behind successful (or failed) local practices. It has now been integrated into a new methodology for appraising and selecting SLM options. The methodology combines a local collective learning and decision approach with the use of the evaluated global best practices from WOCAT in a concise three-step process: i) identifying land degradation and locally applied solutions in a stakeholder learning workshop; ii) assessing local solutions with the standardised WOCAT tool; iii) jointly selecting promising strategies for implementation with the help of a decision support tool. The methodology has been implemented in various countries and study sites around the world, mainly within the FAO LADA (Land Degradation Assessment in Drylands) project and the EU-funded DESIRE project. Investments in SLM must be carefully assessed and planned on the basis of properly documented experiences and evaluated impacts and benefits: concerted efforts are needed and sufficient resources must be mobilised to tap the wealth of knowledge and learn from SLM successes.
Abstract:
Electric power grids throughout the world suffer from serious inefficiencies associated with under-utilization due to demand patterns, engineering design and the load-following approaches in use today. These grids consume much of the world's energy and represent a large carbon footprint. From a material utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity additions when renewable technologies (such as solar and wind) and electric vehicles are to be added to the utility demand/supply mix. Using actual data from PJM [PJM 2009], this work shows that consumer load management, real-time price signals, sensors and intelligent demand/supply control offer a compelling path forward to increase the efficient utilization of the world's grids and to reduce their carbon footprint. Under-utilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability. By creating strong societal connections between consumers and energy providers, technology can radically change this situation. Through intelligent deployment of smart sensors, smart electric vehicles and consumer-based load management technology, very high saturations of intermittent renewable energy supplies can be effectively controlled and dispatched to increase the levels of utilization of existing utility distribution, substation, transmission and generation equipment. The strengthening of these technology, society and consumer relationships requires rapid dissemination of knowledge (real-time prices, costs and benefit sharing, demand response requirements) in order to incentivize behaviors that can increase the effective use of the technological equipment that represents one of the largest capital assets modern society has created.
Abstract:
A real-time polymerase chain reaction (PCR) assay was developed for rapid identification of Bacillus anthracis in environmental samples. These samples often harbor Bacillus cereus bacteria closely related to B. anthracis, which may hinder its specific identification by producing false positive signals. The assay consists of two duplex real-time PCRs: the first allows amplification of a sequence specific to the B. cereus group (B. anthracis, B. cereus, Bacillus thuringiensis, Bacillus weihenstephanensis, Bacillus pseudomycoides, and Bacillus mycoides) within the phosphoenolpyruvate/sugar phosphotransferase system I gene, together with a B. anthracis-specific single nucleotide polymorphism within the adenylosuccinate synthetase gene. The second duplex real-time PCR targets the lethal factor gene on virulence plasmid pXO1 and the capsule synthesis gene on virulence plasmid pXO2. The specificity of the assay is enhanced by the use of minor groove binding probes and/or locked nucleic acid probes. The assay was validated on 304 bacterial strains, including 37 B. anthracis, 67 B. cereus group, 54 non-cereus group Bacillus, and 146 other Gram-positive and Gram-negative bacterial strains. The assay was performed on various environmental samples spiked with B. anthracis or B. cereus spores and allowed accurate identification of B. anthracis in environmental samples. This study provides a reliable method for rapid identification of B. anthracis under field operational conditions.
Abstract:
At every level of organization of the central nervous system, most processes, ranging from ion channels to neuronal networks, occur in a closed loop, where the input to the system depends on its output. In contrast, most in vitro preparations and experimental protocols operate in an open-loop fashion, where the input does not depend on the output of the studied system. Thanks to progress in digital signal processing and real-time computing, it is now possible to artificially close the loop and investigate biophysical processes and mechanisms under increased realism. In this contribution, we review some of the most relevant examples of a new trend in in vitro electrophysiology, ranging from the use of the dynamic clamp to multi-electrode distributed feedback stimulation. We are convinced that these represent the beginning of new frontiers for the in vitro investigation of the brain, promising to break down the borders that still exist between theoretical and experimental approaches while taking advantage of cutting-edge technologies.
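As one concrete example of closing the loop, the dynamic-clamp technique mentioned above injects a current that depends on the membrane voltage just measured. A bare-bones sketch of one real-time cycle follows; the DAQ calls read_vm and write_current are hypothetical placeholders, and the conductance value is an assumption:

```python
G_SYN = 5e-9     # virtual synaptic conductance in siemens (assumed value)
E_REV = 0.0      # reversal potential in volts (assumed excitatory synapse)

def dynamic_clamp_step(read_vm, write_current):
    """One cycle of the hard real-time loop: measure, compute, inject."""
    vm = read_vm()                   # measured membrane potential (V)
    i_syn = G_SYN * (E_REV - vm)     # Ohmic virtual-conductance current
    write_current(i_syn)             # inject the computed current
    return vm, i_syn
```

The injected current thus depends, within the same cycle, on the system's own output, which is exactly what distinguishes closed-loop from conventional open-loop stimulation protocols.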
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation for localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable and easy to implement. Firstly, this thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations it takes to converge to the correct singular values, thus achieving closer-to-real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation, and the system was developed with the objective of achieving high throughput. Various modern cores available in FPGAs were used to maximize performance, and these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this polynomial rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to derive such an algorithm. In all, the thesis addresses two major bottlenecks of a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
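A toy, vectorized sketch of the general flavour of such a parallel Newton rooting scheme is given below. This is not the thesis' algorithm, which exploits the specific complex dynamics of root-MUSIC polynomials; here each vector lane simply stands in for one hardware processing element.

```python
import numpy as np

def parallel_newton_roots(coeffs, n_starts=64, iters=50):
    """Run Newton iterations from many starting points simultaneously.
    coeffs: polynomial coefficients, highest degree first."""
    deriv = np.polyder(coeffs)
    # Seed the lanes on a circle just inside the unit circle, where the
    # signal roots of a root-MUSIC polynomial are expected to lie.
    z = 0.95 * np.exp(2j * np.pi * np.arange(n_starts) / n_starts)
    for _ in range(iters):
        z -= np.polyval(coeffs, z) / np.polyval(deriv, z)
    return z
```

A production design would additionally cluster or deflate duplicate converged roots and pick those closest to the unit circle as the DOA estimates.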