957 results for Parallel system
Abstract:
Recently there has been a significant increase in electrical equipment, and hence in electric power demand, in aircraft applications. This drives the need for efficient, reliable, and low-weight converters, especially rectifiers from 115 VAC to 270 VDC, because these voltages are used in power distribution. To obtain high efficiency in aircraft applications, where semiconductor derating is high, several semiconductors are normally connected in parallel to decrease conduction losses. However, this conflicts with the goal of high reliability. To meet both goals of high efficiency and high reliability, this work proposes an interleaved multi-cell rectifier system that employs several converter cells in parallel instead of parallel-connected semiconductors. A 10 kW multi-cell isolated rectifier system has been designed in which each cell is composed of a buck-type rectifier and a full-bridge DC-DC converter. The implemented system exhibits 91% efficiency, high power density (10 kW/10 kg), low THD (2.5%), and n−1 fault tolerance, which complies with military aircraft standards.
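As a quick illustration of why paralleling whole converter cells yields n−1 fault tolerance, the sketch below computes the per-cell load in normal operation and after one cell fails. The cell count is a hypothetical figure, not taken from the thesis; only the 10 kW total is quoted from the abstract.

```python
# Illustrative sketch: per-cell load sharing in an interleaved multi-cell
# rectifier with n-1 fault tolerance. Cell count is an assumption.
TOTAL_POWER_W = 10_000   # system output power (10 kW, as in the abstract)
N_CELLS = 5              # hypothetical number of parallel converter cells

def per_cell_power(total_w: float, working_cells: int) -> float:
    """Power each remaining cell must process when the load is shared equally."""
    return total_w / working_cells

normal = per_cell_power(TOTAL_POWER_W, N_CELLS)
one_failed = per_cell_power(TOTAL_POWER_W, N_CELLS - 1)

print(f"Per-cell power, all {N_CELLS} cells running: {normal:.0f} W")
print(f"Per-cell power after one cell fails:        {one_failed:.0f} W")
# Each cell must therefore be rated for the n-1 case (here 2500 W) so the
# system keeps delivering full power after a single cell failure.
```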
Abstract:
In this paper, an architecture based on a scalable and flexible set of Evolvable Processing arrays is presented. FPGA-native Dynamic Partial Reconfiguration (DPR) is used for evolution, which is performed intrinsically, letting the system adapt autonomously to variable run-time conditions, including the presence of transient and permanent faults. The architecture supports different modes of operation, namely independent, parallel, cascaded, or bypass mode. These modes of operation can be used during evolution or during normal operation. The evolvability of the architecture is combined with fault-tolerance techniques to enhance the platform with self-healing features, making it suitable for applications that require both high adaptability and reliability. Experimental results show that such a system may benefit from accelerated evolution times, increased performance, and improved dependability, mainly by increasing fault tolerance for transient and permanent faults, as well as providing some fault-identification possibilities. The evolvable HW array shown is tailored for window-based image processing applications.
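Intrinsic evolution of this kind boils down to an evolutionary loop that mutates candidate configurations, evaluates them on the reconfigurable array, and keeps the best one. The following is a minimal (1+λ) sketch with a placeholder bitstring genome and fitness function; it only illustrates the loop structure, not the paper's actual genome encoding or evaluation.

```python
import random

# Minimal (1+lambda) evolutionary loop of the kind used in intrinsic
# evolvable-hardware systems. The bitstring genome and fitness function
# are placeholders, not the architecture's real configuration format.
GENOME_LEN = 64    # assumed length of a processing-element configuration
LAMBDA = 4         # offspring per generation
MUTATION_P = 0.02  # per-bit mutation probability

def fitness(genome):
    """Placeholder: count of set bits stands in for a measured filter quality."""
    return sum(genome)

def mutate(genome):
    return [1 - b if random.random() < MUTATION_P else b for b in genome]

parent = [random.randint(0, 1) for _ in range(GENOME_LEN)]
for generation in range(200):
    offspring = [mutate(parent) for _ in range(LAMBDA)]
    best = max(offspring, key=fitness)
    if fitness(best) >= fitness(parent):   # intrinsic evaluation on the array
        parent = best

print("best fitness:", fitness(parent))
```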
Abstract:
A methodology for developing an advanced communications system for the Deaf in a new domain is presented in this paper. This methodology is a user-centred design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and finally, system evaluation. During the requirement analysis, both the user and technical requirements are evaluated and defined. To generate the parallel corpus, it is necessary to collect Spanish sentences in the new domain and translate them into LSE (Lengua de Signos Española: Spanish Sign Language). LSE is represented by glosses and by video recordings. This corpus is used to adapt the two main modules of the advanced communications system to the new domain: the spoken-Spanish-to-LSE translation module and the Spanish-generation-from-LSE module. The main resources to be generated are the vocabularies for both languages (Spanish words and signs) and the knowledge needed for translating in both directions. Finally, the field evaluation is carried out with deaf people using the advanced communications system to interact with hearing people in several scenarios. For this evaluation, the paper proposes several objective and subjective measures of performance. The new domain considered in this paper is dialogues at a hotel reception. Using this methodology, the system was developed in several months, obtaining very good performance: good translation rates (10% Sign Error Rate) with small processing times, allowing face-to-face dialogues.
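The Sign Error Rate quoted above is normally computed like a word error rate: the minimum number of sign insertions, deletions, and substitutions needed to turn the system output into the reference gloss sequence, divided by the reference length. The sketch below (with made-up gloss sequences) shows one way to compute it; it illustrates the standard metric, not code from the paper.

```python
def sign_error_rate(reference, hypothesis):
    """Levenshtein distance over sign glosses, divided by reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: reference glosses vs. system output.
ref = ["HOTEL", "ROOM", "RESERVE", "WANT"]
hyp = ["HOTEL", "ROOM", "WANT"]
print(f"SER = {sign_error_rate(ref, hyp):.0%}")  # one deletion -> 25%
```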
Abstract:
Nowadays robots have made their way into real applications that were prohibitive and unthinkable thirty years ago. This is mainly due to the increase in computing power and the evolution of the theoretical fields of robotics and control. Even though there is plenty of information on these topics in the current literature, it is not easy to find clear guidance on how to proceed in order to design and implement a controller for a robot. In general, the design of a controller requires a complete understanding and knowledge of the system to be controlled. Therefore, for advanced control techniques the system must first be identified. This identification task is itself cumbersome and never straightforward, requiring considerable expertise, and suitable criteria must be adopted. On the other hand, the problem of designing a controller is even more complex when dealing with Parallel Manipulators (PM), since their closed-loop structures give rise to a highly nonlinear system. On this basis the current work is developed, which intends to summarize and gather all the concepts and experience involved in the control of a hydraulic Parallel Manipulator. The main objective of this thesis is to provide a guide covering all the steps involved in the design of advanced control techniques for PMs. The analysis of the PM under study is broken down to the core of the mechanism: the hydraulic actuators. The actuators are modeled and experimentally identified. Additionally, some considerations regarding traditional PID controllers are presented and an adaptive controller is finally implemented. From a macro perspective, the kinematic and dynamic models of the PM are presented. Based on the model of the system and extending the adaptive controller of the actuator, a control strategy for the PM is developed and its performance is analyzed in simulation.
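Since the thesis contrasts traditional PID control with an adaptive scheme, a minimal discrete-time PID position loop is sketched below as a reference point. The gains, sample time, and first-order actuator model are illustrative assumptions, not values identified in the thesis.

```python
# Minimal discrete-time PID position loop around a toy first-order actuator.
# Gains, sample time, and plant parameters are illustrative assumptions.
KP, KI, KD = 8.0, 20.0, 0.001   # hypothetical PID gains
DT = 0.001                       # sample time [s]
TAU, GAIN = 0.05, 1.0            # toy actuator: TAU*dy/dt = -y + GAIN*u

setpoint, y = 0.10, 0.0          # desired and current piston position [m]
integral = 0.0
prev_error = setpoint - y        # avoids a derivative kick on the first step

for step in range(5000):         # simulate 5 s
    error = setpoint - y
    integral += error * DT
    derivative = (error - prev_error) / DT
    u = KP * error + KI * integral + KD * derivative   # control signal
    prev_error = error
    y += DT * (GAIN * u - y) / TAU   # explicit Euler step of the actuator

print(f"final position: {y:.4f} m (target {setpoint} m)")
```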
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance, and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing their subtasks and implementing them on a GPU, both applications attain the goal of running at interactive frame rates. In addition, we propose a technique for fast evaluation of arbitrarily complex functions, especially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have not been widely adopted because of their large computational and memory requirements. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions with continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
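The last contribution rests on the fact that a uniformly sampled, piecewise linear interpolant can be evaluated with a single hardware-interpolated texture fetch. The sketch below mimics that idea on the CPU with NumPy: it builds a uniform piecewise linear approximation of a function and measures the maximum error. The sample function, sample count, and error probe are assumptions for illustration, not the thesis's experiments.

```python
import numpy as np

# Uniform piecewise linear approximation of an arbitrary 1-D function,
# mimicking what a GPU texture-filtering unit evaluates with one fetch.
def f(x):
    return np.exp(-x) * np.sin(4 * np.pi * x)   # hypothetical target function

N_SAMPLES = 64                                   # number of stored samples
xs = np.linspace(0.0, 1.0, N_SAMPLES)            # uniform partition of [0, 1]
ys = f(xs)                                       # values stored in the "texture"

def approx(x):
    """Piecewise linear evaluation, equivalent to a linear texture fetch."""
    return np.interp(x, xs, ys)

# Probe the approximation error on a dense grid.
probe = np.linspace(0.0, 1.0, 10_001)
max_err = np.max(np.abs(f(probe) - approx(probe)))
print(f"max abs error with {N_SAMPLES} samples: {max_err:.2e}")
# Doubling the sample count roughly quarters the error for smooth functions
# (O(h^2) behaviour), which is the kind of bound the analysis addresses.
```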
Abstract:
The discrepancy between the structural longitudinal organization of the parallel-fiber system in the cerebellar cortex and the functional mosaic-like organization of the cortex has provoked controversial theories about the flow of information in the cerebellum. We address this issue by characterizing the spatiotemporal organization of neuronal activity in the cerebellar cortex by using optical imaging of voltage-sensitive dyes in isolated guinea-pig cerebellum. Parallel-fiber stimulation evoked a narrow beam of activity, which propagated along the parallel fibers. Stimulation of the mossy fibers elicited a circular, nonpropagating patch of synchronized activity. These results strongly support the hypothesis that a beam of parallel fibers, activated by a focal group of granule cells, fails to activate the Purkinje cells along most of its length. It is thus the ascending axon of the granule cell, and not its parallel branches, that activates and defines the basic functional modules of the cerebellar cortex.
Abstract:
We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into sample reservoirs by using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescent detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, normally are completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentration. To understand the fundamentals of these unique features we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.
Abstract:
The activation of the silent endogenous progesterone receptor (PR) gene by 17-β-estradiol (E2) in cells stably transfected with estrogen receptor (ER) was used as a model system to study the mechanism of E2-induced transcription. The time course of E2-induced PR transcription rate was determined by nuclear run-on assays. No marked effect on specific PR gene transcription rates was detected at 0 and 1 h of E2 treatment. After 3 h of E2 treatment, the PR mRNA synthesis rate increased 2.0- ± 0.2-fold and continued to increase to 3.5- ± 0.4-fold by 24 h as compared with 0 h. The transcription rate increase was followed by PR mRNA accumulation. No PR mRNA was detectable at 0, 1, and 3 h of E2 treatment. PR mRNA accumulation was detected at 6 h of E2 treatment and continued to accumulate until 18 h, the longest time point examined. Interestingly, this slow and gradual transcription rate increase of the endogenous PR gene did not parallel binding of E2 to ER, which was maximized within 30 min. Furthermore, the E2–ER level was down-regulated to 15% at 3 h as compared with 30 min of E2 treatment and remained low at 24 h of E2 exposure. These paradoxical observations indicate that E2-induced transcription activation is more complicated than just an association of the occupied ER with the transcription machinery.
Abstract:
The perceived colors of reflecting surfaces generally remain stable despite changes in the spectrum of the illuminating light. This color constancy can be measured operationally by asking observers to distinguish illuminant changes on a scene from changes in the reflecting properties of the surfaces comprising it. It is shown here that during fast illuminant changes, simultaneous changes in spectral reflectance of one or more surfaces in an array of other surfaces can be readily detected almost independent of the numbers of surfaces, suggesting a preattentive, spatially parallel process. This process, which is perfect over a spatial window delimited by the anatomical fovea, may form an early input to a multistage analysis of surface color, providing the visual system with information about a rapidly changing world in advance of the generation of a more elaborate and stable perceptual representation.
Abstract:
Although specific proteinases play a critical role in the active phase of apoptosis, their substrates are largely unknown. We previously identified poly(ADP-ribose) polymerase (PARP) as an apoptosis-associated substrate for proteinase(s) related to interleukin 1 beta-converting enzyme (ICE). Now we have used a cell-free system to characterize proteinase(s) that cleave the nuclear lamins during apoptosis. Lamin cleavage during apoptosis requires the action of a second ICE-like enzyme, which exhibits kinetics of cleavage and a profile of sensitivity to specific inhibitors that are distinct from those of the PARP proteinase. Thus, multiple ICE-like enzymes are required for apoptotic events in these cell-free extracts. Inhibition of the lamin proteinase with tosyllysine chloromethyl ketone blocks nuclear apoptosis prior to the packaging of condensed chromatin into apoptotic bodies. Under these conditions, the nuclear DNA is fully cleaved to a nucleosomal ladder. Our studies reveal that the lamin proteinase and the fragmentation nuclease function in independent parallel pathways during the final stages of apoptotic execution. Neither pathway alone is sufficient for completion of nuclear apoptosis. Instead, the various activities cooperate to drive the disassembly of the nucleus.
Abstract:
This paper describes JANUS, a modular, massively parallel, and reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links. Processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular code structure, unconventional data manipulation instructions, and a not-too-large database size. We discuss the architecture of this configurable machine and focus on its use for Monte Carlo simulations of statistical mechanics. On this class of applications JANUS achieves impressive performance: in some cases one JANUS processing element outperforms high-end PCs by a factor of ≈1000. We also discuss the role of JANUS in other classes of scientific applications.
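Spin-model Monte Carlo is a canonical example of the application class described above: regular code structure, simple data manipulation, and a modest working set. The sketch below is a plain-Python Metropolis sweep of a 2-D Ising model, intended only to illustrate the kind of kernel such machines accelerate; the lattice size, temperature, and sweep count are arbitrary choices, not JANUS benchmarks.

```python
import math
import random

# Toy 2-D Ising model with Metropolis updates: the kind of statistical-mechanics
# kernel JANUS-class machines accelerate. Parameters are arbitrary.
L = 32                     # lattice side
T = 2.27                   # temperature (near the critical point), J = kB = 1
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def metropolis_sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # sum of the four nearest neighbours with periodic boundaries
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn          # energy change if the spin flips
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

for sweep in range(200):
    metropolis_sweep()

magnetisation = abs(sum(sum(row) for row in spins)) / (L * L)
print(f"|m| after 200 sweeps: {magnetisation:.3f}")
```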
Abstract:
The so-called parallel multisplitting nonstationary iterative Model A was introduced by Bru, Elsner, and Neumann [Linear Algebra and its Applications 103:175-192 (1988)] for solving a nonsingular linear system Ax = b using a weak nonnegative multisplitting of the first type. In this paper new results are introduced when A is a monotone matrix using a weak nonnegative multisplitting of the second type and when A is a symmetric positive definite matrix using a P-regular multisplitting. Nonstationary alternating iterative methods are also studied. Finally, combining Model A and alternating iterative methods, two new models of parallel multisplitting nonstationary iterations are introduced. When matrix A is monotone and the multisplittings are weak nonnegative of the first or of the second type, both models lead to convergent schemes. When matrix A is symmetric positive definite and the multisplittings are P-regular, the schemes are also convergent.
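For concreteness, a parallel multisplitting iteration combines several splittings A = M_i − N_i through nonnegative diagonal weighting matrices E_i that sum to the identity: x^{k+1} = Σ_i E_i (M_i^{-1} N_i x^k + M_i^{-1} b). The sketch below runs a stationary two-splitting version (Jacobi-type and Gauss-Seidel-type splittings with equal weights) on a small symmetric positive definite system; the matrix and weights are illustrative choices, not examples from the paper.

```python
import numpy as np

# Stationary parallel multisplitting iteration with two splittings
# A = M_i - N_i and equal diagonal weights E_1 = E_2 = I/2.
A = np.array([[ 4., -1.,  0.,  0.],
              [-1.,  4., -1.,  0.],
              [ 0., -1.,  4., -1.],
              [ 0.,  0., -1.,  4.]])
b = np.array([1., 2., 2., 1.])

M1 = np.diag(np.diag(A))        # Jacobi-type splitting
M2 = np.tril(A)                 # Gauss-Seidel-type splitting
N1, N2 = M1 - A, M2 - A
E = 0.5 * np.eye(4)             # weighting matrices summing to the identity

x = np.zeros(4)
for k in range(100):
    x1 = np.linalg.solve(M1, N1 @ x + b)   # local iterate from splitting 1
    x2 = np.linalg.solve(M2, N2 @ x + b)   # local iterate from splitting 2
    x = E @ x1 + E @ x2                    # weighted combination

print("multisplitting solution:", x)
print("residual norm:", np.linalg.norm(A @ x - b))
```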
Abstract:
Wireless sensor networks (WSNs) have shown wide applicability to many fields, including monitoring of environmental, civil, and industrial settings. WSNs, however, are resource constrained by many competing factors that span their hardware, software, and networking. One of the central resource constraints is the charge consumption of WSN nodes. With finite energy supplies, low charge consumption is needed to ensure long lifetimes and the success of WSNs. This thesis details the design of a power system to support long-term operation of WSNs. The power system’s development occurred in parallel with a custom WSN from the Queen’s MEMS Lab (QML-WSN), with the goal of supporting a 1+ year lifetime without sacrificing functionality. The final power system design utilizes a TPS62740 DC-DC converter with AA alkaline batteries to efficiently supply the nodes while providing battery-monitoring functionality and an expansion slot for future development. Testing tools for measuring current draw and charge consumption were created, along with analysis and processing software. Through their use, the charge consumption of the power system was drastically lowered and issues in QML-WSN were identified and resolved, including the proper shutdown of the accelerometers and an incorrect microcontroller unit (MCU) power pin connection. Controlled current profiling revealed unexpected node behaviour and detailed current-voltage relationships. These relationships were used with a lifetime projection model to estimate a lifetime of 521-551 days, depending on the mode of operation. The power system and QML-WSN were tested in a long-term trial lasting 272+ days in an industrial testbed monitoring an air compressor pump. Environmental factors were found to influence the behaviour of nodes, leading to increased charge consumption, while a node in an office setting was still operating at the conclusion of the trial. This agrees with the lifetime projection and gives a strong indication that a 1+ year lifetime is achievable. Additionally, a lightweight charge consumption model was developed which allows the charge consumption of nodes in a distributed WSN to be monitored. This model was tested in a laboratory setting, demonstrating over 95% accuracy for WSNs with high packet reception rates across varying data rates, battery supply capacities, and runtimes up to full battery depletion.
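As a back-of-the-envelope check on this kind of lifetime projection, the sketch below converts an assumed average battery current and battery capacity into a projected lifetime. The capacity and current figures are placeholder values, not the thesis's measurements.

```python
# Back-of-the-envelope node lifetime projection from average current draw.
# Capacity and current figures are placeholders, not measured values.
BATTERY_CAPACITY_MAH = 2500.0   # two AA alkaline cells in series (~2500 mAh usable)
AVG_CURRENT_MA = 0.19           # assumed average battery current of one node

lifetime_hours = BATTERY_CAPACITY_MAH / AVG_CURRENT_MA
lifetime_days = lifetime_hours / 24.0
print(f"projected lifetime: {lifetime_days:.0f} days")   # roughly 550 days

# A real projection (like the 521-551 day estimate above) also has to account
# for converter efficiency, battery self-discharge, and the different current
# draw of each operating mode, which this sketch ignores.
```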
Abstract:
With the development of embedded applications and driving assistance systems, it becomes relevant to develop parallel mechanisms for checking and diagnosing these new systems. In this thesis we focus our research on one such mechanism: analytical redundancy for fault diagnosis of an automotive suspension system. We consider a quarter-car passive suspension model and use a parameter-estimation method based on an ARX model to detect faults occurring in the damper and spring of the system. A neural network classifier is then deployed to isolate the faults and identify where each fault is happening. On this basis, safety measures and redundancies can take effect to prevent failure of the system. It is shown that the ARX estimator can quickly detect a fault online using vertical acceleration and displacement sensor data, which come from sensors common in today's vehicles. The clear divergence in the ARX response makes it easy to set a threshold that alerts the vehicle's intelligent system, and the neural classifier can quickly indicate the location of the fault.
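An ARX model relates the current output to past outputs and past inputs plus noise, y(t) + a1 y(t-1) + ... + a_na y(t-na) = b1 u(t-1) + ... + b_nb u(t-nb) + e(t), and its parameters can be estimated by ordinary least squares over a regressor matrix; divergence of the residuals or of the re-estimated parameters then serves as the fault indicator. The sketch below fits a second-order ARX model to simulated data; the model orders and the toy data generator are assumptions, not the thesis's suspension model.

```python
import numpy as np

# Least-squares fit of an ARX(na=2, nb=2) model:
#   y(t) = -a1*y(t-1) - a2*y(t-2) + b1*u(t-1) + b2*u(t-2) + e(t)
# The "true" system below is a toy stand-in for the suspension dynamics.
rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)              # excitation (stand-in for road input)
y = np.zeros(N)
for t in range(2, N):
    y[t] = (1.5 * y[t-1] - 0.7 * y[t-2]
            + 0.5 * u[t-1] + 0.2 * u[t-2]
            + 0.01 * rng.standard_normal())

# Regressor matrix with rows phi(t) = [-y(t-1), -y(t-2), u(t-1), u(t-2)].
Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1], u[0:N-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))
# A fault (e.g. a degraded damper) changes the underlying dynamics, so the
# residual y[2:N] - Phi @ theta, or the re-estimated parameters, drift away
# from their nominal values; thresholding that drift raises the alarm.
```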
Abstract:
Thesis (Master's)--University of Washington, 2016-06