865 results for Parallel processing (Electronic computers) - Research
Abstract:
The introduction of parallel processing architectures has allowed the real-time implementation of more sophisticated control algorithms with tighter sampling-time specifications. However, to take advantage of the processing power of these architectures, the control engineer, for lack of appropriate tools, must spend a considerable amount of time parallelizing the control algorithm.
Abstract:
The performance demands of modern control and signal processing systems are increasing beyond the capacity of conventional sequential processors, requiring parallel processing solutions to satisfy real-time requirements.
Abstract:
Proportional, Integral and Derivative (PID) regulators are standard building blocks for industrial automation. The popularity of these regulators comes from their robust performance over a wide range of operating conditions, and also from their functional simplicity, which makes them suitable for manual tuning.
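To make that functional simplicity concrete, here is a minimal sketch of a discrete-time PID regulator (the class, gains, sampling period, and toy plant are illustrative assumptions, not taken from the abstract):

```python
class PID:
    """Minimal discrete-time PID regulator (illustrative sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error for the I term
        self.prev_error = 0.0    # last error for the D term

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


# Toy closed loop: drive a first-order plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
y = 0.0
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=y)
    y += 0.01 * (u - y)  # crude first-order plant model
```

The three gains map directly onto the manual-tuning workflow the abstract mentions: each term can be adjusted independently while observing the closed-loop response.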
Abstract:
Evaluation of the spectral content of blood-flow Doppler ultrasound is currently performed in clinical diagnosis. Since the mean frequency and bandwidth spectral parameters are determinant in quantifying the degree of stenosis, estimators more precise than the conventional Fourier transform should be sought. This paper summarizes studies led by the author in this field, as well as the strategies used to implement the methods in real time. To account for the stationary and nonstationary characteristics of the blood-flow signal, different models were assessed. When autoregressive and autoregressive moving average models were compared with the traditional Fourier-based methods in terms of their statistical performance in estimating both spectral parameters, the Modified Covariance model was identified by the cost/benefit criterion as the best-performing estimator. The performance of three time-frequency distributions and the Short Time Fourier Transform was also compared; the Choi-Williams distribution proved to be more accurate than the other methods. The identified spectral estimators were developed and optimized using high-performance techniques. Homogeneous and heterogeneous architectures supporting multiple-instruction multiple-data parallel processing were evaluated. The results obtained proved that real-time implementation of the blood-flow estimators is feasible, encouraging the use of more complex spectral models in other ultrasonic systems.
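For reference, the two spectral parameters named above are commonly defined as the first moment and the RMS width of the power spectrum. The sketch below computes them from a power spectral density; it is a generic moment-based definition with a synthetic spectrum, not the author's exact estimator, and all names are illustrative:

```python
import numpy as np

def spectral_parameters(freqs, psd):
    """Moment-based mean frequency and RMS bandwidth of a power spectrum."""
    power = psd / psd.sum()                  # normalize to unit total power
    f_mean = np.sum(freqs * power)           # first spectral moment
    bandwidth = np.sqrt(np.sum((freqs - f_mean) ** 2 * power))  # RMS width
    return f_mean, bandwidth

# Synthetic example: Gaussian-shaped spectrum centred at 2 kHz, sigma 300 Hz.
freqs = np.linspace(0, 5000, 1024)
psd = np.exp(-0.5 * ((freqs - 2000) / 300) ** 2)
print(spectral_parameters(freqs, psd))  # approximately (2000.0, 300.0)
```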
Abstract:
Consider the problem of scheduling sporadic tasks on a multiprocessor platform under mutual exclusion constraints. We present an approach which appears promising for allowing a large degree of parallel task execution while still ensuring low blocking.
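The constraint in question can be pictured with ordinary locks: a task holding a shared resource blocks every other task that needs it, no matter how many processors are available. A minimal illustration follows (Python threads standing in for sporadic tasks; this shows only the problem, not the paper's scheduling approach, which the abstract does not detail):

```python
import threading
import time

shared_lock = threading.Lock()  # guards the shared resource

def sporadic_task(task_id, critical_time):
    """A task that must enter a critical section under mutual exclusion."""
    t0 = time.monotonic()
    with shared_lock:                 # may block while another task holds the lock
        blocked_for = time.monotonic() - t0
        time.sleep(critical_time)     # stand-in for critical-section work
    print(f"task {task_id}: blocked {blocked_for * 1000:.1f} ms")

threads = [threading.Thread(target=sporadic_task, args=(i, 0.05))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```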
Abstract:
Sparse matrix-vector multiplication (SMVM) is a fundamental operation in many scientific and engineering applications. In many cases sparse matrices have thousands of rows and columns where most of the entries are zero, with the non-zero data spread across the matrix. This lack of data locality reduces the effectiveness of the data cache in general-purpose processors, significantly lowering their performance efficiency compared with what is achieved in dense matrix multiplication. In this paper, we propose a parallel processing solution for SMVM in a many-core architecture. The architecture is tested with known benchmarks using a ZYNQ-7020 FPGA. The architecture is scalable in the number of core elements and limited only by the available memory bandwidth. It achieves performance efficiencies of up to almost 70% and better performance than previous FPGA designs.
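The kernel being accelerated is compact. Below is a reference sketch of SMVM over a matrix in compressed sparse row (CSR) form, a common storage scheme for such matrices (the abstract does not state which format the architecture uses, so CSR is an assumption):

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix A stored in CSR form.

    values  : non-zero entries, row by row
    col_idx : column index of each non-zero
    row_ptr : offset where each row starts in values/col_idx
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]  # irregular access into x
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
values, col_idx, row_ptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(spmv_csr(values, col_idx, row_ptr, np.ones(3)))  # [3. 3.]
```

The indirect access `x[col_idx[k]]` is what defeats the cache: consecutive non-zeros may touch arbitrary elements of `x`, which is exactly the locality problem the abstract describes.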
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT)/Inverse Fast Fourier Transform (IFFT), and appropriate techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUT) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to various FFT/IFFT algorithms, along with their abilities to exploit parallel processing with minimal data dependencies in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the capability of the FFT/IFFT hardware to support the parallelism offered by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also provide high performance, by utilizing a specialized FFT/IFFT hardware architecture that can exploit the parallelism offered by the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in a polar coordinate system, using sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
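To make the look-up-table idea concrete, here is a minimal direct-DFT sketch in which the twiddle factors come from precomputed cosine and sine tables rather than runtime trigonometric calls (a generic illustration, not the paper's hardware architecture):

```python
import math

def dft_with_lut(x):
    """Direct N-point DFT using precomputed cosine/sine look-up tables."""
    n = len(x)
    # Precompute twiddle factors once; hardware would hold these in ROM/LUTs.
    cos_lut = [math.cos(2 * math.pi * k / n) for k in range(n)]
    sin_lut = [math.sin(2 * math.pi * k / n) for k in range(n)]
    out = []
    for k in range(n):
        re = im = 0.0
        for t in range(n):
            idx = (k * t) % n            # table index instead of a trig call
            re += x[t] * cos_lut[idx]
            im -= x[t] * sin_lut[idx]
        out.append(complex(re, im))      # the N per-output sums are independent,
    return out                           # so they can be computed in parallel

print(dft_with_lut([1.0, 0.0, 0.0, 0.0]))  # impulse in -> flat spectrum of ones
```

Unlike the FFT butterfly, each of the N output sums here has no data dependency on the others, which is the parallelism the direct DFT offers to specialized hardware.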
Abstract:
This paper presents a theoretical model to analyze the privacy issues around location-based mobile business models. We report the results of an exploratory field experiment in Switzerland that assessed the factors driving user payoff in mobile business. We found that (1) the personal data disclosed has a negative effect on user payoff; (2) the amount of personalization available has a direct and positive effect, as well as a moderating effect, on user payoff; (3) the amount of control over the user's personal data has a direct and positive effect, as well as a moderating effect, on user payoff. The results suggest that privacy protection could be the main value proposition in the B2C mobile market. From our theoretical model we derive a set of guidelines for designing a privacy-friendly business model pattern for third-party services. We discuss four examples to show how the mobile platform can play a key role in the implementation of these new business models.
Abstract:
Digital reading occupies an ever larger place in students' overall reading. Although the first digital reading systems, commonly called electronic books, date back several years, opinions about their potential still diverge. A variety of digital academic content is now available to students, bringing with it a multiplication of uses as well as a variety of reading modes. Digital reading systems are now an integral part of the electronic environment to which students have access and deserve to be studied in greater depth. Many experiments on electronic books have been carried out in public and university libraries. Research has been conducted on their usability and on readers' satisfaction with the aim of improving their design. However, very few studies have addressed the actual reading practices of academics (notably students) and their perceptions of these new reading systems. Our research addresses these aspects by studying two digital reading systems: a Tablet PC (a mobile device) and a Web-book system, NetLibrary (a reading interface integrated into a Web browser). Our research examines students' reading practices on these digital reading systems. It is guided by three research questions concerning (1) the reading strategies employed by students (before, during, and after reading), (2) the elements of the reading system that influence the reading process, positively or negatively, and (3) students' perceptions of electronic book technology and its contribution to their academic work. To conduct this research, a mixed methodological approach was adopted, using three modes of data collection: a questionnaire, semi-structured interviews with students who had used one of the systems under study, and the collection of the reading traces left by students in the systems after use. The respondents (n=46) were students at the Université de Montréal from three departments (Library & Information Science, Communication, and Linguistics & Translation). Nearly half of them (n=21) were interviewed. In parallel, the reading traces left in the reading systems by the students (annotations, highlights, etc.) were collected and analyzed. The interview data and the answers to the open questions of the questionnaire were subjected to content analysis, while statistical treatment was applied to the data from the closed questions of the questionnaire and from the reading traces. The results show that, in general, the reading goal, the novelty of the content, the student's reading habits, and the capabilities of the reading system are the elements that guide the choice and application of reading strategies. Aids and obstacles to reading were identified for each of the reading systems studied. The aids consist of the presence of certain elements of the paper-book metaphor in the digital reading system (the notion of a delimited page, pagination, etc.), the dictionary integrated into the system, and the fact that the systems studied facilitate skimming.
As for the obstacles, the instrumentation of reading made it difficult for readers to appropriate the text. Moreover, digital reading (that is, "on screen") led to a lack of concentration and to visual fatigue, particularly with NetLibrary. The Tablet PC, like NetLibrary, was perceived as easy to use but not always comfortable, the discomfort being more pronounced with NetLibrary. Students consider both reading systems to be practical tools for academic work, but for different reasons specific to each system. Respondents' overall evaluation of their digital reading experience was, on the whole, positive for the Tablet PC and rather mixed for NetLibrary. This research contributes to knowledge about (1) digital reading, notably by the student academic readership, and (2) the impact of a reading system on reading effectiveness, on readers, on the achievement of the reading goal, and on the reading strategies used. In addition to the limitations of the study, avenues for future research are presented.
Abstract:
Analog-to-Digital Converters (ADCs) have an important impact on the overall performance of signal processing systems. This research explores efficient techniques for the design of sigma-delta ADCs, especially for multi-standard wireless transceivers. In particular, the aim is to develop novel models and algorithms to address this problem and to implement software tools that are able to assist the designer's decisions in the system-level exploration phase. To this end, this thesis presents a framework of techniques to design sigma-delta analog-to-digital converters. A 2-2-2 reconfigurable sigma-delta modulator is proposed that can meet the design specifications of three wireless communication standards, namely GSM, WCDMA and WLAN. A sigma-delta modulator design tool is developed using the Graphical User Interface Development Environment (GUIDE) in MATLAB. A Genetic Algorithm (GA) based search method is introduced to find the optimum values of the scaling coefficients and to maximize the dynamic range of a sigma-delta modulator.
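As background for the modulator under design, the sketch below gives a behavioural simulation of a first-order sigma-delta modulator, far simpler than the proposed 2-2-2 reconfigurable cascade (all names and parameters are illustrative assumptions):

```python
import numpy as np

def first_order_sigma_delta(u):
    """Behavioural model: integrator + 1-bit quantizer + feedback."""
    v, y = 0.0, 0.0
    bits = []
    for sample in u:
        v += sample - y                  # integrate input minus fed-back output
        y = 1.0 if v >= 0 else -1.0      # 1-bit quantizer
        bits.append(y)
    return np.array(bits)

# A slow sine: the density of +1 bits in the output tracks the input.
t = np.arange(4096)
u = 0.5 * np.sin(2 * np.pi * t / 512)
bits = first_order_sigma_delta(u)
# Crude decimation filter: block averaging recovers the input shape.
recovered = bits.reshape(-1, 32).mean(axis=1)
```

Higher-order and cascaded (MASH) topologies such as the 2-2-2 structure push more quantization noise out of band, which is what makes one modulator reconfigurable across standards with different bandwidth and resolution targets.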
Abstract:
The continually growing worldwide hazardous waste problem has been receiving much attention lately. The development of cost-effective yet efficient methods of decontamination is vital to our success in solving this problem. Bioremediation using white rot fungi, a group of basidiomycetes characterized by their ability to degrade lignin by producing extracellular LiP, MnP and laccase, has come to be recognized globally, as described in detail in Chapter 1. These features provide them with tremendous advantages over other micro-organisms. Chapter 2 deals with the isolation and screening of lignin-degrading-enzyme-producing micro-organisms from a mangrove area. Marine microbes of the mangrove area have a great capacity to tolerate wide fluctuations of salinity. Primary and secondary screening for lignin-degrading-enzyme-producing halophilic microbes from the mangrove area resulted in the selection of two fungal strains from among 75 bacteria and 26 fungi. The two fungi, SIP 10 and SIP 11, were identified as Penicillium sp. and Aspergillus sp. respectively, belonging to the class Ascomycetes. The specific activity of the purified LiP was 7923 U/mg protein. The purification fold was 24.07, while the yield was 18.7%. SDS-PAGE of LiP showed that it was a low-molecular-weight protein of 29 kDa. Zymogram analysis using crystal violet dye as substrate confirmed the peroxidase nature of the purified LiP. The ability of the purified LiP to decolorize different synthetic dyes was studied. Among the dyes studied, crystal violet, a triphenylmethane dye, was decolorized to the greatest extent.
Abstract:
This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry-select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partial parallel processing of all digits, which yields high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates. For the reversible logic implementation, the speed-up factor of the hybrid adder rises above 10 when the number of decimal digits exceeds 25. Such high-speed decimal adders find applications in real-time processors and internet-based applications. The implementations use only reversible conservative Fredkin gates, which makes them suitable for VLSI circuits.
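For readers unfamiliar with the building block named above: the Fredkin gate is a reversible, conservative controlled swap, so it preserves the number of 1s and is its own inverse. A minimal truth-table model (illustrative only, not the paper's adder design):

```python
def fredkin(c, a, b):
    """Reversible controlled swap: if c == 1, the outputs swap a and b."""
    return (c, b, a) if c else (c, a, b)

# Self-inverse: applying the gate twice restores the inputs.
for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    assert fredkin(*fredkin(*bits)) == bits

# Conservative: the number of 1 bits is preserved by every mapping.
for bits in [(0, 1, 1), (1, 0, 1), (1, 1, 1)]:
    assert sum(fredkin(*bits)) == sum(bits)

# Universality sample: fixing b = 0 makes the third output c AND a,
# so arbitrary adder logic can be built from Fredkin gates alone.
assert all(fredkin(c, a, 0)[2] == (c & a) for c in (0, 1) for a in (0, 1))
```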
Abstract:
Construction of an application in a concurrent and distributed programming language called Erlang. Erlang is a functional programming language with strict evaluation, that is, single assignment, and it includes a virtual machine. The application developed was a multiprotocol chat program. A first part consisted of building a server for the local network and its corresponding client. Then, to make the client more functional and useful, another protocol, IRC, was implemented. Since Erlang is a language specially designed for working with processes, the design attempted to take full advantage of this quality.
Abstract:
This undergraduate thesis aims to identify the usefulness of community strategic relationships and marketing in managing business with corporate clients. Concepts such as organizational and relationship marketing are also considered; these concepts help the investigation determine strategic relationships between companies and the benefit they generate for corporations, in order to promote the implementation of these strategies in companies both nationally and internationally, and likewise to identify the concept of community held by corporate clients and how this concept can adapt to the environment that surrounds them. In order to understand the functions, characteristics, and behavior of a corporate client, the specific objectives of the research are to describe marketing strategies in managing business with corporate clients, to determine whether the concept of community exists in managing business with corporate clients, and to determine whether community strategic relationships are used in managing business with corporate clients. The proposed methodology was theoretical-conceptual, taking into account marketing and the community strategic relationships of corporate clients. Bringing the research into the field of management and leadership, the results obtained will help strengthen the direction of companies by evaluating the true usefulness of strategies based on community relationships and marketing in business with corporate clients. Community strategies and marketing directly influence companies' relationships with their corporate clients, because marketing allows the relationship to be extended and generates future benefit for both parties. The research concludes that companies that manage to create community strategies and close relationships among themselves tend to obtain better long-term returns and to be more sustainable companies.