921 results for multiprocessor-on-chip


Relevance: 70.00%

Abstract:

The next generation of computers is expected to rely on architectures with multiple processors and/or multicore processors. Such architectures raise challenges related to interconnection, operating frequency, chip area, power dissipation, performance and programmability. Networks-on-chip are regarded as the ideal interconnection and communication mechanism for this kind of architecture, owing to their scalability, reusability and intrinsic parallelism. Communication in a network-on-chip is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted as in a pipeline between the routers of the network, from the source to the destination of the communication, and simultaneous communications between different source-destination pairs are possible. Building on this, it is proposed to transform the entire communication infrastructure of the network-on-chip (its routing, arbitration and storage mechanisms) into a parallel processing system for high performance. In this proposal, packets are formed by the instructions and data that represent the applications, and they are executed by the routers as they are transmitted, exploiting the pipeline and the parallelism of the communication. Traditional processors are not used; only simple cores that control access to memory remain. An implementation of this idea is called IPNoSys (Integrated Processing NoC System), which has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets while preventing deadlock, livelock and starvation. The architecture provides mechanisms for input and output, interrupts and operating-system support. As a proof of concept, a programming environment and a SystemC simulator for the architecture were developed, allowing various parameters to be configured and several results to be obtained for its evaluation.
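The central idea of executing a packet's instructions hop by hop as it crosses the network can be illustrated with a toy model. This is a minimal sketch, not the IPNoSys implementation: the operation names, the accumulator field and the one-instruction-per-hop rule are illustrative assumptions.

```python
# Toy model: a packet carries an accumulator and a list of instructions;
# each router along the path executes one instruction before forwarding,
# so computation overlaps with communication as in a pipeline.

def execute_packet(packet, path):
    """packet: dict with 'acc' (accumulator) and 'instrs' (list of (op, operand)).
    path:   list of router ids from source to destination."""
    ops = {
        "ADD": lambda a, b: a + b,
        "SUB": lambda a, b: a - b,
        "MUL": lambda a, b: a * b,
    }
    trace = []
    for router in path:
        if not packet["instrs"]:
            break  # remaining hops just forward the finished result
        op, operand = packet["instrs"].pop(0)
        packet["acc"] = ops[op](packet["acc"], operand)
        trace.append((router, op, packet["acc"]))
    return packet["acc"], trace

pkt = {"acc": 2, "instrs": [("ADD", 3), ("MUL", 4), ("SUB", 5)]}
result, trace = execute_packet(pkt, path=[0, 1, 2, 3])
# ((2 + 3) * 4) - 5 = 15, computed across the first three router hops
```

Each hop advances both the packet's position and its computation, which is the property the abstract exploits to turn the interconnect itself into a processing system.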

Relevance: 20.00%

Abstract:

This paper reports a method for the analysis of secondary metabolites stored in glandular trichomes, employing negative-ion 'chip-based' nanospray tandem mass spectrometry. The analysis of glandular trichomes from Lychnophora ericoides, a plant endemic to the Brazilian 'cerrado' and used in traditional medicine as an anti-inflammatory and analgesic agent, led to the identification of five flavonoids (chrysin, pinocembrin, pinostrobin, pinobanksin and 3-O-acetylpinobanksin) by direct infusion of the extracts of glandular trichomes into the nanospray ionisation source. None of these flavonoids is oxidised at ring B, which modifies their fragmentation pathways compared with those of the oxidised 3,4-dihydroflavonoids already described in the literature. The absence of the anti-inflammatory and antioxidant di-C-glucosylflavone vicenin-2, or of any other flavonoid glycoside, in the glandular trichomes was also demonstrated. The 'chip-based' nanospray QqTOF apparatus is a new, fast and useful tool for the identification of secondary metabolites stored in glandular trichomes, and can be useful for chemotaxonomic studies based on trichome metabolites. Copyright (C) 2008 John Wiley & Sons, Ltd.

Relevance: 20.00%

Abstract:

We describe a novel method of fabricating atom chips that are well suited to the production and manipulation of atomic Bose–Einstein condensates. Our chip was created using a silver foil and simple micro-cutting techniques, without the need for photolithography. It can sustain larger currents than conventional chips and is compatible with the patterning of complex trapping potentials. A near-pure Bose–Einstein condensate of 4 × 10⁴ ⁸⁷Rb atoms has been created in a magnetic microtrap formed by currents through wires on the chip. We have observed the fragmentation of atom clouds in close proximity to the silver conductors. The fragmentation has different characteristic features to those seen with copper conductors.

Relevance: 20.00%

Abstract:

Recent advances in the embedded systems world have led to more complex systems with application-specific blocks (IP cores): System-on-Chip (SoC) devices. A good example of such complex devices is the cell phone, which can contain image processing cores, communication cores, memory card cores, and others. The need to increase a system's processing performance at the lowest power leads to the concept of Multiprocessor System on Chip (MSoC), in which the execution of multiple tasks can be distributed across several processors. This thesis addresses the creation of a synthesizable multiprocessing system to be placed in an FPGA device, providing good flexibility to tailor the system to a specific application. The multiprocessing system is built around the synthesizable 32-bit SPARC V8-compliant LEON3 processor.

Relevance: 20.00%

Abstract:

Demands for functionality enhancements, cost reductions and power savings clearly suggest the introduction of multi- and many-core platforms in real-time embedded systems. However, compared to uni-core platforms, many-core platforms suffer from additional problems, namely the lack of scalable coherence mechanisms and the need to perform migrations. These problems have to be addressed before such systems can be considered for integration into the real-time embedded domain. We have devised several agreement protocols which solve some of the aforementioned issues. The protocols allow applications to plan and organise their future executions both temporally and spatially (i.e. when and where the next job will be executed). Decisions can be driven by several factors, e.g. load balancing, energy savings and thermal issues. All presented protocols are described analytically, with particular emphasis on their real-time behaviour and worst-case performance. The underlying assumptions are based on the multi-kernel model and the message-passing paradigm, which constitutes the communication between the interacting instances.
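The flavour of such an agreement round can be sketched as follows. This is an illustrative toy under the message-passing model, not one of the paper's actual protocols: the load-based criterion and the function name are assumptions.

```python
# Sketch of one agreement round: every core "sends" its current load,
# and all cores apply the same deterministic rule to the collected
# messages, so they agree on where the next job will execute.

def agree_next_core(loads):
    """loads: per-core utilisation values, indexed by core id.
    Returns the core id all participants agree on (least loaded,
    ties broken deterministically by core id)."""
    messages = list(enumerate(loads))  # (core_id, load) pairs
    return min(messages, key=lambda m: (m[1], m[0]))[0]

# four cores with current utilisations; all agree core 2 runs the next job
assert agree_next_core([0.8, 0.5, 0.3, 0.9]) == 2
```

The deterministic tie-break is what makes this an agreement rather than a mere heuristic: every participant reaches the same decision from the same messages, with no further coordination.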

Relevance: 20.00%

Abstract:

On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG and IJTAG. The controllability and observability provided by OCD infrastructures form a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM iC3000 debugger (iTracePro version) and the winIDEA 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL, and the results were obtained by simulation within the same fault injection environment. The focus of this paper is the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.

Relevance: 20.00%

Abstract:

The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, the environment or expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, in various forms and using various techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. Our proposed solutions are based on the use of common and customised on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults into microprocessor memory elements with minimal delay and intrusiveness. Different configurations were implemented, starting from basic Components Off-The-Shelf (COTS) microprocessors equipped with real-time OCD infrastructures, up to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared with respect to performance gain and silicon overhead.
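The fault model behind such campaigns is typically a single-event upset in a memory element. The OCD hardware path is not modelled here; this minimal sketch only shows what one injection does to a target word (the function name and 32-bit word size are assumptions for illustration).

```python
# Single-event-upset fault model: flip one bit of a memory word,
# as a debugger-driven injection campaign would do through the
# on-chip debug write access to memory.

def inject_bit_flip(memory, address, bit):
    """Flip bit `bit` of the 32-bit word stored at `address` in
    `memory` (a dict mapping addresses to word values)."""
    memory[address] ^= (1 << bit) & 0xFFFFFFFF
    return memory[address]

mem = {0x1000: 0x0000000F}
faulty = inject_bit_flip(mem, 0x1000, bit=0)
# bit 0 of 0x0F is cleared, giving 0x0E; a second injection at the
# same bit restores the original value (XOR is its own inverse)
```

A real campaign repeats this over many (address, bit, time) triples and observes whether the system detects, tolerates or corrupts under each injected fault.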

Relevance: 20.00%

Abstract:

Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually performed at a ground station, onboard systems have emerged to process the data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose FPGA programmable logic is based on the Artix-7, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation achieves real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
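The projection step at the heart of VCA can be sketched in a much-simplified form: repeatedly pick the pixel with the largest component orthogonal to the endmembers found so far. Real VCA projects onto random directions and applies an SNR-dependent projective transform; those refinements are omitted here, so this is a hedged illustration of the idea, not the paper's architecture.

```python
# Simplified endmember extraction in the spirit of VCA: each iteration
# selects the pixel whose orthogonal residual (w.r.t. the subspace
# spanned by already-chosen endmembers) has the largest norm.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def residual(x, basis):
    """Component of x orthogonal to the span of `basis` (orthonormal)."""
    r = list(x)
    for b in basis:
        c = dot(r, b)
        r = [ri - c * bi for ri, bi in zip(r, b)]
    return r

def vca_endmembers(pixels, p):
    """Return the indices of p endmember pixels."""
    basis, chosen = [], []
    for _ in range(p):
        best = max(range(len(pixels)),
                   key=lambda i: dot(residual(pixels[i], basis),
                                     residual(pixels[i], basis)))
        chosen.append(best)
        r = residual(pixels[best], basis)
        n = dot(r, r) ** 0.5
        basis.append([ri / n for ri in r])  # extend the orthonormal basis
    return chosen

# three pure spectra plus one mixture: the pure pixels are the vertices
data = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.4, 0.3, 0.3]]
# vca_endmembers(data, 3) recovers the three pure pixels
```

Geometrically, mixed pixels lie inside the simplex whose vertices are the pure spectra, so maximising the orthogonal residual always lands on a vertex rather than a mixture.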

Relevance: 20.00%

Abstract:

Master's dissertation in Informatics Engineering.

Relevance: 20.00%

Abstract:

6th International Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.

Relevance: 20.00%

Abstract:

Dissertation submitted for the degree of Master in Biomedical Engineering.

Relevance: 20.00%

Abstract:

Real-time systems are growing in importance and complexity. With the move from uniprocessor to multiprocessor environments, the work done for the former does not carry over directly to the latter, since the level of complexity differs, mainly because of the presence of multiple processors in the system. It was realised early on that the complexity of the problem does not grow linearly with the number of processors. In fact, this complexity is a barrier to scientific progress in the area, much of which remains unexplored, most visibly in task scheduling. The move to this new environment, for real-time systems or otherwise, promises opportunities that would never be possible on a uniprocessor: better performance guarantees, lower monetary cost and lower energy consumption. The latter soon proved to be perhaps the greatest barrier to the development of new uniprocessor designs: as new processors reached the market offering ever higher performance, they also revealed a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to grow and, naturally, new techniques to exploit their inherent advantages must be developed; scheduling algorithms are no exception. Over the years, different categories of multiprocessor scheduling algorithms have been developed to address this problem, the main ones being global, partitioned and semi-partitioned. The global approach assumes the existence of a global queue accessible by all available processors. This makes task migration possible, i.e. the execution of a task can be stopped and resumed on a different processor. At any instant, the m highest-priority tasks of the task set are selected for execution. This approach promises high utilisation bounds, at a high cost in task preemptions and migrations. In contrast, partitioned algorithms place tasks into partitions, each of which is assigned to one of the available processors, i.e. one partition per processor. Task migration is therefore not possible, so the utilisation bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the two: some tasks are split so as to be executed by a group of processors, while others are assigned to a single processor. The result is a solution that can distribute the work to be done in a more efficient and balanced way. Unfortunately, in all these cases there is a gap between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, these scheduling algorithms must be implemented in real operating systems and their applicability assessed, so that where it falls short the necessary changes can be made, both at the theoretical and at the practical level.
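The global scheduling rule described above, selecting the m highest-priority ready tasks at each instant, can be sketched in a few lines. This is an illustrative fragment (task names and the priority encoding are assumptions), not an implementation of any specific algorithm from the dissertation.

```python
# Global scheduling dispatch rule: from the global ready queue,
# the m highest-priority tasks occupy the m processors; the rest
# wait, and a preempted task may later resume on a different
# processor (migration).

def pick_running(ready, m):
    """ready: list of (task_id, priority), larger value = higher priority.
    Returns the ids of the tasks chosen to run on the m processors."""
    return [t for t, _ in sorted(ready, key=lambda x: -x[1])[:m]]

ready = [("t1", 5), ("t2", 9), ("t3", 1), ("t4", 7)]
# on m = 2 processors, t2 and t4 run; t1 and t3 stay in the global queue
assert pick_running(ready, 2) == ["t2", "t4"]
```

Under a partitioned scheme the same decision would instead be taken per processor over its own fixed subset of tasks, which is precisely why migrations (and their cost) disappear there.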

Relevance: 20.00%

Abstract:

Poly(dimethylsiloxane) (PDMS) is an organosilicon polymer widely used in the fabrication of microfluidic systems that integrate biochips. In this study, we propose the use of an adapted PDMS mould for the creation of a miniaturised, reusable reference electrode for in-chip electrochemical measurements. Through its integrated microfluidic system it is possible to replenish internal buffer solutions, unclog critical junctions and treat the electrode's surface, ensuring long-term reuse of the same device. Planar Ag/AgCl reference electrodes were microfabricated over a passivated p-type silicon wafer. The PDMS mould, containing an integrated microfluidic system, was fabricated from a patterned SU-8 mould, which includes a lateral horizontal inlet access point. Surface oxidation was used for irreversible permanent bonding between flat surfaces. The final result was a planar Ag/AgCl reference electrode with integrated microfluidics that allows for electrochemical analysis in biochips.

Relevance: 20.00%

Abstract:

The current methodology for designing embedded analog cells is based on a fixed CMOS technology, so these modules cannot be reused or migrated to other technologies. Reaching a higher level of design productivity requires a paradigm shift. This change in methodology must reduce development time and effort, increase predictability, and reduce the risk involved in designing and fabricating complex systems-on-chip (SoC). Embedded digital cells have been applied to digital VLSI design because synthesis through hardware description languages (HDL) makes it possible to map complex algorithms onto a digital syntactic description, which can then be automatically placed and routed. However, given the lack of electronic design automation in the analog domain, and also because of factors such as noise, drift and mismatch, the use of analog circuits has been kept as low as possible, so embedded analog cells are now a bottleneck in SoC design. For these reasons, the proposed project plans to design embedded analog cells featuring low power consumption, reusability, low cost and high performance, to meet the remarkable growth of the markets for battery-powered portable systems and remotely powered identification systems (RFID). Jointly with the Communications Area, a supply-voltage generator driven by an RF signal is proposed.

Relevance: 20.00%

Abstract:

SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognise specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome does not follow a uniform distribution, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analysing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
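One simple way to filter the artifactual tag pile-ups described above is to discard genomic positions whose tag count vastly exceeds a robust baseline. This sketch uses a fold-over-median cutoff as a placeholder criterion; the thesis's actual filtering methods are not specified here, so the threshold choice and function name are assumptions.

```python
# Hot-spot filter for ChIP-Seq tag counts: positions whose count
# exceeds `fold` times the median count are treated as artifactual
# pile-ups and removed before peak calling.
from statistics import median

def filter_hotspots(tag_counts, fold=10):
    """tag_counts: dict mapping genomic position -> number of tags.
    Returns a copy with hot-spot positions removed."""
    cutoff = fold * median(tag_counts.values())
    return {pos: n for pos, n in tag_counts.items() if n <= cutoff}

counts = {100: 3, 200: 5, 300: 4, 400: 900}  # 400 is an artifactual pile-up
clean = filter_hotspots(counts)
# position 400 is dropped; the background positions survive
```

The median is used instead of the mean because a single extreme pile-up would inflate the mean enough to mask itself, whereas the median stays anchored to the background tag level.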