991 results for Processor cores
Abstract:
Aluminum rectangles measuring 10 x 24.5 cm, painted white, red, blue, and yellow, were coated with a colorless, odorless glue and placed in the field among Clitoria fairchildiana trees to assess the responsiveness of E. clitoriae adults to the different colors. Transparent rectangles of the same dimensions were used as controls. The significantly higher number of adults attracted to and captured by the yellow traps, compared with the other colors, indicates an oriented response, and yellow traps are therefore recommended for monitoring E. clitoriae adults.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted considerable interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and the power dissipation on each layer with a small performance penalty.
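To make the output-selection idea concrete, here is a minimal sketch of congestion-aware adaptive output selection in a 2D mesh, assuming a simple buffer-occupancy congestion metric and generic port names; it illustrates the general technique rather than the router presented in the thesis.

    # Illustrative sketch: congestion-aware adaptive output selection in a 2D mesh NoC.
    # The congestion metric (neighbor buffer occupancy) and port names are assumptions.

    def select_output_port(cur, dst, neighbor_occupancy):
        """Pick an output port among the minimal-path directions, preferring the
        neighbor whose input buffers are least occupied.

        cur, dst           -- (x, y) coordinates of current and destination routers
        neighbor_occupancy -- dict mapping 'N'/'S'/'E'/'W' to buffer occupancy (0.0-1.0)
        """
        cx, cy = cur
        dx, dy = dst
        candidates = []
        if dx > cx: candidates.append('E')
        if dx < cx: candidates.append('W')
        if dy > cy: candidates.append('N')
        if dy < cy: candidates.append('S')
        if not candidates:          # packet has arrived at its destination router
            return 'LOCAL'
        # Adaptive output selection: choose the least congested minimal direction.
        return min(candidates, key=lambda port: neighbor_occupancy[port])

    # Example: destination lies to the north-east; the east neighbor is congested,
    # so the packet is routed north first.
    print(select_output_port((1, 1), (3, 4), {'N': 0.2, 'S': 0.0, 'E': 0.9, 'W': 0.1}))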
Abstract:
Objectives: To evaluate the efficiency of Doppler velocimetry in assessing fetal well-being. Methods: 130 pregnant women seen at the Ultrasound Service of the Centro de Atenção Integral à Saúde da Mulher of the Universidade Estadual de Campinas, between the 28th and 42nd weeks of gestation, were analyzed. Doppler measurements of the umbilical artery, middle cerebral artery, and fetal abdominal aorta were correlated with adverse perinatal outcomes. The women selected for the study underwent elective cesarean delivery at most four hours after the Doppler examination. Adverse perinatal outcomes were defined as: 5-minute Apgar score below seven, admission to a neonatal intensive care unit, intrauterine growth restriction, acute fetal distress, perinatal mortality, hypoglycemia, necrotizing enterocolitis, and cerebral hemorrhage. The impedance indices of the umbilical artery, middle cerebral artery, and abdominal aorta were compared case by case with the adverse perinatal outcomes. Results: the systolic/diastolic ratio of the umbilical artery showed higher sensitivity (80.76%) than the pulsatility index and the resistance index of the umbilical artery. Umbilical artery Doppler showed better sensitivity than middle cerebral artery and abdominal aorta Doppler in detecting adverse perinatal outcomes. Conclusion: Doppler velocimetry of the umbilical and middle cerebral arteries showed good ability to assess fetal well-being and a significant association with adverse perinatal outcomes.
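For reference, the three Doppler indices compared above are computed from the peak systolic velocity S, end-diastolic velocity D, and time-averaged mean velocity V_m of the flow waveform; these are the standard definitions, assumed here rather than quoted from the paper:

\[
\mathrm{S/D} = \frac{S}{D}, \qquad
\mathrm{RI} = \frac{S - D}{S}, \qquad
\mathrm{PI} = \frac{S - D}{V_m}.
\]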
Abstract:
Objectives: to evaluate the perinatal outcomes of pregnancies with abnormal Doppler velocimetry showing centralization of fetal blood flow. Methods: 32 cases of fetal blood-flow centralization diagnosed at the Centro de Atenção Integral à Saúde da Mulher of the Universidade Estadual de Campinas were analyzed. The diagnosis of centralization was confirmed when the ratio between the pulsatility index of the fetal middle cerebral artery (IPACM) and the pulsatility index of the umbilical artery (IPAU) was less than one (IPACM/IPAU < 1). Arterial flow was detected with a color-mapping ultrasound system (Toshiba SSH-140A). Results: intensive care was required for 26 fetuses (89.6%). The length of stay in the neonatal intensive care unit ranged from 1 to 83 days, with a mean of 22 days. Fetal death occurred in 3/32 (9.4%) and perinatal death in 9/29 (31%). Considering gestational age assessed by the Capurro method, the incidence of births before 36 weeks was 21/32 (65.6%). Intrauterine growth restriction occurred in 71.8% of the fetuses and hypoglycemia in 44.8%. Conclusion: centralization of fetal blood flow is a marker of a condition harmful to fetal well-being, and its study will be of great value in guiding obstetric management.
Abstract:
On-chip multiprocessor systems (On-Chip Multiprocessor [OCM]) are regarded as the best structures for occupying the space available on today's integrated circuits. In this work we focus on an architectural model, called the isometric architecture for on-chip multiprocessor systems, which makes it possible to evaluate, predict, and optimize OCM systems by relying on an efficient organization of the nodes (processors and memories), and on methodologies that allow these architectures to be used effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that makes efficient and massive use of on-chip memories. Processors and memories are organized according to an isometric approach, which consists in bringing data closer to the processes rather than optimizing transfers between processors and memories laid out conventionally. The architecture is a three-dimensional mesh model. The placement of units on this model is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six memories at a time and each memory can communicate with as many processors at a time. In the second part of our work, we focus on a decomposition methodology in which the ideal number of nodes of the model can be determined from a matrix specification of the application to be handled by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computational performance for the application being processed, we propose to find the ideal number of processors and memories for the system to be built. We also consider decomposing the specification of the model to be built, or of the application to be processed, with respect to the load balance of the units. We thus propose a three-step decomposition approach: transforming the specification or the application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still with the aim of designing an efficient, high-performance system, we address the assignment of processors and memories through a two-stage methodology. First, we assign units to the nodes of the system, considered here as an undirected graph, and second, we assign values to the edges of this graph. For the assignment, we propose modeling the decomposed applications with a matrix approach and using the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual perturbation approach to search for the best combination of assignment costs, while respecting parameters such as temperature, heat dissipation, energy consumption, and the area occupied by the chip.
The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic, effective design-aid tool, usable from the functional specification phase of the system onwards.
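A minimal sketch of the QAP-style assignment step described above, assuming a flow matrix between units and a distance matrix between graph nodes; the pairwise-swap "gradual perturbation" is an illustrative stand-in for the thesis's actual procedure, which also weighs temperature, heat dissipation, energy, and chip area.

    # Illustrative sketch of a QAP-style assignment of units (processes/data) to nodes,
    # improved by gradual pairwise perturbations.  Flow and distance matrices are assumed.
    import random

    def qap_cost(assign, flow, dist):
        """Total communication cost: sum of flow(u, v) * dist(node(u), node(v))."""
        n = len(assign)
        return sum(flow[u][v] * dist[assign[u]][assign[v]]
                   for u in range(n) for v in range(n))

    def perturb_search(flow, dist, iters=10_000, seed=0):
        """Start from an arbitrary assignment and keep any pairwise swap that lowers cost."""
        rng = random.Random(seed)
        n = len(flow)
        assign = list(range(n))          # unit i initially placed on node i
        best = qap_cost(assign, flow, dist)
        for _ in range(iters):
            i, j = rng.sample(range(n), 2)
            assign[i], assign[j] = assign[j], assign[i]
            cost = qap_cost(assign, flow, dist)
            if cost < best:
                best = cost              # keep the improving swap
            else:
                assign[i], assign[j] = assign[j], assign[i]  # undo
        return assign, best

    # Tiny example: 4 units, 4 nodes on a line (distance = index difference).
    flow = [[0, 8, 1, 0], [8, 0, 0, 2], [1, 0, 0, 5], [0, 2, 5, 0]]
    dist = [[abs(a - b) for b in range(4)] for a in range(4)]
    print(perturb_search(flow, dist))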
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
Abstract:
We consider general allocation problems with indivisibilities where agents' preferences possibly exhibit externalities. In such contexts, many different core notions have been proposed. One is the gamma-core, whereby blocking is only allowed via allocations where the non-blocking agents receive their endowment. We show that if there exists an allocation rule satisfying ‘individual rationality’, ‘efficiency’, and ‘strategy-proofness’, then for any problem for which the gamma-core is non-empty, the allocation rule must choose a gamma-core allocation and all agents are indifferent between all allocations in the gamma-core. We apply our result to housing markets, coalition formation, and networks.
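A compact way to state the blocking notion described above, in notation not taken from the paper (a sketch of the usual definition, with e_i denoting agent i's endowment and preferences defined over entire allocations because of the externalities):

\[
S \ \gamma\text{-blocks } x \iff \exists\, y \ \text{feasible for } S:\;
y_i = e_i \ \forall i \notin S \ \text{ and } \ y \succ_i x \ \forall i \in S,
\]

and the gamma-core is the set of allocations that no coalition gamma-blocks.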
Abstract:
Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing, and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give results with representation error, conversion error, and/or rounding error. In this world of precision, such errors are no longer tolerable. The errors can be eliminated and better accuracy achieved if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. Performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost. A multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of efficient decimal MAC (Multiply-Accumulate) architectures for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008). The research goal is to design and synthesize decimal MAC architectures that achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
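The representation-error point can be seen directly with a small experiment; the snippet below uses Python's standard decimal module purely to illustrate decimal floating-point semantics, not the hardware MAC designs developed in the thesis.

    # Binary floating point cannot represent 0.1 exactly, so repeated addition drifts;
    # decimal floating point (here Python's decimal module) keeps the exact value.
    from decimal import Decimal

    binary_sum  = sum(0.1 for _ in range(10))              # binary (IEEE 754) doubles
    decimal_sum = sum(Decimal("0.1") for _ in range(10))   # decimal floating point

    print(binary_sum)    # 0.9999999999999999  -- representation/rounding error
    print(decimal_sum)   # 1.0                 -- exact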
Abstract:
For the theoretical investigation of local phenomena (adsorption at surfaces, defects or impurities within a crystal, etc.) one can assume that the effects caused by the local disturbance are limited to the neighbouring particles. With this model, well known as the cluster approximation, an infinite system can be simulated by a much smaller segment of the surface (cluster). The size of this segment varies strongly for different systems. Calculations of the convergence of the bond distance and binding energy of an aluminium atom adsorbed on an Al(100) surface showed that more than 100 atoms are necessary to obtain a sufficient description of surface properties. With a full quantum-mechanical approach, however, systems of this size cannot be calculated because of the required computer memory and processor speed. We therefore developed an embedding procedure for the simulation of surfaces and solids, in which the whole system is partitioned into several parts that are treated differently: the inner part (cluster), located near the adsorbate site, is calculated fully self-consistently and is embedded into an environment, while the influence of the environment on the cluster enters the relativistic Kohn-Sham equations as an additional external potential. The procedure is based on density functional theory, which means that the choice of the electronic density of the environment determines the quality of the embedding. The environment density was modelled in three different ways: atomic densities; densities transferred from a large preceding calculation without embedding; and (copied) bulk densities. The embedding procedure was tested on the atomic adsorption of Al on Al(100) and Cu on Cu(100). The result was that, if the environment is chosen appropriately, only 9 embedded atoms are needed for the Al system to reproduce the results of exact slab calculations. For the Cu system, calculations without embedding were performed first, showing that about 60 atoms are sufficient as a surface cluster. Using the embedding procedure, the same values were obtained with only 25 atoms. This is a substantial improvement, considering that the calculation time increases cubically with the number of atoms. With the embedding method, infinite systems can be treated by molecular methods. In addition, the program code was extended with the possibility of performing molecular-dynamics simulations. It is now possible, beyond the previous fixed-core calculations, to investigate the structures of small clusters and surfaces as well. As a first application we studied the adsorption of Cu on Cu(100). We calculated the relaxed positions of the atoms located close to the adsorption site and afterwards performed the full quantum-mechanical calculation of this system, repeating the procedure for different distances to the surface. Thus a realistic adsorption process could be examined for the first time. It should be noted that for the Cu reference calculations (without embedding) we began to parallelize the entire program code; only because of this were the investigations of the 100-atom Cu surface clusters possible. Due to the good efficiency of both the parallelization and the developed embedding procedure, we will be able to apply this combination in the future.
This will make it possible to treat larger systems in these areas and to bring in results of fully relativistic molecular calculations, which will be of particular interest for the regime of heavy systems.
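The practical gain from the embedding step follows from the cubic scaling mentioned above; a rough back-of-the-envelope check, assuming pure N^3 scaling (a simplification of the text's claim):

    # Rough estimate of the speedup from shrinking the self-consistent cluster,
    # assuming calculation time scales as N**3.
    def cubic_speedup(n_full, n_embedded):
        return (n_full / n_embedded) ** 3

    print(cubic_speedup(60, 25))   # Cu on Cu(100): roughly 14x cheaper per calculation
    print(cubic_speedup(100, 9))   # Al on Al(100): over 1000x cheaper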
Abstract:
This thesis describes Optimist, an optimizing compiler for the Concurrent Smalltalk language developed by the Concurrent VLSI Architecture Group. Optimist compiles Concurrent Smalltalk to the assembly language of the Message-Driven Processor (MDP). The compiler includes numerous optimization techniques such as dead code elimination, dataflow analysis, constant folding, move elimination, concurrency analysis, duplicate code merging, tail forwarding, use of register variables, as well as various MDP-specific optimizations in the code generator. The MDP presents some unique challenges and opportunities for compilation. Due to the MDP's small memory size, it is critical that the size of the generated code be as small as possible. The MDP is an inherently concurrent processor with efficient mechanisms for sending and receiving messages; the compiler takes advantage of these mechanisms. The MDP's tagged architecture allows very efficient support of object-oriented languages such as Concurrent Smalltalk. The initial goals for the MDP were to have the MDP execute about twenty instructions per method and contain 4096 words of memory. This compiler shows that these goals are too optimistic -- most methods are longer, both in terms of code size and running time. Thus, the memory size of the MDP should be increased.
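As one concrete illustration of the optimization categories listed above, a minimal constant-folding pass over a toy expression tree might look like the following; this is a generic sketch, not Optimist's actual intermediate representation or algorithm.

    # Minimal constant-folding sketch on a toy expression tree:
    # ('+', left, right) / ('*', left, right) nodes, ints as constant leaves,
    # strings as variable leaves.
    import operator

    OPS = {'+': operator.add, '*': operator.mul}

    def fold(node):
        """Recursively replace operator nodes whose operands are both constants."""
        if not isinstance(node, tuple):       # constant or variable leaf
            return node
        op, left, right = node
        left, right = fold(left), fold(right)
        if isinstance(left, int) and isinstance(right, int):
            return OPS[op](left, right)       # both operands known: fold at compile time
        return (op, left, right)              # otherwise keep the (partially folded) node

    # (x * (2 + 3)) folds to (x * 5); the variable 'x' is left untouched.
    print(fold(('*', 'x', ('+', 2, 3))))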
Abstract:
Concurrent Smalltalk is the primary language used for programming the J-Machine, a MIMD message-passing computer containing thousands of 36-bit processors connected by a very low latency network. This thesis describes in detail Concurrent Smalltalk and its implementation on the J-Machine, including the Optimist II global optimizing compiler and the Cosmos fine-grain parallel operating system. Quantitative and qualitative results are presented.
Abstract:
This paper discusses the Nucleus 22 cochlear implant.
Abstract:
Displacement studies on the leaching of potassium (K+) were conducted under unsaturated steady-state flow conditions in nine undisturbed soil columns (15.5 cm in diameter and 25 cm long). Pulses of K+ applied to the columns were leached with distilled water or calcium chloride (CaCl2) at a rate of 18 mm/h. The movement of K+ in gypsum-treated soil leached with distilled water occurred at a rate similar to that in untreated soil leached with 15 mM CaCl2. The Ca2+ concentrations in the leachates were about 15 mM, the values expected from the dissolution of the gypsum. When the applied K+ was displaced with distilled water, K+ was retained in the top 10-12.5 cm of soil. In the undisturbed soil cores there is a possibility of preferential flow and a lack of K+ sorption. The application of gypsum and CaCl2 in the reclamation of sodic soils would therefore be expected to leach K+ from soils. It can also be concluded that using irrigation water with a high Ca2+ concentration can likewise lead to leaching of K+ from soil. The average effluent concentration of K+ during the leaching period was 30.2 and 28.6 mg/L for the gypsum- and CaCl2-treated soils, respectively. These concentrations are greater than the recommended guideline of the World Health Organisation (12 mg K+/L).