29 results for RTL
Abstract:
The SystemVerilog implementation of the Open Verification Methodology (OVM) is exercised on an 8b/10b RTL open-core design as a simple yet complete exercise that exposes the key features of OVM. Emphasis is placed on the actual usage of the verification components rather than on a complete verification flow, with the aim of helping readers unfamiliar with OVM apply the methodology to their own designs. A link to the complete code is provided to reinforce this aim. We found the methodology easy to use, although intimidating at first glance, especially for someone with little experience in object-oriented programming. However, the flexibility, portability and reusability of the verification code become clear once the first steps are taken.
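For readers who want the flavour of the component structure before opening the linked code, here is a minimal sketch, in plain Python rather than SystemVerilog, of the stimulus/reference-model/scoreboard pattern that OVM formalizes. Every name is illustrative and the identity dut() merely stands in for the 8b/10b core; this is a conceptual analogue, not OVM itself.

```python
# Conceptual analogue of an OVM-style testbench (illustrative names only):
# a driver generates stimulus, a reference model predicts the expected
# output, and a scoreboard compares predictions against the DUT's output.
import random

class Scoreboard:
    def __init__(self):
        self.expected = []      # FIFO of predicted transactions
        self.mismatches = 0

    def expect(self, item):
        self.expected.append(item)

    def check(self, actual):
        if self.expected.pop(0) != actual:
            self.mismatches += 1

def reference_model(byte):
    return byte                 # placeholder prediction of the encoder output

def dut(byte):
    return byte                 # stand-in for the 8b/10b RTL core under test

def run_test(n=100):
    sb = Scoreboard()
    for _ in range(n):
        stimulus = random.randrange(256)    # driver: random 8-bit input
        sb.expect(reference_model(stimulus))
        sb.check(dut(stimulus))             # monitor samples the DUT output
    print(f"mismatches after {n} transactions: {sb.mismatches}")

if __name__ == "__main__":
    run_test()
```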
Abstract:
Among all classes of nanomaterials, silver nanoparticles (AgNPs) potentially have an important ecotoxicological impact, especially in freshwater environments. Fish are particularly susceptible to the toxic effects of silver ions and, given the knowledge gaps regarding the contribution of dissolution and of unique particle effects to AgNP toxicity, they represent a group of vulnerable organisms. Using cell lines (RTL-W1, RTH-149, RTG-2) and primary hepatocytes of rainbow trout (Oncorhynchus mykiss) as in vitro test systems, we assessed the cytotoxicity of the representative AgNP, NM-300K, and of AgNO3 as an Ag+ ion source. The lack of AgNP interference with the cytotoxicity assays (AlamarBlue, CFDA-AM, NRU) and their simultaneous application point to the compatibility and usefulness of such a battery of assays. The RTH-149 and RTL-W1 liver cell lines exhibited sensitivity to AgNP toxicity similar to that of primary hepatocytes. The composition of Leibovitz's L-15 culture medium (high amino acid content) had an important influence on the behaviour and toxicity of AgNPs towards the RTL-W1 cell line. The results demonstrate that, with careful consideration, such an in vitro approach can provide valuable toxicological data for use in an integrated testing strategy for NM-300K risk assessment.
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions, so the performance of an SoC system can be improved if hardware acceleration is applied to the element that incurs the performance overhead. The concepts presented in this study can easily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption and resource costs are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
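As a rough illustration of step (1), the hedged Python sketch below uses the standard-library profiler to rank functions by cumulative time, the same kind of hotspot identification the workflow applies to the H.264 CODEC. The workload is a placeholder, not the study's code.

```python
# Sketch of the profiling step: cProfile ranks functions by time so the
# hotspot candidate for hardware acceleration can be identified.
import cProfile
import pstats

def hotspot(n):
    # Repeated arithmetic-heavy loop: the kind of function that profiling
    # flags as a candidate for an FPGA accelerator.
    acc = 0
    for i in range(n):
        acc += (i * i) % 97
    return acc

def application():
    return sum(hotspot(100_000) for _ in range(20))

profiler = cProfile.Profile()
profiler.runcall(application)
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)   # top 5 functions by time
```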
Abstract:
Livestock facilities, where animals carry out their productive cycle, must above all control the influence of climatic factors on the animals. Environmental variations can be controlled through the use of different ventilation systems. The objective of this research was to evaluate the influence of different environmental conditioning systems on a swine nursery. Three treatments were tested: natural ventilation, cooled ventilation and forced ventilation. The climatic parameters evaluated were air temperature, relative humidity and black globe temperature. The physiological parameters analyzed were respiratory frequency and backfat thickness. The number of piglets born alive, average weight at weaning and number of weaned piglets were also evaluated. The cooled ventilation system was able to decrease the air temperature, the animals' respiratory frequency, the black globe temperature and humidity index (WBGT) and the radiant thermal load (RTL).
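For reference, one common formulation of these two indices in the animal-environment literature (the study may use variants) takes T_bg as black globe temperature (degrees C), T_dp as dew point temperature (degrees C), T_mr as mean radiant temperature (K) and sigma as the Stefan-Boltzmann constant:

```latex
% Common formulations; the paper may use variants of these indices.
\[
  \mathrm{WBGT} = T_{bg} + 0.36\,T_{dp} + 41.5,
  \qquad
  \mathrm{RTL} = \sigma\,T_{mr}^{4}
\]
```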
Abstract:
The functional relation between the decline in the rate of a physiological process and the magnitude of a stress related to soil physical conditions is an important tool for uses as diverse as assessing the stress-related sensitivity of different plant cultivars and characterizing soil structure. Two of the most pervasive sources of stress are soil resistance to root penetration (SR) and matric potential (psi). However, assessing these sources of stress on physiological processes in different soils can be complicated by other sources of stress and by the strong relation between SR and psi in a soil. A multivariate boundary line approach was assessed as a means of reducing these complications. The effects of SR and psi stress conditions on plant responses were examined under growth chamber conditions. Maize plants (Zea mays L.) were grown in soils at different water contents and with different structures arising from variation in texture, organic carbon content and soil compaction. Measurements of carbon exchange (CE), leaf transpiration (LT), plant transpiration (PT), leaf area (LA), leaf + shoot dry weight (LSDW), root total length (RTL), root surface area (RSA) and root dry weight (RDW) were determined after plants reached the 12-leaf stage. The LT, PT and LA were described as a function of SR and psi with a double S-shaped function using the multivariate boundary line approach. The CE and LSDW were described by the combination of an S-shaped function for SR and a linear function for psi. The root parameters were described by a single S-shaped function for SR. The sensitivity to SR and psi depended on the plant parameter: PT, LA and LSDW were most sensitive to SR, and among the parameters exhibiting a significant response to psi, PT was most sensitive. The boundary line approach was found to be a useful tool for describing the functional relation between the decline in the rate of a physiological process and the magnitude of a stress related to soil physical conditions.
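The abstract does not reproduce the fitted equations; a plausible parameterization of the multivariate boundary line, assuming logistic S-shapes (hypothetical, not the paper's fitted form), is:

```latex
% Hypothetical logistic parameterization of the double S-shaped response.
\[
  Y(SR,\psi) \;=\; Y_{\max}\,
  \underbrace{\frac{1}{1 + e^{\,a\,(SR - SR_{0})}}}_{\text{S-shape in } SR}\,
  \underbrace{\frac{1}{1 + e^{\,b\,(\psi_{0} - \psi)}}}_{\text{S-shape in } \psi}
\]
```

Under this form, CE and LSDW would replace the psi factor with a linear term, and the root parameters would retain only the SR factor, matching the three cases described above.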
Abstract:
BACKGROUND: Blockade of nitric oxide (NO) synthesis is characterized by increased cardiac sympathetic activity, whereas physical training reduces sympathetic activity. OBJECTIVE: We investigated the effect of NO synthesis blockade on cardiovascular autonomic control in rats submitted to aerobic exercise for ten weeks. METHODS: Wistar rats were divided into four groups: controls given standard chow and water ad libitum for ten weeks (RC); controls treated with NG-nitro-L-arginine methyl ester (L-NAME) in the last week (RCL); rats trained on a motorized treadmill for ten weeks (RT); and rats trained for ten weeks and treated with L-NAME in the last week (RTL). Cardiovascular autonomic control was investigated in all groups using double blockade with methylatropine and propranolol, and by variability analysis. RESULTS: The RCL and RTL groups presented hypertension. The RCL group presented tachycardia and a predominance of sympathetic tone in the determination of heart rate (HR) after pharmacological autonomic blockade. The RT group presented bradycardia and a lower intrinsic HR than the other groups. HR variability analysis showed lower absolute and normalized values in the low-frequency (LF) band in the RCL group, whereas the RTL group presented an increase in the LF band in absolute values. Analysis of systolic blood pressure (SBP) variability showed that the RCL and RTL groups presented higher values in the LF band. CONCLUSION: Prior physical exercise prevented the deficit in cardiac autonomic control induced by L-NAME treatment, but did not prevent the increase in SBP variability.
Abstract:
The objective of this study was to evaluate the thermoregulatory response of dairy buffaloes before and after milking. To characterize the animals' thermoregulatory capacity, skin surface temperatures were taken with an infrared thermometer (SST) and a thermographic camera (MTBP), and respiratory rate (RR) was recorded. The Black Globe Humidity Index (BGHI), radiant thermal load (RTL) and enthalpy (H) were used to characterize the thermal environment. Artificial neural networks analyzed these indices together with the animals' physiological data, using a single layer trained with the least mean squares (LMS) algorithm. The results indicated that the pre-milking and post-milking environments reached BGHI, RR, SST and MTBP values above the thermal neutrality zone for buffaloes. In addition, the limits of the surface skin temperatures were influenced mostly by changing ambient conditions rather than by respiratory rate. It follows that buffaloes are sensitive to environmental changes and that their skin temperatures are a better indicator of thermal comfort than respiratory rate.
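To make the training rule concrete, below is a minimal sketch of a single-layer network trained with the least mean squares (Widrow-Hoff) rule, assuming standardized environmental indices as inputs; the data and feature names are placeholders, not the study's measurements.

```python
# Single-layer LMS (Widrow-Hoff) training sketch with placeholder data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # e.g. BGHI, RTL, enthalpy (standardized)
true_w = np.array([0.8, -0.3, 0.5])  # synthetic ground truth
y = X @ true_w + rng.normal(scale=0.1, size=200)   # e.g. respiratory rate

w = np.zeros(3)
mu = 0.01                            # LMS step size
for epoch in range(50):
    for x_i, y_i in zip(X, y):
        err = y_i - w @ x_i          # instantaneous prediction error
        w += mu * err * x_i          # Widrow-Hoff update: w <- w + mu*e*x

print("learned weights:", np.round(w, 2))
```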
Abstract:
The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% at half the output rate of a bus-based system. The network-based solution avoids "broken" columns due to manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures; an improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. The architectural design was done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time, making it possible to simulate tens of column and full-chip architectures. A more than tenfold decrease in run-time is observed with these techniques compared to a register transfer level (RTL) design approach, and the high-level models require 50% fewer lines of code (LoC) than the RTL description. Two architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, was designed for the Medipix3 collaboration. According to measurements, it consumes < 1 W/cm^2 and delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) at 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase-handshake column bus for internal data transfer, and it has been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end of column (EoC). By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).
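In the spirit of the thesis's high-level architectural models, the sketch below simulates a token-arbitrated column bus in Python rather than in an HDL: the token skips idle pixels and one hit packet drives the shared bus per cycle. All parameters (pixel count, hit probability, buffer depth) are illustrative, not Timepix3 or VeloPix values.

```python
# High-level model of a token-arbitrated column bus. Hits arrive randomly
# at each pixel; a full buffer drops the hit, and the printed ratio is the
# readout efficiency in the sense used above.
import random

N_PIXELS, CYCLES, HIT_PROB, DEPTH = 64, 10_000, 0.01, 4
queues = [0] * N_PIXELS          # occupancy of each pixel's hit buffer
token, generated, lost = 0, 0, 0

for _ in range(CYCLES):
    for p in range(N_PIXELS):    # new hits arrive independently per pixel
        if random.random() < HIT_PROB:
            generated += 1
            if queues[p] < DEPTH:
                queues[p] += 1
            else:
                lost += 1        # buffer full: the hit is dropped
    for i in range(N_PIXELS):    # token skips ahead to next pixel with data
        p = (token + i) % N_PIXELS
        if queues[p]:
            queues[p] -= 1       # that pixel drives the bus this cycle
            token = (p + 1) % N_PIXELS
            break

print(f"readout efficiency: {1 - lost / generated:.3f}")
```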
Abstract:
The design of heterogeneous systems requires two important steps, namely modeling and simulation. Usually, simulators are connected and synchronized through a co-simulation bus. Current approaches have many drawbacks: they are not always suited to distributed environments, simulation run-time can be very disappointing, and each simulator has its own simulation kernel. We propose a new approach consisting of the development of a multi-language compiled simulator in which each model can be described using different modeling languages such as SystemC, ESyS.Net or others. Each model generally contains modules and means of communication between them. The modules describe the functionality of the desired system; they are written using object-oriented programming and can be described in a syntax chosen by the user. We thus propose a separation between the modeling language and the simulation. The models are transformed into a common internal representation that can be viewed as a set of objects. Our environment compiles these internal objects into unified code, instead of using several modeling languages that add many communication mechanisms and extra information. Optimizations can include different mechanisms, such as grouping processes into a single sequential process while respecting the semantics of the models. We use two abstraction levels: register transfer level (RTL) and transaction-level modeling (TLM). RTL allows modeling at a low level of abstraction, where communication between modules is carried out through signals and signaling. TLM models transactional communication at a higher level of abstraction. Our objective is to support both types of simulation while leaving the choice of modeling language to the user. Likewise, we propose to use a single kernel instead of several and to remove the co-simulation bus in order to speed up simulation.
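A minimal illustration of the two abstraction levels, sketched here in Python rather than in the environment's internal representation: at RTL, modules communicate through signals updated on clock edges, while a TLM transaction is a direct method call. All names are illustrative.

```python
class RtlAdder:
    """RTL style: inputs are signals; the output updates on clock()."""
    def __init__(self):
        self.a = self.b = self.sum = 0
    def clock(self):
        self.sum = self.a + self.b   # registered on the clock edge

class TlmAdder:
    """TLM style: one blocking transaction carries the whole exchange."""
    def transport(self, a, b):
        return a + b

rtl = RtlAdder()
rtl.a, rtl.b = 2, 3                 # drive the input signals
rtl.clock()                         # advance one cycle
print(rtl.sum)                      # -> 5, one clock cycle later

print(TlmAdder().transport(2, 3))   # -> 5, immediately, no clock needed
```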
Abstract:
Testing constraints for real-time systems are usually verified through the satisfiability of propositional formulae. In this paper, we propose an alternative in which the verification of timing constraints is done by counting the number of truth assignments instead of by boolean satisfiability. This number can also tell us how "far away" a given specification is from satisfying its safety assertion. Furthermore, specifications and safety assertions are often modified incrementally, with problematic bugs fixed one at a time. To support this development style, we propose an incremental algorithm for counting satisfiability. The proposed algorithm is optimal in that no unnecessary nodes are created during each counting, and it works for the class of path RTL. To illustrate the application, we show how incremental satisfiability counting can be applied to the well-known railroad crossing example, particularly while its specification is still being refined.
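The encoding of timing constraints as formulae is the paper's subject; purely to illustrate the counting principle, here is a tiny generic #SAT counter over CNF clauses (DIMACS-style signed literals). It is a sketch, not the paper's incremental algorithm.

```python
def count_sat(clauses, n_vars, assignment=None):
    """Count satisfying assignments of a CNF formula given as a list of
    clauses, each clause a list of signed variable indices."""
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                      # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return 0                      # clause falsified: no models here
        simplified.append(rest)
    if not simplified:                    # every clause satisfied: the
        return 2 ** (n_vars - len(assignment))  # remaining vars are free
    var = abs(simplified[0][0])           # branch on an unassigned variable
    return (count_sat(simplified, n_vars, {**assignment, var: True}) +
            count_sat(simplified, n_vars, {**assignment, var: False}))

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments satisfy it.
print(count_sat([[1, 2], [-1, 3]], 3))
```

The count, compared with 2^n, gives exactly the "how far away" measure mentioned above: the closer the model count is to the total, the closer the specification is to satisfying the assertion.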
Abstract:
This work focuses on the application of power optimization techniques at high abstraction levels for CMOS circuits, in particular at the architectural and register transfer level (RTL). Different architectures for specific designs of FIR filter and fast Fourier transform (FFT) algorithms are implemented and compared. The objective is to establish a low-power design methodology at this abstraction level. The power reduction techniques addressed aim at reducing switching activity through architectural exploration and data coding. One widely used low-power method is data coding to reduce switching activity on buses. In this work, we investigate the coding of signals in order to obtain power-efficient arithmetic modules that operate directly on these codes. The objective is not only to reduce the switching activity on the data buses but also to minimize the complexity of the combinational logic of the modules. In FIR filter and FFT algorithms, two's-complement representation is the most commonly used encoding of signed operands. This work presents a new architecture for signed operations that maintains the same regularity as a conventional array multiplier. This architecture can operate on radix-2^m numbers, which reduces the number of partial-product rows and thereby yields significant gains in performance and power. The proposed strategy presents significantly better results than the state of the art, and the flexibility of the architecture allows the construction of multipliers with different values of m. Given the nature of the FIR filter and FFT algorithms, which involve multiplying data by appropriate coefficients, the optimal ordering of these coefficients is explored in order to minimize the power consumption of the implemented architectures.
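As a hedged illustration of the coefficient-ordering idea, the sketch below approximates the switching activity of a coefficient bus by the Hamming distance between consecutive two's-complement words and searches for the ordering with the fewest toggles. The coefficients are placeholders, and in a real FIR datapath the schedule must be reordered consistently with the coefficients.

```python
# Toggle-count metric for coefficient orderings on a fixed-width bus.
from itertools import permutations

WIDTH = 8

def to_word(value, width=WIDTH):
    return value & ((1 << width) - 1)        # two's-complement bit pattern

def toggles(order):
    # Total Hamming distance between successive coefficients on the bus.
    return sum(bin(to_word(a) ^ to_word(b)).count("1")
               for a, b in zip(order, order[1:]))

coeffs = [3, -25, 14, 60, -7, 22]            # placeholder FIR coefficients
best = min(permutations(coeffs), key=toggles)
print("natural order toggles:", toggles(coeffs))
print("best order:", best, "toggles:", toggles(best))
```

Exhaustive search is only feasible for short filters; the point here is just the toggle-count objective being minimized, not a scalable ordering algorithm.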