950 results for intel processor
Abstract:
The introduction of standard on-chip buses has eased integration and boosted the production of IP functional cores. However, once an IP is bus-specific, retargeting it to a different bus is time-consuming and tedious, which reduces the reusability of bus-specific IP. As new bus standards are introduced and different interconnection methods are proposed, this problem grows. Many solutions have been proposed, but they either limit the IP block's performance or are restricted to a particular platform. A new concept is presented that can connect IP blocks to a wide variety of interface architectures with low overhead. This is achieved through the use of a special interface adaptor logic layer.
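As a rough software analogy of the adaptor-layer idea (the paper describes hardware logic; the class names and bus calls below are purely hypothetical), the sketch keeps the IP core written against a single bus-neutral interface and confines every bus-specific detail to a thin adaptor, so retargeting only means swapping the adaptor:

```python
# Hypothetical software analogy of the interface adaptor layer: the IP core
# sees one generic read/write interface, and each adaptor translates it into
# the transactions of a specific bus. All bus names and calls are illustrative.

class GenericIPInterface:
    """Bus-neutral interface the IP core is written against."""
    def read(self, addr: int) -> int:
        raise NotImplementedError
    def write(self, addr: int, data: int) -> None:
        raise NotImplementedError

class BusAAdaptor(GenericIPInterface):
    """Adaptor layer for a hypothetical 'Bus A' protocol."""
    def __init__(self, bus):
        self.bus = bus
    def read(self, addr):
        return self.bus.single_read(addr)           # bus-specific transaction
    def write(self, addr, data):
        self.bus.single_write(addr, data)

class BusBAdaptor(GenericIPInterface):
    """Same IP core, different bus: only the adaptor changes."""
    def __init__(self, bus):
        self.bus = bus
    def read(self, addr):
        return self.bus.cycle(addr, write=False)
    def write(self, addr, data):
        self.bus.cycle(addr, write=True, data=data)
```

Only the adaptor is rewritten when the target bus changes; the IP core itself stays untouched, which is the reuse argument the abstract makes.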
Abstract:
In recent years, many real-time applications need to handle data streams. We consider distributed environments in which remote data sources continuously collect data from the real world or from other data sources and push it to a central stream processor. In such environments, transmitting rapid, high-volume and time-varying data streams induces significant communication, and considerable computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm over a data structure of two height-balanced trees, and it avoids transmitting duplicate items in the data streams, thereby saving substantial network resources. In addition, a theoretical analysis of the time spent performing the search and of the amount of memory needed is provided. Extensive experiments also show that the DTFilter approach achieves high performance.
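A minimal sketch of the windowed duplicate-suppression idea, not the paper's DTFilter itself: the paper maintains two height-balanced trees, whereas this stand-in uses a dict keyed by item value plus a timestamp-ordered heap to decide, at the data source, whether an item has already been transmitted within the current window.

```python
import heapq
import time

class WindowedDistinctFilter:
    """Illustrative stand-in for the windowed distinct filter: suppress items
    already sent within the current time window so duplicates are never
    transmitted to the central stream processor."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.last_seen = {}   # item value -> timestamp of last transmission
        self.expiry = []      # min-heap of (timestamp, value) for eviction

    def _evict(self, now):
        # Drop entries that have fallen out of the sliding window.
        while self.expiry and self.expiry[0][0] <= now - self.window:
            ts, value = heapq.heappop(self.expiry)
            if self.last_seen.get(value) == ts:
                del self.last_seen[value]

    def should_transmit(self, value, now=None):
        now = time.time() if now is None else now
        self._evict(now)
        if value in self.last_seen:
            return False      # duplicate within the window: filter it out
        self.last_seen[value] = now
        heapq.heappush(self.expiry, (now, value))
        return True

f = WindowedDistinctFilter(window_seconds=60.0)
print(f.should_transmit("sensor-17", now=0.0))    # True: first occurrence
print(f.should_transmit("sensor-17", now=30.0))   # False: duplicate in window
print(f.should_transmit("sensor-17", now=90.0))   # True: old entry has expired
```

Items for which should_transmit returns False are dropped before transmission, which is where the network savings come from; the central processor still receives every distinct value per window.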
Abstract:
We propose an asymmetric multi-processor SoC architecture featuring a master CPU running uClinux and multiple loosely-coupled slave CPUs running real-time threads assigned by the master CPU. Real-time SoC architectures often demand a compromise between a generic platform suitable for different applications and application-specific customizations needed to meet performance requirements. Our proposed architecture offers a generic platform running a conventional embedded operating system, providing a traditional software-oriented development approach, while the multiple slave CPUs act as dedicated, independent real-time thread execution units running in parallel with the master CPU to meet performance requirements. In this paper, the architecture is described, including the application and threading development environment. The performance of the architecture on several standard benchmark routines is also analysed.
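As a loose software analogy of the master/slave split (the paper's slaves are dedicated CPUs on the SoC, not operating-system processes, and all names below are illustrative), a master can assign work items to loosely-coupled workers over queues and gather their results in parallel:

```python
# Rough analogy only: a "master" dispatches work items to loosely-coupled
# "slave" workers that run in parallel and return results over a queue,
# mirroring the idea of assigning real-time threads to dedicated slave CPUs.
import multiprocessing as mp

def slave(task_q, result_q):
    while True:
        task = task_q.get()
        if task is None:            # shutdown signal from the master
            break
        job_id, payload = task
        result_q.put((job_id, sum(payload)))   # stand-in for a real-time routine

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    slaves = [mp.Process(target=slave, args=(tasks, results)) for _ in range(3)]
    for p in slaves:
        p.start()
    for job_id in range(6):                    # master assigns work
        tasks.put((job_id, list(range(job_id + 1))))
    for _ in range(6):
        print(results.get())                   # master gathers results
    for _ in slaves:
        tasks.put(None)
    for p in slaves:
        p.join()
```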
Abstract:
This paper describes the implementation of a TMR (Triple Modular Redundant) microprocessor system on an FPGA. The system exhibits true redundancy in that three instances of the same processor system (both software and hardware) execute in parallel. The described system uses software to control external peripherals, and a voter is used to output the correct result. An error indication is asserted whenever only two of the three outputs match or all three outputs disagree. The software conforms to a particular safety-critical coding guideline/standard that is popular in industry. The system was verified by injecting various faults into it.
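A minimal sketch of the majority voting and error indication described above (the paper's voter is system hardware/logic; this only illustrates the decision rule):

```python
def tmr_vote(a, b, c):
    """Majority-vote three redundant outputs, as a TMR voter would.
    Returns (voted_output, error_flag): the flag is asserted when the outputs
    are not unanimous, and the output is None if all three disagree."""
    if a == b == c:
        return a, False        # all agree: no error
    if a == b or a == c:
        return a, True         # one channel disagrees: fault masked by voting
    if b == c:
        return b, True
    return None, True          # all three disagree: no majority available

print(tmr_vote(7, 7, 7))   # (7, False)
print(tmr_vote(7, 7, 9))   # (7, True)
print(tmr_vote(1, 2, 3))   # (None, True)
```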
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration techniques. Recent software and hardware developments have greatly eased the development burden of real-time image analysis, leading to portable systems built from cheap PC hardware and software exploiting the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognizing hand gestures, using efficient coding and MMX acceleration to achieve real-time performance on low-cost hardware.
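As an illustrative analogy of the SIMD-style acceleration mentioned above (the paper hand-codes MMX routines; here NumPy's vectorised array operations stand in for "many pixels per instruction", and the thresholds are hypothetical), a first skin-colour segmentation step of such a gesture recogniser could look like this:

```python
import numpy as np

def skin_mask(rgb):
    """rgb: HxWx3 uint8 frame -> boolean mask of candidate hand pixels.
    The rule and thresholds are hypothetical, chosen only for illustration;
    the whole frame is processed with vectorised (SIMD-like) operations."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
print(skin_mask(frame).mean())     # fraction of pixels flagged as skin
```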
Abstract:
This thesis aimed to understand how the teaching staff of the Universidade Estadual de Mato Grosso do Sul (UEMS) perceives, understands and reacts to the incorporation and use of Information and Communication Technologies (ICTs) in the institution's undergraduate courses, considering the new dialogic communication processes they can enable in today's society. Methodologically, the thesis comprises bibliographic research grounding the fields of Education and Communication, as well as Educommunication; documentary research to contextualize the locus of the study; and an exploratory study based on an online questionnaire answered voluntarily by 165 UEMS faculty members. It was found that the professors use ICTs daily in their personal activities and, to a lesser extent, in professional settings. The challenges lie in better preparing these faculty members and offering continuing training so that they use ICTs more effectively in the classroom. It is also noted that advances in technology and new communication ecosystems have built new and different realities, making learning a non-linear process and requiring a revision of pedagogical projects in higher education so that they enable constructive dialogue between communication and education. The institutional infrastructure for ICTs is another obstacle identified, both in the acquisition and in the maintenance of these technological resources by the University. Finally, further studies are proposed to discuss changes in faculty employment contracts, since working appropriately with ICTs demands more of the faculty member's time and dedication.
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets presents a number of problems, particularly when maximum likelihood methods are used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more. Other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
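A minimal sketch of the general idea of parallelising an approximate data likelihood across processor cores, not the paper's exact Vecchia/Tresp-based algorithms: the observations are split into blocks (here crudely, by sorting on one coordinate), each block's Gaussian log-likelihood is evaluated under an assumed exponential covariance with hypothetical sill and range parameters, and the block terms are summed using parallel worker processes.

```python
import numpy as np
from multiprocessing import Pool

def block_loglik(args):
    """Gaussian log-likelihood of one block under an exponential covariance
    with (hypothetical) sill and range parameters."""
    coords, y, sill, range_ = args
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = sill * np.exp(-d / range_) + 1e-8 * np.eye(len(y))   # jitter for stability
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y) + len(y) * np.log(2 * np.pi))

def approx_loglik(coords, y, sill, range_, n_blocks=4, workers=4):
    # Crude spatial blocking: split points by their x-coordinate order.
    idx = np.array_split(np.argsort(coords[:, 0]), n_blocks)
    jobs = [(coords[i], y[i], sill, range_) for i in idx]
    with Pool(workers) as pool:
        return sum(pool.map(block_loglik, jobs))   # blocks evaluated in parallel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(2000, 2))
    y = rng.normal(size=2000)
    print(approx_loglik(coords, y, sill=1.0, range_=10.0))
```

Because each block only needs its own covariance matrix, the cubic cost of the full-data factorisation is replaced by several much smaller factorisations that can run on separate cores, which is the source of the speed-up the abstract reports.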