191 results for Intel 8086 (Microprocessor)
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration. Recent software and hardware developments have greatly eased the development burden of real-time image analysis, leading to portable systems built from cheap PC hardware and software that exploits the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognising hand gestures, using efficient coding and MMX acceleration to achieve real-time performance on low-cost hardware.
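As a rough illustration of the kind of SIMD inner loop the abstract refers to, the sketch below binarises a greyscale frame 16 pixels at a time. It uses SSE2 intrinsics (a successor of the MMX instruction set named in the paper, since MMX intrinsics are effectively obsolete on current compilers); the threshold value and frame size are arbitrary assumptions.

```cpp
// Illustrative sketch of SIMD-accelerated image thresholding, a typical early
// step in a gesture-segmentation pipeline. Assumption: SSE2 intrinsics stand in
// for the MMX instructions discussed in the paper.
#include <emmintrin.h>   // SSE2
#include <cstdint>
#include <cstdio>
#include <vector>

void threshold_simd(const uint8_t* src, uint8_t* dst, size_t n, uint8_t thresh) {
    // Unsigned byte compare emulated by flipping the sign bit before a signed compare.
    const __m128i t    = _mm_set1_epi8(static_cast<char>(thresh ^ 0x80));
    const __m128i bias = _mm_set1_epi8(static_cast<char>(0x80));
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i px   = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + i));
        __m128i mask = _mm_cmpgt_epi8(_mm_xor_si128(px, bias), t);
        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + i), mask);  // 0xFF where px > thresh
    }
    for (; i < n; ++i) dst[i] = (src[i] > thresh) ? 0xFF : 0x00;      // scalar tail
}

int main() {
    std::vector<uint8_t> frame(640 * 480, 100), out(frame.size());
    threshold_simd(frame.data(), out.data(), frame.size(), 90);
    std::printf("out[0] = %u\n", out[0]);   // 255: every pixel exceeds the threshold
    return 0;
}
```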
Abstract:
This thesis investigated how the faculty of the Universidade Estadual de Mato Grosso do Sul (UEMS) perceives, understands and reacts to the incorporation and use of Information and Communication Technologies (ICTs) in the institution's undergraduate courses, considering the new dialogic communication processes they can enable in today's society. Methodologically, the thesis combines bibliographic research grounding the fields of Education and Communication, as well as Educommunication; documentary research to contextualise the research setting; and an exploratory study based on an online questionnaire answered voluntarily by 165 UEMS faculty members. It was found that faculty use ICTs daily in their personal activities and, to a lesser extent, in professional settings. The challenges lie in better preparing these faculty members and offering continuing training so that they use ICTs more effectively in the classroom. The study also highlights that advances in technology and new communication ecosystems have built new and different realities, making learning a non-linear process and demanding a revision of pedagogical projects in higher education so that they enable constructive dialogue between communication and education. Institutional ICT infrastructure is another obstacle identified, both in the acquisition and in the maintenance of this technological apparatus by the University. Finally, the thesis proposes further studies and research to discuss changes to faculty employment contracts, since working appropriately with ICTs demands more of the faculty member's time and dedication.
Abstract:
The use of digital communication systems is increasing very rapidly. This is due to lower system implementation cost compared with analogue transmission and, at the same time, the ease with which several types of data source (data, digitised speech and video, etc.) can be mixed. The emergence of packet broadcast techniques as an efficient form of multiplexing, especially with the use of contention random multiple access protocols, has led to widespread application of these distributed access protocols in local area networks (LANs) and to their further extension to radio and mobile radio communication applications. In this research, a modified version of the distributed access contention protocol, which uses the packet broadcast switching technique, is proposed. Carrier sense multiple access with collision avoidance (CSMA/CA) is found to be the most appropriate protocol, with the ability to satisfy equally the operational requirements of local area networks and of radio and mobile radio applications. The proposed version of the protocol is designed so that all desirable features of its predecessors are maintained, while the shortcomings are eliminated and additional features are added to strengthen its ability to work with radio and mobile radio channels. Operational performance evaluation of the protocol has been carried out for the non-persistent and slotted non-persistent variants through mathematical and simulation modelling, and the results obtained from the two modelling procedures validate the accuracy of both methods; the protocol compares favourably with its predecessor, CSMA/CD (with collision detection). A further extension of the protocol to multichannel operation has been suggested, and two multichannel systems based on the CSMA/CA protocol for medium access are therefore proposed: the dynamic multichannel system, which is based on two types of channel selection, random choice (RC) and idle choice (IC), and the sequential multichannel system. The latter is proposed in order to suppress the effect of the hidden terminal, which is always a major problem when contention random multiple access protocols are used over radio and mobile radio channels. Performance evaluation has been carried out using mathematical modelling for the dynamic system and simulation modelling for the sequential system. Both systems are found to improve system operation and fault tolerance when compared to single-channel operation.
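The non-persistent CSMA/CA transmit rule described above can be summarised in a few lines. The sketch below is a simplified illustration, not the thesis's model: the Channel type, the backoff bounds and the collision-avoidance deferral window are all placeholder assumptions.

```cpp
// Sketch of the non-persistent CSMA/CA rule: sense the channel; if busy, back
// off for a random interval and re-sense; if idle, defer briefly (collision
// avoidance) and transmit. All parameters are illustrative assumptions.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <random>
#include <thread>

// Stub channel so the sketch compiles; a real station would sense the radio medium.
struct Channel {
    bool carrier_sensed() const { return false; }                 // stub: medium always idle
    bool transmit(const void*, std::size_t) { return true; }      // stub: always succeeds
};

bool send_nonpersistent_csma_ca(Channel& ch, const void* frame, std::size_t len,
                                int max_attempts = 8) {
    std::mt19937 rng{std::random_device{}()};
    for (int attempt = 0; attempt < max_attempts; ++attempt) {
        if (ch.carrier_sensed()) {
            // Non-persistent: do not keep listening; reschedule after a random delay.
            std::uniform_int_distribution<int> backoff_ms(1, 1 << std::min(attempt, 6));
            std::this_thread::sleep_for(std::chrono::milliseconds(backoff_ms(rng)));
            continue;
        }
        // Collision avoidance: a short random deferral so stations that sensed
        // the channel idle at the same moment spread out their transmissions.
        std::uniform_int_distribution<int> defer_us(0, 500);
        std::this_thread::sleep_for(std::chrono::microseconds(defer_us(rng)));
        if (!ch.carrier_sensed() && ch.transmit(frame, len))
            return true;    // delivered
    }
    return false;           // give up after max_attempts
}

int main() {
    Channel ch;
    char frame[64] = {};
    return send_nonpersistent_csma_ca(ch, frame, sizeof frame) ? 0 : 1;
}
```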
Abstract:
Methods and software for the integration of databases (DBs) on the properties of inorganic materials and substances have been developed. The integration of the information systems is based on a combination of known approaches: EII (Enterprise Information Integration) and EAI (Enterprise Application Integration). The kernel of the integrated system is the metabase, a special database that stores data on the contents of the integrated DBs. The proposed methods have been applied to create an integrated system of DBs in the field of inorganic chemistry and materials science. An important feature of the developed integrated system is its ability to include DBs created with different DBMSs on essentially different computer platforms, Sun (the DB "Diagram") and Intel (the other DBs), and under diverse operating systems, Sun Solaris (the DB "Diagram") and Microsoft Windows Server (the other DBs).
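A minimal sketch of the metabase idea follows: one catalogue record per integrated DB lets a mediator route a property query to the right sources. The field names, the example entries and the second DB name are illustrative assumptions, not the authors' schema; only "Diagram" is named in the abstract.

```cpp
// Hypothetical metabase record: describes one integrated database so a query
// for a given property can be routed to the DBs that hold it.
#include <cstdio>
#include <string>
#include <vector>

struct MetabaseEntry {
    std::string db_name;                  // e.g. "Diagram"
    std::string dbms;                     // DBMS the source DB was built with
    std::string platform;                 // e.g. "Sun Solaris", "Microsoft Windows Server"
    std::string endpoint;                 // connection string or service URL (placeholder)
    std::vector<std::string> properties;  // substance properties the DB covers
};

// Return the names of all DBs that advertise the requested property.
std::vector<std::string> find_sources(const std::vector<MetabaseEntry>& metabase,
                                      const std::string& property) {
    std::vector<std::string> sources;
    for (const auto& e : metabase)
        for (const auto& p : e.properties)
            if (p == property) { sources.push_back(e.db_name); break; }
    return sources;
}

int main() {
    // Entries are invented for illustration only.
    std::vector<MetabaseEntry> metabase = {
        {"Diagram",  "DBMS A", "Sun Solaris",              "endpoint-1", {"phase diagram"}},
        {"Bandgap",  "DBMS B", "Microsoft Windows Server", "endpoint-2", {"band gap", "crystal structure"}},
    };
    for (const auto& name : find_sources(metabase, "band gap"))
        std::printf("band gap data available in: %s\n", name.c_str());
    return 0;
}
```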
Abstract:
This report uses the Duke CGGC global value chain (GVC) framework to examine the role of the Philippines in the global electronics & electrical (E&E) industry and identify opportunities to upgrade. Electronics and electrical equipment have played an important role in the Philippine economy since the 1970s and form the foundation of the country’s export basket today. In 2014, these sectors accounted for 47% of total exports from the Philippines at US$28.8 billion, of which 41% was from electronics, and 6% from electrical products. From a global perspective, while the Philippines is not the leading exporter in any particular product category, it is known for its significant number of semiconductor assembly and test (A&T) facilities. The global economic crisis (2008-09), combined with the exit of Intel (2009), had a significant negative impact on electronics exports and, although steadily increasing, they have not yet rebounded to pre-crisis levels. Nonetheless, investment in the E&E industries has picked up since 2010; in the past five years, there have been 110 new investments in these sectors. Another positive sign is the low exit rate; with the exception of Intel, companies that have invested in the Philippines have stayed, with several operations dating back to the late 1970s and 1980s. These firms have not only stayed, but have continued to grow and expand in the country due to the quality of the workforce and satisfaction with the Philippine Economic Zone Authority (PEZA) environment. The growth of the industry has significantly benefited from foreign investment and close ties with Japanese firms.
Abstract:
Android OS supports multiple communication methods between apps. This opens up the possibility of carrying out threats in a collaborative fashion, cf. the Soundcomber example from 2011. In this paper we provide a concise definition of collusion and report on a number of automated detection approaches, developed in cooperation with Intel Security.
Abstract:
App collusion refers to two or more apps working together to achieve a malicious goal that they could not achieve individually. The permissions-based security model (PBSM) for Android does not address this threat, as it is limited to mitigating risks posed by individual apps. This paper presents a technique for assessing the threat of collusion between apps, which is a first step towards quantifying collusion risk and allows us to narrow the search down to candidate apps for collusion, which is critical given the high volume of Android apps available. We present our empirical analysis using a classified corpus of over 29,000 Android apps provided by Intel Security.
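The candidate-filtering idea can be illustrated with a toy rule, which is only a stand-in for the paper's technique: flag a pair of apps when one holds a "source" permission (access to sensitive data), the other holds a "sink" permission (an exfiltration path), and both declare a shared communication channel. The permission lists, channel model and package names below are assumptions.

```cpp
// Toy collusion-candidate filter, illustrative only: split capability across a
// pair of apps plus a shared channel (e.g. a common broadcast action).
#include <set>
#include <string>

struct AppProfile {
    std::string package;
    std::set<std::string> permissions;   // e.g. "READ_SMS", "INTERNET"
    std::set<std::string> channels;      // e.g. shared intent actions, external storage
};

bool collusion_candidate(const AppProfile& a, const AppProfile& b) {
    const std::set<std::string> sources = {"READ_SMS", "READ_CONTACTS", "ACCESS_FINE_LOCATION"};
    const std::set<std::string> sinks   = {"INTERNET", "SEND_SMS"};

    auto has_any = [](const std::set<std::string>& perms, const std::set<std::string>& wanted) {
        for (const auto& w : wanted) if (perms.count(w)) return true;
        return false;
    };
    // One app can read sensitive data while the other can send data out...
    bool split_capability = (has_any(a.permissions, sources) && has_any(b.permissions, sinks)) ||
                            (has_any(b.permissions, sources) && has_any(a.permissions, sinks));
    // ...and they share at least one communication channel.
    bool shared_channel = false;
    for (const auto& c : a.channels) if (b.channels.count(c)) { shared_channel = true; break; }
    return split_capability && shared_channel;
}

int main() {
    // Hypothetical app pair for illustration.
    AppProfile a{"com.example.notes",   {"READ_SMS"}, {"com.example.SHARED_ACTION"}};
    AppProfile b{"com.example.weather", {"INTERNET"}, {"com.example.SHARED_ACTION"}};
    return collusion_candidate(a, b) ? 0 : 1;   // exits 0: this pair is flagged
}
```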
Abstract:
The semiconductor industry's drive towards faster, smaller and cheaper integrated circuits has led the industry to smaller node devices. The integrated circuits now in volume production belong to the 22 nm and 14 nm technology nodes. In 2007 the 45 nm technology arrived with the revolutionary high-k/metal gate structure. The 22 nm technology uses a fully depleted tri-gate transistor structure, and the 14 nm technology is a continuation of it; Intel uses second-generation tri-gate technology in its 14 nm devices. After 14 nm, the semiconductor industry is expected to continue scaling with 10 nm devices followed by 7 nm, and IBM has recently announced the successful production of 7 nm node test chips. This is how the nanoelectronics industry is proceeding with its scaling trend. The present technology nodes require selective deposition and selective removal of materials; atomic layer deposition and atomic layer etching are the respective techniques used for these steps. Atomic layer deposition remains a forward-looking manufacturing approach that deposits materials and films in exact places. Beyond the nano/microelectronics industry, ALD is also widening its application areas and acceptance, and the use of ALD equipment in industry shows a diversification trend: large-area, batch-processing, particle ALD and plasma-enhanced ALD equipment are becoming prominent in industrial applications. In this work, the development of an atomic layer deposition tool with microwave plasma capability is described, which is affordable even for lightly funded research labs.
Abstract:
In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems, which covers a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to newer accelerators such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations like this, written in the same descriptive language for different architectures, are very important in order to really assess the possibility of using heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
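A minimal OpenCL host-side sketch follows to show the portability property the abstract relies on: the same kernel source is compiled and launched on whichever device the runtime exposes, be it a multicore CPU, a GPU or a Xeon Phi. The kernel is a trivial element-wise scaling, not the HYCA algorithm, and error handling is elided.

```cpp
// Minimal OpenCL host sketch (not the HYCA kernels): compile one kernel source
// for whatever device is found first and run it. Error handling omitted.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc =
    "__kernel void scale(__global float* x, const float a) {"
    "    size_t i = get_global_id(0);"
    "    x[i] *= a;"
    "}";

int main() {
    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "scale", &err);

    std::vector<float> data(1024, 1.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                data.size() * sizeof(float), data.data(), &err);
    float a = 2.0f;
    clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kern, 1, sizeof(float), &a);

    size_t global = data.size();
    clEnqueueNDRangeKernel(q, kern, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, data.size() * sizeof(float),
                        data.data(), 0, nullptr, nullptr);
    std::printf("data[0] = %f\n", data[0]);   // expect 2.0 on any OpenCL device
    return 0;
}
```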
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems: they need specialised software before they can be fully utilised, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs need to be parallelised. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Moreover, in Network-on-Chip based processors the network may become congested and the cores may work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelising a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45x speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly the system may be damaged. One way to manage power is to set a power budget for the system; but if this power is drawn by just a few of the many cores, those few cores become extremely hot and may be damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is highly susceptible to intra-die process variation and aging phenomena, which cause sensor values to drift from their nominal values and make efficient calibration necessary before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and the thermal sensors located on the cores are sensitive to a core's current voltage level, so dedicated calibration is needed for each voltage level. This thesis therefore also proposes a general-purpose, software-based auto-calibration approach for thermal sensors over a range of voltages.
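The dynamic load-balancing idea can be sketched as workers pulling fault batches from a shared index, so that faster cores naturally take more work. The sketch below is an illustration only, not the Single-Chip Cloud Computer implementation; simulate_batch() and the batch count are hypothetical placeholders.

```cpp
// Illustrative dynamic load balancing: a shared atomic work pointer lets each
// worker grab the next free batch, so uneven per-batch cost is absorbed.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

static void simulate_batch(int batch) {
    // Placeholder for simulating one batch of faults; cost varies per batch.
    volatile double x = 0;
    for (int i = 0; i < 100000 * (1 + batch % 7); ++i) x += i * 1e-9;
}

int main() {
    const int num_batches = 256;
    const unsigned num_workers = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<int> next{0};                 // shared work pointer

    std::vector<std::thread> workers;
    for (unsigned w = 0; w < num_workers; ++w)
        workers.emplace_back([&] {
            for (int b; (b = next.fetch_add(1)) < num_batches; )
                simulate_batch(b);            // each worker takes the next free batch
        });
    for (auto& t : workers) t.join();
    std::printf("simulated %d batches on %u workers\n", num_batches, num_workers);
    return 0;
}
```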
Abstract:
Given that growing industrialisation and the globalisation of the economy demand ever higher quality standards, occupational safety and health have become an added value and a key element for obtaining certifications, opening markets, working to international standards and complying with both national and international regulations. Against this background, the demand for greater productivity has raised stress levels among workers, giving momentum to the study, assessment and prevention of psychosocial risks in the workplace. To address this, it is necessary to understand that being in a work environment makes the worker vulnerable to the risk factors present in the workplace. These appear as environmental, family and personal factors which, although inherent to human beings in the course of their development, directly affect employees' health if they are not controlled. In Colombia, the systematic observation of how workers interact with their jobs has led to the recognition of psychosocial risks and of how they affect production levels. For the benefit of both workers and companies, the Ministerio de la Protección Social issued Resolution 2646 of 2008, "which establishes provisions and defines responsibilities for the identification, evaluation, prevention, intervention and permanent monitoring of exposure to psychosocial risk factors at work, and for determining the origin of pathologies caused by occupational stress". It is therefore considered pertinent to develop new studies on the chronology of Colombian regulation of psychosocial risks and on how it has contributed to improving workers' mental health and to creating healthy work environments.
Abstract:
Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, simply by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main competitor to GPRM for solving three well-known problems on both platforms: LU Factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit GPRM's model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU Factorisation results in a notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM's task creation and distribution for very short computations using the Image Convolution benchmark, and show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List Processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
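The task-granularity point made for the Image Convolution benchmark can be illustrated as follows: instead of spawning one task per output row, rows are grouped into larger chunks so that task-creation overhead is amortised. The sketch below uses plain std::async rather than GPRM or OpenMP, and the chunk size and the 3x3 box blur are assumptions.

```cpp
// Illustration of task chunking (not GPRM itself): group image rows into
// chunks of 128 so that each task does enough work to amortise its overhead.
#include <algorithm>
#include <cstdio>
#include <future>
#include <vector>

// Naive 3x3 box blur of rows [r0, r1) from src into dst (width W, height H).
static void blur_rows(const std::vector<float>& src, std::vector<float>& dst,
                      int W, int H, int r0, int r1) {
    for (int y = r0; y < r1; ++y)
        for (int x = 0; x < W; ++x) {
            float sum = 0; int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < H && xx >= 0 && xx < W) { sum += src[yy * W + xx]; ++n; }
                }
            dst[y * W + x] = sum / n;
        }
}

int main() {
    const int W = 2048, H = 2048, chunk = 128;   // 128 rows per task, not one
    std::vector<float> src(W * H, 1.0f), dst(W * H, 0.0f);

    std::vector<std::future<void>> tasks;
    for (int r0 = 0; r0 < H; r0 += chunk)
        tasks.push_back(std::async(std::launch::async, blur_rows, std::cref(src),
                                   std::ref(dst), W, H, r0, std::min(r0 + chunk, H)));
    for (auto& t : tasks) t.get();
    std::printf("dst[0] = %f\n", dst[0]);        // blur of a constant image stays 1.0
    return 0;
}
```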