Abstract:
This work is the Final Report of the Curricular Internship carried out as an integral and concluding part of the Master's degree in Sports Training at the Faculdade de Motricidade Humana. The internship took place at Real Sport Clube, specifically with the Under-19 (Juniores A) squad during the 2014/2015 season, with the goal of fostering the integration and consolidation, in a practical context, of the theoretical knowledge acquired throughout the course. This report is structured in chapters that present the activities carried out during the internship. Its purpose is to describe and reflect on the work done over the season, evaluating both the work itself and the knowledge derived from it through critical description and analysis of both dimensions. The report opens with a review of the literature supporting professional practice, organized into three areas: Area 1 concerns the organization and management of the training and competition process, covering the design of training cycles, the conduct of sessions, and competition monitoring, along with information about the RSC Under-19 squad. Area 2 presents a research study using GPS devices, reporting the distances covered by different athletes in official matches. Finally, Area 3 describes the two events organized for the continuing education of football coaches. The Curricular Internship thus proved an excellent learning opportunity, promoting the acquisition and development of professional and personal skills, attitudes, and the ability to solve pedagogical problems, serving as a starting point for future entry into the labour market.
Abstract:
Introduction. Elevated serum uric acid (sUA) is considered a marker of cardiometabolic risk. It has not been studied in depth in schoolchildren with obesity in Nuevo León. Objective. To study sUA levels and their association with clinical-metabolic and dietary indicators in schoolchildren with obesity and with normal BMI. Materials and methods. Retrospective, cross-sectional, correlational study of 530 self-selected schoolchildren (6-12 years) from the FaSPyN-UANL Childhood Obesity program: 322 with obesity and 208 with normal BMI. Clinical, metabolic, and dietary indicators were measured by certified personnel. Hyperuricemia was defined as sUA > 5.5 mg/dL. Dietary intake was determined with a 24-hour recall and the Food Processor software. Normality testing (Kolmogorov-Smirnov), the two-sample t-test, two-factor ANOVA, the Mann-Whitney U test, and binary logistic regression were applied in SPSS. Results. Significant differences (p < 0.05) were found in sex, age, and mean sUA level between schoolchildren with obesity and those with normal BMI. Hyperuricemia was present in 24.3% of schoolchildren with obesity. Significant differences were found between schoolchildren with and without hyperuricemia for most indicators. Associations of sUA with obesity (OR = 4.5), male sex (OR = 2.6), age (OR = 2.3), triglycerides (OR = 1.3), protein intake (OR = 1.7), sugar intake (OR = 1.5), and dietary fiber intake (OR = 0.6) were significant (p < 0.05). Conclusions. Schoolchildren who were obese, male, and older had higher sUA levels. The frequency of hyperuricemia was higher than reported in other populations. sUA levels were associated mainly with obesity, male sex, older age, elevated serum triglycerides, excessive protein and sugar intake, and deficient dietary fiber intake, underscoring the importance of sUA as a marker of metabolic and nutritional risk.
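A minimal sketch of the kind of binary logistic regression reported above. The study used SPSS; statsmodels stands in here, and the file and column names are hypothetical.

```python
# Hedged sketch: binary logistic regression of hyperuricemia on the
# predictors named in the abstract. All column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("schoolchildren.csv")  # hypothetical dataset
df["hyperuricemia"] = (df["serum_uric_acid"] > 5.5).astype(int)  # > 5.5 mg/dL

predictors = ["obesity", "male_sex", "age", "triglycerides",
              "protein_intake", "sugar_intake", "fiber_intake"]
X = sm.add_constant(df[predictors])
fit = sm.Logit(df["hyperuricemia"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)  # exponentiated coefficients, cf. the ORs above
print(odds_ratios.round(2))
```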
Abstract:
The transport of fluids through pipes is widely used in the oil industry, pipelines being an important link in the fluid logistics chain. However, pipeline walls deteriorate due to several factors and may leak fluids into the environment, justifying investment in leak-detection techniques and methods to minimize fluid loss and environmental damage. This work presents the development of a supervisory module intended to inform the operator of a leak in the monitored pipeline in the shortest possible time, so that the operator can start the procedures that bring the leak to an end. This module is a component of a system designed to detect leaks in oil pipelines using sonic technology, wavelets, and neural networks. The plant used in the development and testing of the module was the LAMP tank system, with its LAN as the monitoring network. The proposal consists of two main stages. First, the performance of the supervisory module's communication infrastructure is assessed. Then, leaks are simulated so that the DSP sends information to the supervisory module, which computes the leak location and indicates the sensor closest to the leak; using the LAMP tank system, pressure in the monitored pipeline is captured by piezoresistive sensors, processed by the DSP, and sent to the supervisory module for presentation to the user in real time.
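The abstract does not spell out the localization formula; below is a minimal sketch of the classic time-difference-of-arrival estimate that such sonic systems commonly use, with hypothetical sensor spacing and wave speed, not necessarily the author's exact method.

```python
# Hedged sketch: locate a leak from the arrival times of the pressure
# (sonic) wave at two sensors bracketing the monitored pipe section.

def locate_leak(t1: float, t2: float, sensor_distance: float,
                wave_speed: float = 1200.0) -> float:
    """Return the leak position (m) measured from sensor 1.

    t1, t2          -- wave arrival times at sensors 1 and 2 (s)
    sensor_distance -- pipe length between the two sensors (m)
    wave_speed      -- pressure-wave speed (m/s); ~1200 m/s is a
                       typical value for liquid pipelines (assumption)
    """
    dt = t1 - t2  # negative when the leak is closer to sensor 1
    x = (sensor_distance + wave_speed * dt) / 2.0
    return min(max(x, 0.0), sensor_distance)  # clamp to the pipe span

# Example: sensors 2 km apart; the wave reaches sensor 1 first by 0.5 s
x = locate_leak(t1=10.0, t2=10.5, sensor_distance=2000.0)
print(f"Leak estimated {x:.0f} m from sensor 1 "
      f"(closer to sensor {'1' if x < 1000 else '2'})")
```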
Abstract:
There is increasing concern with reducing cost and overhead during the development of reliable systems. Selective protection of the most critical parts of a system is a viable way to obtain a high level of reliability at a fraction of the cost. In particular, to design a selective fault-mitigation strategy for processor-based systems, it is mandatory to identify and prioritize the most vulnerable registers in the register file as the best candidates for protection (hardening). This paper presents an application-based metric to estimate the criticality of each register in the microprocessor register file. The proposed metric combines three different criteria based on common features of the executed applications. The applicability and accuracy of our proposal have been evaluated on a set of applications running on different microprocessors. Results show a significant improvement in accuracy compared to previous approaches, regardless of the underlying architecture.
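A minimal sketch of how three application-derived criteria might be normalized and combined into a single register-criticality ranking. The concrete criteria (read frequency, live time, dependents) and the weights are illustrative assumptions, not the paper's.

```python
# Hedged sketch: weighted combination of three per-register criteria
# into one criticality score; most critical registers rank first.

def criticality(registers, w=(0.4, 0.4, 0.2)):
    """registers: dict  name -> (reads, live_cycles, dependents).
    Each criterion is normalized to [0, 1] before the weighted sum."""
    def norm(values):
        hi = max(values) or 1
        return [v / hi for v in values]

    names = list(registers)
    reads, live, deps = (norm([registers[n][i] for n in names])
                         for i in range(3))
    score = {n: w[0]*r + w[1]*l + w[2]*d
             for n, r, l, d in zip(names, reads, live, deps)}
    # Highest score = best candidate for hardening
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

profile = {"r1": (900, 5000, 12), "r2": (150, 200, 3), "r3": (400, 4200, 7)}
for reg, s in criticality(profile):
    print(f"{reg}: {s:.2f}")
```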
Abstract:
Hardware vendors make an important effort to create low-power CPUs that keep battery duration and durability above acceptable levels. To achieve this goal and provide a good performance-energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two types of cores: big cores oriented to performance, and LITTLE cores, slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual-exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM while also benefiting from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account the power/performance requirements of the application and what the architecture offers. To understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have analysed a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for all applications but one, which requires the computing performance that the big cores offer.
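A minimal sketch of how one might compare the two clusters on Linux by pinning the same workload to each. The core numbering (LITTLE = 0-3, big = 4-7) is a common layout but an assumption here, and the synthetic kernel merely stands in for a TM benchmark.

```python
# Hedged sketch: time a CPU-bound workload pinned to each cluster of a
# big.LITTLE SoC (Linux-only; uses the process affinity mask).
import multiprocessing as mp
import os
import time

LITTLE, BIG = {0, 1, 2, 3}, {4, 5, 6, 7}   # assumed cluster layout

def worker(n: int) -> int:
    s = 0
    for i in range(n):          # stand-in for a transactional workload
        s += i * i
    return s

def run_on(cluster, procs: int = 4, n: int = 2_000_000) -> float:
    os.sched_setaffinity(0, cluster)        # children inherit this pinning
    t0 = time.perf_counter()
    with mp.Pool(procs) as pool:
        pool.map(worker, [n] * procs)
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"LITTLE cluster: {run_on(LITTLE):.2f} s")
    print(f"big cluster:    {run_on(BIG):.2f} s")
```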
Abstract:
After a decade evolving in the High Performance Computing arena, GPU-equipped supercomputers have conquered the Top500 and Green500 lists, providing unprecedented levels of computational power and memory bandwidth. This year, major vendors have introduced new accelerators based on 3D memory, such as Intel's Xeon Phi Knights Landing and Nvidia's Pascal architecture. This paper reviews the hardware features of those new HPC accelerators and unveils their potential performance for scientific applications, with an emphasis on the Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) used by commercial products according to the roadmaps already announced.
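A minimal sketch of the STREAM-style triad microbenchmark commonly used to probe the bandwidth such 3D-memory parts advertise; this host-side NumPy version is only an illustration, not the paper's methodology.

```python
# Hedged sketch: rough sustained-bandwidth estimate via the STREAM triad.
import time
import numpy as np

N = 20_000_000                      # ~160 MB per float64 array
a = np.empty(N); b = np.random.rand(N); c = np.random.rand(N)

t0 = time.perf_counter()
a[:] = b + 2.5 * c                  # triad: 2 reads + 1 write per element
dt = time.perf_counter() - t0

bytes_moved = 3 * N * 8             # rough traffic estimate (ignores temps)
print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.1f} GB/s")
```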
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware-acceleration workflow for software applications, so that system performance can be improved under energy-consumption and on-chip resource-cost constraints. The characteristics of software applications are identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions; the performance of SoC systems can thus be improved if hardware acceleration is applied to the element that incurs the performance overhead. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware-acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware-acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed; the trade-off between these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against system requirements. (4) The system verification platform is designed based on the Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware-acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
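A minimal sketch of the profiling step, using Python's cProfile on a toy kernel in place of the hardware-counter profiling of the H.264 CODEC described above; the function names are hypothetical.

```python
# Hedged sketch: find the hotspot function that would be the candidate
# for conversion into an FPGA accelerator.
import cProfile
import pstats

def dct_like(block):                       # toy stand-in for a CODEC kernel
    return [sum(x * (i + 1) for i, x in enumerate(block)) for _ in block]

def encode(frames):
    out = []
    for f in frames:
        out.append(dct_like(f))            # expected hotspot
    return out

frames = [[float(i % 7) for i in range(64)] for _ in range(2000)]
cProfile.run("encode(frames)", "prof.out")
stats = pstats.Stats("prof.out").sort_stats("cumulative")
stats.print_stats(5)                       # top functions -> hardware candidates
```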
Abstract:
The purpose of this research study was to demonstrate the practical linguistic study and evaluation of dissertations using two examples of the latest technology, the microcomputer and the optical scanner. This involved developing efficient methods for data entry and creating computer algorithms appropriate for personal linguistic studies. The goal was to develop a prototype investigation demonstrating practical solutions for maximizing the linguistic potential of the dissertation database. Text was entered with a Dest PC Scan 1000 optical scanner, whose function was to copy the complete stack of educational dissertations from the Florida Atlantic University Library into an IBM XT microcomputer. The optical scanner demonstrated its practical value by copying 15,900 pages of dissertation text directly into the microcomputer. A total of 199 dissertations, or 72% of the entire stack of education dissertations (277), were successfully copied into the microcomputer's word processor, where each dissertation was analyzed for a variety of syntax frequencies. The results demonstrated the practical use of the optical scanner for data entry, the microcomputer for data and statistical analysis, and the availability of the college library as a natural setting for text studies. A supplemental benefit was the establishment of a computerized dissertation corpus available for future research and study. The final step was to build a linguistic model of the differences in dissertation writing styles by creating 7 factors from 55 dependent variables through principal components factor analysis. The 7 factors (textual components) were then named and described on a hypothetical construct defined as a continuum from a conversational, interactional style to a formal, academic writing style. The 7 factors were then grouped through discriminant analysis to create discriminant functions for each of the 7 independent variables. The results indicated that a conversational, interactional writing style was associated with more recent dissertations (1972-1987), an increase in author's age, females, and the department of Curriculum and Instruction, while a formal, academic writing style was associated with older dissertations (1972-1987), younger authors, males, and the department of Administration and Supervision. It was concluded that there were no significant differences in writing style due to subject matter (community college studies compared to other subject matter), nor due to the institution of dissertation origin (Florida Atlantic University, University of Central Florida, Florida International University).
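A minimal sketch of the statistical pipeline described above, with synthetic data standing in for the 199 dissertations and 55 syntax variables; scikit-learn's PCA and discriminant analysis replace the original software.

```python
# Hedged sketch: reduce many syntax-frequency variables to a few factors,
# then build a discriminant function over the factors. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(199, 55))        # 199 dissertations x 55 syntax measures
y = rng.integers(0, 2, size=199)      # e.g., department membership (synthetic)

factors = PCA(n_components=7).fit_transform(X)   # 7 "textual components"
lda = LinearDiscriminantAnalysis().fit(factors, y)
print("classification accuracy on the factors:", lda.score(factors, y))
```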
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system, so processor power optimization is crucial for satisfying the power-consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power-dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms when writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. The need has therefore arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of functions in the code that repeat during execution and building the power model on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the Insertion sort algorithm, based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power-measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
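A minimal sketch of the second modelling approach: count the comparisons Insertion sort actually performs and scale by a per-comparison energy. The energy constant below is a made-up placeholder, where the thesis measured real values on a LEON3 core.

```python
# Hedged sketch: average-case energy estimate for Insertion sort from
# its comparison count. E_CMP_NJ is a hypothetical per-comparison cost.
import random

E_CMP_NJ = 1.8  # hypothetical energy per comparison, nanojoules

def insertion_sort_comparisons(a):
    """Sort a copy of `a`, returning the number of key comparisons."""
    a, cmps = list(a), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            cmps += 1                  # one comparison of key vs a[j]
            if a[j] <= key:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return cmps

n = 256
trials = [insertion_sort_comparisons(random.sample(range(10_000), n))
          for _ in range(100)]
avg = sum(trials) / len(trials)
print(f"avg comparisons ~ {avg:.0f} (theory ~ n(n-1)/4 = {n*(n-1)/4:.0f})")
print(f"estimated energy ~ {avg * E_CMP_NJ:.0f} nJ")
```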
Abstract:
Bilinear pairings can be used to construct cryptographic systems with very desirable properties. A pairing maps members of groups on elliptic and genus 2 hyperelliptic curves to an extension of the finite field over which the curves are defined. The finite fields must, however, be large to ensure adequate security. The complicated group structure of the curves and the expensive field operations result in time-consuming computations that are an impediment to the practicality of pairing-based systems. The Tate pairing can be computed efficiently using the ηT method. Hardware architectures can be used to accelerate the required operations by exploiting the parallelism inherent in the algorithmic and finite-field calculations. The Tate pairing can be performed on elliptic curves of characteristic 2 and 3 and on genus 2 hyperelliptic curves of characteristic 2. Curve selection depends on several factors, including the desired computational speed, the area constraints of the target device, and the required security level. In this thesis, custom hardware processors for the acceleration of the Tate pairing are presented and implemented on an FPGA. The underlying hardware architectures are designed with care to exploit available parallelism while ensuring resource efficiency. The characteristic 2 elliptic curve processor contains novel units that return a pairing result in a very low number of clock cycles. Despite the more complicated computational algorithm, the speed of the genus 2 processor is comparable. Pairing computation on each of these curves can be appealing in applications with various attributes. A flexible processor that can perform pairing computation on elliptic curves of characteristic 2 and 3 has also been designed. An integrated hardware/software design and verification environment has been developed; this system automates the procedures required for robust processor creation and enables the rapid provision of solutions for a wide range of cryptographic applications.
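A minimal sketch of the core field arithmetic such processors accelerate: multiplication in a binary field GF(2^m) in polynomial basis. The field choice (m = 163 with a standard NIST reduction polynomial) is illustrative, not necessarily the one used in the thesis.

```python
# Hedged sketch: carry-less multiplication in GF(2^163) with reduction
# modulo the NIST B-163 polynomial x^163 + x^7 + x^6 + x^3 + 1.

M = 163
R = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a: int, b: int) -> int:
    """Carry-less multiply of a and b, then reduce modulo R."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # reduce: clear bits at positions >= M, highest first
    for i in range(p.bit_length() - 1, M - 1, -1):
        if (p >> i) & 1:
            p ^= R << (i - M)
    return p

x, y = 0x5A5A5A5A, 0x0F0F0F0F
print(hex(gf2m_mul(x, y)))
```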
Abstract:
Introduction: Poultry workers are at high risk of musculoskeletal disorders (MSDs) owing to repetitive manual work, prolonged standing, and postures outside the comfort angles of the upper limbs. Objective: To establish evidence-based recommendations for health interventions addressing MSDs in poultry workers. Methodology: A review was conducted of the primary studies published since 1990 in the Medline, ScienceDirect, and SciELO databases. Articles were classified by study type, study quality, and the level of evidence provided. Results: Recommendations from the available evidence for the comprehensive management of poultry-industry workers with MSD-related risks or events include the following: 1) adopting a systemic approach to worker care, 2) including psychosocial aspects when identifying and explaining health risks and events, 3) allowing breaks, micro-pauses, and exercise routines, 4) facilitating job rotation and job enlargement, and 5) improving work tools, especially knife sharpness. Conclusions: The interventions described in this review aim to reduce the incidence and prevalence of MSDs, reduce temporary and permanent disability caused by MSDs, improve industrial output, and lower both economic and human costs. Nevertheless, further research and studies are needed to provide stronger grounds for recommending the proposed types of interventions. Even so, health interventions for poultry-industry workers should be framed within the comprehensive provision of health services.
Abstract:
Solving a complex Constraint Satisfaction Problem (CSP) is a computationally hard task that may require a considerable amount of time. Parallelism has been applied successfully to the job, and there are already many applications capable of harnessing the parallel power of modern CPUs to speed up the solving process. Current Graphics Processing Units (GPUs), containing from a few hundred to a few thousand cores, possess a level of parallelism that surpasses that of CPUs, yet there are far fewer applications capable of solving CSPs on GPUs, leaving space for further improvement. This paper describes work in progress on solving CSPs on GPUs, CPUs, and other devices, such as Intel Many Integrated Cores (MICs), in parallel. It presents the gains obtained when applying more devices to solve some problems, and the main challenges that must be faced when using devices with architectures as different as CPUs and GPUs, with a greater focus on how to effectively achieve good load balancing between such heterogeneous devices.
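A minimal sketch of one simple load-balancing scheme for heterogeneous devices: slice the search space into many chunks and let each device pull the next chunk as soon as it finishes (self-scheduling). Device names and per-chunk times are synthetic; a real solver would dispatch to OpenCL/CUDA devices instead of sleeping.

```python
# Hedged sketch: self-scheduling over a shared chunk queue, so faster
# devices naturally claim more of the search space.
import queue
import threading
import time

chunks = queue.Queue()
for c in range(64):                 # 64 slices of the search space
    chunks.put(c)

def device(name: str, secs_per_chunk: float, done: dict) -> None:
    n = 0
    while True:
        try:
            chunks.get_nowait()
        except queue.Empty:
            break
        time.sleep(secs_per_chunk)  # stand-in for exploring one slice
        n += 1
    done[name] = n

done: dict[str, int] = {}
workers = [threading.Thread(target=device, args=(nm, s, done))
           for nm, s in [("CPU", 0.020), ("GPU", 0.004), ("MIC", 0.008)]]
for w in workers: w.start()
for w in workers: w.join()
print(done)   # chunk counts reflect each device's throughput
```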
Abstract:
Nowadays, the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, leads to a landscape that requires the development of embedded hardware and software to face the application effectively and efficiently. The development of application-based software on real-time/FPGA hardware is a good answer to these challenges: the FPGA grants parallel low-level, high-speed calculation and timing, while the real-time processor can handle high-level calculation layers, logging, and communication functions with determinism. Thanks to their software flexibility and small dimensions, these architectures fit naturally as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for test-bench and on-vehicle applications. Efforts have been made to build a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario are shown; dedicated solutions for prototype applications have been developed exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) and custom RCP functionalities, such as water injection and hydraulic brake control testing.
Design of High-Reliability Power-Supply Regulation Systems for Multi-Core Processors
Abstract:
Almost all components of the FIVR (the buck voltage regulator that supplies power to multi-core microprocessors) are implemented on the SoC die and therefore suffer from the reliability problems associated with microelectronic technology scaling, in particular process-parameter variation during fabrication and faults in the switching devices (opens or shorts). This thesis was carried out within a research project in collaboration with Intel Corporation and was developed in two parts. First, prior fault-analysis work on the FIVR was extended with a detailed study of the main effects of aging on the outputs of on-chip integrated voltage regulators. Then, a low-cost monitoring scheme was developed that can detect the effects of the FIVR's most likely faults in the field. The scheme can also detect, over the FIVR's lifetime, aging effects that cause it to operate incorrectly. The monitoring scheme was designed to be self-checking with respect to its internal faults, so that such faults cannot compromise the correct signalling of FIVR faults.
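A minimal software sketch of the monitoring idea: flag output samples that leave a tolerance window around the nominal rail, with a duplicated check illustrating the self-checking property. The thresholds are hypothetical, and the thesis implements this as an on-chip circuit, not software.

```python
# Hedged sketch: window comparator on the regulator output, duplicated
# so that disagreement between the two checks exposes a monitor fault
# (in hardware the two comparators can fail independently).

NOMINAL_V, TOL = 1.05, 0.05          # hypothetical rail and +/-5% window

def out_of_range(v: float) -> bool:
    return abs(v - NOMINAL_V) > NOMINAL_V * TOL

def monitor(samples):
    for v in samples:
        a = out_of_range(v)
        b = not (NOMINAL_V * (1 - TOL) <= v <= NOMINAL_V * (1 + TOL))
        if a != b:
            yield (v, "monitor fault")   # the duplicated checks disagree
        elif a:
            yield (v, "FIVR fault")      # output left the tolerance window

trace = [1.05, 1.06, 1.04, 0.91, 1.05, 1.19]
for v, flag in monitor(trace):
    print(f"{v:.2f} V -> {flag}")
```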
Abstract:
Isolated DC-DC converters play a significant role in fast charging and in maintaining the variable output voltage required for EV applications. This study aims to investigate different isolated DC-DC converters for onboard and offboard chargers; then, once the topology is selected, to study the control techniques; and, finally, to build a real-time converter model for Hardware-In-The-Loop (HIL) results. Among the isolated DC-DC topologies, the Dual Active Bridge (DAB) converter has the advantage of allowing bidirectional power flow, which enables operation in both Grid-to-Vehicle (G2V) and Vehicle-to-Grid (V2G) modes. Recently, DAB converters have been used in offboard chargers for high-voltage applications thanks to SiC and GaN MOSFETs; this technology also allows the use of higher switching frequencies. By employing soft-switching techniques to reduce switching losses, higher-switching-frequency operation is possible in the DAB. There are four phase-shift control techniques for the DAB converter: single phase shift, extended phase shift, dual phase shift, and triple phase shift. This thesis considers two control strategies, single-phase and dual-phase shift, to understand the circulating currents, power losses, and output-capacitor size reduction in the DAB. Hardware-In-The-Loop experiments are carried out for both controls at high switching frequencies using the PLECS software tool and the RT Box that supports PLECS. The root mean square error of the steady-state output voltage is also calculated at different sampling frequencies for both controls, to identify the sampling frequency achievable in real time. A DSP implementation is also executed to emulate the optimized DAB converter design, and final real-time simulation results are discussed for both the single-phase and dual-phase shift controls.
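A minimal sketch of the textbook single-phase-shift (SPS) power relation for a DAB, P = n*V1*V2*d*(1-d)/(2*fs*L) with normalized phase shift d = phi/pi; the component values are illustrative, not the thesis design.

```python
# Hedged sketch: transferred power vs. phase shift for a DAB under SPS.
# Power peaks at d = 0.5; negative d reverses the flow (V2G direction).

def dab_sps_power(v1, v2, d, fs, L, n=1.0):
    """Transferred power (W) for normalized phase shift d in [-0.5, 0.5]."""
    return n * v1 * v2 * d * (1 - abs(d)) / (2 * fs * L)

V1, V2 = 400.0, 400.0        # bridge voltages (V), hypothetical
FS, L = 100e3, 60e-6         # switching frequency (Hz), series inductance (H)

for d in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"d = {d:.1f}: P = {dab_sps_power(V1, V2, d, FS, L)/1e3:.1f} kW")
```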