19 results for parallel link mechanism
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Assessing grip strength is fundamental because of its relationship with the functional capacity of individuals: it allows levels of risk for future disability to be determined and, in turn, prevention strategies to be established. Most studies use the JAMAR hydraulic dynamometer, which provides the value of the isometric force obtained during the palmar grip movement. However, other dynamometers are available, such as the portable computerized E-Link dynamometer (Biometrics), which provides the maximum force (peak force) as well as other related variables, for example the rate of fatigue. There are, however, no agreement studies that would allow us to accept and compare (or not) the values obtained with the two devices and possibly use them interchangeably.
Abstract:
Introduction – Assessing grip strength has proved to be of vital importance because of its relationship with the functional capacity of individuals, allowing levels of risk for future disability to be determined and prevention strategies to be established. Most studies use the JAMAR hydraulic dynamometer, which provides the value of the isometric force obtained during the grip movement. However, other dynamometers are available, such as the portable computerized E-Link dynamometer (Biometrics), which provides the maximum force (peak force) in addition to other variables such as the rate of fatigue. There are, however, no studies that allow us to accept, or not, and compare the values obtained with both devices and perhaps use them interchangeably. Purpose – To evaluate the agreement between measurements of grip strength (peak force, in kg) obtained from two different portable dynamometers: a computerized one (E-Link, Biometrics) and a hydraulic one (JAMAR). Methodology – 29 subjects (13 men, 16 women; 22 ± 7 years; 23.2 ± 3.3 kg/m²) were assessed on two consecutive days at the same time of day. The test position was that recommended by the American Association of Occupational Therapists, and the best of three attempts with the dominant hand was used. A correlation analysis between the values obtained with each device (Spearman coefficient) and a Bland & Altman analysis of the agreement between the two measurements were performed. Results – The correlation coefficient between the two measurements was high (rs = 0.956, p < 0.001) and, in the Bland & Altman analysis, all values lie within the mean ± 2SD interval. Conclusions – The two measurements were shown to be concordant, revealing that the tested dynamometers are comparable and may be used interchangeably in different studies and populations.
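The agreement analysis described in this abstract can be reproduced with standard tools. The minimal Python sketch below, assuming two hypothetical arrays of paired peak-force readings (jamar_kg and elink_kg are placeholders, not the study data), computes the Spearman coefficient and the Bland & Altman bias and mean ± 2SD limits of agreement.

# Minimal sketch of the agreement analysis; the arrays are hypothetical
# placeholders for paired peak-force readings (kg) from the two dynamometers.
import numpy as np
from scipy.stats import spearmanr

jamar_kg = np.array([32.0, 41.5, 28.0, 50.2, 36.8])   # placeholder values
elink_kg = np.array([31.2, 42.0, 27.5, 49.8, 37.5])   # placeholder values

rho, p_value = spearmanr(jamar_kg, elink_kg)           # Spearman correlation

diff = jamar_kg - elink_kg                             # Bland & Altman analysis
bias = diff.mean()
sd = diff.std(ddof=1)
lower, upper = bias - 2 * sd, bias + 2 * sd            # mean ± 2SD limits

print(f"rs = {rho:.3f} (p = {p_value:.3g})")
print(f"bias = {bias:.2f} kg, limits of agreement = [{lower:.2f}, {upper:.2f}] kg")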
Abstract:
ISME, Thessaloniki, 2012
Abstract:
Levels of risk for future disability can be assessed with grip strength. This assessment is of fundamental importance for establishing prevention strategies, and it also allows relationships with the functional capacity of individuals to be examined. Most studies on grip strength use the JAMAR hydraulic dynamometer, which provides the value of the isometric force obtained during the grip movement and is considered the “gold standard” for measuring grip strength. However, different dynamometers are available commercially, such as the portable computerized E-Link dynamometer (Biometrics), which provides the maximum force (peak force) in addition to other variables, such as the rate of fatigue of hand strength. To our knowledge, there are no studies that allow us to accept, or not, and compare the values obtained with both devices and perhaps use them interchangeably. The aim of this study was to evaluate the absolute agreement between measurements of grip strength (peak force, in kg) obtained from two different portable dynamometers: a computerized one (E-Link, Biometrics) and a hydraulic one (JAMAR).
Abstract:
The application of a-SiC:H/a-Si:H pinpin photodiodes as WDM demultiplexer devices has proved useful in optical communications that use the WDM technique to encode multiple signals in the visible light range. This is required in short-range optical communication applications, where for cost reasons the link is provided by plastic optical fibers. Characterization of these devices has shown the presence of large photocapacitive effects. By superimposing background illumination on the pulsed channel, the device behaves as a filter, producing signal attenuation, or as an amplifier, producing signal gain, depending on the channel/background wavelength combination. We present results, obtained by numerical simulations, on the internal electric configuration of the a-SiC:H/a-Si:H pinpin photodiode. These results attribute the behavior of the device in the frequency domain to a wavelength-tunable photo-capacitance caused by the accumulation of space charge localized at the bottom diode which, according to the Shockley-Read-Hall model, is mainly due to defect trapping. Experimental results on the measurement of the photodiode capacitance under different conditions of illumination and applied bias are also presented. The combination of these analyses permits the description of a wavelength-controlled photo-capacitance that, combined with the series and parallel resistance of the diodes, may result in the explicit definition of cut-off frequencies for capacitive frequency filters activated by the light background, or in an oscillatory resonance of photogenerated carriers between the two diodes. (C) 2013 Elsevier B.V. All rights reserved.
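The cut-off behaviour mentioned above follows the usual first-order RC relation. The sketch below is only an illustration of that relation, with assumed values for the wavelength-controlled photo-capacitance and for the series/parallel resistances (none of them taken from the paper) and one simple choice for the effective resistance seen by the capacitance.

# Illustrative first-order cut-off frequency set by a photo-capacitance C and an
# effective resistance R; all numerical values are hypothetical.
import math

C_photo = 2e-9      # F, assumed photo-capacitance under background light
R_series = 1e3      # ohm, assumed series resistance
R_parallel = 1e6    # ohm, assumed parallel resistance

# One simple choice: series resistance shunted by the parallel resistance.
R_eff = R_series * R_parallel / (R_series + R_parallel)

f_cutoff = 1.0 / (2.0 * math.pi * R_eff * C_photo)
print(f"cut-off frequency ~ {f_cutoff / 1e3:.1f} kHz")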
Abstract:
Peer-reviewed international conference: 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany
Abstract:
We study the cosmological evolution of asymmetries in the two-Higgs-doublet extension of the Standard Model, prior to the electroweak phase transition. If Higgs flavour-exchanging interactions are sufficiently slow, then a relative asymmetry among the Higgs doublets corresponds to an effectively conserved quantum number. Since the magnitude of the Higgs couplings depends on the choice of basis in the Higgs doublet space, we attempt to formulate basis-independent out-of-equilibrium conditions. We show that an initial asymmetry between the Higgs scalars, which could be generated by CP violation in the Higgs sector, will be transformed into a baryon asymmetry by the sphalerons, without the need of B - L violation. This novel mechanism of baryogenesis through (split) Higgsogenesis is exemplified with simple scenarios based on the out-of-equilibrium decay of heavy singlet scalar fields into the Higgs doublets.
Abstract:
This letter presents a new parallel method for hyperspectral unmixing based on the efficient combination of two popular methods: vertex component analysis (VCA) and sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL). First, VCA extracts the endmember signatures, and then SUNSAL is used to estimate the abundance fractions. Both techniques are highly parallelizable, which significantly reduces the computing time. A design of the two methods for commodity graphics processing units is presented and evaluated. Experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 100 times, which grants the real-time response required by many remotely sensed hyperspectral applications.
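As a rough illustration of the second stage (abundance estimation once the endmembers are known), the sketch below solves a per-pixel non-negativity-constrained least-squares problem under the linear mixing model. It is a simplified stand-in for SUNSAL, not the parallel GPU implementation described in the letter, and the endmember matrix E and the pixel y are hypothetical.

# Simplified per-pixel abundance estimation under the linear mixing model
# y ~ E a with a >= 0 (a stand-in for SUNSAL; VCA would supply E in practice).
import numpy as np
from scipy.optimize import nnls

bands, n_endmembers = 200, 4
rng = np.random.default_rng(0)
E = rng.random((bands, n_endmembers))        # hypothetical endmember signatures
a_true = np.array([0.5, 0.3, 0.2, 0.0])      # hypothetical abundances
y = E @ a_true + 0.001 * rng.standard_normal(bands)   # noisy mixed pixel

a_est, residual = nnls(E, y)                 # non-negative least squares
a_est /= a_est.sum()                         # optional sum-to-one normalisation
print(np.round(a_est, 3))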
Abstract:
This paper presents a model for the simulation of an offshore wind system with a rectifier input voltage malfunction at one phase. The offshore wind system model comprises a variable-speed wind turbine supported on a floating platform, equipped with a permanent magnet synchronous generator using a full-power four-level neutral point clamped converter. The link from the offshore floating platform to the onshore electrical grid is provided by a light high-voltage direct-current submarine cable. The drive train is modeled by a three-mass model. Considerations are offered on the use of the model in a smart grid context. The domino effect of the rectifier voltage malfunction is presented as a case study to show the capabilities of the model. (C) 2015 Elsevier Ltd. All rights reserved.
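For readers unfamiliar with the drive-train representation, the sketch below shows a generic three-mass model (turbine, hub/gearbox and generator inertias coupled by stiffness and damping). The parameter names and values are illustrative assumptions, not those used in the paper.

# Generic three-mass drive-train model: angular speeds w1..w3 of the three
# masses and twist angles of the two couplings; J are inertias, K stiffnesses,
# D damping coefficients (all values are illustrative).
def three_mass_derivatives(state, T_turbine, T_generator,
                           J=(4.0e6, 1.0e5, 5.0e4),
                           K=(1.0e8, 5.0e7), D=(1.0e5, 5.0e4)):
    w1, w2, w3, th12, th23 = state
    T12 = K[0] * th12 + D[0] * (w1 - w2)   # torque in the first coupling
    T23 = K[1] * th23 + D[1] * (w2 - w3)   # torque in the second coupling
    dw1 = (T_turbine - T12) / J[0]
    dw2 = (T12 - T23) / J[1]
    dw3 = (T23 - T_generator) / J[2]
    return [dw1, dw2, dw3, w1 - w2, w2 - w3]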
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer tries to obtain the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
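The kind of analytical relation referred to above can be illustrated with a simple roofline-style estimate: a blocked n×n×n matrix multiplication needs about 2n³ floating-point operations, while the external-memory traffic depends on the block size that fits in the per-core local memory. The sketch below is an illustrative model built on these textbook assumptions, not the equation derived in the paper, and all parameter names are hypothetical.

# Illustrative cycle estimate for blocked dense matrix multiplication on a
# many-core array (not the paper's equation; all parameters are assumptions).
def estimate_cycles(n, cores, flops_per_core_per_cycle,
                    local_mem_words, bandwidth_words_per_cycle):
    # Largest square block (b x b) such that three b x b tiles fit locally.
    b = int((local_mem_words / 3) ** 0.5)
    compute_cycles = 2 * n**3 / (cores * flops_per_core_per_cycle)
    # Blocked multiplication moves roughly 2*n^3/b words of external traffic.
    memory_cycles = (2 * n**3 / b) / bandwidth_words_per_cycle
    # With computation and transfers overlapped, the slower term dominates.
    return max(compute_cycles, memory_cycles)

print(estimate_cycles(n=4096, cores=256, flops_per_core_per_cycle=2,
                      local_mem_words=8192, bandwidth_words_per_cycle=2))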
Abstract:
Final project submitted for the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve this performance with a considerable increase in the required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations, and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
Abstract:
Hyperspectral imaging, which comprises hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, has become one of the main topics in remote sensing and generates large data volumes of several GB per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
Abstract:
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral data sets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral data sets on two different GPU architectures by NVIDIA, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
Abstract:
The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation existing among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, this library has been replaced by a fast iterative method in the P-HYCA-FAST and P-CHYCA-FAST implementations, which leads to very significant speedup factors in order to achieve real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) the GeForce GTX 590; and 2) the GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
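The core property exploited by HYCA and CHYCA, namely that spectra lie in a low-dimensional subspace spanned by a few endmembers and can therefore be recovered from far fewer measurements than bands, can be illustrated with the minimal sketch below. The random measurement matrix, the subspace and the dimensions are hypothetical, and the reconstruction is a plain least-squares projection rather than the HYCA solver.

# Minimal compressive-sensing illustration: a pixel living in a p-dimensional
# spectral subspace (p << bands) recovered from m << bands random measurements.
import numpy as np

bands, p, m = 224, 5, 20
rng = np.random.default_rng(1)
E = rng.standard_normal((bands, p))           # hypothetical subspace basis
x = E @ rng.random(p)                         # pixel spectrum in the subspace
H = rng.standard_normal((m, bands)) / m**0.5  # hypothetical measurement matrix

y = H @ x                                     # compressive measurements
coeffs, *_ = np.linalg.lstsq(H @ E, y, rcond=None)   # solve in the subspace
x_hat = E @ coeffs

print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # small relative error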