888 results for Parallel processing (Electronic computers) - Research


Relevance:

30.00%

Publisher:

Abstract:

Coastal low-level jets (CLLJ) are a low-tropospheric wind feature driven by the pressure gradient produced by a sharp contrast between high temperatures over land and lower temperatures over the sea. In summer, this contrast between the cold ocean and the warm land is intensified by the impact of the coast-parallel winds on the ocean, which generate upwelling currents, sharpening the temperature gradient close to the coast and giving rise to strong baroclinic structures at the coast. During summertime, the Iberian Peninsula is often under the effect of the Azores High and of a thermal low pressure system inland, leading to a seasonal wind along the west coast called the Nortada (northerly wind). This study presents a regional climatology of the CLLJ off the west coast of the Iberian Peninsula, based on a 9 km resolution downscaling dataset produced with the Weather Research and Forecasting (WRF) mesoscale model, forced by 19 years of ERA-Interim reanalysis (1989-2007). The simulation results show that the hourly frequency of occurrence of the jet in summer is above 30% and decreases to about 10% during spring and autumn. The monthly frequencies of occurrence can reach higher values, around 40% in summer months, and reveal large inter-annual variability in all three seasons. In summer, on a daily basis, the CLLJ is present on almost 70% of the days. The CLLJ wind direction is mostly north-northeasterly, and the jet occurs most persistently in three areas where the interaction of the jet flow with local capes and headlands is more pronounced. The coastal jets in this area occur at heights between 300 and 400 m, with a mean speed of around 15 m/s and maximum speeds of 25 m/s.

Relevance:

30.00%

Publisher:

Abstract:

This letter presents a new parallel method for hyperspectral unmixing based on the efficient combination of two popular methods: vertex component analysis (VCA) and sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL). First, VCA extracts the endmember signatures, and then SUNSAL is used to estimate the abundance fractions. Both techniques are highly parallelizable, which significantly reduces the computing time. A design of the two methods for commodity graphics processing units (GPUs) is presented and evaluated. Experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 100 times, which enables the real-time response required by many remotely sensed hyperspectral applications.
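As a rough illustration of the two-stage pipeline summarized above (endmember extraction followed by abundance estimation), the sketch below substitutes a simple orthogonal-projection pure-pixel heuristic for VCA and per-pixel non-negative least squares for SUNSAL. The function names, dimensions and synthetic data are illustrative assumptions, not the authors' GPU implementation.

```python
# Minimal CPU sketch of the extract-then-unmix pipeline described above.
# A crude pure-pixel heuristic stands in for VCA, and per-pixel
# non-negative least squares stands in for SUNSAL.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, pixels, k = 50, 1000, 3            # spectral bands, pixels, endmembers

# Synthetic scene: random endmember signatures and Dirichlet abundances.
M_true = rng.uniform(0.1, 1.0, size=(bands, k))
A_true = rng.dirichlet(np.ones(k), size=pixels).T     # k x pixels, columns sum to 1
Y = M_true @ A_true + 0.001 * rng.standard_normal((bands, pixels))

# Stage 1 (stand-in for VCA): repeatedly pick the pixel with the largest
# residual norm after projecting out the signatures already selected.
M_est = np.empty((bands, 0))
R = Y.copy()
for _ in range(k):
    idx = np.argmax(np.linalg.norm(R, axis=0))
    M_est = np.column_stack([M_est, Y[:, idx]])
    Q, _ = np.linalg.qr(M_est)
    R = Y - Q @ (Q.T @ Y)

# Stage 2 (stand-in for SUNSAL): non-negative least squares per pixel.
A_est = np.stack([nnls(M_est, Y[:, j])[0] for j in range(pixels)], axis=1)

print("reconstruction RMSE:", np.sqrt(np.mean((Y - M_est @ A_est) ** 2)))
```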

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor (Doutor) in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, technological progress and the miniaturization of many electronic components, combined with new concepts, have enabled ideas and projects that only a few years ago would have been science fiction. Perhaps the most complete example today is the smartphone, a small block of hardware and software whose processing capacity exceeds that of computers from a dozen years ago several times over. These capabilities have been used for communications, note-taking, calendars and even entertainment. However, they can also be reused to help address some of today's limitations and constraints, among which the management of scarce resources stands out. Indeed, electricity consumption has increased as a direct consequence of global development and of the growing number of electrical appliances. A significant share of electricity has been produced from non-renewable energy resources. Energy dependence, together with rising prices and the need to reduce greenhouse gas emissions, stimulates the development of new solutions to deal with this situation. Energy performance, in turn, depends not only on the characteristics of the building but also on user behaviour. The energy performance of buildings is very important, since their consumption accounts for more than half of the total energy produced. Therefore, in order to achieve better performance it is important not only to consider the performance of the structure but also to monitor user behaviour. The latter raises several difficulties, since it depends strongly on the type of user. One of the emerging concepts today is wireless sensor networks. With this technology, small modules can be developed with many connectivity options, high processing power and long autonomy, without being excessively expensive. This provides the means to deploy several devices throughout an installation to collect a variety of data, which is then stored on a server. The fundamental building blocks of the project's sensor infrastructure were designed at Evoleo Technologies while the internship was under way. These blocks collect specific data in the installation and periodically send the collected values to the central server, where they are stored and made available to the user. The collected data can then be presented to the user, providing a record of energy consumption over a given period of time. Since all data are stored on the server, studies can be carried out to determine typical usage, possible problems in appliances, power quality, etc., making it possible to determine where energy is being wasted and giving the user the data needed to make changes based on what was collected over a given period. The main goal of this work is to bridge the machine level and the user level, that is, to provide a platform for interaction between the devices and the administrator of the installation. Providing the data in an easy way, without requiring the installation of specific software on each device used for monitoring, was one of the main concerns during the design phases of the project.
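The following is a minimal sketch of the collect, store and report flow described above, in which sensor nodes periodically report power samples to a central store that can later be aggregated into per-period consumption. The node identifier, reading format and sampling period are hypothetical; this is not the infrastructure developed at Evoleo Technologies.

```python
# Minimal in-memory sketch of the collect/store/report flow described above.
# Node names, the reading format and the reporting period are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

readings = defaultdict(list)   # node id -> list of (timestamp, watts)

def store_reading(node_id, timestamp, watts):
    """Called whenever a sensor node reports a power sample to the server."""
    readings[node_id].append((timestamp, watts))

def energy_per_period(node_id, start, end, sample_seconds=60):
    """Approximate energy (kWh) consumed in [start, end),
    assuming one sample every `sample_seconds`."""
    joules = sum(w * sample_seconds
                 for t, w in readings[node_id] if start <= t < end)
    return joules / 3.6e6   # J -> kWh

# Example: one hour of 1-minute samples from a hypothetical node.
t0 = datetime(2024, 1, 1, 12, 0)
for minute in range(60):
    store_reading("meter-kitchen", t0 + timedelta(minutes=minute), 1500.0)

print(energy_per_period("meter-kitchen", t0, t0 + timedelta(hours=1)))  # ~1.5 kWh
```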

Relevance:

30.00%

Publisher:

Abstract:

Dissertation for a Master's degree in Computer and Electronic Engineering

Relevance:

30.00%

Publisher:

Abstract:

IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA

Relevance:

30.00%

Publisher:

Abstract:

Amorphous and crystalline sputtered boron carbide thin films have a very high hardness, even surpassing that of bulk crystalline boron carbide (≈41 GPa). However, magnetron sputtered B-C films have high coefficients of friction (CoF), which limits their industrial application. Nanopatterning of material surfaces has been proposed as a solution to decrease the CoF: the contact area of nanopatterned surfaces is decreased due to the nanometre size of the asperities, which results in a significant reduction of adhesion and friction. In the present work, the surface of amorphous and polycrystalline B-C thin films deposited by magnetron sputtering was nanopatterned using infrared femtosecond laser radiation. Successive parallel laser tracks 10 μm apart were overlapped in order to obtain a processed area of about 3 mm². Sinusoidal-like undulations with the same spatial period as the laser tracks were formed on the surface of the amorphous boron carbide films after laser processing. The undulation amplitude increases with increasing laser fluence. The formation of undulations with a 10 μm period was also observed on the surface of the crystalline boron carbide film processed with a pulse energy of 72 μJ. The amplitude of these undulations is about 10 times higher than in the amorphous films processed at the same pulse energy, due to the higher roughness of the crystalline films and the consequent increase in laser radiation absorption. Laser-induced periodic surface structures (LIPSS) were formed on the surface of all three B-C films under study, although under different circumstances. Processing of the amorphous films at low pulse energy (72 μJ) results in LIPSS formation only on localized spots of the film surface. LIPSS formation was also observed on top of the undulations formed after laser processing with 78 μJ of the amorphous film deposited at 800 °C. Finally, large-area homogeneous LIPSS coverage of the crystalline boron carbide film surface was achieved within a large range of laser fluences, although holes are also formed at the higher fluences.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Mechanical Engineering

Relevance:

30.00%

Publisher:

Abstract:

Work presented within the scope of the Master's programme in Computer Engineering (Mestrado em Engenharia Informática), as a partial requirement for obtaining the Master's degree in Computer Engineering.

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which makes them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
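To make the underlying model concrete, the sketch below shows the linear mixture model and a simplex-constrained abundance estimate obtained by projected gradient descent, given known endmembers. It only illustrates the constraints that unmixing works under; it is not an implementation of SISAL itself, which estimates the endmembers by fitting a minimum-volume simplex to the data.

```python
# Sketch of the linear mixture model and simplex-constrained abundance
# estimation via projected gradient descent (NOT the SISAL algorithm).
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (css - 1.0) / j > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def unmix_pixel(M, y, iters=500):
    """Abundances for one pixel y given an endmember matrix M (bands x k)."""
    step = 1.0 / np.linalg.norm(M.T @ M, 2)      # 1 / Lipschitz constant
    a = np.full(M.shape[1], 1.0 / M.shape[1])
    for _ in range(iters):
        a = project_simplex(a - step * (M.T @ (M @ a - y)))
    return a

rng = np.random.default_rng(1)
M = rng.uniform(0.1, 1.0, size=(50, 3))          # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(50)
print(unmix_pixel(M, y))                         # close to [0.6, 0.3, 0.1]
```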

Relevance:

30.00%

Publisher:

Abstract:

Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures (GeForce GTX 590 and GeForce GTX TITAN) reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
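As an idealized illustration of why strong spectral correlation allows so few measurements, the sketch below assumes every spectrum lies in a known low-dimensional subspace and recovers it from a handful of random projections per pixel by least squares. The dimensions are arbitrary and the recovery step is a simple stand-in; it is not the P-HYCA algorithm or its GPU kernels.

```python
# Compressive measurement sketch: if each spectrum lies in a k-dimensional
# subspace, m >= k random projections per pixel suffice for recovery.
import numpy as np

rng = np.random.default_rng(2)
bands, k, m, pixels = 200, 5, 15, 10_000

E = rng.standard_normal((bands, k))                  # subspace holding the spectra
X = E @ rng.dirichlet(np.ones(k), size=pixels).T     # bands x pixels data (flattened cube)

Phi = rng.standard_normal((m, bands)) / np.sqrt(m)   # per-pixel measurement matrix
Y = Phi @ X                                          # m x pixels compressed measurements

# Recover all pixels at once by solving the small m x k least-squares system.
Z, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_hat = E @ Z

print("compression ratio:", bands / m)                           # 200/15 ≈ 13.3x
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```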

Relevance:

30.00%

Publisher:

Abstract:

Master's in Mechanical Engineering – Specialization in Industrial Management

Relevance:

30.00%

Publisher:

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over large data sets while maintaining the method's accuracy.

Relevance:

30.00%

Publisher:

Abstract:

The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, where the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel pixel-by-pixel implementation is derived that exploits the graphics processing unit (GPU) architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with regard to an optimized serial implementation.
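A minimal sketch of the library-based (semisupervised) idea follows: abundances are regressed against a spectral library of which only a few signatures are actually present in the scene, so the recovered abundance vector is sparse. Plain non-negative least squares stands in here for the constrained sparse solver, and the library and active signatures are synthetic; this is not the authors' GPU implementation.

```python
# Library-based unmixing sketch: regress each pixel against a spectral
# library; only a few library signatures carry non-negligible abundance.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
bands, lib_size = 100, 40

library = rng.uniform(0.05, 1.0, size=(bands, lib_size))
active = [4, 17, 31]                      # signatures actually present in the scene
a_true = np.zeros(lib_size)
a_true[active] = [0.5, 0.3, 0.2]

y = library @ a_true + 0.001 * rng.standard_normal(bands)

a_est, _ = nnls(library, y)
print("largest estimated abundances:", np.argsort(a_est)[-3:][::-1])  # ≈ active set
```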

Relevance:

30.00%

Publisher:

Abstract:

Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPUs). This method is based on vertex component analysis (VCA), a geometry-based method that is highly parallelizable. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors of up to 24 times.