966 results for Graphical processing units


Relevance: 80.00%

Abstract:

At early stages of visual processing, cells respond to local stimuli with specific features such as orientation and spatial frequency. Although the receptive fields of these cells have been thought to be local and independent, recent physiological and psychophysical evidence indicates that they participate in a rich network of local connections. Thus, these local processing units can integrate information over much larger parts of the visual field; the pattern of their response to a stimulus apparently depends on the context presented. To explore the pattern of lateral interactions in human visual cortex under different context conditions, we used a novel chain lateral masking detection paradigm, in which human observers performed a detection task in the presence of chains of high-contrast flanking Gabor signals of different lengths. The results indicated a nonmonotonic relation between the detection threshold and the number of flankers. Remote flankers had a stronger effect on target detection when the space between them was filled with other flankers, indicating that the detection threshold is determined by the dynamics of large neuronal populations in the neocortex, with a major interplay between excitation and inhibition. We considered a model of the primary visual cortex as a network consisting of excitatory and inhibitory cell populations, with both short- and long-range interactions. The model exhibited behavior similar to the experimental results throughout a range of parameters. The experimental and modeling results indicate that long-range connections play an important role in visual perception, possibly mediating the effects of context.
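To make the excitation-inhibition interplay concrete, the following is a minimal two-population rate-model sketch in Python (illustrative parameter values and a point model only, not the spatially extended short- and long-range network studied in the paper):

```python
# Minimal excitatory-inhibitory rate model: two coupled populations with a
# threshold-linear gain, integrated with a simple Euler scheme. Parameters are
# illustrative and chosen only so that the fixed point is stable.
import numpy as np

def simulate_ei(drive, w_ee=1.2, w_ei=1.0, w_ie=1.0, w_ii=0.5,
                tau_e=10.0, tau_i=5.0, dt=0.1, steps=3000):
    f = lambda x: np.maximum(x, 0.0)          # threshold-linear population gain
    E = I = 0.0
    for _ in range(steps):
        dE = (-E + f(w_ee * E - w_ei * I + drive)) / tau_e
        dI = (-I + f(w_ie * E - w_ii * I)) / tau_i
        E, I = E + dt * dE, I + dt * dI
    return E, I

for drive in (0.5, 1.0, 2.0):                 # stand-in for target-plus-flanker input
    E, I = simulate_ei(drive)
    print(f"drive {drive}: E = {E:.2f}, I = {I:.2f}")
```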

Relevance: 80.00%

Abstract:

Childhood exposure to low-level lead can permanently reduce intelligence, but the neurobiologic mechanism for this effect is unknown. We examined the impact of lead exposure on the development of cortical columns, using the rodent barrel field as a model. In all areas of mammalian neocortex, cortical columns constitute a fundamental structural unit subserving information processing. Barrel field cortex contains columnar processing units with distinct clusters of layer IV neurons that receive sensory input from individual whiskers. In this study, rat pups were exposed to 0, 0.2, 1, 1.5, or 2 g/liter lead acetate in their dam's drinking water from birth through postnatal day 10. This treatment, which coincides with the development of segregated columns in the barrel field, produced blood lead concentrations from 1 to 31 μg/dl. On postnatal day 10, the area of the barrel field and of individual barrels was measured. A dose-related reduction in barrel field area was observed (Pearson correlation = −0.740; P < 0.001); mean barrel field area in the highest exposure group was decreased 12% versus controls. Individual barrels in the physiologically more active caudoventral group were affected preferentially. Total cortical area measured in the same sections was not altered significantly by lead exposure. These data support the hypothesis that lead exposure may impair the development of columnar processing units in immature neocortex. We demonstrate that low levels of blood lead, in the range seen in many impoverished inner-city children, cause structural alterations in a neocortical somatosensory map.

Relevance: 80.00%

Abstract:

Gold ore beneficiation plants increasingly seek low-cost production and the maximization of financial returns. Technological characterization is part of a multidisciplinary approach that adds knowledge, optimization alternatives, and reductions in operating costs. As a tool within technological characterization, automated image analysis plays an important role in the mineral sector, mainly because of the speed of the analyses, their statistical robustness, and the reliability of the results. The technique can be performed on images acquired with a scanning electron microscope combined with chemical microanalyses, and it is used at several stages of a mining enterprise. The objective of this study is the technological characterization of gold ore from the Morro do Ouro Mine, Minas Gerais, using automated image analysis by MLA on a set of 88 samples. It was found that 90% of the gold is in the fraction above 0.020 mm; quartz and mica account for about 80% of the total ore mass; the sulfides have an equivalent circle diameter between 80 and 100 µm and are represented by pyrite and arsenopyrite, with subordinate pyrrhotite, chalcopyrite, sphalerite, and galena. It was also observed that the gold is mostly associated with pyrite and arsenopyrite and that, as the arsenic grade increases, so does the share of gold associated with arsenopyrite. The medians of the gold grain size distributions have an average value of 19 µm. The composition of the gold grains is quite diverse, on average 77% gold and 23% silver. For material below 0.50 mm, a significant portion of the gold grain perimeter is exposed (73% on average); locked gold (21% of all gold grains) is associated with pyrite and arsenopyrite, and in 14 of the 88 samples this value can exceed 40% of the total contained gold. Automated image analysis proved to be a very efficient tool for defining particular characteristics, objectively providing input for mine planning and mineral processing work.

Relevance: 80.00%

Abstract:

Subpixel methods increase the accuracy and efficiency of image detectors, processing units, and algorithms and provide very cost-effective systems for object tracking. Published methods achieve resolution increases of up to three orders of magnitude. In this Letter, we demonstrate that this limit can theoretically be improved by several orders of magnitude, permitting micropixel and submicropixel accuracies. The necessary condition for movement detection is that a single pixel changes its status. We show that an appropriate target design increases the probability of a pixel change for arbitrarily small shifts, thus increasing the detection accuracy of a tracking system. The proposal does not impose severe restrictions on either the target or the sensor, thus allowing easy experimental implementation.
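As a rough numerical illustration of the single-pixel-change condition (a toy sketch with an assumed sinusoidal target and a binarizing sensor, not the Letter's actual target design):

```python
# Estimate how often at least one binarized pixel flips when a striped target is
# shifted by a small sub-pixel amount; with many pixels sampling a suitable pattern,
# even tiny shifts have an appreciable chance of flipping some pixel.
import numpy as np

rng = np.random.default_rng(0)

def pixel_changes(shift, n_pixels=512, period=7.3, threshold=0.5, phase=0.0):
    """Return True if at least one binarized pixel flips for the given shift."""
    x = np.arange(n_pixels)
    before = 0.5 * (1 + np.sin(2 * np.pi * (x + phase) / period)) > threshold
    after = 0.5 * (1 + np.sin(2 * np.pi * (x + phase + shift) / period)) > threshold
    return np.any(before != after)

for shift in (1e-1, 1e-2, 1e-3, 1e-4):
    # Average over random phases to estimate the detection probability.
    hits = sum(pixel_changes(shift, phase=rng.uniform(0, 10)) for _ in range(200))
    print(f"shift = {shift:7.0e} px  ->  P(at least one pixel flips) ~ {hits / 200:.2f}")
```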

Relevance: 80.00%

Abstract:

Tool path generation is one of the most complex problems in Computer Aided Manufacturing. Although some efficient strategies have been developed, most of them are only useful for standard machining. However, the algorithms used for tool path computation demand high computational performance, which makes their implementation on many existing systems very slow or even impractical. Hardware acceleration is an incremental solution that can be cleanly added to these systems while keeping everything else intact; it is completely transparent to the user, its cost is much lower, and its development time is much shorter than replacing the computers with faster ones. This paper presents an optimisation that uses a specific graphics-hardware approach, exploiting the power of multi-core Graphics Processing Units (GPUs), in order to improve tool path computation. The improvement is applied to a highly accurate and robust tool path generation algorithm. As a case study, the paper presents a fully implemented algorithm used for turning-lathe machining of shoe lasts. A comparative study shows the gain achieved in terms of total computing time: execution is almost two orders of magnitude faster than on modern PCs.
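The data-parallel structure that makes this kind of computation a good fit for GPUs can be sketched as follows (an illustrative inverse-offset formulation with hypothetical names, not the paper's algorithm); every output sample is independent, so on a GPU each one would map to its own thread:

```python
# For a ball-end tool of radius r over a surface heightmap, the collision-free tool
# centre height at each sample is the maximum of z + sqrt(r^2 - d^2) over all surface
# points within radius r. The vectorized NumPy loop below stands in for a GPU kernel.
import numpy as np

def tool_centre_heights(z, spacing, r):
    ny, nx = z.shape
    k = int(np.floor(r / spacing))
    offsets = np.arange(-k, k + 1) * spacing
    dx, dy = np.meshgrid(offsets, offsets)
    d2 = dx**2 + dy**2
    bump = np.where(d2 <= r**2, np.sqrt(np.maximum(r**2 - d2, 0.0)), -np.inf)

    padded = np.pad(z, k, mode="edge")
    out = np.full_like(z, -np.inf)
    for j in range(2 * k + 1):          # on a GPU this whole loop nest collapses
        for i in range(2 * k + 1):      # into independent per-sample threads
            out = np.maximum(out, padded[j:j + ny, i:i + nx] + bump[j, i])
    return out

z = np.fromfunction(lambda y, x: 0.02 * np.sin(x / 8.0) * np.cos(y / 11.0), (128, 128))
centres = tool_centre_heights(z, spacing=0.1, r=0.5)
print(centres.shape, centres.max())
```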

Relevance: 80.00%

Abstract:

Subpixel methods increase the accuracy and efficiency of image detectors, processing units, and algorithms and provide very cost-effective systems for object tracking. A recently proposed method permits micropixel and submicropixel accuracies, provided that certain design constraints on the target are met. In this paper, we explore the use of Costas arrays - permutation matrices with ideal auto-ambiguity properties - for the design of such targets.
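For reference, the defining Costas property is easy to check programmatically; the small sketch below (not taken from the paper) verifies that all displacement vectors between the dots of a permutation matrix are distinct, which is what gives the ideal thumbtack-like auto-ambiguity mentioned above:

```python
def is_costas(perm):
    """perm[i] is the row of the single dot in column i of the permutation matrix."""
    n = len(perm)
    seen = set()
    for i in range(n):
        for j in range(i + 1, n):
            vec = (j - i, perm[j] - perm[i])   # displacement vector between two dots
            if vec in seen:
                return False                   # repeated vector -> not a Costas array
            seen.add(vec)
    return True

print(is_costas([1, 3, 2, 0]))   # True: a 4x4 Costas array (Welch construction, p=5, g=2)
print(is_costas([0, 1, 2, 3]))   # False: the diagonal repeats the vector (1, 1)
```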

Relevance: 80.00%

Abstract:

In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested the method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing the results with other neural methods, such as growing neural gas and Kohonen maps, and with classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we also propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA. A speed-up of 180× is obtained compared to the sequential CPU version.
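A minimal sketch of the neural gas adaptation rule referred to above (rank-based updates only; the competitive Hebbian edge creation between the two closest units and the authors' CUDA implementation are omitted):

```python
# For every input point, units are ranked by distance and each one is pulled towards
# the input with a weight that decays exponentially with its rank; learning rate and
# neighbourhood range are annealed over the iterations.
import numpy as np

def neural_gas_fit(points, n_units=64, n_iter=10000, seed=0):
    rng = np.random.default_rng(seed)
    units = points[rng.choice(len(points), n_units, replace=False)].astype(float)
    for t in range(n_iter):
        frac = t / n_iter
        eps = 0.5 * (0.01 / 0.5) ** frac                       # 0.5 -> 0.01
        lam = (n_units / 2) * (0.01 / (n_units / 2)) ** frac   # n/2 -> 0.01
        x = points[rng.integers(len(points))]
        ranks = np.argsort(np.argsort(np.linalg.norm(units - x, axis=1)))
        units += (eps * np.exp(-ranks / lam))[:, None] * (x - units)
    return units

cloud = np.random.default_rng(1).normal(size=(5000, 3))   # stand-in for a scanned cloud
model = neural_gas_fit(cloud)
print(model.shape)   # (64, 3): the reduced set of units representing the object
```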

Relevance: 80.00%

Abstract:

In recent years, the use of graphics processing units (GPUs) in general-purpose applications has grown steadily, moving beyond the purpose for which they were created, namely rendering computer graphics. This growth is due in part to the evolution these devices have undergone, which has endowed them with great computing power and extended their use from personal computers to large clusters. Together with the proliferation of low-cost RGB-D sensors, this has increased the number of vision applications that use this technology to solve existing problems and to develop new applications. These improvements have taken place not only in hardware, that is, in the devices themselves, but also in software, with the appearance of new development tools that make programming GPU devices easier. This new paradigm was coined General-Purpose computation on Graphics Processing Units (GPGPU). GPU devices are classified into different families according to their hardware characteristics, and each new family incorporates technological improvements that allow it to achieve better performance than its predecessors. Nevertheless, to obtain optimal performance from a GPU device it must be configured correctly before use; this configuration is determined by the values assigned to a series of device parameters. Therefore, many of the implementations that today use GPU devices for dense registration of 3D point clouds could see their performance improved by an optimal configuration of these parameters, depending on the device used. Given the lack of a detailed study of how GPU parameters affect the final performance of an implementation, such a study was considered highly worthwhile. The study was carried out not only with different GPU parameter configurations but also with different GPU device architectures, and its objective is to provide a decision tool that helps developers when implementing applications for GPU devices. One of the research fields in which these technologies are most widely used is robotics, since traditionally in robotics, especially mobile robotics, combinations of expensive sensors of different natures, such as laser, sonar or contact sensors, were used to obtain data about the environment; these data were then used in computer vision applications with a very high computational cost. Both the economic cost of the sensors and the computational cost have been reduced considerably thanks to these new technologies. Among the most widely used computer vision applications is point cloud registration. This process is, in general, the transformation of different point clouds into a known coordinate system; the data may come from photographs, from different sensors, and so on. It is used in fields such as computer vision, medical imaging, object recognition, and the analysis of satellite images and data, and it allows data obtained from different measurements to be compared or integrated. This work reviews the state of the art of 3D registration methods. At the same time, it presents an in-depth study of the most widely used 3D registration method, Iterative Closest Point (ICP), and one of its best-known variants, Expectation-Maximization ICP (EMICP). The study covers both their sequential implementation and their parallel implementation on GPU devices, focusing on how different GPU parameter configurations affect their performance. As a consequence of this study, a proposal is also presented to improve the use of GPU device memory, allowing work with larger point clouds and reducing the memory limitation imposed by the device. The behaviour of the 3D registration methods used in this work depends to a large extent on the initialization of the problem, which in this case consists of the correct choice of the transformation matrix with which the algorithm is started. Because this aspect is very important in this type of algorithm, determining whether the solution is reached sooner, later, or not at all, this work also presents a study of the transformation space, aimed at characterizing it and making it easier to choose the initial transformation to use in these algorithms.
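For readers unfamiliar with ICP, a compact point-to-point ICP iteration can be sketched as follows (an illustrative NumPy/SciPy version, not the thesis' GPU code; EMICP replaces the hard nearest-neighbour assignment with probability-weighted correspondences):

```python
# Each iteration matches every source point to its nearest target point and solves for
# the rigid transform in closed form with the SVD-based (Kabsch) method.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=30):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)                 # hard nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the global transform
    return R, t

rng = np.random.default_rng(0)
target = rng.normal(size=(2000, 3))
ang = 0.2                                        # small rotation about z plus a shift
true_R = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
true_t = np.array([0.1, -0.2, 0.3])
source = (target - true_t) @ true_R              # so that target = R @ source + t
R, t = icp(source, target)
aligned = source @ R.T + t
print("RMS alignment error:", np.sqrt(((aligned - target) ** 2).sum(axis=1).mean()))
```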

Relevance: 80.00%

Abstract:

This paper presents the implementation of a low-power tracking CMOS image sensor based on biological models of attention. The presented imager allows tracking of up to N salient targets in the field of view. Employing a "smart" image sensor architecture, in which all image processing is implemented on the sensor focal plane, the proposed imager reduces the amount of data transmitted from the sensor array to external processing units and thus provides real-time operation. The imager operation and architecture are based on models taken from biological systems, where data sensed by many millions of receptors must be transmitted and processed in real time. The imager architecture is optimized to achieve low power dissipation in both the acquisition and tracking modes of operation. The tracking concept is presented, the system architecture is shown, and the circuit implementation is discussed.

Relevance: 80.00%

Abstract:

We compared reading acquisition in English and Italian children up to late primary school, analyzing RTs and errors as a function of various psycholinguistic variables and of changes due to experience. Our results show that reading becomes progressively more reliant on larger processing units with age, but that this is modulated by the consistency of the language. In English, an inconsistent orthography, reliance on larger units occurs earlier and is demonstrated by faster RTs, a stronger effect of lexical variables and a lack of length effect (by fifth grade). However, not all English children are able to master this mode of processing, yielding larger inter-individual variability. In Italian, a consistent orthography, reliance on larger units occurs later and is less pronounced. This is demonstrated by larger length effects, which remain significant even in older children, and by larger effects of a global factor (related to speed of orthographic decoding) explaining changes in performance across ages. Our results show the importance of considering not only overall performance but also inter-individual variability and variability between conditions when interpreting cross-linguistic differences.

Relevance: 80.00%

Abstract:

Since the 1970s, Brazil has gone through several changes in its economic and productive structures, which have a symbiotic relationship with the organization and dynamics of the Brazilian territory. This set of economic, social and technical-scientific transformations developed amid the restructuring of productive capital, a process that occurs on a global scale but takes effect with particularities in different places. Adopting this presupposition, the main objective of this research was to analyze the productive restructuring of the dairy sector in Rio Grande do Norte, highlighting its relationship with the production and organization of space and its impact on the social relations of production. The methodology adopted for this study was based on a bibliographic review of the production of space and productive restructuring, documentary research on the dynamics of the dairy sector in Rio Grande do Norte and on the regulatory instructions governing dairy production in Brazil, and, in parallel, the collection of secondary data from official bodies such as IBGE, EMATER and SINDLEITE. Another important methodological resource was field research, which enabled us to understand empirically the distinct realities experienced by the agents acting in the milk production system in Rio Grande do Norte. The analyses show that the restructuring process in the dairy sector is fostered largely by the state, which finances, encourages and regulates milk production in the country. In the specific case of Rio Grande do Norte, this process is boosted by the creation of the "Programa do Leite", which, by constituting an institutional market, contributes to the strengthening and expansion of industries to the detriment of the artisanal processing sector. Nevertheless, family farmers continue to work in the activity, whether by producing and selling fresh milk, supplying milk to processing units, mediating the production of their peers, or processing milk by hand in the traditional cheese factories present throughout the state of Rio Grande do Norte. The results reveal a complex web of social relations of production established at the heart of the dairy activity in Rio Grande do Norte, relations marked above all by competition and complementarity between the industrial and artisanal processing of milk.

Relevance: 80.00%

Abstract:

This thesis investigated the risk of accidental release of hydrocarbons during transportation and storage. Transportation of hydrocarbons from an offshore platform to processing units through subsea pipelines involves a risk of release due to pipeline leakage resulting from corrosion, plastic deformation caused by seabed shakedown, or damage from contact with a drifting iceberg. The environmental impacts of hydrocarbon dispersion can be severe, and the overall safety and economic concerns of pipeline leakage in the subsea environment are immense. A large leak can be detected with conventional technology such as radar, intelligent pigging or chemical tracers, but in a remote location such as the subsea or the Arctic, a small chronic leak may go undetected for a period of time. In the case of storage, an accidental release of hydrocarbon from a storage tank could lead to a pool fire, which could further escalate into domino effects; this chain of accidents may lead to extremely severe consequences. Analysis of past accident scenarios shows that more than half of industrial domino accidents involved fire as the primary event, along with other contributing factors such as wind speed and direction, fuel type and engulfment of the compound. In this thesis, a computational fluid dynamics (CFD) approach is taken to model the subsea pipeline leak and the pool fire from a storage tank. The commercial software package ANSYS FLUENT Workbench 15 is used to model the subsea pipeline leakage. The CFD simulation results for four different types of fluid showed that, at steady state, the static pressure and pressure gradient along the axial length of the pipeline exhibit a sharp signature variation near the leak orifice. A transient simulation is performed to obtain the acoustic signature of the pipe near the leak orifice. The power spectral density (PSD) of the acoustic signal is strong near the leak orifice and dissipates as the distance and orientation from the leak orifice increase; high-pressure fluid flow generates more noise than low-pressure flow. To model the pool fire from the storage tank, ANSYS CFX Workbench 14 is used. The CFD results show that wind speed has a significant effect on the behavior of the pool fire and its domino effects. Radiation contours are also obtained from CFD post-processing and can be applied for risk analysis. The outcome of this study will be helpful for a better understanding of the domino effects of pool fires in the complex geometrical settings of process industries. Measures to reduce and prevent these risks are discussed based on the results of the numerical simulations.
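As a small illustration of the acoustic post-processing step described above, the sketch below estimates the power spectral density of a synthetic leak-noise trace with Welch's method (a stand-in signal, not data from the ANSYS simulations):

```python
# Welch PSD estimate of a synthetic acoustic pressure trace: broadband turbulence
# noise plus a tonal component, as a stand-in for a signal exported near the orifice.
import numpy as np
from scipy.signal import welch

fs = 10_000.0                       # assumed sampling rate of the exported trace, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = rng.normal(scale=0.5, size=t.size) + 0.8 * np.sin(2 * np.pi * 1200 * t)

f, psd = welch(signal, fs=fs, nperseg=2048)
peak = f[np.argmax(psd)]
print(f"PSD peak at ~{peak:.0f} Hz")   # stronger spectral peaks are expected close to the leak
```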

Relevance: 80.00%

Abstract:

This paper is based on the novel use of a very high fidelity decimation filter chain for Electrocardiogram (ECG) signal acquisition and data conversion. The multiplier-free, multi-stage structure of the proposed filters lowers the power dissipation while minimizing the circuit area, both crucial design constraints for wireless noninvasive wearable health monitoring products, given the scarce operational resources of their electronic implementation. The presented filter has a decimation ratio of 128 and works in tandem with a 1-bit 3rd-order Sigma-Delta (ΣΔ) modulator; it achieves 0.04 dB passband ripple and -74 dB stopband attenuation. The work reported here investigates the non-linear phase effects of the proposed decimation filters on the ECG signal by carrying out a comparative study after phase correction. It concludes that enhanced phase linearity is not crucial for ECG acquisition and data conversion applications, since the distortion of the acquired signal due to phase non-linearity is insignificant for both the original and the phase-compensated filters. To the best of the authors' knowledge, freedom from signal distortion is essential, as distortion might lead to misdiagnosis, as stated in the state of the art. This article demonstrates that, with their minimal power consumption and minimal signal distortion, the proposed decimation filters can effectively be employed in biosignal data processing units.
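The paper's exact multi-stage filter chain is not reproduced here, but a classic multiplier-free decimator of the same flavour, a CIC structure with a decimation ratio of 128, can be sketched as follows (illustrative only, operating on a crude 1-bit stand-in for the ΣΔ bitstream):

```python
# CIC decimator: N cascaded integrators at the input rate, decimation by R, then N
# cascaded combs at the output rate. Only additions, subtractions and a down-sampler
# are needed, hence "multiplier-free".
import numpy as np

def cic_decimate(x, R=128, N=3, M=1):
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                 # integrator section (running sums)
        y = np.cumsum(y)
    y = y[::R]                         # decimation by R
    for _ in range(N):                 # comb section (differences with delay M)
        y = y - np.concatenate((np.zeros(M, dtype=y.dtype), y[:-M]))
    return y / (R * M) ** N            # normalise the DC gain of (R*M)^N

# Toy 1-bit input: a slow tone carried by a dithered +/-1 bitstream (not a real ΣΔ model).
fs = 128_000
t = np.arange(fs) / fs
bits = np.where(np.sin(2 * np.pi * 5 * t) + np.random.default_rng(0).uniform(-1, 1, fs) > 0, 1, -1)
decimated = cic_decimate(bits)
print(decimated.shape)                 # 128x fewer samples; the 5 Hz tone is approximately recovered
```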

Relevance: 80.00%

Abstract:

Graphics Processing Units (GPUs) are becoming popular accelerators in modern High-Performance Computing (HPC) clusters. Installing GPUs on every node of a cluster is not efficient, resulting in high costs and power consumption as well as underutilisation of the accelerators. The research reported in this paper is motivated by the use of a few physical GPUs, providing cluster nodes with on-demand access to remote GPUs for a financial risk application. We hypothesise that sharing GPUs between several nodes, referred to as multi-tenancy, reduces the execution time and the energy consumed by an application. Two data transfer modes between the CPU and the GPUs, namely concurrent and sequential, are explored. The key result from the experiments is that multi-tenancy with few physical GPUs using sequential data transfers lowers the execution time and the energy consumed, thereby improving the overall performance of the application.
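A toy back-of-the-envelope model (assumed numbers and a deliberately simplified single-link, serially scheduled GPU, not the paper's measurements) helps illustrate why staging transfers sequentially can beat fully concurrent transfers when several tenants share one remote GPU:

```python
# Sequential staging lets the first tenant start computing while the others are still
# transferring; fully concurrent transfers share the link and all finish late together.
def makespan_sequential(n_tenants, transfer_s, compute_s):
    # Tenant i finishes its transfer at (i+1)*transfer_s; the GPU then serves tenants
    # in order, so each compute starts at max(GPU free, data ready).
    gpu_free = finish = 0.0
    for i in range(n_tenants):
        ready = (i + 1) * transfer_s
        start = max(gpu_free, ready)
        gpu_free = finish = start + compute_s
    return finish

def makespan_concurrent(n_tenants, transfer_s, compute_s):
    # All tenants share the link, so every transfer completes at n*transfer_s,
    # after which the GPU still has to serve them one at a time.
    return n_tenants * transfer_s + n_tenants * compute_s

for n in (2, 4, 8):
    seq = makespan_sequential(n, transfer_s=1.0, compute_s=1.5)
    con = makespan_concurrent(n, transfer_s=1.0, compute_s=1.5)
    print(f"{n} tenants: sequential {seq:.1f} s vs concurrent {con:.1f} s")
```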

Relevance: 80.00%

Abstract:

In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems, which includes a wide variety of devices, from dense multicore systems from major manufacturers such as Intel or ARM to new accelerators such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations of this kind, written in the same descriptive language for different architectures, are very important in order to truly assess the possibility of using heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
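The portability mechanism the paper relies on, a single OpenCL C kernel built and run unchanged on whatever device is available, can be sketched with PyOpenCL as follows (a trivial scaling kernel for illustration, not the HYCA kernel):

```python
# The same OpenCL C source is compiled at runtime for the selected platform and device
# (CPU, GPU, accelerator), which is what allows one code base to target heterogeneous hardware.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()           # picks an available OpenCL platform/device
queue = cl.CommandQueue(ctx)

src = """
__kernel void scale(__global const float *x, __global float *y, const float a) {
    int gid = get_global_id(0);
    y[gid] = a * x[gid];
}
"""
prog = cl.Program(ctx, src).build()

x = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

prog.scale(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))
y = np.empty_like(x)
cl.enqueue_copy(queue, y, y_buf)
print(np.allclose(y, 2.0 * x), ctx.devices[0].name)
```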