991 results for GPU - graphics processing unit


Relevance:

30.00%

Publisher:

Abstract:

This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how or why a boundary advances the way it does; it simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions. This gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power; the basic level set method in particular carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a quite broad range of imaging applications, such as computer vision and graphics, scientific visualization, and the solution of problems in computational physics. Level set methods, and methods derived from or inspired by them, will remain at the front line of image processing in the future.
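
To make the principal idea concrete, the following minimal sketch (not taken from the thesis; the grid size, speed and step sizes are assumed values) embeds a one-dimensional curve as the zero level set of a two-dimensional function and advances it under the basic level set equation with a constant speed.

# Minimal sketch (not from the thesis): a closed curve represented as the zero level
# set of a 2-D function phi, the N+1-dimensional embedding for N = 1, advanced under
# phi_t + F*|grad(phi)| = 0 with constant speed F.  Grid size, speed and step sizes
# are assumed values chosen only for illustration.
import numpy as np

n, F, dt, dx = 128, 1.0, 0.2, 1.0
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 20.0   # signed distance to a circle

for _ in range(50):
    # central differences for the gradient magnitude (upwinding omitted for brevity)
    gx = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * dx)
    gy = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * dx)
    phi -= dt * F * np.sqrt(gx ** 2 + gy ** 2)               # explicit Euler update

inside = phi < 0                        # the boundary itself is the contour phi == 0
print("grid cells enclosed by the zero level set:", int(inside.sum()))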

Relevance:

30.00%

Publisher:

Abstract:

In the context of autonomous sensors powered by small-size photovoltaic (PV) panels, this work analyses how the efficiency of DC/DC-converter-based power processing circuits can be improved by an appropriate selection of the inductor current that transfers the energy from the PV panel to a storage unit. Each component of the power losses (fixed, conduction and switching losses) in the DC/DC converter depends specifically on the average inductor current, so that there is an optimal value of this current that causes minimal losses and, hence, maximum efficiency. This idea has been tested experimentally using two commercial DC/DC converters whose average inductor current is adjustable. Experimental results show that the efficiency can be improved by up to 12% by selecting an optimal value of that current, which is around 300-350 mA for such DC/DC converters.
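
As a hedged illustration of the underlying idea (the loss model and all coefficients below are assumptions chosen for illustration, not the paper's formulas or measured values), one can write the total converter loss as a function of the average inductor current and sweep that current numerically to find the efficiency maximum.

# Hedged sketch: a generic loss model in which one loss component is diluted at higher
# average inductor current while conduction (I^2*R-type) losses grow with it, so that an
# intermediate current maximizes efficiency.  The functional form and every coefficient
# are assumptions for illustration, not values or formulas from the paper.
import numpy as np

P_in = 0.5                    # W, power harvested from the PV panel (assumed)
a, b, c = 0.02, 0.01, 0.10    # W*A, W, W/A^2: illustrative loss coefficients

def power_loss(i_avg):
    # total converter loss (W) as a function of the average inductor current (A)
    return a / i_avg + b + c * i_avg ** 2

i = np.linspace(0.05, 1.0, 500)          # candidate average inductor currents (A)
eff = (P_in - power_loss(i)) / P_in      # resulting end-to-end efficiency
i_opt = i[np.argmax(eff)]
print("optimal average inductor current ~ %.0f mA, efficiency ~ %.1f %%"
      % (i_opt * 1000, eff.max() * 100))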

Relevance:

30.00%

Publisher:

Abstract:

The effects of pulp processing on softwood fiber properties strongly influence the properties of wet and dry paper webs. Pulp strength delivery studies have provided observations that much of the strength potential of long-fibered pulp is lost during brown stock fiber line operations, where the pulp is merely washed and transferred to the subsequent processing stages. The objective of this work was to study the intrinsic mechanisms which may cause fiber damage in the different unit operations of modern softwood brown stock processing. The work was conducted by studying the effects of industrial machinery on pulp properties, with some actions of unit operations simulated in laboratory-scale devices under controlled conditions. An optical imaging system was created and used to study the orientation of fibers in the internal flows during pulp fluidization in mixers and the passage of fibers through the screen openings during screening. The qualitative changes in fibers were evaluated with existing and standardized techniques. The results showed that each process stage has its characteristic effects on fiber properties. Pulp washing and mat formation in displacement washers introduced fiber deformations, especially if the fibers entering the stage were intact, but did not decrease the pulp strength properties. However, storage chests and pulp transfer after displacement washers contributed to strength deterioration. Pulp screening proved to be quite gentle, having the potential to slightly even out fiber deformations in very deformed pulps and, conversely, to inflict a marginal increase in the deformation indices if the fibers were previously intact. Pulp mixing in fluidizing industrial mixers did not have detrimental effects on pulp strength and had the potential to slightly even out the deformations, provided that the intensity of fluidization was high enough to allow fiber orientation with the flow and that the time of mixing was short. The chemical and mechanical actions of oxygen delignification had two distinct effects on pulp properties: chemical treatment clearly reduced pulp strength with and without mechanical treatment, and the mechanical actions of process machinery introduced more conformability to pulp fibers but did not clearly contribute to a further decrease in pulp strength. The chemical composition of fibers entering the oxygen stage was also found to affect the susceptibility of fibers to damage during oxygen delignification. Fibers with the smallest content of xylan were found to be more prone to irreversible deformations, accompanied by a lower tensile strength of the pulp. Fibers poor in glucomannan exhibited a lower fiber strength while wet after oxygen delignification as compared to the reference pulp. Pulps with the smallest lignin content, on the other hand, exhibited improved strength properties as compared to the references.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this project was to identify, in a subject group of engineers and technicians (N = 62), a preferred mode of representation for facilitating correct recall of information from complex graphics. The modes of representation were black and white (b&w) block, b&w icon, color block, and color icon. The researcher's test instrument included twelve complex graphics (six b&w and six color, three per mode). Each graphics presentation was followed by two multiple-choice questions. Recall performance was better using b&w block mode graphics and color icon mode graphics. A standardized test, the Group Embedded Figures Test (GEFT), was used to identify a cognitive style preference (field dependence). Although the engineers and technicians in the sample were strongly field-independent, they were not significantly more field-independent than the normative group in the Witkin, Oltman, Raskin, and Karp study (1971). Tests were also employed to look for any significant difference in cognitive style preference due to gender; none was found. Implications of the project results for the design of visuals and their use in technical training are discussed.

Relevance:

30.00%

Publisher:

Abstract:

The present studies make it clear that Bacillus pumilus xylanase has the characteristics suited to an industrial enzyme (xylanases that are active and stable at elevated temperatures and alkaline pH are needed). SSF production of xylanases and its application appears to be an innovative technology in which the fermented substrate is the enzyme source that is used directly in the bleaching process without prior downstream processing. The direct use of SSF enzymes in bleaching is a relatively new biobleaching approach. This can certainly benefit the bleaching process by lowering xylanase production costs and improving the economics and viability of the biobleaching technology. The application of enzymes to the bleaching process has been considered an environmentally friendly approach that can reduce the negative impact on the environment exerted by the use of chlorine-based bleaching agents. It has been demonstrated that pretreatment of kraft pulp with xylanase prior to bleaching (biobleaching) can facilitate the subsequent removal of lignin by bleaching chemicals, thereby reducing the demand for elemental chlorine or improving final paper brightness. Using this xylanase pre-treatment resulted in an increase in brightness (8.5 units) when compared to non-enzymatically treated bleached pulp prepared under identical conditions. A reduction in the consumption of active chlorine can be achieved, which results in a decrease in the toxicity, colour, chloride and absorbable organic halogen (AOX) levels of bleaching effluents. The xylanase treatment improves the drainage, strength properties and fragility of pulps, and also increases the brightness of pulps. This positive result shows that enzyme pre-treatment facilitates the removal of chromophore fragments of pulp, thereby making the process more environmentally friendly.

Relevance:

30.00%

Publisher:

Abstract:

In the present work, the author has designed and developed both types of solar air heaters, namely porous and nonporous collectors. The developed solar air heaters were subjected to different air mass flow rates in order to standardize the flow per unit area of the collector. Much attention was given to investigating the performance of the solar air heaters fitted with baffles. The output obtained from the experiments on pilot models also helped the installation of a solar air heating system for industrial drying applications. Apart from these, various types of solar dryers, for small and medium scale drying applications, were also built. The feasibility of a latent heat thermal energy storage system based on a phase change material was also studied. The application of a solar greenhouse for drying industrial effluent was analyzed in the present study and a solar greenhouse was developed. The effectiveness of Computational Fluid Dynamics (CFD) in the field of solar air heaters was also analyzed. The thesis is divided into eight chapters.

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigated the potential use of Linear Predictive Coding in speech communication applications. A Modified Block Adaptive Predictive Coder is developed, which reduces the computational burden and complexity without sacrificing the speech quality, as compared to the conventional adaptive predictive coding (APC) system. For this, changes in the evaluation methods have been developed. The method differs from the usual APC system in that the difference between the true and the predicted value is not transmitted. This allows the replacement of the high-order predictor in the transmitter section of a predictive coding system by a simple delay unit, which makes the transmitter quite simple. Also, the block length used in the processing of the speech signal is adjusted relative to the pitch period of the signal being processed, rather than choosing a constant length as hitherto done by other researchers. The efficiency of the newly proposed coder has been supported with results of computer simulation using real speech data. Three methods for voiced/unvoiced/silent/transition classification have been presented. The first one is based on energy, zero-crossing rate and the periodicity of the waveform. The second method uses the normalised correlation coefficient as the main parameter, while the third method utilizes a pitch-dependent correlation factor. The third algorithm, which gives the minimum error probability, has been chosen in a later chapter to design the modified coder. The thesis also presents a comparative study between the autocorrelation and the covariance methods used in the evaluation of the predictor parameters. It has been proved that the autocorrelation method is superior to the covariance method with respect to filter stability and also in an SNR sense, though the increase in gain is only small. The Modified Block Adaptive Coder switches from pitch prediction to spectrum prediction when the speech segment changes from a voiced or transition region to an unvoiced region. The experiments conducted in coding, transmission and simulation used speech samples from Malayalam and English phrases. Proposals for a speaker recognition system and a phoneme identification system have also been outlined towards the end of the thesis.
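
As a hedged sketch of the classical autocorrelation method that the thesis compares with the covariance method (the predictor order and the synthetic test frame below are assumptions, and this is not the Modified Block Adaptive Predictive Coder itself), the Levinson-Durbin recursion computes the predictor parameters from a frame's autocorrelation sequence as follows.

# Hedged sketch: the classical autocorrelation method for estimating the LPC predictor
# parameters via the Levinson-Durbin recursion.  The predictor order and the synthetic
# test frame are assumptions; this illustrates the method compared in the thesis, not
# the Modified Block Adaptive Predictive Coder itself.
import numpy as np

def lpc_autocorrelation(frame, order):
    # Coefficients of the prediction-error filter A(z) = 1 + a[1]z^-1 + ... + a[p]z^-p.
    # Assumes a non-silent frame (r[0] > 0).
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1]                      # symmetric coefficient update
        err *= 1.0 - k * k                                  # remaining prediction error
    return a, err

# usage on a synthetic voiced-like frame (120 Hz fundamental, 8 kHz sampling)
fs = 8000
t = np.arange(240) / fs
frame = (np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)) * np.hamming(240)
a, err = lpc_autocorrelation(frame, order=10)
print("predictor coefficients:", np.round(a[1:], 3), "residual energy: %.4f" % err)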

Relevance:

30.00%

Publisher:

Abstract:

The National Housing and Planning Advice Unit commissioned Professor Michael Ball of Reading University to undertake empirical research into how long it was taking to obtain planning consent for major housing sites in England. The focus on sites as opposed to planning applications is important because it is sites that generate housing.

Relevance:

30.00%

Publisher:

Abstract:

The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be “best” in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
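
A minimal sketch of the basic construction discussed here, a single-valued monotone regrading built by matching cumulative histograms, is given below; the optimal and smoothed regradings derived in the paper are not reproduced, and the grey-level range and test data are assumptions.

# Hedged sketch: the basic histogram-matching regrading, a single-valued monotone
# look-up table built from cumulative histograms.  The optimal and smoothed regradings
# derived in the paper are not reproduced here; levels and test data are assumptions.
import numpy as np

def match_histogram(image, reference, levels=256):
    # For each source grey level, pick the first reference level whose cumulative
    # histogram reaches the source level's cumulative histogram value.
    src_hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / image.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)   # monotone regrading
    return lut[image]

# usage on synthetic 8-bit data
rng = np.random.default_rng(0)
img = rng.integers(0, 128, size=(64, 64))     # dark source image
ref = rng.integers(64, 256, size=(64, 64))    # brighter reference
out = match_histogram(img, ref)
print("mean grey level: %.1f -> %.1f" % (img.mean(), out.mean()))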

Relevance:

30.00%

Publisher:

Abstract:

Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
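
For context, the sketch below shows classical multidimensional scaling, a standard distance-based embedding of the kind the first sentence refers to; it is not PLMP itself, whose construction from representative samples is specific to the paper, and the test data are assumptions.

# Hedged sketch: classical multidimensional scaling (MDS), the standard distance-based
# embedding referred to in the first sentence.  This is NOT PLMP; the construction from
# representative samples described in the paper is not reproduced here.
import numpy as np

def classical_mds(D, dim=2):
    # Embed n points into 'dim' dimensions from an n x n pairwise distance matrix D.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# usage: project 200 random 10-dimensional points onto the plane
X = np.random.default_rng(1).normal(size=(200, 10))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D)
print("2-D embedding shape:", Y.shape)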

Relevance:

30.00%

Publisher:

Abstract:

Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphical cards, based on the CUDA platform, contain graphical processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphics boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
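
As a hedged illustration of the per-neuron work that the article maps onto one CUDA thread, the sketch below takes a forward-Euler step of the classical Hodgkin-Huxley equations for a whole population at once; the standard squid-axon parameters, the integration scheme and the NumPy vectorization are stand-ins for the per-thread CUDA kernel and are not taken from the paper.

# Hedged sketch: the per-neuron work that the article maps onto one CUDA thread, here
# written as a vectorised forward-Euler step of the classical Hodgkin-Huxley equations.
# The standard squid-axon parameters, the integration scheme and the NumPy form are
# assumptions standing in for the per-thread CUDA kernel, not code from the paper.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4            # reversal potentials (mV)

def hh_step(V, m, h, n, I_ext, dt=0.01):
    # Advance every neuron's state by dt (ms); each array index plays the role of
    # one CUDA thread solving its own set of coupled differential equations.
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    I_ion = gNa * m ** 3 * h * (V - ENa) + gK * n ** 4 * (V - EK) + gL * (V - EL)
    V = V + dt * (I_ext - I_ion) / C
    m = m + dt * (am * (1.0 - m) - bm * m)
    h = h + dt * (ah * (1.0 - h) - bh * h)
    n = n + dt * (an * (1.0 - n) - bn * n)
    return V, m, h, n

# usage: a small population driven by random external input, as in the abstract's setup
N = 1000
V = np.full(N, -65.0); m = np.full(N, 0.05); h = np.full(N, 0.6); n = np.full(N, 0.32)
I_ext = np.random.uniform(5.0, 15.0, N)      # uA/cm^2
for _ in range(2000):                        # 20 ms of simulated time
    V, m, h, n = hh_step(V, m, h, n, I_ext)
print("mean membrane potential after 20 ms: %.1f mV" % V.mean())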

Relevance:

30.00%

Publisher:

Abstract:

Vascular segmentation is important in diagnosing vascular diseases such as stroke, and it is hampered by image noise and by very thin vessels that can pass unnoticed. One way to accomplish the segmentation is to extract the centerline of the vessel with height ridges, which use intensity as the feature for segmentation. This process can take from seconds to minutes, depending on the technology employed. In order to accelerate the segmentation method proposed by Aylward [Aylward & Bullitt 2002], we have adapted it to run in parallel using the CUDA architecture. The performance of the segmentation method running on the GPU is compared with both the same method running on the CPU and the original Aylward method also running on the CPU. The improvement of the new method over the original one is twofold: first, the starting point for the segmentation process is not a single point in the blood vessel but a volume, thereby making it easier for the user to segment a region of interest; and second, the new method was 873 times faster running on the GPU, and 150 times faster running on the CPU, than the original CPU implementation of Aylward's method.

Relevance:

30.00%

Publisher:

Abstract:

The visualization of three-dimensional (3D) images is increasingly being used in the area of medicine, helping physicians diagnose disease. The advances achieved in the scanners used for acquisition of these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions, thus generating files with much larger sizes. Currently, the visualization of these images is a computationally expensive task, demanding the use of a high-end computer. The direct remote access of these images through the internet is also not efficient, since all the images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyses a solution for the remote rendering of 3D medical images, called Remote Rendering (RR3D). In RR3D, the whole rendering process is performed on a server or a cluster of servers with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotation and zoom. The solution was developed using web services written in Java and an architecture that uses the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested in two scenarios, where the rendering process was performed by a server with graphics hardware (GPU) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with different numbers of cores (processing units) dedicated to it. In order to compare our solution with other medical visualization applications, a third scenario was used in which the rendering process was done locally. In all three scenarios, the solution was tested at different network speeds. The solution satisfactorily solved the problem of the delay in the transfer of the DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.