961 results for graphics processing unit (GPU)
Abstract:
The National Housing and Planning Advice Unit commissioned Professor Michael Ball of Reading University to undertake empirical research into how long it was taking to obtain planning consent for major housing sites in England. The focus on sites as opposed to planning applications is important because it is sites that generate housing.
Abstract:
The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be “best” in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
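As a rough illustration of the regrading idea (a plain histogram-matching lookup table, not the optimal or smoothed regradings derived in the paper), the following Python sketch maps the grey levels of an integer-valued image so that its cumulative histogram approximates that of a reference image; the function name and the 8-bit assumption are illustrative.

```python
import numpy as np

def match_histogram(source, reference, levels=256):
    """Map the grey levels of `source` so that its cumulative histogram
    approximates that of `reference` (a simple monotone regrading)."""
    src_hist, _ = np.histogram(source, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))

    # Normalised cumulative histograms (empirical CDFs).
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size

    # For each source level, pick the reference level whose CDF value is closest.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return lut[source]

# Usage sketch: regraded = match_histogram(landsat_band, reference_band),
# where both arrays hold integer grey levels in [0, 255].
```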
Abstract:
Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
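A minimal sketch of the part-linear idea (not the authors' actual formulation or code): first project a small set of representative samples with any distance-based method, here classical MDS, then fit a linear map from the high-dimensional Cartesian coordinates of those samples to their 2D positions by least squares and apply it to the whole data set. Function and parameter names are illustrative.

```python
import numpy as np

def part_linear_projection(X, n_samples=200, seed=0):
    """Part-linear projection sketch: X is an (n, d) array of Cartesian coordinates."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    Xs = X[idx]

    # Project only the representative samples with classical MDS (2D).
    D2 = np.square(np.linalg.norm(Xs[:, None] - Xs[None, :], axis=-1))
    J = np.eye(len(Xs)) - 1.0 / len(Xs)           # centering matrix
    B = -0.5 * J @ D2 @ J                         # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                      # ascending eigenvalues
    Ys = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))

    # Fit a linear map Phi minimising ||Xs @ Phi - Ys|| and apply it to all data,
    # so no further pairwise distances are needed.
    Phi, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)
    return X @ Phi
```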
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphical cards, based on the CUDA platform, contain graphical processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphic boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
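To illustrate the per-thread workload, the sketch below integrates a single Hodgkin-Huxley neuron with forward Euler in plain Python; in the paper's CUDA implementation each thread performs an equivalent update for its own neuron, with the CPU coordinating communication between GPUs. The constants are the classic squid-axon parameters, and the synaptic/coupling current is folded into a generic external input.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (uF/cm^2, mS/cm^2, mV).
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

def rate_constants(v):
    """Voltage-dependent opening/closing rates for the n, m, h gates."""
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

def hh_step(v, n, m, h, i_ext, dt=0.01):
    """One forward-Euler step of the coupled Hodgkin-Huxley equations
    (the work a single CUDA thread would do for its neuron per time step)."""
    an, bn, am, bm, ah, bh = rate_constants(v)
    i_na = G_NA * m**3 * h * (v - E_NA)
    i_k = G_K * n**4 * (v - E_K)
    i_l = G_L * (v - E_L)
    v += dt * (i_ext - i_na - i_k - i_l) / C_M
    n += dt * (an * (1.0 - n) - bn * n)
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    return v, n, m, h
```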
Abstract:
Vascular segmentation is important in diagnosing vascular diseases such as stroke, and it is hampered by noise in the image and by very thin vessels that can pass unnoticed. One way to accomplish the segmentation is to extract the centerline of the vessel with height ridges, using intensity as the feature for segmentation. This process can take from seconds to minutes, depending on the technology employed. In order to accelerate the segmentation method proposed by Aylward [Aylward & Bullitt 2002], we adapted it to run in parallel on the CUDA architecture. The performance of the segmentation method running on the GPU is compared both to the same method running on the CPU and to Aylward's original method, also running on the CPU. The improvement of the new method over the original one is twofold: first, the starting point for the segmentation process is not a single point in the blood vessel but a volume, making it easier for the user to segment a region of interest; second, the new method was 873 times faster running on the GPU, and 150 times faster running on the CPU, than Aylward's original CPU implementation.
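The centerline extraction can be pictured as ridge traversal of the image intensity: at each step the local Hessian is examined and the point advances along the eigenvector associated with the eigenvalue of smallest magnitude (the direction along the vessel). The sketch below, with illustrative names and a fixed step size, is only a rough outline of that idea, not Aylward's actual algorithm or its CUDA port.

```python
import numpy as np

def ridge_direction(hessian, previous_dir):
    """Vessel (ridge) direction: eigenvector of the local 3x3 Hessian with the
    eigenvalue of smallest magnitude, oriented consistently with the path."""
    w, v = np.linalg.eigh(hessian)
    d = v[:, np.argmin(np.abs(w))]
    return d if np.dot(d, previous_dir) >= 0 else -d

def traverse_ridge(hessian_at, start, start_dir, step=0.5, n_steps=1000):
    """Follow the intensity ridge from `start`; `hessian_at(p)` is assumed to
    return the Hessian of the smoothed image intensity at point p."""
    p = np.asarray(start, dtype=float)
    d = np.asarray(start_dir, dtype=float)
    points = [p.copy()]
    for _ in range(n_steps):
        d = ridge_direction(hessian_at(p), d)
        p = p + step * d
        points.append(p.copy())
    return np.array(points)
```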
Abstract:
The visualization of three-dimensional (3D) images is increasingly being used in medicine, helping physicians diagnose disease. Advances in the scanners used for the acquisition of these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions, and thus much larger files. Currently, the visualization of these images is a computationally expensive task, demanding a high-end computer. Direct remote access to these images through the internet is also inefficient, since all images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyses a solution for the remote rendering of 3D medical images, called Remote Rendering 3D (RR3D). In RR3D, the whole rendering process is performed on a server, or a cluster of servers, with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotation, zoom, etc. The solution was developed using web services written in Java and an architecture that uses the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested with two scenarios in which the rendering process was performed by a server with graphics hardware (GPU) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare our solution to other medical visualization applications, a third scenario was used in which the rendering process was done locally. In all three scenarios, the solution was tested at different network speeds. The solution satisfactorily solved the problem of the delay in the transfer of DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.
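A minimal sketch of the client side of such a remote-rendering architecture: the client sends only camera parameters and receives a rendered frame, so rotation and zoom never require transferring the DICOM volume. The endpoint URL, parameters and JSON layout below are hypothetical illustrations, not RR3D's or ParaViewWeb's actual interface.

```python
import requests  # assumption: the server exposes a simple HTTP rendering endpoint

RENDER_URL = "http://render-server.example/render"  # hypothetical endpoint

def fetch_rendered_frame(study_id, azimuth, elevation, zoom):
    """Ask the server to render the volume with the given camera settings
    and return the resulting 2D image bytes (e.g. PNG/JPEG)."""
    payload = {"study": study_id,
               "camera": {"azimuth": azimuth, "elevation": elevation, "zoom": zoom}}
    response = requests.post(RENDER_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.content  # only the rendered frame crosses the network

# Usage sketch: frame = fetch_rendered_frame("exam-001", azimuth=30, elevation=10, zoom=1.5)
```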
Abstract:
The X-ray crystal structure of a complex between ribonuclease T-1 and guanylyl(3'-6')-6'-deoxyhomouridine (GpcU) has been determined at 2.0 Angstrom resolution. This ligand is an isosteric analogue of the minimal RNA substrate, guanylyl(3'-5')uridine (GpU), where a methylene is substituted for the uridine 5'-oxygen atom. Two protein molecules are part of the asymmetric unit and both have a GpcU bound at the active site in the same manner. The protein-protein interface reveals an extended aromatic stack involving both guanines and three enzyme phenolic groups. A third GpcU has its guanine moiety stacked on His92 at the active site of enzyme molecule A and interacts with the GpcU on molecule B in a neighboring unit via hydrogen bonding between the uridine ribose 2'- and 3'-OH groups. None of the uridine moieties of the three GpcU molecules in the asymmetric unit interacts directly with the protein. GpcU-active-site interactions involve extensive hydrogen bonding of the guanine moiety at the primary recognition site and of the guanosine 2'-hydroxyl group with His40 and Glu58. On the other hand, the phosphonate group is weakly bound, by only a single hydrogen bond with Tyr38, unlike the ligand phosphate groups of other substrate analogues and 3'-GMP, which hydrogen-bonded with three additional active-site residues. Hydrogen bonding of the guanylyl 2'-OH group and the phosphonate moiety is essentially the same as that recently observed for a novel structure of an RNase T-1-3'-GMP complex obtained immediately after in situ hydrolysis of exo-(S-p)-guanosine 2',3'-cyclophosphorothioate [Zegers et al. (1998) Nature Struct. Biol. 5, 280-283]. It is likely that GpcU at the active site represents a nonproductive binding mode for GpU [Steyaert, J., and Engelborghs, Y. (1995) Eur. J. Biochem. 233, 140-144]. The results suggest that the active site of ribonuclease T-1 is adapted for optimal tight binding of both the guanylyl 2'-OH and phosphate groups (of GpU) only in the transition state for catalytic transesterification, which is stabilized by adjacent binding of the leaving nucleoside (U) group.
Abstract:
The productivity and fruit size distribution of 28 processing tomato cultivars were analyzed to determine the ones with potential for the fresh market. The experiment was done in Jaboticabal-SP, Brazil (21°15'22'' South, 48°18'58'' West, altitude 595 m), in a Haplorthox soil, from June to December. The cultivars H 7155, Hypeel 108, Andino, U 573, H 9036, Ipa 6, H 9494, AG 33, Yuba, RPT 1294, AG 72, Peelmech, Curicó, Hypeel 45, RPT 1478, H 9492, H 9498, H 2710, Hitech 45, Halley, Botu 13, H 9553, U 646, NK 1570, AG 45, RPT 1095, RPT 1570 and PSX 37511 were evaluated. The experimental design was randomized blocks, with four replications and five plants per experimental unit. Fruits harvested from each experimental unit were counted, classified by transversal diameter (large, medium, small, very small and cull) and then weighed. Cultivars AG 72, H 9498, Hypeel 45, RPT 1095 and Curicó yielded more than 70 fruits per plant, on average. The total production per plant of cultivars AG 72, H 9498, Hypeel 45, H 7155, Hypeel 108, Halley, Hitech, RPT 1095, H 9494, H 9036 and Curicó was greater than 4 kg. Considering the weight of large and medium fruits, categories which are important for the fresh market, the cultivars H 2710, Botu 13, U 573, Hypeel 45, Yuba, RPT 1294 and Ipa 6 presented values above 50% of the total production.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This study reports the implementation of GMPs in a mozzarella cheese processing plant. The mozzarella cheese manufacturing unit is located in the southwestern region of the state of Paraná, Brazil, and processes 20,000 L of milk daily. The implementation of GMP took place with the creation of a multi-disciplinary team and was carried out in four steps: diagnosis, report of the diagnosis and road map, corrective measures, and follow-up of GMP implementation. The effectiveness of the actions taken and of GMP implementation was assessed by comparing the total percentages of non-conformities and conformities before and after implementation of GMP. Microbiological indicators were also used to assess the implementation of GMP in the mozzarella cheese processing facility. Results showed that the average percentage of conformity after the implementation of GMP increased significantly to 66%, compared with 32% before implementation (p < 0.05). The populations of aerobic microorganisms and total coliforms on equipment were significantly reduced (p < 0.05) after the implementation of GMP, as were the populations of total coliforms on the hands of food handlers (p < 0.05). In conclusion, GMP implementation changed the overall organization of the cheese processing unit, as well as managers' and food handlers' behavior and knowledge of the quality and safety of the products manufactured. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Doctoral program: Advanced Telecommunication Engineering (Ingeniería de Telecomunicación Avanzada)
Abstract:
The work was divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, of which software is used to carry them out, and of how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion coming from outside and directed at sensitive servers of a LAN. This analysis is conducted on the files captured by the two network interfaces, configured in promiscuous mode, of a probe placed in the LAN. Two interfaces are used so that the probe can attach to two LAN segments with different subnet masks. The attack is analysed with various software tools. This defines a third part of the work: the part in which the files captured by the two interfaces are analysed, first with software that examines full-content data, such as Wireshark, then with software that examines session data, handled with Argus, and finally the statistical data, handled with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
Abstract:
Over the past years the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process by immersion of cellular tissue in a hypertonic solution. The diffusion of water from the vegetable tissue to the solution is usually accompanied by the simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. The first step (vacuum increase) consists of the reduction of the pressure in a solid-liquid system, so that the gas in the product pores expands and partially flows out. When the atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation allows the introduction of specific solutes into the tissue, e.g. antioxidants, pH regulators, preservatives, cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation) by following a multianalytical approach. Macro (low-frequency nuclear magnetic resonance), micro (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, allowed evaluating the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue and (iii) the cell compartments. Isothermal calorimetry, respiration and photosynthesis determinations were used to investigate the metabolic changes upon the application of osmotic dehydration or vacuum impregnation. The proposed multianalytical approach should enable both better designs of processing technologies and better estimates of their effects on the tissue.
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection to improve the biopsy protocol. In particular, this thesis contributes with: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on the GPU for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training, which improve system performance while reducing the data collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
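To give a flavour of the level-set machinery underlying such a segmentation framework (a plain NumPy sketch of a single evolution step, not the GPU implementation developed in the thesis): the boundary, encoded as the zero level set of a function φ, is moved under a speed field F through the update φ ← φ − Δt·F·|∇φ|. The helper names in the commented usage are hypothetical.

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit step of the level-set equation phi_t + F*|grad(phi)| = 0,
    using central differences (a naive, non-upwind scheme kept for brevity)."""
    grads = np.gradient(phi)                      # one array per axis of the 3D grid
    grad_mag = np.sqrt(sum(g * g for g in grads))
    return phi - dt * speed * grad_mag

# Usage sketch (names are hypothetical): start from a signed distance function
# that is negative inside the initial contour and iterate.
# phi = initial_signed_distance(volume.shape)
# for _ in range(200):
#     phi = level_set_step(phi, speed_from_image(volume))
# myocardium_mask = phi < 0
```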
Abstract:
The efficient emulation of a many-core architecture is a challenging task: each core could be emulated through a dedicated thread, with such threads interleaved on either a single-core or a multi-core processor, but the high number of context switches would result in unacceptable performance. To support this kind of application, the computational power of the GPU is exploited in order to schedule the emulation threads on the GPU cores. This presents a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements, which are forced to synchronously execute the same instruction on different memory portions. Thus, a new emulation technique is introduced in order to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behavior of the micro-architecture level, where instructions are data that a single routine takes as input. Our new technique has been implemented and compared with the classic emulation approach, in order to investigate the feasibility of a hybrid solution.
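A toy illustration of the contrast between per-opcode dispatch and the micro-architecture-style interpretation described above: instead of a separate routine per ISA opcode (which would make SIMD lanes diverge), every instruction is decoded into a uniform record that a single routine consumes. The instruction format, register file and micro-operation selection below are invented for the example and written in plain Python for readability, not the emulator's actual code.

```python
# Each instruction is plain data: (dest, src_a, src_b, selector).
# One routine handles every instruction, so lanes executing different
# "opcodes" still follow the same code path; the selector only picks a result.

def step(regs, instr):
    dest, a, b, sel = instr
    # A branch-free "ALU": compute every candidate result, then select one.
    candidates = (regs[a] + regs[b],      # sel 0: add
                  regs[a] - regs[b],      # sel 1: sub
                  regs[a] & regs[b],      # sel 2: and
                  regs[a] | regs[b])      # sel 3: or
    regs[dest] = candidates[sel]
    return regs

# Usage: a tiny 4-register machine running two "instructions".
regs = [1, 2, 3, 0]
program = [(3, 0, 1, 0),   # r3 = r0 + r1
           (3, 3, 2, 2)]   # r3 = r3 & r2
for instr in program:
    regs = step(regs, instr)
print(regs)                # [1, 2, 3, 3]
```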