991 resultados para GPU - graphics processing unit
Resumo:
The X-ray crystal structure of a complex between ribonuclease T-1 and guanylyl(3'-6')-6'-deoxyhomouridine (GpcU) has been determined at 2.0 Å resolution. This ligand is an isosteric analogue of the minimal RNA substrate, guanylyl(3'-5')uridine (GpU), in which a methylene group is substituted for the uridine 5'-oxygen atom. Two protein molecules form the asymmetric unit, and both have a GpcU bound at the active site in the same manner. The protein-protein interface reveals an extended aromatic stack involving both guanines and three enzyme phenolic groups. A third GpcU has its guanine moiety stacked on His92 at the active site of enzyme molecule A and interacts with the GpcU on molecule B in a neighboring unit via hydrogen bonding between the uridine ribose 2'- and 3'-OH groups. None of the uridine moieties of the three GpcU molecules in the asymmetric unit interacts directly with the protein. GpcU active-site interactions involve extensive hydrogen bonding of the guanine moiety at the primary recognition site and of the guanosine 2'-hydroxyl group with His40 and Glu58. On the other hand, the phosphonate group is weakly bound by only a single hydrogen bond with Tyr38, unlike the ligand phosphate groups of other substrate analogues and 3'-GMP, which hydrogen bond with three additional active-site residues. Hydrogen bonding of the guanylyl 2'-OH group and the phosphonate moiety is essentially the same as that recently observed in a novel structure of an RNase T-1-3'-GMP complex obtained immediately after in situ hydrolysis of exo-(S-p)-guanosine 2',3'-cyclophosphorothioate [Zegers et al. (1998) Nature Struct. Biol. 5, 280-283]. It is likely that GpcU at the active site represents a nonproductive binding mode for GpU [Steyaert, J., and Engelborghs, Y. (1995) Eur. J. Biochem. 233, 140-144].
The results suggest that the active site of ribonuclease T-1 is adapted for optimal tight binding of both the guanylyl 2'-OH and phosphate groups (of GpU) only in the transition state for catalytic transesterification, which is stabilized by adjacent binding of the leaving nucleoside (U) group.
Resumo:
The productivity and fruit size distribution of 28 processing tomato cultivars were analyzed to identify those with potential for the fresh market. The experiment was conducted in Jaboticabal-SP, Brazil (21°15'22" S, 48°18'58" W, altitude 595 m), in a Haplorthox soil, from June to December. The cultivars H 7155, Hypeel 108, Andino, U 573, H 9036, Ipa 6, H 9494, AG 33, Yuba, RPT 1294, AG 72, Peelmech, Curicó, Hypeel 45, RPT 1478, H 9492, H 9498, H 2710, Hitech 45, Halley, Botu 13, H 9553, U 646, NK 1570, AG 45, RPT 1095, RPT 1570 and PSX 37511 were evaluated. The experimental design was randomized blocks, with four replicates and five plants per experimental unit. Fruits harvested from each experimental unit were counted, classified by transverse diameter (large, medium, small, very small and cull) and then weighed. Cultivars AG 72, H 9498, Hypeel 45, RPT 1095 and Curicó yielded more than 70 fruits per plant, on average. The total production per plant of cultivars AG 72, H 9498, Hypeel 45, H 7155, Hypeel 108, Halley, Hitech, RPT 1095, H 9494, H 9036 and Curicó was greater than 4 kg. Considering the weight of large and medium fruits, the categories that matter for the fresh market, cultivars H 2710, Botu 13, U 573, Hypeel 45, Yuba, RPT 1294 and Ipa 6 had more than 50% of their production in these classes.
Resumo:
In this article we explore the computational power of NVIDIA graphics processing units (GPUs) for cryptography using CUDA (Compute Unified Device Architecture) technology. CUDA eases general-purpose computing by exposing the parallel processing available in GPUs. To this end, the NVIDIA GPU architectures and CUDA are presented, along with cryptography concepts. Furthermore, we compare the CPU versions of the cryptographic algorithms Advanced Encryption Standard (AES) and Message-Digest Algorithm 5 (MD5) with parallel versions written in CUDA. © 2011 AISTI.
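The CPU-versus-GPU comparison in this abstract rests on the fact that hashing many independent messages is embarrassingly parallel. A minimal CPU-side sketch of that one-thread-per-message structure (our own illustration, not the paper's code) looks like this:

```python
# Data-parallel MD5 hashing sketch: each message is hashed independently,
# mirroring the one-thread-per-message mapping a CUDA kernel would use.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def md5_hex(message: bytes) -> str:
    """Hash one message; in a GPU version this would be one thread's work."""
    return hashlib.md5(message).hexdigest()

def md5_parallel(messages):
    """Hash many independent messages concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(md5_hex, messages))

digests = md5_parallel([b"abc", b"hello"])
```

Because the messages share no state, the same mapping ports directly to one CUDA thread per message; the interesting engineering in the paper is in memory layout and kernel design, not in the parallel decomposition itself.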
Resumo:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Resumo:
This study reports the implementation of GMP (Good Manufacturing Practices) in a mozzarella cheese processing plant. The mozzarella cheese manufacturing unit is located in the Southwestern region of the state of Paraná, Brazil, and processes 20,000 L of milk daily. The implementation of GMP began with the creation of a multi-disciplinary team and was carried out in four steps: diagnosis, report of the diagnosis and road map, corrective measures, and follow-up of GMP implementation. The effectiveness of the actions taken and of GMP implementation was assessed by comparing the total percentages of non-conformities and conformities before and after implementation of GMP. Microbiological indicators were also used to assess the implementation of GMP in the mozzarella cheese processing facility. Results showed that the average percentage of conformity increased significantly after the implementation of GMP, from 32% to 66% (p < 0.05). The populations of aerobic microorganisms and total coliforms on equipment were significantly reduced (p < 0.05) after the implementation of GMP, as were the populations of total coliforms on the hands of food handlers (p < 0.05). In conclusion, GMP implementation changed the overall organization of the cheese processing unit, as well as managers' and food handlers' behavior and knowledge of the quality and safety of the products manufactured. (C) 2011 Elsevier Ltd. All rights reserved.
Resumo:
Doctoral program: Advanced Telecommunications Engineering (Ingeniería de Telecomunicación Avanzada)
Resumo:
The work is divided into three macro-areas. The first is a theoretical analysis of how intrusions work, which software is used to carry them out, and how to defend against them (using the devices generically known as firewalls). The second macro-area analyzes an intrusion mounted from the outside against sensitive servers on a LAN. This analysis is conducted on the files captured by two network interfaces configured in promiscuous mode on a probe inside the LAN; two interfaces are needed to attach to two LAN segments with different subnet masks. The attack is analyzed with various tools, which effectively defines a third part of the work: the captured files are examined first with software for full-content data, such as Wireshark, then with software for session data, handled with Argus, and finally the statistical data are processed with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration to monitor, through plugins, the remaining disk space on a remote agent machine and the MySql and DNS services. Naturally, Nagios can be configured to monitor any kind of service offered on the network.
Resumo:
Over the past years the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process by immersion of cellular tissue in a hypertonic solution. The diffusion of water from the vegetable tissue to the solution is usually accompanied by a simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. The first step (vacuum) reduces the pressure in the solid-liquid system, so the gas in the product pores expands and partially flows out. When the atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation makes it possible to introduce specific solutes into the tissue, e.g. antioxidants, pH regulators, preservatives, and cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation) by following a multianalytical approach. Macro-level (low-frequency nuclear magnetic resonance), micro-level (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, allowed evaluating the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue, and (iii) the cell compartments.
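The two-step pressure exchange described above admits a one-line Boyle's-law estimate (a sketch under idealized assumptions: isothermal compression, rigid pores, no tissue deformation; the symbols are ours, not the authors'). Gas trapped in the pores at the vacuum pressure $p_1$ is compressed when the atmospheric pressure $p_2$ is restored, and the external liquid fills the vacated pore volume, so the impregnated fraction of the sample volume is

```latex
X \;=\; \varepsilon \left( 1 - \frac{p_1}{p_2} \right),
```

where $\varepsilon$ is the porosity of the tissue. For example, with $p_1 = 10\,\mathrm{kPa}$ and $p_2 = 101\,\mathrm{kPa}$, roughly 90% of the pore volume would be filled with solution.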
Isothermal calorimetry, respiration and photosynthesis determinations were used to investigate the metabolic changes upon the application of osmotic dehydration or vacuum impregnation. The proposed multianalytical approach should enable both better design of processing technologies and better estimation of their effects on the tissue.
Resumo:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. The research on UTC produced a CAD tool for prostate cancer detection that improves the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on the GPU for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training, which improve system performance while reducing the data collection effort and avoiding waste of collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation.
Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
Resumo:
The efficient emulation of a many-core architecture is a challenging task: each core could be emulated by a dedicated thread, and such threads would be interleaved on either a single-core or a multi-core processor, but the high number of context switches results in unacceptable performance. To support this kind of application, the GPU's computational power is exploited in order to schedule the emulation threads on the GPU cores. This raises a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements, which are forced to execute the same instruction synchronously on different memory portions. Thus, a new emulation technique is introduced to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behavior of the micro-architecture level, where instructions are data that a single routine takes as input. The new technique has been implemented and compared with the classic emulation approach, in order to investigate the viability of a hybrid solution.
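The "instructions as data" idea can be sketched in a few lines (a toy ISA of our own invention, not the emulator described above): instead of one routine per opcode, which would make SIMD lanes diverge, a single interpreter routine treats every decoded instruction as plain data and funnels all opcodes through one uniform code path.

```python
# Toy micro-architecture-style interpreter: one routine, instructions as data.
# Every emulated core runs the same routine body, so on a SIMD device the
# lanes would not diverge per opcode (illustrative sketch only).

# An instruction is just a tuple: (opcode, dst_reg, src_reg_a, src_reg_b)
ADD, SUB, MUL = 0, 1, 2

def step(regs, instr):
    """Execute one instruction as data; no per-opcode routine."""
    op, dst, a, b = instr
    x, y = regs[a], regs[b]
    # Compute every functional unit's result, then select by opcode,
    # mimicking how a micro-architecture gates one result onto the bus.
    results = (x + y, x - y, x * y)
    regs[dst] = results[op]
    return regs

def run(program, regs):
    """Run a whole program on a register file."""
    for instr in program:
        regs = step(regs, instr)
    return regs

# r0 = r0 + r1; r2 = r0 * r1
final = run([(ADD, 0, 0, 1), (MUL, 2, 0, 1)], [2, 3, 0])
```

The branch-free select is deliberately wasteful on a CPU, but it is exactly the trade the abstract describes: uniform control flow in exchange for redundant arithmetic, which suits SIMD hardware.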
Resumo:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force it to sample a specific region of the configurational space. N independent simulations are performed in order to sample the whole region of interest, and the WHAM algorithm is then used to estimate the original system energy from the N atomic trajectories. The parallelization of WHAM has been performed with CUDA, a language for programming the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can substantially speed up WHAM execution compared with previous serial CPU implementations; the WHAM CPU code, in fact, shows timing problems at very high numbers of iterations. The algorithm was written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, showing a performance increase when the model was executed on graphics cards with higher compute capability. Nonetheless, the GPUs used to test the algorithm were quite old and not designed for scientific computation. It is likely that a further performance increase would be obtained if the algorithm were executed on GPU clusters with high computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, together with their applications to the study of ion channels and to Molecular Docking (Chapter 1); I then present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
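The WHAM computation summarized above is a self-consistent iteration over the window histograms. A minimal serial sketch on synthetic data (our own toy setup, with the thermal factor β set to 1; not the thesis code) shows the loop that a CUDA port would parallelize over bins and windows:

```python
# Minimal WHAM self-consistency loop (beta = 1, unitless energies).
# hist[k][b]: counts from umbrella window k in bin b
# bias[k][b]: exp(-V_k(x_b)), Boltzmann factor of window k's bias in bin b

def wham(hist, bias, n_iter=200):
    """Return the unbiased probability P[b] over bins."""
    K, B = len(hist), len(hist[0])
    N = [sum(hist[k]) for k in range(K)]   # samples per window
    f = [1.0] * K                          # exp(f_k), window free-energy factors
    for _ in range(n_iter):
        # Unbiased estimate: P(b) = sum_k n_k(b) / sum_k N_k f_k c_k(b)
        P = [sum(hist[k][b] for k in range(K)) /
             sum(N[k] * f[k] * bias[k][b] for k in range(K))
             for b in range(B)]
        # Update normalizations: 1/f_k = sum_b c_k(b) P(b)
        f = [1.0 / sum(bias[k][b] * P[b] for b in range(B)) for k in range(K)]
    Z = sum(P)
    return [p / Z for p in P]

# With a flat bias the result reduces to the pooled, normalized histogram.
P = wham([[2, 2], [1, 3]], [[1, 1], [1, 1]])
```

Both inner sums are independent across bins (and across windows), which is why the iteration maps naturally onto GPU threads: one thread per bin for the `P` update, one per window for the `f` update.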
Resumo:
Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is designed that significantly reduces this variation for a particular pyramidal blurring technique. Moreover, the pyramidal blur algorithm is generalized to allow for a continuous variation of the blur width. Furthermore, an efficient implementation for programmable graphics hardware is presented. The proposed method is named “quasi-convolution pyramidal blurring” since the resulting effect is very close to image blurring based on a convolution filter for many applications.
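The pyramid principle the abstract builds on can be illustrated with a toy 1D version (our own simplification, using a fixed [1, 2, 1]/4 kernel; not the quasi-convolution filter designed in the work above): blurring is downsampling with an analysis filter followed by upsampling with a synthesis filter, and the blur width grows with the number of pyramid levels.

```python
# Toy 1D pyramidal blur: analysis-filter downsampling, then upsampling.

def smooth(x):
    """One [1, 2, 1]/4 filter pass with clamped borders."""
    n = len(x)
    return [(x[max(i - 1, 0)] + 2 * x[i] + x[min(i + 1, n - 1)]) / 4.0
            for i in range(n)]

def downsample(x):
    """Analysis step: smooth, then keep every other sample."""
    return smooth(x)[::2]

def upsample(x):
    """Synthesis step: duplicate samples, then smooth."""
    doubled = [v for v in x for _ in (0, 1)]
    return smooth(doubled)

def pyramid_blur(x, levels):
    """Blur whose width grows with the number of pyramid levels."""
    n = len(x)
    for _ in range(levels):
        x = downsample(x)
    for _ in range(levels):
        x = upsample(x)
    return x[:n]

blurred = pyramid_blur([0, 0, 0, 1.0, 0, 0, 0, 0], levels=1)
```

Each level costs only O(n) work regardless of the effective blur width, which is the speed advantage over direct convolution; the paper's contribution addresses the quality side, i.e. making the effective kernel shape spatially uniform and the blur width continuously variable.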
Resumo:
We present a high-performance yet low-cost system for multi-view rendering in virtual reality (VR) applications. In contrast to complex CAVE installations, which are typically driven by one render client per view, we arrange eight displays in an octagon around the viewer to provide a full 360° projection, and we drive these eight displays with a single PC equipped with multiple graphics processing units (GPUs). In this paper we describe the hardware and software setup, as well as the low-level and high-level optimizations needed to fully exploit the parallelism of this multi-GPU multi-view VR system.