48 results for graphics processor


Relevance:

10.00%

Publisher:

Abstract:

As an auxiliary tool to fight hunger by reducing food waste and improving the population's quality of life, CEASA/RN ran the program MESA DA SOLIDARIEDADE from August 2003 to August 2005. Despite the program's positive results, with around 226 tons of food already distributed, food is still thrown away, because delivering it in its natural form would pose a health risk to those consuming it; only proper processing can make it edible. This work aims to reuse the solid residues of vegetable origin generated by CEASA/RN through the MESA DA SOLIDARIEDADE program, and to characterize the resulting product so that it can be used as a mineral supplement in the human diet. For sample collection (from September to December 2004), a methodology was developed taking as reference the daily mineral-salt requirements of children aged seven to ten. The samples were packed in plastic bags and transported at ambient temperature to the laboratory, where they were sorted, weighed, disinfected, fractionated and dried at 70 °C in a drying oven. The dried sample was ground and stored in previously sterilized bottles. The fresh (in natura) sample was weighed in the same proportion as the dried sample and blended into a uniform mass in a domestic food processor. Physico-chemical analyses were carried out in triplicate on both the fresh samples and the dried product: pH, moisture, acidity and soluble solids according to IAL (1985); mineral content (Ca, K, Na, Mg, P and Fe) determined by atomic absorption spectrophotometry; caloric value by bomb calorimeter; and presence of fecal coliforms and E. coli by the Colilert method (APHA, 1995). Over this period the dried vegetable-based food presented, on average, 5.06% moisture, pH 4.62, acidity of 2.73 mg of citric acid/100 g of sample, 51.45 °Brix of soluble solids, 2,323.50 mg of K/100 g, 299.06 mg of Ca/100 g, 293 mg of Na/100 g, 154.66 mg of Mg/100 g, 269.62 mg of P/100 g, 6.38 mg of Fe/100 g, a caloric value of 3.691 kcal/g (15.502 kJ/g), and no contamination by fecal coliforms or E. coli. The dried food developed in this research showed satisfactory keeping qualities and a low caloric content, constituting a good source of potassium, magnesium, sodium and iron that can be used as a dietary complement for these minerals.

Relevance:

10.00%

Publisher:

Abstract:

Waste in the escargot-processing industry is substantial, consisting basically of escargot meat that falls outside commercial standards and of the viscera. In this context there is a need to make use of these by-products, and one possibility is to dry them into a reusable form. The present work therefore studies the reuse of the by-products of escargot processing by means of drying. The samples were turned into pastes in a domestic food processor for approximately one minute and compacted into unperforated aluminum trays at three different bed depths (5 mm, 10 mm and 15 mm). Drying was carried out in a tray dryer with cross-flow air circulation at a velocity of 0.2 m/s and three temperature levels (70 °C, 80 °C and 90 °C). A drying-kinetics study was carried out on the curves obtained and on the heat- and mass-transfer coefficients, using experimental procedures based on a 2² factorial design. Microbiological and physico-chemical analyses were also performed on the fresh and dehydrated by-products. In the drying process, the external resistances to heat and mass transfer proved very important during the constant-rate period, which was influenced by temperature. The evaporation rates indicated mixed control of mass transfer for the thickest layers. As expected, the drying constant was influenced by the temperature and by the thickness of the bed, increasing with the former and decreasing with the latter. The statistical analysis of the results, in accordance with the 2² factorial design, showed that fissures, shrinkage of the transfer area and the formation of a crust on the surface may have contributed to the differences between the experimental results and the proposed linear model. Temperature and thickness significantly influenced the response variables studied, evaporation rate and drying constant, and significant, predictive statistical models were obtained for the evaporation rate of both the meat and the viscera.
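
To make the kinetics concrete, the sketch below fits the simple Lewis (Newton) thin-layer model, MR(t) = exp(-k·t), to hypothetical moisture-ratio data. The abstract does not specify the model used, so both the model choice and the numbers here are illustrative assumptions, not the thesis's results.

```python
# Minimal sketch (not from the thesis): fitting the Lewis/Newton thin-layer
# drying model MR(t) = exp(-k*t) to hypothetical moisture-ratio data, to
# illustrate how a "drying constant" k is estimated from a drying curve.
import numpy as np
from scipy.optimize import curve_fit

def moisture_ratio(t, k):
    """Lewis model: dimensionless moisture ratio decaying with drying constant k."""
    return np.exp(-k * t)

# Hypothetical measurements: time in minutes, moisture ratio (0..1)
t = np.array([0, 30, 60, 90, 120, 180, 240], dtype=float)
mr = np.array([1.00, 0.72, 0.52, 0.38, 0.27, 0.14, 0.08])

(k_fit,), _ = curve_fit(moisture_ratio, t, mr, p0=[0.01])
print(f"drying constant k = {k_fit:.4f} 1/min")
```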

Relevance:

10.00%

Publisher:

Abstract:

This study analyzes the graphic communication of hypermedia interface layouts oriented toward Distance Education via the Internet. The proposal is justified by the growing offer of courses in that modality and the consequent application of hypermedia elements to teaching and learning. The method of analysis involved netnographic research, addressed to the intermediate student cycle of the Continuing Training Program Media in Education, and a heuristic evaluation of the interfaces of the Virtual Learning Environment "E-Proinfo" and of the modules of the cycle. In this evaluation we observed the implementation of usability attributes and the degree of interactivity of each interface. The results revealed an inefficient implementation of the usability attributes, with a consequent reduction in the levels of interactivity. As a proposal we present the Virtual Learning Design, a model of hypermedia layout intended to provide usability for virtual learning environments and to extend the acquisition of literacy by students and tutors. This hypermedia design proposal does not aim at the demarcation of preconceived models, but at a layout proposal in which each hypermedia element is applied with a view to generating intuitive, faster and more efficient navigation in these environments.

Relevance:

10.00%

Publisher:

Abstract:

Infographics have historically accompanied the evolution of journalism, from the incipient handmade models of the eighteenth century to today's computers and sophisticated software. Whether to face the advent of TV and the resulting loss of readers of the printed newspaper, or to represent the Gulf War, where photography was not allowed, infographics reached modern levels of production and publication. Technical devices enabled infographics to evolve in the environment of the internet, allowing manipulation by the reader and incorporating video, audio and animation, in what came to be called interactive infographics. These digital models of information visualization have recently arrived at daily newspapers in the Northeast and their respective websites, with regionalized features. This paper therefore sets out to explore and describe the production processes of interactive infographics, taking as its example the Diário do Nordeste, of Fortaleza, Ceará, whose infographics department was created one year ago. It draws on aspects that guide the theory of journalism, such as newsmaking, the filters that shape the productive routine (gatekeeping) and the construction stages of the news. The research also relies on the theoretical framework on the subject, on concepts essential to the characterization of infographics, and on methodological procedures including systematic empirical observation of the newsroom's production routines, which can attest to limitations and/or advances.

Relevance:

10.00%

Publisher:

Abstract:

The increase in transistor-integration capacity has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as with buses, or be an ad hoc solution, as with bus hierarchies. The Network-on-Chip (NoC) is thus regarded as the ideal interconnection subsystem for SoCs. NoCs allow simultaneous point-to-point channels between components and can be reused in other designs. On the other hand, NoCs can raise design complexity, chip area and dissipated power, so it is necessary either to change the way they are used or to change the development paradigm. This work therefore proposes a NoC-based system in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective is to study the feasibility of developing this system, named the IPNoSys system. For this study, a cycle-accurate simulation tool was developed in SystemC to simulate the system executing applications, which were implemented in a packet description language also developed for this study. Several results obtained with the simulation tool were used to evaluate system performance. The methodology for describing an application consists of transforming the high-level application into a data-flow graph that becomes one or more packets. This methodology was applied to three applications: a counter, a 2D DCT and a floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application; the DCT was used for comparison with the STORM platform; and the floating-point addition evaluated the efficiency of a software routine that executes an instruction not implemented in hardware. The simulation results confirm the feasibility of the IPNoSys system: applications described as packets can be executed, sequentially or in parallel, without interruptions caused by deadlock, and IPNoSys execution times are better than those of the STORM platform.
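
The sketch below gives a hypothetical, much-simplified picture of the packet-execution idea described above: an application travels as a packet of instructions, and each router consumes one instruction per hop. The packet format and opcodes are invented for illustration; this is not the IPNoSys packet description language.

```python
# Minimal sketch (hypothetical, not the IPNoSys packet format): an application
# expressed as a packet of instructions that is executed one instruction per
# router hop, instead of on a conventional processor.
from dataclasses import dataclass, field

@dataclass
class Packet:
    instructions: list          # e.g. ("add", a, b) tuples, consumed in order
    results: list = field(default_factory=list)

def router_execute(packet: Packet) -> None:
    """Each router consumes the packet's next instruction and appends the result."""
    op, a, b = packet.instructions.pop(0)
    if op == "add":
        packet.results.append(a + b)
    elif op == "mul":
        packet.results.append(a * b)

# Route the packet through as many hops as it has instructions.
pkt = Packet([("add", 2, 3), ("mul", 4, 5), ("add", 1, 1)])
hops = 0
while pkt.instructions:
    router_execute(pkt)
    hops += 1
print(pkt.results, f"executed over {hops} router hops")
```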

Relevance:

10.00%

Publisher:

Abstract:

The increase in application complexity has demanded hardware that is ever more flexible and able to achieve higher performance, and traditional hardware solutions have not succeeded in meeting these applications' constraints. General-purpose processors are inherently flexible, since they perform several tasks, but they cannot reach high performance when compared with application-specific devices; application-specific devices, in turn, achieve high performance because they perform only a few tasks, at the cost of flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. Instruction-level parallelism is extracted at compile time, so this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. For the design of the architecture, the work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE aims to view the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. Benchmarks used for performance analysis demonstrated a speedup of 11x in the execution of some applications.
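
As a rough illustration of compile-time ILP extraction, the sketch below groups the operations of a data-flow graph into ASAP levels: operations in the same level have no dependencies between them and could execute simultaneously on reconfigurable logic. This is a generic scheduling technique, not RoSA's actual GCC optimization phase.

```python
# Minimal sketch (generic, not RoSA's GCC pass): grouping the nodes of a
# data-flow graph into ASAP levels; operations in the same level are mutually
# independent, exposing instruction-level parallelism.
def asap_levels(deps: dict[str, list[str]]) -> list[list[str]]:
    """deps maps each operation to the operations it depends on."""
    level = {}
    def depth(op):
        if op not in level:
            level[op] = 1 + max((depth(d) for d in deps[op]), default=0)
        return level[op]
    for op in deps:
        depth(op)
    out = [[] for _ in range(max(level.values()))]
    for op, l in level.items():
        out[l - 1].append(op)
    return out

# Hypothetical dataflow: t1 = a+b, t2 = c*d (independent), t3 = t1-t2
print(asap_levels({"t1": [], "t2": [], "t3": ["t1", "t2"]}))
# -> [['t1', 't2'], ['t3']]  (t1 and t2 can run in parallel)
```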

Relevance:

10.00%

Publisher:

Abstract:

A 3D binary image is considered well-composed if, and only if, the union of the faces shared by the foreground and background voxels of the image is a surface in R³. Well-composed images have desirable topological properties that allow us to simplify and optimize algorithms widely used in computer graphics, computer vision and image processing. These advantages have fostered the development of algorithms to repair two-dimensional (2D) and three-dimensional (3D) images that are not well-composed, known as repairing algorithms. In this dissertation, we propose two repairing algorithms, one randomized and one deterministic. Both algorithms are capable of making topological repairs in 3D binary images, producing well-composed images similar to the original images. The key idea behind both algorithms is to iteratively change the assigned color of some points in the input image from 0 (background) to 1 (foreground) until the image becomes well-composed. The points whose colors are changed are chosen according to their values in the fuzzy connectivity map resulting from the image segmentation process; using this map ensures that the subset of points chosen by the algorithm at any given iteration is the one with the least affinity with the background among all possible choices.
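
The shared skeleton of the two algorithms can be sketched as below: flip background voxels to foreground, in order of highest fuzzy-connectivity affinity, until the image is well-composed. The well-composedness test is a stand-in (the real test checks critical edge and vertex configurations), and selecting one voxel per iteration simplifies the subset selection described above.

```python
# Minimal sketch (schematic, not the dissertation's exact criteria): iterative
# topological repair driven by a fuzzy connectivity map.  `is_well_composed`
# is a stand-in for the real test on critical edge/vertex configurations.
import numpy as np

def repair(img: np.ndarray, fuzzy: np.ndarray, is_well_composed) -> np.ndarray:
    img = img.copy()
    while not is_well_composed(img):
        background = (img == 0)
        # Flip the background voxel with the greatest affinity to the object
        # (i.e., the least affinity with the background), per the fuzzy map.
        flat = np.where(background.ravel(), fuzzy.ravel(), -np.inf)
        img.ravel()[int(np.argmax(flat))] = 1
    return img

# Toy usage with an invented predicate, just to exercise the loop:
img = np.zeros((2, 2, 2), int)
fuzzy = np.random.default_rng(0).random((2, 2, 2))
out = repair(img, fuzzy, lambda im: im.sum() >= 2)
print(out.sum(), "voxels flipped to foreground")
```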

Relevance:

10.00%

Publisher:

Abstract:

The constant increase in the complexity of computer applications demands ever more powerful hardware to support them. With processor operating frequencies reaching their limits, the most viable solution is parallelism. The concept of the MPSoC (Multi-Processor System-on-Chip) builds on parallelism techniques and on the progressive growth of transistor-integration capacity on a single chip. MPSoCs will eventually become a cheaper and faster alternative to supercomputers and clusters, and applications developed for these high-performance systems will migrate to computers equipped with MPSoCs containing dozens to hundreds of computation cores. In particular, applications in oil and natural gas exploration are characterized by the high processing capacity they require and would benefit greatly from such high-performance systems. This work evaluates a traditional and complex application of the oil and gas industry, reservoir simulation, developing a solution on integrated computational systems on a single chip with hundreds of functional units. Since the STORM (MPSoC Directory-Based Platform) platform already provides a shared-memory model, a new distributed-memory model was developed, along with a message-passing library that follows the MPI standard.
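
As an illustration of the kind of primitive such a message-passing library provides, the sketch below performs a standard MPI-style halo exchange for a 1D-partitioned reservoir grid using mpi4py. It is a generic MPI idiom, not the thesis's own library.

```python
# Minimal sketch (standard MPI idiom, not the thesis's library): a 1D halo
# exchange of the kind a reservoir simulator performs each time step when the
# grid is partitioned across distributed-memory cores.
# Run with e.g.:  mpirun -np 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

cells = np.full(8, float(rank))            # this core's slice of the reservoir
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary cells with both neighbors (ghost/halo cells).
halo_r = comm.sendrecv(cells[-1], dest=right, source=left)
halo_l = comm.sendrecv(cells[0],  dest=left,  source=right)
print(f"rank {rank}: left halo={halo_l}, right halo={halo_r}")
```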

Relevance:

10.00%

Publisher:

Abstract:

This work presents the concept, design and implementation of an MP-SoC platform named STORM (MP-SoC DirecTory-Based PlatfORM). Currently the platform comprises the following modules: SPARC V8 processor, GPOP processor, cache module, memory module, directory module, and two different models of Network-on-Chip, NoCX4 and Obese Tree. All modules were implemented in SystemC, simulated and validated, individually and in groups, and their description is presented in detail. To program the platform in C, a SPARC assembler was implemented, fully compatible with the assembly code generated by gcc. For parallel programming, a mutex-management library was implemented, relying on the assembler's support. A total of 10 simulations of increasing complexity are presented to validate the concepts, including real parallel applications such as matrix multiplication, mergesort, KMP, motion estimation and 2D DCT.
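
A mutex library of this kind is typically built on an atomic test-and-set instruction (SPARC V8 provides ldstub). The sketch below shows the classic spinlock pattern, with the hardware atomicity emulated by a Python lock; it is illustrative, not STORM's actual implementation.

```python
# Minimal sketch (illustrative, not STORM's library): a spinlock built on an
# atomic test-and-set primitive such as SPARC V8's ldstub.  Atomicity is
# emulated here; on the real platform it comes from the hardware instruction.
import threading

_atomic = threading.Lock()          # stands in for hardware atomicity

def test_and_set(cell: list) -> int:
    """Atomically set cell[0] to 1 and return its previous value."""
    with _atomic:
        old, cell[0] = cell[0], 1
        return old

def mutex_lock(cell: list) -> None:
    while test_and_set(cell):       # spin until we observe the lock free
        pass

def mutex_unlock(cell: list) -> None:
    cell[0] = 0

lock = [0]
mutex_lock(lock); mutex_unlock(lock)   # trivial single-thread usage
```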

Relevance:

10.00%

Publisher:

Abstract:

Vascular segmentation is important in diagnosing vascular diseases such as stroke, and is hampered by image noise and by very thin vessels that can go unnoticed. One way to accomplish the segmentation is to extract the vessel centerline using height ridges, which use intensity as the feature for segmentation. This process can take from seconds to minutes, depending on the technology employed. In order to accelerate the segmentation method proposed by Aylward [Aylward & Bullitt 2002], we adapted it to run in parallel on the CUDA architecture. The performance of the segmentation method running on the GPU is compared both with the same method running on the CPU and with Aylward's original method, also running on the CPU. The new method improves on the original in two ways: the starting point for the segmentation process is not a single point in the blood vessel but a volume, making it easier for the user to segment a region of interest; and the new method ran 873 times faster on the GPU, and 150 times faster on the CPU, than the original method on the CPU.
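
The intuition behind intensity-ridge centerline extraction can be sketched as a greedy climb from a seed voxel toward locally brighter neighbors; starting from a whole volume of seeds simply repeats this climb from many points, which is a natural unit of GPU parallelism. The sketch below is a drastic simplification of Aylward's method, with invented data.

```python
# Minimal sketch (greatly simplified, not Aylward's height-ridge traversal):
# climb from a seed voxel to the locally brightest neighbor until a local
# intensity maximum is reached.
import numpy as np
from itertools import product

def climb_ridge(vol: np.ndarray, seed: tuple) -> tuple:
    pos = np.array(seed)
    while True:
        neighbors = [pos + np.array(d) for d in product((-1, 0, 1), repeat=3)
                     if any(d) and all(0 <= (pos + d)[i] < vol.shape[i]
                                       for i in range(3))]
        best = max(neighbors, key=lambda p: vol[tuple(p)])
        if vol[tuple(best)] <= vol[tuple(pos)]:
            return tuple(pos)          # local maximum: a point on the ridge
        pos = best

vol = np.random.rand(16, 16, 16)       # invented volume for illustration
print("ridge point near seed:", climb_ridge(vol, (8, 8, 8)))
```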

Relevance:

10.00%

Publisher:

Abstract:

The increasing demand for processing power in recent years has pushed the integrated-circuit industry to look for ways of providing more processing power with less heat dissipation, power consumption and chip area. This goal was long achieved by raising the circuit clock, but since that approach has physical limits, a new solution emerged: the multiprocessor system-on-chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. One of the first activities of the oil exploration industry is deciding whether to explore an oil field; those decisions are aided by reservoir simulations demanding high processing power, and the MPSoC may offer greater performance if its parallelism is well exploited. This work presents the proposal of a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, analyzing their influence on the reservoir-simulation problem.
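
As a hypothetical illustration of the kind of scheduling such a micro-kernel might provide (the abstract does not detail its policy), the sketch below implements cooperative round-robin scheduling, with Python generators standing in for task contexts.

```python
# Minimal sketch (hypothetical, not the thesis's micro-kernel): cooperative
# round-robin scheduling; `yield` marks the point where the kernel switches.
from collections import deque

def task(name: str, steps: int):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # give the CPU back to the kernel

ready = deque([task("A", 2), task("B", 3)])
while ready:                       # round-robin: run, rotate, repeat
    t = ready.popleft()
    try:
        next(t)
        ready.append(t)            # still runnable: back of the queue
    except StopIteration:
        pass                       # task finished: drop it
```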

Relevance:

10.00%

Publisher:

Abstract:

The visualization of three-dimensional (3D) images is increasingly used in medicine, helping physicians diagnose disease. Advances in the scanners used to acquire these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images at higher resolutions, and thus files with much larger sizes. Rendering these images is computationally expensive and demands a high-end computer. Direct remote access to these images over the internet is also inefficient, since all the images must be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyzes a solution for the remote rendering of 3D medical images, called Remote Rendering 3D (RR3D). In RR3D the whole rendering process is performed on a server, or a cluster of servers, with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotation and zoom. The solution was developed using web services written in Java and an architecture based on the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested in two scenarios, in which the rendering was performed by a server with graphics hardware (GPUs) and by a server without GPUs; in the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare the solution with other medical visualization applications, a third scenario was used in which the rendering was done locally. In all three scenarios the solution was tested at different network speeds. The solution satisfactorily solved the problem of the delay in transferring DICOM files, while allowing low-end computers, even tablets and smartphones, to be used as clients for visualizing the exams.
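
The essential contract of this architecture is that the heavy rendering stays on the server and only a small image crosses the network. The schematic Python server below is an assumption made for illustration; the actual RR3D services are written in Java on ParaViewWeb, and the renderer here is a stub.

```python
# Minimal sketch (schematic, not RR3D's Java web services): the client sends
# camera parameters, the server renders the full volume, and only a small 2D
# image is returned.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def render_volume(angle: float) -> bytes:
    """Stand-in for the real GPU/cluster renderer (e.g. ParaView)."""
    return f"<image rendered at angle {angle}>".encode()

class RenderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        angle = float(qs.get("angle", ["0"])[0])
        body = render_volume(angle)         # heavy work stays on the server
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)              # only the small result crosses the network

# HTTPServer(("", 8000), RenderHandler).serve_forever()  # client: GET /?angle=30
```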

Relevance:

10.00%

Publisher:

Abstract:

Reconfigurable computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. Reconfigurable architectures have several goals, among them increased performance. Using reconfigurable architectures to increase system performance is a well-known technique, especially because certain algorithms that are slow on current processors can be implemented directly in hardware. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as instruction formats, operands and operations. The main objective of this work is the development of a RISP processor, combining two techniques: configuring the processor's set of executable instructions during development, and reconfiguring the processor itself at execution time. The design and VHDL implementation of this RISP processor are intended to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time; and the possibility of creating and combining new instructions, so that the processor comes to recognize and use them at run time as if they existed in the fixed instruction set. Instructions are created and combined through a reconfiguration unit incorporated into the processor, which allows the user to send custom instructions to the processor and later use them as if they were fixed instructions. This work also presents simulations of applications involving fixed and custom instructions, and compares these applications with respect to power consumption and execution time, confirming that the goals for which the processor was developed were attained.
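
A software analogue of the reconfiguration unit is a dispatch table to which new opcodes can be registered at run time, after which they are invoked exactly like fixed instructions. The sketch below is conceptual: the opcode names and behaviors are invented, and the real design is in VHDL.

```python
# Minimal sketch (conceptual, not the thesis's VHDL design): an instruction
# decoder as a dispatch table to which "custom" instructions can be added at
# run time -- the software analogue of a RISP reconfiguration unit.
isa = {
    "add": lambda a, b: a + b,      # fixed instruction set
    "sub": lambda a, b: a - b,
}

def reconfigure(opcode: str, behavior) -> None:
    """Register a custom instruction; afterwards it is used like a fixed one."""
    isa[opcode] = behavior

def execute(opcode: str, a: int, b: int) -> int:
    return isa[opcode](a, b)

reconfigure("madd3", lambda a, b: a * b + 3)          # hypothetical custom op
print(execute("add", 2, 3), execute("madd3", 2, 3))   # -> 5 9
```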

Relevance:

10.00%

Publisher:

Abstract:

Non-Photorealistic Rendering (NPR) is a class of techniques that aims to reproduce artistic techniques, trying to express feelings and moods in the rendered scenes and giving them the appearance of having been made by hand. Another way of defining NPR is as the processing of scenes, images or videos into artwork, generating scenes, images or videos that can have the visual appeal of pieces of art, expressing the visual and emotional characteristics of artistic styles. This dissertation presents a new NPR method for the stylization of images and videos, based on a typical artistic expression of the Northeast region of Brazil that uses colored sand to compose landscape images on the inner surface of glass bottles. The method comprises one technique for generating 2D procedural textures of sand, and two techniques that mimic effects created by the artists using their tools. It also presents a method for generating 2½D sandbox animations from the stylized video. Temporal coherence within these stylized videos can be enforced on individual objects with the aid of a video segmentation algorithm. The techniques presented in this work were used in the stylization of synthetic and real videos, something close to impossible for an artist to produce in real life.
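
As a generic example of the procedural-texture ingredient (not the dissertation's actual technique), the sketch below builds a simple 2D sand texture by perturbing a base grain color with softened per-pixel noise, the usual starting point for procedural granular textures.

```python
# Minimal sketch (generic, not the dissertation's method): a 2D procedural
# sand texture from a base grain color plus slightly softened noise.
import numpy as np

def sand_texture(h: int, w: int, base=(0.86, 0.74, 0.48), grain=0.12, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, grain, size=(h, w, 1))      # per-pixel grain
    # Average each pixel's noise with a shifted copy to soften it slightly.
    soft = 0.5 * (noise + np.roll(noise, 1, axis=1))
    tex = np.clip(np.array(base) + soft, 0.0, 1.0)
    return (tex * 255).astype(np.uint8)                  # H x W x 3 RGB image

tex = sand_texture(256, 256)
print(tex.shape, tex.dtype)   # (256, 256, 3) uint8
```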

Relevance:

10.00%

Publisher:

Abstract:

Estuaries are important zones for investigating present-day morphodynamics and the depositional facies of recent geological history. They are important receptors of coastal-zone sediments, where evolutionary processes occur quickly. They are also attractive settings for the development of human activities which, carried out in a disordered way, interfere with the active processes in the sedimentary balance of coastal areas. Among human interventions, the alteration of mangrove depositional environments in tropical estuarine areas deserves attention, since its implications for the estuarine environment and the adjacent coast are still far from known. Given the interest of the sedimentological component for understanding the processes linked to the evolution of estuarine and adjacent coastal environments, this work aimed at understanding the coastal morphodynamic phenomena in the region under the estuarine influence of the Curimataú River (RN). The implications of the alteration of the mangrove depositional environment by human activity were also evaluated in this morphodynamic context. The Curimataú estuary, located in the southern portion of the eastern coast of Rio Grande do Norte, has in recent decades been subject to the overwhelming occupation of mangrove areas by shrimp farms, implanted with expectations of development in the short to medium term. On the other hand, the estuary and its region of coastal influence lack sufficient information to support more effective planning and reorganization of the surrounding activities. This work was thus intended to contribute to the sustainable use of the coastal resources of this region. A series of studies using orbital and acoustic remote-sensing data, as well as sediment sampling, was carried out in the estuary channel. The results obtained from the interpretation of bathymetric maps, echograms and sediment-distribution maps made it possible to characterize the estuary on morpho-sedimentary criteria. The estuarine tidal flat was subdivided into environments of intertidal mangroves, supratidal mangroves and apicuns, based on the integration of optical-sensor and radar data followed by field control. The adjacent coast influenced by the Curimataú estuary was segmented according to its geomorphological characteristics, and each segment had an observation point for beach morphodynamics during the period from January 2001 to February 2002. Every month, beach profiles, sediment collection in the beach zones and measurement of hydrodynamic parameters were carried out. The observations of the tidal environment showed that the area under the estuarine influence of the Curimataú is beginning to suffer negative sedimentary rates, and erosive processes are already observed on some beaches. The granulometric characteristics of the beach sediments tend toward an increase in fine sand during the erosive periods. The destruction of the mangrove depositional environments of the Curimataú estuary for the construction of shrimp farms may be reducing the tidal prism of the estuary, amplifying the effects of the local sea-level rise through a smaller supply of sediment to the adjacent coast. Besides this, the possibility of silting of the tidal channel along the margins of the destroyed mangrove areas was verified, where very high sedimentation rates of fine material were estimated for the case in which these areas were preserved.