969 results for In-Shader Rendering


Relevance:

100.00%

Publisher:

Abstract:

One of the major challenges facing a present-day game development company is the removal of bugs from the complex virtual environments it produces. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution of the objects present in a sequence of bug-free frames. This is done by converting the positions that the pixels take over time into their equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); the algorithm is nevertheless able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of the object. We applied our framework to the publicly available game RacingGame developed for Microsoft XNA. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
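The consistency test at the core of this approach lends itself to a compact sketch. The following Python snippet is a minimal sketch rather than the authors' implementation: it assumes the labelled reference point cloud (positions plus colours reconstructed from the bug-free frames) and the samples unprojected from a new frame are already available as NumPy arrays, and that samples are expressed in each object's local frame so that relocated objects can still be compared.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_reference(ref_points, ref_colors):
    """Index the labelled point cloud built from the bug-free frames:
    ref_points is (N, 3) object-space positions, ref_colors is (N, 3) RGB."""
    return cKDTree(ref_points), np.asarray(ref_colors, dtype=float)

def frame_inconsistency(tree, ref_colors, frame_points, frame_colors, max_dist=0.05):
    """Score the samples unprojected from a new frame: each sample is compared
    against the colour of the nearest reference surface point; samples with no
    reference point nearby are ignored rather than penalized."""
    dist, idx = tree.query(frame_points)
    near = dist < max_dist
    diff = np.linalg.norm(np.asarray(frame_colors, float)[near] - ref_colors[idx[near]], axis=1)
    return (diff.mean() if diff.size else 0.0), diff
```

A frame (or a localized region of it) whose score exceeds a threshold calibrated on the bug-free sequence would then be flagged as a potential rendering artifact.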

Relevance:

100.00%

Publisher:

Abstract:

We outline a method for the registration of images of cross sections using the concepts of the Generalized Hough Transform (GHT). The approach may be useful in situations where automation is a concern. To overcome the known noise problems of the traditional GHT, we implemented a slightly modified version of the basic algorithm. The modification consists of eliminating points of no interest before the accumulation step of the algorithm is applied. This procedure minimizes the number of accumulation points while reducing the probability of spurious peaks appearing. We also apply image warping techniques to interpolate images between cross sections, which is needed when the distance between sampled sections is too large. We then suggest that the GHT registration step can help automate the interpolation by simplifying the correspondence between image points. Some results are shown.
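To make the modification concrete, here is a minimal translation-only GHT sketch in Python. It is an illustration under stated assumptions, not the paper's algorithm: edge points and their gradient (magnitude, angle) pairs are assumed to be precomputed (e.g., with a Sobel filter), and `min_mag` is a hypothetical threshold standing in for the paper's criterion for "points of no interest".

```python
import numpy as np

def ght_translation(ref_edges, ref_grads, tgt_edges, tgt_grads, acc_shape,
                    n_bins=36, min_mag=20.0):
    """Translation-only Generalized Hough Transform with pre-filtering.
    *_edges are (y, x) edge points, *_grads are (magnitude, angle) pairs.
    Points with gradient magnitude below min_mag are discarded *before*
    accumulation, keeping the accumulator sparse and suppressing spurious peaks."""
    def bin_of(angle):
        return int(round(angle / (2 * np.pi) * n_bins)) % n_bins

    # R-table: displacements to a reference point, indexed by gradient-orientation bin.
    kept = [p for p, (m, _) in zip(ref_edges, ref_grads) if m >= min_mag]
    centre = np.mean(kept, axis=0)
    table = [[] for _ in range(n_bins)]
    for p, (mag, ang) in zip(ref_edges, ref_grads):
        if mag >= min_mag:
            table[bin_of(ang)].append(centre - np.asarray(p))

    acc = np.zeros(acc_shape, dtype=np.int32)
    for p, (mag, ang) in zip(tgt_edges, tgt_grads):
        if mag < min_mag:          # eliminate points of no interest before voting
            continue
        for d in table[bin_of(ang)]:
            y, x = np.round(np.asarray(p) + d).astype(int)
            if 0 <= y < acc_shape[0] and 0 <= x < acc_shape[1]:
                acc[y, x] += 1
    peak = np.unravel_index(acc.argmax(), acc_shape)
    return peak, acc               # peak estimates the reference-point location
```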

Relevance:

100.00%

Publisher:

Abstract:

In recent years, the well-known ray tracing algorithm gained new popularity with the introduction of interactive ray tracing methods. Its high modularity and ability to produce highly realistic images make ray tracing an attractive alternative to raster graphics hardware. Interactive ray tracing has also proved its potential in the field of Mixed Reality rendering and provides novel methods for the seamless integration of real and virtual content. Actor insertion methods, a subdomain of Mixed Reality closely related to virtual television studio techniques, can use ray tracing to achieve high output quality together with appropriate visual cues, such as shadows and reflections, at interactive frame rates. In this paper, we show how interactive ray tracing techniques can provide new ways of implementing virtual studio applications.

Relevance:

90.00%

Publisher:

Abstract:

This article describes the discovery and development of the first highly selective small-molecule antagonist of the muscarinic acetylcholine receptor subtype 1 (mAChR1 or M1). An M1 functional, cell-based, calcium-mobilization assay identified three distinct chemical series with initial selectivity for M1 versus M4. An iterative parallel synthesis approach was employed to optimize all three series in parallel, which led to the development of novel microwave-assisted chemistry and provided important take-home lessons for probe development projects. Ultimately, this effort produced VU0255035, a potent (IC50 = 130 nM) and selective (>75-fold vs. M2-M5 and >10 µM vs. a panel of 75 GPCRs, ion channels and transporters) small-molecule M1 antagonist. Further profiling demonstrated that VU0255035 was centrally penetrant (brain AUC/plasma AUC ratio of 0.48) and active in vivo, rendering it acceptable as both an in vitro and in vivo MLSCN/MLPCN probe molecule for studying and dissecting M1 function.

Relevance:

90.00%

Publisher:

Abstract:

The synthesis of so-called photorealistic images requires numerically evaluating how light and matter interact physically, which, despite the impressive and ever-increasing computing power available to us today, is still far from being a trivial task for our computers. This is largely due to the way we represent objects: in order to reproduce the subtle interactions that lead to the perception of detail, phenomenal amounts of geometry must be modelled. At render time, this complexity inexorably leads to heavy input/output requests which, coupled with the evaluation of complex filtering operators, make the computation times needed to produce flawless images completely unreasonable. To overcome these limitations under current constraints, a multiscale representation of matter must be derived. In this thesis, we build such a representation for matter whose interface is a displaced surface, a configuration that is usually built from elevation maps in computer graphics. We derive our representation within the framework of microfacet theory (originally designed to model the reflectance of rough surfaces), which we first present and then extend in two steps. First, we make the theory applicable across several observation scales by generalizing it to non-centred microfacet statistics. Second, we derive an inversion procedure capable of reconstructing microfacet statistics from the reflectance responses of an arbitrary material in retroreflective configurations. We show how this augmented theory can be exploited to derive a general and efficient operator for the approximate resampling of elevation maps that (a) preserves the anisotropy of light transport at any resolution, (b) can be applied before rendering and stored in MIP maps to drastically reduce the number of input/output requests, and (c) considerably simplifies per-pixel filtering operations, all of which leads to shorter rendering times. To validate and demonstrate the efficiency of our operator, we synthesize antialiased photorealistic images and compare them to reference images. In addition, we provide a complete C++ implementation throughout the dissertation to facilitate the reproduction of the results. We conclude with a discussion of the limitations of our approach and of the remaining obstacles to deriving an even more general multiscale representation of matter.
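Although the thesis ships its own C++ implementation, the flavour of such a down-sampling operator can be sketched in a few lines of Python. The snippet below follows the general LEAN/LEADR-style idea of storing first and second moments of microfacet slopes in MIP levels; it is not the thesis's exact operator, and the heightmap, texel size, and level count are assumed inputs.

```python
import numpy as np

def slope_moment_mips(height, texel_size=1.0, levels=4):
    """Build a MIP pyramid of microfacet slope statistics from a heightmap.
    Each texel stores (E[sx], E[sy], E[sx^2], E[sy^2], E[sx*sy]); the mean slopes
    keep the average normal, while the second moments carry the anisotropic
    roughness that fine-scale relief contributes at coarser resolutions."""
    gy, gx = np.gradient(height.astype(float), texel_size)
    stats = np.stack([gx, gy, gx * gx, gy * gy, gx * gy], axis=-1)
    pyramid = [stats]
    for _ in range(levels):
        s = pyramid[-1]
        h, w = (s.shape[0] // 2) * 2, (s.shape[1] // 2) * 2
        s = s[:h, :w].reshape(h // 2, 2, w // 2, 2, 5).mean(axis=(1, 3))  # 2x2 box filter
        pyramid.append(s)
    return pyramid  # at shading time: var_x = E[sx^2] - E[sx]^2, etc., feeds the NDF
```

Because the stored quantities are plain averages, coarser levels can be filtered linearly like any MIP map, which is what makes pre-filtering before rendering possible.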

Relevance:

90.00%

Publisher:

Abstract:

There is growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems, due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within them. Forests are an example of a natural scenario needed in games and simulation programs. Forests are difficult to render because of the huge number of geometric entities and the large amount of detail to be represented. Moreover, the interactions between the objects (grass, leaves) and external forces such as wind are complex to model. In this paper we concentrate on rendering falling leaves at low cost. We present a technique that exploits graphics hardware in order to render thousands of leaves with different falling paths in real time and with low memory requirements.
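The low memory footprint hinges on the paths being procedural: a leaf's position can be a pure function of a per-leaf seed and the current time, so no per-frame state has to be stored or updated. The sketch below illustrates that idea on the CPU in Python, with invented parameter ranges; the paper evaluates the equivalent per-instance computation on graphics hardware.

```python
import numpy as np

def leaf_position(seed, t, origin, fall_speed=1.2):
    """Stateless falling-leaf path: position is a pure function of (seed, time),
    so thousands of instances need only a seed each instead of stored trajectories.
    Parameter ranges are invented for illustration."""
    rng = np.random.default_rng(seed)
    radius, ang_vel, phase, sway = rng.uniform([0.2, 1.0, 0.0, 0.05],
                                               [0.8, 3.0, 2 * np.pi, 0.30])
    x = origin[0] + radius * np.cos(ang_vel * t + phase)      # spiral drift
    z = origin[2] + radius * np.sin(ang_vel * t + phase)
    y = origin[1] - fall_speed * t + sway * np.sin(4.0 * t)   # descent plus flutter
    return np.array([x, y, z])

# e.g. positions of 10,000 leaves at t = 2.5 s:
# [leaf_position(i, 2.5, origin=(0.0, 30.0, 0.0)) for i in range(10_000)]
```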

Relevance:

80.00%

Publisher:

Abstract:

Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
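One way to picture the rendering side of this idea is as a budgeting problem: each progressive refinement pass distributes its rays over image regions in proportion to their computed importance. The Python sketch below shows only that allocation step, with a hypothetical `floor_frac` parameter guaranteeing minimal coverage everywhere; the fuzzy-logic importance model itself is not reproduced here.

```python
import numpy as np

def allocate_samples(importance, total_samples, floor_frac=0.10):
    """Split a refinement pass's sample budget over image regions in proportion to
    their visual importance, while reserving a small uniform floor so that
    low-importance regions are still refined coarsely."""
    imp = np.asarray(importance, dtype=float)
    floor = int(total_samples * floor_frac) // imp.size          # uniform baseline per region
    remaining = total_samples - floor * imp.size
    weighted = np.floor(imp / imp.sum() * remaining).astype(int) # importance-proportional share
    return weighted + floor                                      # samples to spend per region

# e.g. allocate_samples([0.9, 0.5, 0.1, 0.05], total_samples=10_000)
```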

Relevance:

80.00%

Publisher:

Abstract:

In binary vectors, the antibiotic resistance gene used for selection of transformed plant cells is also usually expressed in the transforming Agrobacterium cells. This expression gives the bacterium antibiotic resistance, an unnecessary advantage on selective medium containing the antibiotic. Insertion of a castor bean catalase-1 (CAT-1) gene intron or a Parasponia andersonii haemoglobin gene intron into the coding region of the selectable marker gene, hph, completely abolished the expression of the gene in Agrobacterium, rendering it susceptible to hygromycin B. Use of these modified binary vectors minimized the overgrowth of Agrobacterium during plant transformation. Both of the introns were correctly spliced in plant cells and significantly enhanced hph gene expression in transformed rice tissue. The presence of these introns in the hph coding sequence not only maintained the selection efficiency of the hph gene, but with the CAT-1 intron also substantially increased the frequency of rice transformation. Transgenic lines with an intron-hph gene generally contained fewer gene copies and produced substantially more mRNA of the predicted size. Our results also indicate that transgenic plants with many copies of the transgene were more likely to show gene silencing than plants with 1-3 copies.

Relevance:

80.00%

Publisher:

Abstract:

Acid whey has become a major concern, especially in the dairy industry manufacturing Greek yoghurt. Proper disposal of acid whey is essential, as it not only increases the BOD of water but also increases acidity when disposed of in landfill, rendering soil barren and unsuitable for cultivation. Effluent (acid-whey) treatment increases the cost of production, and the vast quantities of acid whey produced by the dairy industry make the treatment and safe disposal of effluent very difficult. Hence an economical way to handle this problem is very important. Biogenic glycine betaine and trehalose have many applications in the food and confectionery industries, medicine, the bioprocess industry, agriculture, genetic engineering, and animal feeds, among others, hence their production is of industrial importance. Here we used the extreme, obligate halophile Actinopolyspora halophila (MTCC 263) for the fermentative production of glycine betaine and trehalose from acid whey. Maximum yields were obtained by implementing a sequential media optimization process, identifying and adding rate-limiting enzyme cofactors via a bioinformatics approach, and manipulating the nitrogen substrate supply. The implications of using glycine as a precursor were also investigated. The core factors that affected production were identified and then optimized using an orthogonal array design followed by response surface methodology. The maximum production achieved after complete optimization was 9.07 ± 0.25 g/L of glycine betaine and 2.49 ± 0.14 g/L of trehalose.

Relevance:

80.00%

Publisher:

Abstract:

Completed under a joint supervision agreement (cotutelle) with the Université Bordeaux 1 (France).

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this research is to analyse how information professionals exploit the potential of Web 2.0 technologies for knowledge management in libraries, especially university libraries, where the use of these technologies proves extremely important. To frame the study within the application of Web 2.0 technologies to library services, we first analysed the impact that technological development has had on society and the globalization of knowledge brought about by social software. To complement the study and examine the management of one of these social tools, the blog, a qualitative method was used, applying a grid for assessing blog quality developed by Luísa Alvim. The results of the blog analysis were then examined against the Web 2.0 principles advocated by Maness. We conclude that the potential of blogs is not fully exploited by information professionals, not even in university libraries. This study aims to analyse the current situation, but also to be a starting point for those responsible for libraries, especially university libraries, to rethink the concept and added value of Web 2.0, so that blogs are managed strategically for the benefit of users, information professionals, and the library service itself.

Relevance:

80.00%

Publisher:

Abstract:

We revisit the visibility problem, i.e., determining the set of primitives potentially visible in geometry data represented by a data structure such as a mesh of polygons or triangles, and propose a solution for speeding up three-dimensional visualization in interactive applications. We introduce a lean structure, in the sense of data abstraction and reduction, which can be used for online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when it is not worthwhile, or even possible, to keep all the polygons of the scene in memory, since doing so increases rendering time. In these cases, given a viewing position and direction, the main objective is to determine and load the minimum number of primitives (polygons) of the scene, in order to accelerate the rendering step. For this purpose, our algorithm culls primitives using a hybrid paradigm based on three known techniques. The scene is divided into a grid of cells, each cell is associated with the primitives that belong to it, and finally the set of potentially visible primitives is determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure because of its adaptivity and the ease of its algebraic calculations. The results show a substantial improvement over the traditional methods when they are applied separately. The method introduced in this work can be used on devices with little or no dedicated graphics processing power, and can also be used to view data over the Internet, for example in virtual museum applications.
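The culling stage can be illustrated with a simplified sketch. The Python class below uses a uniform grid and a crude view-cone test in place of the Ja1 triangulation and the hybrid three-technique test described in the work, so it should be read as an illustration of cell-based potentially-visible-set gathering, not as the proposed method.

```python
import numpy as np
from collections import defaultdict

class GridPVS:
    """Cell-based potentially-visible-set gathering on a uniform grid.
    Primitives are binned into cells once; at render time only the cells that
    pass a crude view-cone test contribute their primitives."""

    def __init__(self, cell_size):
        self.cell_size = float(cell_size)
        self.cells = defaultdict(list)

    def insert(self, prim_id, centroid):
        key = tuple((np.asarray(centroid, float) // self.cell_size).astype(int))
        self.cells[key].append(prim_id)

    def potentially_visible(self, eye, view_dir, max_dist, cone_cos=0.5):
        eye = np.asarray(eye, float)
        view_dir = np.asarray(view_dir, float)
        view_dir /= np.linalg.norm(view_dir)
        visible = []
        for key, prims in self.cells.items():
            centre = (np.asarray(key) + 0.5) * self.cell_size
            to_cell = centre - eye
            dist = np.linalg.norm(to_cell)
            if dist <= max_dist and (dist < self.cell_size or
                                     to_cell @ view_dir / dist > cone_cos):
                visible.extend(prims)          # cell centre lies inside the view cone
        return visible
```

Only the primitives returned by `potentially_visible` would be loaded and sent to the renderer, which is what keeps memory use and transfer time low for large scenes.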

Relevance:

80.00%

Publisher:

Abstract:

The visualization of three-dimensional (3D) images is increasingly being used in medicine, helping physicians diagnose diseases. Advances in the scanners used to acquire these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions and therefore much larger files. Currently, rendering these images is computationally expensive and demands a high-end computer. Direct remote access to these images through the Internet is also inefficient, since all images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyses a solution for the remote rendering of 3D medical images, called Remote Rendering (RR3D). In RR3D, the whole rendering process is performed on a server or a cluster of servers with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotation, zoom, etc. The solution was developed using web services written in Java and an architecture based on the scientific visualization package ParaView, the ParaViewWeb framework, and the PACS server DCM4CHEE. The solution was tested in two scenarios, in which the rendering process was performed by a server with graphics hardware (GPUs) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare our solution with other medical visualization applications, a third scenario was used in which the rendering process was done locally. In all three scenarios, the solution was tested at different network speeds. The solution satisfactorily solved the problem of the delay in transferring DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.

Relevance:

80.00%

Publisher:

Abstract:

Background: The use of all by-products of bovine slaughter is of high economic importance for the industries of products of animal origin. Among these by-products, fat plays an important role, since fat rendering may generate several different products, such as protein material that may be used in the manufacture of meat products. However, in spite of the importance that the use of all by-products has for the economic balance of the industry, there are no reports on their use in Brazil, or studies that supply data on local microbiological and physical-chemical standards for this protein. Thus, the objective of this study was to evaluate the microbiological and physical-chemical characteristics of protein material obtained from fat rendering, as well as to provide support for companies to use fat rendering to generate protein material, adding value to industrialized meat products.

Materials, Methods & Results: The experimental production of edible protein obtained from fat rendering was conducted in a slaughterhouse under the supervision of the Brazilian Ministry of Agriculture, Livestock and Food Supply. Protein material was obtained in a continuous, humid-heat system at high temperatures. Fat scraps containing protein were ground, cooked at high temperature (85 degrees C), and placed in a three-phase decanter centrifuge. After centrifugation, the protein material was ground again and packed. Samples were collected from 15 batches of protein material, and the following microbiological analyses were carried out: counts of aerobic mesophilic and psychrotrophic microorganisms, coliforms at 35 degrees C, Escherichia coli, sulfite-reducing Clostridium, and Staphylococcus aureus, besides presence or absence of Salmonella and Listeria monocytogenes. The following physical-chemical analyses were also carried out: protein, total lipid, moisture, ash, carbohydrate, and energy content. Mean counts of mesophiles, psychrotrophs, and coliforms at 35 degrees C were 4.17, 3.69 and 1.87 log CFU/g, respectively. Levels of protein, total lipids, moisture, ash and carbohydrates were 27.50%, 7.83%, 63.88%, 0.24%, and 0.55%, respectively, and energy content was 182.63 kcal/100 g.

Discussion: The results of the microbiological analyses demonstrated that, although counts were low, the final product was contaminated. Contamination during the second grinding procedure may explain these bacterial counts. In addition, the temperature used for fat fusion was not enough to eliminate thermoduric microorganisms. However, even with the presence of indicator microorganisms in the samples, none was contaminated by E. coli, sulfite-reducing Clostridium, S. aureus, Salmonella or L. monocytogenes. The physical-chemical analyses showed that the product had adequate nutritional quality. Based on these results, it was possible to conclude that the protein material obtained by fat rendering showed characteristics that enable its use as raw material for processed meat products. Furthermore, the present study is the first to present scientific results on edible by-products obtained from fat rendering, supplying important information for slaughterhouses and meat-processing plants. The study also produced relevant data on the innocuousness of the product, which may be used to guide the decision-making of health inspectors.

Relevance:

80.00%

Publisher:

Abstract:

Approaching the world of the fairy tale as an adult, one soon realizes that things are not what they once seemed during story time in bed. Something that once appeared so innocent and simple can become rather complex when digging into its origin. A kiss, for example, can mean something else entirely. I can clearly remember my sister, who is ten years older than I am, telling me that the fairy tales I was told had a mysterious hidden meaning I could not understand. I was probably 9 or 10 when she told me that the story of Sleeping Beauty, which I used to love so much in Disney's rendering, was nothing more than the story of an adolescent girl, with all the steps needed to become a woman, the bleeding of menstruation and the sexual awakening, even though she did not really put it in these terms. This shocking news troubled me for a while, so much so that I haven't watched that movie since. But in reality it was not fear that my sister had implanted in me: it was curiosity, the feeling that I was missing something terribly important behind the words and images. It was not until last year, during my semester abroad in Germany, where I had the chance to take a very interesting English literature seminar, that I fully understood what I had been looking for all these years. Thanks to what I learned from the work of Bruno Bettelheim, Jack Zipes, Vladimir Propp, and many other authors who wrote extensively about the subject, I feel I finally have the right tools to really get to know this fairy tale. But what I also know now is that the message behind fairy tales is not to be searched for in only one version: on the contrary, since they come from oral traditions and their form was slowly shaped by centuries of retelling and recounting, the more one digs, the more complete the understanding of the tale will be. I will therefore look for Sleeping Beauty's hidden meaning by asking why it has stuck so consistently throughout time. To achieve this goal, I have organized my analysis in three chapters: in the first chapter, I will analyze the first known literary version of the tale, the French Perceforest, and then compare it with the following Italian version, Basile's Sun, Moon, and Talia; in the second chapter, I will focus on the most famous and by now classical literary versions of Sleeping Beauty, La Belle Au Bois Dormant, written by the Frenchman Perrault, and the German Dornröschen, recorded by the Brothers Grimm; finally, in the last chapter, I will analyze Almodovar's film Talk to Her as a modern rewriting of this tale, which, on closer inspection, appears closely related to the earliest version of the story, Perceforest.