959 results for GRAPHICS
Abstract:
A long development time is needed from the design to the implementation of an AUV. During the first steps, simulation plays an important role, since it allows preliminary versions of the control system to be developed and integrated. Once the robot is ready, the control systems are implemented, tuned and tested. The use of a real-time simulator can help to close the gap between off-line simulation and real testing on the implemented robot. When properly interfaced with the robot hardware, a real-time graphical simulation in a "hardware in the loop" configuration allows the implemented control system to be tested while running on the actual robot hardware, drastically reducing development time. This paper surveys the field of graphical simulators used for AUV development and proposes a classification. It also presents NEPTUNE, a multi-vehicle, real-time, graphical simulator based on OpenGL that allows hardware-in-the-loop simulations.
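The hardware-in-the-loop idea described above can be sketched as a fixed-timestep loop in which simulated vehicle dynamics and control code exchange data in real time. The following minimal Python sketch uses a hypothetical one-dimensional surge model and a proportional controller; none of the dynamics or gains come from NEPTUNE itself:

```python
import time

def hil_step(state, thruster_cmd, dt):
    """One integration step of a toy 1-D AUV surge model (hypothetical
    dynamics, for illustration only): velocity responds to thrust and
    is damped by drag."""
    pos, vel = state
    vel += (thruster_cmd - 0.5 * vel) * dt
    pos += vel * dt
    return pos, vel

def run_realtime(controller, duration=1.0, dt=0.05):
    """Fixed-timestep hardware-in-the-loop style loop: the controller
    (which could run on the real robot hardware) sees the simulated
    state each tick, and sleeping out the remainder of each slot keeps
    simulated time locked to wall-clock time."""
    state, t = (0.0, 0.0), 0.0
    while t < duration:
        tick = time.perf_counter()
        cmd = controller(state)            # control side of the loop
        state = hil_step(state, cmd, dt)   # plant side of the loop
        t += dt
        time.sleep(max(0.0, dt - (time.perf_counter() - tick)))
    return state

# A proportional controller steering the vehicle toward x = 1:
final_pos, final_vel = run_realtime(lambda s: 2.0 * (1.0 - s[0]))
```

In a real setup the `controller` call would be replaced by I/O with the robot's actual control hardware, which is the point of the hardware-in-the-loop configuration.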
Abstract:
In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources, but also light interreflections. Algorithms of this kind produce very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, in which we have taken advantage of the hierarchical nature of such an algorithm.
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one and, under some constraints, has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, obtains closed forms and bounds for their variances, and also bounds the expected value of the mean square error (MSE). Some of the results obtained are also presented.
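The shooting method analysed above can be illustrated with a toy discrete random-walk radiosity estimator. The scene below (three patches with hand-picked form factors and reflectances) is our own invention, not from the paper; each walk carries a quantum of power from an emitter and deposits it at every patch where power leaves the surface:

```python
import random

# Toy 3-patch open scene (our own numbers, not from the paper);
# rows of the form-factor matrix sum to at most 1, the remainder
# being light that escapes the scene.
E = [1.0, 0.0, 0.0]            # emitted power per patch
rho = [0.0, 0.8, 0.5]          # reflectance per patch
F = [[0.0, 0.6, 0.4],
     [0.3, 0.0, 0.3],
     [0.4, 0.3, 0.0]]          # form factors F[i][j]

def shoot(n_paths=20000, seed=1):
    """Discrete random-walk shooting estimator: each path starts at an
    emitter chosen proportionally to emitted power and carries a fixed
    quantum; it deposits the quantum wherever power leaves a patch
    (emission or reflection) and dies on absorption or escape."""
    rng = random.Random(seed)
    quantum = sum(E) / n_paths
    B = [0.0, 0.0, 0.0]
    for _ in range(n_paths):
        i = rng.choices(range(3), weights=E)[0]
        B[i] += quantum                      # emitted power leaves patch i
        while True:
            r, acc, j = rng.random(), 0.0, None
            for k in range(3):
                acc += F[i][k]
                if r < acc:
                    j = k
                    break
            if j is None:                    # escaped the open scene
                break
            if rng.random() >= rho[j]:       # absorbed at patch j
                break
            B[j] += quantum                  # reflected power leaves j
            i = j
    return B

B = shoot()   # close to the analytic fixed point, about [1.0, 0.55, 0.28]
```

The expected deposits satisfy P_j = E_j + rho_j * sum_i P_i F[i][j], so the estimator is unbiased for the outgoing-power fixed point of this toy system.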
Abstract:
In this paper we extend the reuse of paths to shooting from a moving light source. In the classical algorithm, new paths have to be cast from each new position of the light source. We show that all paths can be reused for all positions, obtaining in this way a theoretical maximum speed-up equal to the average length of the shooting paths.
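The claimed bound can be made concrete with a simple ray-count cost model (our assumption, not necessarily the paper's accounting): casting a shooting path of average length ell costs ell rays, while reusing an existing path for another light position costs one connection ray, so the speed-up tends to ell as the number of positions grows:

```python
def reuse_speedup(M, ell):
    """Speed-up from reusing shooting paths across M light-source
    positions, under a simple ray-count cost model (our assumption):
    a fresh path of average length ell costs ell rays; reusing an
    existing path for another position costs one connection ray."""
    cost_no_reuse = M * ell        # recast every path for every position
    cost_reuse = ell + (M - 1)     # cast once, then connect to M-1 positions
    return cost_no_reuse / cost_reuse

print(reuse_speedup(4, 5.0))      # 2.5: modest gain for few positions
print(reuse_speedup(1000, 5.0))   # ~4.98: approaches ell = 5
```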
Abstract:
Creation of a working environment for visualizing three-dimensional models in real time, with two objectives: to provide a graphical interface for interactively visualizing a scene and modifying its elements, and to achieve a design that makes the project highly revisable and reusable in the future, so that it can serve as a platform for testing other projects.
Abstract:
The international Functional Annotation Of the Mammalian Genomes 4 (FANTOM4) research collaboration set out to better understand the transcriptional network that regulates macrophage differentiation and to uncover novel components of the transcriptome, employing a series of high-throughput experiments. The primary and unique technique is cap analysis of gene expression (CAGE), which sequences mRNA 5'-ends with a second-generation sequencer to quantify promoter activities even in the absence of gene annotation. Additional genome-wide experiments complement the setup, including short RNA sequencing, microarray gene expression profiling on large-scale perturbation experiments, and ChIP-chip for epigenetic marks and transcription factors. All the experiments are performed in a differentiation time course of the THP-1 human leukemic cell line. Furthermore, we performed a large-scale mammalian two-hybrid (M2H) assay between transcription factors and monitored their expression profiles across human and mouse tissues with qRT-PCR to address combinatorial effects of regulation by transcription factors. These interdependent data have been analyzed individually and in combination with each other and are published in related but distinct papers. We provide all data together with systematic annotation in an integrated view as a resource for the scientific community (http://fantom.gsc.riken.jp/4/). Additionally, we assembled a rich set of derived analysis results, including published predicted and validated regulatory interactions. Here we introduce the resource and its update after the initial release.
Abstract:
OsteoLaus is a cohort of 1400 women aged 50 to 80 years living in Lausanne, Switzerland. Clinical risk factors for osteoporosis, bone ultrasound of the heel, lumbar spine and hip bone mineral density (BMD), assessment of vertebral fractures by DXA, and microarchitecture evaluation by the Trabecular Bone Score (TBS) will be recorded. TBS is a new parameter obtained by re-analysing a DXA exam, and it is correlated with parameters of microarchitecture; its reproducibility is good. TBS adds diagnostic value to BMD and predicts osteoporotic fracture (partially) independently of BMD. The place of TBS in clinical routine, as a complement to BMD and clinical risk factors, will be evaluated in the OsteoLaus cohort.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions, obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data, sampled at minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as a well-defined scientific predictor if the signal they generate passes the test of being a Markov time: that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is on the other hand fairly simple, yet it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations.
A model for forecasting any economic or financial magnitude can be properly defined with scientific rigour and yet lack any economic value, making it useless from a practical point of view. This is why the project would not be complete without a backtest of the strategies mentioned. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. For this reason we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented from scratch in MATLAB as part of this thesis; no other mathematical or statistical software was used.
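The mean-reverting spread model mentioned above can be sketched as follows: an exact discretization of the Ornstein-Uhlenbeck SDE plus a fixed-threshold entry/exit rule, which qualifies as a Markov time since it is decidable from the path observed so far. Parameters are illustrative, not calibrated to DJIA data:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=7):
    """Exact discretization of the Ornstein-Uhlenbeck SDE
    dX = theta*(mu - X) dt + sigma dW: conditionally on X_t, the next
    value is Gaussian with mean mu + (X_t - mu)*exp(-theta*dt) and a
    closed-form variance, so no Euler error is introduced."""
    rng = random.Random(seed)
    a = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - a * a) / (2.0 * theta))
    x, path = x0, [x0]
    for _ in range(n):
        x = mu + (x - mu) * a + sd * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def band_signal(path, mu, band):
    """Toy market-neutral rule: short the spread (-1) above mu+band,
    long (+1) below mu-band, flat (0) otherwise. Whether the threshold
    has been crossed is decidable from the path observed so far, which
    is what makes the entry/exit signal a Markov time."""
    return [-1 if x > mu + band else (1 if x < mu - band else 0)
            for x in path]

# One simulated year of daily spread observations (illustrative numbers):
spread = simulate_ou(theta=5.0, mu=0.0, sigma=0.3, x0=1.0, dt=1.0 / 252, n=252)
positions = band_signal(spread, mu=0.0, band=0.2)
```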
Abstract:
Exposure to solar ultraviolet (UV) radiation is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors, but individual exposure data remain scarce. While ground UV irradiance is monitored via different techniques, it is difficult to translate such observations into human UV exposure or dose because of confounding factors. A multi-disciplinary collaboration developed a model predicting the dose and distribution of UV exposure on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a simulation tool that estimates the solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by various body locations is computed separately for direct, diffuse and reflected radiation. Dosimetric measurements obtained in field conditions were used to assess the model's performance. The model predicted exposure to solar UV adequately, with a symmetric mean absolute percentage error of 13% and half of the predictions within 17% of the measurements. Using this tool, solar UV exposure patterns were investigated with respect to the relative contributions of the direct, diffuse and reflected radiation. Exposure doses for various body parts and exposure scenarios of a standing individual were assessed using erythemally-weighted UV ground irradiance data measured in 2009 at Payerne, Switzerland, as input. For most anatomical sites, mean daily doses were high (typically 6.2-14.6 Standard Erythemal Doses, SED) and exceeded recommended exposure values. Direct exposure was important during specific periods (e.g. midday during summer) but contributed moderately to the annual dose, ranging from 15 to 24% for vertical and horizontal body parts, respectively. Diffuse irradiation explained about 80% of the cumulative annual exposure dose.
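The separate treatment of direct, diffuse and reflected radiation can be sketched with a textbook isotropic-sky split for a tilted surface element; this is a generic approximation of our own, not the paper's actual model:

```python
import math

def uv_components(direct_normal, diffuse_sky, global_horizontal,
                  albedo, tilt_deg, incidence_deg):
    """Split the irradiance reaching a tilted surface element into the
    three components the model treats separately (isotropic-sky sketch,
    our assumption):
      direct    = beam irradiance x cosine of the incidence angle
      diffuse   = sky-diffuse irradiance x sky view factor
      reflected = ground-reflected irradiance x ground view factor"""
    tilt = math.radians(tilt_deg)
    cos_inc = max(0.0, math.cos(math.radians(incidence_deg)))
    sky_view = (1.0 + math.cos(tilt)) / 2.0      # fraction of sky seen
    ground_view = (1.0 - math.cos(tilt)) / 2.0   # fraction of ground seen
    direct = direct_normal * cos_inc
    diffuse = diffuse_sky * sky_view
    reflected = albedo * global_horizontal * ground_view
    return direct, diffuse, reflected

# A vertical body part (tilt 90 degrees) at 60 degrees solar incidence,
# with illustrative irradiance values:
d, f, r = uv_components(direct_normal=100.0, diffuse_sky=60.0,
                        global_horizontal=120.0, albedo=0.1,
                        tilt_deg=90.0, incidence_deg=60.0)
```

A full body model would evaluate this per triangle of the manikin mesh, adding shadowing tests for the direct term.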
Abstract:
This paper is divided into three main parts. The first describes what this study sets out to show, namely the application of current facial recognition systems to a database of works of art, explaining which methods will be used and why the study is of interest. The second presents in detail the data obtained in the experiment, with images and graphics to aid comprehension. The last part contains the discussion of the results obtained in the analysis and the ensuing conclusions.
Abstract:
Today there are numerous techniques for applying textures to generic 3D objects, but the mechanisms for creating them are, in general, either complex and unintuitive for the artist or inefficient in aspects such as obtaining seamless global texturing. Recently, the invention of polycubes has opened a new range of possibilities for these tasks, and even for others such as animation and subdivision, which are of crucial importance to industries like film and video games. Unfortunately, there are no automatic, editable tools for generating the base polycube model. A polycube is an aggregation of identical cubes in which each cube shares at least one face with another cube; by grouping such cubes, different spatial shapes can be generated. The goal is to develop a tool for the interactive creation and editing of a polycube model from a three-dimensional object, giving the user a degree of freedom and control not found in currently available tools.
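The polycube definition given above is easy to state in code: a set of identical axis-aligned unit cubes in which each cube shares a full face with a neighbour, the whole set being face-connected. A minimal flood-fill check might look like:

```python
from collections import deque

def is_polycube(cubes):
    """Check the definition used above: identical unit cubes on an
    integer lattice, each sharing at least one full face with another,
    with the whole set face-connected (verified by flood fill)."""
    cells = set(cubes)
    if not cells:
        return False
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in steps:
            nbr = (x + dx, y + dy, z + dz)
            if nbr in cells and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == cells

print(is_polycube({(0, 0, 0), (1, 0, 0), (1, 1, 0)}))  # True: L-shaped tromino
print(is_polycube({(0, 0, 0), (1, 1, 0)}))             # False: cubes meet only at an edge
```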
Abstract:
The goal of this final-year project is to develop a procedural facade-editing tool based on an image of a real facade. The application will generate the procedural rules of the facade from data acquired from the model to be represented, such as a photograph. The user of the application will generate the subdivision and repetition rules semi-automatically and interactively, also specifying the insertion of architectural elements (doors, windows), which can be instantiated from a library. Once generated, the rules will be written in the format of the BuildingEngine system so that they integrate fully into the urban modelling process. The project will be developed in Matlab.
Abstract:
We currently live in a world that revolves around new technologies, in which leisure and entertainment are a fundamental pillar, chiefly the film, video-game and virtual-reality industries. One of the problems these industries face is how to create the setting where the story takes place. The goal of this final-year project is to create a tool, integrated into skylineEngine, for creating buildings procedurally, where the user can define the aesthetics of the building by providing its floor plan and the appropriate profiles. What will be implemented is a modelling tool for designers that can create a building from a floor plan and profiles. The project will be built on top of skylineEngine's building-generation module, a city-modelling tool that runs on Houdini 3D, a generic platform for the procedural modelling of objects. Developing this project involves:
• Studying the Houdini 3D development platform and the libraries needed to incorporate Python scripts, as well as Houdini's internal data structures.
• Learning and using the Python programming language.
• Studying the code of the article "Interactive Architectural Modeling with Procedural Extrusions" by Tom Kelly and Peter Wonka, published in ACM Transactions on Graphics (2011).
• Developing algorithms to convert geometry from a face-vertex structure to a half-edge structure, and vice versa.
• Modifying the Java code to accept calls without a user interface and with data structures generated from Python.
• Learning how to use the JPype library to bridge Java into Python.
• Studying skylineEngine and its building-creation libraries.
• Integrating the result into skylineEngine.
• Verifying and tuning the simulation rules and parameters for different buildings.
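One of the steps listed, converting face-vertex geometry to a half-edge structure, can be sketched as follows; this minimal dictionary-based representation, with `next` and `twin` pointers per directed edge, is our own simplification, not the project's actual data structure:

```python
def build_half_edges(faces):
    """Convert a face-vertex mesh (each face a CCW list of vertex
    indices) into a minimal half-edge table: one record per directed
    edge, keyed by (origin, destination), with face, next and twin
    pointers. Boundary half-edges keep twin = None."""
    he = {}
    for f, verts in enumerate(faces):
        n = len(verts)
        for i in range(n):
            a, b = verts[i], verts[(i + 1) % n]
            he[(a, b)] = {"face": f,
                          "next": (b, verts[(i + 2) % n]),
                          "twin": None}
    for (a, b) in he:                 # second pass: link opposite edges
        if (b, a) in he:
            he[(a, b)]["twin"] = (b, a)
    return he

# Two triangles sharing the edge between vertices 1 and 2:
mesh = [[0, 1, 2], [2, 1, 3]]
he = build_half_edges(mesh)
```

Walking `next` pointers circulates a face, and following `twin` crosses to the adjacent face, which is what makes the structure convenient for procedural extrusion algorithms.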
Abstract:
The Institute for Public Security of Catalonia (ISPC), the only state-funded education and research centre for police in Catalonia, Spain, carried out in 2012 a comparative study on gender diversity in police services in the European Union. The study is an update of the research Facts & Figures 2008, carried out by the European Network of Policewomen (ENP), a non-profit organization that works in partnership with colleagues from police and/or law enforcement organizations in its member countries to facilitate positive changes in the position of women in police services. To gather the 2012 data, the ISPC invited EU Member States' police services to cooperate in the study by answering a 10-item questionnaire, the same tool used in 2008 by the ENP. In February 2012, the ISPC sent the questionnaires through the Cepol National Contact Points network. In order to include as many police services as possible in the study, the ENP also helped us gather some of the data. Altogether we received questionnaires from 29 police services corresponding to 17 EU countries. In addition, we used data from open sources about the England and Wales police services and the French National Police. In this document you will find: first, the tool we used to collect the data; second, the answers we gathered, presented per country; and finally, some comparative tables and graphics developed by the ISPC. Countries: Austria, Belgium, Cyprus, Denmark, England and Wales, Estonia, Finland, France, Germany, Italy, Latvia, Lithuania, Luxembourg, Netherlands, Portugal, Romania, Slovenia, Spain, Sweden.
Abstract:
This project consists of developing a 3D demo that uses exclusively procedural graphics, in order to assess their viability in more complex applications such as video games. The application generates a random, explorable terrain with procedurally created vegetation and textures.
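Randomly generated explorable terrain of the kind described is commonly built from lattice noise; the sketch below uses 2D value noise with smoothstep interpolation, a standard procedural technique, though not necessarily the one this project used:

```python
import random

def terrain_heightmap(size, cell, amplitude, seed=42):
    """2-D value noise: random heights on a coarse lattice, smoothly
    interpolated onto a size x size grid. A standard procedural
    terrain technique, chosen here for illustration."""
    rng = random.Random(seed)
    n = size // cell + 2
    lattice = [[rng.random() for _ in range(n)] for _ in range(n)]

    def smooth(t):                     # smoothstep easing for C1 continuity
        return t * t * (3.0 - 2.0 * t)

    grid = []
    for y in range(size):
        gy, ry = divmod(y, cell)
        fy = smooth(ry / cell)
        row = []
        for x in range(size):
            gx, rx = divmod(x, cell)
            fx = smooth(rx / cell)
            h00, h10 = lattice[gy][gx], lattice[gy][gx + 1]
            h01, h11 = lattice[gy + 1][gx], lattice[gy + 1][gx + 1]
            top = h00 + (h10 - h00) * fx       # bilinear interpolation
            bot = h01 + (h11 - h01) * fx
            row.append(amplitude * (top + (bot - top) * fy))
        grid.append(row)
    return grid

heights = terrain_heightmap(size=64, cell=16, amplitude=10.0)
```

Summing several octaves of this noise at different frequencies gives the more natural fractal look usually wanted for game terrain.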