927 results for graphics processor
Abstract:
OsteoLaus is a cohort of 1,400 women aged 50 to 80 years living in Lausanne, Switzerland. Clinical risk factors for osteoporosis, bone ultrasound of the heel, lumbar spine and hip bone mineral density (BMD), assessment of vertebral fracture by DXA, and microarchitecture evaluation by the Trabecular Bone Score (TBS) will be recorded. TBS is a new parameter obtained by re-analysing a DXA exam. TBS correlates with parameters of microarchitecture, and its reproducibility is good. TBS adds diagnostic value to BMD and predicts osteoporotic fracture partially independently of BMD. The role of TBS in clinical routine as a complement to BMD and clinical risk factors will be evaluated in the OsteoLaus cohort.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor yet still lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason we emphasize the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
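A minimal sketch (in Python rather than the thesis's MATLAB, and not the thesis code) of the Ornstein-Uhlenbeck building block behind such a market-neutral strategy: simulate a mean-reverting spread, recover theta, mu and sigma by fitting the exact AR(1) discretization, and derive an illustrative entry signal. All parameter values and thresholds are made up.

import numpy as np

# Ornstein-Uhlenbeck spread: dX = theta*(mu - X) dt + sigma dW.
# Exact discretization: X[t+1] = a*X[t] + mu*(1 - a) + eps, a = exp(-theta*dt),
# eps ~ N(0, sigma^2 * (1 - a^2) / (2*theta)).
rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 2.0, 0.0, 0.3, 1.0 / 252, 5000

a = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = a * x[t] + mu * (1 - a) + noise_sd * rng.standard_normal()

# Calibration: least-squares regression of X[t+1] on X[t], then invert the
# discretization to get the continuous-time parameters back.
slope, intercept = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(slope) / dt
mu_hat = intercept / (1 - slope)
resid_sd = np.std(x[1:] - (slope * x[:-1] + intercept))
sigma_hat = resid_sd * np.sqrt(2 * theta_hat / (1 - slope**2))
print(f"theta={theta_hat:.2f}  mu={mu_hat:.4f}  sigma={sigma_hat:.3f}")

# Illustrative signal: short the spread one equilibrium standard deviation
# above the mean, long one below, flat otherwise.
eq_sd = sigma_hat / np.sqrt(2 * theta_hat)
signal = np.where(x > mu_hat + eq_sd, -1, np.where(x < mu_hat - eq_sd, 1, 0))

In a real pairs-trading backtest, x would be the spread between two co-integrated assets rather than a simulated path, and the calibration would be re-run on a rolling window, which is exactly where the parameter volatility discussed above becomes visible.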
Abstract:
Exposure to solar ultraviolet (UV) radiation is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors, but individual exposure data remain scarce. While ground UV irradiance is monitored via different techniques, it is difficult to translate such observations into human UV exposure or dose because of confounding factors. A multi-disciplinary collaboration developed a model predicting the dose and distribution of UV exposure on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a simulation tool that estimates solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by various body locations is computed for direct, diffuse and reflected radiation separately. Dosimetric measurements obtained in field conditions were used to assess the model performance. The model predicted exposure to solar UV adequately, with a symmetric mean absolute percentage error of 13% and half of the predictions within a 17% range of the measurements. Using this tool, solar UV exposure patterns were investigated with respect to the relative contribution of the direct, diffuse and reflected radiation. Exposure doses for various body parts and exposure scenarios of a standing individual were assessed using erythemally-weighted UV ground irradiance data measured in 2009 at Payerne, Switzerland, as input. For most anatomical sites, mean daily doses were high (typically 6.2-14.6 Standard Erythemal Dose, SED) and exceeded recommended exposure values. Direct exposure was important during specific periods (e.g., midday during summer), but contributed moderately to the annual dose, ranging from 15 to 24% for vertical and horizontal body parts, respectively. Diffuse irradiation explained about 80% of the cumulative annual exposure dose.
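The core geometric step of such a model can be illustrated with a short sketch. This is not the authors' simulation tool: it computes only the direct component on an unoccluded triangle mesh via the Lambert cosine term, whereas the real model also handles diffuse and reflected radiation and self-shadowing. All names and values are illustrative.

import numpy as np

def direct_exposure(vertices, faces, sun_dir, irradiance):
    """Direct solar power received per triangle: area-weighted cosine term.

    vertices   : (n, 3) array of mesh vertex positions (m)
    faces      : (m, 3) integer array of triangle vertex indices
    sun_dir    : unit vector pointing towards the sun
    irradiance : direct irradiance on a surface normal to the beam (W/m^2)

    Simplification: inter-triangle occlusion (self-shadowing) is ignored.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    cos_incidence = np.clip(normals @ sun_dir, 0.0, None)  # back faces get 0
    return irradiance * area * cos_incidence  # watts per triangle

# Toy usage: one upward-facing triangle under a sun 60 degrees above horizon.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
tris = np.array([[0, 1, 2]])
sun = np.array([0.0, -np.cos(np.radians(60)), np.sin(np.radians(60))])
print(direct_exposure(verts, tris, sun, irradiance=25.0))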
Abstract:
This paper is divided into three main parts. The first describes what this study intends to show, namely the application of current facial recognition systems to a database of works of art; it explains which methods will be used and why this study is worth carrying out. The second presents in detail the data obtained in the experiment, with images and graphics to ease comprehension. The last part contains the discussion of the results obtained in the analysis and the subsequent conclusions.
Abstract:
Nowadays there are numerous techniques for applying textures to generic 3D objects, but the mechanisms for creating them are, in general, either complex and unintuitive for the artist, or inefficient in aspects such as obtaining a seamless global texturing. Recently, the invention of polycubes has opened a new spectrum of possibilities for these tasks, and even for others such as animation and subdivision, of crucial importance for industries like film and video games. Unfortunately, there are no automatic, editable tools for generating the base polycube model. A polycube is an aggregation of identical cubes such that each cube shares at least one face with another cube. By grouping these cubes, different spatial shapes can be generated. The objective is to develop a tool for the interactive creation and editing of a polycube model from a three-dimensional object, providing the user with a freedom and control not found in the currently available tools.
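As a rough illustration of the polycube definition given above (a hedged sketch, not part of the project; names are made up): face-connectivity of a set of unit cubes on the integer grid can be checked by flood fill, and the exposed faces are the quads a seamless texture atlas would be built on.

def is_polycube(cells):
    """True if the set of grid cells forms a single face-connected polycube."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    cells = set(cells)
    if not cells:
        return False
    start = next(iter(cells))
    seen, stack = {start}, [start]
    while stack:  # flood fill over face adjacency
        x, y, z = stack.pop()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if n in cells and n not in seen:
                seen.add(n)
                stack.append(n)
    return seen == cells  # a polycube must be one connected component

def boundary_faces(cells):
    """Count exposed unit faces of the polycube."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    cells = set(cells)
    return sum((x + dx, y + dy, z + dz) not in cells
               for x, y, z in cells for dx, dy, dz in offsets)

# An L-shaped tromino of cubes: connected, with 14 exposed unit faces
# (3 cubes x 6 faces, minus the 2 shared faces counted from both sides).
L = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
print(is_polycube(L), boundary_faces(L))  # True 14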
Abstract:
The objective of this final-year project (PFC) is to develop a tool for procedural facade editing starting from an image of a real facade. The application will generate the procedural rules of the facade from data acquired from the model to be represented, such as a photograph. The user of the application will generate, semi-automatically and interactively, the subdivision and repetition rules, also specifying the insertion of architectural elements (doors, windows), which can be instantiated from a library. Once generated, the rules will be written in the format of the BuildingEngine system so as to be fully integrated into the urban modeling process. This project will be developed in Matlab.
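To make the notion of subdivision and repetition rules concrete, here is a small illustrative sketch. It is not BuildingEngine's actual rule format (which the abstract does not specify); the function names and facade dimensions are invented.

def subdivide(total, pattern):
    """Split `total` into regions: fixed sizes are kept, '*' entries share
    the remainder equally (the usual split-rule semantics)."""
    fixed = sum(s for s in pattern if s != "*")
    flexible = [s for s in pattern if s == "*"]
    share = (total - fixed) / len(flexible) if flexible else 0.0
    return [share if s == "*" else float(s) for s in pattern]

def repeat(total, item):
    """Repetition rule: tile as many `item`-sized regions as fit,
    stretching them slightly so they fill `total` exactly."""
    count = max(1, round(total / item))
    return [total / count] * count

# A 12 m wide, 9 m tall facade: a 4 m ground floor, upper floors repeated
# at roughly 2.5 m, and each floor split into wall/window tiles.
floors = [4.0] + repeat(9.0 - 4.0, 2.5)
tiles = subdivide(12.0, [1.0, "*", 1.0, "*", 1.0])
print(floors)  # [4.0, 2.5, 2.5]
print(tiles)   # [1.0, 4.5, 1.0, 4.5, 1.0]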
Abstract:
We currently live in a world where everything revolves around new technologies, and a fundamental pillar is leisure and entertainment. This mainly encompasses the film, video game and virtual reality industries. One of the problems these industries face is how to create the setting where the story takes place. The objective of this final-year project is to create a tool integrated into skylineEngine for creating buildings procedurally, where the user can define the aesthetics of the building by introducing its floor plan and the appropriate profiles. What will be implemented is a modeling tool for designers that can create a building from a floor plan and profiles. This project will be developed on top of the building-generation module of skylineEngine, a city-modeling tool that runs on Houdini 3D, a generic platform for procedural object modeling. The development of this project involves:
• Studying the Houdini 3D development platform and the libraries needed to incorporate Python scripts; studying Houdini's internal data structures.
• Learning and using the Python programming language.
• Studying the code of the article Interactive Architectural Modeling with Procedural Extrusions, by Tom Kelly and Peter Wonka, published in ACM Transactions on Graphics (2011).
• Developing algorithms to convert geometry from a face-vertex structure to a half-edge structure, and vice versa (see the sketch after this list).
• Modifying the Java code to accept calls without a user interface and with data structures generated from Python.
• Learning how the JPype library works to bridge Java into Python.
• Studying skylineEngine and its building-creation libraries.
• Integrating the result into skylineEngine.
• Verifying and tuning the simulation rules and parameters for different buildings.
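A hedged sketch of the face-vertex to half-edge conversion mentioned above (not the project's code, which targets Houdini's internal structures): build one half-edge per directed face edge and pair opposite edges as twins; boundary edges keep twin=None.

from collections import namedtuple

HalfEdge = namedtuple("HalfEdge", "origin face next twin")

def face_vertex_to_half_edge(faces):
    """Build a half-edge table from a face-vertex index list.

    faces: list of vertex-index tuples, counter-clockwise, on a manifold
    mesh. Returns {(src, dst): HalfEdge}; half-edges are keyed by their
    directed vertex pair, so the twin of (a, b) is (b, a) if it exists.
    """
    edges = {}
    for f, poly in enumerate(faces):
        n = len(poly)
        for i in range(n):
            src, dst = poly[i], poly[(i + 1) % n]
            nxt = (poly[(i + 1) % n], poly[(i + 2) % n])  # next edge in face
            edges[(src, dst)] = HalfEdge(src, f, nxt, None)
    # Link twins: the opposite half-edge, if a neighbouring face provides it.
    for (src, dst), he in list(edges.items()):
        twin = (dst, src)
        edges[(src, dst)] = he._replace(twin=twin if twin in edges else None)
    return edges

# Two triangles sharing edge (1, 2): its two half-edges become twins.
mesh = [(0, 1, 2), (2, 1, 3)]
he = face_vertex_to_half_edge(mesh)
print(he[(1, 2)].twin)  # (2, 1)
print(he[(0, 1)].twin)  # None -- boundary edge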
Abstract:
The Institute for Public Security of Catalonia (ISPC), the only state-funded education and research centre for police in Catalonia, Spain, developed in 2012 a comparative study on gender diversity in police services in the European Union. The study is an update of the research Facts & Figures 2008, carried out by the European Network of Policewomen (ENP), a non-profit organization that works in partnership with colleagues from police and/or law enforcement organizations in its member countries to facilitate positive changes in the position of women in police services. To gather the 2012 data, the ISPC invited EU Member States' police services to cooperate in the study by answering a 10-item questionnaire, the same tool used in 2008 by the ENP. In February 2012, the ISPC sent the questionnaires through the Cepol National Contact Points network. In order to include as many police services as possible in the study, the ENP also helped us gather some of the data. Altogether we received questionnaires from 29 police services corresponding to 17 EU countries. In addition, we used data from open sources about the England and Wales police services and the French National Police. In this document you can find: first, the tool we used to collect the data; second, the answers we gathered, presented per country; finally, some comparative tables and graphics developed by the ISPC. Countries: Austria, Belgium, Cyprus, Denmark, England and Wales, Estonia, Finland, France, Germany, Italy, Latvia, Lithuania, Luxembourg, Netherlands, Portugal, Romania, Slovenia, Spain, Sweden.
Abstract:
This project consists of the development of a 3D demo using exclusively procedural graphics, in order to evaluate their viability in more complex applications such as video games. The application generates a random, explorable terrain with procedurally created vegetation and textures.
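A common way to generate such a random explorable terrain is fractal value noise; the following sketch (an assumed approach, not the demo's actual code) sums several octaves of bilinearly interpolated lattice noise into a heightmap.

import numpy as np

def value_noise(size, cell, rng):
    """One octave of value noise: random values on a coarse lattice,
    smoothly interpolated up to a size x size grid."""
    lattice = rng.random((size // cell + 2, size // cell + 2))
    y, x = np.mgrid[0:size, 0:size] / cell
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = x - x0, y - y0
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
    top = (1 - fx) * lattice[y0, x0] + fx * lattice[y0, x0 + 1]
    bot = (1 - fx) * lattice[y0 + 1, x0] + fx * lattice[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot

def terrain(size=256, octaves=5, seed=7):
    """Fractal sum: each octave halves the feature size and the amplitude."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amp, cell = 1.0, size // 2
    for _ in range(octaves):
        height += amp * value_noise(size, cell, rng)
        amp, cell = amp / 2, max(1, cell // 2)
    return height / height.max()

h = terrain()
print(h.shape, float(h.min()), float(h.max()))  # (256, 256) ... 1.0

The same heightmap can then drive texture selection (e.g., rock above a threshold, grass below) and vegetation placement, which is the spirit of the fully procedural scene described above.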
Abstract:
Critical real-time embedded (CRTE) systems require safe and tight worst-case execution time (WCET) estimations to provide the required safety levels and keep costs low. However, CRTE systems need increasing performance to satisfy the demands of existing and new features. Such performance can only be achieved by means of more aggressive hardware architectures, which are much harder to analyze from a WCET perspective. The main features considered include cache memories and multi-core processors. Thus, although such features provide higher performance, current WCET analysis methods are unable to provide tight WCET estimations for them. In fact, WCET estimations become worse than for simpler and less powerful hardware. The main reason is that hardware behavior is deterministic but unknown; therefore, the worst-case behavior must be assumed most of the time, leading to large WCET estimations. The purpose of this project is to develop new hardware designs, together with WCET analysis tools, able to provide tight and safe WCET estimations. In order to do so, those pieces of hardware whose behavior is not easily analyzable, due to a lack of accurate information during WCET analysis, will be enhanced to produce a probabilistically analyzable behavior. Thus, even if the worst-case behavior cannot be removed, its probability can be bounded, and hence a safe and tight WCET can be provided for a particular safety level, in line with the safety levels of the remaining components of the system. During the first year of the project we developed most of the evaluation infrastructure as well as the hardware techniques to analyze cache memories. During the second year those techniques were evaluated, and new purely-software techniques were developed.
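A toy illustration of why randomized hardware enables this kind of analysis (not the project's infrastructure; the trace, cache size and sample count are made up): with random replacement, the miss count of a run becomes a random variable, so its exceedance probability can be sampled and bounded rather than assuming the worst case deterministically.

import random

def misses_random_replacement(trace, num_lines, rng):
    """Count misses of one run through a fully-associative cache with
    random replacement. Different runs of the same trace give different
    miss counts -- the property that makes the timing analysable
    probabilistically."""
    cache, misses = [], 0
    for addr in trace:
        if addr in cache:
            continue
        misses += 1
        if len(cache) < num_lines:
            cache.append(addr)
        else:
            cache[rng.randrange(num_lines)] = addr  # evict a random line
    return misses

rng = random.Random(42)
trace = [i % 6 for i in range(600)]  # cyclic access to 6 memory blocks
samples = [misses_random_replacement(trace, num_lines=4, rng=rng)
           for _ in range(10_000)]

# Empirical exceedance: P(misses > m). A pWCET curve upper-bounds this
# tail so that a bound can be picked for a target safety level.
for m in sorted(set(samples))[-5:]:
    p = sum(s > m for s in samples) / len(samples)
    print(f"P(misses > {m}) = {p:.4f}")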
Abstract:
A traditional photonic-force microscope (PFM) produces huge sets of data, which require tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us interactively guide the bead inside living cells and collect information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
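The classical offline analysis that the real-time system is validated against can be sketched as follows: for a bead in a harmonic trap, equipartition gives <x x^T> = kB*T * K^{-1}, so the full 3D stiffness matrix, cross-terms included, is kB*T times the inverse of the position covariance. This is a generic sketch, not the paper's analog algorithm; the trap values are merely plausible orders of magnitude.

import numpy as np

kB_T = 4.11e-21  # thermal energy at ~298 K, in joules

def stiffness_matrix(positions):
    """Equipartition estimate of the 3D trap stiffness matrix.

    positions: (n, 3) bead positions in metres. For a harmonic trap,
    K = kB*T * inv(cov(positions)) -- off-diagonal cross-terms included.
    """
    cov = np.cov(positions, rowvar=False)
    return kB_T * np.linalg.inv(cov)

# Synthetic check: draw positions from a known anisotropic trap and
# verify that the estimator recovers its stiffness.
rng = np.random.default_rng(1)
K_true = np.diag([2e-6, 1e-6, 0.5e-6])  # N/m, illustrative weak trap
cov_true = kB_T * np.linalg.inv(K_true)
pos = rng.multivariate_normal(np.zeros(3), cov_true, size=200_000)
print(np.round(stiffness_matrix(pos) / 1e-6, 3))  # ~diag(2, 1, 0.5) uN/m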
Abstract:
The key reference on the labour market and the logics of squad formation in the big-5 European leagues. One hundred richly coloured pages, illustrated by graphics, maps, rankings, statistical models and analysis in French and English, which:
• inform managers about potential strategies to put their clubs on the road to success;
• help managers of federations and players' unions to understand current trends and to take decisions;
• suggest to journalists new lines of investigation likely to interest the general public;
• allow researchers and students to benefit from reliable and comparable sources, developed with the greatest possible rigour;
• give fans the possibility to understand in detail the dynamics at work in their favourite sport and club.
Abstract:
Purpose: The objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time, and of computing, from the estimated motion field, quantities for the further biomechanical modeling of the cerebrovascular wall. Methods: The whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated from a 4D to 2D + t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D + t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. For further characterizing vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation. Results: Evaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even when a substantially inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall. Conclusions: The authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time, although it has to be applied to a larger patient population prior to possible wide application to routine endovascular procedures. In particular, for the first time, this feasibility study has shown that in vivo cerebrovascular motion can be obtained intraprocedurally from a 3DRA acquisition. Results have also shown the potential of performing strain analysis using this imaging modality, thus making possible the future modeling of biomechanical properties of the vascular wall.
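As a rough illustration of deriving a strain measure from an estimated motion field (a generic sketch, not the authors' method): for a thin, nearly incompressible wall, per-triangle area dilation between the rest and deformed meshes serves as a proxy for wall thickness change.

import numpy as np

def triangle_areas(verts, faces):
    """Areas of all triangles of a mesh."""
    v0, v1, v2 = (verts[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)

def area_strain(rest_verts, deformed_verts, faces):
    """Per-triangle area strain (A - A0) / A0 of a vessel-wall mesh.

    For a thin, nearly incompressible wall, in-plane area dilation
    implies thinning, so this map approximates thickness change.
    """
    a0 = triangle_areas(rest_verts, faces)
    a = triangle_areas(deformed_verts, faces)
    return (a - a0) / a0

# Toy example: a triangle uniformly stretched by 10% in x and y.
rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2]])
deformed = rest * np.array([1.1, 1.1, 1.0])
print(area_strain(rest, deformed, faces))  # [0.21] = 1.1^2 - 1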
Abstract:
The safety benefit of signalizing intersections of high-speed divided expressways is considered. Analyses comparing unsignalized and signalized intersections were conducted on intersections with 50 and 55 mph speed limits and on 55 mph intersections only; results of the 55 mph analysis are included in this report. Matched-pair analysis indicates that, generally, signalized intersections have a higher crash rate but lower costs per crash. On the other hand, before-and-after analysis (of intersections signalized between 1994 and 2001) indicates lower crash rates (~30 percent) and total costs (~10 percent) after signalization. Empirical Bayes (EB) adjusted before-and-after analysis reduces the estimated safety benefit (crash rate reduction) to about 20 percent. The study shows how commonly used analyses can differ in their results, and that there is great variability in the safety performance of individual signalized locations. This variability and the effect of the EB adjustment are demonstrated through the use of innovative graphics.
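The EB adjustment works by pulling each site's observed before-period count towards the prediction of a safety performance function, removing regression-to-the-mean bias. A sketch using the standard Hauer/Highway Safety Manual weighting, with entirely made-up numbers that reproduce the kind of ~30% naive versus ~20% EB-adjusted reduction reported above:

def eb_expected(predicted, observed, overdispersion):
    """Empirical Bayes estimate of expected crashes in the before period.

    predicted      : safety-performance-function estimate for the period
    observed       : crash count actually recorded at the site
    overdispersion : negative-binomial parameter k of the SPF

    Weighting w = 1 / (1 + k * predicted): the noisier the SPF estimate,
    the more weight shifts to the site's own count, and vice versa.
    """
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

# Illustrative numbers: a site with 12 observed crashes where similar
# sites average 7; EB shrinks the naive before-value towards 7.
expected_before = eb_expected(predicted=7.0, observed=12.0, overdispersion=0.2)
after = 8.0
print(f"EB-adjusted before count: {expected_before:.2f}")          # 9.92
print(f"naive reduction: {(12.0 - after) / 12.0:.0%}")             # 33%
print(f"EB reduction: {(expected_before - after) / expected_before:.0%}")  # 19%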
Abstract:
The objective of my thesis was to find out how mobile TV services will influence the TV consumption behaviour of the Finns. In particular, the study focuses on the consumption behaviour of well-educated urban people. For my thesis, I provided a detailed analysis of the results of FinPilot, a large-scale questionnaire study from the year 2005 based on an assignment from Nokia Ltd. In order to deepen the results, I focused on the above-mentioned group of young people with a good education. The goal of the FinPilot research was to answer the following questions: what kinds of programs are watched when using the mobile television service, in what kinds of circumstances, and for which reasons. The results of the research consisted mainly of data such as figures and graphics. The data was examined from a helicopter perspective, as this gave additional value to the research and consequently to my own thesis. My study offered complementary, unique information about the group's needs, as it was based on questionnaires supplemented by individual interviews of the group members, their free comments, and group discussions. The results proved that the mobile TV service did not increase total TV consumption time. The time used for watching mobile TV was significantly shorter than the time spent watching traditional TV. According to my study, young urban people with a good education are more interested in adopting mobile TV services than the average Finn. Being eager to utilize the added value offered by mobile TV, they are a potential target group in launching and marketing processes. On the basis of the outcome of the thesis, the future of the mobile TV service seems very promising. The content and the pricing, however, have to match users' needs and expectations. All the results indicate that there is a social demand for mobile TV services.