288 results for "Models de simulació"


Relevance: 20.00%

Abstract:

Computational fluid dynamics (CFD) is a tool for analysing, by computer, a range of problems involving fluid flows. CFD programs use non-linear mathematical expressions that define the fundamental equations of fluid flow and heat transport, which are solved with complex iterative algorithms. This tool is now a fundamental part of the design process in many companies working with fluid dynamics. Simulations performed with these programs have been shown to be reliable and to save time and money, since they avoid costly trial-and-error processes. This project uses the CFD program Ansys CFX 11.0 to simulate a two-phase agitation of water and air at room temperature. The objectives are to determine the optimal simulation parameters for reproducing this agitation, in order to subsequently design a new impeller.

Relevance: 20.00%

Abstract:

This final-year project investigates and experiments with a new line of development for dynamic routing algorithms. Starting from the AntNet-QoS algorithm [2], new measurements are incorporated into the algorithm (available bandwidth and jitter) which, combined with the delay measurement already in use, allow it to adapt better to the current traffic conditions in the network and to the traffic's specific quality-of-service (QoS) requirements.

Relevance: 20.00%

Abstract:

The aim of the project is to create an interface capable of obtaining parameters for squaring and grooving processes, such as feed rate, depth of cut and number of passes, as a function of surface roughness, taking into account both the factors and constraints entered by the user and those studied previously by the authors G. Halvei and R.D. Weill. The application will form part of a larger program offering calculations for all chip-removal machining operations.

Relevance: 20.00%

Abstract:

Fault detection and isolation (FDI) methods based on analytical redundancy (that is, comparing the actual behaviour of the process with the expected behaviour obtained from a mathematical model of it) are widely used for diagnosing systems when a mathematical model is available. An algorithm has been implemented that derives this analytical redundancy from the plant model using the technique known as Structural Analysis.
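The analytical-redundancy idea described above, comparing measured behaviour against a model prediction and reacting to the discrepancy, can be sketched minimally as follows. This is an illustration of the general residual-check principle only, not the thesis's Structural Analysis algorithm; the function name and the fixed-threshold logic are assumptions:

```python
import numpy as np

def detect_faults(measured, predicted, threshold):
    """Analytical-redundancy residual check.

    Flags the samples where the measured output deviates from the
    model-predicted output by more than a fixed threshold.
    Returns a boolean array: True where a fault is indicated.
    """
    residual = np.abs(np.asarray(measured, dtype=float) -
                      np.asarray(predicted, dtype=float))
    return residual > threshold
```

In practice the threshold must account for model uncertainty and measurement noise, otherwise healthy deviations are flagged as faults.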

Relevance: 20.00%

Abstract:

The objective of the project is to implement a recommender-system simulator for studying algorithms that decouple the recommender agent from the user, combining them with various recommendation techniques, using infohabitants as recommender agents, and observing how they behave within a recommender system.

Relevance: 20.00%

Abstract:

Nowadays, many films and video games feature large crowds of people. Gathering large numbers of people to shoot such scenes is often very costly, with all the problems this entails (economic or logistical), and most of the time it is unfeasible. With the arrival of new graphics cards, this visualisation can be computed in real time, given a series of simplifications in the structure of the models to be rendered. The aim of the GdM is not to offer a highly realistic visualisation of crowds (as Massive Software would, for example), but rather to simulate crowds in far less time while obtaining results good enough to reproduce the behaviour and movement of a human crowd. This project consists of two parts: the first is the creation of the model of the people who will form the crowd; the second is the creation of the GdM and the integration of this model to generate the crowd.

Relevance: 20.00%

Abstract:

Paper presented at the Jornada on self-protection plans ("plans d'autoprotecció").

Relevance: 20.00%

Abstract:

This is a retrospective study comparing mobility and scapulohumeral impingement between two different models of reverse shoulder prosthesis. These prostheses were implanted in patients with irreparable rotator cuff tears. This surgery is not free of complications, and one of the most common is scapulohumeral impingement, or notching.

Relevance: 20.00%

Abstract:

See the abstract at the beginning of the document in the attached file.

Relevance: 20.00%

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying fixed or dynamic sets of rules to determine trading orders. It has grown to account for up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets: stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data, sampled at one-minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reversal SDE and its variations.

A model for forecasting any economic or financial magnitude can be properly defined with scientific rigour yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. This is the reason we emphasise the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial-market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
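The distance-based pairs-trading strategy mentioned above can be sketched in a few lines. This is an illustration in Python rather than the thesis's MATLAB implementation; the rolling window and entry/exit z-score levels are assumed parameters:

```python
import numpy as np

def pairs_trading_signals(price_a, price_b, window=60, entry_z=2.0, exit_z=0.5):
    """Distance-method pairs trading: trade the z-score of the log-price spread.

    Returns an array of positions: +1 = long the spread (long A, short B),
    -1 = short the spread, 0 = flat.
    """
    spread = np.log(np.asarray(price_a)) - np.log(np.asarray(price_b))
    positions = np.zeros(len(spread))
    pos = 0
    for t in range(window, len(spread)):
        hist = spread[t - window:t]
        z = (spread[t] - hist.mean()) / hist.std()
        if pos == 0:
            if z > entry_z:
                pos = -1          # spread unusually wide: short A, long B
            elif z < -entry_z:
                pos = 1           # spread unusually narrow: long A, short B
        elif abs(z) < exit_z:
            pos = 0               # reversion towards the mean: close position
        positions[t] = pos
    return positions
```

Entering when the z-score is large and exiting near zero is exactly the mean-reversion bet: the spread is expected to revert to its rolling mean.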

Relevance: 20.00%

Abstract:

In this paper we propose a parsimonious regime-switching approach to model the correlations between assets: the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006), but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the TCC parameters involves a simple grid-search procedure. In addition, it is easy to guarantee a positive definite correlation matrix, because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test for, and allow, different parts of the correlation matrix to be governed by different transition variables; for this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano-type test. We conclude that threshold correlation modelling gives rise to a significant reduction in portfolio variance.
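A rough sketch of the single-threshold, two-regime estimation idea described above: for each candidate threshold, the sample is split on the transition variable, the regime correlation matrices are the sample correlations (hence positive definite by construction), and the grid search keeps the best-scoring split. The Gaussian-likelihood scoring and all names here are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def tcc_fit(returns, transition_var, thresholds):
    """Grid-search fit of a two-regime threshold conditional correlation model.

    returns: (n, d) array of standardised returns; transition_var: (n,)
    observable transition variable; thresholds: candidate threshold grid.
    Returns the best threshold and the two regime correlation matrices.
    """
    best = None
    for c in thresholds:
        low = transition_var <= c
        if low.sum() < 10 or (~low).sum() < 10:
            continue  # require enough observations in each regime
        r_low = np.corrcoef(returns[low].T)
        r_high = np.corrcoef(returns[~low].T)
        ll = 0.0
        for mask, r in ((low, r_low), (~low, r_high)):
            _, logdet = np.linalg.slogdet(r)
            inv = np.linalg.inv(r)
            x = returns[mask]
            # Gaussian log-likelihood (up to constants) of this regime's data
            ll += -0.5 * (mask.sum() * logdet +
                          np.einsum('ij,jk,ik->', x, inv, x))
        if best is None or ll > best[0]:
            best = (ll, c, r_low, r_high)
    return best[1], best[2], best[3]
```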

Relevance: 20.00%

Abstract:

This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data-generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behaviour of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
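In the adaptive-learning literature, the "statistical learning algorithm" by which agents revise beliefs is typically a recursive least-squares (RLS) update; the following minimal sketch is written under that assumption and is not the paper's exact algorithm:

```python
import numpy as np

def rls_update(beta, R, x, y, gain):
    """One recursive least-squares belief update.

    beta: current coefficient beliefs; R: moment matrix of the regressors;
    x: current regressor vector; y: realised outcome; gain: learning gain
    (1/t for decreasing gain, a small constant for perpetual learning).
    """
    R = R + gain * (np.outer(x, x) - R)
    beta = beta + gain * np.linalg.solve(R, x * (y - x @ beta))
    return beta, R
```

With a decreasing gain the beliefs converge to the least-squares estimate; with a constant gain agents discount old data and keep tracking a drifting economy.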

Relevance: 20.00%

Abstract:

The paper discusses the maintenance challenges of organisations with a huge number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes that instruments are connected and report the features relevant for monitoring. It also requires enough historical records of diagnosed breakdowns to make the probabilistic models reliable and useful for the predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and dynamic Bayesian networks are proposed to model cause-effect relationships, triggering predictive maintenance services based on the influence between observed features and previously documented diagnostics.
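For the simplest regular Markov model of this kind, a single instrument with an "up" and a "down" state, steady-state availability follows directly from the estimated failure rate λ and repair rate μ as A = μ / (λ + μ); a minimal sketch (the function name is illustrative):

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) Markov model.

    With failure rate lambda (up -> down) and repair rate mu (down -> up),
    the long-run fraction of time the instrument is up is mu / (lambda + mu),
    which is equivalent to MTBF / (MTBF + MTTR).
    """
    return repair_rate / (failure_rate + repair_rate)
```

For example, with a mean time between failures of 1000 h (λ = 0.001/h) and a mean repair time of 8 h (μ = 0.125/h), availability is about 99.2%.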

Relevance: 20.00%

Abstract:

Piecewise linear systems arise as mathematical models in many practical applications, often from the linearisation of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous- or discrete-time nature. We propose an approach based on state transformation, more particularly the partition of the phase portrait into different regions, where each subregion is modelled as a two-dimensional linear time-invariant system. The Takagi-Sugeno model, which is a combination of local models, is then computed. The simulation results show that the Alpha partition is well suited for dealing with such systems.
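The combination of local models in a Takagi-Sugeno structure can be sketched as a membership-weighted blend of local linear dynamics; a minimal discrete-time illustration (the names and the form of the membership functions are assumptions):

```python
import numpy as np

def ts_step(x, local_A, memberships):
    """One step of a discrete-time Takagi-Sugeno model.

    The next state is a convex combination of local linear models:
        x_next = sum_i h_i(x) * A_i @ x,
    where the h_i are normalised membership (weighting) functions.
    local_A: list of (n, n) matrices; memberships: callable x -> weights.
    """
    h = np.asarray(memberships(x), dtype=float)
    h = h / h.sum()  # normalise so the weights sum to one
    return sum(hi * (Ai @ x) for hi, Ai in zip(h, local_A))
```

With equal weights and local models 0.5*I and 0.8*I, one step contracts the state by the blended factor 0.65.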

Relevance: 20.00%

Abstract:

Our work is concerned with user modelling in open environments. Our proposal is in the line of contributions to advances in user modelling in open environments made possible by Agent Technology, in what has been called the Smart User Model (SUM). Our research contains a holistic study of user modelling across several research areas related to users. We have developed a conceptualisation of user modelling by means of examples from a broad range of research areas, with the aim of improving our understanding of user modelling and its role in the next generation of open and distributed service environments. This report is organised as follows. In chapter 1 we introduce our motivation and objectives. Chapters 2 to 5 provide the state of the art on user modelling: in chapter 2 we give the main definitions of the elements described in the report; in chapter 3 we present a historical perspective on user models; in chapter 4 we review user models from the perspective of different research areas, with special emphasis on the give-and-take relationship between Agent Technology and user modelling; and in chapter 5 we describe the main challenges that, from our point of view, need to be tackled by researchers wanting to contribute to advances in user modelling. The study of the state of the art is followed by exploratory work in chapter 6, where we define a SUM and a methodology to deal with it, and present some case studies to illustrate the methodology. Finally, we present the thesis proposal to continue the work, together with its corresponding work schedule.