73 results for Intention-based models
Abstract:
The paper presents a foundation model for Marxian theories of the breakdown of capitalism based on a new falling-rate-of-profit mechanism. All of these theories are based on one or more of "the historical tendencies": a rising capital-wage bill ratio, a rising capitalist share and a falling rate of profit. The model is a foundation in the sense that it generates these tendencies in the context of a model with a constant subsistence wage. The newly discovered generating mechanism is based on neo-classical reasoning for a model with land. It is non-Ricardian in that land-augmenting technical progress can be unboundedly rapid. Finally, since the model has no steady state, it is necessary to use a new technique, Chaplygin's method, to prove the result.
Abstract:
We review recent likelihood-based approaches to modeling demand for medical care. A semi-nonparametric model along the lines of Cameron and Johansson's Poisson polynomial model, but using a negative binomial baseline model, is introduced. We apply these models, as well as a semiparametric Poisson, a hurdle semiparametric Poisson, and finite mixtures of negative binomial models, to six measures of health care usage taken from the Medical Expenditure Panel Survey. We conclude that most of the models lead to statistically similar results, both in terms of information criteria and of conditional and unconditional prediction. This suggests that applied researchers may not need to be overly concerned with the choice of which of these models they use to analyze data on health care demand.
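As a minimal illustration of the kind of comparison the abstract describes (not the authors' models or data), the following sketch fits a Poisson and a negative binomial to simulated overdispersed counts and compares them by AIC; all parameter values are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate overdispersed counts (variance > mean), as is typical for
# measures of health care usage such as numbers of doctor visits.
y = rng.negative_binomial(n=2.0, p=0.4, size=5000)

# Poisson MLE: lambda-hat is simply the sample mean (1 parameter).
lam = y.mean()
ll_pois = stats.poisson.logpmf(y, lam).sum()

# Negative binomial via method of moments (2 parameters):
# mean m = r(1-p)/p, variance v = r(1-p)/p^2  =>  p = m/v, r = m^2/(v-m).
m, v = y.mean(), y.var()
p_hat = m / v
r_hat = m**2 / (v - m)
ll_nb = stats.nbinom.logpmf(y, r_hat, p_hat).sum()

aic_pois = 2 * 1 - 2 * ll_pois
aic_nb = 2 * 2 - 2 * ll_nb
print(f"Poisson AIC: {aic_pois:.1f}, NegBin AIC: {aic_nb:.1f}")
```

With overdispersed data the negative binomial wins clearly on AIC; on data that is close to equidispersed the two criteria would be much closer, which is the spirit of the abstract's conclusion.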
Abstract:
Report for the scientific sojourn carried out at the Model-based Systems and Qualitative Reasoning Group (Technical University of Munich) from September until December 2005. Constructed wetlands (CWs), or modified natural wetlands, are used all over the world as wastewater treatment systems for small communities because they can provide high treatment efficiency with low energy consumption and low construction, operation and maintenance costs. Their treatment process is very complex because it includes physical, chemical and biological mechanisms such as microorganism oxidation, microorganism reduction, filtration, sedimentation and chemical precipitation. Moreover, these processes can be influenced by different factors. In order to guarantee the performance of CWs, an operation and maintenance program must be defined for each Wastewater Treatment Plant (WWTP). The main objective of this project is to provide computer support for the definition of the most appropriate operation and maintenance protocols to guarantee the correct performance of CWs. To this end, we propose defining models that represent knowledge about CWs: the components involved in the sanitation process, the relations among these units, and the processes that remove pollutants. Horizontal subsurface flow CWs are chosen as a case study, and the filtration process is selected as the first process to be modelled. However, the goal is to represent the process knowledge in such a way that it can be reused for other types of WWTP.
Abstract:
Research project carried out during a stay at the Laboratory of Archaeometry of the National Centre of Scientific Research "Demokritos" in Athens, Greece, between June and September 2006. This study is framed within a broader investigation of the technological change documented in the production of Roman-type amphorae during the 1st century BC and the 1st century AD in the coastal territories of Catalonia. One part of this study involves calculating the mechanical properties of these amphorae and evaluating them as a function of amphora typology, using Finite Element Analysis (FEA). FEA is a numerical approach that originated in the engineering sciences and has been used to estimate the mechanical behaviour of a model in terms of, for example, deformation and stress. An object, or rather its model, is divided into sub-domains called finite elements, to which the mechanical properties of the material under study are assigned. These finite elements are connected to form a mesh whose constraints can be defined. When a given force is applied to a model, the behaviour of the object can be estimated by means of the set of linear equations that define the performance of the finite elements, providing a good approximation for describing the structural deformation. This computer simulation is thus an important tool for understanding the functionality of archaeological ceramics. The procedure provides a quantitative model for predicting the failure of a ceramic object when it is subjected to different pressure conditions. The model has been applied to different amphora typologies. Preliminary results show significant differences between the pre-Roman typology and the Roman typologies, as well as among the Roman amphora designs themselves, with important archaeological implications.
Abstract:
The main purpose of this paper is to build a research model that integrates the socioeconomic concept of social capital into intentional models of new firm creation. In addition, some researchers have found that cultural differences between countries and regions have an effect on economic development. Therefore, a second objective of this study is to explore whether those cultural differences affect entrepreneurial cognitions. Research design and methodology: Two samples of last-year university students from Spain and Taiwan are studied through an Entrepreneurial Intention Questionnaire (EIQ). Structural equation models (Partial Least Squares) are used to test the hypotheses. The possible existence of differences between the two sub-samples is also explored empirically through a multigroup analysis. Main outcomes and results: The proposed model explains 54.5% of the variance in entrepreneurial intention. Moreover, there are some significant differences between the two sub-samples that could be attributed to cultural diversity. Conclusions: This paper has shown the relevance of cognitive social capital in shaping individuals' entrepreneurial intentions across different countries. Furthermore, it suggests that national culture may shape entrepreneurial perceptions, but not cognitive social capital. Therefore, cognitive social capital and culture (made up essentially of values and beliefs) may act together to reinforce entrepreneurial intention.
Abstract:
The aim of this project is to define and implement a simulation model based on the coordination and assignment of emergency services in traffic accidents. The model was defined using Coloured Petri Nets and implemented with the Rockwell Arena 7.0 software. The first simulation shows a theoretical queue-based model, while the second shows a more complete and realistic model thanks to a CORBA connection to a database containing geographic information on the fleets and routes. As a result of the study, and with the help of Google Earth, we can run graphical simulations to view the generated accidents, the service fleets, and the movement of the vehicles from their bases to the accident sites.
Abstract:
We present a real data set of claims amounts where costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric and a semi-parametric method, based on transformation kernel density estimation. We explore the data set with standard univariate methods. We also propose ways to select the bandwidth and transformation parameters in the univariate case based on Bayesian methods. We indicate how to compare the results of alternative methods both looking at the shape of the overall density domain and exploring the density estimates in the right tail.
Abstract:
When actuaries face the problem of pricing an insurance contract that contains different types of coverage, such as a motor insurance or a homeowner's insurance policy, they usually assume that the claim types are independent. However, this assumption may not be realistic: several studies have shown that there is a positive correlation between claim types. Here we introduce different regression models in order to relax the independence assumption, including zero-inflated models to account for the excess of zeros and for overdispersion. These models have been largely ignored in the multivariate Poisson literature to date, mainly because of their computational difficulties. Bayesian inference based on MCMC helps to solve this problem (and also lets us derive, for several quantities of interest, posterior summaries that account for uncertainty). Finally, these models are applied to an automobile insurance claims database with three different types of claims. We analyse the consequences for pure and loaded premiums when the independence assumption is relaxed by using different multivariate Poisson regression models and their zero-inflated versions.
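As a hedged sketch of the zero-inflated idea mentioned above, reduced to the simplest univariate case (a zero-inflated Poisson fitted by maximum likelihood to simulated claim counts, not the paper's multivariate Bayesian models; all parameter values are invented):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate zero-inflated Poisson claim counts: with probability pi the
# policyholder reports no claims at all; otherwise counts are Poisson(lam).
pi_true, lam_true, n = 0.3, 1.5, 5000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, n))

def zip_negloglik(theta, y):
    # Unconstrained parametrisation: theta = (logit(pi), log(lam)).
    pi = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    p0 = pi + (1 - pi) * np.exp(-lam)          # P(Y = 0) under the ZIP model
    ll_zero = np.log(p0) * (y == 0).sum()
    ypos = y[y > 0]
    ll_pos = (np.log(1 - pi) + stats.poisson.logpmf(ypos, lam)).sum()
    return -(ll_zero + ll_pos)

res = optimize.minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(f"pi = {pi_hat:.3f}, lambda = {lam_hat:.3f}")
```

Note that the raw fraction of zeros here would be pi + (1 - pi)e^(-lam), well above pi itself; the likelihood separates the structural zeros from the Poisson zeros, which is exactly what makes zero-inflated models useful for claim counts.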
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that, as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison with several other estimators that have been proposed for DLV models.
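The core device of the abstract, kernel smoothing of a long simulation to obtain a conditional moment, can be sketched with a Nadaraya-Watson estimator. The toy model below (with E[y | x] = x^2) is an invented stand-in, not one of the paper's DLV models:

```python
import numpy as np

rng = np.random.default_rng(1)

# A long unconditional simulation of a simple model: y depends on x
# nonlinearly, with true conditional mean E[y | x] = x**2.
n_sim = 200_000
x = rng.normal(size=n_sim)
y = x**2 + rng.normal(size=n_sim)

def nw_conditional_mean(x0, x, y, h):
    """Nadaraya-Watson kernel estimate of E[y | x = x0], Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return (w * y).sum() / w.sum()

# The kernel-smoothed moment at x0 = 1 should be close to 1**2 = 1, even
# though nothing was ever simulated *conditional* on x -- which is the
# point the abstract makes about non-simulable conditioning information.
m_hat = nw_conditional_mean(1.0, x, y, h=0.05)
print(round(m_hat, 2))
```

In the estimator proper, such smoothed conditional moments would be evaluated at each trial parameter value and plugged into a standard GMM objective.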
Abstract:
Since 2004, the Faculty of Health and Life Sciences has used problem-based learning (PBL) as a teaching method in its Biology degree. Over this period we have learned some of the keys to applying the method in our programme. First, training resources must be available to support the tutors taking part in the project. To this end we designed a portal where our teaching staff can find materials useful for their work, as well as documents that help them better understand what PBL involves. Second, the project aimed to design and evaluate activities that integrate laboratory practicals into the problem-solving logic of PBL. To this end, we designed two activities in the third year of the degree that we called laboratory-based learning (LBL). Problems were designed with a first part to be solved in the classroom in tutorial groups and a second part that required the students to carry out laboratory experiments aimed at understanding and solving the questions raised in the tutorial group. LBL-1 was a cell biology project intended to explore in depth the mechanisms involved in myocyte differentiation. LBL-2 was a joint project of the lecturers in Plant Physiology, Biostatistics and Microbiology, in which students had to propose a solution to a problem involving the genetic manipulation of plant cells so that they would produce a specific substance, scopolamine. Finally, the students had to write an original article as the final project of each LBL. The results of the two years of experimentation have been highly satisfactory, according to the surveys completed by students and teaching staff.
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. The parametric analysis is limited to the estimation of normal and lognormal distributions for each of the two claim types. The non-parametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to the data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation chosen from a class of transformations that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
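The transformation approach described above can be sketched as follows (the paper's own implementation is in R; the lognormal "claims" below are simulated, not the paper's data): estimate a kernel density on the log scale, where the data are roughly symmetric, then map it back with the change-of-variables Jacobian.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(7)

# Positive, right-skewed "claim costs"; a lognormal stands in for real data.
claims = rng.lognormal(mean=8.0, sigma=1.2, size=2000)

# Transformation kernel density estimation: Gaussian KDE on the log scale,
# then back-transform:  f(x) = g(log x) / x.
log_kde = stats.gaussian_kde(np.log(claims))

def density(x):
    x = np.asarray(x, dtype=float)
    return log_kde(np.log(x)) / x

# Sanity check: the back-transformed density integrates to ~1 over the
# observed support (log-spaced grid to resolve the peak at small costs).
grid = np.geomspace(claims.min(), claims.max(), 2000)
mass = trapezoid(density(grid), grid)
print(round(mass, 2))
```

Direct kernel estimation on the raw scale would oversmooth the peak and undersmooth the heavy right tail; transforming first sidesteps both problems with a single global bandwidth.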
Abstract:
In this paper, we present a stochastic model for disability insurance contracts. The model is based on a discrete-time non-homogeneous semi-Markov process (DTNHSMP) into which the backward recurrence time process is introduced. This permits a more exhaustive study of disability evolution and a more efficient approach to the duration problem. The use of semi-Markov reward processes makes it possible to derive equations for the prospective and retrospective mathematical reserves. The model is applied to a sample of contracts drawn at random from a mutual insurance company.
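As a rough illustration of reserve calculation by backward recursion, here is a deliberately simplified time-homogeneous discrete-time Markov version (not the paper's non-homogeneous semi-Markov model with backward recurrence times); the transition probabilities, benefit and discount rate are all invented:

```python
import numpy as np

# States: 0 = active, 1 = disabled, 2 = dead (absorbing).
# Illustrative one-period transition probabilities, not calibrated to data.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.15, 0.80, 0.05],
    [0.00, 0.00, 1.00],
])
benefit = np.array([0.0, 1.0, 0.0])  # annuity of 1 paid while disabled
v = 1.0 / 1.03                       # one-period discount factor at 3%

# Backward recursion for the prospective reserve over T periods:
# V_T(i) = 0;  V_t(i) = sum_j P[i, j] * v * (benefit[j] + V_{t+1}(j)).
T = 20
V = np.zeros(3)
for _ in range(T):
    V = P @ (v * (benefit + V))

print(f"reserve (active)   = {V[0]:.3f}")
print(f"reserve (disabled) = {V[1]:.3f}")
```

In the semi-Markov setting of the paper, the recursion would additionally condition on the backward recurrence time (duration in the current state), which is precisely what the DTNHSMP machinery handles.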
Abstract:
We present an analysis of data from a literature review and from semi-structured interviews with experts on OER, to identify different aspects of OER business models and to establish how the success of OER initiatives is measured. The results collected thus far show that two different business models for OER initiatives exist, but no data on their success or failure have been published. We propose a framework for measuring the success of OER initiatives.
Abstract:
The general level of adoption of M-technologies and M-services at Spanish universities is still not very high as the first decade of the 21st century draws to a close. Some universities and some of their libraries are starting to experiment with M-technologies, but they are still far from a model of massive exploitation, more so than in some other countries. A deeper study, which we do not undertake in this paper, would be needed to identify the main reasons. This general picture does not mean that there are no significant initiatives in which universities and their libraries are starting to trust M-technologies. Models based on M-technologies make more sense than ever in open universities and open libraries. That is why the UOC Library began its first experiences in M-technology and M-library development in the late 1990s. In 1999 the available technology offered the opportunity to carry out a first pilot test with SMS, followed by the application of WAP technology. At that time we managed to link mobile phones to the OPAC through a WAP system that allowed users to search the catalogue by categories, find the final location of a document, and obtain the address of the library from which it could be borrowed. Since then, the UOC and its library have directed their efforts towards adapting their services to all kinds of M-devices used by end users. Having left WAP technology behind, the library is now experimenting with new devices such as e-books, and with new services to obtain more feedback through the OPAC and metasearch products. We present the case of the Open University of Catalonia on two levels: M-services applied in the library, and M-technologies applied in other university services and resources.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a field where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations.
A model for forecasting any economic or financial magnitude may be properly defined with scientific rigour and yet lack any economic value, making it useless from a practical point of view. This is why this project could not be complete without a backtest of the strategies mentioned. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasise the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of the technical models are more volatile than their counterparts from the market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of the quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
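As an illustration of the mean-reversion side, the following sketch simulates an Ornstein-Uhlenbeck spread and calibrates it from the simulated path via the exact AR(1) correspondence; the parameters are invented, and the code is Python rather than the thesis's MATLAB:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an Ornstein-Uhlenbeck spread  dX = kappa*(mu - X) dt + sigma dW
# with an Euler scheme (illustrative parameters, not market-calibrated).
kappa, mu, sigma, dt, n = 5.0, 0.0, 0.2, 1.0 / 252, 50_000
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = x[t] + kappa * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Calibrate by regressing X_{t+1} on X_t: for an OU process the AR(1)
# slope is exactly b = exp(-kappa*dt), so kappa = -log(b)/dt, and the
# long-run mean is mu = a/(1 - b) from the intercept a.
b, a = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = -np.log(b) / dt
mu_hat = a / (1.0 - b)
print(f"kappa = {kappa_hat:.2f}, mu = {mu_hat:.3f}")
```

In a live pairs-trading strategy this regression would be re-run on a rolling window of the observed spread, echoing the thesis's point that parameters must be recalibrated frequently as market conditions evolve.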