9 results for real option analysis
at Universitat de Girona, Spain
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down into 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that with zeros on more than one component, we need to be able to model dependence and independence of zeros across the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
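As a rough illustration of the conditional check described above, the following sketch recloses the subcomposition excluding alcohol and screens for differences between teetotal and non-teetotal households. The data are synthetic and the category names hypothetical; this is not the analysis from the abstract, just a minimal example of the idea.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic budget shares over 4 major categories: housing, food, utilities, alcohol.
n = 200
shares = rng.dirichlet([5.0, 4.0, 2.0, 1.0], size=n)
teetotal = rng.random(n) < 0.3
shares[teetotal, 3] = 0.0  # structural zeros: teetotal households spend nothing here

def clr_subcomposition(x):
    """Centred log-ratio of the subcomposition on the first three parts."""
    sub = x[:, :3] / x[:, :3].sum(axis=1, keepdims=True)  # reclose without alcohol
    logs = np.log(sub)
    return logs - logs.mean(axis=1, keepdims=True)

a = clr_subcomposition(shares[teetotal])
b = clr_subcomposition(shares[~teetotal])

# Coordinate-wise two-sample tests as a crude screen for sub-compositional
# independence: if nothing differs, the non-alcohol spending pattern looks the
# same whether or not the household is teetotal.
for j, name in enumerate(["housing", "food", "utilities"]):
    t, p = stats.ttest_ind(a[:, j], b[:, j], equal_var=False)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```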
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as non-zero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models: an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
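A minimal sketch of the two-stage construction described above, for the independent-incidence variant only: stage 1 decides where the zeros fall with an independent Bernoulli draw per part, stage 2 spreads the unit over the non-zero parts with a logistic-normal draw. The parameter values are illustrative assumptions, not estimates, and none of the authors' estimation machinery is shown.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 5                                                # number of parts
incidence_p = np.array([1.0, 0.9, 0.7, 0.5, 0.95])   # P(part is present)
mu = np.zeros(D)      # log-scale location for the logistic-normal stage
sigma = 0.5

def sample_composition():
    """Stage 1: incidence pattern; stage 2: logistic-normal share of the unit."""
    present = rng.random(D) < incidence_p
    if not present.any():                 # degenerate all-zero pattern; resample
        return sample_composition()
    y = np.zeros(D)
    z = rng.normal(mu[present], sigma)    # latent log-scale values
    w = np.exp(z)
    y[present] = w / w.sum()              # additive logistic (closure) step
    return y, present

for y, present in (sample_composition() for _ in range(10)):
    print(np.round(y, 3), "incidence pattern:", present.astype(int))
```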
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis, we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the total score, which each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
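A toy illustration of the comparison strategy (synthetic scores and a hypothetical noise model, not the study's data): close the normative scale scores to a composition and compare the derived log-ratios with the directly ipsative ones.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 502                                                   # sample size, as in the study
true_profile = rng.dirichlet([4.0, 3.0, 2.0], size=n)     # latent trait profiles

def close(x):
    """Rescale rows to sum to 1 (the closure operation)."""
    return x / x.sum(axis=1, keepdims=True)

# Direct ipsative scores: compositional by construction (fixed total per person).
ipsative = close(true_profile * np.exp(rng.normal(0.0, 0.2, size=(n, 3))))
# Normative scores: free scale totals, so no unit-sum constraint.
normative = true_profile * 20.0 * np.exp(rng.normal(0.0, 0.2, size=(n, 3)))

# Compare a direct ipsative log-ratio with the derived normative log-ratio.
lr_ipsative = np.log(ipsative[:, 0] / ipsative[:, 1])
lr_normative = np.log(close(normative)[:, 0] / close(normative)[:, 1])
print("correlation of log-ratios:", np.corrcoef(lr_ipsative, lr_normative)[0, 1])
```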
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints, which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analysing compositional time series consists of applying an initial transform to break the positive and unit-sum constraints, followed by analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
Abstract:
This research arose in 2001 from the need to confront a new legal reality: mobbing. The study of the published literature (mostly from fields outside the law) was decisive, but above all so were the interviews with mobbing victims and their associations; this, together with the absence of any international treatment of the matter, forced a self-taught path towards a legal definition of mobbing. The thesis defines mobbing as workplace pressure aimed at the self-elimination of a worker through denigration ("presión laboral tendenciosa", tendentious workplace pressure); for the first time, this provides a definition of mobbing in a line and a half, with full legal validity, which can be memorized and therefore disseminated in order to tackle the problem. The so-called "uniform concept of mobbing" stresses denigration as the mechanism, as opposed to degrading treatment, and self-elimination as the purpose of intentional conduct. The work provides formulas for distinguishing mobbing from related figures, among which "the rule of 9" for determining whether mobbing exists should be cited; as regards statistics, many of those presented to date are criticized on methodological grounds and some court statistics are contributed; above all, it warns of the legal risks of a foreseeable specific anti-mobbing regulation, through an examination of the various definitions put forward so far. The second part of the thesis examines the degree of awareness of our legal system and courts, for which purpose more than one hundred and fifty judgments on the matter have been studied, including, of course, all those collected in the publishers' databases. The analysis serves to assess the soundness of the systematic approach defended here, exposing errors and contradictions. The thesis argues that tendentious workplace pressure, beyond violating the constitutional right to work or the fundamental rights to moral integrity and honour, is a transgression of an entire "constitutional spirit"; in this sense, both the possibility of an appeal for constitutional protection (amparo) and the right to indemnity for those who confront this situation are analysed in detail. Noting the advantages of reacting through the procedural channel for the protection of fundamental rights, the much-invoked action of art.50 ET is analysed, with suggestive contributions such as the limitation period or the "doctrine of precedents", and answers are given to questions about the obligation to continue working and provisional enforcement. As for Social Security actions, the thesis distinguishes between temporary and permanent incapacity (depression) and death and survivorship benefits; on the former it contributes the technique called "three-level interpretation", and it rules out treating suicide after mobbing as a work accident, by legal imperative, while offering a fairly reasonable substitute, the non-occupational accident. Alongside this, it argues for the viability of the surcharge of art.123 LGSS. In civil law, the thesis takes a "lege ferenda" position in favour of redirecting such actions for compensation of psychological and moral damage to the civil jurisdiction, in favour of a fuller explanation of the origin of the quantum; above all, it considers the STS of 11-3-04 inadmissible, for a plurality of reasons, but chiefly because it effectively authorizes this type of conduct "de facto".
The possibility of administrative action against this psychosocial risk is analysed on two fronts: the company and the public Administration. While the channel for the former has some twists and turns that are unravelled here, the situation is radically frustrating in the Administration (where the greatest breeding ground for mobbing is found), owing to RD 707/2002, but to an even greater extent to Criterio Técnico 34/2003, through which the interpretation of the Director General of the Labour and Social Security Inspectorate has tacitly come to partially repeal the Ley de Prevención de Riesgos Laborales for the Administration. In criminal matters, the thesis opts "a priori" for two criminal offences: crimes against workers' rights, and the offence of degrading treatment; in practice, however, only the second path can come to fruition. Finally, a detailed study is made of Ley 62/2003, a law publicized as regulating moral harassment and later defended as an advance against mobbing. The thesis warns that neither claim is true: the law has created a "legal mirage" which may harm the victims of mobbing, and whose structure is of no use for a future explicit anti-mobbing regulation.
Abstract:
Quantitative mathematical models are simplifications of reality, and therefore the behaviour obtained by simulating these models differs from the real one. Using complex quantitative models is not a solution because, in most cases, there is some uncertainty in the real system that cannot be represented with such models. One way to represent this uncertainty is by means of qualitative or semi-qualitative models. A model of this type in fact represents a set of models. Simulating the behaviour of a quantitative model generates a trajectory over time for each output variable. This cannot be the result of simulating a set of models. One way to represent the behaviour in this case is by means of envelopes. The exact envelope is complete, that is, it includes all the possible behaviours of the model, and correct, that is, every point inside the envelope belongs to the output of at least one instance of the model. Generating such an envelope is usually a very hard task that can be tackled, for example, by means of global optimization algorithms or consistency checking. For this reason, in many cases approximations to the exact envelope are obtained. A complete but not correct approximation to the exact envelope is an overestimated envelope, whereas a correct but not complete one is an underestimated envelope. These properties have been studied for different simulators for uncertain systems.
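A hedged toy example of these notions: sampling instances of an uncertain model and taking pointwise extremes yields a correct but not complete (underestimated) envelope, whereas guaranteed methods such as interval arithmetic typically yield complete but not correct (overestimated) ones. The model below is illustrative and not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
a_lo, a_hi = 0.7, 0.9   # uncertain parameter: a is only known to lie in [0.7, 0.9]
T, x0 = 30, 1.0

def simulate(a):
    """Trajectory of the toy model x[t] = a * x[t-1]."""
    x = np.empty(T)
    x[0] = x0
    for t in range(1, T):
        x[t] = a * x[t - 1]
    return x

# Sampled envelope: pointwise min/max over 200 parameter instances.  Every point
# inside it is reachable (correct), but rare behaviours may be missed (not complete).
runs = np.array([simulate(a) for a in rng.uniform(a_lo, a_hi, size=200)])
lower, upper = runs.min(axis=0), runs.max(axis=0)

# For this monotone model the exact envelope is attained at the interval endpoints,
# so the sampled envelope must sit inside it:
exact_lower, exact_upper = simulate(a_lo), simulate(a_hi)
print((lower >= exact_lower).all(), (upper <= exact_upper).all())
```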
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks, avoiding interface problems among tools, data flow and management. The approach is intended to be useful to both control and process engineers by assisting them in their tasks. The use of AI technologies for the diagnosis and performance of control loops and, of course, for assisting process supervisory tasks such as fault detection and diagnosis, is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems Design (CASSD) framework. In this sense, some basic facilities are required to be available in this proposed framework…
Abstract:
Quantified real constraints (QRCs) form a mathematical formalism used to model a large number of physical problems involving systems of non-linear equations over real variables, some of which may be quantified. QRCs appear in numerous contexts, such as Control Engineering or Biology. Solving QRCs is a very active research area for which two different approaches have been proposed: symbolic quantifier elimination and approximate methods. Nevertheless, solving large-scale problems and the general case remain open problems. This thesis proposes a new approximate methodology based on Modal Interval Analysis, a mathematical theory that makes it possible to solve problems involving logical quantifiers over real variables. Finally, two applications to Control Engineering are presented. The first concerns the fault detection problem, and the second consists of a controller for a sailboat.
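As a toy illustration of what a QRC looks like, the sketch below performs a crude grid-sampling check of a single quantified constraint. This is only an approximate decision procedure on sampled points, with an invented constraint; it is not the Modal Interval Analysis methodology of the thesis.

```python
import numpy as np

# Does  ∀ u ∈ [-1, 1]  ∃ k ∈ [0, 2] :  |k - 2*u**2| <= 0.05  hold?
U = np.linspace(-1.0, 1.0, 401)   # universally quantified real variable (sampled)
K = np.linspace(0.0, 2.0, 401)    # existentially quantified real variable (sampled)

# Satisfaction matrix: sat[i, j] is True when (U[i], K[j]) meets the constraint.
sat = np.abs(K[None, :] - 2.0 * U[:, None] ** 2) <= 0.05

# ∀u ∃k: every row must contain at least one satisfying k.
holds = sat.any(axis=1).all()
print("constraint holds on the sampled grid:", holds)
```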