966 results for Messaging, Request Response, Formal Models
Resumo:
The thesis consists of three papers that investigate two debated topics in industrial organization (in particular, in competition policy) through formal game-theoretic models. The first paper deals with the potential of conglomerate mergers among leading brands to facilitate the foreclosure of new suppliers through the retailing channel. The two remaining papers analyze antitrust policy with respect to the monopolization of spare-parts markets and aftermarkets by monopolistic equipment manufacturers.
Resumo:
In this thesis we take the first steps towards the systematic application of a methodology for automatically building formal models of complex biological systems. Such a methodology could also be useful for designing artificial systems that possess desirable properties such as robustness and evolvability. The approach we follow in this thesis is to manipulate formal models by means of adaptive search methods called metaheuristics. In the first part of the thesis we develop state-of-the-art hybrid metaheuristic algorithms to tackle two important problems in genomics, namely Haplotype Inference by parsimony and the Founder Sequence Reconstruction Problem. We compare our algorithms with other effective techniques in the literature, show the strengths and limitations of our approaches across various problem formulations and, finally, propose further enhancements that could improve the performance of our algorithms and widen their applicability. In the second part, we concentrate on Boolean network (BN) models of gene regulatory networks (GRNs). We detail our automatic design methodology and apply it to four use cases that correspond to different design criteria and address some limitations of GRN modeling by BNs. Finally, we tackle the Density Classification Problem with the aim of showing the learning capabilities of BNs. Experimental evaluation of this methodology shows its efficacy in producing networks that meet our design criteria. Our results, consistent with what has been found in other works, also suggest that networks shaped by a search process exhibit a mixture of characteristics typical of different dynamical regimes.
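The abstract does not give the algorithms themselves; as a rough illustration of what "manipulating Boolean network models with a metaheuristic" can look like, here is a minimal Python sketch. The function names and the simple flip-one-bit local search are assumptions for illustration, not the thesis's hybrid algorithms: a random BN with k inputs per node, a synchronous update step, and a toy local search over the truth tables driven by a user-supplied objective.

```python
import random

def random_bn(n, k, rng):
    """Build a random Boolean network: each of the n nodes gets k input nodes
    and a random Boolean update function stored as a truth table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from the current values of its inputs."""
    new = []
    for ins, tab in zip(inputs, tables):
        idx = 0
        for b in (state[j] for j in ins):
            idx = (idx << 1) | b
        new.append(tab[idx])
    return new

def local_search(n, k, objective, iters=1000, seed=0):
    """Toy metaheuristic: flip one truth-table bit at a time and keep the
    change if the (user-supplied) objective does not get worse."""
    rng = random.Random(seed)
    inputs, tables = random_bn(n, k, rng)
    best = objective(inputs, tables)
    for _ in range(iters):
        i = rng.randrange(n)
        j = rng.randrange(2 ** k)
        tables[i][j] ^= 1                 # tentative move: flip one entry
        score = objective(inputs, tables)
        if score <= best:                 # minimisation; revert otherwise
            best = score
        else:
            tables[i][j] ^= 1
    return inputs, tables, best
```

The objective would typically simulate the network with `step`; for the Density Classification Problem, for instance, it could score how often trajectories settle into the uniform state matching the majority of the initial configuration.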
Resumo:
Asset allocation choices are a recurring problem for every investor, who is continually engaged in combining different asset classes to arrive at an investment consistent with his or her preferences. The need to support asset managers in carrying out their tasks has fed, over time, a vast literature proposing numerous portfolio construction strategies and models. This thesis attempts to review some innovative forecasting models and some strategies in the field of tactical asset allocation, and then to assess their practical implications. First, we test for relationships between the dynamics of selected macroeconomic variables and financial markets, with the aim of identifying an econometric model capable of guiding managers' strategies in the construction of their investment portfolios. The analysis considers the US market over a period characterized by rapid economic transformation and high equity price volatility. Second, we examine the validity of momentum and contrarian trading strategies in futures markets, in particular those of the Eurozone, which lend themselves well to implementing such strategies thanks to the absence of constraints on short selling and to low transaction costs. The investigation shows that both anomalies appear with a character of stability. The abnormal returns persist even when traditional asset pricing models are used, such as the CAPM, the Fama-French model and the Carhart model. Finally, using the EGARCH-M approach, we produce volatility forecasts for the returns of the stocks in the Dow Jones; these forecasts are then used as inputs to determine the views to be fed into the Black-Litterman model. For several values of the scalar tau, the results show average excess returns of the new combined return vector that exceed the vector of market equilibrium excess returns, albeit with higher levels of risk.
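As a worked illustration of the final step described above, here is a minimal Black-Litterman combination in Python (NumPy) with hypothetical numbers; in the thesis the view uncertainties come from EGARCH-M volatility forecasts, and the scalar tau is varied as in the abstract.

```python
import numpy as np

def black_litterman(pi, sigma, P, Q, omega, tau):
    """Combine market-equilibrium excess returns `pi` with views (P, Q, omega)
    into the Black-Litterman posterior ("new combined") return vector."""
    ts_inv = np.linalg.inv(tau * sigma)            # (tau * Sigma)^-1
    om_inv = np.linalg.inv(omega)                  # Omega^-1
    post_cov = np.linalg.inv(ts_inv + P.T @ om_inv @ P)
    return post_cov @ (ts_inv @ pi + P.T @ om_inv @ Q)

# Hypothetical 3-asset example with one absolute view per asset; the view
# uncertainty matrix omega could, e.g., be built from volatility forecasts.
pi = np.array([0.03, 0.04, 0.05])                  # equilibrium excess returns
sigma = np.array([[0.040, 0.010, 0.005],
                  [0.010, 0.050, 0.012],
                  [0.005, 0.012, 0.060]])          # asset covariance matrix
P = np.eye(3)                                      # view pick matrix
Q = np.array([0.05, 0.03, 0.06])                   # view returns
omega = np.diag([0.02, 0.02, 0.02])                # view uncertainty
for tau in (0.025, 0.05, 0.1):
    print(tau, black_litterman(pi, sigma, P, Q, omega, tau))
```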
Resumo:
Membrane computing is the focus of this doctoral thesis. It is a bio-inspired type of computation, based on the cells of living organisms, in which many reactions take place simultaneously. Starting from the structure and operation of cells, different formal models, called P systems, have been defined. These models do not try to model the biological behaviour of a cell; rather, they abstract its basic principles in order to find new computational paradigms. P systems are non-deterministic, massively parallel computational models, which is why they have attracted interest in recent years for solving complex problems: in many cases they can, in theory, solve NP-complete problems in polynomial or linear time. Membrane computing has also been applied successfully in many other research areas, especially those related to biology. A large number of these computational models have by now been studied from a theoretical point of view; how they can be implemented, however, remains an open research challenge. Several lines of work exist, based on distributed architectures or on dedicated hardware, that try to come as close as possible to the non-deterministic and massively parallel character of P systems while remaining viable and efficient. This thesis proposes a static analysis of the P system as a way to optimize its execution on such platforms. The idea is that the information collected at analysis time is used to configure the platform on which the P system will later run, improving performance as a result. Transition P systems are taken as the reference model for this study. More specifically, the static analysis proposed here aims to let every membrane determine its active rules efficiently at each evolution step, that is, the rules that meet the conditions required to be applied. Along these lines, the thesis addresses the problem of the usefulness states of a given membrane, which at run time allow it to know, at every moment, the membranes with which it can communicate, an issue that determines which rules can be applied. The static analysis also draws on other features of the P system, such as the membrane structure, rule antecedents, rule consequents and rule priorities. Once all this information has been obtained at analysis time, it is organized as a decision tree so that, at run time, each membrane can obtain its active rules as efficiently as possible. The thesis also surveys a significant number of hardware and software architectures that different authors have proposed for implementing P systems, mainly distributed architectures, dedicated hardware based on FPGA boards, and platforms based on PIC microcontrollers. The aim is to propose solutions for deploying the results of the static analysis (usefulness states and decision trees for active rules) on these architectures.
In general, the conclusions are positive: the proposed optimizations integrate well into the architectures without significant penalties.
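As a rough, hedged sketch of what "determining the active rules of a membrane at each evolution step" involves, the following Python fragment checks rule antecedents against a membrane's multiset of objects, filters out rules whose target membranes are not currently reachable (a simplified stand-in for the usefulness states), and keeps only the highest-priority applicable rules. The encoding, the names and the flat priority scheme are assumptions for illustration; the decision-tree precomputation of the thesis is not reproduced here.

```python
from collections import Counter

def covers(multiset, antecedent):
    """True if the membrane's multiset contains every object the rule consumes."""
    return all(multiset[obj] >= n for obj, n in antecedent.items())

def active_rules(multiset, rules, reachable_children):
    """Return the rules applicable in this evolution step: the antecedent is
    covered, every target membrane is reachable, and no applicable rule of
    strictly higher priority exists (simplified priority handling)."""
    usable = []
    for ant, cons, prio in rules:
        targets_ok = all(t in ("here", "out") or t in reachable_children
                         for t in cons)
        if targets_ok and covers(multiset, ant):
            usable.append((ant, cons, prio))
    if not usable:
        return []
    top = max(prio for _, _, prio in usable)
    return [r for r in usable if r[2] == top]

# Toy example: a membrane holding a^3 b^1 and two rules; each rule is
# (antecedent multiset, {target: produced multiset}, priority).
ms = Counter({"a": 3, "b": 1})
rules = [
    (Counter({"a": 2}), {"here": Counter({"c": 1})}, 1),
    (Counter({"a": 1, "b": 1}), {"child1": Counter({"d": 1})}, 2),
]
print(active_rules(ms, rules, reachable_children={"child1"}))
```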
Resumo:
We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference with other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of some programming language. Using a resource-centric rather than a language-centric approach has benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software design, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and help in the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code) from a teaching perspective. A critical discussion on the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis to show that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
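The course itself targets Java, but as a language-neutral illustration of how a shared-resource model can be turned into code, here is a minimal monitor-style bounded buffer in Python, where the waiting loops mirror the resource's concurrency preconditions (the CPRE notation in the comments is an assumption about the specification style, not taken from the article).

```python
import threading

class BoundedBuffer:
    """Monitor-style implementation of a shared resource specified roughly as:
    CPRE(put): len(buf) < capacity;  CPRE(get): len(buf) > 0.
    The waiting conditions come straight from the resource model, which is
    what allows tests to be generated from the specification, not the code."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:
            while len(self.buf) >= self.capacity:   # wait until CPRE(put) holds
                self.not_full.wait()
            self.buf.append(item)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.buf:                     # wait until CPRE(get) holds
                self.not_empty.wait()
            item = self.buf.pop(0)
            self.not_full.notify()
            return item
```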
Resumo:
In this article, we review recent modifications to Jeffrey Gray's (1973, 1991) reinforcement sensitivity theory (RST) and attempt to draw implications for the psychometric measurement of personality traits. First, we consider Gray and McNaughton's (2000) functional revisions to the biobehavioral systems of RST. Second, we evaluate recent clarifications relating to the interdependent effects that these systems may have on behavior, in addition to or in place of separable effects (e.g., Corr, 2001; Pickering, 1997). Finally, we consider ambiguities regarding the exact trait dimension to which Gray's reward system corresponds. From this review, we suggest that future work is needed to distinguish psychometric measures of (a) fear from anxiety and (b) reward reactivity from trait impulsivity. We also suggest, on the basis of interdependent-system views of RST and associated exploration using formal models, that traits based on RST are likely to have substantial intercorrelations. Finally, we advise that more substantive work is required to define relevant constructs and behaviors in RST before we can be confident in our psychometric measures of them.
Resumo:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
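As a hedged illustration of the idea, and not the protocol proposed in the thesis, here is a toy deadline-aware two-phase-commit coordinator in Python: it collects votes over synchronous request/reply calls and aborts if any vote is negative or misses the deadline. The participant interface (prepare()/decide()) is hypothetical.

```python
import queue
import threading
import time

def coordinator(participants, deadline_s):
    """Toy deadline-aware atomic commit: ask every participant to vote; if any
    vote is NO, missing, or late, abort. Real protocols (and the thesis) must
    also survive coordinator and communication failures."""
    deadline = time.monotonic() + deadline_s
    votes = queue.Queue()

    def ask(p):
        votes.put(p.prepare())          # synchronous request/reply per site

    for p in participants:
        threading.Thread(target=ask, args=(p,), daemon=True).start()

    decision = "COMMIT"
    for _ in participants:
        remaining = deadline - time.monotonic()
        try:
            if remaining <= 0 or votes.get(timeout=remaining) != "YES":
                decision = "ABORT"
                break
        except queue.Empty:             # a vote missed the deadline
            decision = "ABORT"
            break

    for p in participants:
        p.decide(decision)              # second phase: broadcast the outcome
    return decision
```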
Resumo:
Pseudoneglect represents the tendency for healthy individuals to show a slight but consistent bias in favour of stimuli appearing in the left visual field. The bias is often measured using variants of the line bisection task. An accurate model of the functional architecture of the visuospatial attention system must account for this widely observed phenomenon, as well as for modulation of the direction and magnitude of the bias within individuals by a variety of factors relating to the state of the participant and/or stimulus characteristics. To date, the neural correlates of pseudoneglect remain relatively unmapped. In the current thesis, I employed a combination of psychophysical measurements, electroencephalography (EEG) recording and transcranial direct current stimulation (tDCS) in an attempt to probe the neural generator(s) of pseudoneglect. In particular, I wished to utilise and investigate some of the factors known to modulate the bias (including age, time-on-task and the length of the to-be-bisected line) in order to identify neural processes and activity that are necessary and sufficient for the lateralized bias to arise. Across four experiments utilising a computerized version of a perceptual line bisection task, pseudoneglect was consistently observed at baseline in healthy young participants. However, decreased line length (experiments 1, 2 and 3), time-on-task (experiment 1) and healthy aging (experiment 3) were all found to modulate the bias. Specifically, all three modulations induced a rightward shift in subjective midpoint estimation. Additionally, the line length and time-on-task effects (experiment 1) and the line length and aging effects (experiment 3) were found to have additive relationships. In experiment 2, EEG measurements revealed the line length effect to be reflected in neural activity 100 – 200ms post-stimulus onset over source estimated posterior regions of the right hemisphere (RH: temporo-parietal junction (TPJ)). Long lines induced a hemispheric asymmetry in processing (in favour of the RH) during this period that was absent in short lines. In experiment 4, bi-parietal tDCS (Left Anodal/Right Cathodal) induced a polarity-specific rightward shift in bias, highlighting the crucial role played by parietal cortex in the genesis of pseudoneglect. The opposite polarity (Left Cathodal/Right Anodal) did not induce a change in bias. The combined results from the four experiments of the current thesis provide converging evidence as to the crucial role played by the RH in the genesis of pseudoneglect and in the processing of visual input more generally. The reduction in pseudoneglect with decreased line length, increased time-on-task and healthy aging may be explained by a reduction in RH function, and hence contribution to task processing, induced by each of these modulations. I discuss how behavioural and neuroimaging studies of pseudoneglect (and its various modulators) can provide empirical data upon which accurate formal models of visuospatial attention networks may be based and further tested.
Resumo:
Information is often modelled as a set of relevant possibilities, treated as logically possible worlds. However, this has the unintuitive consequence that the logical consequences of an agent's information cannot be informative for that agent. There are many scenarios in which such consequences are clearly informative for the agent in question. Attempts to weaken the logic underlying each possible world are misguided. Instead, I provide a genuinely psychological notion of epistemic possibility and show how it can be captured in a formal model, which I call a fan. I then show how to use fans to build formal models of being informed, as well as knowledge, belief and information update.
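A tiny sketch of the sets-of-worlds picture the abstract criticises (the fan structure itself is richer and is not reconstructed here): with information modelled as a set of possible worlds, every logical consequence of the agent's information automatically counts as information, which is the unintuitive consequence at issue. The propositional encoding below is purely illustrative.

```python
from itertools import product

ATOMS = ("p", "q")

def worlds():
    """All logically possible valuations over the atoms."""
    return [dict(zip(ATOMS, vals))
            for vals in product((True, False), repeat=len(ATOMS))]

def informed(info_worlds, proposition):
    """On the sets-of-worlds picture, an agent is informed that `proposition`
    holds iff it is true in every world compatible with its information."""
    return all(proposition(w) for w in info_worlds)

# The agent's information: "p and q" (only worlds where both hold survive).
info = [w for w in worlds() if w["p"] and w["q"]]

# Any logical consequence of that information, e.g. "p or q", already counts
# as information for the agent and so can never be informative -- the
# consequence a finer-grained model (such as a fan) is meant to avoid.
print(informed(info, lambda w: w["p"] or w["q"]))   # True
```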
Resumo:
Study/Objective: This program of research examines the effectiveness of legal mechanisms as motivators to maximise engagement and compliance with evacuation messages. The study is based on the understanding that the presence of legislative requirements, as well as sanctions and incentives encapsulated in law, can have a positive impact in achieving compliance. Our objective is to examine whether the current Australian legal frameworks, which incorporate evacuation during disasters, provide an effective structure that is properly understood both by those who enforce it and by those who are required to comply. Background: In Australia, most jurisdictions have enacted legislation that encapsulates the power to evacuate and the ability to enforce compliance, either by the use of force or the imposition of a penalty. However, citizens still choose not to evacuate. Methods: This program of research uses theoretical and doctrinal methodologies to review literature and legislation in the Australian context. The aim of the research is to determine whether further clarity is required to create an understanding of the powers to evacuate, as well as greater public awareness of these powers. Results & Conclusion: Legislators suggest that powers of evacuation can be ineffective if they are impractical to enforce. In Australia, there may also be confusion about which legislative instrument the power to evacuate derives from, and therefore whether there is a corresponding ability to enforce compliance through the use of force or the imposition of a penalty. Equally, communities may lack awareness and understanding of the powers of agencies to enforce compliance. We seek to investigate whether this is the case and whether, even if greater awareness existed, it would act as an incentive to comply.
Resumo:
Well planned natural ventilation strategies and systems in the built environment may provide healthy and comfortable indoor conditions, while contributing to a significant reduction in the energy consumed by buildings. Computational Fluid Dynamics (CFD) is particularly suited to modelling indoor conditions in naturally ventilated spaces, which are difficult to predict using other types of building simulation tools. Hence, accurate and reliable CFD models of naturally ventilated indoor spaces are necessary to support the effective design and operation of indoor environments in buildings. This paper presents a formal calibration methodology for the development of CFD models of naturally ventilated indoor environments. The methodology explains how to qualitatively and quantitatively verify and validate CFD models, including a parametric analysis utilising the response surface technique to support a robust calibration process. The proposed methodology is demonstrated on a naturally ventilated study zone in the library building at the National University of Ireland, Galway. The calibration process is supported by on-site measurements performed in a normally operating building. Measured outdoor weather data provided the boundary conditions for the CFD model, while a network of wireless sensors supplied air speeds and air temperatures inside the room for the model calibration. The concepts and techniques developed here will enhance the process of achieving reliable CFD models that represent indoor spaces and will provide new and valuable information for estimating the effect of the boundary conditions on CFD model results in indoor environments.
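As a minimal, hypothetical illustration of the response-surface step in such a calibration (the actual methodology, parameters and measurements in the paper are richer): fit a quadratic surface to a few CFD runs over one boundary-condition parameter, then pick the value that best matches a sensor reading. All numbers below are invented.

```python
import numpy as np

# Hypothetical calibration parameter: inlet air speed (m/s) used as a CFD
# boundary condition, and the room air temperature (deg C) each run predicts
# at a sensor location. In practice these come from a handful of CFD runs.
inlet_speed = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
predicted_t = np.array([24.8, 23.9, 23.1, 22.6, 22.3])
measured_t = 23.4                        # wireless-sensor reading

# Quadratic response surface: T(v) ~ a*v^2 + b*v + c
a, b, c = np.polyfit(inlet_speed, predicted_t, deg=2)

# Evaluate the surface on a fine grid and pick the boundary condition whose
# predicted temperature is closest to the measurement.
grid = np.linspace(inlet_speed.min(), inlet_speed.max(), 201)
surface = a * grid**2 + b * grid + c
best_v = grid[np.argmin(np.abs(surface - measured_t))]
print(f"calibrated inlet speed ~ {best_v:.2f} m/s")
```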
Resumo:
Landscapes of education are a new topic within the debate about adequate and just education and human development for everybody. In particular, children and youths from social classes affected by poverty, a lack of prospects or minimal schooling are a focal group that should be offered new approaches and opportunities for cognitive and social development by way of these landscapes of education. It has become apparent that the traditional school alone does not suffice to meet this need. There is no doubt that competency-based orientation and employability are core areas with the help of which the generation now growing up will manage the start of its professional career. In addition, and by no means less important, development involves individual, social, cultural and societal perspectives that can be combined under the term human development. In this context, the Capability Approach elaborated by Amartya Sen and Martha Nussbaum has developed a more extensive concept of human development and related it to empirical instruments. Using the analytical concepts of individual capabilities and societal opportunities, they shaped a socio-political formula that should be adapted in particular to modern social work. Moreover, the Capability Approach offers a critical foil for the further development and revision of institutionalised approaches to education and human development.