8 results for D72 - Economic Models of Political Processes:
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes across different cortical areas. One of the main neurophysiological questions concerns how these disparate areas are coordinated so as to group together the characteristics of the same object (the binding problem) while keeping segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is so-called “assembly coding”, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different, distributed cortical areas; 2) the recognition of the object is realized by the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (the binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (the segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network devoted to the representation of objects as collections of sensory-motor features is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words.
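The building block of these networks, the Wilson-Cowan oscillator, can be illustrated with a minimal two-unit sketch: each unit is an excitatory/inhibitory pair, and the units are coupled E-to-E. The parameters below are the classic oscillatory regime from Wilson and Cowan (1972); the coupling strength, time grid and initial conditions are illustrative choices, not those of the dissertation's models.

```python
import numpy as np

def sigmoid(x, a, theta):
    # Logistic response function, shifted so that S(0) = 0 (Wilson & Cowan, 1972)
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def simulate(T=100.0, dt=0.01, k=0.0, P=1.25):
    """Euler integration of two excitatory-inhibitory (Wilson-Cowan) units,
    coupled E->E with illustrative strength k."""
    n = int(round(T / dt))
    E = np.zeros((n, 2)); I = np.zeros((n, 2))
    E[0] = [0.1, 0.3]                            # different initial phases
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0       # classic oscillatory regime
    for t in range(n - 1):
        e, i = E[t], I[t]
        coupling = k * e[::-1]                   # excitatory input from the other unit
        dE = -e + (1 - e) * sigmoid(c1 * e - c2 * i + P + coupling, 1.3, 4.0)
        dI = -i + (1 - i) * sigmoid(c3 * e - c4 * i, 2.0, 3.7)
        E[t + 1] = e + dt * dE
        I[t + 1] = i + dt * dI
    return E, I
```

With k = 0 the two units oscillate independently from their different initial phases; a positive k pulls their rhythms together, which is the synchronization mechanism the binding hypothesis relies on.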
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network realizes some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved through the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA) within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons under very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with it deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
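Inverse effectiveness falls out naturally of a sigmoidal input-output nonlinearity of the kind such models rely on. A toy illustration, with a static sigmoid standing in for an SC neuron's response curve (the slope and threshold values are invented, not the model's actual equations):

```python
import math

def sc_response(drive, slope=1.0, theta=3.0):
    # Static sigmoidal nonlinearity standing in for an SC neuron's
    # input-output curve (illustrative slope and threshold)
    return 1.0 / (1.0 + math.exp(-slope * (drive - theta)))

def enhancement(a, v):
    """Percent multisensory enhancement relative to the best unisensory response."""
    best_unisensory = max(sc_response(a), sc_response(v))
    combined = sc_response(a + v)
    return 100.0 * (combined - best_unisensory) / best_unisensory

weak = enhancement(1.0, 1.0)    # two weak stimuli, low on the sigmoid
strong = enhancement(3.0, 3.0)  # two strong stimuli, near saturation
```

Because weak inputs sit on the accelerating part of the sigmoid while strong inputs approach saturation, `weak` exceeds `strong`: the proportional gain from combining stimuli is largest when each stimulus alone is least effective.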
Abstract:
The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to supply a review of the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite sample properties of the proposed estimators are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done in a relevant and prolific field of analysis in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. over the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
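A time-space dynamic model with temporal, spatial and spatiotemporal lags has the form y_t = τ y_{t-1} + ρ W y_t + η W y_{t-1} + β x_t + μ + ε_t. A minimal sketch of how such data can be generated for a Monte Carlo exercise like the one described above; the circular weight matrix, parameter values and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 25, 40
tau, rho, eta, beta = 0.4, 0.3, 0.1, 1.0   # illustrative, stable: tau + rho + eta < 1

# Row-normalised circular spatial weight matrix: each unit's neighbours are
# the units immediately "ahead" and "behind" it
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

# Reduced form: y_t = (I - rho W)^{-1} (tau y_{t-1} + eta W y_{t-1} + beta x_t + mu + eps_t)
A_inv = np.linalg.inv(np.eye(N) - rho * W)
mu = rng.normal(size=N)                      # unit fixed effects
X = rng.normal(size=(T, N))
y = np.zeros((T + 1, N))
for t in range(T):
    shock = rng.normal(scale=0.5, size=N)
    y[t + 1] = A_inv @ (tau * y[t] + eta * W @ y[t] + beta * X[t] + mu + shock)
```

A Monte Carlo comparison of estimators then repeats this draw many times and measures each estimator's bias and dispersion for (τ, ρ, η, β); pushing τ + ρ + η toward one reproduces the quasi-unit root setting mentioned in the abstract.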
Abstract:
This dissertation investigates the notion of equivalence with particular reference to lexical cohesion in the translation of political speeches. Lexical cohesion poses a particular challenge to the translators of political speeches, and preserving lexical cohesion elements, as one of the major elements of cohesion, is thus crucial to translation equivalence. We rely on Halliday’s (1994) classification of lexical cohesion, which comprises repetition, synonymy, antonymy, meronymy and hyponymy. Other traditional models of lexical cohesion are also examined. We include grammatical parallelism for its role in creating textual semantic unity, which is what cohesion is all about. The study sheds light on the function of lexical cohesion elements as rhetorical devices. It also deals with lexical problems resulting from the transfer of lexical cohesion elements from the SL into the TL, a transfer often beset by problems that result from the differences between languages. Three key issues are identified as fundamental to equivalence and lexical cohesion in the translation of political speeches: the sociosemiotic approach, register analysis, and rhetorical and poetic function. The study also investigates the lexical cohesion elements in the translation of political speeches from English into Arabic, Italian and French in relation to ideology and its control, through bias and distortion. The findings are discussed, implications examined and topics for further research suggested.
Abstract:
The goal of this dissertation is to use statistical tools to analyze specific financial risks that played dominant roles in the US financial crisis of 2008-2009. The first risk relates to the level of aggregate stress in the financial markets. I estimate the impact of financial stress on economic activity and monetary policy using structural VAR analysis. The second set of risks concerns the US housing market. There are in fact two prominent risks associated with a US mortgage, as borrowers can either prepay or default on it. I test for the existence of unobservable heterogeneity in the borrower's decision to default or prepay by estimating a multinomial logit model with borrower-specific random coefficients.
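The multinomial logit kernel underlying such a mortgage-termination model can be sketched as follows. The outcomes are continue (base category), prepay and default; the covariates and coefficient values are hypothetical, and the dissertation's borrower-specific random coefficients would add a random draw per borrower on top of this fixed-coefficient core:

```python
import numpy as np

def mnl_probs(x, coefs):
    """Multinomial logit choice probabilities for one borrower-period.
    Outcomes: continue (base, utility normalised to 0), prepay, default."""
    utilities = np.array([0.0] + [c @ x for c in coefs])
    expu = np.exp(utilities - utilities.max())   # numerically stable softmax
    return expu / expu.sum()

# Hypothetical covariates: [interest-rate incentive to refinance, loan-to-value ratio]
coefs = [np.array([2.0, -0.5]),   # prepay: driven by the rate incentive
         np.array([-1.0, 1.5])]   # default: driven by negative equity (high LTV)
p = mnl_probs(np.array([0.5, 0.8]), coefs)       # [P(continue), P(prepay), P(default)]
```

Estimation then maximises the likelihood of the observed termination events over these probabilities; with random coefficients, the likelihood additionally integrates over the borrower-specific coefficient distribution.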
Abstract:
Over the last three decades, international agricultural trade has grown significantly. Technological advances in transportation logistics and storage have created opportunities to ship anything almost anywhere. Bilateral and multilateral trade agreements have also opened new pathways to an increasingly global marketplace. Yet international agricultural trade is often constrained by differences in regulatory regimes. The impact of “regulatory asymmetry” is particularly acute for small and medium-sized enterprises (SMEs) that lack the resources and expertise to operate successfully in markets with substantially different regulatory structures. As governments seek to encourage the development of SMEs, policy makers often confront the critical question of what ultimately motivates SME export behavior. Specifically, there is considerable interest in understanding how SMEs confront the challenges of regulatory asymmetry. Neoclassical models of the firm generally emphasize expected profit maximization under uncertainty; however, these approaches do not adequately explain the entrepreneurial decision under regulatory asymmetry. Behavioral theories of the firm offer a far richer understanding of decision making by taking into account aspirations and adaptive performance in risky environments. This paper develops an analytical framework for the decision making of a single agent. Considering risk, uncertainty and opportunity cost, the analysis focuses on the export behavior of an SME in a situation of regulatory asymmetry. Drawing on the experience of a fruit processor in Muzaffarpur, India, who must consider different regulatory environments when shipping fruit treated with sulfur dioxide, the study dissects the firm-level decision using @Risk, a Monte Carlo computational tool.
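@Risk is a proprietary spreadsheet add-in; the same style of Monte Carlo risk analysis can be sketched in plain Python. Every figure and distribution below is invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical per-shipment economics for the SME (all figures invented)
price_domestic = 100.0                               # certain domestic price
price_export = rng.normal(140.0, 15.0, n)            # uncertain export price
rejection = rng.random(n) < 0.10                     # chance a shipment fails the
                                                     # importer's residue limit
compliance_cost = 12.0                               # testing/treatment to meet the
                                                     # stricter regime
# A rejected shipment earns nothing but still incurs the compliance cost
profit_export = np.where(rejection, 0.0, price_export) - compliance_cost
profit_domestic = np.full(n, price_domestic)

expected_gain = profit_export.mean() - profit_domestic.mean()
prob_worse_off = (profit_export < profit_domestic).mean()
```

The two summary numbers capture the behavioral tension the abstract describes: exporting can raise expected profit while still leaving a non-trivial probability of ending up worse than the safe domestic option, which is exactly where aspirations and risk attitudes enter the decision.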
Abstract:
The simulation of ultrafast photoinduced processes is a fundamental step towards understanding the underlying molecular mechanisms and interpreting or predicting experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing some approximations; to obtain reliable results, however, the need to reduce complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models of photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The ingredients required for a dynamics simulation include potential energy surfaces (PESs) as well as electronic state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures make it possible to obtain solid and extended databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (such as gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations made it possible to elucidate the mechanism and time scale of the internal conversion, reproducing or even predicting new transient experiments.
The general applicability of the developed protocols to systems with different peculiarities, and the possibility of parameterising different types of dynamics on an equal footing (classical vs. purely quantum), prove that the procedures are flexible enough to be tailored to each specific system, and pave the way for exact quantum dynamics with multiple degrees of freedom.
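The core idea, mapping a PES from a database of single-point energies and then retrieving energies and analytic gradients at arbitrary geometries, can be sketched in one dimension: a torsional coordinate with mock energies and a truncated cosine-series fit. Real applications are multidimensional and use ab initio data; everything below is illustrative:

```python
import numpy as np

# Mock "database" of single-point energies along a torsional angle
# (degrees -> eV; the numbers are invented for illustration)
angles = np.linspace(0.0, 180.0, 13)
phi = np.radians(angles)
energies = 0.75 * (1 - np.cos(phi)) + 0.10 * (1 - np.cos(2 * phi))

# Least-squares fit of a truncated cosine series E(phi) = sum_k c_k cos(k phi),
# a natural basis for a periodic, symmetric torsional profile
K = 4
k = np.arange(K + 1)
coeffs, *_ = np.linalg.lstsq(np.cos(np.outer(phi, k)), energies, rcond=None)

def pes(angle_deg):
    """Fitted potential energy at arbitrary torsional angles (eV)."""
    p = np.radians(np.atleast_1d(angle_deg))
    return np.cos(np.outer(p, k)) @ coeffs

def gradient(angle_deg):
    """Analytic derivative dE/dphi of the fitted series (eV per radian)."""
    p = np.radians(np.atleast_1d(angle_deg))
    return -np.sin(np.outer(p, k)) @ (k * coeffs)
```

Having an analytic functional form is what makes trajectory or wavepacket propagation affordable: energies and gradients cost a few array operations instead of a new electronic-structure calculation at every visited geometry.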
Abstract:
With an increasing demand for rural resources and land, new challenges are emerging that affect and restructure the European countryside. While this demand creates opportunities for rural living, it has also opened a discussion on the risks of rural gentrification. The concept of rural gentrification covers the influx of new residents leading to an economic upgrade of an area, making it unaffordable for local inhabitants to stay. Rural gentrification occurs in areas perceived as attractive; paradoxically, in-migrants re-shape the very landscape that surrounds them. Rural gentrification may thus displace not only people but also landscape values. This research therefore aims to understand the twofold role of landscape in rural gentrification theory: as a possible driver attracting residents and as a product shaped by its residents. To understand the decision process of potential gentrifiers, this research compiles a collection of drivers behind in-migration. Moreover, essential indicators of rural gentrification have been collected from previous studies. Yet the available indicators contain no measures for understanding the related landscape changes. To fill this gap, after analysing established landscape assessment methodologies and evaluating their relevance for assessing gentrification, a new Landscape Assessment approach is proposed. This method introduces a novel way to capture landscape change caused by gentrification through historical depth. The measures for studying gentrification were applied on Gotland, Sweden. The study showed a stagnating population while the number of properties increased and housing prices rose. These factors indicate not positive growth but risks of gentrification. The research then applied the proposed Landscape Assessment method to areas exposed to gentrification. Results suggest that landscape change takes place on a local scale and could, over time, endanger key characteristics.
The methodology contributes to a discussion on grasping nuances within the rural context. It has also proven useful for indicating cumulative changes, which is necessary for managing landscape values.
Abstract:
Nowadays, the chemical industry has reached significant goals in producing essential components for human beings. The growing competitiveness of the market has caused an important acceleration in R&D activities, introducing new opportunities and procedures for process improvement and optimization. In this dynamic context, sustainability is becoming one of the key aspects of technological progress, encompassing economic, environmental-protection and safety aspects. With respect to the conceptual definition of sustainability, the literature reports an extensive discussion of strategies, as well as sets of specific principles and guidelines. However, literature procedures are not completely suitable for and applicable to process design activities. Therefore, the development and introduction of sustainability-oriented methodologies is a necessary step to enhance process and plant design. The definition of key drivers as a support system is a focal point for early process design decisions or for the implementation of process modifications. In this context, three different methodologies are developed to support design activities, providing criteria and guidelines in a sustainability perspective. In this framework, a set of Key Performance Indicators is selected and adopted to characterize the environmental, safety, economic and energetic aspects of a reference process. The methodologies are based on heat and material balances, and the level of detail of the input data is compatible with the information available for the specific application. Multiple case studies are defined to prove the effectiveness of the methodologies. The principal application is the polyolefin production lifecycle chain, with particular focus on polymerization technologies. In this context, different design phases are investigated, spanning from early process feasibility studies to the assessment of operations and improvements.
This flexibility allows the methodologies to be applied at any level of design, providing supporting guidelines for design activities, comparing alternative solutions, monitoring operating processes and identifying potential improvements.
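A minimal sketch of the kind of normalised indicator such KPI-based methodologies build on; the indicator names, benchmark values and the simple ratio-and-average aggregation are all illustrative assumptions, not the dissertation's actual formulas:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float            # raw indicator, e.g. from heat/material balances
    reference: float        # benchmark value for the reference process
    lower_is_better: bool = True

def normalised_score(kpi: KPI) -> float:
    """Dimensionless score: 1.0 matches the benchmark, >1 is an improvement.
    A deliberately simple ratio; real methodologies weight and aggregate further."""
    if kpi.lower_is_better:
        return kpi.reference / kpi.value
    return kpi.value / kpi.reference

kpis = [
    KPI("specific energy demand [GJ/t]", 7.2, 8.0),
    KPI("CO2 emissions [t/t product]", 1.9, 1.8),
    KPI("inherent safety index", 42.0, 50.0),
    KPI("energy recovery efficiency [-]", 0.62, 0.55, lower_is_better=False),
]
overall = sum(normalised_score(k) for k in kpis) / len(kpis)
```

Because every score is dimensionless and anchored to the reference process, the same scheme can compare design alternatives at early feasibility stages (coarse balances) and monitor an operating plant (measured data) without changing the framework.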