21 results for Inverse Problem in Optics
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
BTES (borehole thermal energy storage) systems exchange thermal energy by conduction with the surrounding ground through borehole materials. The spatial variability of the geological properties and the space-time variability of hydrogeological conditions affect the real power rate of heat exchangers and, consequently, the amount of energy extracted from / injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to use at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand and only for the test duration. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be taken into account, and a geostatistical approach is needed to tackle natural variability and estimate uncertainty. The approach adopted for reservoir characterization is the "inverse problem", typical of oil & gas field analysis. Similarly, we create different realizations of thermal properties by direct sequential simulation and find the one that best fits real production data (fluid temperature over time). The software used to develop the heat production simulation is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and a real TRT has been tested. We then analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
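To illustrate the selection step described above, here is a minimal Python sketch, assuming each realization's forward run (e.g. a FEFLOW simulation) has already been reduced to a fluid-temperature time series; the function and variable names are illustrative, not the thesis's code.

```python
import numpy as np

def best_realization(t_measured, simulated_curves):
    """Pick the thermal-property realization whose simulated fluid-temperature
    series best fits the measured one, using a root-mean-square misfit.

    t_measured      : 1-D array, measured fluid temperature over time
    simulated_curves: 2-D array, one row per geostatistical realization
    """
    misfits = np.sqrt(np.mean((simulated_curves - t_measured) ** 2, axis=1))
    return int(np.argmin(misfits)), misfits

# Synthetic stand-in: 50 realizations of a warming curve, 200 time steps
rng = np.random.default_rng(0)
measured = 20.0 + 5.0 * (1.0 - np.exp(-np.linspace(0.0, 10.0, 200)))
sims = measured + rng.normal(0.0, 0.5, size=(50, 200))
idx, m = best_realization(measured, sims)
print(f"best-fitting realization: {idx} (RMS misfit {m[idx]:.3f} °C)")
```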
Abstract:
Imaging technologies are widely used in application fields such as natural sciences, engineering, medicine, and life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but fall short in incorporating knowledge directly from data. On the other hand, the more recent learned approaches can easily learn the intricate statistics of images from a large set of data, but lack a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two different reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
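As an illustration of the deep-PnP idea described above, here is a minimal Python sketch of a proximal Gauss-Newton iteration in which a denoiser stands in for the proximal operator of a learned regularizer. It assumes a differentiable forward operator, and a simple smoothing filter replaces the trained graph convolutional denoiser; it is not the thesis's algorithm.

```python
import numpy as np

def pnp_gauss_newton(x0, forward, jacobian, y, denoiser, n_iter=20, step=1.0):
    """Plug-and-play proximal Gauss-Newton sketch: a Gauss-Newton step on the
    data-fidelity term, followed by a denoiser standing in for the proximal
    operator of a learned regularizer."""
    x = x0.copy()
    for _ in range(n_iter):
        J = jacobian(x)                       # Jacobian of the forward operator
        r = forward(x) - y                    # data residual
        # damped normal equations give the Gauss-Newton direction
        dx = np.linalg.solve(J.T @ J + 1e-3 * np.eye(x.size), J.T @ r)
        x = denoiser(x - step * dx)           # learned prior replaces the prox
    return x

# Toy mildly nonlinear problem: recover x from y = Ax + 0.1 (Ax)^2 + noise,
# with a smoothing filter as a stand-in for a trained denoiser.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = np.sin(np.linspace(0.0, 3.0, 20))
y = A @ x_true + 0.1 * (A @ x_true) ** 2 + rng.normal(0.0, 0.01, 40)
forward = lambda x: A @ x + 0.1 * (A @ x) ** 2
jacobian = lambda x: (1.0 + 0.2 * (A @ x))[:, None] * A
smooth = lambda x: np.convolve(x, [0.25, 0.5, 0.25], mode="same")
x_hat = pnp_gauss_newton(np.zeros(20), forward, jacobian, y, smooth)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```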
Abstract:
This thesis proposes a solution for board cutting in the wood industry, with the aim of minimizing raw-material usage and maximizing machine productivity. The problem is dealt with as a Two-Dimensional Cutting Stock Problem, and specific Combinatorial Optimization methods are used to solve it, taking into account the features of the real problem.
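For illustration only, a minimal Python sketch of a simple shelf heuristic for the two-dimensional cutting problem follows; the thesis relies on specific Combinatorial Optimization methods tailored to the real problem, not on this heuristic.

```python
def shelf_ffd(board_w, board_h, pieces):
    """Shelf-based first-fit-decreasing heuristic for the two-dimensional
    cutting stock problem: pieces are sorted by height and packed left to
    right on horizontal shelves; a new board is opened when nothing fits.

    pieces: list of (width, height) tuples. Returns the number of boards used.
    """
    boards = []  # each board is a list of shelves: [used_width, shelf_height]
    for w, h in sorted(pieces, key=lambda p: p[1], reverse=True):
        placed = False
        for shelves in boards:
            for shelf in shelves:                          # try existing shelves
                if shelf[0] + w <= board_w and h <= shelf[1]:
                    shelf[0] += w
                    placed = True
                    break
            if placed:
                break
            if sum(s[1] for s in shelves) + h <= board_h:  # open a new shelf
                shelves.append([w, h])
                placed = True
                break
        if not placed:                                     # open a new board
            boards.append([[w, h]])
    return len(boards)

# Toy instance: four pieces on 100 x 100 boards -> fits on a single board
print(shelf_ffd(100, 100, [(60, 40), (60, 40), (40, 60), (30, 30)]))
```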
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function, and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, ending up with a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density, and regression estimation. Then, I consider the estimation of an Instrumental Regression, which is useful in micro-econometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
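Schematically, for a linear functional equation the regularized posterior mean takes the following Tikhonov form (a sketch in standard notation, assuming a prior mean x_0 and prior covariance operator Ω_0; the thesis's exact construction may differ):

```latex
% Linear inverse problem: Y = Kx + U, with K a compact operator.
% Tikhonov-regularized posterior mean, with regularization parameter
% \alpha_n > 0 shrinking to zero with the sample size:
\hat{x}_{\alpha_n}
  = x_0 + \Omega_0 K^{\ast}
    \bigl(\alpha_n I + K \Omega_0 K^{\ast}\bigr)^{-1}\,(Y - K x_0)
```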
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for the determination of the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for the determination of the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization performed by means of wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet Analysis (or the Wavelet Transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this innovative analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities, and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of applications of wavelets in the geophysical field are the object of study of this work. First we use wavelets as a basis to represent and solve the Tomographic Inverse Problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain Local Correlation Maps between different models at the same depth or between different profiles of the same model.
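As a small illustration of the first type of application (wavelets as a basis for representation), the following Python sketch uses the PyWavelets package to re-express a toy velocity-anomaly map in wavelet coefficients and compress it; the model and threshold are illustrative, not data from the thesis.

```python
import numpy as np
import pywt  # PyWavelets

# Toy "velocity anomaly" map: a smooth trend plus a sharp local feature
model = np.zeros((64, 64))
model[20:28, 30:38] = 1.0                                        # local anomaly
model += 0.1 * np.sin(np.linspace(0, 4 * np.pi, 64))[None, :]    # smooth trend

# 2-D multilevel wavelet decomposition: the model is re-expressed as a
# hierarchy of coarse-to-fine coefficients (the alternative parameterization)
coeffs = pywt.wavedec2(model, wavelet="haar", level=3)

# Keep only the largest 5% of coefficients and reconstruct: localized
# features survive far better than in a truncated Fourier expansion
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < thresh] = 0.0
recon = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet="haar"
)
print("relative reconstruction error:",
      np.linalg.norm(recon - model) / np.linalg.norm(model))
```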
Abstract:
Cardiovascular regulation undergoes wide changes in the different states of the sleep-wake cycle. In particular, the relationship between spontaneous fluctuations in heart period and arterial pressure clearly shows differences between the two sleep states. In non-rapid-eye-movement sleep, heart rhythm is under prevalent baroreflex control, whereas in rapid-eye-movement sleep central autonomic commands prevail (Zoccoli et al., 2001). Moreover, during rapid-eye-movement sleep the cardiovascular variables show wide fluctuations around their mean value. In particular, during rapid-eye-movement sleep, the arterial pressure shows phasic hypertensive events which are superimposed upon the tonic level of arterial pressure. These phasic increases in arterial pressure are accompanied by an increase in heart rate (Sei & Morita, 1996; Silvani et al., 2005). Thus, rapid-eye-movement sleep may represent an "autonomic stress test" for the cardiovascular system, able to unmask pathological patterns of cardiovascular regulation (Verrier et al., 2005), but this hypothesis has never been tested experimentally. The aim of this study was to investigate whether rapid-eye-movement sleep may reveal derangements in central autonomic cardiovascular control in an experimental model of essential hypertension. The study was performed in Spontaneously Hypertensive Rats, which represent the most widely used model of essential hypertension and allow full control of genetic and environmental confounding factors. In particular, we analyzed the cardiovascular, electroencephalogram, and electromyogram changes associated with phasic hypertensive events during rapid-eye-movement sleep in Spontaneously Hypertensive Rats and in their genetic Wistar Kyoto control strain. Moreover, we also studied a group of Spontaneously Hypertensive Rats made phenotypically normotensive by means of chronic treatment with an angiotensin-converting enzyme inhibitor, Enalapril maleate, from the age of four weeks to the end of the experiment. All rats were implanted with electrodes for electroencephalographic and electromyographic recordings and with an arterial catheter for arterial pressure measurement. After six days of postoperative recovery, the rats were studied for five days, at an age of ten weeks. The study indicated that the peak of the mean arterial pressure increase during the phasic hypertensive events in rapid-eye-movement sleep did not differ significantly between Spontaneously Hypertensive Rats and Wistar Kyoto rats, while on the other hand Spontaneously Hypertensive Rats showed a reduced increase in the frequency of theta rhythm and a reduced tachycardia with respect to Wistar Kyoto rats. The same pattern of changes in mean arterial pressure, heart period, and theta frequency was observed between Spontaneously Hypertensive Rats and Spontaneously Hypertensive Rats treated with Enalapril maleate. Spontaneously Hypertensive Rats do not differ from Wistar Kyoto rats only in terms of arterial hypertension, but also due to multiple unknown genetic differences. Spontaneously Hypertensive Rats were developed by selective breeding of Wistar Kyoto rats based only on the level of arterial pressure. However, in this process, multiple genes possibly unrelated to hypertension may have been selected together with the genetic determinants of hypertension (Carley et al., 2000).
This study indicated that Spontaneously Hypertensive Rats differ from Wistar Kyoto rats, but not from Spontaneously Hypertensive Rats treated with Enalapril maleate, in terms of arterial pH and theta frequency. This feature may be due to genetic determinants unrelated to hypertension. In sharp contrast, the persistence of differences in the peak of the heart period decrease and the peak of the theta frequency increase during phasic hypertensive events between Spontaneously Hypertensive Rats and Spontaneously Hypertensive Rats treated with Enalapril maleate demonstrates that the observed reduction in central autonomic control of the cardiovascular system in Spontaneously Hypertensive Rats is not an irreversible consequence of inherited genetic determinants. Rather, the comparison between Spontaneously Hypertensive Rats and Spontaneously Hypertensive Rats treated with Enalapril maleate indicates that the observed differences in central autonomic control are the result of hypertension per se. This work supports the view that the study of cardiovascular regulation in sleep provides fundamental insight into the pathophysiology of hypertension, and may thus contribute to the understanding of this disease, which is a major health problem in European countries (Wolf-Maier et al., 2003), with its burden of cardiac, vascular, and renal complications.
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data retrieval point); the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them suggests a solution to the related problems under this trade-off. The first scenario considers a network with a high number of nodes, deployed over a given geographical area without detailed planning, that have to transmit data towards a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. It is considered that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency in the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach presented. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires using nodes that have the double function of sensors and routers. The first function concerns the monitoring of a temperature parameter that allows a local binary decision on target (fire) absent/present to be taken.
The second function is that each node receives the decision made by the previous node of the chain, compares it with the one derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application has been realized and tested in a six-month on-field experimentation.
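One simple way such a per-node fusion rule could look is sketched below in Python, assuming each node summarizes its temperature observation as a log-likelihood ratio and that the previous node's one-bit decision enters as a weighted prior term; the actual fusion rules studied in the thesis may differ.

```python
import numpy as np

def chain_decision(local_llrs, threshold=0.0, prior_weight=1.0):
    """Serial (chain) distributed detection sketch: each node fuses its own
    local log-likelihood ratio with the one-bit decision received from the
    previous node, then forwards a one-bit decision to the next node."""
    decision = 0  # the first node of the chain has no predecessor
    for llr in local_llrs:
        # the previous decision acts as a biased prior term (+/- prior_weight)
        fused = llr + prior_weight * (1.0 if decision else -1.0)
        decision = int(fused > threshold)
    return decision  # the sink receives the last node's decision

# Toy run: fire present, so local LLRs tend to be positive
rng = np.random.default_rng(1)
llrs = rng.normal(loc=0.8, scale=1.0, size=10)
print("sink decision:", chain_decision(llrs))
```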
Abstract:
CHAPTER 1: FLUID-VISCOUS DAMPERS In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior, and the latest evolution of the technology with which they are equipped. In the second section we report the definitions and guidelines for the design of these devices included in some international codes. In the third section the results of some experimental tests, carried out by various authors, on the response of these devices to external forces are discussed. For this purpose we report some technical data sheets that are usually supplied with the devices now available on the international market. In the third section we also show some analytical models, proposed by various authors, which can efficiently describe the physical behavior of fluid-viscous dampers. In the last section we present some cases of application of these devices to existing structures and to new-construction structures. We also show some cases in which these devices have proved useful for purposes other than the reduction of seismic actions on structures. CHAPTER 2: DESIGN METHODS PROPOSED IN LITERATURE In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of sdf systems to a harmonic external force is studied; in the last part the response to a random external force is discussed. In the first section the equations of motion of an elastic-linear sdf system equipped with a non-linear fluid-viscous damper undergoing a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form. Therefore some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between the non-linear device and an equivalent linear one. Operating in this way it is possible to define an equivalent damping ratio, and the problem becomes linear; the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices, and the second on the equivalence of power consumption. After that, we compare these two techniques by studying the response of an sdf system undergoing a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in an implicit form, dividing, as usual, by the mass of the system. In this way we obtain a reduction in the number of variables by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response is linearly dependent on the amplitude of the external force, and the response depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. In the last section, all these considerations are extended to the case of a random external force. CHAPTER 3: DESIGN METHOD PROPOSED In this chapter the theoretical basis of the proposed design method is introduced.
The need to propose a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some parameters included in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter has been obtained from the definition of the equivalent damping ratio. The implicit form of the equation of motion is written by introducing the parameter ε instead of the equivalent damping ratio. This new implicit equation of motion has no terms affected by the response, so that once ε is known the response can be evaluated directly. In the second section it is discussed how the parameter ε affects some characteristics of the response: drift, velocity, and base shear. All the results described up to this point have been obtained while retaining the non-linearity of the dampers' behavior. In order to obtain a linear formulation of the problem, which can be solved by the well-known methods of structural dynamics, as done before for the iterative methods by introducing the equivalent damping ratio, it is shown how the equivalent damping ratio can be evaluated from the value of ε. Operating in this way, once the parameter ε is known, it is quite easy to estimate the equivalent damping ratio and to proceed with a classic linear analysis. In the last section it is shown how the parameter ε can be taken as a reference for evaluating the convenience of using non-linear dampers instead of linear ones, on the basis of the type of external force and the characteristics of the system. CHAPTER 4: MULTI-DEGREE OF FREEDOM SYSTEMS In this chapter the design methods for an elastic-linear mdf system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, in sdf systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming elastic-linear behavior of the structure. We would like to mention that some adjustment coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the aims of this thesis and we do not go into it further. The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it must necessarily include some iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000. It is reported in the first section and is referred to as the "Iterative Method". Following the guidelines for sdf systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of mdf systems is introduced. Operating in this way, the evaluation of the equivalent damping ratio (ξsd) can be done directly, without implementing iterative processes. This procedure is referred to as the "Direct Method" and is reported in the second section.
In the third section the two methods are analyzed by studying 4 cases of two moment-resisting steel frames undergoing real accelerograms: the response of the system calculated using the two methods is compared with the numerical response obtained with the software SAP2000-NL (a CSI product). In the last section, a procedure is introduced to create spectra of the equivalent damping ratio, as a function of the parameter ε and the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.
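For reference, the setting underlying Chapters 2-3 can be sketched with the standard equations from the fluid-viscous damper literature (the energy-equivalence linearization mentioned above); the notation is illustrative and may differ from the thesis's.

```latex
% Elastic-linear sdf system with a non-linear fluid-viscous damper:
m\ddot{x}(t) + c_{\alpha}\,\mathrm{sgn}\!\bigl(\dot{x}(t)\bigr)\,
  \bigl|\dot{x}(t)\bigr|^{\alpha} + k\,x(t) = F(t),
  \qquad 0 < \alpha \le 1 .

% Energy-equivalence linearization for a harmonic response of amplitude X
% at frequency \omega (dissipated energies per cycle matched):
\xi_{eq} = \frac{\lambda\, c_{\alpha}\, \omega^{\alpha}\, X^{\alpha-1}}
                {2\pi\, m\, \omega_n^{2}},
  \qquad
  \lambda = \frac{2^{\,2+\alpha}\,\Gamma^{2}\!\left(1+\alpha/2\right)}
                 {\Gamma\!\left(2+\alpha\right)} ,
% which for \alpha = 1 and \omega = \omega_n reduces to the familiar
% linear-damper ratio \xi_{eq} = c_1 / (2 m \omega_n).
```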
Abstract:
The study of protein folding is a central problem in the life sciences, leading in recent years to several attempts to improve our knowledge of protein structures. In this thesis this challenging problem is tackled by means of molecular dynamics, chirality, and NMR studies. In the last decades, many algorithms have been designed for protein secondary structure assignment, which reveals the local protein shape adopted by segments of amino acids. In this regard, the use of local chirality for protein secondary structure assignment was demonstrated, also attempting to correlate the propensity of a given amino acid for a particular secondary structure. Protein folding can also be studied by Nuclear Magnetic Resonance (NMR) investigations, finding the average structure adopted by a protein. In this context, the effect of Residual Dipolar Couplings (RDCs) on structure refinement was shown, revealing a strong improvement in structure resolution. A wide portion of this thesis is devoted to the study of the avian prion protein. The prion protein is the main agent responsible for a vast class of neurodegenerative diseases, such as Bovine Spongiform Encephalopathy (BSE), present in mammals but not in avian species; these diseases are caused by the conversion of the cellular prion protein into the pathogenic misfolded isoform, which accumulates in the brain in the form of amyloid plaques. In particular, the N-terminal region, namely the initial part of the protein, is quite different between mammalian and avian species, but both of them contain multimeric sequences called Repeats, octameric in mammals and hexameric in avians. However, such repeat regions show differences in the amino acids they contain; in particular, only avian hexarepeats contain tyrosine residues. The chirality analysis of avian prion protein configurations obtained from molecular dynamics reveals a high stiffness of the avian protein, which tends to preserve its regular secondary structure. This is due to the presence of prolines, histidines, and especially tyrosines, which form a hydrogen bond network in the hexarepeat region that is only possible in the avian protein, thus probably hampering aggregation.
Abstract:
The labyrinthum Capella quoted in the title (from an epistle of Prudentius of Troyes) represents the allegory of the studium of the liberal arts and of the search for knowledge in the early Middle Ages. This is a central problem in early Christianity and, in general, for the whole Western world, concerning the relationship between faith and science. I studied the evolution of this subject from its birth to the Carolingian age, focusing on the figures most relevant for western Europe, such as Saint Augustine (De doctrina christiana), Martianus Capella (De Nuptiis Philologiae et Mercurii), and Iohannes Scotus Eriugena (Annotationes in Marcianum). It clearly emerges that there were two opposing views on this relationship. According to the first, the human being is capable of gaining knowledge of God thanks to his own reason and logical thought processes (through the analysis of nature as a Speculum Dei); according to the second, only faith and grace can give man the possibility of perceiving God, and the Bible is the only book men need to know. From late antiquity to the times of Iohannes Scotus, a few Christian and pagan authors fell into line with the first (Neoplatonic) position: Saint Augustine (in the first part of his life; he later retracted some of his views), Martianus, Calcidius, and Macrobius. Other philosophers were not Neoplatonic but believed in the power of the studium: Boethius, Cassiodorus, Isidore of Seville, Hrabanus Maurus, and Lupus of Ferrières. In order to form an idea of this conception, I finally focused the research on Iohannes Scotus Eriugena's Annotationes in Marcianum. I commented on Eriugena's work phrase by phrase, trying to capture the sense of his words, the references, and the philosophical influences, and to trace its antecedents and its influence on the later Middle Ages and the Chartres school. In this scholastic text Eriugena comments on Capella's work and poses the question of the studium again to his students. Iohannes was a magister in the schola Palatina at the time of Charles the Bald; he knew Saint Augustine's works, as well as those of Boethius, Calcidius, Macrobius, Isidore, and Cassiodorus. He translated Pseudo-Dionysius the Areopagite and Maximus the Confessor. He had a Neoplatonic view of Christianity and tried to reconcile the impossibility of knowing God with man's intellectual capability to catch a glimpse of God through the study of nature. From this point of view, Eriugena's commentary on Martianus Capella is no longer a secondary work. It becomes increasingly important for understanding his research and his mysticism, and for understanding and truly grasping the inner sense of his chief work, the Periphyseon.
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches (Rubin's Potential Outcome Approach: Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004; or Heckman's Selection Model: Heckman, 1979) being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution set. The method is nonparametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead calls for using the existing variability within the data and letting the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on the Rubin Potential Outcome Approach, matching methods, and, briefly, on Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude, and outline our future perspectives.
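As a toy illustration of the inertia idea, the following Python sketch computes a chi-square-based inertia between a single categorical covariate and a binary treatment indicator; the thesis's construction is multivariate (a partial dependence analysis of the whole X-space), so this one-covariate version is only indicative.

```python
import numpy as np
from scipy.stats import chi2_contingency

def inertia_due_to_treatment(x_cat, T):
    """Dependence between a categorical covariate and a binary treatment:
    the chi-square statistic of their contingency table divided by n,
    i.e. the total inertia of correspondence analysis."""
    cats = np.unique(x_cat)
    table = np.array([[np.sum((x_cat == c) & (T == t)) for t in (0, 1)]
                      for c in cats])
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2 / len(T), p

# Toy example: treatment assignment depends on the covariate -> selection bias
rng = np.random.default_rng(2)
X = rng.choice(["a", "b", "c"], size=500, p=[0.5, 0.3, 0.2])
T = (rng.random(500) < np.where(X == "a", 0.7, 0.3)).astype(int)
print("inertia, p-value:", inertia_due_to_treatment(X, T))
```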
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes by different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical, and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language, and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the characteristics of the same object (binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual, and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, like multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B.
Rowland during the 6-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
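As a flavor of the building block used in Part 1, here is a minimal Python sketch of a single Wilson-Cowan oscillator with textbook parameter values in an oscillatory regime; the thesis's models are networks of such units coupled through Gestalt rules, which this sketch does not include.

```python
import numpy as np

def wilson_cowan(n_steps=4000, dt=0.05, P=1.25, Q=0.0):
    """Single Wilson-Cowan oscillator: coupled excitatory (E) and inhibitory
    (I) populations; with these classic parameter values the E-I interaction
    settles on a limit cycle, the basic unit of the object-recognition nets."""
    sig = lambda x, a, th: 1.0 / (1.0 + np.exp(-a * (x - th)))
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0      # coupling coefficients
    aE, thE, aI, thI = 1.3, 4.0, 2.0, 3.7       # sigmoid gains and thresholds
    E, I = 0.1, 0.05
    E_trace = np.empty(n_steps)
    for k in range(n_steps):                     # forward-Euler integration
        dE = -E + (1.0 - E) * sig(c1 * E - c2 * I + P, aE, thE)
        dI = -I + (1.0 - I) * sig(c3 * E - c4 * I + Q, aI, thI)
        E, I = E + dt * dE, I + dt * dI
        E_trace[k] = E
    return E_trace

trace = wilson_cowan()
print("E-population oscillates between",
      round(trace[2000:].min(), 3), "and", round(trace[2000:].max(), 3))
```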
Abstract:
PROBLEM In the last few years farm tourism, or agritourism as it is also referred to, has enjoyed increasing success because of its generally acknowledged role as a promoter of the economic and social development of rural areas. As a consequence, a plethora of studies have been dedicated to this tourist sector, focusing on a variety of issues. Nevertheless, despite the difficulties many farmers have in orienting their business towards potential customers, the contribution of the marketing literature has been moderate. PURPOSE This dissertation builds upon studies which advocate the necessity for farm tourism to innovate according to the increasingly demanding needs of customers. Hence, the purpose of this dissertation is to critically evaluate the level of professionalism reached in the farm tourism market within a marketing approach. METHODOLOGY This dissertation takes a cross-country perspective, examining the marketing of farm tourism in Germany and Italy. The marketing channels of this tourist sector are examined from both the supply and the demand side by means of five exploratory studies. The data collection was conducted between 2006 and 2009 in manifold ways (online surveys, catalogues of industry associations, face-to-face interviews, etc.) according to the research purpose of each study. The data have been analyzed using multivariate statistical analysis. FINDINGS A comprehensive literature review provides the state of the art of the main differences and similarities of farm tourism in the two countries of study. The main findings contained in the empirical chapters provide insights into many aspects of agritourism, including how the expectations of farm operators and customers differ, which development scenarios of farm tourism are more likely to meet individuals' needs, how new technologies can impact the demand for farm tourism, etc. ORIGINALITY/VALUE The value of this study lies in the investigation of the process by which farmers' participation in the development of this sector intersects with consumer consumption patterns. Focusing on this process should allow farm operators, and others including related businesses, to allocate resources more efficiently.
Abstract:
Rita Cannas presents a PhD thesis in Economics (Geo-Economic curriculum) titled "Public Policies for Seasonality in Tourism from a Territorial Perspective. Case Studies in Scotland and Sardinia". The specific area of the research is public policies for counteracting seasonality in tourism in peripheral areas. Seasonality is seen as a problem in terms of social and economic patterns, especially for those local communities situated in peripheral areas. The research explores what public policies have been in place in Scotland and Sardinia over the last 10-5 years, how and for whom they are working, and what kind of results they have produced. The research has empirical and theoretical implications for the study of tourism seasonality. It aims to highlight the local supply patterns of the phenomenon investigated, and to improve knowledge about the strategies and policies that have been adopted in the two territorial contexts (Scotland and Sardinia) for counteracting or modifying seasonality in tourism. The type of subject and the research questions have suggested the adoption of an interpretative theoretical perspective and a qualitative methodological approach, although a set of quantitative secondary data is also required for understanding the main characteristics of tourism and for analyzing the specificity of seasonality. Interviews with key actors of the local systems in Scotland and Sardinia are the method chosen to collect primary data. In total the researcher carried out 20 in-depth interviews. Case studies are chosen both as the unit of analysis and as the research strategy. The main findings of the research show a different and complex scenario regarding the quality and quantity of public policies and strategies in tourism in the two case studies. The role of local resources is quite strategic in delivering tourism services and in counteracting seasonality. Events and festivals are the main demand-side strategies. On the supply side, the principal policies focus on quality of services, technology, high skills, and sustainability. Partnership between the public and private sectors seems to be a fundamental way of working in order to attain changes and outcomes. The research has a strong research design, provides coherent results, and was carried out paying attention to the validation of the whole process.