15 results for Spatial dynamic modeling
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to provide a review of the main tools of spatial econometrics and to present an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. in the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
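As an illustration, a time-space dynamic specification of the kind estimated here can be sketched in standard spatial-panel notation (the symbols below are generic, not necessarily those adopted in the thesis):

```latex
y_t = \tau\, y_{t-1} + \rho\, W y_t + \eta\, W y_{t-1} + X_t \beta + \mu + \varepsilon_t
```

where $y_t$ is the $N \times 1$ vector of the dependent variable (e.g. cropland values) at time $t$, $W$ the spatial weights matrix, $\tau$, $\rho$ and $\eta$ the temporal, spatial and spatiotemporal autoregressive parameters, $\mu$ the vector of fixed effects and $\varepsilon_t$ the disturbance.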
Abstract:
The present study carries out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modeling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advances in technology and modern building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it expresses the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist and are therefore generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of the explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following this methodology is applied to a case study to test its validity. The study area is the New District of Imola, characterized by a prevailing agricultural vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values ranging from 0 to 1 for building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
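As a rough illustration of the presence/absence modeling step, the following sketch fits a binomial GLM with a logit link to sampled points. File and variable names (slope, distance to roads, distance to urban centres) are hypothetical placeholders for the GIS-derived driving forces, not the actual covariates of the study:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical table: one row per sampled location (presence = real building,
# absence = stochastically generated point), with GIS-derived covariates.
data = pd.read_csv("sampled_points.csv")  # columns: presence, slope, dist_road, dist_urban

X = sm.add_constant(data[["slope", "dist_road", "dist_urban"]])
y = data["presence"]  # 1 = building present, 0 = absent

# Generalized linear model, binomial family with logit link (logistic regression)
model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.summary())

# Predicted probability of building occurrence for each cell of a prediction grid
grid = pd.read_csv("prediction_grid.csv")
grid["p_building"] = model.predict(sm.add_constant(grid[["slope", "dist_road", "dist_urban"]]))
```

Mapped back onto the grid, the fitted probabilities would correspond to the 0-1 probability surface described above.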
Abstract:
China is a large country characterized by remarkable growth and distinct regional diversity. Spatial disparity has always been a hot issue, since China has been struggling to follow a balanced growth path while still confronting unprecedented pressures and challenges. To better understand the level of inequality in the spatial distribution of Chinese provinces and municipalities and to estimate the dynamic trajectory of sustainable development in China, I constructed the Composite Index of Regional Development (CIRD) with five sub-dimensions: the Macroeconomic Index (MEI), the Science and Innovation Index (SCI), the Environmental Sustainability Index (ESI), the Human Capital Index (HCI) and the Public Facilities Index (PFI), endeavoring to cover the various fields of regional socioeconomic development. Ranking reports on the five sub-dimensions and on the aggregated CIRD were provided in order to better measure the developmental degree of 31 (or 30) Chinese provinces and municipalities over the 13 years from 1998 to 2010, the time interval of three "Five-Year Plans". Further empirical applications of the CIRD focused on clustering and convergence estimation, attempting to fill the gap in quantifying the developmental levels of comprehensive regional socioeconomics and in estimating the dynamic convergence trajectory of regional sustainable development in the long run. Four geographically oriented clusters were benchmarked on the map on the basis of cluster analysis, and club convergence was observed among the Chinese provinces and municipalities based on stochastic kernel density estimation.
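As a minimal sketch of how sub-indices can be normalized and aggregated into a composite index, the following assumes min-max rescaling and equal weights; both are illustrative choices, not the actual aggregation scheme of the CIRD:

```python
import pandas as pd

# Hypothetical panel: one row per province-year, with the five sub-indices as columns
df = pd.read_csv("province_indicators.csv")  # columns: province, year, MEI, SCI, ESI, HCI, PFI
pillars = ["MEI", "SCI", "ESI", "HCI", "PFI"]

# Min-max normalization of each pillar to [0, 1] (assumed rescaling method)
norm = (df[pillars] - df[pillars].min()) / (df[pillars].max() - df[pillars].min())

# Equal-weight aggregation into the composite index (assumed weights)
df["CIRD"] = norm.mean(axis=1)

# Rank provinces within each year
df["rank"] = df.groupby("year")["CIRD"].rank(ascending=False)
print(df.sort_values(["year", "rank"]).head())
```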
Abstract:
The objective of this dissertation is to develop and test a predictive model for the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as the reference joint, for the reduced number of bones involved and for its simplicity compared with other synovial joints such as the knee or the wrist. Starting from the knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of the joint configurations that maximize the congruence of the surfaces. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain greater strength with less material. Thus, joint congruence maximization is a simple geometric way to capture the idea of joint energy minimization. The results obtained are validated against in vitro measured trajectories. Preliminary comparisons provide strong support for the predictions of the theoretical model.
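A schematic sketch of the congruence-maximization idea is given below: for each imposed flexion angle, the remaining pose parameters are chosen to maximize a congruence measure. The congruence function used here (negative mean squared gap between paired surface points) and the data files are purely illustrative stand-ins for the articular surface models of the thesis:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# Hypothetical paired samplings of the articular surfaces (N x 3 arrays)
tibia_pts = np.load("tibia_surface.npy")
talus_pts = np.load("talus_surface.npy")

def congruence(dof):
    """Illustrative congruence measure: higher when the talar surface, moved by the
    candidate pose (rotation vector + translation), sits closer to the tibial surface."""
    rotvec, trans = dof[:3], dof[3:]
    moved = Rotation.from_rotvec(rotvec).apply(talus_pts) + trans
    gap = np.linalg.norm(moved - tibia_pts, axis=1)
    return -np.mean(gap ** 2)

# Envelope of passive motion: for each imposed flexion angle, maximize congruence
# over the five remaining degrees of freedom.
poses = []
for flexion in np.linspace(-0.3, 0.3, 31):  # [rad], illustrative range
    objective = lambda free: -congruence(np.concatenate(([flexion], free)))
    res = minimize(objective, np.zeros(5), method="Nelder-Mead")
    poses.append(np.concatenate(([flexion], res.x)))
```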
Abstract:
Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process which defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signals. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis we use the continuous wavelet transform (CWT), which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made regarding damping. The fourth chapter again addresses the problem of post-processing the data acquired during a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained with the DWT are compared with those obtained with the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from the environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined, in order to establish what type of model is able to capture the real dynamic behaviour of the bridge more accurately. The sixth chapter outlines the conclusions of the presented research.
These concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms to the identification process in tests with unknown input, and finally the problem of 3D modeling of systems with many degrees of freedom and different types of uncertainty.
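As a point of reference for the classical FFT-based FRF construction mentioned for Chapter 2, the sketch below computes the standard H1 estimate from simultaneous input and response recordings; it does not reproduce the ellipse-based method proposed in the thesis, and file names and settings are illustrative:

```python
import numpy as np
from scipy import signal

fs = 1024.0  # sampling frequency [Hz], illustrative
x = np.load("input_force.npy")  # measured input force
y = np.load("response.npy")     # measured structural response (e.g. acceleration)

# Auto- and cross-spectral densities via Welch averaging
f, Sxx = signal.welch(x, fs=fs, nperseg=2048)
_, Sxy = signal.csd(x, y, fs=fs, nperseg=2048)

# Classical H1 frequency response function estimate
H1 = Sxy / Sxx

# Natural frequencies show up as peaks of |H1|
peaks, _ = signal.find_peaks(np.abs(H1), prominence=np.abs(H1).max() * 0.1)
print("candidate natural frequencies [Hz]:", f[peaks])
```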
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centers and high performance computing systems. Even though the whole of society is waiting to embrace this revolution, there is a downside to the story. Portable devices require batteries to work far from power plugs, and their storage capacity does not scale as the increasing power requirements do. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling during the last decade has produced a dramatic increase in power density, with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Furthermore, all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and the related issues for two corner cases: mobile devices and high performance computing.
Abstract:
This doctoral dissertation presents a new method to assess the influence of clearance in the kinematic pairs on the configuration of planar and spatial mechanisms. The subject has been widely investigated in both past and present scientific literature, and is approached in different ways: a static/kinetostatic way, which looks for the clearance take-up due to the external loads on the mechanism; a probabilistic way, which expresses clearance-due displacements using probability density functions; a dynamic way, which evaluates dynamic effects like the actual forces in the pairs caused by impacts, or the consequent vibrations. This dissertation presents a new method to approach the problem of clearance. The problem is studied from a purely kinematic perspective. With reference to a given mechanism configuration, the pose (position and orientation) error of the mechanism link of interest is expressed as a vector function of the degrees of freedom introduced in each pair by clearance: the presence of clearance in a kinematic pair, in fact, causes the actual pair to have more degrees of freedom than the theoretical clearance-free one. The clearance-due degrees of freedom are bounded by the pair geometry. A proper modelling of clearance-affected pairs allows such bounds to be expressed through analytical functions. It is then possible to study the problem as a maximization problem, in which a continuous function (the pose error of the link of interest) subject to some constraints (the analytical functions bounding the clearance-due degrees of freedom) has to be maximized. Revolute, prismatic, cylindrical, and spherical clearance-affected pairs have been analytically modelled; with reference to mechanisms involving such pairs, the solution to the maximization problem has been obtained in closed form.
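The maximization problem can be sketched numerically as below for a single revolute clearance-affected pair: the pose error is maximized subject to the analytical constraint that the journal centre stays within the radial clearance circle. The pose-error function and the numerical values are placeholders; in the thesis the problem is modelled analytically and solved in closed form:

```python
import numpy as np
from scipy.optimize import minimize

clearance = 0.05e-3  # radial clearance of the revolute pair [m], illustrative
L = 0.2              # length of the link of interest [m], illustrative

def pose_error(dof):
    """Placeholder pose error of the link of interest as a function of the
    clearance-due degrees of freedom (displacements dx, dy of the journal centre)."""
    dx, dy = dof
    return np.hypot(dx, dy) + L * abs(np.arctan2(dy, L + dx))

# Clearance-due dofs are bounded by the pair geometry: dx^2 + dy^2 <= clearance^2
bound = {"type": "ineq", "fun": lambda d: clearance**2 - d[0]**2 - d[1]**2}

res = minimize(lambda d: -pose_error(d), x0=[clearance / 2, 0.0],
               constraints=[bound], method="SLSQP")
print("worst-case pose error [m]:", pose_error(res.x), "at clearance take-up", res.x)
```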
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. By using a database containing GPS measurements of individual paths (position, velocity and covered space at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some empirical statistical laws pointing out "universal" characteristics of human mobility. Developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate the key quantities and cognitive features that rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of the average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
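As a small illustration of the Benford-type analysis mentioned above, the sketch below compares the first-digit frequencies of the times elapsed between successive trips with Benford's law; the data file and units are hypothetical:

```python
import numpy as np

# Hypothetical array of times elapsed between successive trips (e.g. in minutes)
rest_times = np.load("inter_trip_times.npy")
pos = rest_times[rest_times > 0]

# Leading digit of each positive value
first_digit = (pos / 10.0 ** np.floor(np.log10(pos))).astype(int)

observed = np.array([(first_digit == d).mean() for d in range(1, 10)])
benford = np.log10(1 + 1 / np.arange(1, 10))

for d, (o, b) in enumerate(zip(observed, benford), start=1):
    print(f"digit {d}: observed {o:.3f}  Benford {b:.3f}")
```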
Abstract:
Traditionally, the study of internal combustion engine operation has focused on steady-state performance. However, the daily driving schedule of automotive engines is inherently related to unsteady conditions. There are various operating conditions experienced by (diesel) engines that can be classified as transient. Besides the variation of the engine operating point, in terms of engine speed and torque, the warm-up phase can also be considered a transient condition. Chapter 2 deals with this thermal transient condition; more precisely, the main issue is the performance of a Selective Catalytic Reduction (SCR) system during the cold start and warm-up phases of the engine. The aim of the underlying work is to investigate and identify optimal exhaust line heating strategies that provide a fast activation of the catalytic reactions in the SCR. Chapters 3 and 4 focus on the dynamic behavior of the engine under typical driving conditions. The common approach to dynamic optimization involves the solution of a single optimal-control problem. However, this approach requires the availability of models that are valid throughout the whole engine operating range and actuator ranges. In addition, the result of the optimization is meaningful only if the model is very accurate. Chapter 3 proposes a methodology to circumvent these demanding requirements: an iteration between transient measurements, used to refine a purpose-built model, and a dynamic optimization which is constrained to the model validity region. All numerical methods required to implement this procedure are also presented. Chapter 4 proposes an approach to derive a transient feedforward control system in an automated way. It relies on optimal control theory to solve a dynamic optimization problem for fast transients. From the optimal solutions, the relevant information is extracted and stored in maps spanned by the engine speed and the torque gradient.
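As a sketch of this last step, the optimal actuator values extracted from the transient solutions can be stored in a lookup map spanned by engine speed and torque gradient and interpolated at run time; the grids, the chosen actuator (EGR valve position) and the file name are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Axes of the feedforward map (illustrative grids)
speed_grid = np.linspace(1000, 4000, 16)   # engine speed [rpm]
dtorque_grid = np.linspace(0, 500, 11)     # torque gradient [Nm/s]

# Hypothetical actuator set-points extracted from the optimal-control solutions,
# one value per (speed, torque-gradient) grid node, e.g. EGR valve position [%]
egr_map = np.load("optimal_egr_setpoints.npy")  # shape (16, 11)

feedforward = RegularGridInterpolator((speed_grid, dtorque_grid), egr_map,
                                      bounds_error=False, fill_value=None)

# At run time the controller queries the map with the current operating point
u_ff = feedforward([[2200.0, 180.0]])  # set-point for 2200 rpm, 180 Nm/s
print("feedforward EGR set-point:", float(u_ff[0]))
```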
Abstract:
The aim of this thesis is to obtain a better understanding of the mechanical behavior of the active Alto Tiberina normal fault (ATF). Integrating geological, geodetic and seismological data, we perform 2D and 3D quasi-static and dynamic mechanical models to simulate the interseismic phase and the rupture dynamics of the ATF. The effects of the ATF locking depth, synthetic and antithetic fault activity, lithology and realistic fault geometries are taken into account. The 2D and 3D quasi-static model results suggest that the deformation pattern inferred from GPS data is consistent with a very compliant ATF zone (from 5 to 15 km) and Gubbio fault activity. The presence of the ATF compliant zone is a first-order condition for redistributing the stress in the Umbria-Marche region; the stress bipartition between the hanging wall (high values) and the footwall (low values) inferred from the ATF zone activity could explain the microseismicity rates, which are higher in the hanging wall than in the footwall. The interseismic stress build-up is mainly located along the Gubbio fault zone and near ATF patches with higher dip (30°
Abstract:
In recent years, attention to the energy efficiency of historical buildings has grown, as several research projects have taken place across Europe. The effort to combine the need to preserve buildings, their value and their character with the need to reduce energy consumption and improve indoor comfort conditions stimulates a discussion between two points of view that are usually in contradiction: building engineers and conservation institutions. The results are surprising, because common ground is growing while the need to balance the respective requirements remains. From this experience it is clear that many questions still have to be answered by building physicists regarding the correct assessment of the energy consumption of this class of buildings, of the effectiveness of the measures that could be adopted, and much more. This thesis contributes to answering these questions by developing a procedure to analyse historic buildings. The procedure provides a guideline for the energy audit of historical buildings, considering the experimental activities needed to deal with the uncertainty in the estimation of the energy balance. It offers a procedure to simulate the energy balance of the building with a validated dynamic model, including a calibration procedure to increase the accuracy of the model. An approach for designing energy efficiency measures through an optimization that considers different aspects is also presented. The whole process is applied to a real case study to give the reader a practical understanding.
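A schematic sketch of the calibration idea is given below: uncertain parameters of the building model are tuned to minimize the discrepancy between simulated and monitored consumption. The simulate function here is a crude degree-day stand-in for whatever dynamic simulation engine is actually used, and all names and values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Monitored monthly consumption and heating degree-days (illustrative data files)
measured = np.load("monitored_monthly_consumption.npy")  # kWh per month
degree_days = np.load("monthly_degree_days.npy")         # K·day per month

def simulate(params):
    """Stand-in for the validated dynamic model: an overall loss coefficient
    times degree-days plus a constant baseload (purely illustrative)."""
    loss_coeff, baseload = params
    return loss_coeff * degree_days + baseload

def calibration_error(params):
    return np.sqrt(np.mean((simulate(params) - measured) ** 2))  # RMSE model vs. data

result = minimize(calibration_error, x0=[10.0, 500.0],
                  bounds=[(0.0, 100.0), (0.0, 5000.0)], method="L-BFGS-B")
print("calibrated parameters:", result.x)
```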
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications that address the COSP differently are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Rank Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
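In generic notation (not necessarily that of the thesis), the two-part semicontinuous specification described above can be written as:

```latex
\Phi^{-1}\!\bigl(\Pr[\,Y(s) > 0\,]\bigr) = \alpha_0 + \alpha_1 \log R(s) + w_1(s),
\qquad
Y(s) \mid Y(s) > 0 \sim \mathrm{Gamma}\bigl(\nu,\; \nu / \mu(s)\bigr),
\qquad
\log \mu(s) = \beta_0 + \beta_1 \log R(s) + w_2(s),
```

where $Y(s)$ is the hourly rainfall at gauge location $s$, $R(s)$ the (possibly stochastically weighted) radar value associated with $s$, $w_1$ and $w_2$ spatially correlated Gaussian effects, and $\mu(s)$ the mean of the positive amounts.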
Abstract:
This work is focused on the analysis of sea-level change over the last century, based mainly on instrumental observations. Over this period, the individual components of sea-level change are investigated, at both global and regional scales. Some of the geophysical processes responsible for current sea-level change, such as glacial isostatic adjustment and currently melting terrestrial ice sources, have been modeled and compared with observations. A new value of global mean sea-level change based on tide gauge observations has been independently assessed at 1.5 mm/year, using corrections for glacial isostatic adjustment obtained with different models as a criterion for the tide gauge selection. The long-wavelength spatial variability of the main components of sea-level change has been investigated by means of traditional and new spectral methods. The complex non-linear trends and abrupt sea-level variations shown by tide gauge records have been addressed by applying different approaches to regional case studies. The Ensemble Empirical Mode Decomposition technique has been used to analyse tide gauge records from the Adriatic Sea to ascertain the existence of cyclic sea-level variations. An early warning approach has been adopted to detect tipping points in sea-level records of the North East Pacific and their relationship with oceanic modes. Global sea-level projections to the year 2100 have been obtained with a semi-empirical approach based on the artificial neural network method. In addition, a model-based approach has been applied to the case of the Mediterranean Sea, obtaining sea-level projections to the year 2050.
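A minimal sketch of the EEMD step applied to a tide-gauge record, assuming the PyEMD (EMD-signal) Python package is available; the file name, ensemble settings and record are illustrative:

```python
import numpy as np
from PyEMD import EEMD  # provided by the EMD-signal package (assumed available)

# Hypothetical monthly mean sea-level record from an Adriatic tide gauge [mm]
sea_level = np.load("adriatic_tide_gauge.npy")
t = np.arange(sea_level.size, dtype=float)

eemd = EEMD(trials=200, noise_width=0.2)  # ensemble size and added-noise amplitude
imfs = eemd.eemd(sea_level, t)            # intrinsic mode functions, one per row

# Short-period oscillations live in the first IMFs; the last one tracks the long-term trend
print(f"{imfs.shape[0]} IMFs extracted from {sea_level.size} monthly samples")
```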
Abstract:
The research field of my PhD concerns mathematical modeling and numerical simulation applied to the analysis of cardiac electrophysiology at the single-cell level. This is possible thanks to the development of mathematical descriptions of single cellular components: ionic channels, pumps, exchangers and subcellular compartments. Due to the difficulties of in vivo experiments on human cells, most of the measurements are acquired in vitro using animal models (e.g. guinea pig, dog, rabbit). Moreover, to study the cardiac action potential and all its features, it is necessary to acquire more specific knowledge about the single ionic currents that contribute to cardiac activity. Electrophysiological models of the heart have become very accurate in recent years, giving rise to extremely complicated systems of differential equations. Although they describe the behavior of cardiac cells quite well, these models are computationally demanding for numerical simulations and are very difficult to analyze from a mathematical (dynamical-systems) viewpoint. Simplified mathematical models that capture the underlying dynamics to a certain extent are therefore frequently used. The results presented in this thesis confirm that a close integration of computational modeling and experimental recordings in real myocytes, as performed by the dynamic clamp, is a useful tool for enhancing our understanding of the various components of normal cardiac electrophysiology, as well as of arrhythmogenic mechanisms in pathological conditions, especially when fully integrated with experimental data.
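To give a flavour of the simplified models referred to above, the sketch below integrates the classical two-variable FitzHugh-Nagumo reduction of excitable-cell dynamics with scipy; it is a generic textbook model, not one of the detailed ionic models used in the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.7, b=0.8, eps=0.08, I_stim=0.5):
    """Two-variable reduction of excitable-cell dynamics:
    v = fast 'membrane potential' variable, w = slow recovery variable."""
    v, w = y
    dv = v - v**3 / 3 - w + I_stim
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, t_span=(0, 200), y0=[-1.0, 1.0],
                max_step=0.1, dense_output=True)

t = np.linspace(0, 200, 2000)
v = sol.sol(t)[0]  # simulated action-potential-like trace of the fast variable
print("min/max of the fast variable:", v.min(), v.max())
```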
Abstract:
Kinematics is a fundamental tool to infer the dynamical structure of galaxies and to understand their formation and evolution. Spectroscopic observations of gas emission lines are often used to derive rotation curves and velocity dispersions. It is however difficult to disentangle these two quantities in low spatial-resolution data because of beam smearing. In this thesis, we present 3D-Barolo, a new software tool to derive the gas kinematics of disk galaxies from emission-line data-cubes. The code builds tilted-ring models in the 3D observational space and compares them with the actual data-cubes. 3D-Barolo works with data at a wide range of spatial resolutions without being affected by instrumental biases. We use 3D-Barolo to derive rotation curves and velocity dispersions of several galaxies in both the local and the high-redshift Universe. We run our code on HI observations of nearby galaxies and compare our results with traditional 2D approaches. We show that a 3D approach to the derivation of the gas kinematics is to be preferred to a 2D approach whenever a galaxy is resolved with fewer than about 20 elements across the disk. We moreover analyze a sample of galaxies at z~1, observed in the H-alpha line with the KMOS/VLT spectrograph. Our 3D modeling reveals that the kinematics of these high-z systems is comparable to that of local disk galaxies, with steeply rising rotation curves followed by a flat part and H-alpha velocity dispersions of 15-40 km/s over the whole disks. This evidence suggests that disk galaxies were already fully settled about 7-8 billion years ago. In summary, 3D-Barolo is a powerful and robust tool to separate physical and instrumental effects and to derive reliable kinematics. The analysis of large samples of galaxies at different redshifts with 3D-Barolo will provide new insights into how galaxies assemble and evolve throughout cosmic time.
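For reference, the basic tilted-ring relation underlying this kind of modeling (in its standard textbook form; 3D-Barolo builds full model cubes on top of it) reads:

```latex
V_{\mathrm{los}}(x, y) = V_{\mathrm{sys}} + V_{\mathrm{rot}}(R)\,\cos\theta\,\sin i
```

where $\theta$ is the azimuthal angle in the plane of the ring, measured from the major axis, $i$ the ring inclination and $V_{\mathrm{sys}}$ the systemic velocity; in a 3D code each ring is additionally broadened along the spectral axis by the gas velocity dispersion $\sigma(R)$.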