933 results for Hierarchical dynamic models
Abstract:
Although deterministic models of the evolution of mass tourism coastal resorts predict an almost inevitable decline over time, theoretical frameworks of the evolution and restructuring policies of mature destinations should be revised to reflect the complex and dynamic way in which these destinations evolve and interact with the tourism market and the global socio-economic environment. The present study examines Benidorm because its urban and tourism model and large-scale tourism supply and demand make it one of the most distinctive destinations on the Mediterranean coast. The investigation reveals the need to adopt theories and models that are not purely deterministic. The dialectical interplay between external factors and the internal factors inherent in this destination simultaneously reveals a complex and diverse stage of maturity and the ability of destinations to create their own future.
Abstract:
Three sets of laboratory column experimental results concerning the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models utilise the hydrodynamic parameters determined using the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and the instabilities were studied relative to the discretisation. The relative square errors were obtained using different combinations of the spatial and temporal steps: the global error for the total experimental data and the partial error for each element. Good simulations of the three experiments were obtained using the ACUAINTRUSION software with slight variations in the selectivity coefficients for both sediments determined in batch experiments with fresh water. The cation exchange parameters included in ACUAINTRUSION are those reported by the Gapon convention, with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines-Thomas convention were unsatisfactory when the exchange coefficients from the PHREEQC database (or its reported range) were used, while those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, adjusted exchange coefficients were determined to improve the simulation; these are vastly different from the PHREEQC database or batch experiment values, but they are of an order similar to the others determined under dynamic conditions. The two software packages simulated different cation concentrations; this disparity could be attributed to the defined selectivity coefficients, which affect the gypsum equilibrium. Consequently, different calculated sulphate concentrations are obtained with each code, with a smaller mismatch predicted by ACUAINTRUSION. In general, the simulations by ACUAINTRUSION and PHREEQC produced similar results, making predictions consistent with the experimental data. However, the simulated results are not identical to the experimental data; sulphate (total S) is overpredicted by both models, most likely due to such factors as the kinetics of gypsum, possible variations in the exchange coefficients due to salinity and the neglect of other processes.
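For reference, a minimal sketch of a Gapon-convention exchange reaction and selectivity coefficient of the kind referred to above (the Na/Ca pair is shown only as a generic example; the specific species and the modified Ca/Mg exponents used in ACUAINTRUSION are not reproduced here):

```latex
% Generic Gapon-convention exchange (Na/Ca shown as an example):
\mathrm{NaX} + \tfrac{1}{2}\,\mathrm{Ca^{2+}} \;\rightleftharpoons\; \mathrm{Ca_{1/2}X} + \mathrm{Na^{+}},
\qquad
K_{\mathrm{G}} = \frac{[\mathrm{Ca_{1/2}X}]}{[\mathrm{NaX}]}\cdot
\frac{a_{\mathrm{Na^{+}}}}{a_{\mathrm{Ca^{2+}}}^{1/2}}
```

In a modified-exponent variant such as the one described for the Ca/Mg exchange, the fixed 1/2 exponents presumably become fitted parameters.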
Abstract:
The modeling of complex dynamic systems depends on the solution of a system of differential equations. Problems arise because the mathematical expressions of these equations are not known, although sufficient numerical data on the system variables are available. The authors think it is very important to establish a code between the different languages so that information can be encoded and decoded. Coding permits the study of some objects to be reduced to that of others. The mathematical expressions used to model certain variables of the system are complex, so it is convenient to define an alphabet code that determines the correspondence between these equations and words in the alphabet. In this paper the authors introduce the coding and decoding of complex structural systems modeling.
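Purely as an illustration of the alphabet-coding idea sketched above (the token set, alphabet and example equation below are hypothetical, not taken from the paper), such a code can be viewed as an invertible mapping between equation tokens and symbols:

```python
# Hypothetical sketch: an invertible "alphabet code" between equation tokens and symbols.
# Token set, alphabet and the example equation are illustrative assumptions only.

TOKEN_TO_SYMBOL = {
    "dx/dt": "A",
    "x": "B",
    "y": "C",
    "+": "D",
    "-": "E",
    "*": "F",
}
SYMBOL_TO_TOKEN = {s: t for t, s in TOKEN_TO_SYMBOL.items()}

def encode(tokens):
    """Map a tokenised equation to a word over the alphabet."""
    return "".join(TOKEN_TO_SYMBOL[t] for t in tokens)

def decode(word):
    """Recover the token sequence from an encoded word."""
    return [SYMBOL_TO_TOKEN[s] for s in word]

if __name__ == "__main__":
    equation = ["dx/dt", "+", "x", "*", "y"]   # stands for dx/dt + x*y
    word = encode(equation)                    # -> "ADBFC"
    assert decode(word) == equation
    print(word)
```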
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM. This is done by achieving two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective of this thesis is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction for the two longest individual KINARM tasks. Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. All in all, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
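As an illustration of the task-selection strategy described above (task names, scores and the screening threshold are hypothetical; this is not the thesis's actual protocol), an early task's score can gate whether the longer follow-up tasks are administered:

```python
# Hypothetical sketch of hierarchical task selection: a score from an early
# screening task decides whether the remaining (longer) tasks are administered.
# Task names, scores and the threshold are illustrative assumptions only.

from typing import Callable, Dict, List

def hierarchical_assessment(
    run_task: Callable[[str], float],
    screening_task: str,
    follow_up_tasks: List[str],
    impairment_threshold: float,
) -> Dict[str, float]:
    """Run the screening task first; run follow-up tasks only if it suggests impairment."""
    scores = {screening_task: run_task(screening_task)}
    if scores[screening_task] >= impairment_threshold:
        for task in follow_up_tasks:
            scores[task] = run_task(task)
    return scores

if __name__ == "__main__":
    # Fake task runner returning canned "impairment scores" for the demo.
    fake_scores = {"visually-guided-reaching": 1.8,
                   "position-matching": 2.4,
                   "object-hit": 0.9}
    results = hierarchical_assessment(fake_scores.get,
                                      screening_task="visually-guided-reaching",
                                      follow_up_tasks=["position-matching", "object-hit"],
                                      impairment_threshold=1.5)
    print(results)
```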
Abstract:
This paper employs fifteen dynamic macroeconomic models maintained within the European System of Central Banks to assess the size of fiscal multipliers in European countries. Using a set of common simulations, we consider transitory and permanent shocks to government expenditures and different taxes. We investigate how the baseline multipliers change when monetary policy is transitorily constrained by the zero nominal interest rate bound, certain crisis-related structural features of the economy such as the share of liquidity-constrained households change, and the endogenous fiscal rule that ensures fiscal sustainability in the long run is specified in terms of labour income taxes instead of lump-sum taxes.
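For orientation, the standard textbook definitions of the multipliers typically reported in such exercises (the fifteen models may use variants such as present-value multipliers; this is only the generic form, not necessarily the paper's):

```latex
m_{\text{impact}} = \frac{\Delta Y_{t}}{\Delta G_{t}},
\qquad
m_{\text{cumulative}}(k) = \frac{\sum_{j=0}^{k} \Delta Y_{t+j}}{\sum_{j=0}^{k} \Delta G_{t+j}}
```

where \(\Delta Y\) and \(\Delta G\) denote deviations of output and government expenditure from the no-shock baseline.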
Abstract:
This paper considers the role of social model features in the economic performance of Italy and Spain during the run-up to the Eurozone crisis, as well as the consequences of that crisis, in turn, for the two countries' social models. It takes issue with the prevailing view - what I refer to as the "competitiveness thesis" - which attributes the debtor status of the two countries to a lack of competitive capacity rooted in social model features. This competitiveness thesis has been key in justifying the "liberalization plus austerity" measures that European institutions have demanded in return for financial support for Italy and Spain at critical points during the crisis. The paper challenges this prevailing wisdom. First, it reviews the characteristics of the Italian and Spanish social models and their evolution in the period prior to the crisis, revealing a far more complex, dynamic and differentiated picture than is given in the political economy literature. Second, the paper considers various ways in which social model characteristics are said to have contributed to the Eurozone crisis, finding such explanations wanting. Italy and Spain's debtor status was primarily the result of much broader dynamics in the Eurozone, including capital flows from richer to poorer countries that affected economic demand, with social model features playing, at most, an ancillary role. More aggressive reforms responding to EU demands in Spain may have increased the long-term social and economic costs of the crisis, whereas the political stalemate that slowed such reforms in Italy may have paradoxically mitigated these costs. The comparison of the two countries thus suggests that, in the absence of broader macro-institutional reform of the Eurozone, compliance with EU dictates may have had perverse effects.
Abstract:
I will start by discussing some aspects of Kagitcibasi's Theory of Family Change: its current empirical status and, more importantly, its focus on universal human needs and the consequences of this focus. Family Change Theory emphasises the universality of the basic human needs of autonomy and relatedness and, at the cultural level, treats cultural norms and family values as reflecting a culture's capacity for fulfilling its members' respective needs; the theory therefore advocates balanced cultural norms of independence and interdependence. As a normative theory, it postulates the necessity of a synthetic family model of emotional interdependence as an alternative to extreme models of total independence and total interdependence. Generalizing from this, I will sketch a theoretical model in which a dynamic and dialectical process of fit between individual and culture, and between culture and universal human needs and related social practices, is central. I will discuss this model using a recent cross-cultural project on implicit theories of self/world and primary/secondary control orientations as an example. Implications for migrating families and acculturating individuals are also discussed.
Abstract:
Regular vine copulas are multivariate dependence models constructed from pair-copulas (bivariate copulas). In this paper, we allow the dependence parameters of the pair-copulas in a D-vine decomposition to be potentially time-varying, following a nonlinear restricted ARMA(1,m) process, in order to obtain a very flexible dependence model for applications to multivariate financial return data. We investigate the dependence among the broad stock market indexes from Germany (DAX), France (CAC 40), Britain (FTSE 100), the United States (S&P 500) and Brazil (IBOVESPA) both in a crisis and in a non-crisis period. We find evidence of stronger dependence among the indexes in bear markets. Surprisingly, though, the dynamic D-vine copula indicates the occurrence of a sharp decrease in dependence between the indexes FTSE and CAC in the beginning of 2011, and also between CAC and DAX during mid-2011 and in the beginning of 2008, suggesting the absence of contagion in these cases. We also evaluate the dynamic D-vine copula with respect to Value-at-Risk (VaR) forecasting accuracy in crisis periods. The dynamic D-vine outperforms the static D-vine in terms of predictive accuracy for our real data sets.
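A sketch of the kind of evolution equation usually meant by a nonlinear restricted ARMA(1,m) process for a pair-copula parameter (Patton-style dynamics; the exact forcing term g and link function Λ used in the paper are assumptions here):

```latex
\theta_t \;=\; \Lambda\!\Big(\omega + \beta\,\theta_{t-1}
  + \alpha\,\frac{1}{m}\sum_{j=1}^{m} g(u_{t-j}, v_{t-j})\Big)
```

where \((u_t, v_t)\) are the copula data of the pair, \(g\) is a family-specific forcing term (e.g. \(|u-v|\) for many copula families), and \(\Lambda\) is a link function that restricts \(\theta_t\) to its admissible range.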
Abstract:
Substantial retreat or disintegration of numerous ice shelves has been observed on the Antarctic Peninsula. The ice shelf in the Prince Gustav Channel retreated gradually since the late 1980s and broke up in 1995. Tributary glaciers reacted with speed-up, surface lowering and increased ice discharge, consequently contributing to sea level rise. We present a detailed long-term study (1993-2014) of the dynamic response of Sjögren Inlet glaciers to the disintegration of the Prince Gustav Ice Shelf. We analyzed various remote sensing datasets to observe the reactions of the glaciers to the loss of the buttressing ice shelf. A strong increase in ice surface velocities was observed, with maximum flow speeds reaching 2.82±0.48 m/d in 2007 and 1.50±0.32 m/d in 2004 at Sjögren and Boydell glaciers, respectively. Subsequently, the flow velocities decelerated; however, in late 2014 we still measured about twice the values of our first measurements in 1996. The tributary glaciers retreated 61.7±3.1 km² behind the former grounding line of the ice shelf. In regions below 1000 m a.s.l., a mean surface lowering of -68±10 m (-3.1 m/a) was observed in the period 1993-2014. The lowering rate decreased to -2.2 m/a in recent years. Based on the surface lowering rates, geodetic mass balances of the glaciers were derived for different time steps. A high mass loss rate of -1.21±0.36 Gt/a was found in the earliest period (1993-2001). Due to the dynamic adjustment of the glaciers to the new boundary conditions, the ice mass loss decreased to -0.59±0.11 Gt/a in the period 2012-2014, resulting in an average mass loss rate of -0.89±0.16 Gt/a (1993-2014). Including the retreat of the ice front and grounding line, a total mass change of -38.5±7.7 Gt and a contribution to sea level rise of 0.061±0.013 mm were computed. Analysis of the ice flux revealed that available bedrock elevation estimates at Sjögren Inlet are too shallow and are the major source of uncertainty in ice flux computations. This temporally dense time series analysis of Sjögren Inlet glaciers shows that the adjustment of the tributary glaciers to the ice shelf disintegration is still ongoing and provides detailed information on the changes in glacier dynamics.
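For context, the generic form of the geodetic mass balance derived from surface-lowering observations (symbols and the density conversion are generic; this is not necessarily the study's exact processing chain):

```latex
\dot{M} \;=\; \rho \int_{A} \frac{\partial h}{\partial t}\,\mathrm{d}A
```

where \(\partial h/\partial t\) is the observed surface elevation change rate, \(A\) the glacier area and \(\rho\) the density assumed for converting volume change to mass change.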
Abstract:
"This report reproduces a thesis of the same title submitted to the Department of Electrical Engineering, Massachusetts Institute of Technology, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 1970."--p. 2
Abstract:
Context: There is evidence suggesting that the prevalence of disability in late life has declined over time while the prevalence of disabling chronic diseases has increased. The dynamic equilibrium of morbidity hypothesis suggests that these seemingly contradictory trends are due to the attenuation of the morbidity-disability link over time. The aim of this study was to empirically test this assumption. Methods: Data were drawn from three repeated cross-sections of SWEOLD, a population-based survey of Swedish men and women aged 77 and older. Logistic regression models were fitted to assess trends in the prevalence of Activities of Daily Living (ADL) disability, Instrumental ADL (IADL) disability, and selected groups of chronic conditions. The changes in the associations between chronic conditions and disabilities were examined in both multiplicative and additive models. Results: Between 1992 and 2011, the odds of ADL disability significantly declined among women, whereas the odds of IADL disability significantly declined among men. During the same period, the prevalence of most chronic morbidities, including multimorbidity, went up. Significant attenuations of the morbidity-disability associations were found for cardiovascular diseases, metabolic disorders, poor lung function, psychological distress, and multimorbidity. Conclusion: In agreement with the dynamic equilibrium hypothesis, this study concludes that the associations between chronic conditions and disability among Swedish older adults have largely waned over time.
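As an illustration of how such an attenuation test is commonly set up (the variable names and simulated data below are hypothetical, not the SWEOLD variables), a survey-year-by-morbidity interaction in a logistic model captures whether the morbidity-disability association weakens over time:

```python
# Hypothetical sketch: testing attenuation of the morbidity-disability link over time
# with a logistic regression and a year-by-morbidity interaction.
# Variable names and data are illustrative assumptions, not the SWEOLD data set.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "year": rng.choice([1992, 2002, 2011], size=n),   # survey wave
    "cvd": rng.integers(0, 2, size=n),                 # cardiovascular disease (0/1)
    "woman": rng.integers(0, 2, size=n),
})
# Simulated outcome: disability risk rises with CVD, but less so in later waves.
logit = -1.0 + 1.2 * df["cvd"] - 0.03 * (df["year"] - 1992) * df["cvd"]
df["adl_disability"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Multiplicative (odds-ratio) model: the interaction term tests attenuation.
model = smf.logit("adl_disability ~ cvd * I(year - 1992) + woman", data=df).fit(disp=False)
print(model.summary())
```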
Abstract:
The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, because the morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate this inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimates. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration.
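A stripped-down sketch of the mixture idea described above, fitted by EM on log-LOS (two normal components only; the covariates, quasi-likelihood estimation and hospital-level random effects of the proposed hierarchical mixture regression are deliberately omitted):

```python
# Simplified sketch of a two-component mixture for length of stay (LOS) fitted by EM.
# Toy version on log(LOS) with normal components; it omits the covariates and
# hospital-level random effects of the hierarchical mixture regression described above.

import numpy as np

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_two_component(x, n_iter=200, tol=1e-8):
    """Fit a two-component normal mixture (short-stay vs long-stay) to x by EM."""
    pi = 0.5                                            # long-stay proportion
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    loglik_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probability of the "long-stay" component.
        dens = np.stack([
            (1 - pi) * normal_pdf(x, mu[0], sigma[0]),
            pi * normal_pdf(x, mu[1], sigma[1]),
        ])
        resp = dens[1] / dens.sum(axis=0)
        # M-step: update weight, means and standard deviations.
        pi = resp.mean()
        mu[0] = np.average(x, weights=1 - resp)
        mu[1] = np.average(x, weights=resp)
        sigma[0] = np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - resp))
        sigma[1] = np.sqrt(np.average((x - mu[1]) ** 2, weights=resp))
        loglik = np.log(dens.sum(axis=0)).sum()
        if loglik - loglik_old < tol:
            break
        loglik_old = loglik
    return pi, mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulated log-LOS: 80% short stays, 20% long stays.
    log_los = np.concatenate([rng.normal(1.0, 0.3, 800), rng.normal(2.5, 0.4, 200)])
    pi, mu, sigma = em_two_component(log_los)
    print(f"long-stay proportion ~ {pi:.2f}, component means ~ {mu.round(2)}")
```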
Abstract:
The rheology of 10 Australian honeys was investigated at temperatures from -15C to 0C using a strain-controlled rheometer. The honeys exhibited Newtonian behavior irrespective of the temperature and followed the Cox-Merz rule. G/G′ and ω are quadratically related, and the crossover frequencies for the liquid-to-solid transformation and the relaxation times were obtained. The composition of the honeys correlates well ($r^2 > 0.83$) with the viscosity, and with 247 data sets (Australian and Greek honeys) the following equation was obtained: $\mu = 1.41 \times 10^{-17} \exp[-1.20M + 0.01F - 0.0G + (18.6 \times 10^{3}/T)]$. The viscosity of the honeys showed a strong dependence on temperature, and four models were examined to describe this dependence. The models gave good fits ($r^2 > 0.95$), but better fits were obtained for the WLF model using the $T_g$ of the honeys and $\mu_g = 10^{11}$ Pa·s. The WLF model with its universal values poorly predicted the viscosity, and the implications of the measured rheological behaviors of the honeys for their processing and handling are discussed.
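For reference, the WLF (Williams-Landel-Ferry) equation referred to above, written in terms of viscosity with the so-called universal constants (the study fits the honeys' $T_g$ and uses $\mu_g = 10^{11}$ Pa·s; the constants shown are the generic universal values, an assumption here):

```latex
\log_{10}\frac{\mu(T)}{\mu_g} \;=\; \frac{-C_1\,(T - T_g)}{C_2 + (T - T_g)},
\qquad C_1 \approx 17.44,\quad C_2 \approx 51.6\ \mathrm{K}
```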
Abstract:
We examine the event statistics obtained from two different simplified models of earthquake faults. The first model is a reproduction of the Block-Slider model of Carlson et al. (1991), a model often employed in seismicity studies. The second model is an elastodynamic fault model based upon the Lattice Solid Model (LSM) of Mora and Place (1994). We performed simulations in which the fault length was varied in each model and generated synthetic catalogs of event sizes and times. From these catalogs, we constructed interval event size distributions and inter-event time distributions. The larger, localised events in the Block-Slider model displayed the same scaling behaviour as events in the LSM; however, the distribution of inter-event times was markedly different. The analysis of both event size and inter-event time statistics is an effective method for comparative studies of differing simplified models of earthquake faults.
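The scaling of event sizes in such simplified fault models is usually summarised by a Gutenberg-Richter-type frequency-magnitude relation; the generic form is given below only for orientation (the paper's specific distributions and exponents are not reproduced here):

```latex
\log_{10} N(\geq M) \;=\; a - bM
```

where \(N(\geq M)\) is the number of events with magnitude at least \(M\) and \(b\) is the scaling exponent.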
Abstract:
A systematic goal-driven top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated on some simple case studies including a heat exchanger network and a more complex wet granulation process.
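As a rough, generic illustration of hierarchical refinement in a Petri net (uncoloured toy code; it does not reproduce the paper's substitution place-transition pair construct or its coloured-token semantics):

```python
# Toy sketch of a hierarchical Petri net in which one transition is a "substitution"
# transition refined by a subnet. Generic and uncoloured; illustrative only.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Transition:
    name: str
    inputs: List[str]                       # input place names
    outputs: List[str]                      # output place names
    subnet: Optional["PetriNet"] = None     # set => substitution (hierarchical) transition


@dataclass
class PetriNet:
    marking: Dict[str, int]                 # tokens per place
    transitions: List[Transition] = field(default_factory=list)

    def enabled(self, t: Transition) -> bool:
        return all(self.marking.get(p, 0) > 0 for p in t.inputs)

    def fire(self, t: Transition) -> None:
        assert self.enabled(t), f"{t.name} is not enabled"
        for p in t.inputs:
            self.marking[p] -= 1
        if t.subnet is not None:
            # Hierarchical step: run the refining subnet to completion instead of
            # firing the abstract transition atomically.
            t.subnet.run()
        for p in t.outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

    def run(self) -> None:
        progress = True
        while progress:
            progress = False
            for t in self.transitions:
                if self.enabled(t):
                    self.fire(t)
                    progress = True


if __name__ == "__main__":
    # Fine-scale subnet refining an abstract "granulate" step (hypothetical example).
    subnet = PetriNet(marking={"wet_mass": 1},
                      transitions=[Transition("dry", ["wet_mass"], ["granules"])])
    # Coarse-scale net: "granulate" is a substitution transition backed by the subnet.
    top = PetriNet(marking={"raw_material": 1},
                   transitions=[Transition("granulate", ["raw_material"], ["product"],
                                           subnet=subnet)])
    top.run()
    print(top.marking, subnet.marking)
```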