Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically-weighted regression analysis, which accommodates non-stationary coefficient behavior, produced the most accurate results, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism.
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
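As a toy sketch of the technique the abstract identifies (the data, coordinates and bandwidth below are hypothetical, not drawn from the French case study), geographically-weighted regression fits a separate weighted least-squares model at each location, with weights that decay with distance, so that the estimated coefficients are allowed to vary over space:

```python
import math

def local_coefficients(u0, data, bandwidth=0.15):
    # data: list of (u, x, y) with u = location, x = predictor, y = response.
    # Gaussian kernel: observations near u0 dominate the local fit.
    w = [math.exp(-((u - u0) / bandwidth) ** 2) for u, _, _ in data]
    sw = sum(w)
    mx = sum(wi * x for wi, (_, x, _) in zip(w, data)) / sw
    my = sum(wi * y for wi, (_, _, y) in zip(w, data)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, (_, x, y) in zip(w, data)) / sw
    var = sum(wi * (x - mx) ** 2 for wi, (_, x, _) in zip(w, data)) / sw
    slope = cov / var
    return my - slope * mx, slope    # local intercept, local slope

# Synthetic data whose true slope drifts with location: y = (1 + u) * x,
# so the local slope should be near 1 in the "west" and near 2 in the "east".
data = [(k / 20.0, (k % 3) + 1.0, (1 + k / 20.0) * ((k % 3) + 1.0))
        for k in range(21)]
_, slope_west = local_coefficients(0.1, data)
_, slope_east = local_coefficients(0.9, data)
```

A global (ordinary) regression would average these two regimes into a single slope; the spatially varying estimates are what make the coefficient non-stationarity visible.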
Abstract:
This research employs solid-state actuators to delay the flow separation seen in airfoils at low Reynolds numbers. The flow control technique investigated here is aimed at a variable camber airfoil that employs two active surfaces and a single four-bar (box) mechanism as the internal structure. To reduce separation, periodic excitation of the flow around the leading edge of the airfoil is induced by a total of nine piezocomposite actuated clamped-free unimorph benders distributed in the spanwise direction. An electromechanical model is employed to design an actuator capable of high deformations at the desired frequency for lift improvement at post-stall angles. The optimum spanwise distribution of excitation for increasing the lift coefficient is identified experimentally in the wind tunnel. A 3D (non-uniform) excitation distribution achieved higher lift enhancement in the post-stall region with lower power consumption when compared to the 2D (uniform) excitation distribution. A lift coefficient increase of 18.4% is achieved with the identified non-uniform excitation mode at the bender resonance frequency of 125 Hz, a flow velocity of 5 m/s and a reduced frequency of 3.78. The maximum lift coefficient (Clmax) is increased by 5.2% from the baseline. The total power consumption of the flow control technique is 639 mW (RMS).
Abstract:
An updated flow pattern map was developed for CO2 on the basis of the previous Cheng-Ribatski-Wojtan-Thome CO2 flow pattern map [1,2] to extend the flow pattern map to a wider range of conditions. A new annular flow to dryout transition (A-D) and a new dryout to mist flow transition (D-M) were proposed here. In addition, a bubbly flow region which generally occurs at high mass velocities and low vapor qualities was added to the updated flow pattern map. The updated flow pattern map is applicable to a much wider range of conditions: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m² s, heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to +25 °C (reduced pressures from 0.21 to 0.87). The updated flow pattern map was compared to independent experimental data of flow patterns for CO2 in the literature and it predicts the flow patterns well. Then, a database of CO2 two-phase flow pressure drop results from the literature was set up and the database was compared to the leading empirical pressure drop models: the correlations by Chisholm [3], Friedel [4], Gronnerud [5] and Muller-Steinhagen and Heck [6], a modified Chisholm correlation by Yoon et al. [7] and the flow pattern based model of Moreno Quiben and Thome [8-10]. None of these models was able to predict the CO2 pressure drop data well. Therefore, a new flow pattern based phenomenological model of two-phase flow frictional pressure drop for CO2 was developed by modifying the model of Moreno Quiben and Thome using the updated flow pattern map in this study and it predicts the CO2 pressure drop database quite well overall. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow pattern based flow boiling heat transfer model was developed for CO2 using the Cheng-Ribatski-Wojtan-Thome [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: "New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes" [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] flow boiling heat transfer model as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for completeness' sake. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m² s, heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to 25 °C (reduced pressures from 0.21 to 0.87). The updated general flow boiling heat transfer model was compared to a new experimental database which contains 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]) in this study. Good agreement between the predicted and experimental data was found in general, with 71.4% of the entire database and 83.2% of the database without the dryout and mist flow data predicted within ±30%.
However, the predictions for the dryout and mist flow regions were less satisfactory due to the limited number of data points, the higher inaccuracy in such data, scatter in some data sets ranging up to 40%, significant discrepancies from one experimental study to another and the difficulties associated with predicting the inception and completion of dryout around the perimeter of the horizontal tubes. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Accurate price forecasting for agricultural commodities can have significant decision-making implications for suppliers, especially those of biofuels, where the agriculture and energy sectors intersect. Environmental pressures and high oil prices affect demand for biofuels and have reignited the discussion about effects on food prices. Suppliers in the sugar-alcohol sector need to decide the ideal proportion of ethanol and sugar to optimise their financial strategy. Prices can be affected by exogenous factors, such as exchange rates and interest rates, as well as non-observable variables like the convenience yield, which is related to supply shortages. The literature generally uses two approaches: artificial neural networks (ANNs), which are recognised as being at the forefront of exogenous-variable analysis, and stochastic models such as the Kalman filter, which is able to account for non-observable variables. This article proposes a hybrid model for forecasting the prices of agricultural commodities that is built upon both approaches and is applied to forecast the price of sugar. The Kalman filter considers the structure of the stochastic process that describes the evolution of prices. Neural networks accommodate variables that can affect asset prices in indirect, nonlinear ways, which cannot easily be incorporated into traditional econometric models.
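A minimal sketch of the hybrid idea on synthetic data (the local-level filter settings, the linear exogenous stage standing in for the neural network, and all numbers are assumptions, not the article's model): a Kalman filter tracks the latent price, and an exogenous stage is then fitted to the filter's one-step-ahead residuals:

```python
import random

def kalman_local_level(ys, q=0.05, r=0.5):
    """One-step-ahead predictions of a random-walk-plus-noise model."""
    level, P, preds = ys[0], 1.0, []
    for y in ys:
        P += q                      # predict: random-walk state
        preds.append(level)
        K = P / (P + r)             # update with the new observation
        level += K * (y - level)
        P *= (1.0 - K)
    return preds

def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic prices: latent random walk plus an exchange-rate effect (0.8 * fx).
random.seed(1)
fx = [random.uniform(-1.0, 1.0) for _ in range(200)]
level, prices = 10.0, []
for x in fx:
    level += random.gauss(0.0, 0.2)
    prices.append(level + 0.8 * x + random.gauss(0.0, 0.3))

kf = kalman_local_level(prices)
resid = [p - f for p, f in zip(prices, kf)]
a, b = ols(fx, resid)                       # exogenous stage on the residuals
hybrid = [f + a + b * x for f, x in zip(kf, fx)]
mse = lambda pred: sum((yi - pi) ** 2 for yi, pi in zip(prices, pred)) / len(prices)
```

Because the exogenous stage is fitted to exactly the part of the price the filter cannot explain, the hybrid forecast improves on the filter alone whenever the exogenous variable truly carries information.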
Abstract:
This study analyzes data on migrants' remittances using a two-period theory of intergenerational transfers based on an informal, intrafamilial loan arrangement using weak altruism, a behavior between strong altruism and pure self-interest. The model provides an integrated theory of migrants' remittances, human capital investment decisions, and intrafamilial transfers applicable to low-income countries with no official pension schemes and imperfect capital markets. Propositions, derived from the theory, are tested, re-analyzing original survey data on remittances of Pacific island migrants in Sydney. When weak altruism and strong altruism yield opposite predictions, the econometric results tend to confirm the former hypothesis and invalidate the latter.
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
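The augmentation step can be sketched as follows (a minimal two-state example with hypothetical data, not the demand-analysis illustration from the paper): the time-varying constraint c_t' b_t = r_t is stacked under the noisy measurement as an extra, noise-free row of the observation equation, and the ordinary Kalman update then enforces it exactly at every period:

```python
import random

def mat2_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat2_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def kalman_step(beta, P, x, y, c, r, q=0.01, sig2=0.25):
    # Predict: random-walk coefficients with process variance q.
    P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + q]]
    # Augment the observation equation: stack the noiseless constraint
    # row (c, r) under the noisy measurement row (x, y).
    H, z = [list(x), list(c)], [y, r]
    R = [[sig2, 0.0], [0.0, 0.0]]       # zero variance -> hard constraint
    PHt = mat2_mul(P, transpose(H))
    S = mat2_mul(H, PHt)
    S = [[S[i][j] + R[i][j] for j in range(2)] for i in range(2)]
    K = mat2_mul(PHt, mat2_inv(S))
    innov = [z[i] - (H[i][0] * beta[0] + H[i][1] * beta[1]) for i in range(2)]
    beta = [beta[i] + K[i][0] * innov[0] + K[i][1] * innov[1] for i in range(2)]
    KH = mat2_mul(K, H)
    ImKH = [[(1.0 if i == j else 0.0) - KH[i][j] for j in range(2)]
            for i in range(2)]
    return beta, mat2_mul(ImKH, P)

random.seed(0)
beta, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
for t in range(1, 51):
    x = [1.0, random.uniform(-1, 1)]
    c, r = [1.0, 1.0], 1.0 + 0.01 * t                   # time-varying constraint
    true_b = [0.6 + 0.005 * t, r - (0.6 + 0.005 * t)]   # satisfies c'b = r
    y = x[0] * true_b[0] + x[1] * true_b[1] + random.gauss(0, 0.5)
    beta, P = kalman_step(beta, P, x, y, c, r)
    assert abs(beta[0] + beta[1] - r) < 1e-6  # constraint holds every period
```

Restricted least squares could impose a single fixed constraint, but only the filtered state-space form accommodates a different r_t at each t.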
Abstract:
Background/Aims: Hepatocellular carcinoma (HCC) is a well recognized complication of advanced NASH (non-alcoholic steatohepatitis). We sought to produce a rat model of NASH, cirrhosis and HCC. Methods: Adult Sprague-Dawley rats, weighing 250-300 g, were fed a choline-deficient, high trans-fat diet and exposed to DEN in drinking water. After 16 weeks, the animals underwent liver ultrasound (US), sacrifice and assessment by microscopy, immunohistochemistry and transmission electron microscopy (TEM). Results: US revealed steatosis and focal lesions in 6 of 7 animals. All had steatohepatitis, defined as inflammation, advanced fibrosis and ballooning with Mallory-Denk bodies (MDB), with frank cirrhosis in 6. Areas of more severe injury were associated with an anti-CK19-positive ductular reaction. HCC, present in all, were macro-trabecular or solid, with polyhedral cells and foci of steatosis and ballooned cells. CK19 was positive in single or solid nests of oval cells and in neoplastic hepatocytes. TEM showed ballooning with small droplet fat, dilated endoplasmic reticulum and MDB in non-neoplastic hepatocytes, and small droplet steatosis in some cancer cells. Conclusions: This model replicated many features of NASH, including steatohepatitis with ballooning, fibrosis, cirrhosis and hepatocellular carcinoma. Oval cell proliferation was evident, and the presence of anti-CK19 positivity in the cancer suggests an oval cell origin of the malignancy. (C) 2008 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Abstract:
We propose a model for permeation in oxide coated gas barrier films. The model accounts for diffusion through the amorphous oxide lattice, nano-defects within the lattice, and macro-defects. The presence of nano-defects indicates that the oxide layer is more similar to a nano-porous solid (such as a zeolite) than to silica glass with respect to permeation properties. This explains why the permeability of oxide coated polymers is much greater, and the activation energy of permeation much lower, than the values expected for polymers coated with glass. We have used the model to interpret permeabilities and activation energies measured for the inert gases (He, Ne and Ar) in evaporated SiOx films of varying thickness (13-70 nm) coated on a polymer substrate. Atomic force and scanning electron microscopy were used to study the structure of the oxide layer. Although no defects could be detected by microscopy, the permeation data indicate that macro-defects (>1 nm), nano-defects (0.3-0.4 nm) and the lattice interstices (<0.3 nm) all contribute to the total permeation. (C) 2002 Elsevier Science B.V. All rights reserved.
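A sketch of the parallel-pathway accounting implied by the abstract (the pre-exponential factors and activation energies below are illustrative placeholders, not fitted values from the SiOx data): each pathway class contributes an Arrhenius term, and the total permeability is their sum:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical parameters for the three pathway classes named above.
pathways = {
    "macro-defects (>1 nm)":     {"P0": 1.0e-12, "Ea": 5.0e3},
    "nano-defects (0.3-0.4 nm)": {"P0": 5.0e-13, "Ea": 15.0e3},
    "lattice (<0.3 nm)":         {"P0": 2.0e-13, "Ea": 40.0e3},
}

def permeability(T):
    """Total permeability at temperature T (K): sum over parallel pathways."""
    return sum(p["P0"] * math.exp(-p["Ea"] / (R * T)) for p in pathways.values())

def fraction(name, T):
    """Share of the total carried by one pathway at temperature T."""
    p = pathways[name]
    return p["P0"] * math.exp(-p["Ea"] / (R * T)) / permeability(T)
```

With this structure, the low-activation defect pathways dominate at low temperature, which is why a defective oxide shows a much lower effective activation energy than bulk glass.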
Abstract:
In MIMO systems, the antenna array configuration at the BS and MS has a large influence on the available channel capacity. In this paper, we first introduce a new Frequency Selective (FS) MIMO framework for macro-cells in a realistic urban environment. The MIMO channel is built over a previously developed directional channel model, which considers the terrain and clutter information in the cluster, line-of-sight and link loss calculations. Next, MIMO configuration characteristics are investigated in order to maximize capacity, mainly the number of antennas, inter-antenna spacing and the impact of SNR. Channel and capacity simulation results are presented for the city of Lisbon, Portugal, using different antenna configurations. Two power allocation schemes are considered: uniform distribution and FS spatial water-filling. The results suggest optimized MIMO configurations, considering the antenna array size limitations, especially at the MS side.
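The two power-allocation schemes can be compared on a toy set of eigen-channel gains (hypothetical numbers, not the Lisbon simulation data): uniform allocation spreads power evenly, while water-filling pours power onto the strongest eigen-channels of the MIMO channel:

```python
import math

def capacity(gains, powers):
    # Shannon capacity of parallel spatial sub-channels, in bit/s/Hz.
    return sum(math.log2(1.0 + p * g) for p, g in zip(powers, gains))

def water_filling_capacity(gains, total_power):
    g = sorted(gains, reverse=True)
    for k in range(len(g), 0, -1):
        # Water level mu for the k strongest channels.
        mu = (total_power + sum(1.0 / gi for gi in g[:k])) / k
        powers = [mu - 1.0 / gi for gi in g[:k]]
        if powers[-1] >= 0.0:    # weakest active channel still gets power
            return capacity(g[:k], powers)
    return 0.0

gains = [4.0, 1.0, 0.25]               # eigen-channel SNR gains (hypothetical)
uniform = capacity(gains, [1.0] * 3)   # total power 3, spread evenly
wf = water_filling_capacity(gains, 3.0)
```

Water-filling never does worse than uniform allocation, and the gap widens as the eigen-channel gains become more unequal, i.e. at low SNR or with strongly correlated antenna arrays.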
Abstract:
There is no single definition of a long-memory process. Such a process is generally defined as a series whose correlogram decays slowly or whose spectrum is infinite at frequency zero. A series with this property is also said to be characterized by long-range dependence and non-periodic long cycles, or the characteristic is said to describe the correlation structure of a series at long lags, conventionally expressed in terms of a power-law decay of the autocovariance function. The growing interest of international research in this topic is justified by the search for a better understanding of the dynamic nature of financial asset price time series. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and pricing models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about identifying the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether persistent, and therefore inefficient, markets exist that may consequently produce abnormal returns. The aim of this dissertation's research is twofold. On the one hand, it seeks to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices.
On the other hand, it seeks to contribute to improving the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes lack independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of using long-maturity treasury bonds (OTs) as an alternative in computing market returns, given that their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of states and measures how they assess the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are employed, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the complete data series and to compute the "local" Hurst exponent H_t in moving windows. In addition, statistical hypothesis tests are performed using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH). In terms of a single conclusion from all the methods on the nature of dependence for the stock market in general, the empirical results are inconclusive. This means that the degree of long memory, and hence any classification, depends on each particular market.
Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies it refutes the random walk hypothesis with i.i.d. increments, which underlies the weak form of the EMH. Accordingly, contributions to improving the CAPM are proposed through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long lags of stock returns. The exponent H measures the degree of long memory in stock indices both when the return series follow an uncorrelated i.i.d. process described by gBm (where H = 0.5, confirming the EMH and making the CAPM adequate) and when they follow a process with statistical dependence described by fBm (where H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and the FSML is that the long-memory measure, defined by H, is the appropriate reference for translating risk in models that can be applied to data series following i.i.d. processes as well as processes with nonlinear dependence. These formulations thus encompass the EMH as a possible special case.
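A minimal version of the rescaled-range (R/S) estimate of the Hurst exponent H used in the dissertation can be sketched as follows (synthetic i.i.d. returns, for which H should be near 0.5; persistent series give H > 0.5):

```python
import math, random

def rescaled_range(x):
    """R/S statistic of one window: range of the cumulative deviations
    from the mean, divided by the standard deviation."""
    m = sum(x) / len(x)
    cum, cums = 0.0, []
    for xi in x:
        cum += xi - m
        cums.append(cum)
    r = max(cums) - min(cums)
    s = math.sqrt(sum((xi - m) ** 2 for xi in x) / len(x))
    return r / s if s > 0 else 0.0

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128, 256)):
    logs_n, logs_rs = [], []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = [rescaled_range(c) for c in chunks if rescaled_range(c) > 0]
        if rs:
            logs_n.append(math.log(n))
            logs_rs.append(math.log(sum(rs) / len(rs)))
    # OLS slope of log(R/S) on log(n) estimates the Hurst exponent H.
    k = len(logs_n)
    mx, my = sum(logs_n) / k, sum(logs_rs) / k
    return sum((a - mx) * (b - my) for a, b in zip(logs_n, logs_rs)) / \
           sum((a - mx) ** 2 for a in logs_n)

random.seed(42)
returns = [random.gauss(0, 1) for _ in range(1024)]  # i.i.d., no memory
H = hurst_rs(returns)
```

Small-sample bias pushes the classical R/S estimate somewhat above 0.5 even for white noise, which is why the dissertation complements it with DFA and the modified R/S and GPH tests.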
Abstract:
This project focuses on the study of different explanatory models for the behavior of CDS securities, namely the Fixed-Effect Model, the GLS Random-Effect Model, Pooled OLS and the Quantile Regression Model. After determining the best-fitting model, trading strategies with long and short positions in CDS were developed. Due to some specificities of CDS, I conclude that quantile regression is the most suitable model for estimating the data. The P&L and Sharpe Ratio of the strategy are analyzed using a backtesting approach, from which I conclude that, mainly for non-financial companies, the model allows traders to identify and profit from arbitrage opportunities.
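At the core of the selected model is the pinball (check) loss: the tau-quantile is its minimizer, and quantile regression applies the same loss conditionally on covariates. A toy sketch with hypothetical CDS spread values (basis points):

```python
def pinball_loss(sample, q, tau):
    # Asymmetric loss: under-predictions cost tau, over-predictions 1 - tau.
    return sum(tau * (y - q) if y >= q else (1.0 - tau) * (q - y)
               for y in sample)

def empirical_quantile(sample, tau):
    # Minimize the pinball loss over candidate values taken from the sample.
    return min(sample, key=lambda q: pinball_loss(sample, q, tau))

spreads = [12.0, 15.0, 9.0, 30.0, 11.0, 14.0, 50.0, 10.0, 13.0]  # hypothetical
median = empirical_quantile(spreads, 0.5)   # tau = 0.5 recovers the median
q90 = empirical_quantile(spreads, 0.9)      # tau = 0.9 tracks the upper tail
```

In a full quantile regression, the constant q is replaced by a linear function of the covariates and the same loss is minimized over its coefficients, which is what makes the model robust to the fat tails and outliers typical of CDS spreads.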
Abstract:
The structural analysis involves the definition of the model and the selection of the analysis type. The model should represent the stiffness, the mass and the loads of the structure. Structures can be represented using simplified models, such as lumped mass models, or advanced models resorting to the Finite Element Method (FEM) and the Discrete Element Method (DEM). Depending on the characteristics of the structure, different types of analysis can be used, such as limit analysis, linear and non-linear static analysis, and linear and non-linear dynamic analysis. Unreinforced masonry structures present low tensile strength, and linear analyses do not seem adequate for assessing their structural behaviour. On the other hand, static and dynamic non-linear analyses are complex, since they involve large computational time requirements and advanced knowledge on the part of the practitioner: non-linear analysis requires advanced knowledge of material properties, analysis tools and the interpretation of results. Limit analysis with macro-blocks can be regarded as a more practical method for estimating the maximum load capacity of a structure. Furthermore, limit analysis requires a reduced number of parameters, which is an advantage for the assessment of ancient and historical masonry structures, given the difficulty in obtaining reliable data.
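The macro-block approach can be illustrated with the simplest possible mechanism (a generic textbook example, not a case from this study): a free-standing rigid rectangular block pushed horizontally at its centroid overturns about its toe, and equating the overturning moment lambda*W*(h/2) with the stabilizing moment W*(b/2) gives the collapse multiplier lambda = b/h:

```python
def collapse_multiplier(width, height):
    # Kinematic limit analysis of a rigid block rocking about its toe:
    # stabilizing moment W*(b/2) vs overturning moment lambda*W*(h/2),
    # so the collapse multiplier is lambda = b / h (per unit weight).
    return width / height

slender_wall = collapse_multiplier(0.25, 3.0)   # 25 cm thick, 3 m tall
stocky_pier  = collapse_multiplier(1.00, 3.0)   # 1 m thick, 3 m tall
```

Only geometry and self-weight enter the calculation, which is exactly why the method needs so few parameters compared with a non-linear FEM or DEM model.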
Abstract:
This paper reports on one of the first empirical attempts to investigate small firm growth and survival, and their determinants, in the People's Republic of China. The work is based on fieldwork evidence gathered from a sample of 83 Chinese private firms (mainly SMEs), collected initially by face-to-face interviews and subsequently by follow-up telephone interviews a year later. We extend the models of Gibrat (1931) and Jovanovic (1982), which traditionally focus on size and age alone (e.g. Brock and Evans, 1986), to a ‘comprehensive’ growth model with two types of additional explanatory variables: firm-specific (e.g. business planning) and environmental (e.g. choice of location). We estimate two econometric models: a ‘basic’ age-size-growth model and a ‘comprehensive’ growth model, using Heckman’s two-step regression procedure. Estimation is by log-linear regression on cross-section data, with corrections for sample selection bias and heteroskedasticity. Our results refute a pure Gibrat model (but support a more general variant) and support the learning model as regards the consequences of size and age for growth, and our extension to a comprehensive model highlights the importance of location choice and customer orientation for the growth of Chinese private firms. In the latter model, growth is explained by variables such as planning, R&D orientation, market competition and elasticity of demand, as well as by control variables. Our work on small firm growth achieves two things. First, it upholds the validity of ‘basic’ size-age-growth models and successfully applies them to the Chinese economy. Second, it extends the compass of such models to a ‘comprehensive’ growth model incorporating firm-specific and environmental variables.
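The 'basic' size-growth test can be sketched on synthetic firms (generated data, not the 83-firm sample, and without the selection correction): under Gibrat's law the slope of log growth on log initial size is zero, so a clearly negative estimate refutes it in favour of smaller firms growing faster:

```python
import math, random

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

random.seed(7)
log_size = [random.uniform(0.0, math.log(100.0)) for _ in range(83)]
# Data generated with a true slope of -0.1 (growth declines with size),
# i.e. a world in which Gibrat's law fails.
log_growth = [0.3 - 0.1 * s + random.gauss(0.0, 0.05) for s in log_size]
slope = ols_slope(log_size, log_growth)
```

The full paper layers a Heckman first stage on top of this regression so that firms that exited before the follow-up interview do not bias the estimated slope.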