914 results for Three Factor Model
Abstract:
Since H. G. Johnson's work on tariff retaliation (Review of Economic Studies, 1953–54), the questions of whether a country can win a "tariff war," and more broadly of what affects a country's strategic position in setting bilateral tariffs, have been tackled in various settings. Although it is widely accepted that a country has a strategic advantage in winning a tariff war if its relative monopoly power is sufficiently large, it is unclear what forces lie behind the formation of such power. The goal of this research is to provide a unified framework that considers several such forces simultaneously: relative country size, absolute advantage, and relative advantage. In a two-country continuum-of-commodities neoclassical trade model, it is shown that a sufficiently large relative country size is a sufficient condition for a country to prefer the non-cooperative Nash tariff equilibrium over free trade. It is also shown that technology disparities, namely absolute advantage, the rate of technology disparity, and the distribution of the technology disparity, all contribute to a country's strategic position and interact with country size.

The leverage effect is usually invoked to explain the asymmetric volatility of equity returns, yet leverage itself accounts for only part of the asymmetry. This research shows that stock return volatility is related to a firm's financial status: financially constrained firms tend to be more sensitive to return changes. A financial constraint factor explains why some firms tend to be more volatile than others, and it does so independently of other factors such as firm size, industry affiliation, and leverage. Industry affiliation proves very weak at differentiating volatility, whereas firm size is a good factor for distinguishing both the level of volatility and volatility-return sensitivity. The leverage hypothesis is partly corroborated, and the situations in which the leverage effect does not apply are discussed. Finally, I examine the effects of macroeconomic policy on overall market volatility.
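As a rough illustration of the strategic logic in the first essay, the sketch below solves a two-country tariff game by best-response iteration. The quadratic payoff function and the size parameters are hypothetical placeholders, not the dissertation's continuum-of-commodities model.

```python
# Minimal sketch of a two-country tariff game solved by best-response
# iteration. The quadratic payoff below, in which the terms-of-trade gain
# scales with relative country size, is an illustrative placeholder, not
# the continuum-of-commodities model of the dissertation.
from scipy.optimize import minimize_scalar

def welfare(t_own, t_other, size):
    # Hypothetical payoff: size-scaled terms-of-trade gain, shrinking in
    # the rival's tariff, minus the deadweight loss of one's own tariff.
    return size * t_own * (1 - t_other) - t_own ** 2

def best_response(t_other, size):
    res = minimize_scalar(lambda t: -welfare(t, t_other, size),
                          bounds=(0, 1), method="bounded")
    return res.x

t_home, t_foreign = 0.5, 0.5
for _ in range(100):                                # iterate to a fixed point
    t_home = best_response(t_foreign, size=2.0)     # relatively large country
    t_foreign = best_response(t_home, size=0.5)     # relatively small country

print(f"Nash tariffs: home = {t_home:.3f}, foreign = {t_foreign:.3f}")
print("home prefers the tariff Nash equilibrium to free trade:",
      welfare(t_home, t_foreign, 2.0) > welfare(0.0, 0.0, 2.0))
```

Under these placeholder payoffs, the large country's Nash payoff exceeds its free-trade payoff while the small country's does not, mirroring the country-size result stated in the abstract.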
Abstract:
This dissertation consists of three theoretical essays on immigration, international trade, and political economy. The first two essays analyze the political economy of immigration in developed countries. The third essay explores new ground on the effects of labor liberalization in developing countries. Trade economists have witnessed remarkable methodological developments in mathematical and game-theoretic models over the last seventy years. This dissertation benefits from these advances to analyze economic issues related to immigration. The first essay applies a long-run general equilibrium trade model similar to Krugman (1980) and blends it with a median voter framework à la Mayer (1984). The second essay uses a short-run general equilibrium specific-factors trade model similar to Jones (1975) and combines it with a median voter model similar to Benhabib (1997). The third essay employs a five-stage game-theoretic approach similar to Vogel (2007) and solves it by backward induction. The first essay shows that labor liberalization is more likely to come about in societies with a stronger taste for variety, and that workers and capital owners can share the same positive stance toward labor liberalization. In a dynamic model, it demonstrates that the median voter is willing to accept fewer immigrants in the first period in order to preserve her domestic political influence in the second period, which is threatened by the naturalization of these immigrants. The second essay shows that the liberalization of labor depends on the host country's stock and distribution of capital and on the number of groups of skilled workers within each country. I demonstrate that the more types of goods both countries produce, the more liberal the host country is toward immigration. The third essay proposes a theory of free movement of goods and labor between two economies with imperfect labor contracts. The heart of my analysis lies in the determinants of talent development, where individuals' decisions to emigrate are related to the fixed costs of emigration. Finally, free trade and free movement of labor affect income via an indirect effect on individuals' incentives to invest in skills and a direct effect on the prices of goods.
Abstract:
This research is based on the premises that teams can be designed to optimize their performance and that appropriate team coordination is a significant factor in team performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Organizations therefore need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, coordination mechanisms, and the job's structural characteristics. The TCM can be used to determine the team design characteristics most likely to lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and explicit and implicit mechanisms to coordinate the job. Model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results of the ANOVA have been used to recommend the combination of levels of the experimental factors that minimizes completion time for a team that races sailboats. The main contribution of this research to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change while the team executes the job. This research proposed three new types of dependencies between tasks required to model a job as a stochastic structure: the conditional sequential, single-conditional sequential, and merge dependencies.
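A minimal sketch of the 2^(6-1) fractional factorial design mentioned above: six two-level factors in 32 runs, with the sixth column generated as F = ABCDE. The factor labels and the simulated completion-time response are illustrative, not the factors of the TCM sailboat-racing experiment.

```python
# Minimal sketch of a 2^(6-1) fractional factorial design: six two-level
# factors in 32 runs, the sixth column generated as F = ABCDE. The factor
# labels and the simulated completion-time response are illustrative,
# not the factors of the TCM sailboat-racing experiment.
from itertools import product
import numpy as np

base = np.array(list(product([-1, 1], repeat=5)))    # full 2^5 design, 32 runs
F = base.prod(axis=1, keepdims=True)                 # generator: F = ABCDE
design = np.hstack([base, F])                        # 32 x 6 design matrix

rng = np.random.default_rng(0)
# Hypothetical response: completion time driven mainly by factors A and C.
time = 60 - 4 * design[:, 0] + 3 * design[:, 2] + rng.normal(0, 1, 32)

# Main-effect estimate for each factor: mean(high) minus mean(low).
for name, col in zip("ABCDEF", design.T):
    effect = time[col == 1].mean() - time[col == -1].mean()
    print(f"factor {name}: effect on completion time = {effect:+.2f}")
```

The half fraction trades resolution for economy: main effects remain estimable while six factors are screened in 32 rather than 64 runs.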
Abstract:
Organizational researchers have recently taken an interest in the ways in which social movements, non-governmental organizations (NGOs), and other secondary stakeholders attempt to influence corporate behavior. Scholars, however, have yet to carefully probe the link between secondary stakeholder legal action and target firm stock market performance. This is puzzling given the sharp rise in NGO-initiated civil lawsuits against corporations in recent years for alleged overseas human rights abuses and environmental misconduct. Furthermore, few studies have considered how such lawsuits impact a target firm's intangible assets, namely its image and reputation. Structured in the form of three essays, this dissertation examined the antecedents and consequences of secondary stakeholder legal activism in both conceptual and empirical settings.

Essay One argued that conventional approaches to understanding political risk fail to account for the reputational risks to multinational enterprises (MNEs) posed by transnational networks of human rights NGOs employing litigation-based strategies. It offered a new framework for understanding this emerging challenge to multinational corporate activity. Essay Two empirically tested the relationship between the filing of human rights-related civil lawsuits and corporate stock market performance using an event study methodology and regression analysis. The statistical analysis performed showed that target firms experience a significant decline in share price upon filing and that both industry and nature of the lawsuit are significantly and negatively related to shareholder wealth. Essay Three drew upon social movement and social identity theories to develop and test a set of hypotheses on how secondary stakeholder groups select their targets for human rights-related civil lawsuits. The results of a logistic regression model offered support for the proposition that MNE targets are chosen based on both interest and identity factors. The results of these essays suggest that legal action initiated by secondary stakeholder groups is a new and salient threat to multinational business and that firms doing business in countries with weak political institutions should factor this into corporate planning and take steps to mitigate their exposure to such risks.
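A minimal sketch of the event study methodology named in Essay Two, assuming the standard market-model setup: alpha and beta are fit over an estimation window, then abnormal returns are cumulated over an event window around the filing date. All return series below are simulated placeholders, not the dissertation's sample.

```python
# Minimal sketch of a market-model event study around a lawsuit filing
# date: alpha and beta are fit over an estimation window, abnormal returns
# are cumulated over the event window. All return series are simulated
# placeholders, not the dissertation's sample.
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 280)               # daily market returns
firm = 0.0002 + 1.1 * market + rng.normal(0, 0.01, 280)
firm[255:260] -= 0.01                                # hypothetical filing-date drop

est, event = slice(0, 250), slice(253, 262)          # estimation / event windows
beta, alpha = np.polyfit(market[est], firm[est], 1)

resid = firm[est] - (alpha + beta * market[est])     # market-model residuals
abnormal = firm[event] - (alpha + beta * market[event])
car = abnormal.sum()                                 # cumulative abnormal return
t_stat = car / (resid.std(ddof=2) * np.sqrt(abnormal.size))
print(f"CAR = {car:.4f}, t = {t_stat:.2f}")
```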
Abstract:
Exchange-traded funds (ETFs) have increased significantly in popularity since they were first introduced in 1993. However, there is still much that is unknown about ETFs in the extant literature. This dissertation attempts to fill gaps in the ETF literature through three related essays. In these essays, we compare ETFs to closed-end funds (CEFs) by decomposing the bid-ask spread into its three components; we examine the intraday shape of ETF trading, compare it to the intraday shape of equities, and examine co-integration between ETFs on the London Stock Exchange and the New York Stock Exchange; and we examine the differences between leveraged and unleveraged ETFs by analyzing the impact of liquidity and volatility. These three essays are presented in Chapters 1, 2, and 3, respectively.

Chapter one uses the Huang and Stoll (1997) model to decompose the bid-ask spread of CEFs and ETFs for two distinct periods: a normal and a volatile period. We find a higher adverse selection component for CEFs than for ETFs regardless of volatility, although the magnitude of the adverse selection component increases for both ETFs and CEFs in the high-volatility period. Chapter two uses a mix of the Werner and Kleidon (1993) and Hupperets and Menkveld (2002) methods to characterize the intraday shape of ETF trading and to analyze co-integration between London and New York trading. We find two different shapes for New York and London ETFs. There also appears to be evidence of co-integration in the overlapping two-hour trading period but not over the entire trading day for the two locations. The third chapter discusses the new class of ETFs called leveraged ETFs. We examine the liquidity and depth differences between unleveraged and leveraged ETFs at the aggregate level and when the leveraged ETFs are classified by leverage multiples of -3, -2, -1, 2, and 3, both for a normal and a volatile period. We find distinct differences between leveraged and unleveraged ETFs at the aggregate level, with leveraged ETFs having larger spreads than unleveraged ETFs. Furthermore, while both leveraged and unleveraged ETFs have larger spreads in high volatility, the change in magnitude is significantly larger for leveraged than for unleveraged ETFs. Among the multiples, the -2 leveraged ETF is the most pronounced in its liquidity characteristics, even more so in volatile times.
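A minimal sketch of the basic Huang and Stoll (1997) trade-indicator regression used in Chapter one. This simplified two-parameter version recovers the traded spread and the combined adverse-selection and inventory share (separating those two components requires the extended model); the trade data are simulated.

```python
# Minimal sketch of the basic Huang and Stoll (1997) trade-indicator
# regression, dP_t = (S/2)*dQ_t + lam*(S/2)*Q_{t-1} + e_t, where Q is the
# buy/sell indicator, S the traded spread, and lam the combined
# adverse-selection and inventory share. The trade data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n, S_true, lam_true = 5000, 0.10, 0.30
Q = rng.choice([-1, 1], n)                           # trade direction indicator
dQ = np.diff(Q, prepend=Q[0])
Q_lag = np.concatenate(([0], Q[:-1]))
dP = (S_true / 2) * dQ + lam_true * (S_true / 2) * Q_lag \
     + rng.normal(0, 0.005, n)                       # simulated price changes

X = np.column_stack([dQ, Q_lag])
b, *_ = np.linalg.lstsq(X, dP, rcond=None)
S_hat, lam_hat = 2 * b[0], b[1] / b[0]
print(f"estimated spread = {S_hat:.4f}, "
      f"adverse-selection + inventory share = {lam_hat:.2f}")
```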
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. Damage to building structures is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two types of damage, in most cases, cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The limited research and consequent knowledge gaps are in large part due to the complexity of damage phenomena during hurricanes and the lack of established measurement methodologies to quantify rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and measurement of rainwater intrusion, to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate the WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated based on the tropical cyclone WDR study. The simulated WDR was later used to experimentally investigate the mechanisms of rainwater deposition/intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of directly impinging raindrops and surface runoff rainwater over the building surface, the rain admittance factor (RAF) and the surface runoff coefficient (SRC) respectively, was developed using common shapes of low-rise buildings. The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model using experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and the experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
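To illustrate how the two measured parameters could feed an ingress estimate, the sketch below assumes a simple functional form, RAF-scaled direct impingement plus SRC-scaled runoff capture. This form and all numbers are assumptions for illustration, not the WDR estimation model developed in the dissertation.

```python
# Illustrative sketch only: a plausible way the two measured parameters
# could enter a rainwater ingress estimate. The functional form (RAF-scaled
# direct impingement plus SRC-scaled runoff capture) and all numbers are
# assumptions, not the WDR estimation model developed in the dissertation.
def rain_ingress(wdr_flux, raf, src, opening_area, catchment_area):
    """Rainwater ingress rate (L/h) through an envelope breach.

    wdr_flux:       free-field wind-driven-rain flux (L/h per m^2)
    raf:            rain admittance factor at the breach (dimensionless)
    src:            surface runoff coefficient of the wall above the breach
    opening_area:   breach area (m^2)
    catchment_area: wall area draining runoff toward the breach (m^2)
    """
    direct = raf * wdr_flux * opening_area        # directly impinging raindrops
    runoff = src * wdr_flux * catchment_area      # captured surface runoff
    return direct + runoff

# Example: a 0.05 m^2 window-sill crack below a 2 m^2 runoff catchment.
print(f"{rain_ingress(20.0, 0.6, 0.4, 0.05, 2.0):.1f} L/h")
```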
Abstract:
Hydrogeologic variables controlling groundwater exchange with inflow and flow-through lakes were simulated using a three-dimensional numerical model (MODFLOW) to investigate and quantify spatial patterns of lake bed seepage and hydraulic head distributions in the porous medium surrounding the lakes. The total annual inflow and outflow were also calculated as a percentage of lake volume for the flow-through lake simulations. The general exponential decline of seepage rates with distance offshore was best demonstrated at lower anisotropy ratios (i.e., Kh/Kv = 1, 10), with increasing deviation from the exponential pattern as anisotropy was increased to 100 and 1000. 2-D vertical section models constructed for comparison with the 3-D models showed that groundwater heads and seepage rates were higher in the 3-D simulations. Adding low-conductivity lake sediments decreased seepage rates nearshore and increased seepage rates offshore in inflow lakes, and increased the area of groundwater inseepage on the beds of flow-through lakes. Introducing heterogeneity into the medium lowered the water table and decreased seepage rates nearshore, and increased seepage rates offshore in inflow lakes. A laterally restricted aquifer located on the downgradient side of a flow-through lake increased the area of outseepage. Recharge rate, lake depth, and lake bed slope had relatively little effect on the spatial patterns of seepage rates and groundwater exchange with lakes.
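A minimal sketch of the exponential decline of seepage with distance offshore reported at low anisotropy ratios: fitting q(x) = q0 * exp(-a * x) in log space recovers the decay constant. The seepage values below are synthetic placeholders, not MODFLOW output.

```python
# Minimal sketch of the exponential decline of seepage with distance
# offshore seen at low anisotropy ratios: fit q(x) = q0 * exp(-a * x) in
# log space. The seepage values are synthetic, not MODFLOW output.
import numpy as np

x = np.linspace(0, 200, 21)                          # distance offshore (m)
rng = np.random.default_rng(3)
q = 5.0 * np.exp(-0.02 * x) * rng.lognormal(0, 0.05, x.size)  # seepage (cm/day)

a, log_q0 = np.polyfit(x, np.log(q), 1)              # linear fit in log space
print(f"q0 = {np.exp(log_q0):.2f} cm/day, decay constant = {-a:.4f} per m")
```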
Abstract:
High-resolution 230Th-excess (230Thex), 10Be, and biogenic barium profiles were measured in three sediment gravity cores (lengths 605-850 cm) from the Weddell Sea continental margin. Applying the 230Thex dating method, average sedimentation rates of 3 cm/kyr for the two cores from the South Orkney Slope and of 2.4 cm/kyr for the core from the eastern Weddell Sea were determined and compared with δ18O and lithostratigraphic results. Strong variations in the radionuclide concentrations in the sediments, resembling the glacial/interglacial pattern of the δ18O stratigraphy and the 10Be stratigraphy of high northern latitudes, were used to establish a chronostratigraphy. Biogenic Ba shows a pattern similar to the radionuclide profiles, suggesting that both records were influenced by increased paleoproductivity at the beginning of the interglacials. However, initial 230Thex fluxes (230Thex0, where 0 stands for initial) exceeding production by up to a factor of 4 suggest that sediment redistribution processes, linked to variations in bottom water current velocity, played the major role in controlling radionuclide and biogenic barium deposition during isotope stages 5e and 1. Correcting for sediment focusing makes the 'true' vertical paleoproductivity rates, deduced from the fluxes of proxy tracers like biogenic barium, much lower than previously estimated. Very low 230Thex0 concentrations and fluxes during isotope stage 6 were probably caused by rapid deposition of older, resedimented material, delivered to the Weddell Sea continental slopes by the grounded ice shelves, and contemporaneous erosion of particles originating from the water column.
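A minimal sketch of the 230Thex dating method: excess 230Th decays with a half-life of roughly 75.4 kyr, so the slope of ln(activity) versus depth yields the average sedimentation rate. The activity profile below is synthetic, constructed to imply 3 cm/kyr, not the Weddell Sea data.

```python
# Minimal sketch of the 230Th-excess dating method: excess 230Th decays
# with a half-life of roughly 75.4 kyr, so the slope of ln(activity)
# versus depth gives the average sedimentation rate. The profile below is
# synthetic, constructed to imply 3 cm/kyr, not the Weddell Sea data.
import numpy as np

half_life_kyr = 75.4                                 # approximate 230Th half-life
lam = np.log(2) / half_life_kyr                      # decay constant (1/kyr)

depth_cm = np.linspace(0, 600, 7)
activity = 12.0 * np.exp(-lam * depth_cm / 3.0)      # dpm/g, implies 3 cm/kyr

slope, _ = np.polyfit(depth_cm, np.log(activity), 1)
print(f"sedimentation rate = {-lam / slope:.1f} cm/kyr")
```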
Abstract:
Tumor angiogenesis is critical to tumor growth and metastasis, yet much is unknown about the role vascular cells play in the tumor microenvironment. A major outstanding challenge associated with studying tumor angiogenesis is that existing preclinical models are limited in their recapitulation of in vivo cellular organization in 3D. This disparity highlights the need for better approaches to study the dynamic interplay of relevant cells and signaling molecules as they are organized in the tumor microenvironment. In this thesis, we combined 3D culture of lung adenocarcinoma cells with adjacent 3D microvascular cell culture in 2-layer cell-adhesive, proteolytically-degradable poly(ethylene glycol) (PEG)-based hydrogels to study tumor angiogenesis and the impacts of neovascularization on tumor cell behavior.
In initial studies, 344SQ cells, a highly metastatic, murine lung adenocarcinoma cell line, were characterized alone in 3D in PEG hydrogels. 344SQ cells formed spheroids in 3D culture and secreted proangiogenic growth factors into the conditioned media that significantly increased with exposure to transforming growth factor beta 1 (TGF-β1), a potent tumor progression-promoting factor. Vascular cells alone in hydrogels formed tubule networks with localized activated TGF-β1. To study cancer cell-vascular cell interactions, the engineered 2-layer tumor angiogenesis model with 344SQ and vascular cell layers was employed. Large, invasive 344SQ clusters developed at the interface between the layers, and were not evident further from the interface or in control hydrogels without vascular cells. A modified model with spatially restricted 344SQ and vascular cell layers confirmed that observed 344SQ cluster morphological changes required close proximity to vascular cells. Additionally, TGF-β1 inhibition blocked endothelial cell-driven 344SQ migration.
Two other lung adenocarcinoma cell lines were also explored in the tumor angiogenesis model: primary tumor-derived metastasis-incompetent, murine 393P cells and primary tumor-derived metastasis-capable human A549 cells. These lung cancer cells also formed spheroids in 3D culture and secreted proangiogenic growth factors into the conditioned media. Epithelial morphogenesis varied for the primary tumor-derived cell lines compared to 344SQ cells, with far less epithelial organization present in A549 spheroids. Additionally, 344SQ cells secreted the highest concentration of two of the three angiogenic growth factors assessed. This finding correlated with 344SQ cells exhibiting the most pronounced morphological response in the tumor angiogenesis model compared to the 393P and A549 cell lines.
Overall, this dissertation demonstrates the development of a novel 3D tumor angiogenesis model that was used to study vascular cell-cancer cell interactions in lung adenocarcinoma cell lines with varying metastatic capacities. Findings in this thesis have helped to elucidate the role of vascular cells in tumor progression and have identified differences in cancer cell behavior in vitro that correlate to metastatic capacity, thus highlighting the usefulness of this model platform for future discovery of novel tumor angiogenesis and tumor progression-promoting targets.
Abstract:
The need for continuously recording rain gauges makes it difficult to determine the rainfall erosivity factor (R-factor) of the (R)USLE model in areas without good temporal data coverage. In mainland Spain, the Nature Conservation Institute (ICONA) determined the R-factor at only a few selected pluviographs, so simple estimates of the R-factor are of great interest. The objectives of this study were: (1) to identify a readily available estimate of the R-factor for mainland Spain; (2) to discuss the applicability of a single (global) estimate based on analysis of regional results; (3) to evaluate the effect of record length on estimate precision and accuracy; and (4) to validate an available regression model developed by ICONA. Four estimators based on monthly precipitation were computed at 74 rainfall stations throughout mainland Spain. The regression analysis conducted at the global level clearly showed that the modified Fournier index (MFI) ranked first among all assessed indexes. The applicability of this preliminary global model across mainland Spain was evaluated by analyzing regression results obtained at the regional level. It was found that three contiguous regions of eastern Spain (Catalonia, the Valencian Community, and Murcia) could have a different rainfall erosivity pattern, so a new regression analysis was conducted by dividing mainland Spain into two areas: eastern Spain and the plateau-lowland area. A comparative analysis concluded that the bi-areal regression model based on the MFI for a 10-year record length provides a simple, precise, and accurate estimate of the R-factor in mainland Spain. Finally, validation of the regression model proposed by ICONA showed that the R-ICONA index overpredicted the R-factor by approximately 19%.
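A minimal sketch of the modified Fournier index: MFI = sum of squared monthly totals divided by the annual total (in the study this is computed from a multi-year record). The linear coefficients mapping MFI to the R-factor below are placeholders, not the values fitted for Spain.

```python
# Minimal sketch of the modified Fournier index: MFI = sum(p_i^2) / P over
# the twelve monthly totals p_i, with P the annual total (the study uses
# multi-year records). The linear coefficients mapping MFI to the R-factor
# are placeholders, not the values fitted for Spain.
def mfi(monthly_mm):
    assert len(monthly_mm) == 12
    return sum(p * p for p in monthly_mm) / sum(monthly_mm)

monthly = [30, 25, 40, 45, 50, 20, 5, 10, 55, 70, 60, 40]   # mm, one year
index = mfi(monthly)
r_factor = 1.5 * index - 10        # hypothetical fit: R = a * MFI + b
print(f"MFI = {index:.1f}, estimated R-factor = {r_factor:.1f}")
```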
Abstract:
We consider how three firms compete in a Salop location model and how cooperation in location choice by two of these firms affects the outcomes. We consider the classical case of linear transportation costs as a two-stage game, in which the firms first select a location on a unit circle along which consumers are dispersed evenly, followed by the competitive selection of a price. Standard analysis restricts itself to purely competitive selection of locations; instead, we focus on the situation in which two firms decide collectively about their locations, but price their products competitively after the location choice has been effectuated. We show that such partial coordination of locations is beneficial to all firms, since it significantly reduces the number of equilibria and, thereby, the resulting coordination problem. Subsequently, we show that the case of quadratic transportation costs changes the main conclusions only marginally.
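For the purely competitive benchmark with equidistant firms, a minimal sketch of the pricing stage: iterating each firm's best response, derived under linear transport cost t and marginal cost c, converges to the textbook symmetric price p* = c + t/n. The paper's coordination analysis changes the location stage, not this benchmark formula.

```python
# Minimal sketch of the pricing stage for three equidistant firms on the
# unit circle with linear transport cost t and marginal cost c: iterating
# the best response p_i = (2t/n + p_left + p_right + 2c) / 4 converges to
# the textbook symmetric price p* = c + t/n.
n, t, c = 3, 1.0, 0.5
p = [1.0, 2.0, 0.6]                    # arbitrary starting prices

for _ in range(200):                   # simultaneous best-response updates
    p = [(2 * t / n + p[(i - 1) % n] + p[(i + 1) % n] + 2 * c) / 4
         for i in range(n)]

print([round(x, 4) for x in p], "vs p* = c + t/n =", round(c + t / n, 4))
```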
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Computer modeling has recently grown into an inseparable complement to experimental studies for the optimization of automotive engines and the development of future fuels. Traditionally, computer models rely on simplified global reaction steps to simulate combustion and pollutant formation inside the internal combustion engine. With the current interest in advanced combustion modes and injection strategies, this approach depends on arbitrary adjustment of model parameters, which reduces the credibility of the predictions. The purpose of this study is to enhance the combustion model of KIVA, a computational fluid dynamics code, by coupling its fluid mechanics solution with detailed kinetic reactions solved by the chemistry solver CHEMKIN. To this end, an engine-friendly reaction mechanism for n-heptane was selected to simulate diesel oxidation. Each cell in the computational domain is treated as a perfectly-stirred reactor that undergoes adiabatic constant-volume combustion. The model was applied to ideally-prepared homogeneous-charge compression-ignition (HCCI) combustion and direct injection (DI) diesel combustion. Ignition and combustion results show that the code successfully simulates the premixed HCCI scenario when compared to traditional combustion models. Direct injection cases, on the other hand, do not offer a reliable prediction, mainly due to the lack of a turbulent-mixing model, inherent in the perfectly-stirred reactor formulation. In addition, the model is sensitive to intake conditions and experimental uncertainties, which calls for enhanced predictive tools. It is recommended that future improvements consider turbulent-mixing effects as well as optimization techniques to accurately simulate the actual in-cylinder process at reduced computational cost. Furthermore, the model requires the extension of existing fuel oxidation mechanisms to include pollutant formation kinetics for emission control studies.
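A minimal sketch of the per-cell treatment described above, using Cantera in place of the KIVA/CHEMKIN coupling: a single adiabatic constant-volume perfectly-stirred reactor integrated with detailed n-heptane kinetics. The mechanism file name "nheptane.yaml" and the species name "nc7h16" are placeholders for whichever n-heptane mechanism is actually available.

```python
# Minimal sketch of the per-cell treatment: one adiabatic constant-volume
# perfectly-stirred reactor integrated with detailed kinetics. This uses
# Cantera rather than the KIVA/CHEMKIN coupling itself; the mechanism file
# "nheptane.yaml" and species name "nc7h16" are placeholders for whichever
# n-heptane mechanism is actually available.
import cantera as ct

T0, P0 = 800.0, 40e5                               # compression-end state
gas = ct.Solution("nheptane.yaml")                 # hypothetical mechanism file
gas.TP = T0, P0
gas.set_equivalence_ratio(1.0, "nc7h16", "o2:1.0, n2:3.76")

reactor = ct.IdealGasReactor(gas)                  # constant volume, adiabatic
sim = ct.ReactorNet([reactor])

t, ignition = 0.0, None
while t < 0.01:                                    # integrate over 10 ms
    t = sim.step()
    if ignition is None and reactor.T > T0 + 400.0:
        ignition = t                               # crude temperature-rise criterion

print(f"ignition delay ~ {ignition} s")
```

In a KIVA-style coupling, an integration like this would run in every computational cell over each CFD time step, which is why the abstract flags turbulent mixing, absent from the perfectly-stirred formulation, as the limiting factor for direct injection cases.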