352 results for Contention
Abstract:
In this dissertation, I investigate three related topics in asset pricing: consumption-based asset pricing under long-run risks and fat tails, the pricing of VIX (CBOE Volatility Index) options, and the market price of risk embedded in stock returns and stock options. These three topics are explored in Chapters II through IV, and Chapter V summarizes the main conclusions. In Chapter II, I explore the effects of fat tails on the equilibrium implications of the long-run risks model of asset pricing by introducing innovations with a dampened power law into the consumption and dividend growth processes. I estimate the structural parameters of the proposed model by maximum likelihood. I find that the stochastic volatility model with fat tails can, without resorting to high risk aversion, generate an implied risk premium, expected risk-free rate, and volatilities comparable to the magnitudes observed in the data. In Chapter III, I examine the pricing performance of VIX option models. The contention that simpler is better is supported by empirical evidence from actual VIX option market data. I find that no model has small pricing errors over the entire range of strike prices and times to expiration. In general, Whaley's Black-like option model produces the best overall results, supporting the simpler-is-better contention. However, the Whaley model underprices out-of-the-money VIX calls and overprices out-of-the-money VIX puts, which is contrary to the behavior of stock index option pricing models. In Chapter IV, I explore risk pricing through a model of time-changed Lévy processes based on the joint evidence from individual stock options and underlying stocks. I specify a pricing kernel that prices both idiosyncratic and systematic risks, an approach to examining risk premia on stocks that deviates from existing studies. The empirical results show that the market pays positive premia for idiosyncratic and market jump-diffusion risk and for idiosyncratic volatility risk. However, there is no consensus on the premium for market volatility risk, which can be positive or negative. The positive premium on idiosyncratic risk runs contrary to the implications of traditional capital asset pricing theory.
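For context on the "Black-like" model referenced above, the short sketch below applies Black's (1976) futures-option formula to a VIX futures price, which is the general approach associated with Whaley's model; the futures level, strike, volatility, rate, and maturity are illustrative assumptions, not figures from the dissertation.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def black76_call(F, K, sigma, r, T):
    """Black (1976) price of a European call on a futures contract.

    F: futures price, K: strike, sigma: volatility of the futures,
    r: continuously compounded risk-free rate, T: time to expiry in years.
    """
    d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

# Illustrative inputs only: VIX futures at 20, strike 25, 90% volatility,
# 2% risk-free rate, one month to expiration.
print(black76_call(F=20.0, K=25.0, sigma=0.90, r=0.02, T=1 / 12))
```

Because the formula takes only the futures price and a single volatility as inputs, it is considerably simpler than stochastic-volatility or jump alternatives, which is the sense in which "simpler is better" is tested in Chapter III.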
Abstract:
Memory (cache, DRAM, and disk) is responsible for providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal; however, memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system by using memory caches, sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level acts as the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest. The most important decision in managing a cache is what data to store in it: failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy owing to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, this translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication and contention at the buffer cache level in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted by contention. Finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
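As a concrete illustration of the kind of long-standing replacement policy the abstract alludes to, the following minimal sketch implements least-recently-used (LRU) eviction; the class and its capacity are generic illustrations and are not taken from the dissertation.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least-recently-used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                    # miss: caller fetches from the slower level below
        self.data.move_to_end(key)         # hit: mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least-recently-used entry

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" becomes most recently used
cache.put("c", 3)  # evicts "b"
```

A policy of this kind manages each cache in isolation, which is why duplication across consolidated workloads and contention for shared space go unaddressed without models and techniques like those proposed in the dissertation.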
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications; we also suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
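To make the modeling step concrete, the sketch below fits a Support Vector Machine regressor that maps resource allocations to an application performance metric, in the spirit of the approach described above; the feature names, synthetic data, and hyperparameters are assumptions chosen for illustration, not the thesis's actual benchmarks.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training data: (CPU shares, memory in MB, I/O bandwidth in MB/s) -> response time in ms.
rng = np.random.default_rng(0)
X = rng.uniform(low=[0.5, 512, 10], high=[4.0, 8192, 200], size=(200, 3))
y = 50 / X[:, 0] + 2e4 / X[:, 1] + 300 / X[:, 2] + rng.normal(0, 1, 200)

# Scale features and fit an RBF-kernel SVR as the performance model.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)

# Predict the performance of a candidate VM sizing before committing resources to it.
candidate = np.array([[2.0, 4096, 100]])
print(model.predict(candidate))
```

A model of this form can then be searched for the cheapest allocation that still meets an SLA target, or combined with SLA revenue functions when allocating contended resources across hosted VMs.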
Abstract:
In a post-Cold War, post-9/11 world, the advent of US global supremacy resulted in the installation, perpetuation, and dissemination of an Absolutist Security Agenda (hereinafter, ASA). The US ASA explicitly and aggressively articulates and equates US national security interests with the security of all states in the international system, and it replaced the bipolar Cold War framework that defined international affairs from 1945 to 1992. Since the collapse of the USSR and the 11 September 2001 terrorist attacks, the US has unilaterally defined, implemented, and managed systemic security policy. The US ASA is indicative of a systemic category of knowledge (security) anchored in variegated conceptual and material components, such as morality, philosophy, and political rubrics. The US ASA is based on a logic that involves the following security components: (1) hyper-militarization, (2) intimidation, (3) coercion, (4) criminalization, (5) panoptic surveillance, (6) plenary security measures, and (7) unabashed US interference in the domestic affairs of select states. Such interference has produced destabilizing tensions and conflicts that have, in turn, produced resistance, revolutions, proliferation, cults of personality, and militarization. This is the case because the US ASA rests on the notion that the international system of states is an extension and instrument of US power, rather than a system and/or society of states comprised of functionally sovereign entities. To analyze the US ASA, this study utilizes: (1) official government statements, legal doctrines, treaties, and policies pertaining to US foreign policy; (2) militarization rationales, budgets, and expenditures; and (3) case studies of rogue states. The data used in this study are drawn from publicly available information (academic journals, think-tank publications, government publications, and information provided by international organizations). The data support the contention that global security is effectuated via a discrete set of hegemonic/imperialistic US values and interests, finding empirical expression in legal acts (the USA PATRIOT Act of 2001) and the concept of rogue states. Rogue states, therefore, provide test cases to clarify the breadth, depth, and consequentialness of the US ASA in world affairs vis-à-vis the relationship between US security and global security.
Abstract:
New labor movements are currently emerging across the Global South, in countries as disparate as China, Egypt, and Iran, and new developments are taking place within labor movements in places such as Colombia, Indonesia, Iraq, Mexico, Pakistan, and Venezuela. Activists and leaders in these labor movements are seeking information from workers and unions around the world. However, many labor activists today know little or nothing about the last period of intense efforts to build international labor solidarity, the years 1978-2007. One of the key labor movements of this period, and one which continues today, is the KMU Labor Center of the Philippines. It is this author's contention that much remains unknown about the KMU that would help advance global labor solidarity today. This paper focuses specifically on the KMU's development and shares five findings that have emerged from this author's study of the KMU: a new type of trade unionism, new union organizations, an emphasis on rank-and-file education, the building of relations with sectoral organizations, and the need to build international labor solidarity.
Abstract:
Exchange rate economics has achieved substantial development in the past few decades, yet despite extensive research a large number of unresolved problems remain in the exchange rate debate. This dissertation studied three puzzling issues with the aim of improving our understanding of exchange rate behavior. Chapter Two used advanced econometric techniques to model and forecast exchange rate dynamics, while Chapters Three and Four studied issues related to exchange rates using the theory of New Open Economy Macroeconomics. Chapter Two empirically examined the short-run forecastability of nominal exchange rates by analyzing important empirical regularities in daily exchange rates. Through a series of hypothesis tests, a best-fitting fractionally integrated GARCH model with a skewed Student-t error distribution was identified, and its forecasting performance was compared with that of a random walk model. The results supported the contention that nominal exchange rates appear to be unpredictable over the short run, in the sense that the best-fitting model could not beat the random walk model in forecasting exchange rate movements. Chapter Three assessed the ability of dynamic general-equilibrium sticky-price monetary models to generate volatile foreign exchange risk premia. It developed a tractable two-country model in which agents face a cash-in-advance constraint and set prices to the local market, and the exogenous money supply process exhibits time-varying volatility. The model yielded approximate closed-form solutions for risk premia and real exchange rates, and numerical results provided quantitative evidence that volatile risk premia can arise endogenously in a new open economy macroeconomic model. The model therefore has the potential to rationalize the uncovered interest parity puzzle. Chapter Four sought to resolve the consumption-real exchange rate anomaly, which refers to the inability of most international macro models to generate the negative cross-correlations between real exchange rates and relative consumption across two countries observed in the data. While maintaining the assumption of complete asset markets, this chapter introduced endogenously segmented asset markets into a dynamic sticky-price monetary model. Simulation results showed that such a model can replicate the stylized fact that real exchange rates tend to move in the opposite direction to relative consumption.
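As an illustration of the Chapter Two modeling exercise, the sketch below fits a fractionally integrated GARCH specification with a skewed Student-t error distribution using the Python `arch` package and produces a one-step-ahead forecast; the simulated return series is a placeholder, not the dissertation's data, and the specification shown is only assumed to mirror the best-fitting model described above.

```python
import numpy as np
from arch import arch_model

# Placeholder data: simulated daily returns in percent stand in for exchange rate returns.
rng = np.random.default_rng(1)
returns = 0.6 * rng.standard_t(df=6, size=2000)

# FIGARCH volatility with skewed Student-t errors, echoing the best-fitting specification.
model = arch_model(returns, mean="Constant", vol="FIGARCH", p=1, q=1, dist="skewt")
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead forecast; the relevant benchmark is a driftless random walk,
# whose forecast of tomorrow's return is simply zero.
forecast = result.forecast(horizon=1)
print(forecast.mean.iloc[-1], forecast.variance.iloc[-1])
```

In an out-of-sample comparison, these forecasts would be scored (for example by RMSE) against the random walk's zero-change prediction, which is the exercise behind the unpredictability finding.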
Abstract:
This thesis studies the historic encounter between United States Navy airship K-74 and Nazi submarine U-134 in World War II. The Battle of the Atlantic is examined through a case study of this one U-boat and its voyage. In all things except her fight with the American blimp, the patrol was perfectly typical; examined from start to finish, through both her own reports and the reports of the Allied forces she encountered, it illustrates many realities of the war. U-134 sailed to attack shipping between Florida and Cuba and was challenged over the Florida Straits by the attack of United States Navy airship K-74. It is the only documented instance in history of battle between two such combatants, and that alone merits attention. The thesis findings disprove historian Samuel Eliot Morison's contention that the K-74's bombs were not dropped and did not damage the U-boat. Study of this U-boat and its antagonist broadens our understanding of the Battle of the Atlantic and contributes to our knowledge of military, naval, aviation, and local history.
Abstract:
This thesis examines the involvement of the United States in the decade-long trade dispute before the World Trade Organization (WTO) over the European Union's preferential banana regime. Washington's justification for bringing this case to the WTO rests on Section 301 of the U.S. Trade Act, which allows disputes to be undertaken if U.S. "interests" are violated; however, this is the first case ever undertaken by the United States that neither directly threatens an American banana industry nor affects any American jobs. Why, then, would the United States involve itself in this European-Caribbean-Latin American dispute? It is the contention of this thesis that the United States thrust itself headlong into this debate for two reasons. Domestically, the United States Trade Representative came under pressure, via the White House and Congress, from Chiquita CEO Carl Lindner, who in the past decade donated more than $7.1 million to American politicians to take the case to the WTO. Internationally, the United States used the case as an opportunity to assert its power over Europe, with the Eastern Caribbean islands caught in the economic crossfire. According to the existing literature, in undertaking this case the United States did as any nation would: it operated on both the domestic and international levels, satisfying key interests at each level, with the overall goal of maintaining the nation's best interests.
Abstract:
A heat loop suitable for the study of thermal fouling and its relationship to corrosion processes was designed, constructed, and tested. The design adopted was an improvement over those used by such investigators as Hopkins and the Heat Transfer Research Institute in that very low levels of fouling could be detected accurately, the heat transfer surface could be readily removed for examination, and the chemistry of the environment could be carefully monitored and controlled. In addition, an indirect method of electrical heating of the heat transfer surface was employed to eliminate the magnetic and electric effects that result when direct resistance heating is applied to a test section. The loop was tested using a 316 stainless steel test section and a suspension of ferric oxide in water in an attempt to duplicate the results obtained by Hopkins. Two types of thermal fouling resistance versus time curves were obtained. (i) An asymptotic fouling curve, similar to the fouling behaviour described by Kern and Seaton and other investigators, was the most frequent type obtained: thermal fouling occurred at a steadily decreasing rate before reaching a final asymptotic value. (ii) If an asymptotically fouled tube was cooled with rapid circulation for periods of up to eight hours at zero heat flux and heating was then restarted, fouling recommenced at a high linear rate. The fouling results were similar to and in agreement with the fouling behaviour reported previously by Hopkins, and it was possible to duplicate the previous results quite closely. This supports Hopkins' contention that the fouling results obtained were due to a crevice corrosion process and not an artifact of that heat loop, which might have caused electrical and magnetic effects influencing the fouling. The effects of Reynolds number and heat flux on the asymptotic fouling resistance were determined, and a single experiment to study the effect of oxygen concentration was carried out. The ferric oxide concentration for most of the fouling trials was standardized at 2400 ppm, and the ranges of Reynolds number and heat flux for the study were 11000-29500 and 89-121 kW/m², respectively.
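For reference, the asymptotic behaviour described in (i) is conventionally written in the Kern-Seaton form shown below, in which the fouling resistance grows at a steadily decreasing rate toward a limiting value; the symbols are generic and the constants are not values reported in this work.

```latex
% Kern-Seaton asymptotic fouling model (generic form)
\frac{dR_f}{dt} = B\left(R_f^{*} - R_f\right)
\quad\Longrightarrow\quad
R_f(t) = R_f^{*}\left(1 - e^{-Bt}\right)
```

Here R_f(t) is the fouling resistance at time t, R_f^{*} is the asymptotic fouling resistance whose dependence on Reynolds number and heat flux was studied, and B is a rate constant.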
Abstract:
The purpose of this research is to investigate how international students negotiate encounters with Irish students and construct 'meaning' from those encounters in the spaces of the university and city. As cities are increasingly characterised by a multiplexity of diversity, the issue of living with difference is becoming more and more pertinent. In the wake of escalating socio-spatial polarisation, inter-cultural tension, racism, and xenophobia, the geographies of encounter seek to untangle the interactions that occur in the quotidian activities and spaces of everyday life, to determine whether such encounters might reduce prejudice, antipathy, and indifference and establish common social bonds (Amin 2002; Valentine 2008). Thus far, the literature has investigated a number of sites of encounter: public space, the home, neighbourhoods, schools, sports clubs, public transport, cafes, and libraries (Wilson 2011; Schuermans 2013; Hemming 2011; Neal and Vincent 2011; Mayblin, Valentine and Andersson 2015; Laurier and Philo 2006; Valentine and Sadgrove 2013; Harris, Valentine and Piekut 2014; Fincher and Iveson 2008). While these spaces produce a range of outcomes, the literature remains frustrated by a lack of clarity about what constitutes a 'meaningful' encounter and how such encounters might be planned for. Drawing on survey and interview data from full-time international students at University College Cork, Ireland, this study contributes to understanding how encounters are shaped by the construction and reproduction of particular identities in particular spaces, imbuing those spaces with uneven power frameworks that produce diverse outcomes. Rather than identifying a singular 'meaningful' outcome of encounter as a potential panacea for the issues of exclusion and oppression, the contention here is that a range of outcomes should be recognised, created by individuals in a range of ways. To define one outcome of encounter as 'meaningful' is to overlook the scale of intensity of diverse interactions and the multiplicity of ways in which people learn to live with difference.
Abstract:
18O/16O data on a depth profile of water samples from the Arctic Ocean reveal that near-surface water is depleted in 18O by about 4 per mil, but water at depths greater than 350 meters approaches normal open-ocean composition. The δ18O profile very closely follows the salinity profile, with δ18O changing by about 0.8 per mil per 1 per mil change in salinity. The results of δ18O measurements on the pelagic species Globigerina pachyderma from a composite core show that the δ18O value has not changed since the latter part of the last glacial period. We take this constancy to indicate that the temperature and the δ18O value of the water in which these foraminifera grew have not changed significantly since that time. Such a conclusion seems to imply that the present ice coverage in the Arctic Ocean has remained unchanged during the last 25,000 years. However, the δ18O value of benthonic foraminifera shows a shift of 1.2 per mil between the end of the last glacial period and the present warm period. This shift is consistent with the idea that the deep water mass of the Arctic Ocean is formed outside the Arctic basin. The δ18O value of the benthonic foraminifera from the top of the core was used in conjunction with the δ18O value and temperature of the bottom water to establish the constant in the empirical equation relating δ18O values to temperature for the preparation procedure used in our laboratory. Based on this calibration, the data confirm A. W. H. Bé's contention (personal communication, 1960) that G. pachyderma incorporates about one-half of its CaCO3 below 300 meters.
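The empirical δ18O-temperature relation referred to above is generally written in the quadratic carbonate paleotemperature form below; the slope and curvature coefficients shown are the commonly cited Epstein-type values, given only to indicate the form, since the abstract states that the constant term was re-established for the laboratory's own preparation procedure.

```latex
% Carbonate paleotemperature equation (generic quadratic form)
T(^{\circ}\mathrm{C}) = a - 4.3\,(\delta_c - \delta_w) + 0.14\,(\delta_c - \delta_w)^{2}
```

Here \delta_c is the δ18O of the carbonate, \delta_w that of the water in which it grew, and a the laboratory-dependent constant (about 16.5 in the classic calibration); with the bottom-water temperature and δ18O known, the core-top benthonic measurements fix a for this data set.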
Abstract:
1. Desmoscolecida from the continental slope and the deep-sea bottom (59-4354 m) off the Portuguese and Moroccan coasts are described. 18 species were identified: Desmoscolex bathyalis sp. nov., D. chaetalatus sp. nov., D. eftus sp. nov., D. galeatus sp. nov., D. lapilliferus sp. nov., D. longisetosus Timm, 1970, D. lorenzeni sp. nov., D. perspicuus sp. nov., D. pustulatus sp. nov., Quadricoma angulocephala sp. nov., Q. brevichaeta sp. nov., Q. iberica sp. nov., Q. loricatoides sp. nov., Tricoma atlantica sp. nov., T. bathycola sp. nov., T. beata sp. nov., T. incomposita sp. nov., T. meteora sp. nov., T. mauretania sp. nov. 2. The following new terms are proposed: "Desmos" (ring-shaped concretions consisting of secretion and concretion particles), "desmoscolecoid" and "tricomoid" arrangement of the somatic setae, and "regelmäßige" (regular), "unregelmäßige" (irregular), "vollständige" (complete) and "unvollständige" (incomplete) arrangement of the somatic setae (variations in the desmoscolecoid arrangement). The length of the somatic setae is given in the setal pattern. 3. Desmoscolecida identical as to genus and species exhibit no morphological differences even when coming from different bathymetric zones (deep sea, sublittoral, littoral) or different environments (marine, freshwater, coastal subsoil water, terrestrial). 4. Lorenzen's (1969) contention that the arrangement of the somatic setae is more significant for the natural relationships between the different genera of Desmoscolecida than other characteristics is further confirmed. Species with a tricomoid arrangement of the somatic setae are regarded as primitive; species with a desmoscolecoid arrangement are regarded as more advanced. 5. Three new genera are established: Desmogerlachia gen. nov., Desmolorenzenia gen. nov. and Desmotimmia gen. nov. Protricoma Timm, 1970 is synonymized with Paratricoma Gerlach, 1964, and Protodesmoscolex Timm, 1970 is synonymized with Desmoscolex Claparède, 1863. 6. Checklists of all species of the order Desmoscolecida and keys to the species of the subfamilies Tricominae and Desmoscolecinae are provided. 7. The following nomenclatural changes are suggested: Desmogerlachia papillifer (Gerlach, 1956) comb. nov., D. pratensis (Lorenzen, 1969) comb. nov., Desmotimmia mirabilis (Timm, 1970) comb. nov., Paratricoma squamosa (Timm, 1970) comb. nov., Desmolorenzenia crassicauda (Timm, 1970) comb. nov., D. desmoscolecoides (Timm, 1970) comb. nov., D. eurycricus (Filipjev, 1922) comb. nov., D. frontalis (Gerlach, 1952) comb. nov., D. hupferi (Steiner, 1916) comb. nov., D. longicauda (Timm, 1970) comb. nov., D. parva (Timm, 1970) comb. nov., D. platycricus (Steiner, 1916) comb. nov., D. vittata (Lorenzen, 1969) comb. nov., Desmoscolex antarcticus (Timm, 1970) comb. nov.
Abstract:
A record of Pb isotopic compositions and Pb and Ba concentrations is presented for the EPICA Dome C ice core covering the past 220 kyr, indicating the characteristics of dust and volcanic Pb deposition in central East Antarctica. Lead isotopic compositions are also reported for a suite of soil and loess samples from the Southern Hemisphere (Australia, Southern Africa, Southern South America, New Zealand, Antarctica) in order to evaluate the provenance of dust present in Antarctic ice. Lead isotopic compositions in Dome C ice support the contention that Southern South America was an important source of dust in Antarctica during the last two glacial maxima, and they furthermore suggest occasional dust contributions from local Antarctic sources. The isotopic signature of Pb in Antarctic ice is altered by the presence of volcanic Pb, inhibiting the evaluation of glacial-interglacial changes in dust sources and of Australia as a source of dust to Antarctica. Consequently, an accurate evaluation of the predominant source(s) of Antarctic dust can only be obtained for glacial maxima, when dust-Pb concentrations were greatest. These data confirm that volcanic Pb is present throughout Antarctica and is emitted in a physical phase that is free of Ba, while dust Pb is transported within a matrix containing Ba and other crustal elements.