Abstract:
This paper assesses the impact of monetary integration on different types of stock returns in Europe. In order to isolate European factors, the impacts of global equity integration and small cap factors are investigated. European countries are sub-divided according to the process of monetary convergence. Analysis shows that national equity indices are strongly influenced by global market movements, with a European stock factor providing additional explanatory power. The global and European factors explain small cap and real estate stocks much less well, suggesting an increased importance of ‘local’ drivers. For real estate, there are notable differences between core and non-core countries. Core European countries exhibit convergence – a convergence to a European rather than a global factor. The non-core countries do not seem to exhibit common trends or movements. For the non-core countries, monetary integration has been associated with increased dispersion of returns, lower correlation and lower explanatory power of a European factor. It is concluded that this may be explained by divergence in underlying macro-economic drivers between core and non-core countries in the post-Euro period.
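To make the factor decomposition concrete, here is a minimal sketch (not the paper's actual estimation) of the kind of two-factor return regression such a study runs. The European factor is orthogonalised on the global one, so its coefficient captures the *additional* explanatory power beyond world market movements; all data below are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: monthly returns for a national index,
# a global equity factor, and a European equity factor.
rng = np.random.default_rng(0)
global_f = rng.normal(0, 0.04, 120)
europe_f = 0.6 * global_f + rng.normal(0, 0.03, 120)   # correlated factors
national = 0.02 + 0.9 * global_f + 0.4 * europe_f + rng.normal(0, 0.02, 120)

# Orthogonalise the European factor on the global one, so its coefficient
# measures incremental European explanatory power.
europe_resid = europe_f - sm.OLS(europe_f, sm.add_constant(global_f)).fit().predict()

X = sm.add_constant(np.column_stack([global_f, europe_resid]))
fit = sm.OLS(national, X).fit()
print(fit.params, fit.rsquared)
```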
Abstract:
In this paper sequential importance sampling is used to assess the impact of observations on an ensemble prediction for the decadal path transitions of the Kuroshio Extension (KE). This particle filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power — an entropy-based measure — of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal over one year. The method is applied in an identical-twin experiment for the Kuroshio Extension using a reduced-gravity shallow water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering state of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state the optimal location is in the first meander of the KE. During the contracted state of the KE it is located south of Japan, where the Kuroshio separates from the coast.
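The post-processing step can be sketched as follows (a minimal illustration under a Gaussian observation-error assumption, not the authors' code): an equal-weight climatological ensemble is re-weighted by the likelihood of a velocity observation, and the impact of that observation is measured as the relative entropy of the updated weights with respect to the climatological ones. All numbers are hypothetical.

```python
import numpy as np

def update_weights(ens_obs, y_obs, obs_std):
    """Importance weights for an equal-weight climatological ensemble,
    given one observation with Gaussian error."""
    loglik = -0.5 * ((ens_obs - y_obs) / obs_std) ** 2
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

def relative_entropy(w, w_clim):
    """Entropy-based impact measure: K-L divergence of the updated
    weights from the climatological (uniform) weights."""
    mask = w > 0
    return np.sum(w[mask] * np.log(w[mask] / w_clim[mask]))

# Hypothetical example: 1000 ensemble members, velocity at one location.
rng = np.random.default_rng(1)
ens_velocity = rng.normal(0.1, 0.05, 1000)   # modelled velocity (m/s)
w = update_weights(ens_velocity, y_obs=0.18, obs_std=0.02)
print(relative_entropy(w, np.full(1000, 1e-3)))
```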
Abstract:
Worries about the possibility of consent recall a more familiar problem about promising raised by Hume. To see the parallel here we must distinguish the power of consent from the normative significance of choice. I'll argue that we have normative interests, interests in being able to control the rights and obligations of ourselves and those around us, interests distinct from our interest in controlling the non-normative situation. Choice gets its normative significance from our non-normative control interests. By contrast, the possibility of consent depends on a species of normative interest that I'll call a permissive interest, an interest in its being the case that certain acts wrong us unless we declare otherwise. In the final section, I'll show how our permissive interests underwrite the possibility of consent.
Abstract:
The Cold War in the late 1940s blunted attempts by the Truman administration to extend the scope of government in areas such as health care and civil rights. In California, the combined weakness of the Democratic Party in electoral politics and the importance of fellow travelers and communists in state liberal politics made the problem of how to advance the left at a time of heightened Cold War tensions particularly acute. Yet by the early 1960s a new generation of liberal politicians had gained political power in the Golden State and was constructing a greatly expanded welfare system as a way of cementing their hold on power. In this article I argue that the New Politics of the 1970s, shaped nationally by Vietnam and by the social upheavals of the 1960s over questions of race, gender, sexuality, and economic rights, possessed particular power in California because many activists drew on the longer-term experiences of a liberal politics receptive to earlier anti-Cold War struggles. A desire to use political involvement as a form of social networking had given California a strong Popular Front, and in some respects the power of new liberalism was an offspring of those earlier battles.
Abstract:
The work of nouvelliste Annie Saumont constantly explores the phenomenon of memory, and of memories. This article identifies and nuances the various forms that this exploration takes. An introductory contextualization of author and theme is followed by the presentation of a short story, ‘Méandres’, which embodies the first quality of memory to be examined: its capacity not only to recall but also to re-evaluate a past which is thus shown to be as hypothetical as the future. Memory as guilt that moulds or puts its indelible stamp on lives is then evoked by means of examples from other stories, illustrating the gradations Saumont achieves in her investigation of the power of this complex faculty. The next section turns to her portrayal of involuntary memory. Unlike for Proust, the instances of spontaneous remembering that are experienced by her characters lunge at them down the years almost exclusively to wound or disorientate. Depictions of the memory which conserves, and is thus burdened by, secrets are then considered, and finally Saumont's evocation of characters who have different reasons to analyse the way their own and other people's memories work. The conclusion to be drawn is that for Saumont, we are our memories; the ability to master a ‘judicious interpretation’ of memory – or indeed, to forget – is, in her stories, overwhelmingly a quality to be envied.
Abstract:
Each year, small Member States receive a disproportionate share of the European Union's (EU's) budget. A prominent explanation for this is that Council decision-making involves a healthy dose of vote selling, whereby large Member States offer small states generous fiscal transfers in exchange for influence over policy. But nobody has investigated whether net budget contributors actually get anything for their money. In this paper I identify the vote selling model's observable implications and find virtually no evidence consistent with Council cash-for-votes exchanges. I also show that a compromise model – the leading model of EU decision-making to date – modified to incorporate vote selling does not outperform a standard one that assumes votes are traded rather than sold. Taken together, the results suggest that Council decision-making operates with little or no vote selling, and that regardless of whatever they think they might be buying, net budget contributors get little or nothing in return for their money. These findings call for further investigation into how Member States approach the issue of fiscal transfers, and into the factors other than formal voting weight that affect the power of actors engaged in EU decision-making.
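For reference, the compromise model discussed here is conventionally specified as a mean of actors' positions weighted by power times salience; the sketch below shows that baseline form with hypothetical positions, voting weights and saliences.

```python
import numpy as np

def compromise_outcome(position, power, salience):
    """Compromise model as commonly specified in the EU decision-making
    literature: predicted outcome is the mean of actors' positions
    weighted by power times salience."""
    w = power * salience
    return np.sum(w * position) / np.sum(w)

# Hypothetical three-state example on a 0-100 policy scale.
position = np.array([20.0, 50.0, 90.0])
power    = np.array([29.0, 29.0, 7.0])   # e.g. Council voting weights
salience = np.array([0.8, 0.5, 1.0])
print(compromise_outcome(position, power, salience))
```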
Abstract:
A novel but simple time-of-flight neutron scattering geometry which allows structural anisotropy to be probed directly, simultaneously and thus unambiguously in polymeric and other materials is described. A particular advantage of the simultaneous data collection, when coupled to the large area of the beam, is that it enables thin films (< 10 μm, < 10 mg) to be studied with relative ease. The utility of the technique is illustrated by studies on both deformed poly(styrene) glasses and on thin films of electrically conducting polymers. In the latter case, the power of isotopic substitution is illustrated to great effect. The development of these procedures for use in other areas of materials science is briefly discussed.
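The time-of-flight bookkeeping underlying such a geometry follows from the de Broglie relation: the neutron wavelength is obtained from its flight time and path length, and the elastic momentum transfer from wavelength and scattering angle. A minimal sketch with illustrative (not instrument-specific) numbers:

```python
import numpy as np

H = 6.62607015e-34       # Planck constant (J s)
M_N = 1.67492749804e-27  # neutron mass (kg)

def wavelength_from_tof(t, L):
    """de Broglie wavelength (m) of a neutron with flight time t (s)
    over path length L (m): lambda = h * t / (m_n * L)."""
    return H * t / (M_N * L)

def momentum_transfer(two_theta, lam):
    """Elastic momentum transfer Q = 4*pi*sin(theta)/lambda (1/m),
    where theta is half the scattering angle."""
    return 4 * np.pi * np.sin(two_theta / 2) / lam

# Hypothetical: 10 ms flight time over 10 m, detector at 2theta = 20 deg.
lam = wavelength_from_tof(10e-3, 10.0)
print(lam * 1e10, "angstrom;",
      momentum_transfer(np.radians(20.0), lam) * 1e-10, "1/angstrom")
```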
Abstract:
In biological mass spectrometry (MS), two ionization techniques are predominantly employed for the analysis of larger biomolecules, such as polypeptides. These are nano-electrospray ionization [1, 2] (nanoESI) and matrix-assisted laser desorption/ionization [3, 4] (MALDI). Both techniques are considered to be “soft”, allowing the desorption and ionization of intact molecular analyte species and thus their successful mass-spectrometric analysis. One of the main differences between these two ionization techniques lies in their ability to produce multiply charged ions. MALDI typically generates singly charged peptide ions, whereas nanoESI easily provides multiply charged ions, even for peptides as low as 1000 Da in mass. The production of highly charged ions is desirable as this allows the use of mass analyzers, such as ion traps (including orbitraps) and hybrid quadrupole instruments, which typically offer only a limited m/z range (< 2000–4000). It also enables more informative fragmentation spectra using techniques such as collision-induced dissociation (CID) and electron capture/transfer dissociation (ECD/ETD) in combination with tandem MS (MS/MS) [5, 6]. Thus, there is a clear advantage to using ESI in research areas where peptide sequencing or, more generally, the structural elucidation of biomolecules by MS/MS is required. Nonetheless, MALDI, with its higher tolerance to contaminants and additives, ease of operation, potential for high-speed and automated sample preparation and analysis, as well as its MS imaging capabilities, is an ionization technique that can cover bioanalytical areas for which ESI is less suitable [7, 8]. If these strengths could be combined with the analytical power of multiply charged ions, new instrumental configurations and large-scale proteomic analyses based on MALDI MS(/MS) would become feasible.
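The value of multiple charging follows directly from the m/z relation for protonated ions, m/z = (M + z·m_H+)/z. A short illustration (the 25 kDa protein is hypothetical):

```python
PROTON_MASS = 1.007276  # Da

def mz(neutral_mass_da, charge):
    """m/z of an ion carrying `charge` protons:
    m/z = (M + z * m_proton) / z."""
    return (neutral_mass_da + charge * PROTON_MASS) / charge

# A hypothetical 25 kDa protein: singly charged it sits far outside a
# 2000-4000 m/z analyzer window, but at z = 15 it falls inside it.
for z in (1, 5, 10, 15):
    print(z, round(mz(25000.0, z), 1))
```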
Abstract:
Purpose: The increasing costs of health care, fuelled by demand for high-quality, cost-effective care, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CP) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and as a result can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, capturing knowledge from the syntactic, semantic and pragmatic levels up to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and the rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes can best be tackled by analyzing stakeholders, whom we treat as social agents, together with their goals and patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which that behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CPs generated from SAM together with norms enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, of the ultimate power of decision-making in exceptional circumstances.
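As an illustration (a sketch, not the authors' implementation): norms identified by NAM are conventionally written in the form "whenever <context> if <condition> then <agent> is <obliged/permitted/prohibited> to <action>", which maps naturally onto a simple data structure that could feed CP generation. The example pathway norm below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """Behavioural norm in the canonical NAM form:
    whenever <context> if <condition> then <agent>
    is <obliged|permitted|prohibited> to <action>."""
    context: str
    condition: str
    agent: str
    deontic: str   # "obliged", "permitted" or "prohibited"
    action: str

    def render(self):
        return (f"whenever {self.context} if {self.condition} "
                f"then {self.agent} is {self.deontic} to {self.action}")

# Hypothetical pathway norm that could feed a BPMN gateway.
n = Norm("a patient is admitted with chest pain",
         "the troponin result is elevated",
         "the attending cardiologist",
         "obliged",
         "order an angiography within 24 hours")
print(n.render())
```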
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects’ appearance and movements with respect to models of behaviour (and therefore the inference of intentions). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers against the surveilled.
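As a deliberately toy illustration of the pipeline stages described above (object detection, tracking over time, behaviour interpretation), here is a minimal sketch; the nearest-centre association rule and the "loitering" behaviour model are hypothetical stand-ins for the algorithms actually surveyed.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x, y, width, height

@dataclass
class Track:
    track_id: int
    boxes: List[Box] = field(default_factory=list)

class NearestCentreTracker:
    """Toy tracker: associates each detection with the nearest existing
    track centre, else starts a new track (placeholder for real logic)."""
    def __init__(self, max_dist: float = 50.0):
        self.tracks: List[Track] = []
        self.max_dist = max_dist

    def update(self, detections: List[Box]) -> List[Track]:
        for box in detections:
            cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
            best = min(self.tracks, default=None,
                       key=lambda t: self._dist(t, cx, cy))
            if best and self._dist(best, cx, cy) < self.max_dist:
                best.boxes.append(box)
            else:
                self.tracks.append(Track(len(self.tracks), [box]))
        return self.tracks

    @staticmethod
    def _dist(t: Track, cx: float, cy: float) -> float:
        x, y, w, h = t.boxes[-1]
        return ((x + w / 2 - cx) ** 2 + (y + h / 2 - cy) ** 2) ** 0.5

# Hypothetical behaviour model: flag tracks that barely move (loitering).
def loitering(boxes: List[Box]) -> bool:
    return len(boxes) > 3 and max(b[0] for b in boxes) - min(b[0] for b in boxes) < 5

tracker = NearestCentreTracker()
for frame_detections in [[(10, 10, 5, 5)], [(11, 10, 5, 5)],
                         [(12, 11, 5, 5)], [(12, 12, 5, 5)], [(13, 12, 5, 5)]]:
    tracks = tracker.update(frame_detections)
print({t.track_id: loitering(t.boxes) for t in tracks})
```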
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using a Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
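To illustrate the computational point (a generic sketch of the technique, not the paper's exact formulation), a Dinkelbach-type procedure maximizes a linear fractional objective by repeatedly solving a parametric linear subproblem and updating the ratio; the two-relay numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dinkelbach_lfp(c, d, A_eq, b_eq, tol=1e-9, max_iter=50):
    """Dinkelbach-type procedure for a linear fractional program
    max c@x / d@x  s.t.  A_eq@x == b_eq, x >= 0  (with d@x > 0 on the
    feasible set): solve the parametric LP  max (c - lam*d)@x, update
    lam = c@x / d@x, and stop once the parametric optimum hits zero."""
    lam = 0.0
    for _ in range(max_iter):
        res = linprog(-(c - lam * d), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        x = res.x
        if (c - lam * d) @ x < tol:   # F(lam) = 0  =>  lam is the optimal ratio
            return x, lam
        lam = (c @ x) / (d @ x)
    return x, lam

# Hypothetical two-relay toy problem with the sum power fixed to 1.
c = np.array([2.0, 3.0]); d = np.array([2.0, 1.0])
x, lam = dinkelbach_lfp(c, d, A_eq=np.array([[1.0, 1.0]]), b_eq=np.array([1.0]))
print(x, lam)   # expect x ~ [0, 1] with optimal ratio 3
```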
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns, with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
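The two ways of turning a volatility forecast into a quantile forecast can be sketched as follows (illustrative code, not the paper's implementation; the returns and volatilities are simulated):

```python
import numpy as np
from scipy.stats import norm

def var_gaussian(sigma_forecast, alpha=0.01):
    """Quantile (VaR) forecast under a normal assumption for daily
    returns: q_alpha = sigma_hat * Phi^{-1}(alpha)."""
    return sigma_forecast * norm.ppf(alpha)

def var_empirical(past_returns, past_sigmas, sigma_forecast, alpha=0.01):
    """Quantile forecast from the empirical distribution of predicted
    standardized returns, scaled by the volatility forecast."""
    z = past_returns / past_sigmas
    return sigma_forecast * np.quantile(z, alpha)

# Hypothetical rolling-sample illustration with fat-tailed returns.
rng = np.random.default_rng(2)
sig = np.full(500, 0.006)
r = sig * rng.standard_t(df=5, size=500) / np.sqrt(5 / 3)  # unit-variance t
print(var_gaussian(0.006), var_empirical(r, sig, 0.006))
```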
Using the past to constrain the future: how the palaeorecord can improve estimates of global warming
Abstract:
Climate sensitivity is defined as the change in global mean equilibrium temperature after a doubling of atmospheric CO2 concentration and provides a simple measure of global warming. An early estimate of climate sensitivity, 1.5–4.5°C, has changed little subsequently, including in the latest assessment by the Intergovernmental Panel on Climate Change. The persistence of such large uncertainties in this simple measure casts doubt on our understanding of the mechanisms of climate change and our ability to predict the response of the climate system to future perturbations. This has motivated continued attempts to constrain the range with climate data, alone or in conjunction with models. The majority of studies use data from the instrumental period (post-1850), but recent work has made use of information about the large climate changes experienced in the geological past. In this review, we first outline approaches that estimate climate sensitivity using instrumental climate observations and then summarize attempts to use the record of climate change on geological timescales. We examine the limitations of these studies and suggest ways in which the power of the palaeoclimate record could be better used to reduce uncertainties in our predictions of climate sensitivity.
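As a sketch of the energy-balance reasoning behind instrumental-period estimates (not any particular study's method; the input numbers are purely illustrative): equilibrium sensitivity follows from observed warming, the net radiative forcing, and the heat flux still entering the ocean.

```python
import numpy as np

F_2X = 5.35 * np.log(2.0)   # ~3.7 W/m^2 forcing per CO2 doubling (Myhre et al. fit)

def climate_sensitivity(delta_t, delta_f, ocean_uptake):
    """Zero-dimensional energy-balance estimate used with instrumental
    data: S = F_2x * dT / (dF - dQ), where dQ is the heat flux still
    going into the ocean (the system is not yet at equilibrium)."""
    return F_2X * delta_t / (delta_f - ocean_uptake)

# Illustrative (hypothetical) instrumental-period numbers:
# 0.8 K warming, 2.0 W/m^2 net forcing, 0.6 W/m^2 ocean heat uptake.
print(climate_sensitivity(0.8, 2.0, 0.6))   # ~2.1 K per doubling
```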
Abstract:
This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close return methods of testing for departures from independent and identical distribution. It is found that the finite sample properties of the BDS test are far superior, and that the close return method cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close return test has low power even at large sample sizes.
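A Monte Carlo size check of this kind can be sketched in a few lines, assuming statsmodels' bds implementation; the sample and replication counts below are illustrative, and power would be estimated the same way under a dependent alternative (e.g. GARCH or a chaotic map).

```python
import numpy as np
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(3)
n_reps, n_obs, rejections = 200, 250, 0
for _ in range(n_reps):
    x = rng.standard_normal(n_obs)        # i.i.d. data: the null is true
    stat, pvalue = bds(x, max_dim=2)      # BDS test at embedding dimension 2
    if np.atleast_1d(pvalue)[0] < 0.05:
        rejections += 1
# The empirical rejection rate should be close to the nominal 5% level.
print("empirical size:", rejections / n_reps)
```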
Abstract:
Global warming has attracted attention from all over the world and has led to concern about carbon emissions. The Kyoto Protocol, the first major international regulatory emission trading scheme, was introduced in 1997 and outlined strategies for reducing carbon emissions (Ratnatunga et al., 2011). Reflecting the increased interest in carbon reduction, the Protocol came into force in 2005, and 191 nations have now ratified it (UNFCCC, 2012). Under cap-and-trade schemes, each company has its own carbon emission target. When a company's emissions exceed that target, the company must either face fines or buy emission allowances from other companies. Thus, unlike most other social and environmental issues, carbon emissions can create costs for companies, both in introducing low-emission equipment and systems and in purchasing allowances when they emit more than their targets. Despite the importance of carbon emissions to companies, carbon emission reporting still operates in an unregulated environment, and companies are only required to disclose when the issue is material either in value or in substance (Miller, 2005, Deegan and Rankin, 1997). Although there has been an increase in the volume of carbon emission disclosures in companies' financial reports and stand-alone social and environmental reports, made to show concern for the environment and social responsibility (Peters and Romi, 2009), the motivations behind corporate carbon emission disclosures, and whether such disclosures affect corporate environmental reputation and financial performance, have yet to be explored. The problems with carbon emissions lie on both the financial and non-financial sides of corporate governance. On the one hand, companies need to spend money on reducing emissions or on paying penalties when they emit more than allowed. On the other hand, as the public are more interested in environmental issues than before, carbon emissions can also affect a company's image with regard to its environmental performance. The importance of carbon emission issues is beginning to be recognized by companies from different industries as one of the critical issues in supply chain management (Lee, 2011). A study conducted by the Investor Responsibility Research Centre Institute for Corporate Responsibility (IRRCI) shows that 80% of the companies analysed face carbon risks resulting from emissions in their supply chains, and that for over 80% of them the majority of greenhouse gas (GHG) emissions come from electricity and other direct suppliers (Trucost, 2009). A review of the extant literature shows the increased importance of carbon emission issues and reveals a gap in the study of carbon reporting and disclosure, and in work linking corporate environmental reputation and financial performance with carbon reporting (Lohmann, 2009a, Ratnatunga and Balachandran, 2009, Bebbington and Larrinaga-Gonzalez, 2008). This study investigates the current status of UK carbon emission disclosures, the determinants of corporate carbon disclosure, and the relationship between carbon emission disclosures and the environmental reputation and financial performance of UK listed companies from 2004 to 2012, and explores the explanatory power of classical disclosure theories.