988 results for Hold


Relevance:

10.00%

Publisher:

Abstract:

In response to the often-heard accusation that “austerity is killing growth in Europe”, Daniel Gros asks in this new Commentary: “What austerity?” Looking at the entire budget cycle, he finds that the picture of austerity killing growth simply does not hold up. Since the bursting of the bubble in 2007, Gros reports, the economic performance of the US has been very similar to that of the euro area: GDP per capita today is about 2% below its 2007 level on both sides of the Atlantic, and the unemployment rate has increased by about the same amount, roughly 3%, in both the US and the euro area. He thus concludes that, over a five-year period, the US has not done any better than the euro area, although it has used a much larger dose of fiscal expansion.

Relevance:

10.00%

Publisher:

Abstract:

Europe's failure to specialise in new ICT sectors and firms is likely to hold back its post-crisis recovery. Europe lacks in particular leading platform providers, who are capturing most of the value in the new ICT ecosystem.

• In-depth analysis of some specific new emerging ICT sectors shows that the problem in Europe appears not to be so much in the generation of new ideas, but rather in bringing ideas successfully to market. Among the barriers are the lack of a single digital market, fragmented intellectual property regimes, the lack of an entrepreneurial culture, limited access to risk capital and an absence of ICT clusters.

• The EU policy framework, particularly the Innovation Union and Digital Agenda EU 2020 Flagships, could better leverage the growth power of new ICT markets for Europe. The emphasis should move beyond providing support for infrastructure and research to funding programmes for pre-commercial projects. But perhaps most important is dealing with the fragmentation of European digital markets.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1·0 km × 0·4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
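The friction-sensitivity result above can be illustrated with Manning's equation, which most 2D hydraulic codes use in some form for bed friction: the predicted velocity for a given depth and slope scales inversely with the roughness coefficient n. This is an illustrative sketch, not the paper's benchmark setup; the depth, slope and roughness values are assumed.

```python
# Minimal sketch of why friction parameterisation matters in shallow-water
# modelling: Manning's equation, v = R^(2/3) * S^(1/2) / n, makes velocity
# scale as 1/n. All numbers below are illustrative, not from the paper.

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity (m/s) from Manning's equation."""
    return hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5 / n

# A plausible urban roughness range (smooth paving to obstructed ground):
for n in (0.013, 0.035, 0.10):
    v = manning_velocity(n, hydraulic_radius_m=0.5, slope=0.005)
    print(f"n = {n:.3f} -> v = {v:.2f} m/s")
```

The order-of-magnitude spread in velocity across a realistic range of n is why, once the terrain is well resolved, friction uncertainty can dominate topographic error.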

Relevance:

10.00%

Publisher:

Abstract:

A university degree is effectively a prerequisite for entering the archaeological workforce in the UK. Archaeological employers consider that new entrants to the profession are insufficiently skilled, and hold university training to blame. But university archaeology departments do not consider it their responsibility to deliver fully formed archaeological professionals; rather, they aim to provide an education that can then be applied in different workplaces, within and outside archaeology. The number of individuals studying archaeology at university exceeds the total number working in professional practice, with many more new graduates emerging than archaeological jobs advertised annually. Over-supply of practitioners is also a contributing factor to low pay in archaeology. Steps are being taken to provide opportunities for vocational training, both within and outside the university system, but archaeological training and education within the universities, and subsequently the archaeological labour market, may be adversely affected by the introduction of variable top-up student fees.

Relevance:

10.00%

Publisher:

Abstract:

Rapidly-flowing sectors of an ice sheet (ice streams) can play an important role in abrupt climate change through the delivery of icebergs and meltwater and the subsequent disruption of ocean thermohaline circulation (e.g., the North Atlantic's Heinrich events). Recently, several cores have been raised from the Arctic Ocean which document the existence of massive ice export events during the Late Pleistocene and whose provenance has been linked to source regions in the Canadian Arctic Archipelago. In this paper, satellite imagery is used to map glacial geomorphology in the vicinity of Victoria Island, Banks Island and Prince of Wales Island (Canadian Arctic) in order to reconstruct ice flow patterns in this highly complex glacial landscape. A total of 88 discrete flow-sets are mapped and of these, 13 exhibit the characteristic geomorphology of palaeo-ice streams (i.e., parallel patterns of large, highly elongated mega-scale glacial lineations forming a convergent flow pattern with abrupt lateral margins). Previous studies by other workers and cross-cutting relationships indicate that the majority of these ice streams are relatively young and operated during or immediately prior to deglaciation. Our new mapping, however, documents a large (> 700 km long; 110 km wide) and relatively old ice stream imprint centred in M'Clintock Channel and converging into Viscount Melville Sound. A trough mouth fan located on the continental shelf suggests that it extended along M'Clure Strait and was grounded at the shelf edge. The location of the M'Clure Strait Ice Stream exactly matches the source area of 4 (possibly 5) major ice export events recorded in core PS 1230, raised from Fram Strait, the major ice exit for the Arctic Ocean. These ice export events occur at ~12.9, ~15.6, ~22 and ~29.8 ka (C-14 yr BP) and we argue that they record vigorous episodes of activity of the M'Clure Strait Ice Stream.
The timing of these events is remarkably similar to the North Atlantic's Heinrich events and we take this as evidence that the M'Clure Strait Ice Stream was also activated around the same time. This may hold important implications for the cause of the North Atlantic's Heinrich events and hints at the possibility of a pan-ice sheet response. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

The modelled El Nino-mean state-seasonal cycle interactions in 23 coupled ocean-atmosphere GCMs, including the recent IPCC AR4 models, are assessed and compared to observations and theory. The models show a clear improvement over previous generations in simulating the tropical Pacific climatology. Systematic biases still include too strong a mean and seasonal cycle of the trade winds. El Nino amplitude is shown to be an inverse function of the mean trade winds, in agreement with the observed 1976 shift and with theoretical studies. El Nino amplitude is further shown to be an inverse function of the relative strength of the seasonal cycle: when most of the energy is within the seasonal cycle, little is left for inter-annual signals, and vice versa. An interannual coupling strength (ICS) is defined and its relation with the modelled El Nino frequency is compared to that predicted by theoretical models. An assessment of the modelled El Nino in terms of SST mode (S-mode) or thermocline mode (T-mode) shows that most models are locked into an S-mode and that only a few models exhibit a hybrid mode, as in observations. It is concluded that several basic El Nino-mean state-seasonal cycle relationships proposed by either theory or analysis of observations seem to be reproduced by CGCMs. This is especially true for the amplitude of El Nino and is less clear for its frequency. Most of these relationships, first established for the pre-industrial control simulations, hold for the double and quadruple CO2 stabilized scenarios. The models that exhibit the largest El Nino amplitude change in these greenhouse gas (GHG) increase scenarios are those that exhibit a mode change towards a T-mode (either from S-mode to hybrid or from hybrid to T-mode). This follows the observed 1976 climate shift in the tropical Pacific, and supports the (still debated) finding of studies that associated this shift with increased GHGs.
In many respects, these models are also among those that best simulate the tropical Pacific climatology (ECHAM5/MPI-OM, GFDL-CM2.0, GFDL-CM2.1, MRI-CGM2.3.2, UKMO-HadCM3). Results from this large subset of models suggest the likelihood of increased El Nino amplitude in a warmer climate, though there is considerable spread of El Nino behaviour among the models and the changes in the subsurface thermocline properties that may be important for El Nino change could not be assessed. There are no clear indications of an El Nino frequency change with increased GHG.
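The energy-partition argument above (a strong seasonal cycle leaves little variance for interannual signals) can be sketched numerically by removing the mean seasonal cycle from a monthly SST series and comparing the variance held in each component. The series below is synthetic, with invented amplitudes; it is not CGCM output.

```python
import numpy as np

# Toy partition of a monthly "SST" series into a mean seasonal cycle
# (climatology) and an interannual anomaly. Amplitudes are illustrative:
# a 1.5-unit seasonal cycle and a 0.8-unit ENSO-like 4-year signal.
rng = np.random.default_rng(0)
months = np.arange(240)                                 # 20 years, monthly
seasonal = 1.5 * np.sin(2 * np.pi * months / 12)        # seasonal cycle
interannual = 0.8 * np.sin(2 * np.pi * months / 48)     # 4-year signal
sst = seasonal + interannual + 0.1 * rng.standard_normal(240)

climatology = sst.reshape(20, 12).mean(axis=0)          # mean seasonal cycle
anomaly = sst - np.tile(climatology, 20)                # interannual residual

print("seasonal variance:   ", round(float(np.var(np.tile(climatology, 20))), 3))
print("interannual variance:", round(float(np.var(anomaly)), 3))
```

With these amplitudes the seasonal component holds roughly 1.125 variance units (1.5²/2) against about 0.33 for the anomaly, making concrete the sense in which a dominant seasonal cycle leaves less energy for El Nino-scale variability.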

Relevance:

10.00%

Publisher:

Abstract:

This paper reports on research undertaken by the author into what secondary school drama teachers think they need to possess in terms of subject knowledge in order to operate effectively as subject specialists. ‘Subject knowledge’ is regarded as being multifaceted, and the paper reports on how drama teachers prioritise its different aspects. A discussion of what ‘subject knowledge’ may be seen to encompass reveals interesting tensions between aspects of professional knowledge that are prescribed by statutory dictate and local context, and those that are valued by individual teachers and are manifest in their construction of a professional identity. The paper proposes that making judgements that cast propositional and substantive knowledge, associated with traditionally held academic values, as ‘bad’ or ‘irrelevant’ to drama education, and what Foucault termed ‘subjugated knowledge’ (i.e. local, vernacular, enactive knowledge that eludes inscription) as ‘good’ and more apposite to the work of all those involved in drama education, fails to reflect the complex matrices of values that specialists appear to hold. While the reported research focused on secondary school drama teachers in England, Bourdieu’s conception of field and habitus is invoked to suggest a model which recognises how drama educators more generally may construct a professional identity that necessarily balances personal interests and beliefs with externally imposed demands.

Relevance:

10.00%

Publisher:

Abstract:

The Rome Statute of the International Criminal Court (ICC) is silent on the issue of national truth commissions. How the ICC might treat these bodies and the information they may hold is uncertain. The overlapping nature of the investigations likely to be carried out by the ICC and future truth-seeking bodies may, however, give rise to areas of tension, particularly where truth commissions hold confidential or self-incriminating information. This article questions whether the traditional truth-seeking powers to grant confidentiality and compel the provision of self-incriminating statements are compatible with the prosecutorial framework of the ICC. It considers how such information is likely to be dealt with by the ICC and analyses whether effective truth seeking can be carried out in the absence of such powers.

Relevance:

10.00%

Publisher:

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational given the state of knowledge regarding the consequences of overgrazing. The phenomenon however appears inextricably linked with the objectives that farmers hold. In this research those objectives were elicited first and from their ranking two, ‘asset value of cattle (representing cattle ownership)' and ‘present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that benefits from holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware or uncaring about overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple ‘no overgrazing' rule is an insufficient strategy to maintain the long-term sustainability of the beef production systems in Central Brazil.
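Compromise Programming of the kind the abstract describes selects the option minimising a weighted distance to the "ideal point" at which each objective, taken alone, is at its best. The toy payoff functions below are invented for illustration and are not the paper's model; they show how an optimum that trades cattle ownership against returns can rationally sit above the pure-profit stocking rate.

```python
# Bi-criteria Compromise Programming sketch: pick the stocking rate that
# minimises the L_p distance to the ideal point for two objectives,
# herd asset value and net economic returns. Payoffs are hypothetical.

def asset_value(stocking_rate):
    # More cattle -> more asset value (linear, illustrative).
    return stocking_rate

def net_returns(stocking_rate):
    # Returns peak at a moderate rate, then fall as pasture degrades.
    return stocking_rate * (2.0 - stocking_rate)

rates = [i / 100 for i in range(1, 201)]        # candidate stocking rates
ideal = (max(asset_value(r) for r in rates),
         max(net_returns(r) for r in rates))    # best of each objective alone

def compromise_distance(r, p=2):
    # L_p distance to the ideal point, each criterion normalised by its best.
    d1 = (ideal[0] - asset_value(r)) / ideal[0]
    d2 = (ideal[1] - net_returns(r)) / ideal[1]
    return (d1 ** p + d2 ** p) ** (1 / p)

best = min(rates, key=compromise_distance)
print(f"compromise stocking rate: {best:.2f}")  # prints 1.42, above the
                                                # pure-profit optimum of 1.00
```

The compromise solution deliberately gives up some profit to hold more cattle, which is the sense in which "some level of overgrazing appears rational" once cattle ownership enters the objective set.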


Relevance:

10.00%

Publisher:

Abstract:

This article critically examines the challenges that come with implementing the Extractive Industries Transparency Initiative (EITI), a policy mechanism marketed by donors and Western governments as a key to facilitating economic improvement in resource-rich developing countries, in sub-Saharan Africa. The forces behind the EITI contend that impoverished institutions, the embezzlement of petroleum and/or mineral revenues, and a lack of transparency are the chief reasons why resource-rich sub-Saharan Africa is underperforming economically, and that implementation of the EITI, with its foundation of good governance, will help address these problems. The position here, however, is that the task is by no means straightforward: the EITI is not necessarily a blueprint for facilitating good governance in the region's resource-rich countries. It is concluded that the EITI is a policy mechanism that could prove to be effective with significant institutional change in host African countries but, on its own, is incapable of reducing corruption and mobilizing citizens to hold government officials accountable for hoarding profits from extractive industry operations.

Relevance:

10.00%

Publisher:

Abstract:

Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred.
Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
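One widely used adaptive-design device of the general kind discussed above is the inverse-normal combination rule: stage-wise p-values are converted to z-scores and combined with pre-specified weights, which is what permits a mid-trial sample-size change without inflating the type I error. This is a generic sketch, not necessarily either of the specific approaches compared in the paper; the p-values and equal weights are invented.

```python
from math import sqrt
from statistics import NormalDist

# Inverse-normal combination of two independent stage-wise p-values.
# Weights w1, w2 must be fixed before the interim analysis.

def inverse_normal_combination(p1, p2, w1, w2):
    """Combined z-statistic from two independent stage-wise p-values."""
    nd = NormalDist()
    z1, z2 = nd.inv_cdf(1 - p1), nd.inv_cdf(1 - p2)
    return (w1 * z1 + w2 * z2) / sqrt(w1 ** 2 + w2 ** 2)

# Illustrative stage-wise results: neither stage is significant alone,
# but the combined evidence crosses the one-sided 2.5% boundary.
z = inverse_normal_combination(p1=0.08, p2=0.02, w1=1.0, w2=1.0)
reject = z > NormalDist().inv_cdf(1 - 0.025)
print(f"combined z = {z:.3f}, reject H0: {reject}")
```

Because the weights are fixed in advance rather than proportional to the realised sample sizes, the final statistic is generally not a sufficient statistic for the treatment difference, which is exactly the power criticism the abstract raises.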

Relevance:

10.00%

Publisher:

Abstract:

Uplands around the world are facing significant social, economic and environmental changes, and decision-makers need to better understand what the future may hold if they are to adapt and maintain upland goods and services. This paper draws together all major research comprising eight studies that have used scenarios to describe possible futures for UK uplands. The paper evaluates which scenarios are perceived by stakeholders to be most likely and desirable, and assesses the benefits and drawbacks of the scenario methods used in UK uplands to date. Stakeholders agreed that the most desirable and likely scenario would be a continuation of hill farming (albeit at reduced levels) based on cross-compliance with environmental measures. The least desirable scenario is a withdrawal of government financial support for hill farming. Although this was deemed by stakeholders to be the least likely scenario, the loss of government support warrants close attention due to its potential implications for the local economy. Stakeholders noted that the environmental implications of this scenario are much less clear-cut. As such, there is an urgent need to understand the full implications of this scenario, so that upland stakeholders can adequately prepare, and policy-makers can better evaluate the likely implications of different policy options. The paper concludes that in future, upland scenario research needs to: (1) better integrate in-depth and representative participation from stakeholders during both scenario development and evaluation; and (2) make more effective use of visualisation techniques and simulation models. (C) 2009 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Some families of mammalian interspersed repetitive DNA, such as the Alu SINE sequence, appear to have evolved by the serial replacement of one active sequence with another, consistent with there being a single source of transposition: the "master gene." Alternative models, in which multiple source sequences are simultaneously active, have been called "transposon models." Transposon models differ in the proportion of elements that are active and in whether inactivation occurs at the moment of transposition or later. Here we examine the predictions of various types of transposon model regarding the patterns of sequence variation expected at an equilibrium between transposition, inactivation, and deletion. Under the master gene model, all bifurcations in the true tree of elements occur in a single lineage. We show that this property will also hold approximately for transposon models in which most elements are inactive and where at least some of the inactivation events occur after transposition. Such tree shapes are therefore not conclusive evidence for a single source of transposition.
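The single-lineage property can be illustrated with a toy simulation that records which existing element sources each transposition. Under a master-gene model every bifurcation is added to the same source lineage; under a transposon model in which any copy may be active, bifurcations are spread across many lineages. This is a deliberately minimal sketch of the contrast between the two models, not the authors' analysis.

```python
import random

# Toy contrast between the "master gene" model (only element 0 transposes)
# and a transposon model (any existing copy may transpose). Each event
# records its source element, i.e. the lineage receiving a bifurcation.

def simulate(n_events, master_only, seed=1):
    rng = random.Random(seed)
    sources = []                  # which element sourced each transposition
    for step in range(n_events):
        pool = [0] if master_only else list(range(step + 1))
        sources.append(rng.choice(pool))
    return sources

def distinct_bifurcating_lineages(sources):
    """Number of distinct lineages on which bifurcations occur."""
    return len(set(sources))

master = simulate(50, master_only=True)
transposon = simulate(50, master_only=False)
print("master model lineages with bifurcations:    ",
      distinct_bifurcating_lineages(master))       # always 1
print("transposon model lineages with bifurcations:",
      distinct_bifurcating_lineages(transposon))   # many
```

The abstract's point is that an intermediate regime (most elements inactive, inactivation after transposition) also concentrates bifurcations on few lineages, so a comb-like tree alone cannot distinguish the models.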

Relevance:

10.00%

Publisher:

Abstract:

Phylogenetic methods hold great promise for the reconstruction of the transition from precursor to modern flora and the identification of the underlying factors which drive the process. The phylogenetic methods presently used to address the question of the origin of the Cape flora of South Africa are considered here. The sampling requirements of each of these methods, which include dating of diversifications using calibrated molecular trees, sister-pair comparisons, lineage-through-time plots and biogeographical optimizations, are reviewed. Sampling of genes, genomes and species is considered. Although more higher-level studies and increased sampling are required for robust interpretation, it is clear that much progress has already been made. It is argued that, despite its remarkable richness, the Cape flora is a valuable model system to demonstrate the utility of phylogenetic methods in determining the history of a modern flora.