96 results for interim
Abstract:
The ability of the climate models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5) to simulate North Atlantic extratropical cyclones in winter [December–February (DJF)] and summer [June–August (JJA)] is investigated in detail. Cyclones are identified as maxima in T42 vorticity at 850 hPa and their propagation is tracked using an objective feature-tracking algorithm. By comparing the historical CMIP5 simulations (1976–2005) and the ECMWF Interim Re-Analysis (ERA-Interim; 1979–2008), the authors find that systematic biases affect the number and intensity of North Atlantic cyclones in CMIP5 models. In DJF, the North Atlantic storm track tends to be either too zonal or displaced southward, leading to too few and too weak cyclones over the Norwegian Sea and too many cyclones in central Europe. In JJA, the position of the North Atlantic storm track is generally well captured, but some CMIP5 models underestimate the total number of cyclones. The dynamical intensity of cyclones, as measured by either T42 vorticity at 850 hPa or mean sea level pressure, is too weak in both DJF and JJA. The intensity bias has a hemispheric character, and it cannot be simply attributed to the representation of the North Atlantic large-scale atmospheric state. Despite these biases, the representation of Northern Hemisphere (NH) storm tracks has improved since CMIP3, and some CMIP5 models are able to represent well both the number and the intensity of North Atlantic cyclones. In particular, some of the higher-atmospheric-resolution models tend to have a better representation of the tilt of the North Atlantic storm track and of the intensity of cyclones in DJF.
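The identification step described above, finding cyclones as maxima in an 850 hPa vorticity field, can be sketched in a few lines. This is only a toy illustration under invented assumptions (a plain 2-D list as the grid, an 8-neighbour maximum test, an arbitrary threshold), not the objective feature-tracking algorithm the study actually uses:

```python
def local_maxima(field, threshold):
    """Return (i, j) grid points whose value exceeds `threshold` and all
    eight neighbours -- a toy version of identifying cyclone centres as
    maxima in a vorticity-like 2-D field (boundary rows/columns skipped)."""
    ni, nj = len(field), len(field[0])
    maxima = []
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            v = field[i][j]
            if v <= threshold:
                continue
            neighbours = [field[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if all(v > n for n in neighbours):
                maxima.append((i, j))
    return maxima

# Invented example grid: a single peak at (2, 2).
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 5.0
centres = local_maxima(grid, threshold=1.0)
```

A full tracker would then link such maxima across successive time steps to build cyclone tracks; only the detection step is sketched here.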
Abstract:
An eddy-permitting ¼° global ocean reanalysis based on the operational Met Office FOAM data assimilation system has been run for 1989–2010, forced by ERA-Interim meteorology. Freshwater and heat transports are compared with published estimates globally and in each basin, with special focus on the Atlantic. The meridional transports agree with observations within errors at most locations, but where eddies are active the transports by the mean flow are nearly always in better agreement than the total transports. Eddy transports are down gradient and are enhanced relative to a free run. They may oppose or reinforce mean transports and provide 40–50% of the total transport near midlatitude fronts, where eddies with time scales <1 month provide up to 15%. Basin-scale freshwater convergences are calculated for the Arctic/Atlantic, Indian, and Pacific oceans north of 32°S, all implying net evaporation of 0.33 ± 0.04 Sv, 0.65 ± 0.07 Sv, and 0.09 ± 0.04 Sv, respectively, within the uncertainty of observations in the Atlantic and Pacific. The Indian is more evaporative and the Southern Ocean has more precipitation (1.07 Sv). Air–sea fluxes are modified by assimilation, which influences turbulent heat fluxes and evaporation. Generally, surface and assimilation fluxes together match the meridional transports, indicating that the reanalysis is close to a steady state. Atlantic overturning and gyre transports are assessed, with overturning freshwater transports southward at all latitudes. At 26°N eddy transports are negligible, the overturning transport is 0.67 ± 0.19 Sv southward and the gyre transport is 0.44 ± 0.17 Sv northward, with a divergence between 26°N and the Bering Strait of 0.13 ± 0.23 Sv over 2004–2010.
Abstract:
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control and do not have an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
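The inferential problem described above can be made concrete with a small simulation. The sketch below (arm count, sample sizes, and effect values are all invented for illustration) shows why naive single-treatment inference is no longer appropriate after interim selection: even when every true treatment effect is zero, the sample mean of the apparently best arm is positive on average, so an unadjusted confidence interval is miscentred.

```python
import random
import statistics

def simulate_selection_bias(k=3, n=50, true_effect=0.0,
                            n_trials=2000, seed=1):
    """Average naive estimate for the selected arm when the best of k
    experimental arms (unit-variance outcomes, n patients each) is
    chosen at an interim analysis. A toy illustration, not any of the
    specific methods reviewed in the paper."""
    rng = random.Random(seed)
    selected_means = []
    for _ in range(n_trials):
        # Stage-1 sample mean for each experimental arm.
        arm_means = [
            statistics.mean(rng.gauss(true_effect, 1.0) for _ in range(n))
            for _ in range(k)
        ]
        # Select the apparently most effective arm, as in these designs.
        selected_means.append(max(arm_means))
    return statistics.mean(selected_means)

# With all true effects equal to zero, the naive estimate of the
# selected arm's effect is still clearly positive on average.
bias = simulate_selection_bias()
```

The confidence-interval constructions compared in the paper are precisely those designed to correct for this selection effect; the sketch only demonstrates why a correction is needed.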
Abstract:
This paper provides a high-level overview of E-UTRAN interworking and interoperability with existing Third Generation Partnership Project (3GPP) and non-3GPP wireless networks. E-UTRAN access networks (LTE and LTE-A) are currently the latest technologies for 3GPP evolution, specified in Release 8, 9 and beyond. These technologies promise higher throughputs and lower latency while also reducing the cost of delivering services to fit subscriber demands. 3GPP offers a direct transition path from the current 3GPP UTRAN/GERAN networks to LTE, including seamless handover. Interworking between E-UTRAN and other wireless networks is an option that allows operators to maximize the life of their existing network components before a complete transition to truly 4G networks. Network convergence, backward compatibility and interoperability are regarded as the next major challenge in the evolution and integration of mobile wireless communications. In this paper, interworking and interoperability between the E-UTRAN Evolved Packet Core (EPC) architecture and 3GPP, 3GPP2 and IEEE-based networks are clearly explained. How the EPC is designed to deliver multimedia and facilitate interworking is also explained. Moreover, the seamless handover needed to perform this interworking efficiently is described briefly. This study shows that interoperability and interworking between existing networks and E-UTRAN are highly recommended as an interim solution before the transition to full 4G. Furthermore, wireless operators have to consider a clear interoperability and interworking plan for their existing networks before deciding to migrate completely to LTE. Interworking not only provides communication between different wireless networks; in many scenarios it also adds technical enhancements to one or both environments.
Abstract:
Understanding the nature of air parcels that exhibit ice-supersaturation is important because they are the regions of potential formation of both cirrus and aircraft contrails, which affect the radiation balance. Ice-supersaturated air parcels in the upper troposphere and lower stratosphere over the North Atlantic are investigated using Lagrangian trajectories. The trajectory calculations use ERA-Interim data for three winter and three summer seasons, resulting in approximately 200,000 trajectories with ice-supersaturation for each season. For both summer and winter, the median duration of ice-supersaturation along a trajectory is less than 6 hours. Only 5% of air that becomes ice-supersaturated in the troposphere, and 23% of air that becomes ice-supersaturated in the stratosphere, remains ice-supersaturated for at least 24 hours. Weighting the ice-supersaturation duration with the observed frequency indicates the likely overall importance of the longer-duration ice-supersaturated trajectories. Ice-supersaturated air parcels typically experience a decrease in moisture content while ice-supersaturated, suggesting that cirrus clouds eventually form in the majority of such air. A comparison is made between short-lived (less than 24 h) and long-lived (greater than 24 h) ice-supersaturated air flows. For both, ice-supersaturation occurs around the northernmost part of the trajectory. Short-lived ice-supersaturated air flows show no significant differences in speed or direction of movement from subsaturated air parcels. However, long-lived ice-supersaturation occurs in slower-moving air flows, implying that it is not associated with the fastest-moving air through a jet stream.
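The duration statistics above amount to run-length analysis of ice relative humidity along each trajectory. A minimal sketch, assuming 6-hourly trajectory sampling and the conventional 100% ice-saturation threshold (the sample series below is invented):

```python
def supersaturated_spells(rhi, step_hours=6.0):
    """Durations (hours) of consecutive runs with relative humidity with
    respect to ice above 100% along one trajectory sampled every
    `step_hours`. A toy version of the spell-duration bookkeeping."""
    spells, run = [], 0
    for value in rhi:
        if value > 100.0:
            run += 1
        elif run:
            spells.append(run * step_hours)
            run = 0
    if run:  # close a spell that reaches the end of the trajectory
        spells.append(run * step_hours)
    return spells

# Invented trajectory: one 12-hour spell, then one 6-hour spell.
spells = supersaturated_spells([95.0, 105.0, 110.0, 98.0, 101.0])
```

Median spell length and the fraction of spells exceeding 24 h, as reported above, would then follow from pooling such spell lists over all trajectories.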
Abstract:
The ability of the HiGEM climate model to represent high-impact, regional precipitation events is investigated in two ways. The first focusses on a case study of extreme regional accumulation of precipitation during the passage of a summer extratropical cyclone across southern England on 20 July 2007 that resulted in a national flooding emergency. The climate model is compared with a global Numerical Weather Prediction (NWP) model and higher-resolution, nested limited-area models. While the climate model does not simulate the timing and location of the cyclone and associated precipitation as accurately as the NWP simulations, the total accumulated precipitation in all models is similar to the rain gauge estimate across England and Wales. The regional accumulation over the event is insensitive to horizontal resolution for grid spacings ranging from 90 km to 4 km. Second, the free-running climate model reproduces the statistical distribution of daily precipitation accumulations observed in the England–Wales precipitation record. The model distribution diverges increasingly from the record for longer accumulation periods, with a consistent under-representation of the more intense multi-day accumulations. This may indicate a lack of low-frequency variability associated with weather-regime persistence. Despite this, the overall seasonal and annual precipitation totals from the model are still comparable to those from ERA-Interim.
Abstract:
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, at the interim analysis data may be available on some early outcome for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. Across a wide range of true parameter values, the new method is shown to perform similarly to, and in some cases better than, either of the two previously proposed methods.
Abstract:
Purpose The research objective of this study is to understand how institutional changes to the EU regulatory landscape may affect corresponding institutionalized operational practices within financial organizations. Design/methodology/approach The study adopts an Investment Management System as its case and investigates different implementations of this system within eight financial organizations, predominantly focused on investment banking and asset management activities within capital markets. At the systems vendor site, senior systems consultants and client relationship managers were interviewed. Within the financial organizations, compliance, risk and systems experts were interviewed. Findings The study empirically tests modes of institutional change. Displacement and Layering were found to be the most prevalent modes. However, the study highlights how the outcomes of Displacement and Drift may be similar in effect, as both modes may cause compliance gaps. The research highlights how changes in regulations may create gaps in systems and processes which, in the short term, need to be plugged by manual processes. Practical implications Vendors' abilities to manage institutional change caused by Drift, Displacement, Layering and Conversion, and their ability to efficiently and quickly translate institutional variables into structured systems, have the power to ease the pain and cost of compliance as well as to reduce the risk of breaches by reducing the need for interim manual systems. Originality/value The study makes a contribution by applying recent theoretical concepts of institutional change to the topic of regulatory change and uses this analysis to provide insight into the effects of this new environment.
Abstract:
Policy-makers are creating mechanisms to help developing countries cope with loss and damage from climate change, but the negotiations are largely neglecting scientific questions about what the impacts of climate change actually are. Mitigation efforts have failed to prevent the continued increase of anthropogenic greenhouse gas (GHG) emissions. Adaptation is now unlikely to be sufficient to prevent negative impacts from current and future climate change [1]. In this context, vulnerable nations argue that existing frameworks to promote mitigation and adaptation are inadequate, and have called for a third international mechanism to deal with residual climate change impacts, or "loss and damage" [2]. In 2013, the United Nations Framework Convention on Climate Change (UNFCCC) responded to these calls and established the Warsaw International Mechanism (WIM) to address loss and damage from the impacts of climate change in developing countries [3]. An interim Executive Committee of party representatives has been set up and is currently drafting a two-year workplan comprising meetings, reports, and expert groups, aiming to enhance knowledge and understanding of loss and damage, strengthen dialogue among stakeholders, and promote enhanced action and support. Issues identified as priorities for the WIM thus far include: how to deal with non-economic losses, such as loss of life, livelihood, and cultural heritage; and linkages between loss and damage and patterns of migration and displacement [2]. In all this, one fundamental issue still demands our attention: which losses and damages are relevant to the WIM? What counts as loss and damage from climate change?
Abstract:
A new generation of reanalysis products is currently being produced that provides global gridded atmospheric data spanning more than a century. Such data may be useful for characterising the observed long-term variability of extreme precipitation events, particularly in regions where spatial coverage of surface observations is limited, and in the pre-satellite era. An analysis of extreme precipitation events is performed over England and Wales, investigating the ability of the Twentieth Century Reanalysis and ERA-Interim to represent extreme precipitation accumulations as recorded in the England and Wales Precipitation dataset on accumulation time-scales from 1 to 7 days. Significant correlations are found between daily precipitation accumulation observations and both reanalysis products. A hit-rate analysis indicates that the reanalyses have hit rates (for events above the 98th percentile) of approximately 40–65% for extreme events in both summer (JJA) and winter (DJF). This suggests that both ERA-Interim and the Twentieth Century Reanalysis have only limited skill in representing individual extreme precipitation events over England and Wales.
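The hit-rate measure described above can be sketched as follows. This is a minimal illustration, assuming each series is thresholded by its own 98th percentile (the exact event definition in the study may differ) and using invented data:

```python
import statistics

def hit_rate(obs, model, pct=98):
    """Fraction of observed extreme days (above the pct-th percentile of
    the observations) on which the model series is also above its own
    pct-th percentile -- a toy version of the hit-rate analysis."""
    t_obs = statistics.quantiles(obs, n=100)[pct - 1]
    t_mod = statistics.quantiles(model, n=100)[pct - 1]
    extreme_days = [i for i, v in enumerate(obs) if v > t_obs]
    if not extreme_days:
        return float("nan")
    hits = sum(1 for i in extreme_days if model[i] > t_mod)
    return hits / len(extreme_days)

# Invented sanity checks: a perfectly matched series scores 1.0,
# a reversed one scores 0.0.
obs = [float(i) for i in range(1000)]
perfect = hit_rate(obs, obs)
worst = hit_rate(obs, obs[::-1])
```

Hit rates of 40–65%, as reported above, would sit between these two extremes, which is why the reanalyses are judged to have only limited skill for individual events.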
Abstract:
This study assesses the influence of the El Niño–Southern Oscillation (ENSO) on global tropical cyclone activity using a 150-yr-long integration with a high-resolution coupled atmosphere–ocean general circulation model [High-Resolution Global Environmental Model (HiGEM); with N144 resolution: ~90 km in the atmosphere and ~40 km in the ocean]. Tropical cyclone activity is compared to an atmosphere-only simulation using the atmospheric component of HiGEM (HiGAM). Observations of tropical cyclones in the International Best Track Archive for Climate Stewardship (IBTrACS) and tropical cyclones identified in the Interim ECMWF Re-Analysis (ERA-Interim) are used to validate the models. Composite anomalies of tropical cyclone activity in El Niño and La Niña years are used. HiGEM is able to capture the shift in tropical cyclone locations in response to ENSO in the Pacific and Indian Oceans. However, HiGEM does not capture the expected ENSO–tropical cyclone teleconnection in the North Atlantic. HiGAM shows more skill in simulating the global ENSO–tropical cyclone teleconnection; however, variability in the Pacific is overly pronounced. HiGAM is able to capture the ENSO–tropical cyclone teleconnection in the North Atlantic more accurately than HiGEM. An investigation into the large-scale environmental conditions known to influence tropical cyclone activity is used to further understand the response of tropical cyclone activity to ENSO in the North Atlantic and western North Pacific. The vertical wind shear response over the Caribbean is not captured in HiGEM compared to HiGAM and ERA-Interim. Biases in the mean ascent at 500 hPa in HiGEM remain in HiGAM over the western North Pacific; however, a more realistic low-level vorticity in HiGAM results in a more accurate ENSO–tropical cyclone teleconnection.
Abstract:
ERA-Interim reanalysis data from the past 35 years have been used with a newly developed feature-tracking algorithm to identify Indian monsoon depressions originating in or near the Bay of Bengal. These were then rotated, centralised and combined to give a fully three-dimensional 106-depression composite structure – a considerably larger sample than in any previous detailed study of monsoon depressions and their structure. Many known features of depression structure are confirmed, particularly the existence of a maximum to the southwest of the centre in rainfall and other fields, and a westward axial tilt in others. Additionally, the depressions are found to have significant asymmetry due to the presence of the Himalayas; a bimodal mid-tropospheric potential vorticity core; a separation near the surface into thermally cold-core (~ −1.5 K) and neutral-core (~0 K) depressions with distinct properties; and a centre with very large CAPE and very small CIN. Variability as a function of background state has also been explored, with land/coast/sea, diurnal, ENSO, active/break and Indian Ocean Dipole contrasts considered. Depressions are found to be markedly stronger during the active phase of the monsoon, as well as during La Niña. Depressions on land are shown to be more intense and more tightly constrained to the central axis. A detailed schematic diagram of a vertical cross-section through a composite depression is also presented, showing its inherent asymmetric structure.
Abstract:
Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (The Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. 
We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
Abstract:
Since 2007 a large decline in Arctic sea ice has been observed. The large-scale atmospheric circulation response to this decline is investigated in ERA-Interim reanalyses and HadGEM3 climate model experiments. In winter, post-2007 observed circulation anomalies over the Arctic, North Atlantic and Eurasia are small compared to interannual variability. In summer, the post-2007 observed circulation is dominated by an anticyclonic anomaly over Greenland, which has a large signal-to-noise ratio. Climate model experiments driven by observed SST and sea ice anomalies are able to capture the summertime pattern of observed circulation anomalies, although the magnitude is a third of that observed. The experiments suggest that high SSTs and reduced sea ice in the Labrador Sea lead to positive temperature anomalies in the lower troposphere, which weaken the westerlies over North America through thermal wind balance. The experiments also capture cyclonic anomalies over Northwest Europe, which are consistent with downstream Rossby wave propagation.
Abstract:
In an adaptive seamless phase II/III clinical trial, interim analysis data are used for treatment selection, enabling resources to be focused on comparing the more effective treatment(s) with a control. In this paper, we compare two methods recently proposed to enable the use of short-term endpoint data for decision-making at the interim analysis. The comparison focuses on the power and the probability of correctly identifying the most promising treatment. We show that the choice of method depends on how well short-term data predict the best treatment, which may be measured by the correlation between treatment effects on short- and long-term endpoints.