83 results for Essential-state models
Abstract:
The ability of the climate models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5) to simulate North Atlantic extratropical cyclones in winter [December–February (DJF)] and summer [June–August (JJA)] is investigated in detail. Cyclones are identified as maxima in T42 vorticity at 850 hPa and their propagation is tracked using an objective feature-tracking algorithm. By comparing the historical CMIP5 simulations (1976–2005) and the ECMWF Interim Re-Analysis (ERA-Interim; 1979–2008), the authors find that systematic biases affect the number and intensity of North Atlantic cyclones in CMIP5 models. In DJF, the North Atlantic storm track tends to be either too zonal or displaced southward, thus leading to too few and too weak cyclones over the Norwegian Sea and too many cyclones in central Europe. In JJA, the position of the North Atlantic storm track is generally well captured but some CMIP5 models underestimate the total number of cyclones. The dynamical intensity of cyclones, as measured by either T42 vorticity at 850 hPa or mean sea level pressure, is too weak in both DJF and JJA. The intensity bias has a hemispheric character, and it cannot be simply attributed to the representation of the North Atlantic large-scale atmospheric state. Despite these biases, the representation of Northern Hemisphere (NH) storm tracks has improved since CMIP3 and some CMIP5 models are able to represent well both the number and the intensity of North Atlantic cyclones. In particular, some of the higher-atmospheric-resolution models tend to have a better representation of the tilt of the North Atlantic storm track and of the intensity of cyclones in DJF.
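For illustration only, the identification step described above (cyclone centres as maxima of 850 hPa relative vorticity) can be sketched as a simple local-maximum search on a gridded field. This is a minimal sketch, not the authors' objective feature-tracking algorithm, and the threshold value is a hypothetical placeholder.

import numpy as np
from scipy.ndimage import maximum_filter

def find_vorticity_maxima(vort850, threshold=1e-5):
    # A grid point is a cyclone candidate if it equals the maximum of its
    # 3x3 neighbourhood and exceeds a (purely illustrative) vorticity threshold.
    # mode="wrap" treats the grid as periodic, which is adequate for a sketch.
    neighbourhood_max = maximum_filter(vort850, size=3, mode="wrap")
    is_peak = (vort850 == neighbourhood_max) & (vort850 > threshold)
    return np.argwhere(is_peak)   # array of (lat_index, lon_index) pairs

# Synthetic example: a 64 x 128 relative vorticity field in s^-1
vorticity = np.random.randn(64, 128) * 1e-5
centres = find_vorticity_maxima(vorticity)

A full tracker would additionally link these maxima between time steps and filter short-lived or weak tracks.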
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored for either regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, others relying on remotely sensed inundation datasets, and others taking an approach intermediate between the two. Four major conclusions emerged from the project. First, the models demonstrate extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 % globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
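For orientation, taking the quoted ±40% spread around the all-model mean at face value gives the following approximate range of simulated global emissions:

\[
190 \pm 0.40 \times 190\ \mathrm{Tg\,CH_4\,yr^{-1}} \approx 114\text{--}266\ \mathrm{Tg\,CH_4\,yr^{-1}}.
\]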
Abstract:
Sea ice friction models are necessary to predict the nature of interactions between sea ice floes. These interactions are of interest on a range of scales, for example, to predict loads on engineering structures in icy waters or to understand the basin-scale motion of sea ice. Many models use Amontons' friction law due to its simplicity. More advanced models allow for hydrodynamic lubrication and refreezing of asperities; however, modeling these processes leads to greatly increased complexity. In this paper we propose, by analogy with rock physics, that a rate- and state-dependent friction law allows us to incorporate memory (and thus the effects of lubrication and bonding) into ice friction models without a great increase in complexity. We support this proposal with experimental data on both the laboratory (∼0.1 m) and ice tank (∼1 m) scales. These experiments show that the effects of static contact under normal load can be incorporated into a friction model. We find the parameters for a first-order rate-and-state model to be A = 0.310, B = 0.382, and μ0 = 0.872. Such a model then allows us to make predictions about the nature of memory effects in moving ice-ice contacts.
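For reference, a first-order rate- and state-dependent law of the Dieterich–Ruina type is commonly written as below; the exact functional form, reference velocity v0 and critical slip distance Dc used in the paper are not given in this abstract, so this is only the standard textbook expression with the quoted parameter values substituted.

\[
\mu = \mu_0 + A \ln\!\left(\frac{v}{v_0}\right) + B \ln\!\left(\frac{v_0\,\theta}{D_c}\right), \qquad
\frac{d\theta}{dt} = 1 - \frac{v\,\theta}{D_c},
\]

with \(\mu_0 = 0.872\), \(A = 0.310\) and \(B = 0.382\), where v is the sliding velocity and θ a state variable that carries the memory of the contact.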
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends both on how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
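For context, a two-regime self-exciting threshold autoregressive (SETAR) model of the general kind considered here can be written as follows; the specific lag orders, delay and threshold used by Kräger and Kugler are not stated in this abstract, so the form below is generic.

\[
y_t =
\begin{cases}
\phi_0^{(1)} + \sum_{i=1}^{p}\phi_i^{(1)} y_{t-i} + \varepsilon_t, & y_{t-d} \le r, \\
\phi_0^{(2)} + \sum_{i=1}^{p}\phi_i^{(2)} y_{t-i} + \varepsilon_t, & y_{t-d} > r,
\end{cases}
\]

where r is the threshold and d the delay, so the active regime is selected by a lagged value of the series itself.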
Abstract:
In this paper we discuss the current state-of-the-art in estimating, evaluating, and selecting among non-linear forecasting models for economic and financial time series. We review theoretical and empirical issues, including predictive density, interval and point evaluation and model selection, loss functions, data-mining, and aggregation. In addition, we argue that although the evidence in favor of constructing forecasts using non-linear models is rather sparse, there is reason to be optimistic. However, much remains to be done. Finally, we outline a variety of topics for future research, and discuss a number of areas which have received considerable attention in the recent literature, but where many questions remain.
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide better descriptions of the data than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
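As a point of reference, a basic two-state Markov switching autoregression of the kind compared here can be written as follows; the actual specification (number of states, lag order) used in the paper is not given in this abstract.

\[
y_t = \mu_{S_t} + \phi_{S_t}\, y_{t-1} + \sigma_{S_t}\,\varepsilon_t, \qquad
P\left(S_t = j \mid S_{t-1} = i\right) = p_{ij}, \quad S_t \in \{1,2\},
\]

so the mean, persistence and variance all switch with a latent state governed by fixed transition probabilities, in contrast to the threshold autoregressive model, where the regime is determined by an observable lagged value of the series.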
Abstract:
In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams must be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms with the ability to monitor and adapt the working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the context of Collaborative Data Stream Mining, where agents share knowledge to learn accurate, adaptive models.
Abstract:
Comparison of single-forcing varieties of 20th century historical experiments in a subset of models from the Fifth Coupled Model Intercomparison Project (CMIP5) reveals that South Asian summer monsoon rainfall increases towards the present day in Greenhouse Gas (GHG)-only experiments with respect to pre-industrial levels, while it decreases in anthropogenic aerosol-only experiments. Comparison of these single-forcing experiments with the all-forcings historical experiment suggests aerosol emissions have dominated South Asian monsoon rainfall trends in recent decades, especially during the 1950s to 1970s. The variations in South Asian monsoon rainfall in these experiments follow approximately the time evolution of the inter-hemispheric temperature gradient over the same period, suggesting a contribution from the large-scale background state relating to the asymmetric distribution of aerosol emissions about the equator. By examining the 24 available all-forcings historical experiments, we show that models including aerosol indirect effects dominate the negative rainfall trend. Indeed, models including only the direct radiative effect of aerosol show an increase in monsoon rainfall, consistent with the dominance of increasing greenhouse gas emissions and planetary warming on monsoon rainfall in those models. For South Asia, reduced rainfall in the models with indirect effects is related to decreased evaporation at the land surface rather than to anomalies in horizontal moisture flux, suggesting the impact of indirect effects acting on local aerosol emissions. This is confirmed by examination of aerosol loading and cloud droplet number trends over the South Asia region. Thus, while remote aerosols and their asymmetric distribution about the equator play a role in setting the inter-hemispheric temperature distribution on which the South Asian monsoon, as one of the global monsoons, operates, the addition of indirect aerosol effects acting on very local aerosol emissions also plays a role in declining monsoon rainfall. The disparity between the response of monsoon rainfall to increasing aerosol emissions in models containing direct aerosol effects only and those also containing indirect effects needs to be urgently investigated, since the suggested future decline in Asian anthropogenic aerosol emissions inherent to the representative concentration pathways (RCPs) used for future climate projection may turn out to be optimistic. In addition, both groups of models show declining rainfall over China, also relating to local aerosol mechanisms. We hypothesize that aerosol emissions over China are large enough, in the CMIP5 models, to cause declining monsoon rainfall even in the absence of indirect aerosol effects. The same is not true for India.
Abstract:
China’s financial system has experienced a series of major reforms in recent years. Efforts have been made towards introducing the shareholding system in state-owned commercial banks, restructuring securities firms, re-organising the equity of joint venture insurance companies, further improving the corporate governance structure, managing financial risks and, ultimately, establishing a system to protect investors (Xinhua, 2010). Financial product innovation, together with the further opening up of financial markets and the development of the insurance and bond markets, has increased liquidity as well as reduced financial risks. Financial innovations can benefit the economy but, as the U.S. subprime crisis indicated, without proper control they may lead to unexpected consequences. Kirkpatrick (2009) argues that failures and weaknesses in corporate governance arrangements and insufficient accounting standards and regulatory requirements contributed to the financial crisis. Like the financial crises of the previous decade, the global financial crisis which erupted in 2008 exposed a variety of significant corporate governance failures: the dysfunction of market mechanisms, the lack of transparency and accountability, misaligned compensation arrangements and the late response of government, all of which encouraged management short-termism, poor risk management, as well as some fraudulent schemes. The unique characteristics of the Chinese banking system make it an interesting case for studying post-crisis corporate governance reform. Considering that China modelled its governance system on the Anglo-American system, this paper examines the impact of the financial crisis on corporate governance reform in developed economies and, particularly, China’s reform of its financial sector. The paper further analyses the Chinese government’s role in bank supervision and risk management. In this regard, the paper contributes to the corporate governance literature within the Chinese context by providing insights into the factors contributing to the corporate governance failures that led to the global financial crisis. It also provides policy recommendations for China’s policy makers to seriously consider. The results suggest a need for the re-examination of corporate governance adequacy and the institutionalisation of business ethics. The paper’s next section provides a review of China’s financial system with reference to the financial crisis, followed by a critical evaluation of a capitalistic system and a review of Anglo-American and Continental European models. It then analyses the need for a new corporate governance model in China by considering the bank failures in developed economies and the potential risks and inefficiencies in the current state-controlled system. The paper closes by reflecting on the need for Chinese policy makers to continually develop, adapt and rewrite corporate governance practices capable of meeting the new challenge, and to pay attention to business ethics, an issue which goes beyond regulation.
Abstract:
We compare five general circulation models (GCMs) which have been recently used to study hot extrasolar planet atmospheres (BOB, CAM, IGCM, MITgcm, and PEQMOD), under three test cases useful for assessing model convergence and accuracy. Such a broad, detailed intercomparison has not been performed thus far for the study of extrasolar planets. The models considered all solve the traditional primitive equations, but employ different numerical algorithms or grids (e.g., pseudospectral and finite volume, with the latter separately in longitude-latitude and ‘cubed-sphere’ grids). The test cases are chosen to cleanly address specific aspects of the behaviors typically reported in hot extrasolar planet simulations: 1) steady-state, 2) nonlinearly evolving baroclinic wave, and 3) response to fast timescale thermal relaxation. When initialized with a steady jet, all models maintain the steadiness, as they should—except MITgcm in the cubed-sphere grid. Very good agreement is obtained for a baroclinic wave evolving from an initial instability in the pseudospectral models (only). However, exact numerical convergence is still not achieved across the pseudospectral models: amplitudes and phases are observably different. When subject to a typical ‘hot-Jupiter’-like forcing, all five models show quantitatively different behavior—although qualitatively similar, time-variable, quadrupole-dominated flows are produced. Hence, as has been advocated in several past studies, specific quantitative predictions (such as the location of large vortices and hot regions) by GCMs should be viewed with caution. Overall, in the tests considered here, the pseudospectral models in pressure coordinates (PEBOB and PEQMOD) perform the best and MITgcm in the cubed-sphere grid performs the worst.
Abstract:
Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and an urban land surface model (ULSM) (here two ULSMs are considered). The former simulates daytime CBL height, air temperature and humidity, and the latter estimates urban surface energy and water balance fluxes accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural sites, one irrigated and one unirrigated grass, in Sacramento, U.S.A. All the modelled variables compare well with measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and requires the initial state of the CBL model to lie within an appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be rapidly applied, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
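The slab CBL component referred to here typically grows the mixed layer through surface heating and entrainment; a standard zero-order slab formulation is sketched below as an assumed illustration, since the coupled scheme's exact equations are not given in this abstract.

\[
\frac{dh}{dt} \approx \frac{(1 + 2\beta)\,\overline{w'\theta'}_s}{\gamma_\theta\, h},
\]

where h is the CBL height, \(\overline{w'\theta'}_s\) the surface kinematic heat flux (here supplied by the ULSM), \(\gamma_\theta\) the potential temperature lapse rate of the air above the CBL, and \(\beta \approx 0.2\) a typical entrainment ratio.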
Abstract:
The implications of polar cap expansions, contractions and movements for empirical models of high-latitude plasma convection are examined. Some of these models have been generated by directly averaging flow measurements from large numbers of satellite passes or radar scans; others have employed more complex means to combine data taken at different times into large-scale patterns of flow. In all cases, the models have implicitly adopted the assumption that the polar cap is in steady state: they have all characterized the ionospheric flow in terms of the prevailing conditions (e.g. the interplanetary magnetic field and/or some index of terrestrial magnetic activity) without allowance for their history. On long enough time scales, the polar cap is indeed in steady state but on time scales shorter than a few hours it is not and can oscillate in size and position. As a result, the method used to combine the data can influence the nature of the convection reversal boundary and the transpolar voltage in the derived model. This paper discusses a variety of effects due to time-dependence in relation to some ionospheric convection models which are widely applied. The effects are shown to be varied and to depend upon the procedure adopted to compile the model.
Abstract:
The state-resolved reactivity of CH4 in its totally symmetric C-H stretch vibration (ν1) has been measured on a Ni(100) surface. Methane molecules were accelerated to kinetic energies of 49 and 63.5 kJ/mol in a molecular beam and vibrationally excited to ν1 by stimulated Raman pumping before surface impact at normal incidence. The reactivity of the symmetric-stretch excited CH4 is about an order of magnitude higher than that of methane excited to the antisymmetric stretch (ν3) reported by Juurlink et al. [Phys. Rev. Lett. 83, 868 (1999)] and is similar to what we have previously observed for excitation of the first overtone (2ν3). The difference between the state-resolved reactivities for ν1 and ν3 is consistent with predictions of a vibrationally adiabatic model of the methane reaction dynamics and indicates that statistical models cannot correctly describe the chemisorption of CH4 on nickel.
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well-defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction. However, this difference is negligible. We show that standard sorting methods yield a reference state equivalent to that of the volume frequency distribution approach, though at greater computational cost. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
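A minimal sketch of the underlying idea (adiabatic re-sorting of parcels by density, volume-weighted, onto a monotonic reference profile) is given below. This illustrates the simple sorting construction the authors compare against, not their temperature-salinity volume frequency distribution implementation, and all variable names are hypothetical.

import numpy as np

def reference_density_profile(density, volume, target_cumulative_volume):
    # Sort parcels from lightest to densest and accumulate their volumes;
    # the lightest water fills the shallowest part of the reference column.
    order = np.argsort(density)
    sorted_density = density[order]
    cumulative_volume = np.cumsum(volume[order])
    # Interpolate the sorted (volume, density) relation onto the cumulative
    # volume associated with each target depth level, counted downward
    # from the sea surface.
    return np.interp(target_cumulative_volume, cumulative_volume, sorted_density)

# Illustrative use with random parcels and ten depth levels
rho = 1025.0 + np.random.rand(100000) * 5.0   # parcel densities, kg m^-3
vol = np.random.rand(100000)                  # parcel volumes, arbitrary units
levels = np.linspace(0.0, vol.sum(), 10)      # cumulative volume at each level
profile = reference_density_profile(rho, vol, levels)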
Abstract:
The term neural population models (NPMs) is used here as a catchall for a wide range of approaches that have been variously called neural mass models, mean field models, neural field models, bulk models, and so forth. All NPMs attempt to describe the collective action of neural assemblies directly. Some NPMs treat the densely populated tissue of cortex as an excitable medium, leading to spatially continuous cortical field theories (CFTs). An indirect approach would start by modelling individual cells and then explain the collective action of a group of cells by coupling many individual models together. In contrast, NPMs employ collective state variables, typically defined as averages over the group of cells, in order to describe the population activity directly in a single model. The strength and the weakness of this approach are hence one and the same: simplification by bulk. Is this justified and indeed useful, or does it lead to oversimplification which fails to capture the pheno ...
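A classic example of this modelling style is a Wilson–Cowan-type pair of rate equations, in which the state variables E and I are population-averaged firing rates rather than properties of individual cells; it is offered only as an illustration of a collective-state description, not as any specific NPM from the text above.

\[
\tau_E \frac{dE}{dt} = -E + S\!\left(w_{EE}E - w_{EI}I + P_E\right), \qquad
\tau_I \frac{dI}{dt} = -I + S\!\left(w_{IE}E - w_{II}I + P_I\right),
\]

where S is a sigmoidal firing-rate function, the w terms are mean coupling strengths between the excitatory and inhibitory populations, and \(P_E\), \(P_I\) are external inputs.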