940 results for Essential-state models
Abstract:
China’s financial system has undergone a series of major reforms in recent years. Efforts have been made towards introducing the shareholding system in state-owned commercial banks, restructuring securities firms, re-organising the equity of joint-venture insurance companies, further improving corporate governance structures, managing financial risks and, ultimately, establishing a system to protect investors (Xinhua, 2010). Financial product innovation, together with the further opening up of financial markets and the development of the insurance and bond markets, has increased liquidity as well as reduced financial risks. The U.S. subprime crisis showed that financial innovations can benefit the economy but, without proper control, may lead to unexpected consequences. Kirkpatrick (2009) argues that failures and weaknesses in corporate governance arrangements, together with insufficient accounting standards and regulatory requirements, contributed to the financial crisis. Like the financial crises of the previous decade, the global financial crisis that erupted in 2008 exposed a variety of significant corporate governance failures: the dysfunction of market mechanisms, the lack of transparency and accountability, misaligned compensation arrangements and the late response of government, all of which encouraged management short-termism, poor risk management and some fraudulent schemes. The unique characteristics of the Chinese banking system make it an interesting case for studying post-crisis corporate governance reform. Considering that China modelled its governance system on the Anglo-American system, this paper examines the impact of the financial crisis on corporate governance reform in developed economies and, in particular, China’s reform of its financial sector. The paper further analyses the Chinese government’s role in bank supervision and risk management.
In this regard, the paper contributes to the corporate governance literature within the Chinese context by providing insights into the factors that contributed to the corporate governance failures behind the global financial crisis. It also provides policy recommendations for China’s policy makers to consider seriously. The results suggest a need to re-examine the adequacy of corporate governance and to institutionalise business ethics. The paper’s next section provides a review of China’s financial system with reference to the financial crisis, followed by a critical evaluation of the capitalist system and a review of the Anglo-American and Continental European models. It then analyses the need for a new corporate governance model in China by considering the bank failures in developed economies and the potential risks and inefficiencies in the current state-controlled system. The paper closes by reflecting on the need for Chinese policy makers to continually develop, adapt and rewrite corporate governance practices capable of meeting new challenges, and to pay attention to business ethics, an issue which goes beyond regulation.
Abstract:
We compare five general circulation models (GCMs) which have recently been used to study hot extrasolar planet atmospheres (BOB, CAM, IGCM, MITgcm, and PEQMOD), under three test cases useful for assessing model convergence and accuracy. Such a broad, detailed intercomparison has not been performed thus far for extrasolar planet studies. The models considered all solve the traditional primitive equations, but employ different numerical algorithms or grids (e.g., pseudospectral and finite volume, with the latter separately in longitude-latitude and ‘cubed-sphere’ grids). The test cases are chosen to cleanly address specific aspects of the behaviors typically reported in hot extrasolar planet simulations: 1) steady state, 2) nonlinearly evolving baroclinic wave, and 3) response to fast-timescale thermal relaxation. When initialized with a steady jet, all models maintain the steadiness, as they should, except MITgcm in the cubed-sphere grid. Very good agreement is obtained for a baroclinic wave evolving from an initial instability in the pseudospectral models (only). However, exact numerical convergence is still not achieved across the pseudospectral models: amplitudes and phases are observably different. When subject to a typical ‘hot-Jupiter’-like forcing, all five models show quantitatively different behavior, although qualitatively similar, time-variable, quadrupole-dominated flows are produced. Hence, as has been advocated in several past studies, specific quantitative predictions (such as the location of large vortices and hot regions) by GCMs should be viewed with caution. Overall, in the tests considered here, the pseudospectral models in pressure coordinates (PEBOB and PEQMOD) perform best and MITgcm in the cubed-sphere grid performs worst.
Abstract:
Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and an urban land surface model (ULSM) (here two ULSMs are considered). The former simulates daytime CBL height, air temperature and humidity, and the latter estimates urban surface energy and water balance fluxes accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural sites, one irrigated and one unirrigated grass, in Sacramento, U.S.A. All the variables modelled compare well to measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and requires initial conditions for the CBL model within an appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be rapidly applied, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
Abstract:
The implications of polar cap expansions, contractions and movements for empirical models of high-latitude plasma convection are examined. Some of these models have been generated by directly averaging flow measurements from large numbers of satellite passes or radar scans; others have employed more complex means to combine data taken at different times into large-scale patterns of flow. In all cases, the models have implicitly adopted the assumption that the polar cap is in steady state: they have all characterized the ionospheric flow in terms of the prevailing conditions (e.g. the interplanetary magnetic field and/or some index of terrestrial magnetic activity) without allowance for their history. On long enough time scales, the polar cap is indeed in steady state but on time scales shorter than a few hours it is not and can oscillate in size and position. As a result, the method used to combine the data can influence the nature of the convection reversal boundary and the transpolar voltage in the derived model. This paper discusses a variety of effects due to time-dependence in relation to some ionospheric convection models which are widely applied. The effects are shown to be varied and to depend upon the procedure adopted to compile the model.
Abstract:
The state-resolved reactivity of CH4 in its totally symmetric C-H stretch vibration (ν1) has been measured on a Ni(100) surface. Methane molecules were accelerated to kinetic energies of 49 and 63.5 kJ/mol in a molecular beam and vibrationally excited to ν1 by stimulated Raman pumping before surface impact at normal incidence. The reactivity of the symmetric-stretch excited CH4 is about an order of magnitude higher than that of methane excited to the antisymmetric stretch (ν3) reported by Juurlink et al. [Phys. Rev. Lett. 83, 868 (1999)] and is similar to that we have previously observed for excitation of the first overtone (2ν3). The difference between the state-resolved reactivities for ν1 and ν3 is consistent with predictions of a vibrationally adiabatic model of the methane reaction dynamics and indicates that statistical models cannot correctly describe the chemisorption of CH4 on nickel.
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction. However, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent, though computationally more expensive, to the volume frequency distribution approach. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
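The "standard sorting methods" mentioned above can be illustrated with a minimal sketch: re-stack water parcels adiabatically by density so the densest water lies deepest, then bin the sorted parcels by cumulative volume fraction to form a one-dimensional reference profile. This is not the paper's volume-frequency-distribution algorithm; the function name, inputs (1-D arrays of parcel density and volume) and level binning are illustrative assumptions.

```python
import numpy as np

def reference_density_profile(density, volume, n_levels=50):
    """Sketch of a sorting-based Lorenz reference state (hypothetical
    interface): parcels are re-arranged so density increases with depth.

    density : 1-D array of parcel densities (kg m^-3)
    volume  : 1-D array of parcel volumes (m^3)
    """
    order = np.argsort(density)            # lightest first -> shallowest
    rho_sorted = density[order]
    vol_sorted = volume[order]
    # Cumulative volume fraction sets each parcel's resting depth rank.
    cum_frac = np.cumsum(vol_sorted) / vol_sorted.sum()
    # Bin parcels into n_levels reference levels; average density per level.
    levels = np.minimum((cum_frac * n_levels).astype(int), n_levels - 1)
    return np.array([rho_sorted[levels == k].mean() for k in range(n_levels)])
```

By construction the resulting profile is statically stable (density non-decreasing with depth); the paper's point is that a volume frequency distribution in temperature-salinity space reaches the same reference state at lower computational cost.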
Abstract:
The term neural population models (NPMs) is used here as a catch-all for a wide range of approaches that have been variously called neural mass models, mean field models, neural field models, bulk models, and so forth. All NPMs attempt to describe the collective action of neural assemblies directly. Some NPMs treat the densely populated tissue of cortex as an excitable medium, leading to spatially continuous cortical field theories (CFTs). An indirect approach would start by modelling individual cells and then would explain the collective action of a group of cells by coupling many individual models together. In contrast, NPMs employ collective state variables, typically defined as averages over the group of cells, in order to describe the population activity directly in a single model. The strength and the weakness of this approach are hence one and the same: simplification by bulk. Is this justified and indeed useful, or does it lead to oversimplification which fails to capture the pheno ...
Abstract:
The notion that learning can be enhanced when a teaching approach matches a learner’s learning style has been widely accepted in classroom settings, since learning style is a predictor of student attitudes and preferences. As such, the traditional ‘one-size-fits-all’ approach to teaching delivery in Educational Hypermedia Systems (EHSs) has to be replaced with an approach that responds to users’ needs by exploiting their individual differences. However, establishing and implementing reliable approaches for matching teaching delivery and modalities to learning styles still represents an innovation challenge to be tackled. In this paper, seventy-six studies are objectively analysed with several goals: to reveal the value of integrating learning styles in EHSs by discussing different perspectives in this context; to identify the most effective learning style models as incorporated within AEHSs; and to investigate the effectiveness of different approaches for modelling students’ individual learning traits. The paper thus highlights a number of theoretical and technical issues of LS-BAEHSs to serve as comprehensive guidance for researchers interested in this area.
Abstract:
Substantial biases in shortwave cloud forcing (SWCF) of up to ±30 W m−2 are found in the midlatitudes of the Southern Hemisphere in the historical simulations of 34 CMIP5 coupled general circulation models. The SWCF biases are shown to induce surface temperature anomalies localized in the midlatitudes, and are significantly correlated with the mean latitude of the eddy-driven jet, with a negative SWCF bias corresponding to an equatorward jet latitude bias. Aquaplanet model experiments are performed to demonstrate that the jet latitude biases are primarily induced by the midlatitude SWCF anomalies, such that the jet moves toward (away from) regions of enhanced (reduced) temperature gradients. The results underline the necessity of accurately representing cloud radiative forcings in state-of-the-art coupled models.
Abstract:
Ecological forecasting is difficult but essential, because reactive management results in corrective actions that are often too late to avert significant environmental damage. Here, we appraise different forecasting methods with a particular focus on the modelling of species populations. We show how simple extrapolation of current trends in state is often inadequate because environmental drivers change in intensity over time and new drivers emerge. However, statistical models, incorporating relationships with drivers, simply offset the prediction problem, requiring us to forecast how the drivers will themselves change over time. Some authors approach this problem by focusing in detail on a single driver, whilst others use ‘storyline’ scenarios, which consider projected changes in a wide range of different drivers. We explain why both approaches are problematic and identify a compromise to model key drivers and interactions along with possible response options to help inform environmental management. We also highlight the crucial role of validation of forecasts using independent data. Although these issues are relevant for all types of ecological forecasting, we provide examples based on forecasts for populations of UK butterflies. We show how a high goodness-of-fit for models used to calibrate data is not sufficient for good forecasting. Long-term biological recording schemes rather than experiments will often provide data for ecological forecasting and validation because these schemes allow capture of landscape-scale land-use effects and their interactions with other drivers.
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three-dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF), together with the technique of state augmentation, to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
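The core idea of state augmentation can be shown in a few lines: append the unknown parameter to the state vector, give it trivial persistence dynamics, and let a standard EKF update both jointly. The toy model below, x_{k+1} = a·x_k + 1 with unknown a, is an illustrative assumption and not the paper's system or hybrid 3D-Var/EKF scheme; it only demonstrates the augmentation mechanics.

```python
import numpy as np

def ekf_augmented(obs, a_guess, x0=1.0, q=1e-6, r=0.01):
    """EKF on the augmented state z = [x, a] for the toy model
    x_{k+1} = a * x_k + 1, observing x with noise variance r."""
    z = np.array([x0, a_guess])            # augmented state [x, a]
    P = np.eye(2)                          # initial uncertainty
    H = np.array([[1.0, 0.0]])             # we observe x only
    Q = np.diag([q, q])                    # small model-error covariance
    for y in obs:
        # Forecast step: propagate x; the parameter a is persistent.
        x, a = z
        z = np.array([a * x + 1.0, a])
        F = np.array([[a, x], [0.0, 1.0]]) # Jacobian of the forecast map
        P = F @ P @ F.T + Q
        # Analysis step: standard Kalman update of the augmented state.
        S = H @ P @ H.T + r
        K = P @ H.T / S
        z = z + (K * (y - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return z                               # [state estimate, parameter estimate]
```

Because the parameter enters the Jacobian through the cross term x, observations of the state alone carry information about a, and the filter's cross-covariances steer the parameter toward its true value as data accumulate.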
Abstract:
We analyse the ability of CMIP3 and CMIP5 coupled ocean–atmosphere general circulation models (CGCMs) to simulate the tropical Pacific mean state and El Niño-Southern Oscillation (ENSO). The CMIP5 multi-model ensemble displays an encouraging 30 % reduction of the pervasive cold bias in the western Pacific, but no quantum leap in ENSO performance compared to CMIP3. CMIP3 and CMIP5 can thus be considered as one large ensemble (CMIP3 + CMIP5) for multi-model ENSO analysis. The overly large diversity of ENSO amplitude in CMIP3 is, however, reduced by a factor of two in CMIP5, and the ENSO life cycle (location of surface temperature anomalies, seasonal phase locking) is modestly improved. Other fundamental ENSO characteristics, such as central Pacific precipitation anomalies, however remain poorly represented. The sea surface temperature (SST)-latent heat flux feedback is slightly improved in the CMIP5 ensemble, but the wind-SST feedback is still underestimated by 20–50 % and the shortwave-SST feedbacks remain underestimated by a factor of two. The improvement in ENSO amplitudes might therefore result from error compensations. The ability of CMIP models to simulate the SST-shortwave feedback, a major source of erroneous ENSO in CGCMs, is further detailed. In observations, this feedback is strongly nonlinear because the real atmosphere switches from subsident (positive feedback) to convective (negative feedback) regimes under the effect of seasonal and interannual variations. Only one-third of CMIP3 + CMIP5 models reproduce this regime shift, with the other models remaining locked in one of the two regimes. The modelled shortwave feedback nonlinearity increases with ENSO amplitude, and the amplitude of this feedback in spring is strongly related to the models' ability to simulate ENSO phase locking. In a final stage, a subset of metrics is proposed in order to synthesize the ability of each CMIP3 and CMIP5 model to simulate ENSO's main characteristics and key atmospheric feedbacks.
Abstract:
Accurate knowledge of the location and magnitude of ocean heat content (OHC) variability and change is essential for understanding the processes that govern decadal variations in surface temperature, quantifying changes in the planetary energy budget, and developing constraints on the transient climate response to external forcings. We present an overview of the temporal and spatial characteristics of OHC variability and change as represented by an ensemble of dynamical and statistical ocean reanalyses (ORAs). Spatial maps of the 0–300 m layer show large regions of the Pacific and Indian Oceans where the interannual variability of the ensemble mean exceeds ensemble spread, indicating that OHC variations are well-constrained by the available observations over the period 1993–2009. At deeper levels, the ORAs are less well-constrained by observations, with the largest differences across the ensemble mostly associated with areas of high eddy kinetic energy, such as the Southern Ocean and boundary current regions. Spatial patterns of OHC change for the period 1997–2009 show good agreement in the upper 300 m and are characterized by a strong dipole pattern in the Pacific Ocean. There is less agreement in the patterns of change at deeper levels, potentially linked to differences in the representation of ocean dynamics, such as water mass formation processes. However, the Atlantic and Southern Oceans are regions in which many ORAs show widespread warming below 700 m over the period 1997–2009. Annual time series of global and hemispheric OHC change for 0–700 m show the largest spread for the data-sparse Southern Hemisphere, and a number of ORAs appear to be subject to a large initialization ‘shock’ over the first few years. In agreement with previous studies, a number of ORAs exhibit enhanced ocean heat uptake below 300 and 700 m during the mid-1990s or early 2000s.
The ORA ensemble mean (±1 standard deviation) of rolling 5-year trends in full-depth OHC shows a relatively steady heat uptake of approximately 0.9 ± 0.8 W m−2 (expressed relative to Earth’s surface area) between 1995 and 2002, which reduces to about 0.2 ± 0.6 W m−2 between 2004 and 2006, in qualitative agreement with recent analysis of Earth’s energy imbalance. There is a marked reduction in the ensemble spread of OHC trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs should be treated with caution when employed to understand past ocean warming trends—especially when considering the deeper ocean where there is little in the way of observational constraints. The current work emphasizes the need to better observe the deep ocean, both for providing observational constraints for future ocean state estimation efforts and also to develop improved models and data assimilation methods.
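The rolling-trend diagnostic described above can be sketched as follows: fit a linear trend to each 5-year window of an annual full-depth OHC series (in joules) and express the slope as a heat uptake in W m−2 relative to Earth's surface area. The function, its inputs and the constants used are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np

EARTH_AREA = 5.1e14      # m^2, Earth's total surface area (approximate)
SEC_PER_YEAR = 3.156e7   # seconds per year (approximate)

def rolling_trends(years, ohc_joules, window=5):
    """Rolling linear trends of an annual OHC series (J), returned as
    heat uptake in W m^-2 relative to Earth's surface area."""
    trends = []
    for i in range(len(years) - window + 1):
        t = np.asarray(years[i:i + window], dtype=float)
        y = np.asarray(ohc_joules[i:i + window], dtype=float)
        # Centre the time axis before fitting to keep the fit well-conditioned.
        slope = np.polyfit(t - t[0], y, 1)[0]      # J per year
        trends.append(slope / (SEC_PER_YEAR * EARTH_AREA))
    return np.array(trends)
```

With this convention, a series gaining heat at 0.9 W m−2 of Earth's surface yields rolling trends of 0.9 in every window, matching the units quoted in the abstract.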
Abstract:
A set of four eddy-permitting global ocean reanalyses produced in the framework of the MyOcean project have been compared over the altimetry period 1993–2011. The main differences among the reanalyses used here come from the data assimilation scheme implemented to control the ocean state by inserting reprocessed observations of sea surface temperature (SST), in situ temperature and salinity profiles, sea level anomaly and sea-ice concentration. A first objective of this work is to assess the interannual variability and trends for a series of parameters usually considered in the community as essential ocean variables: SST, sea surface salinity, temperature and salinity averaged over meaningful layers of the water column, sea level, transports across pre-defined sections, and sea-ice parameters. The eddy-permitting nature of the global reanalyses also allows eddy kinetic energy to be estimated. The results show that in general there is good consistency between the different reanalyses. An intercomparison against experiments without data assimilation was carried out during the MyOcean project, and we conclude that data assimilation is crucial for correctly simulating some quantities, such as regional trends of sea level as well as eddy kinetic energy. A second objective is to show that the ensemble mean of reanalyses can be evaluated as one single system regarding its reliability in reproducing the climate signals, where both variability and uncertainties are assessed through the ensemble spread and signal-to-noise ratio. The main advantage of having access to several reanalyses differing in the way data assimilation is performed is that it becomes possible to assess part of the total uncertainty. Given the fact that we use very similar ocean models and atmospheric forcing, we can conclude that the spread of the ensemble of reanalyses is mainly representative of our ability to gauge uncertainty in the assimilation methods.
This uncertainty varies considerably from one ocean parameter to another, especially in global indices. However, despite several caveats in the design of the multi-system ensemble, the main conclusion from this study is that an eddy-permitting multi-system ensemble approach has become mature, and our results provide a first step towards a systematic comparison of eddy-permitting global ocean reanalyses aimed at providing robust conclusions on the recent evolution of the oceanic state.
Abstract:
Sea-ice concentrations in the Laptev Sea simulated by the coupled North Atlantic-Arctic Ocean-Sea-Ice Model and the Finite Element Sea-Ice Ocean Model are evaluated using sea-ice concentrations from Advanced Microwave Scanning Radiometer-Earth Observing System satellite data and a polynya classification method for winter 2007/08. While developed to simulate large-scale sea-ice conditions, both models are analysed here in terms of polynya simulation. The main modification of both models in this study is the implementation of a landfast-ice mask. Simulated sea-ice fields from different model runs are compared, with emphasis placed on the impact of this prescribed landfast-ice mask. We demonstrate that the sea-ice models are not able to simulate flaw polynyas realistically when used without a fast-ice description. Our investigations indicate that, without landfast ice and with coarse horizontal resolution, the models overestimate the fraction of open water in the polynya. This is not because a realistic polynya is simulated but because of a larger-scale reduction of ice concentrations and smoothed ice-concentration fields. After implementation of a landfast-ice mask, the polynya location is realistically simulated, but the total open-water area is still overestimated in most cases. The study shows that the fast-ice parameterization is essential for model improvements. However, further improvements are necessary in order to progress from the simulation of large-scale features in the Arctic towards a more detailed simulation of smaller-scale features (here polynyas) in an Arctic shelf sea.