129 results for Unified Model Reference
Abstract:
Idealized explicit convection simulations of the Met Office Unified Model exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen in other models in previous studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapor field. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy (MSE), following Wing and Emanuel [2014], reveals that the direct radiative feedback (including significant cloud longwave effects) is dominant in both the initial development of self-aggregation and the maintenance of an aggregated state. A low-level circulation at intermediate stages of aggregation does appear to transport MSE from drier to moister regions, but this circulation is mostly balanced by other advective effects of opposite sign and is forced by horizontal anomalies of convective heating (not radiation). Sensitivity studies with either fixed prescribed radiative cooling, fixed prescribed surface fluxes, or both do not show full self-aggregation from homogeneous initial conditions, though fixed surface fluxes do not disaggregate an initialized aggregated state. A sensitivity study in which rain evaporation is turned off shows more rapid self-aggregation, while a run with this change plus fixed radiative cooling still shows strong self-aggregation, supporting a “moisture memory” effect found in Muller and Bony [2015]. Interestingly, self-aggregation occurs even in simulations with sea surface temperatures (SSTs) of 295 K and 290 K, with direct radiative feedbacks dominating the budget of MSE variance, in contrast to results in some previous studies.
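For reference, the variance budget analysed here follows Wing and Emanuel [2014]; written schematically for the column-integrated frozen MSE \hat{h}, with primes denoting departures from the horizontal mean and overbars horizontal averages, it reads

\[ \tfrac{1}{2}\,\partial_t\,\overline{\hat{h}'^{2}} \;=\; \overline{\hat{h}'F_{k}'} \;+\; \overline{\hat{h}'N_{R}'} \;-\; \overline{\hat{h}'\,\nabla_{h}\!\cdot\widehat{\mathbf{u}h}}, \]

where F_k' is the anomalous surface enthalpy flux, N_R' the anomalous column radiative flux convergence, and the final term horizontal advection; the direct radiative feedback discussed above corresponds to the covariance term \overline{\hat{h}'N_{R}'}.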
Abstract:
Numerical models of the atmosphere combine a dynamical core, which approximates solutions to the adiabatic, frictionless governing equations for fluid dynamics, with tendencies arising from the parametrization of other physical processes. Since potential vorticity (PV) is conserved following fluid flow in adiabatic, frictionless circumstances, it is possible to isolate the effects of non-conservative processes by accumulating PV changes in an air-mass-relative framework. This “PV tracer technique” is used to accumulate separately the effects on PV of each of the different non-conservative processes represented in a numerical model of the atmosphere. Dynamical cores are not exactly conservative because they introduce, explicitly or implicitly, some level of dissipation and adjustment of prognostic model variables which acts to modify PV. Here, the PV tracer technique is extended to diagnose the cumulative effect of the non-conservation of PV by a dynamical core and its characteristics relative to the PV modification by parametrized physical processes. Quantification using the Met Office Unified Model reveals that the magnitude of the non-conservation of PV by the dynamical core is comparable to that from physical processes. Moreover, the residual of the PV budget, when tracing the effects of the dynamical core and physical processes, is at least an order of magnitude smaller than the PV tracers associated with the most active physical processes. The implication of this work is that the non-conservation of PV by a dynamical core can be assessed in case studies with a full suite of physics parametrizations and directly compared with the PV modification by parametrized physical processes. The non-conservation of PV by the dynamical core is shown to move the position of the extratropical tropopause, while the parametrized physical processes have a lesser effect at the tropopause level.
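As a sketch of the tracer bookkeeping described above (notation assumed here, not taken from the paper): if q denotes PV and S_i the PV tendency from non-conservative process i, the tracers satisfy

\[ \frac{Dq}{Dt} \;=\; \sum_i S_i, \qquad \frac{Dq_i}{Dt} \;=\; S_i, \qquad q_i(t{=}0)=0, \]

so that q(t) \approx q_{\mathrm{adv}}(t) + \sum_i q_i(t), where q_adv is the initial PV advected conservatively; the residual of this decomposition quantifies whatever the tracers do not capture, and the extension described above adds a further tracer for the PV modification implied by the dynamical core itself.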
Sensitivity of resolved and parameterized surface drag to changes in resolution and parameterization
Abstract:
The relative contribution of resolved and parameterized surface drag towards balancing the atmospheric angular momentum flux convergence (AMFC), and their sensitivity to horizontal resolution and parameterization, are investigated in an atmospheric model. This sensitivity can be difficult to elucidate in free-running climate models, in which the AMFC varies with changing climatologies and, as a result, the relative contributions of the surface terms balancing the AMFC also vary. While the sensitivity question has previously been addressed using short-range forecasts, we demonstrate that a nudging framework is an effective method for constraining the AMFC. The Met Office Unified Model is integrated at three horizontal resolutions ranging from 130 km (N96) to 25 km (N512) while relaxing the model’s wind and temperature fields towards the ERA-Interim reanalysis within the altitude regions of maximum AMFC. This method is validated against short-range forecasts and good agreement is found. These experiments are then used to assess the fidelity of the exchange between parameterized and resolved orographic torques with changes in horizontal resolution. Although the parameterized orographic torque reduces substantially with increasing horizontal resolution, there is little change in the resolved orographic torque over 20°N–50°N. The tendencies produced by the nudging routine indicate that the additional drag at lower horizontal resolution is excessive. When parameterized orographic blocking is removed at the coarsest of these resolutions, there is a lack of compensation, and even compensation of the opposite sense, by the boundary-layer and resolved torques, which is particularly pronounced over 20°N–50°N. This study demonstrates that there is strong sensitivity in the behaviour of the resolved and parameterized surface drag over this region.
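For context, the nudging applied here can be written schematically, with an assumed relaxation time-scale \tau, as

\[ \frac{\partial \mathbf{X}}{\partial t} \;=\; M(\mathbf{X}) \;-\; \frac{\mathbf{X}-\mathbf{X}_{\mathrm{ERA}}}{\tau}, \]

where X is the model wind and temperature state, M(X) the full model tendency, and X_ERA the ERA-Interim analysis; the relaxation term is itself a diagnosed tendency, which is how the excess drag at lower resolution noted above is identified.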
Abstract:
A single-habit parameterization for the shortwave optical properties of cirrus is presented. The parameterization utilizes a hollow particle geometry, with stepped internal cavities as identified in laboratory and field studies. This particular habit was chosen because both experimental and theoretical results show that the particle exhibits lower asymmetry parameters than solid crystals of the same aspect ratio. The aspect ratio of the particle was varied as a function of maximum dimension, D, in order to adhere to the same physical relationships concerning particle mass, size and effective density that are assumed in the microphysical scheme of a configuration of the Met Office atmosphere-only global model. Single-scattering properties were then computed using the T-matrix method, Ray Tracing with Diffraction on Facets (RTDF) and Ray Tracing (RT) for small, medium and large size parameters, respectively. The scattering properties were integrated over 28 particle size distributions as used in the microphysical scheme. The fits were then parameterized as simple functions of Ice Water Content (IWC) for 6 shortwave bands. The parameterization was implemented into the GA6 configuration of the Met Office Unified Model along with the current operational long-wave parameterization. The GA6 configuration is used to simulate the twenty-year annual-mean short-wave (SW) fluxes at top-of-atmosphere (TOA) as well as the temperature and humidity structure of the atmosphere. The parameterization presented here is compared against the current operational model and a more recent habit-mixture model.
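The abstract does not state the functional form of these fits; purely as an illustration, a band-by-band fit of the bulk single-scattering properties to IWC might take a form such as

\[ \omega_{0,b} \;=\; a_b + b_b\,\log_{10}(\mathrm{IWC}), \qquad g_b \;=\; c_b + d_b\,\log_{10}(\mathrm{IWC}), \]

for each shortwave band b, with coefficients obtained from the size-distribution-integrated scattering calculations described above.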
Abstract:
Bovine tuberculosis (TB) is an important economic disease. Badgers (Meles meles) are the wildlife source implicated in many cattle outbreaks of TB in Britain, and extensive badger control is a controversial option for reducing the disease. A badger and cattle population model was developed, simulating TB epidemiology; badger ecology, including post-cull social perturbation; and TB-related farm management. An economic cost-benefit module was integrated into the model to assess whether badger control offers economic benefits. Model results strongly indicate that although extensive badger culling could reduce TB rates in cattle if perturbation were restricted, an economic loss would overall be more likely than a benefit. Perturbation of the badger population was a key factor determining the success or failure of control. The model highlighted some important knowledge gaps regarding both the spatial and temporal characteristics of perturbation that warrant further research.
Abstract:
The concepts of on-line transactional processing (OLTP) and on-line analytical processing (OLAP) are often confused with the technologies or models that are used to design transactional and analytics-based information systems. This has in some way contributed to the existence of gaps between the semantics of information captured during transactional processing and the information stored for analytical use. In this paper, we propose the use of a unified semantics design model as a solution to help bridge the semantic gaps between data captured by OLTP systems and the information provided by OLAP systems. The central focus of this design approach is on enabling business intelligence using not just data, but data with context.
Abstract:
This work proposes a unified neurofuzzy modelling scheme. To begin with, the initial fuzzy rule base construction is based on fuzzy clustering utilising a Gaussian mixture model (GMM) combined with analysis of variance (ANOVA) decomposition, in order to obtain more compact univariate and bivariate membership functions over the subspaces of the input features. The mean and covariance of the Gaussian membership functions are found by the expectation-maximisation (EM) algorithm, which has the merit of revealing the underlying density distribution of the system inputs. The resultant set of membership functions forms the basis of the generalised fuzzy model (GFM) inference engine. The model structure and parameters of this neurofuzzy model are identified via supervised subspace orthogonal least squares (OLS) learning. Finally, instead of providing a deterministic class label as the model output, as is conventional, a logistic regression model is applied to produce the classifier’s output, in which the sigmoid-type logistic transfer function scales the outputs of the neurofuzzy model to class probabilities. Experimental validation results are presented to demonstrate the effectiveness of the proposed neurofuzzy modelling scheme.
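A minimal sketch of the pipeline described above, assuming scikit-learn and substituting a plain logistic regression on GMM responsibilities for the full GFM inference engine and OLS structure selection (the names and structure here are illustrative, not from the paper):

```python
# Illustrative sketch: GMM-derived membership values feeding a logistic output layer.
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Step 1: fuzzy "membership functions" from a Gaussian mixture fitted by EM.
gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0).fit(X)
memberships = gmm.predict_proba(X)   # responsibilities stand in for membership values

# Step 2: logistic output layer maps the fuzzy basis to class probabilities.
clf = LogisticRegression(max_iter=1000).fit(memberships, y)
print("class probabilities for first sample:", clf.predict_proba(memberships[:1]))
```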
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. Such a review provides a basis for discussing the need for information recalled through OLAP systems to maintain the contexts of the transactions whose data were captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems are used to process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements of complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture data in particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk creating gaps in semantics between the information captured by OLTP systems and the information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that are contributing to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP system design depends critically on the capture of facts with associated context, the encoding of facts with contexts into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated contexts. The paper proposes UBIRQ, a design model to aid the co-design of data-with-business-rules storage for OLTP and OLAP purposes. The proposed design model provides the opportunity to implement and use multi-purpose databases, and business-rules stores, for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data with the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
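As a rough, entirely hypothetical illustration of storing facts together with their context and capture rules (not the UBIRQ specification itself), a transaction record might carry all three:

```python
# Hypothetical sketch of a fact stored with its context and the rules used to capture it,
# so that OLTP capture and OLAP recall can share the same semantics.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class BusinessRule:
    rule_id: str
    description: str      # e.g. "bulk discount applies only to orders of 100 units or more"

@dataclass
class FactWithContext:
    fact: dict                                   # the measures, e.g. quantity and amount
    context: dict                                # who/what/where/when of the transaction
    rules: list[BusinessRule] = field(default_factory=list)
    captured_at: datetime = field(default_factory=datetime.utcnow)

record = FactWithContext(
    fact={"quantity": 120, "amount": 950.0},
    context={"customer": "C042", "channel": "online"},
    rules=[BusinessRule("R7", "bulk discount applied for quantity >= 100")],
)
```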
Abstract:
Using a flexible chemical box model with full heterogeneous chemistry, intercepts of chemically modified Langley plots have been computed for the 5 years of zenith-sky NO2 data from Faraday in Antarctica (65°S). By using these intercepts as the effective amount in the reference spectrum, drifts in the zero of total vertical NO2 were much reduced. The error in the zero of total NO2 is ±0.03×10¹⁵ molec cm⁻² from one year to another. This error is small enough to determine trends in midsummer and any variability in denoxification between midwinters. The technique also suggests a more sensitive method for determining N2O5 from zenith-sky NO2 data.
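Schematically (with notation assumed here), a chemically modified Langley plot regresses the slant column measured relative to the reference spectrum against the box-model prediction:

\[ S_{\mathrm{meas}}(\theta) \;=\; A(\theta)\,V_{\mathrm{model}}(\theta) \;-\; S_{\mathrm{ref}}, \]

where A is the air-mass factor, V_model the modelled vertical column, and the intercept S_ref is the effective amount in the reference spectrum that is then used to fix the zero of total vertical NO2.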
Abstract:
Effective medium approximations for the frequency-dependent and complex-valued effective stiffness tensors of cracked/porous rocks with multiple solid constituents are developed on the basis of the T-matrix approach (based on integral-equation methods for quasi-static composites), the elastic-viscoelastic correspondence principle, and a unified treatment of the local and global flow mechanisms, which is consistent with the principle of fluid-mass conservation. The main advantage of using the T-matrix approach, rather than the first-order approach of Eshelby or the second-order approach of Hudson, is that it produces physically plausible results even when the volume concentrations of inclusions or cavities are no longer small. The new formulae, which operate with an arbitrary homogeneous (anisotropic) reference medium and contain terms of all orders in the volume concentrations of solid particles and communicating cavities, explicitly take account of inclusion shape and spatial distribution independently. We show analytically that an expansion of the T-matrix formulae to first order in the volume concentration of cavities (in agreement with the dilute estimate of Eshelby) has the correct dependence on the properties of the saturating fluid, in the sense that it is consistent with the Brown-Korringa relation, when the frequency is sufficiently low. We present numerical results for the (anisotropic) effective viscoelastic properties of a cracked permeable medium with finite storage porosity, indicating that the complete T-matrix formulae (including the higher-order terms) are generally consistent with the Brown-Korringa relation, at least if we assume the spatial distribution of cavities to be the same for all cavity pairs. We have found an efficient way to treat statistical correlations in the shapes and orientations of the communicating cavities, and have also obtained a reasonable match between theoretical predictions (based on a dual-porosity model for quartz-clay mixtures, involving relatively flat clay-related pores and more rounded quartz-related pores) and laboratory results for the ultrasonic velocity and attenuation spectra of a suite of typical reservoir rocks.
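Schematically (notation assumed here), the first-order expansion mentioned above corresponds to the dilute, non-interacting estimate

\[ \mathbf{C}^{*} \;\approx\; \mathbf{C}^{(0)} \;+\; \sum_{r} v_r\,\mathbf{t}^{(r)}, \]

where C^(0) is the stiffness of the reference medium, v_r the volume concentration of inclusion family r and t^(r) its t-matrix; the complete formulae retain the higher-order terms that couple the inclusions through their spatial distribution.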
Abstract:
A reference model of Fallible Endgame Play has been implemented and exercised with the chess engine WILHELM. Past experiments have demonstrated the value of the model and the robustness of decisions based on it: experimental results agree well with the predictions of a Markov-model theory. Here, the reference model is exercised on the well-known endgame KBBKN.
Abstract:
A reference model of Fallible Endgame Play has been implemented and exercised with the chess engine WILHELM. Various experiments have demonstrated the value of the model and the robustness of decisions based on it. Experimental results have also been compared with the theoretical predictions of a Markov model of the endgame and found to be in close agreement.
Abstract:
A reference model of fallible endgame play is defined in terms of a spectrum of endgame players whose play ranges in competence from the optimal to the anti-optimal choice of move. They may be used as suitably skilled practice partners, to assess a player, to differentiate between otherwise equi-optimal moves, to promote or expedite a game result, to run Monte-Carlo simulations, and to identify the difficulty of a position or a whole endgame.
Abstract:
A reference model of fallible endgame play is defined in terms of a spectrum of endgame players whose play ranges in competence from random to optimal choice of move. They may be used as suitable practice partners, to differentiate between otherwise equi-optimal moves, to promote or expedite a result, to assess an opponent, to run Monte Carlo simulations, and to identify the difficulty of a position or a whole endgame.
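One plausible way to realise such a spectrum of players (a sketch under assumptions, not the specific reference model used in these papers) is to choose moves stochastically with a competence parameter that interpolates from anti-optimal (negative c) through random (c = 0) to optimal (large c) play:

```python
# Illustrative fallible-player sketch: moves are scored (e.g. by depth to mate, lower is
# better for the winning side) and chosen with a softmax whose parameter c sets skill.
import math
import random

def choose_move(move_scores: dict[str, int], c: float) -> str:
    """move_scores maps each legal move to a value where smaller is better."""
    weights = {m: math.exp(-c * s) for m, s in move_scores.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for move, w in weights.items():
        acc += w
        if acc >= r:
            return move
    return move  # fallback for floating-point edge cases

# Example: three moves with depth-to-mate 3, 5 and 12; higher c prefers the quicker mate.
print(choose_move({"Ra8+": 3, "Kb6": 5, "Rh1": 12}, c=1.5))
```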