212 results for parametrization


Relevance: 20.00%

Abstract:

The main result of this work is a parametric description of the spectral surfaces of a class of periodic 5-diagonal matrices, related to the strong moment problem. This class is a self-adjoint twin of the class of CMV matrices. Jointly they form the simplest possible classes of 5-diagonal matrices.

Relevance: 20.00%

Abstract:

We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, nonetheless SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial condition uncertainty, and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS community, simulating the transitions between active and suppressed periods of tropical convection.

Relevance: 20.00%

Abstract:

The sensitivity of the UK Universities Global Atmospheric Modelling Programme (UGAMP) General Circulation Model (UGCM) to two very different approaches to convective parametrization is described. Comparison is made between a Kuo scheme, which is constrained by large-scale moisture convergence, and a convective-adjustment scheme, which relaxes to observed thermodynamic states. Results from 360-day integrations with perpetual January conditions are used to describe the model's tropical time-mean climate and its variability. Both convection schemes give reasonable simulations of the time-mean climate, but the representation of the main modes of tropical variability is markedly different. The Kuo scheme has much weaker variance, confined to synoptic frequencies near 4 days, and a poor simulation of intraseasonal variability. In contrast, the convective-adjustment scheme has much more transient activity at all time-scales. The various aspects of the two schemes which might explain this difference are discussed. The particular closure on moisture convergence used in this version of the Kuo scheme is identified as being inappropriate.

Relevance: 20.00%

Abstract:

A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
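The linear latitude dependence of the decorrelation scale described above can be sketched as follows (the function name and the use of absolute latitude for the interpolation are assumptions; the abstract specifies only linearity and the endpoint values of 2.9 km at the Equator and 0.4 km at the poles):

```python
def decorrelation_scale_km(latitude_deg):
    """Vertical decorrelation scale for exponential-random overlap,
    interpolated linearly from 2.9 km at the Equator to 0.4 km at the poles."""
    frac = abs(latitude_deg) / 90.0  # 0 at the Equator, 1 at a pole
    return 2.9 + (0.4 - 2.9) * frac
```

For example, at 45 degrees of latitude this gives 1.65 km, midway between the two endpoint values.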

Relevance: 20.00%

Abstract:

For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.

Relevance: 20.00%

Abstract:

Ground-based remote-sensing observations from Atmospheric Radiation Measurement (ARM) and Cloud-Net sites are used to evaluate the clouds predicted by a weather forecasting and climate model. By evaluating the cloud predictions using separate measures for the errors in frequency of occurrence, amount when present, and timing, we provide a detailed assessment of the model performance, which is relevant to weather and climate time-scales. Importantly, this methodology will be of great use when attempting to develop a cloud parametrization scheme, as it provides a clearer picture of the current deficiencies in the predicted clouds. Using the Met Office Unified Model, it is shown that when cloud fractions produced by a diagnostic and a prognostic cloud scheme are compared, the prognostic cloud scheme shows improvements to the biases in frequency of occurrence of low, medium and high cloud and to the frequency distributions of cloud amount when cloud is present. The mean cloud profiles are generally improved, although it is shown that in some cases the diagnostic scheme produced misleadingly good mean profiles as a result of compensating errors in frequency of occurrence and amount when present. Some biases remain when using the prognostic scheme, notably the underprediction of mean ice cloud fraction due to the amount when present being too low, and the overprediction of mean liquid cloud fraction due to the frequency of occurrence being too high.
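The separation of cloud errors into frequency of occurrence and amount when present can be illustrated with a minimal sketch (the cloud-fraction threshold used to classify a time as "cloud present" is an assumption, not specified in the abstract):

```python
def decompose_cloud_fraction(cloud_fractions, present_threshold=0.05):
    """Split a series of cloud fractions into frequency of occurrence
    and mean amount when present; their product recovers the mean,
    which is how compensating errors in the two can hide in a mean profile."""
    present = [c for c in cloud_fractions if c >= present_threshold]
    freq = len(present) / len(cloud_fractions)
    amount = sum(present) / len(present) if present else 0.0
    return freq, amount
```

For the series [0.0, 0.0, 0.4, 0.6] this gives a frequency of occurrence of 0.5 and an amount when present of 0.5, whose product matches the mean cloud fraction of 0.25.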

Relevance: 20.00%

Abstract:

The subgrid-scale spatial variability in cloud water content can be described by a parameter f called the fractional standard deviation. This is equal to the standard deviation of the cloud water content divided by the mean. This parameter is an input to schemes that calculate the impact of subgrid-scale cloud inhomogeneity on gridbox-mean radiative fluxes and microphysical process rates. A new regime-dependent parametrization of the spatial variability of cloud water content is derived from CloudSat observations of ice clouds. In addition to the dependencies on horizontal and vertical resolution and cloud fraction included in previous parametrizations, the new parametrization includes an explicit dependence on cloud type. The new parametrization is then implemented in the Global Atmosphere 6 (GA6) configuration of the Met Office Unified Model and used to model the effects of subgrid variability of both ice and liquid water content on radiative fluxes and on autoconversion and accretion rates in three 20-year atmosphere-only climate simulations. These simulations show the impact of the new regime-dependent parametrization in diagnostic radiation calculations, in interactive radiation calculations, and in both interactive radiation calculations and a new warm microphysics scheme. The control simulation uses a globally constant f value of 0.75 to model the effect of cloud water content variability on radiative fluxes. The use of the new regime-dependent parametrization in the model results in a global mean f which is higher than the control's fixed value, and a global distribution of f which is closer to CloudSat observations. When the new regime-dependent parametrization is used in radiative transfer calculations only, the magnitudes of short-wave and long-wave top-of-atmosphere cloud radiative forcing are reduced, increasing the existing global mean biases in the control. When also applied in a new warm microphysics scheme, the short-wave global mean bias is reduced.
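The parameter f follows directly from the definition above; this is a minimal sketch assuming a plain list of cloud water content samples (the function name is an illustration, not part of the paper):

```python
import math

def fractional_standard_deviation(water_contents):
    """f = standard deviation of cloud water content divided by its mean."""
    n = len(water_contents)
    mean = sum(water_contents) / n
    variance = sum((w - mean) ** 2 for w in water_contents) / n  # population variance
    return math.sqrt(variance) / mean
```

A homogeneous cloud gives f = 0; samples of 0.5 and 1.5 (mean 1.0, standard deviation 0.5) give f = 0.5.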

Relevance: 20.00%

Abstract:

In the context of either Bayesian or classical sensitivity analyses of over-parametrized models for incomplete categorical data, it is well known that prior dependence of posterior inferences for nonidentifiable parameters, or the use of overly parsimonious over-parametrized models, may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior dependence. We review the literature on this topic and consider simple examples to emphasize that, in both inferential frameworks, the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over-parametrized models may drastically impact the results, suggesting that a careful examination of their effects should be considered before drawing conclusions.

Relevance: 20.00%

Abstract:

We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum-likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A. G. Patriota and A. J. Lemonte, Bias correction in a multivariate regression model with general parameterization, Stat. Prob. Lett. 79 (2009), pp. 1655-1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that analytical bias correction outperforms numerical bias corrections obtained from bootstrapping schemes.
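As a generic illustration of the bootstrap bias correction being compared (not the authors' multivariate normal regression setting; the estimator, replicate count, and seed here are all assumptions), the corrected estimate subtracts the bootstrap estimate of the bias from the original estimate:

```python
import random
import statistics

def bootstrap_bias_corrected(estimator, sample, n_boot=2000, seed=0):
    """Bootstrap bias correction: the bias is estimated as the mean of the
    bootstrap replicates minus theta_hat, so the corrected value is
    2 * theta_hat - mean(replicates)."""
    rng = random.Random(seed)
    theta_hat = estimator(sample)
    replicates = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]  # sample with replacement
        replicates.append(estimator(resample))
    return 2 * theta_hat - statistics.fmean(replicates)

# The biased (divide-by-n) variance estimator is pushed towards the
# unbiased value by the correction.
sample = [1.0, 2.0, 3.0, 4.0, 5.0]
corrected = bootstrap_bias_corrected(statistics.pvariance, sample)
```

Here the correction raises the downward-biased population-variance estimate of 2.0 towards the unbiased value.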

Relevance: 20.00%

Abstract:

We show that at one-loop order, negative-dimensional, Mellin-Barnes (MB) and Feynman parametrization (FP) approaches to Feynman loop integral calculations are equivalent. Starting with a generating functional, for two and then for n-point scalar integrals, we show how to reobtain MB results, using negative-dimensional and FP techniques. The n-point result is valid for different masses, arbitrary exponents of propagators and dimension.
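For reference, the standard Feynman parametrization used in such one-loop calculations combines two propagator factors with arbitrary exponents as

```latex
\frac{1}{A^{\alpha} B^{\beta}}
  = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}
    \int_{0}^{1} dx\,
    \frac{x^{\alpha-1}\,(1-x)^{\beta-1}}{\left[\, x A + (1-x) B \,\right]^{\alpha+\beta}} ,
```

which for \(\alpha = \beta = 1\) reduces to the familiar \(1/(AB) = \int_0^1 dx \, [xA + (1-x)B]^{-2}\).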

Relevance: 20.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Abstract:

The meccano method is a novel and promising mesh generation method for simultaneously creating adaptive tetrahedral meshes and volume parametrizations of a complex solid. We highlight the fact that the method requires minimum user intervention and has a low computational cost. The method builds a 3-D triangulation of the solid as a deformation of an appropriate tetrahedral mesh of the meccano. The new mesh generator combines an automatic parametrization of surface triangulations, a local refinement algorithm for 3-D nested triangulations, and a simultaneous untangling and smoothing procedure. At present, the procedure is fully automatic for a genus-zero solid. In this case, the meccano can be a single cube. The efficiency of the proposed technique is shown with several applications...

Relevance: 20.00%

Abstract:

In this paper we review the novel meccano method. We summarize the main stages (subdivision, mapping, optimization) of this automatic tetrahedral mesh generation technique, and we concentrate the study on complex genus-zero solids. In this case, our procedure only requires a surface triangulation of the solid. A crucial consequence of our method is the volume parametrization of the solid to a cube. We construct volume T-meshes for isogeometric analysis by using this result. The efficiency of the proposed technique is shown with several examples. A comparison between the meccano method and standard mesh generation techniques is introduced…

Relevance: 20.00%

Abstract:

This work presents a novel approach to solving a two-dimensional problem using an adaptive finite element approach. The most common strategy for dealing with nested adaptivity is to generate a mesh that represents the geometry and the input parameters correctly, and to refine this mesh locally to obtain the most accurate solution. As opposed to this approach, the authors propose a technique using independent meshes for the geometry, the input data and the unknowns. Each particular mesh is obtained by a local nested refinement of the same coarse mesh in the parametric space…