54 results for Numerical experiments


Relevance:

60.00%

Abstract:

The present study investigates the growth of error in baroclinic waves. It is found that stable or neutral waves are particularly sensitive to errors in the initial condition. Short stable waves are mainly sensitive to phase errors and the ultra-long waves to amplitude errors. Analysis-simulation experiments have indicated that the amplitudes of the very long waves usually become too small in the free atmosphere, due to the sparse and very irregular distribution of upper-air observations. This also applies to four-dimensional data assimilation experiments, since the amplitudes of the very long waves are usually underpredicted. The numerical experiments reported here show that if the very long waves have these kinds of amplitude errors in the upper troposphere or lower stratosphere, the error is rapidly propagated (within a day or two) to the surface and the lower troposphere.

Relevance:

60.00%

Abstract:

As laid out in its convention, ECMWF has eight different objectives. One of the major objectives is the preparation, on a regular basis, of the data necessary for medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the Member Countries the basic forecasting parameters with an appropriate resolution in space and time. It follows that, from the operational point of view, the forecasting system at the Centre must be functionally integrated with the weather services of the Member Countries. The operational interface between ECMWF and the Member Countries must be properly specified in order to give both systems reasonable flexibility. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from 2-3 day forecasting. From the physical point of view, we can define a medium-range forecast as one in which the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and generally regarded as more difficult than extended forecasting, where we usually predict only time and space mean values. The predictability of atmospheric flow has been extensively studied in recent years in theoretical investigations and by numerical experiments. As discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.

Relevance:

60.00%

Abstract:

The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of the initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
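The EnKF analysis step used in such twin experiments can be sketched in a generic, textbook form as follows. This is a perturbed-observation formulation in Python/NumPy; the dimensions, observation operator, and covariances in the example are illustrative placeholders, not those of the study's balanced-dynamics model:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, obs, H, R):
    """Perturbed-observation EnKF analysis step.

    ensemble : (n, N) array holding N forecast states in its columns
    obs      : (m,) observation vector
    H        : (m, n) linear observation operator
    R        : (m, m) observation-error covariance
    """
    n, N = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                      # forecast anomalies
    Pf = A @ A.T / (N - 1)                   # sample forecast covariance
    S = H @ Pf @ H.T + R                     # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)          # Kalman gain
    # perturb the observations so the analysis ensemble retains the
    # statistically correct spread
    pert = rng.multivariate_normal(np.zeros(len(obs)), R, size=N).T
    return ensemble + K @ (obs[:, None] + pert - H @ ensemble)
```

Balance enters only through the forecast ensemble here: if every member lies near the slow manifold, the sample covariance, and hence the gain, is balanced, which is the advantage of the EnKF noted in the abstract.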

Relevance:

60.00%

Abstract:

In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT), and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
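Fisher's linear discriminant, used in the paper to illustrate classification, can be sketched in a generic two-class form as follows. The implementation is ours, in Python/NumPy; the small regularization term is an assumption added here to cope with nearly singular scatter matrices, in the spirit of the ill-posedness discussed above:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Fisher's linear discriminant direction for two classes.

    X0, X1 : (n0, d) and (n1, d) arrays of measurement vectors.
    Returns the projection direction w and a midpoint threshold.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter; measurement vectors from ill-posed
    # reconstructions can make it nearly singular, hence the ridge
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    Sw += 1e-8 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2
    return w, threshold

def classify(x, w, threshold):
    """Return 0 or 1 according to which side of the threshold x falls."""
    return int(w @ x > threshold)
```

The same two functions apply whether the input vectors are reconstructed currents or raw magnetic field measurements; only the preprocessing differs.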

Relevance:

60.00%

Abstract:

New representations and efficient calculation methods are derived for the problem of propagation from an infinite regularly spaced array of coherent line sources above a homogeneous impedance plane, and for the Green's function for sound propagation in the canyon formed by two infinitely high, parallel, rigid or sound-soft walls and an impedance ground surface. The infinite sum of source contributions is replaced by a finite sum and the remainder is expressed as a Laplace-type integral. A pole subtraction technique is used to remove poles in the integrand which lie near the path of integration, obtaining a smooth integrand that is more suitable for numerical integration, and a specific numerical integration method is proposed. Numerical experiments show highly accurate results across the frequency spectrum for a range of ground surface types. It is expected that the methods proposed will prove useful in boundary element modeling of noise propagation in canyon streets and in ducts, and for problems of scattering by periodic surfaces.
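The pole-subtraction idea — integrate the singular part analytically and leave a smooth remainder for standard quadrature — can be illustrated on a scalar model integral. The example below is ours, not the paper's Green's function: it evaluates the integral over [0, 1] of f(t)/(t - p) for a complex pole p lying close to the real integration path:

```python
import numpy as np

def integral_with_pole(f, p, n=64):
    """Integral over [0, 1] of f(t)/(t - p), for a pole p near [0, 1].

    Pole subtraction: f(t)/(t-p) = (f(t) - f(p))/(t-p) + f(p)/(t-p).
    The first term has a removable singularity and is handled by
    Gauss-Legendre quadrature; the second is integrated analytically.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (nodes + 1.0)              # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    fp = f(p)
    smooth = (f(t) - fp) / (t - p)       # smooth remainder
    # analytic: integral of 1/(t-p) over [0, 1] (p off the real axis)
    analytic = fp * (np.log(1.0 - p) - np.log(-p))
    return np.sum(w * smooth) + analytic
```

For f polynomial the remainder is itself polynomial, so the quadrature is exact; for a pole only 0.01 from the path, naive quadrature of the original integrand would be badly inaccurate at the same number of nodes.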

Relevance:

60.00%

Abstract:

In this paper we have proposed and analyzed a simple mathematical model consisting of four variables, viz., nutrient concentration, toxin producing phytoplankton (TPP), non-toxic phytoplankton (NTP), and toxin concentration. Limitation in the concentration of the extracellular nutrient has been incorporated as an environmental stress condition for the plankton population, and the liberation of toxic chemicals has been described by a monotonic function of extracellular nutrient. The model is analyzed and simulated to reproduce the experimental findings of Graneli and Johansson [Graneli, E., Johansson, N., 2003. Increase in the production of allelopathic Prymnesium parvum cells grown under N- or P-deficient conditions. Harmful Algae 2, 135–145]. The robustness of the numerical experiments is tested by a formal parameter sensitivity analysis. Ours is the first theoretical model consistent with the experiment of Graneli and Johansson (2003); our results demonstrate that, when nutrient-deficient conditions are favorable for the TPP population to release toxic chemicals, the TPP species controls the bloom of other, non-toxic phytoplankton species. Consistent with the observations of Graneli and Johansson (2003), our model overcomes the limitation, present in several other models of plankton dynamics, of not incorporating the effect of nutrient-limited toxin production.
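A four-variable system of this general shape can be sketched as below. The functional forms and parameter values are illustrative placeholders chosen by us (Monod uptake, toxin release decreasing in nutrient concentration), not the paper's equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical nutrient-TPP-NTP-toxin system: nutrient N, toxic
# phytoplankton P1 (TPP), non-toxic phytoplankton P2 (NTP), toxin T.
# Toxin release is a monotonically decreasing function of N,
# mimicking enhanced toxicity under nutrient deficiency.
def rhs(t, y, D=0.1, N0=1.0, a1=0.8, a2=1.0, k=0.5, m=0.05,
        theta=0.4, g=0.3, d=0.2):
    N, P1, P2, T = y
    u1 = a1 * N / (k + N)             # Monod uptake rate, TPP
    u2 = a2 * N / (k + N)             # Monod uptake rate, NTP
    release = theta * k / (k + N)     # toxin release grows as N falls
    dN = D * (N0 - N) - u1 * P1 - u2 * P2
    dP1 = u1 * P1 - m * P1
    dP2 = u2 * P2 - m * P2 - g * T * P2   # toxin suppresses NTP
    dT = release * P1 - d * T
    return [dN, dP1, dP2, dT]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.1, 0.1, 0.0],
                dense_output=True, rtol=1e-8)
```

A parameter sensitivity analysis of the kind the paper mentions would rerun this integration over perturbed values of the arguments of `rhs` and compare the resulting trajectories.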

Relevance:

60.00%

Abstract:

The European summer of 2012 was marked by strongly contrasting rainfall anomalies, which led to flooding in northern Europe and droughts and wildfires in southern Europe. This season was not an isolated event but rather the latest in a string of summers characterized by a southward-shifted Atlantic storm track, as described by the negative phase of the summer North Atlantic Oscillation (SNAO). The degree of decadal variability in these features suggests a role for forcing from outside the dynamical atmosphere, and preliminary numerical experiments suggest that the global SST and low Arctic sea-ice extent anomalies are likely to have played a role, with warm North Atlantic SSTs a particular contributing factor. The direct effects of changes in radiative forcing from greenhouse gases and aerosols are not included in these experiments, but both anthropogenic forcing and natural variability may have influenced the SST and sea-ice changes.

Relevance:

60.00%

Abstract:

Numerical experiments are described that pertain to the climate of a coupled atmosphere–ocean–ice system in the absence of land, driven by modern-day orbital and CO2 forcing. Millennial time-scale simulations yield a mean state in which ice caps reach down to 55° of latitude and both the atmosphere and ocean comprise eastward- and westward-flowing zonal jets, whose structure is set by their respective baroclinic instabilities. Despite the zonality of the ocean, it is remarkably efficient at transporting heat meridionally through the agency of Ekman transport and eddy-driven subduction. Indeed the partition of heat transport between the atmosphere and ocean is much the same as the present climate, with the ocean dominating in the Tropics and the atmosphere in the mid–high latitudes. Variability of the system is dominated by the coupling of annular modes in the atmosphere and ocean. Stochastic variability inherent to the atmospheric jets drives variability in the ocean. Zonal flows in the ocean exhibit decadal variability, which, remarkably, feeds back to the atmosphere, coloring the spectrum of annular variability. A simple stochastic model can capture the essence of the process. Finally, it is briefly reviewed how the aquaplanet can provide information about the processes that set the partition of heat transport and the climate of Earth.
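The "simple stochastic model" alluded to is not specified in the abstract; a minimal sketch in the same spirit — a fast, stochastically forced atmospheric annular index coupled to a slowly integrating oceanic one that feeds weakly back, reddening the atmospheric spectrum — might look like this (all parameters and time scales are illustrative assumptions):

```python
import numpy as np

def simulate(nsteps=200_000, dt=0.1, tau_a=1.0, tau_o=100.0,
             c=0.05, sigma=1.0, seed=0):
    """Euler-Maruyama integration of a two-variable stochastic model.

    a : fast, damped, noise-driven atmospheric annular index
    o : slow oceanic zonal-flow anomaly integrating a, with weak
        feedback c * o onto the atmosphere
    """
    rng = np.random.default_rng(seed)
    a = np.zeros(nsteps)
    o = np.zeros(nsteps)
    for n in range(nsteps - 1):
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        a[n + 1] = a[n] + dt * (-a[n] / tau_a + c * o[n]) + noise
        o[n + 1] = o[n] + dt * (a[n] - o[n]) / tau_o
    return a, o
```

The oceanic series decorrelates far more slowly than the atmospheric one, and through the feedback term it colors the low-frequency end of the atmospheric spectrum, which is the qualitative behaviour the abstract describes.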

Relevance:

60.00%

Abstract:

Based on the geological evidence that the northern Tibetan Plateau (NTP) experienced uplift of a finite magnitude since the Miocene and that the major Asian inland deserts formed in the early Pliocene, a regional climate model (RegCM4.1) with a horizontal resolution of 50 km was used to explore the effects of the NTP uplift and the related aridification of inland Asia on regional climate. We designed three numerical experiments: the control experiment representing the present-day condition, the high-mountain experiment representing the early Pliocene condition with an uplifted NTP but no Asian inland deserts, and the low-mountain experiment representing the mid-Miocene condition with reduced topography in the NTP (by as much as 2400 m) and also no deserts. Our simulation results indicated that the NTP uplift caused significant reductions in annual precipitation in a broad region of inland Asia north of the Tibetan Plateau (TP), mainly due to the enhanced rain shadow effect of the mountains and changes in the regional circulations. However, four mountainous regions located in the uplift showed significant increases in precipitation, stretching from the Pamir Plateau in the west to the Qilian Mountains in the east. These mountainous areas also experienced changes in rainfall seasonality, with the greatest increases occurring during the respective rainy seasons, predominantly resulting from enhanced orographically forced upwind ascent. The appearance of the major deserts in inland Asia further reduced precipitation in the region and led to increased dust emission and deposition fluxes, while the spatial patterns of dust deposition also changed, not only in regions of uplift-impacted topography but also in downwind regions. One major contribution of this study is the comparison of the simulation results with 11 existing geological records representing moisture conditions from the Miocene to the Pliocene. The comparisons revealed good matches between the simulations and the published geological records. We therefore conclude that the NTP uplift and the related formation of the major deserts played a controlling role in the evolution of regional climatic conditions in a broad region of inland Asia since the Miocene.

Relevance:

60.00%

Abstract:

We study discontinuous Galerkin approximations of the p-biharmonic equation for p∈(1,∞) from a variational perspective. We propose a discrete variational formulation of the problem based on an appropriate definition of a finite element Hessian and study convergence of the method (without rates) using a semicontinuity argument. We also present numerical experiments aimed at testing the robustness of the method.

Relevance:

60.00%

Abstract:

We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell's equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds with high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and Maxwell's equations in higher space dimensions. Some numerical experiments demonstrate the theoretical results and the faster convergence compared to the non-Trefftz version of the scheme.

Relevance:

60.00%

Abstract:

Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. Often the minimisation is non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to ameliorate the conditioning of the Hessian by changing, or preconditioning, the Hessian. There is only sparse information in the literature describing the causes of ill-conditioning of the optimal state estimation problem and explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and preconditioned system in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
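The effect such bounds describe can be seen in a toy computation. The setup below — a generic 3D-Var-style Hessian S = B^-1 + H^T R^-1 H with first-order autoregressive background correlations and every fourth point observed — is an assumption of ours for illustration, not the paper's system; the standard control-variable transform v = B^-1/2 x turns S into I + B^1/2 H^T R^-1 H B^1/2:

```python
import numpy as np

def condition_numbers(B, H, R):
    """Condition numbers of the unpreconditioned and B^{1/2}-
    preconditioned Hessians of a linearised state-estimation problem."""
    n = B.shape[0]
    S = np.linalg.inv(B) + H.T @ np.linalg.solve(R, H)
    # symmetric square root of B via its eigendecomposition
    vals, vecs = np.linalg.eigh(B)
    Bhalf = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    S_pre = np.eye(n) + Bhalf @ H.T @ np.linalg.solve(R, H) @ Bhalf
    return np.linalg.cond(S), np.linalg.cond(S_pre)

# long-length-scale background correlations make B, and hence S,
# badly conditioned; the transform absorbs B into the identity
n = 40
i = np.arange(n)
B = 0.98 ** np.abs(i[:, None] - i[None, :])   # AR(1) correlation matrix
H = np.eye(n)[::4]                            # observe every 4th point
R = np.eye(H.shape[0])
kappa, kappa_pre = condition_numbers(B, H, R)
```

Because the preconditioned Hessian is the identity plus a positive semidefinite term, its smallest eigenvalue is at least one, which is why the transform reliably improves the conditioning in this sparse-observation regime.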

Relevance:

60.00%

Abstract:

Trust and reputation are important factors that influence the success of both traditional transactions in physical social networks and modern e-commerce in virtual Internet environments. It is difficult to define the concept of trust and quantify it because trust has both subjective and objective characteristics at the same time. A well-reported issue with reputation management systems in business-to-consumer (BtoC) e-commerce is the "all good reputation" problem. To address this problem, a new computational model of reputation is proposed in this paper. The ratings of each customer are set as basic trust score events, and the time series of massive ratings are aggregated via the Beta distribution to form the sellers' local temporal trust scores. A logical model of trust and reputation is established based on an analysis of the dynamical relationship between trust and reputation. For single goods with repeat transactions, an iterative mathematical model of trust and reputation is established with a closed-loop feedback mechanism. Numerical experiments on repeated transactions recorded over a period of 24 months are performed. The experimental results show that the proposed method can guide both theoretical research into trust and reputation and the practical design of reputation systems in BtoC e-commerce.
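The Beta-distribution aggregation step can be sketched in a generic form as follows. This is the standard beta-reputation idea; the binary-rating encoding, window structure, and forgetting factor are illustrative choices of ours, not the paper's exact model:

```python
# Each rating is a binary event: positive (1) or negative (0). With
# r positive and s negative ratings in a time window, the local trust
# score is the mean of the Beta(r + 1, s + 1) posterior; older windows
# are discounted so reputation tracks recent behaviour.

def local_trust(ratings):
    """Beta posterior mean for a window of binary ratings."""
    r = sum(ratings)
    s = len(ratings) - r
    return (r + 1) / (r + s + 2)

def reputation(windows, forgetting=0.9):
    """Aggregate per-window trust scores; most recent window last."""
    score, weight = 0.0, 0.0
    for age, ratings in enumerate(reversed(windows)):
        w = forgetting ** age
        score += w * local_trust(ratings)
        weight += w
    return score / weight
```

Under this scheme a seller whose recent window turns negative scores below one with an identical older history and a clean recent record, which is the kind of discrimination the "all good reputation" problem calls for.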

Relevance:

60.00%

Abstract:

Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. In dealing with such datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
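The contrast with vectorized subspace clustering can be sketched with a plain HOSVD projection followed by k-means — a much simpler stand-in for the paper's heterogeneous Tucker model and Riemannian trust-region solver, written here in NumPy only. Instead of flattening each sample slice, the samples are projected onto low-rank mode factors first, so mode structure is retained:

```python
import numpy as np

def hosvd_features(X, r1, r2):
    """X: (I, J, N) tensor with samples along the last mode.
    Returns per-sample core matrices, flattened as feature vectors."""
    I, J, N = X.shape
    # mode-1 and mode-2 unfoldings give the factor matrices
    U1, _, _ = np.linalg.svd(X.reshape(I, -1), full_matrices=False)
    U2, _, _ = np.linalg.svd(X.transpose(1, 0, 2).reshape(J, -1),
                             full_matrices=False)
    cores = np.einsum('ia,ijn,jb->nab', U1[:, :r1], X, U2[:, :r2])
    return cores.reshape(N, -1)

def kmeans(F, k, iters=50):
    """Lloyd's algorithm with greedy farthest-point initialization."""
    centers = [F[0]]
    for _ in range(k - 1):
        d = np.min([np.square(F - c).sum(-1) for c in centers], axis=0)
        centers.append(F[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    return labels
```

The feature dimension here is r1 * r2 rather than I * J, which is the practical payoff of clustering in the Tucker-core domain rather than on vectorized slices.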

Relevance:

60.00%

Abstract:

The goal of the Palaeoclimate Modelling Intercomparison Project (PMIP) is to understand the response of the climate system to changes in different climate forcings and to feedbacks. Through comparison with observations of the environmental impacts of these climate changes, or with climate reconstructions based on physical, chemical or biological records, PMIP also addresses the issue of how well state-of-the-art models simulate climate changes. Palaeoclimate states are radically different from those of the recent past documented by the instrumental record and thus provide an out-of-sample test of the models used for future climate projections and a way to assess whether they have the correct sensitivity to forcings and feedbacks. Five distinctly different periods have been selected as focus for the core palaeoclimate experiments that are designed to contribute to the objectives of the sixth phase of the Coupled Model Intercomparison Project (CMIP6). This manuscript describes the motivation for the choice of these periods and the design of the numerical experiments, with a focus upon their novel features compared to the experiments performed in previous phases of PMIP and CMIP as well as the benefits of common analyses of the models across multiple climate states. It also describes the information needed to document each experiment and the model outputs required for analysis and benchmarking.