67 results for Unified transform


Relevance:

20.00%

Publisher:

Abstract:

The concepts of on-line transactional processing (OLTP) and on-line analytical processing (OLAP) are often confused with the technologies or models that are used to design transactional and analytics-based information systems. This has in some measure contributed to the existence of gaps between the semantics of information captured during transactional processing and of information stored for analytical use. In this paper, we propose the use of a unified semantics design model as a solution to help bridge the semantic gaps between data captured by OLTP systems and the information provided by OLAP systems. The central focus of this design approach is on enabling business intelligence using not just data, but data with context.

Relevance:

20.00%

Publisher:

Abstract:

Many physical systems exhibit dynamics with vastly different time scales. Often the different motions interact only weakly and the slow dynamics is naturally constrained to a subspace of phase space, in the vicinity of a slow manifold. In geophysical fluid dynamics this reduction in phase space is called balance. Classically, balance is understood by way of the Rossby number R or the Froude number F; either R ≪ 1 or F ≪ 1. We examined the shallow-water equations and Boussinesq equations on an f-plane and determined a dimensionless parameter ε, small values of which imply a time-scale separation. In terms of R and F, ε = RF/√(R^2 + F^2). We then developed a unified theory of (extratropical) balance based on ε that includes all cases of small R and/or small F. The leading-order systems are ensured to be Hamiltonian and turn out to be governed by the quasi-geostrophic potential-vorticity equation. However, the height field is not necessarily in geostrophic balance, so the leading-order dynamics are more general than in quasi-geostrophy. Thus the quasi-geostrophic potential-vorticity equation (as distinct from the quasi-geostrophic dynamics) is valid more generally than its traditional derivation would suggest. In the case of the Boussinesq equations, we have found that balanced dynamics generally implies hydrostatic balance without any assumption on the aspect ratio; only when the Froude number is not small and it is the Rossby number that guarantees a timescale separation must we impose the requirement of a small aspect ratio to ensure hydrostatic balance.
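The role of ε as a unified small parameter can be checked directly (the inequality below is an editorial addition, not taken from the paper): since √(R² + F²) ≥ max(R, F),

```latex
\epsilon \;=\; \frac{RF}{\sqrt{R^2 + F^2}} \;\le\; \frac{RF}{\max(R,F)} \;=\; \min(R, F),
```

so ε is small whenever either the Rossby number or the Froude number is small, which is why a single parameter can cover all the balance regimes mentioned above.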

Relevance:

20.00%

Publisher:

Abstract:

This work proposes a unified neurofuzzy modelling scheme. To begin with, the initial fuzzy base construction method is based on fuzzy clustering utilising a Gaussian mixture model (GMM) combined with the analysis of variance (ANOVA) decomposition in order to obtain more compact univariate and bivariate membership functions over the subspaces of the input features. The mean and covariance of the Gaussian membership functions are found by the expectation maximisation (EM) algorithm, with the merit of revealing the underlying density distribution of the system inputs. The resultant set of membership functions forms the basis of the generalised fuzzy model (GFM) inference engine. The model structure and parameters of this neurofuzzy model are identified via supervised subspace orthogonal least squares (OLS) learning. Finally, instead of providing a deterministic class label as the model output, as is conventional, a logistic regression model is applied to present the classifier's output, in which the sigmoid type of logistic transfer function scales the outputs of the neurofuzzy model to the class probability. Experimental validation results are presented to demonstrate the effectiveness of the proposed neurofuzzy modelling scheme.
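As a toy illustration of the EM step the abstract relies on (a generic one-dimensional sketch, not the authors' GMM/ANOVA pipeline; all names are assumptions):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """Fit a one-dimensional Gaussian mixture model by the
    expectation-maximisation (EM) algorithm. Each fitted component
    (mean, variance) can be read off as a univariate Gaussian
    membership function over the input feature."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances from responsibilities
        n = r.sum(axis=0)
        w = n / x.size
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return mu, var, w
```

On well-separated bimodal data the fitted means land near the two cluster centres, which is the "revealing the underlying density distribution" property the abstract mentions.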

Relevance:

20.00%

Publisher:

Abstract:

A detailed spectrally-resolved extraterrestrial solar spectrum (ESS) is important for line-by-line radiative transfer modeling in the near-infrared (near-IR). Very few observationally-based high-resolution ESS are available in this spectral region. Consequently, the theoretically-calculated ESS by Kurucz has been widely adopted. We present the CAVIAR (Continuum Absorption at Visible and Infrared Wavelengths and its Atmospheric Relevance) ESS, which is derived using the Langley technique applied to calibrated observations using a ground-based high-resolution Fourier transform spectrometer (FTS) in atmospheric windows from 2000–10000 cm⁻¹ (1–5 μm). There is good agreement between the strengths and positions of solar lines between the CAVIAR and the satellite-based ACE-FTS (Atmospheric Chemistry Experiment-FTS) ESS in the spectral region where they overlap, and good agreement with other ground-based FTS measurements in two near-IR windows. However, there are significant differences in structure between the CAVIAR ESS and spectra from semi-empirical models. In addition, we found a difference of up to 8% in the absolute (and hence the wavelength-integrated) irradiance between the CAVIAR ESS and that of Thuillier et al., which was based on measurements from the Atmospheric Laboratory for Applications and Science satellite and other sources. In many spectral regions, this difference is significant, as the coverage factor k = 2 (or 95% confidence limit) uncertainties in the two sets of observations do not overlap. Since the total solar irradiance is relatively well constrained, if the CAVIAR ESS is correct, then this would indicate an integrated "loss" of solar irradiance of about 30 W m⁻² in the near-IR that would have to be compensated by an increase at other wavelengths.
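The Langley technique mentioned above can be sketched in a few lines (a minimal illustration of the principle, assuming simple Beer-Lambert attenuation with constant optical depth; not the CAVIAR calibration pipeline):

```python
import numpy as np

def langley_extrapolate(airmass, signal):
    """Langley technique: under Beer-Lambert attenuation
    V = V0 * exp(-tau * m), a straight-line fit of ln(V) against the
    airmass m, extrapolated to m = 0, recovers the extraterrestrial
    signal V0; the slope gives the optical depth tau."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope
```

The extrapolation to zero airmass is what lets a ground-based spectrometer infer the solar spectrum outside the atmosphere.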

Relevance:

20.00%

Publisher:

Abstract:

Infrared polarization and intensity imagery provide complementary and discriminative information in image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges the information with various combination rules. It makes use of both low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. Then the low-frequency component images are combined using a fuzzy combination rule blending three sub-combination methods of (1) region feature maximum, (2) region feature weighting average, and (3) pixel value maximum; and the support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods of (1) pixel energy maximum and (2) region feature weighting. With the variables of two newly defined features, i.e. the low-frequency difference feature for low-frequency component images and the support-value difference feature for support value image sequences, trapezoidal membership functions are proposed and developed for tuning the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results of visual inspection and quantitative evaluation both indicate the superiority of the proposed method to its counterparts in image fusion of infrared polarization and intensity images.
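The trapezoidal membership functions and fuzzy blending of sub-combination rules can be sketched generically (the breakpoints and the two-rule blend below are illustrative assumptions, not the paper's tuned parameters):

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: rises over [a, b], equals 1
    on [b, c], falls over [c, d], and is 0 outside [a, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

def fuzzy_blend(diff, out_a, out_b, mf=(0.0, 0.2, 0.8, 1.0)):
    """Blend two sub-combination outputs with a weight given by the
    membership degree of a difference feature, so the combination
    switches smoothly rather than abruptly between the sub-methods."""
    w = trapmf(diff, *mf)
    return w * out_a + (1.0 - w) * out_b
```

A large difference feature drives the weight to 1 and selects one sub-rule outright; intermediate values mix the two, which is the soft switching that fuzzy combination provides over a hard threshold.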

Relevance:

20.00%

Publisher:

Abstract:

Traditional derivations of available potential energy, in a variety of contexts, involve combining some form of mass conservation together with energy conservation. This raises the questions of why such constructions are required in the first place, and whether there is some general method of deriving the available potential energy for an arbitrary fluid system. By appealing to the underlying Hamiltonian structure of geophysical fluid dynamics, it becomes clear why energy conservation is not enough, and why other conservation laws such as mass conservation need to be incorporated in order to construct an invariant, known as the pseudoenergy, that is a positive‐definite functional of disturbance quantities. The available potential energy is just the non‐kinetic part of the pseudoenergy, the construction of which follows a well defined algorithm. Two notable features of the available potential energy defined thereby are first, that it is a locally defined quantity, and second, that it is inherently definable at finite amplitude (though one may of course always take the small‐amplitude limit if this is appropriate). The general theory is made concrete by systematic derivations of available potential energy in a number of different contexts. All the well known expressions are recovered, and some new expressions are obtained. The possibility of generalizing the concept of available potential energy to dynamically stable basic flows (as opposed to statically stable basic states) is also discussed.
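The construction described above can be summarised in symbols (notation is an editorial addition following the standard Hamiltonian formalism, not copied from the paper): with Hamiltonian H and a Casimir invariant C built from the extra conservation laws such as mass, choose C so that the basic state U is a critical point of H + C; the pseudoenergy is then

```latex
\mathcal{A}[u] \;=\; \bigl(H[u] + C[u]\bigr) - \bigl(H[U] + C[U]\bigr),
\qquad \delta\,(H + C)\big|_{u=U} = 0,
```

which is positive-definite in the disturbance u − U for a statically stable basic state, and its non-kinetic part is the available potential energy.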

Relevance:

20.00%

Publisher:

Abstract:

We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions. Copyright © 2011 Royal Meteorological Society
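One ingredient of such a filter, forming a kernel density prior from an ensemble, can be sketched in one dimension (a generic Gaussian KDE with Silverman's bandwidth, offered as an assumption about one standard choice, not the EGMF's own analysis step):

```python
import numpy as np

def ensemble_kde(ensemble, x):
    """Gaussian kernel density estimate of a one-dimensional prior
    built from an ensemble, using Silverman's rule-of-thumb
    bandwidth. Each ensemble member contributes one Gaussian kernel,
    so the prior is a Gaussian mixture with equal weights."""
    n = ensemble.size
    h = 1.06 * ensemble.std(ddof=1) * n ** (-1.0 / 5.0)  # Silverman bandwidth
    z = (x[:, None] - ensemble[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))
```

Because every member carries its own kernel, the estimated prior can be multimodal, which is what lets a mixture filter track the non-Gaussian distributions mentioned above.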

Relevance:

20.00%

Publisher:

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
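The importance-sampling-plus-resampling step that sequential Monte Carlo methods use (the baseline the paper's transform method is contrasted with) can be sketched as follows; the systematic resampling variant shown is one common choice, not necessarily the paper's:

```python
import numpy as np

def bayes_importance_step(particles, likelihood, rng):
    """One sequential Monte Carlo analysis step: reweight an equally
    weighted prior ensemble by the observation likelihood, then apply
    systematic (low-variance) resampling to return an equally
    weighted posterior ensemble."""
    w = likelihood(particles)
    w = w / w.sum()
    n = particles.shape[0]
    # systematic resampling: one uniform draw, stratified positions
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx]
```

Resampling duplicates high-weight particles and discards low-weight ones; an EnKF instead moves every member by a linear transformation, and the optimal-transport method proposed above replaces both with a discrete coupling between prior and posterior samples.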

Relevance:

20.00%

Publisher:

Abstract:

Two recent works have adapted the Kalman–Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither requires matrix inversions, except for the frequently diagonal observation error covariance. We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background error to observational error variance, and that using the integration scheme proposed in both formulations can lead to failure. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables. Finally, the performance of our ensemble transform Kalman–Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
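The pseudo-time ODE idea can be illustrated in the scalar case (an editorial sketch: the ODE form below is the standard scalar covariance equation for a continuous analysis step, not code from either formulation):

```python
def bucy_pseudo_time(pb, h, r, steps=1000):
    """Integrate the scalar covariance ODE of a Kalman-Bucy-type
    analysis step, dP/ds = -(h**2 / r) * P**2, over pseudo-time
    s in [0, 1] with classical RK4. The exact endpoint is the Kalman
    analysis variance (1/pb + h**2/r)**-1. When pb/r is large the
    ODE is stiff, so a fixed-step explicit scheme needs many steps:
    the stiffness issue analysed in the paper."""
    f = lambda p: -(h * h / r) * p * p
    p, ds = float(pb), 1.0 / steps
    for _ in range(steps):
        k1 = f(p)
        k2 = f(p + 0.5 * ds * k1)
        k3 = f(p + 0.5 * ds * k2)
        k4 = f(p + ds * k3)
        p += ds * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return p
```

Integrating from the background variance over unit pseudo-time reproduces the Kalman analysis variance without inverting anything larger than the observation error covariance.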

Relevance:

20.00%

Publisher:

Abstract:

Sea ice contains flaws including frictional contacts. We aim to describe quantitatively the mechanics of those contacts, providing local physics for geophysical models. With a focus on the internal friction of ice, we review standard micro-mechanical models of friction. The solid's deformation under normal load may be ductile or elastic. The shear failure of the contact may be by ductile flow, brittle fracture, or melting and hydrodynamic lubrication. Combinations of these give a total of six rheological models. When the material under study is ice, several of the rheological parameters in the standard models are not constant, but depend on the temperature of the bulk, on the normal stress under which samples are pressed together, or on the sliding velocity and acceleration. This has the effect of making the shear stress required for sliding dependent on sliding velocity, acceleration, and temperature. In some cases, it also perturbs the exponent in the normal-stress dependence of that shear stress away from the value that applies to most materials. We unify the models by a principle of maximum displacement for normal deformation, and of minimum stress for shear failure, reducing the controversy over the mechanism of internal friction in ice to the choice of values of four parameters in a single model. The four parameters represent, for a typical asperity contact, the sliding distance required to expel melt-water, the sliding distance required to break contact, the normal strain in the asperity, and the thickness of any ductile shear zone.

Relevance:

20.00%

Publisher:

Abstract:

We describe Global Atmosphere 4.0 (GA4.0) and Global Land 4.0 (GL4.0): configurations of the Met Office Unified Model and JULES (Joint UK Land Environment Simulator) community land surface model developed for use in global and regional climate research and weather prediction activities. GA4.0 and GL4.0 are based on the previous GA3.0 and GL3.0 configurations, with the inclusion of developments made by the Met Office and its collaborators during its annual development cycle. This paper provides a comprehensive technical and scientific description of GA4.0 and GL4.0 as well as details of how these differ from their predecessors. We also present the results of some initial evaluations of their performance. Overall, performance is comparable with that of GA3.0/GL3.0; the updated configurations include improvements to the science of several parametrisation schemes, however, and will form a baseline for further ongoing development.

Relevance:

20.00%

Publisher:

Abstract:

Automatic generation of classification rules has been an increasingly popular technique in commercial applications such as Big Data analytics, rule based expert systems and decision making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules. This may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, once generated, classification rules are used to make predictions. Where efficiency is concerned, it is desirable to find the first rule that fires as soon as possible when searching through a rule set. Thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques developed by them recently. These methods and techniques are also discussed in comparison to existing ones with respect to efficient processing of Big Data.
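The "first rule that fires" prediction step can be shown in its simplest form (a baseline linear-scan representation, offered as an illustration of the cost that better rule-set structures aim to reduce; the rule encoding is an assumption):

```python
def first_firing_rule(rules, instance):
    """Return the label of the first rule in an ordered rule set
    whose conditions all hold for the given instance. A rule is a
    (conditions, label) pair, with conditions a dict mapping an
    attribute name to its required value. The linear scan makes the
    worst case proportional to the total number of rule terms."""
    for conditions, label in rules:
        if all(instance.get(a) == v for a, v in conditions.items()):
            return label
    return None  # no rule fired; a default class could be returned here
```

Because prediction cost grows with the number and complexity of rules, pruning (fewer, simpler rules) and a better representation (finding the firing rule without scanning everything) directly speed up prediction on Big Data.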

Relevance:

20.00%

Publisher:

Abstract:

For certain observing types, such as those that are remotely sensed, the observation errors are correlated and these correlations are state- and time-dependent. In this work, we develop a method for diagnosing and incorporating spatially correlated and time-dependent observation error in an ensemble data assimilation system. The method combines an ensemble transform Kalman filter with a method that uses statistical averages of background and analysis innovations to provide an estimate of the observation error covariance matrix. To evaluate the performance of the method, we perform identical twin experiments using the Lorenz ’96 and Kuramoto-Sivashinsky models. Using our approach, a good approximation to the true observation error covariance can be recovered in cases where the initial estimate of the error covariance is incorrect. Spatial observation error covariances where the length scale of the true covariance changes slowly in time can also be captured. We find that using the estimated correlated observation error in the assimilation improves the analysis.

Relevance:

20.00%

Publisher:

Abstract:

We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition that consists in dividing recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is performed by encoding simple action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers into the recognition of simple actions which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour on a parking lot. Experiments show an improvement of ~ 30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method only experiences a limited decrease in performance when moderate amounts of noise are added.

Relevance:

20.00%

Publisher:

Abstract:

Among existing remote sensing applications, land-based X-band radar is an effective technique for monitoring wave fields, and spatial wave information can be obtained from the radar images. The two-dimensional Fourier Transform (2-D FT) is the common algorithm for deriving the spectra of radar images. However, the wave field in the nearshore area is highly non-homogeneous due to wave refraction, shoaling, and other coastal mechanisms. When applied to nearshore radar images, the 2-D FT leads to ambiguity of wave characteristics in the wave number domain. In this article, we introduce the two-dimensional Wavelet Transform (2-D WT) to capture the non-homogeneity of wave fields from nearshore radar images. The results show that the wave number spectra obtained by 2-D WT at six parallel space locations in the given image clearly present the shoaling of nearshore waves. The wave number of the peak wave energy increases along the inshore direction, and the dominant direction of the spectra changes from south-southwest (SSW) to west-southwest (WSW). To verify the results of the 2-D WT, wave shoaling in the radar images is calculated based on the dispersion relation. The theoretical calculation results agree with the results of the 2-D WT on the whole. The encouraging performance of the 2-D WT indicates its strong capability of revealing the non-homogeneity of wave fields in nearshore X-band radar images.
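The dispersion-relation check of shoaling can be sketched directly (a generic solver for the linear dispersion relation, not the paper's verification code):

```python
import math

def wavenumber(omega, depth, g=9.81):
    """Solve the linear dispersion relation omega**2 = g*k*tanh(k*h)
    for the wavenumber k with Newton's method, starting from the
    deep-water guess k = omega**2 / g. As the depth h decreases, k
    increases: the nearshore shoaling seen in the wavelet spectra."""
    k = omega * omega / g
    for _ in range(100):
        t = math.tanh(k * depth)
        f = g * k * t - omega * omega
        fp = g * t + g * k * depth * (1.0 - t * t)
        k -= f / fp
    return k
```

For a fixed wave period, evaluating this at progressively shallower depths reproduces the inshore increase of the peak wave number reported above.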