48 results for Redundant Residue Number Systems
Abstract:
Model intercomparisons have identified important deficits in the representation of the stable boundary layer by the turbulence parametrizations used in current weather and climate models. However, detrimental impacts of more realistic schemes on the large-scale flow have hindered progress in this area. Here we implement a total turbulent energy (TTE) scheme into the climate model ECHAM6. The TTE scheme considers the effects of Earth's rotation and static stability on the turbulence length scale. In contrast to the previously used turbulence scheme, the TTE scheme also implicitly represents entrainment flux in a dry convective boundary layer. Reducing the previously exaggerated surface drag in stable boundary layers does, however, increase southern-hemispheric zonal winds and large-scale pressure gradients beyond observed values. These biases can be largely removed by increasing the parametrized orographic drag. Reducing the neutral-limit turbulent Prandtl number warms and moistens low-latitude boundary layers and acts to reduce longstanding radiation biases in the stratocumulus regions, the Southern Ocean and the equatorial cold tongue that are common to many climate models.
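The abstract refers to a turbulence length scale limited by Earth's rotation and static stability. As a rough illustration of that idea only, the sketch below blends a wall-limited neutral scale with rotation and stability limits by inverse addition; the blending form and the coefficients C_F and C_N are assumptions for illustration, not the ECHAM6/TTE implementation.

```python
import numpy as np

# Illustrative sketch only: one way to limit a turbulence length scale by
# distance from the surface, Earth's rotation, and static stability, in the
# spirit of total-turbulent-energy closures. Coefficient values are assumed.

KAPPA = 0.4   # von Karman constant
C_F = 0.185   # rotation-limit coefficient (assumed)
C_N = 0.3     # stability-limit coefficient (assumed)

def turbulence_length_scale(z, tte, f_cor, brunt_vaisala):
    """Blend three limits on the mixing length via inverse addition.

    z             : height above the surface [m]
    tte           : total turbulent energy [m2 s-2]
    f_cor         : Coriolis parameter [s-1]
    brunt_vaisala : Brunt-Vaisala frequency N [s-1]; only the statically
                    stable case (N > 0) contributes a limit here.
    """
    inv_l = 1.0 / (KAPPA * z)                    # neutral, wall-limited scale
    inv_l += abs(f_cor) / (C_F * np.sqrt(tte))   # rotation limit
    if brunt_vaisala > 0.0:                      # stable-stratification limit
        inv_l += brunt_vaisala / (C_N * np.sqrt(tte))
    return 1.0 / inv_l

# Example: stable nocturnal boundary layer at mid-latitudes
print(turbulence_length_scale(z=50.0, tte=0.1, f_cor=1.0e-4, brunt_vaisala=0.02))
```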
Abstract:
Subspace clustering groups a set of samples drawn from a union of several linear subspaces into clusters, so that the samples in the same cluster come from the same linear subspace. In the majority of existing work on subspace clustering, clusters are built from feature information alone, while sample correlations in the original spatial structure are simply ignored. Moreover, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based (TLRRSC) subspace clustering method that considers feature information and spatial structure simultaneously. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC captures both the global structure and the inherent feature information of the data, and provides robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
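As a rough illustration of the final clustering step described above, the sketch below fuses a spatial-structure similarity and a feature-space similarity into one affinity matrix and feeds it to spectral clustering. The placeholder matrices Z and S, the cosine-style similarity, and the elementwise-product fusion rule are assumptions for illustration; the TLRR and sparse-coding steps that would actually produce Z and S are not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Illustrative sketch only: combine a "spatial" similarity with a "feature"
# similarity into one affinity matrix for spectral clustering.

rng = np.random.default_rng(0)
n_samples, n_clusters = 60, 3

# Placeholder representations: in TLRRSC these would be the low-rank
# representation (spatial view) and the sparse codes (feature view).
Z = rng.standard_normal((n_samples, n_samples))
S = rng.standard_normal((n_samples, 40))

def symmetric_affinity(M):
    """Cosine-style similarity, symmetrised and made non-negative."""
    norms = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
    A = (M / norms) @ (M / norms).T
    return np.abs(A + A.T) / 2.0

# Joint affinity: the elementwise product keeps pairs that are similar in
# *both* views; a weighted sum would be another reasonable fusion rule.
W = symmetric_affinity(Z) * symmetric_affinity(S)

labels = SpectralClustering(n_clusters=n_clusters,
                            affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels[:10])
```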
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data arriving online, the performance of every candidate sub-model is monitored over the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-model are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to impose a sum-to-one constraint on the combination parameters, leading to a closed-form solution and hence maximal computational efficiency. In addition, at each time step the model prediction is taken from either the resulting multiple model or the best sub-model, whichever performs best. Simulation results are given in comparison with representative alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
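As a rough illustration of the combination step described above, the sketch below computes closed-form, sum-to-one constrained least-squares weights for M sub-model predictions over a recent data window. The function name, the small ridge term, and the toy data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch only: weights w minimising ||y - P w||^2 subject to
# sum(w) = 1, solved in closed form via a Lagrange multiplier.

def combine_predictions(P, y, ridge=1e-8):
    """P : (window_length, M) sub-model predictions on the recent window
       y : (window_length,)  observed outputs on the same window"""
    M = P.shape[1]
    A = P.T @ P + ridge * np.eye(M)   # small ridge keeps A invertible
    c = P.T @ y
    ones = np.ones(M)
    A_inv_c = np.linalg.solve(A, c)
    A_inv_1 = np.linalg.solve(A, ones)
    lam = (ones @ A_inv_c - 1.0) / (ones @ A_inv_1)   # enforces sum(w) = 1
    return A_inv_c - lam * A_inv_1

# Toy usage: three sub-models predicting a noisy target over a 20-sample window
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0.0, 3.0, 20))
P = np.column_stack([y + 0.1 * rng.standard_normal(20) for _ in range(3)])
w = combine_predictions(P, y)
print(w, w.sum())   # combination weights sum to 1
```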