8 results for structure tensor
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
A new rheology that explicitly accounts for the subcontinuum anisotropy of the sea ice cover is implemented into the Los Alamos sea ice model. This is in contrast to all sea ice models included in global circulation models, which use an isotropic rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. The anisotropic rheology provides a subcontinuum description of the mechanical behavior of sea ice and accounts for a continuum-scale stress with a large shear-to-compression ratio and a tensile stress component. Results over the Arctic from a stand-alone version of the model are presented, and anisotropic model sensitivity runs are compared with a reference elasto-visco-plastic simulation. Under realistic forcing, sea ice quickly becomes highly anisotropic over large length scales, as is observed from satellite imagery. The influence of the new rheology on the state and dynamics of the sea ice cover is discussed. Our reference anisotropic run reveals that the new rheology leads to a substantial change in the spatial distribution of ice thickness and ice drift relative to the reference standard visco-plastic isotropic run, with ice thickness regionally increased by more than 1 m and ice speed reduced by up to 50%.
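For readers unfamiliar with the structure tensor formalism, the sketch below is a minimal numerical illustration (not the implementation in the Los Alamos sea ice model) of how a prognostic 2x2 structure tensor with unit trace could be stepped forward in time with two relaxation time scales, one pulling it toward alignment with a preferred direction and one pulling it back toward isotropy. The relaxation form, time-scale values and variable names are assumptions made purely for illustration.

```python
import numpy as np

def step_structure_tensor(A, align_dir, dt, tau_align=1.0e5, tau_iso=1.0e6):
    """Advance a 2x2 structure tensor A over one time step dt.

    A         : symmetric 2x2 array with trace 1 (A = I/2 is isotropic)
    align_dir : unit vector giving the preferred alignment direction
    tau_align : time scale of alignment with the deformation (s, illustrative)
    tau_iso   : time scale of relaxation back toward isotropy (s, illustrative)

    The relaxation form below is an illustrative assumption; the rheology in
    the abstract defines its own evolution terms and parameter values.
    """
    target = np.outer(align_dir, align_dir)      # fully aligned tensor
    iso = 0.5 * np.eye(2)                        # isotropic tensor
    dA = (target - A) / tau_align + (iso - A) / tau_iso
    A_new = A + dt * dA
    return A_new / np.trace(A_new)               # keep trace(A) = 1

# Example: start isotropic and relax toward alignment with the x-axis
A = 0.5 * np.eye(2)
for _ in range(1000):
    A = step_structure_tensor(A, np.array([1.0, 0.0]), dt=3600.0)
print(np.round(np.linalg.eigvalsh(A), 3))   # eigenvalues far from (0.5, 0.5) signal anisotropy
```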
Abstract:
In polar oceans, seawater freezes to form a layer of sea ice, several metres thick, that can cover up to 8% of the Earth’s surface. The modelled sea ice cover state is described by the thickness and orientational distribution of interlocking, anisotropic, diamond-shaped ice floes delineated by slip lines, as supported by observation. The purpose of this study is to develop a set of equations describing the mean-field sea ice stresses that result from interactions between the ice floes and the evolution of the ice floe orientation, which are simple enough to be incorporated into a climate model. The sea ice stress caused by a deformation of the ice cover is determined by employing an existing kinematic model of ice floe motion, which enables us to calculate the forces acting on the ice floes due to crushing into and sliding past each other, and then by averaging over all possible floe orientations. We describe the orientational floe distribution with a structure tensor and propose an evolution equation for this tensor that accounts for rigid body rotation of the floes, their apparent re-orientation due to new slip line formation, and change of shape of the floes due to freezing and melting. The form of the evolution equation proposed is motivated by laboratory observations of sea ice failure under controlled conditions. Finally, we present simulations of the evolution of sea ice stress and floe orientation for several imposed flow types. Although evidence to test the simulations against is lacking, the simulations seem physically reasonable.
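As a concrete illustration of the quantity being evolved, the sketch below builds the structure tensor as the second moment of a sampled floe orientation distribution; its eigenvalues measure the degree of anisotropy, with both equal to 1/2 for an isotropic distribution. This only illustrates the definition of the tensor, not the evolution equation proposed in the paper, and the sampled distributions are invented examples.

```python
import numpy as np

def structure_tensor(angles, weights=None):
    """Second-moment orientation tensor A = <n (x) n> for unit vectors
    n = (cos t, sin t); trace(A) = 1 by construction."""
    n = np.column_stack([np.cos(angles), np.sin(angles)])
    w = np.ones(len(angles)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return (w[:, None, None] * np.einsum('ij,ik->ijk', n, n)).sum(axis=0)

rng = np.random.default_rng(0)
iso = structure_tensor(rng.uniform(0.0, np.pi, 10_000))     # isotropic sample, A close to I/2
aligned = structure_tensor(rng.normal(0.0, 0.1, 10_000))    # strongly aligned sample
print(np.round(np.linalg.eigvalsh(iso), 2))                 # roughly [0.5, 0.5]
print(np.round(np.linalg.eigvalsh(aligned), 2))             # roughly [0.0, 1.0]
```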
Abstract:
We develop the essential ingredients of a new, continuum and anisotropic model of sea-ice dynamics designed for eventual use in climate simulation. These ingredients are a constitutive law for sea-ice stress, relating stress to the material properties of sea ice and to internal variables describing the sea-ice state, and equations describing the evolution of these variables. The sea-ice cover is treated as a densely flawed two-dimensional continuum consisting of a uniform field of thick ice that is uniformly permeated with narrow linear regions of thinner ice called leads. Lead orientation, thickness and width distributions are described by second-rank tensor internal variables: the structure, thickness and width tensors, whose dynamics are governed by corresponding evolution equations accounting for processes such as new lead generation and rotation as the ice cover deforms. These evolution equations contain contractions of higher-order tensor expressions that require closures. We develop a sea-ice stress constitutive law that relates sea-ice stress to the structure tensor, thickness tensor and strain rate. For the special case of empty leads (containing no ice), linear closures are adopted and we present calculations for simple shear, convergence and divergence.
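To illustrate why the contractions mentioned above need closures, the sketch below compares the exact contraction of a fourth-order orientation moment with a strain-rate tensor against a simple quadratic closure built from the second-order structure tensor. The quadratic closure is a common generic choice shown purely for illustration; the paper adopts linear closures for the empty-lead case, which are not reproduced here, and the sampled orientation distribution and strain rate are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(0.0, 0.4, 50_000)                  # sampled lead normal orientations
n = np.column_stack([np.cos(theta), np.sin(theta)])

A2 = np.einsum('pi,pj->ij', n, n) / len(n)            # second-order structure tensor <n n>
A4 = np.einsum('pi,pj,pk,pl->ijkl', n, n, n, n) / len(n)   # fourth-order moment <n n n n>

D = np.array([[1.0, 0.3],                             # an arbitrary strain-rate tensor
              [0.3, -1.0]])

exact = np.einsum('ijkl,kl->ij', A4, D)               # contraction requiring the 4th moment
quadratic = A2 * np.einsum('ij,ij->', A2, D)          # quadratic closure A4 ~ A2 (x) A2
print(np.round(exact, 3))
print(np.round(quadratic, 3))                         # agreement improves as leads align
```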
Abstract:
The effect of the tensor component of the Skyrme effective nucleon-nucleon interaction on the single-particle structure in superheavy elements is studied. A selection of the available Skyrme forces has been chosen and their predictions for the proton and neutron shell closures investigated. The inclusion of the tensor term with realistic coupling strength parameters leads to a small increase in the spin-orbit splitting between the proton 2f7/2 and 2f5/2 partners, opening the Z=114 shell gap over a wide range of nuclei. The Z=126 shell gap, predicted by these models in the absence of the tensor term, is found to be strongly dependent on neutron number, with a Z=138 gap opening for large neutron numbers; this has consequent implications for the synthesis of neutron-rich superheavy elements. The predicted neutron shell structures remain largely unchanged by the inclusion of the tensor component.
Abstract:
Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by transforming samples into a one-dimensional (1D) vector. This 1D model loses the inherent spatial structure of the data. An alternative solution is to employ tensor decomposition for dictionary learning on the data in their original structural form, a tensor, by learning multiple dictionaries along each mode and the corresponding sparse representation with respect to the Kronecker product of these dictionaries. To learn tensor dictionaries along each mode, all existing methods update each dictionary iteratively in an alternating manner. Because atoms from each mode dictionary jointly contribute to the sparsity of the tensor, existing works that treat each mode dictionary independently ignore atom correlations between different mode dictionaries. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which explores atom correlations for sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to exploit frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates the elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
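As background to the method described above, the sketch below shows the Kronecker-product dictionary structure the abstract refers to: a three-way tensor represented through one dictionary per mode is equivalent to a single Kronecker-product dictionary acting on the vectorized tensor. The FP-tree pattern mining and the joint K-SVD-style atom update of the paper are not reproduced; the dictionaries and sparse coefficient tensor are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# One (possibly overcomplete) dictionary per mode of a 4 x 5 x 6 tensor
D1 = rng.standard_normal((4, 8))     # mode-1 dictionary
D2 = rng.standard_normal((5, 10))    # mode-2 dictionary
D3 = rng.standard_normal((6, 12))    # mode-3 dictionary

# A sparse coefficient tensor: only a few non-zero entries
S = np.zeros((8, 10, 12))
idx = rng.integers(0, [8, 10, 12], size=(5, 3))
S[idx[:, 0], idx[:, 1], idx[:, 2]] = rng.standard_normal(5)

# Mode-wise synthesis: X = S x1 D1 x2 D2 x3 D3
X = np.einsum('abc,ia,jb,kc->ijk', S, D1, D2, D3)

# Equivalent vectorized form with a single Kronecker-product dictionary
X_vec = np.kron(np.kron(D1, D2), D3) @ S.reshape(-1)
print(np.allclose(X.reshape(-1), X_vec))      # True: the two views coincide
```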
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods independently estimate each individual regression problem based on tensor decomposition, which allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components but can also have their own independent parameters for each regression task. Therefore, it is beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of the parameters across all the regression tasks and the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
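The sketch below illustrates the basic building block of a Tucker-structured tensor regression: the coefficient tensor is a small core multiplied by one factor matrix per mode, and the response is its inner product with the tensor covariate, which is what keeps the parameter count manageable. The shared/independent factor split and the sparsity-preserving regulariser of the proposed model are not implemented, and all shapes and ranks are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def tucker_coefficient(core, factors):
    """Rebuild the full coefficient tensor W = G x1 U1 x2 U2 x3 U3."""
    return np.einsum('abc,ia,jb,kc->ijk', core, *factors)

# Low-rank coefficient tensor for 10 x 12 x 8 covariates (Tucker ranks 3, 3, 2)
core = rng.standard_normal((3, 3, 2))
factors = [rng.standard_normal((10, 3)),
           rng.standard_normal((12, 3)),
           rng.standard_normal((8, 2))]
W = tucker_coefficient(core, factors)

# Linear tensor regression: y_n = <X_n, W> + noise
X = rng.standard_normal((100, 10, 12, 8))                 # 100 tensor covariates
y = np.einsum('nijk,ijk->n', X, W) + 0.01 * rng.standard_normal(100)

# Full coefficient tensor vs. Tucker parameter count
print(y.shape, W.size, core.size + sum(U.size for U in factors))
```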
Abstract:
Subspace clustering groups a set of samples drawn from a union of several linear subspaces into clusters, so that the samples in the same cluster come from the same linear subspace. In the majority of the existing work on subspace clustering, clusters are built based on feature information, while sample correlations in their original spatial structure are simply ignored. Moreover, the original high-dimensional feature vectors contain noisy and redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) that simultaneously considers feature information and spatial structure. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC captures the global structure and inherent feature information of the data well, and provides a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
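A hedged sketch of the final clustering step described above: two affinity matrices, one standing in for spatial (low-rank representation) similarity and one for feature (sparse code) similarity, are combined and passed to spectral clustering. The TLRR and sparse coding solvers themselves are not reproduced; the representations here are random placeholders, the elementwise product used to combine them is an assumption, and scikit-learn's SpectralClustering is used purely for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(4)
n = 60   # number of samples

# Stand-ins for the two representations the method would learn:
# Z_spatial from tensor low-rank representation, Z_feature from sparse coding.
Z_spatial = rng.standard_normal((n, n))
Z_feature = rng.standard_normal((30, n))          # sparse codes, one column per sample

# Symmetric, non-negative affinities from each representation
W_spatial = np.abs(Z_spatial) + np.abs(Z_spatial).T
W_feature = np.abs(Z_feature.T @ Z_feature)

# Joint affinity combining spatial and feature similarity (simple elementwise product)
W = W_spatial * W_feature

labels = SpectralClustering(n_clusters=3, affinity='precomputed',
                            random_state=0).fit_predict(W)
print(np.bincount(labels))                        # cluster sizes
```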
Abstract:
Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiarray or tensor datasets. In dealing with such datasets, standard practice is often to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm that does not adopt any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
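A much-simplified sketch of the alternating idea described above: the non-cluster modes are compressed with orthogonal factors obtained in closed form from an SVD of the corresponding unfolding (an HOSVD-style update assumed here, not taken from the paper), while the cluster mode is updated by a plain k-means-like reassignment instead of the trust-region optimisation over the multinomial manifold that the paper develops. Data, shapes and ranks are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: N samples, each an I1 x I2 matrix, stacked along the last mode
I1, I2, N, r1, r2, k = 20, 15, 90, 4, 3, 3
X = rng.standard_normal((I1, I2, N)) + np.repeat(np.arange(k), N // k)  # crude cluster offsets

def mode_factor(X, mode, rank):
    """Leading left singular vectors of the mode-`mode` unfolding (HOSVD-style update)."""
    Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    U, _, _ = np.linalg.svd(Xm, full_matrices=False)
    return U[:, :rank]

labels = rng.integers(0, k, N)
for _ in range(5):
    # Closed-form updates for the non-cluster modes
    U1 = mode_factor(X, 0, r1)
    U2 = mode_factor(X, 1, r2)
    # Compressed representation of each sample, then a k-means-like reassignment
    Z = np.einsum('ijn,ia,jb->nab', X, U1, U2).reshape(N, -1)
    centers = np.stack([Z[labels == c].mean(axis=0) if (labels == c).any()
                        else Z[rng.integers(N)] for c in range(k)])
    labels = np.argmin(((Z[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)

print(np.bincount(labels, minlength=k))   # final cluster sizes
```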