36 results for non-negative matrix factorization
in CentAUR: Central Archive University of Reading - UK
Abstract:
This paper is concerned with tensor clustering assisted by dimensionality reduction approaches. A class of formulations for tensor clustering is introduced based on tensor Tucker decomposition models. In this formulation, an extra tensor mode is formed from a collection of tensors of the same dimensions and then used to assist a Tucker decomposition in order to achieve dimensionality reduction of the data. We design two types of clustering models for the tensors, a PCA Tensor Clustering model and a Non-negative Tensor Clustering model, by using different regularizations. The tensor clustering problem can thus be solved by an optimization method based on an alternating coordinate scheme. Interestingly, our experiments show that the proposed models yield performance comparable to, or even better than, recent clustering algorithms based on matrix factorization.
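An illustrative sketch of the idea (not the authors' exact formulation): stack same-sized tensors along an extra sample mode, run a Tucker or non-negative Tucker decomposition, and cluster the sample-mode factor matrix. The calls assume tensorly and scikit-learn; the data, ranks and number of clusters are invented.

```python
# Hedged sketch of Tucker-assisted tensor clustering: stack the input tensors
# along a new sample mode, decompose, then cluster the sample-mode factor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker, non_negative_tucker
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
samples = [rng.random((10, 12, 8)) for _ in range(50)]  # 50 tensors, same dimensions
X = tl.tensor(np.stack(samples, axis=0))                # extra sample mode first

rank = (5, 4, 4, 3)                                     # illustrative Tucker ranks
core, factors = tucker(X, rank=rank)                    # PCA-like variant
# core, factors = non_negative_tucker(X, rank=rank)     # non-negative variant

sample_factor = tl.to_numpy(factors[0])                 # one row per input tensor
labels = KMeans(n_clusters=3, n_init=10).fit_predict(sample_factor)
print(labels)
```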
Abstract:
In this paper we consider hybrid (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and Solving Systems of Linear Algebraic Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since it is known that they are very efficient in finding a quick rough approximation of an element or a row of the inverse matrix, or of a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI with the required precision, and how the SLAE can then be solved using the MI. We employ a splitting A = D − C of a given non-singular matrix A, where D is a diagonally dominant matrix and C is a diagonal matrix. In our algorithms for solving SLAE and MI, different choices of D can be considered in order to control the norm of the matrix T = D⁻¹C of the resulting SLAE and to minimize the number of Markov chains required to reach a given precision. Further, we run the algorithms on a mini-Grid and investigate their efficiency depending on the granularity. Corresponding experimental results are presented.
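A minimal sketch of the two-stage idea under the stated splitting A = D − C with T = D⁻¹C: a truncated Neumann series stands in for the Markov-chain estimator of the rough inverse (the stochastic part is not reproduced), followed by deterministic Newton–Schulz refinement. The choice of D, the number of series terms and the iteration count are all illustrative.

```python
# Hedged sketch: rough inverse from the splitting A = D - C (T = D^{-1} C),
# then deterministic Newton-Schulz refinement of the approximate inverse.
import numpy as np

def rough_inverse(A, terms=5):
    D = np.diag(np.diag(A))            # one possible choice of D
    C = D - A                          # so that A = D - C
    T = np.linalg.solve(D, C)          # T = D^{-1} C, its norm should be < 1
    S = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for _ in range(terms):             # (I - T)^{-1} ~ I + T + T^2 + ...
        P = P @ T
        S = S + P
    return S @ np.linalg.inv(D)        # A^{-1} = (I - T)^{-1} D^{-1}

def refine(A, X, iters=10):
    for _ in range(iters):             # Newton-Schulz: X <- X (2I - A X)
        X = X @ (2 * np.eye(A.shape[0]) - A @ X)
    return X

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
X = refine(A, rough_inverse(A))
b = np.array([1.0, 2.0, 3.0])
print(X @ b, np.linalg.solve(A, b))    # x = A^{-1} b vs. a direct solve
```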
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
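For concreteness, a minimal sketch of the transreal treatment of division by zero that the discussion relies on (positive/0 = ∞, negative/0 = −∞, 0/0 = nullity). Representing nullity by a NaN is purely illustrative: the transreal Φ is a proper number, not an error value.

```python
# Minimal sketch of total (everywhere-defined) transreal division.
import math

INF = math.inf
NULLITY = math.nan          # stand-in for the transreal nullity, Phi

def transreal_div(x, y):
    """Division that is defined for every pair of transreal inputs."""
    if y == 0:
        if x > 0:
            return INF      # positive / 0 = infinity
        if x < 0:
            return -INF     # negative / 0 = -infinity
        return NULLITY      # 0 / 0 = nullity
    return x / y            # otherwise ordinary real division

print(transreal_div(1, 0), transreal_div(-1, 0), transreal_div(0, 0))
```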
Abstract:
Use of orthogonal space-time block codes (STBCs) with multiple transmitters and receivers can improve signal quality. However, in optical intensity-modulated signals the output of the transmitter is non-negative, and hence standard orthogonal STBC schemes need to be modified. A generalised framework for applying orthogonal STBCs for free-space IM/DD optical links is presented.
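One common way to adapt an orthogonal STBC to non-negative intensity signals (sketched below; not necessarily the exact framework of the paper) is to add a known DC bias to an Alamouti-style block of real symbols so that every transmitted intensity stays non-negative.

```python
# Illustrative sketch: a real Alamouti-style block made non-negative for IM/DD
# transmission by adding a DC bias that the receiver knows and removes.
import numpy as np

def alamouti_block(s1, s2):
    # 2 time slots x 2 transmitters; real-valued symbols, so conjugation is trivial
    return np.array([[ s1,  s2],
                     [-s2,  s1]])

def to_nonnegative(block):
    # shift every entry so the transmitted intensity is >= 0
    bias = -block.min() if block.min() < 0 else 0.0
    return block + bias, bias

symbols = np.array([0.2, 0.9])          # normalized intensity symbols
tx, bias = to_nonnegative(alamouti_block(*symbols))
print(tx)                               # all entries non-negative
print(tx - bias)                        # receiver removes the known DC bias
```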
Abstract:
Theory of mind ability has been associated with performance in interpersonal interactions and has been found to influence aspects such as emotion recognition, social competence, and social anxiety. Being able to attribute mental states to others requires attention to subtle communication cues such as facial emotional expressions. Decoding and interpreting emotions expressed by the face, especially those with negative valence, are essential skills for successful social interaction. The current study explored the association between theory of mind skills and attentional bias to facial emotional expressions. Consistent with the study hypothesis, individuals with poor theory of mind skills showed preferential attention to negative faces over both non-negative faces and neutral objects. Tentative explanations for the findings are offered, emphasizing the potential adaptive role of vigilance for threat as a way of allocating a limited capacity to interpret others’ mental states to obtain as much information as possible about potential danger in the social environment.
Abstract:
Trace element measurements in PM10–2.5, PM2.5–1.0 and PM1.0–0.3 aerosol were performed with 2 h time resolution at kerbside, urban background and rural sites during the ClearfLo winter 2012 campaign in London. The environment-dependent variability of emissions was characterized using the Multilinear Engine implementation of the positive matrix factorization model, conducted on data sets comprising all three sites but segregated by size. Combining the sites enabled separation of sources with high temporal covariance but significant spatial variability. Separation of sizes improved source resolution by preventing sources occurring in only a single size fraction from having too small a contribution for the model to resolve. Anchor profiles were retrieved internally by analysing data subsets, and these profiles were used in the analyses of the complete data sets of all sites for enhanced source apportionment. A total of nine different factors were resolved (notable elements in brackets): in PM10–2.5, brake wear (Cu, Zr, Sb, Ba), other traffic-related (Fe), resuspended dust (Si, Ca), sea/road salt (Cl), aged sea salt (Na, Mg) and industrial (Cr, Ni); in PM2.5–1.0, brake wear, other traffic-related, resuspended dust, sea/road salt, aged sea salt and S-rich (S); and in PM1.0–0.3, traffic-related (Fe, Cu, Zr, Sb, Ba), resuspended dust, sea/road salt, aged sea salt, reacted Cl (Cl), S-rich and solid fuel (K, Pb). Human activities enhance the kerb-to-rural concentration gradients of coarse aged sea salt, typically considered to have a natural source, by factors of 1.7–2.2. These site-dependent concentration differences reflect the effect of local resuspension processes in London. The anthropogenically influenced factors traffic (brake wear and other traffic-related processes), dust and sea/road salt provide further kerb-to-rural concentration enhancements, through direct source emissions, by factors of 3.5–12.7. The traffic and dust factors are mainly emitted in PM10–2.5 and show strong diurnal variations with concentrations up to 4 times higher during rush hour than during night-time. Regionally influenced S-rich and solid fuel factors, occurring primarily in PM1.0–0.3, have negligible resuspension influences, and concentrations are similar throughout the day and across the regions.
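The Multilinear Engine implementation used in the study has no simple Python equivalent; as a stand-in, the sketch below illustrates the underlying non-negative factorization of an elements-by-time matrix into factor time series and factor profiles using scikit-learn's NMF. ME-2 constraints and anchor profiles are not reproduced, and the data are synthetic.

```python
# Simple stand-in for positive matrix factorization of an elements-by-time matrix:
# resolve non-negative factor time series (G) and element profiles (F) with X ~ G F.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_samples, n_elements, n_factors = 200, 12, 4
G_true = rng.gamma(2.0, 1.0, size=(n_samples, n_factors))    # factor time series
F_true = rng.gamma(2.0, 1.0, size=(n_factors, n_elements))   # factor profiles
X = G_true @ F_true + 0.05 * rng.random((n_samples, n_elements))

model = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
G = model.fit_transform(X)        # contributions of each factor over time
F = model.components_             # element profile of each factor
print(G.shape, F.shape)
```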
Abstract:
A semi-distributed model, INCA, has been developed to determine the fate and distribution of nutrients in terrestrial and aquatic systems. The model simulates nitrogen and phosphorus processes in soils, groundwaters and river systems and can be applied in a semi-distributed manner at a range of scales. In this study, the model has been applied at field to sub-catchment to whole catchment scale to evaluate the behaviour of biosolid-derived losses of P in agricultural systems. It is shown that process-based models such as INCA, applied at a wide range of scales, reproduce field and catchment behaviour satisfactorily. The INCA model can also be used to generate generic information for risk assessment. By adjusting three key variables: biosolid application rates, the hydrological connectivity of the catchment and the initial P-status of the soils within the model, a matrix of P loss rates can be generated to evaluate the behaviour of the model and, hence, of the catchment system. The results, which indicate the sensitivity of the catchment to flow paths, to application rates and to initial soil conditions, have been incorporated into a Nutrient Export Risk Matrix (NERM).
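A hypothetical sketch of assembling such a risk matrix: sweep the three key variables and tabulate a P loss rate for each combination. The `simulate_p_loss` function is only a placeholder for a run of the INCA model, and all values are illustrative.

```python
# Hypothetical sketch: build a matrix of P loss rates over the three key variables.
import numpy as np

application_rates = [0, 5, 10, 20]      # biosolid application rates (illustrative units)
connectivity = [0.2, 0.5, 0.8]          # hydrological connectivity of the catchment
initial_p_status = [20, 60, 100]        # initial soil P status (illustrative units)

def simulate_p_loss(rate, conn, p0):
    # placeholder response surface, NOT the INCA process equations
    return conn * (0.01 * p0 + 0.05 * rate)

risk = np.array([[[simulate_p_loss(r, c, p) for p in initial_p_status]
                  for c in connectivity]
                 for r in application_rates])
print(risk.shape)                        # (application rates, connectivity, initial P)
```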
Abstract:
We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
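The specific factorization proposed in the paper is not reproduced here, but the flavour of the eigenvalue result can be checked numerically: a constraint preconditioner that keeps the constraint blocks of a saddle-point matrix K = [[A, Bᵀ], [B, 0]] forces at least 2m unit eigenvalues of the preconditioned matrix. The toy matrices below are invented.

```python
# Small numerical illustration: a constraint preconditioner P that keeps the
# constraint blocks of K makes many eigenvalues of P^{-1} K equal to 1.
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
A = rng.random((n, n)); A = A @ A.T + n * np.eye(n)    # SPD leading block
B = rng.random((m, n))                                  # full-rank constraint block
G = np.diag(np.diag(A))                                 # simple approximation of A

Z = np.zeros((m, m))
K = np.block([[A, B.T], [B, Z]])
P = np.block([[G, B.T], [B, Z]])

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
print(np.sum(np.isclose(eigs, 1.0)))                    # expect >= 2m unit eigenvalues
```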
Abstract:
We consider conjugate-gradient like methods for solving block symmetric indefinite linear systems that arise from saddle-point problems or, in particular, regularizations thereof. Such methods require preconditioners that preserve certain sub-blocks from the original systems but allow considerable flexibility for the remaining blocks. We construct a number of families of implicit factorizations that are capable of reproducing the required sub-blocks and (some) of the remainder. These generalize known implicit factorizations for the unregularized case. Improved eigenvalue clustering is possible if additionally some of the noncrucial blocks are reproduced. Numerical experiments confirm that these implicit-factorization preconditioners can be very effective in practice.
Abstract:
A study was designed to examine the relationships between protein, condensed tannin and cell wall carbohydrate content and composition and the nutritional quality of seven tropical legumes (Desmodium ovalifolium, Flemingia macrophylla, Leucaena leucocephala, L pallida, L macrophylla, Calliandra calothyrsus and Clitoria fairchildiana). Among the legume species studied, D ovalifolium showed the lowest concentration of nitrogen, while L leucocephala showed the highest. Fibre (NDF) content was lowest in C calothyrsus, L leucocephala and L pallida and highest in L macrophylla, which had no measurable condensed tannins. The highest tannin concentration was found in C calothyrsus. Total non-structural polysaccharides (NSP) varied among legume species (lowest in C calothyrsus and highest in D ovalifolium), and glucose and uronic acids were the most abundant carbohydrate constituents in all legumes. Total NSP losses were lowest in F macrophylla and highest in L leucocephala and L pallida. Gas accumulation and acetate and propionate levels were 50% less with F macrophylla and D ovalifolium as compared with L leucocephala. The highest levels of branched-chain fatty acids were observed with non-tanniniferous legumes, and negative concentrations were observed with some of the legumes with high tannin content (D ovalifolium and F macrophylla). Linear regression analysis showed that the presence of condensed tannins was more related to a reduction of the initial rate of gas production (0-48 h) than to the final amount of gas produced or the extent (144 h) of dry matter degradation, which could be due to differences in tannin chemistry. Consequently, more attention should be given in the future to elucidating the impact of tannin structure on the nutritional quality of tropical forage legumes. (C) 2003 Society of Chemical Industry.
Abstract:
In this work, IR thermography is used as a non-destructive tool for impact damage characterisation on thermoplastic E-glass/polypropylene composites for automotive applications. The aim of this experimentation was to compare impact resistance and to characterise damage patterns of different laminates, in order to provide indications for their use in components. Two E-glass/polypropylene composites in particular were characterised: commingled Twintex® (with three different weave structures: directional, balanced and 3-D) and randomly reinforced GMT. Directional and balanced Twintex were also coupled in a number of hybrid configurations with GMT to evaluate the possible use of GMT/Twintex hybrids in high-energy absorption components. The laminates were impacted using a falling weight tower, with impact energies ranging from 15 J to penetration. Using IR thermography during cooling down following a long pulse (3 s), impact-damaged areas were characterised and the influence of weave structure on damage patterns was studied. IR thermography offered good accuracy for laminates with thickness not exceeding 3.5 mm: this appears to be a limit for the direct use of this method on components, where more refined signal treatment would probably be needed for impact damage characterisation.
Abstract:
An unknown Gram-positive, catalase-negative, facultatively anaerobic, non-spore-forming, rod-shaped bacterium originating from the semen of a pig was characterized using phenotypic, molecular chemical and molecular phylogenetic methods. Chemical studies revealed the presence of a directly cross-linked cell wall murein based on L-lysine and a DNA G + C content of 39 mol%. Comparative 16S rRNA gene sequencing showed that the unidentified rod-shaped organism formed a hitherto unknown subline related, albeit loosely, to Alkalibacterium olivapovliticus, Alloiococcus otitis, Dolosigranulum pigrum and related organisms, in the low-G + C-content Gram-positive bacteria. However, sequence divergence values of >11% from these recognized taxa clearly indicated that the novel bacterium represents a separate genus. Based on phenotypic and phylogenetic considerations, it is proposed that the unknown bacterium from pig semen be classified as a new genus and species, Allofustis seminis gen. nov., sp. nov. The type strain is 01-570-1ᵀ (= CCUG 45438ᵀ = CIP 107425ᵀ).
Abstract:
In this paper we introduce a new algorithm, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and solving systems of linear algebraic equations. This algorithm consists of two parts: approximate inversion by Monte Carlo and iterative refinement using a deterministic method. Here we present a parallel hybrid Monte Carlo algorithm, which uses Monte Carlo to generate an approximate inverse and then improves the accuracy of that inverse by iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations Bx = b, the inverse matrix is used to compute the solution vector x = B⁻¹b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
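A sketch of the deterministic refinement stage for Bx = b: any rough approximate inverse X (here a crude diagonal stand-in; the Monte Carlo estimator itself is not reproduced) provides both the initial guess and the correction operator for classical iterative refinement, with sparse storage via SciPy.

```python
# Sketch of the refinement stage: an approximate inverse X of B is used to
# refine the solution of Bx = b iteratively.
import numpy as np
import scipy.sparse as sp

def refine_solution(B, X, b, iters=20):
    x = X @ b                          # initial guess from the rough inverse
    for _ in range(iters):
        r = b - B @ x                  # residual
        x = x + X @ r                  # correct using the approximate inverse
    return x

n = 100
B = sp.diags([1.0, 4.0, 1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
X = sp.diags(1.0 / B.diagonal()).tocsr()   # crude stand-in for a Monte Carlo inverse
b = np.ones(n)

x = refine_solution(B, X, b)
print(np.linalg.norm(B @ x - b))       # residual norm should be small
```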
Abstract:
We present a novel approach to calculating Low-Energy Electron Diffraction (LEED) intensities for ordered molecular adsorbates. First, the intra-molecular multiple scattering is computed to obtain a non-diagonal molecular T-matrix. This is then used to represent the entire molecule as a single scattering object in a conventional LEED calculation, where the Layer Doubling technique is applied to assemble the different layers, including the molecular ones. A detailed comparison with conventional layer-type LEED calculations is provided to ascertain the accuracy of this scheme of calculation. Advantages of this scheme for problems involving ordered arrays of molecules adsorbed on surfaces are discussed.