19 results for Dimensionality
in University of Queensland eSpace - Australia
Abstract:
This study examined the utility of the Attachment Style Questionnaire (ASQ) in an Italian sample of 487 consecutively admitted psychiatric participants and an independent sample of 605 nonclinical participants. Minimum average partial analysis of data from the psychiatric sample supported the hypothesized five-factor structure of the items; furthermore, multiple-group component analysis showed that this five-factor structure was not an artifact of differences in item distributions. The five-factor structure of the ASQ was largely replicated in the nonclinical sample. Furthermore, in both psychiatric and nonclinical samples, a two-factor higher order structure of the ASQ scales was observed. The higher order factors of Avoidance and Anxious Attachment showed meaningful relations with scales assessing parental bonding, but were not redundant with these scales. Multivariate normal mixture analysis supported the hypothesis that adult attachment patterns, as measured by the ASQ, are best considered as dimensional constructs.
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One common approach to overcoming the degradation in performance with increasing dimensions is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimension. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
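The per-cluster reduction step the abstract describes can be sketched as follows. This is only an illustration under assumed details (a single elliptical cluster, a variance-fraction cutoff); the actual MMDR clustering, multi-level structure, and B+-tree indexing are not reproduced here.

```python
import numpy as np

def reduce_cluster(points, var_fraction=0.95):
    """Project one elliptical cluster onto its leading principal axes.

    Illustrative only: MMDR proper discovers the clusters adaptively and
    indexes all axis systems in a single B+-tree; this shows just the
    per-cluster Mahalanobis-style reduction step.
    """
    center = points.mean(axis=0)
    cov = np.cov(points - center, rowvar=False)
    # Eigendecomposition of the covariance gives the ellipse's axes.
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # largest variance first
    vals, vecs = vals[order], vecs[:, order]
    # Keep just enough axes to explain the requested variance fraction.
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_fraction) + 1)
    return (points - center) @ vecs[:, :k], vecs[:, :k], center

rng = np.random.default_rng(0)
# A 10-D cloud whose variance lives almost entirely in 2 directions.
latent = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
cloud = latent + 0.01 * rng.normal(size=(500, 10))
low, axes, center = reduce_cluster(cloud)
print(low.shape)  # number of retained axes is data-dependent, e.g. (500, 2)
```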
Abstract:
Determining the dimensionality of G provides an important perspective on the genetic basis of a multivariate suite of traits. Since the introduction of Fisher's geometric model, the number of genetically independent traits underlying a set of functionally related phenotypic traits has been recognized as an important factor influencing the response to selection. Here, we show how the effective dimensionality of G can be established, using a method for the determination of the dimensionality of the effect space from a multivariate general linear model introduced by Amemiya (1985). We compare this approach with two other available methods, factor-analytic modeling and bootstrapping, using a half-sib experiment that estimated G for eight cuticular hydrocarbons of Drosophila serrata. In our example, eight pheromone traits were shown to be adequately represented by only two underlying genetic dimensions by Amemiya's approach and factor-analytic modeling of the covariance structure at the sire level. In contrast, bootstrapping identified four dimensions with significant genetic variance. A simulation study indicated that while the performance of Amemiya's method was more sensitive to power constraints, it performed as well or better than factor-analytic modeling in correctly identifying the original genetic dimensions at moderate to high levels of heritability. The bootstrap approach consistently overestimated the number of dimensions in all cases and performed less well than Amemiya's method at subspace recovery.
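The notion of effective dimensionality can be illustrated with a naive eigenvalue count on a covariance matrix. This is only a rank-style diagnostic, not Amemiya's (1985) procedure or factor-analytic modeling, both of which provide formal tests that account for sampling error.

```python
import numpy as np

def effective_dims(G, tol=1e-8):
    """Count the eigenvalues of a covariance matrix G that are
    meaningfully greater than zero (a crude stand-in for the formal
    dimensionality tests discussed in the abstract)."""
    vals = np.linalg.eigvalsh(G)
    return int(np.sum(vals > tol * vals.max()))

# A hypothetical 8-trait G built from 2 underlying genetic factors,
# mimicking the two dimensions found for the cuticular hydrocarbons.
rng = np.random.default_rng(1)
loadings = rng.normal(size=(8, 2))
G = loadings @ loadings.T
print(effective_dims(G))  # → 2
```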
Abstract:
Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data, where the number of observations n is not very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
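The source of the robustness the abstract describes can be seen in a minimal univariate sketch: under a t model, the EM algorithm down-weights outlying points by w = (dof + 1)/(dof + z²). This is an assumed simplification of the paper's multivariate t factor analyzers, shown here only for a location-scale estimate.

```python
import numpy as np

def t_location(x, dof=3.0, iters=50):
    """EM estimate of a location parameter under a univariate t model.
    Outliers receive small weights w, so they barely move the estimate;
    an ordinary mean has no such protection."""
    mu, sigma2 = np.median(x), x.var()
    for _ in range(iters):
        z2 = (x - mu) ** 2 / sigma2
        w = (dof + 1.0) / (dof + z2)      # down-weight large residuals
        mu = np.sum(w * x) / np.sum(w)
        sigma2 = np.sum(w * (x - mu) ** 2) / len(x)
    return mu

rng = np.random.default_rng(7)
# 500 inliers around 0 plus 25 gross outliers near 40.
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(40.0, 1.0, 25)])
print(abs(t_location(x)) < abs(x.mean()))  # True: the t estimate resists the outliers
```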
Abstract:
We study the distribution of energy level spacings in two models describing coupled single-mode Bose-Einstein condensates. Both models have a fixed number of degrees of freedom, which is small compared to the number of interaction parameters, and is independent of the dimensionality of the Hilbert space. We find that the distribution follows a universal Poisson form independent of the choice of coupling parameters, which is indicative of the integrability of both models. These results complement those for integrable lattice models where the number of degrees of freedom increases with increasing dimensionality of the Hilbert space. Finally, we also show that for one model the inclusion of an additional interaction which breaks the integrability leads to a non-Poisson distribution.
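The Poisson-versus-non-Poisson distinction drawn here is commonly diagnosed with level-spacing statistics. The sketch below is generic, not the paper's condensate models: it computes the mean consecutive-spacing ratio for an uncorrelated (Poisson) spectrum, which should approach 2 ln 2 − 1 ≈ 0.386, whereas level-repelling (non-integrable) spectra give larger values.

```python
import numpy as np

def spacing_ratio_stats(levels):
    """Mean consecutive-spacing ratio <r>, a standard spectral
    diagnostic: ~0.386 for Poisson (integrable-like) spectra,
    larger for spectra with level repulsion."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

rng = np.random.default_rng(2)
# Uncorrelated spacings: the hallmark of a Poisson spectrum.
poisson_levels = np.cumsum(rng.exponential(size=20000))
print(spacing_ratio_stats(poisson_levels))  # close to 2 ln 2 - 1 = 0.386
```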
Abstract:
By stochastic modeling of the process of Raman photoassociation of Bose-Einstein condensates, we show that the farther the initial quantum state is from a coherent state, the farther the one-dimensional predictions are from those of the commonly used zero-dimensional approach. We compare the dynamics of condensates, initially in different quantum states, finding that, even when the quantum prediction for an initial coherent state is relatively close to the Gross-Pitaevskii prediction, an initial Fock state gives qualitatively different predictions. We also show that this difference is not present in a single-mode type of model, but that the quantum statistics assume a more important role as the dimensionality of the model is increased. This contrasting behavior in different dimensions, well known with critical phenomena in statistical mechanics, makes itself plainly visible here in a mesoscopic system and is a strong demonstration of the need to consider physically realistic models of interacting condensates.
Abstract:
Purpose. This study examined benefit finding in MS carers, including the dimensionality of benefit finding, relations between carer and care recipient benefit finding, and the effects of carer benefit finding on carer positive and negative adjustment domains. Method. A total of 267 carers and their care recipients completed questionnaires at Time 1 and again 3 months later at Time 2 (n=155). Illness data were collected at Time 1, and number of problems, stress appraisal, benefit finding, negative (global distress, negative affect) and positive (life satisfaction, positive affect, dyadic adjustment) adjustment domains were measured at Time 2. Results. Qualitative data revealed seven benefit finding themes, two of which were adequately represented by the Benefit Finding Scale (BFS; Mohr et al., Health Psychology 1999; 18: 376). Factor analyses indicated two factors (Personal Growth, Family Relations Growth) which were psychometrically sound and showed differential relations with illness and adjustment domains. Although care recipients reported higher levels of benefit finding than carers, their benefit finding reports regarding personal growth were correlated. The carer BFS factors were positively related to carer and care recipient dyadic adjustment. Care recipient benefit finding was unrelated to carer adjustment domains. After controlling for the effects of demographics, care recipient characteristics, problems and appraisal, carer benefit finding was related to carer positive adjustment domains and unrelated to carer negative adjustment domains. Conclusion. Findings support the role of benefit finding in sustaining positive psychological states and the communal search for meaning within carer-care recipient dyads.
Abstract:
This research extends the consumer-based brand equity measurement approach to the measurement of the equity associated with retailers. This paper also addresses some of the limitations associated with current retailer equity measurement such as a lack of clarity regarding its nature and dimensionality. We conceptualise retailer equity as a four-dimensional construct comprising retailer awareness, retailer associations, perceived retailer quality, and retailer loyalty. The paper reports the result of an empirical study of a convenience sample of 601 shopping mall consumers at an Australian state capital city. Following a confirmatory factor analysis using structural equation modelling to examine the dimensionality of the retailer equity construct, the proposed model is tested for two retailer categories: department stores and speciality stores. Results confirm the hypothesised four-dimensional structure.
Abstract:
With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure and has two novel features: 1) it partitions the whole file into slices such that only a small number of slices are accessed and checked during k Nearest Neighbor (kNN) search, and 2) it handles insertions of new vectors into the OVA-File efficiently, such that the average distance between the new vectors and the approximations near that position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their corresponding centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta; by adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip consisting of multiple frames is also discussed. Extensive experimental studies using real video data sets were conducted, and the results showed that our methods yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating temporal correlation of video content, our methods achieved even better performance.
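The slice-selection idea behind OVA-LOW can be sketched briefly. This is an assumption-laden illustration: it scans raw vectors rather than VA-style bit approximations, and it partitions by a crude grid rather than the OVA-File's ordering, so it shows only the "rank slice centers, then scan the delta closest slices" step.

```python
import numpy as np

def approx_knn(query, slices, centers, k=5, delta=2):
    """Approximate kNN in the spirit of OVA-LOW: rank the slices by the
    distance from their centers to the query, then exhaustively scan
    only the `delta` closest slices."""
    order = np.argsort(np.linalg.norm(centers - query, axis=1))
    candidates = np.vstack([slices[i] for i in order[:delta]])
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

rng = np.random.default_rng(3)
data = rng.normal(size=(3000, 16))
# Partition the file into 4 slices by a crude grid on the first coordinate.
keys = np.digitize(data[:, 0], np.quantile(data[:, 0], [0.25, 0.5, 0.75]))
slices = [data[keys == i] for i in range(4)]
centers = np.array([s.mean(axis=0) for s in slices])
neighbors = approx_knn(data[0], slices, centers, k=5, delta=2)
print(neighbors.shape)  # (5, 16)
```

Larger delta scans more slices, trading query cost for result quality, exactly the knob the abstract describes.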
Abstract:
We use series expansions to study the excitation spectra of spin-1/2 antiferromagnets on anisotropic triangular lattices. For the isotropic triangular lattice model (TLM), the high-energy spectra show several anomalous features that differ strongly from linear spin-wave theory (LSWT). Even in the Néel phase, the deviations from LSWT increase sharply with frustration, leading to rotonlike minima at special wave vectors. We argue that these results can be interpreted naturally in a spinon language and provide an explanation for the previously observed anomalous finite-temperature properties of the TLM. In the coupled-chains limit, quantum renormalizations strongly enhance the one-dimensionality of the spectra, in agreement with experiments on Cs2CuCl4.
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. This paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. Using the first of these methods, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space.
This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
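The second enhancement, starting each new run maximally removed from previous trajectories, can be sketched as a maximin heuristic. The random-candidate scheme below is an assumption for illustration; the paper's exact selection procedure is not given in the abstract.

```python
import numpy as np

def next_start(bounds, previous_points, n_candidates=1000, rng=None):
    """Pick a new optimisation starting point that is maximally removed
    from all previously visited parameter-trajectory points (maximin
    over a random candidate set; a sketch, not the paper's exact
    scheme)."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds).T
    cands = lo + (hi - lo) * rng.random((n_candidates, len(lo)))
    # Distance from each candidate to its nearest previously visited point.
    d = np.linalg.norm(cands[:, None, :] - previous_points[None, :, :], axis=2)
    return cands[np.argmax(d.min(axis=1))]

rng = np.random.default_rng(4)
bounds = [(0.0, 1.0), (0.0, 1.0)]
# Pretend earlier GML runs explored only the lower-left corner.
visited = rng.random((200, 2)) * 0.3
start = next_start(bounds, visited, rng=rng)
print(start.round(2))  # lands far from the explored corner
```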
Abstract:
Document classification is a supervised machine learning process in which predefined category labels are assigned to documents based on a hypothesis derived from a training set of labelled documents. Documents cannot be directly interpreted by a computer system unless they have been modelled as a collection of computable features. Rogati and Yang [M. Rogati and Y. Yang, Resource selection for domain-specific cross-lingual IR, in SIGIR 2004: Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval, ACM Press, Sheffield, United Kingdom, pp. 154-161] pointed out that the effectiveness of a document classification system may vary across domains. This implies that the quality of the document model contributes to the effectiveness of document classification. Conventionally, model evaluation is accomplished by comparing the effectiveness scores of classifiers on model candidates. However, this kind of evaluation method may encounter either under-fitting or over-fitting problems, because the effectiveness scores are restricted by the learning capacities of the classifiers. We propose a model fitness evaluation method to determine whether a model is sufficient to distinguish positive and negative instances while still competent to provide satisfactory effectiveness with a small feature subset. Our experiments demonstrate how the fitness of models is assessed. The results of our work contribute to research on feature selection, dimensionality reduction and document classification.
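A classifier-independent fitness check of the kind the abstract motivates can be sketched as follows. The separability score here is a hypothetical stand-in (the abstract does not specify the paper's measure): it asks whether a small feature subset already separates positive from negative instances, without routing the judgment through any particular classifier's learning capacity.

```python
import numpy as np

def subset_separability(X, y, feature_idx):
    """Crude model-fitness score for a feature subset: mean standardized
    gap between positive- and negative-class feature means. Independent
    of any classifier, so it is not capped by a learner's capacity."""
    Xs = X[:, feature_idx]
    mu_pos, mu_neg = Xs[y == 1].mean(axis=0), Xs[y == 0].mean(axis=0)
    spread = Xs.std(axis=0) + 1e-9
    return float(np.mean(np.abs(mu_pos - mu_neg) / spread))

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=400)
X = rng.normal(size=(400, 50))
X[:, :5] += y[:, None] * 2.0   # only the first 5 features are informative
print(subset_separability(X, y, np.arange(5)) >
      subset_separability(X, y, np.arange(5, 50)))  # True
```

A small subset scoring high on such a measure suggests the document model is fit in the abstract's sense: discriminative with few features.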
Abstract:
In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, where similarity values from different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values from the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To remove the dimensionality curse inherent in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer Theory, Compound Probability, and Linear Combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly improves on multiple indices over multiple features. Our LBS method outperforms a sequential scan of the high-dimensional space by an order of magnitude, and Certainty Factor and Dempster-Shafer Theory perform best in combining multiple similarities from the corresponding multiple features.
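The bit-stream filtering idea can be sketched with a minimal signature scheme. The one-bit-per-dimension encoding relative to the partition center is an assumption for illustration; the abstract does not spell out LBS's exact encoding.

```python
import numpy as np

def bit_signature(vec, center):
    """Encode a feature vector as a compact bit stream: one bit per
    dimension, set when that coordinate exceeds the corresponding
    coordinate of the local partition's center (an illustrative
    stand-in for LBS)."""
    bits = 0
    for i, (v, c) in enumerate(zip(vec, center)):
        if v > c:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Differing bits between two signatures: a single XOR plus a
    popcount, so comparison costs only bit operations."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(6)
center = np.zeros(32)
query = rng.normal(size=32)
near = query + 0.05 * rng.normal(size=32)   # a relevant image
far = rng.normal(size=32)                   # an irrelevant image
q, n, f = (bit_signature(v, center) for v in (query, near, far))
# Images whose signatures share few bits with the query are filtered out
# before any expensive full-feature comparison.
print(hamming(q, n) < hamming(q, f))  # True
```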