101 results for INCIDENCE MATRICES APPLICATIONS
in University of Queensland eSpace - Australia
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
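A quantitative matrix of the kind surveyed above scores a candidate peptide by summing position-specific residue weights. The following Python sketch uses invented weights for a 3-mer purely for illustration; real MHC-binding matrices cover 8-11-mer peptides with experimentally derived coefficients:

```python
# Sketch of quantitative-matrix (position-specific scoring matrix) scoring.
# The weights below are illustrative, not real MHC binding coefficients.
pssm = [
    {"A": 0.5, "L": 1.2, "V": 0.8},   # position 1
    {"A": -0.3, "L": 0.1, "V": 0.9},  # position 2
    {"A": 0.2, "L": 1.5, "V": -0.4},  # position 3
]

def score_peptide(peptide, matrix, default=-1.0):
    """Sum per-position residue scores; unseen residues get a penalty."""
    return sum(pos.get(aa, default) for pos, aa in zip(matrix, peptide))

print(score_peptide("LVL", pssm))  # 1.2 + 0.9 + 1.5 ≈ 3.6
```

Peptides scoring above a calibrated threshold would be flagged as predicted binders; setting that threshold well is part of the testing and validation the abstract calls for.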
Abstract:
We detail the automatic construction of R matrices corresponding to (the tensor products of) the (0_m | α_n) families of highest-weight representations of the quantum superalgebras U_q[gl(m|n)]. These representations are irreducible, contain a free complex parameter α, and are 2^{mn}-dimensional. Our R matrices are actually (sparse) rank 4 tensors, containing a total of 2^{4mn} components, each of which is in general an algebraic expression in the two complex variables q and α. Although the constructions are straightforward, we describe them in full here, to fill a perceived gap in the literature. As the algorithms are generally impracticable for manual calculation, we have implemented the entire process in MATHEMATICA, illustrating our results with U_q[gl(3|1)]. (C) 2002 Published by Elsevier Science B.V.
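The paper implements its construction in MATHEMATICA. As a much smaller illustration of the kind of object involved (our own Python sketch, using the standard constant R-matrix of U_q(sl2) rather than the paper's U_q[gl(m|n)] families), one can verify the braid form of the Yang-Baxter equation numerically:

```python
import numpy as np

q = 2.0  # a generic value of the deformation parameter
# Standard constant R-matrix for the 2-dimensional representation of U_q(sl2),
# a tiny concrete analogue of the sparse rank-4 tensors in the text.
R = np.array([
    [q, 0.0,     0.0, 0.0],
    [0.0, 1.0, q - 1/q, 0.0],
    [0.0, 0.0,   1.0, 0.0],
    [0.0, 0.0,   0.0, q],
])

P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])  # flip map
Rc = P @ R          # "checked" R-matrix acting on adjacent tensor factors
I2 = np.eye(2)

# Braid relation, equivalent to the quantum Yang-Baxter equation for R.
lhs = np.kron(Rc, I2) @ np.kron(I2, Rc) @ np.kron(Rc, I2)
rhs = np.kron(I2, Rc) @ np.kron(Rc, I2) @ np.kron(I2, Rc)
print(np.allclose(lhs, rhs))  # True
```

Checks of this kind (here for fixed numerical q; symbolically in the paper's MATHEMATICA code) are how one validates an automatically constructed R-matrix.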
Abstract:
A systematic method for constructing trigonometric R-matrices corresponding to the (multiplicity-free) tensor product of any two affinizable representations of a quantum algebra or superalgebra has been developed by the Brisbane group and its collaborators. This method has been referred to as the Tensor Product Graph Method. Here we describe applications of this method to untwisted and twisted quantum affine superalgebras.
Abstract:
The stable similarity reduction of a nonsymmetric square matrix to tridiagonal form has been a long-standing problem in numerical linear algebra. The biorthogonal Lanczos process is in principle a candidate method for this task, but in practice it is confined to sparse matrices and is restarted periodically because roundoff errors affect its three-term recurrence scheme and degrade the biorthogonality after a few steps. This adds to its vulnerability to serious breakdowns or near-breakdowns, the handling of which involves recovery strategies such as the look-ahead technique, which needs a careful implementation to produce a block-tridiagonal form with unpredictable block sizes. Other candidate methods, geared generally towards full matrices, rely on elementary similarity transformations that are prone to numerical instabilities. Such concomitant difficulties have hampered finding a satisfactory solution to the problem for either sparse or full matrices. This study focuses primarily on full matrices. After outlining earlier tridiagonalization algorithms from within a general framework, we present a new elimination technique combining orthogonal similarity transformations that are stable. We also discuss heuristics to circumvent breakdowns. Applications of this study include eigenvalue calculation and the approximation of matrix functions.
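The biorthogonal Lanczos process discussed above can be sketched in a few lines. This naive Python version (variable names are ours) has no look-ahead, restarting, or reorthogonalization, so it is exactly the fragile variant the abstract describes; it breaks down when the bilinear form s·r vanishes:

```python
import numpy as np

def two_sided_lanczos(A, v, w, k):
    """Naive biorthogonal Lanczos: returns V, W, T with W.T @ V = I and
    T = W.T @ A @ V tridiagonal (in exact arithmetic).  No look-ahead,
    so it can break down when s @ r vanishes; a sketch, not robust code."""
    n = A.shape[0]
    V = np.zeros((n, k)); W = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k - 1); gamma = np.zeros(k - 1)
    v = v / np.linalg.norm(v)
    w = w / (w @ v)                       # enforce w.T v = 1
    for j in range(k):
        V[:, j], W[:, j] = v, w
        alpha[j] = W[:, j] @ A @ V[:, j]
        if j == k - 1:
            break
        r = A @ V[:, j] - alpha[j] * V[:, j]      # three-term recurrences
        s = A.T @ W[:, j] - alpha[j] * W[:, j]
        if j > 0:
            r -= gamma[j - 1] * V[:, j - 1]
            s -= beta[j - 1] * W[:, j - 1]
        d = s @ r                                  # breakdown if d == 0
        beta[j] = np.sqrt(abs(d))
        gamma[j] = d / beta[j]
        v = r / beta[j]
        w = s / gamma[j]
    return V, W, np.diag(alpha) + np.diag(beta, -1) + np.diag(gamma, 1)
```

In floating point the computed W.T @ V drifts away from the identity as k grows, which is the loss of biorthogonality that motivates periodic restarting.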
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero-non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
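As a toy illustration of a zero-non-zero restriction (this is our own synthetic example, not the paper's selection algorithm), one equation of a first-order system can be fitted with and without the regressor whose coefficient is restricted to zero:

```python
import numpy as np

# Synthetic bivariate system in which y2 does not Granger-cause y1.
rng = np.random.default_rng(1)
T = 200
y1 = np.zeros(T); y2 = np.zeros(T)
for t in range(1, T):
    y1[t] = 0.5 * y1[t-1] + rng.standard_normal()            # no y2 lag
    y2[t] = 0.3 * y1[t-1] + 0.4 * y2[t-1] + rng.standard_normal()

# Full-order equation for y1: both lags enter as regressors.
X_full = np.column_stack([y1[:-1], y2[:-1]])
b_full, *_ = np.linalg.lstsq(X_full, y1[1:], rcond=None)

# ZNZ-patterned equation: the zero entry is imposed by dropping y2's lag.
X_znz = y1[:-1, None]
b_znz, *_ = np.linalg.lstsq(X_znz, y1[1:], rcond=None)
print(b_full, b_znz)
```

The full-order fit estimates a superfluous near-zero coefficient on y2's lag, while the patterned fit excludes it outright; this is the over-parameterisation cost the abstract refers to.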
Abstract:
A set of techniques referred to as circular statistics has been developed for the analysis of directional and orientational data. The unit of measure for such data is angular (usually in either degrees or radians), and the statistical distributions underlying the techniques are characterised by their cyclic nature: for example, angles of 359.9 degrees are considered close to angles of 0 degrees. In this paper, we assert that such approaches can be easily adapted to analyse time-of-day and time-of-week data, and in particular daily cycles in the numbers of incidents reported to the police. We begin the paper by describing circular statistics. We then discuss how these may be modified, and demonstrate the approach with some examples for reported incidents in the Cardiff area of Wales. (c) 2005 Elsevier Ltd. All rights reserved.
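The adaptation to time-of-day data amounts to mapping the 24-hour clock onto the circle before averaging, so that incidents just before and just after midnight average to midnight rather than noon. A minimal Python sketch (our own, not the authors' code):

```python
import math

def circular_mean_hours(hours):
    """Mean time of day treating the 24-hour clock as a circle, so that
    23:30 and 00:30 average to midnight rather than to noon."""
    angles = [h * 2 * math.pi / 24 for h in hours]
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return (math.atan2(s, c) * 24 / (2 * math.pi)) % 24

print(circular_mean_hours([23.5, 0.5]))  # wraps to midnight, not to 12.0
```

The same mapping with a 168-hour period handles time-of-week data; an arithmetic mean of the raw clock values would give the misleading answer 12.0 here.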
Abstract:
Current debates about educational theory are concerned with the relationship between knowledge and power, and thereby with issues such as who possesses a truth and how they have arrived at it, what questions are important to ask, and how they should best be answered. As such, these debates revolve around questions of preferred, appropriate, and useful theoretical perspectives. This paper overviews the key theoretical perspectives currently used in physical education pedagogy research and considers how these inform the questions we ask and shape the conduct of research. It also addresses what is contested with respect to these perspectives. The paper concludes with some cautions about allegiances to, and use of, theories in line with concerns for the applicability of educational research to pressing social issues.
Abstract:
Optically transparent, mesostructured titanium dioxide thin films were fabricated using an amphiphilic poly(alkylene oxide) block copolymer template in combination with retarded hydrolysis of a titanium isopropoxide precursor. Prior to calcination, the films displayed a stable hexagonal mesophase and high refractive indices (1.5 to 1.6) relative to mesostructured silica (1.43). After calcination, the hexagonal mesophase was retained, with surface areas >300 m^2 g^-1. The dye Rhodamine 6G (commonly used as a laser dye) was incorporated into the copolymer micelle during the templating process. In this way, novel dye-doped mesostructured titanium dioxide films were synthesised. The copolymer not only directs the film structure but also provides a solubilizing environment suitable for sustaining a high monomer-to-aggregate ratio at elevated dye concentrations. The dye-doped films displayed optical threshold-like behaviour characteristic of amplified spontaneous emission. Soft lithography was successfully applied to micropattern the dye-doped films. These results pave the way for the fabrication and demonstration of novel microlaser structures and other active optical structures. This new high-refractive-index, mesostructured, dye-doped material could also find applications in areas such as optical coatings, displays, and integrated photonic devices.
Abstract:
The minimal irreducible representations of U_q[gl(m|n)], i.e. those irreducible representations that are also irreducible under U_q[osp(m|n)], are investigated and shown to be affinizable to give irreducible representations of the twisted quantum affine superalgebra U_q[gl(m|n)^{(2)}]. The U_q[osp(m|n)] invariant R-matrices corresponding to the tensor product of any two minimal representations are constructed, thus extending our twisted tensor product graph method to the supersymmetric case. These give new solutions to the spectral-dependent graded Yang-Baxter equation arising from U_q[gl(m|n)^{(2)}], which exhibit novel features not previously seen in the untwisted or non-super cases.
Abstract:
In this review we demonstrate how the algebraic Bethe ansatz is used for the calculation of the energy spectra and form factors (operator matrix elements in the basis of Hamiltonian eigenstates) in exactly solvable quantum systems. As examples we apply the theory to several models of current interest in the study of Bose-Einstein condensates, which have been successfully created using ultracold dilute atomic gases. The first model we introduce describes Josephson tunnelling between two coupled Bose-Einstein condensates. It can be used not only for the study of tunnelling between condensates of atomic gases, but also for solid-state Josephson junctions and coupled Cooper pair boxes. The theory is also applicable to models of atomic-molecular Bose-Einstein condensates, with two examples given and analysed. Additionally, these same two models are relevant to studies in quantum optics. Finally, we discuss the model of Bardeen, Cooper and Schrieffer in this framework, which is appropriate for systems of ultracold fermionic atomic gases, as well as being applicable to the description of superconducting correlations in metallic grains with nanoscale dimensions. In applying all the above models to physical situations, the need for an exact analysis of small-scale systems is established due to large quantum fluctuations which render mean-field approaches inaccurate.
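For the first model, the two-condensate Josephson Hamiltonian acts on a small Fock space, so exact numerical diagonalization can stand in here for the Bethe-ansatz treatment of the review. A Python sketch using a commonly quoted form of the Hamiltonian (parameter names are our own):

```python
import numpy as np

def two_site_bec_hamiltonian(N, k=1.0, dmu=0.0, EJ=1.0):
    """Two-coupled-condensate (Josephson) Hamiltonian
        H = k/8 (N1-N2)^2 - dmu/2 (N1-N2) - EJ/2 (a1^+ a2 + a2^+ a1)
    for N bosons, built in the Fock basis |n, N-n>, n = 0..N."""
    dim = N + 1
    H = np.zeros((dim, dim))
    for n in range(dim):
        imb = 2 * n - N                          # number imbalance N1 - N2
        H[n, n] = k / 8 * imb**2 - dmu / 2 * imb
        if n < N:                                # tunnelling term a1^+ a2 + h.c.
            H[n + 1, n] = H[n, n + 1] = -EJ / 2 * np.sqrt((n + 1) * (N - n))
    return H

E = np.linalg.eigvalsh(two_site_bec_hamiltonian(10))  # exact spectrum, N = 10
```

For mesoscopic particle numbers like this, the full (N+1)-dimensional diagonalization is trivial; the review's point is that such exact treatments, Bethe-ansatz or numerical, are needed precisely where mean-field theory fails.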
Abstract:
Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is currently not widely used in tracking applications.
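A bootstrap particle filter in one dimension illustrates the predict-weight-resample cycle underlying the trackers discussed above. The random-walk state and Gaussian observation model here are toy choices of ours, far simpler than any posture model:

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise; returns the posterior-mean state estimate per frame."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)         # initial particle cloud
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, proc_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)             # weight
        w /= w.sum()
        estimates.append(np.sum(w * particles))          # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        particles = particles[idx]
    return estimates

est = particle_filter([0.0, 1.0, 2.0, 3.0])
```

The weighting step is where the observational distribution's shape enters: replacing the Gaussian likelihood with, say, a cost-path-based likelihood changes only that one line, which is why the choice matters so much for accuracy and efficiency.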
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful, and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take textual, categorical, or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorised as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
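As a concrete instance of the example-based methods cited above (k-nearest neighbours), here is a minimal Python classifier with a tiny made-up training set:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance (the k-NN example-based method)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D training data: two clusters labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1)))  # "a"
```

Being lazy (no training phase, all work at query time), k-NN contrasts sharply with eager learners like C4.5 or SVM in the taxonomy above.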
Abstract:
This is the second in a series of articles whose ultimate goal is the evaluation of the matrix elements (MEs) of the U(2n) generators in a multishell spin-orbit basis. This extends the existing unitary group approach to spin-dependent configuration interaction (CI) and many-body perturbation theory calculations on molecules to systems where there is a natural partitioning of the electronic orbital space. As a necessary preliminary to obtaining the U(2n) generator MEs in a multishell spin-orbit basis, we must obtain a complete set of adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis. The zero-shift coefficients were obtained in the first article of the series. In this article, we evaluate the nonzero shift adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis. We then demonstrate that the one-shell versions of these coefficients may be obtained by taking the Gelfand-Tsetlin limit of the two-shell formulas. These coefficients, together with the zero-shift types, then enable us to write down formulas for the U(2n) generator matrix elements in a two-shell spin-orbit basis. Ultimately, the results of the series may be used to determine the many-electron density matrices for a partitioned system. (C) 1998 John Wiley & Sons, Inc.
Abstract:
In order to determine the role played by heroin purity in fatal heroin overdoses, time series analyses were conducted on the purity of street heroin seizures in south western Sydney and overdose fatalities in that region. A total of 322 heroin samples were analysed in fortnightly periods between February 1993 and January 1995. A total of 61 overdose deaths occurred in the region in the study period. Cross correlation plots revealed a significant correlation of 0.57 at time lag zero between mean purity of heroin samples per fortnight and number of overdose fatalities. Similarly, there was a significant correlation of 0.50 at time lag zero between the highest heroin purity per fortnight and number of overdose fatalities. The correlation between range of heroin purity and number of deaths per fortnight was 0.40. A simultaneous multiple regression on scores adjusted for first order correlation indicated both the mean level of heroin purity and the range of heroin purity were independent predictors of the number of deaths per fortnight. The results indicate that the occurrence of overdose fatalities was moderately associated with both the average heroin purity and the range of heroin purity over the study period. (C) 1999 Elsevier Science Ireland Ltd. All rights reserved.
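The lagged correlations reported above are plain Pearson correlations between shifted fortnightly series. A Python sketch with synthetic data (not the study's Sydney series):

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]; lag 0 gives the
    contemporaneous correlation of the kind reported in the study."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Invented fortnightly data purely to exercise the function.
purity = [40, 55, 60, 45, 70, 65, 50, 75]   # mean heroin purity (%)
deaths = [2, 4, 5, 3, 6, 5, 3, 7]           # overdose fatalities
print(round(cross_correlation(purity, deaths, 0), 2))
```

Scanning such correlations over a range of lags produces the cross-correlation plot the abstract describes, with a peak at lag zero indicating a contemporaneous rather than delayed association.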