16 results for Orthographic representations
at Indian Institute of Science - Bangalore - India
Abstract:
This work grew out of an attempt to understand a conjectural remark made by Professor Kyoji Saito to the author about a possible link between the Fox-calculus description of the symplectic structure on the moduli space of representations of the fundamental group of a surface into a Lie group and pairs of mutually dual sets of generators of the fundamental group. In fact, in his paper [3], Prof. Kyoji Saito gives an explicit description of the system of dual generators of the fundamental group.
Abstract:
Para-Bose commutation relations are related to the SL(2,R) Lie algebra. The irreducible representation $\mathcal{D}_\alpha$ of the para-Bose system is obtained as the direct sum $D_\beta \oplus D_{\beta+1/2}$ of representations of the SL(2,R) Lie algebra. The position and momentum eigenstates are then obtained in this representation $\mathcal{D}_\alpha$ using the matrix mechanical method. The orthogonality, completeness, and overlap of these eigenstates are derived. The momentum eigenstates are also derived using the wave mechanical method, by specifying the domain of definition of the momentum operator in addition to giving it a formal differential expression. Through this careful treatment we find that the two apparently different solutions obtained by Ohnuki and Kamefuchi in this context are actually unitarily equivalent. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
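For orientation, a standard textbook realization (not taken from the paper): with a single Bose operator $a$, the combinations $K_0 = \tfrac{1}{4}\{a, a^{\dagger}\}$, $K_{+} = \tfrac{1}{2}a^{\dagger 2}$, $K_{-} = \tfrac{1}{2}a^{2}$ obey the SL(2,R) relations $[K_0, K_{\pm}] = \pm K_{\pm}$ and $[K_{+}, K_{-}] = -2K_0$; in the ordinary Bose case the Fock space splits into its even and odd sectors, carrying $D_{1/4} \oplus D_{3/4}$, the $\beta = 1/4$ instance of the decomposition above.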
Abstract:
The aim of this article is to characterize a unitary increment process by a quantum stochastic integral representation on symmetric Fock space. Under certain assumptions, we prove its unitary equivalence to a Hudson-Parthasarathy flow.
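For reference, in one common sign convention the Hudson-Parthasarathy quantum stochastic differential equation on symmetric Fock space reads $dU_t = \big\{ (S - I)\,d\Lambda_t + L\,dA^{\dagger}_t - L^{\dagger}S\,dA_t - \big(iH + \tfrac{1}{2}L^{\dagger}L\big)\,dt \big\} U_t$, $U_0 = I$, with $S$ unitary, $H$ self-adjoint and $L$ a system operator; the unitary equivalence asserted above identifies the increment process with a flow of this type.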
Abstract:
Bonding in buckminsterfullerene, C60, can be described in terms of a unique canonical representation in which all six-membered rings have a benzenoid Kekulé structure while the pentagons are made exclusively of single bonds. The corresponding valence bond structure reflects the full symmetry of the molecule and is consistent with the observed bond-length variations. Computational support for this bonding description is provided using localized MOs obtained at the MNDO level. The requirement of benzenoid structures for all hexagons can be used as a criterion of stability of fullerenes that complements the pentagon isolation rule. A convenient two-dimensional representation of fullerene structures incorporating this bonding description is suggested, especially for use in mechanistic discussions.
Abstract:
In engineering design, the end goal is the creation of an artifact, product, system, or process that fulfills some functional requirements at some desired level of performance. As such, knowledge of functionality is essential in a wide variety of engineering tasks, including modeling, generation, modification, visualization, explanation, evaluation, diagnosis, and repair of these artifacts and processes. A formal representation of functionality is essential for supporting any of these activities on computers. The goal of Parts 1 and 2 of this Special Issue is to bring together the state of knowledge on representing functionality in engineering applications from both the engineering and the artificial intelligence (AI) research communities.
Abstract:
We look at graphical descriptions of block codes known as trellises, which illustrate connections between algebra and graph theory and can be used to develop powerful decoding algorithms. Trellis sizes for linear block codes are known to grow exponentially with the code parameters. Of considerable interest to coding theorists, therefore, are more compact descriptions called tail-biting trellises, which in some cases can be much smaller than any conventional trellis for the same code. We derive some interesting properties of tail-biting trellises and present a new decoding algorithm.
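For concreteness, here is a minimal sketch in Python (not this paper's algorithm) of the conventional syndrome (Wolf/BCJR) trellis of a binary linear block code decoded with the Viterbi algorithm; states at depth j are partial syndromes, and codewords are exactly the paths that begin and end at the all-zero syndrome. A tail-biting trellis would instead run one such search per matching start/end state pair.

    def viterbi_decode(H, received):
        """Nearest-codeword (Hamming-distance) decoding on the syndrome trellis of H."""
        m, n = len(H), len(H[0])
        col = [tuple(H[r][j] for r in range(m)) for j in range(n)]  # columns of H
        zero = tuple([0] * m)
        survivors = {zero: (0, [])}                 # state -> (metric, best path so far)
        for j in range(n):
            nxt = {}
            for state, (metric, path) in survivors.items():
                for bit in (0, 1):
                    s2 = tuple((s + bit * c) % 2 for s, c in zip(state, col[j]))
                    cost = metric + (bit != received[j])
                    if s2 not in nxt or cost < nxt[s2][0]:
                        nxt[s2] = (cost, path + [bit])
            survivors = nxt
        return survivors[zero][1]                   # best path ending at the zero syndrome

    # Example: the (7,4) Hamming code with one flipped bit in the received word.
    H = [[1, 1, 1, 0, 1, 0, 0],
         [1, 1, 0, 1, 0, 1, 0],
         [1, 0, 1, 1, 0, 0, 1]]
    print(viterbi_decode(H, [1, 0, 1, 1, 0, 1, 1]))  # -> [1, 0, 1, 1, 0, 0, 1]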
Abstract:
We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in $1.5 (\Delta + 2) \ln n$ dimensions, where $\Delta$ is the maximum degree of G. We also show that $\mathrm{box}(G) \le (\Delta + 2) \ln n$ for any graph G. Our bound is tight up to a factor of $\ln n$. We also show that our randomized algorithm can be derandomized to get a polynomial-time deterministic algorithm. Though our general upper bound is in terms of the maximum degree $\Delta$, we show that for almost all graphs on n vertices the boxicity is upper bounded by $c\cdot(d_{av} + 1) \ln n$, where $d_{av}$ is the average degree and c is a small constant. Also, we show that for any graph G, $\mathrm{box}(G) \le \sqrt{8 n d_{av} \ln n}$, which is tight up to a factor of $b \sqrt{\ln n}$ for a constant b.
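As a point of reference (this is only the definition, not the paper's randomized construction): a box representation in d dimensions assigns each vertex an axis-parallel box, i.e. a product of d closed intervals, such that two vertices are adjacent exactly when their boxes intersect; the boxicity $\mathrm{box}(G)$ is the smallest such d. A small Python checker:

    from itertools import combinations

    def boxes_intersect(a, b):
        # a, b: lists of (lo, hi) closed intervals, one per dimension
        return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

    def is_box_representation(boxes, edges, n):
        E = {frozenset(e) for e in edges}
        return all((frozenset((u, v)) in E) == boxes_intersect(boxes[u], boxes[v])
                   for u, v in combinations(range(n), 2))

    # The 4-cycle has boxicity 2; here is a valid 2-dimensional representation.
    boxes = {0: [(0, 10), (0, 1)], 1: [(0, 1), (0, 10)],
             2: [(0, 10), (9, 10)], 3: [(9, 10), (0, 10)]}
    print(is_box_representation(boxes, [(0, 1), (1, 2), (2, 3), (3, 0)], 4))  # True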
Abstract:
The role of the computer in design has expanded from modeling and analyzing concepts to generating them. Research into methods for supporting conceptual design using automated synthesis has attracted much attention in the past decades. To find out how designers synthesize solution concepts for multi-state mechanical devices, ten experimental studies were conducted. Observations from these empirical studies serve as the basis for developing the knowledge involved in the multi-state design synthesis process. In this paper, we propose a computational representation for expressing the multi-state design task and for enumerating multi-state behaviors of kinematic pairs and mechanisms. This computational representation will be used to formulate computational methods for the synthesis process, with the aim of developing a system that supports design synthesis of multi-state mechanical devices by generating a comprehensive variety of solution alternatives.
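Purely as a hypothetical illustration (the names and fields below are invented, not taken from the paper), such a computational representation might record the required input/output behavior per state and the state-wise behaviors a kinematic pair can realize:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Behavior:
        input_motion: str        # e.g. "rotation", "translation", "none"
        output_motion: str

    @dataclass
    class MultiStateTask:
        states: dict             # state name -> required Behavior

    @dataclass
    class KinematicPair:
        name: str
        realizable: dict         # state name -> Behavior this pair can deliver

        def satisfies(self, task):
            return all(self.realizable.get(s) == b for s, b in task.states.items())

    task = MultiStateTask({"engaged": Behavior("rotation", "translation"),
                           "disengaged": Behavior("rotation", "none")})
    pair = KinematicPair("hypothetical cam-follower",
                         {"engaged": Behavior("rotation", "translation"),
                          "disengaged": Behavior("rotation", "none")})
    print(pair.satisfies(task))  # True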
Abstract:
The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades.
Abstract:
With the availability of a huge amount of video data from various sources, efficient video retrieval tools are increasingly in demand. Video being multi-modal data, the perception of ``relevance'' between the user-provided query video (in Query-By-Example video search) and the retrieved video clips is subjective in nature. We present an efficient video retrieval method that takes the user's feedback on the relevance of retrieved videos and iteratively reformulates the input query feature vectors (QFV) for improved video retrieval. The QFV reformulation is done by a simple but powerful feature weight optimization method based on the Simultaneous Perturbation Stochastic Approximation (SPSA) technique. A video retrieval system with video indexing, searching and relevance feedback (RF) phases is built to demonstrate the performance of the proposed method. The query and database videos are indexed using conventional video features such as color, texture, etc. However, we use comprehensive and novel methods of feature representation, and a spatio-temporal distance measure, to retrieve the top M videos that are similar to the query. In the feedback phase, the user's relevance feedback on the previously retrieved videos is used iteratively to reformulate the QFV weights (a measure of importance) so that they automatically reflect the user's preference. It is our observation that a few iterations of such feedback are generally sufficient for retrieving the desired video clips. The novel application of SPSA-based RF for user-oriented feature weight optimization distinguishes the proposed method from existing ones. The experimental results show that the proposed RF-based video retrieval exhibits good performance.
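A generic sketch of the SPSA weight update (standard Spall-style SPSA; the loss function below is a hypothetical stand-in for the disagreement between the weighted similarity ranking and the user's feedback, not the paper's formulation):

    import numpy as np

    def spsa_optimize(loss, w0, iters=50, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
        rng = np.random.default_rng(seed)
        w = np.asarray(w0, dtype=float)
        for k in range(1, iters + 1):
            ak, ck = a / k**alpha, c / k**gamma              # decaying gain sequences
            delta = rng.choice([-1.0, 1.0], size=w.shape)    # Rademacher perturbation
            ghat = (loss(w + ck * delta) - loss(w - ck * delta)) / (2 * ck) * (1.0 / delta)
            w = np.clip(w - ak * ghat, 0.0, None)            # keep feature weights non-negative
        return w

    # Toy usage: learn weights that rank a relevant video above an irrelevant one.
    relevant, irrelevant, query = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])
    loss = lambda w: max(0.0, 1.0 - (w * query) @ relevant + (w * query) @ irrelevant)
    print(spsa_optimize(loss, [0.5, 0.5]))   # the first feature ends up weighted more heavily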
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data.

Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a Linear Program (LP). A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification.

Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational and experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
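A schematic of the underlying neighborhood-selection idea (using an off-the-shelf coordinate-descent LASSO rather than the paper's LP formulation): regress each gene on all others with an l1 penalty and connect pairs whose coefficients are non-zero, symmetrizing to obtain an undirected graph.

    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_network(X, alpha=0.1, tol=1e-8):
        """X: samples x genes matrix of transcript profiles; returns undirected edges."""
        n_genes = X.shape[1]
        edges = set()
        for g in range(n_genes):
            others = [j for j in range(n_genes) if j != g]
            coef = Lasso(alpha=alpha, max_iter=10000).fit(X[:, others], X[:, g]).coef_
            for j, c in zip(others, coef):
                if abs(c) > tol:
                    edges.add(frozenset((g, j)))    # keep the edge regardless of direction
        return edges

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 5))
    X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=60)    # gene 1 tracks gene 0
    print(sparse_network(X, alpha=0.05))    # the correlated pair {0, 1} should appear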
Abstract:
In this paper, expressions for the convolution multiplication properties of the DCT IV and DST IV are derived starting from equivalent DFT representations. Using these expressions, methods for implementing linear filtering through block convolution in the DCT IV and DST IV domains are proposed. The techniques developed for the DCT IV and DST IV are further extended to the MDCT and MDST, where the filter implementation is near exact for symmetric filters and approximate for non-symmetric filters. No additional overlapping is required for implementing symmetric filtering in the MDCT domain, and hence the proposed algorithm is computationally competitive with DFT-based systems. Moreover, the inherent 50% overlap between adjacent frames used for the MDCT/MDST reduces the blocking artifacts due to block processing or quantization. The techniques are computationally efficient for symmetric filters and provide a new alternative to DFT-based convolution.
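For reference (standard definitions, up to normalization convention), the length-$N$ DCT IV and DST IV kernels are $C^{IV}[k,n] = \sqrt{2/N}\,\cos\big(\tfrac{\pi}{N}(k+\tfrac{1}{2})(n+\tfrac{1}{2})\big)$ and $S^{IV}[k,n] = \sqrt{2/N}\,\sin\big(\tfrac{\pi}{N}(k+\tfrac{1}{2})(n+\tfrac{1}{2})\big)$ for $k, n = 0, \ldots, N-1$; with this normalization both transform matrices are symmetric and orthogonal (each is its own inverse).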
Abstract:
Topology-based methods have been successfully used for the analysis and visualization of piecewise-linear functions defined on triangle meshes. This paper describes a mechanism for extending these methods to piecewise-quadratic functions defined on triangulations of surfaces. Each triangular patch is tessellated into monotone regions, so that existing algorithms for computing topological representations of piecewise-linear functions may be applied directly to the piecewise-quadratic function. In particular, the tessellation is used for computing the Reeb graph, a topological data structure that provides a succinct representation of level sets of the function.
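As a simpler cousin of the structures discussed here (a join tree rather than the full Reeb graph, and for a piecewise-linear rather than piecewise-quadratic field; this is not the paper's method), one can sweep the vertices of a triangulation in increasing function value with union-find and record where sublevel-set components merge:

    def join_tree(values, edges):
        """Merge events of sublevel-set components of a PL scalar field on a graph/mesh."""
        order = sorted(range(len(values)), key=lambda v: values[v])
        parent = {}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]       # path compression
                v = parent[v]
            return v
        adj = {v: [] for v in range(len(values))}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        merges = []
        for v in order:                              # sweep upward from the global minimum
            parent[v] = v
            comps = {find(u) for u in adj[v] if u in parent}
            if len(comps) > 1:
                merges.append((v, values[v]))        # v joins several sublevel-set components
            for c in comps:
                parent[c] = v
        return merges

    # Two local minima (vertices 0 and 3) merging at vertex 2 on a path.
    print(join_tree([0.0, 2.0, 3.0, 1.0], [(0, 1), (1, 2), (2, 3)]))   # [(2, 3.0)]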
Abstract:
A dual representation scheme for performing arithmetic modulo an arbitrary integer M is presented. The coding scheme maps each integer N in the range 0 <= N < M into one of two representations, each being identified by its most significant bit. The encoding of numbers is straightforward and the problem of checking for unused combinations is eliminated.