960 results for Syntactic Projection
Abstract:
An application of image processing techniques to recognition of hand-drawn circuit diagrams is presented. The scanned image of a diagram is pre-processed to remove noise and converted to bilevel. Morphological operations are applied to obtain a clean, connected representation using thinned lines. The diagram comprises nodes, connections and components. Nodes and components are segmented using appropriate thresholds on a spatially varying object pixel density. Connection paths are traced using a pixel stack. Nodes are classified using syntactic analysis. Components are classified using a combination of invariant moments, scalar pixel-distribution features, and vector relationships between straight lines in polygonal representations. A node recognition accuracy of 82% and a component recognition accuracy of 86% were achieved on a database comprising 107 nodes and 449 components. This recogniser can be used for layout “beautification” or to generate input code for circuit analysis and simulation packages.
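As a hedged illustration of the component-feature step described above (not the authors' implementation), the sketch below computes invariant-moment features for one segmented, bilevel component using OpenCV; the nearest-template matching mentioned in the final comment is a hypothetical stand-in for the paper's classifier.

```python
# Minimal sketch: invariant-moment features for a segmented component,
# assuming OpenCV (cv2) and a binary (bilevel) component image.
import cv2
import numpy as np

def component_features(binary_img: np.ndarray) -> np.ndarray:
    """Return the 7 Hu invariant moments (log-scaled) of a bilevel component."""
    m = cv2.moments(binary_img, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log-scale for numerical stability; sign is preserved.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# A classifier (hypothetical nearest-template matcher) could then compare these
# features against stored prototypes for resistors, capacitors, and so on.
```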
Abstract:
Stochastic models for competing clonotypes of T cells by multivariate, continuous-time, discrete-state Markov processes have been proposed in the literature by Stirk, Molina-París and van den Berg (2008). A stochastic modelling framework is important because of rare events associated with small populations of some critical cell types. Usually, computational methods for these problems employ a trajectory-based approach, based on Monte Carlo simulation. This is partly because the complementary, probability density function (PDF) approaches can be expensive, but here we describe some efficient PDF approaches by directly solving the governing equations, known as the Master Equation. These computations are made very efficient through an approximation of the state space by the Finite State Projection and through the use of Krylov subspace methods when evaluating the action of the matrix exponential. These computational methods allow us to explore the evolution of the PDFs associated with these stochastic models, and bimodal distributions arise in some parameter regimes. Time-dependent propensities naturally arise in immunological processes due to, for example, age-dependent effects. Incorporating time-dependent propensities into the framework of the Master Equation significantly complicates the corresponding computational methods, but here we describe an efficient approach via Magnus formulas. Although this contribution focuses on the example of competing clonotypes, the general principles are relevant to multivariate Markov processes and provide fundamental techniques for computational immunology.
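The following is a minimal sketch of the Finite State Projection idea on a toy two-species birth-death system, not the competing-clonotype model of Stirk et al.; the rates are assumed for illustration, and SciPy's expm_multiply stands in for the Krylov subspace methods used in the paper (both compute the action of the matrix exponential).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import expm_multiply

N = 30                                   # truncation: keep states (n1, n2) with 0 <= ni < N
b1, b2, d1, d2 = 1.0, 1.0, 0.1, 0.1      # assumed birth/death rates (illustrative)

def idx(n1, n2):
    return n1 * N + n2

A = lil_matrix((N * N, N * N))
for n1 in range(N):
    for n2 in range(N):
        i = idx(n1, n2)
        # (target state, propensity) for each reaction out of (n1, n2)
        reactions = [((n1 + 1, n2), b1), ((n1, n2 + 1), b2),
                     ((n1 - 1, n2), d1 * n1), ((n1, n2 - 1), d2 * n2)]
        for (m1, m2), rate in reactions:
            if rate == 0.0:
                continue
            A[i, i] -= rate                        # outflow, even if the target is truncated
            if 0 <= m1 < N and 0 <= m2 < N:
                A[idx(m1, m2), i] += rate          # inflow to a retained state
A = A.tocsr()

p0 = np.zeros(N * N)
p0[idx(5, 5)] = 1.0                                # start at (5, 5) with probability 1
p_t = expm_multiply(A * 2.0, p0)                   # PDF over retained states at t = 2
print("retained probability mass:", p_t.sum())     # 1 minus the FSP truncation error
```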
Abstract:
Recently the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)] and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
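As a hedged sketch of the quasi-steady-state idea in the CME setting, the example below solves a one-species master equation for Michaelis–Menten product formation in which the fast binding step has been replaced by an effective propensity a(s) = vmax*s/(Km+s); the parameters are illustrative and this is not the tQSSA-CME solver described in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

S0 = 100                      # initial substrate count, also the truncation bound
vmax, Km = 5.0, 20.0          # assumed effective parameters (illustrative)
s = np.arange(S0 + 1)
a = vmax * s / (Km + s)       # QSSA propensity for S -> P when s molecules remain

# Generator: state s loses probability at rate a(s), state s-1 gains it.
A = diags([-a, a[1:]], offsets=[0, 1], format="csr")

p0 = np.zeros(S0 + 1)
p0[S0] = 1.0                  # start with all S0 substrate molecules present
for t in (1.0, 5.0, 20.0):
    p_t = expm_multiply(A * t, p0)
    print(f"t = {t:5.1f}   mean substrate = {p_t @ s:6.2f}")
```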
Abstract:
In automatic facial expression recognition, an increasing number of techniques has been proposed in the literature that exploit the temporal nature of facial expressions. As all facial expressions are known to evolve over time, it is crucially important for a classifier to be capable of modelling their dynamics. We establish that the method of sparse representation (SR) classifiers is a suitable candidate for this purpose, and subsequently propose a framework for expression dynamics to be efficiently incorporated into its current formulation. We additionally show that, for the SR method to be applied effectively, a certain threshold on image dimensionality must be enforced (unlike in facial recognition problems). Thirdly, we determine that recognition rates may be significantly influenced by the size of the projection matrix \Phi. To demonstrate these points, a battery of experiments was conducted on the CK+ dataset for the recognition of the seven prototypic expressions - anger, contempt, disgust, fear, happiness, sadness and surprise - and comparisons were made between the proposed temporal-SR framework, the static-SR framework and a state-of-the-art support vector machine.
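A minimal sketch of sparse-representation classification with a random projection matrix Phi is given below; the data are random stand-ins, and the use of scikit-learn's Lasso as the l1 solver and the chosen dimensions are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, m, n_train = 4096, 300, 120                   # raw dimension, projected dimension, samples
X = rng.standard_normal((d, n_train))            # columns: training images (stand-in data)
labels = rng.integers(0, 7, n_train)             # 7 prototypic expression classes
y = X[:, 5] + 0.01 * rng.standard_normal(d)      # test sample close to training sample 5

Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # random projection matrix
D, y_p = Phi @ X, Phi @ y                        # project dictionary and test sample

solver = Lasso(alpha=1e-3, max_iter=10000)       # l1-regularised solver as SR surrogate
solver.fit(D, y_p)
x = solver.coef_                                 # sparse code over the training set

# Classify by the class whose training samples best reconstruct the test sample.
residuals = [np.linalg.norm(y_p - D[:, labels == c] @ x[labels == c]) for c in range(7)]
print("predicted expression class:", int(np.argmin(residuals)),
      "  true class:", int(labels[5]))
```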
Abstract:
Robust speaker verification on short utterances remains a key consideration when deploying automatic speaker recognition, as many real-world applications often have access to only limited-duration speech data. This paper explores how the recent technologies focused around total variability modeling behave when training and testing utterance lengths are reduced. Results are presented which provide a comparison of Joint Factor Analysis (JFA) and i-vector based systems, including various compensation techniques: Within-Class Covariance Normalization (WCCN), LDA, Scatter Difference Nuisance Attribute Projection (SDNAP) and Gaussian Probabilistic Linear Discriminant Analysis (GPLDA). Speaker verification performance for utterances with as little as 2 sec of data taken from the NIST Speaker Recognition Evaluations is presented to provide a clearer picture of the current performance characteristics of these techniques in short utterance conditions.
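The sketch below illustrates one of the listed compensation techniques, WCCN, followed by cosine scoring of i-vectors; the i-vectors are synthetic stand-ins, and the dimensions, ridge term and scoring choice are assumptions rather than details of the evaluated systems.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_spk, per_spk = 100, 50, 10
ivecs = rng.standard_normal((n_spk, per_spk, dim))    # session i-vectors (stand-ins)
ivecs += 3.0 * rng.standard_normal((n_spk, 1, dim))   # add a per-speaker offset

# Within-class covariance: average covariance of each speaker's own i-vectors.
W = np.mean([np.cov(ivecs[s].T) for s in range(n_spk)], axis=0)
W += 1e-3 * np.eye(dim)                               # small ridge for numerical stability
B = np.linalg.cholesky(np.linalg.inv(W))              # WCCN projection: x -> B.T @ x

def score(enrol, test):
    """Cosine similarity between WCCN-projected i-vectors."""
    a, b = B.T @ enrol, B.T @ test
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("same speaker:     ", round(score(ivecs[0, 0], ivecs[0, 1]), 3))
print("different speaker:", round(score(ivecs[0, 0], ivecs[1, 0]), 3))
```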
Abstract:
Compressive Sensing (CS) is a popular signal processing technique that can exactly reconstruct a signal from a small number of random projections of the original signal, provided that the signal is sufficiently sparse. We demonstrate the applicability of CS in the field of gait recognition as a very effective dimensionality reduction technique, using the gait energy image (GEI) as the feature extraction process. We compare the CS-based approach to principal component analysis (PCA) and show that the proposed method outperforms this baseline, particularly under situations where there are appearance changes in the subject. Applying CS to the gait features also avoids the need to train the models, by using a generalised random projection.
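A minimal sketch of the random-projection idea over flattened gait energy images, with nearest-neighbour matching, is shown below; the GEIs are synthetic stand-ins and the sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, k = 128, 88, 300                          # GEI size and projected dimension
gallery = rng.random((20, h * w))               # 20 enrolled GEIs, flattened (stand-ins)
probe = gallery[7] + 0.05 * rng.random(h * w)   # noisy probe of gallery subject 7

# Generalised random projection: i.i.d. Gaussian entries, no training required.
Phi = rng.standard_normal((k, h * w)) / np.sqrt(k)
g_proj = gallery @ Phi.T
p_proj = Phi @ probe

match = int(np.argmin(np.linalg.norm(g_proj - p_proj, axis=1)))
print("matched gallery subject:", match)        # expected: 7
```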
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky’s total deformation theory is used to evaluate the effective shear modulus, Young’s modulus and Poisson’s ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape function at the field point, the radial basis function (RBF) in the local support domain is used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
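As a hedged, one-dimensional sketch of the RBF machinery mentioned above, the example below fits a local multiquadric interpolant and evaluates its second derivative at a field point; the node layout, shape parameter and target function are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def mq(r, c=0.5):
    """Multiquadric RBF as a function of distance r."""
    return np.sqrt(r**2 + c**2)

def mq_dd(r, c=0.5):
    """Second derivative of the 1-D multiquadric w.r.t. x, with r = |x - xi|."""
    return c**2 / (r**2 + c**2) ** 1.5

nodes = np.linspace(0.0, 1.0, 11)          # nodes of the local support domain
u = np.sin(np.pi * nodes)                  # sampled field values (illustrative)

# Solve the collocation system A a = u with A_ij = phi(|x_i - x_j|).
A = mq(np.abs(nodes[:, None] - nodes[None, :]))
a = np.linalg.solve(A, u)

x0 = 0.5
u_xx = mq_dd(np.abs(x0 - nodes)) @ a       # second derivative of the interpolant at x0
print("RBF estimate:", round(u_xx, 3), "  exact:", round(-np.pi**2 * np.sin(np.pi * x0), 3))
```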
Abstract:
An interactive installation with full-body interface, digital projection, multi-touch-sensitive screen surfaces, interactive 3D gaming software, motorised dioramas, 4.1 spatial sound and new furniture forms - investigating the cultural dimensions of sustainability through the lens of 'time'. “Time is change, time is finitude. Humans are a finite species. Every decision we make today brings that end closer, or alternatively pushes it further away. Nothing can be neutral”. Tony Fry

DETAILS: Finitude (Mallee:Time) is a major new-media/sculptural hybrid work premiered in 2011, in version 1, at the Ka-rama Motel for the Mildura Palimpsest #8 ('Collaborators and Saboteurs'). Each participant/viewer lies comfortably on their back on the double bed of Room 22. Directly above them, supported by a wooden structure not unlike a house frame, is a semi-transparent Perspex screen that displays projected 3D imagery and is simultaneously sensitive to the lightest of finger touches. Depending upon the ever-changing qualities of the projected image on this screen, the participant can see through its surface to a series of physical dioramas suspended above, lit by subtle LED spotlighting. The diorama consists of a slowly rotating series of physical environments, including several animatronic components, allowing the real-time composition of whimsical ‘landscapes’ of both 'real' and 'virtual' media. Through subtle, non-didactic touch-sensitive interactivity the participant then has influence over the 3D graphic imagery, the physical movements of the diorama and the 4.1 immersive soundscape, creating an uncanny blend of physical and virtual media. Five speakers positioned around the room deliver a rich interactive soundscape that responds both audibly and physically to interactions.

VERSION 1, CONTEXT/THEORY: Finitude (Mallee: Time) is Version 1 of a series of presentations during 2012-14. This version was inspired by a series of recent visits and residencies in the SW Victorian Mallee country. Drawing further on recent writings by postcolonial author Paul Carter, the work is envisaged as an evolving ‘personal topography’ of place-discovery. By contrasting and melding readily available generalisations of the Mallee region’s rational surfaces, climatic maps and ecological systems with what Carter calls “a fine capillary system of interconnected words, places, memories and sensations” generated through my own idiosyncratic research processes, Finitude (Mallee Time) invokes a “dark writing” of place through outside eyes - an approach that avoids concentrating upon what 'everyone else knows' to instead imagine and develop a sense of how things might be. This basis in re-imagining and re-invention becomes the vehicle for the work’s more fundamental intention - a meditative re-imagination of 'time' (and region) as finite resources. Towards this end, every object, process and idea in the work is re-thought as having its own ‘time component’ or ‘residue’ that becomes deposited into our 'collective future'. Thought of in this way, Finitude (Mallee Time) suggests the poverty of predominant images of time as ‘mechanism’, envisaging time instead as a plastic, cyclical medium that we can each choose to ‘give to’ or ‘take away from’ our future. Put another way - time has become finitude.
Abstract:
We propose an approach to employ eigen light-fields for face recognition across pose on video. Faces of a subject are collected from video frames and combined based on pose to obtain a set of probe light-fields. These probe data are then projected to the principal subspace of the eigen light-fields, within which the classification takes place. We modify the original light-field projection and find that it is more robust in the proposed system. Evaluation on the VidTIMIT dataset has demonstrated that the eigen light-fields method is able to take advantage of multiple observations contained in the video.
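A minimal sketch of the projection-and-classification step is given below, using plain PCA on synthetic vectors and a simple mean over probe frames as a stand-in for the light-field assembly; the data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_gallery, n_frames = 2000, 30, 15
gallery = rng.random((n_gallery, d))                          # one vector per subject
probe_frames = gallery[4] + 0.1 * rng.random((n_frames, d))   # video frames of subject 4

# Principal subspace from the gallery (PCA via SVD on mean-centred data).
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
basis = Vt[:20]                                               # top 20 eigenvectors

g_proj = (gallery - mean) @ basis.T                           # gallery in the subspace
p_proj = (probe_frames.mean(axis=0) - mean) @ basis.T         # combine frames, then project

match = int(np.argmin(np.linalg.norm(g_proj - p_proj, axis=1)))
print("identified subject:", match)                           # expected: 4
```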
Abstract:
With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering on the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods to utilise frequent pattern mining techniques to reduce the dimension; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information. The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilises the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis work contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
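As a hedged illustration of the implicit (VSM) model described above, the sketch below concatenates structure-term and content-term vectors for a few toy documents and clusters them; the toy data, the tag-path tokenisation and the use of scikit-learn are assumptions, not the thesis pipeline.

```python
from scipy.sparse import hstack
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy "XML documents": one string of tag-path terms and one string of text content each.
docs_structure = ["article title author section para",
                  "article title author section figure",
                  "product name price stock",
                  "product name price review"]
docs_content = ["neural networks deep learning",
                "convolutional networks vision",
                "laptop fast cpu cheap",
                "laptop battery review positive"]

X_struct = TfidfVectorizer().fit_transform(docs_structure)     # structure features
X_content = TfidfVectorizer().fit_transform(docs_content)      # content features
X = hstack([X_struct, X_content]).tocsr()                      # combined vector space model

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # the two article-like and the two product-like documents should group
```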
Abstract:
We report on analysis of discussions in an online community of people with chronic illness using socio-cognitively motivated, automatically produced semantic spaces. The analysis aims to further the emerging theory of "transition" (how people can learn to incorporate the consequences of illness into their lives). An automatically derived representation of sense of self for individuals is created in the semantic space by the analysis of the email utterances of the community members. The movement over time of the sense of self is visualised, via projection, with respect to axes of "ordinariness" and "extra-ordinariness". Qualitative evaluation shows that the visualisation is paralleled by the transitions of people during the course of their illness. The research aims to progress tools for analysis of textual data to promote greater use of tacit knowledge as found in online virtual communities. We hope it also encourages further interest in representation of sense-of-self.
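A minimal sketch of projecting aggregate text vectors onto an interpretable axis is shown below; the word vectors, anchor terms and axis construction are illustrative assumptions rather than the semantic-space method actually used.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 100
vocab = {w: rng.standard_normal(dim) for w in
         ["work", "family", "routine", "hospital", "diagnosis", "treatment"]}

# Axis = direction between the centroids of two anchor-term sets.
ordinary = np.mean([vocab[w] for w in ["work", "family", "routine"]], axis=0)
extraordinary = np.mean([vocab[w] for w in ["hospital", "diagnosis", "treatment"]], axis=0)
axis = extraordinary - ordinary
axis /= np.linalg.norm(axis)

# "Sense of self" per time slice: mean of the word vectors in that slice's emails.
slices = [["family", "work", "hospital"], ["hospital", "treatment", "diagnosis"]]
for t, words in enumerate(slices):
    self_vec = np.mean([vocab[w] for w in words], axis=0)
    print(f"time slice {t}: projection on ordinary->extra-ordinary axis = {self_vec @ axis:+.2f}")
```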
Abstract:
This paper discusses and summarises a recent systematic study on the implication of global warming on air conditioned office buildings in Australia. Four areas are covered, including analysis of historical weather data, generation of future weather data for the impact study of global warming, projection of building performance under various global warming scenarios, and evaluation of various adaptation strategies under 2070 high global warming conditions. Overall, it is found that depending on the assumed future climate scenarios and the location considered, the increase of total building energy use for the sample Australian office building may range from 0.4 to 15.1%. When the increase of annual average outdoor temperature exceeds 2 °C, the risk of overheating will increase significantly. However, the potential overheating problem could be completely eliminated if internal load density is significantly reduced.
Abstract:
As global warming entails new conditions for the built environment, the thermal behavior of existing air conditioned office buildings, which are typically designed based on current weather data, may also change. Through building computer simulations, this paper evaluates the impact of global warming on the design and performance of air-conditioned office buildings in Australia, including the increased cooling loads imposed by potential global warming and probable indoor temperature increases due to possibly undersized air-conditioning systems, as well as the possible change in energy use and CO2 emissions of Australian office buildings. It is found that the existing office buildings would generally be able to adapt to the increasing warmth of the 2030 Low and High scenario projections and the 2070 Low scenario projection. However, for the 2070 High scenario, the study indicates that the existing office buildings, in all capital cities except Hobart, will suffer from overheating problems. If the energy source is assumed to be electricity, it is found that, in comparison with the current weather scenario, the increased energy use would translate into an increase in CO2 emissions of 0 to 34.6 kg CO2-equivalent/m2, varying with the future weather scenario and the location.
Abstract:
This paper arises from our concern about the level of teaching of engineering drawing at tertiary institutions in Australia. Little attention is paid to teaching hand drawing and tolerancing. Teaching of engineering drawing is usually limited to computer-aided design (CAD) using AutoCAD or one of the solid-modelling packages. As a result, many engineering graduates have difficulties in understanding how views are produced in different projection angles, are unable to produce engineering drawings of professional quality or to read engineering drawings, and are unable to select fits and limits or surface roughness. In the Faculty of Built Environment and Engineering at the Queensland University of Technology, new approaches to teaching engineering drawing have been introduced. In this paper the results of these innovative approaches are examined through surveys and other research methods.
Abstract:
This paper analyses the pairwise distances of signatures produced by the TopSig retrieval model on two document collections. The distribution of the distances is compared to that of purely random signatures. This explains why TopSig is competitive with state-of-the-art retrieval models only at early precision: only the local neighbourhood of the signatures is interpretable. We suggest this is a common property of vector space models.
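The comparison described above can be sketched as follows; the "document-like" signatures are synthetic (random bits sharing a common base) and merely stand in for real TopSig output.

```python
import numpy as np

rng = np.random.default_rng(5)
n_docs, bits = 200, 1024

random_sigs = rng.integers(0, 2, (n_docs, bits), dtype=np.uint8)
base = rng.integers(0, 2, bits, dtype=np.uint8)                # shared structure
doc_sigs = np.where(rng.random((n_docs, bits)) < 0.2, 1 - base, base).astype(np.uint8)

def pairwise_hamming(S):
    """Upper-triangular pairwise Hamming distances between the rows of S."""
    d = (S[:, None, :] != S[None, :, :]).sum(axis=2)
    return d[np.triu_indices(len(S), k=1)]

for name, S in (("random", random_sigs), ("document-like", doc_sigs)):
    d = pairwise_hamming(S)
    print(f"{name:14s} mean = {d.mean():7.1f}   std = {d.std():5.1f}")
# Random signatures concentrate around bits/2 (here 512); the structured
# signatures sit at systematically smaller pairwise distances.
```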