886 results for Projection
Abstract:
Due to the limitations of current condition monitoring technologies, estimates of asset health states may contain uncertainty. A maintenance strategy that ignores this uncertainty in the asset health state can cause additional costs or downtime. The partially observable Markov decision process (POMDP) is a commonly used approach for deriving optimal maintenance strategies when asset health inspections are imperfect. However, existing applications of the POMDP to maintenance decision-making largely adopt discrete-time and discrete-state assumptions. The discrete-time assumption requires health state transitions and maintenance activities to occur only at discrete epochs, which cannot model the failure time accurately and is not cost-effective. The discrete health state assumption, on the other hand, may not be fine-grained enough to improve the effectiveness of maintenance. To address these limitations, this paper proposes a continuous-state partially observable semi-Markov decision process (POSMDP). An algorithm that combines the Monte Carlo-based density projection method and policy iteration is developed to solve the POSMDP. Different types of maintenance activities (i.e., inspections, replacement, and imperfect maintenance) are considered. The next maintenance action and the corresponding waiting duration are optimized jointly with respect to the long-run expected cost per unit time and availability. Simulation studies show that the proposed maintenance optimization approach is more cost-effective than maintenance strategies derived by two other approximate methods when regular inspection intervals are adopted. The simulation studies also show that the maintenance cost can be further reduced by developing maintenance strategies with state-dependent maintenance intervals using the POSMDP. In addition, in the simulation studies the proposed POSMDP demonstrates the ability to adopt a cost-effective strategy structure when multiple types of maintenance activities are involved.
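As a rough illustration of the policy-iteration ingredient only (not the authors' POSMDP with Monte Carlo density projection), the following Python sketch runs tabular policy iteration on a toy discretised health-state maintenance MDP; the states, transition probabilities, costs and discount factor are all assumed for illustration.

```python
import numpy as np

n, gamma = 5, 0.95                      # 5 discretised health states and a discount factor (assumed)
A = ["wait", "imperfect", "replace"]    # candidate maintenance actions

# Illustrative transition kernels P[a][s, s'] and immediate costs C[a][s]; all numbers are assumed.
P = {a: np.zeros((n, n)) for a in A}
for s in range(n):
    P["wait"][s, min(s + 1, n - 1)] += 0.6     # degrade by one level
    P["wait"][s, s] += 0.4                     # or stay put
    P["imperfect"][s, max(s - 1, 0)] = 1.0     # imperfect maintenance: improve one level
    P["replace"][s, 0] = 1.0                   # replacement: back to as-good-as-new
C = {"wait":      np.array([0.0, 0.0, 0.0, 0.0, 50.0]),   # only the failed state is costly to wait in
     "imperfect": np.full(n, 10.0),
     "replace":   np.full(n, 30.0)}

policy = np.zeros(n, dtype=int)                 # start by waiting in every state
while True:                                     # policy iteration
    P_pi = np.array([P[A[policy[s]]][s] for s in range(n)])
    c_pi = np.array([C[A[policy[s]]][s] for s in range(n)])
    V = np.linalg.solve(np.eye(n) - gamma * P_pi, c_pi)               # policy evaluation
    Q = np.array([[C[a][s] + gamma * P[a][s] @ V for a in A] for s in range(n)])
    new_policy = Q.argmin(axis=1)                                     # greedy improvement
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print({s: A[policy[s]] for s in range(n)})      # maintenance action recommended in each health state
```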
Abstract:
Background: Heat-related mortality is a matter of great public health concern, especially in the light of climate change. Although many studies have found associations between high temperatures and mortality, more research is needed to project the future impacts of climate change on heat-related mortality. Objectives: We conducted a systematic review of research and methods for projecting future heat-related mortality under climate change scenarios. Data sources and extraction: A literature search was conducted in August 2010, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search was limited to peer-reviewed journal articles published in English up to 2010. Data synthesis: The review included 14 studies that fulfilled the inclusion criteria. Most projections showed that climate change would result in a substantial increase in heat-related mortality. Projecting heat-related mortality requires understanding of the historical temperature-mortality relationships, and consideration of the future changes in climate, population and acclimatization. Further research is needed to provide a stronger theoretical framework for projections, including a better understanding of socio-economic development, adaptation strategies, land-use patterns, air pollution and mortality displacement. Conclusions: Scenario-based projection research will meaningfully contribute to assessing and managing the potential impacts of climate change on heat-related mortality.
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼10⁴–10⁵ particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich, GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼10²), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
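As a loose illustration of cross-correlation particle picking in general (not SwarmPS's actual implementation), the sketch below uses scikit-image's normalised cross-correlation and peak detection; the micrograph, template, and thresholds are placeholders.

```python
import numpy as np
from skimage.feature import match_template, peak_local_max

# Placeholder micrograph and particle template; in practice these come from real image data.
micrograph = np.random.rand(512, 512).astype(np.float32)
template = np.random.rand(48, 48).astype(np.float32)

# Normalised cross-correlation map (same size as the micrograph thanks to pad_input=True).
ncc = match_template(micrograph, template, pad_input=True)

# Keep well-separated correlation peaks above an assumed threshold as candidate picks,
# which a user could then review and override manually.
coords = peak_local_max(ncc, min_distance=24, threshold_abs=0.3)
print(f"{len(coords)} candidate particles at (row, col) coordinates")
```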
Abstract:
A new algorithm for extracting features from images for object recognition is described. The algorithm uses higher-order spectra to provide desirable invariance properties, to provide noise immunity, and to incorporate nonlinearity into the feature extraction procedure, thereby allowing the use of simple classifiers. An image can be reduced to a set of 1D functions via the Radon transform, or alternatively, the Fourier transform of each 1D projection can be obtained from a radial slice of the 2D Fourier transform of the image according to the Fourier slice theorem. A triple product of Fourier coefficients, referred to as the deterministic bispectrum, is computed for each 1D function and is integrated along radial lines in bifrequency space. Phases of the integrated bispectra are shown to be translation- and scale-invariant. Rotation invariance is achieved by a regrouping of these invariants at a constant radius followed by a second stage of invariant extraction. Rotation invariance is thus converted to translation invariance in the second step. Results using synthetic and actual images show that isolated, compact clusters are formed in feature space. These clusters are linearly separable, indicating that the nonlinearity required in the mapping from the input space to the classification space is incorporated well into the feature extraction stage. The use of higher-order spectra results in good noise immunity, as verified with synthetic and real images. Classification of images using the higher-order-spectra-based algorithm compares favorably to classification using the method of moment invariants.
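A simplified numerical sketch of the projection-plus-bispectrum idea (not the paper's exact feature pipeline) is given below; the test image, projection angles, and radial-line slopes are assumed, and skimage's Radon transform stands in for the projection step.

```python
import numpy as np
from skimage.transform import radon

# Placeholder test image; angles and radial-line slopes are assumed for illustration.
image = np.zeros((128, 128)); image[40:90, 50:80] = 1.0
thetas = np.arange(0.0, 180.0, 5.0)
projections = radon(image, theta=thetas, circle=False)   # one 1-D projection per angle

def integrated_bispectrum_phases(p, slopes=np.linspace(0.1, 1.0, 8)):
    """Phase of the deterministic bispectrum integrated along radial lines f2 = a * f1."""
    F = np.fft.rfft(p)
    out = []
    for a in slopes:
        acc = 0.0 + 0.0j
        for f1 in range(1, len(F)):
            f2 = int(round(a * f1))
            if f2 < 1:
                continue
            if f1 + f2 >= len(F):
                break
            acc += F[f1] * F[f2] * np.conj(F[f1 + f2])    # triple product of Fourier coefficients
        out.append(np.angle(acc))
    return np.array(out)

# One set of integrated-bispectrum phases per projection angle (translation/scale-invariant features).
features = np.array([integrated_bispectrum_phases(projections[:, k]) for k in range(len(thetas))])
print(features.shape)
```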
Abstract:
Stochastic models for competing clonotypes of T cells, formulated as multivariate, continuous-time, discrete-state Markov processes, have been proposed in the literature by Stirk, Molina-París and van den Berg (2008). A stochastic modelling framework is important because of rare events associated with small populations of some critical cell types. Usually, computational methods for these problems employ a trajectory-based approach based on Monte Carlo simulation. This is partly because the complementary probability density function (PDF) approaches can be expensive, but here we describe some efficient PDF approaches that directly solve the governing equations, known as the Master Equation. These computations are made very efficient through an approximation of the state space by the Finite State Projection and through the use of Krylov subspace methods when computing the action of the matrix exponential. These computational methods allow us to explore the evolution of the PDFs associated with these stochastic models, and bimodal distributions arise in some parameter regimes. Time-dependent propensities naturally arise in immunological processes due to, for example, age-dependent effects. Incorporating time-dependent propensities into the framework of the Master Equation significantly complicates the corresponding computational methods, but here we describe an efficient approach via Magnus formulas. Although this contribution focuses on the example of competing clonotypes, the general principles are relevant to multivariate Markov processes and provide fundamental techniques for computational immunology.
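A minimal Finite State Projection sketch for a toy one-dimensional birth-death process is shown below (the competing-clonotype models above are multivariate, but the truncation-plus-matrix-exponential mechanics are the same); the rate constants and truncation size are assumed, and SciPy's expm_multiply plays the role that Krylov subspace methods play in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Toy birth-death process: production at rate k, degradation at rate g per molecule.
# Rate constants and the truncation size N are assumed for illustration.
k, g, N = 10.0, 1.0, 200
n = np.arange(N)
prod = np.full(N, k)                        # production propensity in each state
deg = g * n                                 # degradation propensity in each state

# Truncated CME generator A (dp/dt = A p); probability that would leave the projected
# state space is simply lost, which is exactly the Finite State Projection truncation.
A = diags([deg[1:], -(prod + deg), prod[:-1]], offsets=[1, 0, -1], format="csr")

p0 = np.zeros(N); p0[0] = 1.0               # start with zero molecules
p_t = expm_multiply(A * 5.0, p0)            # action of the matrix exponential at t = 5

print("FSP truncation error bound:", 1.0 - p_t.sum())   # mass that escaped the projection
print("mean copy number at t = 5:", n @ p_t)
```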
Abstract:
Recently, the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)], and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen-activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
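In the same spirit, a loose sketch of the standard (not total) QSSA applied to a CME solver is given below: the fast enzyme-substrate binding in Michaelis-Menten kinetics is folded into a single reduced propensity, leaving a one-dimensional master equation in the substrate count; all parameter values are assumed.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# QSSA-reduced Michaelis-Menten kinetics: S -> P with propensity a(S) = kcat*E_T*S/(Km+S).
# Parameter values and the initial substrate count are assumed for illustration.
kcat, E_T, Km, S0 = 1.0, 10.0, 25.0, 100
s = np.arange(S0 + 1)
a = kcat * E_T * s / (Km + s)                  # reduced (QSSA) propensity for each substrate count

# Generator of the reduced CME: dp_s/dt = a(s+1) p_{s+1} - a(s) p_s.
A = diags([a[1:], -a], offsets=[1, 0], format="csr")

p0 = np.zeros(S0 + 1); p0[-1] = 1.0            # start with S0 substrate molecules
for t in (1.0, 5.0, 20.0):
    p_t = expm_multiply(A * t, p0)             # direct solution of the reduced master equation
    print(f"t = {t:5.1f}  mean substrate = {s @ p_t:6.2f}")
```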
Abstract:
In automatic facial expression recognition, an increasing number of techniques has been proposed in the literature that exploit the temporal nature of facial expressions. As all facial expressions are known to evolve over time, it is crucially important for a classifier to be capable of modelling their dynamics. We establish that sparse representation (SR) classifiers are a suitable candidate for this purpose, and subsequently propose a framework for expression dynamics to be efficiently incorporated into their current formulation. We additionally show that for the SR method to be applied effectively, a certain threshold on image dimensionality must be enforced (unlike in facial recognition problems). Thirdly, we determine that recognition rates may be significantly influenced by the size of the projection matrix Φ. To demonstrate these points, a battery of experiments was conducted on the CK+ dataset for the recognition of the seven prototypic expressions - anger, contempt, disgust, fear, happiness, sadness and surprise - and comparisons were made between the proposed temporal-SR framework, the static-SR framework and a state-of-the-art support vector machine.
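A minimal sketch of sparse-representation classification with a random projection matrix Φ is given below (static features only, not the proposed temporal formulation); the feature dimensions, probe, and Lasso penalty are all assumed.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Placeholder training features: n_per_class vectors for each of 7 expression classes (sizes assumed).
n_classes, n_per_class, dim, proj_dim = 7, 20, 4096, 300
X_train = rng.normal(size=(n_classes * n_per_class, dim))
labels = np.repeat(np.arange(n_classes), n_per_class)
probe_raw = X_train[3] + 0.1 * rng.normal(size=dim)        # a probe close to a class-0 sample

# Random projection matrix Phi reduces the image dimensionality before sparse coding.
Phi = rng.normal(size=(proj_dim, dim)) / np.sqrt(proj_dim)
D = Phi @ X_train.T                                        # dictionary: one projected column per sample
D /= np.linalg.norm(D, axis=0)
probe = Phi @ probe_raw
probe /= np.linalg.norm(probe)

# Sparse code of the probe over the whole dictionary (l1-regularised least squares).
code = Lasso(alpha=1e-3, max_iter=5000).fit(D, probe).coef_

# Assign the class whose training atoms best reconstruct the probe.
residuals = [np.linalg.norm(probe - D[:, labels == c] @ code[labels == c]) for c in range(n_classes)]
print("predicted expression class:", int(np.argmin(residuals)))
```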
Abstract:
Robust speaker verification on short utterances remains a key consideration when deploying automatic speaker recognition, as many real-world applications often have access to only limited-duration speech data. This paper explores how recent technologies focused around total variability modeling behave when training and testing utterance lengths are reduced. Results are presented which provide a comparison of Joint Factor Analysis (JFA) and i-vector based systems, including various compensation techniques: Within-Class Covariance Normalization (WCCN), LDA, Scatter Difference Nuisance Attribute Projection (SDNAP) and Gaussian Probabilistic Linear Discriminant Analysis (GPLDA). Speaker verification performance for utterances with as little as 2 sec of data taken from the NIST Speaker Recognition Evaluations is presented to provide a clearer picture of the current performance characteristics of these techniques in short utterance conditions.
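As an illustration of one of the compensation steps listed above, the following sketch computes a Within-Class Covariance Normalization (WCCN) projection for toy i-vectors; the i-vector dimensionality, speaker counts, and data are assumed and no full i-vector backend is implied.

```python
import numpy as np

def wccn_projection(ivectors, speakers):
    """Return the WCCN projection B (apply as ivectors @ B); a sketch, not a full backend."""
    dim = ivectors.shape[1]
    W = np.zeros((dim, dim))
    spk_ids = np.unique(speakers)
    for spk in spk_ids:
        X = ivectors[speakers == spk]
        Xc = X - X.mean(axis=0)
        W += Xc.T @ Xc / len(X)                       # per-speaker covariance
    W /= len(spk_ids)                                 # average within-class covariance
    return np.linalg.cholesky(np.linalg.inv(W))       # B with B @ B.T = W^{-1}

# Toy data: 8 speakers, 10 utterances each, 20-dimensional i-vectors (all sizes assumed).
rng = np.random.default_rng(1)
speakers = np.repeat(np.arange(8), 10)
ivecs = rng.normal(size=(80, 20)) + speakers[:, None]
B = wccn_projection(ivecs, speakers)
print("WCCN-projected i-vectors:", (ivecs @ B).shape)
```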
Abstract:
Compressive Sensing (CS) is a popular signal processing technique that can exactly reconstruct a signal from a small number of random projections of the original signal, provided that the signal is sufficiently sparse. We demonstrate the applicability of CS to gait recognition as a very effective dimensionality reduction technique, using the gait energy image (GEI) as the extracted feature. We compare the CS-based approach to principal component analysis (PCA) and show that the proposed method outperforms this baseline, particularly in situations where there are appearance changes in the subject. Applying CS to the gait features also avoids the need to train the models, by using a generalised random projection.
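A minimal sketch of the training-free random-projection step on flattened GEIs is shown below (nearest-neighbour matching in the projected space rather than a full CS reconstruction); the gallery size, image size, number of projections and noise level are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder gallery of gait energy images (GEIs), flattened to one row per subject (sizes assumed).
n_subjects, h, w, m = 20, 128, 88, 300          # m random projections per GEI
gallery = rng.random(size=(n_subjects, h * w))

# Gaussian random projection matrix: no training needed, unlike PCA.
Phi = rng.normal(size=(m, h * w)) / np.sqrt(m)
gallery_proj = gallery @ Phi.T

# A probe is a noisy version of subject 7's GEI; classify by nearest neighbour in the projected space.
probe = gallery[7] + 0.05 * rng.normal(size=h * w)
probe_proj = Phi @ probe
pred = np.argmin(np.linalg.norm(gallery_proj - probe_proj, axis=1))
print("predicted subject:", pred)               # expected: 7
```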
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky’s total deformation theory is used to evaluate the effective shear modulus, Young’s modulus and Poisson’s ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape functions at a field point, radial basis functions (RBFs) in the local support domain are used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
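A one-dimensional sketch of the RBF ingredient (evaluating second derivatives of the shape functions at a field point) is given below, using multiquadric RBFs; the shape parameter, nodes and test function are assumed, and the elastoplastic iteration itself is not reproduced.

```python
import numpy as np

# Interpolate u(x) = sin(x) on scattered nodes with multiquadric RBFs, then evaluate the
# second derivative analytically at a field point (shape parameter c and nodes assumed).
c = 0.5
nodes = np.linspace(0.0, np.pi, 15)
u = np.sin(nodes)

phi   = lambda x, xj: np.sqrt((x - xj) ** 2 + c ** 2)            # multiquadric RBF
d2phi = lambda x, xj: c ** 2 / ((x - xj) ** 2 + c ** 2) ** 1.5   # its analytic second derivative

A = phi(nodes[:, None], nodes[None, :])        # collocation (interpolation) matrix
coeffs = np.linalg.solve(A, u)                 # RBF expansion coefficients

x_field = 1.2                                   # field point where d2u/dx2 is needed
d2u = d2phi(x_field, nodes) @ coeffs
print(f"RBF estimate of u''({x_field}) = {d2u:.4f}   (exact: {-np.sin(x_field):.4f})")
```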
Abstract:
An interactive installation with full body interface, digital projection, multi-touch sensitive screen surfaces, interactive 3D gaming software, motorised dioramas, 4.1 spatial sound and new furniture forms - investigating the cultural dimensions of sustainability through the lens of 'time'. “Time is change, time is finitude. Humans are a finite species. Every decision we make today brings that end closer, or alternatively pushes it further away. Nothing can be neutral”. Tony Fry DETAILS: Finitude (Mallee:Time) is a major new media/sculptural hybrid work premiered in 2011, in version 1, at the Ka-rama Motel for the Mildura Palimpsest #8 ('Collaborators and Saboteurs'). Each participant/viewer lies comfortably on their back on the double bed of Room 22. Directly above them, supported by a wooden structure not unlike a house frame, is a semi-transparent Perspex screen that displays projected 3D imagery and is simultaneously sensitive to the lightest of finger touches. Depending upon the ever-changing qualities of the projected image on this screen, the participant can see through its surface to a series of physical dioramas suspended above, lit by subtle LED spotlighting. This diorama consists of a slowly rotating series of physical environments, which also include several animatronic components, allowing the realtime composition of whimsical ‘landscapes’ of both 'real' and 'virtual' media. Through subtle, non-didactic touch-sensitive interactivity, the participant then has influence over the 3D graphic imagery, the physical movements of the diorama and the 4.1 immersive soundscape, creating an uncanny blend of physical and virtual media. Five speakers positioned around the room deliver a rich interactive soundscape that responds both audibly and physically to interactions. VERSION 1, CONTEXT/THEORY: Finitude (Mallee: Time) is Version 1 of a series of presentations during 2012-14. This version was inspired by a series of recent visits and residencies in the SW Victoria Mallee country. Drawing further on recent writings by post-colonial author Paul Carter, the work is envisaged as an evolving ‘personal topography’ of place-discovery. By contrasting and melding readily available generalisations of the Mallee region’s rational surfaces, climatic maps and ecological systems with what Carter calls “a fine capillary system of interconnected words, places, memories and sensations” generated through my own idiosyncratic research processes, Finitude (Mallee Time) invokes a “dark writing” of place through outside eyes - an approach that avoids concentrating upon what 'everyone else knows' and instead imagines and develops a sense of how things might be. This basis in re-imagining and re-invention becomes the vehicle for the work’s more fundamental intention - as a meditative re-imagination of 'time' (and region) as finite resources. Towards this end, every object, process and idea in the work is re-thought as having its own ‘time component’ or ‘residue’ that becomes deposited into our 'collective future'. Thought of in this way, Finitude (Mallee Time) suggests the poverty of predominant images of time as ‘mechanism’ and instead envisages time as a plastic, cyclical medium that we can each choose to ‘give to’ or ‘take away from’ our future. Put another way - time has become finitude.
Abstract:
We propose an approach that employs eigen light-fields for face recognition across pose in video. Faces of a subject are collected from video frames and combined based on pose to obtain a set of probe light-fields. These probe data are then projected onto the principal subspace of the eigen light-fields, within which the classification takes place. We modify the original light-field projection and find that it is more robust in the proposed system. Evaluation on the VidTIMIT dataset demonstrates that the eigen light-fields method is able to take advantage of the multiple observations contained in the video.
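A bare-bones sketch of projecting probes onto a principal subspace and classifying there is shown below (a single gallery vector per subject, with no pose-assembly step, so it illustrates only the subspace projection, not the full eigen light-field pipeline); all sizes and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder gallery vectors (one concatenated multi-pose vector per subject) and a probe
# assembled from video frames of subject 4; all sizes are assumed.
n_subjects, dim, k = 10, 2000, 8
gallery = rng.normal(size=(n_subjects, dim))
probe = gallery[4] + 0.1 * rng.normal(size=dim)

# Principal subspace via SVD of the mean-centred gallery ("eigen light-fields" stand-in).
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
basis = Vt[:k]                                   # top-k principal directions

gallery_proj = (gallery - mean) @ basis.T
probe_proj = (probe - mean) @ basis.T            # classification happens in this subspace
pred = np.argmin(np.linalg.norm(gallery_proj - probe_proj, axis=1))
print("identified subject:", pred)               # expected: 4
```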
Abstract:
With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering on the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses only on one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information. The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis work contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
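A toy sketch of the implicit (VSM) idea of combining a structure view with a content view before clustering is given below; the tag sequences, texts and cluster count are invented, and the frequent-subtree constraint and the tensor (TSM) model are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from scipy.sparse import hstack

# Invented XML documents: the structure view is the sequence of element tags,
# the content view is the text, roughly in the spirit of the implicit VSM model.
docs_structure = ["book title author chapter", "book title author chapter",
                  "article title abstract section", "article title abstract section"]
docs_content   = ["space exploration history", "planetary missions probes",
                  "gene expression analysis", "protein folding simulation"]

X_struct  = TfidfVectorizer().fit_transform(docs_structure)
X_content = TfidfVectorizer().fit_transform(docs_content)

# Combine both views into a single vector space and cluster.
X = hstack([X_struct, X_content])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```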
Abstract:
We report on analysis of discussions in an online community of people with chronic illness using socio-cognitively motivated, automatically produced semantic spaces. The analysis aims to further the emerging theory of "transition" (how people can learn to incorporate the consequences of illness into their lives). An automatically derived representation of sense of self for individuals is created in the semantic space by the analysis of the email utterances of the community members. The movement over time of the sense of self is visualised, via projection, with respect to axes of "ordinariness" and "extra-ordinariness". Qualitative evaluation shows that the visualisation is paralleled by the transitions of people during the course of their illness. The research aims to progress tools for analysis of textual data to promote greater use of tacit knowledge as found in online virtual communities. We hope it also encourages further interest in representation of sense-of-self.
Abstract:
This paper discusses and summarises a recent systematic study on the implications of global warming for air-conditioned office buildings in Australia. Four areas are covered: analysis of historical weather data, generation of future weather data for the impact study of global warming, projection of building performance under various global warming scenarios, and evaluation of various adaptation strategies under 2070 high global warming conditions. Overall, it is found that, depending on the assumed future climate scenario and the location considered, the increase in total building energy use for the sample Australian office building may range from 0.4% to 15.1%. When the increase in annual average outdoor temperature exceeds 2 °C, the risk of overheating increases significantly. However, the potential overheating problem could be completely eliminated if the internal load density is significantly reduced.