83 results for quantization artifacts

at Queensland University of Technology - ePrints Archive


Relevance: 20.00%

Abstract:

In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
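The gradient-ascent margin maximisation described above can be illustrated with a minimal sketch. This is not the authors' algorithm: the RBF scoring form, the use of a numerical gradient, and every name and parameter value are assumptions made for illustration only.

```python
import math

def rbf_score(x, protos, labels, cls, gamma=1.0):
    # Class score: sum of RBF kernels over the prototypes of that class
    # (an assumed scoring form, not the paper's exact classifier).
    return sum(math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, p)))
               for p, l in zip(protos, labels) if l == cls)

def margin(x, y, protos, labels, gamma=1.0):
    # Margin: correct-class score minus the best competing-class score.
    own = rbf_score(x, protos, labels, y, gamma)
    rival = max(rbf_score(x, protos, labels, c, gamma)
                for c in set(labels) if c != y)
    return own - rival

def lmvq_step(x, y, protos, labels, lr=0.05, gamma=1.0, eps=1e-5):
    # One gradient-ascent step on the margin, via numerical gradients so
    # the sketch stays short; a real implementation would use the
    # analytic gradient of the margin.
    new_protos = []
    for i, p in enumerate(protos):
        grad = []
        for d in range(len(p)):
            up = [list(q) for q in protos]
            dn = [list(q) for q in protos]
            up[i][d] += eps
            dn[i][d] -= eps
            grad.append((margin(x, y, up, labels, gamma)
                         - margin(x, y, dn, labels, gamma)) / (2 * eps))
        new_protos.append([pd + lr * gd for pd, gd in zip(p, grad)])
    return new_protos
```

The sketch only shows the core idea: nudging each prototype up the gradient of the margin pulls correct-class prototypes toward the sample and pushes rival-class prototypes away, which is the LVQ2/LVQ3-like behaviour the paper derives.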

Relevance: 20.00%

Abstract:

Quality and bitrate modeling is essential to effectively adapt the bitrate and quality of videos delivered to multiplatform devices over resource-constrained heterogeneous networks. The recent model proposed by Wang et al. estimates the bitrate and quality of videos in terms of the frame rate and quantization parameter. However, to build an effective video adaptation framework, it is crucial to also incorporate the spatial resolution in the analytical model for bitrate and perceptual quality adaptation. Hence, this paper proposes an analytical model that estimates the bitrate of videos in terms of quantization parameter, frame rate, and spatial resolution. The model fits the measured data accurately, as evidenced by high Pearson correlation. The proposed model is based on the observation that the relative reduction in bitrate due to decreasing spatial resolution is independent of the quantization parameter and frame rate. This model can be used in a rate-constrained bit-stream adaptation scheme that selects the scalability parameters to optimize the perceptual quality for a given bandwidth constraint.
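The separability observation the abstract reports can be illustrated with a hypothetical model of the kind described. The power-law factor shapes and every parameter value below are assumptions, not the paper's fitted model:

```python
def bitrate(q, t, s, r_max=5000.0, q_min=16.0, t_max=30.0, s_max=1.0,
            a=1.2, b=0.6, c=0.9):
    # Separable bitrate model in quantization parameter q, frame rate t,
    # and spatial resolution s. Each factor equals 1 at its reference
    # point, so r_max is the bitrate at (q_min, t_max, s_max).
    return r_max * (q / q_min) ** -a * (t / t_max) ** b * (s / s_max) ** c
```

Because the model is separable, the relative bitrate reduction from lowering the spatial resolution is identical at any quantization parameter and frame rate, which is exactly the observation the model is built on.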

Relevance: 20.00%

Abstract:

Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful for experimentation and practical implementation purposes to develop and test these models in an efficient manner, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust model approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated a similar system performance between the VQ Gaussian-based technique and GMMs.
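The VQ-plus-single-Gaussian approximation to a GMM can be sketched as follows. This is a hypothetical illustration using plain k-means and diagonal Gaussians; the function names and all details are assumptions, not the paper's implementation:

```python
import math
import random

def vq_gaussian_model(data, k, iters=10, seed=0):
    # VQ step: k-means partitions the feature vectors into k cells.
    rnd = random.Random(seed)
    cents = [list(c) for c in rnd.sample(data, k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in data:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(x, cents[i])))
            cells[j].append(x)
        cents = [[sum(col) / len(cell) for col in zip(*cell)] if cell else cents[i]
                 for i, cell in enumerate(cells)]
    # Gaussian step: one diagonal Gaussian per cell, weighted by occupancy,
    # standing in for the mixture components of a trained GMM.
    model = []
    for cell in cells:
        if not cell:
            continue
        w = len(cell) / len(data)
        mu = [sum(col) / len(cell) for col in zip(*cell)]
        var = [sum((v - m) ** 2 for v in col) / len(cell) + 1e-6
               for col, m in zip(zip(*cell), mu)]
        model.append((w, mu, var))
    return model

def log_likelihood(x, model):
    # Log of the mixture density at x.
    dens = 0.0
    for w, mu, var in model:
        lg = sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                 for xi, m, v in zip(x, mu, var))
        dens += w * math.exp(lg)
    return math.log(dens + 1e-300)
```

The attraction of this construction, as the abstract suggests, is speed: k-means plus per-cell moment estimates avoids the iterative expectation-maximisation training of a full GMM.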

Relevance: 20.00%

Abstract:

Radiographs are commonly used to assess articular reduction of distal tibia (pilon) fractures postoperatively, but may depict malreductions inaccurately. While Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are potential 3D alternatives, they generate metal-related artifacts. This study aims to quantify the artifact size from orthopaedic screws using CT, 1.5T and 3T MRI data. Three screws were inserted into one intact human cadaver ankle specimen, proximal to and along the distal articular surface, which was then scanned with CT, 1.5T and 3T MRI. Four types of screws were investigated: titanium alloy (TA) and stainless steel (SS) (Ø = 3.5 mm), and cannulated TA (CTA) and cannulated SS (CSS) (Ø = 4.0 mm, empty core Ø = 2.6 mm). 3D artifact models were reconstructed using adaptive thresholding. The artifact size was measured by calculating the perpendicular distance from the central screw axis to the boundary of the artifact in four anatomical directions with respect to the distal tibia. The artifact sizes (in the order TA, SS, CTA, CSS) were 2.0 mm, 2.6 mm, 1.6 mm and 2.0 mm from CT; 3.7 mm, 10.9 mm, 2.9 mm and 9.0 mm from 1.5T MRI; and 4.4 mm, 15.3 mm, 3.8 mm and 11.6 mm from 3T MRI. Therefore, CT can be used as long as the screws are at a safe distance of about 2 mm from the articular surface. MRI can be used if the screws are at least 3 mm away from the articular surface, except for the SS and CSS screws: artifacts from the steel screws were so large that they obstructed the pilon from being visualised in MRI. Significant differences (P < 0.05) were found in the size of artifacts between all imaging modalities, screw types and material types, except 1.5T versus 3T MRI for the SS screws (P = 0.063). CTA screws near the joint surface can improve postoperative assessment in CT and MRI, and MRI presents a favourable non-ionising alternative when using titanium hardware. Since these factors may influence the quality of postoperative assessment, potential improvements in operative techniques should be considered.
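The perpendicular-distance measurement can be sketched in a few lines. This is a simplification: the study measures in four anatomical directions, while this hypothetical helper just reports the largest radial extent of the artifact from the screw axis.

```python
import math

def radial_artifact_size(points, axis_pt, axis_dir):
    # Largest perpendicular distance from the central screw axis (a point
    # plus a direction) to any point of the reconstructed artifact model.
    n = math.sqrt(sum(d * d for d in axis_dir))
    u = [d / n for d in axis_dir]
    best = 0.0
    for p in points:
        v = [pi - ai for pi, ai in zip(p, axis_pt)]
        t = sum(vi * ui for vi, ui in zip(v, u))       # projection onto axis
        perp = [vi - t * ui for vi, ui in zip(v, u)]   # radial component
        best = max(best, math.sqrt(sum(c * c for c in perp)))
    return best
```

Comparing this radial extent against a screw's distance from the articular surface is what drives the "safe distance" conclusions above.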

Relevance: 10.00%

Abstract:

This paper explores expertise in industrial (product) design and the contribution of knowledge generated through design research. Within this approach, research is situated within the social structure constituted by people, activity, context and culture, where an artifact is seen as a mediator for the generation of new knowledge and its application. The paper concludes on the importance of integrating research and practice, and points out that situating research around artifacts, as mediators of knowledge, is transferable to the Human-Computer Interaction field and to any other area of design research.

Relevance: 10.00%

Abstract:

SCOOT is a hybrid event combining the web, mobile devices, public displays and cultural artifacts across multiple public parks and museums, in an effort to increase both the perceived and the actual access to cultural participation by everyday people. The research field is locative game design, and the context was the re-invigoration of public sites as a means of exposing the underlying histories of those sites and events. The key question was how to use game-play technologies and processes within everyday places in ways that best promote playful and culturally meaningful experiences whilst shifting the loci of control away from commercial and governmental powers. The research methodology was primarily practice-led, underpinned by ethnographic and action research methods. In 2004 SCOOT established itself as a national leader in the field by demonstrating innovative methods for stimulating rich interactions across diverse urban places using technically-augmented game play. Despite creating a sophisticated range of software and communication tools, SCOOT most dramatically highlighted the role of the ubiquitous mobile phone in facilitating socially beneficial experiences. Through working closely with the SCOOT team, collaborating organisations developed important new knowledge about the potential of new technologies and processes for motivating, sustaining and reinvigorating public engagement. Since 2004, SCOOT has been awarded $600,00 in competitive and community funding, as well as substantial in-kind support from partner organisations such as Arts Victoria, National Gallery of Victoria, Melbourne Museum, Australian Centre for the Moving Image, Federation Square, Art Centre of Victoria, The State Library of Victoria, Brisbane River Festival, State Library of Queensland, Brisbane Maritime Museum, Queensland University of Technology, and Victoria University.

Relevance: 10.00%

Abstract:

This paper describes the approach taken to the XML Mining track at INEX 2008 by a group at the Queensland University of Technology. We introduce the K-tree clustering algorithm in an Information Retrieval context by adapting it for document clustering. Many large-scale problems exist in document clustering. K-tree scales well with large inputs due to its low complexity, and it offers promising results in terms of both efficiency and quality. Document classification was completed using Support Vector Machines.
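The low-complexity insertion that lets K-tree scale can be illustrated with a two-level analogue. This is a loose sketch, not the actual K-tree algorithm (which is a height-balanced, multi-level structure); the class and method names, the capacity value, and the split heuristic are all assumptions:

```python
class KTreeSketch:
    # Two-level sketch of a K-tree: the root keeps one mean per leaf
    # bucket, so an insert compares a document vector against a handful
    # of means rather than against every stored document.
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.means = []    # one mean vector per bucket
        self.buckets = []  # vectors assigned to each mean

    def _nearest(self, x):
        return min(range(len(self.means)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(x, self.means[i])))

    def insert(self, x):
        if not self.means:
            self.means.append(list(x))
            self.buckets.append([list(x)])
            return
        j = self._nearest(x)
        self.buckets[j].append(list(x))
        n = len(self.buckets[j])
        # Incremental mean update, then split the bucket if over capacity.
        self.means[j] = [m + (xi - m) / n for m, xi in zip(self.means[j], x)]
        if n > self.capacity:
            self._split(j)

    def _split(self, j):
        # 2-means split of an over-full bucket.
        pts = self.buckets[j]
        a, b = pts[0], pts[-1]
        ga, gb = pts, []
        for _ in range(5):
            ga = [p for p in pts
                  if sum((x - y) ** 2 for x, y in zip(p, a))
                  <= sum((x - y) ** 2 for x, y in zip(p, b))]
            gb = [p for p in pts if p not in ga]
            if not gb:
                return
            a = [sum(c) / len(ga) for c in zip(*ga)]
            b = [sum(c) / len(gb) for c in zip(*gb)]
        self.means[j], self.buckets[j] = a, ga
        self.means.append(b)
        self.buckets.append(gb)
```

Each insert costs a comparison against the current means plus an occasional local split, which is the source of the low complexity the abstract refers to.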

Relevance: 10.00%

Abstract:

Generative media systems present an opportunity for users to leverage computational systems to make sense of complex media forms through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. These kinds of systems present a range of affordances that can facilitate new kinds of relationships with music and media performance and production. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences for users with little formal musical or artistic expertise. This paper examines the relational affordances of these systems, evidenced by selected field data drawn from the Network Jamming Project. These generative performance systems enable access to unique ensemble experiences with very little musical knowledge or skill, and they further offer the possibility of unique interactive relationships with artists and musical knowledge through collaborative performance. In this presentation I will focus on demonstrating how these simulated experiences might lead to understandings that may be of educational and social benefit. Conference participants will be invited to jam in real time using virtual interfaces and to view video artifacts that demonstrate an interactive relationship with artists.

Relevance: 10.00%

Abstract:

Focusing on the notion of street kids, the paper suggests that youth be viewed in an alternative way to the subculture theory associated with the Centre for Contemporary Cultural Studies in Birmingham (CCCS). It is argued that not only is subculture theory an unsuitable mechanism for understanding homeless youth but also, and more importantly, that it is itself fundamentally problematic. It is suggested that the work of Michel Foucault necessitates a re-evaluation of the domain assumptions underlying subculture theory, and offers in its place a model that relocates street kids, and youth itself, as artifacts of a network of governmental strategies.

Relevance: 10.00%

Abstract:

Random Indexing K-tree is the combination of two algorithms suited for large-scale document clustering.

Relevance: 10.00%

Abstract:

The emergent field of practice-led research is a unique research paradigm that situates creative practice as both a driver and outcome of the research process. The exegesis that accompanies the creative practice in higher research degrees remains open to experimentation and discussion around what content should be included, how it should be structured, and its orientations. This paper contributes to this discussion by reporting on a content analysis of a large, local sample of exegeses. We have observed a broad pattern in contents and structure within this sample. Besides the introduction and conclusion, it has three main parts: situating concepts (conceptual definitions and theories), practical contexts (precedents in related practices), and new creations (the creative process, the artifacts produced and their value as research). This model appears to combine earlier approaches to the exegesis, which oscillated between academic objectivity in providing a context for the practice and personal reflection or commentary upon the creative practice. We argue that this hybrid or connective model assumes both orientations and so allows the researcher to effectively frame the practice as a research contribution to a wider field while doing justice to its invested poetics.

Relevance: 10.00%

Abstract:

One of the major challenges facing a present-day game development company is the removal of bugs from the complex virtual environments it creates. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution of the objects present in a sequence of bug-free frames. This is done by converting the positions that the pixels take over time into the equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of each object. We applied our framework to the publicly available game RacingGame developed for Microsoft® XNA®. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
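The database-of-labelled-points idea can be sketched as follows. This is hypothetical: real frames would contribute millions of position-colour samples and need a spatial index, whereas this illustration uses brute-force nearest-neighbour search.

```python
def build_point_db(frames):
    # Each sample is (x, y, z, r, g, b): a surface point recovered from a
    # bug-free frame together with its rendered colour.
    db = []
    for frame in frames:
        db.extend(frame)
    return db

def inconsistency(frame, db):
    # Mean distance from each rendered sample to its nearest database
    # point, in the joint position-colour space; 0 means the frame
    # matches the reference renders exactly.
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return sum(min(d2(p, q) for q in db) ** 0.5 for p in frame) / len(frame)
```

A frame whose objects have merely moved still scores low, because its samples match database points by surface structure and colour; a rendering bug that corrupts colours pushes the score up.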

Relevance: 10.00%

Abstract:

Business Process Management (BPM) has emerged as a popular management approach in both Information Technology (IT) and management practice. While there has been much research on business process modelling and the BPM life cycle, there has been little attention given to managing the quality of a business process during its life cycle. This study addresses this gap by providing a framework for organisations to manage the quality of business processes during different phases of the BPM life cycle. This study employs a multi-method research design which is based on the design science approach and the action research methodology. During the design science phase, the artifacts to model a quality-aware business process were developed. These artifacts were then evaluated through three cycles of action research which were conducted within three large Australian-based organisations. This study contributes to the body of BPM knowledge in a number of ways. Firstly, it presents a quality-aware BPM life cycle that provides a framework on how quality can be incorporated into a business process and subsequently managed during the BPM life cycle. Secondly, it provides a framework to capture and model quality requirements of a business process as a set of measurable elements that can be incorporated into the business process model. Finally, it proposes a novel root cause analysis technique for determining the causes of quality issues within business processes.

Relevance: 10.00%

Abstract:

Mainstream business process modelling techniques promote a design paradigm wherein the activities to be performed within a case, together with their usual execution order, form the backbone of a process model, on top of which other aspects are anchored. This paradigm, while effective in standardised and production-oriented domains, shows some limitations when confronted with processes where case-by-case variations and exceptions are the norm. In this thesis we develop the idea that the effective design of flexible process models calls for an alternative modelling paradigm, one in which process models are modularised along key business objects, rather than along activity decompositions. The research follows a design science method, starting from the formulation of a research problem expressed in terms of requirements, and culminating in a set of artifacts that have been devised to satisfy these requirements. The main contributions of the thesis are: (i) a meta-model for object-centric process modelling incorporating constructs for capturing flexible processes; (ii) a transformation from this meta-model to an existing activity-centric process modelling language, namely YAWL, showing the relation between object-centric and activity-centric process modelling approaches; and (iii) a Coloured Petri Net that captures the semantics of the proposed meta-model. The meta-model has been evaluated using a framework consisting of a set of workflow patterns. Moreover, the meta-model has been embodied in a modelling tool that has been used to capture two industrial scenarios.

Relevance: 10.00%

Abstract:

Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or to present the human operator with likely matches from a database. A person tracker speeds up subject detection and super-resolution by tracking moving subjects and cropping a region of interest around each subject's face, which reduces the number and the size of the image frames to be super-resolved. In this paper, experiments have been conducted to demonstrate how the optical flow super-resolution method used improves surveillance imagery for visual inspection as well as for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
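Multi-frame fusion, the idea underlying such super-resolution, can be sketched minimally. This is a stand-in, not the paper's method: plain averaging of already-aligned face crops, whereas optical-flow super-resolution additionally registers the frames and reconstructs on a finer grid.

```python
def average_superres(frames):
    # Naive multi-frame fusion: average pre-aligned low-resolution crops
    # of the same face, attenuating per-frame noise. frames is a list of
    # equal-sized 2D intensity grids.
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for f in frames:
        for i in range(h):
            for j in range(w):
                out[i][j] += f[i][j] / len(frames)
    return out
```

Averaging N aligned frames reduces independent noise, which is why tracking a face across frames and fusing the crops can yield imagery better suited to recognition than any single low-resolution frame.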