247 results for GRASP filtering


Relevance: 10.00%

Abstract:

Dealing with the ever-growing information overload on the Internet, Recommender Systems are widely used online to suggest to potential customers items they may like or find useful. Collaborative Filtering, the most popular technique for Recommender Systems, collects opinions from customers in the form of ratings on items, services or service providers. In addition to customer ratings of service providers, a considerable amount of customer feedback is available online in the form of customer reviews, comments, newsgroup posts, discussion forums and blogs, collectively called user-generated content. This information can be used to derive the public reputation of service providers. To do this, data mining techniques, especially the recently emerged field of opinion mining, can be a useful tool. In this paper we present a state-of-the-art review of opinion mining from online customer feedback. We critically evaluate the existing work and expose cutting-edge areas of interest in opinion mining. We also classify the approaches taken by different researchers into several categories and sub-categories, and analyse the strengths and limitations of each.
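As a minimal illustration of the Collaborative Filtering technique mentioned above, the following sketch predicts a customer's rating of a service provider from the ratings of similar customers. The users, items and ratings are hypothetical, for illustration only.

```python
import math

# Toy user-item rating matrix (hypothetical data).
ratings = {
    "alice": {"hotel_a": 5, "hotel_b": 3, "hotel_c": 4},
    "bob":   {"hotel_a": 4, "hotel_b": 3, "hotel_c": 5},
    "carol": {"hotel_a": 1, "hotel_b": 5},
}

def cosine_sim(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of the other users' ratings for item."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        s = cosine_sim(ratings[user], theirs)
        num += s * theirs[item]
        den += abs(s)
    return num / den if den else None

print(round(predict("carol", "hotel_c"), 2))
```

Because carol's ratings pattern is closer to bob's than to alice's, the prediction leans toward bob's rating of the unrated item.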

This thesis argues that the end of Soviet Marxism and a bipolar global political imaginary at the dissolution of the short Twentieth Century poses an obstacle for anti-systemic political action. Such a blockage of alternate political imaginaries can be discerned by reading the work of Francis Fukuyama and "Endism" as performative invocations of the closure of political alternatives, and thus as an ideological proclamation which enables and constrains forms of social action. It is contended that the search through dialectical thought for a competing universal to posit against "liberal democracy" is a fruitless one, because it reinscribes the terms of teleological theories of history which work to effect closure. Rather, constructing a phenomenological analytic of the political conjuncture, the thesis suggests that the figure of messianism without a Messiah is central to a deconstructive reframing of the possibilities of political action - a reframing attentive to the rhetorical tone of texts. The project of recovering the political is viewed through a phenomenological lens. An agonistic political distinction must be made so as to memorialise the remainders and ghosts of progress, and thus to gesture towards an indeconstructible justice which would serve as a horizon for the articulation of an empty universal. This project is furthered by a return to a certain phenomenology inspired by Cornelius Castoriadis, Claude Lefort, Maurice Merleau-Ponty and Ernesto Laclau. The thesis provides a reading of Jacques Derrida and Walter Benjamin as thinkers of a minor universalism, a non-prescriptive utopia, and places their work in the context of new understandings of religion and the political as quasi-transcendentals which can be utilised to think through the aporias of political time in order to grasp shards of meaning. 
Derrida and Chantal Mouffe's deconstructive critique and supplement to Carl Schmitt's concept of the political is read as suggestive of a reframing of political thought which would leave the political question open and thus enable the articulation of social imaginary significations able to inscribe meaning in the field of political action. Thus, the thesis gestures towards a form of thought which enables rather than constrains action under the sign of justice.

Tagging has become one of the key activities on next-generation websites, allowing users to select short labels to annotate, manage, and share multimedia information such as photos, videos and bookmarks. Tagging requires no prior training before users participate in annotation activities, as they can freely choose whichever terms best represent the semantics of the content without worrying about any formal structure or ontology. However, the practice of free-form tagging can lead to several problems, such as synonymy, polysemy and ambiguity, which potentially increase the complexity of managing the tags and retrieving information. To solve these problems, this research aims to construct a lightweight indexing scheme that structures tags by identifying and disambiguating the meaning of terms, and to construct a knowledge base or dictionary. News has been chosen as the primary application domain to demonstrate the benefits of using structured tags for managing the rapidly changing and dynamic nature of news information. One of the main outcomes of this work is an automatically constructed vocabulary that defines the meaning of each named-entity tag that can be extracted from a news article (including persons, locations and organisations), based on suggestions from major search engines and knowledge from public databases such as Wikipedia. To demonstrate the potential applications of the vocabulary, we have used it to provide additional functionality on an online news website, including topic-based news reading, intuitive tagging, clipping and sharing of interesting news, and news filtering or searching based on named-entity tags. Evaluation of the impact of disambiguating tags has shown that the vocabulary significantly improves news-search performance. Preliminary results from our user study demonstrate that users benefit from the additional functionality, as they are able to retrieve more relevant news and to clip and share news with friends and family effectively.
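A toy sketch of the kind of tag disambiguation such a vocabulary enables. The entries below are hand-crafted placeholders; the work described above constructs the vocabulary automatically from search-engine suggestions and Wikipedia.

```python
# Hypothetical mini-vocabulary: surface tag -> (canonical name, entity type).
vocabulary = {
    "obama": ("Barack Obama", "person"),
    "barack obama": ("Barack Obama", "person"),
    "nyc": ("New York City", "location"),
    "new york": ("New York City", "location"),
    "un": ("United Nations", "organisation"),
}

def normalise_tags(tags):
    """Map free-form tags to (canonical name, entity type), collapsing
    synonyms and dropping tags the vocabulary does not recognise."""
    seen, out = set(), []
    for tag in tags:
        entry = vocabulary.get(tag.strip().lower())
        if entry and entry[0] not in seen:
            seen.add(entry[0])
            out.append(entry)
    return out

# "NYC" and "new york" collapse to one canonical entity; "golf" is dropped.
print(normalise_tags(["Obama", "NYC", "new york", "golf"]))
```

Structuring tags this way directly attacks the synonymy problem (two surface forms, one entity) and supports entity-typed filtering and search.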

Corneal topography estimation based on the Placido disk principle relies on a good-quality precorneal tear film and a sufficiently wide eyelid (palpebral) aperture to avoid reflections from eyelashes. In practice, however, these conditions are not always fulfilled, resulting in missing regions, smaller corneal coverage, and consequently poorer estimates of corneal topography. Our aim was to extend the standard operating range of a Placido disk videokeratoscope to obtain reliable corneal topography estimates in patients with poor tear-film quality, such as those diagnosed with dry eye, and with narrower palpebral apertures, as in the case of Asian subjects. This was achieved by incorporating into the instrument's own topography estimation algorithm an image processing technique comprising a polar-domain adaptive filter and a morphological closing operator. Experimental results from measurements of test surfaces and real corneas showed that the proposed technique yields better estimates of corneal topography and, in many cases, a significant increase in the estimated coverage area, making the enhanced videokeratoscope a better tool for clinicians.
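Morphological closing, one half of the technique described above, can be illustrated on a 1-D binary cross-section of a ring pattern; this is a simplified stand-in for the instrument's images, not the paper's polar-domain implementation.

```python
def dilate(sig, k=1):
    """Binary dilation with a flat structuring element of radius k."""
    return [max(sig[max(0, i - k): i + k + 1]) for i in range(len(sig))]

def erode(sig, k=1):
    """Binary erosion with a flat structuring element of radius k."""
    return [min(sig[max(0, i - k): i + k + 1]) for i in range(len(sig))]

def close(sig, k=1):
    """Morphological closing = dilation followed by erosion; it fills
    gaps narrower than the structuring element without growing the shape."""
    return erode(dilate(sig, k), k)

# A ring cross-section with a one-sample dropout (e.g. an eyelash shadow).
ring = [1, 1, 0, 1, 1, 0, 0, 0, 0]
print(close(ring))
```

The dropout inside the ring segment is filled, while the genuine dark region to the right is left untouched, which is why closing helps recover ring continuity without inventing rings where none exist.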

Many data mining techniques have been proposed for mining useful patterns in databases. However, how to effectively utilize discovered patterns is still an open research issue, especially in the domain of text mining. Most existing methods adopt term-based approaches. However, they all suffer from the problems of polysemy and synonymy. This paper presents an innovative technique, pattern taxonomy mining, to improve the effectiveness of using discovered patterns for finding useful information. Substantial experiments on RCV1 demonstrate that the proposed solution achieves encouraging performance.

An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power-system signal attributes such as amplitude, frequency and phase angle. The technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS devices and other power-electronics applications. It is particularly suitable for the integration of distributed generation sources into power grids, where fast and accurate detection of small variations in signal attributes is needed. Practical considerations such as the effects of noise and higher-order harmonics, as well as computational issues, are analysed and tested in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes even when the signal is highly distorted by non-linear loads and noise.
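A minimal sketch of Kalman-filter-based signal-attribute estimation, under the simplifying assumption of a known, fixed frequency (the paper's algorithm also tracks frequency). The state holds the in-phase and quadrature components, from which amplitude and phase are recovered; all numbers are illustrative.

```python
import math

# State x = [A*cos(phi), A*sin(phi)] is constant; the measurement model is
# z_k = x[0]*cos(w*t_k) - x[1]*sin(w*t_k) + noise, which is linear in x.
f, fs = 50.0, 1000.0                 # signal / sampling frequency (Hz)
w = 2.0 * math.pi * f
A_true, phi_true = 1.5, 0.6          # ground truth used to simulate samples

x = [0.0, 0.0]                       # state estimate
P = [[1.0, 0.0], [0.0, 1.0]]         # estimate covariance
R = 0.01                             # measurement-noise variance

for k in range(400):
    t = k / fs
    z = A_true * math.cos(w * t + phi_true)      # simulated sample
    h = (math.cos(w * t), -math.sin(w * t))      # measurement row vector
    # Ph = P h^T (2x1); innovation variance S = h Ph + R
    Ph = [P[0][0] * h[0] + P[0][1] * h[1],
          P[1][0] * h[0] + P[1][1] * h[1]]
    S = h[0] * Ph[0] + h[1] * Ph[1] + R
    K = [Ph[0] / S, Ph[1] / S]                   # Kalman gain
    y = z - (h[0] * x[0] + h[1] * x[1])          # innovation
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    # Covariance update P = (I - K h) P
    P = [[P[0][0] - K[0] * (h[0] * P[0][0] + h[1] * P[1][0]),
          P[0][1] - K[0] * (h[0] * P[0][1] + h[1] * P[1][1])],
         [P[1][0] - K[1] * (h[0] * P[0][0] + h[1] * P[1][0]),
          P[1][1] - K[1] * (h[0] * P[0][1] + h[1] * P[1][1])]]

A_est = math.hypot(x[0], x[1])                   # amplitude estimate
phi_est = math.atan2(x[1], x[0])                 # phase estimate
print(round(A_est, 3), round(phi_est, 3))
```

With a static state and a linear measurement, the filter behaves as recursive least squares, so the estimates converge to the true amplitude and phase after a few cycles of data.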

Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or to present the human operator with likely matches from a database. A person tracker speeds up subject detection and super-resolution by tracking moving subjects and cropping a region of interest around each subject's face, reducing the number and the size, respectively, of the image frames to be super-resolved. In this paper, experiments demonstrate how the optical-flow super-resolution method improves surveillance imagery both for visual inspection and for automatic face recognition with Eigenface and Elastic Bunch Graph Matching systems. The optical-flow-based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method achieved slightly higher recognition rates, the optical-flow method produced fewer artifacts and more visually correct images, better suited to human inspection.

Before the Global Financial Crisis, many providers of finance had growth mandates and actively pursued development finance deals as a way of earning higher returns on funds, with regular capital turnover and re-investment possible. This was achieved through high gearing and low presales in a strong market. As asset prices fell, loan covenants were breached and memories of the 1990s returned, banks rapidly adjusted their risk appetite by retracting gearing and expanding presale requirements. Early signs of loosening in bank credit policy are emerging; however, parties seeking development finance face a severely reduced number of institutions from which to source funding. The few institutions that are lending are filtering out all but the best credit risks through restrictive credit conditions, including low loan-to-value ratios, the corresponding requirement to contribute high levels of equity, lack of support in non-prime locations, and a requirement that borrowers have well-established track records. In this risk-averse and capital-constrained environment, the ability of developers to proceed with real estate developments is still limited by their inability to obtain project finance. This paper examines the pre- and post-GFC development finance environment. It identifies the key lending criteria relevant to real estate development finance and details the related changes to credit policies over this period. The associated impact on real estate development projects is presented, highlighting the significant constraint on supply posed by the inability to obtain finance.

This paper details the design of an autonomous helicopter control system using a low-cost sensor suite. Control is maintained using simple nested PID loops. Aircraft attitude, velocity and height are estimated using an in-house-designed IMU and vision system, with the information combined using complementary filtering. The aircraft is shown to be stabilised and to respond to high-level demands on all axes, including heading, height, lateral velocity and longitudinal velocity.
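The complementary filtering mentioned above can be sketched for a single axis as follows; this is an illustration of the sensor-fusion idea, not the paper's implementation, and the data values are hypothetical.

```python
# Blend the integrated gyro rate (accurate short-term, but drifts) with the
# accelerometer tilt angle (noisy short-term, but drift-free).
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse angular-rate samples (rad/s) with tilt-angle samples (rad)."""
    angle = accel_angles[0]              # initialise from the accelerometer
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return estimates

# Hypothetical hover data: a gyro with a constant 0.05 rad/s bias (true rate
# is zero) and a noiseless accelerometer reading a fixed 0.1 rad tilt.
dt, n = 0.01, 2000
est = complementary_filter([0.05] * n, [0.1] * n, dt)
print(round(est[-1], 4))  # settles near 0.1 + alpha*bias*dt/(1 - alpha) = 0.1245
```

The accelerometer term bounds the gyro's drift: the estimate settles at a small, fixed offset from the true tilt instead of diverging as pure integration of the biased gyro would.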

This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to the landmark, this system maps views to poses. The challenge is to produce a system that produces the same view for small changes in robot pose, but provides different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
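The learn-or-recall behaviour of the view-based landmarks described above can be sketched as follows; the similarity measure, threshold and view vectors are illustrative choices, not the paper's filtering-and-pooling pipeline.

```python
def similarity(a, b):
    """Mean-absolute-difference similarity in [0, 1] for views in [0, 1]."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def recall_or_learn(view, templates, threshold=0.9):
    """Return the index of the best-matching stored template; when nothing
    matches well enough, learn the view as a new template instead."""
    best_i, best_s = -1, -1.0
    for i, t in enumerate(templates):
        s = similarity(view, t)
        if s > best_s:
            best_i, best_s = i, s
    if best_s < threshold:
        templates.append(list(view))
        return len(templates) - 1
    return best_i

templates = []
a = recall_or_learn([0.1, 0.9, 0.4], templates)    # unseen -> learned as 0
b = recall_or_learn([0.12, 0.88, 0.41], templates)  # small pose change -> 0
c = recall_or_learn([0.9, 0.1, 0.7], templates)     # large change -> new
print(a, b, c)
```

The threshold captures the trade-off stated in the abstract: small changes in robot pose should recall the same template, while larger changes should trigger a new one.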

The article describes an open-source toolbox for machine vision called the Machine Vision Toolbox (MVT). MVT includes more than 60 functions covering image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration, and colour-space conversion. MVT can be used for research into machine vision but is also versatile enough for real-time work and even control. Combined with MATLAB and a modern workstation computer, MVT is a useful and convenient environment for the investigation of machine vision algorithms. The article illustrates the use of a subset of toolbox functions on some typical problems and describes MVT operations including the simulation of a complete image-based visual servo system.

A membrane filtration plant using suitable micro- or ultra-filtration membranes has the potential to significantly increase pan stage capacity and improve sugar quality. Previous investigations by SRI and others have shown that membranes remove polysaccharides, turbidity and colloidal impurities, resulting in lower-viscosity syrups and molasses. However, the conclusion from those investigations was that membrane filtration was not economically viable. A comprehensive assessment of current-generation membrane technology was undertaken by SRI. With the aid of two pilot plants provided by Applexion and Koch Membrane Systems, extensive trials were conducted at an Australian factory using clarified juice at 80–98°C as feed to each pilot plant. Conditions were varied during the trials to examine the effect of a range of operating parameters on the filtering characteristics of each membrane; these parameters included feed temperature and pressure, flow velocity, soluble solids and impurity concentrations. The data were then combined to develop models predicting the filtration rate (or flux) expected for nominated operating conditions, and the models demonstrated very good agreement with the data collected during the trials. The trials also identified the membranes that provided the highest flux per unit area of membrane surface for a nominated set of conditions. Cleaning procedures were developed that ensured the water flux level was recovered following a clean-in-place process. Bulk samples of clarified juice and membrane-filtered juice from each pilot plant were evaporated to syrup to quantify the gain in pan stage productivity resulting from the removal of high-molecular-weight impurities by membrane filtration. The results are in general agreement with those published by other research groups.

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning: potential threats must be detected whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm.
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, which in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios, capturing highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets at distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found capable of achieving data processing rates sufficient for real-time operation, with scope for further performance improvement through code optimisation.
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian application of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple-HMM filtering approach and a novel RER-based multiple-filter design process. The utility of our multiple-HMM filtering approach and of the RER concepts, however, extends beyond the target detection problem, as demonstrated by our application of HMM filters and RER concepts to a heading-angle estimation problem.
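The morphological pre-processing stage described above can be illustrated with a 1-D close-minus-open (CMO) filter, a standard small-target enhancement operator; this is a simplified stand-in for the thesis's candidate filters, and the scene values are hypothetical.

```python
def dilate(sig, k):
    """Grayscale dilation with a flat structuring element of radius k."""
    return [max(sig[max(0, i - k): i + k + 1]) for i in range(len(sig))]

def erode(sig, k):
    """Grayscale erosion with a flat structuring element of radius k."""
    return [min(sig[max(0, i - k): i + k + 1]) for i in range(len(sig))]

def cmo(sig, k=2):
    """Close-minus-open: suppresses slowly varying background while
    preserving point-like features smaller than the structuring element."""
    closing = erode(dilate(sig, k), k)
    opening = dilate(erode(sig, k), k)
    return [c - o for c, o in zip(closing, opening)]

# Smooth background ramp with a one-pixel "dim target" spike at index 5.
scene = [10, 11, 12, 13, 14, 40, 16, 17, 18, 19, 20]
out = cmo(scene)
print(out.index(max(out)))  # the response peaks at the target location
```

The background ramp produces near-zero CMO output while the point target yields a strong response, which is exactly the spatial-feature exploitation that the temporal (HMM filter) stage then accumulates across frames.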