392 results for TRUST-REGION ALGORITHM
Abstract:
With the size and state of the Internet today, a good approach to organizing this mass of information is of great importance. Clustering web pages into groups of similar documents is one such approach, but it relies heavily on good feature extraction and document representation as well as on a good clustering approach and algorithm. Because the Internet changes constantly, yielding a dynamic dataset, an incremental approach is preferred. In this work we propose an enhanced incremental clustering approach that can help organize the information available on the Internet in an incremental fashion. Experiments show that the enhanced algorithm outperforms the original histogram-based algorithm by up to 7.5%.
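The incremental assignment step the abstract alludes to can be sketched as follows (hypothetical code, not the paper's method; the histogram-based criterion is replaced here by a simple centroid/cosine threshold, so this shows only the incremental shape of the algorithm):

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def add_document(clusters, doc, threshold=0.3):
    """Incrementally place one document: join the most similar cluster
    if its similarity clears the threshold, otherwise open a new one.
    Each cluster keeps a running term-weight sum; cosine similarity is
    scale-invariant, so the un-normalised sum serves as the centroid."""
    best, best_sim = None, threshold
    for cluster in clusters:
        sim = cosine(doc, cluster["sum"])
        if sim >= best_sim:
            best, best_sim = cluster, sim
    if best is None:
        clusters.append({"sum": dict(doc), "n": 1})   # new singleton cluster
    else:
        best["n"] += 1
        for t, w in doc.items():
            best["sum"][t] = best["sum"].get(t, 0.0) + w
    return clusters
```

Because each document is placed as it arrives, the method never reclusters the whole collection, which is what makes it suitable for a continuously changing dataset.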
Abstract:
The TraSe (Transform-Select) algorithm has been developed to investigate the morphing of electronic music by automatically applying a series of deterministic compositional transformations to the source, guided towards a target by similarity metrics. This is in contrast to other morphing techniques such as interpolation of parameters or probabilistic variation. TraSe allows control over stylistic elements of the music through user-defined weighting of numerous compositional transformations. The formal evaluation of TraSe was mostly qualitative and was carried out through nine participants completing an online questionnaire. The music generated by TraSe was generally felt to be less coherent than a human-composed benchmark, but in some cases it was judged to be more creative.
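A minimal sketch of a transform-select loop of the kind described (hypothetical code, not the TraSe implementation; the transformation pool, weights, and distance metric are all placeholders):

```python
def transform_select(source, target, transforms, weights, distance, steps=16):
    """At each step, apply every candidate deterministic transformation
    to the current pattern, score each result against the target with a
    distance metric scaled by user-defined weights, and keep the best.
    The sequence of kept patterns is the morph from source to target."""
    current, morph = source, [source]
    for _ in range(steps):
        candidates = [(t, t(current)) for t in transforms]
        _, best = min(
            candidates,
            key=lambda tc: weights.get(tc[0].__name__, 1.0) * distance(tc[1], target),
        )
        current = best
        morph.append(current)
        if distance(current, target) == 0:   # reached the target early
            break
    return morph
```

Lowering a transformation's weight makes its results score as closer to the target, which is one way the user-defined weighting can bias the morph toward particular compositional operations.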
Abstract:
1. Species' distribution modelling relies on adequate data sets to build reliable statistical models with high predictive ability. However, the money spent collecting empirical data might be better spent on management. A less expensive source of species' distribution information is expert opinion. This study evaluates expert knowledge and its source. In particular, we determine whether models built on expert knowledge apply over multiple regions or only within the region where the knowledge was derived.
2. The case study focuses on the distribution of the brush-tailed rock-wallaby Petrogale penicillata in eastern Australia. We brought together substantial, well-designed field data and the knowledge of nine experts from two biogeographically different regions. We used a novel elicitation tool within a geographical information system to collect expert opinions systematically. The tool used an indirect approach to elicitation, asking experts simpler questions about observable rather than abstract quantities, with measures in place to identify uncertainty and offer feedback. Bayesian analysis was used to combine field data and expert knowledge in each region to determine: (i) how expert opinion affected models based on field data and (ii) how similar expert-informed models were within regions and across regions.
3. The elicitation tool effectively captured the experts' opinions and their uncertainties. Experts were comfortable with the map-based elicitation approach, especially with graphical feedback. Experts tended to predict lower values of species occurrence than the field data.
4. Across experts, consensus on effect sizes occurred for several habitat variables. Expert opinion generally influenced predictions from field data. However, south-east Queensland and north-east New South Wales experts had different opinions on the influence of elevation and geology, with these differences attributable to geological differences between the regions.
5. Synthesis and applications. When formulated as priors in Bayesian analysis, expert opinion is useful for modifying or strengthening patterns exhibited by empirical data sets that are limited in size or scope. Nevertheless, the ability of an expert to extrapolate beyond their region of knowledge may be poor. Hence there is significant merit in obtaining information from local experts when compiling species' distribution models across several regions.
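The use of expert opinion as a Bayesian prior in point 5 can be illustrated with a toy conjugate beta-binomial example (a minimal sketch with invented numbers; the models in the study are far richer):

```python
def posterior_occupancy(expert_mean, expert_weight, detections, sites):
    """Encode an expert's elicited occupancy probability as a Beta(a, b)
    prior worth `expert_weight` pseudo-observations, then update it with
    binomial field data (detections out of surveyed sites)."""
    a = expert_mean * expert_weight
    b = (1.0 - expert_mean) * expert_weight
    a_post, b_post = a + detections, b + (sites - detections)
    return a_post / (a_post + b_post)        # posterior mean occupancy

# Hypothetical: an expert believes 20% occupancy, worth about 10 surveys
# of information; field crews then detect the species at 9 of 20 sites.
print(posterior_occupancy(0.20, 10, 9, 20))  # ~0.37: the data pull the prior up
```

The expert's weight plays the role described in the abstract: with sparse field data the prior dominates, and as surveys accumulate the data progressively override it.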
Abstract:
The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs in applying importance sampling to high-dimensional problems. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
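The dimensional blow-up described here is easy to reproduce in a toy importance-sampling experiment (a generic textbook illustration, not the paper's setting): target N(0, I_d) with an overdispersed N(0, sigma^2 I_d) proposal and watch the weight variance grow geometrically with d.

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_variance(dim, sigma=1.5, n=200_000):
    """Empirical variance of the importance weight w = p/q when the
    target p is N(0, I_d) and the proposal q is N(0, sigma^2 I_d).
    The weight factorises over coordinates, so Var(w) = c**dim - 1
    for a constant c > 1: exponential growth in the dimension."""
    x = rng.normal(0.0, sigma, size=(n, dim))
    sq = (x ** 2).sum(axis=1)
    logw = -0.5 * sq + 0.5 * sq / sigma**2 + dim * np.log(sigma)
    return np.exp(logw).var()

for d in (1, 5, 10, 20):
    print(d, round(weight_variance(d), 2))
```

For sigma = 1.5 the per-coordinate factor c is about 1.20, so by d = 20 the weight variance is already around 40 times the squared mean, which is the high-dimensional degeneracy the abstract refers to.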
Abstract:
A new method for noninvasive assessment of tear film surface quality (TFSQ) is proposed. The method is based on high-speed videokeratoscopy in which the corneal area for the analysis is dynamically estimated in a manner that removes videokeratoscopy interference from the shadows of eyelashes but not that related to the poor quality of the precorneal tear film that is of interest. The separation between the two types of seemingly similar videokeratoscopy interference is achieved by region-based classification in which the overall noise is first separated from the useful signal (the unaltered videokeratoscopy pattern), followed by a dedicated interference classification algorithm that distinguishes between the two considered interferences. The proposed technique provides a much wider corneal area for the analysis of TFSQ than previously reported techniques. A preliminary study with the proposed technique, carried out for a range of anterior eye conditions, showed effective behavior in terms of noise-to-signal separation and interference classification, as well as consistent TFSQ results. Subsequently, the method proved able not only to discriminate between the bare-eye and lens-on-eye conditions but also to have the potential to distinguish between the two types of contact lenses.
Abstract:
In the field of the semantic grid, QoS-based Web service composition is an important problem. In a semantic- and service-rich environment like the semantic grid, context constraints on Web services are common, so composition must consider not only the QoS properties of Web services but also the inter-service dependencies and conflicts that these context constraints create. In this paper, we present a repair genetic algorithm, namely the minimal-conflict hill-climbing repair genetic algorithm, to address the Web service composition optimization problem in the presence of domain constraints and inter-service dependencies and conflicts. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.
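A repair operator of the kind the abstract names might look like the following sketch (hypothetical; the representation, the constraint checker, and the surrounding GA are all assumptions):

```python
def repair(assignment, candidates, conflicts, max_steps=50):
    """Hill-climbing repair with the min-conflicts heuristic: while the
    composition violates dependency/conflict constraints, pick the task
    implicated in the most violations and swap in the alternative
    service that leaves the fewest violations.

    assignment: dict task -> chosen concrete service
    candidates: dict task -> list of alternative services
    conflicts:  function(assignment) -> list of violated (task, task) pairs
    """
    for _ in range(max_steps):
        violated = conflicts(assignment)
        if not violated:
            break                          # feasible: hand back to the GA
        counts = {}
        for t1, t2 in violated:
            counts[t1] = counts.get(t1, 0) + 1
            counts[t2] = counts.get(t2, 0) + 1
        worst = max(counts, key=counts.get)
        best = min(candidates[worst],
                   key=lambda s: len(conflicts({**assignment, worst: s})))
        if best == assignment[worst]:
            break                          # local minimum; mutation must escape
        assignment[worst] = best
    return assignment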
Abstract:
We report numerical analysis and experimental observation of strongly localized plasmons guided by triangular metal wedges and pay special attention to the effect of smooth (nonzero radius) tips. Dispersion, dissipation, and field structure of such wedge plasmons are analyzed using the compact two-dimensional finite-difference time-domain algorithm. Experimental observation is conducted by the end-fire excitation and near-field scanning optical microscope detection of the predicted plasmons on 40° silver nanowedges with wedge tip radii of 20, 85, and 125 nm that were fabricated by the focused-ion beam method. The effect of smoothing wedge tips is shown to be similar to that of increasing wedge angle. Increasing wedge angle or wedge tip radius results in increasing propagation distance at the same time as decreasing field localization (decreasing wave number). Quantitative differences between the theoretical and experimental propagation distances are suggested to be due to a contribution of scattered bulk and surface waves near the excitation region as well as additional losses due to surface roughness. The theoretical and measured propagation distances are several plasmon wavelengths and are useful for a range of nano-optical applications.
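For orientation, the skeleton of a 2D FDTD update is shown below (a generic textbook vacuum TM-polarisation sketch, not the paper's compact algorithm, which must additionally model the metal's dispersive permittivity and the wedge geometry; the source wavelength is arbitrary):

```python
import numpy as np

eps0, mu0, c = 8.854e-12, 4e-7 * np.pi, 3e8

def fdtd_2d_tm(steps=300, nx=200, ny=200, dx=1e-8):
    """Leapfrog Yee updates for (Ez, Hx, Hy) on a 2D grid,
    driven by a sinusoidal point source at the grid centre."""
    dt = dx / (2 * c)                               # Courant-stable step
    ez = np.zeros((nx, ny))
    hx = np.zeros((nx, ny - 1))
    hy = np.zeros((nx - 1, ny))
    for n in range(steps):
        hx -= (dt / (mu0 * dx)) * np.diff(ez, axis=1)
        hy += (dt / (mu0 * dx)) * np.diff(ez, axis=0)
        ez[1:-1, 1:-1] += (dt / (eps0 * dx)) * (
            np.diff(hy, axis=0)[:, 1:-1] - np.diff(hx, axis=1)[1:-1, :]
        )
        ez[nx // 2, ny // 2] += np.sin(2 * np.pi * c / 633e-9 * n * dt)
    return ez
```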
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker is used to speed up the subject detection and super-resolution process by tracking moving subjects and cropping a region of interest around the subject's face, reducing the number and size of the image frames to be super-resolved respectively. In this paper, experiments have been conducted to demonstrate how the optical flow super-resolution method used improves surveillance imagery for visual inspection as well as automatic face recognition on an Eigenface and Elastic Bunch Graph Matching system. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human viewing.
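The registration-and-fusion core of optical-flow super-resolution can be sketched as a shift-and-add toy using OpenCV's Farneback flow (a hypothetical sketch; the paper's method and its deblurring/regularisation details are not reproduced here):

```python
import cv2
import numpy as np

def fuse_frames(frames, scale=4):
    """Register each low-resolution face crop to the first frame with
    dense optical flow, warp onto the reference grid, upsample, and
    average. frames: list of equally sized grayscale uint8 arrays."""
    ref = frames[0]
    h, w = ref.shape
    up = lambda img: cv2.resize(img.astype(np.float32), (w * scale, h * scale),
                                interpolation=cv2.INTER_CUBIC)
    acc = up(ref)
    grid_y, grid_x = np.indices((h, w)).astype(np.float32)
    for f in frames[1:]:
        # flow maps reference pixels to their locations in frame f
        flow = cv2.calcOpticalFlowFarneback(ref, f, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        aligned = cv2.remap(f, grid_x + flow[..., 0], grid_y + flow[..., 1],
                            cv2.INTER_LINEAR)
        acc += up(aligned)
    return (acc / len(frames)).astype(np.uint8)
```

Averaging many sub-pixel-aligned frames is what recovers detail beyond any single low-resolution frame, and it is also why accurate flow estimation on the cropped face region matters more than the choice of upsampling filter.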