471 results for Component reuse
Abstract:
It is a significant challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and extensive experiments show that the proposed approach achieves encouraging performance.
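As a rough illustration of the three-way term classification described above, the following sketch assigns each extracted term to a category from simple per-class document frequencies. The function names, inputs, and decision rule are illustrative assumptions, not the paper's pattern-mining formulation:

```python
# Illustrative three-way classification of extracted terms based on their
# document frequency in positive documents versus selected negative
# "offender" documents. Inputs are lists of documents, each a set of terms.

def classify_terms(pos_docs, offender_docs):
    def doc_freq(docs):
        freq = {}
        for doc in docs:
            for term in doc:
                freq[term] = freq.get(term, 0) + 1
        return {t: c / len(docs) for t, c in freq.items()}

    pos_df, neg_df = doc_freq(pos_docs), doc_freq(offender_docs)
    categories = {}
    for term in set(pos_df) | set(neg_df):
        p, n = pos_df.get(term, 0.0), neg_df.get(term, 0.0)
        if p > 0 and n == 0:
            categories[term] = "positive specific"   # supports relevance
        elif n > 0 and p == 0:
            categories[term] = "negative specific"   # weight to be reduced
        else:
            # Appears on both sides: a general term whose weight a revising
            # strategy would adjust rather than discard outright.
            categories[term] = "general"
    return categories

print(classify_terms([{"mining", "pattern"}, {"pattern"}],
                     [{"mining", "noise"}]))
```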
Abstract:
This article explores two matrix methods for inducing the "shades of meaning" (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) each compute a set of vectors, each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data. One set reflects concepts generally appearing in text, and the second set comprises words used in investigations of word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both the SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than word sense induction, and hence relevant to thematic analysis of opinion, where nuances of opinion can arise.
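As a rough illustration of the two factorisations, the sketch below applies scikit-learn's NMF and TruncatedSVD to a toy word-by-context matrix; the matrix construction, component count, and parameter choices are assumptions for illustration only:

```python
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD

# Toy word-by-context co-occurrence matrix for a single target word:
# rows are traces (contexts containing the word), columns are context terms.
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50)).astype(float)

k = 5  # assumed number of candidate shades of meaning

# NMF: non-negative basis vectors, one per candidate shade of meaning.
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)   # trace-by-shade loadings
H = nmf.components_        # shade-by-context-term vectors

# SVD: orthogonal components computed over the same matrix.
svd = TruncatedSVD(n_components=k, random_state=0)
U = svd.fit_transform(X)   # trace-by-component scores
V = svd.components_        # component-by-context-term vectors

# Each row of H (or V) is then inspected as one candidate shade of
# meaning, e.g. evaluated via conditional entropy against tagged data.
print(H.shape, V.shape)
```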
Abstract:
We argue that web service discovery technology should help the user navigate a complex problem space by providing suggestions for services which they may not be able to formulate themselves, as they lack the epistemic resources to do so. Free-text documents in service environments provide an untapped source of information for augmenting the epistemic state of the user and hence their ability to search effectively for services. A quantitative approach to semantic knowledge representation is adopted in the form of semantic space models computed from these free-text documents. Knowledge of the user's agenda is promoted by associational inferences computed from the semantic space. The inferences are suggestive and aim to promote human abductive reasoning, guiding the user from fuzzy search goals to a better understanding of the problem space surrounding the given agenda. Experimental results are discussed based on a complex and realistic planning activity.
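A minimal sketch of associational inference over a semantic space, assuming term vectors in a term-by-feature matrix and cosine similarity as the association measure (all names and the centroid query representation are illustrative assumptions):

```python
import numpy as np

def suggest_associates(space, vocab, query_terms, top_n=5):
    """space: term-by-feature matrix; vocab: list of terms (row order).
    Returns terms most associated with the (fuzzy) query, as candidate
    suggestions to widen the user's view of the service problem space."""
    index = {t: i for i, t in enumerate(vocab)}
    vecs = [space[index[t]] for t in query_terms if t in index]
    if not vecs:
        return []
    # Represent the query as the centroid of its term vectors.
    q = np.mean(vecs, axis=0)
    # Cosine similarity between the query centroid and every term vector.
    norms = np.linalg.norm(space, axis=1) * np.linalg.norm(q)
    sims = space @ q / np.where(norms == 0, 1.0, norms)
    ranked = np.argsort(-sims)
    return [vocab[i] for i in ranked if vocab[i] not in query_terms][:top_n]
```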
Abstract:
While spoken term detection (STD) systems based on word indices provide good accuracy, there are several practical applications where it is infeasible or too costly to employ an LVCSR engine. An STD system is presented which is designed to incorporate a fast phonetic decoding front-end and to be robust to decoding errors whilst still allowing rapid search speeds. This goal is achieved through mono-phone open-loop decoding coupled with fast hierarchical phone lattice search. Results demonstrate that an STD system designed under the constraint of a fast and simple phonetic decoding front-end requires a compromise between search speed and search accuracy.
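As a simplified stand-in for the paper's hierarchical phone lattice search, the sketch below spots a query phone sequence in a 1-best phone string by dynamic programming, tolerating substitution, insertion, and deletion errors from the decoder; the cost model and threshold are assumptions:

```python
# Approximate keyword spotting in a 1-best phone sequence: for each end
# position, compute the minimum edit distance of the query against any
# substring ending there. Edit operations stand in for phone recogniser
# errors; hits below a cost threshold are reported as detections.

def spot(query, decoded, max_cost=2):
    m = len(query)
    # prev[i] = min cost of aligning query[:i] against some substring of
    # `decoded` ending at the previous position (free start point).
    prev = list(range(m + 1))
    hits = []
    for j, phone in enumerate(decoded):
        curr = [0]  # free start: the match may begin anywhere
        for i in range(1, m + 1):
            sub = prev[i - 1] + (query[i - 1] != phone)   # match/substitute
            curr.append(min(sub, prev[i] + 1, curr[i - 1] + 1))
        if curr[m] <= max_cost:
            hits.append((j, curr[m]))  # (end index, edit cost)
        prev = curr
    return hits

# Toy "phone strings" (characters standing in for phone labels).
print(spot(list("kaet"), list("dhaxkaedsaet")))
```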
Abstract:
The use of the PC and Internet for placing telephone calls will present new opportunities to capture vast amounts of un-transcribed speech for a particular speaker. This paper investigates how best to exploit this data for speaker-dependent speech recognition. Supervised and unsupervised experiments in acoustic model and language model adaptation are presented. Using one hour of automatically transcribed speech per speaker with a word error rate of 36.0%, unsupervised adaptation resulted in an absolute gain of 6.3%, equivalent to 70% of the gain from the supervised case, with additional adaptation data likely to yield further improvements. LM adaptation experiments suggested that although there seems to be a small degree of speaker idiolect, adaptation to the speaker alone, without considering the topic of the conversation, is in itself unlikely to improve transcription accuracy.
Abstract:
Public transportation is an environment with great potential for applying location-based services through mobile devices. The BusTracker study examines how real-time passenger information systems can provide a core platform to improve commuters' experiences. These systems rely on mobile computing and GPS technology to provide accurate information on transport vehicle locations. BusTracker builds on this mobile computing platform and geospatial information. The pilot study runs on the open-source BugLabs computing platform, using a GPS module for accurate location information.
Abstract:
This position paper examines the development of a dedicated service aggregator role in business networks. We predict that these intermediaries will soon emerge in service ecosystems and add value through the application of dedicated domain knowledge in the process of creating new, innovative services or service bundles based on the aggregation, composition, integration or orchestration of existing services procured from different service providers in the service ecosystem. We discuss general foundations of service aggregators and present Fourth-Party Logistics Providers as a real-world example of emerging business service aggregators. We also point out a demand for future research, e.g. into governance models, risk management tools, service portfolio management approaches and service bundling techniques, in order to better understand the core determinants of competitiveness and success of service aggregators.
Abstract:
A method is presented for improving the security of biometric templates which satisfies desirable properties such as (a) irreversibility of the template, (b) revocability and assignment of a new template to the same biometric input, and (c) matching in the secure transformed domain. It makes use of an iterative procedure based on the bispectrum that serves as an irreversible transformation for biometric features, because signal phase is discarded at each iteration. Unlike the usual hash function, this transformation preserves closeness in the transformed domain for similar biometric inputs. A number of such templates can be generated from the same input. These properties are illustrated using synthetic data and applied to images from the FRGC 3D database with Gabor features. Verification can be successfully performed using these secure templates with an EER of 5.85%.
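The sketch below illustrates only the general idea of an iterative, phase-discarding, key-dependent transformation; it substitutes a plain FFT-magnitude step for the paper's bispectrum procedure, so it is an analogue under stated assumptions rather than the authors' method:

```python
import numpy as np

def secure_template(features, iterations=3, key=42):
    """Toy irreversible, revocable transform (an assumed analogue, not the
    paper's bispectrum): each iteration mixes the features with a
    key-dependent random projection and keeps only the FFT magnitude,
    discarding phase so the original features cannot be recovered.
    Changing `key` reissues a new template for the same biometric."""
    rng = np.random.default_rng(key)
    x = np.asarray(features, dtype=float)
    for _ in range(iterations):
        R = rng.standard_normal((x.size, x.size))  # key-dependent mixing
        x = np.abs(np.fft.fft(R @ x))              # phase discarded here
        x /= np.linalg.norm(x)                     # keep templates comparable
    return x

# Matching happens in the transformed domain: similar inputs map to
# nearby templates, so a distance threshold can verify identity.
f = np.random.rand(64)
t1, t2 = secure_template(f), secure_template(f + 0.01)
print(np.linalg.norm(t1 - t2))  # small for similar inputs
```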
Abstract:
Technological and societal change, along with organisational and market change (driven by contracting-out and privatisation), are “creating a new generation of infrastructures” [1]. While inter-organisational contractual arrangements can improve maintenance efficiency through consistent and repeatable patterns of action, unanticipated difficulties in implementation can reduce the performance of these arrangements. When faced with unsatisfactory performance of contracting-out arrangements, government organisations may choose to adapt and change these arrangements over time, with the aim of improving performance. This paper enhances our understanding of ‘next generation infrastructures’ by examining adaptation of the organisational arrangements for the maintenance of these assets, in a case study spanning 20 years.
Abstract:
This paper presents a reliability-based reconfiguration methodology for power distribution systems. Probabilistic reliability models of the system components are considered, and the Monte Carlo method is used to evaluate the reliability of the distribution system. The reconfiguration aims to maximize the reliability of the power supplied to the customers. A binary particle swarm optimization (BPSO) algorithm is used as a tool to determine the optimal configuration of the sectionalizing and tie switches in the system. The proposed methodology is applied to a modified IEEE 13-bus distribution system.
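A minimal sketch of binary PSO with the standard sigmoid position update, using a stand-in fitness function in place of the Monte Carlo reliability evaluation; all parameter values are common defaults, not taken from the paper:

```python
import numpy as np

def bpso(fitness, n_bits, n_particles=20, iters=100,
         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximise `fitness` over binary vectors. In the paper's setting each
    bit would be a sectionalizing/tie switch state and `fitness` the
    Monte Carlo reliability estimate of that configuration."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, (n_particles, n_bits))   # positions (bit vectors)
    V = rng.uniform(-1, 1, (n_particles, n_bits))   # velocities
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[np.argmax(pbest_f)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        # Sigmoid of velocity gives the probability of each bit being 1.
        X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, pbest_f.max()

# Stand-in fitness: reward matching a target switch pattern.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
best, score = bpso(lambda x: -np.sum(np.abs(x - target)), n_bits=8)
print(best, score)
```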
Abstract:
This paper proposes a method of enhancing system stability with a distribution static compensator (DSTATCOM) in an autonomous microgrid with multiple distributed generators (DGs). It is assumed that both inertial and non-inertial DGs are connected to the microgrid. The inertial DG can be a synchronous machine of smaller rating, while the inertia-less DGs (e.g., solar) are modelled as DC sources connected to the microgrid through voltage source converters (VSCs). The VSCs are controlled in either state feedback or current feedback mode to achieve the desired voltage-current or power outputs, respectively. Power sharing among the DGs is achieved by drooping the voltage angle. Once the reference for the output voltage magnitude and angle is calculated from the droop, state feedback controllers are used to track the reference. The angle reference for the synchronous machine is compared with the output voltage angle of the machine, and the error is fed to a PI controller whose output sets the power reference of the synchronous machine. The rate of change of angle in a synchronous machine is restricted by the machine inertia; to mimic this behaviour, the rate of change of the VSC angles is restricted by a derivative feedback term in the droop control. The connected DSTATCOM provides ride-through capability during power imbalance in the microgrid, especially when the stored energy of the inertial DG is not sufficient to maintain stability. The inclusion of the DSTATCOM in such cases ensures system stability. The efficacy of the controllers is established through extensive simulation studies using PSCAD.
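A minimal sketch of the angle-droop idea with derivative feedback, written as a single discrete-time control step; the droop coefficients and the form delta = delta0 - m*(P - P0) - kd*dP/dt are illustrative assumptions consistent with the description above:

```python
# Angle droop with derivative feedback: the VSC sets its voltage angle
# from measured output power, and the derivative term resists fast power
# swings, mimicking the inertia of a synchronous machine.

def droop_angle(P_meas, P_prev, dt, delta0=0.0, P0=1.0, m=0.05, kd=0.02):
    """Angle reference delta = delta0 - m*(P - P0) - kd * dP/dt (per unit)."""
    dP_dt = (P_meas - P_prev) / dt
    return delta0 - m * (P_meas - P0) - kd * dP_dt

# One control step: power rose from 1.00 to 1.02 pu over 10 ms, so the
# angle command backs off both for the level and for the rate of change.
print(droop_angle(P_meas=1.02, P_prev=1.00, dt=0.01))
```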
Abstract:
Bridges are an important part of society's infrastructure, and reliable methods are necessary to monitor them and ensure their safety and efficiency. Bridges deteriorate with age, and early detection of damage helps prolong their lives and prevent catastrophic failures. Most bridges still in use today were built decades ago and are now subjected to changes in load patterns, which can cause localized distress and, if not corrected, can result in bridge failure. In the past, monitoring of structures was usually done by means of visual inspection and tapping of the structures with a small hammer. Recent advances in sensor and information technologies have resulted in new ways of monitoring the performance of structures. This paper briefly describes the current technologies used in bridge condition monitoring, with a prime focus on the application of acoustic emission (AE) technology to the monitoring of bridge structures and its challenges.
Abstract:
A new method for noninvasive assessment of tear film surface quality (TFSQ) is proposed. The method is based on high-speed videokeratoscopy in which the corneal area for analysis is dynamically estimated in a manner that removes videokeratoscopy interference caused by the shadows of eyelashes but not that related to the poor quality of the precorneal tear film, which is of interest. The separation between the two types of seemingly similar videokeratoscopy interference is achieved by region-based classification, in which the overall noise is first separated from the useful signal (the unaltered videokeratoscopy pattern), followed by a dedicated interference classification algorithm that distinguishes between the two considered interferences. The proposed technique provides a much wider corneal area for the analysis of TFSQ than previously reported techniques. A preliminary study with the proposed technique, carried out for a range of anterior eye conditions, showed effective behavior in terms of noise-to-signal separation and interference classification, as well as consistent TFSQ results. The method proved able not only to discriminate between the bare-eye and lens-on-eye conditions but also to show potential for discriminating between the two types of contact lenses.
Abstract:
A comprehensive voltage imbalance sensitivity analysis and stochastic evaluation based on the rating and location of single-phase grid-connected rooftop photovoltaic cells (PVs) in a residential low voltage distribution network are presented. The voltage imbalance at different locations along a feeder is investigated. In addition, a sensitivity analysis is performed for voltage imbalance in one feeder when PVs are installed in other feeders of the network. A stochastic evaluation based on the Monte Carlo method is carried out to investigate the risk index of non-standard voltage imbalance in the network in the presence of PVs. The network voltage imbalance characteristic is generalized based on different criteria of PV rating, location, and network conditions. Improvement methods are proposed for reducing voltage imbalance, and their efficacy is verified by comparing risk indices obtained from Monte Carlo simulations.
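A minimal sketch of the Monte Carlo risk index using the sequence-component voltage unbalance factor and an assumed 2% limit; the per-phase voltage perturbation stands in for a network load-flow model and is purely illustrative:

```python
import numpy as np

a = np.exp(2j * np.pi / 3)  # rotation operator for sequence components

def vuf(va, vb, vc):
    """Voltage unbalance factor |V2|/|V1| from complex phase voltages."""
    v1 = (va + a * vb + a**2 * vc) / 3    # positive-sequence component
    v2 = (va + a**2 * vb + a * vc) / 3    # negative-sequence component
    return abs(v2) / abs(v1)

def risk_index(n_trials=10000, limit=0.02, seed=0):
    """Fraction of random PV scenarios whose VUF exceeds the limit.
    Toy model: each trial perturbs the phase voltage magnitudes to stand
    in for randomly rated/located single-phase rooftop PV injections."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_trials):
        mags = 1.0 + rng.normal(0.0, 0.01, 3)   # per-phase magnitude shifts
        va, vb, vc = (mags[k] * np.exp(1j * np.deg2rad(-120 * k))
                      for k in range(3))
        count += vuf(va, vb, vc) > limit
    return count / n_trials

print(f"risk index: {risk_index():.3f}")
```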