823 results for Graph-based approach
Abstract:
Software systems have become increasingly widespread and important in our society, creating a constant demand for high-quality software. One of the most widely used techniques for improving software quality is refactoring, which improves the structure of a program while preserving its external behavior. When applied properly, refactoring promises to improve the understandability, maintainability, and extensibility of software while increasing programmer productivity. In general, refactoring can be applied at the specification, design, or code level. This thesis addresses the automation of the refactoring recommendation process at the code level, in two main steps: 1) detecting the code fragments that should be improved (e.g., design defects), and 2) identifying the refactoring solutions to apply. For the first step, we capture regularities that can be found in examples of design defects, using a genetic algorithm to automatically generate detection rules from those examples. For the second step, we introduce an approach based on heuristic search: the process consists of finding the optimal sequence of refactoring operations that improves software quality by minimizing the number of defects while prioritizing the most critical instances. In addition, we explore other objectives to optimize: the number of changes required to apply the refactoring solution, the preservation of semantics, and consistency with the change history. Reducing the number of changes keeps the program as close as possible to the initial design; semantics preservation ensures that the restructured program remains semantically coherent; and the change history is used to suggest new refactorings in similar contexts. Finally, we introduce a multi-objective approach that improves software quality attributes (flexibility, maintainability, etc.) and fixes "bad" design practices (design defects) while introducing "good" design practices (design patterns).
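As a rough sketch of how the first step might look in code (a hypothetical rule encoding, assumed metrics, and an assumed fitness function, not the thesis's actual setup), detection rules can be represented as conjunctions of metric thresholds and evolved against labelled defect examples:

```python
# Hedged sketch of genetic rule generation for defect detection.
# The encoding (metric > threshold conjunctions), the metric names,
# and the fitness function are illustrative assumptions.
import random

METRICS = ['loc', 'num_methods', 'coupling', 'cohesion']   # assumed metrics

def random_rule():
    """A rule is a conjunction of 'metric > threshold' tests."""
    return {m: random.uniform(0, 100) for m in random.sample(METRICS, 2)}

def matches(rule, cls):
    return all(cls[m] > t for m, t in rule.items())

def fitness(rule, defects, clean):
    """Reward covering labelled defect examples, penalise false alarms."""
    return (sum(matches(rule, c) for c in defects)
            - sum(matches(rule, c) for c in clean))

def evolve(defects, clean, pop_size=50, generations=100):
    pop = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, defects, clean), reverse=True)
        survivors = pop[:pop_size // 2]                  # elitist selection
        while len(survivors) < pop_size:
            a, b = random.sample(survivors[:10], 2)      # crossover of two parents
            child = dict(random.sample(list(a.items()) + list(b.items()), 2))
            if random.random() < 0.3:                    # mutation: jitter a threshold
                m = random.choice(list(child))
                child[m] = max(0.0, child[m] + random.gauss(0, 10))
            survivors.append(child)
        pop = survivors
    return max(pop, key=lambda r: fitness(r, defects, clean))
```

The second step, heuristic search over refactoring sequences, would follow the same pattern with a different encoding: a candidate solution is a sequence of refactoring operations scored against the objectives listed above.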
Abstract:
The theme of the thesis centres on one important aspect of wireless sensor networks: energy efficiency. The limited energy source of sensor nodes calls for the design of energy-efficient routing protocols, which should minimize the number of communications among nodes to save energy. Cluster-based techniques have been found to be energy-efficient: clusters are formed, data from the member nodes are collected by a cluster head in each cluster, and the aggregated data are forwarded to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work, two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wasted when cluster rebuilding involves all the nodes; a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, and the scheme is compared with an existing cluster-based scheme; the simulation study confirms the expected energy savings. The second scheme builds load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that uses neighbour-node information in the cluster head selection process, and the number of nodes joining a cluster is restricted to obtain equal-sized, optimum clusters. Multi-hop communication among the cluster heads is also introduced to reduce energy consumption. The simulation study shows that the scheme produces balanced clusters and reduces the energy consumption of the network. A main conclusion of the study is that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. A scheme for electing cluster heads is presented that maintains cluster durability when all the nodes in the network are moving; it has been simulated and compared with a similar approach. The proliferation of sensor networks gives users access to large sets of sensor information for use in various applications, yet sensor network programming is inherently difficult for various reasons, and there should be a convenient way to collect the data gathered by a sensor network without worrying about its underlying structure. The final work presented addresses a way to collect data from a sensor network and present it to users flexibly: a service-oriented-architecture-based application is built, and the data collection task is exposed as a web service, enabling the composition of sensor data from different sensor networks into interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
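A minimal sketch of the voting idea, assuming a residual-energy score, a distance-based join rule, and a hard cluster-size cap (all illustrative choices, not the thesis's actual protocol):

```python
# Hedged sketch of voting-based cluster head election with capped,
# balanced clusters. Scoring rule, field names, and the size cap are
# assumptions made for illustration.
import math
from collections import defaultdict

def elect_cluster_heads(nodes, radio_range, max_cluster_size):
    """nodes: dict id -> {'pos': (x, y), 'energy': float}"""
    def dist(a, b):
        return math.dist(nodes[a]['pos'], nodes[b]['pos'])

    # Each node votes for the in-range neighbour (or itself) with the
    # highest residual energy.
    votes = defaultdict(int)
    for n in nodes:
        neighbours = [m for m in nodes if dist(n, m) <= radio_range]
        votes[max(neighbours, key=lambda m: nodes[m]['energy'])] += 1

    # The most-voted nodes become heads; members join the nearest head
    # that still has capacity, keeping clusters roughly equal-sized.
    k = max(1, math.ceil(len(nodes) / max_cluster_size))
    heads = sorted(votes, key=votes.get, reverse=True)[:k]
    membership, load = {}, defaultdict(int)
    for n in nodes:
        for h in sorted(heads, key=lambda h: dist(n, h)):
            if load[h] < max_cluster_size:
                membership[n] = h
                load[h] += 1
                break
        else:  # all preferred heads full; join the nearest regardless
            membership[n] = min(heads, key=lambda h: dist(n, h))
    return membership
```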
Abstract:
This thesis presents a named-entity-based question answering system for the Malayalam language. Although a vast amount of information is available today in digital form, no effective mechanism exists to provide humans with convenient access to it. Information retrieval and question answering systems are the two mechanisms currently available for information access. Information retrieval systems typically return a long list of documents in response to a user's query, which the user must skim to determine whether they contain an answer. A question answering system, by contrast, allows the user to state an information need as a natural language question and returns the most appropriate answer as a word, a sentence, or a paragraph. This system is based on named entity tagging and question classification. Document tagging extracts useful information from the documents that is later used to find the answer to a question; question classification extracts useful information from the question to determine its type and the way in which it should be answered. Various machine learning methods are used to tag the documents, and a rule-based approach is used for question classification. Malayalam belongs to the Dravidian family of languages and is one of the four major languages of this family. It is one of the 22 scheduled languages of India, with official language status in the state of Kerala, and is spoken by 40 million people. Malayalam is a morphologically rich, agglutinative language with a relatively free word order, and its productive morphology allows the creation of complex words that are often highly ambiguous. Document tagging tools such as a part-of-speech tagger, phrase chunker, named entity tagger, and compound word splitter were developed as part of this research work; no such tools were previously available for Malayalam. Finite state transducers, higher-order conditional random fields, artificial immune system principles, and support vector machines are the techniques used in the design of these document preprocessing tools. This research work describes how named entities are used to represent the documents. Single-sentence questions were used to test the system, and the overall precision and recall obtained are 88.5% and 85.9%, respectively. The work can be extended in several directions: the coverage of non-factoid questions can be increased, the system can be extended to open-domain applications, and reference resolution and word sense disambiguation techniques are suggested as future enhancements.
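A minimal sketch of the rule-based question classification and entity-matching steps, using English interrogative patterns for illustration (the thesis's rules target Malayalam) and a hypothetical tagged-sentence format:

```python
# Hedged sketch: rules map interrogative patterns to an expected
# named-entity type, and the answer is the first entity of that type
# found in candidate sentences. Patterns and tags are placeholders.
import re

RULES = [
    (r'\bwho\b',               'PERSON'),
    (r'\bwhere\b',             'LOCATION'),
    (r'\bwhen\b',              'DATE'),
    (r'\bhow (?:many|much)\b', 'NUMBER'),
    (r'\bwhat\b',              'ENTITY'),
]

def classify_question(question):
    """Return the expected answer type for a question, or UNKNOWN."""
    q = question.lower()
    for pattern, answer_type in RULES:
        if re.search(pattern, q):
            return answer_type
    return 'UNKNOWN'

def answer(question, tagged_sentences):
    """tagged_sentences: list of sentences, each a list of
    (token, entity_tag) pairs produced by the document tagger."""
    wanted = classify_question(question)
    for sentence in tagged_sentences:
        for token, tag in sentence:
            if tag == wanted:
                return token
    return None
```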
Abstract:
This paper presents the design and development of a frame-based approach for a speech to sign language machine translation system in the domain of railways and banking. The work aims to utilize the capability of artificial intelligence for the benefit of physically challenged, deaf-mute people, and concentrates on the sign language used by the deaf community of the Indian subcontinent, Indian Sign Language (ISL). The input to the system is the clerk's speech, and the output is a 3D virtual human character playing the signs for the uttered phrases; the system builds the 3D animation from pre-recorded motion-capture data. The work proposes to build a Malayalam to ISL translation system.
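A toy sketch of the frame-based step, assuming hypothetical frame definitions, slot names, and sign-clip identifiers (none of which come from the paper): an utterance is matched to a domain frame, the frame's slots are filled from the recognized text, and the frame expands to a sequence of sign clips.

```python
# Hedged sketch of frame-based utterance-to-sign translation.
# Frame names, triggers, slots, and clip IDs are illustrative only.
import re

FRAMES = {
    'ticket_price': {
        'triggers': ['ticket', 'fare'],
        'signs': ['TICKET', 'PRICE', '{destination}'],
    },
}

def translate(utterance, known_places):
    words = re.findall(r'\w+', utterance.lower())
    for frame in FRAMES.values():
        if any(t in words for t in frame['triggers']):
            # Fill the destination slot with the first known place name.
            slots = {'destination': next((w.upper() for w in words
                                          if w in known_places), 'UNKNOWN')}
            # Expand the sign sequence into motion-capture clip IDs.
            return [s.format(**slots) for s in frame['signs']]
    return []

print(translate('How much is the ticket to Kochi?', {'kochi'}))
# -> ['TICKET', 'PRICE', 'KOCHI']
```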
Abstract:
During the past few years, there has been much discussion of a shift from rule-based systems to principle-based systems for natural language processing. This paper outlines the major computational advantages of principle-based parsing and its differences from the usual rule-based approach, and surveys several existing principle-based parsing systems that handle languages as diverse as Warlpiri, English, and Spanish, as well as language translation.
Abstract:
We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components, which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers form an initial set of component templates, from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
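A hedged sketch of the clustering and matching steps, assuming flattened patch features, plain k-means over joint (feature, location) vectors, and normalized-correlation matching; the paper's exact features and clustering method may differ:

```python
# Hedged sketch: patches are clustered on concatenated (feature, location)
# vectors, cluster centers become component templates, and templates are
# matched by normalized cross-correlation. k-means here is an assumption.
import numpy as np

def cluster_components(patches, locations, k=20, iters=25):
    """patches: (n, d) flattened patch features; locations: (n, 2)."""
    X = np.hstack([patches, locations])          # joint feature+location space
    centers = X[np.random.choice(len(X), k, replace=False)]
    for _ in range(iters):                       # plain k-means
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers[:, :patches.shape[1]]         # drop locations -> templates

def match_score(template, patch):
    """Normalized cross-correlation between a template and a patch."""
    t = (template - template.mean()) / (template.std() + 1e-8)
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    return float((t * p).mean())
```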
Abstract:
Model-based vision allows prior knowledge of the shape and appearance of specific objects to be used in the interpretation of a visual scene; it provides a powerful and natural way to enforce the view-consistency constraint. A model-based vision system has been developed within ESPRIT VIEWS: P2152 which is able to classify and track moving objects (cars and other vehicles) in complex, cluttered traffic scenes. The fundamental basis of the method has been reported previously. This paper presents recent developments which have extended the scope of the system to include (i) multiple cameras, (ii) variable camera geometry, and (iii) articulated objects. All three enhancements have been accommodated easily within the original model-based approach.
Abstract:
Most existing crop scheduling models are cultivar-specific and are developed using academic resources; as such, they rarely meet the particular needs of a grower. A series of protocols has been created to generate effective schedules for a changing product range using data generated on site at a commercial nursery. A screening programme has been developed to help determine a cultivar's photoperiod sensitivity and vernalisation requirement. Experimental conditions were obtained using a cold store facility set to 5°C and photoperiod cloches. Eight- and 16-hour photoperiod treatments were achieved at low cost by growing plants in cloches of opaque plastic with a motorised rolling screen, and natural light conditions were extended where necessary using a high-pressure sodium lamp. Batches of plants were grown according to different schedules based on these treatments. The screening programme found Coreopsis grandiflora 'Flying Saucers' to be a long-day plant. Data to form the basis of graphical tracks were taken using variations on commercial schedules. The work provides a nursery-based approach to the continuous improvement of crop scheduling practices.
Abstract:
Microsatellites are widely used in genetic analyses, many of which require reliable estimates of microsatellite mutation rates, yet the factors determining mutation rates are uncertain. The most straightforward and conclusive method by which to study mutation is direct observation of allele transmissions in parent-child pairs, and studies of this type suggest a positive, possibly exponential, relationship between mutation rate and allele size, together with a bias toward length increase. Except for microsatellites on the Y chromosome, however, previous analyses have not made full use of available data and may have introduced bias: mutations have been identified only where child genotypes could not be generated by transmission from parents' genotypes, so that the probability that a mutation is detected depends on the distribution of allele lengths and varies with allele length. We introduce a likelihood-based approach that has two key advantages over existing methods. First, we can make formal comparisons between competing models of microsatellite evolution; second, we obtain asymptotically unbiased and efficient parameter estimates. Application to data composed of 118,866 parent-offspring transmissions of AC microsatellites supports the hypothesis that mutation rate increases exponentially with microsatellite length, with a suggestion that contractions become more likely than expansions as length increases. This would lead to a stationary distribution for allele length maintained by mutational balance. There is no evidence that contractions and expansions differ in their step size distributions.
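One way to write such a model down (a sketch consistent with the relationships the abstract describes, not the authors' exact parameterization) is, for an allele of length $a$:

\[
  \mu(a) = \mu_0 \, e^{\beta a}, \qquad
  \Pr(\text{expansion} \mid \text{mutation at length } a) = \frac{1}{1 + e^{\gamma (a - a_0)}},
\]

with $\mu_0 > 0$ the base rate, $\beta > 0$ giving the exponential increase of mutation rate with length, and $\gamma > 0$ making contractions increasingly likely at longer alleles; these opposing pressures yield a stationary allele-length distribution under mutational balance.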
Abstract:
E-learning systems are becoming increasingly important in higher education. However, the state of the art of e-learning applications, as well as the state of the practice, does not achieve the level of interactivity that current learning theories advocate. In this paper, the possibility of enhancing e-learning systems to achieve deep learning has been studied by replicating an experiment in which students had to learn basic software engineering principles. One group learned these principles using a static approach, while the other group learned the same principles using a system-dynamics-based approach, which provided interactivity and feedback. The results show that, quantitatively, the latter group achieved a better understanding of the principles; furthermore, qualitatively, they enjoyed the learning experience.
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners which operate on discriminative image-intensity features that are defined on small patches and are fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator that requires only a ribbon-like search region and can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking, and compare the method's performance with that of other existing line-search boundary detection methods.
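A minimal sketch of the line-search step, assuming an additive weak-learner score squashed through a logistic function; the learners, patch size, and search range are placeholders rather than the paper's actual choices:

```python
# Hedged sketch: a stand-in boosted classifier scores small patches, and
# each boundary point is moved along its normal to the offset with the
# highest texture-transition probability. Assumes points lie far enough
# inside the image that all patches are in bounds.
import numpy as np

def transition_probability(image, y, x, weak_learners, half=4):
    """Score a (2*half+1)^2 patch with additive weak learners."""
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    score = sum(alpha * h(patch) for alpha, h in weak_learners)
    return 1.0 / (1.0 + np.exp(-score))            # logistic squashing

def refine_boundary(image, points, normals, weak_learners, search=6):
    """Move each boundary point along its normal to the best offset."""
    refined = []
    for (y, x), (ny, nx) in zip(points, normals):
        best = max(range(-search, search + 1),
                   key=lambda t: transition_probability(
                       image, int(round(y + t * ny)), int(round(x + t * nx)),
                       weak_learners))
        refined.append((y + best * ny, x + best * nx))
    return refined
```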
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
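For concreteness, a sketch of a multiplicative nonnegative quadratic programming update for the kernel weights, under the standard formulation minimize (1/2)w'Bw - w'c subject to w >= 0 and sum(w) = 1, where B is the kernel design matrix and c is derived from the Parzen desired response (the exact update variant and stopping rule here are assumptions):

```python
# Hedged sketch of an MNQP update for nonnegative, unit-sum kernel weights.
# Assumes B has nonnegative entries (e.g., a Gaussian Gram matrix).
import numpy as np

def mnqp_weights(B, c, iters=200, eps=1e-12):
    n = len(c)
    w = np.full(n, 1.0 / n)                     # start from uniform weights
    for _ in range(iters):
        t = w / (B @ w + eps)                   # multiplicative factors
        lam = (1.0 - t @ c) / (t.sum() + eps)   # enforce sum(w) = 1
        w = np.clip(t * (c + lam), 0.0, None)   # guard tiny negatives
        w /= w.sum()
    return w
```

Weights driven to zero correspond to kernels pruned from the estimate, which is how the zero-norm penalty promotes sparsity.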
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR stems from the fact that the algorithm automatically selects a small subset of the most significant kernels, related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data; this also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
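One simple reading of D-optimality selection (a greedy sketch, not necessarily the paper's OFR recursion): at each step, add the candidate kernel that most increases the log-determinant of the selected Gram submatrix.

```python
# Hedged sketch of greedy kernel subset selection under a D-optimality
# criterion; an OFR-style recursion would compute these gains
# incrementally instead of recomputing determinants as done here.
import numpy as np

def select_kernels(K, m):
    """K: (n, n) kernel design matrix; m: number of kernels to keep."""
    selected = [int(np.argmax(np.diag(K)))]     # best single kernel first
    while len(selected) < m:
        best, best_logdet = None, -np.inf
        for j in range(len(K)):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = j, logdet
        if best is None:                        # no candidate keeps PD-ness
            break
        selected.append(best)
    return selected
```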
Abstract:
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual "homing" experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on the visual landmark configuration and the relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
Abstract:
An alternative approach to understanding innovation is made using two intersecting ideas. The first is that successful innovation requires consideration of the social and organizational contexts in which it is located; the complex context of construction work is characterized by inter-organizational collaboration, a project-based approach and power distributed amongst collaborating organizations. The second is that innovations can be divided into two modes: ‘bounded’, where the implications of innovation are restricted within a single, coherent sphere of influence, and ‘unbounded’, where the effects of implementation spill over beyond this. Bounded innovations are adequately explained within the construction literature. Less discussed are unbounded innovations, where the collaboration of many firms is required for successful implementation, even though many innovations can be considered unbounded within construction's inter-organizational context. It is argued that unbounded innovations require an approach that understands and facilitates the interactions both among a range of actors and between the actors and technological artefacts. The insights of a sociology-of-technology approach can be applied to the multiplicity of negotiations and alignments that constitute the implementation of unbounded innovation. The utility of concepts from the sociology of technology, including ‘system building’ and ‘heterogeneous engineering’, is demonstrated by applying them to an empirical study of an unbounded innovation on a major construction project (the new terminal at Heathrow Airport, London, UK). This study suggests that ‘system building’ produces outcomes that are not only transformations of practices, processes and systems, but also the potential transformation of the technologies themselves.