820 results for Graph-based approach


Relevance: 90.00%

Abstract:

During the past few years, there has been much discussion of a shift from rule-based systems to principle-based systems for natural language processing. This paper outlines the major computational advantages of principle-based parsing and its differences from the usual rule-based approach, and surveys several existing principle-based parsing systems used for handling languages as diverse as Warlpiri, English, and Spanish, as well as for language translation.

Relevance: 90.00%

Abstract:

We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components, which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers form an initial set of component templates from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
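The extract-then-cluster step above can be sketched with plain k-means (Lloyd's algorithm) over patch vectors that combine an image feature with the patch location. The `kmeans` helper, the (feature, x, y) encoding and the toy patches are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the component-clustering step: patches are
# represented as (feature, x, y) vectors and grouped with plain
# k-means; the cluster centres serve as initial component templates.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        # assign each patch vector to its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            clusters[j].append(p)
        # recompute each centre as the mean of its cluster
        for j, c in enumerate(clusters):
            if c:
                centres[j] = tuple(sum(v) / len(c) for v in zip(*c))
    return centres

# toy patches: (feature, x, y) triples forming two well-separated groups
patches = [(0.1, 1, 1), (0.2, 1, 2), (0.9, 8, 8), (1.0, 9, 8)]
templates = kmeans(patches, k=2)
```

The returned centres play the role of the initial component templates from which a final subset would then be selected.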

Relevance: 90.00%

Abstract:

Model-based vision allows prior knowledge of the shape and appearance of specific objects to be used in the interpretation of a visual scene; it provides a powerful and natural way to enforce the view-consistency constraint. A model-based vision system has been developed within ESPRIT VIEWS: P2152 which is able to classify and track moving objects (cars and other vehicles) in complex, cluttered traffic scenes. The fundamental basis of the method has been previously reported. This paper presents recent developments which have extended the scope of the system to include (i) multiple cameras, (ii) variable camera geometry, and (iii) articulated objects. All three enhancements have been easily accommodated within the original model-based approach.

Relevance: 90.00%

Abstract:

Most existing crop scheduling models are cultivar specific and are developed using academic resources. As such they rarely meet the particular needs of a grower. A series of protocols has been created to generate effective schedules for a changing product range using data generated on site at a commercial nursery. A screening programme has been developed to help determine a cultivar's photoperiod sensitivity and vernalisation requirement. Experimental conditions were obtained using a cold store facility set to 5 °C and photoperiod cloches. Eight- and 16-hour photoperiod treatments were achieved at low cost by growing plants in cloches of opaque plastic with a motorised rolling screen. Natural light conditions were extended where necessary using a high-pressure sodium lamp. Batches of plants were grown according to different schedules based on these treatments. The screening programme found Coreopsis grandiflora 'Flying Saucers' to be a long-day plant. Data to form the basis of graphical tracks were collected using variations on commercial schedules. The work provides a nursery-based approach to the continuous improvement of crop scheduling practices.

Relevance: 90.00%

Abstract:

Microsatellites are widely used in genetic analyses, many of which require reliable estimates of microsatellite mutation rates, yet the factors determining mutation rates are uncertain. The most straightforward and conclusive method by which to study mutation is direct observation of allele transmissions in parent-child pairs, and studies of this type suggest a positive, possibly exponential, relationship between mutation rate and allele size, together with a bias toward length increase. Except for microsatellites on the Y chromosome, however, previous analyses have not made full use of available data and may have introduced bias: mutations have been identified only where child genotypes could not be generated by transmission from parents' genotypes, so that the probability that a mutation is detected depends on the distribution of allele lengths and varies with allele length. We introduce a likelihood-based approach that has two key advantages over existing methods. First, we can make formal comparisons between competing models of microsatellite evolution; second, we obtain asymptotically unbiased and efficient parameter estimates. Application to data composed of 118,866 parent-offspring transmissions of AC microsatellites supports the hypothesis that mutation rate increases exponentially with microsatellite length, with a suggestion that contractions become more likely than expansions as length increases. This would lead to a stationary distribution for allele length maintained by mutational balance. There is no evidence that contractions and expansions differ in their step size distributions.
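The exponential length dependence reported above can be illustrated with a minimal likelihood comparison. The parametrisation mu(l) = exp(a + b*l), the parameter values and the toy transmission data are assumptions for illustration, not the authors' estimator.

```python
# Illustrative sketch (not the paper's method): model the per-transmission
# mutation probability as p(l) = 1 - exp(-mu(l)) with mu(l) = exp(a + b*l),
# and score parameters by log-likelihood over (allele length, mutated?) pairs.
import math

def log_likelihood(data, a, b):
    ll = 0.0
    for length, mutated in data:
        mu = math.exp(a + b * length)   # rate grows exponentially with length
        p = 1.0 - math.exp(-mu)         # probability of at least one mutation
        ll += math.log(p if mutated else 1.0 - p)
    return ll

# toy transmissions: longer alleles mutate more often
data = [(10, 0), (12, 0), (20, 0), (24, 1), (30, 1)]
flat = log_likelihood(data, a=-6.0, b=0.0)   # length-independent model
expo = log_likelihood(data, a=-9.0, b=0.2)   # exponential-in-length model
```

Comparing log-likelihoods of competing parametrisations (here b = 0 versus b > 0) mirrors the formal model comparisons the abstract describes.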

Relevance: 90.00%

Abstract:

Current e-learning systems are increasing their importance in higher education. However, neither the state of the art nor the state of the practice of e-learning applications achieves the level of interactivity that current learning theories advocate. In this paper, the possibility of enhancing e-learning systems to achieve deep learning has been studied by replicating an experiment in which students had to learn basic software engineering principles. One group learned these principles using a static approach, while the other group learned the same principles using a system-dynamics-based approach, which provided interactivity and feedback. The results show that, quantitatively, the latter group achieved a better understanding of the principles; furthermore, qualitatively, they enjoyed the learning experience.

Relevance: 90.00%

Abstract:

We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners, which operate on discriminative image-intensity features that are defined on small patches and are fast to compute. A database that is designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, which requires a ribbon-like search region and can handle complex texture structures without requiring a large number of observations. We demonstrate results in the context both of interactive 2D delineation and of fast 3D tracking, and compare its performance with other existing methods for line-search boundary detection.
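The score-then-search idea can be sketched as follows: a sum of decision stumps stands in for the weak-learner ensemble, a logistic map turns its score into a transition probability, and the most probable position along the normal is taken as the boundary. The stumps, the single scalar feature and the profile are illustrative assumptions, not the paper's classifier.

```python
# Hedged sketch of the line-search step: a stump-ensemble score is mapped
# to a boundary probability, and the peak along the normal is selected.
import math

def stump_score(x, stumps):
    # each stump: (threshold, weight), voting +w if the feature exceeds it
    return sum(w if x > t else -w for t, w in stumps)

def boundary_position(profile, stumps):
    # profile: feature values sampled along the normal to the initial estimate
    probs = [1.0 / (1.0 + math.exp(-stump_score(x, stumps))) for x in profile]
    return max(range(len(profile)), key=lambda i: probs[i])

stumps = [(0.5, 1.0), (0.7, 0.5)]
profile = [0.1, 0.2, 0.9, 0.3, 0.2]   # transition feature peaks at index 2
pos = boundary_position(profile, stumps)
```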

Relevance: 90.00%

Abstract:

A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used for achieving mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset of the design matrix, the proposed zero-norm based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
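The multiplicative nonnegative quadratic programming step can be sketched with a Lee-Seung-style update w_i <- w_i * r_i / (Bw)_i for minimising 0.5*w'Bw - r'w over nonnegative weights; renormalising keeps the weights a valid mixture. The Gram matrix B and Parzen-window targets r below are toy values, and the exact recursion in the paper may differ in detail.

```python
# Hedged sketch of a multiplicative nonnegative QP update: for a Gram
# matrix B with positive entries and nonnegative targets r, the update
# keeps the weights nonnegative while reducing 0.5*w'Bw - r'w.

def mnqp(B, r, iters=200):
    n = len(r)
    w = [1.0 / n] * n
    for _ in range(iters):
        Bw = [sum(B[i][j] * w[j] for j in range(n)) for i in range(n)]
        w = [w[i] * r[i] / Bw[i] for i in range(n)]
        s = sum(w)
        w = [wi / s for wi in w]   # keep the weights summing to one
    return w

# toy 3-kernel problem: the second kernel matches the target best
B = [[1.0, 0.2, 0.1], [0.2, 1.0, 0.2], [0.1, 0.2, 1.0]]
r = [0.2, 0.9, 0.2]
w = mnqp(B, r)
```

The multiplicative form never changes a weight's sign, which is what makes it convenient for density weights that must stay nonnegative.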

Relevance: 90.00%

Abstract:

This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR stems from the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
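The D-optimality-driven selection can be caricatured as a greedy subset search that, at each step, adds the kernel that most increases the determinant of the selected Gram submatrix. This is a simplified stand-in for the orthogonal forward regression procedure, and the toy Gram matrix is invented.

```python
# Hedged sketch of greedy D-optimal kernel selection over a Gram matrix B.

def det(M):
    # plain Gaussian elimination with partial pivoting (stdlib only)
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def d_optimal_subset(B, m):
    chosen, remaining = [], list(range(len(B)))
    for _ in range(m):
        def gain(j):
            idx = chosen + [j]
            return det([[B[a][b] for b in idx] for a in idx])
        best = max(remaining, key=gain)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy Gram matrix: kernels 0 and 2 are near-duplicates
B = [[1.0, 0.1, 0.95], [0.1, 1.0, 0.1], [0.95, 0.1, 1.0]]
subset = d_optimal_subset(B, 2)
```

On the toy matrix the greedy pass picks kernels 0 and 1 and avoids kernel 2, whose near-duplication of kernel 0 would make the selected submatrix nearly singular.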

Relevance: 90.00%

Abstract:

View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.

Relevance: 90.00%

Abstract:

An alternative approach to understanding innovation is developed using two intersecting ideas. The first is that successful innovation requires consideration of the social and organizational contexts in which it is located. The complex context of construction work is characterized by inter-organizational collaboration, a project-based approach and power distributed amongst collaborating organizations. The second is that innovations can be divided into two modes: ‘bounded’, where the implications of innovation are restricted within a single, coherent sphere of influence, and ‘unbounded’, where the effects of implementation spill over beyond this. Bounded innovations are adequately explained within the construction literature. However, less discussed are unbounded innovations, where many firms' collaboration is required for successful implementation, even though many innovations can be considered unbounded within construction's inter-organizational context. It is argued that unbounded innovations require an approach that can understand and facilitate the interactions both among a range of actors and between those actors and technological artefacts. The insights from a sociology of technology approach can be applied to the multiplicity of negotiations and alignments that constitute the implementation of unbounded innovation. The utility of concepts from the sociology of technology, including ‘system building’ and ‘heterogeneous engineering’, is demonstrated by applying them to an empirical study of an unbounded innovation on a major construction project (the new terminal at Heathrow Airport, London, UK). This study suggests that ‘system building’ yields outcomes that are not only transformations of practices, processes and systems, but also the potential transformation of technologies themselves.

Relevance: 90.00%

Abstract:

The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
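A threshold-adoption simulation in the spirit of the prototype described above can be sketched as follows; the network-free threshold rule, the parameter values and the subsidy mechanism are all illustrative assumptions, not the paper's model.

```python
# Minimal agent-based sketch of green-technology adoption: each household
# adopts once the current adoption share plus a policy subsidy crosses its
# personal threshold.
import random

def simulate(n=50, steps=30, subsidy=0.0, seed=1):
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n)]
    adopted = [i < 2 for i in range(n)]      # two early adopters
    for _ in range(steps):
        share = sum(adopted) / n
        for i in range(n):
            # adopt when social pressure plus subsidy exceeds the threshold
            if not adopted[i] and share + subsidy >= thresholds[i]:
                adopted[i] = True
    return sum(adopted) / n

low = simulate(subsidy=0.0)
high = simulate(subsidy=0.3)
```

Running the same seeded population with and without a subsidy gives a crude read-out of a policy measure's effect on final adoption share, which is the kind of question the prototype is used to explore.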

Relevance: 90.00%

Abstract:

Farming freshwater prawns with fish in rice fields is widespread in coastal regions of southwest Bangladesh because of favourable resources and ecological conditions. This article provides an overview of an ecosystem-based approach to integrated prawn-fish-rice farming in southwest Bangladesh. The practice of prawn and fish farming in rice fields is a form of integrated aquaculture-agriculture, which provides a wide range of social, economic and environmental benefits. Integrated prawn-fish-rice farming plays an important role in the economy of Bangladesh, earning foreign exchange and increasing food production. However, this unique farming system in coastal Bangladesh is particularly vulnerable to climate change. We suggest that community-based adaptation strategies must be developed to cope with the challenges. We propose that integrated prawn-fish-rice farming could be relocated from the coastal region to less vulnerable upland areas, but caution that this will require appropriate adaptation strategies and an enabling institutional environment.

Relevance: 90.00%

Abstract:

Business process modelling can help an organisation better understand and improve its business processes. Most business process modelling methods adopt a task- or activity-based approach to identifying business processes. Within our work, we use activity theory to categorise elements within organisations as being either human beings, activities or artefacts. Due to the direct relationship between these three elements, an artefact-oriented approach to organisation analysis emerges. Organisational semiotics highlights the ontological dependency between affordances within an organisation. We analyse the ontological dependency between organisational elements, and therefore produce the ontology chart for artefact-oriented business process modelling in order to clarify the relationships between the elements of an organisation. Furthermore, we adopt the techniques of semantic analysis and norm analysis, from organisational semiotics, to develop the artefact-oriented method for business process modelling. The proposed method provides a novel perspective for identifying and analysing business processes, as well as agents and artefacts, as the artefact-oriented perspective demonstrates the fundamental flow of an organisation. The modelling results enable an organisation to understand and model its processes from an artefact perspective, viewing an organisation as a network of artefacts. The information and practice captured and stored in artefacts can also be shared and reused between organisations that produce similar artefacts.

Relevance: 90.00%

Abstract:

Concern that European forest biodiversity is depleted and declining has provoked widespread efforts to improve management practices. To gauge the success of these actions, appropriate monitoring of forest ecosystems is paramount. Multi-species indicators are frequently used to assess the state of biodiversity and its response to implemented management, but generally applicable and objective methodologies for species' selection are lacking. Here we use a niche-based approach, underpinned by coarse quantification of species' resource use, to objectively select species for inclusion in a pan-European forest bird indicator. We identify both the minimum number of species required to deliver full resource coverage and the most sensitive species' combination, and explore the trade-off between two key characteristics, sensitivity and redundancy, associated with indicators comprising different numbers of species. We compare our indicator to an existing forest bird indicator selected on the basis of expert opinion and show it is more representative of the wider community. We also present alternative indicators for regional and forest type specific monitoring and show that species' choice can have a significant impact on the indicator and consequent projections about the state of the biodiversity it represents. Furthermore, by comparing indicator sets drawn from currently monitored species and the full forest bird community, we identify gaps in the coverage of the current monitoring scheme. We believe that adopting this niche-based framework for species' selection supports the objective development of multi-species indicators and that it has good potential to be extended to a range of habitats and taxa.
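The "minimum number of species required to deliver full resource coverage" question above is an instance of set cover, which a greedy pass solves approximately. The species names and resource sets below are invented for illustration; the paper's niche quantification is far richer.

```python
# Hedged sketch of niche-based species selection as greedy set cover:
# repeatedly pick the species covering the most still-uncovered resources.

def greedy_cover(species):
    uncovered = set().union(*species.values())
    chosen = []
    while uncovered:
        best = max(species, key=lambda s: len(species[s] & uncovered))
        chosen.append(best)
        uncovered -= species[best]
    return chosen

# toy resource-use table (invented)
species = {
    "treecreeper": {"bark", "canopy"},
    "woodpecker":  {"bark", "deadwood"},
    "flycatcher":  {"canopy", "understorey"},
    "wren":        {"understorey"},
}
indicator = greedy_cover(species)
```

The greedy choice is not guaranteed optimal, but it yields a small species combination with full resource coverage, mirroring the trade-off between indicator size and redundancy discussed above.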