951 results for Domain knowledge


Relevance: 60.00%

Abstract:

The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and the evolution of salt-marshes and tidal flats. Channel dimensions range from tens of metres wide and metres deep near the low water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving manual digitising of aerial photographs. This paper describes a semi-automatic knowledge-based network extraction method that is being implemented to work with airborne scanning laser altimetry (and later aerial photography). The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channels from their end points, in the direction of those ends, to join with nearby channels, using the domain knowledge that flow paths should proceed downhill and that any network fragment should be joined to a nearby fragment so as to connect eventually to the open sea.
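
To make the low-level stage concrete, the following is a minimal sketch of multi-scale edge detection on a LiDAR elevation grid. It assumes the data are available as a 2-D NumPy array; the scales and the gradient threshold are illustrative values, not those used in the paper, and the anti-parallel edge association and repair stages are omitted.

```python
import numpy as np
from scipy import ndimage

def detect_channel_edges(dem, scales=(1, 2, 4, 8), threshold=0.5):
    """Detect channel edges in a LiDAR DEM at several scales.

    Channels span widths from metres down to ~20 cm, so the gradient
    magnitude is computed after Gaussian smoothing at each scale (in
    pixels) and the thresholded responses are combined.
    """
    edges = np.zeros(dem.shape, dtype=bool)
    for sigma in scales:
        # sigma-normalised gradient magnitude, so responses at different
        # smoothing scales are comparable before thresholding
        grad = sigma * ndimage.gaussian_gradient_magnitude(dem, sigma=sigma)
        edges |= grad > threshold
    return edges
```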

Relevance: 60.00%

Abstract:

Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees that have different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods, by fusing the LiDAR data with digital map data.

The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
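
As an illustration of how the segmenter's vegetation-height map might feed the flood model, the sketch below assigns a Manning's n friction coefficient to each mesh node from the vegetation height at that node. The height bands and coefficient values are invented placeholders, not calibrated values from the project.

```python
import numpy as np

def friction_from_vegetation(veg_height_m):
    """Map per-node vegetation height (m) to a Manning's n friction
    coefficient for the finite element mesh. Bands and values are
    illustrative only."""
    n = np.full(veg_height_m.shape, 0.03)   # bare floodplain / short grass
    n[veg_height_m > 0.1] = 0.05            # crops and long grass
    n[veg_height_m > 1.0] = 0.10            # shrubs and hedges
    n[veg_height_m > 5.0] = 0.15            # trees
    return n

node_veg = np.array([0.0, 0.4, 2.5, 8.0])   # vegetation height at 4 nodes
print(friction_from_vegetation(node_veg))    # [0.03 0.05 0.1  0.15]
```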

Relevance: 60.00%

Abstract:

Utilising the expressive power of S-Expressions in Learning Classifier Systems often prohibitively increases the search space due to the increased flexibility of the encoding. This work shows that selecting appropriate S-Expression functions through domain knowledge improves scaling in problems, as expected. It is also known that simple alphabets perform well on relatively small problems in a domain, e.g. the ternary alphabet in the 6, 11 and 20 bit MUX domain. Once fit ternary rules had been formed, it was investigated whether higher-order learning was possible and whether this staged learning facilitated the selection of appropriate functions in complex alphabets, e.g. the selection of S-Expression functions. This novel methodology is shown to provide compact results (135-MUX) and exhibits potential for scaling well (1034-MUX), but is only a small step towards introducing abstraction to LCS.
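
For readers unfamiliar with the MUX domain, here is a minimal, self-contained illustration (not the paper's code) of the k + 2^k bit multiplexer and the don't-care matching that the ternary alphabet provides, which is what the "fit ternary rules" encode.

```python
def multiplexer(bits, k):
    """The (k + 2**k)-bit multiplexer: the first k bits form an address
    that selects one of the remaining 2**k data bits (6-MUX: k=2,
    11-MUX: k=3, 20-MUX: k=4, ..., 135-MUX: k=7, 1034-MUX: k=10)."""
    address = int("".join(str(b) for b in bits[:k]), 2)
    return bits[k + address]

def matches(rule, bits):
    """Ternary-alphabet matching: '0' and '1' must equal the input bit,
    '#' is a don't-care symbol."""
    return all(r == "#" or int(r) == b for r, b in zip(rule, bits))

# A maximally general, accurate 6-MUX rule: address 00 selects data bit 0,
# so '000###' predicts class 0 whatever the other data bits are.
assert matches("000###", [0, 0, 0, 1, 1, 0])
assert multiplexer([0, 0, 0, 1, 1, 0], k=2) == 0
```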

Relevance: 60.00%

Abstract:

The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be classified as type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme.

Relevance: 60.00%

Abstract:

Purpose – To describe some research done, as part of an EPSRC-funded project, to assist engineers working together on collaborative tasks.

Design/methodology/approach – Distributed finite state modelling and agent techniques are used successfully in a new hybrid self-organising decision-making system applied to collaborative work support. For the particular application, an analysis of the tasks involved has been performed and these tasks are modelled. The system then employs a novel generic agent model, where task and domain knowledge are isolated from the support system, which provides relevant information to the engineers.

Findings – The method is applied in the despatch of transmission commands within the control room of The National Grid Company Plc (NGC) – tasks are completed significantly faster when the system is utilised.

Research limitations/implications – The paper describes a generic approach and it would be interesting to investigate how well it works in other applications.

Practical implications – Although only one application has been studied, the methodology could equally be applied to a general class of cooperative work environments.

Originality/value – One key part of the work is the novel generic agent model that enables the task and domain knowledge, which are application specific, to be isolated from the support system, and hence allows the method to be applied in other domains.
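
The abstract's key design point, isolating application-specific task and domain knowledge from a generic support agent, might be sketched as follows. The class and handler names are hypothetical illustrations, not the published model.

```python
from typing import Callable, Dict

class GenericSupportAgent:
    """Application-neutral support agent: all task and domain knowledge
    is injected as handlers, so the same agent code could serve domains
    other than transmission-command despatch."""

    def __init__(self, handlers: Dict[str, Callable[[dict], str]]):
        self.handlers = handlers    # domain knowledge lives outside the agent

    def support(self, task: str, context: dict) -> str:
        handler = self.handlers.get(task)
        # Offer information relevant to the engineer's current task, if any
        return handler(context) if handler else "no relevant information"

# Domain-specific knowledge, plugged in from outside the generic agent:
despatch_handlers = {
    "isolate_circuit": lambda ctx: f"current outages affecting {ctx['circuit']}",
}
agent = GenericSupportAgent(despatch_handlers)
print(agent.support("isolate_circuit", {"circuit": "SGT1A"}))
```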

Relevance: 60.00%

Abstract:

The knowledge economy offers opportunity to a broad and diverse community of information systems users to efficiently gain information and know-how for improving qualifications and enhancing productivity in the workplace. Such demand will continue, and users will frequently require optimised and personalised information content. The advancement of information technology and the wide dissemination of information support individual users in constructing new knowledge from their experience in the real-world context. However, designing personalised information provision is challenging because users' requirements and information provision specifications are complex to represent. Existing methods are not able to support this analysis process effectively. This paper presents a mechanism which can holistically facilitate customisation of information provision based on individual users' goals, level of knowledge and cognitive style preferences. An ontology model with embedded norms represents the domain knowledge of information provision in a specific context, where users' needs can be articulated and represented in a user profile. These formal requirements can then be transformed into information provision specifications, which are used to discover suitable information content from repositories and to organise the selected content pedagogically to meet the users' needs. The method is adaptive, enabling an appropriate response to changes in users' requirements during the process of acquiring knowledge and skills.
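
A deliberately simplified stand-in for the profile-to-content matching step is sketched below; the real mechanism works over an ontology with embedded norms, whereas this toy version just filters and ranks. All field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    goal: str                 # topic the user wants to learn
    knowledge_level: int      # 1 = novice ... 3 = advanced
    cognitive_style: str      # e.g. "holist" or "serialist"

@dataclass
class ContentUnit:
    topic: str
    level: int
    style_tags: set = field(default_factory=set)

def select_content(profile, repository):
    """Pick content matching the user's goal and not above their level,
    preferring units tagged with their cognitive style."""
    usable = [c for c in repository
              if c.topic == profile.goal and c.level <= profile.knowledge_level]
    return sorted(usable,
                  key=lambda c: profile.cognitive_style in c.style_tags,
                  reverse=True)

repo = [ContentUnit("sql", 1, {"serialist"}), ContentUnit("sql", 2, {"holist"})]
print(select_content(UserProfile("sql", 2, "holist"), repo))
```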

Relevance: 60.00%

Abstract:

Information modelling is a topic that has been researched a great deal, but many questions around it remain unsolved. An information model is essential in the design of a database, which is the core of an information system. Currently, most databases deal only with information that represents facts, or asserted information. The ability to capture semantic aspects has to be improved, and other types of information, such as temporal and intentional information, should also be considered. Semantic Analysis, a method of information modelling, has offered a way to handle various aspects of information. It employs domain knowledge and communication acts as sources for information modelling. It lends itself to a uniform structure whereby semantic, temporal and intentional information can be captured, providing a sound foundation for building a semantic temporal database.
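
One way to picture the uniform structure the abstract describes is a single record type whose fields carry the semantic content, the asserting agent (reflecting the communication act) and a validity interval (the temporal side). This is a loose illustration of the idea, not a published schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TemporalFact:
    """One record of a semantic temporal database: what is asserted,
    who asserted it, and when it holds. Field names are illustrative."""
    subject: str
    relation: str
    value: str
    asserted_by: str               # agent behind the communication act
    start: str                     # ISO date when the fact began to hold
    finish: Optional[str] = None   # None while the fact still holds

fact = TemporalFact("Alice", "employed_by", "University of Reading",
                    asserted_by="HR", start="2001-09-01")
```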

Relevance: 60.00%

Abstract:

This study aims to demonstrate that data from business games can be an important resource for improving the efficiency and effectiveness of learning. The proposal presented here was developed from preliminary studies of data from Virtual Market games that pointed to the possibility of identifying gaps in learning by analyzing the decisions of students. The proposal helps students to refine their learning processes and equips tutors with strategies for teaching and student assessment. It also complements group discussion and/or debriefing, which are widely used to enhance learning mediated by games; however, those methods depend on characteristics of the individual, such as the ability to communicate and to work in a group, so errors and missed opportunities may go undetected. To illustrate the proposed technique, data sets from two business games were analyzed with a focus on managing working capital, and it was found that students had difficulties managing this task. Similar trends were observed in all categories of students in the study: undergraduate, postgraduate and specialization. This discovery led to an analysis of the data for decisions made during the games, and it was determined that indicators could be developed that were capable of identifying inconsistencies in the decisions. It was decided to apply some basic concepts of financial management, such as the management of operational and non-operational expenditures, as well as production management concepts, such as the use of production capacity. By analyzing the data from the Virtual Market games using the indicator concept, it was possible to detect gaps in the students' domain knowledge. These indicators can therefore be used to analyze the decisions of the players and guide them during the game, increasing their effectiveness and efficiency. As the indicators were developed from specific content, they can also be used to develop teaching materials to support learning. Viewed in this light, the proposal adds new possibilities for using business games in learning: in addition to the intrinsic learning achieved through playing the games, they also assist in driving the learning process. This study considers the applications and the methodology used.
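
As a concrete example of the indicator concept, the sketch below flags a decision in which working capital (current assets minus current liabilities) turns negative while the player is simultaneously expanding output. The field names and the rule itself are illustrative inventions, not the indicators published in the study.

```python
def working_capital_indicator(decision):
    """Flag a potentially inconsistent game decision: negative working
    capital combined with an attempt to expand production suggests a gap
    in the student's financial-management knowledge."""
    working_capital = decision["current_assets"] - decision["current_liabilities"]
    expanding = decision["planned_output"] > decision["previous_output"]
    return {"working_capital": working_capital,
            "inconsistent": working_capital < 0 and expanding}

turn = {"current_assets": 80_000, "current_liabilities": 95_000,
        "planned_output": 1_200, "previous_output": 1_000}
print(working_capital_indicator(turn))   # flags this decision as inconsistent
```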

Relevance: 60.00%

Abstract:

The need to represent both semantics and common sense, and to organise them in a lexical database or knowledge base, has motivated the development of large projects such as Wordnets, CYC and Mikrokosmos. Besides these generic bases, another approach is the construction of ontologies for specific domains. Among the advantages of such an approach is the possibility of greater and more detailed coverage of a specific domain and its terminology. Domain ontologies are important resources in several language processing tasks, especially those related to information retrieval and extraction from textual bases. Information retrieval and even question answering systems can benefit from the domain knowledge represented in an ontology. Besides embracing the terminology of the field, the ontology makes the relationships among the terms explicit.
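
To make the final claim tangible, here is a toy domain ontology with explicit term relations used for query expansion in retrieval. The terms and relations are invented for illustration and are not drawn from the paper.

```python
# Explicit relations among domain terms, as a tiny stand-in for an ontology.
ONTOLOGY = {
    "myocardial infarction": {"synonym": {"heart attack"},
                              "is_a": {"cardiovascular disease"}},
    "heart attack": {"synonym": {"myocardial infarction"}},
}

def expand_query(term, follow_broader=False):
    """Expand a query term with its synonyms (and optionally broader
    terms), the classic retrieval benefit of an explicit ontology."""
    relations = ONTOLOGY.get(term, {})
    expansion = {term} | relations.get("synonym", set())
    if follow_broader:
        expansion |= relations.get("is_a", set())
    return expansion

print(expand_query("heart attack"))
# {'heart attack', 'myocardial infarction'}
```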

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

The ability to utilize information systems (IS) effectively is becoming a necessity for business professionals. However, individuals differ in their abilities to use IS effectively, with some achieving exceptional performance in IS use and others being unable to do so. Therefore, developing a set of skills and attributes to achieve IS user competency, or the ability to realize the fullest potential and the greatest performance from IS use, is important. Various constructs have been identified in the literature to describe IS users with regard to their intentions to use IS and their frequency of IS usage, but studies to describe the relevant characteristics associated with highly competent IS users, or those who have achieved IS user competency, are lacking. This research develops a model of IS user competency by using the Repertory Grid Technique to identify a broad set of characteristics of highly competent IS users. A qualitative analysis was carried out to identify categories and sub-categories of these characteristics. Then, based on the findings, a subset of the model of IS user competency focusing on the IS-specific factors – domain knowledge of and skills in IS, willingness to try and to explore IS, and perception of IS value – was developed and validated using the survey approach. The survey findings suggest that all three factors are relevant and important to IS user competency, with willingness to try and to explore IS being the most significant factor. This research generates a rich set of factors explaining IS user competency, such as perception of IS value. The results not only highlight characteristics that can be fostered in IS users to improve their performance with IS use, but also present research opportunities for IS training and potential hiring criteria for IS users in organizations.

Relevance: 60.00%

Abstract:

Biomedical analyses are becoming increasingly complex, with respect both to the type of data produced and to the procedures executed, and this trend is expected to continue. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. Custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise in the development of the laboratory infrastructure are not fully satisfactory, because they incur undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables domain experts to express laboratory protocols using proper domain knowledge, free from the incidence and mediation of software implementation artefacts. In the proposed system this is made possible by basing the modelling language on an authoritative domain-specific ontology, and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution in a multi-agent execution platform specialised for biomedical laboratories.
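
The transformation step might look like the sketch below: a protocol authored purely in domain-ontology terms is mechanically turned into task descriptions that agents in an execution platform could dispatch. Class and field names are hypothetical, standing in for the system the abstract describes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProtocolStep:
    """A step expressed in domain terms only; the biologist never touches
    the execution machinery."""
    action: str          # term from the domain ontology, e.g. "centrifuge"
    parameters: dict

def to_agent_tasks(steps: List[ProtocolStep]) -> List[dict]:
    """Model-driven transformation: map the domain-level protocol model
    onto task records ready for a multi-agent execution platform."""
    return [{"task": step.action, "args": step.parameters, "status": "pending"}
            for step in steps]

protocol = [ProtocolStep("centrifuge", {"rpm": 3000, "minutes": 10}),
            ProtocolStep("incubate", {"celsius": 37, "hours": 2})]
print(to_agent_tasks(protocol))
```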

Relevance: 60.00%

Abstract:

Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains, and may help humans deal with the growing information overload which often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes where many actors (humans and machines) are involved, leading to relevant socioeconomic benefits. Starting from these two inputs, architectural and technological solutions to enable "environment-related cooperative digital services" are explored here. The analysis starts from the consideration that our environment is a physical space in which diversity is a major value. On the other hand, diversity is detrimental to common technological solutions, and it is an obstacle to mutual understanding. An appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to supporting environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain, and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied beyond the representation of application domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g. with gestures. An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
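
The interaction model behind such platforms can be pictured with the toy sketch below: agents share RDF-like triples through a common space and react to insertions that match a pattern. Real smart-space platforms are networked services with proper ontology support; this in-process stand-in only illustrates the publish/subscribe idea, and the gesture example is invented.

```python
class SmartSpace:
    """A toy, in-process stand-in for a smart-space information broker:
    agents share (subject, predicate, object) triples and react to
    insertions matching a pattern (None = wildcard)."""

    def __init__(self):
        self.triples = set()
        self.subscriptions = []

    def subscribe(self, pattern, callback):
        self.subscriptions.append((pattern, callback))

    def insert(self, triple):
        self.triples.add(triple)
        # Notify every subscriber whose pattern matches the new triple
        for pattern, callback in self.subscriptions:
            if all(p is None or p == t for p, t in zip(pattern, triple)):
                callback(triple)

space = SmartSpace()
# A service reacts whenever any agent publishes a recognised gesture:
space.subscribe(("user1", "performs", None),
                lambda t: print("gesture observed:", t[2]))
space.insert(("user1", "performs", "swipe_left"))   # -> gesture observed: swipe_left
```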

Relevance: 60.00%

Abstract:

Software must be constantly adapted due to evolving domain knowledge and unanticipated requirements changes. To adapt a system at run-time we need to reflect on its structure and its behavior. Object-oriented languages introduced reflection to deal with this issue; however, no reflective approach up to now has tried to provide a unified solution to both structural and behavioral reflection. This paper describes Albedo, a unified approach to structural and behavioral reflection. Albedo is a model of fine-grained, unanticipated, dynamic structural and behavioral adaptation. Instead of providing reflective capabilities as an external mechanism, we integrate them deeply into the environment. We show how explicit meta-objects allow us to provide a range of reflective features and thereby evolve both application models and environments at run-time.
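
Albedo is described in the context of Smalltalk-like environments; as a rough, language-shifted illustration of the idea of explicit meta-objects, the Python sketch below attaches a meta-object to an existing object at run-time and intercepts its message sends without editing the base class. This is an analogy, not Albedo's actual API or model.

```python
class MetaObject:
    """Wrap an existing object and intercept its message sends, so
    behaviour (here: tracing) can be adapted at run-time without
    modifying the target's class."""

    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if callable(attr):
            def adapted(*args, **kwargs):
                print(f"send: {name}{args}")   # behavioural adaptation point
                return attr(*args, **kwargs)
            return adapted
        return attr

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

account = MetaObject(Account())   # attach the meta-object at run-time
account.deposit(10)               # prints "send: deposit(10,)" then deposits
print(account.balance)            # 10
```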