915 results for Aboriginal knowledge domain


Relevance: 30.00%

Abstract:

Debuggers are crucial tools for developing object-oriented software systems, as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support, leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements, and show how our approach improves debugging by adapting the debugger to several domains.
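A minimal sketch of the core idea, assuming hypothetical names and a deliberately simplified structure (this is not the paper's actual API): domain-specific debugger extensions declare an activation predicate plus their own views and operations, and a dispatcher selects, at run time, the extensions whose predicate matches the current debugging context.

```python
# Hypothetical illustration of "moldable" debugger extensions; names are made up.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class DebuggerExtension:
    name: str
    activates_on: Callable[[dict], bool]                   # predicate over the debugging context
    views: List[str] = field(default_factory=list)         # domain-specific views
    operations: List[str] = field(default_factory=list)    # domain-specific stepping operations


def select_extensions(context: dict, extensions: List[DebuggerExtension]) -> List[DebuggerExtension]:
    """Return the extensions applicable to the current run-time context."""
    return [ext for ext in extensions if ext.activates_on(context)]


# Example: a parser-specific debugger activates only inside parser frames.
parser_debugger = DebuggerExtension(
    name="parser debugger",
    activates_on=lambda ctx: ctx.get("receiver_class", "").startswith("Parser"),
    views=["input stream", "production stack"],
    operations=["step to next production", "run to token"],
)
generic_debugger = DebuggerExtension("generic debugger", lambda ctx: True,
                                     ["stack", "variables"], ["step into", "step over"])

active = select_extensions({"receiver_class": "ParserExpression"}, [parser_debugger, generic_debugger])
print([ext.name for ext in active])  # both match; a real dispatcher would prefer the most specific one
```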

Relevance: 30.00%

Abstract:

The implementation of new surgical techniques offers opportunities but carries risks. Usually, several years pass before a critical appraisal and a balanced opinion of a new treatment method become available, relying on evidence from the literature and expert opinion. The frozen elephant trunk (FET) technique has been increasingly used to treat complex pathologies of the aortic arch and the descending aorta, but there is still an ongoing discussion within the surgical community about the optimal indications. This paper represents a common effort of the Vascular Domain of EACTS together with several surgeons with particular expertise in aortic surgery, and summarizes the current knowledge and the state of the art of the FET technique. The majority of the information about the FET technique has been extracted from 97 focused publications available in the PubMed database (cohort studies, case reports, reviews, small series, meta-analyses and best evidence topics) published in English.

Relevance: 30.00%

Abstract:

Several theories assume that successful team coordination is partly based on knowledge that helps anticipate the individual contributions required in a given task situation. It has been argued that a more ecological perspective needs to be considered in contexts that evolve dynamically and unpredictably. In football, defensive plays are usually coordinated according to strategic concepts spanning all team members and large areas of the pitch. Offensive plays, on the other hand, involve fewer players, as they are harder to plan in advance and strongly constrained by ecological characteristics. The aim of this study is to test the effects of ecological constraints and player knowledge on decision making in offensive game scenarios. It is hypothesized that both knowledge about team members and situational constraints influence decisional processes, with the effects of situational constraints expected to be of higher magnitude. Two teams playing in the fourth league of the Swiss Football Federation participate in the study. Forty customized game scenarios were developed based on the coaches' information about player positions and game strategies. Each player was shown in ball possession four times. Participants were asked to take the perspective of the player in possession of the ball and to choose a passing destination and a recipient. Participants then rated domain-specific strengths (e.g., technical skills, game intelligence) of each of their teammates. Multilevel models for categorical dependent variables (team members) will be specified, with player knowledge (rated skills) and ecological constraints (operationalized as each player's proximity and availability for ball reception) included as predictor variables. Data are currently being collected. Results will yield effects of parameters that are stable across situations as well as of variable parameters bound to the situational context, providing insight into the degree to which ecological constraints and more enduring team knowledge are involved in decisional processes aimed at coordinating interpersonal action.
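A simplified stand-in for the analysis described above, not the authors' multilevel model: each candidate pass recipient in a scenario becomes one row with predictors for player knowledge (rated skill) and ecological constraints (proximity, availability), and a binary outcome marking whether that teammate was chosen. The data below are synthetic and the predictor weights are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_rows = 400                             # scenario x candidate-recipient pairs

skill = rng.normal(size=n_rows)          # rated domain-specific strengths (knowledge)
proximity = rng.normal(size=n_rows)      # closeness to the ball carrier (ecological)
availability = rng.normal(size=n_rows)   # openness for ball reception (ecological)

# Assumed (synthetic) generative weights: ecological constraints dominate.
logits = 0.4 * skill + 1.2 * proximity + 1.0 * availability
chosen = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = np.column_stack([skill, proximity, availability])
model = LogisticRegression().fit(X, chosen)
print(dict(zip(["skill", "proximity", "availability"], model.coef_[0].round(2))))
```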

Relevance: 30.00%

Abstract:

We introduce gradient-domain rendering for Monte Carlo image synthesis. While previous gradient-domain Metropolis Light Transport sought to distribute more samples in areas of high gradients, we show, in contrast, that estimating image gradients is also possible using standard (non-Metropolis) Monte Carlo algorithms, and furthermore, that even without changing the sample distribution, this often leads to significant error reduction. This broadens the applicability of gradient rendering considerably. To gain insight into the conditions under which gradient-domain sampling is beneficial, we present a frequency analysis that compares Monte Carlo sampling of gradients followed by Poisson reconstruction to traditional Monte Carlo sampling. Finally, we describe Gradient-Domain Path Tracing (G-PT), a relatively simple modification of the standard path tracing algorithm that can yield far superior results.
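A minimal sketch of the reconstruction step only (the rendering side, i.e. how gradient samples are obtained, is not shown): a noisy primal image and Monte Carlo estimates of its x/y finite-difference gradients are combined by a screened Poisson solve in the Fourier domain. The parameter alpha and the toy inputs are illustrative assumptions.

```python
import numpy as np


def screened_poisson(primal, gx, gy, alpha=0.2):
    """Minimize alpha*||I - primal||^2 + ||dx(I) - gx||^2 + ||dy(I) - gy||^2
    with periodic boundaries, using forward-difference operators."""
    h, w = primal.shape
    fx = np.fft.fftfreq(w)[None, :]
    fy = np.fft.fftfreq(h)[:, None]
    dx = np.exp(2j * np.pi * fx) - 1.0   # Fourier multiplier of the forward difference in x
    dy = np.exp(2j * np.pi * fy) - 1.0   # ... and in y

    num = (alpha * np.fft.fft2(primal)
           + np.conj(dx) * np.fft.fft2(gx)
           + np.conj(dy) * np.fft.fft2(gy))
    den = alpha + np.abs(dx) ** 2 + np.abs(dy) ** 2
    return np.real(np.fft.ifft2(num / den))


# Toy usage: noisy primal image, (here noise-free) gradients of a smooth ramp.
truth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
gx = np.roll(truth, -1, axis=1) - truth
gy = np.roll(truth, -1, axis=0) - truth
noisy = truth + 0.1 * np.random.default_rng(0).normal(size=truth.shape)
recon = screened_poisson(noisy, gx, gy)
print(float(np.mean((noisy - truth) ** 2)), float(np.mean((recon - truth) ** 2)))
```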

Relevance: 30.00%

Abstract:

We propose dual-domain filtering, an image processing paradigm that couples spatial-domain with frequency-domain filtering. Our dual-domain filter removes artifacts such as the residual noise of other image denoising methods and compression artifacts. Moreover, iterating the filter achieves state-of-the-art image denoising results, but with a much simpler algorithm than competing approaches. The simplicity and versatility of the dual-domain filter make it an attractive tool for image processing.
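A toy illustration of the spatial/frequency coupling only, not the paper's per-pixel algorithm (which uses bilateral weights and short-time Fourier transforms): a spatial Gaussian pass removes broad noise, and a frequency-domain soft-shrinkage pass on the residual suppresses what remains. Parameters and test data are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def toy_dual_domain(noisy, sigma_spatial=2.0, shrink=0.5):
    base = gaussian_filter(noisy, sigma_spatial)         # spatial-domain pass
    residual = noisy - base                              # detail + remaining noise
    spec = np.fft.fft2(residual)                         # frequency-domain pass
    mag = np.abs(spec)
    spec *= np.maximum(mag - shrink * mag.mean(), 0) / (mag + 1e-12)  # soft shrinkage
    return base + np.real(np.fft.ifft2(spec))


rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, 6, 128)), np.cos(np.linspace(0, 6, 128)))
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = toy_dual_domain(noisy)
print(float(np.mean((noisy - clean) ** 2)), float(np.mean((denoised - clean) ** 2)))
```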

Relevance: 30.00%

Abstract:

We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
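A sketch of screened Poisson reconstruction written as a sparse least-squares system, showing where extra regularization rows plug in. The paper's constraints come from patches of feature images (normals, textures, positions); here a plain smoothness term stands in for them, so this only illustrates the mechanism of adding constraints, not the actual regularizer.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr


def difference_ops(h, w):
    """Forward-difference operators Dx, Dy on an h x w grid (no wrap-around)."""
    def d(n):
        e = np.ones(n - 1)
        return sp.diags([-e, e], [0, 1], shape=(n - 1, n))
    return sp.kron(sp.identity(h), d(w)), sp.kron(d(h), sp.identity(w))


h, w = 32, 32
rng = np.random.default_rng(0)
truth = np.outer(np.linspace(0, 1, h), np.linspace(0, 1, w))
Dx, Dy = difference_ops(h, w)
pixels = (truth + 0.1 * rng.normal(size=truth.shape)).ravel()   # noisy primal samples
gx, gy = Dx @ truth.ravel(), Dy @ truth.ravel()                 # (here noise-free) gradient samples

alpha, beta = 0.3, 0.05
A = sp.vstack([alpha * sp.identity(h * w),   # screening term: stay close to the sampled pixels
               Dx, Dy,                       # match the sampled gradients
               beta * Dx, beta * Dy])        # extra regularization rows (smoothness stand-in)
b = np.concatenate([alpha * pixels, gx, gy,
                    np.zeros(Dx.shape[0]), np.zeros(Dy.shape[0])])
recon = lsqr(A.tocsr(), b)[0].reshape(h, w)
print(float(np.mean((pixels.reshape(h, w) - truth) ** 2)),
      float(np.mean((recon - truth) ** 2)))
```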

Relevance: 30.00%

Abstract:

An effective solution for modeling and applying planning domain knowledge for deliberation and action in probabilistic, agent-oriented control is presented. Specifically, the addition of a task-structure planning component and supporting components to an agent-oriented architecture and agent implementation is described. For agent control in risky or uncertain environments, an approach and method of goal reduction to task plan sets and schedules of action is described. Additionally, some issues related to component-wise, situation-dependent control of a task-planning agent that schedules its tasks separately from planning them are motivated and discussed.
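A hedged sketch of the goal-reduction idea only: a goal is reduced, via a small library of task-structure templates, to primitive tasks that a separate scheduler then orders. The task names, the method library, and the trivial scheduler are illustrative assumptions, not the architecture described in the paper.

```python
from typing import Dict, List

# Task-structure library: each abstract task maps to alternative decompositions.
METHODS: Dict[str, List[List[str]]] = {
    "deliver_sample": [["navigate_to_site", "collect_sample", "navigate_to_base", "drop_off"]],
    "navigate_to_site": [["plan_route", "follow_route"]],
    "navigate_to_base": [["plan_route", "follow_route"]],
}


def reduce_goal(task: str) -> List[str]:
    """Recursively reduce a task to primitive actions (first applicable method wins)."""
    if task not in METHODS:
        return [task]                       # primitive task: schedule as-is
    plan: List[str] = []
    for subtask in METHODS[task][0]:
        plan.extend(reduce_goal(subtask))
    return plan


def schedule(plan: List[str], start: float = 0.0, duration: float = 1.0):
    """Trivial scheduler: assign consecutive time slots to the reduced tasks."""
    return [(t, start + i * duration) for i, t in enumerate(plan)]


print(schedule(reduce_goal("deliver_sample")))
```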

Relevance: 30.00%

Abstract:

Zircons from the oldest magmatic and metasedimentary rocks in the Podolia domain of the Ukrainian shield were studied and dated by the U-Pb method on a NORDSIM secondary-ion mass spectrometer. The age of zircon cores in enderbite gneisses sampled in the Kazachii Yar and Odessa quarries, on opposite banks of the Yuzhnyi Bug River, reaches 3790 Ma. Cores of terrigenous zircons in quartzites from the Odessa quarry, as well as in garnet gneisses from the Zaval'e graphite quarry, have ages within 3650-3750 Ma. Zircon rims record two metamorphic events, around 2750-2850 Ma and 1900-2000 Ma. The extremely low U content in zircons of the second age group indicates granulite-facies metamorphic conditions in the Paleoproterozoic within the Podolia domain. Measured data on orthorocks (enderbite gneiss) and metasedimentary rocks unambiguously suggest the existence of ancient Paleoarchean crust in the Podolia (Dniester-Bug) domain of the Ukrainian shield and contribute to our knowledge of the scale of formation and the geochemical features of the primordial crust.

Relevance: 30.00%

Abstract:

Time-domain laser reflectance spectroscopy (TDRS) was applied for the first time to evaluate internal fruit quality. This technique, known in medicine-related fields, had not previously been used in agricultural or food research. It allows the simultaneous non-destructive measurement of two optical characteristics of the tissues: light scattering and absorption. Models to measure firmness and sugar and acid contents in kiwifruit, tomato, apple, peach, nectarine and other fruits were built using sequential statistical techniques: principal component analysis, multiple stepwise linear regression, clustering and discriminant analysis. Consistent correlations were established between the two parameters measured with TDRS, the absorption and transport scattering coefficients, and the chemical constituents (sugars and acids) and firmness, respectively. Classification models were built to sort fruits into three quality grades according to their firmness, soluble solids and acidity.
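A hedged sketch of the modeling chain described above (compression, regression, discriminant classification) on synthetic data standing in for absorption and transport-scattering coefficients measured at several wavelengths; it is not the authors' calibration, and all numbers are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_fruit, n_wl = 200, 8
latent = rng.normal(size=(n_fruit, 1))                       # assumed firmness-related factor
absorption = latent + 0.3 * rng.normal(size=(n_fruit, n_wl)) # absorption coefficients per wavelength
scattering = rng.normal(size=(n_fruit, n_wl))                # transport scattering coefficients
X = np.hstack([absorption, scattering])
firmness = 2.0 * latent.ravel() + 0.3 * rng.normal(size=n_fruit)

# 1. Compress the optical measurements, 2. regress firmness on the scores,
# 3. sort fruit into three quality grades with a discriminant model.
scores = PCA(n_components=4).fit_transform(X)
reg = LinearRegression().fit(scores, firmness)
grades = np.digitize(firmness, np.quantile(firmness, [1 / 3, 2 / 3]))  # grades 0, 1, 2
lda = LinearDiscriminantAnalysis().fit(scores, grades)
print(round(reg.score(scores, firmness), 2), round(lda.score(scores, grades), 2))
```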

Relevance: 30.00%

Abstract:

Understanding the structure and dynamics of the intricate network of connections among people who consume products through the Internet is an extremely useful asset for studying emergent properties related to social behavior. This knowledge could be useful, for example, to improve the performance of personal recommendation algorithms. In this contribution, we analyzed five-year records of movie-rating transactions provided by Netflix, a movie rental platform where users rate movies from an online catalog. This dataset can be studied as a bipartite user-item network whose structure evolves in time. Even though several topological properties of subsets of this bipartite network have been reported with a model that combines random and preferential attachment mechanisms [Beguerisse Díaz et al., 2010], there are still many aspects worth exploring, as they are connected to relevant phenomena underlying the evolution of the network. In this work, we test the hypothesis that bursty human behavior is essential to describe how a bipartite user-item network evolves in time. To that end, we propose a novel model that combines, for user nodes, a network growth prescription based on a preferential attachment mechanism acting not only in the topological domain (i.e. based on node degrees) but also in the time domain. In the case of items, the model mixes degree-preferential attachment and random selection. With these ingredients, the model is not only able to reproduce the asymptotic degree distribution, but also shows excellent agreement with the Netflix data in several time-dependent topological properties.
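A toy simulation of the growth rule sketched above: when a new rating arrives, the item side mixes degree-preferential and uniformly random selection, while the user side favours users that are both well connected and recently active (preferential attachment in the topological and the time domain). All parameter values are illustrative assumptions, not values fitted to the Netflix data.

```python
import random

random.seed(0)
user_deg, item_deg, user_last = {0: 1}, {0: 1}, {0: 0}

for t in range(1, 5000):
    # User side: attachment weight = degree * recency factor (topological + time domain).
    weights = {u: user_deg[u] * 0.99 ** (t - user_last[u]) for u in user_deg}
    if random.random() < 0.05:                 # occasionally a brand-new user arrives
        u = max(user_deg) + 1
        user_deg[u], user_last[u] = 0, t
    else:
        u = random.choices(list(weights), weights=list(weights.values()))[0]
    # Item side: mix degree-preferential attachment with uniformly random selection.
    if random.random() < 0.05:                 # a new item enters the catalog
        i = max(item_deg) + 1
        item_deg[i] = 0
    elif random.random() < 0.7:
        i = random.choices(list(item_deg), weights=list(item_deg.values()))[0]
    else:
        i = random.choice(list(item_deg))
    user_deg[u] = user_deg.get(u, 0) + 1       # register the new user-item link (rating)
    item_deg[i] += 1
    user_last[u] = t

print(len(user_deg), "users,", len(item_deg), "items,",
      "max user degree", max(user_deg.values()), "max item degree", max(item_deg.values()))
```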

Relevance: 30.00%

Abstract:

Knowledge resource reuse has become a popular approach within the ontology engineering field, mainly because it can speed up the ontology development process, saving time and money and promoting the application of good practices. The NeOn Methodology provides guidelines for reuse. These guidelines include the selection of the most appropriate knowledge resources for reuse in ontology development. This is a complex decision-making problem where different conflicting objectives, like the reuse cost, understandability, integration workload and reliability, have to be taken into account simultaneously. GMAA is a PC-based decision support system based on an additive multi-attribute utility model that is intended to allay the operational difficulties involved in the Decision Analysis methodology. The paper illustrates how it can be applied to select multimedia ontologies for reuse to develop a new ontology in the multimedia domain. It also demonstrates that the sensitivity analyses provided by GMAA are useful tools for making a final recommendation.
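A hedged sketch of the additive multi-attribute utility model underlying this kind of analysis: each candidate ontology receives a score that is a weighted sum of single-attribute utilities, followed by a crude weight-variation check in the spirit of a sensitivity analysis. The candidate names, attribute scores and weights below are made up for illustration and are not GMAA's interface.

```python
candidates = {
    "ontology_A": {"reuse_cost": 0.4, "understandability": 0.9, "integration": 0.6, "reliability": 0.8},
    "ontology_B": {"reuse_cost": 0.8, "understandability": 0.5, "integration": 0.7, "reliability": 0.6},
}
weights = {"reuse_cost": 0.35, "understandability": 0.25, "integration": 0.20, "reliability": 0.20}


def additive_utility(scores: dict, w: dict) -> float:
    """u(candidate) = sum_i w_i * u_i(candidate), with all u_i already scaled to [0, 1]."""
    return sum(w[a] * scores[a] for a in w)


ranking = sorted(candidates, key=lambda c: additive_utility(candidates[c], weights), reverse=True)
print(ranking)

# Vary one weight and check whether the recommendation changes.
for w_cost in (0.1, 0.35, 0.6):
    ws = dict(weights, reuse_cost=w_cost)
    total = sum(ws.values())
    ws = {a: v / total for a, v in ws.items()}   # renormalize the weights
    best = max(candidates, key=lambda c: additive_utility(candidates[c], ws))
    print(w_cost, best)
```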

Relevance: 30.00%

Abstract:

Traditionally, the use of data analysis techniques has been one of the main ways of discovering knowledge hidden in large amounts of data collected by experts in different domains. Visualization techniques have also been used to enhance and facilitate this process. However, there are serious limitations in the process of knowledge acquisition, as it is often slow, tedious and frequently fruitless, owing to the difficulty humans have in understanding large datasets. Another major drawback, rarely considered by experts who analyze large datasets, is the involuntary degradation to which they subject the data during analysis tasks, prior to drawing the final conclusions. Degradation means that the data can lose part of their original properties; it is usually caused by improper data reduction, which alters their original nature and often leads to erroneous interpretations and conclusions that may have serious implications. Furthermore, this issue becomes critical when the data belong to the medical or biological domain and people's lives depend on the final decisions, which are sometimes made improperly. This is the motivation of this thesis, which proposes a new visual framework, called MedVir, that combines the power of advanced visualization and data mining techniques to address these major problems in the discovery of valid information. The main objective is to make the knowledge acquisition process that experts face when working with large datasets in different domains easier, more understandable, more intuitive and faster. To achieve this, a strong reduction in the size of the data is first carried out in order to make the data easier for the expert to manage, while preserving their original properties intact as far as possible. Effective visualization techniques are then used to represent the resulting data, allowing the expert to interact with them easily and intuitively, to carry out different data analysis tasks, and thus to visually stimulate their comprehension. The underlying goal is to abstract the expert, as far as possible, from the complexity of the original data and to present a more understandable version, thereby facilitating and accelerating the task of knowledge discovery. MedVir has been successfully applied, among other fields, to magnetoencephalography (MEG), specifically to the prediction of rehabilitation outcomes after traumatic brain injury (TBI). The results demonstrate the effectiveness of the framework in accelerating and facilitating the process of knowledge discovery on real-world datasets.
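A hedged illustration of the reduce-then-visualize idea described above, not MedVir itself: a high-dimensional dataset is strongly reduced and then shown as a 2-D scatter plot that an expert can inspect. The dataset and the choice of PCA are assumptions for the sake of a runnable example.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=300, n_features=50, n_informative=5, random_state=0)
embedding = PCA(n_components=2).fit_transform(X)   # strong dimensionality reduction

plt.scatter(embedding[:, 0], embedding[:, 1], c=y, s=15)
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("Reduced view of a 50-dimensional dataset")
plt.show()
```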

Relevance: 30.00%

Abstract:

Knowledge modeling tools are software tools that follow a modeling approach to help developers build a knowledge-based system. The purpose of this article is to show the advantages of using this type of tool in the development of complex knowledge-based decision support systems. To do so, the article describes the development of a system called SAIDA in the domain of hydrology with the help of the KSM modeling tool. SAIDA operates on real-time data recorded by sensors (rainfall, water levels, flows, etc.). It follows a multi-agent architecture to interpret the data, predict future behavior and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations. KSM was especially useful for designing and implementing the complex knowledge-based architecture in an efficient way.
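A hedged sketch of the multi-agent pattern described above: agents specialised in interpretation, prediction and control recommendation pass results along as sensor readings arrive. The class names, thresholds and logic are illustrative assumptions, not SAIDA's actual knowledge model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Reading:
    rainfall_mm: float
    river_level_m: float


class InterpretationAgent:
    def run(self, r: Reading) -> str:
        # Hypothetical thresholds for illustration only.
        return "alert" if r.river_level_m > 4.0 or r.rainfall_mm > 50 else "normal"


class PredictionAgent:
    def run(self, history: List[Reading]) -> float:
        # Naive trend extrapolation of the river level for the next time step.
        if len(history) < 2:
            return history[-1].river_level_m
        return history[-1].river_level_m + (history[-1].river_level_m - history[-2].river_level_m)


class ControlAgent:
    def run(self, state: str, predicted_level: float) -> str:
        return "open spillway gates" if state == "alert" or predicted_level > 4.5 else "no action"


history = [Reading(10, 3.8), Reading(35, 4.2)]
state = InterpretationAgent().run(history[-1])
forecast = PredictionAgent().run(history)
print(state, round(forecast, 2), ControlAgent().run(state, forecast))
```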

Relevance: 30.00%

Abstract:

This paper describes the adaptation approach of reusable knowledge representation components used in the KSM environment for the formulation and operationalisation of structured knowledge models. Reusable knowledge representation components in KSM are called primitives of representation. A primitive of representation provides: (1) a knowledge representation formalism; (2) a set of tasks that use this knowledge, together with several problem-solving methods to carry out these tasks; (3) a knowledge acquisition module that provides different services to acquire and validate this knowledge; and (4) an abstract terminology of the linguistic categories included in the representation language associated with the primitive. Primitives of representation are usually domain-independent. A primitive of representation can be adapted to support knowledge in a given domain by importing concepts from that domain. The paper describes how this activity can be carried out by means of a terminological importation. Informally, a terminological importation partially populates an abstract terminology with concepts taken from a given domain. The information provided by the importation can be used by the acquisition and validation facilities to constrain the classes of knowledge that can be described using the representation formalism according to the domain knowledge. KSM provides the LINK-S language to specify a terminological importation from a domain terminology to an abstract one. These terminologies are described in KSM by means of the CONCEL language. Terminological importation is used to adapt reusable primitives of representation in order to increase the usability of such components in those domains. In addition, two primitives of representation can share a common vocabulary by importing common domain CONCEL terminologies (conceptual vocabularies). This is a necessary condition for interoperability between different, heterogeneous knowledge representation components in the framework of complex knowledge-based architectures.
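A hedged, dictionary-based sketch of terminological importation: an abstract terminology declares linguistic categories, an importation populates them with concepts from a domain vocabulary, and the acquisition facility then only accepts knowledge statements whose terms fall in the imported categories. LINK-S and CONCEL are declarative languages in KSM; the category names, concepts and rule format below are illustrative assumptions.

```python
abstract_terminology = {"observable": set(), "action": set()}   # abstract linguistic categories

domain_vocabulary = {            # stand-in for a CONCEL-style conceptual vocabulary
    "rainfall": "observable",
    "river_level": "observable",
    "open_gate": "action",
}


def import_terminology(abstract: dict, domain: dict) -> dict:
    """Partially populate the abstract categories with domain concepts (LINK-S-like mapping)."""
    for concept, category in domain.items():
        abstract[category].add(concept)
    return abstract


def validate_rule(rule: dict, terminology: dict) -> bool:
    """Accept only rules whose condition is an imported observable and whose effect is an imported action."""
    return rule["if"] in terminology["observable"] and rule["then"] in terminology["action"]


terminology = import_terminology(abstract_terminology, domain_vocabulary)
print(validate_rule({"if": "river_level", "then": "open_gate"}, terminology))   # True
print(validate_rule({"if": "river_level", "then": "fly"}, terminology))         # False
```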

Relevance: 30.00%

Abstract:

This paper describes a knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. The tool has been developed following an innovative approach aimed at end users who are not familiar with computer-oriented terminology. According to this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), in such a way that the whole knowledge model is viewed by the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems that we found during the development and maintenance of this type of model. It then describes the KATS tool, a software application that we have designed to help with this task, intended for users who are not experts in computer programming. Finally, the paper presents a comparison between KATS and other approaches to knowledge acquisition.