951 results for Domain knowledge


Relevance:

30.00%

Publisher:

Abstract:

Author Cocitation Analysis (ACA) can be defined as the analysis of a group of actors, writers or researchers structurally organized in a (social and cognitive) network of a particular scientific community. The greater the number of researchers selected, the greater the amplitude and the boundary of the domain under consideration; the more restricted the number of researchers chosen as representative and appropriate, the less extensive the domain. From the perspective of the first axis of Tennis (2003), the selection of authors involves setting parameters on the extent of the domain, i.e., its total scope and amplitude. Thus, from the point of view of Tennis's (2003) approach to Domain Analysis, the selection of authors for Author Cocitation Analysis is associated with the designations and boundaries of the domain, as well as with its goals (Tennis, 2003). Moreover, selecting authors from among the most cited in the literature reflects the core elements of a domain and constitutes its most specific foundation, in line with the Degrees of Specialization characterized by Tennis (2003). It is concluded that Author Cocitation Analysis (ACA) is a relevant procedure for analyzing the underlying structure of a scientific knowledge domain, consistent with the theories and concepts of Domain Analysis researchers, in that it makes it possible to characterize the science and to identify, analyze and assess the conditions under which scientific knowledge is constructed and socialized.
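As a toy illustration of the procedure the abstract describes, the sketch below counts how often pairs of authors are cited together across a set of citing papers; the author names and input format are hypothetical, and a real ACA study would add thresholds, normalization, and factor or network analysis on top of such counts.

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(reference_lists):
    """Count how often each pair of authors is cited together in the
    same reference list (hypothetical input: one set of cited author
    names per citing paper)."""
    pairs = Counter()
    for refs in reference_lists:
        # Sorting gives each pair a canonical (alphabetical) key.
        for a, b in combinations(sorted(refs), 2):
            pairs[(a, b)] += 1
    return pairs

# Three citing papers, each with its set of cited authors (invented).
papers = [
    {"Hjorland", "Tennis", "White"},
    {"Hjorland", "Tennis"},
    {"Tennis", "White"},
]
counts = cocitation_counts(papers)
# Hjorland and Tennis are cited together in two of the three papers.
```

A larger author set widens the mapped domain, exactly as the abstract notes: every author added to the selection adds a row and column of potential cocitation links.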

Relevance:

30.00%

Publisher:

Abstract:

This is a study of the relationships between authors and the main thematic categories in the papers published in the last five International ISKO Conferences, held between 2002 and 2010. The aim is to map the domain, as ISKO conferences are considered the most representative forum in the field. The published papers are taken to indicate the relationships between authors and themes. The Classification Scheme for Knowledge Organization Literature (CSKOL) was used to categorize the papers. The theoretical and methodological foundations of the study can be found in the concept of domain analysis proposed by Hjorland. The analysis of the papers (n=146) led to the identification of the most productive authors, the networks representing the relationships between the authors, as well as the categories that constitute the primary areas of research.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

[EN] Labile Fe(II) distributions were investigated in the Sub-Tropical South Atlantic and the Southern Ocean during the BONUS-GoodHope cruise from 34° to 57° S (February to March 2008). Concentrations ranged from below the detection limit (0.009 nM) to values as high as 0.125 nM. In the surface mixed layer, labile Fe(II) concentrations were always higher than the detection limit, with values higher than 0.060 nM south of 47° S, representing between 39% and 63% of dissolved Fe (DFe) and providing evidence of biological production. At intermediate depth, local maxima were observed, with the highest values in the Sub-Tropical domain at around 200 m, representing more than 70% of DFe. Remineralization processes were likely responsible for those sub-surface maxima. Below 1500 m, concentrations were close to or below the detection limit, except at two stations (in the vicinity of the Agulhas Ridge and in the north of the Weddell Sea Gyre) where values remained as high as about 0.030 to 0.050 nM. Hydrothermal or sediment inputs may provide Fe(II) to these deep waters. Fe(II) half-life times (t1/2) at 4 °C were measured in the upper and deep waters and ranged from 2.9 to 11.3 min and from 10.0 to 72.3 min, respectively. In the upper waters, measured values compared quite well with theoretical values from two published models, but in the deep waters they did not. This may be due to the lack of knowledge of some model parameters and/or to organic complexation of Fe(II) affecting its oxidation rates. This study considerably increases the Fe(II) data set for the ocean and improves understanding of the Fe redox cycle.
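The half-life figures reported above imply, under simple first-order oxidation kinetics (an assumption of this sketch, not a claim of the paper), a rate constant k = ln 2 / t1/2; the snippet below uses the shortest reported upper-water half-life to show how quickly labile Fe(II) decays.

```python
import math

def rate_constant(t_half_min):
    """First-order rate constant (min^-1) from a half-life in minutes."""
    return math.log(2) / t_half_min

def remaining_fraction(t_half_min, t_min):
    """Fraction of labile Fe(II) left after t_min minutes, assuming
    simple first-order oxidation kinetics."""
    return math.exp(-rate_constant(t_half_min) * t_min)

# With the shortest upper-water half-life reported (2.9 min), less than
# a tenth of the initial labile Fe(II) survives a 10-minute interval.
frac = remaining_fraction(2.9, 10.0)
```

Such short time scales are one reason Fe(II) measurements must be made rapidly at sea rather than on stored samples.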

Relevance:

30.00%

Publisher:

Abstract:

The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four models that use standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain concerned by case-law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case-law. The research builds on previous community efforts in legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and stressing the OWL axiom definitions as much as possible so that they provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning features for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge, and the argumentation he produces, as expressed in the judicial text.
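Defeasible reasoning of the kind mentioned above can be illustrated in a deliberately simplified form, standing in for the paper's OWL 2 plus rules machinery: a rule licenses its conclusion unless one of its exceptions is among the known facts. All rule and fact names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DefeasibleRule:
    """A rule that normally licenses its conclusion from its premises,
    unless one of its exceptions holds (all names are illustrative)."""
    premises: frozenset
    conclusion: str
    exceptions: frozenset = frozenset()

def derive(facts, rules):
    """Naive forward chaining with defeasible rules: fire a rule only
    when its premises hold and none of its exceptions is a known fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if (r.premises <= derived
                    and not (r.exceptions & derived)
                    and r.conclusion not in derived):
                derived.add(r.conclusion)
                changed = True
    return derived

rules = [
    DefeasibleRule(frozenset({"contract_signed"}), "contract_valid",
                   frozenset({"party_incapable"})),
]
# The rule fires on the bare facts ...
ok = derive({"contract_signed"}, rules)
# ... but is defeated when an exception is among the facts.
blocked = derive({"contract_signed", "party_incapable"}, rules)
```

A real argumentation system would also rank competing rules and record which arguments attack which, rather than simply suppressing a defeated conclusion.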

Relevance:

30.00%

Publisher:

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration is the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can further be used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
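One simple way to see why integrating several datasets reduces ambiguity is inverse-variance weighting of independent estimates of the same quantity. This toy sketch (the values and variances are invented, and real AEM workflows use full geophysical inversion rather than per-layer averaging) shows that the combined estimate always has lower variance than any single input.

```python
def combine_estimates(estimates):
    """Inverse-variance weighted combination of independent (value,
    variance) estimates of the same quantity; returns the combined
    value and its (smaller) variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical resistivity estimates (ohm-m) for one layer, as if from
# AeroTEM, VTEM and a ground TEM sounding, with assumed variances.
combined, var = combine_estimates([(30.0, 4.0), (34.0, 9.0), (32.0, 16.0)])
# The combined variance is below the best single variance (4.0).
```

The same principle, generalized to full model covariances, is what makes joint inversion of airborne and ground data deliver more consistent resistivity models than either dataset alone.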

Relevance:

30.00%

Publisher:

Abstract:

Plectin is a versatile cytolinker protein critically involved in the organization of the cytoskeletal filamentous system. The muscle-specific intermediate filament (IF) protein desmin, which progressively replaces vimentin during the differentiation of myoblasts, is one of the important binding partners of plectin in mature muscle. Defects in either plectin or desmin cause muscular dystrophies. Using cell transfection studies and yeast two-hybrid, overlay and pull-down binding assays, we have characterized the sequences functionally important for the interaction of plectin with desmin and vimentin. The association of plectin with both desmin and vimentin depended predominantly on its fifth plakin repeat domain and the downstream linker region. Conversely, the interaction of desmin and vimentin with plectin required sequences contained within segments 1A-2A of their central coiled-coil rod domain. This study furthers our knowledge of the interaction between plectin and IF proteins, which is important for the maintenance of cytoarchitecture in skeletal muscle. Moreover, binding of plectin to the conserved rod domain of IF proteins could well explain its broad interaction with most types of IFs.

Relevance:

30.00%

Publisher:

Abstract:

Image denoising methods have been implemented in both spatial and transform domains. Each domain has its advantages and shortcomings, which can be complemented by each other. State-of-the-art methods like block-matching 3D filtering (BM3D) therefore combine both domains. However, implementation of such methods is not trivial. We offer a hybrid method that is surprisingly easy to implement and yet rivals BM3D in quality.
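As a minimal illustration of the spatial/transform hybrid idea (not the actual method of this paper, nor BM3D), the sketch below averages a moving-average estimate with a hard-thresholded one-level Haar estimate of a 1-D signal; the transform side preserves the step edge that the spatial filter blurs.

```python
def moving_average(x, k=3):
    """Spatial-domain smoothing: centered moving average, clamped at edges."""
    n, r, out = len(x), k // 2, []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def haar_threshold(x, thresh):
    """Transform-domain smoothing: one-level Haar transform, hard-threshold
    the detail coefficients, inverse transform (even length required)."""
    avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    det = [d if abs(d) > thresh else 0.0 for d in det]
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def hybrid_denoise(x, thresh=0.5):
    """Toy hybrid: average the spatial and transform-domain estimates."""
    return [(a + b) / 2 for a, b in zip(moving_average(x), haar_threshold(x, thresh))]

# A noisy step signal: the Haar side keeps the edge sharp, the moving
# average smears it, and the hybrid sits between the two.
clean = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
noisy = [0.1, -0.1, 0.05, -0.05, 1.1, 0.9, 1.05, 0.95]
denoised = hybrid_denoise(noisy)
```

Methods such as BM3D are far more sophisticated (block matching, collaborative 3-D filtering, Wiener refinement), but the complementarity of the two domains is the same basic ingredient.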

Relevance:

30.00%

Publisher:

Relevance:

30.00%

Publisher:

Abstract:

We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
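The gradient-domain idea underlying such renderers can be sketched in 1-D: reconstruct a signal from reliable gradient estimates plus a noisy primal estimate by minimizing a screened-Poisson energy. The solver below is a toy Gauss-Seidel iteration, not the Metropolis machinery of the paper.

```python
def reconstruct(primal, grad, alpha=0.2, iters=2000):
    """1-D screened-Poisson reconstruction: find x minimizing
    alpha * sum_i (x_i - primal_i)^2 + sum_i (x_{i+1} - x_i - grad_i)^2
    by coordinate-wise (Gauss-Seidel) updates."""
    n = len(primal)
    x = list(primal)
    for _ in range(iters):
        for i in range(n):
            num, den = alpha * primal[i], alpha
            if i > 0:          # term (x_i - x_{i-1} - grad_{i-1})^2
                num += x[i - 1] + grad[i - 1]
                den += 1.0
            if i < n - 1:      # term (x_{i+1} - x_i - grad_i)^2
                num += x[i + 1] - grad[i]
                den += 1.0
            x[i] = num / den
    return x

# With exact primal and gradients the ramp is a fixed point; with a
# noisy primal, the reconstruction honors the gradients almost exactly.
ramp = [0.0, 1.0, 2.0, 3.0]
grad = [1.0, 1.0, 1.0]
exact = reconstruct(ramp, grad)
smooth = reconstruct([0.3, 0.8, 2.2, 2.9], grad)
```

In 2-D rendering the same energy is minimized over pixel values, with sampled image-space gradients playing the role of `grad`; the generalized kernels described above change which differences that gradient term penalizes.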

Relevance:

30.00%

Publisher:

Abstract:

Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be efficiently used by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
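The core prediction step can be sketched as follows: if two modules reference overlapping sets of domain-level entities, predict a dependency between them. The module and entity names below are hypothetical, and the paper's approach is evaluated on a large enterprise system rather than this toy input.

```python
def predict_dependencies(module_entities, min_shared=1):
    """Predict a dependency between two modules when they reference at
    least `min_shared` common domain-level entities (names illustrative)."""
    modules = sorted(module_entities)
    predicted = set()
    for i, a in enumerate(modules):
        for b in modules[i + 1:]:
            shared = module_entities[a] & module_entities[b]
            if len(shared) >= min_shared:
                predicted.add((a, b))
    return predicted

# Hypothetical domain entities visible to users of each module/screen.
usage = {
    "Invoicing": {"Customer", "Invoice", "Tax"},
    "Shipping": {"Customer", "Order"},
    "Reporting": {"Invoice", "Order", "Tax"},
    "AdminUI": {"User"},
}
deps = predict_dependencies(usage)
# AdminUI shares no entities with the others, so no dependency is predicted.
```

Because the input is visible to domain experts (screens, business entities), such a predictor needs no access to source code or the database schema, which is exactly the property the abstract emphasizes.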

Relevance:

30.00%

Publisher:

Abstract:

Answering run-time questions in object-oriented systems involves reasoning about and exploring connections between multiple objects. Developer questions exercise various aspects of an object and require multiple kinds of interactions depending on the relationships between objects, the application domain and the differing developer needs. Nevertheless, traditional object inspectors, the essential tools often used to reason about objects, favor a generic view that focuses on the low-level details of the state of individual objects. This leads to an inefficient effort, increasing the time spent in the inspector. To improve the inspection process, we propose the Moldable Inspector, a novel approach for an extensible object inspector. The Moldable Inspector allows developers to look at objects using multiple interchangeable presentations and supports a workflow in which multiple levels of connecting objects can be seen together. Both these aspects can be tailored to the domain of the objects and the question at hand. We further exemplify how the proposed solution improves the inspection process, introduce a prototype implementation and discuss new directions for extending the Moldable Inspector.
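The idea of interchangeable, type-specific presentations can be sketched as a registry keyed by type and consulted along the inspected object's class hierarchy. This is an illustrative Python analogue, not the Moldable Inspector's actual (Smalltalk) API.

```python
class MoldableStyleInspector:
    """Toy sketch of an extensible inspector: presentations are
    registered per type and selected for the inspected object."""

    def __init__(self):
        self._presentations = {}   # type -> list of (name, render_fn)

    def register(self, typ, name, render):
        self._presentations.setdefault(typ, []).append((name, render))

    def inspect(self, obj):
        """Return all applicable presentations, most specific type first."""
        views = []
        for typ in type(obj).__mro__:
            for name, render in self._presentations.get(typ, []):
                views.append((name, render(obj)))
        if not views:              # generic fallback, like a raw view
            views.append(("raw", repr(obj)))
        return views

insp = MoldableStyleInspector()
insp.register(dict, "keys", lambda d: sorted(d))
insp.register(object, "repr", lambda o: repr(o))
views = insp.inspect({"b": 1, "a": 2})
# A dict gets its domain-specific "keys" view before the generic "repr" view.
```

Tailoring then amounts to registering new `(name, render)` pairs for domain types, so the same object can be viewed through several interchangeable presentations.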

Relevance:

30.00%

Publisher:

Abstract:

Debuggers are crucial tools for developing object-oriented software systems as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements and show how our approach improves debugging by adapting the debugger to several domains.
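The run-time selection of debugging extensions described above can be sketched as predicates over the current execution context; this is again an illustrative Python analogue with invented names, not the Moldable Debugger's actual API.

```python
class DebuggerExtension:
    """A domain-specific set of debugging operations, activated when its
    predicate matches the current execution context (illustrative)."""

    def __init__(self, name, activates_on, operations):
        self.name = name
        self.activates_on = activates_on      # context -> bool
        self.operations = operations          # operation names

def select_extensions(context, extensions, default):
    """Pick the extensions whose predicate matches the context,
    falling back to the generic debugger when none applies."""
    matching = [e for e in extensions if e.activates_on(context)]
    return matching or [default]

generic = DebuggerExtension("generic", lambda ctx: True,
                            ["step into", "step over", "resume"])
parser_dbg = DebuggerExtension(
    "parser",
    lambda ctx: ctx.get("frame", "").startswith("Parser"),
    ["step to next production", "show parsed source"])

# A parser frame activates the domain-specific operations and views ...
active = select_extensions({"frame": "Parser>>parse:"}, [parser_dbg], generic)
# ... while any other frame falls back to the generic debugger.
fallback = select_extensions({"frame": "Array>>at:"}, [parser_dbg], generic)
```

Combining such predicate-guarded operation sets with matching views is, in essence, how a moldable debugger adapts itself to a domain at run time.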

Relevance:

30.00%

Publisher:

Abstract:

The implementation of new surgical techniques offers chances but carries risks. Usually, several years pass before a critical appraisal and a balanced opinion of a new treatment method become available, based on evidence from the literature and expert opinion. The frozen elephant trunk (FET) technique has been increasingly used to treat complex pathologies of the aortic arch and the descending aorta, but there is still an ongoing discussion within the surgical community about the optimal indications. This paper represents a common effort of the Vascular Domain of EACTS together with several surgeons with particular expertise in aortic surgery, and summarizes the current knowledge and state of the art of the FET technique. The majority of the information about the FET technique has been extracted from 97 focused publications already available in the PubMed database (cohort studies, case reports, reviews, small series, meta-analyses and best evidence topics) published in English.