Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases, or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree-structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML into actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge to leverage semistructured data is to perform effective information discovery on such data. Previous works have addressed this problem in a generic (i.e. domain independent) way, but this process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals: The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: We proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives. We developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing Information Discovery over domain-specific semistructured documents. 
This goal also had two aims: We presented a framework that exploits the domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies. We also proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
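The lazy-parsing idea above — materializing only the parts of a document that are actually needed, rather than building a full DOM tree up front — can be illustrated with the standard library's incremental parser. This is not the dissertation's Double-Lazy Parser, just a minimal sketch of lazy-style XML processing using `xml.etree.ElementTree.iterparse`, with a made-up two-book document:

```python
# Event-driven parsing: elements are materialized only as their end tags
# arrive, and subtrees we no longer need are freed immediately instead of
# keeping the whole DOM in memory.
import io
import xml.etree.ElementTree as ET

doc = io.BytesIO(b"""<library>
  <book id="1"><title>XML Storage</title><body>...</body></book>
  <book id="2"><title>Lazy Parsing</title><body>...</body></book>
</library>""")

titles = []
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "title":
        titles.append(elem.text)
    elif elem.tag == "book":
        elem.clear()  # discard the processed subtree (lazy-style memory use)

print(titles)
```

A full lazy parser goes further — deferring even the tokenization of skipped subtrees — but the memory profile motivating that work is already visible here.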
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models, as abstractions, at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach.
This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception, with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smartgrid or microgrid energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
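The DSK/MoE decoupling described above amounts to a plug-in architecture: a generic engine that delegates domain semantics to a swappable component. The sketch below uses hypothetical class and method names (none are from the dissertation) to show the shape of such a design, with one DSK per domain behind a common interface:

```python
# A minimal sketch of a generic model of execution (GMoE) loosely coupled
# to swappable domain-specific knowledge (DSK) extensions.
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable DSK: maps an abstract model operation to a script step."""
    @abstractmethod
    def synthesize(self, operation: str) -> str: ...

class CommunicationDSK(DomainKnowledge):
    def synthesize(self, operation: str) -> str:
        return f"cvm:{operation}"          # e.g. set up a media connection

class MicrogridDSK(DomainKnowledge):
    def synthesize(self, operation: str) -> str:
        return f"microgrid:{operation}"    # e.g. dispatch an energy source

class SynthesisEngine:
    """Generic MoE: walks model operations, delegating semantics to the DSK."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def execute(self, model_ops):
        return [self.dsk.synthesize(op) for op in model_ops]

print(SynthesisEngine(CommunicationDSK()).execute(["connect", "send"]))
# → ['cvm:connect', 'cvm:send']
```

Instantiating `SynthesisEngine(MicrogridDSK())` reuses the entire engine unchanged — which is the claimed source of reduced development effort.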
Abstract:
Purpose – This paper aims to contribute towards understanding how safety knowledge can be elicited from railway experts for the purposes of supporting effective decision-making.
Design/methodology/approach – A consortium of safety experts from across the British railway industry is formed. Collaborative modelling of the knowledge domain is used as an approach to the elicitation of safety knowledge from experts. From this, a series of knowledge models is derived to inform decision-making. This is achieved by using Bayesian networks as a knowledge modelling scheme, underpinning a Safety Prognosis tool that serves meaningful prognostic information and visualises it to predict safety violations.
Findings – Collaborative modelling of safety-critical knowledge is a valid approach to knowledge elicitation and its sharing across the railway industry. This approach overcomes some of the key limitations of existing approaches to knowledge elicitation. Such models become an effective tool for the prediction of safety cases using railway data. This is demonstrated using passenger–train interaction safety data.
Practical implications – This study contributes to practice in two main directions: by documenting an effective approach to knowledge elicitation and knowledge sharing, while also helping the transport industry to understand safety.
Social implications – By supporting the railway industry in its efforts to understand safety, this research has the potential to benefit railway passengers, staff and communities in general, which is a priority for the transport sector.
Originality/value – This research applies a knowledge elicitation approach to understanding safety based on collaborative modelling, which is a novel approach in the context of transport.
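The core of a Bayesian network such as the one underpinning the Safety Prognosis tool is belief updating: revising the probability of a safety violation when an indicator variable is observed. The toy calculation below uses illustrative numbers (not from the paper) on a two-node network to show the mechanism:

```python
# Posterior probability of a safety violation V given an observed
# indicator I, via Bayes' rule. All probabilities are hypothetical.
p_violation = 0.01                    # prior P(V)
p_indicator_given_v = 0.8             # P(I | V)
p_indicator_given_not_v = 0.1         # P(I | not V)

# P(I) by total probability, then posterior P(V | I) by Bayes' rule.
p_indicator = (p_indicator_given_v * p_violation
               + p_indicator_given_not_v * (1 - p_violation))
posterior = p_indicator_given_v * p_violation / p_indicator

print(round(posterior, 4))
# → 0.0748
```

A full network chains many such updates across nodes elicited from the experts, but each edge is doing exactly this kind of conditional-probability arithmetic.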
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Introduction. Research design should take into account both (a) the specific nature of the object under scrutiny, and (b) approaches to its study in the past. This is to ensure that informed decisions are made regarding research design in future empirical studies. Here these factors are taken into account with reference to methodological choice for a doctoral study on tacit knowledge sharing, and the extent to which tacit knowledge sharing may be facilitated by online tools. The larger study responds to calls for the two domains of knowledge management and human information behaviour to be considered together in terms of their research approaches and theory development.
Method. Relevant literature, both domain-specific (knowledge management) and general (research methods in social science), was identified and analysed to identify the most appropriate approaches for an empirical study of tacit knowledge sharing.
Analysis. The analysis shows that there are a number of challenges associated with studying an intangible entity such as tacit knowledge. Quantitative, qualitative and mixed methods have been adopted in prior work on this theme, each with their own strengths and weaknesses.
Results. The analysis has informed a decision to adopt a research approach that deploys mixed methods for an inductive case study to extend knowledge of the influence of online tools on tacit knowledge sharing.
Conclusion. This work intends to open the debate on methodological choice and routes to implementation for studies that are subject to practical constraints imposed by the context in which they are situated.
Abstract:
Critical thinking in learners is a goal of educators and professional organizations in nursing as well as other professions. However, few studies in nursing have examined the role of the important individual difference factors of topic knowledge, individual interest, and general relational reasoning strategies in predicting critical thinking. In addition, most previous studies have used domain-general, standardized measures, with inconsistent results. Moreover, few studies have investigated critical thinking across multiple levels of experience. The major purpose of this study was to examine the degree to which topic knowledge, individual interest, and relational reasoning predict critical thinking in maternity nurses. For this study, 182 maternity nurses were recruited from national nursing listservs explicitly chosen to capture multiple levels of experience, from prelicensure to very experienced nurses. The three independent measures were a domain-specific Topic Knowledge Assessment (TKA), consisting of 24 short-answer questions; a Professed and Engaged Interest Measure (PEIM), with 20 questions indicating level of interest and engagement in maternity nursing topics and activities; and the Test of Relational Reasoning (TORR), a graphical selected-response measure with 32 items organized in scales corresponding to four forms of relational reasoning: analogy, anomaly, antithesis, and antinomy. The dependent measure was the Critical Thinking Task in Maternity Nursing (CT2MN), composed of a clinical case study providing cues with follow-up questions relating to nursing care. These questions align with the cognitive processes identified in a commonly used definition of critical thinking in nursing. Reliable coding schemes for the measures were developed for this study. Key findings included a significant correlation between topic knowledge and individual interest.
Further, the three individual difference factors explained a significant proportion of the variance in critical thinking with a large effect size. While topic knowledge was the strongest predictor of critical thinking performance, individual interest had a moderate significant effect, and relational reasoning had a small but significant effect. The findings suggest that these individual difference factors should be included in future studies of critical thinking in nursing. Implications for nursing education, research, and practice are discussed.
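The reported association between topic knowledge and individual interest is an ordinary Pearson correlation. As a minimal sketch — with synthetic scores, not the study's data — the statistic can be computed directly from its definition:

```python
# Pearson correlation between two score vectors, computed from the
# definition: covariance divided by the product of standard deviations.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

tka = [12, 15, 9, 20, 17, 11]    # hypothetical Topic Knowledge scores
peim = [14, 16, 10, 19, 18, 12]  # hypothetical interest (PEIM) scores
print(round(pearson(tka, peim), 3))
```

The full study then enters all three predictors into a regression model; the correlation above is the bivariate building block of that analysis.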
Abstract:
This paper adds two analytical devices to domain analysis, claiming that for domain analysis to work cumulatively, transferable definitions of domains must be written. To establish such definitions, the author provides two axes to consider: Areas of Modulation and Degrees of Specialization. These axes may serve as analytical devices for the domain analyst to delineate what is and is not being studied in a domain analysis.
Abstract:
The InterPARES 2 Terminology Cross-Domain has created three terminological instruments in service to the project and, by extension, Archival Science. Over the course of the five-year project, this Cross-Domain has collected words, definitions, and phrases from extant documents, research tools, models, and direct researcher submission and discussion. From these raw materials, the Cross-Domain has identified a systematic and pragmatic way of establishing a coherent view of the concepts involved in dynamic, experiential, and interactive records and systems in the arts, sciences, and e-government. The three terminological instruments are the Glossary, the Dictionary, and the Ontologies. The first of these is an authoritative list of terms and definitions that are core to our understanding of the evolving records creation, keeping, and preservation environments. The Dictionary is a tool used to facilitate interdisciplinary communication. It contains multiple definitions for terms, from multiple disciplines. By using this tool, researchers can see how Archival Science deploys terminology compared to Computer Science, Library and Information Science, or the Arts, etc. The third terminological instrument, the Ontologies, identifies explicit relationships between concepts of records. This is useful for communicating the nuances of Diplomatics in the dynamic, experiential, and interactive environment. All three of these instruments were drawn from a Register of terms gathered over the course of the project. This Register served as a holding place for terms, definitions, and phrases, and allowed researchers to discuss, comment on, and modify submissions. The Register and the terminological instruments were housed in the Terminology Database. The Database provides searching, display, and file downloads, making it easy to navigate through the terminological instruments. Terminology used in InterPARES 1 and the UBC Project was carried forward to this Database.
In this sense, we are building on our past knowledge, and making it relevant to the contemporary environment.
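The Dictionary's core idea — one term, several discipline-specific definitions, so that researchers can compare how different fields deploy the same word — maps onto a very simple data structure. The sketch below is purely illustrative (the entries are paraphrased, not the project's actual definitions):

```python
# One term can hold a separate definition per discipline, mirroring the
# Dictionary instrument's design for interdisciplinary comparison.
from collections import defaultdict

dictionary = defaultdict(dict)

def register(term, discipline, definition):
    dictionary[term][discipline] = definition

register("record", "Archival Science",
         "A document made or received in the course of activity and set aside.")
register("record", "Computer Science",
         "A data structure grouping related fields under one identifier.")

print(sorted(dictionary["record"]))
```

The Ontologies go one step further by adding explicit typed relationships between such concepts, rather than flat per-discipline entries.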
Abstract:
We find ourselves, after the close of the twentieth century, looking back at a mass of responses to the knowledge organization problem. Many institutions, such as the Dewey Decimal Classification (Furner, 2007), have grown up to address it. Increasingly, many diverse discourses are appropriating the problem and crafting a wide variety of responses. This includes many artistic interpretations of the act and products of knowledge organization. These surface as responses to the expressive power or limits of the Library and Information Studies institutions (e.g., DDC) and their often primarily utilitarian gaze. One way to make sense of this diversity is to approach the study from a descriptive stance, inventorying the population of types of KOS. This population perspective approaches the phenomenon of types and boundaries of Knowledge Organization Systems (KOS) as one that develops out of particular discourses, for particular purposes. For example, both the DDC and Martianus Capella, a 5th-century encyclopedist, are KOS in this worldview. Both are part of the population of KOS. Approaching the study of KOS from the population perspective allows the researcher a systematic look at the diversity emergent at the constellation of different factors of design and implementation. However, it is not enough to render a model of core types; we must also consider the borders of KOS. Fringe types of KOS inform research, specifically the basic principles of design and implementation used by those outside the scholarly and professional discourse of Library and Information Studies. Four examples of fringe types of KOS are presented in this paper. Applying a rubric developed in previous papers, our aim here is to show how the conceptual anatomy of these fringe types relates to more established KOS, thereby laying bare the definitions of domain, purpose, structure, and practice.
Fringe types, like Beghtol's examples (2003), are drawn from areas outside of Library and Information Studies proper, and reflect the reinvention of structures to fit particular purposes in particular domains. The four fringe types discussed in this paper are (1) Roland Barthes' text S/Z, which "indexes" the text of an essay with particular "codes" meant to expose the literary rhythm of the work; (2) Mary Daly's Wickedary, a reference work crafted for radical liberation theology, specifically designed to remove patriarchy from the language used by what the author calls "wild women"; (3) Luigi Serafini's Codex Seraphinianus, a work of book art that plays on the trope of the universal encyclopedia and the back-of-the-book index; and (4) Martianus Capella and his Marriage of Mercury and Philology, a fifth-century encyclopedia. We compared these using previous analytic taxonomies (Wright, 2008; Tennis, 2006; Tudhope, 2006; Soergel, 2001; Hodge, 2000).
Abstract:
Biomarkers are nowadays essential tools for staying one step ahead in the fight against disease, enabling an enhanced focus on disease prevention and on the probability of its occurrence. Multidisciplinary research has been an important step towards the repeated discovery of new biomarkers. Biomarkers are defined as biochemically measurable indicators of the presence of disease or as indicators for monitoring disease progression. Currently, biomarkers are used in several domains such as oncology, neurology, cardiovascular, inflammatory and respiratory disease, and several endocrinopathies. Bridging biomarkers in a One Health perspective has proven useful in almost all of these domains. In oncology, humans and animals are found to be subject to the same environmental and genetic predisposing factors: examples include mutations in the BRCA1 gene predisposing to breast cancer in both humans and dogs, with increased prevalence in certain dog breeds and human ethnic groups. Also, breast-feeding frequency and duration have been related to a decreased risk of breast cancer in women and bitches. When it comes to infectious diseases, this parallelism is prone to be even more important, since as much as 75% of all emerging diseases are believed to be zoonotic. Examples of the successful use of biomarkers are found in several zoonotic diseases such as Ebola, dengue, leptospirosis and West Nile virus infections. Acute Phase Proteins (APPs) have been used for quite some time as biomarkers of inflammatory conditions. These have been used in human health but also in the veterinary field, for example in mastitis evaluation and in the diagnosis of PRRS (porcine reproductive and respiratory syndrome). A key advantage is that these biomarkers can be much easier to assess than other conventional disease diagnostic approaches (for example, they can be measured in easy-to-collect saliva samples).
Another domain in which biomarkers have been essential is food safety: the possibility of measuring exposure to chemical contaminants or other biohazards present in the food chain, which are sometimes analytical challenges due to their low bioavailability in body fluids, is nowadays a major breakthrough. Finally, biomarkers are considered the key to more personalized therapies, with more efficient outcomes and fewer side effects. This approach is expected to be the correct path to follow in veterinary medicine as well in the near future.
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques are examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also deliberated. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed.
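The staggered-grid Leapfrog scheme among the time integrators compared above can be sketched in one dimension: electric and magnetic fields live on interleaved grid points and are updated a half time step apart. This is a generic Yee-style illustration in normalized units with perfectly conducting ends (not the thesis's ccFV-TD solver or its PML boundaries):

```python
# 1-D staggered-grid leapfrog update for Maxwell's equations, normalized
# units. E lives at integer points, H at half points; each field update
# uses the spatial difference of the other field.
import math

nx, nsteps = 200, 150
ez = [0.0] * nx          # electric field at integer grid points
hy = [0.0] * (nx - 1)    # magnetic field at half grid points
courant = 0.5            # dt/dx in normalized units (stability needs <= 1)

for n in range(nsteps):
    for i in range(nx - 1):                      # H update: half step ahead of E
        hy[i] += courant * (ez[i + 1] - ez[i])
    for i in range(1, nx - 1):                   # E update (PEC ends: ez stays 0)
        ez[i] += courant * (hy[i] - hy[i - 1])
    ez[nx // 2] += math.exp(-((n - 30) ** 2) / 100.0)  # soft Gaussian source

print(max(abs(v) for v in ez))
```

The phase and amplitude errors the thesis analyses show up here as the dependence of the numerical wave speed and decay on the Courant number and grid resolution.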