997 results for domain expertise


Relevance: 60.00%

Abstract:

Engineering asset management (EAM) is a broad discipline, and EAM functions and processes are characterized by their distributed nature. However, engineering asset maintenance nowadays mostly relies on self-maintained experiential rule bases and periodic maintenance, and so lacks a collaborative engineering approach. This research proposes a collaborative environment integrated by a service center with domain expertise in diagnosis, prognosis, and asset operations. The collaborative maintenance chain combines asset operation sites, the service center (i.e., the maintenance operation coordinator), the system provider, first-tier collaborators, and maintenance part suppliers. Meanwhile, to automate communication and negotiation among organizations, multiagent system (MAS) techniques are applied to enhance the entire service level. During the MAS design process, this research combines the Prometheus MAS modeling approach with Petri-net modeling methodology and the Unified Modeling Language to visualize and rationalize the design of the MAS. The major contributions of this research are a Petri-net-enabled Prometheus MAS modeling methodology and a collaborative agent-based maintenance chain framework for integrated EAM.
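To ground the agent-based coordination the abstract describes, here is a toy sketch of a contract-net-style exchange between a service-center coordinator and part-supplier agents; every class, name, and value below is a hypothetical illustration, not code from the paper:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    supplier: str
    part: str
    lead_time_days: int

class SupplierAgent:
    """Maintenance part supplier that answers a call-for-proposals with a bid."""
    def __init__(self, name, stock):
        self.name = name
        self.stock = stock  # part name -> lead time in days

    def bid(self, part):
        if part in self.stock:
            return Bid(self.name, part, self.stock[part])
        return None  # cannot supply this part

class ServiceCenterAgent:
    """Maintenance operation coordinator: collects bids and awards the fastest."""
    def __init__(self, suppliers):
        self.suppliers = suppliers

    def coordinate(self, part):
        bids = [b for s in self.suppliers if (b := s.bid(part)) is not None]
        return min(bids, key=lambda b: b.lead_time_days) if bids else None

# An asset operation site reports a worn bearing; the coordinator negotiates.
suppliers = [SupplierAgent("S1", {"bearing": 5}), SupplierAgent("S2", {"bearing": 2})]
center = ServiceCenterAgent(suppliers)
print(center.coordinate("bearing"))  # -> Bid(supplier='S2', part='bearing', lead_time_days=2)
```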

Relevance: 60.00%

Abstract:

Objective: We explore how accurately and quickly nurses can identify melodic medical equipment alarms when no mnemonics are used, when alarms may overlap, and when concurrent tasks are performed. Background: The international standard IEC 60601-1-8 (International Electrotechnical Commission, 2005) has proposed simple melodies to distinguish seven alarm sources. Previous studies with nonmedical participants reveal poor learning of melodic alarms and persistent confusions between some of them. The effects of domain expertise, concurrent tasks, and alarm overlaps are unknown. Method: Fourteen intensive care and general medical unit nurses learned the melodic alarms without mnemonics in two sessions on separate days. In the second half of Day 2, the nurses identified single alarms or pairs of alarms played in sequential, partially overlapping, or nearly completely overlapping configurations. For half the experimental blocks, nurses performed a concurrent mental arithmetic task. Results: Nurses' learning was poor and was no better than the learning of nonnurses in a previous study. Nurses showed the previously noted confusions between alarms. Overlapping alarms were exceptionally difficult to identify. The concurrent task affected response time but not accuracy. Conclusion: Because of a failure of auditory stream segregation, the melodic alarms cannot be discriminated when they overlap. Directives to sequence the sounding of alarms in medical electrical equipment must be strictly adhered to, or the alarms must be redesigned to support better auditory streaming. Application: Actual or potential uses of this research include the implementation of IEC 60601-1-8 alarms in medical electrical equipment.

Relevance: 60.00%

Abstract:

Process models are used to convey semantics about business operations that are to be supported by an information system. A wide variety of professionals is expected to use such models, including people who have little modeling or domain expertise. We identify important user characteristics that influence the comprehension of process models. Through a free-simulation experiment, we provide evidence that selected cognitive abilities, learning style, and learning strategy influence the development of process model comprehension. These insights draw attention to the importance of research that views process model comprehension as an emergent learning process rather than as an attribute of the models as objects. Based on our findings, we identify a set of organizational intervention strategies that can lead to more successful process modeling workshops.

Relevance: 60.00%

Abstract:

Citizen science projects have demonstrated the advantages of involving people with limited relevant prior knowledge in research. However, there is a difference between engaging the general public in a scientific project and entering an established expert community to conduct research. This paper describes our ongoing acoustic biodiversity monitoring collaborations with the bird-watching community. We report on findings gathered over six years from participation in bird walks, observation of conservation efforts, and records of the personal activities of experienced birders. We offer an empirical study of extending existing protocols through in-context collaborative design involving scientists and domain experts.

Relevance: 60.00%

Abstract:

In the HealthMap project for People With HIV (PWHIV), designers employed a collaborative rapid 'persona-building' workshop with health researchers to develop patient personas that embodied patient-centred design goals and contextual awareness drawn from a variety of qualitative and quantitative data. On reflection, this collaborative rapid workshop was a process for drawing the divergent user research insights and expertise of stakeholders into focus for a chronic disease self-management design. This paper discusses (i) an analysis of the transcript of the workshop and (ii) interviews with five practising senior designers, in order to reflect on how the persona-building process was enacted and on its role in the evolution of the HealthMap design. The collaborative rapid persona-building methodology supported embedding user research insights, eliciting domain expertise, introducing design thinking, facilitating stakeholder collaboration, and defining early design requirements. The contribution of this paper is to model the process of collaborative rapid persona-building and to introduce the collaborative rapid persona-building framework as a method for generating design priorities from domain expertise and user research data.

Relevance: 60.00%

Abstract:

This short position paper considers issues in developing a data architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for the IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel: to reuse approaches to distributed heterogeneous data architectures and the lessons learned from that work, and to apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents, and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
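As a rough illustration of the pipeline the abstract enumerates (autonomous recording, local storage, verification, sign-off, central outcome database), here is a minimal sketch; every class and rule in it is a hypothetical stand-in, since the paper does not publish an API:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    author: str
    activity: str
    verified: bool = False
    signed_off: bool = False

class LocalStore:
    """Local data storage fed by the autonomous recording system."""
    def __init__(self):
        self.items = []
    def record(self, ev):
        self.items.append(ev)

def verify(ev):
    """Semi-autonomous verification model (stub rule for illustration)."""
    ev.verified = bool(ev.activity.strip())
    return ev.verified

class OutcomeDatabase:
    """Centrally verified outcome database maintained by the certifying body."""
    def __init__(self):
        self.outcomes = []
    def accept(self, ev):
        if ev.verified and ev.signed_off:
            self.outcomes.append(ev)

store, central = LocalStore(), OutcomeDatabase()
ev = Evidence("candidate-01", "completed module 3 assessment")
store.record(ev)                    # autonomous recording into local storage
if verify(ev):
    ev.signed_off = True            # sign-off mechanism (e.g., by an assessor)
    central.accept(ev)
print(len(central.outcomes))        # -> 1
```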

Relevance: 60.00%

Abstract:

Requirements engineers face an emerging set of challenges that compound the traditional Requirements Engineering (RE) challenges (stakeholder identification, domain expertise, communication, analytic skills, problem solving, ...), which have arguably still not been fully addressed. This is the challenge of RE in the world of global software development, with requirements teams working in virtual mode (possibly on different continents), and with the software having to operate in multiple contexts, address the needs of different cultures and legal jurisdictions, and build sales in different marketplaces. Further, the need arises to specify software that is progressively enhanced through regular releases, rather than the "green field" specification of new products.

This theoretical paper introduces these challenges and presents an initial selection of theoretical models, drawn from many and varied source disciplines, which might be employed to gain insight into various features of RE in support of global software development. To illustrate the potential relevance of this selection of models, we introduce a longitudinal case study with a recently identified software developer, following the specification and subsequent roll-out of a future release of a software product sold globally. Features of the situation faced by that organisation are highlighted to illustrate the potential relevance of the diverse models identified.

Relevance: 60.00%

Abstract:

Information portals are seen as an appropriate platform for personalised healthcare and wellbeing information provision. Efficient content management is a core capability of a successful smart health information portal (SHIP), and domain expertise is a vital input to content management when it comes to matching user profiles with appropriate resources. The rate at which new health-related content is generated far exceeds what domain experts can manually examine for relevance to a specific topic and audience. In this paper we investigate automated content discovery as a plausible solution to this shortcoming, one that capitalises on the existing database of expert-endorsed content as an implicit store of knowledge. We propose a novel content discovery technique, based on a text analytics approach, that utilises an existing content repository to acquire new and relevant content. We also highlight the contribution of this technique towards the realisation of smart content management for SHIPs.
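One plausible reading of the general idea, sketched minimally below, is to treat the expert-endorsed repository as a reference corpus and rank candidate documents by similarity to it; the TF-IDF vectorizer choice and the toy documents are assumptions for illustration, not the authors' design:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

endorsed = [  # existing expert-endorsed SHIP content (toy examples)
    "managing type 2 diabetes through diet and exercise",
    "blood glucose monitoring for diabetic patients",
]
candidates = [  # newly crawled documents to triage
    "new guidelines for glucose self-monitoring at home",
    "quarterly financial report of a hospital group",
]

vec = TfidfVectorizer()
endorsed_m = vec.fit_transform(endorsed)   # implicit store of expert knowledge
candidate_m = vec.transform(candidates)

# Score each candidate by its best match against the endorsed corpus.
scores = cosine_similarity(candidate_m, endorsed_m).max(axis=1)
for doc, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")  # the health document ranks far above the off-topic one
```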

Relevance: 60.00%

Abstract:

Distributed computing frameworks belong to a class of programming models that allow developers to launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data analytic workloads have become a common case among distributed computing applications, making data science an entire field of computer science. We argue that a data scientist's concern lies in three main components: a dataset, a sequence of operations they wish to apply to this dataset, and some constraints they may have related to their work (performance, QoS, budget, etc.). However, it is actually extremely difficult, without domain expertise, to perform data science: one needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments, ruled by schedulers that expect them to specify their resource needs precisely. Owing to the distributed and concurrent nature of the cited frameworks, monitoring and profiling are hard, high-dimensional problems that keep users from making the right configuration choices and determining the right amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.

In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and to turn user requests into a tailored execution context. In this work, we study different techniques that have been used to take steps toward such system awareness, and we explore a new way to do so by applying machine learning techniques to recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
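To make the recommendation idea concrete, here is a minimal sketch that learns from past runs to suggest an executor configuration for a new workload; the workload features, configuration labels, and nearest-neighbour model are invented for illustration, not the thesis's actual recommender:

```python
from sklearn.neighbors import KNeighborsClassifier

# Past runs: (input size in GB, shuffle-heavy 0/1, cores available on the cluster)
X = [[10, 0, 16], [200, 1, 64], [50, 1, 32], [5, 0, 8]]
# Best-performing executor config observed for each run (cores-memory label)
y = ["2c-4g", "5c-16g", "4c-8g", "2c-4g"]

# Recommend the config of the most similar previously profiled workload.
model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[80, 1, 32]]))  # suggested executor config for a new job
```

In a real system the features would come from the runtime monitoring data the abstract mentions, rather than being supplied by hand.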

Relevance: 60.00%

Abstract:

The creation of Causal Loop Diagrams (CLDs) is a major phase in the System Dynamics (SD) life-cycle, since the created CLDs express dependencies and feedback in the system under study, as well as guide modellers in building meaningful simulation models. The creation of CLDs is still subject to the modeller's domain expertise (mental model) and her ability to abstract the system, because of the strong dependency on semantic knowledge. Since the beginning of SD, available system data sources (written and numerical models) have been sparse, very limited, and imperfect, and thus of little benefit to the whole modelling process. In recent years, however, we have seen an explosion in generated data, especially in all business-related domains that are analysed via Business Dynamics (BD). In this paper, we introduce a systematic, tool-supported CLD creation approach, which analyses and utilises available disparate data sources within the business domain. We demonstrate the application of our methodology on a given business use case and evaluate the resulting CLD. Finally, we propose directions for future research to further push the automation of CLD creation and increase confidence in the generated CLDs.
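A minimal sketch of what data-driven CLD support could look like, assuming toy business time series; the paper's actual tool and method are not reproduced here, and the lag-1 correlation rule is only a stand-in for real causal analysis (correlation is not causation, and any derived link would still need domain validation):

```python
import numpy as np

names = ["marketing_spend", "new_customers", "revenue"]
rng = np.random.default_rng(0)
spend = rng.normal(100, 10, 48)                               # monthly series
customers = 0.8 * np.roll(spend, 1) + rng.normal(0, 3, 48)    # lags spend by 1
revenue = 50 * customers + rng.normal(0, 100, 48)
series = np.vstack([spend, customers, revenue])

# Propose signed candidate links for a CLD from lag-1 correlations.
for i in range(len(names)):
    for j in range(len(names)):
        if i == j:
            continue
        r = np.corrcoef(series[i][:-1], series[j][1:])[0, 1]
        if abs(r) > 0.5:
            sign = "+" if r > 0 else "-"
            print(f"{names[i]} --({sign})--> {names[j]}")
```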

Relevance: 30.00%

Abstract:

Manually constructing domain-specific sentiment lexicons is extremely time consuming, and it may not even be feasible for domains where linguistic expertise is not available; research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize the readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons improve polarity prediction at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
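The pseudo-labeling idea can be illustrated in a few lines. This is a deliberate simplification, not the paper's two-pass method: user ratings serve as document labels, and a smoothed log-odds score stands in for the corpus-based statistical learning:

```python
from collections import Counter
import math

reviews = [  # (review text, user-contributed rating used as a pseudo label)
    ("battery life is excellent and reliable", 5),
    ("excellent screen, fast and reliable", 5),
    ("terrible battery, died fast", 1),
    ("screen cracked, terrible build", 1),
]

pos, neg = Counter(), Counter()
for text, rating in reviews:
    target = pos if rating >= 4 else neg
    target.update(text.split())

# Score each term by smoothed log-odds of appearing in positive vs. negative docs.
vocab = set(pos) | set(neg)
lexicon = {w: math.log((pos[w] + 1) / (neg[w] + 1)) for w in vocab}

for w in sorted(lexicon, key=lexicon.get, reverse=True)[:3]:
    print(w, round(lexicon[w], 2))  # most positive domain terms float to the top
```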

Relevance: 30.00%

Abstract:

Generic sentiment lexicons have been widely used for sentiment analysis. However, manually constructing sentiment lexicons is very time-consuming, and it may not be feasible for application domains where annotation expertise is not available. One contribution of this paper is the development of a statistical-learning-based computational method for the automatic construction of domain-specific sentiment lexicons to enhance cross-domain sentiment analysis. Our initial experiments show that the proposed methodology can automatically generate domain-specific sentiment lexicons which help improve the effectiveness of opinion retrieval at the document level. Another contribution of our work is to show the feasibility of applying a sentiment metric, derived from the automatically constructed sentiment lexicons, to predict product sales in certain product categories. Our research contributes to the development of more effective sentiment analysis systems for extracting business intelligence from the numerous opinionated expressions posted to the Web.
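As a hypothetical follow-on to the previous sketch, a lexicon of this kind yields a document-level sentiment metric that can be aggregated per product and compared against sales, in the spirit of the abstract's second contribution; all scores, reviews, and product names below are invented:

```python
# Toy domain-specific lexicon: term -> polarity score (higher = more positive).
lexicon = {"excellent": 1.4, "reliable": 1.1, "terrible": -1.6, "cracked": -0.9}

def doc_sentiment(text):
    """Average lexicon score of the terms that appear in the document."""
    hits = [lexicon[w] for w in text.split() if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

product_reviews = {
    "phone-A": ["excellent and reliable", "reliable screen"],
    "phone-B": ["terrible battery", "screen cracked early"],
}

# Aggregate per product; a higher metric would be checked against sales figures.
for product, docs in product_reviews.items():
    metric = sum(doc_sentiment(d) for d in docs) / len(docs)
    print(product, round(metric, 2))
```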

Relevance: 30.00%

Abstract:

The pressures placed on the natural, environmental, economic, and cultural sectors by continued growth, population shifts, weather and climate, and environmental quality are increasing exponentially in the southeastern U.S. region. Our growing understanding of the relationship of humans with the marine environment is leading us to explore new ecosystem-based approaches to coastal management, marine resources planning, and coastal adaptation that engage multiple state jurisdictions. The urgency of the situation calls for coordinated regional actions by the states, in conjunction with supporting partners and leveraging a diversity of resources, to address critical issues in sustaining our coastal and ocean ecosystems and enhancing the quality of life of our citizens. The South Atlantic Alliance (www.southatlanticalliance.org) was formally established on October 19, 2009 to "implement science-based policies and solutions that enhance and protect the value of coastal and ocean resources of the southeastern United States which support the region's culture and economy now and for future generations." The Alliance, which includes North Carolina, South Carolina, Georgia, and Florida, will provide a regional mechanism for collaborating, coordinating, and sharing information in support of resource sustainability; improved regional alignment; cooperative planning and leveraging of resources; integrated research, observations, and mapping; increased awareness of the challenges facing the South Atlantic region; and inclusiveness and integration at all levels. Although I am preparing and presenting this overview of the South Atlantic Alliance and its current status, a host of representatives from agencies within the four states, universities, NGOs, and ongoing southeastern regional ocean and coastal programs are contributing significant time, expertise, and energy to the success of the Alliance; the information presented herein, and to be presented in my oral presentation, was generated by the collaborative efforts of these professionals. I also wish to acknowledge the wisdom and foresight of the Governors of the four states in establishing this exciting regional ocean partnership. (PDF contains 4 pages)

Relevance: 30.00%

Abstract:

We will take the view that the end result of problem solving in some world should be increased expertness. In the context of computers, increasing expertness means writing programs. This thesis is about a process, reasoning by analogy, that writes programs. Analogy relates one problem world to another. We will call the world in which we have an expert problem solver the IMAGE world, and the other world the DOMAIN world. Analogy will construct an expert problem solver in the domain world, using the image-world expert for inspiration.

Relevance: 30.00%

Abstract:

The paper has three main aims. First, to trace – through the pages of the Journal – the changing ways in which lay understandings of health and illness were represented during the 1979-2002 period. Second, to say something about the limits of lay knowledge (and particularly lay expertise) in matters of health and medicine. Third, to call for a re-assessment of what lay people can offer to a democratised and customer-sensitive system of health care, and to attempt to draw a boundary around the domain of expertise. In following through on those aims, the author calls upon data derived from three current projects. These concern the diagnosis of Alzheimer's disease in people with Down's syndrome; the development of an outcome measure for people who have suffered a traumatic brain injury; and a study of why older people might reject annual influenza vaccinations. Key words: lay health beliefs, lay expertise, Alzheimer's, traumatic brain injury, vaccinations