161 results for domain experts
at Queensland University of Technology - ePrints Archive
Abstract:
Construction organisations comprise geographically dispersed, virtually linked sub-organisations that work together to realise projects. They increasingly do so using information and communication technology (ICT) to communicate, coordinate their activities and solve complex problems. One salient problem they face is how to use the requisite ICT tools effectively. One important tool at their disposal is the self-help group, a body of people that springs up organically to solve shared problems. The more recognised term for this organisational form is a community of practice (COP). COPs generate knowledge networks that enhance and sustain competitive advantage, and they are also used to help COP members actually use ICT tools. Etienne Wenger defines communities of practice as “groups of people informally bound together by shared expertise and passion for a joint enterprise” (Wenger and Snyder 2000, p. 139). This ‘chicken-or-egg’ issue (needing a COP to use the very tools that are needed to broaden COPs beyond co-located groups) led us to explore how best to improve the process of ICT diffusion through construction organisations, primarily using people supported by technology that improves knowledge sharing. We present insights gained from recent PhD research in this area. A semi-structured interview approach was used to collect data from ICT strategists and users in three large Australian construction organisations that are among the ten or so first-tier companies by annual dollar turnover in Australia. The interviewees were categorised into five organisational levels: IT strategist, implementer, project or engineering manager, site engineer and foreman. The focus of the study was on the organisation and the way it implements a groupware ICT diffusion initiative.
Several types of COP networks are identified from the three Australian cases: within-organisation COPs; institutional, implementer or technical support; project manager/engineer focussed; and collegial support. There are also cross-organisational COPs that emerge organically as a result of people sharing an interest or experience in something significant. Firstly, an institutional network is defined as a strategic group interested in the development of technology innovation within an organisation. This COP principally links business process domain experts with an ICT strategist.
Abstract:
Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While there are increasing numbers of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, owing to their complexity, are not readily accessible to the majority of people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.
Abstract:
Product Lifecycle Management has been developed as an approach to providing timely engineering information. However, the number of domain specializations within manufacturing makes such information communication disjointed, inefficient and error-prone. In this paper we propose an immersive 3D visualization of linked domain-specific information views for improving and accelerating communication processes in Product Lifecycle Management. With a common yet understandable visualization of several domain views, interconnections and dependencies become obvious. The conceptual framework presented here links domain-specific information extracts from Product Lifecycle Management systems with each other and displays them via an integrated 3D representation scheme. We expect this visualization framework to support holistic tactical decision-making processes between domain experts in operational and tactical manufacturing scenarios.
Abstract:
Accurate reliability prediction for large-scale, long-lived engineering assets is a crucial foundation for effective asset risk management and optimal maintenance decision making. However, a lack of failure data for assets that fail infrequently, and changing operational conditions over long periods of time, make accurate reliability prediction for such assets very challenging. To address this issue, we present a Bayesian-Markov approach to reliability prediction using prior knowledge and condition monitoring data. In this approach, Bayesian theory is used to incorporate prior information about failure probabilities and current information about asset health to make statistical inferences, while Markov chains are used to update and predict the health of assets based on condition monitoring data. The prior information can be supplied by domain experts, extracted from previous comparable cases or derived from basic engineering principles. Our approach differs from existing hybrid Bayesian models, which are normally used to update the parameter estimation of a given distribution (such as the Weibull-Bayesian distribution) or the transition probabilities of a Markov chain. Instead, our new approach can be used to update predictions of failure probabilities when failure data are sparse or nonexistent, as is often the case for large-scale, long-lived engineering assets.
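The combination described above (a Bayesian update of the current health belief from condition monitoring data, followed by Markov-chain propagation to predict failure probability) can be sketched as follows. The states, transition matrix and sensor likelihoods below are invented for illustration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical health states: 0 = healthy, 1 = degraded, 2 = failed
# (failed is absorbing). P is an expert-elicited monthly transition
# matrix standing in for prior engineering knowledge.
P = np.array([
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])

# Hypothetical sensor likelihoods P(observation | state) for a binary
# condition-monitoring alarm.
likelihood = {"no alarm": np.array([0.90, 0.30, 0.05]),
              "alarm":    np.array([0.10, 0.70, 0.95])}

def bayes_update(belief, obs):
    """Condition the state belief on new monitoring data (Bayes' rule)."""
    posterior = belief * likelihood[obs]
    return posterior / posterior.sum()

def predict_failure(belief, n_steps):
    """Propagate the belief forward with the Markov chain and return
    the probability of being in the failed state after n_steps."""
    return (belief @ np.linalg.matrix_power(P, n_steps))[2]

belief = np.array([0.8, 0.2, 0.0])      # prior from comparable assets
belief = bayes_update(belief, "alarm")  # incorporate latest inspection
print(predict_failure(belief, 12))      # 12-month failure probability
```

Because the failed state is absorbing, the predicted failure probability is monotonically increasing in the horizon, which matches the intuition that risk accumulates over an asset's remaining life.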
Abstract:
Electronic services are a leitmotif in ‘hot’ topics like Software as a Service, Service Oriented Architecture (SOA), Service-oriented Computing, Cloud Computing, application markets and smart devices. We propose to consider these in what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way the ‘Web’ describes a plethora of technologies that eventuate to connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). A SES will be dynamic, with areas of structured and unstructured information within which service providers and ‘lay’ human consumers interact; until now the two have been disjointed, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or ‘hard-wired’ to smart device application markets and web applications. In a SES, services are accessible, comparable and exchangeable to human consumers, closing the gap to the providers. This requires a new SD with which humans can discover services transparently and effectively without special knowledge or training. We propose two modes of discovery: directed search, which follows an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement the modes of discovery using concepts to map a lay consumer’s service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task on the information attached to services, such as descriptions, reviews, documentation and web sites - the Service Information Shadow.
The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space including categorization calling it the Semantic Service Discovery model. We evaluate our model with a highly relevant, service related corpus simulating a Service Information Shadow including manually constructed complex service agendas, as well as manual groupings of services. We compare our model against state-of-the-art information retrieval systems and clustering algorithms. By means of an extensive series of empirical evaluations, we establish optimal parameter settings for the semantic space model. The evaluations demonstrate the model’s effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search) and the meaningful, automatic categorization of service related information, which shows potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
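The semantic space idea described above follows the general pattern of latent semantic analysis: project a term-document matrix into a low-rank concept space and rank services by cosine similarity there. The following is a minimal illustrative sketch of that general pattern only; the toy corpus, the rank k = 2 and the fold-in projection are invented for illustration and are not the authors' Semantic Service Discovery model.

```python
import numpy as np

# Toy "Service Information Shadow": short service descriptions.
docs = [
    "send email notification message service",
    "email spam filter message service",
    "weather forecast temperature service",
    "route planner map navigation service",
]

# Build a term-document count matrix (bag of words).
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[index[w], j] += 1

# Project into a low-rank "semantic space" via truncated SVD (LSA-style).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in concept space

def search(query):
    """Fold a lay consumer's query into the concept space and return
    the index of the most similar service (directed search)."""
    q = np.zeros(len(vocab))
    for w in query.split():
        if w in index:
            q[index[w]] += 1
    q_vec = q @ U[:, :k]                 # fold-in projection
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q_vec) + 1e-12))
    return int(np.argmax(sims))
```

The low-rank projection is what lets terminologically naive queries match sophisticated service descriptions: terms that co-occur across the corpus end up close together in the concept space even when the query and description share few exact words.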
Abstract:
Reasoning with uncertain knowledge and belief has long been recognized as an important research issue in Artificial Intelligence (AI). Several methodologies have been proposed in the past, including knowledge-based systems, fuzzy sets, and probability theory. The probabilistic approach became popular mainly due to a knowledge representation framework called Bayesian networks. Bayesian networks have earned a reputation as powerful tools for modeling complex problems involving uncertain knowledge. Uncertain knowledge exists in domains such as medicine, law, geographical information systems and design, as it is difficult to retrieve all knowledge and experience from experts. In the design domain, experts believe that design style is an intangible concept and that its knowledge is difficult to present in a formal way. The aim of this research is to find ways to represent design style knowledge in Bayesian networks. We show that these networks can be used for diagnosis (inference) and classification of design style. Furniture design style is selected as the example domain; however, the method can be used for any other domain.
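The kind of inference the abstract describes (classifying a design style from observed features) can be illustrated with a toy Bayesian network in which a style node is the parent of observable feature nodes. The structure and all probabilities below are hypothetical stand-ins, not the authors' furniture-style network.

```python
# Hypothetical two-feature network: Style -> LegShape, Style -> Ornament.
# All probabilities are invented for illustration only.
p_style = {"baroque": 0.5, "modern": 0.5}
p_leg = {"baroque": {"curved": 0.9, "straight": 0.1},
         "modern":  {"curved": 0.2, "straight": 0.8}}
p_orn = {"baroque": {"rich": 0.8, "plain": 0.2},
         "modern":  {"rich": 0.1, "plain": 0.9}}

def classify(leg, ornament):
    """P(style | leg, ornament) by enumeration; the features are
    conditionally independent given the style node."""
    joint = {s: p_style[s] * p_leg[s][leg] * p_orn[s][ornament]
             for s in p_style}
    z = sum(joint.values())            # normalising constant
    return {s: joint[s] / z for s in joint}

print(classify("curved", "rich"))      # posterior over styles
```

With curved legs and rich ornamentation the posterior strongly favours "baroque"; the same machinery runs diagnostically in the other direction if a feature node is queried given an assumed style.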
Abstract:
Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, in order for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, there has been widespread adoption of standard schemas in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community takes a closer look at how well these standard schemas have fared in real-world applications to identify not only their advantages, but also the operational challenges that real users face. In this paper, we both examine the usability of standard schemas in a comparison that spans multiple disciplines, and describe our first step at resolving some of these issues in our Semantic Modeling System. We evaluate our Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, which we conducted with domain experts. We discuss how our Semantic Modeling System can help the broader problem and also discuss a number of challenges that still remain.
Abstract:
The tertiary sector is an important employer and its growth is well above average. The Texo project’s aim is to support this development by making services tradable. The composition of new or value-added services is a cornerstone of the proposed architecture. It is, however, intended to cater for build-time. Yet, at run-time, unforeseen exceptions may occur and users’ requirements may change. Varying circumstances require immediate sensemaking of the situation’s context and call for prompt extensions of existing services. Lightweight composition technology provided by the RoofTop project enables domain experts to create simple widget-like applications, also termed enterprise mashups, without extensive methodological skills. In this way RoofTop can assist and extend the idea of service delivery through the Texo platform and is a further step towards a next-generation internet of services.
Abstract:
It is well-known that the use of off-site manufacture (OSM) techniques can assist in the timely completion of a construction project, though the utilisation of such techniques may have other disadvantages. Currently, OSM uptake within the Australian construction industry is limited. To successfully incorporate OSM practices within a construction project, it is crucial to understand the impact of OSM adoption on the processes used during a construction project. This paper presents how a systematic process-oriented approach may be able to support OSM utilisation within a construction project. Process modelling, analysis and automation techniques which are well-known within the Business Process Management (BPM) discipline have been applied to develop a collection of construction process models that represent the end-to-end generic construction value chain. The construction value chain enables researchers to identify key activities, resources, data, and stakeholders involved in construction processes in each defined construction phase. The collection of construction process models is then used as a basis for identification of potential OSM intervention points in collaboration with domain experts from the Australian construction industry. This ensures that the resulting changes reflect the needs of various stakeholders within the construction industry and have relevance in practice. Based on the input from the domain experts, these process models are further refined and operational requirements are taken into account to develop a prototype process automation (workflow) system that can support and coordinate OSM-related process activities. The resulting workflow system also has the potential to integrate with other IT solutions used within the construction industry (e.g., BIM, Aconex). As such, the paper illustrates the role that process-oriented thinking can play in assisting OSM adoption within the industry.
Abstract:
Process modeling is a widely used concept for understanding, documenting and also redesigning the operations of organizations. The validation and usage of process models is, however, affected by the fact that only business analysts fully understand them in detail. This is a particular problem because they are typically not domain experts. In this paper, we investigate to what extent the concept of verbalization can be adapted from object-role modeling to process models. To this end, we define an approach which automatically transforms BPMN process models into natural language texts, combining different techniques from linguistics and graph decomposition in a flexible and accurate manner. The evaluation of the technique is based on a prototypical implementation and involves a test set of 53 BPMN process models, showing that natural language texts can be generated in a reliable fashion.
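The core idea, walking a process model's sequence flow and emitting sentences via simple linguistic rules, can be hinted at with a deliberately tiny sketch. The model representation, role handling and sentence templates below are invented stand-ins; the paper's actual technique handles full BPMN graphs with gateways and graph decomposition.

```python
# Hypothetical minimal process model: an ordered list of (role, task)
# pairs standing in for a linearised BPMN sequence flow.
process = [
    ("clerk", "receive the order"),
    ("clerk", "check the stock"),
    ("manager", "approve the shipment"),
]

def verbalize(steps):
    """Turn a linear task sequence into simple natural language text,
    using a pronoun when the performing role does not change."""
    sentences, prev_role = [], None
    for i, (role, task) in enumerate(steps):
        verb, _, rest = task.partition(" ")
        opener = "First" if i == 0 else "Then"
        subject = f"the {role}" if role != prev_role else "he or she"
        sentences.append(f"{opener}, {subject} {verb}s {rest}.")
        prev_role = role
    return " ".join(sentences)

print(verbalize(process))
```

Even this toy version shows why linguistic techniques matter: verb conjugation, sentence openers and referring expressions all have to be generated, not copied, from the model's labels.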
Abstract:
The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
Abstract:
Citizen science projects have demonstrated the advantages of people with limited relevant prior knowledge participating in research. However, there is a difference between engaging the general public in a scientific project and entering an established expert community to conduct research. This paper describes our ongoing acoustic biodiversity monitoring collaborations with the bird watching community. We report on findings gathered over six years from participation in bird walks, observing conservation efforts, and records of personal activities of experienced birders. We offer an empirical study into extending existing protocols through in-context collaborative design involving scientists and domain experts.
Abstract:
For the first decade of its existence, the concept of citizen journalism has described an approach which was seen as a broadening of the participant base in journalistic processes, but still involved only a comparatively small subset of overall society – for the most part, citizen journalists were news enthusiasts and “political junkies” (Coleman, 2006) who, as some exasperated professional journalists put it, “wouldn’t get a job at a real newspaper” (The Australian, 2007), but nonetheless followed many of the same journalistic principles. The investment – if not of money, then at least of time and effort – involved in setting up a blog or participating in a citizen journalism Website remained substantial enough to prevent the majority of Internet users from engaging in citizen journalist activities to any significant extent; what emerged in the form of news blogs and citizen journalism sites was a new online elite which for some time challenged the hegemony of the existing journalistic elite, but gradually also merged with it. The mass adoption of next-generation social media platforms such as Facebook and Twitter, however, has led to the emergence of a new wave of quasi-journalistic user activities which now much more closely resemble the “random acts of journalism” which JD Lasica envisaged in 2003. Social media are not exclusively or even predominantly used for citizen journalism; instead, citizen journalism is now simply a by-product of user communities engaging in exchanges about the topics which interest them, or tracking emerging stories and events as they happen. 
Such platforms – and especially Twitter with its system of ad hoc hashtags that enable the rapid exchange of information about issues of interest – provide spaces for users to come together to “work the story” through a process of collaborative gatewatching (Bruns, 2005), content curation, and information evaluation which takes place in real time and brings together everyday users, domain experts, journalists, and potentially even the subjects of the story themselves. Compared to the spaces of news blogs and citizen journalism sites, but also of conventional online news Websites, which are controlled by their respective operators and inherently position user engagement as a secondary activity to content publication, these social media spaces are centred around user interaction, providing a third-party space in which everyday as well as institutional users, laypeople as well as experts converge without being able to control the exchange. Drawing on a number of recent examples, this article will argue that this results in a new dynamic of interaction and enables the emergence of a more broadly-based, decentralised, second wave of citizen engagement in journalistic processes.
Abstract:
The number of international and local students whose first language is not English and who are studying in English-medium universities has increased significantly in the past decade. Many of these students aim to start working in the country they studied in; however, some employers have suggested that graduates seeking employment have insufficient language skills. This study provides a detailed insight into the changing writing demands from the last year of university study to the first year in the workforce of engineering and accounting professionals (our two case study professions). It relates these to the demands of the writing component of IELTS, which is increasingly used for exit or professional entry testing, although not expressly designed for this purpose. Data include interviews with final year students, lecturers, employers and new graduates in their first few years in the workforce, as well as professional board members. Employers also reviewed both final year assignments and IELTS writing samples at different levels. Most stakeholders agreed that graduates entering the workforce are underprepared for the writing demands of their professions. When compared with university writing tasks, the workplace writing expected of new graduates was perceived as different in terms of genre, the tailoring of a text for a specific audience, and the processes of review and editing involved. Stakeholders expressed a range of views on the suitability of academic proficiency tests (such as IELTS) as university exit tests and for entry into the professions.
With regard to IELTS, while some saw the relevance of the two writing tasks, particularly in relation to academic writing, others questioned the extent to which two timed tasks representing limited genres could elicit a representative sample of the professional writing required, particularly in the context of engineering. The findings are discussed in relation to different test purposes, the intersection between academic and specific purpose testing and the role of domain experts in test validation.
Abstract:
E-health can facilitate communication and interactions among stakeholders involved in pandemic responses. Its implementation, nevertheless, represents a disruptive change in the healthcare workplace. Organisational preparedness assessment is an essential requirement prior to e-health implementation; including this step in the planning process can increase the chances of programme success. The objective of this study is to develop an e-health preparedness assessment model for pandemic influenza (EHPM4P). Following the Analytic Hierarchy Process (AHP), 20 contextual interviews were conducted with domain experts from May to September 2010. We examined the importance of all preparedness components within a five-dimensional hierarchical framework that was recently published. We also calculated the relative weight for each component at all levels of the hierarchy. This paper presents the hierarchical model (EHPM4P), which can be used to precisely assess healthcare organisations' and providers' preparedness for e-health implementation and potentially maximise e-health benefits in the context of an influenza pandemic. Copyright © 2013 Inderscience Enterprises Ltd.
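The AHP step mentioned above derives relative weights for the hierarchy's components from experts' pairwise comparisons, conventionally by taking the principal eigenvector of a reciprocal judgement matrix and checking a consistency ratio. The 3x3 matrix below is invented for illustration; it is not the EHPM4P hierarchy or the study's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three preparedness
# dimensions, on Saaty's 1-9 scale (values invented for illustration;
# the matrix must be reciprocal: a_ji = 1 / a_ij).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Relative weights = normalised principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency ratio checks whether the expert judgements are coherent;
# CR < 0.1 is conventionally acceptable (random index RI = 0.58 for n = 3).
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.58
print(weights, cr)
```

Repeating this calculation at each level of the hierarchy and multiplying weights down the tree yields the global component weights that the abstract refers to.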