966 results for domain experts
Abstract:
Citizen science projects have demonstrated the advantages of involving people with limited relevant prior knowledge in research. However, there is a difference between engaging the general public in a scientific project and entering an established expert community to conduct research. This paper describes our ongoing acoustic biodiversity monitoring collaborations with the bird watching community. We report on findings gathered over six years from participating in bird walks, observing conservation efforts, and reviewing records of experienced birders' personal activities. We offer an empirical study into extending existing protocols through in-context collaborative design involving scientists and domain experts.
Abstract:
For the first decade of its existence, the concept of citizen journalism has described an approach which was seen as a broadening of the participant base in journalistic processes, but still involved only a comparatively small subset of overall society – for the most part, citizen journalists were news enthusiasts and “political junkies” (Coleman, 2006) who, as some exasperated professional journalists put it, “wouldn’t get a job at a real newspaper” (The Australian, 2007), but nonetheless followed many of the same journalistic principles. The investment – if not of money, then at least of time and effort – involved in setting up a blog or participating in a citizen journalism website remained substantial enough to prevent the majority of Internet users from engaging in citizen journalist activities to any significant extent; what emerged in the form of news blogs and citizen journalism sites was a new online elite which for some time challenged the hegemony of the existing journalistic elite, but gradually also merged with it. The mass adoption of next-generation social media platforms such as Facebook and Twitter, however, has led to the emergence of a new wave of quasi-journalistic user activities which now much more closely resemble the “random acts of journalism” which JD Lasica envisaged in 2003. Social media are not exclusively or even predominantly used for citizen journalism; instead, citizen journalism is now simply a by-product of user communities engaging in exchanges about the topics which interest them, or tracking emerging stories and events as they happen. Such platforms – and especially Twitter with its system of ad hoc hashtags that enable the rapid exchange of information about issues of interest – provide spaces for users to come together to “work the story” through a process of collaborative gatewatching (Bruns, 2005), content curation, and information evaluation which takes place in real time and brings together everyday users, domain experts, journalists, and potentially even the subjects of the story themselves. Compared to the spaces of news blogs and citizen journalism sites, but also of conventional online news websites, which are controlled by their respective operators and inherently position user engagement as a secondary activity to content publication, these social media spaces are centred around user interaction, providing a third-party space in which everyday as well as institutional users, laypeople as well as experts converge without being able to control the exchange. Drawing on a number of recent examples, this article will argue that this results in a new dynamic of interaction and enables the emergence of a more broadly based, decentralised, second wave of citizen engagement in journalistic processes.
Abstract:
This paper describes a new knowledge acquisition method using a generic design environment in which context-sensitive knowledge is used to build specific decision support systems (DSSs) for rural businesses. Although standard knowledge acquisition methods have been applied in rural business applications, uptake remains low and familiar weaknesses such as obsolescence and brittleness apply. We describe a DSS building environment in which contextual factors relevant to the end users are directly taken into consideration. This "end user enabled design environment" (EUEDE) engages both domain experts, in creating an expert knowledge base, and business operators/end users (such as farmers), in using this knowledge to build their specific DSSs. We document the knowledge organisation for the problem domain, namely a dairy industry application. The development involved a case-study research approach to explore dairy operational knowledge. In this system, end users can tailor their decision-making requirements, using their own judgement to build specific DSSs. Within a given end user's farming context, each DSS provides expert suggestions to help farmers improve their farming practice. The paper also demonstrates the environment's generic capability.
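The EUEDE's internals are not given in the abstract; as a hedged sketch of the two-role split it describes, the following Python fragment (all names and thresholds hypothetical) has domain experts register context-tagged rules in a shared knowledge base, from which an end user assembles a specific DSS by selecting the contexts that match their operation.

```python
# Minimal sketch of the expert / end-user split described above.
# All names and thresholds are hypothetical, not the paper's actual design.

class KnowledgeBase:
    """Expert-curated rules, each tagged with the contexts it applies to."""
    def __init__(self):
        self.rules = []  # (contexts, condition, suggestion)

    def add_rule(self, contexts, condition, suggestion):
        self.rules.append((set(contexts), condition, suggestion))

class SpecificDSS:
    """A DSS an end user builds by selecting the contexts relevant to them."""
    def __init__(self, kb, contexts):
        self.rules = [(cond, sugg) for ctx, cond, sugg in kb.rules
                      if ctx & set(contexts)]

    def advise(self, farm_state):
        return [sugg for cond, sugg in self.rules if cond(farm_state)]

# A domain expert populates the shared knowledge base.
kb = KnowledgeBase()
kb.add_rule(["dairy", "pasture"],
            lambda s: s["pasture_cover_kgDM_ha"] < 1800,
            "Pasture cover is low: consider supplementary feeding.")
kb.add_rule(["dairy"],
            lambda s: s["somatic_cell_count"] > 400_000,
            "High somatic cell count: review milking hygiene.")

# An end user (farmer) builds a DSS for their own context and queries it.
dss = SpecificDSS(kb, contexts=["dairy", "pasture"])
print(dss.advise({"pasture_cover_kgDM_ha": 1500, "somatic_cell_count": 250_000}))
```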
Abstract:
The aim of the study was to analyze and facilitate collaborative design in a virtual learning environment (VLE). Discussions of virtual design in design education have typically focused on technological or communication issues, not on pedagogical issues. Yet in order to facilitate collaborative design, it is also necessary to address the pedagogical issues related to the virtual design process. In this study, the progressive inquiry model of collaborative designing was used to give a structural level of facilitation to students working in the VLE. According to this model, all aspects of inquiry, such as creating the design context, constructing a design idea, evaluating the idea, and searching for new information, can be shared in a design community. The study consists of three design projects: 1) designing clothes for premature babies, 2) designing conference bags for an international conference, and 3) designing tactile books for visually impaired children. These design projects constituted a continuum of design experiments, each of which highlighted certain perspectives on collaborative designing. The design experiments were organized so that the participants worked in design teams, both face-to-face and virtually. The first design experiment focused on peer collaboration among textile teacher students in the VLE. The second design experiment took end-users' needs into consideration by using a participatory design approach. The third design experiment intensified computer-supported collaboration between students and domain experts. In these design experiments, the virtual learning environments were designed to support knowledge-building pedagogy and progressive inquiry learning. These environments enabled a detailed recording of all computer-mediated interactions and data related to virtual designing. The data analysis was based on qualitative content analysis of design statements in the VLE. The study identified four crucial issues concerning collaborative design in the VLE in craft and design education. Firstly, using the collaborative design process in craft and design education gives rise to special challenges: building learning communities, creating appropriate design tasks for them, and providing tools for collaborative activities. Secondly, the progressive inquiry model of collaborative designing can be used as a scaffold for design thinking and for reflection on the design process. Thirdly, participation and distributed expertise can be facilitated by identifying the key stakeholders related to the design task or design context and getting them to participate in virtual designing. Fourthly, in the collaborative design process it is important that team members create and improve visual and technical ideas together, not just agree or disagree about proposed ideas. Therefore, viewing the VLE as a medium for the collaborative construction of design objects appears crucial in order to understand and facilitate the complex processes in collaborative designing.
Abstract:
The number of international and local students whose first language is not English and who are studying in English-medium universities has increased significantly in the past decade. Many of these students aim to start working in the country they studied in; however, some employers have suggested that graduates seeking employment have insufficient language skills. This study provides a detailed insight into the changing writing demands from the last year of university study to the first year in the workforce of engineering and accounting professionals (our two case study professions). It relates these to the demands of the writing component of IELTS, which is increasingly used for exit or professional entry testing, although not expressly designed for this purpose. Data include interviews with final-year students, lecturers, employers and new graduates in their first few years in the workforce, as well as professional board members. Employers also reviewed both final-year assignments and IELTS writing samples at different levels. Most stakeholders agreed that graduates entering the workforce are underprepared for the writing demands of their professions. When compared with the university writing tasks, the workplace writing expected of new graduates was perceived as different in terms of genre, the tailoring of a text for a specific audience, and the processes of review and editing involved. Stakeholders expressed a range of views on the suitability of academic proficiency tests (such as IELTS) as university exit tests and for entry into the professions. With regard to IELTS, while some saw the relevance of the two writing tasks, particularly in relation to academic writing, others questioned the extent to which two timed tasks representing limited genres could elicit a representative sample of the professional writing required, particularly in the context of engineering. The findings are discussed in relation to different test purposes, the intersection between academic and specific purpose testing, and the role of domain experts in test validation.
Abstract:
E-health can facilitate communication and interactions among stakeholders involved in pandemic responses. Its implementation, nevertheless, represents a disruptive change in the healthcare workplace. Organisational preparedness assessment is an essential requirement prior to e-health implementation; including this step in the planning process can increase the chances of programme success. The objective of this study is to develop an e-health preparedness assessment model for pandemic influenza (EHPM4P). Following the Analytic Hierarchy Process (AHP), 20 contextual interviews were conducted with domain experts from May to September 2010. We examined the importance of all preparedness components within a recently published five-dimensional hierarchical framework and calculated the relative weight of each component at every level of the hierarchy. This paper presents the hierarchical model (EHPM4P), which can be used to precisely assess healthcare organisations' and providers' preparedness for e-health implementation and potentially maximise e-health benefits in the context of an influenza pandemic.
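The abstract names AHP but not its mechanics. As a rough illustration of the weight-derivation step it relies on, the sketch below (Python; the dimension judgments are hypothetical values, not the EHPM4P's actual components) collects expert pairwise comparisons in a reciprocal matrix, approximates priority weights by the standard geometric-mean method, and checks Saaty's consistency ratio.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method,
    and report Saaty's consistency ratio (CR < 0.1 is conventionally OK)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)      # row geometric means
    w /= w.sum()                              # normalise to weights
    lam_max = (A @ w / w).mean()              # estimate principal eigenvalue
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty random indices
    cr = 0.0 if ri == 0 else (lam_max - n) / ((n - 1) * ri)
    return w, cr

# Hypothetical expert judgments over three preparedness dimensions:
# A[i][j] = how much more important dimension i is than dimension j.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```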
Abstract:
The delivery of products and services for construction-based businesses is increasingly becoming knowledge-driven and information-intensive. The proliferation of building information modelling (BIM) has increased business opportunities as well as introduced new challenges for the architectural, engineering and construction and facilities management (AEC/FM) industry. As such, the effective use, sharing and exchange of building life cycle information, and knowledge management across building design, construction, maintenance and operation, are of paramount importance. This paper identifies a subset of knowledge relevant to construction management (CM) for different design conditions of building components through a critical, comprehensive review of synthesized literature and other information gathering and knowledge acquisition techniques. It then explores how such domain knowledge can be formalized as ontologies and, subsequently, as a query vocabulary, in order to equip BIM users with the capacity to query digital models of a building for the retrieval of useful, domain-specific information. The formalized construction knowledge is validated through interviews with domain experts in relation to four case study projects. Additionally, retrospective analyses of several design conditions are used to demonstrate the soundness (realism), completeness, and appeal of the knowledge base and the query-based reasoning approach relative to the state-of-the-art tools Solibri Model Checker and Navisworks. The knowledge engineering process and the methods applied in this research for information representation and retrieval could provide useful mechanisms to leverage BIM in support of a number of knowledge-intensive CM/FM tasks and functions.
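The paper's ontologies and query vocabulary are not reproduced in the abstract. A minimal sketch of the general approach, using the rdflib library and entirely hypothetical class and property names, might encode design conditions and their associated CM knowledge as triples and retrieve them with a small SPARQL query:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/cm#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# A toy fragment of a CM knowledge base: one design condition linked
# to one piece of construction-management knowledge.
g.add((EX.SlabOnGrade, RDF.type, EX.DesignCondition))
g.add((EX.SlabOnGrade, EX.hasCMKnowledge, EX.CuringRequirement))
g.add((EX.CuringRequirement, RDFS.comment,
       Literal("Moist-cure slab for at least 7 days before loading.")))

# Query-vocabulary sketch: retrieve CM knowledge for a design condition.
q = """
PREFIX ex: <http://example.org/cm#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?knowledge ?note WHERE {
    ex:SlabOnGrade ex:hasCMKnowledge ?knowledge .
    ?knowledge rdfs:comment ?note .
}
"""
for row in g.query(q):
    print(row.knowledge, "->", row.note)
```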
Abstract:
There has been a demand for uniform computer-aided design (CAD) standards in the construction industry ever since the large-scale introduction of CAD systems in the late 1980s. While some standards have been widely adopted without much formal effort, others have failed to gain support even though considerable resources were allocated for the purpose. Establishing a standard for building information modeling has been a particularly active area of industry development and scientific interest in recent years. In this paper, four standards are discussed as cases: the IGES and DXF/DWG standards for representing the graphics in 2D drawings, the ISO 13567 standard for structuring building information on layers, and the IFC standard for building product models. Based on a literature study combined with two qualitative interview studies with domain experts, a process model is proposed to describe and interpret the contrasting histories of past CAD standardisation processes.
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, all of these patterns can be detected in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore, we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in noisy datasets, we need methods that are noise-tolerant and run in practical time on large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, the division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data. In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
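To make the flip-distance idea concrete: under a fixed column order, the cheapest way to give a row consecutive 1s using only 0-to-1 flips is to fill the gaps between its first and last 1, so the cost per row is the width of that span minus the number of 1s. Below is a minimal Python sketch of this arithmetic (not the thesis's algorithms, which also search over orderings and tolerate noise).

```python
def c1p_flip_count(matrix):
    """For each row, count the 0->1 flips needed to make its 1s consecutive
    under the current column order; 0 everywhere means the matrix has C1P
    for this ordering. (Full C1P testing searches over column permutations,
    e.g. with PQ-trees; this sketch takes the ordering as given.)"""
    total = 0
    for row in matrix:
        ones = [j for j, v in enumerate(row) if v == 1]
        if ones:
            span = ones[-1] - ones[0] + 1   # width of the 1-block needed
            total += span - len(ones)       # zeros inside the span to flip
    return total

M = [[1, 1, 0, 0],
     [0, 1, 0, 1],   # gap at column 2: one flip needed
     [0, 0, 1, 1]]
print(c1p_flip_count(M))  # -> 1
```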
Abstract:
The performance of prediction models is often evaluated using "abstract metrics" that estimate a model's ability to limit residual errors between observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, all of which are relevant for energy sustainability. Our results underscore the value of the proposed measures in offering deeper insight into models' behavior and their impact on real applications, benefiting both data mining researchers and practitioners.
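The paper's exact measure definitions are not given in the abstract. As a hedged illustration of the four dimensions it names, the sketch below computes one plausible stand-in for each: CVRMSE for scale independence, the share of predictions within a tolerance band for reliability, the spread of relative errors for volatility, and an asymmetric penalty, parameterized so a domain expert can weight under- versus over-prediction, for cost. All figures are hypothetical.

```python
import numpy as np

def evaluate(obs, pred, tol=0.10, under_cost=2.0, over_cost=1.0):
    """Illustrative application-sensitive measures (not the paper's own):
    scale independence, reliability, volatility, and a parameterized cost."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rel = err / obs                                   # relative errors
    return {
        # scale-independent accuracy: RMSE normalised by the mean load
        "cvrmse": np.sqrt(np.mean(err**2)) / obs.mean(),
        # reliability: fraction of predictions within +/- tol of observed
        "reliability": np.mean(np.abs(rel) <= tol),
        # volatility: spread of the relative errors over the horizon
        "volatility": rel.std(),
        # cost: under-prediction weighted more heavily than over-prediction,
        # e.g. missed demand-response capacity is costlier than surplus
        "cost": np.mean(np.where(err < 0, under_cost, over_cost) * np.abs(err)),
    }

hourly_obs  = [12.1, 13.4, 15.0, 14.2]   # kWh, hypothetical
hourly_pred = [11.8, 14.0, 14.1, 14.6]
print(evaluate(hourly_obs, hourly_pred))
```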
Abstract:
Toivonen, H., Srinivasan, A., King, R. D., Kramer, S. and Helma, C. (2003) Statistical Evaluation of the Predictive Toxicology Challenge 2000-2001. Bioinformatics 19: 1183-1193
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes) and this figure is expected to have grown by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked here is whether a generic solution for the monitoring and analysis of data can be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
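The platform itself is not shown in the abstract. A minimal sketch of the workflow idea (produce, interpret, consume, with a maintainable provenance record) could wrap each analysis step so that its inputs, identity and outputs are fingerprinted and logged, letting an independent third party re-run and verify any derived conclusion. Names and structure below are illustrative only.

```python
import hashlib, json, time

def _digest(obj):
    """Stable fingerprint of any JSON-serialisable value."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

class Pipeline:
    """Runs analysis steps over the same raw data and records provenance,
    so independent third parties can verify each derived result."""
    def __init__(self):
        self.provenance = []

    def run(self, step, data):
        result = step(data)
        self.provenance.append({
            "step": step.__name__,
            "input_sha256": _digest(data),
            "output_sha256": _digest(result),
            "timestamp": time.time(),
        })
        return result

# Two independent analysis techniques applied to the same raw stream.
def mean_amplitude(samples):
    return sum(abs(s) for s in samples) / len(samples)

def zero_crossings(samples):
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

raw = [0.2, -0.1, 0.4, -0.3, 0.1]          # hypothetical signal excerpt
p = Pipeline()
print(p.run(mean_amplitude, raw), p.run(zero_crossings, raw))
print(json.dumps(p.provenance, indent=2))
```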
Abstract:
This paper describes the use of a blackboard architecture for building a hybrid case-based reasoning (CBR) system. The Smartfire fire field modelling package has been built using this architecture and includes a CBR component. The architecture allows qualitative spatial reasoning knowledge from domain experts to be integrated into the system. The system can be used for the automatic set-up of fire field models, enabling fire safety practitioners who are not expert in modelling techniques to use a fire modelling tool. The paper discusses the integrating power of the architecture, which is based on a common knowledge representation comprising a metric diagram and place vocabulary, with mechanisms for adaptation and conflict resolution built on the blackboard.
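Smartfire's internals are not given in the abstract, but the blackboard pattern itself is standard: independent knowledge sources watch a shared blackboard and contribute partial results until the model set-up is complete. The Python sketch below is a generic, hypothetical rendering of that control loop, not Smartfire's actual design.

```python
# Generic blackboard control loop (hypothetical, not Smartfire's code).

class KnowledgeSource:
    def applicable(self, bb): ...
    def contribute(self, bb): ...

class GeometryKS(KnowledgeSource):
    """Derives a place vocabulary from the metric diagram of the room."""
    def applicable(self, bb):
        return "metric_diagram" in bb and "places" not in bb
    def contribute(self, bb):
        bb["places"] = ["near_fire", "doorway", "far_corner"]

class CBRSetupKS(KnowledgeSource):
    """Retrieves a similar past case and adapts its mesh settings."""
    def applicable(self, bb):
        return "places" in bb and "mesh" not in bb
    def contribute(self, bb):
        bb["mesh"] = {"cells": 40_000, "refined_at": bb["places"][0]}

def run_blackboard(bb, sources):
    progressed = True
    while progressed:                      # loop until no source can act
        progressed = False
        for ks in sources:
            if ks.applicable(bb):
                ks.contribute(bb)
                progressed = True
    return bb

bb = {"metric_diagram": "room_5x4m_one_door"}   # shared representation
print(run_blackboard(bb, [GeometryKS(), CBRSetupKS()]))
```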
Abstract:
Composite Applications on top of SAP's implementation of SOA (Enterprise SOA) enable the extension of existing business logic. In this paper we show, based on a case study, how Model-Driven Engineering concepts are applied in the development of such Composite Applications. Our case study extends a back-end business process to meet the specific needs of a demo company selling wine. We use this to describe how the business-centric models specifying the modified business behaviour of our case study can be utilized for business performance analysis in a setting where most of the actions are performed by humans. In particular, we apply a refined version of Model-Driven Performance Engineering, which we proposed in previous work, and motivate which business domain specifics have to be taken into account for business performance analysis. We additionally motivate the need for performance-related decision support for domain experts, who generally lack performance-related skills. Such support should offer visual guidance on what to change in the design and resource mapping in order to obtain improved results with respect to modification constraints and performance or time objectives.