890 results for knowledge-based systems


Relevance: 80.00%

Abstract:

Flexible information exchange is critical to successful design-analysis integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this article we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. We then discuss how a shared mapping process that is flexible and user-friendly supports non-programmers in creating these custom connections. Adopting a service-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We then discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
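To make the mapping idea concrete, the following is a minimal Python sketch of the kind of user-authored, service-oriented connection the abstract describes: a designer-defined mapping from a design tool's output schema to an analysis tool's input schema, posted to a cloud-hosted analysis service. The field names and endpoint are hypothetical illustrations, not part of the proposed platform.

```python
# A minimal sketch, assuming a hypothetical JSON-over-HTTP analysis service;
# field names and the endpoint are illustrative, not part of the proposed platform.
import json
import urllib.request

def map_design_to_analysis(design_model: dict) -> dict:
    """Translate fields from a design tool's output schema into the schema an
    analysis tool expects; in the approach above, such mappings are authored
    by designers through a shared, user-friendly interface."""
    return {
        "geometry": design_model["wall_geometry"],              # illustrative field names
        "material": design_model["materials"]["wall"],
        "boundary_conditions": design_model.get("climate", "default"),
    }

def send_to_analysis(payload: dict, endpoint: str) -> bytes:
    """POST the mapped payload to a (hypothetical) cloud-hosted analysis service."""
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()
```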

Relevance: 80.00%

Abstract:

Flexible information exchange is critical to successful design integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this paper we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. Adopting a service-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.

Relevance: 80.00%

Abstract:

Changing environments present a number of challenges to mobile robots, one of the most significant being mapping and localisation. This problem is particularly significant in vision-based systems, where illumination and weather changes can cause feature-based techniques to fail. In many applications only sections of an environment undergo extreme perceptual change. Some range-based sensor mapping approaches exploit this property by combining occasional place recognition with the assumption that odometry is accurate over short periods of time. In this paper, we develop this idea in the visual domain, by using occasional vision-driven loop closures to infer loop closures in nearby locations where visual recognition is difficult due to extreme change. We demonstrate successful map creation in an environment in which change is significant but constrained to one area, where both the vanilla CAT-Graph and a Sum of Absolute Differences matcher fail, use the described techniques to link dissimilar images from matching locations, and test the robustness of the system against false inferences.
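As an illustration of the core idea, the sketch below (not the CAT-Graph implementation) infers loop closures from odometry between occasional confirmed visual matches, assuming simple integer node indices along two traverses of the same route.

```python
# A minimal sketch of odometry-based loop-closure inference; the node-index
# representation and gap threshold are illustrative simplifications.
def infer_loop_closures(visual_closures, max_gap=5):
    """Given confirmed visual loop closures as (current_node, previous_node)
    pairs, infer closures for intermediate nodes by assuming odometry is
    accurate over the short distance between two confirmed matches."""
    inferred = []
    closures = sorted(visual_closures)
    for (c0, p0), (c1, p1) in zip(closures, closures[1:]):
        # Only interpolate across short gaps where odometry drift is negligible.
        if 1 < c1 - c0 <= max_gap and (c1 - c0) == (p1 - p0):
            for step in range(1, c1 - c0):
                inferred.append((c0 + step, p0 + step))
    return inferred

# Example: visual matches at nodes 10->110 and 14->114 imply matches for nodes 11-13,
# even if those images look too dissimilar for direct visual recognition.
print(infer_loop_closures([(10, 110), (14, 114)]))
```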

Relevance: 80.00%

Abstract:

Dynamic capability theory asserts that the learning capabilities of construction organisations influence the degree to which value-for-money (VfM) is achieved on collaborative projects. However, little research has been conducted to verify this relationship, and the evidence is particularly limited within the empirical context of infrastructure delivery in Australia. Primarily drawing on the theoretical perspectives of the resource-based view of the firm (e.g. Barney 1991), dynamic capabilities (e.g. Helfat et al. 2007), absorptive capacity (e.g. Lane et al. 2006) and knowledge management (e.g. Nonaka 1994), this paper conceptualises learning capability as a knowledge-based dynamic capability. Learning capability builds on the micro-foundations of higher-order learning routines, which are deliberately developed by construction organisations for managing collaborative projects. Based on this conceptualisation of learning capability, an exploratory case study was conducted. The study investigated the operational and higher-order learning routines adopted by a project alliance team to successfully achieve VfM. The case study demonstrated that the learning routines of the alliance project were developed and modified by the continual joint learning activities of participant organisations. Project-level learning routines were found to significantly influence the development of organisational-level learning routines. In turn, the learning outcomes generated from the alliance project appeared to significantly influence the development of project management routines and contractual arrangements applied by the participant organisations in subsequent collaborative projects. The case study findings imply that the higher-order learning routines that underpin the learning capability of construction organisations have the potential to influence the VfM achieved on both current and future collaborative projects.

Relevance: 80.00%

Abstract:

Learning capability (LC) is a special dynamic capability that a firm purposefully builds to develop a cognitive focus, so as to enable the configuration and improvement of other capabilities (both dynamic and operational) to create and respond to market changes. Empirical evidence regarding the essential role of LC in leveraging operational manufacturing capabilities is, however, limited in the literature. This study takes a routine-based approach to understanding capability, and focuses on demonstrating the leveraging power of LC over two essential operational capabilities within the manufacturing context: operational new product development capability (ONPDC) and operational supplier integration capability (OSIC). A mixed-methods research framework was used, combining sources of evidence derived from a survey study and a multiple case study. This study identified high-level routines of LC that can be designed and controlled by managers and practitioners to reconfigure the underlying routines of ONPDC and OSIC and achieve superior performance in a turbulent environment. Hence, the study advances the notion of knowledge-based dynamic capabilities, such as LC, as routine bundles. It also provides an impetus for managing manufacturing operations from a capability-based perspective in the fast-changing knowledge era.

Relevance: 80.00%

Abstract:

Particulate matter research is essential because of the well-known significant adverse effects of aerosol particles on human health and the environment. In particular, identification of the origin or sources of particulate matter emissions is of paramount importance in assisting efforts to control and reduce air pollution in the atmosphere. This thesis aims to: identify the sources of particulate matter; compare pollution conditions at urban, rural and roadside receptor sites; combine information about the sources with meteorological conditions at the sites to locate the emission sources; compare sources based on particle size or mass; and ultimately, provide the basis for control and reduction of particulate matter concentrations in the atmosphere. To achieve these objectives, data were obtained from assorted local and international receptor sites over long sampling periods. The samples were analysed using Ion Beam Analysis and Scanning Mobility Particle Sizer methods to measure the particle mass with chemical composition and the particle size distribution, respectively. Advanced data analysis techniques were employed to derive information from large, complex data sets. Multi-Criteria Decision Making (MCDM), a ranking method, drew on data variability to examine the overall trends and provided the rank ordering of the sites and years in which sampling was conducted. Coupled with the receptor model Positive Matrix Factorisation (PMF), the pollution emission sources were identified and meaningful information pertinent to the prioritisation of control and reduction strategies was obtained. This thesis is presented in the thesis-by-publication format. It includes four refereed papers which together demonstrate a novel combination of data analysis techniques that enabled particulate matter sources to be identified and sampling sites/years to be ranked. The strength of this source identification process was corroborated when the analysis procedure was expanded to encompass multiple receptor sites. Initially applied to identify the contributing sources at roadside and suburban sites in Brisbane, the technique was subsequently applied to three receptor sites (roadside, urban and rural) located in Hong Kong. The comparable results from these international and national sites over several sampling periods indicated similarities in source contributions between receptor site-types, irrespective of global location, and suggested the need to apply these methods to air pollution investigations worldwide. Furthermore, an investigation into particle size distribution data was conducted to deduce the sources of aerosol emissions based on particle size and elemental composition. Considering the adverse effects on human health caused by small-sized particles, knowledge of particle size distribution and elemental composition provides a different perspective on the pollution problem. This thesis clearly illustrates that the application of an innovative combination of advanced data interpretation methods to identify particulate matter sources and rank sampling sites/years provides the basis for the prioritisation of future air pollution control measures. Moreover, this study contributes significantly to knowledge of the chemical composition of airborne particulate matter in Brisbane, Australia, and of the identity and plausible locations of the contributing sources. Such novel source apportionment and ranking procedures are ultimately applicable to environmental investigations worldwide.
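As a rough illustration of the receptor-modelling step, the sketch below factorises a samples-by-species concentration matrix into source contributions and source profiles. Scikit-learn's NMF is used here as a simplified stand-in for PMF (true PMF additionally weights every measurement by its uncertainty), and the data are synthetic, not the thesis datasets.

```python
# Simplified receptor-modelling illustration: non-negative factorisation of a
# concentration matrix, standing in for PMF; all data below are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_species, n_sources = 200, 12, 3

# Synthetic element-concentration matrix: samples x chemical species.
true_profiles = rng.random((n_sources, n_species))        # source fingerprints
true_contributions = rng.random((n_samples, n_sources))   # source strengths per sample
X = true_contributions @ true_profiles + 0.01 * rng.random((n_samples, n_species))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
contributions = model.fit_transform(X)   # G: how much each source contributes per sample
profiles = model.components_             # F: chemical fingerprint of each source

print(contributions.shape, profiles.shape)   # (200, 3) (3, 12)
```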

Relevance: 80.00%

Abstract:

We introduce the use of Ingenuity Pathway Analysis to analyze global metabonomics data in order to characterize the phenotypic biochemical perturbations and potential mechanisms of gentamicin-induced toxicity in multiple organs. A single dose of gentamicin was administered to Sprague Dawley rats (200 mg/kg, n = 6) and urine samples were collected at -24-0 h pre-dosage and 0-24, 24-48, 48-72 and 72-96 h post-dosage of gentamicin. The urine metabonomics analysis was performed by UPLC/MS, and the mass spectra signals of the detected metabolites were systematically deconvoluted and analyzed by pattern recognition analyses (heatmap, PCA and PLS-DA), revealing a time-dependency of the biochemical perturbations induced by gentamicin toxicity. As a result, the holistic metabolome change induced by gentamicin toxicity in the animals was characterized. Several metabolites involved in amino acid metabolism were identified in urine, and it was confirmed that gentamicin biochemical perturbations can be foreseen from these biomarkers. Notably, it was found that gentamicin induced toxicity in multiple organ systems in the laboratory rats. The knowledge-based Ingenuity Pathway Analysis revealed gentamicin-induced liver and heart toxicity, along with the previously known toxicity in the kidney. The metabolites creatine, nicotinic acid, prostaglandin E2 and cholic acid were identified and validated as phenotypic biomarkers of gentamicin-induced toxicity. Altogether, the significance of metabonomics analyses in the assessment of drug toxicity is highlighted once more; furthermore, this work demonstrates the powerful predictive potential of Ingenuity Pathway Analysis for the study of drug toxicity and its value as a complement to metabonomics-based assessment of drug toxicity.
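The pattern-recognition step can be illustrated with a short sketch: PCA applied to a metabolite-intensity matrix to expose the separation between pre-dose and post-dose samples. The data, group sizes and effect size below are synthetic stand-ins, not the study's UPLC/MS measurements.

```python
# Minimal sketch of the PCA step on a (synthetic) metabolite-intensity matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Rows: urine samples at pre-dose and post-dose windows; columns: metabolite peaks.
pre_dose = rng.normal(0.0, 1.0, size=(6, 50))
post_dose = rng.normal(1.5, 1.0, size=(24, 50))   # shifted to mimic a toxicity effect
X = np.vstack([pre_dose, post_dose])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Score plots of this kind reveal the time-dependent drift of the metabolic
# profile away from the pre-dose cluster after dosing.
print(scores[:3])
```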

Relevance: 80.00%

Abstract:

Policy makers increasingly recognise that an educated workforce with a high proportion of Science, Technology, Engineering and Mathematics (STEM) graduates is a pre-requisite to a knowledge-based, innovative economy. Over the past ten years, the proportion of first university degrees awarded in Australia in STEM fields has been below the global average, decreasing from 22.2% in 2002 to 18.8% in 2010 [1]. These trends are mirrored by declines of between 20% and 30% in the proportions of high school students enrolled in science or maths. These trends are not unique to Australia, but their impact is of concern throughout the policy-making community. To redress these demographic trends, QUT embarked upon a long-term investment strategy to integrate education and research into the physical and virtual infrastructure of the campus, recognising that expectations of students change as rapidly as technology and learning practices change. To implement this strategy, physical infrastructure refurbishment/re-building is accompanied by upgraded technologies not only for learning but also for research. QUT’s vision for its city-based campuses is to create vibrant and attractive places to learn and research and to link strongly to the wider surrounding community. Over a five-year period, physical infrastructure at the Gardens Point campus was substantially reconfigured in two key stages: (a) a >$50m refurbishment of heritage-listed buildings to encompass public, retail and social spaces, learning and teaching “test beds” and research laboratories; and (b) demolition of five buildings to be replaced by a $230m, >40,000m2 Science and Engineering Centre designed to accommodate retail, recreation, services, education and research in an integrated, coordinated precinct. This landmark project is characterised by (i) self-evident, collaborative spaces for learning, research and social engagement, (ii) sustainable building practices and sustainable ongoing operation, and (iii) dynamic and mobile re-configuration of spaces or staffing to meet demand. Innovative spaces allow for transformative, cohort-driven learning and the collaborative use of space to prosecute joint class projects. Research laboratories are aggregated, centralised and “on display” to the public, students and staff. A major visualisation space – the largest multi-touch, multi-user facility constructed to date – is a centrepiece feature that focuses on demonstrating scientific and engineering principles or science-oriented scenes at large scale (e.g. the Great Barrier Reef). Content on this visualisation facility is integrated with the regional school curricula and supports an in-house schools program for student and teacher engagement. Researchers are accommodated in a combined open-plan and office floor-space (80% open plan) to encourage interdisciplinary engagement and cross-fertilisation of skills, ideas and projects. This combination of spaces re-invigorates the on-campus experience, extends educational engagement across all ages and rapidly enhances research collaboration.

Relevance: 80.00%

Abstract:

Recently, vision-based systems have been deployed in professional sports to track the ball and players to enhance analysis of matches. Due to their unobtrusive nature, vision-based approaches are preferred to wearable sensors (e.g. GPS or RFID sensors) as they do not require players or balls to be instrumented prior to matches. Unfortunately, in continuous team sports where players need to be tracked continuously over long periods of time (e.g. 35 minutes in field hockey or 45 minutes in soccer), current vision-based tracking approaches are not reliable enough to provide fully automatic solutions. As such, human intervention is required to fix up missed or false detections. However, in instances where a human cannot intervene due to the sheer amount of data being generated, the data cannot be used because of the missing and noisy detections. In this paper, we investigate two representations based on raw player detections (and not tracking) which are immune to missed and false detections. Specifically, we show that both team occupancy maps and centroids can be used to detect team activities, while the occupancy maps can also be used to retrieve specific team activities. An evaluation on over 8 hours of field hockey data captured at a recent international tournament demonstrates the validity of the proposed approach.
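A minimal sketch of the two detection-based representations is given below: a coarse occupancy grid and a team centroid computed from raw per-frame player detections. Pitch dimensions, grid resolution and the example detections are illustrative assumptions, not the paper's exact parameters.

```python
# Minimal sketch of occupancy-map and centroid features from raw detections;
# pitch size, grid resolution and example positions are illustrative only.
import numpy as np

PITCH_LENGTH, PITCH_WIDTH = 91.4, 55.0   # field-hockey pitch in metres
GRID_X, GRID_Y = 10, 6                   # coarse occupancy grid

def occupancy_map(detections):
    """detections: list of (x, y) player positions for one team in one frame.
    Returns a GRID_X x GRID_Y histogram, tolerant of a few missed/false detections."""
    grid = np.zeros((GRID_X, GRID_Y))
    for x, y in detections:
        i = min(int(x / PITCH_LENGTH * GRID_X), GRID_X - 1)
        j = min(int(y / PITCH_WIDTH * GRID_Y), GRID_Y - 1)
        grid[i, j] += 1
    return grid / max(len(detections), 1)   # normalise so the map sums to 1

def team_centroid(detections):
    """Mean (x, y) of all detections: a very compact cue for team activity."""
    return np.mean(np.asarray(detections, dtype=float), axis=0)

frame = [(30.1, 20.5), (45.0, 27.0), (60.2, 15.3)]
print(occupancy_map(frame).sum(), team_centroid(frame))
```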

Relevance: 80.00%

Abstract:

In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) depends both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation instead of one based on player "identity" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, applied our method to approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared the results to manually labelled data.
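The following sketch illustrates the role-based idea: per-frame detections are permuted into a fixed role order by solving an assignment problem against a formation template, yielding an identity-free, fixed-length vector suited to temporal analysis. The template positions and the use of the Hungarian algorithm here are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a "role-based" ordering of detections via assignment to a
# formation template; the template and example detections are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Mean pitch position of each role (e.g. learned from training data).
role_template = np.array([[10.0, 27.0],   # goalkeeper
                          [30.0, 15.0],   # left back
                          [30.0, 40.0],   # right back
                          [55.0, 27.0]])  # centre forward

def role_ordered(detections):
    """Permute one frame's detections into role order by solving the
    assignment problem between detections and the formation template."""
    cost = cdist(role_template, np.asarray(detections, dtype=float))  # roles x detections
    role_idx, det_idx = linear_sum_assignment(cost)
    ordered = np.zeros_like(role_template)
    ordered[role_idx] = np.asarray(detections, dtype=float)[det_idx]
    return ordered.ravel()   # fixed-length, identity-free feature vector

frame_detections = [[31.0, 14.0], [9.0, 26.5], [54.0, 28.0], [29.5, 41.0]]
print(role_ordered(frame_detections))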

Relevance: 80.00%

Abstract:

Building knowledge economies seems synonymous with re-imaging urban fabrics. Cities producing vibrant public realms are believed to have better success in distinguishing themselves within a highly competitive market. Many governments are investing heavily in cultural enhancements to grow distinctive cosmopolitan centers, in which public art is emerging as a significant stakeholder. Brisbane’s goal to grow a knowledge-based economy similarly addresses public art. To stimulate engagement with public art, Brisbane City Council has delivered an online public art catalogue and assembled three public art trails, with a fourth newly augmented. While many pieces along these trails are obviously public, others call the term ‘public’ into question through an obscured milieu in which a ‘look but don’t touch’ policy is subtly implied. This study investigates the interactional relationship between publics and public art and, in doing so, explores the concept of accessibility. This paper recommends that installations of sculpture within an emerging city should be considered in terms of economic output, measured through the degree to which the public engages.

Relevance: 80.00%

Abstract:

Objectives: Experiential knowledge of elite athletes and coaches was investigated to reveal insights on expertise acquisition in cricket fast bowling. Design: Twenty-one past or present elite cricket fast bowlers and coaches of national or international level were interviewed using an in-depth, open-ended, semi-structured approach. Methods: Participants were asked about specific factors which they believed were markers of fast bowling expertise potential. Of specific interest was the relative importance of each potential component of fast bowling expertise and how components interacted or developed over time. Results: The importance of intrinsic motivation early in development was highlighted, along with physical, psychological and technical attributes. Results supported a multiplicative and interactive complex systems model of talent development in fast bowling, in which component weightings were varied due to individual differences in potential experts. Dropout rates in potential experts were attributed to misconceived current talent identification programmes and coaching practices, early maturation and physical attributes, injuries and lack of key psychological attributes and skills. Conclusions: Data are consistent with a dynamical systems model of expertise acquisition in fast bowling, with numerous trajectories available for talent development. Further work is needed to relate experiential and theoretical knowledge on expertise in other sports.

Relevance: 80.00%

Abstract:

A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach to mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot's path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. To avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
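A minimal sketch of the estimation idea is shown below: particles are propagated by noisy dead reckoning and re-weighted whenever a vision-driven loop closure anchors the robot to a known location on the previously mapped path. The one-dimensional path coordinate and the noise parameters are illustrative simplifications, not the paper's implementation.

```python
# Minimal particle-filter sketch: dead-reckoning prediction plus re-weighting
# at occasional visual loop closures; 1-D path coordinate is illustrative.
import numpy as np

rng = np.random.default_rng(2)
N = 500
particles = np.zeros(N)          # position along the previously mapped path

def predict(particles, odometry_step, noise=0.05):
    """Dead-reckoning update: move every particle by the odometry step plus noise."""
    return particles + odometry_step + rng.normal(0.0, noise, size=particles.shape)

def update(particles, closure_position, sigma=0.5):
    """Visual loop closure: re-weight particles by proximity to the matched
    location, then resample."""
    weights = np.exp(-0.5 * ((particles - closure_position) / sigma) ** 2)
    weights /= weights.sum()
    return rng.choice(particles, size=particles.size, p=weights)

for step in range(20):
    particles = predict(particles, odometry_step=1.0)
    if step == 9:                               # occasional vision-driven closure
        particles = update(particles, closure_position=10.0)

estimate = particles.mean()     # most likely position between visual matches
print(round(estimate, 2))
```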

Relevance: 80.00%

Abstract:

To ensure the safe operation of Web-based systems in Web environments, we propose an SSPA (Server-based SHA-1 Page-digest Algorithm) to verify the integrity of Web content before the server issues an HTTP response to a user request. In addition to standard security measures, our Java implementation of the SSPA, called the Dynamic Security Surveillance Agent (DSSA), provides further security in terms of content integrity for Web-based systems. Its function is to prevent the display of Web content that has been altered through the malicious acts of attackers and intruders on client machines. This is to protect the reputation of organisations from cyber-attacks and to ensure the safe operation of Web systems by dynamically monitoring the integrity of a Web site's content on demand. We discuss our findings in terms of the applicability and practicality of the proposed system. We also discuss its time metrics, specifically its computational overhead at the Web server, as well as the overall latency from the clients' point of view, using different Internet access methods. The SSPA, our DSSA implementation, some experimental results and related work are all discussed.
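A minimal sketch of the server-side check the SSPA describes is shown below, in Python rather than the Java of the DSSA: the page about to be served is hashed with SHA-1 and compared against a trusted reference digest before the HTTP response is issued. The digest store and file paths are hypothetical, not part of the described implementation.

```python
# Minimal sketch of a pre-response SHA-1 integrity check; paths and the
# digest store are hypothetical illustrations, not the DSSA implementation.
import hashlib
import hmac

def page_digest(content: bytes) -> str:
    """SHA-1 digest of the page body, hex-encoded."""
    return hashlib.sha1(content).hexdigest()

def verify_before_response(path: str, trusted_digests: dict) -> bytes:
    """Return the page only if its current digest matches the one recorded at
    publish time; otherwise refuse to serve possibly tampered content."""
    with open(path, "rb") as f:
        content = f.read()
    if not hmac.compare_digest(page_digest(content), trusted_digests[path]):
        raise RuntimeError(f"Integrity check failed for {path}; response withheld")
    return content

# Example usage (digest value recorded when the content was published):
# trusted = {"/var/www/index.html": "<digest recorded at publish time>"}
# body = verify_before_response("/var/www/index.html", trusted)
```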

Relevance: 80.00%

Abstract:

Background: Paramedic education has evolved in recent times from vocational post-employment training to tertiary pre-employment education supplemented by clinical placement. Simulation is advocated as a means of transferring learned skills to clinical practice. Sole reliance on simulation learning using mannequin-based models may not be sufficient to prepare students for variance in human anatomy. In 2012, we trialled the use of fresh frozen human cadavers to supplement undergraduate paramedic procedural skill training. The purpose of this study is to evaluate whether cadaveric training is an effective adjunct to mannequin simulation and clinical placement. Methods: A multi-method approach was adopted. The first step involved a Delphi methodology to formulate and validate the evaluation instrument. The instrument comprised knowledge-based MCQs, Likert scales for self-evaluation of procedural skills and behaviours, and open-answer items. The second step involved a pre-post evaluation of the 2013 cadaveric training. Results: One hundred and fourteen students attended the workshop and 96 evaluations were included in the analysis, representing a return rate of 84%. There was a statistically significant improvement in anatomical knowledge after the workshop. Students' self-rated confidence in performing procedural skills on real patients improved significantly after the workshop: inserting laryngeal mask (MD 0.667), oropharyngeal (MD 0.198) and nasopharyngeal (MD 0.600) airways, performing bag-valve-mask ventilation (MD 0.379), double (MD 0.344) and triple (MD 0.326) airway manoeuvres, performing 12-lead electrocardiography (MD 0.729), using a McGrath(R) laryngoscope (MD 0.726), using McGrath(R) forceps to remove a foreign body (MD 0.632), attempting thoracocentesis (MD 1.240), and applying a traction splint (MD 0.865). The students commented that the workshop provided context to their theoretical knowledge and that they gained an appreciation of the differences in normal tissue variation. Following completion of the workshop, students were more aware of their own clinical and non-clinical competencies. Conclusions: The paramedic profession has evolved beyond patient transport with minimal intervention to providing comprehensive emergency and non-emergency medical care. With the limited availability of clinical placements for undergraduate paramedic training, there is an increasing demand on universities to provide suitable alternatives. Our findings suggest that cadaveric training using fresh frozen cadavers provides an effective adjunct to simulated learning and clinical placements.