833 results for Knowledge based system
Abstract:
Particulate matter research is essential because of the well-known significant adverse effects of aerosol particles on human health and the environment. In particular, identification of the origin or sources of particulate matter emissions is of paramount importance in assisting efforts to control and reduce air pollution in the atmosphere. This thesis aims to: identify the sources of particulate matter; compare pollution conditions at urban, rural and roadside receptor sites; combine information about the sources with meteorological conditions at the sites to locate the emission sources; compare sources based on particle size or mass; and ultimately, provide the basis for control and reduction of particulate matter concentrations in the atmosphere. To achieve these objectives, data were obtained from assorted local and international receptor sites over long sampling periods. The samples were analysed using Ion Beam Analysis and Scanning Mobility Particle Sizer methods to measure the particle mass with chemical composition and the particle size distribution, respectively. Advanced data analysis techniques were employed to derive information from large, complex data sets. Multi-Criteria Decision Making (MCDM), a ranking method, drew on data variability to examine the overall trends and provided a rank ordering of the sites and the years in which sampling was conducted. Coupled with the receptor model Positive Matrix Factorisation (PMF), the pollution emission sources were identified and meaningful information pertinent to the prioritisation of control and reduction strategies was obtained. This thesis is presented in the thesis-by-publication format. It includes four refereed papers which together demonstrate a novel combination of data analysis techniques that enabled particulate matter sources to be identified and sampling sites/years to be ranked.
The strength of this source identification process was corroborated when the analysis procedure was expanded to encompass multiple receptor sites. Initially applied to identify the contributing sources at roadside and suburban sites in Brisbane, the technique was subsequently applied to three receptor sites (roadside, urban and rural) located in Hong Kong. The comparable results from these international and national sites over several sampling periods indicated similarities in source contributions between receptor site-types, irrespective of global location, and suggested the need to apply these methods to air pollution investigations worldwide. Furthermore, an investigation into particle size distribution data was conducted to deduce the sources of aerosol emissions based on particle size and elemental composition. Considering the adverse effects on human health caused by small-sized particles, knowledge of particle size distributions and their elemental composition provides a different perspective on the pollution problem. This thesis clearly illustrates that the application of an innovative combination of advanced data interpretation methods to identify particulate matter sources and rank sampling sites/years provides the basis for the prioritisation of future air pollution control measures. Moreover, this study contributes significantly to knowledge of the chemical composition of airborne particulate matter in Brisbane, Australia, and of the identity and plausible locations of the contributing sources. Such novel source apportionment and ranking procedures are ultimately applicable to environmental investigations worldwide.
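At its core, PMF factorises the sample-by-species data matrix into non-negative source contributions and source profiles. The sketch below is a simplification: it omits PMF's per-data-point uncertainty weighting and uses scikit-learn's unweighted NMF as a stand-in, on synthetic data whose sizes and component count are illustrative assumptions rather than values from the thesis.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic "receptor" data: 100 samples x 8 chemical species,
# generated from 3 hidden emission sources (all values non-negative).
rng = np.random.default_rng(0)
true_G = rng.random((100, 3))            # source contributions per sample
true_F = rng.random((3, 8))              # source chemical profiles
X = true_G @ true_F + 0.01 * rng.random((100, 8))

# Factorise X ~ G @ F under non-negativity constraints.
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)               # estimated source contributions
F = model.components_                    # estimated source profiles
```

In a real receptor-modelling study the recovered profiles in F would be matched against known source signatures (e.g. sea salt, vehicle exhaust) to label each factor.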
Abstract:
Higher-order thinking has featured persistently in the reform agenda for science education. The intended curriculum in various countries sets out aspirational statements for the levels of higher-order thinking to be attained by students. This study reports the extent to which chemistry examinations from four Australian states align with, and facilitate, the intended higher-order thinking skills stipulated in curriculum documents. Through content analysis, the curriculum goals were identified for each state and compared to the nature of question items in the corresponding examinations. Categories of higher-order thinking were adapted from the OECD’s PISA Science test to analyze question items. There was considerable variation in the extent to which the examinations from the states supported the curriculum intent of developing and assessing higher-order thinking. Generally, examinations that used a marks-based system tended to emphasize lower-order thinking, with a greater share of marks allocated to lower-order thinking questions. Criterion-referenced examinations tended to award greater credit for higher-order thinking questions. The level of complexity of the chemistry content was another factor that limited the extent to which examination questions supported higher-order thinking. Implications of these findings are drawn for the authorities responsible for designing curriculum and assessment procedures and for teachers.
Abstract:
Purpose: The precise shape of the three-dimensional dose distributions created by intensity-modulated radiotherapy means that the verification of patient position and setup is crucial to the outcome of the treatment. In this paper, we investigate and compare the use of two different image calibration procedures that allow extraction of patient anatomy from measured electronic portal images of intensity-modulated treatment beams. Methods and Materials: Electronic portal images of the intensity-modulated treatment beam delivered using the dynamic multileaf collimator technique were acquired. The images were formed by measuring a series of frames or segments throughout the delivery of the beams. The frames were then summed to produce an integrated portal image of the delivered beam. Two different methods for calibrating the integrated image were investigated with the aim of removing the intensity modulations of the beam. The first involved a simple point-by-point division of the integrated image by a single calibration image of the intensity-modulated beam delivered to a homogeneous polymethyl methacrylate (PMMA) phantom. The second calibration method is known as the quadratic calibration method and required a series of calibration images of the intensity-modulated beam delivered to different thicknesses of homogeneous PMMA blocks. Measurements were made using two different detector systems: a Varian amorphous silicon flat-panel imager and a Theraview camera-based system. The methods were tested first using a contrast phantom before images were acquired of intensity-modulated radiotherapy treatment delivered to the prostate and pelvic nodes of cancer patients at the Royal Marsden Hospital. Results: The results indicate that the calibration methods can be used to remove the intensity modulations of the beam, making it possible to see the outlines of bony anatomy that could be used for patient position verification. 
This was shown for both posterior and lateral delivered fields. Conclusions: Very little difference between the two calibration methods was observed, so the simpler division method, requiring only the single extra calibration measurement and much simpler computation, was the favored method. This new method could provide a complementary tool to existing position verification methods, and it has the advantage that it is completely passive, requiring no further dose to the patient and using only the treatment fields.
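The favoured point-by-point division method can be sketched in a few lines: the integrated portal image is the beam modulation multiplied by the patient's transmission, so dividing by a phantom calibration image that carries the same modulation leaves the anatomy signal. The arrays and function name below are illustrative assumptions, not the clinical processing code.

```python
import numpy as np

def division_calibrate(integrated, calibration, eps=1e-6):
    """Remove the beam's intensity modulation by dividing the integrated
    portal image, point by point, by a calibration image of the same
    modulated beam delivered to a homogeneous phantom."""
    return integrated / np.maximum(calibration, eps)

# Toy 2x2 images: the measured image is the beam modulation
# multiplied by the patient's transmission (the anatomy signal).
modulation = np.array([[1.0, 0.5], [0.25, 1.0]])
anatomy = np.array([[0.8, 0.8], [0.6, 0.6]])
integrated = modulation * anatomy
calibration = modulation          # phantom image carries only the modulation
recovered = division_calibrate(integrated, calibration)
```

The `eps` floor guards against division by near-zero pixels where the calibration beam delivered little dose.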
Abstract:
Maritime security has emerged as a critical legal and political issue in the contemporary world. Terrorism in the maritime domain is a major maritime security issue. Ten of the 44 major terrorist groups of the world, as identified in the US Department of State’s Country Reports on Terrorism, have maritime terrorism capabilities. Prosecution of maritime terrorists is a politically and legally difficult issue, which may create conflicts of jurisdiction. Prosecution of alleged maritime terrorists is carried out by national courts; there is no international judicial institution for the prosecution of maritime terrorists. International law has therefore anticipated a vital role for national courts in this respect. The international legal framework for combating maritime terrorism has been elaborately examined in the existing literature; therefore, this paper will only highlight the issues regarding the prosecution of maritime terrorists. This paper argues that, despite a comprehensive international legal framework for the prosecution of maritime terrorists, there is still some scope for conflicts of jurisdiction, particularly where two or more States are interested in prosecuting the same offender. This existing legal problem has been further aggravated in the post-September 11 era. Due to the political and security implications, States may show reluctance in ensuring the international law safeguards of alleged perpetrators in the arrest, detention and prosecution process. Nevertheless, international law has established a comprehensive system for the prosecution of maritime terrorists in which national courts are the main forum for ensuring the international law safeguards of alleged perpetrators as well as the effective prosecution of maritime terrorists, thereby playing an instrumental role in establishing a rule-based system for combating maritime terrorism.
Using two case studies, this paper shows that the role of national courts has become more important in the present era because there may be situations where no State is interested in initiating proceedings in international forums to vindicate the rights of an alleged offender, even where there is clear evidence of a violation of international human rights law in the arrest, detention and prosecution process. This paper shows that, despite some bottlenecks, national courts are actively playing this critical role. Overall, this paper highlights the instrumental role of national courts in the international legal order.
Abstract:
Policy makers increasingly recognise that an educated workforce with a high proportion of Science, Technology, Engineering and Mathematics (STEM) graduates is a pre-requisite to a knowledge-based, innovative economy. Over the past ten years, the proportion of first university degrees awarded in Australia in STEM fields has been below the global average, decreasing from 22.2% in 2002 to 18.8% in 2010 [1]. These trends are mirrored by declines of between 20% and 30% in the proportions of high school students enrolled in science or maths. These trends are not unique to Australia, but their impact is of concern throughout the policy-making community. To redress these demographic trends, QUT embarked upon a long-term investment strategy to integrate education and research into the physical and virtual infrastructure of the campus, recognising that the expectations of students change as rapidly as technology and learning practices change. To implement this strategy, physical infrastructure refurbishment/re-building is accompanied by upgraded technologies not only for learning but also for research. QUT’s vision for its city-based campuses is to create vibrant and attractive places to learn and research and to link strongly to the wider surrounding community. Over a five-year period, physical infrastructure at the Gardens Point campus was substantially reconfigured in two key stages: (a) a >$50m refurbishment of heritage-listed buildings to encompass public, retail and social spaces, learning and teaching “test beds” and research laboratories and (b) demolition of five buildings to be replaced by a $230m, >40,000m2 Science and Engineering Centre designed to accommodate retail, recreation, services, education and research in an integrated, coordinated precinct.
This landmark project is characterised by (i) self-evident, collaborative spaces for learning, research and social engagement; (ii) sustainable building practices and sustainable ongoing operation; and (iii) dynamic and mobile re-configuration of spaces or staffing to meet demand. Innovative spaces allow for transformative, cohort-driven learning and the collaborative use of space to undertake joint class projects. Research laboratories are aggregated, centralised and “on display” to the public, students and staff. A major visualisation space – the largest multi-touch, multi-user facility constructed to date – is a centrepiece feature that focuses on demonstrating scientific and engineering principles or science-oriented scenes at large scale (e.g. the Great Barrier Reef). Content on this visualisation facility is integrated with the regional school curricula and supports an in-house schools program for student and teacher engagement. Researchers are accommodated in combined open-plan and office floor-space (80% open plan) to encourage interdisciplinary engagement and cross-fertilisation of skills, ideas and projects. This combination of spaces re-invigorates the on-campus experience, extends educational engagement across all ages and rapidly enhances research collaboration.
Abstract:
Building knowledge economies seems synonymous with re-imaging urban fabrics. Cities that produce vibrant public realms are believed to have better success in distinguishing themselves within a highly competitive market. Many governments are investing heavily in cultural enhancements to build distinctive cosmopolitan centers, in which public art is emerging as a significant component. Brisbane’s goal to grow a knowledge-based economy similarly addresses public art. To stimulate engagement with public art, Brisbane City Council has delivered an online public art catalogue and assembled three public art trails, with a fourth newly augmented. While many pieces along these trails are obviously public, others question the term ‘public’ through an obscured milieu where a ‘look but don’t touch’ policy is subtly implied. This study investigates the interactional relationship between publics and public art and, in doing so, explores the concept of accessibility. This paper recommends that installations of sculpture within an emerging city should be considered in terms of economic output, measured through the degree to which the public engages with them.
Abstract:
Modern international shipping is largely a flag state-based system. Only the flag state has complete authority over the vessels that fly its flag, and as a result, other states’ jurisdiction over these vessels is very limited. Against this backdrop, this article examines the flag state’s responsibility for maritime terrorism, a major security issue and vulnerability in the global supply chain. It is not an exaggeration that the global community’s repeated statements regarding the illegality of terrorism have created a customary international law obligation for states to take all possible steps for the prevention of terrorism. This article argues that providing flags to suspicious entities in an obscure registration system is not compatible with this obligation.
Abstract:
The fastest-growing segment of jobs in the creative sector is in those firms that provide creative services to other sectors (Hearn, Goldsmith, Bridgstock and Rodgers 2014, this volume; Cunningham 2014, this volume). There are also a large number of Creative Services (Architecture and Design, Advertising and Marketing, Software and Digital Content occupations) workers embedded in organizations in other industry sectors (Cunningham and Higgs 2009). Ben Goldsmith (2014, this volume) shows, for example, that the Financial Services sector is the largest employer of digital creative talent in Australia. But why should this be? We argue it is because ‘knowledge-based intangibles are increasingly the source of value creation and hence of sustainable competitive advantage’ (Mudambi 2008, 186). This value creation occurs primarily at the research and development (R and D) and the marketing ends of the supply chain. Both of these areas require strong creative capabilities in order to design for, and to persuade, consumers. It is no surprise that Jess Rodgers (2014, this volume), in a study of Australia’s Manufacturing sector, found designers and advertising and marketing occupations to be the most numerous creative occupations. Greg Hearn and Ruth Bridgstock (2013, forthcoming) suggest ‘the creative heart of the creative economy […] is the social and organisational routines that manage the generation of cultural novelty, both tacit and codified, internal and external, and [cultural novelty’s] combination with other knowledges […] produce and capture value’. Moreover, the main “social and organisational routine” is usually a team (for example, Grabher 2002; 2004).
Abstract:
The operation of the law rests on the selection of an account of the facts. Whether this involves prediction or postdiction, it is not possible to achieve certainty. Any attempt to model the operation of the law completely will therefore raise questions of how to model the process of proof. In the selection of a model a crucial question will be whether the model is to be used normatively or descriptively. Focussing on postdiction, this paper presents and contrasts the mathematical model with the story model. The former carries the normative stamp of scientific approval, whereas the latter has been developed by experimental psychologists to describe how humans reason. Neil Cohen's attempt to use a mathematical model descriptively provides an illustration of the dangers in not clearly setting this parameter of the modelling process. It should be kept in mind that the labels 'normative' and 'descriptive' are not eternal. The mathematical model has its normative limits, beyond which we may need to critically assess models with descriptive origins.
Abstract:
Background: Paramedic education has evolved in recent times from vocational post-employment training to tertiary pre-employment education supplemented by clinical placement. Simulation is advocated as a means of transferring learned skills to clinical practice. Sole reliance on simulation learning using mannequin-based models may not be sufficient to prepare students for variance in human anatomy. In 2012, we trialled the use of fresh frozen human cadavers to supplement undergraduate paramedic procedural skill training. The purpose of this study is to evaluate whether cadaveric training is an effective adjunct to mannequin simulation and clinical placement. Methods: A multi-method approach was adopted. The first step involved a Delphi methodology to formulate and validate the evaluation instrument. The instrument comprised knowledge-based MCQs, Likert scales for self-evaluation of procedural skills and behaviours, and open-answer items. The second step involved a pre-post evaluation of the 2013 cadaveric training. Results: One hundred and fourteen students attended the workshop and 96 evaluations were included in the analysis, representing a return rate of 84%. There was statistically significant improvement in anatomical knowledge after the workshop. Students' self-rated confidence in performing procedural skills on real patients improved significantly after the workshop: inserting laryngeal mask (MD 0.667), oropharyngeal (MD 0.198) and nasopharyngeal (MD 0.600) airways, performing bag-valve-mask ventilation (MD 0.379), double (MD 0.344) and triple (MD 0.326) airway manoeuvres, performing 12-lead electrocardiography (MD 0.729), using a McGrath(R) laryngoscope (MD 0.726), using McGrath(R) forceps to remove a foreign body (MD 0.632), attempting thoracocentesis (MD 1.240), and applying a traction splint (MD 0.865). The students commented that the workshop provided context to their theoretical knowledge and that they gained an appreciation of the differences in normal tissue variation.
Following completion of the workshop, students were more aware of their own clinical and non-clinical competencies. Conclusions: The paramedic profession has evolved beyond patient transport with minimal intervention to providing comprehensive emergency and non-emergency medical care. With limited availability of clinical placements for undergraduate paramedic training, there is an increasing demand on universities to provide suitable alternatives. Our findings suggest that cadaveric training using fresh frozen cadavers provides an effective adjunct to simulated learning and clinical placements.
Abstract:
In elite sports, nearly all performances are captured on video. Despite the massive amount of video that has been captured in this domain over the last 10-15 years, most of it remains in an 'unstructured' or 'raw' form, meaning it can only be viewed or manually annotated/tagged with higher-level event labels, which is time-consuming and subjective. As such, depending on the detail or depth of annotation, the value of the collected repositories of archived data is minimal, as it does not lend itself to large-scale analysis and retrieval. One such example is swimming, where each race of a swimmer is captured on a camcorder and, in addition to the split times (i.e., the time it takes for each lap), stroke rates and stroke lengths are manually annotated. In this paper, we propose a vision-based system which effectively 'digitizes' a large collection of archived swimming races by estimating the location of the swimmer in each frame, as well as detecting the stroke rate. As the videos are captured from moving hand-held cameras located at different positions and angles, we show that our hierarchical approach to tracking the swimmer and their different parts is robust to these issues and allows us to accurately estimate the swimmer's location and stroke rates.
Abstract:
In this paper, conditional hidden Markov model (HMM) filters and conditional Kalman filters (KFs) are coupled together to improve demodulation of differentially encoded signals in noisy fading channels. We present an indicator matrix representation for differentially encoded signals and the optimal HMM filter for demodulation. The filter requires O(N³) calculations per time iteration, where N is the number of message symbols. Decision feedback equalisation is investigated by coupling the optimal HMM filter for estimating the message, conditioned on estimates of the channel parameters, with a KF for estimating the channel states, conditioned on soft-information message estimates. The particular differential encoding scheme examined in this paper is differential phase shift keying. However, the techniques developed can be extended to other forms of differential modulation. The channel model we use allows for multiplicative channel distortions and additive white Gaussian noise. Simulation studies are also presented.
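The HMM filter underlying such demodulators is a recursive predict-correct update over the discrete symbol states. The sketch below shows one step of a generic discrete HMM (forward) filter, not the paper's coupled HMM/KF scheme; the two-symbol transition matrix and observation likelihoods are illustrative assumptions.

```python
import numpy as np

def hmm_filter_step(prior, A, likelihood):
    """One recursion of a discrete HMM filter: propagate the prior
    through the state transition matrix A (A[i, j] = P(x_next=j | x=i)),
    weight by the observation likelihood, then normalise."""
    predicted = A.T @ prior            # prediction through the Markov chain
    posterior = predicted * likelihood # correction by the observation
    return posterior / posterior.sum()

# Two-symbol example: sticky transitions, observation favouring symbol 1.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
prior = np.array([0.5, 0.5])
likelihood = np.array([0.2, 0.8])      # P(observation | symbol)
posterior = hmm_filter_step(prior, A, likelihood)
```

Running this recursion over every received sample, with N symbol states and likelihoods from the channel estimate, yields the filtered symbol probabilities used for demodulation decisions.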
Abstract:
The Semantic Web offers many possibilities for future Web technologies. There is therefore a need to search for ways to bring the huge number of unstructured documents on the current Web to the Semantic Web automatically. One big challenge in searching for such ways is how patterns can be understood by both humans and machines. To address this issue, we present an innovative model which interprets patterns as high-level concepts. These concepts can explain the patterns' meanings in a human-understandable way while improving information filtering performance. The model is evaluated by comparing it against a state-of-the-art benchmark model using the standard Reuters dataset. The results show that the proposed model is successful. The significance of this model is threefold: it gives a way to interpret text mining output; it provides a technique to find concepts relevant to the whole set of patterns, which is an essential feature for understanding the topic; and, to some extent, it overcomes the information mismatch and overload problems of existing models. This model will be very useful for knowledge-based applications.
Abstract:
Objective: This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, (1) the contribution of active learning in reducing the annotation effort and (2) the robustness of an incremental active learning framework across different selection criteria and datasets are determined. Materials and methods: The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields was used as the supervised method, with least confidence and information density as the two selection criteria for the active learning framework. The effect of incremental learning vs. standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Results: The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled. Discussion: Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction in annotation effort is always above the random sampling and longest-sequence baselines. Conclusion: Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation.
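The least-confidence criterion named among the selection strategies can be sketched as follows: query the unlabelled samples whose most-likely prediction the model is least sure about. This is a simplified stand-in under stated assumptions; the probability matrix is invented for illustration, and in the paper's CRF setting confidence would be scored over whole sequences rather than independent samples.

```python
import numpy as np

def least_confidence_select(probabilities, k):
    """Pick the k unlabelled samples whose most-likely label has the
    lowest predicted probability, i.e. where the model is least sure."""
    confidence = probabilities.max(axis=1)  # P of the top label per sample
    return np.argsort(confidence)[:k]       # indices of the k least confident

# Predicted label distributions for four unlabelled samples.
probs = np.array([[0.90, 0.10],   # confident
                  [0.55, 0.45],   # very uncertain
                  [0.60, 0.40],   # fairly uncertain
                  [0.99, 0.01]])  # very confident
chosen = least_confidence_select(probs, 2)
```

The selected samples (here the two most uncertain ones) would be sent to an annotator, added to the training set, and the model retrained, either incrementally or from scratch.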