77 results for Non-negative sources
at Queensland University of Technology - ePrints Archive
Abstract:
This thesis addressed issues that have prevented qualitative researchers from using thematic discovery algorithms. The central hypothesis evaluated whether allowing qualitative researchers to interact with thematic discovery algorithms and to incorporate domain knowledge improved their ability to address research questions and to trust the derived themes. Non-negative Matrix Factorisation and Latent Dirichlet Allocation find latent themes within document collections, but these algorithms are rarely used because qualitative researchers do not trust, and cannot interact with, the themes that are automatically generated. The research determined the types of interactivity that qualitative researchers require and then evaluated interactive algorithms that matched these requirements. Theoretical contributions included the articulation of design guidelines for interactive thematic discovery algorithms, and the development of an Evaluation Model and a Conceptual Framework for Interactive Content Analysis.
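As a rough illustration of what such thematic discovery looks like in practice, the sketch below extracts themes from a tiny document collection with NMF via scikit-learn; the corpus, the number of themes and all parameter values are placeholder assumptions, and this is not the interactive system developed in the thesis.

```python
# Minimal sketch of automatic thematic discovery with NMF (scikit-learn).
# Corpus, theme count and parameters are illustrative placeholders only.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "participants described anxiety about returning to work",
    "family support was central to recovery after treatment",
    "workplace pressure contributed to ongoing stress and anxiety",
    "friends and family provided practical and emotional support",
]

vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(docs)           # documents x terms matrix

model = NMF(n_components=2, random_state=0)  # assume two latent themes
doc_theme = model.fit_transform(X)           # per-document theme weights
terms = vectoriser.get_feature_names_out()

# Show the top terms of each derived theme, as a researcher would inspect them.
for k, weights in enumerate(model.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Theme {k}: {', '.join(top)}")
```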
Abstract:
This paper reports the application of multi-criteria decision making techniques, PROMETHEE and GAIA, and receptor models, PCA/APCS and PMF, to data from an air monitoring site located on the campus of Queensland University of Technology in Brisbane, Australia, and operated by the Queensland Environmental Protection Agency (QEPA). The data consisted of the concentrations of 21 chemical species and meteorological data collected between 1995 and 2003. PROMETHEE/GAIA separated the samples into those collected when leaded and unleaded petrol were used to power vehicles in the region. The number and source profiles of the factors obtained from PCA/APCS and PMF analyses were compared. There are noticeable differences in the outcomes, possibly because of the non-negative constraints imposed on the PMF analysis: while PCA/APCS identified 6 sources, PMF reduced the data to 9 factors. Each factor had a distinctive composition suggesting that motor vehicle emissions, controlled burning of forests, secondary sulphate, sea salt and road dust/soil were the most important sources of fine particulate matter at the site. The most plausible locations of the sources were identified by combining the results obtained from the receptor models with meteorological data. The study demonstrated the potential benefits of combining results from multi-criteria decision making analysis with those from receptor models in order to gain insights that could enhance the development of air pollution control measures.
Abstract:
Human exposures in transportation microenvironments are poorly represented by ambient stationary monitoring. A number of on-road studies using vehicle-based mobile monitoring have been conducted to address this. Most previous studies were conducted on urban roads in developed countries, where the primary emission source was vehicles. Few studies have examined on-road pollution in urban settings in developing countries, and no study to date has been conducted for roadways in rural environments, where a substantial proportion of the population lives. This study aimed to characterize on-road air quality on the East-West Highway (EWH) in Bhutan and identify its principal sources. We conducted six mobile measurements of PM10, particle number (PN) count and CO along the entire 570 km length of the EWH. We divided the EWH into five segments, R1-R5, taking the road length between two district towns as a single road segment. The pollutant concentrations varied widely along the different road segments, with the highest concentrations for R5, the final segment of the road to the capital (PM10 = 149 µg/m3, PN = 5.74 × 10^4 particles/cm3, CO = 0.19 ppm). Apart from vehicle emissions, the dominant sources were road works, unpaved roads and roadside combustion activities. Overall, the highest contributions above the background levels were made by unpaved roads for PM10 (6 times background), and by vehicle emissions for PN and CO (5 and 15 times background, respectively). Notwithstanding the differences in instrumentation used and particle size range measured, the current study showed lower PN concentrations compared with similar on-road studies. However, concentrations were still high enough that commuters, road maintenance workers and residents living along the EWH were potentially exposed to elevated pollutant concentrations from combustion and non-combustion sources. Future studies should focus on assessing the dispersion patterns of roadway pollutants and defining the short- and long-term health impacts of exposure in Bhutan, as well as in other developing countries with similar characteristics.
Abstract:
This article explores two matrix methods to induce the "shades of meaning" (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) each compute a set of vectors, each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data. One set reflects concepts generally appearing in text, and the second set comprises words used for investigations into word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than that of word sense induction and is hence relevant to thematic analysis of opinion, where nuances of opinion can arise.
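A minimal sketch of the two factorisations, assuming a pre-computed (traces x terms) matrix for the target word; the random matrix and component counts below are placeholders, so this shows only the mechanics, not the conditional-entropy evaluation used in the article.

```python
# Sketch: factorise a (traces x terms) matrix for one word with NMF and SVD.
# Each row of the component matrices is read as a candidate shade of meaning.
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD

rng = np.random.default_rng(0)
X = rng.random((200, 500))         # placeholder for real corpus traces

nmf = NMF(n_components=5, random_state=0)
W = nmf.fit_transform(X)           # trace-to-shade memberships
H = nmf.components_                # 5 non-negative "shade" vectors over terms

svd = TruncatedSVD(n_components=5, random_state=0)
U = svd.fit_transform(X)
V = svd.components_                # 5 signed basis vectors over terms

print(H.shape, V.shape)            # both (5, 500)
```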
Abstract:
With globalisation and severe budget constraints in the education sector in Australia and around the world, it has become necessary for higher education institutions to be more outward looking and to seek funding from non-traditional sources to supplement the financial shortfalls. One way to overcome this problem is to work cooperatively with other institutions to share facilities and courses, while at the same time generating valuable income to maintain the operation of the university. This paper describes the development of joint curricula in built environment and engineering courses at QUT. It outlines the stages of development, starting from seeking international partners, developing a memorandum of understanding, visiting the partner institution to inspect its facilities, and developing the curriculum to meet the academic requirements of the institutions and professional bodies, through to the implementation process.
Abstract:
Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large scale epidemiological investigations that are necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high resolution C. jejuni and C. coli genotyping schemes that are convenient for high throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat is frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. The first aim of this study describes the application of MLST-SNP (Multi Locus Sequence Typing Single Nucleotide Polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparisons with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a previously SNP + binary and MLST typed set of human isolates. Common genotypes between the two collections of isolates were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates. This genotype was, however, absent in the chicken isolates, indicating the role of non-poultry sources in causing human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing. While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long term epidemiological linkage of the isolates, the development of a High Resolution Melt (HRM) based curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA) is described in Aim 2 of this study. The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short term epidemiological studies such as outbreak investigations. HRM curve analysis based flaA interrogation is a single-step, closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis flaA interrogation was successful at discriminating the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates.
In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive Hierarchical Resolving Assays using Nucleic Acids) approach to genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method. Therefore, flaA HRM is a rapid and cost effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method to interrogate Campylobacter MLST gene fragments using HRM, called 'SNP Nucleated Minim MLST' or 'Minim typing'. The method involves HRM interrogation of MLST fragments that encompass highly informative "Nucleating SNPs" to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using i) "Minimum SNPs" and ii) the new 'HRMtype' software packages. Species-specific sets of six "Nucleating SNPs" and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. 'Minim typing' was tested empirically by typing 15 C. jejuni and five C. coli isolates. The association of clonal complexes (CC) with each isolate by 'Minim typing' and by SNP + binary typing was used to compare the two MLST interrogation schemes. The CCs linked with each C. jejuni isolate were consistent for both methods. Thus, 'Minim typing' is an efficient and cost effective method to interrogate MLST genes. However, it is not expected to be independent of, or to meet the resolution of, sequence-based MLST gene interrogation. 'Minim typing' in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform that is amenable to automation and multiplexing. The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on the unified real-time PCR and HRM platform. They provide high resolution and are simple, cost effective and ideally suited to rapid, high throughput genotyping of these common food-borne pathogens.
Abstract:
Knowledge has been recognised as a powerful yet intangible asset, which is difficult to manage. This is especially true in a project environment, where there is the potential to repeat mistakes rather than learn from previous experiences. The literature in the project management field has recognised the importance of knowledge sharing (KS) within and between projects. However, studies in that field focus primarily on KS mechanisms, including lessons learned (LL) and post-project reviews, as the source of knowledge for future projects, and only some preliminary research has been carried out on the aspects of project management offices (PMOs) and organisational culture (OC) in KS. This study undertook to investigate KS behaviours in an inter-project context, with a particular emphasis on the role of trust, OC and a range of knowledge sharing mechanisms (KSM) in achieving successful inter-project knowledge sharing (I-PKS). An extensive literature search resulted in the development of an I-PKS Framework, which defined the scope of the research and shaped its initial design. The literature review indicated that existing research relating to the three factors of OC, trust and KSM remains inadequate in its ability to fully explain the role of these contextual factors. In particular, the literature review identified these areas of interest: (1) the conflicting answers to some of the major questions related to KSM, (2) the limited empirical research on the role of different trust dimensions, (3) limited empirical evidence of the role of OC in KS, and (4) the insufficient research on KS in an inter-project context. The resulting Framework comprised the three main factors of OC, trust and KSM, demonstrating a more integrated view of KS in the inter-project context. Accordingly, the aim of this research was to examine the relationships between these three factors and KS by investigating behaviours related to KS from the project managers' (PMs') perspective. In order to achieve this aim, the research sought to answer the following research questions: 1. How does organisational culture influence inter-project knowledge sharing? 2. How does the existence of three forms of trust, namely (i) ability, (ii) benevolence and (iii) integrity, influence inter-project knowledge sharing? 3. How can different knowledge sharing mechanisms (relational, project management tools and processes, and technology) improve inter-project knowledge sharing behaviours? 4. How do the relationships between these three factors of organisational culture, trust and knowledge sharing mechanisms improve inter-project knowledge sharing? a. What are the relationships between the factors? b. What is the best fit for given cases to ensure more effective inter-project knowledge sharing? Using multiple case studies, this research was designed to build propositions emerging from cross-case data analysis. The four cases were chosen on the basis of theoretical sampling. All cases were large project-based organisations (PBOs) with a strong matrix-type structure, as per the typology proposed by the Project Management Body of Knowledge (PMBoK) (2008). Data were collected from the project management departments of the respective organisations. A range of analytical techniques was used to deal with the data, including pattern-matching logic and explanation-building analysis, complemented by the use of NVivo for data coding and management.
Propositions generated at the end of the analyses were further compared with the extant literature, and practical implications based on the data and literature were suggested in order to improve I-PKS. Findings from this research conclude that OC, trust and KSM contribute to inter-project knowledge sharing, and suggest the existence of relationships between these factors. In view of that, this research identified the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and knowledge sharing. Furthermore, this research demonstrated that characteristics of culture and trust interact to reinforce preferences for mechanisms of knowledge sharing. This means that cultures that exhibit characteristics of the Clan type are more likely to result in trusting relationships, and hence are more likely to use organic sources of knowledge for both tacit and explicit knowledge exchange. In contrast, cultures that are empirically driven and based on control, efficiency and measures (characteristics of the Hierarchy and Market types) display a tendency to develop trust primarily in the ability of non-organic sources, and therefore use these sources to share mainly explicit knowledge. This thesis contributes to the project management literature by providing a more integrative view of I-PKS, bringing the factors of OC, trust and KSM into the picture. A further contribution relates to the use of collaborative tools as a substitute for static LL databases and as a facilitator for tacit KS between geographically dispersed projects. This research adds to the literature on OC by providing rich empirical evidence of the relationships between OC and the willingness to share knowledge, and by providing empirical evidence that OC has an effect on trust; in doing so, this research extends the theoretical propositions outlined by previous research. This study also extends the research on trust by identifying the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and KS. Finally, this research provides some directions for future studies.
Abstract:
The development of public service broadcasters (PSBs) in the 20th century was framed around debates about their difference from commercial broadcasting. These debates navigated between two poles. One concerned the relationship between non-commercial sources of funding and the role played by statutory Charters as guarantors of the independence of PSBs. The other concerned the relationship between PSBs being both a complementary and a comprehensive service, although there are tensions inherent in this duality. In the 21st century, as reconfigured public service media organisations (PSMs) operate across multiple platforms in a convergent media environment, how are these debates changing, if at all? Has the case for PSM "exceptionalism" changed with Web-based services, catch-up TV, podcasting, ancillary product sales, and the commissioning of programs from external sources in order to operate in highly diversified cross-media environments? Do the traditional assumptions about non-commercialism still hold as the basis for different forms of PSM governance and accountability? This paper will consider the question of PSM exceptionalism in the context of three reviews into Australian media that took place over 2011-2012: the Convergence Review undertaken through the Department of Broadband, Communications and the Digital Economy; the National Classification Scheme Review undertaken by the Australian Law Reform Commission; and the Independent Media Inquiry that considered the future of news and journalism.
Abstract:
The aim of this paper is to provide a comparison of various algorithms and parameters for building reduced semantic spaces. The effect of dimension reduction, the stability of the representation and the effect of word order are examined in the context of five algorithms for constructing semantic vectors: random projection (RP), singular value decomposition (SVD), non-negative matrix factorization (NMF), permutations and holographic reduced representations (HRR). The quality of the semantic representation was tested by means of a synonym-finding task using the TOEFL test on the TASA corpus. Dimension reduction was found to improve the quality of the semantic representation, but it is hard to find the optimal parameter settings. Even though dimension reduction by RP was found to be more generally applicable than SVD, the semantic vectors produced by RP are somewhat unstable. Encoding word order into the semantic vector representation via HRR did not lead to any increase in scores over vectors constructed from word co-occurrence in context information. In this regard, very small context windows resulted in better semantic vectors for the TOEFL test.
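For readers unfamiliar with the reduction step, the sketch below contrasts random projection and truncated SVD on a sparse term-by-document matrix and compares two reduced word vectors by cosine similarity; the matrix, dimensions and word indices are assumptions for illustration, not the paper's TASA/TOEFL setup.

```python
# Sketch: reduce a sparse term-by-document matrix with RP and with SVD,
# then compare two word vectors by cosine similarity (a synonym-style check).
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.random_projection import SparseRandomProjection

X = sparse_random(5000, 2000, density=0.01, random_state=0)  # terms x documents

rp = SparseRandomProjection(n_components=300, random_state=0)
X_rp = rp.fit_transform(X)         # cheap, but the vectors are somewhat unstable

svd = TruncatedSVD(n_components=300, random_state=0)
X_svd = svd.fit_transform(X)       # least-squares optimal basis, more costly

i, j = 10, 42                      # indices of two hypothetical words
print(cosine_similarity(X_rp[[i]], X_rp[[j]]))
print(cosine_similarity(X_svd[[i]], X_svd[[j]]))
```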
Abstract:
Application of poultry litter (PL) to soil can lead to substantial nitrous oxide (N2O) emissions due to the co-application of labile carbon (C) and nitrogen (N). Slow pyrolysis of PL to produce biochar may mitigate N2O emissions from this source, whilst still providing agronomic benefits. In a corn crop on ferrosol with similarly matched available N inputs of ca. 116 kg N/ha, PL-biochar plus urea emitted significantly less N2O (1.5 kg N2O-N/ha) than raw PL (4.9 kg N2O-N/ha). Urea amendment without the PL-biochar emitted 1.2 kg N2O-N/ha, and the PL-biochar alone emitted only 0.35 kg N2O-N/ha. Both PL and PL-biochar resulted in similar corn yields and total N uptake, which were significantly greater than for urea alone. Stable isotope methodology showed that the majority (~80%) of N2O emissions were from non-urea sources. Amendment with raw PL significantly increased C mineralisation and the quantity of permanganate-oxidisable organic C. The low molar H/C (0.49) and O/C (0.16) ratios of the PL-biochar suggest that it is more stable in soil than raw PL. The PL-biochar also had higher P and K fertiliser value than raw PL. This study suggests that PL-biochar is a valuable soil amendment with the potential to significantly reduce emissions of soil greenhouse gases compared to the raw product. Contrary to other studies, PL-biochar incorporated to 100 mm did not reduce N2O emissions from surface-applied urea, which suggests that further field evaluation of biochar impacts, and of methods of application of both biochar and fertiliser, is needed.
Abstract:
Samples of sea water contain phytoplankton taxa in varying amounts, and marine scientists are interested in the relative abundance of each taxon. Their relative biomass can be ascertained indirectly by measuring the quantity of various pigments using high performance liquid chromatography. However, the conversion from pigment to taxa is mathematically non-trivial, as it is a positive matrix factorisation problem in which both matrices are unknown beyond the level of initial estimates. Prior information on the pigment-to-taxa conversion matrix is used to give the problem a unique solution. Iterating between two non-negative least squares algorithms gives satisfactory results. Analysis of some sample data indicates the prospects for this type of analysis. An alternative, more computationally intensive approach using Bayesian methods is discussed.
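A minimal sketch of the alternating non-negative least squares idea, assuming a pigment matrix P (samples x pigments) and an initial estimate of the taxa-to-pigment conversion matrix C; the synthetic data and dimensions are placeholders, and the method in the paper additionally uses prior information on C to pin down a unique solution.

```python
# Sketch: alternate two NNLS steps to fit P ~ A @ C with A, C >= 0,
# where A holds taxa abundances per sample and C converts taxa to pigments.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
P = rng.random((30, 12))           # samples x pigments (placeholder data)
C = rng.random((6, 12)) + 0.1      # initial estimate of taxa x pigment conversion
A = np.zeros((30, 6))              # taxa abundances to be estimated

for _ in range(50):
    # Given C, solve each sample's abundances: P[i, :] ~ A[i, :] @ C.
    for i in range(P.shape[0]):
        A[i, :], _ = nnls(C.T, P[i, :])
    # Given A, update each pigment column of C: P[:, j] ~ A @ C[:, j].
    for j in range(P.shape[1]):
        C[:, j], _ = nnls(A, P[:, j])

print(np.linalg.norm(P - A @ C))   # residual shrinks over the iterations
```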
Abstract:
Since the revisions to the International Health Regulations (IHR) in 2005, much attention has turned to two concerns relating to infectious disease control. The first is how to assist states to strengthen their capacity to identify and verify public health emergencies of international concern (PHEIC). The second is the question of how the World Health Organization (WHO) will operate its expanded mandate under the revised IHR. Very little attention has been paid to the potential individual power that has been afforded under the IHR revisions, primarily through the first inclusion of human rights principles in the instrument and the allowance for the WHO to receive non-state surveillance intelligence and informal reports of health emergencies. These inclusions mark the individual as a powerful actor, but also recognise the vulnerability of the individual to the whim of the state in outbreak response and containment. In this paper we examine why these changes to the IHR occurred and explore the consequences of expanding the sovereignty-as-responsibility concept to disease outbreak response. To this end, our paper considers both the strengths and weaknesses of incorporating reports from non-official sources and including human rights principles in the IHR framework.
Abstract:
The rate at which people move and resettle around the world is unprecedented. Mobility and resettlement are now greatly assisted by the use of inexpensive internet communication technologies (ICTs) for a wide variety of functions: to communicate locally and across territories, for localised information seeking, for geo-locational mapping, and for forging new social connections in host countries and cities. This article is based on a qualitative study of newly arrived migrants and mobile people from non-English speaking backgrounds (NESB) in the city of Brisbane, Australia, and investigates how the internet is used to assist the initial period of settling into the city. As increasing amounts of essential information are placed online, the study asks how people from NESB communities manage to negotiate the types of information they require during the early stages of resettlement, given varying levels of access to ICTs and of digital and language literacy. The study finds that the internet is widely used for location-specific information seeking (such as for accommodation and job-seeking), but this is often supplemented with other non-mediated sources of information. The study identified implications for social policy in regard to the resourcing of, and access to, information. While the findings are specific to the study location, it is feasible that the patterns of internet use for resettlement have relevance in a broader context.
Abstract:
Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, including decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept, instead of being partly removed as stop words. Abbreviations appearing in many forms of entry are manually identified and normalised to a single form. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterized as high-dimensional and sparse, i.e., few features are irrelevant, but features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space. Classifiers were then built on this reduced feature space. In experiments, a set of tests was conducted to determine which classification method is best for medical text classification. The Non-Negative Matrix Factorization with Support Vector Machine method achieves 93% precision, which is higher than that of all the traditional classifiers tested. We also found that TF/IDF weighting, which works well for long text classification, is inferior to binary weighting for short document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as this affects the classification performance.
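A minimal sketch of the best-performing combination reported above (binary term weighting, NNMF dimensionality reduction, then a support vector machine), built with scikit-learn; the toy narratives, codes, component count and SVM settings are assumptions for illustration only.

```python
# Sketch: binary bag-of-words -> NMF reduction -> linear SVM,
# mirroring the NNMF + SVM combination described in the abstract.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

narratives = [
    "fell from ladder while painting ceiling",
    "cut finger with kitchen knife preparing food",
    "slipped on wet floor at supermarket",
    "burned hand on hot stove while cooking",
]
codes = ["fall", "cut", "fall", "burn"]        # toy injury codes

pipeline = make_pipeline(
    CountVectorizer(binary=True),              # binary weighting beat TF/IDF here
    NMF(n_components=3, random_state=0),       # reduced feature space
    LinearSVC(),
)
pipeline.fit(narratives, codes)
print(pipeline.predict(["tripped on stairs and fell"]))
```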
Abstract:
Narrative text is a useful way of identifying injury circumstances from routine emergency department data collections. Automatically classifying narratives based on machine learning techniques is a promising approach that can reduce the tedious manual classification process. Existing work focuses on using Naive Bayes, which does not always offer the best performance. This paper proposes Matrix Factorization approaches, along with a learning enhancement process, for this task. The results are compared with the performance of various other classification approaches. The impact of parameter settings on the classification results for a medical text dataset is discussed. With the right choice of dimension k, the Non-Negative Matrix Factorization-based method achieves a 10-fold cross-validation accuracy of 0.93.
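The sensitivity to the factorisation dimension k noted above can be explored with a simple cross-validated grid search, sketched below; the repeated toy narratives, candidate k values and classifier are placeholder assumptions, and a real run would use the labelled medical narrative dataset with larger k.

```python
# Sketch: choose the NMF dimension k by 10-fold cross-validated accuracy.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Repeated toy narratives stand in for the real labelled injury dataset.
narratives = [
    "fell from ladder while painting ceiling",
    "cut finger with kitchen knife preparing food",
    "slipped on wet floor at supermarket",
    "burned hand on hot stove while cooking",
] * 10
codes = ["fall", "cut", "fall", "burn"] * 10

pipe = Pipeline([
    ("vec", CountVectorizer(binary=True)),
    ("nmf", NMF(random_state=0, max_iter=1000)),
    ("clf", LinearSVC()),
])

search = GridSearchCV(
    pipe,
    param_grid={"nmf__n_components": [2, 5, 10]},  # candidate k (small for toy data)
    cv=10,                                         # 10-fold cross-validation
    scoring="accuracy",
)
search.fit(narratives, codes)
print(search.best_params_, round(search.best_score_, 2))
```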