783 results for Information Mining


Relevance: 20.00%

Abstract:

Experimental / pilot online journalistic publication. EUAustralia Online (www.euaustralia.com) is a pilot niche publication identifying and demonstrating the dynamics of online journalism. The editor, an experienced senior journalist and academic specialising in European studies, commenced publication on 28.8.06 during one year's "industry immersion", with media accreditation to the European Commission, Brussels. Reporting is now from Australia and from Europe on field-trip exercises. Student editors participate, making it partly a training operation. EUAustralia demonstrates the adaptation of conventional, universal, "Western" liberal journalistic practices. Its first premise is to fill a knowledge gap in Australia about the European Union: its institutions, functions and directions. The second premise is to test the communications capacity of the online format, where the publication sets a strong standard of journalistic credibility, hence its transparency with sourcing and its signposting of "commentary" or "opinion". EUAustralia uses modified, enhanced weblog software allowing for the future allocation of closed pages to subscribers. An early and well-regarded exemplar of its kind, with a modest upload rate (2010-13 average, 16 postings monthly), it commands over 180,000 site visits p.a. (half as unique visitors; AWB Statistics) and is strongly rated by search engines; see the page-one Google placements for "EU Australia". Comment by the ISP (SeventhVision, Broadbeach, Queensland): "The site has good search engine recognition because it is seen as credible; it can be used to generate revenue". This journalistic exercise has been analysed in theoretical context twice, in published refereed conference proceedings (Communication and Media Policy Forum, Sydney; 2007, 2009).

Relevance: 20.00%

Abstract:

Researching administrative history is problematic. A trail of authoritative documents is often hard to find, and useful summaries can be difficult to organise, especially if source material is in paper formats in geographically dispersed locations. In the absence of documents, the reasons for particular decisions and the rationale underpinning particular policies can be confounded as key personnel advance in their professions and retire. The rationale for past decisions may be lost for practical purposes; and if an organisation's memory of events is diminished, its learning through experience is also diminished. Publishing this document aims to avoid unnecessary duplication of effort by other researchers who need to investigate how policies of charging for public sector information have been justified. The author compiled this work within a somewhat limited time period, and it does not pretend to be a complete or comprehensive analysis of the issues.

A significant part of the role of government is to provide a framework of legally enforceable rights and obligations that can support individuals and non-government organisations in their lawful activities. Accordingly, claims that governments should be more 'business-like' need careful scrutiny. A significant supply of goods and services occurs as non-market activity where neither benefits nor costs are quantified within conventional accounting systems or in terms of money. Where a government decides to provide information as a service (information from land registries is archetypal), the transactions occur as a political decision made under a direct or clearly delegated authority of a parliament with the requisite constitutional powers. This is not a market transaction, and the language of the market confuses attempts to describe a number of aspects of how governments allocate resources.

Cost recovery can be construed as an aspect of taxation that is a sole prerogative of a parliament. The issues are fundamental to political constitutions, but they become more complicated where states cede some taxing powers to a central government as part of a federal system. Nor should the absence of markets necessarily be construed as 'market failure' or even 'government failure'. The absence is often attributable to particular technical, economic and political constraints that preclude the operation of markets. Arguably, greater care is needed in distinguishing between the polity and markets in raising revenues and allocating resources, and that needs to start by removing unhelpful references to 'business' in the context of government decision-making.

Relevance: 20.00%

Abstract:

This study explores strategic decision-making (SDM) in micro-firms, an economically significant business subsector. As extant large- and small-firm literature currently proffers an incomplete characterization of SDM in very small enterprises, a multiple-case methodology was used to investigate how these firms make strategic decisions. Eleven Australian Information Technology service micro-firms participated in the study. Using an information-processing lens, the study uncovered patterns of SDM in micro-firms and derived a theoretical micro-firm SDM model. This research also identifies several implications for micro-firm management and directions for future research, contributing to the understanding of micro-firm SDM in both theory and practice.

Relevance: 20.00%

Abstract:

The impact of technology on the health and well-being of workers has been a topic of interest since computers and computerized technology were widely introduced in the 1980s. Of recent concern is the impact of rapid technological advances on individuals' psychological well-being, especially as advances in mobile technology have increased many workers' accessibility and expected productivity. In this chapter we focus on the associations between occupational stress and technology, especially behavioral and psychological reactions. We discuss some key facilitators of, and barriers to, users' acceptance of and engagement with information and communication technology. We conclude with recommendations for ongoing research on managing occupational health and well-being in conjunction with technological advancements.

Relevance: 20.00%

Abstract:

Background: Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an 'Activity Code' to identify work-related cases, with information about the presenting problem recorded in a short free-text field. This study compared methods of interrogating text fields to identify work-related injuries presenting at emergency departments, to inform approaches to surveillance of work-related injury.

Methods: Three approaches were used to interrogate an injury-description text field to classify cases as work-related: keyword search, index search, and content-analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach to cases coded with an Activity Code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach.

Results: The basic keyword search detected 58% of cases (specificity 0.99), the index search detected 62% of cases (specificity 0.87), and the content-analytic text mining approach (using adjusted probabilities) detected 77% of cases (specificity 0.95).

Conclusions: The findings of this study provide strong support for the continued development of text-searching methods to obtain information from routine emergency department data, to improve the capacity for comprehensive injury surveillance.
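The keyword-search approach, the simplest of the three, can be sketched as below; the keyword list and example records are hypothetical illustrations, not the study's actual search terms:

```python
import re

# Hypothetical work-related keywords; the study's actual list is not given.
KEYWORDS = ["work", "job", "forklift", "workplace", "employer"]

def flag_work_related(description):
    """Keyword search: flag a triage free-text description as work-related
    if any keyword appears as a whole word."""
    text = description.lower()
    return any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in KEYWORDS)

def sensitivity_specificity(flags, activity_codes):
    """Compare text-based flags against Activity-code labels (True = work-related)."""
    tp = sum(1 for f, g in zip(flags, activity_codes) if f and g)
    fn = sum(1 for f, g in zip(flags, activity_codes) if not f and g)
    tn = sum(1 for f, g in zip(flags, activity_codes) if not f and not g)
    fp = sum(1 for f, g in zip(flags, activity_codes) if f and not g)
    return tp / (tp + fn), tn / (tn + fp)
```

The index-search and text-mining approaches replace the fixed keyword list with, respectively, a structured index of terms and a probabilistic classifier, but are evaluated against the Activity Code in the same way.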

Relevance: 20.00%

Abstract:

Road curves are an important feature of road infrastructure, and many serious crashes occur on them. In Queensland, the number of fatalities on curves is twice that on straight roads, so there is a need to reduce drivers' exposure to crash risk on road curves. Road crashes in Australia and across the Organisation for Economic Co-operation and Development (OECD) have plateaued in the last five years (2004 to 2008), and the road safety community is urgently seeking innovative interventions to reduce the number of crashes. However, designing an innovative and effective intervention may prove difficult, as it relies on providing theoretical foundation, coherence, understanding, and structure to both the design and the validation of the efficiency of the new intervention. Researchers from multiple disciplines have developed various models to determine the contributing factors for crashes on road curves with a view towards reducing the crash rate. However, most of the existing methods are based on statistical analysis of contributing factors described in government crash reports.

To further explore the contributing factors related to crashes on road curves, this thesis designs a novel method to analyse and validate them. The use of crash claim reports from an insurance company is proposed for analysis using data mining techniques; to the best of our knowledge, this is the first attempt to use data mining techniques to analyse crashes on road curves. A text mining technique is employed because the reports consist of thousands of textual descriptions, from which the contributing factors can be identified. Beyond identifying the contributing factors, limited studies to date have investigated the relationships between these factors, especially for crashes on road curves; this study therefore proposes the use of the rough set analysis technique to determine these relationships. The results from this analysis are used to assess the effect of the contributing factors on crash severity.

The findings obtained through the data mining techniques presented in this thesis are consistent with previously identified contributing factors. Furthermore, this thesis has identified new contributing factors and the relationships between them. One significant pattern related to crash severity is the time of day: severe road crashes occur more frequently in the evening or at night. Tree collision is another common pattern: crashes that occur in the morning and involve hitting a tree are likely to have a higher severity. Another factor that influences crash severity is the age of the driver; most age groups face a high crash severity except drivers between 60 and 100 years old, who have the lowest. The significant relationships identified between contributing factors involve the time of the crash, the year of manufacture of the vehicle, the age of the driver, and hitting a tree.

Having identified new contributing factors and relationships, a validation process was carried out using a traffic simulator to determine their accuracy, and it indicates that the results are accurate. This demonstrates that data mining techniques are a powerful tool in road safety research and can be usefully applied within the Intelligent Transport System (ITS) domain. The research presented in this thesis provides an insight into the complexity of crashes on road curves. The findings have important implications for both practitioners and academics: for road safety practitioners, the results illustrate practical benefits for the design of interventions for road curves that will potentially help decrease related injuries and fatalities; for academics, this research opens up a new research methodology for assessing crash severity on road curves.
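As a rough illustration of the rough set analysis mentioned above (a sketch of the general technique, not the thesis's implementation), records with identical attribute values form equivalence classes; the lower approximation of a severity class holds the attribute combinations that always lead to it in the data, while the upper approximation holds those that sometimes do. The records here are invented for the example:

```python
from collections import defaultdict

# Invented crash records for illustration: (condition attributes, decision).
records = [
    ({"time": "night", "hit_tree": False}, "severe"),
    ({"time": "night", "hit_tree": False}, "severe"),
    ({"time": "morning", "hit_tree": True}, "severe"),
    ({"time": "morning", "hit_tree": True}, "minor"),
    ({"time": "morning", "hit_tree": False}, "minor"),
]

def approximations(records, target):
    """Rough-set lower/upper approximations of the target decision class.

    The lower approximation contains equivalence classes whose records all
    have the target decision (certain membership); the upper approximation
    contains classes where at least one record does (possible membership)."""
    classes = defaultdict(list)
    for attrs, decision in records:
        classes[tuple(sorted(attrs.items()))].append(decision)
    lower = {k for k, ds in classes.items() if all(d == target for d in ds)}
    upper = {k for k, ds in classes.items() if any(d == target for d in ds)}
    return lower, upper
```

The boundary region (upper minus lower) exposes attribute combinations whose relationship to severity is ambiguous, which is where relationships between factors merit closer analysis.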

Relevance: 20.00%

Abstract:

Information Retrieval (IR) is an important albeit imperfect component of information technologies. Insufficient diversity of retrieved documents is one of the primary issues studied in this research; this study shows that the problem leads to a decrease in precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is the optimization of retrieval precision after all feedback has been issued. This is done by increasing the diversity of retrieved documents; the study shows that the value of recall reflects this diversity.

The Probability Ranking Principle is viewed in the literature as the "bedrock" of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods of diversifying retrieved documents from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (a setting for which it may not have been designed but in which it is actively used).

Retrieval precision of the search session should be optimized with a multistage stochastic programming model to accomplish the aim; however, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback-correctness method. The proposed optimization models rest on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics; the use of clusters is the primary reason why a new method of probability estimation is proposed.

The adaptive dual-control topic-based IR system (ADTIR) was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC collections of documents. The Wikipedia experiment revealed that the dual-control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR; the main reason was the insufficient quality of the clusters generated from the TREC collection, which violated the underlying assumption.
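To illustrate the tension with the Probability Ranking Principle, the sketch below contrasts a pure PRP ranking with a greedy, maximal-marginal-relevance-style diversified ranking. MMR is used here only as a simple stand-in for the thesis's dual-control mechanism; the documents and similarity function are invented:

```python
def prp_rank(docs):
    """Probability Ranking Principle: order by relevance probability alone."""
    return sorted(docs, key=lambda d: d["p_rel"], reverse=True)

def diversified_rank(docs, sim, lam=0.5):
    """Greedy MMR-style diversification: trade relevance against similarity
    to documents already ranked, so near-duplicates are demoted."""
    remaining, ranked = list(docs), []
    while remaining:
        best = max(remaining, key=lambda d: lam * d["p_rel"]
                   - (1 - lam) * max((sim(d, r) for r in ranked), default=0.0))
        ranked.append(best)
        remaining.remove(best)
    return ranked

# Hypothetical documents: two near-duplicates on topic x, one on topic y.
docs = [
    {"id": "a", "p_rel": 0.9, "topic": "x"},
    {"id": "b", "p_rel": 0.8, "topic": "x"},
    {"id": "c", "p_rel": 0.5, "topic": "y"},
]
same_topic = lambda d, r: 1.0 if d["topic"] == r["topic"] else 0.0
```

Here PRP ranks a, b, c, while the diversified ranking promotes the off-topic document c above the redundant b; with session-level feedback, covering topic y earlier can raise end-of-session precision, which is the intuition behind the counterexample.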

Relevance: 20.00%

Abstract:

One of the roles of the nurse in the clinical setting is coordinating communication across the healthcare team. On a daily basis nurses interact with the person receiving care, their family members, and multiple care providers, placing the nurse in a central position with access to a vast array of information on the person. Through this, nurses have historically functioned as "information repositories". With the advent of Health Information Technology (HIT) tools, there is a potential for HIT to impact interdisciplinary communication, practice efficiency and effectiveness, relationships and workflow in acute care settings [1][3]. In 2005, the HIMSS Nursing Informatics Community developed the IHIT Scale to measure the impact of HIT on the nursing role and interdisciplinary communication in USA hospitals. In 2007, nursing informatics colleagues from Australia, Finland, Ireland, New Zealand, Scotland and the USA formed a research collaborative to validate the IHIT in six additional countries. This paper discusses the background, methodology, results and implications of the Australian IHIT survey of over 1100 nurses. The results are currently being analyzed and will be presented at the conference.

Relevance: 20.00%

Abstract:

In 2005, the Healthcare Information Management Systems Society (HIMSS) Nursing Informatics Community developed a survey, the IHIT Scale, to measure the impact of health information technology (HIT) on the role of nurses and interdisciplinary communication in hospital settings. In 2007, nursing informatics colleagues from Australia, England, Finland, Ireland, New Zealand, Scotland and the United States formed a research collaborative to validate the IHIT across countries. All teams have completed construct and face validation in their countries, and five of the six teams have initiated reliability testing with practicing nurses. This paper reports the international collaborative's validation of the IHIT Scale completed to date.

Relevance: 20.00%

Abstract:

We examine the nature and extent of statutory executive stock option (ESO) disclosures by Australian listed companies over the 2001 to 2004 period, and the influence of corporate governance mechanisms on these disclosures. Our results show a progressive increase in overall compliance from 2001 to 2004. Despite this improvement, however, the results reveal management's continued reluctance to disclose the more sensitive ESO information. Factors associated with good internal governance, including board independence, audit committee independence and effectiveness, and compensation committee independence and effectiveness, are found to contribute to improved compliance. Similarly, certain external governance factors are associated with improved disclosure, including external auditor quality, shareholder activism (proxied by companies identified as poor performers by the Australian Shareholders' Association), and regulatory intervention.

Relevance: 20.00%

Abstract:

The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background before selecting the highest-ranking examples as a refined background dataset. The characteristics of the refined dataset were then analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less-informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
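A minimal sketch of the selection idea, assuming hypothetical suitability and dissimilarity functions (the paper derives its own suitability metric): candidates are ranked by suitability, and near-duplicates of already selected examples are skipped, so the retained background is more disperse in the kernel space:

```python
import math

def refine_background(candidates, suitability, dissimilarity, keep, min_dist=0.1):
    """Rank candidate impostor examples by suitability and keep the top ones,
    skipping any candidate within min_dist of an already selected example.
    min_dist is an illustrative threshold, not a value from the paper."""
    selected = []
    for cand in sorted(candidates, key=suitability, reverse=True):
        if len(selected) == keep:
            break
        if all(dissimilarity(cand, s) > min_dist for s in selected):
            selected.append(cand)
    return selected

# Toy 2-D stand-ins for points in the SVM kernel space (hypothetical).
def euclidean(a, b):
    return math.dist(a, b)
```

With two near-identical high-scoring candidates, only one is kept and the next-best distinct candidate takes the remaining slot, mirroring the reported reduction of redundant examples.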

Relevance: 20.00%

Abstract:

This study assesses the recently proposed data-driven background dataset refinement technique for speaker verification using SVM feature sets other than the GMM supervector features for which it was originally designed. The performance improvements brought about in each trialled SVM configuration demonstrate the versatility of background dataset refinement. This work also extends the originally proposed technique by exploiting support vector coefficients as an impostor suitability metric in the data-driven selection process. Using support vector coefficients improved the performance of the refined datasets in the evaluation of unseen data. Further, attempts are made to exploit the differences in impostor suitability measures from different feature spaces to provide added robustness.

Relevance: 20.00%

Abstract:

Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve performance and to allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for the detection error rates are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based, digit-dependent speaker models. The tuning of the parameters, n classifiers and m attempts/samples, is investigated, and the resultant detection error trade-off performance is evaluated on individual digits. Results show that a performance improvement can be achieved even for weaker classifiers (FRR 19.6%, FAR 16.7%). The architectures investigated apply to speaker verification from spoken digit strings, such as credit card numbers, in telephone, VoIP or internet-based applications.
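Under the statistical-independence assumption stated above, one common instantiation of the two schemes is an AND rule across the n classifier instances and an accept-if-any rule across the m attempts; the sketch below shows how the error rates then combine (the chapter's exact decision rules may differ):

```python
def and_fusion(far, frr, n):
    """Multi-instance stage: n independent classifiers must all accept.
    A false accept needs every classifier to err; a false reject needs one."""
    return far ** n, 1.0 - (1.0 - frr) ** n

def any_sample(far, frr, m):
    """Multi-sample stage: m independent attempts, accepted if any succeeds.
    A false reject needs every attempt to err; a false accept needs one."""
    return 1.0 - (1.0 - far) ** m, frr ** m
```

Applying both stages with n = m = 2 to the weak classifier quoted above (FAR 16.7%, FRR 19.6%) lowers both error rates, consistent with the reported improvement; tuning n against m gives the controlled trade-off between false alarms and false rejects.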

Relevance: 20.00%

Abstract:

Market failures involving the sale of complex merchandise, such as residential property, financial products and credit, have principally been attributed to information asymmetries. Existing legislative and regulatory responses were developed having regard to consumer protection policies based on traditional economic theories that focus on the notion of the 'rational consumer'. Governmental responses therefore seek to impose disclosure obligations on sellers of complex goods or products to ensure that consumers have sufficient information upon which to make a decision. Emergent research, based on behavioural economics, challenges traditional ideas and instead focuses on the actual behaviour of consumers. This approach suggests that consumers as a whole do not necessarily benefit from mandatory disclosure because some, if not most, consumers do not pay attention to the disclosed information before they make a decision to purchase. The need for consumer policies to take consumer characteristics and behaviour into account is increasingly being recognised by governments, most recently in the policy framework suggested by the Australian Productivity Commission.

Relevance: 20.00%

Abstract:

The challenge for all educators is to fuse the learning of information literacy to an academic education in such a way that the outcome is systematic and sustainable learning for students. This challenge can be answered through long-term commitment to information literacy education bound to organisation-wide, renewable strategic planning and driven through systemic reform. This chapter seeks to explore the two sides of reforming information literacy education in an academic environment. Specifically, it will examine how one Australian university has undertaken the implementation of a rigorous strategic, systemic approach to information literacy learning and teaching.