239 results for metrics
Abstract:
An important responsibility of the Environment Protection Authority, Victoria, is to set objectives for levels of environmental contaminants. To support the development of environmental objectives for water quality, a need has been identified to understand the dual impacts of the concentration and duration of a contaminant on biota in freshwater streams. For suspended solids contamination, information reported in the Newcombe and Jensen study of freshwater fish [North American Journal of Fisheries Management, 16(4):693--727, 1996], together with daily suspended solids data from the United States Geological Survey stream monitoring network, is utilised. The study group was asked to examine the utility of both the Newcombe and Jensen results and the US data, and to formulate a procedure for use by the Environment Protection Authority Victoria that takes the concentration and duration of harmful episodes into account when assessing water quality. The extent to which the impact of a toxic event on fish health could be modelled deterministically was also considered. It was found that concentration and exposure duration were the main factors compounding the severity of the effects of suspended solids on freshwater fish. A protocol for assessing the cumulative effect on fish health and a simple deterministic model, based on the biology of gill harm and recovery, were proposed.
References
D. W. T. Au, C. A. Pollino, R. S. S. Wu, P. K. S. Shin, S. T. F. Lau, and J. Y. M. Tang. Chronic effects of suspended solids on gill structure, osmoregulation, growth, and triiodothyronine in juvenile green grouper Epinephelus coioides. Marine Ecology Progress Series, 266:255--264, 2004.
J. C. Bezdek, S. K. Chuah, and D. Leep. Generalized k-nearest neighbor rules. Fuzzy Sets and Systems, 18:237--256, 1986.
E. T. Champagne, K. L. Bett-Garber, A. M. McClung, and C. Bergman. Sensory characteristics of diverse rice cultivars as influenced by genetic and environmental factors. Cereal Chem., 81:237--243, 2004.
S. G. Cheung and P. K. S. Shin. Size effects of suspended particles on gill damage in green-lipped mussel Perna viridis. Marine Pollution Bulletin, 51(8--12):801--810, 2005.
D. H. Evans. The fish gill: site of action and model for toxic effects of environmental pollutants. Environmental Health Perspectives, 71:44--58, 1987.
G. C. Grigg. The failure of oxygen transport in a fish at low levels of ambient oxygen. Comp. Biochem. Physiol., 29:1253--1257, 1969.
G. Holmes, A. Donkin, and I. H. Witten. Weka: A machine learning workbench. In Proceedings of the Second Australia and New Zealand Conference on Intelligent Information Systems, volume 24, pages 357--361, Brisbane, Australia, 1994. IEEE Computer Society.
D. D. Macdonald and C. P. Newcombe. Utility of the stress index for predicting suspended sediment effects: response to comments. North American Journal of Fisheries Management, 13:873--876, 1993.
C. P. Newcombe. Suspended sediment in aquatic ecosystems: ill effects as a function of concentration and duration of exposure. Technical report, British Columbia Ministry of Environment, Lands and Parks, Habitat Protection Branch, Victoria, 1994.
C. P. Newcombe and J. O. T. Jensen. Channel suspended sediment and fisheries: a synthesis for quantitative assessment of risk and impact. North American Journal of Fisheries Management, 16(4):693--727, 1996.
C. P. Newcombe and D. D. Macdonald. Effects of suspended sediments on aquatic ecosystems. North American Journal of Fisheries Management, 11(1):72--82, 1991.
K. Schmidt-Nielsen. Scaling: Why is Animal Size so Important? Cambridge University Press, NY, 1984.
J. S. Schwartz, A. Simon, and L. Klimetz. Use of fish functional traits to associate in-stream suspended sediment transport metrics with biological impairment. Environmental Monitoring and Assessment, 179(1--4):347--369, 2011.
E. A. Shaw and J. S. Richardson. Direct and indirect effects of sediment pulse duration on stream invertebrate assemblages and rainbow trout (Oncorhynchus mykiss) growth and survival. Canadian Journal of Fisheries and Aquatic Sciences, 58:2213--2221, 2001.
P. Tiwari and H. Hasegawa. Demand for housing in Tokyo: A discrete choice analysis. Regional Studies, 38:27--42, 2004.
Y. Tramblay, A. Saint-Hilaire, T. B. M. J. Ouarda, F. Moatar, and B. Hecht. Estimation of local extreme suspended sediment concentrations in California rivers. Science of the Total Environment, 408:4221--
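A minimal sketch of the concentration-duration severity scoring this kind of protocol builds on, assuming the log-linear severity-of-ill-effect form popularised by Newcombe and Jensen; the coefficients below are illustrative placeholders, not the fitted values from the 1996 paper or an EPA Victoria procedure:

```python
# Hedged sketch: Newcombe-and-Jensen style severity-of-ill-effect (SEV) score.
# Severity grows with the logarithms of both concentration and exposure
# duration; coefficients a, b, c are illustrative, not fitted values.
import math

def severity(concentration_mg_L, duration_h, a=1.0, b=0.6, c=0.7):
    """SEV = a + b*ln(duration) + c*ln(concentration)."""
    return a + b * math.log(duration_h) + c * math.log(concentration_mg_L)

# A brief, highly turbid episode and a week-long, mildly turbid one can
# produce comparable severity scores.
print(f"{severity(1000.0, 6.0):.1f}")   # short, intense event  -> ~6.9
print(f"{severity(50.0, 168.0):.1f}")   # long, moderate event  -> ~6.8
```

This is why a water-quality procedure needs both axes: neither concentration nor duration alone determines the harm to fish.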
Abstract:
Process compliance measurement is receiving increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. On the other hand, metrics to quantify process compliance have only recently been defined. A major criticism is that existing measures appear unintuitive. In this paper, we trace this problem back to a more foundational question: which notion of behavioural equivalence is appropriate for discussing compliance? We present a quantification approach based on behavioural profiles, a process abstraction mechanism. Behavioural profiles can be regarded as weaker than existing equivalence notions such as trace equivalence, and they can be calculated efficiently. As a validation, we present an implementation that measures the compliance of logs against a normative process model. This implementation is being evaluated in a case study with an international service provider.
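As a rough illustration, the sketch below derives a toy behavioural profile (strict order, interleaving and exclusiveness relations over activity pairs) from traces and scores compliance as the share of pairs on which a log agrees with the model; the relation definitions and the agreement ratio are simplifying assumptions, not the paper's metric:

```python
# Hedged sketch: toy behavioural-profile compliance, not the authors' measure.
from itertools import combinations

def behavioural_profile(traces):
    """Map each activity pair (a, b) to '->', '<-', '||' or '+'."""
    before = set()  # (a, b) observed with a before b in some trace
    for trace in traces:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                before.add((a, b))
    activities = {a for t in traces for a in t}
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        ab, ba = (a, b) in before, (b, a) in before
        if ab and ba:
            profile[(a, b)] = '||'   # interleaving: both orders occur
        elif ab:
            profile[(a, b)] = '->'   # strict order: a before b
        elif ba:
            profile[(a, b)] = '<-'   # strict order: b before a
        else:
            profile[(a, b)] = '+'    # exclusiveness: never ordered together
    return profile

def compliance(model_traces, log_traces):
    """Share of shared activity pairs whose relations agree."""
    model = behavioural_profile(model_traces)
    log = behavioural_profile(log_traces)
    shared = [p for p in model if p in log]
    return sum(model[p] == log[p] for p in shared) / len(shared) if shared else 1.0

# The log swaps b and c relative to the normative model: 5 of 6 pairs agree.
print(compliance([['a', 'b', 'c', 'd']], [['a', 'c', 'b', 'd']]))  # ~0.83
```

Because the profile records only pairwise relations rather than full trace sets, it is weaker than trace equivalence and cheap to compute, which is the paper's point.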
Abstract:
The reporting and auditing of patient dose is an important component of radiotherapy quality assurance. The manual extraction of dose-volume metrics is time-consuming and undesirable when auditing the dosimetric quality of a large cohort of patient plans. A dose assessment application was written to overcome this, allowing the calculation of various dose-volume metrics for large numbers of plans exported from treatment planning systems. This application expanded on the DICOM-handling functionality of the MCDTK software suite. The software extracts dose values in the volume of interest using a ray-casting point-in-polygon algorithm, where the polygons are defined by the contours in the RTSTRUCT file...
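A minimal sketch of the ray-casting point-in-polygon test mentioned above, assuming one planar contour supplied as a vertex list (the abstract does not show the MCDTK internals):

```python
# Hedged sketch: even-odd ray casting. A point is inside the polygon if a
# horizontal ray from it crosses the boundary an odd number of times.

def point_in_polygon(x, y, polygon):
    """polygon: list of (x, y) vertices, e.g. one RTSTRUCT contour slice."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            # x position where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Dose-grid points passing this test contribute to the structure's DVH.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # True
print(point_in_polygon(15, 5, square))  # False
```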
Abstract:
For transmedia to be acknowledged as worthy of investment by the business world, and even by those considering a career in developing transmedia creative products, a business case for the concept must first be established. This chapter seeks to inform transmedia advocates about the concept of value and the ROI of transmedia more generally. While it is by no means a template or formula for measuring value, it is a reminder to transmedia professionals and theorists that intangible benefits are neither valueless nor unquantifiable. The chapter is divided into four sections: 1. Definitions of transmedia: the concept and scope of transmedia, expressed in a manner that is intelligible to a business audience. 2. Value and cost: a discussion of the terms from an economic perspective. 3. Audience interaction and collaborative content development: a discussion of how the feedback and engagement systems of transmedia have facilitated rich experiences that offer more than mere content and audience-reach outputs. 4. ROI metrics for transmedia: measurable criteria for articulating value to business investors.
Abstract:
The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this chapter outlines an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry.
Abstract:
The majority of individuals appear to have insight into their own sleepiness, but there is some evidence that this does not hold true for all, for example, treated patients with obstructive sleep apnoea. Identification of sleep-related symptoms may help drivers determine their sleepiness; eye symptoms in particular show promise. Sixteen participants completed four motorway drives on two separate occasions. Drives were completed during daytime and night-time, in both a driving simulator and on the real road. Ten eye symptoms were rated at the end of each drive and compared with driving performance and with subjective and objective sleep metrics recorded during driving. ‘Eye strain’, ‘difficulty focusing’, ‘heavy eyelids’ and ‘difficulty keeping the eyes open’ were identified as the four key sleep-related eye symptoms. Drives resulting in these eye symptoms were more likely to have high subjective sleepiness and more line crossings than drives where similar eye discomfort was not reported. Furthermore, drivers having unintentional line crossings were likely to have ‘heavy eyelids’ and ‘difficulty keeping the eyes open’. Results suggest that drivers struggling to identify sleepiness could be assisted with the advice ‘stop driving if you feel sleepy and/or have heavy eyelids or difficulty keeping your eyes open’.
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use can be used to account for additional variability in indicators of stream ecosystem health. The landscape metrics included the following: a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or survey site and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season. Only one nutrient indicator was tested and results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
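A minimal sketch of how an inverse-distance-weighted land-use metric differs from a lumped one; the weighting function and the parcel representation are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch: IDW land-use metric. Parcels close to the stream (or survey
# site) count more than distant ones; a lumped metric weights all equally.

def idw_landuse_metric(parcels, power=1.0):
    """parcels: list of (is_target_landuse, distance_m) pairs.
    Returns the distance-weighted proportion of the target land use."""
    weights = [1.0 / (d ** power + 1.0) for _, d in parcels]  # +1 avoids /0
    total = sum(weights)
    target = sum(w for (is_target, _), w in zip(parcels, weights) if is_target)
    return target / total

# A lumped metric would report 0.5 here; IDW emphasises the near-stream parcel.
parcels = [(True, 50.0), (False, 2000.0)]
print(f"{idw_landuse_metric(parcels):.2f}")  # ~0.97
```

An HA-IDW variant would additionally scale each parcel's weight by its level of hydrologic activity, as described above.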
Abstract:
Notwithstanding the problems with identifying audiences (cf. Hartley, 1987) or with sampling them (cf. Turner, 2005), we contend that by using social media it is at least possible to gain an understanding of the habits of those who choose to engage with content through social media. In this chapter, we broadly outline the ways in which networks such as Twitter and Facebook can stand as proxies for audiences in a number of scenarios, and enable content creators, networks and researchers to understand the ways in which audiences come into existence, change over time, and engage with content. Beginning with the classic audience, television, we consider the evolution of metrics from baseline volume metrics to the more sophisticated ‘telemetrics’ that are the focus of our current work. We discuss the evolution of these metrics from principles developed in the field of ‘sabermetrics’, and highlight their effectiveness as both a predictor and a baseline for producers and networks to measure the success of their social media campaigns. Moving beyond the evaluation of audience engagement, we then consider the ‘audiences’ themselves. Building on Hartley’s argument that audiences are “imagined” constructs (1987, p. 125), we demonstrate the continual shift of Australian television audiences, from episode to episode and series to series, demonstrating through our map of the Australian Twittersphere (Bruns, Burgess & Highfield, 2014) both the variation among those who directly engage with television content and those who are exposed to it through their social media networks. Next, by exploring overlaps between sporting events (such as the NRL and AFL Grand Finals), reality TV (such as Big Brother, My Kitchen Rules and Biggest Loser), soaps (e.g. Bold & The Beautiful, Home & Away) and current affairs programming (e.g. morning television and A Current Affair), we discuss to what extent it is possible to profile and categorise Australian television audiences. Finally, we move beyond television audiences to consider audiences around social media platforms themselves. Building on our map of the Australian Twittersphere (Bruns, Burgess & Highfield, 2014) and a pool of 5,000 active Australian accounts, we discuss the interconnectedness of audiences around particular subjects and how specific topics spread through the Twitter user base. Also, by using Twitter as a proxy, we consider the careers of a number of popular YouTubers, utilising a method we refer to as Twitter accession charts (Bruns & Woodford, 2014) to identify their growth curves and relate them to specific events in a YouTuber’s career, be that ‘viral’ videos or collaborations, to discuss how audiences form around specific content creators.
Abstract:
Social media is playing an ever-increasing role both in viewers’ engagement with television and in the television industry’s evaluation of programming, in Australia (the focus of our study) and beyond. Twitter hashtags and viewer comments are increasingly incorporated into broadcasts, while Facebook fan pages provide a means of marketing upcoming shows and television personalities directly into the social media feeds of millions of users. Additionally, bespoke applications such as FanGo and ZeeBox, which interact with the mainstream social networks, are increasingly being utilised by broadcasters for interactive elements of programming (cf. Harrington, Highfield and Bruns, 2012). However, both academic and industry studies of these platforms have focused on measuring content during the specific broadcast of a show, or a period surrounding it (e.g. three hours before until 3 am the next day, in the case of 2013 Nielsen SocialGuide reports). In this paper, we argue that this focus ignores a period that is significant for both television producers and advertisers: the lead-up to the program. If, as we argue elsewhere (Bruns, Woodford, Highfield & Prowd, forthcoming), users are persuaded to engage with content both by advertising of the Twitter hashtag or Facebook page and by observing their network connections engaging with such content, the period before and between shows may have a significant impact on a viewer’s likelihood of watching a show. The significance of this period for broadcasters is clearly highlighted by the effort they put into advertising forthcoming shows through several channels, including television and social media, but also more widely. Biltereyst (2004, p. 123) has argued that reality television generates controversy to receive media attention, and our previous small-scale work on reality shows during 2013 and 2014 supports the theory that promoting controversial behaviour is likely to lead to increased viewing (Woodford & Prowd, 2014a). It remains unclear, however, to what extent this applies to other television genres. Similarly, while networks’ use of social media has been increasing, best practices remain unclear. Thus, by applying our telemetrics (social media metrics for television based on sabermetric approaches; Woodford, Prowd & Bruns, forthcoming; cf. Woodford & Prowd, 2014b) to the period between shows, we are able to better understand the period when key viewing decisions may be made, to establish the significance of observing discussions within one’s network during the period between shows, and to identify best-practice examples of promoting a show using social media.
Abstract:
To safeguard the operation of Web-based systems in Web environments, we propose an SSPA (Server-based SHA-1 Page-digest Algorithm) to verify the integrity of Web content before the server issues an HTTP response to a user request. In addition to standard security measures, our Java implementation of the SSPA, called the Dynamic Security Surveillance Agent (DSSA), provides Web-based systems with further security in terms of content integrity. Its function is to prevent the display on client machines of Web content that has been altered through the malicious acts of attackers and intruders. This protects the reputation of organisations from cyber-attacks and ensures the safe operation of Web systems by dynamically monitoring the integrity of a Web site's content on demand. We discuss our findings in terms of the applicability and practicality of the proposed system. We also discuss its time metrics, specifically its computational overhead at the Web server and the overall latency from the client's point of view, using different Internet access methods. The SSPA, our DSSA implementation, some experimental results and related work are all discussed.
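A minimal sketch of the server-side page-digest check the SSPA describes, assuming the server keeps a known-good SHA-1 digest per page; the function names are hypothetical:

```python
# Hedged sketch: verify page integrity before issuing the HTTP response.
import hashlib

def page_digest(content: bytes) -> str:
    return hashlib.sha1(content).hexdigest()

def verify_page(content: bytes, trusted_digest: str) -> bool:
    """Serve the page only if its digest matches the recorded value."""
    return page_digest(content) == trusted_digest

original = b"<html><body>Welcome</body></html>"
trusted = page_digest(original)          # recorded when the page is published
tampered = b"<html><body>Defaced!</body></html>"

print(verify_page(original, trusted))    # True  -> issue the response
print(verify_page(tampered, trusted))    # False -> withhold page, raise alert
```

The computational overhead discussed in the abstract is essentially the cost of one digest computation per request.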
Abstract:
Over the past decade, the mining industry has come to recognise the importance of water both to itself and to others. Water accounting is a formalisation of this importance that quantifies and communicates how water is used by individual sites and by the industry as a whole. While there are a number of different accounting frameworks that could be used within the industry, the Minerals Council of Australia’s (MCA) Water Accounting Framework (WAF) is an industry-led approach that provides a consistent representation of mine site water interactions regardless of their operational, social or environmental context, allowing valid comparisons between sites and companies. The WAF contains definitions of offsite water sources and destinations and of onsite water use, a methodology for applying the definitions, and a set of metrics to measure site performance. The WAF comprises two models: the Input-Output Model, which represents the interactions between sites and their surrounding communities, and the Operational Model, which represents onsite water interactions. Members of the MCA have recently adopted the WAF’s Input-Output Model to report on the external water interactions of their Australian operations, with some adopting it on a global basis. To support this adoption, companies need to better understand how to implement the WAF in their own operations. Developing a water account is non-trivial, particularly for sites unfamiliar with the WAF or for sites that need to represent unusual features. This work describes how to build a water account for a given site using the Input-Output Model, with an emphasis on how to represent challenging situations.
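A minimal sketch of an Input-Output style water account, with illustrative source and destination categories (the WAF's own definitions and performance metrics are not reproduced here):

```python
# Hedged sketch: a site-level input-output water balance, illustrative only.
inputs_ML = {            # offsite water sources, megalitres per year
    "surface water": 1200.0,
    "groundwater": 800.0,
    "third-party supply": 150.0,
}
outputs_ML = {           # offsite water destinations
    "discharge to river": 600.0,
    "evaporation": 900.0,
    "entrainment in product": 250.0,
}

total_in = sum(inputs_ML.values())
total_out = sum(outputs_ML.values())

print(f"Inputs:  {total_in:.0f} ML")                        # 2150 ML
print(f"Outputs: {total_out:.0f} ML")                       # 1750 ML
print(f"Balance residual: {total_in - total_out:+.0f} ML")  # +400 ML retained
```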
Abstract:
This (seat) attribute target list and Design for Comfort taxonomy report is based on the literature review report (C3-21, Milestone 1), which specified different areas (factors) with specific influence on automotive seat comfort. The attribute target list summarizes the seat factors established in the literature review (Figure 1) and subsumes detailed attributes, derived from the literature findings, within these factors/classes. The attribute target list (Milestone 2) then provides the basis for the “Design for Comfort” taxonomy (Milestone 3) and helps the project develop target settings (values) that will be measured during the testing phase of the C3-21 project. The attribute target list will become the core technical description of seat attributes, to be incorporated into the final comfort procedure that will be developed. The Attribute Target List and Design for Comfort Taxonomy complete the target definition process. They specify the context, markets and application (vehicle classes) for seat development. As multiple markets are addressed, the target setting requires flexible variables to accommodate the selected customer range. These ranges will subsequently be filled with data in forthcoming studies. The taxonomy records how and where the targets are derived, including reference points and standards, engineering and subjective data from previous studies, as well as literature findings. The comfort parameters are ranked to identify which targets, variables or metrics have the greatest influence on comfort. Comfort areas included are seat kinematics (adjustability), seat geometry and pressure distribution (static comfort), seat thermal behavior and noise/vibration transmissibility (cruise comfort), and finally material properties, design and features (seat harmony). Data from previous studies have been fine-tuned and will be validated in the nominated contexts and markets in dedicated follow-up studies.
Abstract:
Background: The sequencing, de novo assembly and annotation of transcriptome datasets generated with next-generation sequencing (NGS) have enabled biologists to answer genomic questions in non-model species with unprecedented ease. Reliable and accurate de novo assembly and annotation of transcriptomes is, however, a critically important step for transcriptome assemblies generated from short-read sequences. Typical benchmarks for assembly and annotation reliability have been performed with model species. To address the reliability and accuracy of de novo transcriptome assembly in non-model species, we generated an RNAseq dataset for an intertidal gastropod mollusc species, Nerita melanotragus, and compared the assemblies produced by four different de novo transcriptome assemblers (Velvet, Oases, Geneious and Trinity) on a number of quality and redundancy metrics. Results: Transcriptome sequencing on the Ion Torrent PGM™ produced 1,883,624 raw reads with a mean length of 133 base pairs (bp). The Trinity and Oases de novo assemblers produced the best assemblies on all quality metrics, including fewer contigs, increased N50 and average contig length, and contigs of greater length. Overall, the BLAST and annotation success of our assemblies was not high, with only 15-19% of contigs assigned a putative function. Conclusions: We believe that any improvement in the annotation success of gastropod species will require more gastropod genome sequences, and in particular an increase in mollusc protein sequences in public databases. Overall, this paper demonstrates that reliable and accurate de novo transcriptome assemblies can be generated from short-read sequencers with the right assembly algorithms. Keywords: Nerita melanotragus; de novo assembly; transcriptome; heat shock protein; Ion Torrent
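For readers unfamiliar with the N50 statistic used above, a minimal sketch of its computation:

```python
# Hedged sketch: N50 is the contig length at which contigs of that length or
# longer account for at least half of the total assembly length.

def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# Total length 29, half 14.5; cumulative sums 10, 18 -> N50 is 8.
print(n50([10, 8, 5, 3, 2, 1]))  # 8
```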
Abstract:
Aim: The assessment of treatment plans is an important component of the education of radiation therapists. The establishment of a grade for a plan is currently based on subjective assessment of a range of criteria. Automating assessment could provide a number of advantages, including faster feedback, a reduced chance of human error, and simpler aggregation of past results. Method: A collection of treatments planned by a cohort of 27 second-year radiation therapy students was selected for quantitative evaluation. Treatment sites included the bladder, cervix, larynx, parotid and prostate, although only the larynx plans had been assessed in detail. The plans were designed with the Pinnacle system and exported using the DICOM framework. Assessment criteria included beam arrangement optimisation, volume contouring, target dose coverage and homogeneity, and organ-at-risk sparing. The in-house Treatment and Dose Assessor (TADA) software was evaluated for suitability in assisting with the quantitative assessment of these plans. Dose-volume data were exported in per-student and per-structure data tables, along with beam complexity metrics, dose-volume histograms, and reports on naming conventions. Results: The treatment plans were exported and processed using TADA, with the processing of all 27 plans for each treatment site taking less than two minutes. Naming conventions were successfully checked against a reference protocol. Significant variations between student plans were found. Correlation with assessment feedback was established for the larynx plans. Conclusion: The data generated could be used to inform the selection of future assessment criteria, monitor student development, and provide useful feedback to students. The provision of objective, quantitative evaluations of plan quality would be a valuable addition not only to radiotherapy education programmes but also to staff development and potentially credentialing methods. New functionality developed within TADA for this work could be applied clinically, for example to evaluate protocol compliance.
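A minimal sketch of the kind of dose-volume metrics such a tool reports (DXX coverage and VXX volume statistics), assuming a flat array of per-voxel doses for one structure; this is an illustrative simplification, not TADA's implementation:

```python
# Hedged sketch: DVH point metrics from per-voxel dose samples.
import numpy as np

def d_percent(doses_gy, percent):
    """DXX: minimum dose received by the hottest `percent`% of the volume."""
    return float(np.percentile(doses_gy, 100.0 - percent))

def v_dose(doses_gy, threshold_gy):
    """VXX: percentage of the structure volume receiving >= threshold_gy."""
    return 100.0 * float((np.asarray(doses_gy) >= threshold_gy).mean())

rng = np.random.default_rng(0)
ptv_doses = rng.normal(60.0, 2.0, 10_000)   # toy dose samples in Gy

print(f"D95 = {d_percent(ptv_doses, 95):.1f} Gy")  # target coverage check
print(f"V57 = {v_dose(ptv_doses, 57.0):.1f} %")    # volume at 95% of 60 Gy
```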
Abstract:
In this paper we present a novel scheme for improving speaker diarization by making use of repeating speakers across multiple recordings within a large corpus. We call this technique speaker re-diarization and demonstrate that it is possible to reuse the initial speaker-linked diarization outputs to boost diarization accuracy within individual recordings. We first propose and evaluate two novel re-diarization techniques. We demonstrate their complementary characteristics and fuse the two techniques to successfully conduct speaker re-diarization across the SAIVT-BNEWS corpus of Australian broadcast data. This corpus contains recurring speakers in various independent recordings that need to be linked across the dataset. We show that our speaker re-diarization approach can provide a relative improvement of 23% in diarization error rate (DER), over the original diarization results, as well as improve the estimated number of speakers and the cluster purity and coverage metrics.
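A minimal sketch of the diarization error rate (DER) and the relative improvement quoted above; the component error times are illustrative numbers, not the SAIVT-BNEWS results:

```python
# Hedged sketch: DER and relative improvement, with made-up component times.

def der(missed_s, false_alarm_s, confusion_s, scored_speech_s):
    """DER = (missed + false alarm + speaker confusion) / scored speech time."""
    return (missed_s + false_alarm_s + confusion_s) / scored_speech_s

baseline = der(120.0, 80.0, 400.0, 3600.0)     # original diarization
rediarized = der(120.0, 80.0, 262.0, 3600.0)   # after speaker re-diarization

gain = (baseline - rediarized) / baseline
print(f"baseline DER:    {baseline:.3f}")       # 0.167
print(f"re-diarized DER: {rediarized:.3f}")     # 0.128
print(f"relative improvement: {gain:.1%}")      # ~23%, as reported above
```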