248 results for Kähler-Einstein Metrics
Abstract:
Background: Hospitalisation for ambulatory care sensitive conditions (ACSHs) has become a recognised tool to measure access to primary care. Timely and effective outpatient care is highly relevant to refugee populations given their past exposure to torture and trauma, and poor access to adequate health care in their countries of origin and during flight. Little is known about ACSHs among resettled refugee populations. With the aim of examining the hypothesis that people from refugee backgrounds have higher ACSHs than people born in the country of hospitalisation, this study analysed a six-year state-wide hospital discharge dataset to estimate ACSH rates for residents born in refugee-source countries and compared them with the Australia-born population. Methods: Hospital discharge data between 1 July 1998 and 30 June 2004 from the Victorian Admitted Episodes Dataset were used to assess ACSH rates among residents born in eight refugee-source countries and compare them with the Australia-born average. Rate ratios and 95% confidence intervals were used to illustrate these comparisons. Four categories of ambulatory care sensitive conditions were measured: total, acute, chronic and vaccine-preventable. Country of birth was used as a proxy indicator of refugee status. Results: When compared with the Australia-born population, hospitalisations for total and acute ambulatory care sensitive conditions were lower among refugee-born persons over the six-year period. Chronic and vaccine-preventable ACSHs were largely similar between the two population groups. Conclusion: Contrary to our hypothesis, preventable hospitalisation rates among people born in refugee-source countries were no higher than Australia-born population averages. More research is needed to elucidate whether low rates of preventable hospitalisation indicate better health status, appropriate health habits, timely and effective care-seeking behaviour and outpatient care, or overall low levels of health care-seeking due to other more pressing needs during the initial period of resettlement. It is important to unpack dimensions of health status and health care access in refugee populations through ad-hoc surveys, as the refugee population is not a homogeneous group despite sharing a common experience of forced displacement and violence-related trauma.
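As a minimal illustration of the comparison method mentioned above (the counts and person-years below are hypothetical, not figures from the study), a rate ratio and its 95% confidence interval can be computed with the standard log-normal approximation:

```python
import math

def rate_ratio_ci(cases_a, pyears_a, cases_b, pyears_b, z=1.96):
    """Rate ratio of group A vs group B with a 95% CI (log-normal approximation)."""
    rate_a = cases_a / pyears_a
    rate_b = cases_b / pyears_b
    rr = rate_a / rate_b
    # Standard error of log(RR) for Poisson-distributed case counts.
    se_log_rr = math.sqrt(1 / cases_a + 1 / cases_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical illustration only: 120 ACSHs over 80,000 person-years (refugee-born)
# versus 900 ACSHs over 400,000 person-years (Australia-born).
print(rate_ratio_ci(120, 80_000, 900, 400_000))
```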
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the SAD, SSD, NCC, and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, since all of their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric matching techniques, in particular the rank and the census transform, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, as well as able to produce disparity maps with a high proportion of valid matches. An additional advantage of both the rank and the census transform is their amenability to fast hardware implementation.
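As an illustrative sketch, assuming small greyscale image windows and omitting the disparity search loop (this is not the thesis implementation), the area-based costs named above can be written as:

```python
import numpy as np

def sad(left_win, right_win):
    """Sum of Absolute Differences: cheap, integer-only, but sensitive to brightness offsets."""
    return np.abs(left_win.astype(np.int32) - right_win.astype(np.int32)).sum()

def zsad(left_win, right_win):
    """Zero-mean SAD: subtracting each window's mean removes gain/offset (radiometric) bias."""
    l = left_win.astype(np.float64) - left_win.mean()
    r = right_win.astype(np.float64) - right_win.mean()
    return np.abs(l - r).sum()

def ncc(left_win, right_win):
    """Normalised cross-correlation: higher is better; robust, but needs multiplies and square roots."""
    l = left_win.astype(np.float64) - left_win.mean()
    r = right_win.astype(np.float64) - right_win.mean()
    denom = np.sqrt((l * l).sum() * (r * r).sum())
    return (l * r).sum() / denom if denom > 0 else 0.0
```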
Abstract:
Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross-Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms rely on the relative ordering of pixel values, rather than the raw pixel values themselves, as a similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework is presented for the stereo matching problem, along with the relative hardware requirements for the various metrics investigated.
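The rank transform mentioned above can be sketched as follows; this is a simple software reference version for illustration, not a hardware-oriented implementation:

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: replace each pixel by the number of neighbours in a win x win
    window that are smaller than it. Matching is then done with SAD on the ranks, which
    depends only on the local ordering of intensities, not their absolute values."""
    h, w = img.shape
    r = win // 2
    out = np.zeros_like(img, dtype=np.uint16)
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(window < img[y, x])
    return out
```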
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper assesses the suitability of a number of matching techniques for use in a stereo vision sensor for close-range scenes consisting primarily of rocks. These include traditional area-based matching metrics and non-parametric transforms, in particular the rank and census transforms. Experimental results show that the rank and census transforms exhibit a number of clear advantages over area-based matching metrics, including their low computational complexity and robustness to certain types of distortion.
Abstract:
A frame-rate stereo vision system, based on non-parametric matching metrics, is described. Traditional metrics, such as normalized cross-correlation, are expensive in terms of logic. Non-parametric measures require only simple, parallelizable, functions such as comparators, counters and exclusive-or, and are thus very well suited to implementation in reprogrammable logic.
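A minimal sketch of the census transform and its exclusive-or/counting match cost, assuming an 8-bit greyscale image and a 5x5 window (an illustrative software reference only, not the frame-rate logic design described above):

```python
import numpy as np

def census_transform(img, win=5):
    """Census transform: encode, for each pixel, a bit string recording whether each
    neighbour in a win x win window is darker than the centre pixel."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | (1 if img[y + dy, x + dx] < img[y, x] else 0)
            out[y, x] = bits
    return out

def hamming_cost(a, b):
    """Matching cost between two census codes: XOR then count the set bits --
    exactly the comparator / exclusive-or / counter structure noted above."""
    return bin(int(a) ^ int(b)).count("1")
```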
Abstract:
To date, the available literature mainly discusses Twitter activity patterns in the context of individual case studies, while comparative research on a large number of communicative events and their dynamics and patterns is missing. By conducting a comparative study of more than 40 different cases (covering topics such as elections, natural disasters, corporate crises, and televised events) we identify a number of distinct types of discussion that can be observed on Twitter. Drawing on a range of communicative metrics, we show that thematic and contextual factors influence the usage of different communicative tools available to Twitter users, such as original tweets, @replies, retweets, and URLs. Based on this first analysis of the overall metrics of Twitter discussions, we also demonstrate stable patterns in the use of Twitter in the context of major topics and events.
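A hedged sketch of how such overall per-case metrics can be derived (field names such as 'text' and 'retweeted' are assumptions for illustration, not the study's actual data schema):

```python
from collections import Counter

def tweet_type(tweet):
    """Classify a tweet into the communicative categories discussed above.
    The 'text' and 'retweeted' fields are hypothetical for this sketch."""
    if tweet.get("retweeted") or tweet["text"].startswith("RT @"):
        return "retweet"
    if tweet["text"].startswith("@"):
        return "@reply"
    return "original"

def case_metrics(tweets):
    """Per-case shares of retweets, @replies, original tweets and tweets containing URLs."""
    counts = Counter(tweet_type(t) for t in tweets)
    total = len(tweets) or 1
    metrics = {k: v / total for k, v in counts.items()}
    metrics["url_share"] = sum("http" in t["text"] for t in tweets) / total
    return metrics
```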
Abstract:
Twitter is now well-established as an important platform for real-time public communication. Twitter research continues to lag behind these developments, with many studies remaining focused on individual case studies and utilizing home-grown, idiosyncratic, non-repeatable, and non-verifiable research methodologies. While the development of a full-blown “science of Twitter” may remain illusory, it is nonetheless necessary to move beyond such individual scholarship and toward the development of more comprehensive, transferable, and rigorous tools and methods for the study of Twitter on a large scale and in close to real time.
Abstract:
This paper analyses the expenditure patterns of 97 Australian international aid and development organisations, and examines the extent to which they disclose information about their expenditure in order to discharge their accountability. Not-for-profit (NFP) expenditure attracts media attention, with perceptions of excessive costs potentially damaging stakeholder trust in NFP organisations. This makes it important for organisations to be proactive in communicating their expenditure stories to stakeholders, rather than being judged on their performance by standardised expenditure metrics. By highlighting what it costs to ensure longer-term operational capability, NFP organisations will contribute to the discharge of their financial accountability and play a part in educating all stakeholders about the dangers of relying on a single metric.
Abstract:
This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA is used to assess the security of a given program based on its design or source code with regard to data flow-based metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA is also capable of producing Java source code for the generated design of a given program. This source code can then be compiled and the resulting Java bytecode program can be used by the tool to assess the program's overall security based on our security metrics.
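As a minimal sketch of one plausible data-flow-style metric (illustrative only; not necessarily the exact metric set SCPA computes), consider the proportion of confidential attributes that are directly accessible from outside their own class:

```python
def classified_attribute_exposure(classes):
    """Illustrative data-flow-style security metric: the proportion of attributes
    annotated as confidential that are publicly accessible outside their class.

    `classes` maps a class name to a list of (attribute, is_confidential, is_public)
    tuples. Lower values suggest fewer opportunities for confidential data to flow out."""
    confidential = exposed = 0
    for attrs in classes.values():
        for _name, is_confidential, is_public in attrs:
            if is_confidential:
                confidential += 1
                if is_public:
                    exposed += 1
    return exposed / confidential if confidential else 0.0

# Hypothetical example: one of two confidential attributes is publicly accessible.
print(classified_attribute_exposure({
    "Account": [("balance", True, False), ("pin", True, True), ("id", False, True)],
}))
```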
Abstract:
Refactoring is a common approach to producing better quality software. Its impact on many software quality properties, including reusability, maintainability and performance, has been studied and measured extensively. However, its impact on the information security of programs has received relatively little attention. In this work, we assess the impact of a number of the most common code-level refactoring rules on data security, using security metrics that are capable of measuring security from the viewpoint of potential information flow. The metrics are calculated for a given Java program using a static analysis tool we have developed to automatically analyse compiled Java bytecode. We ran our Java code analyser on various programs which were refactored according to each rule. New values of the metrics for the refactored programs then confirmed that the code changes had a measurable effect on information security.
Curbing resource consumption using team-based feedback: paper printing in a longitudinal case study
Abstract:
This paper details a team-based feedback approach for reducing resource consumption, using paper printing within office environments as a case study. The approach communicates the print usage of each participant's team rather than the participant's individual print usage. Feedback is provided weekly via email and contains normative information, along with eco-metrics and team-based comparative statistics. The approach was empirically evaluated to study the effectiveness of the feedback method. The experiment comprised 16 people belonging to 4 teams, with data on their print usage gathered over 58 weeks and the first 30-35 weeks used as a baseline. The study showed a significant reduction in individual printing, averaging 28%. The experiment confirms the underlying hypothesis that participants are persuaded to reduce their print usage in order to improve the overall printing behaviour of their teams. The research provides clear pathways for future work to qualitatively investigate our findings.
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
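A minimal sketch of the kind of mapping and comparison the model supports, assuming Bloom's Taxonomy as the mastery scale (all names and structures below are illustrative, not the web-based implementation):

```python
from dataclasses import dataclass

BLOOM = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

@dataclass
class ExamItemMapping:
    """One exam question mapped to a syllabus topic at an intended Bloom level,
    with a confidence score for the mapping itself (names are hypothetical)."""
    topic: str
    intended_level: str       # minimal (pass-level) expectation
    aspirational_level: str   # level expected of top-performing students
    mapping_confidence: float # 0..1 reliability of this mapping

def gap(intended: str, demonstrated: str) -> int:
    """Difference between demonstrated and expected mastery on the Bloom scale;
    negative values flag topics where performance fell short of the specification."""
    return BLOOM.index(demonstrated) - BLOOM.index(intended)

item = ExamItemMapping(topic="Recursion", intended_level="Apply",
                       aspirational_level="Analyse", mapping_confidence=0.8)
print(gap(item.intended_level, "Understand"))  # -1: one Bloom level below expectation
```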
Abstract:
Process mining encompasses the research area concerned with knowledge discovery from information system event logs. Within the process mining research area, two prominent tasks can be discerned. First, process discovery deals with the automatic construction of a process model from an event log. Secondly, conformance checking focuses on assessing the quality of a discovered or designed process model with respect to the actual behaviour captured in event logs. To this end, multiple techniques and metrics have been developed and described in the literature. However, the process mining domain still lacks a comprehensive framework for assessing the goodness of a process model from a quantitative perspective. In this study, we describe the architecture of an extensible framework within ProM, allowing for the consistent, comparative and repeatable calculation of conformance metrics. Such a framework is of great value for the development and assessment of both process discovery and conformance checking techniques.
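As a hedged illustration of what a conformance metric can look like (far simpler than the token-replay or alignment-based metrics such a framework would typically host), a trace-level fitness measure might be sketched as:

```python
def trace_fitness(log, model_traces):
    """Fraction of log traces that the model can reproduce exactly (illustration only).

    `log` is a list of traces, each a tuple of activity labels; `model_traces` is the
    set of traces the model allows (enumerable only for small models)."""
    if not log:
        return 1.0
    fitting = sum(1 for trace in log if trace in model_traces)
    return fitting / len(log)

# Hypothetical event log and model behaviour.
log = [("a", "b", "c"), ("a", "c", "b"), ("a", "b", "c")]
model = {("a", "b", "c"), ("a", "b", "b", "c")}
print(trace_fitness(log, model))  # 2 of 3 traces fit -> 0.667
```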
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This can pose serious difficulties to users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study is specifically focused on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in experiments on the automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. Such a framework is important in CLLD evaluation because it helps benchmark the performance of various CLLD systems and identify good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
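A hedged sketch of the anchor-probability idea behind contribution 3 (the document structure below is an assumption for illustration; the thesis mines Wikipedia's actual link structure):

```python
from collections import Counter

def anchor_probabilities(documents):
    """Mine an existing link structure for anchor probabilities (illustrative sketch only).

    Each document is assumed to be a dict with plain 'text' and a list of 'anchors'
    (phrases that editors actually linked). A phrase's anchor probability is estimated
    as the number of times it occurs as a link anchor divided by the number of times it
    occurs in the text at all; high-probability phrases become candidate anchors for
    cross-lingual links."""
    anchor_counts = Counter()
    for doc in documents:
        anchor_counts.update(doc["anchors"])

    text_counts = Counter()
    for doc in documents:
        for phrase in anchor_counts:
            text_counts[phrase] += doc["text"].count(phrase)

    return {p: anchor_counts[p] / text_counts[p]
            for p in anchor_counts if text_counts[p] > 0}
```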