811 results for Data-driven analysis
Abstract:
This paper presents a critical review of past research on work-related driving in light vehicle fleets (e.g., vehicles < 4.5 tonnes) and an intervention framework that provides future direction for practitioners and researchers. Although work-related driving crashes have become the most common cause of death, injury, and absence from work in Australia and overseas, little research progress has been made in establishing effective strategies to improve safety outcomes. In particular, the majority of past research has been data-driven, and limited attention has therefore been given to theoretical development in establishing the behavioural mechanisms underlying driving behaviour. This paper argues that, to move the field of work-related driving safety forward, practitioners and researchers need to gain a better understanding of the individual and organisational factors influencing safety by adopting relevant theoretical frameworks, which in turn will inform the development of specifically targeted, theory-driven interventions. The paper presents an intervention framework that is based on relevant theoretical frameworks and sound methodological design, incorporating interventions that can be directed at the appropriate level, individual and driving target group.
Abstract:
Seventy-six librarians participated in a series of focus groups in support of research exploring the skills, knowledge and attributes required by the contemporary library and information professional in a world of ever-changing technology. The project was funded by the Australian Learning and Teaching Council. Text data mining analysis revealed three main thematic clusters (libraries, people, jobs) and one minor thematic cluster (community). Library 2.0 was broadly viewed by participants as being about change, whilst Librarian 2.0 was perceived not as a new creation but simply as good librarian practice. Participants expressed the general belief that personality traits, not just qualifications, would be critical to being a successful librarian or information worker in the future.
Abstract:
This research discusses some of the issues encountered while developing a set of WGEN parameters for Chile and offers advice for others interested in developing WGEN parameters for arid climates. The WGEN program is a commonly used and valuable research tool; however, it has specific limitations in arid climates that need careful consideration. These limitations are analysed in the context of generating a set of WGEN parameters for Chile. Fourteen to 26 years of precipitation data are used to calculate precipitation parameters for 18 locations in Chile, and 3–8 years of temperature and solar radiation data are analysed to generate parameters for seven of these locations. Results indicate that weather generation parameters in arid regions are sensitive to erroneous or missing precipitation data. The research shows that the WGEN-estimated gamma distribution shape parameter (α) for daily precipitation in arid zones tends to cluster around discrete values of 0 or 1, masking the high sensitivity of these parameters to additional data. Rather than focusing on the length of a data record in years when assessing its adequacy for estimating precipitation parameters, researchers should focus on the number of wet days in dry months in the data set. Analysis of the WGEN routines for the estimation of temperature and solar radiation parameters indicates that errors can occur when individual 'months' have fewer than two wet days in the data set. Recommendations are provided to improve methods for estimating WGEN parameters in arid climates.
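To illustrate the kind of gamma-distribution fitting WGEN performs when estimating its shape parameter for daily precipitation, the sketch below fits a gamma distribution to simulated wet-day amounts and shows how unstable the estimate becomes when a dry month contributes only a handful of wet days. The data, the assumed parameter values and the use of SciPy are illustrative only and are not taken from the WGEN code.

# Illustrative sketch (not the WGEN code): fit a gamma distribution to
# wet-day precipitation amounts, as WGEN does when estimating its shape
# parameter (alpha). With very few wet days the estimate is highly unstable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical "dry month": only a handful of wet days per year.
true_alpha, true_scale = 0.8, 6.0          # assumed values for illustration
n_years, wet_days_per_month = 20, 3
wet_amounts = stats.gamma.rvs(true_alpha, scale=true_scale,
                              size=n_years * wet_days_per_month,
                              random_state=rng)

# Maximum-likelihood fit with the location fixed at zero (rainfall >= 0).
alpha_hat, _, scale_hat = stats.gamma.fit(wet_amounts, floc=0)
print(f"alpha estimated from {wet_amounts.size} wet days: {alpha_hat:.2f}")

# Sensitivity check: refit on bootstrap resamples to see how much alpha moves
# when the handful of wet days changes - the instability the abstract notes.
boot = [stats.gamma.fit(rng.choice(wet_amounts, wet_amounts.size), floc=0)[0]
        for _ in range(200)]
print(f"bootstrap spread of alpha: {np.percentile(boot, 5):.2f} "
      f"to {np.percentile(boot, 95):.2f}")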
Abstract:
This thesis describes a discrete component of a larger mixed-method (survey and interview) study that explored the health-promotion and risk-reduction practices of younger premenopausal survivors of ovarian, breast and haematological cancers. This thesis outlines my distinct contribution to the larger study, which was to: (1) Produce a literature review that thoroughly explored all longer-term breast cancer treatment outcomes, and which outlined the health risks to survivors associated with these; (2) Describe and analyse the health-promotion and risk-reduction behaviours of nine younger female survivors of breast cancer as articulated in the qualitative interview dataset; and (3) Test the explanatory power of the Precede-Proceed theoretical framework underpinning the study in relation to the qualitative data from the breast cancer cohort. The thesis reveals that breast cancer survivors experienced many adverse outcomes as a result of treatment. While they generally engaged in healthy lifestyle practices, a lack of knowledge about many recommended health behaviours emerged throughout the interviews. The participants also described significant internal and external pressures to behave in certain ways because of the social norms surrounding the disease. This thesis also reports that the Precede-Proceed model is a generally robust approach to data collection, analysis and interpretation in the context of breast cancer survivorship. It provided plausible explanations for much of the data in this study. However, profound sociological and psychological implications arose during the analysis that were not effectively captured or explained by the theories underpinning the model. A sociological filter—such as Turner’s explanation of the meaning of the body and embodiment in the social sphere (Turner, 2008)—and the psychological concerns teased out in Mishel’s (1990) Uncertainty in Illness Theory, provided a useful dimension to the findings generated through the Precede-Proceed model. The thesis concludes with several recommendations for future research, clinical practice and education in this context.
Abstract:
This work proposes to improve spoken term detection (STD) accuracy by optimising the Figure of Merit (FOM). In this article, the index takes the form of a phonetic posterior-feature matrix. Accuracy is improved by formulating STD as a discriminative training problem and directly optimising the FOM, through its use as an objective function to train a transformation of the index. The outcome of indexing is then a matrix of enhanced posterior-features that are directly tailored for the STD task. The technique is shown to improve the FOM by up to 13% on held-out data. Additional analysis explores the effect of the technique on phone recognition accuracy, examines the actual values of the learned transform, and demonstrates that using an extended training data set results in further improvement in the FOM.
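For readers unfamiliar with the objective being optimised, the following simplified Python sketch computes a Figure of Merit for a single search term: the detection rate averaged over operating points of 1 to 10 false alarms per hour. The function name, the toy detections and the simplifications (no interpolation between operating points, a single term) are illustrative assumptions, not the paper's implementation.

# Rough sketch of a spoken term detection Figure of Merit (FOM): the average
# detection rate as the false-alarm rate is swept from 1 to 10 false alarms
# per hour. Names and data are illustrative, not from the paper.
import numpy as np

def figure_of_merit(scores, is_hit, n_true, hours):
    """scores: detector confidences; is_hit: True for correct detections;
    n_true: number of reference occurrences of the term; hours: audio hours."""
    order = np.argsort(scores)[::-1]              # sweep threshold high -> low
    hits = np.cumsum(np.asarray(is_hit)[order])
    false_alarms = np.arange(1, len(order) + 1) - hits
    fa_per_hour = false_alarms / hours
    # Detection rate at each of the 10 operating points (1..10 FA/hr).
    det_rates = []
    for k in range(1, 11):
        idx = np.searchsorted(fa_per_hour, k, side="right") - 1
        det_rates.append(hits[idx] / n_true if idx >= 0 else 0.0)
    return float(np.mean(det_rates))

# Toy usage with made-up detections for a single search term.
scores = [0.95, 0.91, 0.80, 0.74, 0.60, 0.55, 0.40]
is_hit = [True, True, False, True, False, True, False]
print(figure_of_merit(scores, is_hit, n_true=5, hours=1.0))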
Abstract:
This Preliminary Report has been prepared by researchers at The Australian Expert Group in Industry Studies (AEGIS) for the Commonwealth Department of Industry, Science and Resources. It is intended to provide a preliminary 'product system map' of the building and construction industries which defines the system, identifies the major segments, describes key industry players and institutions, and provides the basis for exploring relationships, innovation and information flows within the industries. This Preliminary Report is the first of a series of five which will explore the building and construction product system in some depth. This first report does not present original research, although it does include some new interview data and analysis of a variety of written sources. Rather, it is a reformulation of existing statistical and analytical material from a product system-based perspective. It is intended to provide the basis for subsequent studies by putting what is already known into an alternative framework and allowing us to see it through a new lens.
Abstract:
This study determines whether the inclusion of low-cost airlines in a dataset of international and domestic airlines has an impact on the efficiency scores of so-called 'prestigious', purportedly 'efficient' airlines. This is because, while many airline studies concern efficiency, none has genuinely included a combination of international, domestic and budget airlines. The present study employs the nonparametric technique of data envelopment analysis (DEA) to investigate the technical efficiency of 53 airlines in 2006. The findings reveal that the majority of budget airlines are efficient relative to their more prestigious counterparts. Moreover, most airlines identified as inefficient are so largely because of the over-utilisation of non-flight assets.
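As a rough sketch of the DEA technique referred to here, the following Python code solves the input-oriented, constant-returns-to-scale (CCR) envelopment linear programme for each decision-making unit. The toy inputs and outputs are invented; the paper's actual variable set and 53-airline sample are not reproduced.

# Minimal sketch of input-oriented, constant-returns-to-scale DEA (the CCR
# envelopment model), solved as one linear programme per airline.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """X: (n_dmu, n_inputs) inputs; Y: (n_dmu, n_outputs) outputs.
    Returns the technical efficiency score (theta) for each DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimise theta
        # Inputs:  sum_j lam_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: -sum_j lam_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        bounds = [(None, None)] + [(0, None)] * n      # theta free, lambda >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: one input (e.g. non-flight assets), two outputs (e.g. RPK, revenue).
X = np.array([[3.0], [5.0], [4.0], [8.0]])
Y = np.array([[6.0, 4.0], [7.0, 6.0], [8.0, 5.0], [9.0, 6.5]])
print(dea_ccr_input(X, Y).round(3))   # efficient airlines score 1.0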
Abstract:
Between 2001 and 2005, the US airline industry faced financial turmoil. At the same time, the European airline industry entered a period of substantive deregulation. These combined events created opportunities for low-cost carriers to become more competitive in the market. To help assess airline performance in their aftermath, this paper provides new evidence of technical efficiency for 42 national and international airlines in 2006 using the data envelopment analysis (DEA) bootstrap approach first proposed by Simar and Wilson (J Econ, 136:31-64, 2007). In the first stage, technical efficiency scores are estimated using a bootstrap DEA model. In the second stage, a truncated regression is employed to quantify the economic drivers underlying measured technical efficiency. The results highlight the key role played by non-discretionary inputs in measures of airline technical efficiency.
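The second stage described above can be illustrated with a hedged sketch of a maximum-likelihood truncated regression of efficiency scores on environmental variables, broadly in the spirit of Simar and Wilson (2007). The variable names, starting values and toy data below are assumptions for illustration, not the authors' code.

# Hedged sketch of a second-stage truncated regression: (bias-corrected)
# efficiency scores regressed on environmental variables by maximum
# likelihood, with errors truncated so the scores stay above a lower bound.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def truncated_regression(y, Z, lower=1.0):
    """y: scores with y >= lower; Z: (n, k) environmental variables
    (including an intercept column). Fits y = Z @ beta + eps with
    eps ~ N(0, sigma^2), left-truncated so that y stays above `lower`."""
    n, k = Z.shape

    def neg_loglik(params):
        beta, log_sigma = params[:k], params[-1]
        sigma = np.exp(log_sigma)
        mu = Z @ beta
        ll = (stats.norm.logpdf(y, loc=mu, scale=sigma)
              - stats.norm.logsf(lower, loc=mu, scale=sigma))
        return -np.sum(ll)

    start = np.r_[np.linalg.lstsq(Z, y, rcond=None)[0], 0.0]   # OLS start
    res = minimize(neg_loglik, start, method="Nelder-Mead")
    return res.x[:k], np.exp(res.x[-1])

# Toy example: intercept plus two invented environmental variables.
rng = np.random.default_rng(0)
n = 200
Z = np.column_stack([np.ones(n), rng.normal(size=n), rng.uniform(size=n)])
y = np.maximum(Z @ np.array([1.2, 0.3, -0.2]) + rng.normal(0, 0.2, n), 1.0001)
beta_hat, sigma_hat = truncated_regression(y, Z)
print(beta_hat.round(2), round(sigma_hat, 2))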
Abstract:
The motivation for the study stems from the results reported in the Excellence in Research for Australia (ERA) 2010 report. The report showed that only 12 universities performed research at or above international standards, of which the Group of Eight (G8) universities filled the top eight spots. While university performance was based on the number of research outputs, total research income and other quantitative indicators, efficiency or productivity was not considered. The objectives of this paper are twofold: first, to review the research performance of 37 Australian universities using the data envelopment analysis (DEA) bootstrap approach of Simar and Wilson (2007); and second, to identify the drivers of productivity by regressing the efficiency scores against a set of environmental variables.
Abstract:
As civil infrastructures such as bridges age, there is a concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for structural health monitoring (SHM) of bridges. Acoustic emission is one such technique with the potential to predict failure. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequently analysing the recorded signals, which convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and joints with bolted connections, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active cracks, thus helping to prioritise maintenance work by focusing on active rather than dormant cracks. Despite being a promising tool, some challenges still stand in the way of successful application of the AE technique. One is the large amount of data generated during testing; hence effective data analysis and management are necessary, especially for long-term monitoring. Complications also arise because a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of the damage level through appropriate analysis of the data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods; they are discussed and applied to the analysis of laboratory experimental data in this paper.
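As a pointer to what the b-value analysis mentioned above involves, the sketch below fits a Gutenberg-Richter style relation, log10 N(>=A) = a - b(A/20), to simulated AE hit amplitudes in dB and returns the b-value; a falling b-value is commonly read as a sign of macro-crack growth. The amplitude data and threshold choices are illustrative only, not experimental data from the paper.

# Illustrative AE b-value estimate from hit amplitudes (dB) via a
# least-squares fit of log10(cumulative count) against amplitude/20.
import numpy as np

def b_value(amplitudes_db, step=1.0):
    """Least-squares estimate of the AE b-value from hit amplitudes in dB."""
    amps = np.asarray(amplitudes_db, dtype=float)
    thresholds = np.arange(amps.min(), amps.max(), step)
    # Cumulative count of hits at or above each amplitude threshold.
    counts = np.array([(amps >= t).sum() for t in thresholds])
    keep = counts > 0
    slope, intercept = np.polyfit(thresholds[keep] / 20.0,
                                  np.log10(counts[keep]), 1)
    return -slope          # b-value is the negative of the fitted slope

# Toy data: exponentially distributed amplitudes above a 40 dB threshold.
rng = np.random.default_rng(1)
amps = 40.0 + rng.exponential(scale=8.0, size=2000)
print(round(b_value(amps), 2))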
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
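By way of illustration only, a design-level metric of the general kind described here might quantify how much classified (high-security) data a class exposes; the small Python sketch below computes the proportion of classified attributes that are publicly accessible. The metric, the class model and all names are invented for illustration and are not the metrics defined in the thesis.

# Hypothetical design-level security metric: share of classified attributes
# that are directly (publicly) accessible outside their class.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str
    classified: bool      # carries high-security data
    public: bool          # directly accessible outside the class

@dataclass
class ClassDesign:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

def classified_exposure(design: ClassDesign) -> float:
    """0.0 = every classified attribute is encapsulated; 1.0 = all exposed."""
    classified = [a for a in design.attributes if a.classified]
    if not classified:
        return 0.0
    return sum(a.public for a in classified) / len(classified)

account = ClassDesign("Account", [
    Attribute("balance", classified=True, public=False),
    Attribute("pin", classified=True, public=True),     # a design defect
    Attribute("label", classified=False, public=True),
])
print(classified_exposure(account))   # 0.5: half the classified data is exposed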
Abstract:
This note examines the productive efficiency of 62 starting guards during the 2011/12 National Basketball Association (NBA) season. This period coincides with the phenomenal and largely unanticipated performance of the New York Knicks' starting point guard Jeremy Lin and the attendant public and media hype known as Linsanity. We employ a data envelopment analysis (DEA) approach that allows for an undesirable output, here turnovers per game, alongside the desirable outputs of points, rebounds, assists, steals, and blocks per game and an input of minutes per game. The results indicate that, depending upon the specification, between 29 and 42 percent of NBA guards are fully efficient, including Jeremy Lin, with mean inefficiency of between 3.7 and 19.2 percent. However, while Jeremy Lin is technically efficient, he seldom serves as a benchmark for inefficient players, at least when compared with established players such as Chris Paul and Dwyane Wade. This suggests the uniqueness of Jeremy Lin's productive solution and may explain why his style of play, encompassing individual brilliance, unselfish play, and team leadership, holds such broad public appeal.
Abstract:
This presentation deals with the transformations that have occurred in news journalism worldwide in the early 21st century. I will argue that they have been the most significant changes to the profession in 100 years, and that the challenges facing the news media industry in responding to them are substantial, as are those facing journalism education. I will develop this argument in relation to the crisis of the newspaper business model, and why social media, blogging and citizen journalism have not filled the gap left by the withdrawal of resources from traditional journalism. I will also draw upon WikiLeaks as a case study in debates about computational and data-driven journalism, and whether large-scale "leaks" of electronic documents may be the future of investigative journalism.
Abstract:
Cancer poses an undeniable burden on the health and wellbeing of the Australian community. According to a recent report commissioned by the Australian Institute of Health and Welfare (AIHW, 2010), one in every two Australians on average will be diagnosed with cancer by the age of 85, and cancer was the second leading cause of death in 2007, preceded only by cardiovascular disease. Despite modest decreases in standardised combined cancer mortality over the past few decades, in part due to increased funding and access to screening programs, cancer remains a significant economic burden. In 2010, all cancers accounted for an estimated 19% of the country's total burden of disease, equating to approximately $3.8 billion in direct health system costs (Cancer Council Australia, 2011). Furthermore, there remain established socio-economic and other demographic inequalities in cancer incidence and survival, for example by Indigenous status and rurality. Therefore, in the interests of the nation's health and economic management, there is an immediate need to devise data-driven strategies to not only understand the socio-economic drivers of cancer but also facilitate the implementation of cost-effective resource allocation for cancer management...