811 results for Data-driven analysis
Abstract:
The design of hydraulic turbines often has to contend with hydraulic instability. It is well known that Francis and Kaplan turbines exhibit hydraulic instability within their design power range. Even though modern CFD tools can help define these dangerous operating conditions and optimize runner design, hydraulic instabilities may still arise unexpectedly during the turbine's life and should be detected promptly to ensure a long operating life. In a previous paper, the authors considered the phenomenon of the helical vortex rope, which occurs at low flow rates when a swirling flow occupies a large portion of the draft tube's conical inlet. Under this condition, a strong helical vortex rope appears. The vortex rope exerts mechanical effects on the runner, the whole turbine and the draft tube, which may eventually cause severe damage to the turbine unit and whose most evident symptoms are vibrations. The authors have already shown, through an experimental test campaign performed during the commissioning of a 23 MW Kaplan hydraulic turbine unit, that vibration analysis is suitable for detecting vortex rope onset. In this paper, the authors propose a data-driven approach to detect vortex rope onset at different power loads, based on analysis of the vibration signals in the order domain and introducing the so-called "residual order spectrogram", i.e. an order-rotation representation of the vibration signal. Several experimental test runs are presented, and the possibility of detecting instability onset, especially in real time, is discussed.
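The abstract does not give the construction of the residual order spectrogram; as a rough illustration of the order-domain processing it builds on, the sketch below resamples a vibration signal onto a uniform shaft-angle grid (order tracking) and then subtracts the order content that is steady across rotations. The function name, the median-based residual and all parameters are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import stft

def residual_order_spectrogram(vib, t, shaft_angle, samples_per_rev=64):
    """Hypothetical sketch: order-track a vibration signal, then remove
    the order content that is constant across rotations, leaving
    transients such as an emerging vortex-rope signature.

    vib: vibration samples at times t; shaft_angle: cumulative shaft
    angle (rad) at times t, e.g. integrated from a tachometer, from 0.
    """
    # Resample onto a uniform angle grid (classical order tracking)
    n_angle = int(shaft_angle[-1] / (2 * np.pi) * samples_per_rev)
    angle_grid = np.linspace(0.0, shaft_angle[-1], n_angle)
    t_of_angle = interp1d(shaft_angle, t)(angle_grid)
    vib_angle = interp1d(t, vib)(t_of_angle)

    # STFT in the angle domain; with fs in samples/rev the frequency
    # axis is dimensionless shaft order, the time axis is revolutions
    order, rev, S = stft(vib_angle, fs=samples_per_rev,
                         nperseg=8 * samples_per_rev)
    mag = np.abs(S)

    # "Residual": subtract the per-order median over rotations
    residual = mag - np.median(mag, axis=1, keepdims=True)
    return order, rev, residual
```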
Abstract:
There is currently a wide range of research into the recent introduction of student response systems in higher education and tertiary settings (Banks 2006; Kay and Le Sange 2009; Beatty and Gerace 2009; Lantz 2010; Sprague and Dahl 2009). However, most of this pedagogical literature offers 'how to' approaches regarding the use of 'clickers', keypads, and similar response technologies. There are currently no systematic reviews of the effectiveness of 'GoSoapBox', a more recent and increasingly popular student response system, in enhancing critical thinking and achieving sustained learning outcomes. With rapid developments in teaching and learning technologies across all undergraduate disciplines, there is a need for comprehensive, evidence-based advice on these types of technologies, their uses, and their overall efficacy. This paper addresses this gap in knowledge. Our teaching team, in an undergraduate Sociology and Public Health unit at the Queensland University of Technology (QUT), introduced GoSoapBox as a mechanism for discussing controversial topics such as sexuality, gender, economics, religion, and politics during lectures, and for taking opinion polls on social and cultural issues affecting human health. We also used this new teaching technology to allow students to interact with each other during class, on both social and academic topics, and to generate discussions and debates during lectures. The paper reports on a data-driven study into how this interactive online tool worked to improve engagement and the quality of academic work produced by students. The paper first covers the recent literature reviewing student response systems in tertiary settings. Second, it outlines the theoretical framework used to generate this pedagogical research. In keeping with the social and collaborative features of Web 2.0 technologies, Bandura's Social Learning Theory (SLT) is applied here to investigate the effectiveness of GoSoapBox as an online tool for improving learning experiences and the quality of academic output by students. Bandura has emphasised the Internet as a tool for 'self-controlled learning' (Bandura 2001), as it provides the education sector with an opportunity to reconceptualise the relationship between learning and thinking (Glassman & Kang 2011). Third, we describe the methods used to implement GoSoapBox in our lectures and tutorials, which aspects of the technology we drew on for learning purposes, and the methods for obtaining feedback from students about the effectiveness or otherwise of this tool. Fourth, we report findings from an examination of all student/staff activity on GoSoapBox, as well as reports from students about its benefits and limitations as a learning aid. We then present a theoretical model, produced via an iterative analytical process between SLT and our data analysis, for use by academics and teachers across the undergraduate curriculum. The model has implications for all teachers considering the use of student response systems to improve the learning experiences of their students. Finally, we consider some of the negative aspects of GoSoapBox as a learning aid.
Abstract:
Technoeconomic analysis of renewable aviation fuels has not been widely considered, despite the increasing global attention the field has received. We present three process models for the production of aviation fuel from microalgae, Pongamia pinnata, and sugarcane molasses. The models and assumptions have been deposited on a wiki (http://qsafi.aibn.uq.edu.au) and are open and accessible to the community. Based on currently available long-term reputable technological data, this analysis indicates that biorefineries processing the microalgae, Pongamia seed, and sugarcane feedstocks would be competitive with crude oil at $1343, $374, and $301/bbl, respectively. Sensitivity analyses of the major economic drivers suggest technological and market developments that would bring the corresponding figures down to $385, $255, and $168/bbl. The dynamic nature of the freely accessible models will allow the community to track progress toward economic competitiveness of aviation fuels from these renewable feedstocks.
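As a toy illustration of the levelised-cost logic behind such breakeven figures (the deposited wiki models are far more detailed), the sketch below annualises capital cost and sweeps one driver; every number in it is hypothetical.

```python
def breakeven_price_per_bbl(capex, opex_per_yr, yield_bbl_per_yr,
                            lifetime_yr=20, discount_rate=0.10):
    """Breakeven selling price: levelised cost per barrel over
    the plant life (capital recovery factor annualises capex)."""
    crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_yr)
    return (capex * crf + opex_per_yr) / yield_bbl_per_yr

# One-at-a-time sensitivity over, e.g., feedstock yield
for yield_factor in (0.8, 1.0, 1.2):
    p = breakeven_price_per_bbl(capex=500e6, opex_per_yr=40e6,
                                yield_bbl_per_yr=1.0e6 * yield_factor)
    print(f"yield x{yield_factor}: ${p:,.0f}/bbl")
```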
Computation of ECG signal features using MCMC modelling in software and FPGA reconfigurable hardware
Abstract:
Computational optimisation of clinically important electrocardiogram (ECG) signal features, within a single heart beat, is undertaken using a Markov chain Monte Carlo (MCMC) method. A detailed, efficient, data-driven software implementation of an MCMC algorithm is presented. Software parallelisation is explored first, and it is shown that parallelisation is possible despite the large degree of inter-dependency among model parameters. An initial reconfigurable hardware approach is also explored, with a view to future real-time computation on a portable ECG device under continuous extended use.
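The abstract does not spell out the signal model; as a minimal sketch of MCMC feature estimation within a beat, the following Metropolis sampler fits a single Gaussian 'wave' (amplitude, location, width) to a synthetic segment. The model, the flat priors and the step sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def beat_model(t, amp, mu, sigma):
    """One Gaussian 'wave' as a stand-in for an ECG feature (e.g. T wave)."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def log_posterior(theta, t, y, noise_sd=0.05):
    amp, mu, sigma = theta
    if sigma <= 0:
        return -np.inf                     # reject non-physical widths
    resid = y - beat_model(t, amp, mu, sigma)
    return -0.5 * np.sum((resid / noise_sd) ** 2)   # flat priors assumed

def metropolis(t, y, theta0, n_iter=5000, step=0.01):
    theta = np.array(theta0, float)
    lp = log_posterior(theta, t, y)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_posterior(prop, t, y)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Synthetic beat segment and recovery of its parameters
t = np.linspace(0, 1, 200)
y = beat_model(t, 0.3, 0.55, 0.04) + 0.02 * rng.standard_normal(t.size)
chain = metropolis(t, y, theta0=(0.2, 0.5, 0.05))
print(chain[len(chain) // 2:].mean(axis=0))   # posterior means after burn-in
```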
Abstract:
Molecular biology is a scientific discipline that has changed fundamentally in character over the past decade, coming to rely on large-scale datasets, both public and locally generated, and on their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages, and rich computational environments such as R and MATLAB, present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe a configurable extension called QUT.Bio.Excel, a custom ribbon supporting a rich set of data sources, external tools and interactive processing within the spreadsheet, together with a range of problems that demonstrate its utility and its success in addressing the needs of students over their studies.
Abstract:
Metaphors are a common instrument of human cognition, activated when seeking to make sense of novel and abstract phenomena. In this article we assess some of the values and assumptions encoded in the framing of the term big data, drawing on the framework of conceptual metaphor. We first discuss the terms data and big data and the meanings historically attached to them by different usage communities and then proceed with a discourse analysis of Internet news items about big data. We conclude by characterizing two recurrent framings of the concept: as a natural force to be controlled and as a resource to be consumed.
Abstract:
This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to assist the Mahalanobis squared distance-based damage identification method in coping with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To this end, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as test objects, and are shown to need enhancement to consolidate their condition. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation, together with statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance-based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme well adapted to any type of input data with any (original) distributional condition.
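As a bare-bones sketch of the quantities involved (the controlled scheme's data-condition indices and sensitivity analysis are beyond a few lines), the following computes the Mahalanobis squared distance against a small baseline set and naively augments it with synthetic multivariate-normal draws; the control mechanism itself is not reproduced.

```python
import numpy as np

def msd(X_train, x):
    """Mahalanobis squared distance of observation x from a training set."""
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    diff = x - mean
    return diff @ np.linalg.solve(cov, diff)

def augment(X, n_extra, rng):
    """Naive Monte Carlo augmentation: synthetic observations drawn from
    the multivariate normal fitted to the measured features. The paper's
    scheme additionally *controls* generation via data-condition indices."""
    return rng.multivariate_normal(X.mean(axis=0),
                                   np.cov(X, rowvar=False), size=n_extra)

# With few baseline observations the covariance is poorly conditioned;
# augmentation stabilises the MSD threshold used for damage detection.
rng = np.random.default_rng(0)
X_baseline = rng.normal(size=(15, 8))        # 15 observations, 8 features
X_big = np.vstack([X_baseline, augment(X_baseline, 200, rng)])
x_test = rng.normal(size=8)
print(msd(X_baseline, x_test), msd(X_big, x_test))
```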
Abstract:
This article measures Japanese prefectures' productivity from 1991 to 2002, taking CO2 emissions into consideration, and examines the factors that affect productivity. We use data envelopment analysis and measure the Luenberger productivity indicator, incorporating CO2 emissions in the analysis. Our results show that productivity decreased during the period of investigation. According to the results of the generalized method of moments estimation, the operating rate, the share of energy-intensive industries and social capital significantly affect productivity.
Abstract:
The Japanese electricity industry has undergone regulatory reforms since the mid-1990s. This article measures productivity in Japan's steam power-generation sector and examines the effect of the reforms on the productivity of this industry over the period 1978-2003. We estimate the Luenberger productivity indicator, a generalization of the commonly used Malmquist productivity index, using a data envelopment analysis approach. Factors associated with productivity change are investigated through dynamic generalized method of moments (GMM) estimation on panel data. Our empirical analysis shows that the regulatory reforms have contributed to productivity growth in Japan's steam power-generation sector.
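For readers unfamiliar with the machinery shared by this study and the prefecture-level one above, the sketch below solves the constant-returns directional distance function LP that underlies the Luenberger indicator (the indicator itself is formed from differences of such distances across periods). Treating CO2 as an input to be contracted is a simplification we assume here, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, k, gx=None, gy=None):
    """CRS directional distance function for decision-making unit k.
    X: (m, n) inputs, Y: (s, n) outputs; columns are units. A bad output
    such as CO2 is approximated here as an input to be contracted."""
    m, n = X.shape
    gx = X[:, k] if gx is None else gx      # default direction: own mix
    gy = Y[:, k] if gy is None else gy
    # Variables: [beta, lambda_1 .. lambda_n]; maximise beta
    c = np.zeros(1 + n)
    c[0] = -1.0
    A_in = np.hstack([gx[:, None], X])      # sum(l*x) + beta*gx <= x_k
    A_out = np.hstack([gy[:, None], -Y])    # beta*gy - sum(l*y) <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[:, k], -Y[:, k]]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]   # beta: feasible proportional improvement

# Toy data: 2 inputs (capital, fuel as a CO2 proxy), 1 output, 4 plants
X = np.array([[2., 3., 4., 5.],
              [1., 2., 2., 4.]])
Y = np.array([[1., 2., 2., 3.]])
print([round(directional_distance(X, Y, k), 3) for k in range(4)])
```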
Abstract:
The Echology: Making Sense of Data initiative seeks to break new ground in arts practice by asking artists to innovate with respect to (a) the possible forms of data representation in public art and (b) the artist's role in engaging publics on environmental sustainability in new urban developments. Initiated by ANAT and Carbon Arts in 2011, Echology saw three artists selected by national competition in 2012 for Lend Lease sites across Australia. In 2013, commissioning of one of these works, the Mussel Choir by Natalie Jeremijenko, began in Melbourne's Victoria Harbour development. This emerging practice of data-driven, environmentally engaged public artwork presents multiple challenges to established systems of public arts production and management, while also offering new avenues for artists to forge new modes of collaboration. The experience of Echology, and in particular the Mussel Choir, is examined here to reveal opportunities for expanding this practice, through identification of the factors that lead to a resilient 'ecology of partnership' between stakeholders that include science and technology researchers, education providers, city administrators and urban developers.
Abstract:
This paper presents an online, unsupervised training algorithm enabling vision-based place recognition across a wide range of changing environmental conditions such as those caused by weather, seasons, and day-night cycles. The technique applies principal component analysis to distinguish between aspects of a location's appearance that are condition-dependent and those that are condition-invariant. Removing the dimensions associated with environmental conditions produces condition-invariant images that can be used by appearance-based place recognition methods. This approach has a unique benefit: it requires training images from only one type of environmental condition, unlike existing data-driven methods that require training images with labelled frame correspondences from two or more environmental conditions. The method is applied to two benchmark variable-condition datasets. Performance is equivalent or superior to the current state of the art despite the reduced training requirements, and the method is demonstrated to generalise to previously unseen locations.
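A minimal sketch of the PCA step described here, under our assumption that the leading principal components of a single-condition training set capture condition-dependent appearance: remove them and reconstruct. The paper's actual choice of dimensions and its matching pipeline may differ.

```python
import numpy as np

def condition_invariant(images, n_remove=2):
    """images: (N, H*W) flattened frames from ONE environmental condition.
    Drops the leading principal components, assumed (here) to encode
    condition-dependent appearance, and reconstructs from the rest."""
    mean = images.mean(axis=0)
    X = images - mean
    # PCA via SVD; rows of Vt are principal directions in pixel space
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[n_remove:]                  # retained, condition-invariant basis
    return X @ P.T @ P + mean

# Matching can then use standard appearance-based comparison (e.g.
# normalised difference) on the reconstructed images.
rng = np.random.default_rng(0)
imgs = rng.random((50, 32 * 32))       # 50 training frames, 32x32 pixels
print(condition_invariant(imgs).shape)
```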
Abstract:
With the increasing need to adapt to new environments, data-driven approaches have been developed to estimate terrain traversability by learning the rover's response to the terrain from experience. Multiple learning inputs are often used to adequately describe the various aspects of terrain traversability. In a complex learning framework, it can be difficult to identify the relevance of each learning input to the resulting estimate. This paper addresses the suitability of each learning input by systematically analyzing the impact of each input on the estimate. Sensitivity analysis (SA) methods provide a means to measure the contribution of each learning input to the variability of the estimate. Using a variance-based SA method, we characterize how the prediction changes as one or more of the inputs change, and we quantify the prediction uncertainty attributable to each input within a framework of dependent inputs. We propose an approach built on analysis of variance (ANOVA) decomposition to examine predictions made in a near-to-far learning framework based on multi-task GP regression. We demonstrate the approach by analyzing the impact of driving speed and terrain geometry on the prediction of the rover's attitude and chassis configuration in a Mars-analogue terrain using our prototype rover Mawson.
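The paper's variance-based SA handles dependent inputs via an ANOVA decomposition; the sketch below shows only the standard independent-input case, estimating first-order Sobol indices with a pick-freeze estimator. The toy 'rover response' function and uniform input ranges are our assumptions.

```python
import numpy as np

def first_order_sobol(model, d, n=20000, rng=np.random.default_rng(0)):
    """Pick-freeze (Saltelli) estimator of first-order Sobol indices,
    assuming independent inputs uniform on [0, 1)."""
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # freeze input i from B
        S.append(np.mean(yB * (model(ABi) - yA)) / var)
    return S

# Stand-in for the learned rover response: attitude vs. speed and slope
def rover_attitude(X):
    speed, slope = X[:, 0], X[:, 1]
    return np.sin(np.pi * slope) + 0.3 * speed * slope

print(first_order_sobol(rover_attitude, d=2))
```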
Abstract:
Background: Over half of the residents in long-term care have a diagnosis of dementia. Maintaining quality of life is important, as there is no cure for dementia. Quality of life may be used as a benchmark for caregiving, and can help to enhance respect for the person with dementia and to improve care provision. The purpose of this study was to describe quality of life as reported by people living with dementia in long-term care, in terms of the influencers of quality of life as well as the strategies needed to improve it. Methods: A descriptive exploratory approach was used. A subsample of twelve residents across two Australian states, drawn from a national quantitative study on quality of life, was interviewed. Data were analysed thematically from a realist perspective; the approach to the thematic analysis was inductive and data-driven. Results: Three themes emerged in relation to influencers of, and strategies related to, quality of life: (a) maintaining independence; (b) having something to do; and (c) the importance of social interaction. Conclusions: The findings highlight the importance of understanding individual resident needs and of considering the complexity of large group-living situations, particularly with regard to resident decision-making.
Abstract:
Objective: The aim of this systematic review and meta-analysis was to determine the overall effect of resistance training (RT) on measures of muscular strength in people with Parkinson's disease (PD). Methods: Controlled trials with a parallel-group design were identified from computerized literature searching and citation tracking performed up to August 2014. Two reviewers independently screened for eligibility and assessed the quality of the studies using the Cochrane risk of bias tool. For each study, mean differences (MD) or standardized mean differences (SMD) and 95% confidence intervals (CI) were calculated for continuous outcomes based on between-group comparisons using post-intervention data. Subgroup analysis was conducted based on differences in study design. Results: Nine studies met the inclusion criteria; all had a moderate to high risk of bias. Pooled data showed that knee extension, knee flexion and leg press strength were significantly greater in PD patients who undertook RT compared with control groups with or without interventions. The subgroups were: RT vs. control without intervention; RT vs. control with intervention; RT with another form of exercise vs. control without intervention; and RT with another form of exercise vs. control with intervention. Pooled subgroup analysis showed that RT combined with aerobic/balance/stretching exercise resulted in significantly greater knee extension, knee flexion and leg press strength compared with no intervention. Compared with treadmill or balance exercise, it resulted in greater knee flexion strength, but not greater knee extension or leg press strength. RT alone resulted in greater knee extension and flexion strength compared with stretching, but not in greater leg press strength compared with no intervention. Discussion: Overall, the current evidence suggests that exercise interventions that contain RT may be effective in improving muscular strength in people with PD compared with no exercise. However, depending on the muscle group and/or training dose, RT may not be superior to other exercise types. Interventions that combine RT with other exercise may be most effective. The findings should be interpreted with caution due to the relatively high risk of bias of most studies.
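As a pocket illustration of the effect-size arithmetic named here (SMD with 95% CI from post-intervention data), the following computes Hedges' g for one hypothetical comparison; the numbers are invented and the review's pooling model is not reproduced.

```python
import numpy as np

def smd_hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Hedges' g) with an approximate
    95% CI from post-intervention summary statistics."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                         # Cohen's d, pooled SD
    g = (1 - 3 / (4 * (n1 + n2) - 9)) * d      # small-sample correction
    se = np.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical knee-extension strength (Nm): RT group vs. control
print(smd_hedges_g(m1=120, sd1=25, n1=20, m2=105, sd2=22, n2=20))
```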
Abstract:
Successful project management depends upon forming and maintaining relationships between and among project team members and stakeholder groups. The nature of these relationships, and the patterns that they form, affect communication, collaboration and resource flows. Networks affect us directly, and we use them to influence people and processes. Social Network Analysis (SNA) can be an extremely valuable research tool for better understanding how critical social networks develop and influence work processes, particularly as projects become larger and more complex. This chapter introduces foundational network concepts, helps you determine whether SNA could help you answer your research questions, and explains how to design and implement a social network study. By the end of this chapter, the reader will be able to: understand foundational concepts about social networks; decide whether SNA is an appropriate research methodology for particular questions or problems; and design and implement a basic social network study.
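A minimal sketch of the kind of basic network measures such a study might start from, using the third-party networkx library on an invented project-team graph; the node names and the tie definition are purely illustrative.

```python
import networkx as nx

# Hypothetical project-team communication network: an edge means two
# members regularly exchange information
G = nx.Graph()
G.add_edges_from([
    ("PM", "Architect"), ("PM", "ClientRep"), ("PM", "SiteMgr"),
    ("Architect", "Engineer"), ("SiteMgr", "Foreman"),
    ("Engineer", "Foreman"), ("ClientRep", "Architect"),
])

# Degree and betweenness centrality flag likely communication brokers;
# density summarises the overall connectedness of the project network
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
print(nx.density(G))
```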