242 results for Filmic approach methods
Abstract:
Rolling-element bearing failures are the most frequent problems in rotating machinery; they can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection in such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. This method employs classification techniques to discriminate between defect examples. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a run-to-failure bearing test. Finally, the application of anomaly detection is compared with a popular method, the Support Vector Machine (SVM), to investigate the sensitivity and accuracy of this approach and its ability to detect anomalies at early stages.
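The abstract does not give the NGS formula or the algorithm details, so the sketch below illustrates only the kurtosis-feature part of such an approach: kurtosis is computed per vibration window, and windows are flagged when they exceed a threshold calibrated on a healthy baseline. All names, window sizes and the 3-sigma threshold are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: kurtosis-based anomaly flagging for bearing vibration data.
# The Non-Gaussianity Score (NGS) used in the paper is not reproduced here;
# only the kurtosis feature and a simple baseline threshold are shown.
import numpy as np
from scipy.stats import kurtosis

def window_kurtosis(signal, window_size):
    """Split a 1-D vibration signal into windows and return kurtosis per window."""
    n_windows = len(signal) // window_size
    windows = signal[: n_windows * window_size].reshape(n_windows, window_size)
    return kurtosis(windows, axis=1, fisher=True)

def fit_threshold(healthy_features, n_sigma=3.0):
    """Calibrate an anomaly threshold from features of a healthy-run baseline."""
    return healthy_features.mean() + n_sigma * healthy_features.std()

# Illustrative usage with synthetic data (a real run-to-failure record would be used instead).
rng = np.random.default_rng(0)
healthy = rng.normal(size=100_000)                                 # roughly Gaussian vibration
faulty = healthy.copy()
faulty[::500] += rng.normal(scale=8.0, size=faulty[::500].shape)   # impulsive defect bursts

threshold = fit_threshold(window_kurtosis(healthy, 1024))
flags = window_kurtosis(faulty, 1024) > threshold                  # True marks anomalous windows
print(f"{flags.mean():.0%} of windows flagged as anomalous")
```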
Abstract:
Background: Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. Methods: We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using, as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. Results: The choice of imputation method depends upon the application, and the best choice is not necessarily the most complex method; mean imputation was selected as the most accurate method in this application. Conclusions: Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease, with more confidence in the results to inform public policy decision-making.
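As a rough illustration of the cross-validation idea, the sketch below masks a fraction of observed survey values, imputes them, and scores each method by RMSE. scikit-learn's SimpleImputer and IterativeImputer stand in for mean and multivariate-normal imputation respectively; the conditional autoregressive (spatial) imputation from the study is not reproduced, and all data and settings are illustrative.

```python
# Minimal sketch of cross-validation for choosing an imputation method:
# mask known values, impute, and score by RMSE. IterativeImputer is a simple
# stand-in for the multivariate model; the CAR-prior spatial model is not shown.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

def imputation_rmse(imputer, X, mask_fraction=0.1, seed=0):
    """Hide a random fraction of observed entries, impute, and return the RMSE."""
    rng = np.random.default_rng(seed)
    X_masked = X.copy()
    observed = ~np.isnan(X)
    hide = observed & (rng.random(X.shape) < mask_fraction)
    X_masked[hide] = np.nan
    X_imputed = imputer.fit_transform(X_masked)
    return np.sqrt(np.mean((X_imputed[hide] - X[hide]) ** 2))

# Illustrative usage with synthetic correlated covariates (e.g. lifestyle risk factors per LGA).
rng = np.random.default_rng(1)
latent = rng.normal(size=(71, 1))
X = latent + 0.3 * rng.normal(size=(71, 4))    # 71 areas, 4 correlated covariates

for name, imp in [("mean", SimpleImputer(strategy="mean")),
                  ("multivariate", IterativeImputer(random_state=0))]:
    print(name, round(imputation_rmse(imp, X), 3))
```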
Abstract:
In the emergent field of creative practice higher degrees by research, first generation supervisors have developed new models of supervision for an unprecedented form of research that combines creative practice and written thesis. In a national research project, entitled 'Effective supervision of creative practice higher research degrees', we set out to capture and share early supervisors' insights, strategies and approaches to supporting their creative practice PhD students. From the insights we gained during the early interview process, we expanded our research methods in line with a distributed leadership model and developed a dialogic framework. This led us to unanticipated conclusions and unexpected recommendations. In this study, we primarily draw on philosopher and literary theorist Mikhail Bakhtin's dialogics to explain how giving precedence to the voices of supervisors not only facilitated the articulation of dispersed tacit knowledge, but also led to other discoveries. These include the nature of supervisors' resistance to prescribed models, policies and central academic development programmes; the importance of polyvocality and responsive dialogue in enabling continued innovation in the field; the benefits to supervisors of reflecting, discussing and sharing practices with colleagues; and the value of distributed leadership and dialogue to academic development and supervision capacity building in research education.
Abstract:
Within online learning communities, receiving timely and meaningful insights into the quality of learning activities is an important part of an effective educational experience. Commonly adopted methods – such as the Community of Inquiry framework – rely on manual coding of online discussion transcripts, which is a costly and time-consuming process. There are several efforts underway to enable the automated classification of online discussion messages using supervised machine learning, which would enable the real-time analysis of interactions occurring within online learning communities. This paper investigates the importance of incorporating features that utilise the structure of online discussions for the classification of "cognitive presence" – the central dimension of the Community of Inquiry framework, focusing on the quality of students' critical thinking within online learning communities. We implemented a Conditional Random Field classification solution that incorporates structural features of the discussions to increase classification performance over other implementations. Our approach leads to an improvement in classification accuracy of 5.8% over existing techniques when tested on the same dataset, with a precision and recall of 0.630 and 0.504 respectively.
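A minimal sketch of such a sequence-labelling setup is shown below using the sklearn-crfsuite package (an assumed tooling choice): each discussion thread is treated as one sequence and each message as one step, with structural features such as position in thread and reply depth. The features, labels and data are illustrative placeholders rather than the paper's feature set.

```python
# Minimal sketch of a linear-chain CRF over discussion threads, where each
# message is one step in the sequence. The structural features shown (position
# in thread, reply depth, word count) are illustrative, not the paper's feature set.
# Requires the sklearn-crfsuite package.
import sklearn_crfsuite
from sklearn_crfsuite import metrics

def message_features(thread, i):
    """Features for message i of a thread (a list of dicts with 'text' and 'depth')."""
    msg = thread[i]
    return {
        "bias": 1.0,
        "position_in_thread": i / max(len(thread) - 1, 1),   # structural: where in the thread
        "reply_depth": msg["depth"],                          # structural: depth in reply tree
        "word_count": len(msg["text"].split()),
        "is_first": i == 0,
        "is_last": i == len(thread) - 1,
    }

def thread_to_features(thread):
    return [message_features(thread, i) for i in range(len(thread))]

# Toy training data: one thread labelled with Community of Inquiry cognitive-presence phases.
threads = [[{"text": "How would you approach this problem?", "depth": 0},
            {"text": "Maybe we could compare two designs and test both.", "depth": 1},
            {"text": "After testing, the second design clearly works better.", "depth": 2}]]
labels = [["triggering", "exploration", "resolution"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([thread_to_features(t) for t in threads], labels)
pred = crf.predict([thread_to_features(t) for t in threads])
print(metrics.flat_accuracy_score(labels, pred))
```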
Abstract:
The richness of the iris texture and its variability across individuals make it a useful biometric trait for personal authentication. One of the key stages in classical iris recognition is the normalization process, where the annular iris region is mapped to a dimensionless pseudo-polar coordinate system. This process results in a rectangular structure that can be used to compensate for differences in scale and variations in pupil size. Most iris recognition methods in the literature adopt linear sampling in the radial and angular directions when performing iris normalization. In this paper, a biomechanical model of the iris is used to define a novel nonlinear normalization scheme that improves iris recognition accuracy under different degrees of pupil dilation. The proposed biomechanical model is used to predict the radial displacement of any point in the iris at a given dilation level, and this information is incorporated in the normalization process. Experimental results on the WVU pupil light reflex database (WVU-PLR) indicate the efficacy of the proposed technique, especially when matching iris images with large differences in pupil size.
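The sketch below illustrates the normalization step: the annular iris region is sampled into a rectangular grid, with a radial remapping function controlling whether sampling is linear or nonlinear. The radial_map function here is a hypothetical nonlinear curve standing in for the biomechanical displacement model; it is not the model proposed in the paper.

```python
# Minimal sketch of iris normalization (annulus -> rectangular "rubber sheet").
# radial_map() below is a hypothetical nonlinear remapping standing in for the
# biomechanical displacement model of the paper; linear sampling would use the identity.
import numpy as np

def radial_map(rho, dilation=0.5):
    """Hypothetical nonlinear radial resampling (0..1 -> 0..1); not the paper's model."""
    return rho ** (1.0 + 0.5 * (dilation - 0.5))

def normalize_iris(image, pupil_center, r_pupil, r_iris,
                   radial_res=64, angular_res=256, radial_map=radial_map):
    """Map the annular iris region to a radial_res x angular_res rectangle."""
    cx, cy = pupil_center
    theta = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    rho = radial_map(np.linspace(0.0, 1.0, radial_res))
    r = r_pupil + np.outer(rho, np.ones_like(theta)) * (r_iris - r_pupil)
    x = np.clip((cx + r * np.cos(theta)).round().astype(int), 0, image.shape[1] - 1)
    y = np.clip((cy + r * np.sin(theta)).round().astype(int), 0, image.shape[0] - 1)
    return image[y, x]   # nearest-neighbour sampling for brevity

# Illustrative usage on a synthetic image.
img = np.random.default_rng(0).random((240, 320))
sheet = normalize_iris(img, pupil_center=(160, 120), r_pupil=30, r_iris=90)
print(sheet.shape)   # (64, 256)
```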
Abstract:
This paper proposes and explores the Deep Customer Insight Innovation Framework in order to develop an understanding of how design can be integrated within existing innovation processes. The Deep Customer Insight Innovation Framework synthesises the work of Beckman and Barry (2007) as a theoretical foundation, and the framework is explored within a case study of Australian Airport Corporation seeking to drive airport innovations in operations and retail performance. The integration of a deep customer insight approach develops customer-centric and highly integrated solutions as a function of concentrated problem exploration and design-led idea generation. Businesses facing complex innovation challenges or seeking to make sense of future opportunities will be able to integrate design into existing innovation processes, anchoring the new approach between existing market research and business development activities. This paper contributes a framework and novel understanding of how design methods are integrated into existing innovation processes for operationalization within industry.
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a dataset from the Burdekin River, consisting of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between the concentration and the flow. However, for the Tully dataset, by incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or recessing), we substantially improved the model fit, and thus the certainty with which the load is estimated.
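A simplified sketch of such a rating-curve model is given below: log concentration is regressed on log flow plus two flow-derived predictors, a rising/falling hydrograph indicator and an exponentially discounted flow. The predictor definitions, the least-squares fit and the synthetic data are assumptions for illustration, not the paper's exact model or error structure.

```python
# Minimal sketch of a rating-curve concentration model with extra flow-derived
# predictors (hydrograph phase and discounted flow). The exact predictor
# definitions and error structure of the paper are not reproduced.
import numpy as np

def flow_predictors(q, discount=0.95):
    """Derive simple predictors from a flow series q (one value per time step)."""
    rising = np.r_[0.0, np.diff(q) > 0].astype(float)       # hydrograph phase (rise vs fall)
    discounted = np.zeros_like(q)
    for t in range(1, len(q)):                               # exponentially discounted past flow
        discounted[t] = discount * discounted[t - 1] + q[t - 1]
    return np.column_stack([np.ones_like(q), np.log(q), rising, np.log1p(discounted)])

def fit_rating_curve(q_sampled, c_sampled):
    """Least-squares fit of log(concentration) on the flow-derived predictors."""
    X = flow_predictors(q_sampled)
    beta, *_ = np.linalg.lstsq(X, np.log(c_sampled), rcond=None)
    return beta

def predict_concentration(beta, q):
    return np.exp(flow_predictors(q) @ beta)

# Illustrative usage with synthetic flow and concentration records.
rng = np.random.default_rng(0)
q = np.exp(rng.normal(3.0, 0.8, size=500))                    # synthetic flow series
c = np.exp(0.5 + 0.4 * np.log(q) + rng.normal(0, 0.2, 500))   # synthetic concentrations
beta = fit_rating_curve(q[::10], c[::10])                     # sparse concentration sampling
print(predict_concentration(beta, q)[:5])
```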
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
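The core induced-smoothing idea can be sketched as below: the discontinuous indicator in a Wilcoxon/Gehan-type rank estimating function is replaced by a normal CDF, and the smoothed estimating equation is solved numerically. This toy version ignores clustering and uses a fixed bandwidth, whereas the paper's method handles within-cluster correlation and derives the smoothing from the estimator's own variability.

```python
# Minimal sketch of the induced-smoothing idea for rank regression: the
# indicator in the Wilcoxon/Gehan-type estimating function is replaced by a
# normal CDF. This simplified version ignores clustering and uses a fixed
# bandwidth h, unlike the data-adaptive smoothing of the paper.
import numpy as np
from scipy.stats import norm
from scipy.optimize import root

def smoothed_estimating_function(beta, X, y, h):
    """U(beta) = (1/n^2) * sum_{i,j} (x_i - x_j) * Phi((e_i - e_j)/h), with e = y - X beta."""
    e = y - X @ beta
    de = e[:, None] - e[None, :]                  # e_i - e_j
    dX = X[:, None, :] - X[None, :, :]            # x_i - x_j
    w = norm.cdf(de / h)
    return (dX * w[:, :, None]).sum(axis=(0, 1)) / len(y) ** 2

# Illustrative usage with synthetic data (no intercept: pairwise differences remove it).
rng = np.random.default_rng(0)
n, p = 200, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.5, -0.7]) + rng.standard_t(df=3, size=n)    # heavy-tailed errors

sol = root(smoothed_estimating_function, x0=np.zeros(p), args=(X, y, 1.0 / np.sqrt(n)))
print(sol.x)   # rank-based slope estimates
```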
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
(iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors resulting from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using the concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe biases. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
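A bare-bones numeric illustration of these four steps is given below, with linear interpolation standing in for the time-series flow model and a simple two-parameter rating curve standing in for the full concentration model; all numbers are synthetic.

```python
# Minimal numeric illustration of the four-step load estimation procedure.
# Linear interpolation stands in for the time-series flow model of step (i),
# and a simple fitted power-law rating curve stands in for the concentration
# model of step (iii); the paper's full model has additional predictors.
import numpy as np

# Irregular gauged flow (m^3/s) and sparse concentration samples (mg/L).
t_flow = np.array([0.0, 1.0, 2.5, 4.0, 6.0, 9.0, 12.0])        # hours
q_flow = np.array([20.0, 35.0, 80.0, 150.0, 90.0, 40.0, 25.0])
t_conc = np.array([1.0, 4.0, 9.0])
c_conc = np.array([12.0, 40.0, 15.0])

# (i) flow on a regular 10-minute grid.
t_grid = np.arange(0.0, 12.0 + 1e-9, 10.0 / 60.0)
q_grid = np.interp(t_grid, t_flow, q_flow)

# (ii) flow at the concentration sampling times.
q_at_conc = np.interp(t_conc, t_flow, q_flow)

# (iii) fit a concentration model (here log C = b0 + b1 log Q) and predict on the grid.
b1, b0 = np.polyfit(np.log(q_at_conc), np.log(c_conc), deg=1)
c_grid = np.exp(b0 + b1 * np.log(q_grid))

# (iv) load = sum of (flow x concentration x interval length), with unit conversion.
dt_seconds = 10.0 * 60.0
load_kg = np.sum(q_grid * c_grid) * dt_seconds * 1e-3           # mg/L * m^3/s -> g/s; /1000 -> kg
print(f"estimated load: {load_kg:.1f} kg")
```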
Abstract:
Quasi-likelihood (QL) methods are often used to account for overdispersion in categorical data. This paper proposes a new way of constructing a QL function that stems from the conditional mean-variance relationship. Unlike traditional QL approaches to categorical data, this QL function is, in general, not a scaled version of the ordinary log-likelihood function. A simulation study is carried out to examine the performance of the proposed QL method. Fish mortality data from quantal response experiments are used for illustration.
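For context, a quasi-likelihood is built from a mean-variance relationship as Q(mu; y) = integral from y to mu of (y - t)/V(t) dt. The sketch below evaluates this integral numerically for an assumed overdispersed-proportion variance function; it illustrates the generic construction only, not the new QL function proposed in the paper.

```python
# Minimal sketch of how a quasi-likelihood is built from a mean-variance
# relationship: Q(mu; y) = integral from y to mu of (y - t)/V(t) dt, evaluated
# numerically. The variance function below (overdispersed binomial proportion,
# V(mu) = phi * mu * (1 - mu) / m) is an illustration, not the paper's construction.
import numpy as np
from scipy.integrate import quad

def quasi_loglik(y, mu, variance_fn):
    """Quasi-log-likelihood contribution of one observation y at mean mu."""
    value, _ = quad(lambda t: (y - t) / variance_fn(t), y, mu)
    return value

def make_variance(phi=2.0, m=20):
    """Variance function for an overdispersed proportion from m trials."""
    return lambda mu: phi * mu * (1.0 - mu) / m

# Illustrative usage: mortality proportions from quantal-response-style data.
y_obs = np.array([0.10, 0.35, 0.60, 0.85])                 # observed proportions
mu_hat = np.array([0.15, 0.30, 0.55, 0.80])                # fitted means from some model
V = make_variance(phi=2.0, m=20)
total_ql = sum(quasi_loglik(y, mu, V) for y, mu in zip(y_obs, mu_hat))
print(round(total_ql, 3))
```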
Abstract:
During the past few decades, the development of efficient methods to solve dynamic facility layout problems has received significant attention from practitioners and researchers. More specifically, meta-heuristic algorithms, especially genetic algorithms, have proven increasingly helpful for generating sub-optimal solutions to large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a mixed genetic algorithm–robust approach that can provide a single layout design valid across all periods. The present research aims to devise a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that is expected to generate a unique robust layout for all the manufacturing periods. The numerical outcomes of the proposed robust genetic algorithm indicate significant cost improvements compared to the conventional genetic algorithm methods and a selective number of other heuristic and meta-heuristic techniques.
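A stripped-down sketch of a permutation-encoded robust GA is shown below: each chromosome is a permutation of departments over layout slots, and fitness is the material-handling cost summed over all planning periods so that a single layout is evaluated against every period. The single-row distance metric, truncation selection and the operators are simplifications assumed for illustration, not the paper's customized algorithm.

```python
# Minimal sketch of a permutation-encoded genetic algorithm that searches for a
# single ("robust") layout used across all periods: fitness is material-handling
# cost summed over per-period flow matrices. Equal-area single-row distances and
# the operators below are simplifications, not the paper's customized algorithm.
import random

def layout_cost(perm, flow_by_period):
    """Total material-handling cost of one permutation over all periods."""
    pos = {dept: i for i, dept in enumerate(perm)}           # slot index = location
    return sum(flow[i][j] * abs(pos[i] - pos[j])
               for flow in flow_by_period
               for i in range(len(perm)) for j in range(len(perm)))

def order_crossover(p1, p2):
    """OX: copy a slice from parent 1, fill the rest in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [d for d in p2 if d not in child]
    for k in range(len(p1)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def swap_mutation(perm, rate=0.2):
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def robust_ga(flow_by_period, n_depts, pop_size=40, generations=200, seed=0):
    random.seed(seed)
    pop = [random.sample(range(n_depts), n_depts) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: layout_cost(p, flow_by_period))
        elite = pop[: pop_size // 2]                          # truncation selection
        children = [swap_mutation(order_crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda p: layout_cost(p, flow_by_period))

# Illustrative usage: 6 departments, 3 planning periods with random flows.
random.seed(1)
flows = [[[0 if i == j else random.randint(0, 9) for j in range(6)] for i in range(6)]
         for _ in range(3)]
best = robust_ga(flows, n_depts=6)
print(best, layout_cost(best, flows))
```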
Abstract:
This paper reports on the fourth stage of an evolving study to develop a systems model for embedding education for sustainability (EfS) into pre-service teacher education. The fourth stage trialled the extension of the model to a comprehensive state-wide systems approach involving representatives from all eight Queensland teacher education institutions and other key policy agencies and professional associations. Support for trialling the model included regular meetings among the participating representatives and an implementation guide. This paper describes the first three stages of developing and trialling the model before presenting the case study and action research methods employed, four key lessons learned from the project, and the implications of the major outcomes for teacher education policies and practices. The Queensland-wide multi-site case study revealed processes and strategies that can enable institutional change agents to engage productively in building capacity for embedding EfS at the individual, institutional and state levels in pre-service teacher education. Collectively, the project components provide a system-wide framework that offers strategies, examples, insights and resources that can serve as a model for other states and/or territories wishing to implement EfS in a systematic and coherent fashion.
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology and understanding and dealing with this is one of the biggest challenges in medicine. At the same time it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within a high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block through an in-depth investigation via the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
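The LHS-based construction of a population of models can be sketched as below: parameter sets are drawn with a Latin hypercube design, the model is simulated for each set, and only sets whose outputs fall within experimental ranges are kept. A toy two-parameter decay model stands in for the Beeler-Reuter model, and the paper's SMC algorithm is not shown.

```python
# Minimal sketch of building a population of models (POM) with Latin hypercube
# sampling: sample parameter sets, simulate a model output, and keep the sets
# whose outputs fall within "experimental" ranges. A toy two-parameter decay
# model stands in for the Beeler-Reuter cardiac model; the SMC algorithm of the
# paper is not shown.
import numpy as np
from scipy.stats import qmc

def toy_model(params, t=np.linspace(0.0, 5.0, 50)):
    """Toy biomarker curve: amplitude * exp(-rate * t); returns two summary outputs."""
    amplitude, rate = params
    trace = amplitude * np.exp(-rate * t)
    return np.array([trace.max(), np.trapz(trace, t)])        # peak and area under curve

def build_pom(bounds, output_ranges, n_samples=2000, seed=0):
    """Accept LHS-sampled parameter sets whose outputs lie inside all target ranges."""
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    unit = sampler.random(n_samples)
    lower, upper = np.array(bounds).T
    params = qmc.scale(unit, lower, upper)
    accepted = []
    for p in params:
        out = toy_model(p)
        if np.all((out >= output_ranges[:, 0]) & (out <= output_ranges[:, 1])):
            accepted.append(p)
    return np.array(accepted)

# Illustrative usage: parameter bounds and plausible "experimental" output ranges.
bounds = [(0.5, 2.0), (0.2, 1.5)]                              # amplitude, rate
output_ranges = np.array([[0.8, 1.5], [0.8, 3.0]])             # [peak, AUC] ranges
pom = build_pom(bounds, output_ranges)
print(len(pom), "calibrated parameter sets")
```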
Abstract:
This case study examines innovative experimentation with mobile and cloud-based technologies, utilising “Guerrilla Research Tactics” (GRT) as a means of covertly retrieving data from the urban fabric. Originally triggered by participatory action research (Kindon et al., 2008) and unobtrusive research methods (Kellehear, 1993), the potential of GRT lies in its innate ability to offer researchers an alternative, creative approach to data acquisition, whilst simultaneously allowing them to engage with the public, who are active co-creators of knowledge. Key characteristics are a political agenda, the unexpected and the unconventional, which allow for an interactive, unique and thought-provoking experience for both researcher and participant.
Abstract:
Red blood cells (RBCs) are the most common type of blood cell, and 99% of the blood cells are RBCs. During the circulation of blood in the cardiovascular network, RBCs squeeze through the tiny blood vessels (capillaries). They exhibit various types of motion and deformed shapes when flowing through these capillaries, whose diameters vary between 5 and 10 µm. RBCs occupy about 45% of the whole blood volume, and the interaction between RBCs directly influences their motion and deformation. However, most previous numerical studies have explored the motion and deformation of a single RBC, neglecting the interaction between RBCs. In this study, the motion and deformation of two 2D (two-dimensional) RBCs in capillaries are comprehensively explored using a coupled smoothed particle hydrodynamics (SPH) and discrete element method (DEM) model. In order to clearly model the interactions between RBCs, only two RBCs are considered in this study, even though blood with RBCs flows continuously through the blood vessels. A spring network based on the DEM is employed to model the viscoelastic membrane of the RBC, while the fluid inside and outside the RBC is modelled by SPH. The effects of the initial distance between the two RBCs, the membrane bending stiffness (Kb) of one RBC, and the undeformed diameter of one RBC on the motion and deformation of both RBCs in a uniform capillary are studied. Finally, the deformation behavior of two RBCs in a stenosed capillary is also examined. Simulation results reveal that the interaction between RBCs has a significant influence on their motion and deformation.
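The DEM spring-network part of such a model can be sketched as below: neighbouring membrane nodes are connected by elastic springs, and a simple discrete-curvature term approximates bending resistance. The SPH fluid and the fluid-membrane coupling are omitted, and all parameter values are illustrative rather than taken from the paper.

```python
# Minimal sketch of the spring-network (DEM) part of such a model: elastic
# spring forces between neighbouring membrane nodes plus a crude bending force,
# for a 2-D ring of nodes. The SPH fluid inside/outside the cell and the
# fluid-membrane coupling are not shown; parameters are illustrative.
import numpy as np

def membrane_forces(nodes, rest_length, ks=1.0e-2, kb=5.0e-4):
    """Spring + bending forces on each node of a closed 2-D membrane (shape (n, 2))."""
    n = len(nodes)
    forces = np.zeros_like(nodes)
    nxt = np.roll(np.arange(n), -1)
    # Elastic springs between neighbouring nodes.
    bond = nodes[nxt] - nodes
    length = np.linalg.norm(bond, axis=1, keepdims=True)
    spring = ks * (length - rest_length) * bond / length
    forces += spring                      # pull each node toward its next neighbour
    forces[nxt] -= spring                 # equal and opposite force on that neighbour
    # Simple bending resistance: penalise deviation of each node from the
    # midpoint of its two neighbours (a discrete-curvature approximation).
    prv = np.roll(np.arange(n), 1)
    forces += kb * (0.5 * (nodes[prv] + nodes[nxt]) - nodes)
    return forces

# Illustrative usage: a slightly flattened cell of 40 membrane nodes.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
nodes = np.column_stack([4.0e-6 * np.cos(theta), 2.0e-6 * np.sin(theta)])   # metres
rest = 2.0 * np.pi * 3.0e-6 / 40                                            # rest spring length
print(membrane_forces(nodes, rest_length=rest)[:3])
```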