Abstract:
Changing environments pose a serious problem to current robotic systems aiming at long-term operation under varying seasons or local weather conditions. This paper builds on our previous work, in which we proposed learning to predict the changes in an environment. Our key insight is that the occurring scene changes are in part systematic, repeatable and therefore predictable. The goal of our work is to support existing approaches to place recognition by learning how the visual appearance of an environment changes over time and by using this learned knowledge to predict its appearance under different environmental conditions. We describe the general idea of appearance change prediction (ACP) and investigate properties of our novel implementation based on vocabularies of superpixels (SP-ACP). Our previous work showed that the proposed approach significantly improves the performance of SeqSLAM and BRIEF-Gist for place recognition on a subset of the Nordland dataset under extremely different environmental conditions in summer and winter. This paper deepens the understanding of the proposed SP-ACP system and evaluates the influence of its parameters. We present the results of a large-scale experiment on the complete 10 h Nordland dataset and appearance change predictions between different combinations of seasons.
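As a hedged illustration of the vocabulary-translation idea behind SP-ACP (the word names and co-occurrence counts below are invented; the paper's superpixel descriptors and vocabulary learning are far richer), mapping each summer vocabulary word to its most frequent winter counterpart might be sketched as:

```python
# Hypothetical sketch: translate a "summer" superpixel vocabulary into its
# predicted "winter" appearance. All names and counts are illustrative only.

# Co-occurrence counts gathered from aligned summer/winter image pairs.
cooccurrence = {
    "summer_grass": {"winter_snow": 8, "winter_grass": 2},
    "summer_tree":  {"winter_bare_tree": 9, "winter_snow": 1},
}

def learn_dictionary(cooccurrence):
    """Map each summer word to its most frequently co-occurring winter word."""
    return {s: max(w, key=w.get) for s, w in cooccurrence.items()}

def predict_appearance(summer_words, dictionary):
    """Translate an image, represented as a list of vocabulary words,
    into its predicted appearance under the other condition."""
    return [dictionary.get(w, w) for w in summer_words]

dictionary = learn_dictionary(cooccurrence)
predicted = predict_appearance(["summer_grass", "summer_tree"], dictionary)
```

The translated image can then be fed to an unmodified place-recognition method such as SeqSLAM, which is the role ACP plays in the paper.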
Abstract:
Demand for automated gas metal arc welding (GMAW) is growing, and the need for intelligent systems to ensure the accuracy of the procedure is increasing accordingly. To date, weld pool geometry has been the most widely used factor in the quality assessment of intelligent welding systems. However, it has recently been found that the Mahalanobis Distance (MD) not only can be used for this purpose but is also more efficient. In the present paper, an Artificial Neural Network (ANN) has been used for prediction of the MD parameter, and the advantages and disadvantages of other methods are discussed. The Levenberg–Marquardt algorithm was found to be the most effective training algorithm for the GMAW process. It is known that the number of neurons plays an important role in optimal network design; in this work, using trial and error, 30 was found to be the optimal number of neurons. The model was investigated with different numbers of layers in a Multilayer Perceptron (MLP) architecture, and it was shown that, for the aim of this work, the optimal result is obtained with a single-hidden-layer MLP. The robustness of the system was evaluated by adding noise to the input data and studying the effect of the noise on the prediction capability of the network. The experiments for this study were conducted in an automated GMAW setup, integrated with a data acquisition system and prepared in a laboratory, for welding of steel plate 12 mm in thickness. The accuracy of the network was evaluated by the Root Mean Squared (RMS) error between the measured and the estimated values. The low error value (about 0.008) reflects the good accuracy of the model. The comparison of the results predicted by the ANN with the test data set also showed very good agreement, revealing the predictive power of the model. Therefore, the ANN model offered here for the GMA welding process can be used effectively for prediction purposes.
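Levenberg–Marquardt training itself is out of scope here, but the single-hidden-layer MLP shape and the reported RMS evaluation metric can be sketched in plain Python (weights, sizes and data below are placeholders, not the trained network from the paper):

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer MLP. The paper's optimum used
    30 hidden neurons; the dimensions here are placeholders."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def rms_error(measured, estimated):
    """Root Mean Squared error between measured and network-estimated MD
    values; the abstract reports a value of about 0.008."""
    n = len(measured)
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated)) / n)
```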
Abstract:
Objective To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design Systematic review. Data sources The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliography of all relevant articles was examined and associated articles were identified using a snowballing technique. Selection criteria For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods, used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalizability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. Conclusions The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years.
With advances in data mining techniques, increased capacity for analysis of large databases, and involvement of computer scientists in the injury prevention field, along with more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see a continued growth and advancement in knowledge of text mining in the injury field.
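To make the Bayesian text-classification idea concrete, here is a minimal multinomial naive Bayes sketch over invented injury narratives (the reviewed studies used far larger surveillance databases and more sophisticated Bayesian network models):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (token list, label) pairs.
    Returns class priors, per-class word counts, and the vocabulary."""
    priors = Counter(label for _, label in docs)
    word_counts = {lab: Counter() for lab in priors}
    vocab = set()
    for tokens, lab in docs:
        word_counts[lab].update(tokens)
        vocab.update(tokens)
    return priors, word_counts, vocab

def classify(tokens, priors, word_counts, vocab):
    """Pick the label maximising log P(label) + sum log P(token | label),
    with Laplace (add-one) smoothing for unseen words."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for lab, n_docs in priors.items():
        denom = sum(word_counts[lab].values()) + len(vocab)
        score = math.log(n_docs / total)
        score += sum(math.log((word_counts[lab][t] + 1) / denom) for t in tokens)
        if score > best_score:
            best, best_score = lab, score
    return best

# Toy injury narratives, invented for illustration only.
docs = [
    ("worker fell from ladder".split(), "fall"),
    ("slipped on wet floor and fell".split(), "fall"),
    ("hand caught in conveyor machine".split(), "machinery"),
    ("finger crushed by machine press".split(), "machinery"),
]
model = train_nb(docs)
predicted = classify("fell off a ladder".split(), *model)
```

This mirrors the "predict injury categories" use case; the scenario-extraction work in the review is a different, more structured task.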
Abstract:
Peggy Shaw’s RUFF (USA 2013) and Queensland Theatre Company’s collaboration with Queensland University of Technology, Total Dik! (Australia 2013), overtly and evocatively draw on an aestheticized use of the cinematic techniques and technologies of Chroma Key to reveal the tensions in their production and add layers to their performances. In doing so, they offer invaluable insight into where the filmic and theatrical approaches overlap. This paper draws on Eckersall, Grehan and Scheer’s New Media Dramaturgy (2014) to reposition the frame as a contribution to intermedial theatre and performance practices in light of increasing convergence between seemingly disparate discourses. In RUFF, the scenic environment replicates a chroma-key ‘studio’ which facilitates the reconstruction of memory displaced after a stroke. RUFF uses the screen and projections to recall crooners, lounge singers, movie stars, rock and roll bands, and an eclectic line of eccentric family members living inside Shaw. While the show pays tribute to those who have kept her company across decades of theatrical performance, the use of non-composited chroma-key technique as a theatrical device, and the work’s taciturn revelation of the production process during performance, play a central role in its exploration of the juxtaposition between its reconstructed form and content. In contrast, Total Dik! uses real-time green screen compositing during performance as a scenic device. Actors manipulate scale models, refocus cameras and generate scenes within scenes in the construction of the work’s examination of an isolated Dictator. The ‘studio’ is again replicated as a site for (re)construction, only in this case Total Dik! actively seeks to reveal the process of production as the performance plays out.
Building on RUFF and other works such as By the Way, Meet Vera Stark (2012) and Hotel Modern’s God’s Beard (2012), this work blends a convergence of mobile technologies, models, and green screen capture to explore aspects of transmedia storytelling in a theatrical environment (Jenkins, 2009, 2013). When a green screen is placed on stage, it reads at once as metaphor and as a challenge to the language of theatre. It becomes, or rather acts as, a ‘sign’ that alludes to the nature of the reconstructed, recomposited, manipulated and controlled. In RUFF and in Total Dik! it is also a place where, as a mode of production and subsequent reveal, it adds weight to performance. These works are informed by Auslander (1999) and Giesenkam (2007) and speak to and echo Lehmann’s Postdramatic Theatre (2006). This paper’s consideration of the integration of studio technique and live performance as a dynamic approach to multi-layered theatrical production develops our understanding of their combinatory use in a live performance environment.
Abstract:
The study sought to explore the initial impact of the ACT's implementation of a roadside oral fluid drug screening program. The results suggest that a number of individuals reported intentions to drug drive in the future. The classical deterrence theory variables of certainty of apprehension and severity and swiftness of sanctions were not predictive of intentions to drug drive in the future. In contrast, having avoided apprehension, and having known of others who have avoided apprehension, were predictive of intentions to drug drive in the future. Increasing perceptions of the certainty of apprehension, increased testing frequency, and increased awareness of the oral fluid drug screening program could potentially lead to reductions in drug driving and result in a safer road environment for all ACT community members.
Abstract:
Organizations executing similar business processes need to understand the differences and similarities in activities performed across work environments. Research interest is presently directed towards the potential of visualization for the display of process models, to support users in their analysis tasks. Although recent literature in process mining and comparison provides several methods and algorithms for process and log comparison, few contributions explore novel visualization approaches. This paper analyses process comparison from a design perspective, providing practical visualization techniques to support process analysis. The design of the visual comparison has been tackled from three points of view: the general model, the projected model and the side-by-side comparison, in order to support the needs of business analysts. A case study is presented showing the application of process mining and visualization techniques to patient treatment across two Australian hospitals.
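A hedged sketch of the data behind a side-by-side comparison view (activity names and logs are invented; the paper compares full process models mined from hospital event data, not just frequencies):

```python
from collections import Counter

def activity_profile(log):
    """Relative frequency of each activity in an event log,
    where a log is a list of traces (lists of activity names)."""
    counts = Counter(act for trace in log for act in trace)
    total = sum(counts.values())
    return {act: n / total for act, n in counts.items()}

def side_by_side(profile_a, profile_b):
    """Rows of (activity, frequency in log A, frequency in log B, difference),
    the raw material for a side-by-side visual comparison."""
    rows = []
    for act in sorted(set(profile_a) | set(profile_b)):
        a, b = profile_a.get(act, 0.0), profile_b.get(act, 0.0)
        rows.append((act, a, b, a - b))
    return rows
```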
Abstract:
In this paper we excite bound long-range stripe plasmon modes with a highly focused laser beam. We demonstrate highly confined plasmons propagating along a 50 μm long silver stripe, 750 nm wide and 30 nm thick. Two excitation techniques were studied: focusing the laser spot onto the waveguide end and focusing the laser spot onto a silver grating. By comparing the intensity of the out-coupled photons at the end of the stripe for both grating and end excitation, we show that gratings provide a factor-of-two increase in output intensity, so plasmons excited by this technique are easier to detect. The authors expect that the outcome of this paper will prove beneficial for the development of passive nano-optical devices based on stripe waveguides, by providing insight into the different excitation techniques available and the advantages of each.
Abstract:
Resource assignment and scheduling is a difficult task when job processing times are stochastic and resources are to be used for both known and unknown demand. To operate effectively within such an environment, several novel strategies are investigated. The first focuses upon the creation of a robust schedule and utilises the concept of strategically placed idle time (i.e. buffering). The second approach introduces the idea of maintaining a number of free resources at each time, and culminates in another form of strategically placed buffering. The attraction of these approaches is that they are easy to grasp conceptually and mimic what practitioners already do in practice. Our extensive numerical testing has shown that these techniques ensure more prompt job processing and reduce job cancellations and waiting time. They are effective in the considered setting and could easily be adapted for many real-life problems, for instance those in health care. More importantly, this article has demonstrated that integrating the two approaches is a better strategy and provides an effective stochastic scheduling approach.
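The first strategy, strategically placed idle time, can be sketched as follows. The buffer-sizing rule here (a fixed fraction of each job's processing-time standard deviation) is an assumption for illustration; the article's placement strategies are more involved:

```python
def buffered_schedule(jobs, buffer_factor=0.5):
    """jobs: list of (name, expected_duration, std_dev) tuples.
    Builds a serial schedule and inserts idle time ("buffering") after each
    job, sized by its variability, so overruns are absorbed locally instead
    of cascading into later jobs. The sizing rule is illustrative only."""
    t, schedule = 0.0, []
    for name, duration, std_dev in jobs:
        start = t
        t += duration
        schedule.append((name, start, t))
        t += buffer_factor * std_dev  # strategically placed idle time
    return schedule
```

A highly variable job thus pushes its successors later at planning time, trading schedule length for robustness, which matches the reported reduction in cancellations and waiting time.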
Abstract:
The ability to estimate the expected Remaining Useful Life (RUL) is critical to reducing maintenance costs, operational downtime and safety hazards. In most industries, reliability analysis is based on Reliability Centred Maintenance (RCM) and lifetime distribution models. In these models, the lifetime of an asset is estimated using failure time data; however, statistically sufficient failure time data are often difficult to obtain in practice due to fixed time-based replacement and the small population of identical assets. When condition indicator data are available in addition to failure time data, one alternative to the traditional reliability models is Condition-Based Maintenance (CBM). Covariate-based hazard modelling is one CBM approach. There are a number of covariate-based hazard models; however, little work has been done to evaluate the performance of these models in asset life prediction using various condition indicators and levels of data availability. This paper reviews two covariate-based hazard models, the Proportional Hazard Model (PHM) and the Proportional Covariate Model (PCM). To assess these models' performance, the expected RUL is compared to the actual RUL. Outcomes demonstrate that both models achieve convincingly good results in RUL prediction; however, PCM has a smaller absolute prediction error. In addition, PHM shows an over-smoothing tendency compared to PCM under sudden changes in condition data. Moreover, the case studies show that PCM is not biased when the sample size is small.
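As a hedged sketch of the PHM form used in such comparisons, the hazard is a baseline hazard scaled by the condition covariates, h(t|z) = h0(t)·exp(β·z). The Weibull baseline and coefficient values below are illustrative, not the paper's fitted model:

```python
import math

def phm_hazard(t, z, beta, shape=2.0, scale=1000.0):
    """Proportional Hazard Model: Weibull baseline hazard scaled by the
    condition covariates z via exp(sum(beta_i * z_i)).
    All parameter values here are placeholders for illustration."""
    h0 = (shape / scale) * (t / scale) ** (shape - 1)
    return h0 * math.exp(sum(b * zi for b, zi in zip(beta, z)))

def abs_prediction_error(expected_rul, actual_rul):
    """The comparison metric described in the abstract: the absolute
    difference between expected and actual remaining useful life."""
    return abs(expected_rul - actual_rul)
```

With β = 0 the covariates have no effect and the model reduces to the baseline lifetime distribution; degraded condition indicators (positive β·z) raise the hazard and shorten the expected RUL.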
Abstract:
Drying of food materials offers a significant increase in shelf life, along with modification of quality attributes due to simultaneous heat and mass transfer. Shrinkage and variations in porosity are the common micro- and macrostructural changes that take place during the drying of most food materials. Although extensive research has been carried out on the prediction of shrinkage and porosity over the course of drying, no single model considers both material properties and process conditions. In this study, an attempt has been made to develop and validate shrinkage and porosity models of food materials during drying, considering both process parameters and sample properties. The stored energy within the sample, elastic potential energy, glass transition temperature and physical properties of the sample such as initial porosity, particle density, bulk density and moisture content have been taken into consideration. Physical property measurements and validation have been carried out using a universal testing machine (Instron 2 kN), a profilometer (Nanovea) and a pycnometer. In addition, COMSOL Multiphysics 4.4 has been used to solve the heat and mass transfer physics. Results obtained from the shrinkage and porosity models are quite consistent with the experimental data. Successful implementation of these models would ensure the use of optimum energy in the course of drying and better quality retention of dried foods.
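Two of the quantities tracked above follow from standard definitions (the numbers in the test are illustrative; the paper's coupled models go well beyond these identities):

```python
def porosity(bulk_density, particle_density):
    """Porosity from bulk and particle density: phi = 1 - rho_bulk / rho_particle.
    Densities must share the same units (e.g. kg/m^3)."""
    return 1.0 - bulk_density / particle_density

def volumetric_shrinkage(initial_volume, current_volume):
    """Fractional volume reduction of the sample over the course of drying."""
    return 1.0 - current_volume / initial_volume
```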
Abstract:
Objectives Currently, there are no studies combining electromyography (EMG) and sonography to estimate the absolute and relative strength values of the erector spinae (ES) muscles in healthy individuals. The purpose of this study was to establish whether the maximum voluntary contraction (MVC) of the ES during isometric contractions could be predicted from the changes in surface EMG as well as in fiber pennation and thickness as measured by sonography. Methods Thirty healthy adults performed 3 isometric extensions at 45° from the vertical to calculate the MVC force. Contractions at 33% and 100% of the MVC force were then used during sonographic and EMG recordings. These measurements were used to observe the architecture and function of the muscles during contraction. Statistical analysis was performed using bivariate regression and regression equations. Results The slope of each regression equation was statistically significant (P < .001), with R2 values of 0.837 and 0.986 for the right and left ES, respectively. The standard errors of the estimate between the sonographic measurements and the regression-estimated pennation angles for the right and left ES were 0.10 and 0.02, respectively. Conclusions Erector spinae muscle activation can be predicted from the changes in fiber pennation during isometric contractions at 33% and 100% of the MVC force. These findings could be essential for developing a regression equation that could estimate the level of muscle activation from changes in the muscle architecture.
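The bivariate regression reported above is ordinary least squares on paired measurements; a self-contained sketch (the data in the test are placeholders, not the study's recordings):

```python
def linear_regression(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def r_squared(x, y, slope, intercept):
    """Coefficient of determination: 1 - SS_res / SS_tot
    (the abstract reports R2 of 0.837 and 0.986)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```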
Abstract:
A study was undertaken to examine further the effects of perceived work control on employee adjustment. On the basis of the stress antidote model, it was proposed that high levels of prediction, understanding, and control of work-related events would have direct, indirect, and interactive effects on levels of employee adjustment. These hypotheses were tested in a short-term longitudinal study of 137 employees of a large retail organization. The stress antidote measures appeared to be indirectly related to employee adjustment, via their effects on perceptions of work stress. There was weak evidence for the proposal that prediction, understanding, and control would buffer the negative effects of work stress. Additional analyses indicated that the observed effects of prediction, understanding, and control were independent of employees' generalized control beliefs. However, there was no support for the proposal that the effects of the stress antidote measures would be dependent on employees' generalized control beliefs.
Abstract:
In recent years, rapid advances in information technology have led to various data collection systems which are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell-phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite the fact that there have been substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the enhanced complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, un-signalized or signalized, at which the switching of the traffic lights and the turning maneuvers of the road users lead to shock-wave phenomena that propagate upstream of the intersections. This paper develops a new model-based methodology to build up a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly from loop detectors and partial observations from Bluetooth and GPS devices.
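One generic way to see why combining complementary sources "will generally result in better accuracy" is inverse-variance fusion of independent estimates. This is a textbook estimator offered for intuition, not the model-based method the paper develops:

```python
def fuse_estimates(estimates):
    """estimates: list of (value, variance) pairs from different sensors,
    e.g. a loop detector, Bluetooth matching, and GPS probes, assumed
    independent. Inverse-variance weighting yields a fused estimate whose
    variance is lower than that of any single source."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total
```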
Abstract:
Objective There are many prediction equations available in the literature for the assessment of body composition from skinfold thickness (SFT). This study aims to cross-validate some of those prediction equations to determine the suitability of their use in Sri Lankan children. Methods Height, weight and SFT at 5 different sites were measured. Total body water was assessed using the isotope dilution method (D2O). Percentage fat mass (%FM) was estimated from SFT using prediction equations described by five authors in the literature. Results 282 healthy Sri Lankan children aged 5 to 15 years were studied. The equation of Brook gave the lowest bias, but the limits of agreement were wide. The equations described by Deurenberg et al gave slightly higher bias, but the limits of agreement were narrowest and the bias was not influenced by extremes of body fat. Although the prediction equations did not estimate %FM adequately, the association between %FM and SFT measures was quite satisfactory. Conclusion We conclude that SFT can be used effectively in the assessment of body composition in children. However, for the assessment of body composition using SFT, either prediction equations should be derived to suit the local population or existing equations should be cross-validated to determine their suitability before application.
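The bias and limits-of-agreement comparison described above is a Bland–Altman analysis between the criterion method (%FM from isotope dilution) and each skinfold equation; a minimal sketch (the data in the test are invented):

```python
import math

def bland_altman(reference, predicted):
    """Bias (mean difference) and 95% limits of agreement between a
    criterion measurement and an equation-based estimate."""
    diffs = [p - r for p, r in zip(predicted, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A low bias with wide limits (as reported for Brook) means the equation is right on average but unreliable for individuals; narrow limits (as for Deurenberg et al) indicate more consistent individual-level agreement.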