Abstract:
We implemented six different boarding strategies (Wilma, Steffen, Reverse Pyramid, Random, Blocks and By letter) in order to investigate boarding times for Boeing 777 and Airbus A380 aircraft. We also introduce three new boarding methods in the search for an optimum boarding strategy. Our models explicitly simulate the behaviour of groups of people travelling together and the time taken to store luggage as part of the boarding process. Results from the simulation demonstrate that, of the existing methods, the Reverse Pyramid method is the best boarding method for the Boeing 777 and the Steffen method is the best for the Airbus A380. Among the newly suggested methods, the aisle-first boarding method is the best strategy for the Boeing 777 and the row-arrangement method is the best for the Airbus A380. Overall, the best boarding strategy is the aisle-first method for the Boeing 777 and the Steffen method for the Airbus A380.
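The abstract does not include implementation details, so the following is only a rough sketch of the kind of agent-based comparison it describes: a toy single-aisle boarding simulation in which passengers block the aisle while stowing luggage, used to compare a random order with a back-to-front block order. The cabin layout, stowage delay and movement rules are all assumed for illustration and are not taken from the paper.

```python
import random

def simulate_boarding(order, rows=30, stow_ticks=4):
    """Toy single-aisle boarding model (all parameters hypothetical).

    `order` lists each passenger's target row, front of the queue first.
    A passenger advances one row per tick while the aisle cell ahead is
    free, then blocks the aisle for `stow_ticks` ticks while stowing
    luggage before sitting down. Returns ticks until everyone is seated.
    """
    queue = list(order)          # passengers still waiting at the door
    aisle = {}                   # aisle position -> [target_row, stow ticks left]
    seated, ticks, total = 0, 0, len(order)
    while seated < total:
        ticks += 1
        # move/stow passengers already in the aisle, front of cabin first
        for pos in sorted(aisle, reverse=True):
            target, stow = aisle[pos]
            if pos == target:                       # at own row: stow, then sit
                if stow > 1:
                    aisle[pos][1] -= 1
                else:
                    del aisle[pos]
                    seated += 1
            elif pos + 1 < rows and pos + 1 not in aisle:
                aisle[pos + 1] = aisle.pop(pos)     # step one row forward
        if queue and 0 not in aisle:                # next passenger enters at row 0
            aisle[0] = [queue.pop(0), stow_ticks]
    return ticks

rows, seats_per_row = 30, 6
random_order = [r for r in range(rows) for _ in range(seats_per_row)]
random.shuffle(random_order)
back_to_front = sorted(random_order, reverse=True)  # rear block boards first
print("random order: ", simulate_boarding(random_order), "ticks")
print("back-to-front:", simulate_boarding(back_to_front), "ticks")
```

Seat interference within a row (window vs. aisle seats) and group behaviour are ignored here; the point is only the shape of such a simulation, not its results.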
Abstract:
Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet. Most people with macular degeneration are affected by the dry form. Early signs of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists, which is a time-consuming, tiresome process; an automated AMD screening tool can therefore aid clinicians significantly in their diagnosis. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectrum features, Fractal Dimension (FD) and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristic (ROC) curve-based and Wilcoxon ranking methods in order to select the optimum features, and are classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimally ranked features using the SVM classifier for the private, ARIA and STARE datasets, respectively. This automated AMD detection system can be used for mass fundus-image screening, allowing clinicians to focus their expertise on the selected images that require further examination.
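As a hedged illustration of one branch of the pipeline described above (t-test feature ranking followed by SVM classification), the sketch below uses synthetic feature vectors in place of the fundus-image features; the paper's other ranking schemes (KLD, CBBD, ROC, Wilcoxon) and classifiers are omitted.

```python
# Minimal sketch: rank features with a t-test, then classify with an SVM.
# The data are synthetic placeholders, not the features used in the study.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))      # 200 images x 60 candidate features
y = np.repeat([0, 1], 100)          # 0 = normal, 1 = AMD (synthetic labels)
X[y == 1, :10] += 0.8               # make the first 10 features informative

# Rank features by t-test between the two classes (smaller p-value = higher rank)
_, p_values = ttest_ind(X[y == 0], X[y == 1], axis=0)
ranking = np.argsort(p_values)

# Evaluate accuracy while growing the number of top-ranked features
for k in (5, 10, 20, 40):
    acc = cross_val_score(SVC(kernel="rbf"), X[:, ranking[:k]], y, cv=10).mean()
    print(f"top {k:2d} features: CV accuracy = {acc:.3f}")
```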
Abstract:
Existing crowd counting algorithms rely on holistic, local or histogram-based features to capture crowd properties, and regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram-based methods, and to compare various image features and regression models. A K-fold cross-validation protocol is followed to evaluate performance across five public datasets: UCSD, PETS 2009, Fudan, Mall and Grand Central. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram-based features, that optimal performance is obtained using all image features except textures, and that GPR outperforms linear, KNN and NN regression.
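The evaluation protocol itself is straightforward to sketch. The example below runs K-fold cross-validation over the four regression model families named in the abstract on synthetic feature-to-count data; the features, dataset and error metric are placeholders rather than those used in the paper.

```python
# Sketch of the protocol: K-fold cross-validation comparing regressors on
# synthetic "frame features -> crowd count" data (placeholder values only).
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 8))                        # 300 frames x 8 features
y = 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 2, 300)     # synthetic crowd counts

models = {
    "GPR": GaussianProcessRegressor(kernel=1.0 * RBF() + WhiteKernel(),
                                    normalize_y=True),
    "Linear": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=cv,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:6s} mean absolute error: {mae:.2f}")
```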
Abstract:
Plant food materials are in very high demand in the consumer market and, therefore, improved food products and efficient processing techniques are concurrently being researched in food engineering. In this context, numerical modelling and simulation techniques have a very high potential to reveal the fundamentals of the underlying mechanisms involved. However, numerical modelling of plant food materials during drying is quite challenging, mainly due to the complexity of the multiphase microstructure of the material, which undergoes excessive deformations during drying. Conventional grid-based modelling techniques have limited applicability here because of the fundamental limitations of their fixed grids. As a result, meshfree methods have recently been developed which offer a more adaptable, grid-free approach to problem domains of this nature. In this work, a recently developed meshfree two-dimensional plant tissue model is used for a comparative study of microscale morphological changes of several food materials during drying. The model couples Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) to represent the fluid and solid phases of the cellular structure. Simulations are conducted on apple, potato, carrot and grape tissues, and the results are qualitatively and quantitatively compared with experimental findings obtained from the literature. The study revealed that cellular deformations are highly sensitive to cell dimensions, cell wall physical and mechanical properties, middle lamella properties and turgor pressure. In particular, the meshfree model is well suited to simulating critically dried tissues at low moisture content and turgor pressure, which lead to cell wall wrinkling. The findings further highlight the potential applicability of the meshfree approach to model large deformations of the plant tissue microstructure during drying, providing a distinct advantage over state-of-the-art grid-based approaches.
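For readers unfamiliar with the meshfree approach, the snippet below illustrates the basic SPH building block such particle models rest on: a 2D cubic-spline smoothing kernel and the density summation over neighbouring particles. It is not the coupled SPH-DEM tissue model from the study; the particle layout, masses and smoothing length are arbitrary placeholders.

```python
# Minimal SPH illustration: 2D cubic-spline kernel and density summation.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2D cubic-spline SPH kernel W(r, h)."""
    sigma = 10.0 / (7.0 * np.pi * h**2)
    q = r / h
    w = np.zeros_like(q)
    near = q <= 1.0
    w[near] = 1.0 - 1.5 * q[near]**2 + 0.75 * q[near]**3
    mid = (q > 1.0) & (q <= 2.0)
    w[mid] = 0.25 * (2.0 - q[mid])**3
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Toy example: fluid particles on a regular grid inside a single "cell"
xs = np.linspace(0.0, 1.0, 10)
positions = np.array([[x, y] for x in xs for y in xs])   # 100 particles
masses = np.full(len(positions), 1.0 / len(positions))
rho = sph_density(positions, masses, h=0.15)
print("mean particle density:", rho.mean())
```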
Abstract:
Barmah Forest virus (BFV) disease is an emerging mosquito-borne disease in Australia. We aimed to outline some recent methods for using GIS in the analysis of BFV disease in Queensland, Australia. A large database of geocoded BFV cases has been established in conjunction with population data. The database has been used in recently published studies conducted by the authors to determine spatio-temporal BFV disease hotspots and spatial patterns using spatial autocorrelation and semi-variogram analysis, together with the development of interpolated BFV disease standardised incidence maps. This paper briefly outlines the spatial analysis methodologies and GIS tools used in those studies, summarises their methods and results, and presents a GIS methodology to be used in future spatial analytical studies in an attempt to enhance the understanding of BFV disease in Queensland. The methodology developed improves the analysis of BFV disease data and will enhance understanding of the distribution of BFV disease in Queensland, Australia.
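A core statistic behind the spatial autocorrelation analysis mentioned above is global Moran's I. The sketch below computes it on synthetic region centroids and incidence values, with an inverse-distance weight matrix standing in for the GIS-derived neighbourhood structure; it is not the authors' geocoded case data or workflow.

```python
# Global Moran's I on synthetic region-level incidence data (placeholders only).
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: I = (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2."""
    z = values - values.mean()
    s0 = weights.sum()
    n = len(values)
    return (n / s0) * (weights * np.outer(z, z)).sum() / (z ** 2).sum()

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(50, 2))          # 50 synthetic region centroids
incidence = rng.poisson(5, size=50).astype(float)   # synthetic incidence values

# Inverse-distance spatial weights (zero on the diagonal)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
weights = np.where(dist > 0, 1.0 / dist, 0.0)

print("Moran's I:", round(morans_i(incidence, weights), 3))
```

Values near zero indicate no spatial clustering; positive values indicate that high-incidence regions tend to neighbour other high-incidence regions.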
Abstract:
Background: Situational driving factors, including fatigue, distraction, inattention and monotony, are recognised killers in Australia, contributing to an estimated 40% of fatal crashes and 34% of all crashes. More often than not the main contributing factor is identified as fatigue, yet poor driving performance has been found to emerge early in monotonous conditions, independent of fatigue symptoms and time on task. This early emergence suggests an important role for monotony. However, much road safety research treats monotony solely as a task characteristic that directly causes fatigue and associated symptoms, and there remains an absence of consistent evidence explaining the relationship.
Objectives: We report an experimental study designed to disentangle the characteristics and effects of monotony from those associated with fatigue. Specifically, we examined whether poor driving performance associated with hypovigilance emerges as a consequence of monotony, independent of fatigue. We also examined whether monotony is a multidimensional construct, determined by environmental characteristics and/or task demands that independently moderate sustained attention and associated driving performance.
Method: Using a driving simulator, participants completed four 40-minute driving scenarios. The scenarios varied in the degree of monotony, as determined by the degree of variation in road design (e.g., straight roads vs. curves) and/or roadside scenery. Fatigue, as well as a number of other factors known to moderate vigilance and driving performance, was controlled for. To track changes over time, driving performance was assessed in five-minute periods using a range of behavioural, subjective and physiological measures, including steering wheel movements, lane positioning, electroencephalograms, skin conductance and oculomotor activity.
Results: Driving performance is worse in monotonous driving conditions characterised by low variability in road design. Critically, performance decrements associated with monotony emerge very early, suggesting that monotony effects operate independently of fatigue.
Conclusion: Monotony is a multi-dimensional construct where, in a driving context, roads with low variability in design are monotonous and those high in variability are non-monotonous. Importantly, low variability in roadside scenery does not appear to exacerbate monotony or the associated poor performance. However, high variability in roadside scenery can act as a distraction and impair sustained attention and driving performance on monotonous roads. Furthermore, high sensation seekers seem to be more susceptible to distraction when driving on monotonous roads. Implications of our results for the relationship between monotony and fatigue, and for possible construct-specific detection methods in a road safety context, will be discussed.
Abstract:
AIM: To assess the cost-effectiveness of an automated telephone-linked care intervention, Australian TLC Diabetes, delivered over 6 months to patients with established Type 2 diabetes mellitus and a high glycated haemoglobin level, compared to usual care. METHODS: A Markov model was designed to synthesize data from a randomized controlled trial of TLC Diabetes (n=120) and other published evidence. The 5-year model consisted of three health states related to glycaemic control: 'sub-optimal' HbA1c ≥58 mmol/mol (≥7.5%), 'average' 48-57 mmol/mol (6.5-7.4%) and 'optimal' <48 mmol/mol (<6.5%), plus a fourth state, 'all-cause death'. Key outcomes of the model include discounted health-system costs and quality-adjusted life years (QALYs) using SF-6D utility weights. Univariate and probabilistic sensitivity analyses were undertaken. RESULTS: Annual medication costs for the intervention group were lower than for usual care [intervention: £1076 (95%CI: £947, £1206) versus usual care £1271 (95%CI: £1115, £1428), p=0.052]. The estimated mean cost for intervention group participants over five years, including the intervention cost, was £17,152 versus £17,835 for the usual care group. The corresponding mean QALYs were 3.381 (SD 0.40) for the intervention group and 3.377 (SD 0.41) for the usual care group. Results were sensitive to the model duration, utility values and medication costs. CONCLUSION: The Australian TLC Diabetes intervention was a low-cost investment for individuals with established diabetes and may result in medication cost-savings to the health system. Although QALYs were similar between groups, other benefits arising from the intervention should also be considered when determining the overall value of this strategy.
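The structure of such a model is easy to sketch: a cohort moves between the three glycaemic states and death in annual cycles, accumulating discounted costs and QALYs. In the minimal example below, every transition probability, cost and utility weight is an illustrative placeholder, not a value from the trial or its Markov model.

```python
# Sketch of a three-state-plus-death Markov cohort model with annual cycles,
# accumulating discounted costs and QALYs over five years (placeholder inputs).
import numpy as np

states = ["sub-optimal", "average", "optimal", "dead"]
# Annual transition probabilities (rows sum to 1) -- hypothetical
P = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.15, 0.65, 0.15, 0.05],
    [0.05, 0.20, 0.71, 0.04],
    [0.00, 0.00, 0.00, 1.00],
])
annual_cost = np.array([4000.0, 3400.0, 3000.0, 0.0])   # cost per state-year
utility = np.array([0.70, 0.75, 0.80, 0.0])             # SF-6D-style weights
discount = 0.05
cohort = np.array([1.0, 0.0, 0.0, 0.0])                 # all start 'sub-optimal'

total_cost = total_qalys = 0.0
for year in range(5):
    cohort = cohort @ P                                  # advance one annual cycle
    df = 1.0 / (1.0 + discount) ** (year + 1)            # discount factor
    total_cost += df * (cohort * annual_cost).sum()
    total_qalys += df * (cohort * utility).sum()

print(f"Discounted 5-year cost:  {total_cost:,.0f}")
print(f"Discounted 5-year QALYs: {total_qalys:.3f}")
```

In a full cost-effectiveness analysis this calculation would be run once per arm (intervention and usual care) and repeated with sampled inputs for the probabilistic sensitivity analysis.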