263 results for Statistical mean


Relevance: 20.00%

Publisher:

Abstract:

The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral–texture feature descriptor is proposed by incorporating spectral vegetation indices into statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used, and a support vector machine is employed as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images, and different feature vectors are extracted to represent each tree crown. The experimental results show that the proposed spectral moment features outperform, or are at least comparable with, state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic space analysis further demonstrates the strength of the proposed feature descriptors.
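
As a rough illustration of the idea of spectral moment features, the sketch below computes the first four statistical moments of NDVI over a segmented tree crown and feeds one such vector per crown to an SVM. NDVI, the choice of four moments and the scikit-learn SVC are assumptions made for illustration, not details taken from the article.

import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

def spectral_moment_features(nir, red, crown_mask):
    # NDVI per pixel, summarised by mean, standard deviation, skewness and kurtosis
    # over the pixels belonging to one segmented tree crown.
    ndvi = (nir - red) / (nir + red + 1e-8)
    v = ndvi[crown_mask]
    return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])

# One feature vector per crown; labelled crowns then train the benchmark SVM classifier:
# X = np.vstack([spectral_moment_features(nir, red, m) for m in crown_masks])
# clf = SVC(kernel="rbf").fit(X, species_labels)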

Relevance: 20.00%

Publisher:

Abstract:

This paper discusses the statistical analyses used to derive bridge live load models for Hong Kong from 10 years of weigh-in-motion (WIM) data. The statistical concepts required and the terminology adopted in the development of bridge live load models are introduced. The paper includes studies of representative vehicles drawn from the large amount of WIM data in Hong Kong. Different load-affecting parameters, such as gross vehicle weights, axle weights, axle spacings and the average daily number of trucks, are first analysed by various stochastic processes in order to obtain the mathematical distributions of these parameters. As a prerequisite to determining accurate bridge design loadings in Hong Kong, this study not only takes advantage of code formulation methods used internationally but also presents a new method for modelling the collected WIM data using a statistical approach.
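
A minimal sketch of the distribution-fitting step, assuming a one-dimensional series of gross vehicle weights and standard scipy distributions; the placeholder data and candidate distributions are illustrative only, and the study's own stochastic models may differ.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gvw = rng.lognormal(mean=2.8, sigma=0.4, size=5000)   # placeholder gross vehicle weights (tonnes)

# Fit candidate distributions by maximum likelihood and compare their fit by log-likelihood.
candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma, "normal": stats.norm}
for name, dist in candidates.items():
    params = dist.fit(gvw)
    loglik = np.sum(dist.logpdf(gvw, *params))
    print(f"{name}: log-likelihood = {loglik:.1f}")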

Relevance: 20.00%

Publisher:

Abstract:

Many traffic situations require drivers to cross or merge into a stream having higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrated that numerical search fine tuned by statistical analysis can be used to determine the most likely critical gap for a sample of drivers, based on their largest rejected gap and accepted gap. This method shares some common features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheet and is particularly analytically transparent. This method is considered not to bias estimation of critical gap due to very small rejected gaps or very large rejected gaps. However, it requires a sufficiently large sample that there is reasonable representation of largest rejected gap/accepted gap pairs within a fairly narrow highest likelihood search band.
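
A hedged sketch of such a numerical search, assuming each driver contributes one (largest rejected gap, accepted gap) pair and that critical gaps are lognormally distributed; the gap values and grid ranges below are invented for illustration, and the paper's own procedure may differ in detail.

import numpy as np
from scipy import stats

def neg_log_likelihood(mu, sigma, rejected, accepted):
    # Probability that each driver's critical gap lies between their largest
    # rejected gap and their accepted gap, under a lognormal critical-gap distribution.
    cdf = stats.lognorm(s=sigma, scale=np.exp(mu)).cdf
    p = np.clip(cdf(accepted) - cdf(rejected), 1e-12, None)
    return -np.sum(np.log(p))

rejected = np.array([2.1, 3.0, 1.8, 4.2, 2.7])   # illustrative largest rejected gaps (s)
accepted = np.array([4.5, 5.2, 3.9, 6.0, 4.8])   # illustrative accepted gaps (s)

# Coarse grid search; the same calculation is easily laid out in a spreadsheet.
grid = [(m, s) for m in np.linspace(0.5, 2.0, 80) for s in np.linspace(0.05, 0.8, 80)]
mu, sigma = min(grid, key=lambda g: neg_log_likelihood(*g, rejected, accepted))
print("most likely mean critical gap (s):", round(float(np.exp(mu + sigma**2 / 2)), 2))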

Relevance: 20.00%

Publisher:

Abstract:

Elevated plasma fibronectin levels occur in various clinical states, including arterial disease. Increasing evidence suggests that atherothrombosis and venous thromboembolism (VTE) share common risk factors. To assess the hypothesis that high plasma fibronectin levels are associated with VTE, we compared plasma fibronectin levels in the Scripps Venous Thrombosis Registry for 113 VTE cases vs. age- and sex-matched controls. VTE cases had a significantly higher mean fibronectin concentration than controls (127% vs. 103%, p<0.0001); the difference was greater for idiopathic VTE cases than for secondary VTE cases (133% vs. 120%, respectively). Using the 90th percentile of the control values as a cut-off, the odds ratio (OR) for association with VTE for plasma fibronectin levels above this threshold was 9.37 (95% CI 2.73-32.2; p<0.001), and this OR remained significant after adjustment for sex, age, body mass index (BMI), factor V Leiden and prothrombin nt20210A (OR 7.60, 95% CI 2.14-27.0; p=0.002). In particular, the OR was statistically significant for idiopathic VTE both before and after these statistical adjustments. For the total male cohort, the OR was significant before and after statistical adjustments, but it was not significant for the total female cohort. In summary, our results suggest that elevated plasma fibronectin levels are associated with VTE, especially in males, and extend the potential association between biomarkers and risk factors for arterial atherothrombosis and VTE. © 2008 Schattauer GmbH.
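
For readers unfamiliar with how such odds ratios are obtained, the sketch below computes an unadjusted OR with a Wald 95% confidence interval from a 2×2 table. The counts are made up for illustration and are not the registry data; the adjusted ORs quoted above come from a multivariable model such as logistic regression.

import numpy as np
from scipy import stats

# Illustrative 2x2 table (not the study's data):
# rows = fibronectin above / below the cut-off, columns = VTE case / control.
a, b = 40, 10    # above cut-off: cases, controls
c, d = 73, 103   # below cut-off: cases, controls

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # Wald standard error of log(OR)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
p = 2 * stats.norm.sf(abs(np.log(or_hat)) / se_log_or)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f}), p = {p:.3g}")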

Relevance: 20.00%

Publisher:

Abstract:

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper we tackle the problem of adaptivity to this condition in the context of model selection, within a general learning framework. In fact, we consider a weaker version of this condition that takes into account that learning within a small model can be much easier than learning within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not demonstrate strong margin adaptivity.
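
As background, one standard formulation of the Mammen–Tsybakov margin (low-noise) condition for binary classification is sketched below; this is the common textbook form, not necessarily the exact variant, nor the weaker model-dependent version, used in the paper.

\[
\exists\, C > 0,\ \alpha \ge 0:\qquad
\mathbb{P}\bigl(0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \le C\, t^{\alpha}
\quad \text{for all } t > 0,
\qquad \eta(x) = \mathbb{P}(Y = 1 \mid X = x).
\]

Under this condition every classifier \(f\) satisfies
\(\mathbb{P}\bigl(f(X) \neq f^{\ast}(X)\bigr) \le c\,\bigl(R(f) - R(f^{\ast})\bigr)^{\alpha/(1+\alpha)}\),
where \(f^{\ast}\) is the Bayes classifier and \(R\) the risk, which is the mechanism behind rates faster than \(n^{-1/2}\).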

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a novel technique for the tracking of moving lips for the purpose of speaker identification. In our system, a model of the lip contour is formed directly from chromatic information in the lip region, and iterative refinement of contour point estimates is not required. Colour features are extracted from the lips via concatenated profiles taken around the lip contour. The dimensionality of the lip features is reduced via principal component analysis (PCA) followed by linear discriminant analysis (LDA). Statistical speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments performed on the M2VTS database show encouraging results.
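
A rough sketch of the feature-reduction and speaker-modelling stages (PCA, then LDA, then one GMM per speaker), using scikit-learn and randomly generated placeholder features in place of the real colour-profile vectors; the component counts are arbitrary and the real system's settings are not known from the abstract.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_speakers = 5
X = rng.normal(size=(500, 120))        # placeholder concatenated colour-profile features
y = rng.integers(0, n_speakers, 500)   # placeholder speaker labels

# Dimensionality reduction: PCA followed by LDA, as described in the abstract.
X_pca = PCA(n_components=40).fit_transform(X)
X_lda = LinearDiscriminantAnalysis(n_components=n_speakers - 1).fit_transform(X_pca, y)

# One GMM per speaker; identification picks the speaker model with the highest likelihood.
models = {s: GaussianMixture(n_components=8, covariance_type="diag").fit(X_lda[y == s])
          for s in range(n_speakers)}
test_frames = X_lda[y == 3]
print("identified speaker:", max(models, key=lambda s: models[s].score(test_frames)))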

Relevance: 20.00%

Publisher:

Abstract:

Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to investigate thoroughly how effective commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. In addition, an empirical study investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies: QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions’ ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
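
For reference, the sketch below implements the two statistical loss functions in their commonly used multivariate forms, where the forecast is a covariance matrix H and the proxy is, for example, a realised covariance matrix; these are standard textbook forms and may differ from the exact scaling adopted in the thesis.

import numpy as np

def mse_loss(proxy, forecast):
    # Frobenius MSE between the volatility proxy and the forecast covariance matrix.
    return float(np.sum((proxy - forecast) ** 2))

def qlike_loss(proxy, forecast):
    # Gaussian quasi-likelihood (QLIKE) loss: log|H| + tr(H^{-1} * proxy).
    _, logdet = np.linalg.slogdet(forecast)
    return float(logdet + np.trace(np.linalg.solve(forecast, proxy)))

# Forecasts are ranked by their average loss over the evaluation sample; lower is better.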

Relevance: 20.00%

Publisher:

Abstract:

Lower fruit and vegetable intake among socioeconomically disadvantaged groups has been well documented, and may be a consequence of a higher consumption of take-out foods. This study examined whether, and to what extent, take-out food consumption mediated (explained) the association between socioeconomic position and fruit and vegetable intake. A cross-sectional postal survey was conducted among 1500 randomly selected adults aged 25–64 years in Brisbane, Australia in 2009 (response rate = 63.7%, N = 903). A food frequency questionnaire assessed usual daily servings of fruits and vegetables (0 to 6), overall take-out consumption (times/week) and the consumption of 22 specific take-out items (never to ≥once/day). These specific take-out items were grouped into “less healthy” and “healthy” choices, and an index (0 to 100) was created for each type of choice. Socioeconomic position was ascertained by education. The analyses were performed using linear regression, and a bootstrap re-sampling approach estimated the statistical significance of the mediated effects. Mean daily serves of fruits and vegetables were 1.89 (SD 1.05) and 2.47 (SD 1.12), respectively. The least-educated group consumed fewer serves of fruit (B = –0.39, p < 0.001) and vegetables (B = –0.43, p < 0.001) compared with the highest-educated group. The consumption of “less healthy” take-out food partly explained (mediated) the education differences in fruit and vegetable intake; however, no mediating effects were observed for overall or “healthy” take-out consumption. Regular consumption of “less healthy” take-out items may contribute to socioeconomic differences in fruit and vegetable intake, possibly by displacing these foods.
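
A minimal sketch of a product-of-coefficients mediation analysis with a bootstrap confidence interval, using ordinary least squares on synthetic placeholder data; the variable coding and effect sizes are invented for illustration and do not reproduce the study's data or its exact mediation procedure.

import numpy as np

rng = np.random.default_rng(1)

def ols(y, X):
    # OLS coefficients with an intercept prepended.
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# Placeholder data: x = education level, m = "less healthy" take-out index, y = vegetable serves.
n = 900
x = rng.integers(0, 3, n).astype(float)
m = 10.0 - 2.0 * x + rng.normal(0, 3, n)
y = 2.5 + 0.05 * x - 0.04 * m + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    a = ols(m, x)[1]                           # exposure -> mediator
    b = ols(y, np.column_stack([x, m]))[2]     # mediator -> outcome, adjusting for exposure
    return a * b

boot = [indirect_effect(x[i], m[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(2000))]
print("bootstrap 95% CI for the mediated effect:", np.percentile(boot, [2.5, 97.5]))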

Relevance: 20.00%

Publisher:

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals’ seldom-studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it has become habitual and involves minimal intention or decision making. Key variables investigated are the activity initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers’ spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, so that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals’ highly heterogeneous and spiky spatial usage behaviour, or, more generally, human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially captures their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
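
As a point of comparison only, the sketch below fits a variational Bayes GMM to placeholder two-dimensional location data using scikit-learn's BayesianGaussianMixture; unlike the component-splitting extension developed in the thesis, this implementation starts with a surplus of components and shrinks the weights of unnecessary ones. The coordinates and cluster structure are invented.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Placeholder spatial usage data: (longitude, latitude) of towers where one customer's activities occur.
locations = np.vstack([
    rng.normal([153.02, -27.47], 0.01, size=(300, 2)),   # e.g. a "home" cluster
    rng.normal([153.10, -27.50], 0.02, size=(150, 2)),   # e.g. a "work" cluster
])

vb_gmm = BayesianGaussianMixture(n_components=10, weight_concentration_prior=0.01,
                                 covariance_type="full", max_iter=500,
                                 random_state=0).fit(locations)
print("effective number of components:", int(np.sum(vb_gmm.weights_ > 0.01)))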

Relevance: 20.00%

Publisher:

Abstract:

It is important to promote a sustainable development approach to ensure that economic, environmental and social development is maintained in balance. Sustainable development and its implications are not just a global concern; they also affect Australia. In particular, rural Australian communities are facing various economic, environmental and social challenges. Thus, the need for sustainable development in rural regions is becoming increasingly important. To promote sustainable development, proper frameworks, along with associated tools optimised for the specific regions, need to be developed. This will ensure that decisions made for sustainable development are evidence-based rather than resting on subjective opinions. To address these issues, Queensland University of Technology (QUT), through an Australian Research Council (ARC) linkage grant, has initiated research into the development of a Rural Statistical Sustainability Framework (RSSF) to aid sustainable decision making in rural Queensland. This particular branch of the research developed a decision support tool that will become the integrating component of the RSSF. The tool is developed on a web-based platform to allow easy dissemination and quick maintenance and to minimise compatibility issues. It is based on MapGuide Open Source and follows a three-tier architecture: client tier, web tier and server tier. The developed tool is interactive and behaves similarly to a familiar desktop-based application. It can handle and display vector-based spatial data and can provide further visual outputs using charts and tables. The data used in the tool were obtained from the QUT research team. Overall, the tool implements four tasks to help in the decision-making process: Locality Classification, Trend Display, Impact Assessment, and Data Entry and Update. The developed tool utilises open-source and freely available software, which allows for easy extensibility and long-term sustainability.

Relevance: 20.00%

Publisher:

Abstract:

Background: To compare the intraocular pressure readings obtained with the iCare rebound tonometer and the 7CR non-contact tonometer with those measured by Goldmann applanation tonometry in treated glaucoma patients. Design: A prospective, cross-sectional study was conducted in a private tertiary glaucoma clinic. Participants: 109 patients (54 male, 55 female), including only eyes under medical treatment for glaucoma. Methods: Measurement by Goldmann applanation tonometry, iCare rebound tonometry and 7CR non-contact tonometry. Main Outcome Measures: Intraocular pressure. Results: There were strong correlations between the intraocular pressure measurements obtained with Goldmann and both the rebound and non-contact tonometers (Spearman r values ≥ 0.79, p < 0.001). However, there were small, statistically significant differences between the average readings for each tonometer. For the rebound tonometer, the mean intraocular pressure was slightly higher than with the Goldmann applanation tonometer in the right eyes (p = 0.02) and similar in the left eyes (p = 0.93); however, these differences did not reach statistical significance. The Goldmann-correlated measurements from the non-contact tonometer were lower than the average Goldmann reading for both right (p < 0.001) and left (p > 0.01) eyes. The corneal-compensated measurements from the non-contact tonometer were significantly higher than those from the other tonometers (p ≤ 0.001). Conclusions: The iCare rebound tonometer and the 7CR non-contact tonometer measure IOP in fundamentally different ways from the Goldmann applanation tonometer. The resulting IOP values vary between the instruments, and this will need to be considered when comparing clinical and home-acquired measurements.
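
A small sketch of the kind of paired comparison reported above, using synthetic readings: a Spearman correlation for agreement in ranking and a Wilcoxon signed-rank test for a systematic difference in level. The numbers are placeholders, and the study itself may have used different paired tests.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
goldmann = rng.normal(16, 3, 109)                   # placeholder Goldmann IOP readings (mmHg)
rebound = goldmann + rng.normal(0.5, 1.5, 109)      # placeholder paired rebound-tonometer readings

rho, p_rho = stats.spearmanr(goldmann, rebound)     # strength of association between instruments
w_stat, p_diff = stats.wilcoxon(goldmann, rebound)  # paired test for a systematic offset
print(f"Spearman r = {rho:.2f} (p = {p_rho:.3g}); paired-difference p = {p_diff:.3g}")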