217 results for confidence level

at Queensland University of Technology - ePrints Archive


Relevance:

60.00%

Publisher:

Abstract:

In this thesis we are interested in financial risk and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, sometimes controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular, we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices (ASX 200, S&P 500 and FT 30), with results very similar to those provided by historical simulation.
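As a minimal sketch of the moving-window percentile idea (with empirical quantiles standing in for fitted GLD percentiles, so that the empirical case reduces to plain historical simulation; the window length and function name are illustrative assumptions):

```python
import numpy as np

def rolling_var(returns, window=250, level=0.99):
    """Moving-window Value-at-Risk: at each step, estimate the loss
    quantile from the most recent `window` returns. A fitted GLD
    percentile could replace np.quantile here; the empirical quantile
    corresponds to historical simulation."""
    returns = np.asarray(returns, dtype=float)
    var = np.full(returns.shape, np.nan)
    for t in range(window, len(returns)):
        past = returns[t - window:t]
        # VaR at confidence `level` is the (1 - level) quantile of
        # returns, reported as a positive loss.
        var[t] = -np.quantile(past, 1.0 - level)
    return var

# Example: simulated heavy-tailed returns
rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=1000) * 0.01
print(rolling_var(r)[-1])  # 99% one-day VaR from the last 250 returns
```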

Relevance:

60.00%

Publisher:

Abstract:

Vehicle-emitted particles are of significant concern due to their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations than other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially because the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger's spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing typical transport microenvironments. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected for measurements of particle number size distributions and particle number and PM2.5 concentrations during two different seasons. Traffic and meteorological parameters were monitored simultaneously, in order to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at the stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, particle number concentration in the size range 25-250 nm decreased greatly, and the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon stations, respectively. The effect of ambient wind speed on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed over the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were identified at the two stations, during the same time of day and with the same ambient wind speed and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
A further assessment of passenger exposure to bus emissions on a platform was conducted at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. Over the whole day, the average PN13-800 concentration was 1.3 x 10^4 and 1.0 x 10^4 particles/cm3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to overall daily exposure was assessed using two assumed scenarios, a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of daily exposure incurred at a location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, "exposure intensity" (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion modelling in traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model takes into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilises multiple representative segments to capture the emission distribution of real vehicle flow accurately. The model not only helps to quantify the enhanced emissions at critical locations, but also helps to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalised pedestrian crossing, in order to assess the increase in particle number emissions from motor vehicles forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced while vehicles stopped at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41 and 43 for the above six cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study of particle number and mass concentrations, together with particle size distribution, in a bus station transport microenvironment, as influenced by bus flow rate, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as was the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also proved its applicability and simplicity for use in a real-world transport microenvironment.
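To make the "exposure intensity" definition concrete, here is a small illustrative calculation using the rounded figures quoted above (the helper function is hypothetical, not from the thesis):

```python
def exposure_intensity(daily_exposure_fraction, daily_time_fraction):
    """Exposure intensity = daily exposure fraction / daily time fraction.
    A value above 1 means the location contributes disproportionately
    to daily exposure relative to the time spent there."""
    return daily_exposure_fraction / daily_time_fraction

# School student at the bus station: 2.7% of daily PN13-800 exposure
# accumulated in 0.8% of the day. With these rounded inputs the ratio
# is ~3.4; the thesis reports 3.3 from the unrounded data.
print(exposure_intensity(0.027, 0.008))
```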

Relevance:

60.00%

Publisher:

Abstract:

A satellite-based observation system can continuously or repeatedly generate user state vector time series that may contain useful information. One typical example is the collection of International GNSS Service (IGS) station daily and weekly combined solutions. Another example is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real-time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After a review of frequently used hypothesis test statistics in univariate analysis of GNSS state time series, the paper presents a number of T-squared multivariate analysis statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take the correlation between coordinate components into account, which is neglected in univariate analysis. Numerical analysis was conducted with the multi-year time series of an IGS station to demonstrate the results of the multivariate hypothesis testing in comparison with the univariate hypothesis testing results. The results demonstrate that, in general, testing for multivariate mean shifts and outliers tends to reject fewer data samples than testing for univariate mean shifts and outliers at the same confidence level. It is noted that neither univariate nor multivariate data analysis methods are intended to replace physical analysis; instead, they should be treated as complementary statistical methods for a priori or a posteriori investigations. Subsequent physical analysis is necessary to refine and interpret the results.
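As an illustration of the multivariate idea, a one-sample Hotelling's T-squared test on a three-component coordinate series might look like the sketch below (a generic textbook formulation, not necessarily the exact statistics the paper proposes):

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """One-sample Hotelling's T-squared test for H0: mean(X) == mu0.
    X is an (n, p) array, e.g. daily east/north/up coordinates.
    Unlike p separate t-tests, the sample covariance S captures the
    correlation between coordinate components."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)
    # Under H0, T^2 * (n - p) / (p * (n - 1)) ~ F(p, n - p)
    f = t2 * (n - p) / (p * (n - 1))
    pval = stats.f.sf(f, p, n - p)
    return t2, pval

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0, 0], np.eye(3) * 4e-6, size=365)
print(hotelling_t2(X, np.zeros(3)))  # 3-D coordinate series vs zero mean
```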

Relevance:

60.00%

Publisher:

Abstract:

A statistical approach is used in the design of a battery-supercapacitor energy storage system for a wind farm. The design exploits the technical merits of the two energy storage media, in terms of the differences in their specific power and energy densities and their ability to accommodate different rates of change in charging/discharging power. By treating the input wind power as random and using a proposed coordinated power-flow control strategy for the battery and the supercapacitor, the approach evaluates the energy storage capacities, the corresponding expected life-cycle cost/year of the storage media, and the expected cost/year of unmet power dispatch. A computational procedure is then developed for the design of a least-cost/year hybrid energy storage system to realize wind power dispatch at a specified confidence level.
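The statistical flavour of such a design can be conveyed with a crude Monte Carlo sketch: for a candidate storage capacity, estimate the probability that a dispatch commitment is met, then pick the smallest capacity reaching the target confidence level. Everything here (the AR(1) wind model, the numbers, the single lumped storage) is an illustrative assumption, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def dispatch_success_prob(capacity_mwh, n_trials=2000, horizon=24,
                          dispatch_mw=50.0):
    """Fraction of simulated days on which storage of the given capacity
    absorbs the mismatch between random wind power and a constant
    dispatch commitment. Toy wind model: AR(1) around the dispatch level."""
    ok = 0
    for _ in range(n_trials):
        wind = np.empty(horizon)
        wind[0] = dispatch_mw
        for t in range(1, horizon):
            wind[t] = 0.8 * wind[t - 1] + 0.2 * dispatch_mw + rng.normal(0, 10)
        soc = capacity_mwh / 2.0          # start half full
        met = True
        for w in wind:
            soc += (w - dispatch_mw)      # 1-hour steps: MWh change
            if soc < 0 or soc > capacity_mwh:
                met = False
                break
        ok += met
    return ok / n_trials

# Smallest capacity achieving a 95% confidence level of meeting dispatch
for cap in range(50, 501, 50):
    p = dispatch_success_prob(cap)
    if p >= 0.95:
        print(cap, p)
        break
```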

Relevance:

60.00%

Publisher:

Abstract:

In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because a plethora of uncertainties is involved. To maximize profits and minimize risks, LSEs need to develop an optimal strategy to reasonably allocate the purchased electricity amount among different electricity markets, such as the spot market, bilateral contract market, and options market. Because risks originate from uncertainties, an approach is presented to address the risk evaluation problem by the combined use of the lower partial moment and information entropy (LPME). The lower partial moment is used to measure the amount and probability of the loss, whereas the information entropy is used to represent the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio in different markets, subject to the constraint that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, an example is used to illustrate the basic features of the developed model and method.
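The two ingredients of the LPME measure can be sketched with their generic textbook definitions; how the paper actually combines them into one objective is not stated in the abstract, so the code below only illustrates the components:

```python
import numpy as np

def lower_partial_moment(profits, target, order=2):
    """LPM_n(target) = E[ max(target - profit, 0)^n ]:
    penalises only shortfalls below the profit target."""
    shortfall = np.maximum(target - np.asarray(profits), 0.0)
    return np.mean(shortfall ** order)

def shannon_entropy(profits, bins=20):
    """Information entropy of the (binned) profit distribution,
    measuring the uncertainty of the outcome."""
    counts, _ = np.histogram(profits, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
profits = rng.normal(100.0, 15.0, size=10_000)   # simulated LSE profits
print(lower_partial_moment(profits, target=95.0))
print(shannon_entropy(profits))
```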

Relevance:

60.00%

Publisher:

Abstract:

A hybrid energy storage system (HESS) consisting of a battery and a supercapacitor (SC) is proposed for use in a wind farm in order to achieve power dispatchability. In the designed scheme, the rate of change of the battery's charging/discharging power is controlled, while the faster wind power transients are diverted to the SC. This enhances the lifetime of the battery. Furthermore, by taking into consideration the random nature of the wind power, a statistical design method is developed to determine the capacities of the HESS needed to achieve a specified confidence level in the power dispatch. The proposed approach is useful in the planning of the wind farm-HESS scheme and in the coordination of the power flows between the battery and the SC.
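One common way to realise such a split, shown here purely as a hedged illustration (the paper's actual control strategy may differ), is to low-pass filter the wind power so the battery follows the slow component while the SC absorbs the residual transients:

```python
import numpy as np

def split_power(wind_power, alpha=0.05):
    """Exponential low-pass filter: the smoothed component is assigned
    to the battery (slow ramps), the residual to the supercapacitor
    (fast transients). alpha sets the filter bandwidth."""
    battery = np.empty_like(wind_power)
    battery[0] = wind_power[0]
    for t in range(1, len(wind_power)):
        battery[t] = (1 - alpha) * battery[t - 1] + alpha * wind_power[t]
    supercap = wind_power - battery
    return battery, supercap

rng = np.random.default_rng(4)
p = 50 + np.cumsum(rng.normal(0, 1, 600)) * 0.1 + rng.normal(0, 5, 600)
batt, sc = split_power(p)
print(np.std(np.diff(batt)), np.std(np.diff(sc)))  # battery ramps are gentler
```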

Relevance:

60.00%

Publisher:

Abstract:

Background: Foot dorsiflexion plays an essential role in both balance control and human gait. Electromyography (EMG) and sonomyography (SMG) can provide information on several aspects of muscle function. The aim was to establish the relationship between EMG and SMG variables during isotonic contractions of the foot dorsiflexors. Methods: Twenty-seven healthy young adults performed the foot dorsiflexion test on a device designed ad hoc. The EMG variables were maximum peak and area under the curve. The muscle architecture (SMG) variables were muscle thickness and pennation angle. Descriptive statistical analysis, inferential analysis and multivariate linear regression modelling were carried out. The confidence level was set at 95%, with statistical significance at p < 0.05. Results: The correlation between EMG and SMG variables was r = 0.462 (p < 0.05). The linear regression model relating the dependent variable "normalized tibialis anterior (TA) peak" to the independent variables "pennation angle" and "muscle thickness" was significant (p = 0.002), with an explained variance of R^2 = 0.693 and SEE = 0.16. Conclusions: There is a significant relationship and degree of contribution between EMG and SMG variables during isotonic contractions of the TA muscle. Our results suggest that EMG and SMG can be feasible tools for the monitoring and assessment of the foot dorsiflexors. TA muscle parameterization and assessment are relevant given that increased strength accelerates the recovery of lower limb injuries.
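A minimal sketch of the reported regression step (normalized TA peak regressed on pennation angle and muscle thickness) using synthetic data; all numbers and variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 27                                   # participants
pennation = rng.normal(11.0, 2.0, n)     # degrees (synthetic)
thickness = rng.normal(2.6, 0.3, n)      # cm (synthetic)
peak_ta = 0.05 * pennation + 0.2 * thickness + rng.normal(0, 0.1, n)

# Ordinary least squares: peak_ta ~ pennation + thickness
X = np.column_stack([np.ones(n), pennation, thickness])
beta, *_ = np.linalg.lstsq(X, peak_ta, rcond=None)
fitted = X @ beta
ss_res = np.sum((peak_ta - fitted) ** 2)
ss_tot = np.sum((peak_ta - peak_ta.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                 # explained variance, cf. R^2 = 0.693
print(beta, r2)
```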

Relevance:

60.00%

Publisher:

Abstract:

Introduction: A number of genetic-association studies have identified genes contributing to ankylosing spondylitis (AS) susceptibility, but such approaches provide little information as to the gene activity changes occurring during the disease process. Transcriptional profiling generates a 'snapshot' of the sampled cells' activity and can thus provide insights into the molecular processes driving the disease. We undertook a whole-genome microarray approach to identify candidate genes associated with AS and validated these gene-expression changes in a larger sample cohort. Methods: A total of 18 active AS patients, classified according to the New York criteria, and 18 gender- and age-matched controls were profiled using Illumina HT-12 whole-genome expression BeadChips, which carry cDNAs for 48,000 genes and transcripts. Class comparison analysis identified a number of differentially expressed candidate genes. These candidate genes were then validated in a larger cohort using qPCR-based TaqMan low-density arrays (TLDAs). Results: A total of 239 probes corresponding to 221 genes were identified as significantly different between patients and controls, with a P-value < 0.0005 (80% confidence level of false discovery rate). Forty-seven genes were then selected for validation studies using the TLDAs. Thirteen of these genes were validated in the second patient cohort, with 12 downregulated 1.3- to 2-fold and only 1 upregulated (1.6-fold). Among a number of identified genes with well-documented inflammatory roles, we also validated genes that might be of great interest to the understanding of AS progression, such as SPOCK2 (osteonectin) and EP300, which modulate cartilage and bone metabolism. Conclusions: We have validated a gene expression signature for AS from whole blood and identified strong candidate genes that may play roles in both the inflammatory and joint-destruction aspects of the disease.
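Controlling the false discovery rate in a class comparison is typically done with a Benjamini-Hochberg-style procedure; the sketch below is a generic version of that step, not the study's specific pipeline:

```python
import numpy as np

def benjamini_hochberg(pvals, fdr=0.20):
    """Return a boolean mask of probes called significant while
    controlling the false discovery rate at `fdr` (e.g. 0.20 for the
    80% confidence level quoted above)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = fdr * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True   # reject the k smallest p-values
    return mask

rng = np.random.default_rng(6)
pvals = np.concatenate([rng.uniform(0, 1e-4, 200),   # "real" signals
                        rng.uniform(0, 1, 47800)])   # null probes
print(benjamini_hochberg(pvals).sum())  # number of probes called
```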

Relevance:

30.00%

Publisher:

Abstract:

Association rule mining is a technique widely used when querying databases, especially transactional ones, in order to obtain useful associations or correlations among sets of items. Much work has been done focusing on efficiency, effectiveness and redundancy. There has also been a focus on the quality of rules from single-level datasets, with many interestingness measures proposed. However, with multi-level datasets now being common, there is a lack of interestingness measures developed for multi-level and cross-level rules. Single-level measures do not take into account the hierarchy found in a multi-level dataset. This leaves the support-confidence approach, which does not consider the hierarchy anyway and has other drawbacks, as one of the few measures available. In this paper we propose two approaches for measuring multi-level association rules to help evaluate their interestingness. These measures of diversity and peculiarity can be used to help identify those rules from multi-level datasets that are potentially useful.
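For reference, the support-confidence baseline that the paper argues is insufficient can be computed as below; the proposed diversity and peculiarity measures are defined in the paper itself and are not reproduced here:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """confidence(A -> B) = support(A | B) / support(A)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

# Flat transactions ignore the hierarchy: 'milk' here could sit under
# 'dairy' in a multi-level dataset, which these measures cannot see.
transactions = [{"milk", "bread"}, {"milk", "bread", "butter"},
                {"bread"}, {"milk", "butter"}]
print(support({"milk", "bread"}, transactions))          # 0.5
print(confidence({"milk"}, {"bread"}, transactions))     # ~0.67
```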

Relevance:

30.00%

Publisher:

Abstract:

In this work, we investigate an alternative bootstrap approach based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296-1301] and on the Durbin-Levinson algorithm to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in an extensive Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to demonstrate the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the intervals.
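A sketch of the Durbin-Levinson simulation step for a long-memory Gaussian series, using the standard ARFIMA(0, d, 0) autocovariance; this is a textbook construction consistent with, but not copied from, the paper's procedure:

```python
import numpy as np
from scipy.special import gammaln

def arfima_acov(n, d, sigma2=1.0):
    """Autocovariances gamma(0..n-1) of ARFIMA(0, d, 0), 0 < d < 0.5."""
    k = np.arange(n)
    log_g = (gammaln(1 - 2 * d) + gammaln(k + d)
             - gammaln(d) - gammaln(1 - d) - gammaln(k + 1 - d))
    return sigma2 * np.exp(log_g)

def durbin_levinson_sample(gamma, rng):
    """Draw one Gaussian series with autocovariance `gamma` by sampling
    X_t from its conditional distribution given X_0..X_{t-1}, with the
    partial autocorrelations computed by the Durbin-Levinson recursion."""
    n = len(gamma)
    x = np.empty(n)
    v = gamma[0]                        # innovation variance
    x[0] = np.sqrt(v) * rng.standard_normal()
    phi = np.zeros(n)
    for t in range(1, n):
        phi_tt = (gamma[t] - phi[1:t] @ gamma[t - 1:0:-1]) / v
        phi[1:t] = phi[1:t] - phi_tt * phi[1:t][::-1]
        phi[t] = phi_tt
        v *= (1 - phi_tt ** 2)
        x[t] = phi[1:t + 1] @ x[t - 1::-1] + np.sqrt(v) * rng.standard_normal()
    return x

rng = np.random.default_rng(7)
surrogate = durbin_levinson_sample(arfima_acov(500, d=0.3), rng)
print(surrogate[:5])
```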

Relevance:

30.00%

Publisher:

Abstract:

Governments around the world are increasingly investing in information and communications technology (ICT) as a means of improving service delivery to citizens. Government ICT adoption is also being driven by a desire to streamline information accessibility and information flows within government - both between different levels of government and between different departments at the same level. Increasing the availability of information internally and to citizens has clear and compelling benefits but it also carries risks that must be carefully managed. This talk will examine the implications of such E-government initiatives for a range of compliance obligations, with a focus on information privacy. It will review recent developments in the area of systems-based enforcement of privacy policies and the particular privacy challenges presented by the aggregation of geospatial information.

Relevance:

30.00%

Publisher:

Abstract:

In today's electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible; rather, it has to be mined and extracted, which requires automated tools that are effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets/databases; it discovers frequent patterns and association rules between the items/attributes of a dataset with varying levels of strength. However, this is also association rule mining's downside: the number of rules that can be found is usually very large. In order to use the association rules (and the knowledge within them) effectively, the number of rules needs to be kept manageable, so a method is needed to reduce their number. However, we do not want to lose knowledge through this process. Thus the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these have their limitations. Approaches which use information about the dataset's structure to measure association rules are limited, but could yield useful association rules if tapped. Finally, while it is important to be able to obtain interesting association rules from a dataset in a manageable number, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items/attributes that appear together frequently. Recommendation systems also look at patterns and items/attributes that occur together frequently in order to make recommendations to a person. It should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them. For discovering non-redundant rules, we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset's structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be used practically and effectively in a recommender system, while at the same time improving the recommender system's performance. This becomes especially evident when looking at the user cold-start problem for a recommender system; in fact, our proposal helps to solve this serious problem facing recommender systems.
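As a toy illustration of hierarchical redundancy (the thesis's actual criteria are more refined), a descendant rule can be dropped when generalising its items to their parents yields an already known rule with essentially the same confidence; all names and thresholds below are assumptions:

```python
def generalise(itemset, parent):
    """Map each item to its parent category where one exists."""
    return frozenset(parent.get(i, i) for i in itemset)

def prune_hier_redundant(rules, parent, tol=0.05):
    """rules: dict mapping (antecedent, consequent) -> confidence.
    Drop a rule if its generalised (ancestor-level) version exists
    with confidence within `tol`; the specific rule adds nothing."""
    kept = {}
    for (a, c), conf in rules.items():
        ga, gc = generalise(a, parent), generalise(c, parent)
        ancestor = rules.get((ga, gc))
        if (ga, gc) != (a, c) and ancestor is not None \
                and abs(ancestor - conf) <= tol:
            continue
        kept[(a, c)] = conf
    return kept

parent = {"skim milk": "milk", "white bread": "bread"}
rules = {
    (frozenset({"milk"}), frozenset({"bread"})): 0.70,
    (frozenset({"skim milk"}), frozenset({"white bread"})): 0.68,  # redundant
    (frozenset({"skim milk"}), frozenset({"butter"})): 0.90,       # kept
}
print(len(prune_hier_redundant(rules, parent)))  # 2
```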

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the conditions that are necessary at system and local levels for teacher assessment to be valid, reliable and rigorous. With sustainable assessment cultures as the goal, the paper examines how education systems can support local-level efforts for quality learning and dependable teacher assessment. This is achieved through discussion of relevant research and consideration of a case study involving an evaluation of a cross-sectoral approach to promoting confidence in school-based assessment in Queensland, Australia. Building on the reported case study, essential characteristics for developing sustainable assessment cultures are presented, including: leadership in learning; alignment of curriculum, pedagogy and assessment; the design of quality assessment tasks and accompanying standards; and evidence-based judgement and moderation. Taken together, these elements constitute a new framework for building assessment capabilities and promoting quality assurance.

Relevance:

30.00%

Publisher:

Abstract:

Background: Foot ulcers are a frequent reason for diabetes-related hospitalisation. Clinical training is known to have a beneficial impact on foot ulcer outcomes, yet simulation techniques have rarely been used in training for the management of diabetes-related foot complications or chronic wounds. Simulation can be defined as a device or environment that attempts to replicate the real world. The few non-web-based foot-related simulation courses have focused solely on training for a single skill or "part task" (for example, practising ingrown toenail procedures on models). This pilot study primarily aimed to investigate the effect of a training program using multiple methods of simulation on participants' clinical confidence in the management of foot ulcers. Methods: Sixteen podiatrists participated in a two-day Foot Ulcer Simulation Training (FUST) course. The course included prerequisite web-based learning modules, practising individual foot ulcer management part tasks (for example, debriding a model foot ulcer), and participating in replicated clinical consultation scenarios (for example, treating a standardised patient (actor) with a model foot ulcer). The primary outcome measure was participants' completion of confidence surveys before and after the course, using a five-point Likert scale (1 = unacceptable, 5 = proficient). Participants' knowledge, satisfaction and their perception of the relevance and fidelity (realism) of a range of course elements were also investigated. Parametric statistics were used to analyse the data: Pearson's r for correlation, ANOVA for testing differences between groups, and a paired-sample t-test for the significance of differences between pre- and post-workshop scores. A minimum significance level of p < 0.05 was used. Results: An overall 42% improvement in clinical confidence was observed following completion of FUST (mean scores 3.10 compared to 4.40, p < 0.05). The lack of an overall significant change in knowledge scores reflected the participant population's high baseline knowledge and prerequisite completion of the web-based modules. Satisfaction, relevance and fidelity of all course elements were rated highly. Conclusions: This pilot study suggests that simulation training programs can improve participants' clinical confidence in the management of foot ulcers. The approach has the potential to enhance clinical training in diabetes-related foot complications and chronic wounds in general.
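The pre/post comparison reported above reduces to a standard paired-sample t-test; here is a sketch with simulated scores, since the actual survey data are not shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
pre = np.clip(rng.normal(3.1, 0.5, 16), 1, 5)    # 16 podiatrists, 5-pt Likert
post = np.clip(pre + rng.normal(1.3, 0.4, 16), 1, 5)

t, p = stats.ttest_rel(post, pre)                # paired-sample t-test
improvement = (post.mean() - pre.mean()) / pre.mean() * 100
print(f"mean {pre.mean():.2f} -> {post.mean():.2f}, "
      f"{improvement:.0f}% improvement, t={t:.2f}, p={p:.4f}")
```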

Relevance:

30.00%

Publisher:

Abstract:

Objective: To examine the association between individual- and neighborhood-level disadvantage and self-reported arthritis. Methods: We used data from a population-based cross-sectional study conducted in 2007 among 10,757 men and women ages 40–65 years, selected from 200 neighborhoods in Brisbane, Queensland, Australia, using a stratified 2-stage cluster design. Data were collected using a mail survey (68.5% response). Neighborhood disadvantage was measured using a census-based composite index, and individual disadvantage was measured using self-reported education, household income, and occupation. Arthritis was indicated by self-report. Data were analyzed using multilevel modeling. Results: The overall rate of self-reported arthritis was 23% (95% confidence interval [95% CI] 22–24). After adjustment for sociodemographic factors, arthritis prevalence was greatest for women (odds ratio [OR] 1.5, 95% CI 1.4–1.7) and among those ages 60–65 years (OR 4.4, 95% CI 3.7–5.2), those with a diploma/associate diploma (OR 1.3, 95% CI 1.1–1.6), those who were permanently unable to work (OR 4.0, 95% CI 3.1–5.3), and those with a household income <$25,999 (OR 2.1, 95% CI 1.7–2.6). Independent of individual-level factors, residents of the most disadvantaged neighborhoods were 42% (OR 1.4, 95% CI 1.2–1.7) more likely than those in the least disadvantaged neighborhoods to self-report arthritis. Cross-level interactions between neighborhood disadvantage and education, occupation, and household income were not significant. Conclusion: Arthritis prevalence is greater in more socially disadvantaged neighborhoods. These are the first multilevel data to examine the relationship between individual- and neighborhood-level disadvantage and arthritis, and they have important implications for policy, health promotion, and other intervention strategies designed to reduce the rates of arthritis, indicating that intervention efforts may need to focus on both people and places.
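For reference, odds ratios and their 95% confidence intervals of the kind quoted above follow the standard 2x2-table formula; the counts below are invented for illustration:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
        a = exposed cases,     b = exposed non-cases,
        c = unexposed cases,   d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: arthritis by most vs least disadvantaged neighborhood
print(odds_ratio_ci(300, 800, 220, 880))   # OR 1.5 with its 95% CI
```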