112 results for Uncertainty of forecasts
Abstract:
The call for enhanced financial literacy amongst consumers is a global phenomenon, driven by the growing complexity of financial markets and products, and government concerns about the affordability of supporting an ageing population. Worldwide, defined benefit pensions are giving way to the risk and uncertainty of defined contribution superannuation/pension funds, where fund members now make choices and decisions that were once made on their behalf. An important prerequisite for informed financial decision-making is adequate financial knowledge and skills to make competent investment decisions. This paper reports the findings of an online survey of the members of a large Australian public sector-based superannuation fund and shows that although respondents generally understand basic financial matters, on average, their understanding of investment concepts, such as the relationship between risk and returns, is inadequate. These results highlight the need for education programs focusing specifically on developing fund members’ investment knowledge and skills to facilitate informed retirement savings decisions.
Abstract:
This chapter focuses on two challenges to science teachers’ knowledge that Fensham identifies as having recently emerged—one a challenge from beyond Science and the other a challenge from within Science. Both challenges stem from common features of contemporary society, namely, its complexity and uncertainty. Both also confront science teachers with teaching situations that contrast markedly with the simplicity and certainty that have been characteristic of most school science education, and hence both present new demands for science teachers’ knowledge and skill. The first, the challenge from without Science, comes from the new world of work and the “knowledge society”. Regardless of their success in traditional school learning, many young persons in many modern economies are now seen as lacking other knowledge and skills that are essential for their personal, social and economic life. The second, the challenge from within Science, derives from changing notions of the nature of science itself. If the complexity and uncertainty of the knowledge society demand new understandings and contributions from science teachers, these are certainly matched by the demands that are posed by the role of complexity and uncertainty in science itself.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many approaches to this dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the problem are well considered. These approaches employ either static and dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system; hence, they fail to take the time-varying and uncertain character of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of this research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimator; the QR recursive least squares (RLS) algorithm is applied within our closed-loop cache-aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is used to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experimental outcomes analysed. From these experiments, it is concluded that our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner cache-dependent thread instruction count to the co-runner-independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
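To make the estimation step described in this abstract concrete, the sketch below shows a conventional exponentially weighted recursive least squares (RLS) tracker of the kind such a framework relies on. This is illustrative only: the class interface, model order, and forgetting factor are assumptions, and the thesis itself uses a QR-factorized variant of RLS for numerical stability.

```python
# A minimal exponentially weighted RLS sketch for tracking a thread's
# time-varying cache miss dynamics. Illustrative assumptions throughout.
import numpy as np

class RLSEstimator:
    def __init__(self, n_params, forgetting=0.98, delta=100.0):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = delta * np.eye(n_params)    # inverse correlation matrix
        self.lam = forgetting                # forgetting factor (< 1 tracks drift)

    def update(self, phi, y):
        """phi: regressor vector (e.g. lagged miss counts); y: new observation."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        err = y - phi @ self.theta                          # prediction error
        self.theta += k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: track an AR(2) model of a thread's cache miss time series.
rls = RLSEstimator(n_params=2)
misses = [120, 131, 118, 140, 155, 149, 162]
for t in range(2, len(misses)):
    rls.update([misses[t - 1], misses[t - 2]], misses[t])
print(rls.theta)  # estimated AR coefficients
```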
Abstract:
Exploiting wind energy is one possible way to extend flight duration for unmanned aerial vehicles. Wind energy can also be used to minimise energy consumption along a planned path. In this paper, we consider uncertain time-varying wind fields and plan a path through them. A Gaussian distribution is used to characterise the uncertainty in the time-varying wind fields, and a Markov decision process is used to plan a path based upon that uncertainty. Simulation results are presented comparing the direct line of flight between start and target points with our planned path, in terms of energy consumption and time of travel. The result is a robust path built from the most-visited cells while sampling the Gaussian distribution of the wind field in each cell.
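As an illustration of the sampling idea this abstract describes, the sketch below draws many realisations of a Gaussian wind-cost field over a grid, plans a cheapest path for each realisation, and reads off the most-visited cells as the robust corridor. The grid size, 4-connected moves, and cost model are assumptions for illustration; the paper's actual formulation is a Markov decision process over the wind field.

```python
# Sample Gaussian wind costs per cell, plan per realisation, count visits.
import heapq
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
mu = rng.uniform(1.0, 5.0, size=(H, W))     # mean energy cost per cell
sigma = rng.uniform(0.1, 1.5, size=(H, W))  # wind uncertainty per cell

def cheapest_path(cost, start=(0, 0), goal=(H - 1, W - 1)):
    """Dijkstra over the 4-connected grid for one sampled cost field."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

visits = np.zeros((H, W))
for _ in range(200):                          # sample wind-field realisations
    sampled = rng.normal(mu, sigma).clip(min=0.01)
    for r, c in cheapest_path(sampled):
        visits[r, c] += 1
# Cells with the highest visit counts form the robust corridor.
print(np.argwhere(visits > 0.5 * visits.max()))
```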
Abstract:
The UN Convention on the Rights of Persons with Disabilities (CRPD) promotes equal and full participation by children in education. Equity of educational access for all students, including students with disability, free from discrimination, is the first stated national goal of Australian education (MCEETYA 2008). Australian federal disability discrimination law, the Disability Discrimination Act 1992 (DDA), follows the Convention, with the federal Disability Standards for Education 2005 (DSE) enacting specific requirements for education. This article discusses equity of processes for the inclusion of students with disability in Australian educational accountability testing, including international tests in which many countries participate. The conclusion drawn is that equitable inclusion of students with disability in current Australian educational accountability testing is not occurring from a social perspective and is not, in principle, compliant with the law. However, given the reluctance of courts to intervene in education matters and the uncertainty of the outcome of any court consideration, the discussion shows that equitable inclusion in accountability systems is available through policy change rather than expensive, and possibly unsuccessful, legal challenges.
Abstract:
Speaker diarization is the process of annotating an input audio recording with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems that allow users to directly access the relevant segments of interest within a given audio recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in the automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meetings audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was investigated first, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed to address the difficulties of making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task.
Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
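To make the style of decision criterion concrete, the sketch below shows a classic penalized-likelihood (ΔBIC) merge test between two Gaussian-modelled speech segments. The thesis's Bayes factor criterion plays an analogous role but additionally integrates over the uncertainty in the model estimates; this simpler form is shown only as an illustrative stand-in, and the data and penalty weight are assumptions.

```python
# Decide whether two segments share a speaker via a ΔBIC merge test
# under full-covariance Gaussian segment models.
import numpy as np

def gauss_loglik(X):
    """Log-likelihood of data X under its own ML full-covariance Gaussian."""
    n, d = X.shape
    cov = np.cov(X, rowvar=False, bias=True) + 1e-6 * np.eye(d)
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * n * (d * np.log(2 * np.pi) + logdet + d)

def delta_bic(X1, X2, lam=1.0):
    """Positive value favours merging the two segments (same speaker)."""
    n1, d = X1.shape
    merged = gauss_loglik(np.vstack([X1, X2]))
    separate = gauss_loglik(X1) + gauss_loglik(X2)
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n1 + X2.shape[0])
    return merged - separate + penalty

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(200, 4))  # segment from speaker A
b = rng.normal(0.0, 1.0, size=(150, 4))  # another segment, same statistics
c = rng.normal(3.0, 1.0, size=(150, 4))  # segment from a different speaker
print(delta_bic(a, b) > 0)  # True: merge
print(delta_bic(a, c) > 0)  # False: keep separate
```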
Abstract:
This research is part of a major project whose stimulus arose from the need to manage a large number of ageing bridges on low traffic volume roads (LTVR) in Australia. The project investigated, designed and ultimately constructed a replacement for the ageing superstructure of a 10 m span bridge using a disused Flat-bed Rail Wagon (FRW). This research is therefore developed on the premise that the FRW can be adopted as the main structural system for bridges in the LTVR network. The main focus of this research is to present two alternative deck wearing systems (DWS) as part of the design of the FRW as a road bridge deck conforming to AS5100 (2004). The bare FRW structural components were first examined for their adequacy (ultimate and serviceability) in resisting the critical loads specified in AS5100 (2004). Two DWS options were evaluated and their effects on the FRW examined. The first option used a timber DWS; the idea was to use all the primary and secondary members of the FRW in load sharing and to provide additional members where weaknesses in the original members arose. The second option used a reinforced concrete DWS with only the primary members of the FRW sharing the AS5100 (2004) loading. This option inherently minimised the risk associated with any uncertainty about the structural adequacy of the secondary members. This thesis reports the design phases of both options and concludes with the selection of the better option for structural performance, ease of construction and cost. The comparison focuses on the distribution of the traffic load by the FRW as a superstructure. Advantages and disadvantages, highlighting cost comparisons and ease of constructability of the two systems, are also included.
Abstract:
Medical research represents a substantial departure from conventional medical care. Medical care is patient-orientated, with decisions based on the best interests and/or wishes of the person receiving the care. In contrast, medical research is future-directed. Primarily it aims to contribute new knowledge about illness or disease, or new knowledge about interventions, such as drugs, that impact upon some human condition. Current State and Territory laws and research ethics guidelines in Australia relating to the review of medical research appropriately acknowledge that the functions of medical care and medical research differ. Prior to a medical research project commencing, the study must be reviewed and approved by a Human Research Ethics Committee (HREC). For medical research involving incompetent adults, some jurisdictions require an additional, independent safeguard by way of tribunal or court approval of medical research protocols. This extra review process reflects the uncertainty of medical research involvement, and the difficulties surrogate decision-makers of incompetent adults face in making decisions about others, and deliberating about the risks and benefits of research involvement. Parents of children also face the same difficulties when making decisions about their child’s research involvement. However, unlike the position concerning incompetent adults, there are no similar safeguards under Australian law in relation to the approval of medical research involving children. This column questions why this discrepancy exists with a view to generating further dialogue on the topic.
Abstract:
Purpose: The goal of this work was to set out a methodology for measuring and reporting small field relative output, and to assess the application of published correction factors across a population of linear accelerators. Methods and materials: Measurements were made at 6 MV on five Varian iX accelerators using two PTW T60017 unshielded diodes. Relative output readings and profile measurements were made for nominal square field sizes of side 0.5 to 1.0 cm. The actual in-plane (A) and cross-plane (B) field widths were taken to be the FWHM at the 50% isodose level. An effective field size, defined as FSeff = √(A·B), was calculated and is presented as a field size metric. FSeff was used to linearly interpolate between published Monte Carlo (MC) calculated k_{Qclin,Qmsr}^{fclin,fmsr} values to correct for the diode over-response in small fields. Results: The relative output data reported as a function of the nominal field size differed across the accelerator population by up to nearly 10%. However, using the effective field size for reporting showed that the actual output ratios were consistent across the accelerator population to within the experimental uncertainty of ±1.0%. Correcting the measured relative output using k_{Qclin,Qmsr}^{fclin,fmsr} at both the nominal and effective field sizes produced output factors that were not identical but differed by much less than the reported experimental and/or MC statistical uncertainties. Conclusions: In general, the proposed methodology removes much of the ambiguity in reporting and interpreting small field dosimetric quantities and facilitates a clear dosimetric comparison across a population of linacs.
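The reporting-and-correction workflow described here can be sketched as follows. The table values in the sketch are placeholders for illustration only, not real published Monte Carlo data (real use requires the published factors for the specific detector), and the function names are illustrative.

```python
# Compute effective field size from measured FWHM, then interpolate a
# diode correction factor against a (placeholder) published table.
import numpy as np

def effective_field_size(a_cm, b_cm):
    """Geometric mean of the in-plane (A) and cross-plane (B) FWHM."""
    return np.sqrt(a_cm * b_cm)

# Placeholder table: (effective field size in cm, correction factor k).
fs_table = np.array([0.5, 0.6, 0.8, 1.0])
k_table = np.array([0.95, 0.96, 0.98, 1.00])  # NOT real published values

def corrected_output(reading_ratio, a_cm, b_cm):
    fs_eff = effective_field_size(a_cm, b_cm)
    k = np.interp(fs_eff, fs_table, k_table)   # linear interpolation in FSeff
    return reading_ratio * k

print(corrected_output(0.70, a_cm=0.52, b_cm=0.55))
```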
Abstract:
Techniques for evaluating and selecting multivariate volatility forecasts are not yet understood as well as their univariate counterparts. This paper considers the ability of different loss functions to discriminate between a set of competing forecasting models which are subsequently applied in a portfolio allocation context. It is found that a likelihood-based loss function outperforms its competitors, including those based on the given portfolio application. This result indicates that considering the particular application of forecasts is not necessarily the most effective basis on which to select models.
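For context, a likelihood-based loss of the general kind found here to discriminate best can be illustrated as the average negative Gaussian log-likelihood of realised returns under a candidate forecast covariance. The specification and simulated data below are assumptions for illustration, not the paper's exact setup.

```python
# Rank covariance forecasts by a Gaussian likelihood-based loss.
import numpy as np

def qlike_loss(sigma_forecast, returns):
    """Average of log|S| + r' S^{-1} r over the evaluation sample."""
    sign, logdet = np.linalg.slogdet(sigma_forecast)
    inv = np.linalg.inv(sigma_forecast)
    return np.mean([logdet + r @ inv @ r for r in returns])

rng = np.random.default_rng(2)
true_cov = np.array([[1.0, 0.3], [0.3, 0.5]])
returns = rng.multivariate_normal([0, 0], true_cov, size=500)

good_forecast = true_cov   # forecast matching the data-generating process
bad_forecast = np.eye(2)   # misspecified forecast
print(qlike_loss(good_forecast, returns) < qlike_loss(bad_forecast, returns))  # True
```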
Abstract:
Introduction: The dose to the skin surface is an important factor for many radiotherapy treatment techniques. It is known that TPS-predicted surface doses can differ significantly from actual ICRP skin doses, defined at a depth of 70 μm. A number of methods have been implemented for the accurate determination of surface dose, including the use of specific dosimeters such as TLDs and radiochromic film, as well as Monte Carlo calculations. Stereotactic radiosurgery involves delivering very high doses per treatment fraction using small X-ray fields. To date, there has been limited data on surface doses for these very small field sizes. The purpose of this work is to evaluate surface doses by both measurement and Monte Carlo calculation for very small field sizes. Methods: All measurements were performed on a Novalis Tx linear accelerator, which has a 6 MV SRS X-ray beam mode using a special thin flattening filter. Beam collimation was achieved by circular cones with apertures giving field sizes ranging from 4 to 30 mm at the isocentre. The relative surface doses were measured using Gafchromic EBT3 film, which has its active layer at a depth similar to the ICRP skin dose depth. Monte Carlo calculations were performed using the BEAMnrc/EGSnrc Monte Carlo codes (V4 r225). The specifications of the linear accelerator, including the collimator, were provided by the manufacturer. Optimisation of the incident X-ray beam was achieved by iterative adjustment of the energy, spatial distribution and radial spread of the incident electron beam striking the target. The energy cutoff parameters were PCUT = 0.01 MeV and ECUT = 0.700 MeV. Directional bremsstrahlung splitting was switched on for all BEAMnrc calculations. Relative surface doses were determined in a layer defined in a water phantom with the same thickness and depth as the active layer in the film. Results: Measured surface doses using the EBT3 film varied between 13% and 16% for the different cones, with an uncertainty of 3%. Monte Carlo calculated surface doses agreed with the measured doses to better than 2% for all treatment cones. Discussion and conclusions: This work has shown the consistency of surface dose measurements using EBT3 film with Monte Carlo predicted values, within the uncertainty of the measurements. As such, EBT3 film is recommended for in vivo surface dose measurements.
Abstract:
In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because a plethora of uncertainties are involved. To maximize profits and minimize risks, LSEs need an optimal strategy for allocating the purchased electricity amount across different electricity markets, such as the spot market, the bilateral contract market, and the options market. Because risks originate from uncertainties, an approach is presented that addresses the risk evaluation problem through the combined use of the lower partial moment and information entropy (LPME). The lower partial moment measures the amount and probability of the loss, whereas the information entropy represents the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets, subject to the constraint that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, a sample example is used to illustrate the basic features of the developed model and method.
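The two ingredients of the LPME risk measure can be illustrated on simulated profit scenarios: a lower partial moment capturing the size and probability of profit shortfall below a target, and the information entropy of the shortfall distribution capturing its uncertainty. The scenario data, discretisation, and the way the two quantities would be combined are assumptions for illustration, not the paper's formulation.

```python
# Lower partial moment and shortfall entropy on simulated profit scenarios.
import numpy as np

def lower_partial_moment(profits, target, order=2):
    """E[max(target - profit, 0)^order]: penalises shortfalls only."""
    shortfall = np.maximum(target - np.asarray(profits), 0.0)
    return np.mean(shortfall ** order)

def shortfall_entropy(profits, target, bins=10):
    """Shannon entropy of the discretised shortfall distribution."""
    shortfall = np.maximum(target - np.asarray(profits), 0.0)
    p, _ = np.histogram(shortfall, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Simulated profit scenarios for one candidate purchasing portfolio.
rng = np.random.default_rng(3)
profits = rng.normal(100.0, 15.0, size=1000)
target = 90.0
print(lower_partial_moment(profits, target))  # amount/probability of loss
print(shortfall_entropy(profits, target))     # uncertainty of the loss
```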
Abstract:
The cultural and creative industries are closely intertwined with government. This chapter reviews key economic rationales for public policy interventions in the arts, cultural and creative industries. Market failure justifications rest on the status of arts and culture as non-rival public goods or ‘merit goods’, on the need to moderate the effects of up-front investment costs or monopoly, and on the inherent uncertainty of creative production. ‘Systems failure’ is also a regular rationale for policy intervention. Using the United Kingdom as an example, the chapter shows how the emphasis on these rationales has shifted over the last three decades, first in the context of industrial policies for traditional aims such as exports and job growth, joined in recent years by the need for investment in intangibles, knowledge exchange, and spillover effects in the wider economy.
Abstract:
Over the past few decades, the development of efficient methods for solving dynamic facility layout problems has received significant attention from practitioners and researchers. In particular, meta-heuristic algorithms, especially the genetic algorithm, have proven increasingly helpful for generating sub-optimal solutions to large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a combined genetic algorithm–robust approach that can provide a single layout design valid across all periods. The present research devises a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that is expected to generate a unique robust layout for all the manufacturing periods. The numerical outcomes of the proposed robust genetic algorithm indicate significant cost improvements compared to conventional genetic algorithm methods and a selection of other heuristic and meta-heuristic techniques.
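The permutation-based robust evaluation at the core of such an approach can be sketched as follows: a single facility permutation is scored against the material-flow matrices of all periods at once, inside an otherwise conventional genetic algorithm. Problem size, operators, and parameters below are illustrative assumptions, not the paper's configuration.

```python
# Toy robust GA: one permutation is evaluated against every period's flows.
import numpy as np

rng = np.random.default_rng(4)
N, PERIODS = 8, 4
flows = rng.integers(0, 20, size=(PERIODS, N, N))             # per-period material flow
dist = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # single-row layout distances

def robust_cost(perm):
    """Flow-times-distance cost of one layout, summed over every period."""
    d = dist[np.ix_(perm, perm)]
    return sum(int((flows[t] * d).sum()) for t in range(PERIODS))

def order_crossover(p1, p2):
    """Order crossover (OX): the child stays a valid permutation."""
    a, b = sorted(rng.choice(N, size=2, replace=False))
    child = [-1] * N
    child[a:b] = list(p1[a:b])
    rest = [g for g in p2 if g not in child]
    return np.array([rest.pop(0) if c == -1 else c for c in child])

pop = [rng.permutation(N) for _ in range(40)]
for _ in range(200):
    pop.sort(key=robust_cost)
    survivors = pop[:10]                                      # elitist selection
    children = []
    while len(survivors) + len(children) < 40:
        child = order_crossover(survivors[rng.integers(10)], survivors[rng.integers(10)])
        i, j = rng.choice(N, size=2, replace=False)
        child[i], child[j] = child[j], child[i]               # swap mutation
        children.append(child)
    pop = survivors + children

best = min(pop, key=robust_cost)
print(best, robust_cost(best))  # one layout used unchanged in all periods
```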