167 results for stochastic linear programming
Abstract:
Reliable budget/cost estimates for road maintenance and rehabilitation are subject to uncertainties and variability in road asset condition and in the characteristics of road users. The CRC CI research project 2003-029-C ‘Maintenance Cost Prediction for Road’ developed a method for assessing variation and reliability in budget/cost estimates for road maintenance and rehabilitation, based on probability-based reliability theory and statistical methods. The next stage of the current project is to apply the developed method to predict maintenance/rehabilitation budgets/costs of large networks for strategic investment. The first task is to assess the variability of road data, and this report presents initial results of that analysis. A case study for dry non-reactive soil is presented to demonstrate the concept of analysing the variability of road data for large road networks. In assessing this variability, large road networks were divided into categories with common characteristics according to soil and climatic conditions, pavement conditions, pavement types, surface types and annual average daily traffic. The probability distributions, statistical means and standard deviations of asset conditions and annual average daily traffic for each category were quantified. The probability distributions and statistical information obtained in this analysis will be used to assess the variation and reliability in budget/cost estimates at a later stage. Traditionally, the mean values of asset data for each category are used as input values for investment analysis, and the variability of asset data within each category is not taken into account. The analysis demonstrates a method that can be applied in practice, taking the variability of road data into account when analysing large road networks for maintenance/rehabilitation investment.
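As a rough sketch of the kind of per-category quantification described above (not the project's actual tooling; column names such as category, roughness and aadt are hypothetical), the following Python computes the mean, standard deviation and a fitted candidate distribution for each road category.

```python
# Minimal sketch, not the project's tooling: per-category variability statistics for
# road asset data. Column names ("category", "roughness", "aadt") are hypothetical.
import pandas as pd
from scipy import stats

def summarise_categories(df: pd.DataFrame) -> pd.DataFrame:
    """Mean, standard deviation and a fitted normal distribution per road category."""
    rows = []
    for category, group in df.groupby("category"):
        for column in ("roughness", "aadt"):
            values = group[column].dropna()
            mu, sigma = stats.norm.fit(values)      # fit one candidate distribution
            rows.append({
                "category": category,
                "variable": column,
                "mean": values.mean(),
                "std": values.std(ddof=1),
                "fitted_mu": mu,
                "fitted_sigma": sigma,
            })
    return pd.DataFrame(rows)
```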
Abstract:
Novice programmers have difficulty developing an algorithmic solution while simultaneously obeying the syntactic constraints of the target programming language. To see how students fare in algorithmic problem solving when not burdened by syntax, we conducted an experiment in which a large class of beginning programmers were required to write a solution to a computational problem in structured English, as if instructing a child, without reference to program code at all. The students produced an unexpectedly wide range of correct, and attempted, solutions, some of which had not occurred to their teachers. We also found that many common programming errors were evident in the natural language algorithms, including failure to ensure loop termination, hardwiring of solutions, failure to properly initialise the computation, and use of unnecessary temporary variables, suggesting that these mistakes are caused by inexperience at thinking algorithmically, rather than difficulties in expressing solutions as program code.
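Two of the error categories named above can be illustrated with a small hypothetical fragment (not taken from the study): a summation loop that fails to initialise its accumulator and never terminates, alongside a corrected version.

```python
# Hypothetical illustration (not from the study): summing the digits of a number.

def digit_sum_buggy(n):
    # Failure to initialise: 'total' is never set before use.
    # Failure to ensure loop termination: 'n' is never reduced, so the loop never ends.
    while n > 0:
        total += n % 10
    return total

def digit_sum_fixed(n):
    total = 0              # initialise the accumulator
    while n > 0:
        total += n % 10
        n //= 10           # shrink n so the loop terminates
    return total
```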
Abstract:
The solution of linear ordinary differential equations (ODEs) is commonly taught in first year undergraduate mathematics classrooms, but the understanding of the concept of a solution is not always grasped by students until much later. Recognising what it is to be a solution of a linear ODE and how to postulate such solutions, without resorting to tables of solutions, is an important skill for students to carry with them to advanced studies in mathematics. In this study we describe a teaching and learning strategy that replaces the traditional algorithmic, transmission-style presentation of solving ODEs with a constructive, discovery-based approach where students employ their existing skills as a framework for constructing the solutions of first and second order linear ODEs. We elaborate on how the strategy was implemented and discuss the resulting impact on a first year undergraduate class. Finally, we propose further improvements to the strategy as well as suggesting other topics which could be taught in a similar manner.
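A standard textbook instance of postulating a solution, of the kind students construct under this approach (illustrative only, not an example from the paper): guessing an exponential for a constant-coefficient second-order linear ODE reduces it to an algebraic characteristic equation.

```latex
% Illustrative worked example: postulate y = e^{\lambda x}.
\[
  y'' + 3y' + 2y = 0, \qquad y = e^{\lambda x}
  \;\Longrightarrow\; \lambda^{2} + 3\lambda + 2 = 0
  \;\Longrightarrow\; \lambda = -1,\ -2,
\]
\[
  y(x) = C_{1} e^{-x} + C_{2} e^{-2x}.
\]
```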
Abstract:
This study focuses on trends in contemporary Australian playwriting, discussing recent investigations into the playwriting process. The study analyses the current state of this country’s playwriting industry, with a particular focus on programming trends since 1998. It seeks to explore the implications of this current theatrical climate, in particular the types of work most commonly being favoured for production. It argues that Australian plays are under-represented (compared to non-Australian plays) on ‘mainstream’ stages and that audiences might benefit from more challenging modes of writing than the popular three-act realist play model. The thesis argues that ‘New Lyricism’ might fill this gap by offering an innovative Australian playwriting mode. New Lyricism is characterised by a set of common aesthetics, including a non-linear narrative structure, a poetic use of language and magic realism. Several Australian playwrights who have adopted this mode of writing are identified and their works examined. The author’s play Floodlands is presented as a case study, and the author’s creative process is examined in light of the published critical discussions about experimental playwriting work.
Abstract:
How and why visualisations support learning was the subject of this qualitative instrumental collective case study. Five computer programming languages (PHP, Visual Basic, Alice, GameMaker, and RoboLab) supporting differing degrees of visualisation were used as cases to explore the effectiveness of software visualisation in developing fundamental computer programming concepts (sequence, iteration, selection, and modularity). Cognitive theories of visual and auditory processing, cognitive load, and mental models provided a framework in which the cognitive development of thirty-one 15–17 year old students, drawn from a Queensland metropolitan private girls’ secondary school and involved as active participants in the research, was tracked and measured. Seventeen findings in three sections increase our understanding of the effects of visualisation on the learning process. The study extended the use of mental model theory to track the learning process, and demonstrated the application of student-led, research-based metacognitive analysis of individual and peer cognitive development as a means to support research and as an approach to teaching. The findings also put forward an explanation for failures in previous software visualisation studies; in particular, the study demonstrated that, for the cases examined, where complex concepts are being developed, the mixing of auditory (or text) and visual elements can result in excessive cognitive load and impede learning. This finding provides a framework for selecting the most appropriate instructional programming language based on the cognitive complexity of the concepts under study.
Abstract:
Traditionally, the acquisition of skills and sport movements has been characterised by numerous repetitions of a presumed model movement pattern to be acquired by learners. This approach has been questioned by research identifying the presence of individualised movement patterns and the low probability of two identical movements occurring within and between individuals. In contrast, the differential learning approach claims advantages for inducing variability in the learning process by adding stochastic perturbations during practice. These ideas are exemplified by data from a high jump experiment which compared the effectiveness of a classical and a differential training approach with a pre-post test design. Results showed clear advantages for the group with additional stochastic perturbations during the acquisition phase in comparison to classically trained athletes. Analogies to similar phenomenological effects in the neurobiological literature are discussed.
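Purely as a conceptual illustration of "stochastic perturbations during practice" (not the experiment's protocol; parameter names and noise magnitudes are invented), the sketch below varies nominal movement parameters randomly so that no two practice trials prescribe exactly the same movement.

```python
# Conceptual sketch only, not the experiment's protocol: generate practice trials by
# adding stochastic perturbations to nominal movement parameters (names are hypothetical).
import numpy as np

rng = np.random.default_rng(seed=1)
nominal = {"approach_speed_mps": 7.5, "takeoff_angle_deg": 48.0, "arm_swing_deg": 30.0}

def perturbed_trials(nominal, n_trials=20, rel_noise=0.05):
    """Each trial varies every parameter by ~5% Gaussian noise around its nominal value."""
    return [
        {name: value * (1 + rng.normal(0.0, rel_noise)) for name, value in nominal.items()}
        for _ in range(n_trials)
    ]

for trial in perturbed_trials(nominal, n_trials=3):
    print(trial)
```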
Abstract:
Poor student engagement and high failure rates in first year units were addressed at the Queensland University of Technology (QUT) with a course restructure involving a fresh approach to introducing programming. Students’ first taste of programming in the new course focused less on the language and syntax, and more on problem solving and design, and on the role of programming in relation to other technologies they are likely to encounter in their studies. In effect, several technologies that have historically been compartmentalised and taught in isolation have been brought together as a breadth-first introduction to IT. Incorporating databases and Web development technologies into what used to be a purely programming unit gave students a very short introduction to each technology, with programming acting as the glue between them. As a result, students not only had a clearer understanding of the application of programming in the real world, but were also able to determine their preference, or otherwise, for each of the technologies introduced, which will help them when the time comes to choose a course major. Students engaged well in an intensely collaborative learning environment for this unit, which was designed both to support the needs of students and to meet industry expectations. Attrition from the unit was low, with computer laboratory practical attendance rates remaining high throughout the semester for the first time, and the failure rate falling to a single-figure percentage.
Abstract:
Invited one-hour presentation at Microsoft Tech Ed 2009 on getting students interested in games programming at QUT.
Abstract:
In this paper, we consider the following non-linear fractional reaction–subdiffusion process (NFR-SubDP): [formula not reproduced here] where $f(u, x, t)$ is a linear function of $u$, the function $g(u, x, t)$ satisfies the Lipschitz condition, and ${}_{0}D_{t}^{1-\gamma}$ is the Riemann–Liouville time-fractional partial derivative of order $1-\gamma$. We propose a new, computationally efficient numerical technique to simulate the process. Firstly, the NFR-SubDP is decoupled, which is equivalent to solving a non-linear fractional reaction–subdiffusion equation (NFR-SubDE). Secondly, we propose an implicit numerical method to approximate the NFR-SubDE. Thirdly, the stability and convergence of the method are discussed using a new energy method. Finally, some numerical examples are presented to show the application of the present technique. This method and the supporting theoretical results can also be applied to fractional integro-differential equations.
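The abstract does not reproduce the scheme, so the following is only a hedged sketch of what an implicit finite-difference method for an assumed linear time-fractional subdiffusion equation can look like, using Grünwald–Letnikov weights for the Riemann–Liouville derivative; it is not the authors' NFR-SubDP method, and the equation form is assumed.

```python
# Minimal sketch, NOT the paper's method: an implicit scheme for an assumed equation
#     du/dt = 0D_t^{1-gamma}[ K d^2u/dx^2 ] + f(x, t),   0 < gamma < 1,
# with homogeneous Dirichlet boundaries. Grunwald-Letnikov weights approximate the
# Riemann-Liouville derivative; every name here is illustrative.
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), k = 0..n."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def solve_subdiffusion(u0, K, gamma, dx, dt, n_steps, f=None):
    """March the implicit scheme; u0 holds initial values at the interior nodes.
    Optional forcing f(t) must return an array over the interior nodes."""
    m = len(u0)
    alpha = 1.0 - gamma                      # order of the RL derivative
    r = K * dt**gamma / dx**2
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1))      # second-difference matrix
    w = gl_weights(alpha, n_steps)
    history = [np.array(u0, dtype=float)]    # u^0, u^1, ... (interior nodes only)
    I = np.eye(m)
    for n in range(n_steps):
        # memory term: sum_{k=1}^{n+1} w_k * A u^{n+1-k}
        mem = sum(w[k] * (A @ history[n + 1 - k]) for k in range(1, n + 2))
        rhs = history[n] + r * mem
        if f is not None:
            rhs = rhs + dt * f((n + 1) * dt)
        u_next = np.linalg.solve(I - r * w[0] * A, rhs)   # implicit diffusion step
        history.append(u_next)
    return history

# Example use: interior nodes on (0, 1), a smooth initial condition, gamma = 0.7.
x = np.linspace(0, 1, 51)[1:-1]
u = solve_subdiffusion(np.sin(np.pi * x), K=1.0, gamma=0.7,
                       dx=x[1] - x[0], dt=1e-3, n_steps=50)
```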
Abstract:
The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
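A minimal sketch of the underlying construction, assuming a first-order truncated Taylor expansion over scattered neighbouring points (generic least-squares gradient estimation, not the paper's exact algorithm or its error bound):

```python
# Minimal sketch (not the paper's exact algorithm): estimate a function gradient at x0
# by least squares from a first-order truncated Taylor expansion over nearby points.
import numpy as np

def ls_gradient(f, x0, neighbours):
    """Solve  min_g || D g - df ||_2,  with D[i] = x_i - x0 and df[i] = f(x_i) - f(x0)."""
    x0 = np.asarray(x0, dtype=float)
    D = np.array([np.asarray(x, dtype=float) - x0 for x in neighbours])
    df = np.array([f(x) - f(x0) for x in neighbours])
    g, *_ = np.linalg.lstsq(D, df, rcond=None)
    # The smallest singular value of D enters the denominator of the error bound
    # discussed above; a tiny value signals an ill-conditioned neighbour geometry.
    sigma_min = np.linalg.svd(D, compute_uv=False)[-1]
    return g, sigma_min

# Example: f(x, y) = x^2 + 3y; the true gradient at (1, 2) is (2, 3).
f = lambda p: p[0]**2 + 3 * p[1]
nbrs = [(1.01, 2.0), (0.99, 2.0), (1.0, 2.01), (1.0, 1.99), (1.01, 2.01)]
grad, s_min = ls_gradient(f, (1.0, 2.0), nbrs)
```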
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and for these series heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann–Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss–Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We will pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models will be employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify discriminant accuracy. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA will be provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodogram, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
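As a rough sketch of the scaling analysis underlying MF-DFA, the Python below implements only the standard second-order case (q = 2, i.e. ordinary DFA); the thesis uses the full multifractal generalisation, and this is not its implementation.

```python
# Minimal sketch of standard (second-order) detrended fluctuation analysis, the q = 2
# special case of the MF-DFA procedure referred to above; not the thesis' implementation.
import numpy as np

def dfa_exponent(series, scales=None, poly_order=1):
    """Estimate the DFA scaling exponent of a one-dimensional series."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())                 # integrated, mean-removed profile
    n = len(profile)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(10), np.log10(n // 4), 15).astype(int))
    flucts = []
    for s in scales:
        rms = []
        for i in range(n // s):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, poly_order), t)   # local polynomial detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # Slope of log F(s) vs log s: ~0.5 for uncorrelated noise, >0.5 for persistent long memory.
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Example: white noise should yield an exponent near 0.5.
rng = np.random.default_rng(0)
print(dfa_exponent(rng.normal(size=5000)))
```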