77 results for [JEL:C20] Mathematical and Quantitative Methods - Econometric Methods: Single Equation Models
Abstract:
During the past three decades, the subject of fractional calculus (that is, calculus of integrals and derivatives of arbitrary order) has gained considerable popularity and importance, mainly due to its demonstrated applications in numerous diverse and widespread fields in science and engineering. For example, fractional calculus has been successfully applied to problems in systems biology, physics, chemistry and biochemistry, hydrology, medicine, and finance. In many cases these new fractional-order models are more adequate than the previously used integer-order models, because fractional derivatives and integrals enable the description of the memory and hereditary properties inherent in various materials and processes that are governed by anomalous diffusion. Hence, there is a growing need to find the solution behaviour of these fractional differential equations. However, the analytic solutions of most fractional differential equations generally cannot be obtained. As a consequence, approximate and numerical techniques are playing an important role in identifying the solution behaviour of such fractional equations and exploring their applications. The main objective of this thesis is to develop new effective numerical methods and supporting analysis, based on the finite difference and finite element methods, for solving time, space and time-space fractional dynamical systems involving fractional derivatives in one and two spatial dimensions. A series of five published papers and one manuscript in preparation will be presented on the solution of the space fractional diffusion equation, space fractional advection-dispersion equation, time and space fractional diffusion equation, time and space fractional Fokker-Planck equation with a linear or non-linear source term, and fractional cable equation involving two time fractional derivatives, respectively. One important contribution of this thesis is the demonstration of how to choose different approximation techniques for different fractional derivatives. Special attention has been paid to the Riesz space fractional derivative, due to its important applications in the fields of groundwater flow, systems biology and finance. We present three numerical methods to approximate the Riesz space fractional derivative, namely the L1/L2-approximation method, the standard/shifted Grünwald method, and the matrix transform method (MTM). The first two methods are based on the finite difference method, while the MTM allows discretisation in space using either the finite difference or finite element methods. Furthermore, we prove the equivalence of the Riesz fractional derivative and the fractional Laplacian operator under homogeneous Dirichlet boundary conditions, a result that had not previously been established. This result justifies the aforementioned use of the MTM to approximate the Riesz fractional derivative. After spatial discretisation, the time-space fractional partial differential equation is transformed into a system of fractional-in-time differential equations. We then investigate numerical methods to handle time fractional derivatives, whether of Caputo or Riemann-Liouville type. This leads to new methods utilising either finite difference strategies or the Laplace transform method for advancing the solution in time. The stability and convergence of our proposed numerical methods are also investigated. Numerical experiments are carried out in support of our theoretical analysis.
We also emphasise that the numerical methods we develop are applicable to many other types of fractional partial differential equations.
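For reference, the Riesz space fractional derivative named above and its standard/shifted Grünwald discretisation take the following well-known forms; this is our notation as a sketch of the standard definitions, not a reproduction of the thesis's own derivations. Here 1 < α ≤ 2, h is the spatial mesh size, x_i = ih and the domain is [0, L]:

\[
\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}
  = -\frac{1}{2\cos(\pi\alpha/2)} \left( {}_{0}D_{x}^{\alpha} u + {}_{x}D_{L}^{\alpha} u \right),
\qquad
{}_{0}D_{x}^{\alpha} u(x_i) \approx \frac{1}{h^{\alpha}} \sum_{k=0}^{i+1} g_{k}^{(\alpha)} \, u(x_{i-k+1}),
\quad
g_{k}^{(\alpha)} = (-1)^{k} \binom{\alpha}{k},
\]

with the right-sided operator {}_{x}D_{L}^{\alpha} discretised by the mirror-image shifted sum. The shift in the Grünwald weights is the usual reason this form is preferred over the unshifted one, since it yields stable implicit time-stepping schemes for 1 < α ≤ 2.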
Abstract:
Evaluation, selection and ultimately decision making are important issues that engineers face over the course of long-running projects. Engineers apply mathematical and non-mathematical methods to make accurate and correct decisions whenever needed. Extensive as these methods are, the effect of the chosen method on the resulting outputs and decisions remains open to question. This is even more contentious where the evaluation is made among non-quantitative alternatives. In civil engineering and construction management problems, the criteria include both quantitative and qualitative ones, such as aesthetics, construction duration, building and operation costs, and environmental considerations. As a result, decision making frequently takes place among non-quantitative alternatives. Traditional comparison methods, built on clear-cut and inflexible mathematics, have long been criticized. This paper presents a brief review of traditional methods for evaluating alternatives. It also offers a new decision-making method using fuzzy calculations. The main focus of this research is engineering problems that have a flexible nature and vague boundaries. The suggested method makes the evaluation analysable for decision makers. It is also able to handle multi-criteria and multi-referee problems. To ease the calculations, a program named DeMA is introduced.
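To make the flavour of such fuzzy calculations concrete, the sketch below scores alternatives against weighted criteria using triangular fuzzy numbers and centroid defuzzification. It is a generic illustration of fuzzy multi-criteria evaluation only, not the DeMA program; the class and function names (TFN, score) and the example ratings and weights are our own assumptions.

from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (a, b, c): lower bound, modal value, upper bound."""
    a: float
    b: float
    c: float

    def __add__(self, other):
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def scale(self, w):
        return TFN(w * self.a, w * self.b, w * self.c)

    def centroid(self):
        # simple centroid defuzzification, used here only to rank alternatives
        return (self.a + self.b + self.c) / 3.0

def score(ratings, weights):
    """Weighted fuzzy score of one alternative: ratings are TFNs (one per
    criterion), weights are crisp importance weights summing to one."""
    total = TFN(0.0, 0.0, 0.0)
    for r, w in zip(ratings, weights):
        total = total + r.scale(w)
    return total

# Example: two alternatives rated against three criteria
# (cost, construction duration, aesthetics), each rating a TFN on a 1-9 scale.
weights = [0.5, 0.3, 0.2]
alt_a = [TFN(3, 5, 7), TFN(5, 7, 9), TFN(1, 3, 5)]
alt_b = [TFN(5, 7, 9), TFN(3, 5, 7), TFN(5, 7, 9)]
for name, ratings in [("Alternative A", alt_a), ("Alternative B", alt_b)]:
    print(name, round(score(ratings, weights).centroid(), 2))

Keeping the judgements fuzzy until the final defuzzified comparison is the property such methods aim to provide for qualitative criteria.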
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case, for which suites of sophisticated numerical methods exist, numerical methods for SODEs are much less well developed. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Ito) integrals, higher order methods could be constructed. However, this analysis applied only to the case of a single Wiener process. In this paper, it will be shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also suggested how this order can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
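As background to the commutativity issue, the sketch below integrates a two-dimensional test SDE driven by two Wiener processes with the baseline Euler-Maruyama scheme (strong order 0.5). It is not one of the stochastic Runge-Kutta methods discussed in the paper; the test matrices A, B1 and B2 are our own choice, picked so that the diffusion terms do not commute, which is exactly the situation in which higher-order schemes need extra iterated-integral (commutator) information.

import numpy as np

def euler_maruyama(f, g1, g2, x0, T, N, rng):
    """Strong order-0.5 Euler-Maruyama scheme for dX = f(X) dt + g1(X) dW1 + g2(X) dW2."""
    dt = T / N
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(N):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x = x + f(x) * dt + g1(x) * dW1 + g2(x) * dW2
        path.append(x.copy())
    return np.array(path)

# A linear test problem with non-commuting noise: B1 @ B2 - B2 @ B1 != 0, so
# a higher strong order would require approximating the Levy-area integrals.
A  = np.array([[0.0, 1.0], [-1.0, 0.0]])
B1 = np.array([[0.2, 0.0], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [0.3, 0.0]])

path = euler_maruyama(lambda x: A @ x,
                      lambda x: B1 @ x,
                      lambda x: B2 @ x,
                      x0=[1.0, 0.0], T=1.0, N=1000,
                      rng=np.random.default_rng(0))
print(path[-1])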
Abstract:
Quantitative market data has traditionally been used throughout marketing and business as a tool to inform and direct design decisions. However, in our changing economic climate, businesses need to innovate and create products their customers will love. Deep customer insight methods move beyond simply questioning customers and aim to provoke true emotional responses in order to reveal new opportunities that go beyond functional product requirements. This paper explores traditional market research methods and compares them to methods used to gain deep customer insights. This study reports on a collaborative research project with seven small to medium enterprises and four multi-national organisations. Firms were introduced to a design-led innovation approach and were taught different methods for gaining deep customer insights. Interviews were conducted to understand the experience and outcomes of pre-existing research methods and deep customer insight approaches. The findings indicate that deep customer insights are unlikely to be revealed through traditional market research techniques. The theoretical outcome of this study is a complementary methods matrix, providing guidance on appropriate research methods in accordance with a project's timeline.
ADI-Euler and extrapolation methods for the two-dimensional fractional advection-dispersion equation
Abstract:
As part of ongoing research on the development of a longer-life insulated rail joint (IRJ), this paper reports a field experiment and a simplified 2D numerical model for investigating the behaviour of the rail web in the vicinity of the endpost of an IRJ due to wheel passages. A simplified 2D plane stress finite element model is used to simulate the wheel-rail rolling contact impact at the IRJ. This model is validated using data from a strain-gauged IRJ installed in a heavy haul network; the vertical and shear strains at specific positions of the IRJ during train passage were captured and compared with the results of the FE model. The comparison indicates satisfactory agreement between the FE model and the field testing. Furthermore, it demonstrates that the experimental and numerical analyses reported in this paper provide a valuable reference for developing further insight into the behaviour of IRJs under wheel impacts.
Abstract:
The United States Supreme Court has handed down a once-in-a-generation patent law decision that will have important ramifications for the patentability of non-physical methods, both internationally and in Australia. In Bilski v Kappos, the Supreme Court considered whether an invention must either be tied to a machine or apparatus, or transform an article into a different state or thing, to be patentable. It also considered for the first time whether business methods are patentable subject matter. The decision will be of particular interest to practitioners who followed the litigation in Grant v Commissioner of Patents, a Federal Court decision in which a Brisbane-based inventor was denied a patent over a method of protecting an asset from the claims of creditors.
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, for large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
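To illustrate the neighbourhood structure behind the CAR layered model described above, the sketch below builds a block-diagonal intrinsic-CAR precision matrix in which plots are neighbours only within their own depth layer, and each layer carries its own precision parameter. It assumes a regular rectangular grid of plots at every depth, which is our simplification; the function names and the intrinsic CAR form tau_d * (D - W) are ours, not taken from the thesis or from pyMCMC.

import numpy as np
from scipy.linalg import block_diag

def grid_adjacency(nrow, ncol):
    """Rook-neighbour adjacency matrix for an nrow x ncol grid (one depth layer)."""
    n = nrow * ncol
    W = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            if c + 1 < ncol:          # east neighbour, same layer
                W[i, i + 1] = W[i + 1, i] = 1.0
            if r + 1 < nrow:          # south neighbour, same layer
                W[i, i + ncol] = W[i + ncol, i] = 1.0
    return W

def layered_car_precision(nrow, ncol, taus):
    """Block-diagonal precision over depth layers: layer d gets tau_d * (D - W),
    so the spatially structured variance is free to differ with depth."""
    W = grid_adjacency(nrow, ncol)
    Q_layer = np.diag(W.sum(axis=1)) - W      # intrinsic CAR structure D - W
    return block_diag(*[tau * Q_layer for tau in taus])

Q = layered_car_precision(nrow=4, ncol=5, taus=[1.0, 0.5, 0.25])
print(Q.shape)   # (60, 60): 3 depth layers x 20 plots, with no cross-layer neighbours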
Abstract:
Mixed methods research is the use of qualitative and quantitative methods in the same study to gain a more rounded and holistic understanding of the phenomena under investigation. This type of research approach is gaining popularity in the nursing literature as a way to understand the complexity of nursing care and as a means to enhance evidence-based practice. This paper introduces nephrology nurses to mixed methods research, its terminology and its application to nephrology nursing. Five common mixed methods designs will be described, highlighting the purposes, strengths and weaknesses of each design. Examples of mixed methods research will be given to illustrate the wide application of mixed methods research to nursing and its usefulness in nephrology nursing research.
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the basis for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods and then proceeds to describe an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer's 'Reptiles' project from 1968 is reinterpreted.
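As a concrete, if toy, picture of what a parametric representation carries beyond fixed geometry, the sketch below stores a component as driving parameters plus dependency rules, so that changing one parameter regenerates the dependent values. It is our own illustration of the idea, not part of the IFC schema or of the exchange format proposed in the paper.

class ParametricComponent:
    """A component described by driving parameters and dependency rules."""
    def __init__(self, params, rules):
        self.params = dict(params)   # independent (driving) parameters
        self.rules = rules           # dependent name -> function of the parameters

    def evaluate(self):
        derived = {name: fn(self.params) for name, fn in self.rules.items()}
        return {**self.params, **derived}

# Example: a window whose height and area are driven by its width.
window = ParametricComponent(
    params={"width": 1.2},
    rules={
        "height": lambda p: 1.5 * p["width"],
        "area":   lambda p: 1.5 * p["width"] ** 2,
    },
)
print(window.evaluate())
window.params["width"] = 0.9   # change the driving parameter...
print(window.evaluate())       # ...and the dependent values regenerate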
Abstract:
1. Autonomous acoustic recorders are widely available and can provide a highly efficient method of species monitoring, especially when coupled with software to automate data processing. However, the adoption of these techniques is restricted by a lack of direct comparisons with existing manual field surveys. 2. We assessed the performance of autonomous methods by comparing manual and automated examination of acoustic recordings with a field-listening survey, using commercially available autonomous recorders and custom call detection and classification software. We compared the detection capability, time requirements, areal coverage and weather condition bias of these three methods using an established call monitoring programme for a nocturnal bird, the little spotted kiwi (Apteryx owenii). 3. The autonomous recorder methods had very high precision (>98%) and required <3% of the time needed for the field survey. They were less sensitive, with visual spectrogram inspection recovering 80% of the total calls detected and automated call detection 40%, although this recall increased with signal strength. The areal coverage of the spectrogram inspection and automatic detection methods was 85% and 42% of that of the field survey, respectively. The methods using autonomous recorders were more adversely affected by wind and did not show the positive association between ground moisture and call rates that was apparent from the field counts. However, all methods produced the same result for the most important conservation information from the survey: the annual change in calling activity. 4. Autonomous monitoring techniques incur different biases from manual surveys and so can yield different ecological conclusions if sampling is not adjusted accordingly. Nevertheless, the sensitivity, robustness and high accuracy of automated acoustic methods demonstrate that they offer a suitable and extremely efficient alternative to field observer point counts for species monitoring.
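For clarity about the figures quoted above, precision and recall here follow their standard definitions in terms of true positives (TP), false positives (FP) and false negatives (FN):

\[
\text{precision} = \frac{TP}{TP + FP},
\qquad
\text{recall} = \frac{TP}{TP + FN},
\]

so precision above 98% means the automated methods almost never report a call that was not really present, while recall of 80% (spectrogram inspection) or 40% (automatic detection) is the fraction of the total detected calls that each method recovers.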
Abstract:
The methodology undertaken, the channel model and the system model created for developing a novel adaptive equalization method and a novel channel tracking method for the uplink of MU-MIMO-OFDM systems are presented in this paper. The results show that the channel tracking method works with 97% accuracy, while the training-based initial channel estimation method performs comparatively poorly in estimating the actual channel.
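As a generic point of comparison (not the paper's novel tracking or equalization method), the sketch below tracks a slowly varying channel coefficient on a single subcarrier with a data-aided LMS update and compares it against a one-shot training-based estimate that is never updated. The channel model, the step size mu and all variable names are our own assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_sym = 500
h_true = np.exp(1j * 0.01 * np.arange(n_sym))        # slowly rotating channel gain
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = rng.choice(qpsk, size=n_sym)               # transmitted QPSK symbols
noise = 0.05 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
y = h_true * symbols + noise                         # received samples

h_train = y[0] / symbols[0]     # one-shot training-based estimate (first symbol as pilot)
h_lms = h_train                 # LMS tracker initialised from the training estimate
mu = 0.1                        # LMS step size
err_lms, err_train = [], []
for k in range(1, n_sym):
    e = y[k] - h_lms * symbols[k]                    # a-priori error (symbols assumed known)
    h_lms = h_lms + mu * e * np.conj(symbols[k])     # complex LMS update
    err_lms.append(abs(h_lms - h_true[k]))
    err_train.append(abs(h_train - h_true[k]))

print("mean channel error, LMS tracking:    %.3f" % np.mean(err_lms))
print("mean channel error, static training: %.3f" % np.mean(err_train))

The tracked estimate follows the channel's drift while the static training estimate does not, which is the qualitative contrast the abstract reports between tracking and training-based initial estimation.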