156 results for Multiple-scale processing
in Queensland University of Technology - ePrints Archive
Abstract:
This paper introduces a straightforward method to asymptotically solve a variety of initial and boundary value problems for singularly perturbed ordinary differential equations whose solution structure can be anticipated. The approach is simpler than conventional methods, including those based on asymptotic matching or on eliminating secular terms. © 2010 by the Massachusetts Institute of Technology.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. The likelihood is estimated independently on the multiple CPUs, with the final estimate of the likelihood being the average of the estimates obtained from the individual CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this approach over the standard one is demonstrated on simulated data from a stochastic volatility model.
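A minimal sketch of the variance-reduction idea described above, simulated serially with toy numbers (`TRUE_LIK`, `NOISE_SD`, and the Gaussian noise model are illustrative assumptions, not from the paper):

```python
import random
import statistics

random.seed(1)

TRUE_LIK = 2.0   # hypothetical "true" likelihood value (toy number)
NOISE_SD = 0.5   # spread of a single unbiased estimate (toy number)

def single_estimate():
    """One noisy but unbiased estimate of the likelihood, standing in
    for e.g. an importance-sampling or particle-filter estimator."""
    return TRUE_LIK + random.gauss(0.0, NOISE_SD)

def pooled_estimate(n_cpus=8):
    """Average n_cpus independent estimates, as the paper proposes:
    still unbiased, but the variance drops by a factor of n_cpus."""
    return statistics.fmean(single_estimate() for _ in range(n_cpus))

single = [single_estimate() for _ in range(4000)]
pooled = [pooled_estimate(8) for _ in range(4000)]
# pvariance(pooled) should be roughly pvariance(single) / 8
```

In a real implementation each of the `n_cpus` estimates would run on its own CPU (e.g. with `multiprocessing.Pool`) rather than serially as simulated here.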
Abstract:
In this paper we introduce a new technique to obtain the slow-motion dynamics in nonequilibrium and singularly perturbed problems characterized by multiple scales. Our method is based on a straightforward asymptotic reduction of the order of the governing differential equation and leads to amplitude equations that describe the slowly varying envelope of a uniformly valid asymptotic expansion. Because of its relation to the Renormalization Group, this may constitute a simpler and, in certain cases, more general approach to deriving asymptotic expansions than other mainstream methods such as the method of Multiple Scales or Matched Asymptotic Expansions. We illustrate our method with a number of singularly perturbed problems for ordinary and partial differential equations and recover certain results from the literature as special cases. © 2010 - IOS Press and the authors. All rights reserved.
Abstract:
In this paper the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. Lett., 73 (1994), pp. 1311-1315; Phys. Rev. E, 54 (1996), pp. 376-394] is presented in a pedagogical way to increase its visibility in applied mathematics and to argue favorably for its incorporation into the corresponding graduate curriculum. The method is illustrated by some linear and nonlinear singular perturbation problems. © 2012 Society for Industrial and Applied Mathematics.
Abstract:
Following the derivation of amplitude equations through a new two-time-scale method [O'Malley, R. E., Jr. & Kirkinis, E. (2010) A combined renormalization group-multiple scale method for singularly perturbed problems. Stud. Appl. Math. 124, 383-410], we show that a multi-scale method may often be preferable to the method of matched asymptotic expansions for solving singularly perturbed problems. We illustrate this approach with 10 singularly perturbed ordinary and partial differential equations. © 2011 Cambridge University Press.
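As a standard textbook illustration of the two-time-scale idea behind these abstracts (the example is ours, not taken from the papers), consider the weakly damped oscillator:

```latex
% Weakly damped oscillator, 0 < \varepsilon \ll 1:
\ddot{y} + 2\varepsilon\dot{y} + y = 0, \qquad y(0)=0,\ \dot{y}(0)=1.

% A naive expansion y = y_0 + \varepsilon y_1 + \cdots gives
% y_0 = \sin t and \ddot{y}_1 + y_1 = -2\cos t, hence the secular term
y_1 = -t\sin t,
% which ruins the expansion once t = O(1/\varepsilon).

% Introducing the slow time T = \varepsilon t and writing
% y \sim A(T)\sin t, suppressing secular terms yields the amplitude equation
\frac{dA}{dT} = -A \quad\Longrightarrow\quad A = e^{-\varepsilon t},
% so y \sim e^{-\varepsilon t}\sin t, uniformly valid for large t.
```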
Abstract:
Microalgae dewatering is a major obstruction to industrial-scale processing of microalgae for biofuel production. The dilute nature of harvested microalgal cultures creates a huge operational cost during dewatering, thereby rendering algae-based fuels less economically attractive. Currently there is no superior method of dewatering microalgae. A technique that may result in a greater algal biomass may have drawbacks such as a high capital cost or high energy consumption. The choice of which harvesting technique to apply will depend on the species of microalgae and the final product desired. Algal properties such as a large cell size and the capability of the microalgae to autoflocculate can simplify the dewatering process. This article reviews and addresses the various technologies currently used for dewatering microalgal cultures, along with a comparative study of the performances of the different technologies.
Abstract:
Dewatering of microalgal culture is a major bottleneck in the industrial-scale processing of microalgae for bio-diesel production. The dilute nature of harvested microalgal cultures imposes a huge operational cost for dewatering, thereby rendering microalgae-based fuels less economically attractive. This study explores the influence of microalgal growth phases and intercellular interactions during cultivation on the dewatering efficiency of microalgae cultures. Experimental results show that microalgal cultures harvested during a low growth rate phase (LGRP) of 0.03 d-1 settled faster than those harvested during a high growth rate phase (HGRP) of 0.11 d-1, even though the latter displayed a higher average differential biomass concentration of 0.2 g L-1 d-1. The zeta potential profile during cultivation showed a maximum electronegative value of -43.2 ± 0.7 mV during the HGRP, which stabilized at -34.5 ± 0.4 mV in the LGRP. The lower settling rate observed for HGRP microalgae is hence attributed to the high stability of the microalgal cells, which electrostatically repel each other during this growth phase. Tangential flow filtration concentrated 20 L of HGRP culture 23-fold while consuming 0.51 kWh/m3 of supernatant removed, whereas only 0.38 kWh/m3 was consumed to concentrate 20 L of LGRP culture 48-fold.
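To make the energy comparison concrete, a back-of-the-envelope recomputation from the reported figures (the per-m3 rates and concentration factors come from the abstract; treating them as exact constants is our simplification):

```python
# Assumption (ours): energy scales with the volume of supernatant removed.
CULTURE_L = 20.0  # litres of harvested culture in each filtration run

def dewatering_energy_kwh(conc_factor, kwh_per_m3):
    """Energy to concentrate CULTURE_L litres by conc_factor when the rig
    consumes kwh_per_m3 per cubic metre of supernatant removed."""
    supernatant_m3 = (CULTURE_L - CULTURE_L / conc_factor) / 1000.0
    return kwh_per_m3 * supernatant_m3

hgrp_kwh = dewatering_energy_kwh(23, 0.51)  # high-growth-rate-phase culture
lgrp_kwh = dewatering_energy_kwh(48, 0.38)  # low-growth-rate-phase culture
# The LGRP culture reaches a higher concentration factor for less energy.
```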
Abstract:
Benefit finding is a meaning-making construct that has been shown to be related to adjustment in people with MS and their carers. This study investigated the dimensions, stability, and potency of benefit finding in predicting adjustment over a 12-month interval using a newly developed Benefit Finding in Multiple Sclerosis Scale (BFiMSS). Usable data from 388 persons with MS and 232 carers were obtained from questionnaires completed at Time 1 and 12 months later (Time 2). Factor analysis of the BFiMSS revealed seven psychometrically sound factors: Compassion/Empathy, Spiritual Growth, Mindfulness, Family Relations Growth, Life Style Gains, Personal Growth, and New Opportunities. The BFiMSS total and factors showed satisfactory internal and retest reliability coefficients, and convergent, criterion, and external validity. Results of regression analyses indicated that the Time 1 BFiMSS factors accounted for significant amounts of variance in each of the Time 2 adjustment outcomes (positive states of mind, positive affect, anxiety, depression) after controlling for Time 1 adjustment and relevant demographic and illness variables. Findings delineate the dimensional structure of benefit finding in MS, the differential links between benefit finding dimensions and adjustment, and the temporal unfolding of benefit finding in chronic illness.
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is a single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems to support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future generation GNSS constellations, including modernised GPS, Galileo, GLONASS, and Compass, with multiple frequencies have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of various isolated operating networks, single-base RTK systems, and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services with a network of networks;
• complex computation algorithms and processes;
• a greater part of positioning processes shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous users' requests (reverse RTK).
These four challenges give rise to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transferring capability. This research explores new approaches to address these future NRTK challenges and requirements using the Grid Computing facility, in particular for large data processing burdens and complex computation algorithms.
A Grid Computing based NRTK framework is proposed in this research, which is a layered framework consisting of: 1) Client layer with the form of Grid portal; 2) Service layer; 3) Execution layer. The user’s request is passed through these layers, and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration for the proposed framework is performed in a five-node Grid environment at QUT and also Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed and the results have preliminarily demonstrated the concepts and functionality of the new NRTK framework based on Grid Computing, whilst some aspects of the performance of the system are yet to be improved in future work.
Abstract:
A laboratory scale twin screw extruder has been interfaced with a near infrared (NIR) spectrometer via a fibre optic link so that NIR spectra can be collected continuously during the small scale experimental melt state processing of polymeric materials. This system can be used to investigate melt state processes such as reactive extrusion, in real time, in order to explore the kinetics and mechanism of the reaction. A further advantage of the system is that it has the capability to measure apparent viscosity simultaneously which gives important additional information about molecular weight changes and polymer degradation during processing. The system was used to study the melt processing of a nanocomposite consisting of a thermoplastic polyurethane and an organically modified layered silicate.
Abstract:
Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive, and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over the existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency.
Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
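A minimal sketch of reading local shape off the Hessian, using a synthetic anisotropic Gaussian blob and finite differences (the blob, step size, and helper names are illustrative assumptions; the thesis works with discrete image features):

```python
import math

def blob(x, y, a=2.0, b=1.0):
    """Synthetic anisotropic Gaussian blob with principal axes a and b
    (an illustrative stand-in for a local image patch)."""
    return math.exp(-(x * x / (2 * a * a) + y * y / (2 * b * b)))

def hessian_at(f, x, y, h=1e-3):
    """2x2 Hessian of f at (x, y) by central finite differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / (h * h)
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / (h * h)
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
    return fxx, fxy, fyy

def anisotropy(fxx, fxy, fyy):
    """Ratio of principal-axis lengths implied by the Hessian eigenvalues
    (closed-form eigenvalues of a symmetric 2x2 matrix)."""
    mean = (fxx + fyy) / 2.0
    spread = math.sqrt(((fxx - fyy) / 2.0) ** 2 + fxy * fxy)
    lam1, lam2 = abs(mean + spread), abs(mean - spread)
    return math.sqrt(max(lam1, lam2) / min(lam1, lam2))

fxx, fxy, fyy = hessian_at(blob, 0.0, 0.0)
ratio = anisotropy(fxx, fxy, fyy)  # recovers the blob's axis ratio a/b = 2
```

At a blob peak the Hessian is negative definite, and the magnitudes of its eigenvalues encode the elongation of the structure, which is the shape information the thesis uses in place of the second moment matrix.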
Abstract:
Symmetric multi-processor (SMP) systems, or multiple-CPU servers, are suitable for implementing parallel algorithms because they employ dedicated communication devices to enhance the inter-processor communication bandwidth, so that better performance can be obtained. However, the cost of a multiple-CPU server is high and the server is therefore usually shared among many users. The workload due to other users will certainly affect the performance of the parallel programs, so it is desirable to derive a method to optimize parallel programs under different loading conditions. In this paper, we present a simple method, applicable to SPMD-type parallel programs, to improve the speedup by controlling the number of threads within the programs.
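One simple way to realize thread-count control on a shared server is to size the thread pool to the CPUs not already occupied by other users' load. The heuristic below is our illustration in the spirit of the paper, not its exact rule:

```python
def choose_thread_count(total_cpus, load_avg):
    """Heuristic thread count for an SPMD program on a shared SMP server:
    roughly one thread per CPU not already occupied by other users' load.
    (An illustrative rule, not the paper's own method.)"""
    idle = total_cpus - int(round(load_avg))
    return max(1, min(total_cpus, idle))

quiet = choose_thread_count(8, 0.0)   # idle server: use all 8 CPUs
busy = choose_thread_count(8, 6.2)    # 6 CPUs busy: use the 2 idle ones
swamp = choose_thread_count(8, 12.0)  # overloaded: fall back to 1 thread
```

On a POSIX system the live inputs could come from `os.cpu_count()` and `os.getloadavg()[0]`, re-evaluated periodically so the program adapts as the load changes.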
Abstract:
This paper presents a new multi-scale place recognition system inspired by the recent discovery of overlapping, multi-scale spatial maps stored in the rodent brain. By training a set of Support Vector Machines to recognize places at varying levels of spatial specificity, we are able to validate spatially specific place recognition hypotheses against broader place recognition hypotheses without sacrificing localization accuracy. We evaluate the system in a range of experiments using cameras mounted on a motorbike and on a human in two different environments. At 100% precision, the multi-scale approach results in a 56% average improvement in recall rate across both datasets. We analyse the results and then discuss future work that may lead to improvements in both robotic mapping and our understanding of sensory processing and encoding in the mammalian brain.
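The cross-scale validation step can be sketched schematically: a fine-grained place hypothesis is accepted only when every broader-scale hypothesis agrees with it. The fixed hand-made regions and place labels below are our illustration; the paper instead trains one SVM per spatial scale:

```python
def validate_place_hypothesis(fine_place, coarser_hypotheses):
    """Accept a spatially specific place hypothesis only if every
    broader-scale recognition hypothesis is consistent with it,
    i.e. each coarser recognized region contains the candidate
    fine place. (Schematic stand-in for the paper's SVM cascade.)"""
    return all(fine_place in region for region in coarser_hypotheses)

# Hypothetical fine places p1..p6, grouped into two coarser-scale regions
coarse = [{"p2", "p3", "p4"}, {"p1", "p2", "p3", "p4", "p5", "p6"}]
accept = validate_place_hypothesis("p3", coarse)  # agrees at every scale
reject = validate_place_hypothesis("p7", coarse)  # contradicted by both scales
```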