189 results for asymptotic preserving
An indexing model for sustainable urban environmental management: the case of Gold Coast, Australia
Abstract:
Improving urban ecosystems and the quality of life of citizens has become a central issue in the global effort to create sustainable built environments. As human beings, our lives depend entirely on the sustainability of nature, and we need to protect and manage natural resources more sustainably in order to sustain our existence. As a result of population growth and rapid urbanisation, the increasing demand for productivity depletes and degrades natural resources. Because increasing activity and rapid development require ever more resources, ecological planning becomes an essential vehicle for preserving scarce natural resources. This paper aims to identify the interaction between urban ecosystems and human activities in the context of urban sustainability, and explores the degrading environmental impacts of this interaction as well as the necessity and benefits of using sustainability indicators as a tool in sustainable urban environmental management. Additionally, the paper introduces an environmental sustainability indexing model (ASSURE) as an innovative approach to evaluating the environmental conditions of the built environment.
Abstract:
Intuitively, any `bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stationary distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than those of more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
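The stationary-distribution step described in this abstract can be sketched as follows. This is a minimal illustration only: the toy co-occurrence counts, the smoothing constant, and the power-iteration routine are mine, not the paper's implementation.

```python
import numpy as np

def stationary_distribution(counts, tol=1e-12, max_iter=10_000):
    """Stationary distribution of the Markov chain obtained by
    row-normalising a term co-occurrence count matrix."""
    P = counts / counts.sum(axis=1, keepdims=True)  # transition probabilities
    pi = np.full(P.shape[0], 1.0 / P.shape[0])      # start from a uniform state
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:            # converged to the fixed point
            break
        pi = nxt
    return pi

# Toy co-occurrence counts for a three-term vocabulary; the small additive
# smoothing keeps every transition possible, so the chain is ergodic and its
# stationary distribution is unique and independent of the initial state.
counts = np.array([[4.0, 2.0, 1.0],
                   [2.0, 6.0, 1.0],
                   [1.0, 1.0, 2.0]]) + 0.1
pi = stationary_distribution(counts)
```

Under the paper's approach, `pi` would then stand in for the query's or document's initial term distribution in the usual language-modeling ranking formula.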
Abstract:
Privacy enhancing protocols (PEPs) are a family of protocols that allow secure exchange and management of sensitive user information. They are important in preserving users’ privacy in today’s open environment. Proof of the correctness of PEPs is necessary before they can be deployed. However, the traditional provable security approach, though well established for verifying cryptographic primitives, is not applicable to PEPs. We apply the formal method of Coloured Petri Nets (CPNs) to construct an executable specification of a representative PEP, namely the Private Information Escrow Bound to Multiple Conditions Protocol (PIEMCP). Formal semantics of the CPN specification allow us to reason about various security properties of PIEMCP using state space analysis techniques. This investigation provides us with preliminary insights for modeling and verification of PEPs in general, demonstrating the benefit of applying the CPN-based formal approach to proving the correctness of PEPs.
Abstract:
This paper considers some of the implications of the rise of design as a master-metaphor of the information age. It compares the terms 'interaction design' and 'mass communication', suggesting that both can be seen as a contradiction in terms, inappropriately preserving an industrial-age division between producers and consumers. With the shift from mass media to interactive media, semiotic and political power seems to be shifting too, from media producers to designers. This paper argues that it is important for the new discipline of 'interaction design' not to fall into habits of thought inherited from the 'mass' industrial era. Instead it argues for the significance, for designers and producers alike, of what I call 'distributed expertise', including social network markets, a DIY culture, user-led innovation, consumer co-created content, and the use of Web 2.0 affordances for social, scientific and creative purposes as well as for entertainment. It considers the importance of the growth of 'distributed expertise' as part of a new paradigm in the growth of knowledge, which has 'evolved' through a number of phases, from 'abstraction' to 'representation' to 'productivity'. In the context of technologically mediated popular participation in the growth of knowledge and social relationships, the paper argues that the design and media-production professions need to cross rather than maintain the gap between experts and everyone else, enabling all the agents in the system to navigate the shift into the paradigm of mass productivity.
Abstract:
The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs in applying importance sampling to high-dimensional problems. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
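The dimensional blow-up described here can be seen in a small experiment. The Gaussian target/proposal pair and every parameter value below are illustrative choices of mine, not the paper's setting: a standard Normal target is estimated with a slightly over-dispersed Normal proposal, and the empirical variance of the self-normalised importance weights grows rapidly with the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_variance(dim, n=20_000, sigma=1.2):
    """Empirical variance of self-normalised importance weights when a
    N(0, sigma^2 I) proposal targets a standard Normal in `dim` dimensions."""
    x = rng.normal(0.0, sigma, size=(n, dim))
    # log target minus log proposal, summed over coordinates
    sq = (x ** 2).sum(axis=1)
    log_w = -0.5 * sq + 0.5 * sq / sigma ** 2 + dim * np.log(sigma)
    w = np.exp(log_w - log_w.max())
    w /= w.mean()
    return w.var()

# Even for this mild mismatch, the weight variance (and hence the
# asymptotic variance of the estimate) grows quickly with dimension.
variances = [weight_variance(d) for d in (1, 5, 20)]
```

For this particular pair the relative weight variance can be computed in closed form as (sigma^2 / sqrt(2 sigma^2 - 1))^d - 1, which makes the exponential growth in d explicit.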
Abstract:
The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
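The underlying algorithm can be sketched in a few lines. The test function, step size, and stencil below are illustrative assumptions of mine, not the paper's numerical experiments; the sketch only shows how the least squares matrix, its smallest singular value, and the gradient estimate fit together.

```python
import numpy as np

def ls_gradient(f, x0, pts):
    """Least squares estimate of the gradient of f at x0, built from the
    truncated first-order Taylor expansion f(p) ~ f(x0) + (p - x0) . g."""
    A = pts - x0                                  # least squares matrix
    b = np.array([f(p) for p in pts]) - f(x0)
    g, _res, _rank, svals = np.linalg.lstsq(A, b, rcond=None)
    # The smallest singular value of A is the quantity that appears in the
    # denominator of the theoretical error bound discussed above.
    return g, svals[-1]

f = lambda p: p[0] ** 2 + 3.0 * p[1]              # gradient at (1, 2) is (2, 3)
x0 = np.array([1.0, 2.0])
h = 1e-3                                          # stencil size (illustrative)
pts = x0 + h * np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 1.0]])
g, smin = ls_gradient(f, x0, pts)
```

Because the Taylor remainder in the numerator of the bound shrinks like h^2 while the smallest singular value shrinks only like h, the worst-case bound can be far more pessimistic than the error actually observed, which is the phenomenon the paper investigates.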
Abstract:
Bob Baxt, the third Chairman of the Trade Practices Commission, served a single three-year term from 1988 to 1991. He followed Bob McComas, who had deliberately adopted a non-litigious approach to preserving the competitive process, believing that he understood business as an insider and that much of what it did was not anti-competitive, when correctly viewed. Baxt was far more pro-active in his approach, which was more closely aligned with that of the first Chairman, Ron Bannerman. Baxt sought to push the frontiers of investigation and precedent and, perhaps more significantly, sought to influence his Ministers, the government, public servants and public opinion about the need to expand the coverage of the Trade Practices Act, increase penalties and properly resource the Commission so that it could perform its assigned roles. This article examines Baxt’s early and ongoing role in teaching Australian students and professionals through his interdisciplinary Trade Practices Workshops, the political context of Baxt’s tenure, including his relations with the Attorney-General, Michael Duffy, and his skilful handling of the Queensland Wire case.
Abstract:
A novel H-bridge multilevel PWM converter topology based on a series connection of a high voltage (HV) diode-clamped inverter and a low voltage (LV) conventional inverter is proposed. A DC link voltage arrangement for the new hybrid and asymmetric solution is presented that achieves the maximum number of output voltage levels while preserving the adjacent switching vectors between voltage levels. Hence, a fifteen-level hybrid converter can be attained with a minimum number of power components. A comparative study has been carried out to demonstrate the high performance of the proposed configuration, which approaches a very low THD of voltage and current and thereby allows the possible elimination of the output filter. Building on the proposed configuration, a new cascade inverter is verified by cascading an asymmetrical diode-clamped inverter, in which nineteen levels can be synthesised in the output voltage with the same number of components. To balance the DC link capacitor voltages for the maximum output voltage resolution, as well as to synthesise the asymmetrical DC link combination, a new Multi-output Boost (MOB) converter is utilised at the DC link of a seven-level H-bridge diode-clamped inverter. Simulation and hardware results based on different modulations are presented to confirm the validity of the proposed approach to achieving a high quality output voltage.
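The level-count arithmetic behind such hybrid cascades can be sketched generically: the output of a series connection is the sum of each stage's levels, so well-chosen asymmetric DC link ratios make the two level sets interleave into one long uniform ladder. The specific level sets below are my own illustrative choice to produce fifteen levels, not necessarily the paper's exact DC link configuration.

```python
# Series (cascade) connection: the output voltage is the sum of the two
# stages' pole voltages, so asymmetric DC links multiply the level count.
hv_levels = [-6, -3, 0, 3, 6]   # e.g. a five-level HV stage, in units of E
lv_levels = [-1, 0, 1]          # a three-level LV stage filling the gaps
combined = sorted({h + l for h in hv_levels for l in lv_levels})
n_levels = len(combined)
```

With these ratios the fifteen sums form the contiguous ladder -7E to +7E, so every adjacent transition changes the output by a single E step.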
Abstract:
Lifecycle funds offered by retirement plan providers allocate aggressively to risky asset classes when the employee participants are young, gradually switching to more conservative asset classes as they grow older and approach retirement. This approach focuses on maximizing growth of the accumulation fund in the initial years and preserving its value in the later years. The authors simulate terminal wealth outcomes based on conventional lifecycle asset allocation rules as well as on contrarian strategies that reverse the direction of asset switching. The evidence suggests that the growth in portfolio size over time significantly impacts the asset allocation decision. Due to the portfolio size effect observed by the authors, the terminal value of accumulation in retirement accounts is influenced more by the asset allocation strategy adopted in the later years than by that adopted in the early years. By mechanistically switching to conservative assets in the later years of a plan, lifecycle strategies sacrifice significant growth opportunity and prove counterproductive to the participant's wealth accumulation objective. The authors conclude that this sacrifice does not seem to be compensated adequately in terms of reducing the risk of potentially adverse outcomes.
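A toy version of the switching comparison can be simulated in a few lines. The return moments, horizon, contribution schedule, and linear glide path below are all assumptions of mine, not the authors' calibration; the sketch only illustrates the portfolio size effect, i.e. that the late-year allocation acts on a much larger balance than the early-year allocation.

```python
import numpy as np

rng = np.random.default_rng(1)

def terminal_wealth(equity_weights, n_sims=10_000, contrib=1.0):
    """Terminal accumulation for a yearly contribution invested in a
    stock/bond mix that follows the given per-year equity weights."""
    wealth = np.zeros(n_sims)
    for w in equity_weights:
        stock = rng.normal(0.08, 0.17, n_sims)   # assumed return moments
        bond = rng.normal(0.04, 0.05, n_sims)
        wealth = (wealth + contrib) * (1.0 + w * stock + (1.0 - w) * bond)
    return wealth

years = 40
lifecycle = np.linspace(1.0, 0.2, years)   # aggressive early, conservative late
contrarian = lifecycle[::-1]               # reversed switching direction
lw = terminal_wealth(lifecycle)
cw = terminal_wealth(contrarian)
```

In this toy setting the contrarian path yields a higher mean terminal wealth, at the cost of wider dispersion in outcomes, which mirrors the trade-off the authors evaluate.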
Abstract:
One of the primary treatment goals of adolescent idiopathic scoliosis (AIS) surgery is to achieve maximum coronal plane correction while maintaining coronal balance. However, maintaining or restoring sagittal plane spinal curvature has become increasingly important to the long-term health of the spine. Patients with AIS are characterised by pre-operative thoracic hypokyphosis, and it is generally agreed that operative treatment of thoracic idiopathic scoliosis should aim to restore thoracic kyphosis to normal values while maintaining lumbar lordosis and good overall sagittal balance. The aim of this study was to evaluate CT sagittal plane parameters, with particular emphasis on thoracolumbar junctional alignment, in patients with AIS who underwent Video Assisted Thoracoscopic Spinal Fusion and Instrumentation (VATS). This study concluded that video-assisted thoracoscopic spinal fusion and instrumentation reliably increases thoracic kyphosis while preserving junctional alignment and lumbar lordosis in thoracic AIS.
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are ‘hybrid’ in nature, in that they are a composition of components whose individual properties may be easily described, but whose performance as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described and the model is compared to the Normal case based on goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions.
For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. In the statistical literature, statistical efficiency is often the only criterion for judging an algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual components contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by combining these components in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general algorithm, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with the dimension is demonstrated, and it is shown that the optimal covariance matrix for the importance function can be estimated in a special case.
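The thesis's first finding, that t components handle heavy tails better than Normal components, is visible even in the single-component case. The sketch below is my own illustration (fixed degrees of freedom, a plain likelihood comparison, no MCMC), not the thesis's mixture analysis: data drawn from a heavy-tailed t distribution are fit strictly better by the t model than by the best-fitting single Normal.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def t_logpdf(x, df):
    """Log density of a standard Student-t with df degrees of freedom."""
    c = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
         - 0.5 * math.log(df * math.pi))
    return c - (df + 1) / 2 * np.log1p(x ** 2 / df)

# Heavy-tailed data drawn from a t with 3 degrees of freedom, via the
# standard representation t = normal / sqrt(chi-square / df).
df = 3
n = 4000
data = rng.normal(size=n) / np.sqrt(rng.chisquare(df, n) / df)

ll_t = t_logpdf(data, df).sum()                    # log-likelihood under the t model
mu, sd = data.mean(), data.std()                   # maximum likelihood Normal fit
ll_norm = (-0.5 * np.log(2 * np.pi * sd ** 2)
           - (data - mu) ** 2 / (2 * sd ** 2)).sum()
```

A Normal mixture would need extra components just to imitate these tails, which is one way to see why the t-mixture can be the more parsimonious model for heavy-tailed data.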
Abstract:
When asymptotic series methods are applied in order to solve problems that arise in applied mathematics in the limit that some parameter becomes small, they are unable to capture behaviour that occurs on a scale that is exponentially small compared to the algebraic terms of the asymptotic series. There are many examples of physical systems where behaviour on this scale has important effects, and a range of techniques known as exponential asymptotic techniques has been developed to examine behaviour on this exponentially small scale. Many problems in applied mathematics may be represented by behaviour within the complex plane, which may subsequently be examined using asymptotic methods. These problems frequently demonstrate behaviour known as Stokes phenomenon, which involves rapid switches of behaviour on an exponentially small scale in the neighbourhood of a curve known as a Stokes line. Exponential asymptotic techniques have been applied to obtain an expression for this exponentially small switching behaviour in the solutions to ordinary and partial differential equations. The problem of potential flow over a submerged obstacle has been previously considered in this manner by Chapman & Vanden-Broeck (2006). By representing the problem in the complex plane and applying an exponential asymptotic technique, they were able to detect the switching, and subsequent behaviour, of exponentially small waves on the free surface of the flow in the limit of small Froude number, specifically considering the case of flow over a step with one Stokes line present in the complex plane. We consider an extension of this work to flow configurations with multiple Stokes lines, such as flow over an inclined step, or flow over a bump or trench.
The resultant expressions are analysed, and demonstrate interesting implications, such as the presence of exponentially sub-subdominant intermediate waves and the possibility of trapped surface waves for flow over a bump or trench. We then consider the effect of multiple Stokes lines in higher order equations, particularly investigating the behaviour of higher-order Stokes lines in the solutions to partial differential equations. These higher-order Stokes lines switch off the ordinary Stokes lines themselves, adding a layer of complexity to the overall Stokes structure of the solution. Specifically, we consider the different approaches taken by Howls et al. (2004) and Chapman & Mortimer (2005) in applying exponential asymptotic techniques to determine the higher-order Stokes phenomenon behaviour in the solution to a particular partial differential equation.
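Schematically, the switching described in this abstract takes the following standard form in exponential asymptotics (a generic illustration, not the specific equations of the cited works): the solution consists of an algebraic asymptotic series plus an exponentially small contribution,

```latex
f(z;\epsilon) \sim \sum_{n=0}^{N-1} a_n(z)\,\epsilon^n
  \;+\; \mathcal{S}(z)\,F(z)\,e^{-\chi(z)/\epsilon},
```

where $\chi$ is the singulant, $F$ is a slowly varying prefactor, and the Stokes multiplier $\mathcal{S}(z)$ jumps rapidly between constant values as $z$ crosses a Stokes line, on which $\chi$ is real and positive so that the exponential is maximally subdominant. Higher-order Stokes lines, in turn, switch the Stokes multipliers of the ordinary Stokes lines themselves, producing the layered Stokes structure the work investigates.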
Abstract:
We present a novel, simple and effective approach to tele-operation of aerial robotic vehicles with haptic feedback. Such feedback provides the remote pilot with an intuitive feel of the robot’s state and perceived local environment, ensuring simple and safe operation in the cluttered 3D environments common in inspection and surveillance tasks. Our approach is based on energetic considerations and uses the concepts of network theory and port-Hamiltonian systems. We provide a general framework for addressing problems such as mapping the limited stroke of a ‘master’ joystick to the infinite stroke of a ‘slave’ vehicle, while preserving passivity of the closed-loop system in the face of potential time delays in communication links and limited sensor data.
Abstract:
This paper considers the question of designing a fully image-based visual servo control for a class of dynamic systems. The work is motivated by the ongoing development of image-based visual servo control of small aerial robotic vehicles. The kinematics and dynamics of a rigid-body dynamical system (such as a vehicle airframe) maneuvering over a flat target plane with observable features are expressed in terms of an unnormalized spherical centroid and an optic flow measurement. The image-plane dynamics with respect to force input are dependent on the height of the camera above the target plane. This dependence is compensated by introducing virtual height dynamics and adaptive estimation in the proposed control. A fully nonlinear adaptive control design is provided that ensures asymptotic stability of the closed-loop system for all feasible initial conditions. The choice of control gains is based on an analysis of the asymptotic dynamics of the system. Results from a realistic simulation are presented that demonstrate the performance of the closed-loop system. To the author's knowledge, this paper documents the first time that an image-based visual servo control has been proposed for a dynamic system using vision measurement for both position and velocity.