141 results for Exponential Splines
Abstract:
The foliage of a plant performs vital functions. As such, leaf models need to be developed for modelling plant architecture from a set of scattered data captured using a scanning device. The leaf model can be used for purely visual purposes or as part of a further model, such as a fluid movement model or a model of a biological process. For these reasons, an accurate mathematical representation of the surface and boundary is required. This paper compares three approaches for fitting a continuously differentiable surface through a set of scanned data points from a leaf surface, with a technique already used for reconstructing leaf surfaces. The techniques considered are discrete smoothing D2-splines [R. Arcangeli, M. C. Lopez de Silanes, and J. J. Torrens, Multidimensional Minimising Splines, Springer, 2004], the thin plate spline finite element smoother [S. Roberts, M. Hegland, and I. Altas, Approximation of a Thin Plate Spline Smoother using Continuous Piecewise Polynomial Functions, SIAM, 1 (2003), pp. 208-234] and the radial basis function Clough-Tocher method [M. Oqielat, I. Turner, and J. Belward, A hybrid Clough-Tocher method for surface fitting with application to leaf data, Appl. Math. Modelling, 33 (2009), pp. 2582-2595]. Numerical results show that discrete smoothing D2-splines produce reconstructed leaf surfaces that better represent the original physical leaf.
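As a rough illustration of the kind of surface smoothing these methods perform, the sketch below fits a thin plate spline with a smoothing penalty through noisy scattered data using SciPy. It is a generic stand-in, not the discrete smoothing D2-spline or finite element smoother of the cited papers, and the synthetic "leaf" heights are hypothetical.

```python
# Minimal sketch: smoothing a scattered point cloud with a thin plate
# spline. The synthetic "leaf" heights below are hypothetical, not the
# scanned data used in the paper.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))           # scanned (x, y) sites
z = np.exp(-xy[:, 0]**2 - xy[:, 1]**2)               # idealised leaf height
z_noisy = z + rng.normal(0.0, 0.02, size=z.shape)    # scanner noise

# smoothing > 0 trades fidelity at the data sites for surface smoothness,
# playing the role of the smoothing parameter in penalised-spline methods.
surface = RBFInterpolator(xy, z_noisy,
                          kernel='thin_plate_spline', smoothing=1e-3)

grid = np.mgrid[-1:1:50j, -1:1:50j].reshape(2, -1).T  # evaluation grid
z_fit = surface(grid)
print(f"fitted height range: [{z_fit.min():.3f}, {z_fit.max():.3f}]")
print(f"max residual at data sites: {np.abs(surface(xy) - z_noisy).max():.4f}")
```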
Abstract:
Realistic virtual models of leaf surfaces are important for a number of applications in the plant sciences, such as modelling agrichemical spray droplet movement and spreading on the surface. In this context, the virtual surfaces are required to be sufficiently smooth to facilitate the use of the mathematical equations that govern the motion of the droplet. While an effective approach is to apply discrete smoothing D2-spline algorithms to reconstruct the leaf surfaces from three-dimensional scanned data, difficulties arise when dealing with wheat leaves that tend to twist and bend. To overcome this topological difficulty, we develop a parameterisation technique that rotates and translates the original data, allowing the surface to be fitted using the discrete smoothing D2-spline methods in the new parameter space. Our algorithm uses finite element methods to represent the surface as a linear combination of compactly supported shape functions. Numerical results confirm that the parameterisation, along with the use of discrete smoothing D2-spline techniques, produces realistic virtual representations of wheat leaves.
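The rotate-and-translate step can be pictured as aligning the scanned points with their principal axes, so that the twisted surface becomes an approximate height field over a flat parameter plane. The sketch below is a minimal PCA-style version of that idea, assuming a single rigid motion suffices; the paper's actual parameterisation of twisting wheat leaves is likely more elaborate.

```python
# Hedged sketch of the rotate-and-translate idea: map scanned points into a
# local frame aligned with the leaf's principal axes so the surface becomes
# an (approximately) single-valued height field w = f(u, v) that a
# smoothing spline can then fit in the new parameter space.
import numpy as np

def to_parameter_space(points):
    """Translate to the centroid and rotate onto principal axes via SVD."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    uvw = centred @ vt.T       # rows of vt are the principal directions
    # (u, v) parameters, w heights; centroid and vt allow mapping back
    return uvw[:, :2], uvw[:, 2], centroid, vt

# hypothetical bent-leaf point cloud (not real scan data)
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 1.0, 400)
pts = np.column_stack([t, 0.2 * np.sin(3 * t), 0.5 * t + 0.05 * np.cos(3 * t)])

uv, w, centroid, rotation = to_parameter_space(pts)
print("height range in the rotated frame:", w.min(), w.max())
```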
Abstract:
Extensive research has highlighted the positive and exponential relationship between vehicle speed and crash risk and severity. Speed enforcement policies and practices throughout the world have developed dramatically as new technology becomes available; however, speeding remains a pervasive problem internationally that significantly contributes to road trauma. This paper adopted a three-pronged approach to review speed enforcement policies and practices by: (i) describing and comparing policies and practices adopted in a cross-section of international jurisdictions; (ii) reviewing the available empirical evidence evaluating the effectiveness of various approaches; and (iii) providing recommendations for the optimisation of speed enforcement. The review shows that the enforcement strategies adopted in various countries differ both in terms of the approaches used and how they are specifically applied. The literature review suggests strong and consistent evidence that police speed enforcement, in particular speed cameras, can be an effective tool for reducing vehicle speeds and subsequent traffic crashes. Drawing from this evidence, recommendations for best practice are proposed, including the specific instances in which various speed enforcement approaches typically produce the greatest road safety benefits and, perhaps most importantly, that speed enforcement programs must utilise a variety of strategies tailored to specific situations, rather than a one-size-fits-all approach.
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges of the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow, and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediate and end points of the inbound process, are found to have R² goodness-of-fit values of 0.9994 and 0.9982, respectively, over a 10 h test period. The utility of the framework is demonstrated on a number of usage scenarios, including causal analysis and ‘what-if’ analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
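The queueing half of the framework rests on Poisson arrivals and exponential service times. The toy single-server simulation below illustrates just that building block, with hypothetical rates rather than the Brisbane airport parameters, and checks the simulated mean wait against the closed-form M/M/1 value.

```python
# Toy illustration of the queueing building block underlying the HQBN
# framework: Poisson arrivals (exponential inter-arrival times) and
# exponential service at a single processing point. Rates are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
lam, mu, n = 2.0, 2.5, 10_000    # arrival rate, service rate, passengers

arrivals = np.cumsum(rng.exponential(1.0 / lam, n))  # Poisson arrival times
service = rng.exponential(1.0 / mu, n)               # service durations

departures = np.empty(n)
departures[0] = arrivals[0] + service[0]
for i in range(1, n):
    # service starts when the passenger arrives or the server frees up
    departures[i] = max(arrivals[i], departures[i - 1]) + service[i]

waits = departures - arrivals - service              # time spent queuing
# M/M/1 theory: mean wait in queue = rho / (mu - lam), with rho = lam / mu
print(f"simulated mean wait {waits.mean():.3f}, "
      f"theoretical {(lam / mu) / (mu - lam):.3f}")
```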
Abstract:
Background: The benign reputation of Plasmodium vivax is at odds with the burden and severity of the disease. This reputation, combined with restricted in vitro techniques, has slowed efforts to gain an understanding of the parasite's biology and its interaction with the human host. Methods: A simulation model of the within-host dynamics of P. vivax infection is described, incorporating distinctive characteristics of the parasite such as the preferential invasion of reticulocytes and hypnozoite production. The developed model is fitted using digitized time series from historic neurosyphilis studies, and subsequently validated against summary statistics from a larger study of the same population. The Chesson relapse pattern was used to demonstrate the impact of released hypnozoites. Results: The typical pattern for the dynamics of the parasite population is a rapid exponential increase in the first 10 days, followed by a gradual decline. Gametocyte counts follow a similar trend, but are approximately two orders of magnitude lower. The model predicts that, on average, an infected naïve host in the absence of treatment becomes infectious 7.9 days post patency and is infectious for a mean of 34.4 days. In the absence of treatment, the effect of hypnozoite release was not apparent, as newly released parasites were obscured by the existing infection. Conclusions: The results from the model provide useful insights into the dynamics of P. vivax infection in human hosts, in particular the timing of host infectiousness and the role of the hypnozoite in perpetuating infection.
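As a purely qualitative sketch, the toy model below reproduces the reported shape of the dynamics, exponential growth over roughly the first ten days followed by a decline, using an illustrative growth rate and a linearly ramping clearance term. It is not the paper's fitted within-host model, and every parameter is a placeholder.

```python
# Toy two-phase sketch of the qualitative dynamics reported above:
# exponential parasite growth that is progressively suppressed as a host
# response builds up. All rates are illustrative placeholders, not the
# parameters fitted to the neurosyphilis time series.
import numpy as np

r, k = 0.9, 0.12          # per-day growth rate; per-day ramp of clearance
dt, days = 0.1, 60.0
p, clearance = 1.0, 0.0   # parasite density (arbitrary units); clearance
trajectory = []
for _ in range(int(days / dt)):
    p += dt * (r - clearance) * p   # net growth slows as clearance builds
    clearance += dt * k             # host response ramps up linearly
    trajectory.append(p)

peak_day = float(np.argmax(trajectory)) * dt
print(f"peak parasite density near day {peak_day:.1f}")  # ~day 7-8 here
```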
Abstract:
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study applies several clustering techniques (K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on a Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performance of the four clustering techniques was compared using the Dunn index and silhouette width validation values, and the K-means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were identified within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
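A minimal version of the clustering-and-validation step might look like the following: K-means over synthetic size-distribution spectra, with the silhouette width used to select the number of clusters, via scikit-learn. The three lognormal "source" modes are invented stand-ins for the Brisbane measurements.

```python
# Sketch of the clustering step: K-means on particle number size
# distribution spectra with silhouette validation. The spectra are
# synthetic stand-ins, not the measured Brisbane data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
bins = np.logspace(0, 3, 40)          # particle diameters (nm)

def lognormal_mode(centre, width, amp):
    return amp * np.exp(-0.5 * ((np.log(bins) - np.log(centre)) / width) ** 2)

# three hypothetical source types: nucleation, traffic, background
spectra = np.vstack([
    [lognormal_mode(c, 0.4, a) + rng.normal(0.0, 0.02, bins.size)
     for _ in range(100)]
    for c, a in [(10, 1.0), (60, 0.8), (200, 0.5)]
])

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(spectra)
    scores[k] = silhouette_score(spectra, labels)   # higher is better
best_k = max(scores, key=scores.get)
print("silhouette-optimal number of clusters:", best_k)
```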
Abstract:
Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice, we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we had known the empirically best learning rate from a large range that includes both conservative small values and values much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
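For context, the sketch below implements the basic exponential weights (Hedge) update, making explicit where the learning rate enters. The paper's contribution, learning the learning rate online over a grid in linear time, is not reproduced here, and the losses are synthetic.

```python
# Basic exponential weights (Hedge) with a fixed learning rate eta; the
# paper's method tunes eta online over a grid, which this toy omits.
import numpy as np

rng = np.random.default_rng(3)
n_experts, T, eta = 5, 1000, 0.1
weights = np.ones(n_experts)
total_loss, expert_losses = 0.0, np.zeros(n_experts)

for t in range(T):
    probs = weights / weights.sum()            # play the weighted mixture
    losses = rng.uniform(0.0, 1.0, n_experts)  # losses revealed each round
    total_loss += probs @ losses
    expert_losses += losses
    weights *= np.exp(-eta * losses)           # exponential weight update

regret = total_loss - expert_losses.min()      # regret vs the best expert
print(f"regret after {T} rounds with eta = {eta}: {regret:.2f}")
```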
Abstract:
The fractional Fokker-Planck equation is an important physical model for simulating anomalous diffusion with external forces. Because of the non-local property of the fractional derivative, an interesting problem is to explore high-accuracy numerical methods for fractional differential equations. In this paper, a space-time spectral method is presented for the numerical solution of the time fractional Fokker-Planck initial-boundary value problem. The proposed method employs Jacobi polynomials for the temporal discretization and Fourier-like basis functions for the spatial discretization. The diagonalizable nature of the Fourier-like basis functions leads to a reduced representation of the inner product in the Galerkin analysis. We prove that, using the present method, the time fractional Fokker-Planck equation attains the same approximation order as the time fractional diffusion equation developed in [23]. This indicates that exponential decay of the error may be achieved if the exact solution is sufficiently smooth. Finally, some numerical results are given to demonstrate the high-order accuracy and efficiency of the new numerical scheme. The results show that the errors of the numerical solutions obtained by the space-time spectral method decay exponentially.
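The exponential-decay claim can be illustrated generically: expanding a smooth function in a Jacobi polynomial basis (here Legendre, i.e. alpha = beta = 0) makes the expansion coefficients, and hence the truncation error, fall off exponentially. The sketch below uses SciPy's Jacobi routines and is not the paper's Fokker-Planck solver.

```python
# Generic demonstration of spectral accuracy: the Jacobi (here Legendre)
# expansion coefficients of a smooth function decay exponentially in n.
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

f = lambda x: np.exp(np.sin(np.pi * x))   # smooth test function on [-1, 1]

nodes, wts = roots_jacobi(64, 0.0, 0.0)   # Gauss quadrature nodes/weights
for n in (4, 8, 16, 32):
    # coefficient c_n = <f, P_n> / <P_n, P_n>, computed by quadrature
    pn = eval_jacobi(n, 0.0, 0.0, nodes)
    c_n = (wts * f(nodes) * pn).sum() / (wts * pn * pn).sum()
    print(f"n = {n:2d}   |c_n| = {abs(c_n):.3e}")   # drops exponentially
```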
Abstract:
As society matures, there is increasing pressure to preserve historic buildings. The economic cost of maintaining these important heritage legacies has become a prime consideration for every state. Dedicated intelligent monitoring systems supplementing traditional building inspections will enable stakeholders not only to carry out timely reactive responses but also to plan maintenance more vigilantly, thus preventing further degradation, which is very costly and difficult to address if neglected. The application of intelligent structural health monitoring systems to these case studies of ‘modern heritage’ buildings is in its infancy, but it is an innovative approach to building maintenance. ‘Modern heritage’ buildings were the product of technological change and were made of synthetic materials such as reinforced concrete and steel, a type of architecture that was very common in Oceania and the Pacific. Engineering problems that arise from this type of building call for immediate engineering solutions, since the deterioration rate is exponential. The application of this newly emerging monitoring system will improve the traditional maintenance approach to heritage conservation. Savings in time and resources can be achieved if pathological results are on hand. This case study will validate that approach. This publication serves as a position paper for the ongoing research on the application of Structural Health Monitoring (SHM) systems to heritage buildings in Brisbane, Australia. The application of SHM systems and devices will be investigated to validate the integrity of the recent structural restoration of the newly re-strengthened heritage building, the Brisbane City Hall.
Abstract:
Advances in sleep medicine have been accelerating ever since research began appearing in the 1950s. As with most early clinical trials, women were excluded from participation. Even when researchers included women or addressed sex differences by age, reproductive stage was seldom considered. Recently, there has been an exponential increase in research on sleep in midlife and older women. This Practice Pearl briefly reviews the importance of adequate sleep, clinical assessment for sleep disorders, and guidelines for practice.
Abstract:
Introduction A pedagogical relationship - the relationship produced through teaching and learning - is, according to phenomenologist Max van Manen, ‘the most profound relationship an adult can have with a child’ (van Manen 1982). But what does it mean for a teacher to have a ‘profound’ relationship with a student in digital times? What, indeed, is an optimal pedagogical relationship at a time when the exponential proliferation and transformation of information across the globe is making for unprecedented social and cultural change? Does it involve both parties in a Facebook friendship? Being snappy with Snapchat? Tumbling around on Tumblr? There is now ample evidence of a growing trend to displace face-to-face interaction with virtual connections. One effect of these technologically mediated relationships is that a growing number of young people experience relationships as ‘mile-wide, inch-deep’ phenomena. It is timely, in this context, to explore how pedagogical relationships are being transmuted by Big Data, and to ask what implications this has for current and future generations of professional educators.
Abstract:
This project constructed virtual plant leaf surfaces from digitised data sets for use in droplet spray models. Digitisation techniques for obtaining data sets for cotton, Chenopodium and wheat leaves are discussed, and novel algorithms for the reconstruction of the leaves from these three plant species are developed. The reconstructed leaf surfaces are incorporated into agricultural droplet spray models to investigate the effect of the nozzle and spray formulation combination on the proportion of spray retained by the plant. A numerical study of the post-impaction motion of large droplets that have formed on the leaf surface is also considered.
Abstract:
The world is facing an energy crisis due to exponential population growth and the limited availability of fossil fuels. Carbon, one of the most abundant materials found on earth, and its allotropes have been proposed in this project for novel energy generation and storage devices. This study investigated the synthesis and properties of these carbon nanomaterials for applications in organic solar cells and supercapacitors.
Abstract:
Access to mobile technologies is growing at an exponential rate in developed and developing countries, with some developing countries surpassing developed countries in terms of device ownership. It is both the demand for, and the high usage of, mobile technologies that have driven new and emerging pedagogical practices in higher education. These technologies have also exponentially increased access to information in a knowledge economy. While differences are often drawn between developing and developed countries in terms of the access to and use of information and communication technologies (ICT), this paper reports on a study detailing how higher education students use mobile technologies and social media in their studies and in their personal lives. It compares how students from an Australian and a Vietnamese university access and use mobile and social media technologies, while also highlighting ways in which these technologies can be embraced by academics to connect and engage with students.
Abstract:
High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and greater angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs), which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
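The accuracy metric mentioned above can be made concrete: the snippet below computes the Kullback-Leibler divergence between a reference and a perturbed two-peak orientation density function discretised over in-plane directions. The von Mises-style ODFs are synthetic stand-ins for TDF-derived reconstructions.

```python
# Sketch of the accuracy metric: KL divergence between a reference and a
# reconstructed orientation density function (ODF), discretised over a set
# of directions. The two-peak ODFs are synthetic, not TDF reconstructions.
import numpy as np

angles = np.linspace(0.0, np.pi, 180, endpoint=False)  # in-plane directions

def two_fiber_odf(theta1, theta2, kappa):
    """Toy two-peak ODF: sum of antipodally symmetric lobes, normalised."""
    odf = (np.exp(kappa * np.cos(2.0 * (angles - theta1)))
           + np.exp(kappa * np.cos(2.0 * (angles - theta2))))
    return odf / odf.sum()

reference = two_fiber_odf(0.0, np.pi / 2, 8.0)        # true 90-degree crossing
reconstructed = two_fiber_odf(0.05, np.pi / 2, 6.0)   # slightly off, blurred

kl = np.sum(reference * np.log(reference / reconstructed))
print(f"KL divergence (nats): {kl:.4f}")   # 0 would mean a perfect match
```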