915 results for state-space methods


Relevance: 30.00%

Publisher:

Abstract:

This paper presents the method and results of a survey of 27 of the 33 Australian universities teaching engineering in late 2007, undertaken by The Natural Edge Project (hosted by Griffith University and the Australian National University) and supported by the National Framework for Energy Efficiency. The survey aimed to ascertain the extent of energy efficiency (EE) education, and to identify preferred methods to assist in increasing the extent to which EE education is embedded in engineering curricula. The context for the survey is provided by a summary of the key results from a variety of surveys undertaken internationally over the last decade. The paper concludes that EE education across universities and engineering disciplines in Australia is currently highly variable and ad hoc. Based on the results of the survey, the paper highlights a number of preferred options for supporting educators to embed sustainability within engineering programs, and future opportunities for monitoring EE, within the context of engineering education for sustainable development (EESD).

Relevance: 30.00%

Publisher:

Abstract:

Computational models for cardiomyocyte action potentials (AP) often make use of a large parameter set. This parameter set can contain some elements that are fitted to experimental data independently of any other element, some elements that are derived concurrently with other elements to match experimental data, and some elements that are derived purely from phenomenological fitting to produce the desired AP output. Furthermore, models can make use of several different data sets, not always derived for the same conditions or even the same species. It is consequently uncertain whether the parameter set for a given model is physiologically accurate. Moreover, it is only recently that the possibility of degeneracy in parameter values in producing a given simulation output has started to be addressed. In this study, we examine the effects of varying two parameters (the L-type calcium current (I(CaL)) and the delayed rectifier potassium current (I(Ks))) in a computational model of a rabbit ventricular cardiomyocyte AP on both the membrane potential (V(m)) and calcium (Ca(2+)) transient. It will subsequently be determined whether this model is degenerate with respect to these parameter values, which has important implications for the stability of these models to cell-to-cell parameter variation, and also for whether the current methodology for generating parameter values is flawed. The accuracy of AP duration (APD) as an indicator of AP shape will also be assessed.
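The degeneracy question can be explored with a simple two-parameter sweep. The sketch below uses a toy FitzHugh-Nagumo-style cell, not the rabbit ventricular model from the abstract; `g_in` and `g_out` are hypothetical stand-ins for the I(CaL) and I(Ks) scaling factors, and a near-constant APD along a curve in the (g_in, g_out) plane would signal degeneracy:

```python
import numpy as np

def simulate_ap(g_in, g_out, dt=0.01, t_end=50.0):
    """Toy FitzHugh-Nagumo-style cell.  g_in scales the inward
    (depolarising) current, g_out the outward (repolarising) one."""
    v, w = -1.0, 0.0                          # voltage-like and recovery variables
    trace = []
    for step in range(int(t_end / dt)):
        t = step * dt
        stim = 1.5 if t < 2.0 else 0.0        # brief stimulus current
        dv = g_in * (v - v ** 3 / 3.0) - g_out * w + stim
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return np.array(trace)

def apd(trace, dt=0.01, threshold=0.0):
    """Crude action-potential duration: time spent above threshold."""
    return np.sum(trace > threshold) * dt

# Sweep both scaling factors and record the APD surface.
g_in_vals = np.linspace(0.8, 1.2, 5)
g_out_vals = np.linspace(0.8, 1.2, 5)
surface = np.array([[apd(simulate_ap(gi, go)) for go in g_out_vals]
                    for gi in g_in_vals])
print(surface)
```

If two different (g_in, g_out) pairs give indistinguishable APD (and, in a fuller treatment, indistinguishable V(m) and Ca(2+) traces), the parameters are degenerate for that output.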

Relevance: 30.00%

Publisher:

Abstract:

Recently, the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)], and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
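A minimal illustration of solving the CME directly on a truncated state space, in the spirit of the finite state projection: the birth-death process below is an assumed toy system, not one of the paper's four examples, and SciPy's `expm_multiply` stands in for the Krylov propagator.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# CME generator for a birth-death process: production at rate k,
# first-order degradation at rate g, truncated to the states 0..N.
N, k, g = 100, 10.0, 1.0
n = np.arange(N + 1)

birth = np.full(N, k)                 # rate of n -> n+1
death = g * n[1:]                     # rate of n -> n-1
main = -(k * (n < N) + g * n)         # total outflow (birth switched off at n = N)
A = diags([birth, main, death], offsets=[-1, 0, 1], format="csc")

p0 = np.zeros(N + 1)
p0[0] = 1.0                           # start with zero molecules
p = expm_multiply(5.0 * A, p0)        # p(t=5) = expm(5A) p0

print(p.sum(), n @ p)                 # ~1.0 and mean ~ 10*(1 - exp(-5)) = 9.93
```

Probability is conserved here because the truncation is made reflecting; a strict FSP would let probability leak out of the truncated set and use 1 − Σp as the error bound.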

Relevance: 30.00%

Publisher:

Abstract:

Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture the fundamentally discrete and stochastic nature of cellular biology, most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, in which the corresponding probabilities and the numbers of each molecular species must, of course, all remain non-negative. While accurately serving this purpose, the SSA can be computationally inefficient due to its very small time steps, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of the Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
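The basic Poisson τ-leap scheme that this work generalises can be sketched in a few lines; the irreversible decay reaction and the rates below are illustrative assumptions, not the paper's test problems:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_decay(s0=1000, c=0.5, tau=0.01, t_end=5.0):
    """Poisson tau-leap for the irreversible reaction S -> P at rate c:
    instead of simulating every firing (as the SSA does), draw a Poisson
    number of firings per fixed leap of length tau."""
    s, t = s0, 0.0
    while t < t_end:
        a = c * s                       # propensity of S -> P
        fired = rng.poisson(a * tau)    # firings in [t, t + tau)
        s = max(s - fired, 0)           # update copy number, keep it non-negative
        t += tau
    return s

# The mean over many runs should track the deterministic solution
# s0 * exp(-c * t_end) = 1000 * exp(-2.5), i.e. about 82.
runs = [tau_leap_decay() for _ in range(200)]
print(np.mean(runs))
```

The Itô-Taylor view in the abstract treats this update as an Euler-Maruyama-type step for an SDE driven by Poisson noise, which is what opens the door to Milstein-like and Runge-Kutta-like refinements.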

Relevance: 30.00%

Publisher:

Abstract:

In recent years, the development of Unmanned Aerial Vehicles (UAVs) has become a significant growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen UAVs used in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This thesis presents an investigation of methods for increasing the energy efficiency of UAVs. One method is the development of a Mission Waypoint Optimisation (MWO) procedure for a small fixed-wing UAV, focusing on improving the onboard fuel economy. MWO deals with a pre-specified set of waypoints by modifying the given waypoints within certain limits to achieve its optimisation objectives of minimising/maximising specific parameters. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. This simulation model was separately integrated with a multi-objective Evolutionary Algorithm (MOEA) optimiser and a Sequential Quadratic Programming (SQP) solver to perform single-objective and multi-objective optimisation of a set of real-world waypoints in order to minimise onboard fuel consumption. The results of both procedures show potential for reducing fuel consumption on a UAV in a flight mission.
Additionally, a parallel Hybrid-Electric Propulsion System (HEPS) for a small fixed-wing UAV, incorporating an Ideal Operating Line (IOL) control strategy, was developed. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output for the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared to the internal combustion engine (ICE)-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
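The IOL idea, picking the engine operating point that meets each power demand at the least fuel flow, can be sketched against a made-up engine map; the quadratic fuel-flow surface, grid ranges and power levels below are hypothetical stand-ins, not Aerosonde engine data:

```python
import numpy as np

def fuel_flow(rpm, torque):
    # Made-up engine map (arbitrary fuel-flow units), NOT Aerosonde data:
    # a bowl-shaped surface with a sweet spot in the mid rpm/torque range.
    return 0.1 + 1e-7 * (rpm - 4000.0) ** 2 + 0.02 * (torque - 4.0) ** 2

rpms = np.linspace(2000.0, 6000.0, 81)
torques = np.linspace(0.5, 8.0, 76)
R, T = np.meshgrid(rpms, torques, indexing="ij")
P = R * (2.0 * np.pi / 60.0) * T              # shaft power [W] = omega * torque

iol = []                                      # the ideal operating line
for p_req in np.linspace(500.0, 3000.0, 6):   # required power levels [W]
    near = np.abs(P - p_req) < 50.0           # grid points delivering ~p_req
    flows = np.where(near, fuel_flow(R, T), np.inf)
    i, j = np.unravel_index(np.argmin(flows), flows.shape)
    iol.append((rpms[i], torques[j], flows[i, j]))

for rpm, tq, ff in iol:
    print(f"{rpm:7.1f} rpm  {tq:4.2f} N.m  {ff:.3f}")
```

The controller then holds the engine on this line and lets the electric machine absorb or supply the difference between demanded and IOL power.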

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An onboard camera in a downward-looking configuration, and the assumption of planar scenes, are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the last known estimate of the vehicle's state, obtained from the onboard sensors (GPS/IMU), and the subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing, and take-off, two of which (take-off and landing) are considered critical. The performance of the pose estimation strategy is analysed by comparing it with the GPS/IMU estimates. The results show correlation between the visual estimates obtained with the MR-ICIA and the GPS/IMU data, demonstrating that visual estimation can be used to provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
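The planar-scene homography model underpinning this kind of motion recovery can be checked numerically. The sketch below composes H = R + t·nᵀ/d from an assumed rotation, translation, plane normal and depth (all made-up numbers), and verifies that a point on the plane maps consistently between frames; it does not reimplement the MR-ICIA registration itself:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the camera z-axis (yaw for a downward-looking camera)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(np.deg2rad(5.0))            # frame-to-frame rotation (yaw only)
t = np.array([0.2, 0.0, 0.05])        # frame-to-frame translation [m]
n = np.array([0.0, 0.0, 1.0])         # ground-plane normal in the first frame
d = 30.0                              # distance to the ground plane [m]

H = R + np.outer(t, n) / d            # plane-induced homography

# A 3-D point on the plane (z = d), projected in both frames in
# normalised image coordinates, must satisfy x2 ~ H x1 up to scale.
X = np.array([3.0, -2.0, d])
x1 = X / X[2]
X2 = R @ X + t
x2 = X2 / X2[2]
x2_pred = H @ x1
x2_pred /= x2_pred[2]
print(np.allclose(x2, x2_pred))       # True
```

Decomposing an estimated H back into (R, t/d, n) is what yields the frame-to-frame motion that the paper integrates during GPS drop-outs.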

Relevance: 30.00%

Publisher:

Abstract:

The United States Supreme Court has handed down a once-in-a-generation patent law decision that will have important ramifications for the patentability of non-physical methods, both internationally and in Australia. In Bilski v Kappos, the Supreme Court considered whether an invention must either be tied to a machine or apparatus, or transform an article into a different state or thing, to be patentable. It also considered for the first time whether business methods are patentable subject matter. The decision will be of particular interest to practitioners who followed the litigation in Grant v Commissioner of Patents, a Federal Court decision in which a Brisbane-based inventor was denied a patent over a method of protecting an asset from the claims of creditors.

Relevance: 30.00%

Publisher:

Abstract:

In 2009 the Australian Federal and State governments are expected to have spent some AU$30 billion procuring infrastructure projects. For governments with finite resources but many competing projects, formal capital rationing is achieved through the use of business cases. These business cases articulate the merits of investing in particular projects along with the estimated costs and risks of each project. Despite the sheer size and impact of infrastructure projects, there is very little research in Australia, or internationally, on the performance of these projects against the business case assumptions made when the decision to invest is taken. If such assumptions (particularly cost assumptions) are not met, then there is serious potential for the misallocation of Australia's finite financial resources. This research addresses this important gap in the literature by using combined quantitative and qualitative research methods to examine the actual performance of 14 major Australian government infrastructure projects. The research findings are controversial, as they challenge widely held perceptions of the effectiveness of certain infrastructure delivery practices. Despite this controversy, the research has had a significant impact on the field and has been described as "outstanding" and "definitive" (Alliancing Association of Australasia), "one of the first of its kind" (Infrastructure Partnerships of Australia) and "making a critical difference to infrastructure procurement" (Victorian Department of Treasury). The implications for practice of the research have been profound and have included the withdrawal by government of various infrastructure procurement guidelines, the formulation of new infrastructure policies by several state governments and the preparation of new infrastructure guidelines that substantially reflect the research findings.
Building on the practical research, a more rigorous academic investigation focussed on the comparative cost uplift of various project delivery strategies was submitted to Australia’s premier academic management conference, the Australian and New Zealand Academy of Management (ANZAM) Annual Conference. This paper has been accepted for the 2010 ANZAM National Conference following a process of double blind peer review with reviewers rating the paper’s overall contribution as "Excellent" and "Good".

Relevance: 30.00%

Publisher:

Abstract:

Existing recommendation systems often recommend products to users by capturing item-to-item and user-to-user similarity measures. These types of recommendation systems become inefficient in people-to-people networks for people-to-people recommendation, which requires a two-way relationship. Moreover, existing recommendation methods use traditional two-dimensional models to find interrelationships between alike users and items. Two-dimensional models are not sufficient to model a people-to-people network, as the latent correlations between the people and their attributes are not utilised. In this paper, we propose a novel tensor decomposition-based method for people-to-people recommendation based on users' profiles and their interactions. People-to-people network data is multi-dimensional; when modelled using vector-based methods it tends to suffer information loss, as such methods capture either the interactions or the attributes of the users but not both. This paper utilises tensor models, which have the ability to correlate and find latent relationships between similar users based on both kinds of information, user interactions and user attributes, in order to generate recommendations. Empirical analysis is conducted on a real-life online dating dataset. As demonstrated in the results, the use of tensor modelling and decomposition has enabled the identification of latent correlations between people based on their attributes and interactions in the network, and quality recommendations have been derived using the 'alike' users concept.
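One simple way to realise the "latent factors from a user × user × interaction-type tensor" idea is a mode-1 unfolding followed by a truncated SVD. The synthetic binary tensor below is a stand-in for the dating data, and this is not the specific decomposition used in the paper, just a minimal sketch of the latent-profile idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy users x users x interaction-type tensor: slice 0 could be
# "messages sent", slice 1 "profile views" (both synthetic here).
n_users, n_types = 20, 2
T = (rng.random((n_users, n_users, n_types)) < 0.1).astype(float)

# Mode-1 unfolding: one row per user, columns index (partner, type).
unfolded = T.reshape(n_users, n_users * n_types)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
rank = 5
latent = U[:, :rank] * s[:rank]               # latent profile per user

# Recommend by cosine similarity of latent profiles ("alike" users).
norms = np.linalg.norm(latent, axis=1, keepdims=True) + 1e-12
sim = (latent / norms) @ (latent / norms).T
np.fill_diagonal(sim, -np.inf)                # never recommend a user to themselves
best_match = sim.argmax(axis=1)               # top recommendation per user
print(best_match)
```

A genuine tensor decomposition (e.g. CP or Tucker) would factor all three modes jointly rather than flattening, but the unfolding already shows how attributes and interactions can share one latent space.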

Relevance: 30.00%

Publisher:

Abstract:

Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalised fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while the multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalised fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph (HVG). Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e. it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, in contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
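The HVG construction used throughout this part is easy to state: time points i < j are connected iff every intermediate value lies strictly below both x_i and x_j. Below is a reference O(n²) implementation run on white noise, a simple stand-in for the fractional Brownian motions studied in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

def hvg_edges(x):
    """Horizontal visibility graph of a series x, as a set of index pairs."""
    n = len(x)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))              # neighbours always see each other
        for j in range(i + 2, n):
            # i and j see each other iff everything between them is lower.
            if x[i + 1:j].max() < min(x[i], x[j]):
                edges.add((i, j))
    return edges

x = rng.standard_normal(200)
edges = hvg_edges(x)
degree = np.zeros(len(x), dtype=int)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1

# For an i.i.d. series the HVG has exponential degree tails with mean
# degree approaching 4, consistent with the fragility noted above.
print(degree.mean())
```

An HVG is outerplanar, so it can never have more than 2n − 3 edges; the heavy-tailed VG, by contrast, allows hubs that make the network robust to random node removal.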

Relevance: 30.00%

Publisher:

Abstract:

With the growing number of XML documents on the Web it becomes essential to organise these XML documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information.
The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilises the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that can be used in clustering.
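The implicit (VSM) model can be sketched as follows: each document gets one vector that concatenates content features (term frequencies) with structure features (frequent-subtree occurrence counts), and the combined vectors are clustered. The synthetic features and the plain spherical k-means below are illustrative assumptions, not the thesis's actual mining pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: term counts (content) and frequent-subtree
# occurrence counts (structure) for 30 "documents".
n_docs, n_terms, n_subtrees, k = 30, 12, 6, 3
content = rng.poisson(2.0, (n_docs, n_terms)).astype(float)
structure = rng.poisson(1.0, (n_docs, n_subtrees)).astype(float)

# Implicit model: one concatenated vector per document, length-normalised
# so that clustering uses cosine similarity.
X = np.hstack([content, structure])
X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12

def spherical_kmeans(X, k, iters=20):
    centres = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmax(X @ centres.T, axis=1)     # cosine assignment
        for c in range(k):
            if np.any(labels == c):
                m = X[labels == c].mean(axis=0)
                centres[c] = m / (np.linalg.norm(m) + 1e-12)
    return labels

labels = spherical_kmeans(X, k)
print(labels)
```

The explicit model replaces the concatenation with a documents × terms × subtrees tensor, so the pairing of a term with the subtree it occurs in is preserved rather than flattened away.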

Relevance: 30.00%

Publisher:

Abstract:

Written for Redland City Council in collaboration with the Capalaba Stakeholders Group. The provisions detailed in this report constitute a protocol agreement developed through the Capalaba Stakeholders Group between 2009 and 2011 around young people and public spaces in Redland City, Queensland. These provisions include agreed principles, standards and responses to tensions or unacceptable behaviour, how various tensions and problems can be resolved in constructive ways, and how people, including young people, can work together to make a public or community-accessed space safe and accessible. It is based on the recognition that young people are part of the community and that strategies to resolve the tensions that arise should be inclusive and employ a mixed-methods approach.

Relevance: 30.00%

Publisher:

Abstract:

Background: Screening tests of basic cognitive status or 'mental state' have been shown to predict mortality and functional outcomes in adults. This study examined the relationship between mental state and outcomes in children with type 1 diabetes. Objective: We aimed to determine whether mental state at diagnosis predicts longer-term cognitive function of children with a new diagnosis of type 1 diabetes. Methods: Mental state of 87 patients presenting with newly diagnosed type 1 diabetes was assessed using the School-Years Screening Test for the Evaluation of Mental Status. Cognitive abilities were assessed 1 week and 6 months post-diagnosis using standardized tests of attention, memory, and intelligence. Results: Thirty-seven children (42.5%) had reduced mental state at diagnosis. Children with impaired mental state had poorer attention and memory in the week following diagnosis, and, after controlling for possible confounding factors, significantly lower IQ at 6 months compared to those with unimpaired mental state (p < 0.05). Conclusions: Cognition is impaired acutely in a significant number of children presenting with newly diagnosed type 1 diabetes. Mental state screening is an effective method of identifying children at risk of ongoing cognitive difficulties in the days and months following diagnosis. Clinicians may consider mental state screening for all newly diagnosed diabetic children to identify those at risk of cognitive sequelae.

Relevance: 30.00%

Publisher:

Abstract:

A standard method for the numerical solution of partial differential equations (PDEs) is the method of lines. In this approach the PDE is discretised in space using finite differences or similar techniques, and the resulting semidiscrete problem in time is integrated using an initial value problem solver. A significant challenge when applying the method of lines to fractional PDEs is that the non-local nature of the fractional derivatives results in a discretised system where each equation involves contributions from many (possibly all) of the spatial nodes. This has important consequences for the efficiency of the numerical solver. First, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. Second, since the Jacobian matrix of the system is dense (partially or fully), methods that avoid the need to form and factorise this matrix are preferred. In this paper, we consider a nonlinear two-sided space-fractional diffusion equation in one spatial dimension. A key contribution of this paper is to demonstrate how an effective preconditioner is crucial for improving the efficiency of the method of lines for solving this equation. In particular, we show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
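The preconditioning idea can be illustrated on a synthetic dense system: take a matrix with algebraically decaying off-diagonals (a stand-in for a fractional-derivative discretisation, not the paper's actual two-sided operator), keep only a narrow band of it as the preconditioner, and hand both to GMRES:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, splu, gmres

# Synthetic dense "Jacobian": off-diagonal entries decay like
# 1/|i-j|^(alpha+1), mimicking the non-local coupling of a fractional
# operator; the strong diagonal keeps the system well posed.
n, alpha = 400, 1.5
i, j = np.indices((n, n))
A = -1.0 / (np.abs(i - j) + 1.0) ** (alpha + 1.0)
np.fill_diagonal(A, 4.0)
b = np.ones(n)

# Banded preconditioner: retain only |i-j| <= bandwidth and factorise it
# once; applying it is cheap and no dense factorisation is ever formed.
bandwidth = 2
band = np.where(np.abs(i - j) <= bandwidth, A, 0.0)
lu = splu(csc_matrix(band))
M = LinearOperator((n, n), lu.solve)

x_plain, info_plain = gmres(A, b)
x_prec, info_prec = gmres(A, b, M=M)
print(info_plain, info_prec)                   # 0 0 -> both converged
print(np.linalg.norm(A @ x_prec - b))          # small residual
```

In the method-of-lines setting the same M is reused across many Newton-Krylov solves inside the implicit time integrator, which is where the large-stepsize payoff described above comes from.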

Relevance: 30.00%

Publisher:

Abstract:

This paper presents an analysis of the stream cipher Mixer, a bit-based cipher with structural components similar to the well-known Grain cipher and the LILI family of keystream generators. Mixer uses a 128-bit key and a 64-bit IV to initialise a 217-bit internal state. The analysis focuses on the initialisation function of Mixer and shows that there exist multiple key-IV pairs which, after initialisation, produce the same initial state, and consequently will generate the same keystream. Furthermore, if the number of iterations of the state update function performed during initialisation is increased, then the number of distinct initial states that can be obtained decreases. It is also shown that there exist some distinct initial states which produce the same keystream, resulting in a further reduction of the effective key space.
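The shrinking-state-space effect can be illustrated with a toy model: iterating any non-bijective update map can only shrink the set of reachable states, so extra initialisation rounds reduce the number of distinct initial states. The random map on 2^12 states below is a hypothetical stand-in for Mixer's 217-bit update, not the actual cipher:

```python
import numpy as np

rng = np.random.default_rng(4)

# A random (almost surely non-bijective) update map on 2^12 states.
size = 2 ** 12
update = rng.integers(0, size, size)

# Start from every possible state (one per key-IV pair in this toy) and
# count how many distinct states survive each initialisation round.
states = np.arange(size)
reachable = []
for rounds in range(6):
    reachable.append(int(len(np.unique(states))))
    states = update[states]

print(reachable)   # non-increasing: more rounds, fewer distinct states
```

For a uniformly random map the image shrinks to roughly (1 − 1/e) of the domain after one round and keeps contracting, which mirrors the abstract's observation that extra initialisation iterations reduce, rather than enlarge, the set of obtainable initial states.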