788 results for heavy vehicle modelling
Abstract:
LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into highly and lowly lithiated phases, with intercalation proceeding by advancing an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that can be difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (which has been used previously in the literature) to describe the assumed phase-change. LiFePO4 crystals have been observed agglomerating in cathodes to form a porous collection of crystals, and this morphology motivates the use of three size-scales in the model. The multi-scale model validates well against experimental data and is then used to examine the role of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour.
They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. This discharge is subject to isotropic Li+ diffusion, based on experimental evidence that suggests the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results shown in the literature, where phase-separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters. The results from this model are then compared with experimental data and fit poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, which match the conclusions of recent literature.
These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which impacts the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements in order to determine crystal-scale behaviour.
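For reference, the phase-field models discussed above are built on the Cahn-Hilliard equation, which in its standard (reaction-free) form reads:

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \big( M \, \nabla \mu \big),
\qquad
\mu = \frac{\partial f}{\partial c} - \kappa \nabla^2 c ,
```

where $c$ is the normalised lithium concentration, $M$ the mobility, $f(c)$ a double-well homogeneous free energy and $\kappa$ the gradient-energy coefficient; the CHR IBVP additionally imposes electrochemical flux boundary conditions coupling $\mu$ to the intercalation reaction rate.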
Abstract:
Vehicles queued at an intersection normally reach the maximum flow rate only after the fourth vehicle, resulting in start-up lost time. This research demonstrated that the Enlarged Stopping Distance (ESD) concept can assist in reducing start-up time and therefore increase traffic flow capacity at signalised intersections. In essence, ESD gives a queuing vehicle sufficient space to accelerate simultaneously, without having to wait for the vehicle in front to depart, hence reducing start-up lost time. In practice, the ESD concept would be most effective when the stopping distance between the first and second vehicles is enlarged, allowing faster clearance of the intersection.
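The capacity effect of start-up lost time can be illustrated with a minimal sketch of the standard effective-green capacity relation; the saturation flow, green time and lost-time values below are illustrative assumptions, not figures from this research.

```python
def cycle_capacity(sat_flow_vph, green_s, cycle_s, startup_lost_s):
    # Capacity (veh/h) = saturation flow * effective green / cycle length,
    # where effective green = displayed green minus start-up lost time
    # (end-gain and clearance effects are ignored in this sketch).
    effective_green = max(green_s - startup_lost_s, 0.0)
    return sat_flow_vph * effective_green / cycle_s

base = cycle_capacity(1800, 30, 90, 3.0)  # conventional stop line (assumed values)
esd = cycle_capacity(1800, 30, 90, 1.5)   # ESD: reduced lost time (assumed)
```

Halving the assumed start-up lost time raises the approach capacity, which is the mechanism the abstract describes.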
Abstract:
This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both four-roller and six-roller milling units, it is primarily developed for the six-roller mills which are widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model "MILSIM" developed at the University of Queensland nearly four decades ago. Although the MILSIM model was widely adopted in the Australian sugar industry for simulating the milling process, it did have some incorrect assumptions. The study aimed to eliminate all the incorrect assumptions of the previous model and develop an advanced model that represents the milling process correctly and tracks the flow of other cane components in the milling process which have not been considered in the previous models. The development of the milling process model was done in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed and a complete milling train along with the juice screen was modelled. The MILEX model was validated with factory data and the variation in the mill performance parameters was observed and studied. Some case studies were undertaken to study the effect of fibre in juice streams, juice in cush return and imbibition% fibre on the extraction performance of the milling process. It was concluded from the study that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model. New empirical relations have to be developed before the model can be applied with confidence.
Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply, and their distribution in juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and the further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides some additional insight regarding the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension to the MILEX model for studying the overall performance of the milling train. Thirdly, the developed models were incorporated in a proprietary software package "SysCAD" for advanced operational efficiency and for availability in the 'whole of factory' model. The MILEX model was developed in SysCAD software to represent a single milling unit. Eventually the entire milling train and the juice screen were developed in SysCAD using a series of different controllers and features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file and reports can be generated in Excel sheets. The flexibility of the software, its ease of use and other advantages are described broadly in the relevant chapter. The MILEX model is developed in static mode and dynamic mode. The application of the dynamic mode of the model is still in progress.
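As a minimal illustration of the mass-balance bookkeeping that an extraction model performs, pol extraction can be sketched as the fraction of cane pol recovered in the juice; the function and figures below are illustrative assumptions, not the MILEX formulation.

```python
def pol_extraction(pol_in_cane_t, pol_in_bagasse_t):
    # Extraction (%) = pol recovered in juice / pol entering in cane * 100,
    # taking pol recovered as cane pol minus pol lost in final bagasse.
    return 100.0 * (pol_in_cane_t - pol_in_bagasse_t) / pol_in_cane_t

extraction = pol_extraction(14.0, 0.7)  # tonnes pol in cane vs. in bagasse (assumed)
```

Tracking other components (reducing sugars, ash, fibre, mud solids) follows the same in/out balance applied stream by stream along the milling train.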
Abstract:
The invited presentation was delivered at the Queensland Department of Main Roads, Brisbane, Australia, on 17 June 2013.
Abstract:
Denial-of-service (DoS) attacks are a growing concern to networked services like the Internet. In recent years, major Internet e-commerce and government sites have been disabled due to various DoS attacks. A common form of DoS attack is a resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool to thwart DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing client puzzles. We revisit a few key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model for analysing client puzzles. We demonstrate the utility of our new security definitions by constructing two stronger hash-based client puzzles. We also show that using stronger client puzzles any protocol can be converted into a provably secure DoS-resilient key exchange protocol. In other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework to analyse DoS-resilience properties. We also prove that the original security claim of JFK does not hold.
We then apply an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which reduces the computation cost of the server significantly, and employ the technique in the most important network protocol, TLS, to analyse the security of the resultant protocol. We also observe that the cost-shifting technique can be incorporated in any Diffie-Hellman based key exchange protocol to reduce the Diffie-Hellman exponential cost of a party by one multiplication and one addition.
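A hash-based client puzzle of the general kind discussed above can be sketched as follows; this is a generic proof-of-work construction for illustration, not the specific puzzles proposed in the thesis.

```python
import hashlib
import itertools
import os

def make_puzzle(difficulty_bits):
    # Server issues a fresh random challenge; the solver must find a nonce
    # such that SHA-256(challenge || nonce) starts with 'difficulty_bits'
    # zero bits.
    return os.urandom(16), difficulty_bits

def solve(challenge, difficulty_bits):
    # Brute-force search: ~2^difficulty_bits hash evaluations on average.
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge, difficulty_bits, nonce):
    # Verification costs a single hash evaluation.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry between solving (many hashes) and verifying (one hash) is what lets a defending server impose work on clients cheaply, throttling resource-depletion attackers.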
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
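The matrix-free step mentioned above (approximating the action of the Jacobian on a vector using only residual evaluations) is the standard finite-difference trick of Jacobian-free Newton-Krylov methods; a minimal sketch, with a naively fixed perturbation size:

```python
import numpy as np

def jfnk_matvec(residual, u, v, eps=1e-7):
    # Approximate the Jacobian-vector product J(u) @ v using two residual
    # evaluations, as used inside inexact Newton-Krylov solvers:
    #   J v ~= (F(u + eps*v) - F(u)) / eps
    # No Jacobian matrix is ever formed or stored.
    return (residual(u + eps * v) - residual(u)) / eps
```

Because Krylov methods such as GMRES only ever need matrix-vector products, this is what allows the whole implicit solve to run on the GPU using the data-parallel residual function alone.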
Abstract:
The rapid growth of visual information on the Web has led to immense interest in multimedia information retrieval (MIR). While advancement in MIR systems has achieved some success in specific domains, particularly through content-based approaches, general Web users still struggle to find the images they want. Despite the success in content-based object recognition or concept extraction, the major problem in current Web image searching remains in the querying process. Since most online users only express their needs in semantic terms or objects, systems that utilize visual features (e.g., color or texture) to search images create a semantic gap which hinders general users from fully expressing their needs. In addition, query-by-example (QBE) retrieval imposes extra obstacles for exploratory search because users may not always have the representative image at hand or in mind when starting a search (i.e. the page zero problem). As a result, the majority of current online image search engines (e.g., Google, Yahoo, and Flickr) still primarily use textual queries to search. The problem with query-based retrieval systems is that they only capture users' information needs in terms of formal queries; the implicit and abstract parts of users' information needs are inevitably overlooked. Hence, users often struggle to formulate queries that best represent their needs, and some compromises have to be made. Studies of Web search logs suggest that multimedia searches are more difficult than textual Web searches, and Web image searching is the most difficult compared to video or audio searches. Hence, online users need to put in more effort when searching multimedia content, especially for image searches. Most interactions in Web image searching occur during query reformulation. While log analysis provides intriguing views on how the majority of users search, their search needs or motivations are ultimately neglected.
User studies on image searching have attempted to understand users' search contexts in terms of users' background (e.g., knowledge, profession, motivation for search and task types) and the search outcomes (e.g., use of retrieved images, search performance). However, these studies typically focused on particular domains with a selective group of professional users. General users' Web image searching contexts and behaviors are little understood, although they represent the majority of online image searching activities nowadays. We argue that only by understanding Web image users' contexts can the current Web search engines further improve their usefulness and provide more efficient searches. In order to understand users' search contexts, a user study was conducted based on university students' Web image searching in News, Travel, and commercial Product domains. The three search domains were deliberately chosen to reflect image users' interests in people, time, event, location, and objects. We investigated participants' Web image searching behavior, with a focus on query reformulation and search strategies. Participants' search contexts such as their search background, motivation for search, and search outcomes were gathered by questionnaires. The searching activity was recorded with participants' think-aloud data for analyzing significant search patterns. The relationships between participants' search contexts and corresponding search strategies were discovered using a Grounded Theory approach. Our key findings include the following aspects:
- Effects of users' interactive intents on query reformulation patterns and search strategies
- Effects of task domain on task specificity and task difficulty, as well as on some specific searching behaviors
- Effects of searching experience on result expansion strategies
A contextual image searching model was constructed based on these findings.
The model helped us understand Web image searching from the user's perspective, and introduced a context-aware searching paradigm for current retrieval systems. A query recommendation tool was also developed to demonstrate how users' query reformulation contexts can potentially contribute to more efficient searching.
Abstract:
The success or effectiveness of any aircraft design is a function of many trade-offs. Over the last 100 years of aircraft design these trade-offs have been optimized and dominant aircraft design philosophies have emerged. Pilotless aircraft (or uninhabited airborne systems, UAS) present new challenges in the optimization of their configuration. Recent developments in battery and motor technology have seen an upsurge in the utility and performance of electric powered aircraft. Thus, the opportunity to explore hybrid-electric aircraft powerplant configurations is compelling. This thesis considers the design of such a configuration from an overall propulsive and energy efficiency perspective. A prototype system was constructed using a representative small UAS internal combustion engine (10cc methanol two-stroke) and a 600 W brushless direct-current (BLDC) motor. These components were chosen to be representative of those that would be found on typical small UAS. The system was tested on a dynamometer in a wind-tunnel and the results show an improvement in overall propulsive efficiency of 17% when compared to a non-hybrid powerplant. In this case, the improvement results from the utilization of a larger propeller which the hybrid solution allows, showing that general efficiency improvements are possible using hybrid configurations for aircraft propulsion. Additionally, this approach provides new improvements in operational and mission flexibility (such as the provision of self-starting) which are outlined in the thesis. Specifically, the opportunity to use the windmilling propeller for energy regeneration was explored. It was found (in the prototype configuration) that significant power (60 W) is recoverable in a steep dive, and although the efficiency of regeneration is low, the capability can allow several options for improved mission viability.
The thesis concludes with the general statement that a hybrid powerplant improves the overall mission effectiveness and propulsive efficiency of small UAS.
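Overall propulsive efficiency in this context is the ratio of useful thrust power to the shaft power supplied; a minimal sketch with illustrative numbers (not the thesis's measured values):

```python
def propulsive_efficiency(thrust_N, airspeed_mps, shaft_power_W):
    # eta_p = (thrust * airspeed) / shaft power
    # i.e. useful propulsive power divided by power delivered to the propeller.
    return thrust_N * airspeed_mps / shaft_power_W

eta = propulsive_efficiency(20.0, 25.0, 600.0)  # illustrative figures only
```

A larger, slower-turning propeller typically raises this ratio, which is the mechanism behind the 17% improvement reported above.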
Abstract:
Cell migration is a behaviour critical to many key biological effects, including wound healing, cancerous cell invasion and morphogenesis, the development of an organism from an embryo. However, given that each of these situations is distinctly different and cells are extremely complicated biological objects, interest lies in more basic experiments which seek to remove conflating factors and present a less complex environment within which cell migration can be experimentally examined. These include in vitro studies like the scratch assay or circle migration assay, and ex vivo studies like the colonisation of the hindgut by neural crest cells. The reduced complexity of these experiments also makes them much more enticing as problems to model mathematically, as is done here. The primary goal of the mathematical models used in this thesis is to shed light on which cellular behaviours work to generate the travelling waves of invasion observed in these experiments, and to explore how variations in these behaviours can potentially predict differences in this invasive pattern which are experimentally observed when cell types or the chemical environment are changed. Relevant literature has already identified the difficulty of distinguishing between these behaviours when using traditional mathematical biology techniques operating on a macroscopic scale, and so here a sophisticated individual-cell-level model, an extension of the Cellular Potts Model (CPM), has been constructed and used to model a scratch assay experiment. This model includes a novel mechanism for dealing with cell proliferation that allows the differing properties of quiescent and proliferative cells to be incorporated into their behaviour. This model is considered both for its predictive power and is used to make comparisons with the travelling waves which arise in more traditional macroscopic simulations.
These comparisons demonstrate a surprising amount of agreement between the two modelling frameworks, and suggest further novel modifications to the CPM that would allow it to better model cell migration. Considerations of the model's behaviour are used to argue that the dominant effect governing cell migration (random motility or signal-driven taxis) likely depends on the sort of invasion demonstrated by cells, as easily seen by microscopic photography. Additionally, a scratch assay simulated on a non-homogeneous domain consisting of a 'fast' and 'slow' region is also used to further differentiate between these different potential cell motility behaviours. A heterogeneous domain is a novel situation which has not been considered mathematically in this context, nor has it been constructed experimentally to the best of the candidate's knowledge. Thus this problem serves as a thought experiment used to test the conclusions arising from the simulations on homogeneous domains, and to suggest what might be observed should this non-homogeneous assay situation be experimentally realised. Non-intuitive cell invasion patterns are predicted for diffusely-invading cells which respond to a cell-consumed signal or nutrient, contrasted with rather expected behaviour in the case of random-motility-driven invasion. The potential experimental observation of these behaviours is demonstrated by the individual-cell-level model used in this thesis, which agrees with the PDE model in predicting these unexpected invasion patterns. In the interest of examining such a case of a non-homogeneous domain experimentally, some brief suggestion is made as to how this could be achieved.
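The macroscopic travelling waves referred to above are classically produced by Fisher-KPP-type reaction-diffusion models, u_t = D u_xx + r u(1 - u), whose invading fronts move at minimum speed 2*sqrt(D*r). A minimal explicit finite-difference sketch (a generic textbook model with illustrative parameters, not the thesis's PDE system):

```python
import numpy as np

def fisher_kpp(D=1.0, r=1.0, L=100.0, nx=501, t_end=20.0, dt=0.01):
    # Explicit finite-difference simulation of u_t = D u_xx + r u (1 - u),
    # the macroscopic model commonly compared against cell-level simulations.
    # dt * D / dx**2 = 0.25 here, satisfying the explicit stability bound.
    dx = L / (nx - 1)
    u = np.zeros(nx)
    u[: nx // 10] = 1.0                 # invading population seeded on the left
    for _ in range(int(t_end / dt)):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        lap[0] = lap[-1] = 0.0          # crude treatment of the domain ends
        u += dt * (D * lap + r * u * (1 - u))
    return u
```

Running this produces a front that translates at roughly the minimum wave speed, which is the macroscopic behaviour the CPM comparisons are measured against.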
Abstract:
The research study discussed in the paper investigated the adsorption/desorption behaviour of heavy metals commonly deposited on urban road surfaces, namely Zn, Cu, Cr and Pb, for different particle size ranges of solids. The study outcomes, based on field studies and batch experiments, confirmed that road-deposited solids particles contain a significant number of vacant charge sites with the potential to adsorb additional heavy metals. Kinetic studies and adsorption experiments indicated that Cr is the metal element most preferentially associated with solids, due to the relatively high electronegativity and high charge density of its trivalent cation (Cr3+). However, the relatively low availability of Cr in the urban road environment could influence this behaviour. Comparing the total adsorbed metals present in solids particles, it was found that Zn has the highest capacity for adsorption to solids. Desorption experiments confirmed that only a low concentration of Cu, Cr and Pb in solids was present in water-soluble and exchangeable form, whilst a significant fraction of adsorbed Zn has a high likelihood of being released back into solution. Among heavy metals, Zn is the most commonly available among road surface pollutants.
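Batch adsorption data of this kind are often summarised with the Langmuir isotherm; the sketch below is a generic illustration of that model, not a claim about the isotherm fitted in this study, and the parameter values are arbitrary.

```python
def langmuir_q(C, q_max, K_L):
    # Langmuir isotherm: adsorbed amount q (e.g. mg/g) as a function of the
    # equilibrium solution concentration C, with monolayer capacity q_max
    # and affinity constant K_L. q saturates at q_max as C grows.
    return q_max * K_L * C / (1.0 + K_L * C)
```

Fitting q_max for each metal and particle size range is one standard way to compare adsorption capacities such as the high Zn capacity reported above.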
Abstract:
In this thesis, three mathematical models describing the growth of a solid tumour, incorporating the host tissue and the immune system response, are developed and investigated. The initial model describes the dynamics of the growing tumour and immune response, before being extended in the second model by introducing a time-varying dendritic cell-based treatment strategy. Finally, in the third model, we present a mathematical model of a growing tumour using a hybrid cellular automaton. These models can inform pre-experimental work, assisting in the design of more effective and efficient laboratory experiments related to tumour growth and its interactions with the immune system and immunotherapy.
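A minimal sketch of the kind of tumour-immune ODE dynamics such models describe; the equations and parameter values below are generic illustrations (logistic tumour growth, constant immune influx, mass-action killing), not the models developed in the thesis.

```python
def simulate(T0=1e5, E0=1e3, t_end=30.0, dt=1e-3,
             r=0.18, K=5e8, s=1.3e4, d=0.04, k=1e-7):
    # Forward-Euler integration of a two-population sketch:
    #   dT/dt = r T (1 - T/K) - k E T   (tumour cells T)
    #   dE/dt = s - d E                 (effector cells E: influx and decay)
    T, E = T0, E0
    for _ in range(int(t_end / dt)):
        dT = r * T * (1 - T / K) - k * E * T
        dE = s - d * E
        T += dt * dT
        E += dt * dE
    return T, E
```

A treatment term such as the dendritic cell therapy mentioned above would typically enter as a time-varying source in the effector-cell equation.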
Abstract:
Nitrous oxide is a major greenhouse gas. The aim of this research was to develop and apply statistical models to characterize the complex spatial and temporal variation in nitrous oxide emissions from soils under different land use conditions. This is critical when developing site-specific management plans to reduce nitrous oxide emissions. These studies can improve predictions and increase our understanding of the environmental factors that influence nitrous oxide emissions. They also help to identify areas for future research, which can further improve the prediction of nitrous oxide emissions in practice.
Abstract:
Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of its market share compared to that of automobiles, particularly in affluent societies with low-density urban forms like Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in the accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research into the relationship between urban transit quality of service and its user perception as well as behaviour. This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, as they can choose between busway route 109 (with a grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the linear fast ferry CityCat on the Brisbane River. The population of interest was set as the attendees of UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perception of transit service quality and their behaviour in using public transit in the study area.
The first wave survey collected behaviour and attitude data on respondents' daily transit usage and their direct importance ratings of factors of route-level transit quality of service. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings on service quality variables regarding transit route preference, to explore users' various perspectives on transit quality of service. Based on the perceptions of service quality collected from the second wave survey, a series of quality criteria of the transit routes under study was quantitatively measured, in particular the travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. Interpretations of the parameter estimates were conducted, particularly the in-vehicle time equivalents of access and egress times, and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted. These importance ratios were applied back to the quality perceptions collected as RP data to compare satisfaction levels between the service attributes and to generate an action relevance matrix to prioritise attributes for quality improvement.
An empirical study on the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality data. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigations and modelling innovations in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time, the analysis of the risk-aversion attitude towards missing a desired service run in passengers' access arrival time choice, and extensions of the utility function specification for modelling the passenger access arrival distribution, using complicated expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk-aversion attitude. Discussions of this research's contributions to knowledge, its limitations, and recommendations for future research are provided in the concluding section of this thesis.
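For purely random passenger arrivals, the classical expected wait is E[W] = E[H^2] / (2 E[H]), which exceeds half the mean headway whenever headways vary; timetable-coordinated arrivals at long headways avoid this penalty, which is consistent with the behaviour observed above. A minimal sketch of the classical formula (the headway values are illustrative):

```python
def random_arrival_wait(headways):
    # Expected wait for passengers arriving uniformly at random in time:
    #   E[W] = E[H^2] / (2 E[H])
    # The squared term reflects length-biased sampling: a random arrival is
    # more likely to fall inside a long headway than a short one.
    mean_h = sum(headways) / len(headways)
    mean_h2 = sum(h * h for h in headways) / len(headways)
    return mean_h2 / (2.0 * mean_h)
```

With perfectly even 10-minute headways the expected wait is 5 minutes, while an irregular 5/15-minute pattern with the same mean headway yields a longer expected wait.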
Abstract:
Awareness of the need to avoid losses and casualties due to rain-induced landslides is increasing in regions that routinely experience heavy rainfall. Improvements in early warning systems against rain-induced landslides, such as prediction modelling using rainfall records, are urgently needed in vulnerable regions. The existing warning systems have been applied using stability chart development and real-time displacement measurement on slope surfaces. However, there are still some drawbacks, such as ignoring the mechanism of rain-induced instability, misleading predictions due to their probabilistic nature, and short evacuation times. In this research, a real-time predictive method was proposed to alleviate the drawbacks mentioned above. A case-study soil slope in Indonesia that failed in 2010 during rainfall was used to verify the proposed predictive method. Using the results from the field and laboratory characterizations, numerical analyses were applied to develop a model of an unsaturated residual soil slope with deep cracks subject to rainwater infiltration. Real-time rainfall measurement at the slope and the prediction of future rainfall are needed. By coupling transient seepage and stability analyses, the variation of the factor of safety of the slope with time was provided as a basis for developing a method for the real-time prediction of the rain-induced instability of slopes. This study shows the proposed prediction method has the potential to be used in an early warning system against landslide hazards, since the factor of safety (FOS) value and the timing of the predicted failure can be provided before the actual failure of the case study slope.
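The role of infiltration-driven pore-water pressure in the factor of safety can be illustrated with the classical infinite-slope expression; this is a textbook simplification, not the coupled seepage-stability analysis used in the research, and the parameter values below are illustrative.

```python
import math

def infinite_slope_fos(c_eff, phi_deg, gamma, depth, beta_deg, pore_pressure):
    # Infinite-slope factor of safety with pore-water pressure u:
    #   FOS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi'))
    #         / (gamma*z*sin(beta)*cos(beta))
    # c' effective cohesion (kPa), phi' friction angle (deg),
    # gamma unit weight (kN/m^3), z slip depth (m), beta slope angle (deg).
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - pore_pressure
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c_eff + normal * math.tan(phi)) / driving

f_dry = infinite_slope_fos(5.0, 30.0, 18.0, 3.0, 35.0, 0.0)   # before rainfall
f_wet = infinite_slope_fos(5.0, 30.0, 18.0, 3.0, 35.0, 15.0)  # after infiltration
```

Rainfall infiltration raises u and so erodes the frictional resistance term, which is why tracking FOS against real-time rainfall gives a basis for early warning.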
Abstract:
While there are many similarities between the languages of the various workflow management systems, there are also significant differences. One particular area of difference arises from the fact that different systems impose different syntactic restrictions. In such cases, business analysts have to choose between either conforming to the language in their specifications or transforming these specifications afterwards. The latter option is preferable as it allows for a separation of concerns. In this paper we investigate to what extent such transformations are possible in the context of various syntactic restrictions (the most restrictive of which will be referred to as structured workflows). We also provide deep insight into the consequences, particularly in terms of expressive power, of imposing such restrictions.