Abstract:
LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny over the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into lithium-rich and lithium-poor phases, with intercalation proceeding by the advance of an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that are difficult to determine experimentally.

The first part of the thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (as used previously in the literature) to describe the assumed phase change. LiFePO4 crystals have been observed to agglomerate in cathodes into porous collections of crystals, and this morphology motivates the use of three size scales in the model. The multi-scale model validates well against experimental data, and the validated model is then used to examine the effect of manufacturing parameters (including the agglomerate radius) on battery performance.

The remainder of the thesis investigates phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and provide a far more accurate representation of experimentally observed crystal-scale behaviour. They are based on the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares-based finite volume method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under both. The least-squares-based FVM is then used to simulate the discharge of individual LiFePO4 crystals in two dimensions. This discharge is subject to isotropic Li+ diffusion, based on experimental evidence suggesting that the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate even at very high discharge rates. This differs markedly from results in the literature, where phase separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport.

Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed; the Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters. The results from this model compare poorly with experimental data, though a significant parameter regime could not be investigated numerically. Many-particle effects are, however, evident in the simulated discharges, matching the conclusions of recent literature. These effects result in crystals being subject to local currents very different from the discharge rate applied to the cathode, which affects the phase-separating behaviour of the crystals and raises questions about the validity of using cathode-scale experimental measurements to infer crystal-scale behaviour.
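For orientation, a minimal sketch of the kind of phase-field formulation referred to above: the standard Cahn-Hilliard equation with a surface (intercalation) flux boundary condition. The symbols and nondimensionalisation here are generic and need not match the thesis's exact CHR formulation.

\[
\frac{\partial c}{\partial t} = \nabla\cdot\bigl(M\,\nabla\mu\bigr),
\qquad
\mu = f'(c) - \kappa\,\nabla^{2}c,
\]
\[
-M\,\nabla\mu\cdot\mathbf{n} = j_{\mathrm{ext}}
\quad\text{and}\quad
\nabla c\cdot\mathbf{n} = 0
\quad\text{on the crystal surface,}
\]

where c is the local lithium concentration, M a mobility, f(c) a double-well homogeneous free energy, κ a gradient-energy coefficient, and j_ext the electrochemically determined surface flux. The fourth-order character (the ∇²c term inside μ) is what makes the problem stiff and motivates a specialised discretisation such as the least-squares-based FVM described above.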
Abstract:
A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in fact the result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures, such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process demands massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome this problem. Unlike the tedious convergence of a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups with a smaller number of damage candidates; the converged population in each group then serves as the initial population of the next layer, where the groups merge into larger groups. Because parallel computation can be implemented in a damage detection process featuring ML-GA, both optimization performance and computational efficiency can be enhanced. To assess the proposed algorithm, the modal strain energy correlation (MSEC) is used as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model are employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA converges only to a local solution.
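As an illustration of the layered idea only — independent sub-populations evolved in parallel and then merged into fewer, larger groups — the following Python sketch uses a placeholder objective in place of the paper's MSEC fitness and generic GA operators. It is not the authors' implementation, and their grouping of damage candidates differs in detail; the assumed damage pattern and element count are hypothetical.

```python
import random

N_ELEMENTS = 20                                   # hypothetical number of damage candidates

def objective(ind):
    """Placeholder fitness: distance to an assumed damage pattern (stand-in for MSEC)."""
    true = [0.0] * N_ELEMENTS
    true[3], true[11] = 0.30, 0.15                # assumed damage at elements 3 and 11
    return sum((a - b) ** 2 for a, b in zip(ind, true))

def random_individual():
    return [random.uniform(0.0, 0.5) for _ in range(N_ELEMENTS)]

def crossover(a, b):
    cut = random.randrange(1, N_ELEMENTS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.05))) if random.random() < rate else g
            for g in ind]

def evolve(pop, generations=100):
    """Plain GA with truncation selection on a single group."""
    for _ in range(generations):
        pop = sorted(pop, key=objective)
        parents = pop[: len(pop) // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children
    return pop

def multilayer_ga(layers=(8, 4, 2, 1), group_size=20):
    """Each layer regroups all individuals into fewer groups and evolves them independently."""
    groups = [[random_individual() for _ in range(group_size)] for _ in range(layers[0])]
    for n_groups in layers:
        individuals = [ind for g in groups for ind in g]
        groups = [individuals[i::n_groups] for i in range(n_groups)]   # merge into larger groups
        groups = [evolve(g) for g in groups]                           # independent, parallelisable runs
    return min(groups[0], key=objective)

if __name__ == "__main__":
    print([round(g, 2) for g in multilayer_ga()])
```

Because each group is evolved independently within a layer, the per-group runs are the natural unit for parallel execution, which is where the computational saving over a single monolithic GA comes from.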
Abstract:
The ability to innovate has become a critical capability for many contemporary organizations seeking to sustain their operations in the long run. However, existing innovation models that attempt to guide organizations emphasize different aspects of innovation (e.g., products, services or business models), different stages of innovation (e.g., ideation, implementation or operation) or different skills (e.g., development or crowdsourcing) that are necessary to innovate, in turn creating isolated pockets of understanding about different aspects of innovation. In order to yield more predictable innovation outcomes, organizations need to understand what exactly they need to focus on, what capabilities they need to have, and what is necessary to take an idea to market. This paper aims to construct a framework for innovation that contributes to this understanding. We focus on a number of different stages in the innovation process and highlight the different types and levels of organizational, technological, individual and process capabilities required to manage the organizational innovation process. Our work offers a comprehensive conceptualization of innovation as a multi-level process model and provides a range of implications for further empirical and theoretical examination.
Abstract:
This poster summarises the current findings from STRC's Integrated Traveller Information research domain, which aims for accurate and reliable travel time prediction and the optimisation of multimodal trips. The three selected discussions are:
a) Fundamental understanding of the use of Bluetooth MAC Scanners (BMS) for travel time estimation;
b) Integration of multiple data sources (loop detectors and Bluetooth) for travel time and density estimation;
c) An architecture for an online and predictive multimodal trip planner.
Abstract:
Vehicle speed is an important attribute of the utility of a transport mode. The speed relationship between multiple modes of transport is of interest to traffic planners and operators. This paper quantifies the relationship between bus speed and average car speed by integrating Bluetooth data and Transit Signal Priority data from the urban network in Brisbane, Australia. The method proposed in this paper is the first of its kind to relate bus speed and average car speed by integrating multi-source traffic data in a corridor-based approach. Three transferable regression models are proposed, relating average car speed to the speed of not-in-service buses, in-service buses during peak periods, and in-service buses during off-peak periods. The models are cross-validated and the interrelationships are significant.
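By way of illustration only, a corridor-level linear regression of average car speed on bus speed of the kind described above can be sketched as follows. The numbers, variable names and single-predictor form are hypothetical and are not the paper's fitted models.

```python
import numpy as np

# Hypothetical corridor-period observations (km/h): bus speeds e.g. from Transit
# Signal Priority logs, car speeds e.g. from Bluetooth MAC matching. Not the paper's data.
bus_speed = np.array([18.7, 22.0, 25.5, 27.3, 30.1])
car_speed = np.array([24.1, 28.4, 31.0, 33.5, 36.2])

slope, intercept = np.polyfit(bus_speed, car_speed, 1)   # car ~ slope*bus + intercept
pred = slope * bus_speed + intercept
r2 = 1.0 - np.sum((car_speed - pred) ** 2) / np.sum((car_speed - car_speed.mean()) ** 2)
print(f"car_speed ~ {slope:.2f}*bus_speed + {intercept:.2f}   (R^2 = {r2:.2f})")
```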
Abstract:
The work presented in this thesis investigates the mathematical modelling of charge transport in electrolyte solutions within the nanoporous structures of electrochemical devices. We compare two approaches found in the literature by developing one-dimensional transport models based on the Nernst-Planck and Maxwell-Stefan equations. The development of the Nernst-Planck equations relies on the assumption that the solution is infinitely dilute. However, this is typically not the case for the electrolyte solutions found within electrochemical devices. Furthermore, ionic concentrations much higher than the bulk concentrations can arise near the electrode/electrolyte interfaces due to the development of an electric double layer. Hence, multicomponent interactions that are neglected by the Nernst-Planck equations may become important. The Maxwell-Stefan equations account for these multicomponent interactions, and thus should provide a more accurate representation of transport in electrolyte solutions. To allow for the effects of the electric double layer in both the Nernst-Planck and Maxwell-Stefan equations, we do not assume local electroneutrality in the solution. Instead, we model the electrostatic potential as a continuously varying function by way of Poisson's equation. Importantly, we show that for a ternary electrolyte solution at high interfacial concentrations, the Maxwell-Stefan equations predict behaviour that is not recovered from the Nernst-Planck equations.

The main difficulty in applying the Maxwell-Stefan equations to charge transport in electrolyte solutions is knowledge of the transport parameters. In this work, we use molecular dynamics simulations to obtain the required diffusivities, and are thus able to incorporate microscopic behaviour into a continuum-scale model. This is important given the small length scales we are concerned with, while still retaining the computational efficiency of continuum modelling. This approach provides an avenue by which the microscopic behaviour may ultimately be incorporated into a full device-scale model.

The one-dimensional Maxwell-Stefan model is extended to two dimensions, an important first step towards a fully coupled interfacial charge transport model for electrochemical devices. It allows us to begin investigating ambipolar diffusion effects, where the motion of the ions in the electrolyte is affected by the transport of electrons in the electrode. As we do not consider modelling in the solid phase in this work, this is simulated by applying a time-varying potential to one interface of our two-dimensional computational domain, allowing a flow field to develop in the electrolyte. Our model facilitates the observation of ion transport near the electrode/electrolyte interface. For the simulations considered in this work, we show that while there is some motion in the direction parallel to the interface, the interfacial coupling is not sufficient for the ions in solution to be "dragged" along the interface for long distances.
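For reference, the generic textbook forms of the two transport descriptions compared in this work, coupled to Poisson's equation in place of local electroneutrality; the notation is generic and need not match the thesis's nondimensionalisation.

\[
\textbf{Nernst-Planck (dilute, convection omitted):}\quad
\mathbf{N}_i = -D_i \nabla c_i - \frac{z_i F}{RT}\, D_i c_i \nabla \phi,
\qquad
\frac{\partial c_i}{\partial t} = -\nabla\cdot\mathbf{N}_i .
\]
\[
\textbf{Maxwell-Stefan (concentrated):}\quad
\frac{x_i}{RT}\,\nabla\mu_i = \sum_{j \neq i} \frac{x_i \mathbf{N}_j - x_j \mathbf{N}_i}{c_T\, \mathcal{D}_{ij}} ,
\qquad
\textbf{Poisson:}\quad
-\varepsilon\,\nabla^{2}\phi = F \sum_i z_i c_i .
\]

Here c_i, x_i, z_i and N_i are the concentration, mole fraction, charge number and molar flux of species i; μ_i is its electrochemical potential; c_T is the total concentration; D_i and 𝒟_ij are the Fickian and Maxwell-Stefan diffusivities (the latter obtained in this work from molecular dynamics); and φ is the electrostatic potential.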
Abstract:
An accurate evaluation of the airborne particle dose-response relationship requires detailed measurements of the actual particle concentration levels that people are exposed to in every microenvironment in which they reside. The aim of this work was to perform an exposure assessment of children in relation to two different aerosol species: ultrafine particles (UFPs) and black carbon (BC). To this purpose, personal exposure measurements, in terms of UFP and BC concentrations, were performed on 103 children aged 8-11 years (10.1 ± 1.1 years) using hand-held particle counters and aethalometers. Simultaneously, a time-activity diary and a portable GPS were used to determine the children's daily time-activity pattern and estimate their inhaled dose of UFPs and BC. The median concentration to which the study population was exposed was found to be comparable to the high levels typically detected in urban traffic microenvironments, in terms of both particle number (2.2×10⁴ part. cm⁻³) and BC (3.8 μg m⁻³) concentrations. Daily inhaled doses were also relatively high, equal to 3.35×10¹¹ part. day⁻¹ for UFPs and 3.92×10¹ μg day⁻¹ for BC. Cooking and using transportation were recognized as the main activities contributing to overall daily exposure to UFPs and BC, respectively, when normalized by their corresponding time contribution. Therefore, UFPs and BC could serve as tracers of children's exposure to particulate pollution from indoor cooking activities and from transportation microenvironments, respectively.
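The inhaled-dose estimates quoted above are of the generic time-activity form (the paper's exact inhalation-rate assumptions are not reproduced here):

\[
\delta = \sum_{k} \overline{C}_k \, \mathrm{IR}_k \, \Delta t_k ,
\]

where C̄_k is the measured UFP (or BC) concentration during activity/microenvironment k, IR_k the activity-specific inhalation rate, and Δt_k the time spent in k according to the time-activity diary and GPS data.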
Abstract:
The use of intelligent transport systems is proliferating across the Australian road network, particularly on major freeways. New technology allows a greater range of signs and messages to be displayed to drivers. While there is a long history of human factors analyses of signage, no evaluation had been conducted of this novel, sometimes dynamic, signage or of potential interactions when signs are co-located. The purpose of this driving simulator study was to investigate drivers' behavioural changes and comprehension resulting from the co-location of Lane Use Management Systems with static signs and (Enhanced) Variable Message Signs on Queensland motorways. A section of motorway was simulated, and nine scenarios were developed that presented a combination of signage cases across levels of driving task complexity. Two higher-risk road user groups were targeted for this research on an advanced driving simulator: older (65+ years, N=21) and younger (18-22 years, N=20) drivers. Changes in sign co-location and task complexity had a small effect on driver comprehension of the signs and on vehicle dynamics variables, including deviation from the posted speed limit, headway, standard deviation of lane keeping, and brake jerks. However, increasing the amount of information provided to drivers at a given location (by co-locating several signs) increased participants' gaze duration on the signs. With co-location of signs and without added task complexity, a single gaze exceeded 2 s for more than half of the participants in both groups, and reached up to 6 s for some individuals.
Abstract:
Graphene-polymer nanocomposites have attracted considerable attention due to their unique properties, such as high thermal conductivity (~3000 W m⁻¹ K⁻¹), mechanical stiffness (~1 TPa) and electronic transport properties. By comparison, the thermal performance of graphene-polymer composites has not been well investigated. The major technical challenge is to understand the interfacial thermal transport between the graphene nanofiller and the polymer matrix at small material length scales. To this end, we conducted molecular dynamics simulations to investigate thermal transport in a graphene-polyethylene nanocomposite. The influence of functionalization with hydrocarbon chains on the interfacial thermal conductivity was studied, taking into account the effects of model size and the thermal conductivity of graphene. The results are expected to contribute to the development of new graphene-polymer nanocomposites with tailored thermal properties.
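In MD studies of this kind, interfacial (Kapitza) thermal transport is commonly quantified as a conductance per unit area across the filler/matrix interface; the definition below is the generic one and is not claimed to be this paper's exact protocol:

\[
G_K = \frac{q}{\Delta T}, \qquad R_K = \frac{1}{G_K},
\]

where q is the steady heat flux crossing the graphene/polyethylene interface (for example, imposed in a non-equilibrium MD run) and ΔT is the temperature discontinuity measured across it.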
Abstract:
In this paper we analyse the effects of highway traffic flow parameters, such as vehicle arrival rate and density, on the performance of Amplify and Forward (AF) cooperative vehicular networks along a multi-lane highway under free-flow conditions. We derive analytical expressions for connectivity performance and verify them with Monte Carlo simulations. When AF cooperative relaying is employed together with Maximum Ratio Combining (MRC) at the receivers, the average route error rate shows a 10-20-fold improvement compared to direct communication. A 4-8-fold increase in the maximum number of traversable hops can also be observed at different vehicle densities when AF cooperative communication is used to strengthen communication routes. However, the theoretical upper bound on the maximum number of hops promises even higher performance gains.
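For context, the standard single-relay expressions underlying analyses of this kind are shown below; the paper's multi-lane, multi-hop connectivity model builds on quantities of this type rather than on these exact formulas.

\[
\gamma_{\mathrm{AF}} = \frac{\gamma_{sr}\,\gamma_{rd}}{\gamma_{sr} + \gamma_{rd} + 1},
\qquad
\gamma_{\mathrm{MRC}} = \gamma_{sd} + \gamma_{\mathrm{AF}},
\]

where γ_sd, γ_sr and γ_rd are the instantaneous SNRs of the source-destination, source-relay and relay-destination links; with MRC the destination adds the direct-link and relayed-link SNRs before detection.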
Abstract:
Despite its potential contributions to multiple sustainable policy objectives, urban transit is generally not widely used by the public in terms of market share compared with automobiles, particularly in affluent societies with low-density urban forms such as Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be designed to influence transit users' travel decisions, targeting customer satisfaction and broader community welfare. This motivates research into the relationship between urban transit quality of service and users' perceptions and behaviour.

This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, who can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus 412, and the linear fast ferry CityCat on the Brisbane River. The population of interest was defined as attendees of UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and their use of public transit in the study area. The first-wave survey collected behaviour and attitude data on respondents' daily transit usage and their direct importance ratings of factors of route-level transit quality of service. A series of statistical analyses examined the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings on service quality variables regarding transit route preference, to explore users' varied perspectives on transit quality of service. Based on the perceptions of service quality collected in the second-wave survey, a series of quality criteria for the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using the route-level service quality perceptions collected in the second-wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. The parameter estimates were interpreted, particularly the equivalent in-vehicle time of access and egress times, and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.

These importance ratios were then applied to the quality perceptions collected as revealed-preference data, to compare satisfaction levels between service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This motivated further investigation and modelling innovations in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time; in the analysis of risk-averse attitudes towards missing the desired service run in passengers' access arrival time choice; and in extensions of the utility function specification for modelling the passenger access arrival distribution, using expected-utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk aversion. Discussions of this research's contributions to knowledge, its limitations, and recommendations for future research are provided in the concluding section of the thesis.
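Two standard relations sit behind the route-choice and waiting-time analyses described above; they are given here in their generic textbook forms, not as the thesis's final specifications.

\[
P_n(i) = \frac{e^{V_{in}}}{\sum_{j \in C_n} e^{V_{jn}}},
\qquad
E[W] = \frac{E[H]}{2}\left(1 + \mathrm{CV}_H^{2}\right),
\]

where V_in is the systematic utility of route i for passenger n and C_n the choice set (the ratio of two estimated utility coefficients, e.g. β_access/β_ivt, gives the equivalent in-vehicle time discussed above). The second expression is the classical expected waiting time for passengers arriving at random at a stop with mean headway E[H] and headway coefficient of variation CV_H; the timetable-coordinated arrivals observed above for long headways fall below this value.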
Abstract:
Organizations increasingly make use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still lacking in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigation. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online social media data in order to reconstruct digital social signals; matching the signals with consumers' life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute by tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
Abstract:
The Pacific Rim Real Estate Society (PRRES) conducted four property case competitions from 2009 to 2012. The competition provides opportunities for undergraduate students to present a proposal on a given case study. All students were locked down with their four team members for five hours without external help, to ensure a level playing field across participants. Students prepared their presentations and defended their arguments in front of experts from the property industry and academia. The aim of this paper is to reflect on the feedback received from the stakeholders involved in the case competition. Besides exploring what students have gained from the competitions, this paper provides insight into the opportunities and challenges for the new format of competition to be introduced in 2013. Over the last four competitions, three universities participated in all four consecutive events, four universities took part in two events, and another four competed only once. Some universities had a considerable advantage from previous experience in similar international business competitions. Findings show that students benefited greatly from the event, including improvements in problem-solving and other non-technical skills. Despite these benefits, the PRRES closed-book case competition has proven not to be viable; future competitions therefore need to minimise travel and logistics costs.
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through the fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis of text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, the best performance with this architecture is obtained for a certain combination of instances. Heuristic rules and diversity measures have commonly been used for classifier selection, but it is shown that optimal performance is achieved with the 'best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with those of other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances, with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.
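As a generic illustration of an n-instance / m-attempt decision-fusion architecture (one common configuration is "OR" over up to m samples per classifier and "AND" across the n instances), the Python sketch below may help fix ideas. It is not the thesis's scheme and does not implement the proposed sequential error ratio (SER) selection measure; the classifier interface is hypothetical.

```python
from typing import Callable, Sequence

def fuse_decisions(classifiers: Sequence[Callable[[object], bool]],
                   samples: Sequence[Sequence[object]],
                   m: int) -> bool:
    """Accept only if every classifier accepts at least one of its first m samples.

    classifiers : one verification function per instance (e.g. per HMM speaker model)
    samples     : the attempts presented to each corresponding classifier
    m           : maximum number of attempts allowed per classifier
    """
    for clf, attempts in zip(classifiers, samples):
        if not any(clf(s) for s in attempts[:m]):   # "OR" over the attempts
            return False                            # "AND" across the instances
    return True
```

In such a sequential architecture the order in which instances are evaluated matters, which is why a selection measure that identifies the classifier most likely to decide correctly at each stage (the role SER plays above) is attractive.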