866 results for Concentration-time response modelling
Abstract:
A review of the main rolling models is conducted to assess their suitability for modelling the foil rolling process. Two such models are Fleck and Johnson's Hertzian model and Fleck, Johnson, Mear and Zhang's Influence Function model. Both models are approximated using perturbation methods, which reduces the computation time compared with the full numerical solution. The Hertzian model was approximated using the ratio of the yield stress of the strip to the plane-strain Young's modulus of the rolls as the small perturbation parameter. The Influence Function model approximation takes advantage of the solution of the well-known Aerofoil Integral Equation to gain insight into how the choice of interior boundary points affects the stability of the numerical solution of the model's equations. These approximations require less computation than their full models and, in the case of the Hertzian approximation, introduce only a small error in the predictions of roll force and roll torque. Hence the Hertzian approximate method is suitable for on-line control. The predictions from the Influence Function approximation underestimate those of the numerical solution; the main source of this error is the approximation of the pressure in the plastic reduction regions.
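The perturbation approach described above can be sketched generically as follows (an assumed regular expansion for illustration only; the actual equations of the Hertzian foil-rolling model are not reproduced here):

```latex
% Small parameter: ratio of strip yield stress Y to the rolls'
% plane-strain Young's modulus E'
\varepsilon = \frac{Y}{E'} \ll 1, \qquad
p(x) = p_0(x) + \varepsilon\, p_1(x) + \varepsilon^2 p_2(x) + O(\varepsilon^3)
```

Solving order by order in ε replaces one nonlinear problem by a short sequence of simpler linear ones, which is where the reduction in computation time comes from.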
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of recorded zeroes: these may represent a zero response given some threshold (presence) or indicate that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of it. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the Integrated Mean Canonical Statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
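The path-sampling idea behind the IMCS method can be illustrated with a minimal sketch. Assumed details for illustration: a homogeneous single-parameter Ising-type MRF on a small torus and a basic Gibbs sampler; the thesis's three-parameter autologistic model and its tuning are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample(theta, n=6, sweeps=60):
    """Draw one (approximate) sample from a homogeneous binary MRF
    (Ising-type, spins in {-1,+1}) on an n x n torus via Gibbs sampling."""
    x = rng.integers(0, 2, size=(n, n)) * 2 - 1
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                     + x[i, (j - 1) % n] + x[i, (j + 1) % n])
                p = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
                x[i, j] = 1 if rng.random() < p else -1
    return x

def canonical_stat(x):
    """Sufficient (canonical) statistic S(x): sum of neighbour products."""
    return float((x * np.roll(x, -1, 0)).sum() + (x * np.roll(x, -1, 1)).sum())

def log_nc_ratio(theta0, theta1, grid=5, draws=5):
    """Path-sampling estimate of log Z(theta1) - log Z(theta0).
    Since d/dtheta log Z(theta) = E_theta[S(x)], the log-NC ratio is the
    integral of the mean canonical statistic along a path of theta values,
    evaluated here by the trapezoidal rule over MCMC estimates."""
    thetas = np.linspace(theta0, theta1, grid)
    means = np.array([np.mean([canonical_stat(gibbs_sample(t))
                               for _ in range(draws)]) for t in thetas])
    return float(np.sum((means[1:] + means[:-1]) * np.diff(thetas)) / 2.0)
```

The point of the method is that Z itself is never computed: only expectations of the canonical statistic, which MCMC estimates well, are needed.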
Abstract:
The numerical modelling of electromagnetic waves has been the focus of much past research. Specific applications of electromagnetic wave scattering include microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. In the literature, a number of methods have been employed to solve these equations. Of these, the classical Finite-Difference Time-Domain scheme, which uses a staggered time and space discretisation, is the best known and most widely used. However, it is complicated to implement this method on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations. It is proposed that the free-space component of the solution is computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions. The boundary information is mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities with inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetic strategy.
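The mapping of time-domain boundary data into the frequency domain can be sketched as follows. This is a single-frequency DFT of a sampled boundary field; the function name and normalisation are illustrative and not taken from the paper.

```python
import numpy as np

def boundary_phasor(signal, dt, freq):
    """Map a sampled time-domain boundary field E(t) to its complex phasor
    at a single frequency via the Discrete Fourier Transform.
    Normalised so a pure cosine of amplitude A maps to the phasor A + 0j.
    The result supplies amplitude and phase for a Helmholtz boundary condition."""
    n = len(signal)
    t = np.arange(n) * dt
    return 2.0 / n * np.sum(signal * np.exp(-2j * np.pi * freq * t))
```

In the hybrid scheme each boundary point of the load region would get such a phasor, assembled into the right-hand side of the sparse complex Helmholtz system.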
Abstract:
Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides great incentives for pursuing better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve the Parallel-Machine Job-Shop Scheduling (PMJSS) problem without the blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train, and must hold it, until the next section on the routing becomes available. As a consequence, the problem has been considered as BPMJSS with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies of non-classical scheduling problems with various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been conducted.
In this procedure, an alternative graph, as an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploiting the blocking characteristics captured by the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To demonstrate the merits of the proposed algorithm, we compare it with two known algorithms in the literature (i.e. the Recursive Procedure and the Directed Graph algorithm). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical FSS with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm, named the LK algorithm, to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. Because only the attributes in the data input need to be adjusted, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints. Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used for illustrating and comparing the PMJSS and BPMJSS models.
Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive constraints than blocking constraints, arise when considering the prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under different conditions in train scheduling environments. To construct the feasible train schedule, the proposed SE algorithm comprises several individual modules, including the feasibility-satisfaction, time-determination, tune-up and conflict-resolution procedures. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) and a local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints.
The computational results confirm that the proposed methodology is promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
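The blocking (hold-while-wait) condition central to the BPMJSS model can be illustrated with a minimal constructive sketch: a simple greedy dispatcher, not the thesis's FSP/SE algorithms; all names are illustrative.

```python
def blocking_schedule(routes, durations, order):
    """Greedy feasibility sketch of blocking (hold-while-wait) scheduling.
    routes[j]    : ordered list of section ids for train j
    durations[j] : traversal time of train j on each section of its route
    order        : dispatching order of the trains
    With no buffer space, a train occupies a section until it can enter
    the next section on its route."""
    free = {}       # section id -> time at which it next becomes free
    schedule = {}   # train id   -> entry time into each section of its route
    for j in order:
        route, dur = routes[j], durations[j]
        entry, t = [], 0.0
        for k, s in enumerate(route):
            t = max(t, free.get(s, 0.0))  # wait until the section is free
            entry.append(t)
            t = entry[k] + dur[k]
        # A section is released only when the train enters the next one;
        # the last section is released when the train leaves the network.
        for k, s in enumerate(route):
            free[s] = entry[k + 1] if k + 1 < len(route) else entry[k] + dur[k]
        schedule[j] = entry
    return schedule
```

Because a section's release time equals the entry time into the following section, a train held at a busy junction keeps its current section occupied and propagates the delay upstream, which is exactly the blocking behaviour described above.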
Abstract:
Monotony has been identified as a contributing factor to road crashes. Drivers’ ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks, such as driving on Australian rural roads, many of which are monotonous by nature. Highway design in particular attempts to reduce the driver’s task to a mere lane-keeping one. Such a task provides little stimulation and is monotonous, affecting the driver’s attention, which is no longer directed towards the road. Inattention contributes to crashes, especially for professional drivers. Monotony has been studied mainly from the endogenous perspective (for instance through sleep deprivation) without taking into account the influence of the task itself (repetitiveness) or the surrounding environment. The aim and novelty of this thesis is to develop a methodology (mathematical framework) able to predict driver lapses of vigilance under monotonous environments in real time, using endogenous and exogenous data collected from the driver, the vehicle and the environment. Existing approaches have tended to neglect the specificity of task monotony, leaving the question of the existence of a “monotonous state” unanswered. Furthermore, the issue of detecting vigilance decrement before it occurs (prediction) has not been investigated in the literature, let alone in real time. A multidisciplinary approach is necessary to explain how vigilance evolves in monotonous conditions; such an approach needs to draw on psychology, physiology, road safety, computer science and mathematics. The systemic approach proposed in this study is unique in its predictive dimension and allows us to determine, in real time, the impacts of monotony on the driver’s ability to drive. The methodology is based on mathematical models relating data available in vehicles to the vigilance state of the driver during a monotonous driving task in various environments.
The model integrates different data measuring the driver’s endogenous and exogenous factors (related to the driver, the vehicle and the surrounding environment). Electroencephalography (EEG) is used to measure driver vigilance since it has been shown to be the most reliable real-time methodology for assessing vigilance level. A variety of mathematical models are suitable to provide a framework for prediction; to find the most accurate, a collection of mathematical models was trained in this thesis and the most reliable identified. The methodology developed in this research is first applied to a theoretically sound measure of sustained attention, the Sustained Attention to Response Task (SART), as adapted by Michael (2010) and Michael and Meuter (2006, 2007). This experiment induced impairments due to monotony during a vigilance task. Analyses performed in this thesis confirm and extend findings from Michael (2010) that monotony leads to an important vigilance impairment independent of fatigue. This thesis is also the first to show that monotony changes the dynamics of vigilance evolution and tends to create a “monotonous state” characterised by reduced vigilance. Personality traits such as being a low sensation seeker can mitigate this vigilance decrement. It is also evident that lapses in vigilance can be predicted accurately with Bayesian modelling and Neural Networks. This framework was then applied to the driving task by designing a simulated monotonous driving task. The design of such a task requires multidisciplinary knowledge and involved the psychologist Rebecca Michael. Monotony was varied through both road design and road environment variables. This experiment demonstrated that road monotony can lead to driving impairment; in particular, monotonous road scenery was shown to have a greater impact than monotonous road design.
Next, this study identified a variety of surrogate measures that are correlated with the vigilance levels obtained from the EEG, so that vigilance states can be predicted from these surrogate measures. This means that vigilance decrement can be detected in a car without the use of an EEG device. Amongst the different mathematical models tested in this thesis, only Neural Networks predicted the vigilance levels accurately. The results of both experiments provide valuable information about the methodology for predicting vigilance decrement. The issue is complex and requires modelling that can adapt to high inter-individual differences. Only Neural Networks proved accurate in both studies, suggesting that these models are the most likely to be accurate when used on real roads or for further research on vigilance modelling. This research provides a better understanding of the driving task under monotonous conditions. Results demonstrate that mathematical modelling can be used to determine the driver’s vigilance state when driving, using the surrogate measures identified during this study. This research has opened up avenues for future work and could result in the development of an in-vehicle device predicting driver vigilance decrement. Such a device could contribute to a reduction in crashes and therefore improve road safety.
Analytical modeling and sensitivity analysis for travel time estimation on signalized urban networks
Abstract:
This paper presents a model for estimating average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated based on the availability of data: a) case-D, with detector data only; b) case-DS, with detector data and signal timings; and c) case-DSS, with detector data, signal timings and saturation flow rate. The performance of the model for different degrees of saturation and different detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D the performance is inconsistent. The sensitivity analysis of the model for case-D indicates that it is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, the model has low accuracy and low reliability, whereas for a detection interval of around 1.5 times the signal cycle both accuracy and reliability are high.
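The cumulative-plot principle underlying the model can be sketched as follows. This shows only the textbook identity relating the curves to travel time; the paper's reconstruction of the curves from detector data and signal timings is not reproduced.

```python
import numpy as np

def average_travel_time(times, arrivals, departures):
    """Average travel time from cumulative plots: the area between the
    cumulative arrival curve A(t), built at the upstream detector, and the
    cumulative departure curve D(t) at the downstream stop line, divided by
    the total number of vehicles served.  Curves are sampled at `times`
    and integrated by the trapezoidal rule."""
    a = np.asarray(arrivals, dtype=float)
    d = np.asarray(departures, dtype=float)
    t = np.asarray(times, dtype=float)
    gap = a - d                                   # vehicles currently in the link
    area = float(np.sum((gap[1:] + gap[:-1]) * np.diff(t)) / 2.0)
    return area / d[-1]
```

The horizontal distance between the two curves is an individual vehicle's travel time (under first-in-first-out), so the enclosed area divided by the vehicle count gives the average.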
Abstract:
This paper presents a travel time prediction model and evaluates its performance and transferability. Advanced Traveler Information Systems (ATIS) are gaining importance, increasing the need for accurate, timely and useful information for travelers. Travel time information quantifies the traffic condition in a way that is easy for users to understand. The proposed travel time prediction model is based on an efficient use of nearest neighbor search. The model is calibrated for optimal performance using Genetic Algorithms. Results indicate better performance from the proposed model than from the naïve model presently used.
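A minimal sketch of the nearest-neighbor prediction step described above. The distance weights stand in for parameters that a Genetic Algorithm would calibrate; all names and the weighting scheme are illustrative, not taken from the paper.

```python
import numpy as np

def knn_predict(history_states, history_tt, current_state, k=3, weights=None):
    """Nearest-neighbor travel time prediction sketch: find the k past
    traffic states most similar to the current state (weighted Euclidean
    distance) and average their observed travel times.
    history_states : past feature vectors (e.g. flows, occupancies)
    history_tt     : travel times observed for those states
    weights        : per-feature distance weights (GA-calibrated in practice)"""
    X = np.asarray(history_states, dtype=float)
    w = np.ones(X.shape[1]) if weights is None else np.asarray(weights, float)
    d = np.sqrt((((X - np.asarray(current_state, float)) ** 2) * w).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(history_tt, dtype=float)[nearest]))
```

A GA calibration loop would search over `weights` (and `k`) to minimise prediction error on held-out data, which is the "calibrated for optimal performance" step in the abstract.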
Abstract:
Plants subjected to increases in the supply of resource(s) limiting growth may allocate more of those resources to existing leaves, increasing photosynthetic capacity, and/or to production of more leaves, increasing whole-plant photosynthesis. To elucidate patterns of leaf- versus shoot-level photosynthetic responses, the responses to N amendments of three populations of the alpine willow, Salix glauca, growing along an alpine topographic sequence representing a gradient in soil moisture and organic matter (and thus potential N supply), were measured over two growing seasons. Leaf-level (foliar N, photosynthesis rates, photosynthetic N-use efficiency) and shoot-level (leaf area per shoot, number of leaves per shoot, stem weight, N resorption efficiency) measurements were made to examine the spatial and temporal variation in these potential responses to increased N availability. The predominant response of the willows to N fertilization was at the shoot level, through production of greater leaf area per shoot. Greater leaf area resulted from production of larger leaves in both years of the experiment and from production of more leaves during the second year of fertilization treatment. A significant leaf-level photosynthetic response occurred only during the first year of treatment, and only in the dry meadow population. Variation in photosynthesis rates was related more to variation in stomatal conductance than to foliar N concentration; stomatal conductance in turn was significantly related to N fertilization. Differences among the populations in photosynthesis, foliar N, leaf production, and responses to N fertilization indicate that N availability may be lowest in the dry meadow population and highest in the ridge population. This result is contrary to the hypothesis that a gradient of plant-available N corresponds with the snowpack/topographic gradient.
Abstract:
Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees, commissions and others are all relatively easily identified, since they represent actual costs incurred at a given point in time. By contrast, holding costs are not always immediately evident, since they characteristically lack visibility. One reason for this is that, for the most part, they accrue over time in an ever-changing environment. In addition, wide variations exist in development pipeline components: they typically span anywhere between two and more than sixteen years, even for projects located within the same geographical region. Determining the starting and end points for holding cost computation can also prove problematic. Furthermore, the choice between applying prevailing inflation, or interest rates, or a combination of both over time, adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs vary widely, and in some instances they are completely ignored. In addition, it may be demonstrated that ambiguities exist in terms of the inclusion of various elements of holding costs and the assessment of their relative contribution. Yet their impact on housing affordability is widely acknowledged to be profound, and their quantification could maximise the opportunities for delivering affordable housing. This paper seeks to build on earlier investigations into the elements related to holding costs, providing theoretical modelling of the size of their impact, specifically on the end user.
At this point the research relies upon quantitative data sets; however, additional qualitative analysis (not included here) will be needed to account for certain variations between developers' expectations and the actual outcomes achieved. Although this research stops short of a regional or international comparison study, it provides an improved understanding of the relationship between holding costs, regulatory charges, and housing affordability.
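As a toy illustration of why the choice of rate and holding period matters so much, consider a cost carried through the pipeline at a compounding annual rate. This is not the paper's model; the figures and function are invented for the example.

```python
def holding_cost(carried_value, annual_rate, years):
    """Illustrative sketch only: the holding cost of a value carried through
    the development pipeline, compounding an annual interest/inflation rate
    over the holding period.  Returns the cost above the initial value."""
    return carried_value * ((1.0 + annual_rate) ** years - 1.0)
```

At 5% per annum, a parcel held for two years accrues about 10.25% of its value in holding costs, while the same parcel held for sixteen years accrues over 118%, which is why pipeline duration dominates the calculation.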
Abstract:
The reconstruction of extended maxillary and mandibular defects with prefabricated free flaps is a two-stage procedure that allows immediate function with implant-supported dentures. The appropriate delay between prefabrication and reconstruction depends on the interfacial strength of the bone–implant surface. The purpose of this animal study was to evaluate the removal torque of unloaded titanium implants in the fibula, the scapula and the iliac crest. Ninety implants with a sandblasted and acid-etched (SLA) surface were tested after healing periods of 3, 6, and 12 weeks, respectively. Removal torque values (RTV) were collected using a computerized counterclockwise torque driver. The bicortically anchored 8 mm implants in the fibula revealed values of 63.73 Ncm, 91.50 Ncm, and 101.83 Ncm at 3, 6, and 12 weeks, respectively. The monocortical anchorage in the iliac crest showed values of 71.40 Ncm, 63.14 Ncm, and 61.59 Ncm with 12 mm implants at the corresponding times. The monocortical anchorage in the scapula demonstrated mean RTV of 62.28 Ncm, 97.63 Ncm, and 99.7 Ncm with 12 mm implants at 3, 6, and 12 weeks, respectively. The study showed an increase in removal torque with increasing healing time. The interfacial strength of the bicortically anchored 8 mm implants in the fibula was comparable to that of the monocortically anchored 12 mm implants in the iliac crest and the scapula at the corresponding times. The resistance to shear appeared to be determined by the type of anchorage (monocortical vs. bicortical) and by the length of the implant, with its greater amount of bone–implant interface.
Abstract:
This paper reviews the main studies on transit users’ route choice in the context of transit assignment. The studies are categorized into three groups: static transit assignment, within-day dynamic transit assignment, and emerging approaches. The motivations and behavioural assumptions of these approaches are re-examined. The first group includes shortest-path heuristics in all-or-nothing assignment, random utility maximization route-choice models in stochastic assignment, and user equilibrium based assignment. The second group covers within-day dynamics in transit users’ route choice, transit network formulations, and dynamic transit assignment. The third group introduces the emerging studies on behavioural complexities, day-to-day dynamics, and real-time dynamics in transit users’ route choice. Future research directions are also discussed.
Abstract:
Many infrastructure and essential-service systems in Europe and North America, such as electricity and telecommunications, used to be operated as monopolies, if not state-owned enterprises. However, they have now been disintegrated into groups of smaller companies managed by different stakeholders. Railways are no exception. Since the early 1980s, there have been reforms in the shape of restructuring of the national railways in different parts of the world, and continuous refinements are still conducted to allow better utilisation of railway resources and quality of service. There has been growing interest within the industry in understanding the impacts of these reforms on operational efficiency and constraints. A number of post-evaluations have been conducted by analysing the performance of the stakeholders in terms of their profits (Crompton and Jupe 2003), quality of train service (Shaw 2001) and engineering operations (Watson 2001). Results from these studies are valuable for future improvement of the system, followed by a new cycle of post-evaluations. However, direct implementation of such changes is often costly, and the consequences take a long period of time (e.g. years) to surface. With the advance of fast computing technologies, computer simulation is a cost-effective means to evaluate a hypothetical change in a system prior to actual implementation. For example, simulation suites have been developed to study a variety of traffic control strategies according to sophisticated models of train dynamics, traction and power systems (Goodman, Siu and Ho 1998, Ho and Yeung 2001). Unfortunately, under the restructured railway environment, it is by no means easy to model the complex behaviour of the stakeholders and the interactions between them. The multi-agent system (MAS) is a recently developed modelling technique which may be useful in assisting the railway industry to conduct simulations of the restructured railway system.
In MAS, a real-world entity is modelled as a software agent that is autonomous, reactive to changes, and able to initiate proactive actions and social communicative acts. MAS has been applied in the areas of supply-chain management processes (García-Flores, Wang and Goltz 2000, Jennings et al. 2000a, b) and e-commerce activities (Au, Ngai and Parameswaran 2003, Liu and You 2003), in which the objectives and behaviour of the buyers and sellers are captured by software agents. It is therefore worthwhile to investigate the suitability and feasibility of applying agent modelling in railways and the extent to which it might help in developing better resource management strategies. This paper sets out to examine the benefits of using MAS to model the resource management process in railways. Section 2 first describes the business environment after the railway reforms. The problems that emerge from the restructuring process are then identified in section 3. Section 4 describes the realisation of a MAS for railway resource management under the restructured scheme and the feasibility studies expected from the model.
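A minimal sketch of the agent abstraction described above: an agent with its own state that receives messages (social acts) and reacts to them. All class, field and message names are hypothetical, invented for illustration.

```python
class RailAgent:
    """Illustrative software agent for railway resource management.
    The agent is autonomous (owns its resources), communicative
    (exchanges messages) and reactive (answers requests when it reacts)."""

    def __init__(self, name, resources=None):
        self.name = name
        self.resources = set(resources or [])
        self.inbox = []

    def receive(self, message):
        """Social/communicative act: accept a message from another agent."""
        self.inbox.append(message)

    def react(self):
        """Reactive behaviour: grant any requested resource the agent holds,
        releasing it from its own pool; return the reply messages."""
        replies = []
        for msg in self.inbox:
            if msg["type"] == "request" and msg["resource"] in self.resources:
                self.resources.discard(msg["resource"])
                replies.append({"type": "grant", "to": msg["from"],
                                "resource": msg["resource"]})
        self.inbox.clear()
        return replies
```

In a railway MAS, such agents would represent infrastructure owners, train operators and maintenance contractors, and the negotiation between them would model the restructured resource management process.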
Abstract:
The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles, produced by chamber photo-oxidation of α-pinene in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured with a continuous-flow CCN counter. Reconciliation of the more complex model formulation with the measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the behaviour required was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data for predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and on the relative humidity used to provide the model input. However, for inorganic salt solution particles, the measurements from all instruments agreed with each other and with theory. The differences in data between validated and extensively used HTDMA instruments mean that the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty.
In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality-assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: i) surface tension suppression occurs; ii) bulk-to-surface partitioning is important; or iii) the water activity coefficient changes significantly as a function of solute concentration.
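The "simple single hygroscopicity parameter model" can be sketched with the widely used κ-Köhler parameterisation of Petters & Kreidenweis (2007), which is assumed here to be the model meant; the constants and the fixed surface tension value are standard assumptions, not taken from this abstract.

```python
import numpy as np

def kohler_saturation(D, D_dry, kappa, T=298.15, sigma=0.072):
    """Koehler curve with a single hygroscopicity parameter kappa.
    D, D_dry : wet and dry particle diameters in metres
    sigma    : assumed droplet surface tension (N/m; pure water here)
    Returns the equilibrium saturation ratio over the droplet."""
    Mw, R, rho_w = 0.018015, 8.314, 997.0       # water molar mass, gas const., density
    A = 4.0 * sigma * Mw / (R * T * rho_w)       # Kelvin (curvature) coefficient
    aw = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1.0 - kappa))  # water activity
    return aw * np.exp(A / D)

def critical_supersaturation(D_dry, kappa):
    """Critical supersaturation (percent): the peak of the Koehler curve,
    found numerically over a logarithmic grid of wet diameters."""
    D = np.geomspace(D_dry * 1.01, 1e-4, 20000)
    return (kohler_saturation(D, D_dry, kappa).max() - 1.0) * 100.0
```

Comparing such predicted critical supersaturations against CCN counter measurements, with the water activity (equivalently κ) derived from HTDMA growth factors, is the single-parameter closure exercise the abstract describes; a higher κ (more hygroscopic particle) yields a lower critical supersaturation.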