134 results for Concentration-time response modelling


Relevance:

30.00%

Publisher:

Abstract:

The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TVENT (i.e. 80% of TVENT) and a supra-TVENT (TVENT + 0.75 × (VO2max − TVENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 min each, with VO2 captured on a breath-by-breath basis for every test and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data were interpolated, time aligned, averaged and smoothed for both intensities. Three models were then fitted to the VO2 data to determine the kinetic responses. One of these models was mono-exponential, while the other two were bi-exponential; a second time delay parameter was the only difference between the two bi-exponential models. An F-test on the residual sum of squares of each model was used to determine significance between the bi-exponential models. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data were analysed by a two-way repeated-measures ANOVA, and a correlation was used to determine significance between VO2 and IEMG. The VO2 data during the sub-TVENT intensity were best described by a mono-exponential response. In contrast, during supra-TVENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different between the two models, in contrast to other findings. The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG with increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
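As a hedged illustration of the kinetics-fitting procedure described above, the sketch below fits mono- and bi-exponential onset models to synthetic breath-by-breath data with SciPy and compares them with an F-test on the residual sums of squares. All parameter names, starting values and data are illustrative assumptions, not the study's.

```python
# Sketch: fitting mono- and bi-exponential VO2 kinetics models and comparing
# them with an extra-sum-of-squares F-test (synthetic data; all parameter
# values are illustrative, not those of the study).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f

def mono_exp(t, base, A1, tau1, td1):
    """Mono-exponential onset kinetics with one time delay."""
    y = np.full_like(t, base, dtype=float)
    on = t >= td1
    y[on] += A1 * (1 - np.exp(-(t[on] - td1) / tau1))
    return y

def bi_exp(t, base, A1, tau1, td1, A2, tau2, td2):
    """Bi-exponential kinetics: primary component plus a delayed slow component."""
    y = mono_exp(t, base, A1, tau1, td1)
    on = t >= td2
    y[on] += A2 * (1 - np.exp(-(t[on] - td2) / tau2))
    return y

t = np.arange(0, 480, 5.0)  # 8 min of (synthetic) breath data, seconds
rng = np.random.default_rng(0)
y = bi_exp(t, 0.8, 1.6, 25, 15, 0.4, 120, 90) + rng.normal(0, 0.05, t.size)

p_mono, _ = curve_fit(mono_exp, t, y, p0=[0.8, 1.5, 30, 10])
p_bi, _ = curve_fit(bi_exp, t, y, p0=[0.8, 1.5, 30, 10, 0.3, 100, 80])

rss_mono = np.sum((y - mono_exp(t, *p_mono)) ** 2)
rss_bi = np.sum((y - bi_exp(t, *p_bi)) ** 2)

# F-test: does the extra component significantly reduce the residual SS?
df1, df2 = len(p_bi) - len(p_mono), t.size - len(p_bi)
F = ((rss_mono - rss_bi) / df1) / (rss_bi / df2)
print(f"F = {F:.1f}, p = {f.sf(F, df1, df2):.3g}")
```

The extra-sum-of-squares F-test is the standard way to ask whether the additional slow component (and its delay) is justified by the reduction in residual error.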

Relevance:

30.00%

Publisher:

Abstract:

A review of the main rolling models is conducted to assess their suitability for modelling the foil rolling process. Two such models are Fleck and Johnson's Hertzian model and Fleck, Johnson, Mear and Zhang's Influence Function model. Both of these models are approximated through the use of perturbation methods, which reduces computation time compared with the full numerical solutions. The Hertzian model was approximated using the ratio of the yield stress of the strip to the plane-strain Young's modulus of the rolls as the small perturbation parameter. The Influence Function model approximation takes advantage of the solution of the well-known Aerofoil Integral Equation to gain insight into how the choice of interior boundary points affects the stability of the numerical solution of the model's equations. These approximations require less computation than their full models and, in the case of the Hertzian approximation, introduce only a small error in the predictions of roll force and roll torque. Hence the Hertzian approximate method is suitable for on-line control. The predictions from the Influence Function approximation underestimate those of the numerical results; the main source of this error is the approximation of the pressure in the plastic reduction regions.
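To make the perturbation idea concrete (the thesis's exact series and governing equations are not reproduced here), a regular expansion in the stated small parameter, the ratio of strip yield stress to the plane-strain roll modulus, would take a form like this sketch:

```latex
% Illustrative regular perturbation expansion of the roll pressure p(x);
% a sketch only, not the thesis's exact series.
\[
  \epsilon = \frac{\sigma_Y}{E'} \ll 1, \qquad
  p(x) = p_0(x) + \epsilon\, p_1(x) + \epsilon^2 p_2(x) + O(\epsilon^3).
\]
% Substituting the expansion into the governing equations and collecting
% powers of \epsilon yields a hierarchy of simpler problems for
% p_0, p_1, \dots, each far cheaper to solve than the full model.
```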

Relevance:

30.00%

Publisher:

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy; this requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
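The path-sampling identity behind methods such as IMCS can be sketched on a toy one-parameter binary MRF: since d log Z(θ)/dθ = E_θ[s(x)], where s is the canonical statistic, the log ratio of normalization constants is the integral of the mean canonical statistic along a path in θ. The Python sketch below estimates this for a small Ising-type model by Gibbs sampling; the thesis's three-parameter autologistic model and IMCS implementation are richer, and everything here (lattice size, path, sampler settings) is an illustrative assumption.

```python
# Sketch: path sampling for a log normalization-constant (NC) ratio on a toy
# one-parameter binary MRF, p(x) ∝ exp(theta * s(x)) with x_i in {-1, +1} and
# s(x) the sum of nearest-neighbour products over a periodic lattice.
import numpy as np

rng = np.random.default_rng(1)
N = 12  # lattice side, periodic boundaries

def gibbs_sweep(x, theta):
    """One full Gibbs sweep of the lattice at coupling parameter theta."""
    for i in range(N):
        for j in range(N):
            nb = (x[(i - 1) % N, j] + x[(i + 1) % N, j]
                  + x[i, (j - 1) % N] + x[i, (j + 1) % N])
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * nb))
            x[i, j] = 1 if rng.random() < p_plus else -1

def canonical_stat(x):
    """s(x): each nearest-neighbour pair counted once (periodic lattice)."""
    return float((x * np.roll(x, 1, axis=0)).sum()
                 + (x * np.roll(x, 1, axis=1)).sum())

def log_nc_ratio(theta0, theta1, n_grid=11, burn=50, keep=50):
    """log Z(theta1) - log Z(theta0) = integral of E_theta[s(x)] d theta."""
    thetas = np.linspace(theta0, theta1, n_grid)
    means = []
    x = rng.choice(np.array([-1, 1]), size=(N, N))
    for th in thetas:  # warm-start the chain along the path
        for _ in range(burn):
            gibbs_sweep(x, th)
        draws = []
        for _ in range(keep):
            gibbs_sweep(x, th)
            draws.append(canonical_stat(x))
        means.append(np.mean(draws))
    means = np.array(means)
    # trapezoidal rule for the path integral in theta
    return float(np.sum(np.diff(thetas) * (means[:-1] + means[1:]) / 2.0))

print(f"log Z(0.3) - log Z(0.0) ≈ {log_nc_ratio(0.0, 0.3):.1f}")
```

Gelman & Meng's path sampling generalizes exactly this construction to richer paths and to multi-parameter models such as the autologistic.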

Relevance:

30.00%

Publisher:

Abstract:

The numerical modelling of electromagnetic waves has been the focus of many research areas in the past. Some specific applications of electromagnetic wave scattering are in the fields of microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. In the literature, a number of methods have been employed to solve these equations. Of these, the classical Finite-Difference Time-Domain scheme, which uses a staggered time and space discretisation, is the best known and most widely used. However, it is complicated to implement this method on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations. It is proposed that the free-space component of the solution is computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions; the boundary information is mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex-valued system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities for inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetic strategy.
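A minimal one-dimensional sketch of the frequency-domain half of such a hybrid scheme is given below: a time-domain boundary signal (standing in for the free-space FDTD trace) is mapped to a phasor with the DFT, and a finite-difference Helmholtz equation over a lossy load is solved as a sparse complex linear system with SciPy. The geometry, material values and boundary treatment are illustrative assumptions, not the thesis's configuration.

```python
# Sketch: frequency-domain half of a time/frequency hybrid scheme in 1D.
# A recorded time-domain boundary field is mapped to a single frequency with
# the DFT, then a finite-difference Helmholtz equation E''(x) + k^2 E(x) = 0
# is solved over the lossy load as a sparse complex linear system.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

c0 = 3e8
f0 = 2.45e9                     # illustrative microwave-heating frequency, Hz
omega = 2 * np.pi * f0

# Time-domain boundary field (synthetic here; in the hybrid scheme it would
# come from the free-space FDTD solution), mapped into the frequency domain.
dt = 1.0 / (40 * f0)
t = np.arange(2048) * dt
boundary_t = np.sin(omega * t)
spectrum = np.fft.rfft(boundary_t)
freqs = np.fft.rfftfreq(t.size, dt)
E_boundary = spectrum[np.argmin(np.abs(freqs - f0))]   # phasor at f0

# Sparse complex Helmholtz solve on the load region.
L, n = 0.05, 400                # 5 cm lossy slab, grid points (assumed)
h = L / (n - 1)
eps_r = 5.0 - 0.5j              # assumed complex permittivity of the load
k2 = (omega / c0) ** 2 * eps_r
main = np.full(n, -2.0 / h**2 + k2, dtype=complex)
off = np.full(n - 1, 1.0 / h**2, dtype=complex)
A = diags([off, main, off], [-1, 0, 1]).tolil()
b = np.zeros(n, dtype=complex)

# Dirichlet boundary conditions taken from the DFT-mapped field.
A[0, :] = 0;  A[0, 0] = 1;   b[0] = E_boundary
A[-1, :] = 0; A[-1, -1] = 1; b[-1] = 0.0   # e.g. field forced to zero at a wall
E = spsolve(A.tocsc(), b)
print(np.abs(E[:5]))
```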

Relevance:

30.00%

Publisher:

Abstract:

Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, owing to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides strong incentives to pursue better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions. Owing to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, which means that a track section cannot release, and must hold, a train until the next section on the routing becomes available. As a consequence, the problem has been considered as BPMJSS with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies have been conducted on non-classical scheduling problems under various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer). In this procedure, an alternative graph, as an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To demonstrate the pre-eminence of the proposed algorithm, we compare it with two known algorithms in the literature (i.e. the Recursive Procedure and the Directed Graph algorithms). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical flow-shop scheduling (FSS) with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm, named the LK algorithm, to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints.
Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used to illustrate and compare the PMJSS and BPMJSS models. Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise when considering prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under the different conditions that arise in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm consists of several individual modules, including the feasibility-satisfaction, time-determination, tune-up and conflict-resolution procedures. To find a good train schedule, a two-stage hybrid heuristic called the SE-BIH algorithm is developed by combining the constructive heuristic (the SE algorithm) with a local-search heuristic (the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results validate that the proposed methodology is very promising, because it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
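The blocking (hold-while-wait) constraint at the heart of BPMJSS can be illustrated with a toy simulation: a train keeps occupying its current section until the next section on its route becomes free. The sketch below computes section entry times for trains dispatched in a fixed order on a single line; it is a minimal illustration under assumed runtimes, not the thesis's FSP/SE algorithms.

```python
# Sketch: the blocking (hold-while-wait) constraint in train scheduling.
# A train keeps occupying its current section until the next section on its
# route becomes free. This toy routine computes section entry times for
# trains dispatched in a fixed order on one line; illustrative only.
from dataclasses import dataclass, field

@dataclass
class Train:
    name: str
    depart: int                 # earliest departure time
    runtimes: list              # traversal time per section, in route order
    entry: list = field(default_factory=list)   # computed entry times

def schedule_blocking(trains, n_sections):
    free_at = [0] * n_sections  # time at which each section becomes free
    for tr in sorted(trains, key=lambda t: t.depart):
        t = tr.depart
        for s in range(n_sections):
            t = max(t, free_at[s])          # wait until section s is free
            tr.entry.append(t)
            t += tr.runtimes[s]             # traverse section s
            # Blocking: s is released only when the train can ENTER s+1,
            # i.e. when s+1 is free -- not merely when traversal finishes.
            if s + 1 < n_sections:
                t = max(t, free_at[s + 1])
            free_at[s] = t
    return trains

trains = [Train("freight", 0, [4, 6, 5]), Train("express", 2, [2, 2, 2])]
for tr in schedule_blocking(trains, 3):
    print(tr.name, tr.entry)
```

In the toy output, the second train is held inside a section it has finished traversing whenever the section ahead is still occupied, exactly the behaviour that distinguishes blocking models from classical job-shop models with buffers.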

Relevance:

30.00%

Publisher:

Abstract:

Monotony has been identified as a contributing factor to road crashes. Drivers' ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks, such as driving on Australian rural roads, many of which are monotonous by nature. Highway design in particular tends to reduce the driver's task to mere lane keeping. Such a task provides little stimulation and is monotonous, affecting the driver's attention, which is no longer directed towards the road. Inattention contributes to crashes, especially for professional drivers. Monotony has been studied mainly from the endogenous perspective (for instance through sleep deprivation), without taking into account the influence of the task itself (repetitiveness) or the surrounding environment. The aim and novelty of this thesis is to develop a methodology (a mathematical framework) able to predict driver lapses of vigilance under monotonous environments in real time, using endogenous and exogenous data collected from the driver, the vehicle and the environment. Existing approaches have tended to neglect the specificity of task monotony, leaving the question of the existence of a "monotonous state" unanswered. Furthermore, the issue of detecting vigilance decrement before it occurs (prediction) has not been investigated in the literature, let alone in real time. A multidisciplinary approach is necessary to explain how vigilance evolves in monotonous conditions; such an approach needs to draw on psychology, physiology, road safety, computer science and mathematics. The systemic approach proposed in this study is unique in its predictive dimension and allows us to define, in real time, the impacts of monotony on the driver's ability to drive. The methodology is based on mathematical models relating data available in vehicles to the vigilance state of the driver during a monotonous driving task in various environments. The models integrate measurements of the driver's endogenous and exogenous factors (related to the driver, the vehicle and the surrounding environment). Electroencephalography (EEG) is used to measure driver vigilance, since it has been shown to be the most reliable real-time methodology for assessing vigilance level. A variety of mathematical models can provide a framework for prediction; to find the most accurate, a collection of models was trained in this thesis and the most reliable identified. The methodology developed in this research is first applied to a theoretically sound measure of sustained attention, the Sustained Attention to Response Task (SART), as adapted by Michael (2010) and Michael and Meuter (2006, 2007). This experiment induced impairments due to monotony during a vigilance task. Analyses performed in this thesis confirm and extend findings from Michael (2010) that monotony leads to an important vigilance impairment independent of fatigue. This thesis is also the first to show that monotony changes the dynamics of vigilance evolution and tends to create a "monotonous state" characterised by reduced vigilance. Personality traits, such as being a low sensation seeker, can mitigate this vigilance decrement. It is also evident that lapses in vigilance can be predicted accurately with Bayesian modelling and Neural Networks. This framework was then applied to the driving task by designing a simulated monotonous driving task.
The design of such a task requires multidisciplinary knowledge and involved the psychologist Rebecca Michael. Monotony was varied through both road design and road environment variables. This experiment demonstrated that road monotony can lead to driving impairment; monotonous road scenery in particular was shown to have a greater impact than monotonous road design. Next, this study identified a variety of surrogate measures that are correlated with the vigilance levels obtained from the EEG, so that vigilance states can be predicted from these surrogate measures. This means that vigilance decrement can be detected in a car without the use of an EEG device. Among the different mathematical models tested in this thesis, only Neural Networks predicted the vigilance levels accurately. The results of both experiments provide valuable information about the methodology for predicting vigilance decrement. The issue is complex and requires modelling that can adapt to large inter-individual differences. Only Neural Networks proved accurate in both studies, suggesting that these models are the most likely to be accurate when used on real roads or in further research on vigilance modelling. This research provides a better understanding of the driving task under monotonous conditions. The results demonstrate that mathematical modelling can be used to determine the driver's vigilance state while driving, using the surrogate measures identified in this study. This research has opened up avenues for future work and could result in the development of an in-vehicle device that predicts driver vigilance decrement. Such a device could contribute to a reduction in crashes and therefore improve road safety.
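As a hedged sketch of the final modelling step, the code below trains a small feed-forward neural network to predict vigilance lapses from surrogate (non-EEG) measures, using scikit-learn on synthetic data. The feature names, the lapse-generating rule and the network size are hypothetical assumptions; the thesis's actual features, labels and models are not reproduced.

```python
# Sketch: predicting vigilance lapses from surrogate (non-EEG) measures with
# a small neural network. Data are synthetic and feature names hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
# Hypothetical surrogate measures: steering variability, lane-position SD,
# time on task, and a road-monotony score.
X = np.column_stack([
    rng.normal(0, 1, n),            # steering variability (z-scored)
    rng.normal(0, 1, n),            # lane position SD
    rng.uniform(0, 120, n),         # minutes of driving
    rng.uniform(0, 1, n),           # monotony score
])
# Synthetic rule: lapses become likelier with time on task and monotony.
logit = -3 + 0.02 * X[:, 2] + 2.0 * X[:, 3] + 0.5 * X[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```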

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a model for estimating average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated according to the availability of data: (a) case-D, detector data only; (b) case-DS, detector data and signal timings; and (c) case-DSS, detector data, signal timings and saturation flow rate. The performance of the model for different degrees of saturation and different detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D it is inconsistent. The sensitivity analysis for case-D indicates that the model is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
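The core idea of travel time estimation from cumulative plots can be sketched simply: with upstream and downstream cumulative counts U(t) and D(t), and assuming FIFO and vehicle conservation, the area between the two curves divided by the total vehicle count gives the average travel time. The Python sketch below uses synthetic counts and a constant true travel time; it illustrates the principle only, not the paper's case-D/DS/DSS models.

```python
# Sketch: average travel time from cumulative plots. With upstream and
# downstream cumulative counts U(t) and D(t), assuming FIFO and conservation,
# the area between the curves divided by the total count is the average
# travel time. Counts below are synthetic.
import numpy as np

dt = 1.0                                   # detector interval, s
t = np.arange(0, 300, dt)
arrivals = 0.4 + 0.2 * np.sin(t / 40)      # upstream flow, veh/s
U = np.cumsum(arrivals) * dt               # cumulative upstream count

delay = 35.0                               # true travel time, s (constant here)
D = np.interp(t - delay, t, U, left=0.0)   # downstream curve = shifted U

vehicles = U[-1] - U[0]
area = np.sum(U - D) * dt                  # area between cumulative curves
print(f"average travel time ≈ {area / vehicles:.1f} s")
```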

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a travel time prediction model and evaluates its performance and transferability. Advanced Traveler Information Systems (ATIS) are gaining importance, increasing the need for accurate, timely and useful information for travelers. Travel time information quantifies the traffic condition in a way that is easy for users to understand. The proposed travel time prediction model is based on an efficient use of nearest neighbor search, and is calibrated for optimal performance using Genetic Algorithms. Results indicate better performance from the proposed model than from the naïve model presently used.
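A minimal sketch of nearest-neighbour travel time prediction follows: the current state (a vector of recent travel times) is matched against historical profiles, and the forecast is the mean outcome of the k closest matches. In the paper, parameters such as k and the distance weights are calibrated with Genetic Algorithms; here they are fixed, and the data are synthetic.

```python
# Sketch: nearest-neighbour travel-time prediction on synthetic data. The
# forecast is the mean target of the k historical state vectors closest to
# the current state; k and the weights are fixed here, not GA-calibrated.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic historical travel times (seconds) at 5-min resolution.
series = (300 + 60 * np.sin(np.linspace(0, 8 * np.pi, 2000))
          + rng.normal(0, 10, 2000))
window, horizon = 6, 3          # 6 recent observations; predict 3 steps ahead

X = np.array([series[i:i + window]
              for i in range(len(series) - window - horizon)])
y = np.array([series[i + window + horizon - 1]
              for i in range(len(series) - window - horizon)])

def knn_predict(state, X_hist, y_hist, k=15, weights=None):
    """Forecast = mean target of the k nearest historical state vectors."""
    w = np.ones(state.size) if weights is None else weights
    d = np.sqrt(((X_hist - state) ** 2 * w).sum(axis=1))   # weighted distance
    # In the paper's model, k and w would be tuned by a Genetic Algorithm.
    return y_hist[np.argsort(d)[:k]].mean()

state = X[-1]
print(f"predicted: {knn_predict(state, X[:-1], y[:-1]):.0f} s, "
      f"actual: {y[-1]:.0f} s")
```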

Relevance:

30.00%

Publisher:

Abstract:

Plants subjected to increases in the supply of the resource(s) limiting growth may allocate more of those resources to existing leaves, increasing photosynthetic capacity, and/or to production of more leaves, increasing whole-plant photosynthesis. To elucidate patterns of leaf- versus shoot-level photosynthetic responses, the responses to N amendments of three populations of the alpine willow Salix glauca, growing along an alpine topographic sequence representing a gradient in soil moisture and organic matter, and thus potential N supply, were measured over two growing seasons. Leaf-level (foliar N, photosynthesis rates, photosynthetic N-use efficiency) and shoot-level (leaf area per shoot, number of leaves per shoot, stem weight, N resorption efficiency) measurements were made to examine the spatial and temporal variation in these potential responses to increased N availability. The predominant response of the willows to N fertilization was at the shoot level, through production of greater leaf area per shoot. Greater leaf area arose from the production of larger leaves in both years of the experiment and from the production of more leaves during the second year of fertilization treatment. A significant leaf-level photosynthetic response occurred only during the first year of treatment, and only in the dry meadow population. Variation in photosynthesis rates was related more to variation in stomatal conductance than to foliar N concentration; stomatal conductance in turn was significantly related to N fertilization. Differences among the populations in photosynthesis, foliar N, leaf production, and responses to N fertilization indicate that N availability may be lowest in the dry meadow population and highest in the ridge population. This result is contrary to the hypothesis that a gradient of plant-available N corresponds with the snowpack/topographic gradient.

Relevance:

30.00%

Publisher:

Abstract:

Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees, commissions and others are all relatively easily identified, since they represent actual costs incurred at a given point in time. By contrast, holding costs are not always immediately evident, since they characteristically lack visibility. One reason for this is that, for the most part, they accrue over time in an ever-changing environment. In addition, wide variations exist in development pipelines, which typically span anywhere from two to over sixteen years, even within the same geographical region. Determining the starting and end points for holding cost computation can also prove problematic. Furthermore, the choice between applying prevailing inflation rates, interest rates, or a combination of both over time adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs varies widely, and in some instances they are ignored completely. In addition, ambiguities exist over which elements of holding costs to include and how to assess their relative contribution. Yet their impact on housing affordability is widely acknowledged to be profound, and their quantification could maximise the opportunities for delivering affordable housing. This paper builds on earlier investigations into the elements of holding costs, providing theoretical modelling of the size of their impact, specifically on the end user. At this point the research relies on quantitative data sets; additional qualitative analysis (not included here) will be needed to account for certain variations between developers' expectations and the actual outcomes achieved. Although this research stops short of a regional or international comparison study, it results in an improved understanding of the relationship between holding costs, regulatory charges, and housing affordability.
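A simple worked example shows why the rate chosen and the pipeline duration dominate holding cost estimates: compounding a carried cost base at an annual rate r over t years gives a holding cost of base × ((1 + r)^t − 1). All figures in the sketch below are hypothetical.

```python
# Sketch: sensitivity of holding costs to the rate applied and the pipeline
# duration. The carried cost base is compounded at an annual rate r over the
# holding period; the holding cost is the excess over the base. All figures
# are hypothetical and purely illustrative.
def holding_cost(cost_base, annual_rate, years):
    """Compounded holding cost on capital tied up over the pipeline."""
    return cost_base * ((1 + annual_rate) ** years - 1)

base = 200_000  # hypothetical per-lot land and infrastructure outlay, $
for years in (2, 8, 16):                 # development pipeline durations
    for label, rate in (("inflation", 0.03), ("interest", 0.07)):
        hc = holding_cost(base, rate, years)
        print(f"{years:>2} yr @ {label} {rate:.0%}: "
              f"holding cost ${hc:,.0f} ({hc / base:.0%} of base)")
```

Even under these toy numbers, the holding cost ranges from a few percent of the base over a two-year pipeline to more than the base itself over sixteen years at an interest-type rate, which is why pipeline duration and the inflation-versus-interest choice matter so much to affordability analysis.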

Relevance:

30.00%

Publisher:

Abstract:

The reconstruction of extended maxillary and mandibular defects with prefabricated free flaps is a two-stage procedure that allows immediate function with implant-supported dentures. The appropriate delay between prefabrication and reconstruction depends on the interfacial strength of the bone–implant surface. The purpose of this animal study was to evaluate the removal torque of unloaded titanium implants in the fibula, the scapula and the iliac crest. Ninety implants with a sandblasted and acid-etched (SLA) surface were tested after healing periods of 3, 6 and 12 weeks. Removal torque values (RTVs) were collected using a computerized counterclockwise torque driver. The bicortically anchored 8 mm implants in the fibula yielded values of 63.73 Ncm, 91.50 Ncm and 101.83 Ncm at 3, 6 and 12 weeks, respectively. The monocortical anchorage in the iliac crest showed values of 71.40 Ncm, 63.14 Ncm and 61.59 Ncm with 12 mm implants at the corresponding times. The monocortical anchorage in the scapula demonstrated mean RTVs of 62.28 Ncm, 97.63 Ncm and 99.7 Ncm with 12 mm implants at 3, 6 and 12 weeks, respectively. The study showed an increase in removal torque with increasing healing time. The interfacial strength of the bicortically anchored 8 mm implants in the fibula was comparable to that of the monocortically anchored 12 mm implants in the iliac crest and the scapula at the corresponding times. The resistance to shear seemed to be determined by the type of anchorage (monocortical vs. bicortical) and the length of the implant, with longer implants providing a greater amount of bone–implant interface.