Abstract:
Any incident on a motorway can potentially be followed by secondary crashes. Rear-end crashes can also occur as a result of queue formation downstream of high-speed platoons. To reduce the occurrence of secondary and rear-end crashes, Variable Speed Limits (VSL) can be applied to protect the queue that forms downstream of an incident. This paper focuses on fine-tuning the Queue Protection algorithm of VSL. Three performance indicators (activation time, deactivation time and number of false alarms) are selected to optimise the Queue Protection algorithm. A calibrated microscopic traffic simulation model of the Pacific Motorway in Brisbane is used for the optimisation. The performance of VSL during an incident and under heavy congestion, and the benefits of VSL, are presented in the paper.
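The paper does not detail the Queue Protection algorithm itself; as a minimal sketch of the kind of trigger being tuned, the following hypothetical rule activates a reduced speed limit when detector speeds stay below an activation threshold for several intervals and deactivates once speeds recover. All thresholds, the persistence window and the speed trace are illustrative assumptions, not values from the paper.

```python
def queue_protection(speeds, act_thresh=60.0, deact_thresh=80.0, persist=2):
    """Return (state per interval, activation indices, deactivation indices).

    Hypothetical queue-protection trigger: sustained slow speeds activate the
    reduced limit; sustained recovery deactivates it. The persistence window
    trades off activation delay against false alarms.
    """
    active = False
    below = above = 0
    states, activations, deactivations = [], [], []
    for i, v in enumerate(speeds):
        below = below + 1 if v < act_thresh else 0
        above = above + 1 if v > deact_thresh else 0
        if not active and below >= persist:      # sustained slow speeds: activate
            active = True
            activations.append(i)
        elif active and above >= persist:        # sustained recovery: deactivate
            active = False
            deactivations.append(i)
        states.append(active)
    return states, activations, deactivations

# 1-minute detector speeds (km/h): free flow, queue forms, then clears
speeds = [95, 92, 55, 50, 48, 52, 85, 90, 93, 96]
states, acts, deacts = queue_protection(speeds)
```

The three performance indicators from the abstract map directly onto this sketch: activation time is the lag between queue onset and the first activation index, deactivation time is the lag after recovery, and a false alarm is an activation with no queue present.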
Dynamics of attacker–defender dyads in Association Football: parameters influencing decision-making
Abstract:
Previous work on pattern-forming dynamics of team sports has investigated sub-phases of basketball and rugby union by focussing on one-versus-one (1v1) attacker-defender dyads. This body of work has identified the role of candidate control parameters, interpersonal distance and relative velocity, in predicting the outcomes of team player interactions. These two control parameters have been described as functioning in a nested relationship where relative velocity between players comes to the fore within a critical range of interpersonal distance. The critical influence of constraints on the intentionality of player behaviour has also been identified through the study of 1v1 attacker-defender dyads. This thesis draws from previous work adopting an ecological dynamics approach, which encompasses both Dynamical Systems Theory and Ecological Psychology concepts, to describe attacker-defender interactions in 1v1 dyads in association football. Twelve male youth association football players (average age 15.3 ± 0.5 yrs) performed as both attackers and defenders in 1v1 dyads in three field positions in an experimental manipulation of the proximity to goal and the role of players. Player and ball motion was tracked using TACTO 8.0 software (Fernandes & Caixinha, 2003) to produce two-dimensional (2D) trajectories of players and the ball on the ground. Significant differences were found for player-to-ball interactions depending on proximity to goal manipulations, indicating how key reference points in the environment such as the location of the goal may act as a constraint that shapes decision-making behaviour. Results also revealed that interpersonal distance and relative velocity alone were insufficient for accurately predicting the outcome of a dyad in association football. 
Instead, combined values of interpersonal distance, ball-to-defender distance, attacker-to-ball distance, attacker-to-ball relative velocity and relative angles were found to indicate the state of dyad outcomes.
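The two candidate control parameters can be derived directly from tracked 2D positions such as those produced by the tracking software. The trajectories and sampling interval below are hypothetical, purely to show the computation of interpersonal distance and its rate of change (relative velocity).

```python
import math

def interpersonal_distance(p1, p2):
    """Euclidean distance between two 2D positions (metres)."""
    return math.dist(p1, p2)

def distance_series(attacker, defender):
    return [interpersonal_distance(a, d) for a, d in zip(attacker, defender)]

def relative_velocity(dists, dt):
    # central-difference rate of change of interpersonal distance (m/s);
    # negative values mean the players are closing on each other
    return [(dists[i + 1] - dists[i - 1]) / (2 * dt)
            for i in range(1, len(dists) - 1)]

# Hypothetical trajectories sampled at dt = 0.04 s: attacker advances,
# defender steps forward to close the dyad down
attacker = [(0.0, 0.0), (0.2, 0.0), (0.4, 0.0), (0.6, 0.0)]
defender = [(3.0, 0.0), (2.9, 0.0), (2.8, 0.0), (2.7, 0.0)]
dists = distance_series(attacker, defender)     # 3.0, 2.7, 2.4, 2.1
rel_vel = relative_velocity(dists, dt=0.04)     # closing at -7.5 m/s
```

Ball-to-defender and attacker-to-ball distances, which the thesis found necessary for predicting dyad outcomes, follow from the same distance computation applied to the ball trajectory.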
Abstract:
Zeolite-based technology can provide a cost-effective solution for stormwater treatment for the removal of toxic heavy metals, given the increasing demand for safe water from alternative sources. This paper reviews the currently available knowledge relating to the effect of zeolite properties, such as pore size, surface area and Si:Al ratio, and of the physico-chemical conditions of the system, such as pH, temperature, initial metal concentration and zeolite concentration, on heavy metal removal performance. The primary aims are to consolidate available knowledge and to identify knowledge gaps. It was established that an in-depth understanding of operational issues, such as diffusion of metal ions into the zeolite pore structure, pore clogging, zeolite surface coverage by particulates in stormwater, and the effect of pH on stormwater quality in the presence of zeolites, is essential for developing a zeolite-based technology for the treatment of polluted stormwater. The optimum zeolite concentration for treating typical volumes of stormwater and initial heavy metal concentrations should also be considered an operational issue in this regard. Additionally, leaching of aluminium and sodium ions from the zeolite structure into solution was identified as a key issue requiring further research in the effort to develop cost-effective solutions for the removal of heavy metals from stormwater.
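The trade-off between zeolite dose and removal performance mentioned above can be illustrated with a standard Langmuir isotherm, q = q_max·K·C/(1 + K·C), combined with a batch mass balance C0 = C + m·q(C). The parameter values below are hypothetical, not taken from the review; the sketch only shows how an optimum dose question can be framed quantitatively.

```python
import math

def equilibrium_removal(c0, dose, q_max, k):
    """Removal fraction at equilibrium for a batch zeolite treatment.

    Substituting the Langmuir isotherm into the mass balance gives the
    quadratic K*C^2 + (1 + dose*q_max*K - c0*K)*C - c0 = 0 in the
    equilibrium concentration C; the positive root is physical.
    """
    a = k
    b = 1 + dose * q_max * k - c0 * k
    c = -c0
    c_eq = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return 1 - c_eq / c0

# Hypothetical: metal at 5 mg/L treated with increasing zeolite doses (g/L),
# assuming q_max = 10 mg/g and K = 0.5 L/mg
removals = [equilibrium_removal(5.0, m, q_max=10.0, k=0.5) for m in (0.5, 1.0, 2.0)]
```

Diminishing returns with increasing dose, visible in such a calculation, are one reason the review flags the optimum zeolite concentration as an operational issue.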
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system, as a compensator or as a power source with high-quality performance, is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components in addition to the desired current.
Noise and harmonic distortion can also impair the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, the targeted scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents from the estimates obtained. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the renowned, advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm estimates signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is passed to the Kalman filter to build the transition matrices.
The initial settings for the modified Kalman filter are obtained through trial and error. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output signal of the same FIR filter explained above. In this case, however, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated: it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, while the other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to that of a basic Kalman filter, except that the initial settings are computed through an extensive mathematical derivation based on the matrix arrangement utilised. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial and error. The simulation results for the estimated signal parameters are enhanced by the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the signal parameters at these critical transients are not tracked well by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in an interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion of the voltage at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
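The thesis's modified algorithms are not reproduced here, but the underlying idea of Kalman-filter-based signal parameter estimation can be sketched in a minimal form: with the frequency assumed known, the in-phase and quadrature components (a, b) of s[n] = a·cos(ωn) + b·sin(ωn) + noise form a constant state vector, and the amplitude and phase angle follow from their estimates. The process/measurement noise settings below are illustrative assumptions.

```python
import math, random

def estimate_iq(samples, w, q=1e-6, r=0.01):
    """Estimate in-phase/quadrature components of a sinusoid of known frequency."""
    a, b = 0.0, 0.0                           # state estimate (a, b)
    p = [[1.0, 0.0], [0.0, 1.0]]              # state covariance
    for n, z in enumerate(samples):
        p[0][0] += q; p[1][1] += q            # predict: state constant, add process noise
        c, s = math.cos(w * n), math.sin(w * n)
        y = z - (c * a + s * b)               # innovation for measurement H = [c, s]
        sv = c*c*p[0][0] + 2*c*s*p[0][1] + s*s*p[1][1] + r
        k0 = (p[0][0]*c + p[0][1]*s) / sv     # Kalman gain K = P H^T / S
        k1 = (p[1][0]*c + p[1][1]*s) / sv
        a += k0 * y; b += k1 * y
        p = [[p[0][0] - k0*(c*p[0][0] + s*p[1][0]), p[0][1] - k0*(c*p[0][1] + s*p[1][1])],
             [p[1][0] - k1*(c*p[0][0] + s*p[1][0]), p[1][1] - k1*(c*p[0][1] + s*p[1][1])]]
    return a, b

random.seed(1)
w = 2 * math.pi * 50 / 1000                   # 50 Hz signal sampled at 1 kHz
true_a, true_b = 1.0, 0.5
samples = [true_a*math.cos(w*n) + true_b*math.sin(w*n) + random.gauss(0, 0.1)
           for n in range(500)]
a, b = estimate_iq(samples, w)
amplitude = math.hypot(a, b)                  # should approach sqrt(1.25)
phase = math.atan2(-b, a)
```

In the thesis the frequency itself must also be estimated (by a separate or interacting unit) and fed into the transition matrices; this sketch sidesteps that coupling by fixing ω.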
Abstract:
Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood and often ignore any uncertainty around estimated values. This can be problematic, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an Approximate Bayesian Computation (ABC) method for the estimation of the parameters of a stochastic cellular automaton (CA). As an example, we use a CA constructed to simulate a range expansion such as might occur after a biological invasion, making parameter estimates using only count data such as could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, producing accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long-distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but has until now proved difficult to estimate accurately.
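The paper's model is a spatial stochastic CA; as a self-contained stand-in, the rejection-ABC idea can be sketched with a simple stochastic growth model and count data. All numbers (prior, tolerance, growth rate) are illustrative assumptions: simulate under a parameter drawn from the prior, and keep the draw if the simulated counts are close enough to the observed counts, sidestepping any likelihood evaluation.

```python
import random, statistics

def simulate(r, n0=10, steps=8, rng=random):
    """Stochastic growth: each individual reproduces with probability r per step."""
    counts, n = [], n0
    for _ in range(steps):
        n += sum(1 for _ in range(n) if rng.random() < r)
        counts.append(n)
    return counts

def abc_posterior(observed, n_sims=3000, tol=0.15):
    """Rejection ABC: keep parameter draws whose simulated counts match observed."""
    accepted = []
    for _ in range(n_sims):
        r = random.uniform(0.0, 1.0)          # flat prior on the rate of increase
        sim = simulate(r)
        # distance: mean relative error of the count trajectory
        d = sum(abs(s - o) / o for s, o in zip(sim, observed)) / len(observed)
        if d < tol:
            accepted.append(r)
    return accepted

random.seed(7)
observed = simulate(0.3)                      # stand-in for field counts (true r = 0.3)
posterior = abc_posterior(observed)
r_hat = statistics.mean(posterior)            # posterior mean near the true rate
```

The accepted draws approximate the posterior, so the spread of `posterior` carries the parameter uncertainty that point-estimation methods discard.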
Abstract:
Twin studies offer the opportunity to determine the relative contribution of genes versus environment in traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygotic (MZ) twins with identical genetic make-up. We investigate whether using twins as compared to a control population reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influences. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
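Analysis (iii), intra-pair correlation, amounts to a Pearson correlation of a structural measure across pairs, computed separately for twin pairs and matched control pairs. The values below are synthetic, purely to show the computation and the expected contrast.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical grey-matter volumes (mL), one entry per pair member
twin_1 = [745, 712, 688, 760, 731, 699]
twin_2 = [748, 709, 691, 755, 734, 702]      # near-identical within pairs
ctrl_1 = [745, 712, 688, 760, 731, 699]
ctrl_2 = [721, 739, 705, 718, 742, 730]      # unrelated within pairs

r_twin = pearson(twin_1, twin_2)             # high intra-pair similarity
r_ctrl = pearson(ctrl_1, ctrl_2)             # little intra-pair similarity
```

A markedly higher intra-pair correlation in twins than in controls is exactly the reduced-variability signature the study reports.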
Abstract:
Colour is one of the most important parameters of sugar quality, and its presence in raw sugar plays a key role in the marketing strategy of sugar industries worldwide. This study investigated the degradation of a mixture of colour precursors using the Fenton oxidation process. These colour precursors are caffeic acid, p-coumaric acid and ferulic acid, which are present in cane juice. Results showed that, with a Fe(II) to H2O2 molar ratio of 1:15 in an aqueous system at 25 °C, 77% of the total phenolic acid content was removed at pH 4.72. However, in a synthetic juice solution containing 13% sucrose by mass (35 °C, pH 5.4), only 60% of the total phenolic acid content was removed.
Abstract:
Changing sodium intake from 70 to 200 mmol/day elevates blood pressure in normotensive volunteers by 6/4 mmHg. Older people, people with reduced renal function on a low-sodium diet, and people with a family history of hypertension are more likely to show this effect. The rise in blood pressure was associated with a fall in plasma volume, suggesting that plasma volume changes do not initiate hypertension. In normotensive individuals, the most common abnormality in membrane sodium transport induced by an extra sodium load was an increased permeability of the red cell to sodium. Some normotensive individuals also had an increase in the level of a plasma inhibitor of Na-K ATPase, and these individuals also appeared to have a rise in blood pressure. Sodium intake and blood pressure are therefore related. The relationship differs between people and is probably controlled by the genetically inherited capacity of the systems involved in membrane sodium transport.
Abstract:
A total histological grade does not necessarily distinguish between different manifestations of cartilage damage or degeneration. An accurate and reliable histological assessment method is required to separate normal and pathological tissue within a joint during treatment of degenerative joint conditions, and to sub-classify the latter in meaningful ways. The Modified Mankin method may be adaptable for this purpose. We investigated how much detail may be lost by assigning one composite score/grade to represent different degenerative components of the osteoarthritic condition. We used four ovine injury models (sham surgery, anterior cruciate ligament/medial collateral ligament instability, simulated anatomic anterior cruciate ligament reconstruction and meniscal removal) to induce different degrees and potentially different 'types' (mechanisms) of osteoarthritis. Articular cartilage was systematically harvested, prepared for histological examination and graded in a blinded fashion using a Modified Mankin grading method. Results showed that the possible permutations of cartilage damage were far more varied than current histological grading systems allow for. Of 1352 cartilage specimens graded, 234 different manifestations of potential histological damage were observed across 23 potential individual grades of the Modified Mankin grading method. The results presented here show that the data underlying a composite histological grade may contain additional information that could potentially discern different stages or mechanisms of cartilage damage and degeneration in a sheep model. This approach may be applicable to other grading systems.
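The information loss the study quantifies can be illustrated combinatorially with a hypothetical three-subscale scheme (the subscale names and ranges below are invented for illustration, loosely modelled on Mankin-style subscales, not the study's actual instrument): many distinct damage profiles collapse onto the same composite total.

```python
from itertools import product
from collections import defaultdict

# Hypothetical subscales: structure 0-6, cellularity 0-3, matrix staining 0-4
ranges = [range(7), range(4), range(5)]

# Group every possible damage profile by its composite total
by_total = defaultdict(list)
for profile in product(*ranges):
    by_total[sum(profile)].append(profile)

n_profiles = sum(len(v) for v in by_total.values())   # 7*4*5 = 140 distinct profiles
n_totals = len(by_total)                              # only 14 possible totals (0..13)
most_ambiguous = max(by_total.values(), key=len)      # many profiles share one total
```

Reporting only the total collapses 140 qualitatively different profiles onto 14 numbers, which is the same compression the abstract observes in its 234 manifestations versus 23 individual grades.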
Abstract:
We present a formalism for the analysis of sensitivity of nuclear magnetic resonance pulse sequences to variations of pulse sequence parameters, such as radiofrequency pulses, gradient pulses or evolution delays. The formalism enables the calculation of compact, analytic expressions for the derivatives of the density matrix and the observed signal with respect to the parameters varied. The analysis is based on two constructs computed in the course of modified density-matrix simulations: the error interrogation operators and error commutators. The approach presented is consequently named the Error Commutator Formalism (ECF). It is used to evaluate the sensitivity of the density matrix to parameter variation based on the simulations carried out for the ideal parameters, obviating the need for finite-difference calculations of signal errors. The ECF analysis therefore carries a computational cost comparable to a single density-matrix or product-operator simulation. Its application is illustrated using a number of examples from basic NMR spectroscopy. We show that the strength of the ECF is its ability to provide analytic insights into the propagation of errors through pulse sequences and the behaviour of signal errors under phase cycling. Furthermore, the approach is algorithmic and easily amenable to implementation in the form of a programming code. It is envisaged that it could be incorporated into standard NMR product-operator simulation packages.
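The ECF itself is not reproduced here, but its core idea, replacing finite-difference error calculations with analytic commutator expressions, can be illustrated on a single spin-1/2. For a pulse U(θ) = exp(-iθσx/2) acting on ρ0, the derivative of ρ(θ) = Uρ0U† with respect to the flip angle is the commutator dρ/dθ = -(i/2)[σx, ρ], computable from the single ideal-parameter simulation; a finite-difference check confirms it. Matrix sizes and operators here are the standard Pauli conventions, not anything specific to the paper.

```python
import math

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    """Conjugate transpose of a 2x2 matrix."""
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]                         # Pauli x
rho0 = [[1, 0], [0, 0]]                       # initial state |0><0|

def rho(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    u = [[c, -1j * s], [-1j * s, c]]          # U = exp(-i*theta*sx/2)
    return matmul(matmul(u, rho0), dagger(u))

theta = 0.7
r = rho(theta)

# analytic derivative from the commutator: -(i/2) * (sx*rho - rho*sx)
lhs, rhs = matmul(sx, r), matmul(r, sx)
analytic = [[-0.5j * (lhs[i][j] - rhs[i][j]) for j in range(2)] for i in range(2)]

# finite-difference reference (what the ECF renders unnecessary)
h = 1e-6
rp, rm = rho(theta + h), rho(theta - h)
numeric = [[(rp[i][j] - rm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
err = max(abs(analytic[i][j] - numeric[i][j]) for i in range(2) for j in range(2))
```

The analytic form costs one extra commutator per parameter on top of the ideal simulation, which is the computational advantage the abstract claims for the ECF over finite-difference error analysis.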
Abstract:
This article presents a case study of corporate dialogue with vulnerable others. Dialogue with marginalized external groups is increasingly presented in the business literature as the key to making corporate social responsibility possible in particular through corporate learning. Corporate public communications at the same time promote community engagement as a core aspect of corporate social responsibility. This article examines the possibilities for and conditions underpinning corporate dialogue with marginalized stakeholders as occurred around the unexpected and sudden closure in January 2009 of the AU$2.2 billion BHP Billiton Ravensthorpe Nickel mine in rural Western Australia. In doing so we draw on John Roberts’ notion of dialogue with vulnerable others, and apply a discourse analysis approach to data spanning corporate public communications and interviews with residents affected by the decision to close the mine. In presenting this case study we contribute to the as yet limited organizational research concerned directly with marginalized stakeholders and argue that corporate social responsibility discourse and vulnerable other dialogue not only affirms the primacy of business interests but also co-opts vulnerable others in the pursuit of these interests. In conclusion we consider case study implications for critical understandings of corporate dialogue with vulnerable others.
Abstract:
Traffic safety studies demand more than current micro-simulation models can provide, as these models presume that all drivers of motor vehicles exhibit safe behaviours. Several car-following models are used in various micro-simulation packages. This research compares the capabilities of mainstream car-following models in emulating precise driver behaviour parameters such as headways and Time to Collision (TTC). The comparison first illustrates which model is more robust in reproducing these metrics. Second, the study conducted a series of sensitivity tests to further explore the behaviour of each model. Based on the outcomes of this two-step exploration, a modified structure and parameter adjustment are proposed for each car-following model to simulate more realistic vehicle movements, particularly headways and TTC below a certain critical threshold. NGSIM vehicle trajectory data are used to evaluate the modified models' performance in assessing critical safety events within traffic flow. The simulation test outcomes indicate that the proposed modified models reproduce the frequency of critical TTC events better than the generic models, while the improvement in headway is not significant. The outcome of this paper facilitates traffic safety assessment using microscopic simulation.
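The two safety surrogates used in the comparison are straightforward to compute from leader-follower states. The records and the 2-second critical threshold below are hypothetical, not NGSIM data, and only show the definitions: time headway is the gap divided by the follower's speed, and TTC is the gap divided by the closing speed (undefined when the follower is not closing in).

```python
def time_headway(gap, v_follower):
    """Time headway (s): gap divided by follower speed."""
    return gap / v_follower if v_follower > 0 else float("inf")

def ttc(gap, v_follower, v_leader):
    """Time to Collision (s); infinite when the vehicles are not closing."""
    closing = v_follower - v_leader
    return gap / closing if closing > 0 else float("inf")

# per-interval records: (gap m, follower speed m/s, leader speed m/s)
records = [(30.0, 20.0, 20.0), (15.0, 20.0, 18.0), (6.0, 20.0, 16.0)]

ttc_values = [ttc(g, vf, vl) for g, vf, vl in records]     # inf, 7.5, 1.5
headways = [time_headway(g, vf) for g, vf, _ in records]   # 1.5, 0.75, 0.3
critical = sum(1 for t in ttc_values if t < 2.0)           # events below 2 s threshold
```

Counting intervals with TTC below a critical threshold, as in the last line, is the frequency measure on which the modified models are reported to outperform the generic ones.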
A multivariate approach to the identification of surrogate parameters for heavy metals in stormwater
Abstract:
Stormwater is a readily available potential alternative source of potable water in urban areas. However, its direct use is severely constrained by the presence of toxic pollutants such as heavy metals (HMs). The presence of HMs in stormwater is of concern because of their chronic toxicity and persistent nature. In addition to human health impacts, metals can contribute to adverse ecosystem health impacts on receiving waters. Therefore, the ability to predict the levels of HMs in stormwater is crucial for monitoring stormwater quality and for the design of effective treatment systems. Unfortunately, current laboratory methods for determining HM concentrations are resource intensive and time consuming. In this paper, applications of multivariate data analysis techniques are presented to identify potential surrogate parameters which can be used to determine HM concentrations in stormwater. Accordingly, partial least squares (PLS) analysis was applied to identify a suite of physicochemical parameters which can serve as indicators of HMs. Datasets having varied characteristics, such as land use and particle size distribution of solids, were analyzed to validate the efficacy of the influencing parameters. Iron, manganese, total organic carbon and inorganic carbon were identified as the predominant parameters that correlate with HM concentrations. The practical extension of the study outcomes to urban stormwater management is also discussed.
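The paper applies partial least squares; as a simpler, self-contained stand-in for surrogate-parameter screening, this sketch ranks candidate water-quality parameters by the strength of their Pearson correlation with a heavy-metal concentration. The data are synthetic: "Fe" and "TOC" are constructed to co-vary with "Zn", while "pH" and "EC" are pure noise, so none of these numbers come from the paper.

```python
import math, random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(3)
n = 60
fe  = [random.uniform(0.5, 4.0) for _ in range(n)]    # iron, mg/L
toc = [random.uniform(2.0, 12.0) for _ in range(n)]   # total organic carbon, mg/L
ph  = [random.uniform(6.0, 8.0) for _ in range(n)]
ec  = [random.uniform(50, 300) for _ in range(n)]     # electrical conductivity, uS/cm
# synthetic target: Zn driven mostly by Fe, partly by TOC, plus noise
zn  = [0.05 * f + 0.01 * t + random.gauss(0, 0.02) for f, t in zip(fe, toc)]

candidates = {"Fe": fe, "TOC": toc, "pH": ph, "EC": ec}
ranking = sorted(candidates, key=lambda k: abs(pearson(candidates[k], zn)),
                 reverse=True)                        # strongest surrogates first
```

PLS goes further than this univariate screen by handling correlated predictors jointly, which is why it is the appropriate tool for the real datasets described in the paper.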