Abstract:
In an effort to evaluate and improve their practices to ensure the future excellence of the Texas highway system, the Texas Department of Transportation (TxDOT) sought a forum in which experts from other state departments of transportation could share their expertise. Thus, the Peer State Review of TxDOT Maintenance Practices project was organized and conducted for TxDOT by the Center for Transportation Research (CTR) at The University of Texas at Austin. The goal of the project was to conduct a workshop at CTR and in the Austin District that would educate the visiting peers on TxDOT’s maintenance practices and invite their feedback. CTR and TxDOT arranged the participation of the following directors of maintenance: Steve Takigawa, CA; Roy Rissky, KS; Eric Pitts, GA; Jim Carney, MO; Jennifer Brandenburg, NC; and David Bierschbach, WA. One of the means used to capture the peer reviewers’ opinions was a carefully designed booklet of 15 questions. The peers provided TxDOT with written responses to these questions, and the oral comments made during the workshop were also captured. This information was then compiled and summarized in the following report. An examination of the peers’ comments suggests that TxDOT should use a more holistic, statewide approach to funding and planning rather than funding and planning for each district separately. Additionally, the peers stressed the importance of allocating funds based on the actual conditions of the roadways instead of on inventory. The visiting directors of maintenance also recommended continuing and expanding programs that enhance communication, such as peer review workshops.
Abstract:
Travel time is an important network performance measure, and it quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow due to traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilises cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations. The complexity of exit-movement-specific travel time is also discussed. Recently, we developed a methodology based on the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probe vehicles (Probe-only). The performance of the proposed methodology has been found to be insensitive to different route flows, with an average accuracy of more than 94% given one probe per estimation interval, which is more than a 5% improvement in accuracy over the Probe-only method.
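The classical cumulative-plot idea underlying this line of work can be sketched in a few lines (a toy illustration assuming conservation of flow and first-in-first-out, not the authors' fused estimator; the data and the `travel_times` helper are invented for the example): with cumulative counts U(t) at the upstream detector and D(t) at the downstream detector, the travel time of the n-th vehicle is the horizontal distance between the two curves at count level n.

```python
import numpy as np

def travel_times(t, U, D):
    """Estimate per-vehicle travel times from cumulative counts.

    t: sample times; U, D: cumulative vehicle counts at the upstream
    and downstream detectors. Assumes conservation of flow and FIFO.
    For each count level n, travel time is the horizontal gap between
    the cumulative arrival curve U and the departure curve D.
    """
    n_levels = np.arange(1, int(min(U[-1], D[-1])) + 1)
    t_in = np.interp(n_levels, U, t)   # time the n-th vehicle passes upstream
    t_out = np.interp(n_levels, D, t)  # time it passes downstream
    return t_out - t_in

# Toy data: uniform flow (1 vehicle per 2 s) and a constant 10 s travel time
t = np.arange(0.0, 100.0, 1.0)
U = 0.5 * t
D = np.clip(0.5 * (t - 10.0), 0.0, None)  # same curve shifted by 10 s
tt = travel_times(t, U, D)
print(tt.mean())  # 10.0
```

This recovers the expected constant travel time; the vulnerability discussed in the paper appears as soon as vehicles enter or leave between the detectors, because U and D then no longer count the same population of vehicles.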
Abstract:
One of the impediments to large-scale use of wind generation within power systems is its variable and uncertain real-time availability. Due to the low marginal cost of wind power, its output changes the merit order of power markets and influences the Locational Marginal Price (LMP). With large-scale wind power, LMP calculation cannot ignore the essentially variable and uncertain nature of wind power. This paper proposes an algorithm to estimate the LMP. The estimation result of a conventional Monte Carlo simulation is taken as a benchmark to examine accuracy. A case study is conducted on a simplified SE Australian power system, and the simulation results show the feasibility of the proposed method.
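The Monte Carlo benchmark idea can be illustrated on a toy scale (this is not the paper's algorithm or its SE Australian system; the two-generator stack, costs, and uniform wind distribution are all invented for the sketch): sample wind output, clear a merit-order dispatch, and average the resulting marginal price.

```python
import random

def clearing_price(demand_mw, wind_mw):
    """Toy merit-order dispatch: zero-marginal-cost wind is used first,
    then two conventional blocks. Returns the marginal price ($/MWh).
    The generator stack (capacity MW, marginal cost) is illustrative only."""
    stack = [(wind_mw, 0.0), (300.0, 40.0), (300.0, 90.0)]
    remaining = demand_mw
    for cap, cost in stack:
        remaining -= cap
        if remaining <= 0:
            return cost
    raise ValueError("demand exceeds total capacity")

def mc_expected_lmp(demand_mw, n=100_000, seed=1):
    """Monte Carlo estimate of the expected clearing price when wind
    output is uniform on [0, 200] MW (an assumed distribution)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += clearing_price(demand_mw, rng.uniform(0.0, 200.0))
    return total / n

print(mc_expected_lmp(450.0))  # ~77.5: wind sometimes displaces the $90 block
```

The toy already shows the paper's point: because wind enters at the bottom of the merit order, its uncertainty translates directly into a distribution over which unit is marginal, and hence over the LMP.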
Abstract:
Context: Parliamentary committees established in Westminster parliaments, such as Queensland’s, provide a cross-party structure that enables them to recommend policy and legislative changes that may otherwise be difficult for one party to recommend. The overall parliamentary committee process tends to be more cooperative and less adversarial than the main chamber of parliament and, as a result, this process permits parliamentary committees to base their recommendations more on the available research evidence and less on political or party considerations. Objectives: This paper considers the contributions that parliamentary committees in Queensland have made in the areas of road safety, drug use, and organ and tissue donation. The paper also discusses the importance of researchers actively engaging with parliamentary committees to ensure the best evidence-based policy outcomes. Key messages: In the past, parliamentary committees have successfully facilitated important safety changes, with many committee recommendations based on research results. In order to maximise the benefits of the parliamentary committee process, it is essential that researchers inform committees about their work and become key stakeholders in the inquiry process. Researchers can keep committees informed by making submissions to their inquiries, responding to requests for information and appearing as witnesses at public hearings. Researchers should emphasise the key findings and implications of their research, and consider the jurisdictional implications and political consequences. It is important that researchers understand the difference between lobbying and providing informed recommendations when interacting with committees. Discussion and conclusions: Parliamentary committees in Queensland have successfully assisted in the introduction of evidence-based policy and legislation.
In order to present best practice recommendations, committees rely on the evidence presented to them, including researchers’ results. Actively engaging with parliamentary committees will help researchers to turn their results into practice, with a corresponding decrease in injuries and fatalities. Developing an understanding of parliamentary committees, and of the typical inquiry process these committees use, will help researchers to present their results in a manner that encourages committees to adopt their ideas, present them as recommendations in their reports, and see those recommendations subsequently enacted by the government.
Abstract:
This paper describes modelling, estimation and control of the horizontal translational motion of an open-source and cost-effective quadcopter, the MikroKopter. We determine the dynamics of its roll and pitch attitude controller, its system latencies, and the units associated with the values exchanged with the vehicle over its serial port. Using these results, we create a horizontal-plane velocity estimator that uses data from the built-in inertial sensors and an onboard laser scanner, and implement translational control using a nested control loop architecture. We present experimental results for the model and estimator, as well as closed-loop positioning.
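The nested control loop architecture can be sketched for a single horizontal axis (an illustrative 1-D point-mass simulation, not the authors' controller; all gains, limits, and the drag coefficient are assumed values): an outer position loop commands a velocity setpoint, and an inner velocity loop commands a pitch angle, which tilts thrust to produce horizontal acceleration.

```python
import math

# Illustrative nested loop for one horizontal axis: the outer position
# P-controller produces a velocity setpoint; the inner velocity
# PI-controller produces a pitch command; pitch gives a = g * tan(theta).
G = 9.81
KP_POS, KP_VEL, KI_VEL = 0.8, 0.3, 0.05   # assumed gains
MAX_PITCH = math.radians(15)               # assumed attitude command limit
DT = 0.02                                  # 50 Hz control loop

def simulate(x_target, steps=2000):
    x = v = integ = 0.0
    for _ in range(steps):
        v_sp = KP_POS * (x_target - x)          # outer loop: position -> velocity setpoint
        err = v_sp - v
        integ += err * DT
        theta = KP_VEL * err + KI_VEL * integ   # inner loop: velocity -> pitch command
        theta = max(-MAX_PITCH, min(MAX_PITCH, theta))
        a = G * math.tan(theta) - 0.3 * v       # toy dynamics with linear drag
        v += a * DT
        x += v * DT
    return x

print(simulate(2.0))  # settles near the 2 m target after 40 s
```

The appeal of the nested structure is that each loop can be tuned against a simpler plant: the inner loop sees roughly first-order velocity dynamics, and the outer loop then sees a velocity servo.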
Abstract:
The research objective of this thesis was to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater and, secondly, a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for calculating credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more realistic credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work comprises five papers: two published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave the model flexibility, allowing both the spatially structured and unstructured variances to differ at each depth. We call this the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The fifth paper deals with the fact that, for large datasets, the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to consider. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility, but it does not provide insight into the way the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
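The general machinery of Gibbs sampling and MCMC-based credible intervals can be shown on a minimal scale (a conjugate normal model with invented data, not the thesis's CAR layered model or pyMCMC implementation): draw alternately from each parameter's full conditional, then read credible intervals off the retained samples.

```python
import random
import statistics

def gibbs_normal(data, n_iter=5000, burn=1000, seed=0):
    """Gibbs sampler for a normal model with unknown mean mu and
    variance s2, under the reference prior p(mu, s2) ~ 1/s2. Alternates:
      mu | s2, y  ~ Normal(ybar, s2/n)
      s2 | mu, y  ~ Inverse-Gamma(n/2, sum((y - mu)^2) / 2)
    Returns post-burn-in posterior samples of mu."""
    rng = random.Random(seed)
    n, ybar = len(data), sum(data) / len(data)
    mu, s2 = ybar, statistics.pvariance(data)
    samples = []
    for it in range(n_iter):
        mu = rng.gauss(ybar, (s2 / n) ** 0.5)
        rate = sum((y - mu) ** 2 for y in data) / 2
        # Inverse-gamma draw via a gamma draw: s2 = rate / Gamma(n/2, 1)
        s2 = rate / rng.gammavariate(n / 2, 1.0)
        if it >= burn:
            samples.append(mu)
    return samples

rng = random.Random(42)
data = [rng.gauss(10.0, 2.0) for _ in range(200)]
post = sorted(gibbs_normal(data))
lo, hi = post[int(0.025 * len(post))], post[int(0.975 * len(post))]
print("95% credible interval for mu:", lo, hi)  # brackets the sample mean
```

Block updating, as used in the thesis, replaces the one-parameter-at-a-time draws above with joint draws of whole parameter vectors, which is what breaks the high autocorrelation seen in term-by-term updating.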
Abstract:
This short paper focuses on strategic issues and important research questions.
Abstract:
Volatile properties of particle emissions from four compressed natural gas (CNG) and four diesel buses were investigated under steady-state and transient driving modes on a chassis dynamometer. The exhaust was diluted utilising a full-flow continuous volume sampling system and passed through a thermodenuder at controlled temperature. Particle number concentration and size distribution were measured with a condensation particle counter and a scanning mobility particle sizer, respectively. We show that almost all the particles emitted by the CNG buses were in the nanoparticle size range, and that at least 85% and 98% of them were removed at 100°C and 250°C, respectively. Closer analysis of the volatility of particles emitted during transient cycles showed that volatilisation began at around 40°C, with the majority occurring by 80°C. Particles produced during hard acceleration from rest exhibited lower volatility than those produced during other parts of the cycle. Based on our results and the observation of ash deposits on the walls of the tailpipes, we suggest that these non-volatile particles were composed mostly of ash from lubricating oil. Heating the diesel bus emissions to 100°C reduced ultrafine particle numbers by 69% to 82% when a nucleation mode was present, and by just 18% when it was not.
Abstract:
This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modeling is performed to generate cumulative plots based on the availability of data: a) Case-D, detector data only; b) Case-DS, detector data and signal timings; and c) Case-DSS, detector data, signal timings and saturation flow rate. An empirical study and a sensitivity analysis based on simulation experiments show consistent performance for Case-DS and Case-DSS, whereas the performance of Case-D is inconsistent. Case-D is sensitive to the detection interval and to the signal timings within that interval. When the detection interval is an integral multiple of the signal cycle, accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
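The reported sensitivity of Case-D can be turned into a simple screening rule (an illustrative helper, not from the paper; the closeness tolerance is an assumed value, and the paper only reports the integral-multiple and ~1.5x observations):

```python
def detection_interval_check(interval_s, cycle_s, tol=0.05):
    """Flag detection intervals that the sensitivity analysis suggests
    are problematic for Case-D: intervals at or near an integral
    multiple of the signal cycle. `tol` is an assumed closeness
    threshold, not a value reported in the paper."""
    ratio = interval_s / cycle_s
    nearest = round(ratio)
    if nearest >= 1 and abs(ratio - nearest) < tol:
        return "avoid: ~%dx signal cycle" % nearest
    return "ok: %.2fx signal cycle" % ratio

print(detection_interval_check(120, 120))  # avoid: ~1x signal cycle
print(detection_interval_check(180, 120))  # ok: 1.50x signal cycle
```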
Abstract:
Various time-memory tradeoff attacks on stream ciphers have been proposed over the years. However, the claimed success of these attacks assumes that the initialisation process of the stream cipher is one-to-one. Some stream cipher proposals do not have a one-to-one initialisation process. In this paper, we examine the impact of this on the success of time-memory-data tradeoff attacks. Under these circumstances, some attacks are more successful than previously claimed while others are less so. The conditions for both cases are established.
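Why a non-one-to-one initialisation changes the picture can be seen at toy scale (an illustrative experiment, not any real cipher): modelling initialisation as a random, not necessarily injective, map from loading values to initial states, only about 1 - 1/e of the state space is ever reachable, so an attacker's tables need only cover a smaller effective space.

```python
import random

def reachable_fraction(n_states=1 << 16, seed=7):
    """Model a stream cipher's initialisation as a random (not
    necessarily injective) map from n_states loading values to
    n_states initial states, and measure the fraction of the state
    space that is actually reachable after initialisation."""
    rng = random.Random(seed)
    reachable = {rng.randrange(n_states) for _ in range(n_states)}
    return len(reachable) / n_states

frac = reachable_fraction()
print(frac)  # close to 1 - 1/e = 0.632 for a random function
```

The same collision structure cuts both ways, matching the paper's conclusion: fewer reachable states help state-recovery tables, but colliding loadings can hurt attacks that need to invert the initialisation back to a unique key/IV.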
Abstract:
Background: The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research not only by mapping potential logarithmic-linear shifts but also by providing a new perspective: studying in detail the estimation strategies for individual target digits within a number range familiar to children. Methods: Typically developing children (n = 67) from Years 1–3 completed a number-to-position numerical estimation task (0–20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance, we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Results: Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with the level of strategy advancing with development. Conclusion: In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach; alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
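The logarithmic-versus-linear modelling step can be sketched as follows (synthetic data, not the study's; the `compare_fits` helper and ordinary least squares stand in for the regression modelling): fit both candidate models to each child's estimates and compare fit quality.

```python
import numpy as np

def compare_fits(targets, estimates):
    """Fit a linear model (y = a*x + b) and a logarithmic model
    (y = a*ln(x + 1) + b) to number-line estimates via least squares,
    and return each model's R^2."""
    x = np.asarray(targets, dtype=float)
    y = np.asarray(estimates, dtype=float)
    r2 = {}
    for name, feat in (("linear", x), ("logarithmic", np.log(x + 1))):
        A = np.vstack([feat, np.ones_like(feat)]).T
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2[name] = 1 - resid.var() / y.var()
    return r2

# Synthetic "younger child" data: estimates compress the upper range,
# the pattern that the logarithmic model captures better.
targets = np.arange(1, 21)
estimates = 20 * np.log(targets + 1) / np.log(21)
print(compare_fits(targets, estimates))  # logarithmic R^2 near 1
```

In the developmental-shift account, the logarithmic R² dominates for younger children while the linear R² dominates later; the study's strategy analysis adds per-digit accuracy comparisons on top of this model comparison.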