389 results for Camera Pose Estimation
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system, as a compensator or as a power source with high-quality performance, is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current.
Noise and harmonic distortion can also impair the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, the targeted scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents from the estimates provided. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known, advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared against the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm estimates signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is passed to the Kalman filter to build the transition matrices.
The initial settings for the modified Kalman filter are obtained through trial and error. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. However, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated: it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, while parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through an extensive mathematical analysis of the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed by trial and error. The simulation results for the estimated signal parameters are enhanced by the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked well by Kalman filtering; the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. The ability of this algorithm is also verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents voltage distortion at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
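As a minimal illustration of the Least Error Square idea mentioned above, fitting signal parameters directly to digital samples, the sketch below estimates the amplitude and phase of a sinusoid whose frequency is assumed to be known (e.g. supplied by a separate frequency-estimation unit, as in the thesis). This is an illustrative stand-in, not the thesis's algorithm; the sampling rate and signal values are invented.

```python
import math

def les_amplitude_phase(samples, freq, fs):
    """Fit v[n] = a*cos(w*n*Ts) + b*sin(w*n*Ts) by linear least squares."""
    w, ts = 2 * math.pi * freq, 1.0 / fs
    # Accumulate the 2x2 normal equations for the unknowns a and b.
    scc = scs = sss = sc = ss = 0.0
    for n, v in enumerate(samples):
        c, s = math.cos(w * n * ts), math.sin(w * n * ts)
        scc += c * c
        scs += c * s
        sss += s * s
        sc += v * c
        ss += v * s
    det = scc * sss - scs * scs
    a = (sss * sc - scs * ss) / det
    b = (scc * ss - scs * sc) / det
    # v = A*cos(w*t + phi)  =>  a = A*cos(phi), b = -A*sin(phi)
    return math.hypot(a, b), math.atan2(-b, a)

# Synthetic 50 Hz signal sampled at 1 kHz: amplitude 10, phase 0.5 rad.
fs, f = 1000.0, 50.0
samples = [10 * math.cos(2 * math.pi * f * n / fs + 0.5) for n in range(100)]
amp, ph = les_amplitude_phase(samples, f, fs)
```

On clean samples the fit recovers the parameters exactly; the thesis's point is that such a direct fit tracks sudden parameter changes faster than a Kalman filter, at the cost of noise sensitivity.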
Abstract:
This paper describes modelling, estimation and control of the horizontal translational motion of an open-source and cost-effective quadcopter — the MikroKopter. We determine the dynamics of its roll and pitch attitude controller, the system latencies, and the units associated with the values exchanged with the vehicle over its serial port. Using these, we create a horizontal-plane velocity estimator that uses data from the built-in inertial sensors and an onboard laser scanner, and implement translational control using a nested control loop architecture. We present experimental results for the model and estimator, as well as for closed-loop positioning.
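The nested control loop architecture mentioned above can be sketched as follows: an outer proportional position loop commands a velocity setpoint, and an inner proportional velocity loop commands a pitch angle. This is a hypothetical 1-D toy model, not the MikroKopter controller; the gains (kp_pos, kp_vel) and the small-angle dynamics are illustrative assumptions.

```python
def simulate(x0=0.0, x_ref=1.0, dt=0.01, steps=3000):
    """Drive a 1-D double-integrator vehicle to x_ref with nested P loops."""
    kp_pos, kp_vel = 1.0, 2.0   # illustrative loop gains
    g = 9.81                    # small-angle: horizontal accel ~ g * pitch
    x, v = x0, 0.0
    for _ in range(steps):
        v_ref = kp_pos * (x_ref - x)       # outer loop: position -> velocity setpoint
        pitch = kp_vel * (v_ref - v) / g   # inner loop: velocity -> attitude command
        a = g * pitch                      # simplified translational dynamics
        v += a * dt                        # Euler integration of the vehicle state
        x += v * dt
    return x

final_x = simulate()
```

With these gains the closed loop is a well-damped second-order system, so the position settles at the reference after a few seconds of simulated time.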
Abstract:
In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events such as a fight, riot or emergency. Crowd-related information can also provide important business intelligence, such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, their need for hundreds or thousands of training frames per camera, and the fact that each camera must be trained separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras for testing without further training. A pre-trained system could then be used as a ‘turn-key’ solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment which currently exist.
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and to use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, for large datasets, the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
Abstract:
This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modelling is performed to generate cumulative plots based on the availability of data: a) Case-D, detector data only; b) Case-DS, detector data and signal timings; and c) Case-DSS, detector data, signal timings and saturation flow rate. An empirical study and a sensitivity analysis based on simulation experiments show consistent performance for Case-DS and Case-DSS, whereas the performance of Case-D is inconsistent. Case-D is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
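The principle behind all three cases can be illustrated with a toy computation: the average travel time on a link equals the area between the upstream (arrival) and downstream (departure) cumulative curves divided by the number of vehicles. The detector counts below are invented, not the paper's data.

```python
dt = 10.0  # detection interval, seconds

# Cumulative vehicle counts at the two ends of a link, one value per interval.
cum_arrivals   = [0, 5, 10, 15, 20, 20, 20]  # upstream detector
cum_departures = [0, 0,  5, 10, 15, 20, 20]  # downstream detector

# Vertical gap between the curves = vehicles currently on the link.
gaps = [a - d for a, d in zip(cum_arrivals, cum_departures)]

# Area between the curves via the trapezoidal rule, in vehicle-seconds.
area = sum((g0 + g1) / 2 * dt for g0, g1 in zip(gaps, gaps[1:]))

avg_travel_time = area / cum_arrivals[-1]  # seconds per vehicle
```

Here every vehicle is delayed by exactly one detection interval, so the computation yields an average travel time of 10 s; the paper's contribution is constructing these curves when only partial data (detectors, signal timings, saturation flow) is available.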
Abstract:
Background: The objective of this study was to scrutinize the number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also providing a new perspective by studying in detail the estimation strategies for individual target digits within a number range familiar to children. Methods: Typically developing children (n = 67) from Years 1–3 completed a number-to-position numerical estimation task (0–20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance, we compared the estimation accuracy of each digit, thereby identifying target digits that were estimated with the assistance of an arithmetic strategy. Results: Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion: In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach; alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to investigate this relationship systematically and also consider the implications for educational practice.
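The logarithmic-versus-linear regression comparison described in the Methods can be sketched as follows. The estimate data below are invented for illustration (shaped like a typical compressive, logarithmic pattern); this is not the study's data or code.

```python
import math

targets   = [2, 4, 6, 9, 12, 15, 18]                      # digits on a 0-20 line
estimates = [4.2, 8.3, 10.8, 13.2, 14.9, 16.2, 17.3]      # invented child responses

def fit_ssr(xs, ys):
    """Ordinary least squares y = a + b*x; return the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Linear model uses the target itself; logarithmic model uses log(target).
ssr_linear = fit_ssr(targets, estimates)
ssr_log    = fit_ssr([math.log(t) for t in targets], estimates)
log_fits_better = ssr_log < ssr_linear
```

For compressive response patterns like this one the logarithmic model leaves far smaller residuals; the developmental shift the study maps is precisely the point at which the linear model starts winning this comparison.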
Abstract:
We propose an approach that employs eigen light-fields for face recognition across pose in video. Faces of a subject are collected from video frames and combined based on pose to obtain a set of probe light-fields. These probe data are then projected onto the principal subspace of the eigen light-fields, within which classification takes place. We modify the original light-field projection and find that the modified version is more robust in the proposed system. Evaluation on the VidTIMIT dataset demonstrates that the eigen light-fields method is able to take advantage of the multiple observations contained in video.
Abstract:
Objective: To comprehensively measure the burden of hepatitis B, liver cirrhosis and liver cancer in Shandong province, using disability-adjusted life years (DALYs) to estimate the disease burden attributable to hepatitis B virus (HBV) infection. Methods: Based on the mortality data for hepatitis B, liver cirrhosis and liver cancer derived from the third National Sampling Retrospective Survey for Causes of Death during 2004 and 2005, the incidence data for hepatitis B, and the prevalence and disability weights of liver cancer obtained from the Shandong Cancer Prevalence Sampling Survey in 2007, we calculated the years of life lost (YLLs), years lived with disability (YLDs) and DALYs for the three diseases, following the procedures developed for the global burden of disease (GBD) study to ensure comparability. Results: The total burdens for hepatitis B, liver cirrhosis and liver cancer in Shandong province in 2005 were 211 616 (39 377 YLLs and 172 239 YLDs), 16 783 (13 497 YLLs and 3286 YLDs) and 247 795 (240 236 YLLs and 7559 YLDs) DALYs respectively, and the burdens for men were 2.19, 2.36 and 3.16 times those for women, respectively. The burden of hepatitis B was mainly due to disability (81.39%), whereas most of the burden of liver cirrhosis and liver cancer was due to premature death (80.42% and 96.95% respectively). The per-patient burdens for hepatitis B, liver cirrhosis and liver cancer were 4.8, 13.73 and 11.11 DALYs respectively. Conclusion: Hepatitis B, liver cirrhosis and liver cancer caused considerable burden to the people living in Shandong province, indicating that the control of hepatitis B virus infection would bring huge potential benefits.
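The DALY arithmetic behind these figures can be checked directly, since DALYs are the sum of years of life lost to premature death (YLLs) and years lived with disability (YLDs), and the disability share is YLDs / DALYs:

```python
# (YLLs, YLDs) per disease, as quoted in the abstract.
diseases = {
    "hepatitis B":     (39_377, 172_239),
    "liver cirrhosis": (13_497,   3_286),
    "liver cancer":   (240_236,   7_559),
}

# DALYs = YLLs + YLDs.
dalys = {name: ylls + ylds for name, (ylls, ylds) in diseases.items()}

# Share of the hepatitis B burden due to disability (YLDs / DALYs).
yld_share_hep_b = diseases["hepatitis B"][1] / dalys["hepatitis B"]
```

This reproduces the abstract's totals (211 616, 16 783 and 247 795 DALYs) and its 81.39% disability share for hepatitis B.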
Abstract:
Objective: To determine the major health-related risk factors and provide evidence for policy-making, using health burden analysis of selected factors among the general population of Shandong province. Methods: Based on data derived from the Third Death Cause Sampling Survey in Shandong, years of life lost (YLLs), years lived with disability (YLDs) and disability-adjusted life years (DALYs) were calculated according to the GBD methodology. Deaths and DALYs attributed to the selected risk factors were then estimated together with the PAF data from the GBD 2001 study. The indirect method was employed to estimate the YLDs. Results: 51.09% of the total deaths and 31.83% of the total DALYs in the Shandong population resulted from the 19 selected risk factors. High blood pressure, smoking, low fruit and vegetable intake, alcohol consumption, indoor smoke from solid fuels, high cholesterol, urban air pollution, physical inactivity, overweight and obesity, and unsafe injections in health care settings were identified as the top 10 risk factors for mortality, together causing 50.21% of the total deaths. Alcohol use, smoking, high blood pressure, low fruit and vegetable intake, indoor smoke from solid fuels, overweight and obesity, high cholesterol, physical inactivity, urban air pollution and iron-deficiency anemia were the top 10 risk factors for disease burden, responsible for 29.04% of the total DALYs. Conclusion: Alcohol use, smoking and high blood pressure were the major risk factors influencing the health of residents in Shandong. The mortality and burden of disease could be reduced significantly if these major factors were effectively controlled.
Abstract:
This paper investigates the energy saving potential of commercial buildings fitted with living wall and green façade systems, using the Envelope Thermal Transfer Value (ETTV) equation, in the sub-tropical climate of Australia. The energy saving of four commercial buildings was quantified by applying living wall and green façade systems to the west-facing wall. A field experimental facility, from which temperature data for a living wall system were collected, was used to quantify wall temperatures and heat gain under controlled conditions. The experimental parameters were combined with extensive data from existing commercial buildings to quantify the energy saving. Based on the temperature data for a living wall system comprising Australian native plants, the equivalent temperature of the living wall system was computed. The shading coefficient of the plants in the green façade system was then included in the mathematical equation and in the graphical analysis. To minimize the air-conditioning load of a commercial building, and therefore its heat gain, an analysis of building heat gain reduction by living wall and green façade systems was performed. Overall, the cooling energy performance of commercial buildings before and after application of the living wall and green façade systems was examined. The quantified savings showed that a living wall system alone on the opaque part of the west-facing wall can save 8-13% of cooling energy consumption, whereas a green façade system alone on the opaque part of the west-facing wall can save 9.5-18%. A green façade system on the fenestration of the west-facing wall can save 28-35% of cooling energy consumption, whereas the combination of a living wall on the opaque part and a green façade on the fenestration of the west-facing wall can save 35-40% of the cooling energy consumption of a commercial building in the sub-tropical climate of Australia.
Abstract:
The 31st TTRA conference was held in California’s San Fernando Valley, home of Hollywood and Burbank’s movie and television studios. The twin themes of Hollywood and the new Millennium promised and delivered “something old, yet something new”. The meeting offered a historical summary, not only of the year in review but also of many features of travel research since the first literature in the field appeared in the 1970s. The millennium theme also set the scene for some stimulating and forward-thinking discussions. The Hollywood location offered an opportunity to ponder the value of movie-induced tourism for Los Angeles, at a time when Hollywood Boulevard was in the midst of a much-needed redevelopment programme. Hollywood Chamber of Commerce speaker Oscar Arslanian acknowledged that the face of the famous district had become tired, and that its ability to continue to attract visitors lay in redeveloping its heritage. In line with the Hollywood theme, a feature of the conference was a series of six special sessions with “Stars of Travel Research”. These sessions featured Clare Gunn, Stanley Plog, Charles Goeldner, John Hunt, Brent Ritchie, Geoffrey Crouch, Peter Williams, Douglas Frechtling, Turgut Var, Robert Christie-Mill, and John Crotts. Delegates were indeed privileged to hear from many of the pioneers of tourism research. Clare Gunn, Charles Goeldner, Turgut Var and Stanley Plog, for example, traced the history of different aspects of the tourism literature and, in line with the millennium theme, offered some thought-provoking discussion of the future challenges facing tourism. These included the commoditisation of airlines and destinations, airport and traffic congestion, responsibility for environmental sustainability, and the looming burst of the baby-boomer bubble. Included in the conference proceedings are four papers presented by five of the “Stars”.
Brent Ritchie and Geoffrey Crouch discuss the critical success factors for destinations, Clare Gunn shares his concerns about tourism being a smokestack industry, Doug Frechtling provides forecasts of outbound travel from 20 countries, and Charles Goeldner, who has attended all 31 TTRA conferences, reflects on the changes that have taken place in tourism research over 35 years.
Abstract:
This paper presents an innovative prognostics model based on health state probability estimation, embedded in a closed-loop diagnostic and prognostic system. To select an appropriate classifier for health state probability estimation in the proposed prognostic model, comparative diagnostic tests were conducted using five different classifiers applied to progressive fault levels of three faults in an HP-LNG pump. Two sets of impeller-rubbing data were then employed to predict the pump's remnant life, based on the estimation of discrete health state probabilities using the strong classification capability of the Support Vector Machine (SVM) together with a feature selection technique. The results obtained were very encouraging and showed that the proposed prognostic system has the potential to be used as a tool for machine remnant life prediction in real-life industrial applications.
Abstract:
Precise identification of the time at which a change in a hospital outcome occurred enables clinical experts to search for a potential special cause more effectively. In this paper, we develop change point estimation methods for the survival time of a clinical procedure, in the presence of patient mix, in a Bayesian framework. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the mean survival time of patients who underwent cardiac surgery. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to surgery using a Weibull accelerated failure time regression model. Markov chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, together with corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time CUSUM control charts for different magnitude scenarios. The proposed estimator performs better when a longer follow-up period (censoring time) is applied. In comparison with the alternative built-in CUSUM estimator, the Bayesian estimator yields more accurate and precise estimates. These superiorities are enhanced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
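As a deliberately simplified stand-in for the paper's Bayesian model (no censoring, no patient mix, frequentist rather than posterior inference), step change point estimation can be illustrated by scanning every candidate change point and choosing the split that minimises the squared error around the two segment means. The data below are synthetic.

```python
def sse(xs):
    """Sum of squared deviations of xs around its mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def step_change_point(series):
    """Return the index at which the post-change segment begins."""
    best_tau, best_cost = None, float("inf")
    for tau in range(1, len(series)):
        cost = sse(series[:tau]) + sse(series[tau:])   # two-segment fit
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Synthetic mean survival times: a step change from ~20 to ~14 at index 12.
series = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 19.7, 20.1, 20.0, 19.9,
          20.2, 19.8, 14.2, 13.9, 14.1, 13.8, 14.0, 14.2, 13.9, 14.1]
tau_hat = step_change_point(series)
```

The Bayesian approach in the paper plays the same game but returns a full posterior over the change location and magnitude, which is what yields the probabilistic intervals mentioned above.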
Abstract:
Traffic-generated semi-volatile and non-volatile organic compounds (SVOCs and NVOCs) pose a serious threat to human and ecosystem health when washed off into receiving water bodies by stormwater. Rainfall characteristics influenced by climate change make the estimation of these pollutants in stormwater quite complex. The research study discussed in this paper developed a prediction framework for such pollutants under the dynamic influence of climate change on rainfall characteristics. It was established through principal component analysis (PCA) that the intensity and duration of low to moderate rain events induced by climate change mainly affect the wash-off of SVOCs and NVOCs from urban roads. The study outcomes were able to overcome the limitations of stringent laboratory preparation of calibration matrices by extracting uncorrelated underlying factors in the data matrices through the systematic application of PCA and factor analysis (FA). Based on the initial findings from PCA and FA, the framework incorporated an orthogonal rotatable central composite experimental design to set up the calibration matrices, and partial least squares regression to identify significant variables for predicting the target SVOCs and NVOCs in four particulate fractions ranging from >300-1 μm and one dissolved fraction of <1 μm. For the particulate fractions in the >300-1 μm range, similar distributions of the predicted and observed concentrations of the target compounds from the minimum to the 75th percentile were achieved, with inter-event coefficients of variation of 5% to 25%. The limited solubility of the target compounds in stormwater restricted the predictive capacity of the proposed method for the dissolved fraction of <1 μm.
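The role PCA plays above, reducing correlated measurements to a few uncorrelated underlying factors, can be illustrated with a toy two-variable example (invented data, not the study's): when two pollutant wash-off variables are strongly correlated, the first principal component of their covariance matrix captures nearly all of the variance.

```python
import math

# Two invented, strongly correlated wash-off measurements.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]

# Sample covariance matrix [[cxx, cxy], [cxy, cyy]].
n = len(x)
mx, my = sum(x) / n, sum(y) / n
cxx = sum((a - mx) ** 2 for a in x) / (n - 1)
cyy = sum((b - my) ** 2 for b in y) / (n - 1)
cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# Eigenvalues of a symmetric 2x2 matrix, in closed form.
tr, det = cxx + cyy, cxx * cyy - cxy ** 2
disc = math.sqrt(tr ** 2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# Fraction of total variance captured by the first principal component.
explained_by_pc1 = lam1 / (lam1 + lam2)
```

With data this correlated, the first component explains over 99% of the variance, which is the property that lets the study replace many correlated calibration variables with a handful of factors.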