860 results for Hexarotor. Dynamic modeling. Robust backstepping control. EKF Attitude Estimation


Relevance: 30.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. DG has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or AC-AC converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components in addition to the desired current.
Also, noise and harmonic distortions can impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known, advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements where the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through extensive mathematical work based on the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed by trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not very well tracked by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
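The abstract does not give the thesis's filter formulations, but the general idea of Kalman-based estimation of a signal's amplitude and phase can be sketched as follows. This is a minimal illustration assuming a known, fixed frequency and a signal modeled as y_k = a·cos(ωt_k) + b·sin(ωt_k) + noise; the function name and tuning values are hypothetical, not from the thesis.

```python
import numpy as np

def kalman_phasor(samples, freq, fs, q=1e-6, r=1e-2):
    """Estimate [a, b] in y_k = a*cos(w t_k) + b*sin(w t_k) + noise
    with a linear Kalman filter (frequency assumed known)."""
    w = 2.0 * np.pi * freq
    x = np.zeros(2)          # state: in-phase and quadrature components
    P = np.eye(2)            # state covariance
    Q = q * np.eye(2)        # process noise (state nearly constant)
    for k, y in enumerate(samples):
        t = k / fs
        H = np.array([[np.cos(w * t), np.sin(w * t)]])
        P = P + Q                            # predict (identity dynamics)
        S = (H @ P @ H.T).item() + r         # innovation variance
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + K[:, 0] * (y - (H @ x).item())
        P = (np.eye(2) - K @ H) @ P          # update covariance
    amplitude = np.hypot(x[0], x[1])
    phase = np.arctan2(-x[1], x[0])          # y = A*cos(w t + phase)
    return amplitude, phase

# Noisy 50 Hz test signal, amplitude 1.5, phase 0.3 rad, sampled at 5 kHz
fs = 5000.0
t = np.arange(1000) / fs
rng = np.random.default_rng(0)
y = 1.5 * np.cos(2 * np.pi * 50.0 * t + 0.3) + 0.01 * rng.standard_normal(t.size)
amp, ph = kalman_phasor(y, 50.0, fs)
```

With the in-phase/quadrature state estimated, amplitude and phase follow directly; the thesis's algorithms additionally estimate the frequency itself and adapt the transition matrices accordingly.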

Relevance: 30.00%

Publisher:

Abstract:

With the growing significance of services in most developed economies, there is an increased interest in the role of service innovation in service firm competitive strategy. Despite growing literature on service innovation, it remains fragmented reflecting the need for a model that captures key antecedents driving the service innovation-based competitive advantage process. Building on extant literature and using thirteen in-depth interviews with CEOs of project-oriented service firms, this paper presents a model of innovation-based competitive advantage. The emergent model suggests that entrepreneurial service firms pursuing innovation carefully select and use dynamic capabilities that enable them to achieve greater innovation and sustained competitive advantage. Our findings indicate that firms purposefully use create, extend and modify processes to build and nurture key dynamic capabilities. The paper presents a set of theoretical propositions to guide future research. Implications for theory and practice are discussed. Finally, directions for future research are outlined.

Relevance: 30.00%

Publisher:

Abstract:

This paper describes modelling, estimation and control of the horizontal translational motion of an open-source and cost-effective quadcopter — the MikroKopter. We determine the dynamics of its roll and pitch attitude controller, system latencies, and the units associated with the values exchanged with the vehicle over its serial port. Using this we create a horizontal-plane velocity estimator that uses data from the built-in inertial sensors and an onboard laser scanner, and implement translational control using a nested control loop architecture. We present experimental results for the model and estimator, as well as closed-loop positioning.
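The nested control loop architecture mentioned above can be illustrated with a toy simulation: an outer proportional position loop generates a velocity setpoint, an inner loop converts velocity error into a pitch-angle command, and a first-order lag stands in for the vehicle's onboard attitude controller. All gains and the lag constant are illustrative assumptions, not values from the paper.

```python
import math

def simulate_nested(x_ref=1.0, dt=0.01, steps=3000,
                    kp_pos=0.8, kp_vel=2.0, g=9.81, tau=0.2):
    """Nested loops: position error -> velocity setpoint (outer P loop),
    velocity error -> pitch command (inner P loop); a first-order lag
    with time constant tau models the onboard attitude controller."""
    x = v = theta = 0.0
    for _ in range(steps):
        v_ref = kp_pos * (x_ref - x)                  # outer position loop
        theta_cmd = kp_vel * (v_ref - v) / g          # inner velocity loop
        theta_cmd = max(-0.35, min(0.35, theta_cmd))  # tilt limit (rad)
        theta += (theta_cmd - theta) * dt / tau       # attitude lag
        v += g * math.tan(theta) * dt                 # tilt -> horizontal accel
        x += v * dt
    return x, v

x_end, v_end = simulate_nested()   # settles near x_ref with near-zero velocity
```

The separation works because the attitude loop is much faster than the translational dynamics, which is also the premise of the paper's loop nesting.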

Relevance: 30.00%

Publisher:

Abstract:

Increasingly, studies are reported that examine how conceptual modeling is conducted in practice. Yet, typically the studies to date have examined in isolation how modeling grammars can be, or are, used to develop models of information systems or organizational processes, without considering that such modeling is typically done by means of a modeling tool that extends the modeling functionality offered by a grammar through complementary features. This paper extends the literature by examining how the use of seven different features of modeling tools affects the usage beliefs users develop when using modeling grammars for process modeling. We show that five distinct tool features positively affect the usefulness, ease of use and satisfaction beliefs of users. We offer a number of interpretations of the findings. We also describe how the results inform decisions of relevance to developers of modeling tools as well as managers in charge of making modeling-related investment decisions.

Relevance: 30.00%

Publisher:

Abstract:

At the beginning of the pandemic (H1N1) 2009 outbreak, we estimated the potential surge in demand for hospital-based services in 4 Health Service Districts of Queensland, Australia, using the FluSurge model. Modifications to the model were made on the basis of emergent evidence and results provided to local hospitals to inform resource planning for the forthcoming pandemic. To evaluate the fit of the model, a comparison between the model's predictions and actual hospitalizations was made. In early 2010, a Web-based survey was undertaken to evaluate the model's usefulness. Predictions based on modified assumptions arising from the new pandemic achieved a better fit than results from the default model. The survey identified that the modeling support was helpful and useful for service planning in local hospitals. Our research illustrates an integrated framework involving post hoc comparison and evaluation for implementing epidemiologic modeling in response to a public health emergency.

Relevance: 30.00%

Publisher:

Abstract:

The compressed gas industry and government agencies worldwide utilize "adiabatic compression" testing for qualifying high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. This test methodology is known by various terms, the most common being adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing. The test methodology is described in greater detail throughout this document, but in summary it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, they are rapidly heated by the sudden increase in pressure to temperatures (thermal energies) sufficient to sometimes result in ignition of the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression process, the more nearly "adiabatic" the pressure surge is presumed to be, and the more closely it has been argued to simulate an isentropic process. Generally speaking, adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen and has been implicated in many fire investigations. Because of the ease of ignition of many nonmetallic materials by this heating mechanism, many industry standards prescribe this testing. However, the results between various laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile achieved (i.e., the temperature/time history of the gas) during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made.
This research evaluated the following questions: 1) Can the rapid compression process required by the industry standards be thermodynamically and fluid-dynamically modeled so that predictions of the thermal profiles can be made? 2) Can the thermal profiles produced by the rapid compression process be measured, in order to validate the thermodynamic and fluid dynamic models and estimate the severity of the test? 3) Can controlling parameters be recommended so that new guidelines may be established for the industry standards, to resolve inconsistencies between the various test laboratories conducting tests according to the present standards?
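As background to the "adiabatic" severity question, the ideal (reversible, isentropic) end temperature for an ideal gas compressed from state (T1, p1) to pressure p2 is T2 = T1·(p2/p1)^((γ−1)/γ). Real rapid compression is neither perfectly adiabatic nor reversible, so this is only an upper-bound sketch, and the pressures below are illustrative numbers, not values from a standard:

```python
def isentropic_final_temp(T1, p1, p2, gamma=1.40):
    """Ideal-gas isentropic compression: T2 = T1 * (p2/p1)**((gamma-1)/gamma).
    T1 in kelvin; p1, p2 in any consistent pressure unit."""
    return T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Oxygen (gamma ~ 1.40) at 293 K compressed from 0.1 MPa to 25 MPa:
T2 = isentropic_final_temp(293.0, 0.1e6, 25.0e6)   # roughly 1400 K
```

Temperatures of this order comfortably exceed the autoignition temperatures of common polymer seal materials in oxygen, which is why the test is so effective at kindling nonmetallics.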

Relevance: 30.00%

Publisher:

Abstract:

In this work, a biomechanical model is used to simulate the muscle forces necessary to maintain posture in a car seat under different support conditions.

Relevance: 30.00%

Publisher:

Abstract:

In this paper we describe the dynamic simulation of an 18-degree-of-freedom hexapod robot, with the objective of developing control algorithms for smooth, efficient and robust walking on irregular terrain. This is to be achieved by using force sensors, in addition to the conventional joint angle sensors, as proprioceptors. The reaction forces on the feet of the robot provide the necessary information on the robot's interaction with the terrain. As a first step we validate the simulator by implementing movement control by joint torques using PID controllers. As an unexpected by-product, we find that it is simple to achieve robust walking behaviour on even terrain for a hexapod with the help of PID controllers and by specifying a trajectory of only a few joint configurations.
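A minimal sketch of the PID joint-torque control described above, for a single joint modeled as an inertia with viscous damping. The gains and plant constants are illustrative assumptions, not the simulator's values.

```python
class PID:
    """Discrete PID controller with a fixed sample time."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def track_joint(target=0.5, dt=0.001, steps=20000):
    """One joint modeled as inertia + viscous damping, driven by PID torque."""
    inertia, damping = 0.01, 0.05     # illustrative plant constants
    q = qd = 0.0                      # joint angle (rad) and velocity
    pid = PID(kp=5.0, ki=2.0, kd=0.5, dt=dt)
    for _ in range(steps):
        tau = pid.update(target - q)  # torque command from joint-angle error
        qdd = (tau - damping * qd) / inertia
        qd += qdd * dt
        q += qd * dt
    return q

q_final = track_joint()   # converges to the commanded joint angle
```

In the full simulation, one such loop per joint plus a short sequence of target joint configurations is, per the paper, already enough for robust walking on even terrain.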

Relevance: 30.00%

Publisher:

Abstract:

An increase in the likelihood of navigational collisions in port waters has put focus on the collision avoidance process in port traffic safety. The most widely used on-board collision avoidance system is the automatic radar plotting aid, a passive warning system that triggers an alert based on the pilot's pre-defined indicators of distance and time proximity at the closest point of approach in encounters with nearby vessels. To better help pilots with decision making in close-quarters situations, collision risk should be considered as a continuous monotonic function of the proximities, and risk perception should be treated probabilistically. This paper derives an ordered probit regression model to study perceived collision risks. To illustrate the procedure, the risks perceived by Singapore port pilots were obtained to calibrate the regression model. The results demonstrate that a framework based on the probabilistic risk assessment model can be used to give a better understanding of collision risk and to define a more appropriate level of evasive actions.
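For readers unfamiliar with the model class: an ordered probit maps a latent risk score x·β, with standard normal error, onto ordered risk categories via cutpoints. A minimal sketch of the category probabilities follows; the covariates, coefficients and cutpoints are hypothetical, not the calibrated Singapore values.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(x, beta, cuts):
    """P(Y = j | x) for an ordered probit: latent y* = x.beta + eps with
    eps ~ N(0, 1); outcome is category j when c_j < y* <= c_{j+1}."""
    eta = float(np.dot(x, beta))
    c = np.concatenate(([-np.inf], np.asarray(cuts, float), [np.inf]))
    return np.diff(norm.cdf(c - eta))   # one probability per category

# Hypothetical 5-level perceived-risk scale with two proximity covariates
probs = ordered_probit_probs(x=[1.0, 2.0], beta=[0.5, 0.1],
                             cuts=[0.0, 0.7, 1.4, 2.1])
```

Calibration then amounts to choosing β and the cutpoints by maximum likelihood on the pilots' reported risk levels; here only the probability mapping is shown.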

Relevance: 30.00%

Publisher:

Abstract:

In this study the interplay effects for Enhanced Dynamic Wedge (EDW) treatments are experimentally investigated. Single and multiple field EDW plans for different wedge angles were delivered to a phantom and detector on a moving platform, with various periods and amplitudes, for parallel and perpendicular motions. A four-field, 4D-CT-planned lung EDW treatment was delivered to a dummy tumor over four fractions. For the single-field parallel case, the amplitude and the period of motion both affect the interplay, resulting in the appearance of a step function and penumbral cut-off, with the discrepancy worst where the collimator and tumor speeds are similar. For perpendicular motion, the amplitude of tumor motion is the only dominant factor. For a large wedge angle the dose discrepancy is more pronounced than for a small wedge angle at the same field size and amplitude-period values. For a small field size, i.e. 5 × 5 cm2, the loss of the wedged distribution was observed for both 60° and 15° wedge angles for both parallel and perpendicular motions. Film results from the 4D-CT-planned delivery displayed a mix of over- and under-dosage over 4 fractions, with a gamma pass rate of 40% for the averaged film image at 3%/1 mm DTA (Distance to Agreement). The amplitude and period of tumor motion both affect the interplay for single- and multi-field EDW treatments, and for a limited (4 or 5) fraction delivery there is a possibility of non-averaging of the EDW interplay.

Relevance: 30.00%

Publisher:

Abstract:

Advances in safety research—trying to improve the collective understanding of motor vehicle crash causes and contributing factors—rest upon the pursuit of numerous lines of research inquiry. The research community has focused considerable attention on analytical methods development (negative binomial models, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might logically seek to know which lines of inquiry might provide the most significant improvements in understanding crash causation and/or prediction. It is the contention of this paper that the exclusion of important variables (causal or surrogate measures of causal variables) causes omitted variable bias in model estimation, and that this is an important and neglected line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models—but they offer significant opportunities to better understand contributing factors and/or causes of crashes. This study examines the role of important variables (other than Average Annual Daily Traffic (AADT)) that are generally omitted from intersection crash prediction models. In addition to the geometric and traffic regulatory information of intersections, the proposed model includes many spatial factors, such as local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools—representing a mix of potential environmental and human factors that are theoretically important but rarely used. Results suggest that these variables, in addition to AADT, have significant explanatory power, and that their exclusion leads to omitted variable bias. Evidence is provided that variable exclusion overstates the effect of minor road AADT by as much as 40% and major road AADT by 14%.
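The omitted variable bias mechanism can be demonstrated on synthetic data: when an omitted regressor (here a stand-in "spatial" factor) is correlated with an included one (a stand-in for AADT), the included coefficient absorbs part of the omitted effect. All data-generating values below are illustrative, chosen so that omission inflates the AADT coefficient by roughly 40%, echoing the magnitude the paper reports.

```python
import numpy as np

def ols_coefs(X, y):
    """OLS coefficients, with an intercept column prepended to X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

rng = np.random.default_rng(42)
n = 20000
aadt = rng.normal(size=n)                        # stand-in for (scaled) minor-road AADT
spatial = 0.8 * aadt + 0.6 * rng.normal(size=n)  # spatial factor correlated with AADT
crashes = 1.0 * aadt + 0.5 * spatial + rng.normal(size=n)

b_full = ols_coefs(np.column_stack([aadt, spatial]), crashes)  # both regressors
b_omit = ols_coefs(aadt[:, None], crashes)                     # spatial factor omitted
# b_omit[1] converges to 1.0 + 0.5*0.8 = 1.4: about a 40% overstatement
```

The paper's crash models are count models rather than OLS, but the bias logic (omitted-coefficient times the regression of the omitted variable on the included one) is the same.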

Relevance: 30.00%

Publisher:

Abstract:

The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects which are a consequence of the combined tumor and beam motion. Prior to this work, reported studies on EDW-based interplay effects had been restricted to the use of experimental methods for assessing single-field, non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments. Single and multiple field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially this work experimentally studies interplay effects for single-field, non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm2), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumor motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumor and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumor motion affect the interplay; this becomes more prominent where the collimator and tumor speeds become identical. For perpendicular motion the amplitude of tumor motion is the dominant factor, whereas varying the period of tumor motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions.
The use of a small field size with a large tumor motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single-field measurements, a motion amplitude and period have been identified which show the poorest agreement between the target motion and dynamic delivery, and these are used as the "worst case" motion parameters. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumor insert over four fractions using the worst case parameters, i.e. 40 mm amplitude and 6 s period. The analysis of the film doses using gamma analysis at 3%-3 mm indicates non-averaging of the interplay effects for this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code is validated and automated. DYNJAWS has recently been introduced to model dynamic wedges, and is therefore commissioned for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW, and it is shown that the dynamic mode is more accurate. An automation of the DYNJAWS-specific input file, which specifies the probability of selection of a subfield and the respective jaw coordinates, has been carried out; this simplifies the generation of the BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple-field EDW treatments using MC methods.
The 4D CT data of an IMRT phantom with the dummy tumor is used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion is in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the "zero-scan" method is developed for reading the gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the CT scans. In this investigation the interplay effects have been studied for single and multiple field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo methods, the DYNJAWS component module of the BEAMnrc code has been validated and automated, and further used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading gel images using x-ray CT without losing precision or accuracy.
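The gamma analysis used throughout this work compares a delivered dose distribution with a reference by combining a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. A simplified 1-D global gamma sketch follows; the toy profiles and tolerances are illustrative, and clinical gamma is computed on 2-D film or 3-D dose grids.

```python
import numpy as np

def gamma_1d(ref, evl, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma: for each reference point, the minimum
    over evaluated points of sqrt((dose diff / D)^2 + (distance / d)^2),
    with D = dose_tol * max(ref) and d = dist_tol (mm). Gamma <= 1 passes."""
    ref, evl, x = (np.asarray(a, float) for a in (ref, evl, x))
    d_norm = dose_tol * ref.max()
    out = np.empty(ref.size)
    for i in range(ref.size):
        dd = (evl - ref[i]) / d_norm
        dx = (x - x[i]) / dist_tol
        out[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return out

x = np.linspace(0.0, 100.0, 201)            # positions (mm)
ref = np.exp(-(((x - 50.0) / 20.0) ** 2))   # toy reference dose profile
evl = np.exp(-(((x - 52.0) / 20.0) ** 2))   # same profile shifted by 2 mm
pass_rate = np.mean(gamma_1d(ref, evl, x, dose_tol=0.03, dist_tol=3.0) <= 1.0)
```

A 2 mm shift passes a 3%/3 mm criterion almost everywhere, while the interplay-distorted films in this work failed badly (49% pass rate), showing how discriminating the metric is.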

Relevance: 30.00%

Publisher:

Abstract:

Singapore crash statistics from 2001 to 2006 show that the motorcyclist fatality and injury rates per registered vehicle are higher than those of other motor vehicles by 13 and 7 times respectively. The crash involvement rate of motorcyclists as victims of other road users is also about 43%. The objective of this study is to identify the factors that contribute to the fault of motorcyclists involved in crashes. This is done by using a binary logit model to differentiate between at-fault and not-at-fault cases, and the analysis is further categorized by the location of the crashes, i.e., at intersections, on expressways and at non-intersections. A number of explanatory variables representing roadway characteristics, environmental factors, motorcycle descriptions, and rider demographics have been evaluated. The time trend effect shows that not-at-fault crash involvement of motorcyclists has increased with time. The likelihood of night-time crashes has also increased for not-at-fault crashes at intersections and on expressways. The presence of surveillance cameras is effective in reducing not-at-fault crashes at intersections. Wet road surfaces increase at-fault crash involvement at non-intersections. At intersections, not-at-fault crash involvement is more likely on single-lane roads or on the median lane of multi-lane roads, while on expressways at-fault crash involvement is more likely on the median lane. Roads with a higher speed limit have higher at-fault crash involvement, and this is also true on expressways. Motorcycles with pillion passengers or with higher engine capacity have a higher likelihood of being at fault in crashes on expressways. Motorcyclists are more likely to be at fault in collisions involving pedestrians, and this effect is stronger at night. In multi-vehicle crashes, motorcyclists are more likely to be victims than at fault. Young and older riders are more likely to be at fault in crashes than middle-aged riders. The findings of this study will help to develop more targeted countermeasures to improve motorcycle safety and more cost-effective safety awareness programs in motorcyclist training.
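A binary logit of the kind used here models P(at fault) = 1/(1 + e^(−x·β)) and is fit by maximum likelihood. A minimal Newton-Raphson sketch on synthetic data follows; the covariates ("night", "wet") and coefficient values are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood binary logit via Newton-Raphson.
    X must already contain an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                       # score vector
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # information matrix
        beta += np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(7)
n = 5000
night = rng.integers(0, 2, n).astype(float)   # hypothetical covariate
wet = rng.integers(0, 2, n).astype(float)     # hypothetical covariate
X = np.column_stack([np.ones(n), night, wet])
true_beta = np.array([-1.0, 0.6, 0.8])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = fit_logit(X, y)   # recovers roughly [-1.0, 0.6, 0.8]
```

A positive fitted coefficient raises the odds of the at-fault outcome, which is how effects such as wet surfaces or higher speed limits are read off in the study.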

Relevance: 30.00%

Publisher:

Abstract:

Designing practical rules for controlling invasive species is a challenging task for managers, particularly when species are long-lived and have complex life cycles and high dispersal capacities. Previous findings derived from plant matrix population analyses suggest that effective control of long-lived invaders may be achieved by focusing on killing adult plants. However, the cost-effectiveness of managing different life stages has not been evaluated. We illustrate the benefits of integrating matrix population models with decision theory to undertake this evaluation, using empirical data from the largest infestation of mesquite (Leguminosae: Prosopis spp.) within Australia. We include in our model the mesquite life cycle, different dispersal rates and control actions that target individuals at different life stages, with varying costs depending on the intensity of control effort. We then use stochastic dynamic programming to derive cost-effective control strategies that minimize the cost of controlling the core infestation locally below a density threshold and the future cost of control arising from infestation of adjacent areas via seed dispersal. Through sensitivity analysis, we show that four robust management rules guide the allocation of resources between mesquite life stages for this infestation: (i) when there is no seed dispersal, no action is required until the density of adults exceeds the control threshold, and then only control of adults is needed; (ii) when there is seed dispersal, the control strategy depends only on knowledge of the density of adults and large juveniles (LJ) and broad categories of dispersal rates; (iii) if the density of adults is higher than the density of LJ, controlling adults is most cost-effective; (iv) alternatively, if the density of LJ is equal to or higher than the density of adults, management efforts should be spread between adults, large juveniles and, to a lesser extent, small juveniles, but never saplings.
Synthesis and applications. In this study, we show that simple rules can be found for managing invasive plants with complex life cycles and high dispersal rates when population models are combined with decision theory. In the case of our mesquite population, focusing effort on controlling adults is not always the most cost-effective way to meet our management objective.
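The matrix population modelling underlying these rules can be sketched as follows: the population's asymptotic growth rate is the dominant eigenvalue of the stage-transition matrix, and a control action modifies the corresponding matrix entries. The four-stage matrix and the 50% adult-kill control below are hypothetical numbers for illustration, not the mesquite parameterisation.

```python
import numpy as np

def growth_rate(A):
    """Asymptotic population growth rate: modulus of the dominant eigenvalue."""
    return max(abs(np.linalg.eigvals(A)))

# Hypothetical 4-stage matrix (seedling, sapling, juvenile, adult).
# Top-right entry: seedlings produced per adult per year; sub-diagonal
# entries: stage progression; diagonal entries: remaining in the stage.
A = np.array([
    [0.0, 0.0, 0.0, 5.00],
    [0.3, 0.4, 0.0, 0.00],
    [0.0, 0.3, 0.6, 0.00],
    [0.0, 0.0, 0.2, 0.95],
])
lam = growth_rate(A)          # > 1: the infestation grows without control

A_ctl = A.copy()
A_ctl[:, 3] *= 0.5            # control: kill 50% of adults each year
lam_ctl = growth_rate(A_ctl)  # < 1: the population declines under control
```

The paper's contribution is to wrap such a projection model in a stochastic dynamic program, so that the choice of which column (life stage) to modify each year minimises total control cost rather than just pushing the growth rate below one.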

Relevance: 30.00%

Publisher:

Abstract:

Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary for choosing an appropriate approach to dealing with the dynamic evolution of mission-critical or non-stop business systems. The rationale is that impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. To predict this impact in advance, the challenge is two-fold. First, a unified benchmark is necessary to expose QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance on the appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them into concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with updated impact analysis and guidance.