984 results for Effectiveness Estimation
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The issue of global warming calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current. Noise and harmonic distortion can also degrade the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The novel techniques proposed in this thesis are compared against the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3.
The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter and used in building the transition matrices. The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter: the estimated frequency is given to the Kalman filter, and parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through extensive mathematical work with regard to the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed by trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced by the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking these changes. Therefore, an appropriate combination of the LES and modified Kalman filtering techniques is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme.
Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion of the voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
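The abstract does not reproduce the estimator equations, so the following is only a rough sketch of the general arrangement it describes for Chapter 3: a plain-summation FIR prefilter, a frequency supplied by a separate estimation unit, and a linear Kalman filter tracking amplitude and phase. The two-state signal model, all names and all tuning values are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def fir_smooth(samples, width=4):
    """Plain-summation FIR filter: a moving sum over `width` samples,
    normalised here for unity DC gain. (In the thesis the filter's gain
    and phase shift at the fundamental are compensated through the
    Kalman filter's settings; that compensation is omitted here.)"""
    return np.convolve(samples, np.ones(width) / width, mode="valid")

def kalman_amp_phase(samples, freq_hz, fs, q=1e-4, r=1e-2):
    """Track amplitude and phase of s_k = A*cos(2*pi*f*k/fs + phi) with a
    two-state linear Kalman filter, state x = [A*cos(phi), A*sin(phi)].
    The frequency is supplied externally, mirroring the Chapter 3 layout
    in which a separate unit estimates it from the FIR-refined samples."""
    x = np.zeros(2)          # state estimate
    P = np.eye(2)            # state covariance (an assumed initial setting)
    Q = q * np.eye(2)        # process noise covariance
    for k, z in enumerate(samples):
        theta = 2.0 * np.pi * freq_hz * k / fs
        H = np.array([np.cos(theta), -np.sin(theta)])  # measurement row
        P = P + Q                    # predict: parameters locally constant
        S = H @ P @ H + r            # innovation variance
        K = P @ H / S                # Kalman gain
        x = x + K * (z - H @ x)      # state update
        P = P - np.outer(K, H @ P)   # covariance update
    return np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])  # amplitude, phase

# Toy check: a noisy 50 Hz signal sampled at 1.6 kHz. (The thesis would
# first pass s through fir_smooth and compensate its gain/phase shift.)
fs, f = 1600.0, 50.0
k = np.arange(1600)
s = 2.0 * np.cos(2 * np.pi * f * k / fs + 0.5)
s += np.random.default_rng(0).normal(0.0, 0.05, k.size)
amp, phase = kalman_amp_phase(s, f, fs)  # approximately (2.0, 0.5)
```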
Abstract:
This paper describes modelling, estimation and control of the horizontal translational motion of an open-source and cost-effective quadcopter — the MikroKopter. We determine the dynamics of its roll and pitch attitude controller, system latencies, and the units associated with the values exchanged with the vehicle over its serial port. Using these results, we create a horizontal-plane velocity estimator that uses data from the built-in inertial sensors and an onboard laser scanner, and implement translational control using a nested control loop architecture. We present experimental results for the model and estimator, as well as closed-loop positioning.
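A minimal sketch of the nested control loop architecture mentioned above, under assumed proportional gains and saturation limits (the paper's identified dynamics, latencies and units are not reproduced here):

```python
# Minimal sketch of a nested position/velocity loop, assuming the vehicle
# accepts roll/pitch angle commands over its serial port (as the
# MikroKopter does). All gains and limits are illustrative only.

def position_controller(pos_ref, pos_est, kp_pos=0.8, v_max=1.0):
    """Outer loop: position error (m) -> saturated velocity setpoint (m/s)."""
    v_cmd = kp_pos * (pos_ref - pos_est)
    return max(-v_max, min(v_max, v_cmd))

def velocity_controller(v_ref, v_est, kp_vel=0.3, tilt_max=0.2):
    """Inner loop: velocity error (m/s) -> saturated tilt command (rad),
    which the identified onboard attitude controller then tracks."""
    tilt_cmd = kp_vel * (v_ref - v_est)
    return max(-tilt_max, min(tilt_max, tilt_cmd))

# One control step per horizontal axis; pos_est and v_est would come from
# the velocity estimator fusing inertial and laser-scanner data.
v_sp = position_controller(pos_ref=1.0, pos_est=0.2)
pitch_cmd = velocity_controller(v_sp, v_est=0.1)
```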
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of the four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
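As an illustration of the neighbourhood structure behind the CAR layered model (neighbours only within the same depth layer), a sketch follows; the regular grid and rook-adjacency rule are assumptions for illustration, not the thesis's exact lattice:

```python
import numpy as np

def car_layered_adjacency(nx, ny, nz):
    """Binary adjacency matrix for an nx x ny x nz grid in which sites are
    neighbours only if they are rook-adjacent *within the same depth layer*
    (index iz). Omitting vertical links is what lets the structured and
    unstructured variances differ freely at every depth."""
    def idx(ix, iy, iz):
        return (iz * ny + iy) * nx + ix
    W = np.zeros((nx * ny * nz, nx * ny * nz), dtype=int)
    for iz in range(nz):
        for iy in range(ny):
            for ix in range(nx):
                i = idx(ix, iy, iz)
                if ix + 1 < nx:                  # east neighbour, same layer
                    j = idx(ix + 1, iy, iz)
                    W[i, j] = W[j, i] = 1
                if iy + 1 < ny:                  # north neighbour, same layer
                    j = idx(ix, iy + 1, iz)
                    W[i, j] = W[j, i] = 1
    return W

W = car_layered_adjacency(nx=4, ny=3, nz=5)  # 5 depth layers, no vertical links
```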
Abstract:
Introduction/Objectives: Many strategies are used to control MRSA in hospitals. Only a few have been assessed in clinical trials, and it is not obvious how findings should be generalised between settings. Uncertainty remains about which strategies represent the most appropriate use of scarce resources. We assess the cost-effectiveness of alternative MRSA screening and infection control strategies in England and Wales and discuss international relevance. Methods: Models of MRSA transmission in ICUs and general medical (GM) wards were developed and used to evaluate different screening methods combined with decolonisation or isolation. Strategies were compared in terms of costs and health benefits (quality-adjusted life years, QALYs). Different prevalences, proportions of high-risk patients and ward sizes were investigated, and probabilistic sensitivity analyses (PSA) conducted. Results: Decolonisation strategies were cost-saving in ICUs at a 5% admission prevalence, with admission and weekly PCR screening the most cost-effective (£3,929/QALY). In ICUs, screening and isolation reduced infection rates by ~10%. With admission prevalence ≤5%, targeting screening and isolation to high-risk patients was optimal. In GM wards, decolonisation and isolation strategies, though able to reduce MRSA infection rates by up to ~50%, were not cost-effective. Conclusion: The largest reductions in MRSA infection were achieved by screening and decolonisation strategies, which were cost-effective in ICU settings. In comparison, there is limited potential for screening and control strategies to be cost-effective in GM wards due to lower infection and mortality rates.
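The transmission models themselves are not shown in the abstract; the sketch below only illustrates the cost-per-QALY comparison in which the results are expressed. All costs, QALYs and the threshold are illustrative assumptions, not the study's figures.

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.
    A negative ratio with a positive QALY gain means the new strategy is
    cost-saving, i.e. it dominates the baseline."""
    d_qaly = qaly_new - qaly_base
    if d_qaly <= 0:
        raise ValueError("strategy gains no QALYs over the baseline")
    return (cost_new - cost_base) / d_qaly

# Illustrative numbers only (not the study's), judged against a commonly
# cited 20,000 GBP/QALY willingness-to-pay threshold.
ratio = icer(cost_new=105_000, qaly_new=51.0, cost_base=100_000, qaly_base=49.8)
cost_effective = ratio < 20_000   # ~4,167 GBP/QALY -> cost-effective
```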
Abstract:
This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modeling is performed to generate cumulative plots based on the availability of data: a) Case-D, for detector data only; b) Case-DS, for detector data and signal timings; and c) Case-DSS, for detector data, signal timings and saturation flow rate. An empirical study and a sensitivity analysis based on simulation experiments show consistent performance for Case-DS and Case-DSS, whereas the performance of Case-D is inconsistent. Case-D is sensitive to the detection interval and the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low, whereas for a detection interval around 1.5 times the signal cycle, both are high.
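As a rough illustration of how a cumulative plot pair yields travel time (the horizontal distance between the upstream and downstream cumulative count curves at a given vehicle number, assuming first-in-first-out vehicle ordering), a sketch follows; the paper's analytical construction of the plots for Cases D, DS and DSS is not reproduced.

```python
import numpy as np

def travel_time_from_cumulatives(t, U, D, n):
    """Travel time of the n-th vehicle, read off as the horizontal gap
    between the upstream cumulative count U(t) and the downstream
    cumulative count D(t), assuming first-in-first-out."""
    t_in = np.interp(n, U, t)    # time vehicle n crosses the upstream detector
    t_out = np.interp(n, D, t)   # time it crosses the downstream detector
    return t_out - t_in

# Toy example: a steady 10 veh/min stream delayed 2 minutes downstream.
t = np.arange(0.0, 30.0, 0.5)                    # minutes
U = 10.0 * t                                     # upstream cumulative count
D = 10.0 * np.maximum(t - 2.0, 0.0)              # downstream cumulative count
tt = travel_time_from_cumulatives(t, U, D, n=100.0)   # -> 2.0 minutes
```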
Abstract:
Various time-memory tradeoff attacks on stream ciphers have been proposed over the years. However, the claimed success of these attacks assumes that the initialisation process of the stream cipher is one-to-one, and some stream cipher proposals do not have a one-to-one initialisation process. In this paper, we examine the impact of this on the success of time-memory-data tradeoff attacks. Under these circumstances, some attacks are more successful than previously claimed while others are less so. The conditions for both cases are established.
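The paper's analysis is not reproduced in the abstract; the toy sketch below only illustrates its premise, namely that a non-injective initialisation shrinks the set of reachable internal states (to roughly 1 - 1/e of the space for a uniformly random map), which is what alters the tradeoff parameters. The state size and the random map are assumptions.

```python
import random

def reachable_fraction(n_states, one_to_one):
    """Fraction of the internal state space reachable through the
    initialisation map. A one-to-one (key, IV) -> state map covers the
    whole space; a uniformly random non-injective map covers only about
    1 - 1/e of it, shrinking the search space a time-memory-data
    tradeoff attack actually has to cover."""
    if one_to_one:
        return 1.0                       # a permutation reaches every state
    rng = random.Random(0)
    image = {rng.randrange(n_states) for _ in range(n_states)}
    return len(image) / n_states

print(reachable_fraction(2**16, one_to_one=True))    # 1.0
print(reachable_fraction(2**16, one_to_one=False))   # ~0.632
```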
Abstract:
Background: The objective of this study was to scrutinize the number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research not only by mapping potential logarithmic-linear shifts but also by providing a new perspective: studying in detail the estimation strategies applied to individual target digits within a number range familiar to children. Methods: Typically developing children (n = 67) from Years 1–3 completed a number-to-position numerical estimation task (0–20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance, we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Results: Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; uniquely, however, we have identified that children employ variable strategies when completing numerical estimation, with the level of strategy advancing with development. Conclusion: In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach; alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
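A minimal sketch of the first analysis step described above, comparing logarithmic and linear regression fits of number-line estimates; the data below are synthetic and the R-squared comparison is an assumed stand-in for the study's actual model-fitting criterion.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted curve."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Synthetic "compressed" estimates on a 0-20 line, mimicking a younger
# child whose placements follow a logarithmic pattern.
rng = np.random.default_rng(0)
targets = np.arange(1, 20)
estimates = 20.0 * np.log(targets + 1) / np.log(21)
estimates += rng.normal(0.0, 0.5, targets.size)

# Linear model: estimate = a*target + b
linear_fit = np.polyval(np.polyfit(targets, estimates, 1), targets)
# Logarithmic model: estimate = a*ln(target) + b
log_fit = np.polyval(np.polyfit(np.log(targets), estimates, 1), np.log(targets))

print("linear R^2:", r_squared(estimates, linear_fit))
print("log    R^2:", r_squared(estimates, log_fit))   # the better fit here
```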
Abstract:
Safety culture is a concept that has long been accepted in high-risk industries such as aviation, nuclear and mining; however, considerable research is now being undertaken within the construction sector, with varying levels of success. The current paper discusses three recent interlocked projects that have had some success in the Australian construction industry. The first project examined the development and implementation of a safety competency framework targeted at safety-critical positions across first-tier construction organisations. Combining qualitative and quantitative methods, the project: developed a matrix of safety-critical positions (n=11) and safety management tasks (SMTs; n=39); mapped the process steps for their acquisition and ongoing development; detailed the knowledge, skills and behaviours required for all SMTs; and outlined the organisational cultural outcomes that could be anticipated from a successful implementation of the framework. The second project extended the research on safety competency and leadership to develop behavioural guidelines for leaders to drive safety culture change down to second-tier companies. This was designed to assist smaller construction companies to customise their own competency framework and develop implementation guidelines that match their aspirations and resources. The third interlocked project explored the use of safety effectiveness indicators (SEIs) as an industry-relevant assessment tool for reducing risk on construction sites. With direct linkages to safety competencies and safety management tasks, the SEIs are the next step towards an integrated safety cultural approach to safety and extend the concept of positive performance indicators (PPIs) by providing a valid, reliable and user-friendly measurement platform. Taken together, the results of the interlocked projects suggest that safety culture research has many potential benefits for the construction industry, particularly when research is conducted in partnership with industry stakeholders. Suggestions are made for future research, including further application and testing of the safety competency framework and aligning SEIs across construction projects of varying size, location and design.
Abstract:
There is an intimate interconnectivity between the policy guidelines defining reform and the delineation of what research methods would subsequently be applied to determine reform success. Research is guided as much by the metaphors describing it as by the ensuing empirical definition of actions or results obtained from it. In a call for different reform policy metaphors, Lumby and English (2010) note, "The primary responsibility for the parlous state of education... lies with the policy makers that have racked our schools with reductive and dehumanizing processes, following the metaphors of market efficiency, and leadership models based on accounting and the characteristics of machine bureaucracy" (p. 127).
Abstract:
Objective: To comprehensively measure the burden of hepatitis B, liver cirrhosis and liver cancer in Shandong province, using disability-adjusted life years (DALYs) to estimate the disease burden attributable to hepatitis B virus (HBV) infection. Methods: Based on the mortality data for hepatitis B, liver cirrhosis and liver cancer derived from the third National Sampling Retrospective Survey for Causes of Death during 2004 and 2005, the incidence data for hepatitis B, and the prevalence and disability weights of liver cancer obtained from the Shandong Cancer Prevalence Sampling Survey in 2007, we calculated the years of life lost (YLLs), years lived with disability (YLDs) and DALYs for the three diseases, following the procedures developed for the global burden of disease (GBD) study to ensure comparability. Results: The total burdens of hepatitis B, liver cirrhosis and liver cancer in Shandong province in 2005 were 211,616 DALYs (39,377 YLLs and 172,239 YLDs), 16,783 DALYs (13,497 YLLs and 3,286 YLDs) and 247,795 DALYs (240,236 YLLs and 7,559 YLDs) respectively, with the burden for men 2.19, 2.36 and 3.16 times that for women, respectively. The burden of hepatitis B was mainly due to disability (81.39%), whereas most of the burden of liver cirrhosis and liver cancer was due to premature death (80.42% and 96.95%). The per-patient burdens of hepatitis B, liver cirrhosis and liver cancer were 4.8, 13.73 and 11.11 DALYs respectively. Conclusion: Hepatitis B, liver cirrhosis and liver cancer caused a considerable burden to the people living in Shandong province, indicating that control of hepatitis B virus infection would bring huge potential benefits.
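A minimal sketch of the DALY bookkeeping underlying these figures (DALY = YLL + YLD, with YLL = deaths x years of life expectancy lost and YLD = cases x disability weight x duration) follows; the numbers are illustrative, not the survey's, and the undiscounted formulas omit the age-weighting and discounting options of the full GBD method.

```python
def yll(deaths, life_expectancy_lost):
    """Years of life lost to premature mortality."""
    return deaths * life_expectancy_lost

def yld(cases, disability_weight, duration_years):
    """Years lived with disability (undiscounted, no age weighting)."""
    return cases * disability_weight * duration_years

# Illustrative only: 1,000 deaths each losing 25 years of life expectancy,
# plus 40,000 prevalent cases at disability weight 0.2 lasting 5 years.
total_dalys = yll(1_000, 25.0) + yld(40_000, 0.2, 5.0)  # 25,000 + 40,000 = 65,000
```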
Abstract:
Objective: To determine the major health-related risk factors and provide evidence for policy-making, using a health burden analysis of selected factors in the general population of Shandong province. Methods: Based on data derived from the Third Cause of Death Sampling Survey in Shandong, years of life lost (YLLs), years lived with disability (YLDs) and disability-adjusted life years (DALYs) were calculated according to the GBD methodology. Deaths and DALYs attributed to the selected risk factors were then estimated together with the population attributable fraction (PAF) data from the GBD 2001 study. The indirect method was employed to estimate the YLDs. Results: 51.09% of the total deaths and 31.83% of the total DALYs in the Shandong population resulted from the 19 selected risk factors. High blood pressure, smoking, low fruit and vegetable intake, alcohol consumption, indoor smoke from solid fuels, high cholesterol, urban air pollution, physical inactivity, overweight and obesity, and unsafe injections in health care settings were identified as the top 10 risk factors for mortality, together causing 50.21% of the total deaths. Alcohol use, smoking, high blood pressure, low fruit and vegetable intake, indoor smoke from solid fuels, overweight and obesity, high cholesterol, physical inactivity, urban air pollution and iron-deficiency anemia were the top 10 risk factors for disease burden and were responsible for 29.04% of the total DALYs. Conclusion: Alcohol use, smoking and high blood pressure were the major risk factors influencing the health of residents in Shandong. The mortality and burden of disease could be reduced significantly if these major factors were effectively controlled.
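A sketch of the attribution step (burden attributable to a factor = PAF x total burden) follows; the single-exposure PAF formula shown and the prevalence and relative-risk values are illustrative assumptions, not the study's inputs.

```python
def paf(prevalence, relative_risk):
    """Population attributable fraction for one dichotomous exposure:
    PAF = p*(RR - 1) / (p*(RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def attributable_burden(total_burden, prevalence, relative_risk):
    """Deaths or DALYs attributable to the risk factor."""
    return paf(prevalence, relative_risk) * total_burden

# Illustrative only: 30% exposure prevalence and RR = 2.5, applied to
# 100,000 total DALYs -> PAF = 0.45/1.45 ~= 0.31 -> ~31,000 DALYs.
dalys_attributable = attributable_burden(100_000, 0.30, 2.5)
```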
Abstract:
Organizations today engage in various forms of alliances to manage their existing business processes or to diversify into new processes to sustain their competitive positions. Many of today's alliances use IT resources as their backbone. The results of these alliances are collaborative organizational structures with little or no ownership stakes between the parties. The emergence of Web 2.0 tools is having a profound effect on the nature and form of these alliance structures. These alliances heavily depend on and make radical use of IT resources in a collaborative environment. This situation requires a deeper understanding of the governance of these IT resources to ensure the sustainability of the collaborative organizational structures. This study first suggests the types of IT governance structures required for collaborative organizational structures. Semi-structured interviews with senior executives who operate in such alliances reveal that co-created IT governance structures are necessary. Such structures include co-created IT steering committees, co-created operational committees, and inter-organizational performance management and communication systems. The findings paved the way for the development of a model for understanding approaches to governing IT and evaluating the effectiveness of such governance mechanisms in today's IT-dependent alliances. This study presents a sustainable IT-related capabilities approach to assessing the effectiveness of the suggested IT governance structures for collaborative alliances. The findings indicate a favourable association between organizations' IT governance efforts and their ability to sustain the capabilities needed to leverage their IT resources. These IT-related capabilities also relate to measures of business value at the process and firm levels. This makes it possible to infer that collaborative organizations' IT governance efforts contribute to business value.
Abstract:
Given the substantial investment in information technology (IT), and the significant impact IT has on organizational success, organizations consume considerable resources to manage the acquisition and use of their IT resources. While various arguments have been proposed as to which IT governance arrangements may work best, our understanding of the effectiveness of such initiatives is limited. We examine the relationship between the effectiveness of IT steering committee driven IT governance initiatives and a firm's IT management and IT infrastructure related capabilities. We further propose that a firm's IT-related capabilities generated through IT governance initiatives should improve its business processes and firm-level performance. We test these relationships empirically in a field survey. Results suggest that the effectiveness of firms' IT steering committee driven IT governance initiatives positively relates to the level of their IT-related capabilities. We also found positive relationships between IT-related capabilities and internal process-level performance. Our results also support that improvement in internal process-level performance positively relates to improvement in customer service and firm-level performance.