927 results for overlap probability
Abstract:
The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects which are a consequence of the combined tumour and beam motion. Prior to this work, reported studies on EDW-based interplay effects have been restricted to the use of experimental methods for assessing single-field non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments. Single and multiple field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially this work experimentally studies interplay effects for single-field non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm²), amplitudes (10 to 40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumour motion affect the interplay; this becomes more prominent where the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions. The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single field measurements, a motion amplitude and period have been identified which show the poorest agreement between the target motion and the dynamic delivery, and these are used as the 'worst case motion parameters'. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions using the worst case parameters (i.e. 40 mm amplitude and 6 s period). The analysis of the film doses using gamma analysis at 3%/3 mm indicates that the interplay effects do not average out for this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code is validated and automated. DYNJAWS has recently been introduced to model dynamic wedges, and is therefore commissioned here for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW; the dynamic mode is shown to be more accurate. The generation of the DYNJAWS-specific input file has been automated.
This file specifies the probability of selection of a subfield and the respective jaw coordinates. This automation simplifies the generation of the BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple-field EDW treatments using MC methods. The 4D CT data of an IMRT phantom with the dummy tumour is used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion are in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the 'zero-scan method' is developed for reading the gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the CT scans. In this investigation the interplay effects have been studied for single and multiple field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo work, the DYNJAWS component module of the BEAMnrc code has been validated and automated, and then used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading gel images with x-ray CT without loss of precision or accuracy.
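The gamma analysis used throughout this work combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. As a minimal sketch, the following computes a global 1-D gamma index under the 3%/3 mm criterion; the 1-D simplification and the function name are illustrative only (the film analyses described above are 2-D):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing_mm, dta_mm=3.0, dd_pct=3.0):
    """Global 1-D gamma index (illustrative sketch of the 3%/3 mm test)."""
    dd = dd_pct / 100.0 * dose_ref.max()       # global dose-difference criterion
    x = np.arange(len(dose_ref)) * spacing_mm  # detector positions in mm
    gamma = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dist_term = ((x - x[i]) / dta_mm) ** 2
        dose_term = ((dose_eval - dose_ref[i]) / dd) ** 2
        gamma[i] = np.sqrt((dist_term + dose_term).min())
    return gamma

# A point passes when gamma <= 1; the pass rate is the fraction passing:
# pass_rate = (gamma_index_1d(ref, meas, spacing_mm=1.0) <= 1).mean()
```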
Abstract:
Spectrum sensing is considered to be one of the most important tasks in cognitive radio. A common assumption among current spectrum sensing detectors is the full presence or complete absence of the primary user within the sensing period. In reality, there are many situations where the primary user signal occupies only a portion of the observed signal, and the assumption about the primary user duty cycle is not necessarily fulfilled. In this paper we show that the true detection performance can degrade from the assumed achievable values when the observed primary user exhibits a certain duty cycle. Therefore, a two-stage detection method incorporating the primary user duty cycle is proposed to enhance the detection performance. The proposed detector can improve the probability of detection under a low duty cycle at the expense of a small decrease in performance at a high duty cycle.
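The paper's detector statistics are not reproduced in this listing, so the sketch below only illustrates the two-stage idea with a plain energy detector: a full-window test, followed by a sub-block test that can catch a primary user occupying only part of the sensing window. The function names, thresholds and block-splitting strategy are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def energy(x):
    """Average energy of a sampled signal segment."""
    return np.mean(np.abs(x) ** 2)

def two_stage_detect(x, thr_full, thr_block, n_blocks=8):
    """Stage 1: test the whole sensing window. Stage 2: if that fails,
    test sub-blocks, so a low-duty-cycle signal concentrated in a few
    blocks is not diluted by the noise-only samples around it."""
    if energy(x) > thr_full:
        return True
    return max(energy(b) for b in np.array_split(x, n_blocks)) > thr_block
```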
Abstract:
The Wright-Fisher model is an Itô stochastic differential equation that was originally introduced to model genetic drift within finite populations and has recently been used as an approximation to ion channel dynamics within cardiac and neuronal cells. While analytic solutions to this equation remain within the interval [0,1], current numerical methods are unable to preserve such boundaries in the approximation. We present a new numerical method that guarantees approximations to a form of the Wright-Fisher model, which includes mutation, remain within [0,1] for all time with probability one. Strong convergence of the method is proved, and numerical experiments suggest that this new scheme converges with strong order 1/2. Extending this method to a multidimensional case, numerical tests suggest that the algorithm still converges strongly with order 1/2. Finally, numerical solutions obtained using this new method are compared to those obtained using the Euler-Maruyama method, where the Wiener increment is resampled to ensure solutions remain within [0,1].
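The new boundary-preserving scheme itself is not detailed in the abstract, but the comparison method it mentions is straightforward to sketch. Below is an Euler-Maruyama integrator for a common mutation form of the Wright-Fisher SDE, dX = (θ₁(1 − X) − θ₂X)dt + √(X(1 − X))dW, in which the Wiener increment is redrawn whenever a step would leave [0,1]; the drift parameterisation is an assumption, not necessarily the form studied in the paper:

```python
import numpy as np

def wf_em_resampled(x0, theta1, theta2, T, n_steps, seed=None):
    """Euler-Maruyama for dX = (theta1*(1-X) - theta2*X) dt + sqrt(X(1-X)) dW,
    redrawing the Wiener increment if the step would exit [0, 1]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = (theta1 * (1.0 - x[k]) - theta2 * x[k]) * dt
        diff = np.sqrt(max(x[k] * (1.0 - x[k]), 0.0))
        while True:
            dw = rng.normal(0.0, np.sqrt(dt))
            x_new = x[k] + drift + diff * dw
            if 0.0 <= x_new <= 1.0:
                break              # accept the step; otherwise resample dW
        x[k + 1] = x_new
    return x
```

Note that for mutation rates θ₁, θ₂ ≥ 0 the drift points into the interval at both boundaries, so the resampling loop terminates for reasonable step sizes.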
Abstract:
With the increasing rate of shipping traffic, the risk of collisions in busy and congested port waters is likely to rise. However, due to low collision frequencies in port waters, it is difficult to analyze such risk in a sound statistical manner. A convenient approach to investigating navigational collision risk is the application of traffic conflict techniques, which have the potential to overcome the difficulty of obtaining statistical soundness. This study aims at examining port water conflicts in order to understand the characteristics of collision risk with regard to the vessels involved, conflict locations, and traffic and kinematic conditions. A hierarchical binomial logit model, which considers the potential correlations between observation units, i.e., vessels, involved in the same conflicts, is employed to evaluate the association of explanatory variables with conflict severity levels. Results show a higher likelihood of serious conflicts for vessels of small gross tonnage or small overall length. The probability of serious conflict also increases at locations where vessels have more varied headings, such as traffic intersections and anchorages, and becomes more critical at night. Findings from this research should assist both navigators operating in port waters and port authorities overseeing navigational management.
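A generic form of such a hierarchical binomial logit model can be written as follows; the covariate vector is schematic, and the conflict-level random effect u_j is what captures the correlation between vessels involved in the same conflict:

```latex
% Vessel i in conflict j; y_ij = 1 for a serious-conflict involvement.
\begin{align*}
  y_{ij} \mid p_{ij} &\sim \mathrm{Bernoulli}(p_{ij}),\\
  \operatorname{logit}(p_{ij}) &= \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
      && \text{(vessel- and conflict-level covariates)}\\
  u_j &\sim \mathcal{N}(0,\, \sigma_u^2)
      && \text{(shared effect inducing within-conflict correlation)}
\end{align*}
```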
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Continuing growth of shipping traffic in number and size is likely to result in an increased number of traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgments. However, these techniques are hampered by several shortcomings, such as the randomness and rarity of collision occurrence yielding too few collision counts for a sound statistical analysis, insufficiency in explaining collision causation, and a reactive approach to safety. A promising alternative approach that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT in overcoming the limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.
Abstract:
This book examines public worrying over 'ethnic crime' and what it tells us about Australia today. How, for instance, can the blame for a series of brutal group sexual assaults in Sydney be so widely attributed to whole ethnic communities? How is it that the arrival of a foundering boatload of asylum-seekers mostly seeking refuge from despotic regimes in 'the Middle East' can be manipulated to characterise complete cohorts of applicants for refuge 'and their immigrant compatriots' as dangerous, dishonest, criminally inclined and inhuman? How did the airborne terror attacks on the USA on 11 September 2001 exacerbate existing tendencies in Australia to stereotype Arabs and Muslims as backward, inassimilable, without respect for Western laws and values, and complicit with barbarism and terrorism? Bin Laden in the Suburbs argues that we are witnessing the emergence of the 'Arab Other' as the pre-eminent 'folk devil' of our time. This Arab Other functions in the national imaginary to prop up the project of national belonging. It has little to do with the lived experiences of Arab, Middle Eastern or Muslim Australians, and everything to do with a host of social anxieties which overlap in a series of moral panics. Bin Laden in the Suburbs analyses a decisive moment in the history of multiculturalism in Australia. 'Unlike most migrants, the Arab migrant is a subversive will ... They invade our shores, take over our neighbourhood and rape our women. They are all little bin Ladens and they are everywhere: Explicit bin Ladens and closet bin Ladens; Conscious bin Ladens and unconscious bin Ladens; bin Ladens on the beach and bin Ladens in the suburbs, as this book is aptly titled. Within this register ... even a single Arab is a threat. Contain the Arab or exterminate the Arab? A 'tolerable' presence in the suburbs, or caged in a concentration camp? ... The politics of the Western post-colonial state is constantly and dangerously oscillating between these tendencies today. It is this dangerous oscillation that is so lucidly exposed in this book'.
Abstract:
Fractional order dynamics in physics, particularly when applied to diffusion, leads to an extension of the concept of Brownian motion through a generalization of the Gaussian probability function to what is termed anomalous diffusion. As MRI is applied with increasing temporal and spatial resolution, the spin dynamics are being examined more closely; such examinations extend our knowledge of biological materials through a detailed analysis of relaxation time distribution and water diffusion heterogeneity. Here the dynamic models become more complex as they attempt to correlate new data with a multiplicity of tissue compartments where processes are often anisotropic. Anomalous diffusion in the human brain has been investigated using fractional order calculus. Recently, a new diffusion model was proposed by solving the Bloch-Torrey equation using fractional order calculus with respect to time and space (see R.L. Magin et al., J. Magnetic Resonance, 190 (2008) 255-270). However, effective numerical methods and supporting error analyses for the fractional Bloch-Torrey equation are still limited. In this paper, the space and time fractional Bloch-Torrey equation (ST-FBTE) is considered. The time and space derivatives in the ST-FBTE are replaced by the Caputo and the sequential Riesz fractional derivatives, respectively. Firstly, we derive an analytical solution for the ST-FBTE with initial and boundary conditions on a finite domain. Secondly, we propose an implicit numerical method (INM) for the ST-FBTE, and the stability and convergence of the INM are investigated. We prove that the implicit numerical method for the ST-FBTE is unconditionally stable and convergent. Finally, we present some numerical results that support our theoretical analysis.
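For orientation, a schematic form of the ST-FBTE along the lines of the cited Magin et al. formulation is shown below, with a Caputo time derivative of order 0 < α ≤ 1 and Riesz space-fractional derivatives of order 1 < β ≤ 2; the coefficient K_β and the relaxation term λ are schematic placeholders rather than the paper's exact notation:

```latex
{}^{C}_{0}D_{t}^{\alpha}\, M(\mathbf{r},t)
  = K_{\beta}\!\left(
      \frac{\partial^{\beta}}{\partial |x|^{\beta}}
    + \frac{\partial^{\beta}}{\partial |y|^{\beta}}
    + \frac{\partial^{\beta}}{\partial |z|^{\beta}}
    \right)\! M(\mathbf{r},t)
  - \lambda\, M(\mathbf{r},t)
```

where M is the transverse magnetization, K_β a generalized diffusion coefficient, and λ collects the relaxation and precession terms.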
Abstract:
This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. Firstly, the paper presents a sequential curve fitting technique that reduces the effect of noise on the calculation process of the MSEDI more effectively than the two commonly used curve fitting techniques, namely polynomial and Fourier series. Secondly, a probability based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced with an increased number of modes and samples used with the GDLI.
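For reference, a widely used beam form of the modal strain energy damage index (the Stubbs-type index on which MSEDI methods build) is sketched below for element j spanning [a_j, b_j] and mode i, with φ″ the mode-shape curvature and subscript d the damaged state; the probability-based GDLI proposed in the paper is an extension of this kind of index and is not reproduced here:

```latex
\beta_{ij} =
  \frac{\Bigl(\int_{a_j}^{b_j} [\phi''_{i,d}(x)]^{2}\,dx
      + \int_{0}^{L} [\phi''_{i,d}(x)]^{2}\,dx\Bigr)
      \int_{0}^{L} [\phi''_{i}(x)]^{2}\,dx}
       {\Bigl(\int_{a_j}^{b_j} [\phi''_{i}(x)]^{2}\,dx
      + \int_{0}^{L} [\phi''_{i}(x)]^{2}\,dx\Bigr)
      \int_{0}^{L} [\phi''_{i,d}(x)]^{2}\,dx}
```

with β_{ij} substantially greater than 1 indicating likely damage in element j.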
Abstract:
With well over 700 species, the tribe Dacini is one of the most species-rich clades within the dipteran family Tephritidae, the true fruit flies. Nearly all Dacini belong to one of two very large genera, Dacus Fabricius and Bactrocera Macquart. The distributions of the two genera overlap in or around the Indian subcontinent, but the greatest diversity of Dacus is in Africa and the greatest diversity of Bactrocera is in south-east Asia and the Pacific. The monophyly of these two genera has not been rigorously established, with previous phylogenies only including a small number of species and always heavily biased to one genus over the other. Moreover, the subgeneric taxonomy within both genera is complex and the monophyly of many subgenera has not been explicitly tested. Previous hypotheses about the biogeography of the Dacini based on morphological reviews and current distributions of taxa have invoked an out-of-India hypothesis; however, this has not been tested in a phylogenetic framework. We attempted to resolve these issues with a dated molecular phylogeny of 125 Dacini species generated using 16S, COI, COII and white eye genes. The phylogeny shows that Bactrocera is not monophyletic, but rather consists of two major clades: Bactrocera s.s. and the ‘Zeugodacus group of subgenera’ (a recognised, but informal, taxonomic grouping of 15 Bactrocera subgenera). This ‘Zeugodacus’ clade is the sister group to Dacus, not Bactrocera, and, based on current distributions, split from Dacus before that genus moved into Africa. We recommend that taxonomic consideration be given to raising Zeugodacus to genus level. Consistent with predictions following from the out-of-India hypothesis, the first common ancestor of the Dacini arose in the mid-Cretaceous, approximately 80 mya. Major divergence events occurred during the Indian rafting period, and diversification of Bactrocera apparently did not begin until after India docked with Eurasia (50–35 mya). In contrast, diversification in Dacus, at approximately 65 mya, apparently began much earlier than predicted by the out-of-India hypothesis, suggesting that, if the Dacini arose on the Indian plate, then ancestral Dacus may have left the plate in the mid to late Cretaceous via the well documented India–Madagascar–Africa migration route. We conclude that the phylogeny does not disprove the predictions of an out-of-India hypothesis for the Dacini, although modification of the original hypothesis is required.
Abstract:
Purpose: This study provides insight into the histories and current statuses of queer community archives in California and explores what the archives profession can learn from the queer community archives and archivists. Through the construction of histories of three community archives (GLBT Historical Society; Lavender Library, Archives, and Cultural Exchange of Sacramento, Inc.; and ONE National Gay & Lesbian Archives), the study discovered why these independent, community-based archives were created, the issues that influenced their evolution, and the similarities and differences among them. Additionally, it compared the community archives to institutional archives which collect queer materials, to explore the similarities and differences among the archives and determine possible implications for the archives profession. Significance: The study contributes to the literature in several significant ways: it is the first in-depth comparative history of the queer community archives; it adds to the cross-disciplinary research in archives and history; it contributes to the current debates on the nature of the archives and the role of the professional archivist; and it has implications for changing archival practice. Methodology: This study used social constructionism as its epistemological positioning and new social history theory as its theoretical framework. Information was gathered through seven oral history interviews with community archivists and volunteers and from materials in the archives’ collections. This evidence was used to construct the histories of the archives and determine their current statuses. The institutional archives used in the comparisons are the University of California, Berkeley’s Bancroft Library; the University of California, Santa Cruz’s Special Collections and University Archives; and the San Francisco Public Library’s James C. Hormel Gay and Lesbian Center. The collection policies, finding aids, and archival collections related to the queer communities at the institutional and community archives were compared to determine commonalities and differences among the archives. Findings: The findings revealed striking similarities in the histories of the community archives and important implications for the archives’ survival and their relevancy to the archives profession. Each archives was started by an individual or small group collecting materials to preserve history that would otherwise have been lost, as institutional archives were not collecting queer materials. These private collections grew and became the basis for the community archives. The community archives differ in their staffing models, circulation policies, and descriptive practices. The community archives have grown to incorporate more public programming functions than most institutional archives. While in the past the community archives had little connection to institutional archives, today they have varying degrees of partnerships. However, the historical lack of collecting of queer materials by institutional archives makes some members of the communities reluctant to donate materials to institutional archives or collaborate with them. All three queer community archives are currently managed by professionally trained and educated archivists and face financial issues impacting their continued survival.
The community and institutional archives differ in their collection policies, in the language used in their finding aids, and in the levels of relationship between the archives. However, they share a similar sensitivity in the use of language in describing the queer communities and overlap in the types of materials collected. Implications: This study supports previous research on community archives showing that communities take the preservation of history into their own hands when ignored by mainstream archives (Flinn, 2007; Flinn & Stevens, 2009; Nestle, 1990). Based on the study’s findings, institutional archivists could learn from their community archivist counterparts better ways to become involved in and relevant to the communities whose records they possess. This study also expands the understanding of the history of the queer communities to include in-depth research into the archives which preserve and make available material for constructing history. Furthermore, this study supports reflective practice for archivists, especially in terms of the descriptions used in finding aids. It also supports changes in graduate education for archives students to enable archivists in the United States to be more fully cognizant of community archives and able to engage in collaborative, international projects. Through this more activist role of the archivists, partnerships between the community and institutional archives would be built to establish more collaborative, respectful relationships with the communities in this post-custodial age of the archives (Stevens, Flinn, & Shepherd, 2010). Including community archives in discussions of archival practice and theory is one way of ensuring archives represent and serve a diversity of voices.
Abstract:
The serviceability and safety of bridges are crucial to people’s daily lives and to the national economy. Every effort should be taken to make sure that bridges function safely and properly, as any damage or fault during the service life can lead to transport paralysis, catastrophic loss of property or even casualties. Nonetheless, aggressive environmental conditions, ever-increasing and changing traffic loads and aging can all contribute to bridge deterioration. With often constrained budgets, it is important to identify bridges and bridge elements that should be given higher priority for maintenance, rehabilitation or replacement, and to select the optimal strategy. Bridge health prediction is an essential underpinning science for bridge maintenance optimization, since the effectiveness of optimal maintenance decisions is largely dependent on the forecasting accuracy of bridge health performance. The current approaches for bridge health prediction can be categorised into two groups: condition ratings based and structural reliability based. A comprehensive literature review has revealed the following limitations of the current modelling approaches: (1) it is not evident in the literature to date that any integrated approaches exist for modelling both serviceability and safety aspects so that both performance criteria can be evaluated coherently; (2) complex system modelling approaches have not been successfully applied to bridge deterioration modelling, though a bridge is a complex system composed of many inter-related bridge elements; (3) multiple bridge deterioration factors, such as deterioration dependencies among different bridge elements, observed information, maintenance actions and environmental effects, have not been considered jointly; (4) the existing approaches lack the Bayesian updating ability to incorporate a variety of event information; (5) the assumption of a series and/or parallel relationship for bridge level reliability is always held in existing structural reliability estimation of bridge systems. To address the deficiencies listed above, this research proposes three novel models based on the Dynamic Object Oriented Bayesian Networks (DOOBNs) approach. Model I aims to address bridge deterioration in serviceability using condition ratings as the health index. The bridge deterioration is represented in a hierarchical relationship, in accordance with the physical structure, so that the contribution of each bridge element to bridge deterioration can be tracked. A discrete-time Markov process is employed to model the deterioration of bridge elements over time. In Model II, bridge deterioration in terms of safety is addressed. The structural reliability of bridge systems is estimated from the bridge elements up to the entire bridge. By means of conditional probability tables (CPTs), not only series-parallel relationships but also complex probabilistic relationships in bridge systems can be effectively modelled. The structural reliability of each bridge element is evaluated from its limit state functions, considering the probability distributions of resistance and applied load. Both Models I and II are designed in three steps: modelling consideration, DOOBN development and parameter estimation. Model III integrates Models I and II to address bridge health performance in both serviceability and safety aspects jointly. The modelling of bridge ratings is modified so that every basic modelling unit denotes one physical bridge element.
According to the specific materials used, the integration of condition ratings and structural reliability is implemented through critical failure modes. Three case studies, one for each model, have been conducted for validation. Carefully selected data and knowledge from bridge experts, the National Bridge Inventory (NBI) and the existing literature were utilised for model validation. In addition, event information was generated using simulation to demonstrate the Bayesian updating ability of the proposed models. The prediction results for condition ratings and structural reliability were presented and interpreted for basic bridge elements and the whole bridge system. The results obtained from Model II were compared with those obtained from traditional structural reliability methods. Overall, the prediction results demonstrate the feasibility of the proposed modelling approach for bridge health prediction and underpin the assertion that the three models can be used separately or integrated, and are more effective than the current bridge deterioration modelling approaches. The primary contribution of this work is to enhance knowledge in the field of bridge health prediction, where more comprehensive health performance in both serviceability and safety aspects is addressed jointly. The proposed models, characterised by probabilistic representation of bridge deterioration in hierarchical ways, demonstrate the effectiveness and promise of the DOOBN approach for bridge health management. Additionally, the proposed models have significant potential for bridge maintenance optimization. Working together with advanced monitoring and inspection techniques, and a comprehensive bridge inventory, the proposed models can be used by bridge practitioners to achieve increased serviceability and safety as well as maintenance cost effectiveness.
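The discrete-time Markov deterioration used in Model I can be illustrated in a few lines of code. The transition matrix below is invented for illustration (it is not taken from the thesis or the NBI data); each row gives the one-period probabilities of staying in, or degrading from, a condition rating:

```python
import numpy as np

# States: condition ratings 1 (best) to 4 (worst); rows sum to 1.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # worst rating absorbing without maintenance
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # element starts in rating 1
for year in range(30):
    state = state @ P                     # propagate one inspection period
print("Rating distribution after 30 years:", state.round(3))
```

In the DOOBN models this element-level process is embedded in a network so that dependencies among elements, observations and maintenance actions can update the distribution through Bayesian inference.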
Abstract:
1. Local extinctions in habitat patches and asymmetric dispersal between patches are key processes structuring animal populations in heterogeneous environments. Effective landscape conservation requires an understanding of how habitat loss and fragmentation influence demographic processes within populations and movement between populations. 2. We used patch occupancy surveys and molecular data for a rainforest bird, the logrunner (Orthonyx temminckii), to determine (i) the effects of landscape change and patch structure on local extinction; (ii) the asymmetry of emigration and immigration rates; (iii) the relative influence of local and between-population landscapes on asymmetric emigration and immigration; and (iv) the relative contributions of habitat loss and habitat fragmentation to asymmetric emigration and immigration. 3. Whether or not a patch was occupied by logrunners was primarily determined by the isolation of that patch. After controlling for patch isolation, patch occupancy declined in landscapes experiencing high levels of rainforest loss over the last 100 years. Habitat loss and fragmentation over the last century was more important than the current pattern of patch isolation alone, which suggested that immigration from neighbouring patches was unable to prevent local extinction in highly modified landscapes. 4. We discovered that dispersal between logrunner populations is highly asymmetric. Emigration rates were 39% lower when local landscapes were fragmented, but emigration was not limited by the structure of the between-population landscapes. In contrast, immigration was 37% greater when local landscapes were fragmented and was lower when the between-population landscapes were fragmented. Rainforest fragmentation influenced asymmetric dispersal to a greater extent than did rainforest loss, and a 60% reduction in mean patch area was capable of switching a population from being a net exporter to a net importer of dispersing logrunners. 5. The synergistic effects of landscape change on species occurrence and asymmetric dispersal have important implications for conservation. Conservation measures that maintain large patch sizes in the landscape may promote asymmetric dispersal from intact to fragmented landscapes and allow rainforest bird populations to persist in fragmented and degraded landscapes. These sink populations could form the kernel of source populations given sufficient habitat restoration. However, the success of this rescue effect will depend on the quality of the between-population landscapes.
Abstract:
Bactrocera dorsalis (Hendel) and B. papayae Drew & Hancock represent a closely related sibling species pair for which the biological species limits are unclear; i.e., it is uncertain if they are truly two biological species, or one biological species which has been incorrectly taxonomically split. The geographic ranges of the two taxa are thought to abut or overlap on or around the Isthmus of Kra, a recognised biogeographic barrier located on the narrowest portion of the Thai Peninsula. We collected fresh material of B. dorsalis sensu lato (i.e., B. dorsalis sensu stricto + B. papayae) in a north-south transect down the Thai Peninsula, from areas regarded as being exclusively B. dorsalis s.s., across the Kra Isthmus, and into regions regarded as exclusively B. papayae. We carried out microsatellite analyses and took measurements of male genitalia and wing shape; both of the latter morphological tests have been used previously to separate these two taxa. No significant population structuring was found in the microsatellite analysis, and the results were consistent with an interpretation of one predominantly panmictic population. Both morphological datasets showed consistent, clinal variation along the transect, with no evidence for disjunction. No evidence in any test supported historical vicariance driven by the Isthmus of Kra, and none of the three datasets supported the current taxonomy of two species. Rather, within and across the area of range overlap or abutment between the two species, only continuous morphological and genetic variation was recorded. Recognition that morphological traits previously used to separate these taxa are continuous, and that there is no genetic evidence for population segregation in the region of suspected species overlap, is consistent with a growing body of literature that reports no evidence of biological differentiation between these taxa.
Abstract:
Quality oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers’ expectations by supplying reliable, good quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in industrial and business sectors. Recently SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data capturing algorithm using Bayesian decision making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. After ensuring clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, the adoption of a Bayesian approach is proposed as a new framework for estimation of the change point following a control chart’s signal. This estimate aims to facilitate root cause analysis in the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time frame prior to the signal. This approach enables us to obtain highly informative estimates for change point parameters, since probability-distribution-based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step change, linear trend and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared with a priori known causes, detected by control charts in monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then pursued in healthcare surveillance for processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, at first, the Bayesian estimator is extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly support the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also extend to the industrial and business domains where quality monitoring was initially developed.
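As a minimal illustration of the change point idea (not the thesis’ hierarchical MCMC estimators), the posterior of a single step-change time in a Poisson count series can be computed on a grid when the before/after rates are treated as known and the prior on the change time is uniform:

```python
import numpy as np
from scipy.stats import poisson

def step_change_posterior(counts, lam0, lam1):
    """Posterior P(tau | counts) for a step change after index tau,
    with known rates lam0 -> lam1 and a uniform prior on tau."""
    n = len(counts)
    log_post = np.array([
        poisson.logpmf(counts[:tau + 1], lam0).sum()
        + poisson.logpmf(counts[tau + 1:], lam1).sum()
        for tau in range(n)
    ])
    log_post -= log_post.max()          # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Simulate a shift from rate 5 to 9 at t = 60 and recover it:
rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(5, 60), rng.poisson(9, 40)])
print("MAP change time:", step_change_posterior(y, 5.0, 9.0).argmax())
```

The thesis’ estimators generalise this idea to unknown change magnitudes, linear trends, multiple changes and risk-adjusted settings via hierarchical models and Markov chain Monte Carlo.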
Abstract:
The growing demand for air-conditioning is one of the largest contributors to Australia’s overall electricity consumption. This has started to create peak load supply problems for some electricity utilities, particularly in Queensland. This research aimed to develop a consumer demand-side response model to assist electricity consumers in mitigating peak demand on the electrical network. The developed model allows consumers to manage and control air conditioning in every period; this is called intelligent control. The research investigates the optimal response of end-users to electricity prices for several cases in the near future: the no-spike, spike and probabilistic spike price cases. The results indicate the potential of the scheme to achieve energy savings, reduce consumers’ electricity bills (costs) and target the best economic performance for electricity generation, distribution and transmission.
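One plausible reading of the probabilistic spike case is a per-period decision rule that curtails the air conditioner whenever the expected price exceeds the value the consumer places on comfort; the numbers and the comfort-value framing below are assumptions for illustration, not the thesis’ actual optimisation model:

```python
def expected_price(p_normal, p_spike, spike_prob):
    """Expected electricity price when a spike occurs with given probability."""
    return (1.0 - spike_prob) * p_normal + spike_prob * p_spike

def run_air_conditioner(p_normal, p_spike, spike_prob, comfort_value):
    """Run the unit only if the expected price ($/kWh) is at most the
    value the consumer places on comfort ($/kWh)."""
    return expected_price(p_normal, p_spike, spike_prob) <= comfort_value

# 30 c/kWh normally, $10/kWh spike with 5% probability, comfort worth 60 c/kWh:
print(run_air_conditioner(0.30, 10.0, 0.05, 0.60))   # False -> curtail
```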