893 results for Statistical process control


Relevance:

30.00%

Publisher:

Abstract:

Network-induced delay in networked control systems (NCS) is inherently non-uniformly distributed and exhibits a multifractal nature. However, such network characteristics have not been well considered in NCS analysis and synthesis. Making use of the statistical distribution of the NCS network-induced delay, a delay-distribution-based stochastic model is adopted to link Quality-of-Control and network Quality-of-Service for NCS with uncertainties. From this model, together with a tighter bounding technique for cross terms, H∞ NCS analysis is carried out with significantly improved stability results. Furthermore, a memoryless H∞ controller is designed to stabilise the NCS and to achieve the prescribed disturbance attenuation level. Numerical examples are given to demonstrate the effectiveness of the proposed method.
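The delay statistics such a delay-distribution-based model conditions on can be illustrated with a short sketch. The Pareto-style samples and bin edges below are hypothetical stand-ins for measured network-induced delays, not part of the paper:

```python
import random

def delay_distribution(delays, bin_edges):
    """Estimate the probability mass of network-induced delay in each
    interval [bin_edges[i], bin_edges[i+1]) -- the statistics that a
    delay-distribution-based stability analysis conditions on."""
    counts = [0] * (len(bin_edges) - 1)
    for d in delays:
        for i in range(len(counts)):
            if bin_edges[i] <= d < bin_edges[i + 1]:
                counts[i] += 1
                break
    n = len(delays)
    return [c / n for c in counts]

random.seed(0)
# Heavy-tailed (Pareto-like) samples mimic the bursty, non-uniform
# delays seen on real networks; units are hypothetical milliseconds.
delays = [min(random.paretovariate(2.5) * 5.0, 80.0) for _ in range(10000)]
probs = delay_distribution(delays, [0.0, 10.0, 20.0, 40.0, 80.1])
```

Most of the probability mass sits in the shortest-delay bin, which is exactly the kind of distribution information a uniform worst-case delay bound would throw away.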

Relevance:

30.00%

Publisher:

Abstract:

Understanding the complexities that are involved in the genetics of multifactorial diseases is still a monumental task. In addition to environmental factors that can influence the risk of disease, there are also a number of other complicating factors. Genetic variants associated with age of disease onset may be different from those variants associated with overall risk of disease, and variants may be located in positions that are not consistent with the traditional protein-coding genetic paradigm. Latent variable models are well suited to the analysis of genetic data. A latent variable is one that we do not directly observe, but which is believed to exist or is included for computational or analytic convenience in a model. This thesis presents a mixture of methodological developments utilising latent variables, and results from case studies in genetic epidemiology and comparative genomics. Epidemiological studies have identified a number of environmental risk factors for appendicitis, but the disease aetiology of this oft-thought useless vestige remains largely a mystery. The effects of smoking on other gastrointestinal disorders are well documented, and in light of this, the thesis investigates the association between smoking and appendicitis through the use of latent variables. By utilising data from a large Australian twin study questionnaire as both cohort and case-control, evidence is found for an association between tobacco smoking and appendicitis. Twin and family studies have also found evidence for the role of heredity in the risk of appendicitis. Results from previous studies are extended here to estimate the heritability of age-at-onset and account for the effect of smoking. This thesis presents a novel approach for performing a genome-wide variance components linkage analysis on transformed residuals from a Cox regression.
This method finds evidence for a different subset of genes responsible for variation in age at onset than those associated with overall risk of appendicitis. Motivated by increasing evidence of functional activity in regions of the genome once thought of as evolutionary graveyards, this thesis develops a generalisation of the Bayesian multiple changepoint model on aligned DNA sequences to more than two species. This sensitive technique is applied to evaluating the distributions of evolutionary rates, with the finding that they are much more complex than previously apparent. We show strong evidence for at least 9 well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least 7 classes in an alignment of four mammals, including human. A pattern of enrichment and depletion of genic regions in the profiled segments suggests they are functionally significant, and most likely consist of various functional classes. Furthermore, a method of incorporating alignment characteristics representative of function, such as GC content and type of mutation, into the segmentation model is developed within this thesis. Evidence of fine-structured segmental variation is presented.
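The changepoint idea behind the segmentation work can be sketched in miniature. The sketch below finds a single changepoint in a hypothetical sequence of per-column substitution counts under a Poisson model; the thesis's Bayesian multiple-changepoint model over several species is far more general.

```python
import math

def best_changepoint(counts):
    """Single-changepoint Poisson segmentation: choose the split k that
    maximises the likelihood of two constant-rate segments."""
    n = len(counts)

    def seg_loglik(xs):
        lam = sum(xs) / len(xs)
        if lam == 0:
            return 0.0
        # log(x!) terms are constant across splits, so they are dropped
        return sum(x * math.log(lam) - lam for x in xs)

    best_k, best_ll = None, -math.inf
    for k in range(1, n):
        ll = seg_loglik(counts[:k]) + seg_loglik(counts[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# hypothetical data: a slowly evolving region followed by a fast one
counts = [1, 0, 2, 1, 0, 1, 6, 8, 7, 9, 6, 8]
```

The maximum-likelihood split lands at the boundary between the low-rate and high-rate regions; extending this to many changepoints and many species is what the thesis's model does.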

Relevance:

30.00%

Publisher:

Abstract:

The aim of this case-control study of 617 children was to investigate early childhood caries (ECC) risk indicators in a non-fluoridated region in Australia. ECC cases were recruited from childcare facilities, public hospitals and private specialist clinics to source children from different socioeconomic backgrounds. Non-ECC controls were recruited from the same childcare facilities. A multinomial logistic modelling approach was used for statistical analysis. The results showed that a large percentage of children tested positive for Streptococcus mutans if their mothers also tested positive. A common risk indicator found in ECC children from childcare facilities and public hospitals was visible plaque (OR 4.1, 95% CI 1.0-15.9, and OR 8.7, 95% CI 2.3-32.9, respectively). Compared to ECC-free controls, the risk indicators specific to childcare cases were enamel hypoplasia (OR 4.2, 95% CI 1.0-18.3), difficulty in cleaning child's teeth (OR 6.6, 95% CI 2.2-19.8), presence of S. mutans (OR 4.8, 95% CI 0.7-32.6), sweetened drinks (OR 4.0, 95% CI 1.2-13.6) and maternal anxiety (OR 5.1, 95% CI 1.1-25.0). Risk indicators specific to public hospital cases were S. mutans presence in child (OR 7.7, 95% CI 1.3-44.6) or mother (OR 8.1, 95% CI 0.9-72.4), ethnicity (OR 5.6, 95% CI 1.4-22.1), and access of mother to pension or health care card (OR 20.5, 95% CI 3.5-119.9). By contrast, a history of chronic ear infections was found to be protective for ECC in childcare children (OR 0.28, 95% CI 0.09-0.82). The biological, socioeconomic and maternal risk indicators demonstrated in the present study can be employed in models of ECC that can be usefully applied for future longitudinal studies.
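The odds ratios quoted above come from multinomial logistic models; for a single 2×2 exposure table, the simpler Wald odds ratio and confidence interval can be computed directly. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical: 40 of 100 cases and 10 of 100 controls had visible plaque
or_, lo, hi = odds_ratio_ci(40, 10, 60, 90)
```

A confidence interval that spans 1.0 (as some of the intervals reported above do) means the data are also consistent with no association.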

Relevance:

30.00%

Publisher:

Abstract:

The current train of thought in appetite research favours an interest in non-homeostatic or hedonic (reward) mechanisms in relation to overconsumption and energy balance. This tendency is supported by advances in neurobiology that preceded the emergence of a new conceptual approach to reward in which affect and motivation (liking and wanting) can be seen as the major forces guiding human eating behaviour. In this review, current progress in applying the processes of liking and wanting to the study of human appetite is examined by discussing the following issues: How can these concepts be operationalised for use in human research to reflect the neural mechanisms by which they may be influenced? Do liking and wanting operate independently to produce functionally significant changes in behaviour? Can liking and wanting be truly experimentally separated, or will an expression of one inevitably contain elements of the other? The review contains a re-examination of selected human appetite research before exploring more recent methodological approaches to the study of liking and wanting in appetite control. In addition, some theoretical developments are described in four diverse models that may enhance current understanding of the role of these processes in guiding ingestive behaviour. Finally, the implications of a dual-process modulation of food reward for weight gain and obesity are discussed. The review concludes that processes of liking and wanting are likely to have independent roles in characterising susceptibility to weight gain. Further research into the dissociation of liking and wanting through implicit and explicit levels of processing would help to disclose the relative importance of these components of reward for appetite control and weight regulation.

Relevance:

30.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
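The rationale of the Energy Based Detector can be sketched in a few lines. This is a toy illustration, not the thesis's implementation: the decaying-sinusoid test signal, the window length, and the mean-plus-four-sigma threshold below are hypothetical stand-ins for the fitted statistical model described above.

```python
import math
import random

def windowed_energy(x, win):
    """Mean signal energy per non-overlapping window of length win."""
    return [sum(v * v for v in x[i:i + win]) / win
            for i in range(0, len(x) - win + 1, win)]

def energy_threshold(baseline_energies, k=4.0):
    """Alarm threshold: baseline mean plus k standard deviations, a
    simple stand-in for a fitted statistical model of disturbance energy."""
    n = len(baseline_energies)
    mu = sum(baseline_energies) / n
    var = sum((e - mu) ** 2 for e in baseline_energies) / n
    return mu + k * math.sqrt(var)

def ringdown(damping, n, dt=0.02, f=0.5):
    """Noisy decaying sinusoid: a single oscillating mode."""
    rng = random.Random(1)
    return [math.exp(-damping * i * dt) * math.sin(2 * math.pi * f * i * dt)
            + 0.01 * rng.gauss(0, 1) for i in range(n)]

healthy = ringdown(damping=0.4, n=2000)    # well damped: energy decays fast
degraded = ringdown(damping=0.02, n=2000)  # lightly damped: energy persists
thr = energy_threshold(windowed_energy(healthy, 200))
alarms = [e > thr for e in windowed_energy(degraded, 200)]
```

Because the lightly damped mode decays slowly, its windowed energy exceeds the threshold learned from healthy operation, which is exactly the contrast the EBD exploits; like the EBD, this sketch cannot say which mode has deteriorated.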

Relevance:

30.00%

Publisher:

Abstract:

Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. 
This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met.
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
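The paired-comparison weighting stage can be illustrated with a small sketch. The objective names and preference strengths below are hypothetical, and the geometric-mean prioritisation used here is a common simple stand-in for fuller paired-comparison methods, not necessarily the exact procedure of the thesis.

```python
def paired_comparison_weights(names, prefs):
    """Derive normalised objective weights from pairwise preferences.
    prefs[(a, b)] = s means objective a is preferred to b with
    strength s on a ratio scale; reciprocals are implied."""
    score = {n: 1.0 for n in names}
    for a in names:
        for b in names:
            if a == b:
                continue
            if (a, b) in prefs:
                score[a] *= prefs[(a, b)]
            elif (b, a) in prefs:
                score[a] *= 1.0 / prefs[(b, a)]
    # geometric mean over each row, then normalise to sum to 1
    k = 1.0 / len(names)
    gm = {n: score[n] ** k for n in names}
    total = sum(gm.values())
    return {n: gm[n] / total for n in names}

# hypothetical objectives for an infrastructure manager
weights = paired_comparison_weights(
    ["safety", "cost", "service_life"],
    {("safety", "cost"): 3.0, ("safety", "service_life"): 2.0,
     ("cost", "service_life"): 0.5})
```

Because every objective is compared against every other, the resulting weights reflect relative rather than absolute judgments, which is what makes paired comparison attractive when objectives sit on incommensurable scales.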

Relevance:

30.00%

Publisher:

Abstract:

This dissertation develops the model of a prototype system for the digital lodgement of spatial data sets with statutory bodies responsible for the registration and approval of land related actions under the Torrens Title system. Spatial data pertain to the location of geographical entities together with their spatial dimensions and are classified as point, line, area or surface. This dissertation deals with a sub-set of spatial data, land boundary data that result from the activities performed by surveying and mapping organisations for the development of land parcels. The prototype system has been developed, utilising an event-driven paradigm for the user-interface, to exploit the potential of digital spatial data being generated from the utilisation of electronic techniques. The system provides for the creation of a digital model of the cadastral network and dependent data sets for an area of interest from hard copy records. This initial model is calibrated on registered control and updated by field survey to produce an amended model. The field-calibrated model then is electronically validated to ensure it complies with standards of format and content. The prototype system was designed specifically to create a database of land boundary data for subsequent retrieval by land professionals for surveying, mapping and related activities. Data extracted from this database are utilised for subsequent field survey operations without the need to create an initial digital model of an area of interest. Statistical reporting of differences resulting when subsequent initial and calibrated models are compared, replaces the traditional checking operations of spatial data performed by a land registry office. Digital lodgement of survey data is fundamental to the creation of the database of accurate land boundary data. 
This creation of the database is fundamental also to the efficient integration of accurate spatial data about land being generated by modern technology, such as global positioning systems and remote sensing and imaging, with land boundary information and other information held in Government databases. The prototype system developed provides for the delivery of accurate, digital land boundary data for the land registration process to ensure the continued maintenance of the integrity of the cadastre. Such data should also meet the more general and encompassing requirements of, and prove to be of tangible, longer-term benefit to, the developing electronic land information industry.
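The statistical reporting of differences between an initial model and its field-calibrated counterpart might, at its simplest, summarise coordinate residuals. A minimal sketch, with hypothetical boundary-corner coordinates in metres:

```python
import math

def residual_report(initial, calibrated):
    """Summary statistics of the coordinate differences between an
    initial digital cadastral model and a field-calibrated model."""
    ds = [math.hypot(x1 - x2, y1 - y2)
          for (x1, y1), (x2, y2) in zip(initial, calibrated)]
    n = len(ds)
    mean = sum(ds) / n
    rms = math.sqrt(sum(d * d for d in ds) / n)
    return {"n": n, "mean": mean, "rms": rms, "max": max(ds)}

# hypothetical boundary-corner coordinates (metres)
initial = [(100.00, 200.00), (150.00, 200.00), (150.00, 260.00)]
calibrated = [(100.02, 200.01), (149.97, 200.00), (150.01, 259.98)]
report = residual_report(initial, calibrated)
```

A land registry office could compare such summary figures against tolerance limits instead of manually re-checking each boundary mark, which is the substitution the abstract describes.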

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses the contemporary issue of the control, restoration and potential for reuse of State Government-owned heritage properties with commercial potential. It attempts to reconcile the sometimes competing interests of the range of stakeholders in such properties, particularly those seeking to maximise economic performance and return on one hand and community expectations for heritage preservation and exhibition on the other. The matters are approached principally from the Government's position as asset owner/manager. It includes research into a number of key elements - including statutory, physical and economic parameters and an analysis of the legitimate requirements of all stakeholders. The thesis also recognises the need for innovation in approach and for the careful structuring and pre-planning of proposals on a project-by-project basis. On the matter of innovation, four case studies are included in the thesis to exhibit some approaches and techniques that have already been employed in addressing these issues. From this research base, a series of deductions at both a macro and micro level are established and a model for a rational decision-making process for dealing with such projects is developed as a major outcome of the work. Finally, the general model is applied to a specific project, the currently unused Port Office heritage site in the Brisbane Central Business District.

Relevance:

30.00%

Publisher:

Abstract:

The Queensland Coal Industry Employees Health Scheme was implemented in 1993 to provide health surveillance for all Queensland coal industry workers. The government, mining employers and mining unions agreed that the scheme should operate for seven years. At the expiry of the scheme, an assessment of the contribution of health surveillance to meeting coal industry needs would be an essential part of determining a future health surveillance program. This research project has analysed the data made available between 1993 and 1998. All current coal industry employees have had at least one health assessment. The project examined how the centralised nature of the Health Scheme benefits industry by identifying key health issues and exploring their dimensions on a scale not possible by corporate-based health surveillance programs. There is a body of evidence that indicates that health awareness - on the scale of the individual, the work group and the industry - is not a part of the mining industry culture. There is also growing evidence that there is a need for this culture to change and that some change is in progress. One element of this changing culture is a growth in the interest by the individual and the community in information on health status and benchmarks that are reasonably attainable. This interest opens the way for health education which contains personal, community and occupational elements. An important element of such education is the data on mine site health status. This project examined the role of health surveillance in the coal mining industry as a tool for generating the necessary information to promote an interest in health awareness. The Health Scheme Database provides the material for the bulk of the analysis of this project. After a preliminary scan of the data set, more detailed analysis was undertaken on key health and related safety issues that include respiratory disorders, hearing loss and high blood pressure.
The data set facilitates control for confounding factors such as age and smoking status. Mines can be benchmarked to identify those mines with effective health management and those with particular challenges. While the study has confirmed the very low prevalence of restrictive airway disease such as pneumoconiosis, it has demonstrated a need to examine in detail the emergence of obstructive airway disease such as bronchitis and emphysema, which may be a consequence of the increasing use of high-dust longwall technology. The power of the Health Database's electronic data management is demonstrated by linking the health data to other data sets, such as the injury data collected by the Department of Mines and Energy. The analysis examines serious strain-sprain injuries and has identified a marked difference between the underground and open-cut sectors of the industry. The analysis also considers productivity and OHS data to examine the extent to which there is correlation between any pairs of these and the previously analysed health parameters. This project has demonstrated that the current structure of the Coal Industry Employees Health Scheme has largely delivered to mines an effective health screening process. At the same time, the centralised nature of data collection and analysis has provided to the mines, the unions and the government substantial statistical cross-sectional data upon which strategies to more effectively manage health and related safety issues can be based.
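One standard way to control for a confounder such as smoking status in a cross-sectional comparison is a stratum-adjusted Mantel-Haenszel odds ratio. The counts below are hypothetical; the scheme's actual analysis methods are not specified in the abstract.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled odds ratio across confounder strata.
    Each stratum is (a, b, c, d): exposed cases, exposed non-cases,
    unexposed cases, unexposed non-cases."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# hypothetical hearing-loss counts at one mine, stratified by smoking
smokers = (30, 70, 10, 90)
non_smokers = (15, 85, 5, 95)
or_mh = mantel_haenszel_or([smokers, non_smokers])
```

Pooling within strata before forming the ratio stops the smoking effect from masquerading as an occupational one, which is the kind of adjusted benchmarking the centralised data set makes possible.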

Relevance:

30.00%

Publisher:

Abstract:

A review of the main rolling models is conducted to assess their suitability for modelling the foil rolling process. Two such models are Fleck and Johnson's Hertzian model and Fleck, Johnson, Mear and Zhang's Influence Function model. Both of these models are approximated through the use of perturbation methods, which reduces the computation time compared with the full numerical solutions. The Hertzian model was approximated using the ratio of the yield stress of the strip to the plane-strain Young's modulus of the rolls as the small perturbation parameter. The Influence Function model approximation takes advantage of the solution of the well-known Aerofoil Integral Equation to gain an insight into how the choice of interior boundary points affects the stability of the numerical solution of the model's equations. These approximations require less computation than their full models and, in the case of the Hertzian approximation, introduce only a small error in the predictions of roll force and roll torque. Hence the Hertzian approximate method is suitable for on-line control. The predictions from the Influence Function approximation underestimate those of the numerical results; the main source of this error lies in the approximation of the pressure in the plastic reduction regions.
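The Hertzian approximation's small parameter can be written out explicitly. As an illustration only (the symbols k for the strip yield stress and E' for the rolls' plane-strain Young's modulus are notational assumptions, not taken from the abstract), the expansion has the form:

```latex
% Illustrative notation: roll pressure expanded in the small ratio of
% strip yield stress k to plane-strain roll modulus E'.
\epsilon = \frac{k}{E'} \ll 1, \qquad
p(x) \approx p_0(x) + \epsilon\, p_1(x) + \epsilon^2 p_2(x) + \cdots
```

Truncating such a series after a few terms is what makes the approximation cheap enough for on-line control.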