959 results for whole rock analysis


Relevance:

30.00%

Publisher:

Abstract:

Bioelectrical impedance analysis (BIA) is a method of body composition analysis first investigated in 1962 which has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages (it is inexpensive, non-invasive and portable) and the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept-frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes.
The theoretical basis for selecting these impedance values (Zc and Ro) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n=42) in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/Ro (L = length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n=20), and also to a group with an abnormal balance of extracellular to intracellular fluids (n=20). In both cases the equations using L²/Zc and L²/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple-frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole-body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared. The prediction quotients H²/Zc and H²/Ro (H = height) again yielded the highest correlations with TBW and ECW respectively, with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others.
It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (p < 0.05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of muscle mass of the patient during the course of treatment. The determination of body composition (viz. TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple-frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz. TBW and ECW) regardless of the individual's state of health.
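The arc-fitting and extrapolation step described above can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the single-dispersion Cole-type spectrum, its parameter values and the simple algebraic circle fit are all assumptions.

```python
import numpy as np

# Synthetic single-dispersion (Cole-type) impedance spectrum.
# Parameter values are illustrative, not data from the thesis.
R0_true, Rinf_true, fc = 600.0, 350.0, 50e3      # ohm, ohm, Hz
freqs = np.geomspace(4e3, 1e6, 496)              # 496 frequencies, 4 kHz - 1 MHz
Z = Rinf_true + (R0_true - Rinf_true) / (1 + 1j * freqs / fc)
R, X = Z.real, -Z.imag                           # reactance plotted as a positive arc

# Algebraic (Kasa) circle fit: R^2 + X^2 + D*R + E*X + F = 0
A = np.column_stack([R, X, np.ones_like(R)])
b = -(R**2 + X**2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)

# Extrapolate the fitted locus to the resistance axis (X = 0): the roots of
# r^2 + D*r + F = 0 give Ro (zero frequency, not directly measurable with
# surface electrodes) and Rinf (infinite frequency).
roots = np.sort(np.roots([1.0, D, F]).real)
Rinf_fit, Ro_fit = roots

# Zc = |Z| at the top of the arc, where the reactance is maximal
a_c, b_c = -D / 2.0, -E / 2.0                    # fitted circle centre
radius = np.sqrt(a_c**2 + b_c**2 - F)
Zc_fit = np.hypot(a_c, b_c + radius)
```

With these values the fit recovers Ro = 600 and Rinf = 350 ohms; TBW and ECW predictions would then use quotients of the form L²/Zc and L²/Ro.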

Relevance:

30.00%

Publisher:

Abstract:

The Mount Isa Basin is a new concept used to describe the area of Palaeo- to Mesoproterozoic rocks south of the Murphy Inlier, presently (and inappropriately) described as the Mount Isa Inlier. The new basin concept presented in this thesis allows for the characterisation of basin-wide structural deformation, correlation of mineralisation with particular lithostratigraphic and seismic stratigraphic packages, and the recognition of areas with petroleum exploration potential. The northern depositional margin of the Mount Isa Basin is the metamorphic, intrusive and volcanic complex here referred to as the Murphy Inlier (not the "Murphy Tectonic Ridge"). The eastern, southern and western boundaries of the basin are obscured by younger basins (the Carpentaria, Eromanga and Georgina Basins). The Murphy Inlier rocks comprise the seismic basement to the Mount Isa Basin sequence. Evidence for the continuity of the Mount Isa Basin with the McArthur Basin to the northwest and the Willyama Block (Basin) at Broken Hill to the south is presented. These areas, combined with several other areas of similar age, are believed to have comprised the Carpentarian Superbasin (new term). The application of seismic exploration within Authority to Prospect (ATP) 423P at the northern margin of the basin was critical to the recognition and definition of the Mount Isa Basin. The Mount Isa Basin is structurally analogous to the Palaeozoic Arkoma Basin of Oklahoma and Arkansas in the southern USA but, as with all basins, it has unique characteristics that are a function of its individual development history. The Mount Isa Basin evolved in a manner similar to many well-described, Phanerozoic plate-tectonic-driven basins. A full Wilson Cycle is recognised and a plate tectonic model proposed. The northern Mount Isa Basin is defined as the Proterozoic basin area northwest of the Mount Gordon Fault.
Deposition in the northern Mount Isa Basin began with a rift sequence of volcaniclastic sediments, followed by a passive-margin drift phase comprising mostly carbonate rocks. Following the rift and drift phases, major north-south compression produced east-west thrusting in the south of the basin, inverting the older sequences. This compression produced an asymmetric epi- or intra-cratonic, clastic-dominated peripheral foreland basin provenanced in the south and thinning markedly to a stable platform area (the Murphy Inlier) in the north. The final major deformation comprised east-west compression producing north-south aligned faults that are particularly prominent at Mount Isa. Potential field studies of the northern Mount Isa Basin, principally using magnetic data (and to a lesser extent gravity data, satellite images and aerial photographs), exhibit remarkable correlation with the reflection seismic data. The potential field data contributed significantly to the unravelling of the northern Mount Isa Basin architecture and deformation. Structurally, the Mount Isa Basin consists of three distinct regions. From north to south they are the Bowthorn Block, the Riversleigh Fold Zone and the Cloncurry Orogen (new names). The Bowthorn Block, which is located between the Elizabeth Creek Thrust Zone and the Murphy Inlier, consists of an asymmetric wedge of volcanic, carbonate and clastic rocks. It ranges from over 10 000 m stratigraphic thickness in the south to less than 2000 m in the north. The Bowthorn Block is relatively undeformed; however, it contains a series of reverse faults trending east-west that are interpreted from seismic data to be down-to-the-north normal faults that have been reactivated as thrusts. The Riversleigh Fold Zone is a folded and faulted region south of the Bowthorn Block, comprising much of the area formerly referred to as the Lawn Hill Platform. The Cloncurry Orogen consists of the area and sequences equivalent to the former Mount Isa Orogen.
The name Cloncurry Orogen clearly distinguishes this area from the wider concept of the Mount Isa Basin. The South Nicholson Group and its probable correlatives, the Pilpah Sandstone and Quamby Conglomerate, comprise a later phase of now largely eroded deposits within the Mount Isa Basin. The name South Nicholson Basin is now outmoded, as this terminology applies only to the South Nicholson Group, unlike the original broader definition of Brown et al. (1968). Cored slimhole stratigraphic and mineral wells drilled by Amoco, Esso, Elf Aquitaine and Carpentaria Exploration prior to 1986 penetrated much of the stratigraphy and intersected minor oil and gas shows as well as excellent potential source rocks. The raw data were reinterpreted and augmented with seismic stratigraphy and source rock data from resampled mineral and petroleum stratigraphic exploration wells for this study. Since 1986, Comalco Aluminium Limited, as operator of a joint venture with Monument Resources Australia Limited and Bridge Oil Limited, recorded approximately 1000 km of reflection seismic data within the basin and drilled one conventional stratigraphic petroleum well, Beamesbrook-1. This work was the first reflection seismic survey and the first conventional petroleum test of the northern Mount Isa Basin. When incorporated into the newly developed foreland basin and maturity models, a grass-roots petroleum exploration play was recognised, and this led to the present thesis. The Mount Isa Basin was seen to contain excellent source rocks coupled with potential reservoirs and all of the other essential aspects of a conventional petroleum exploration play. This play, although high risk, was commensurate with the enormous and totally untested petroleum potential of the basin. The basin was assessed for hydrocarbons in 1992 with three conventional exploration wells, Desert Creek-1, Argyle Creek-1 and Egilabria-1. These wells also tested and confirmed the proposed basin model.
No commercially viable oil or gas was encountered, although evidence of its former existence was found. In addition to the petroleum exploration, indeed as a consequence of it, the association of the extensive base metal and other mineralisation in the Mount Isa Basin with hydrocarbons could not be overlooked. A comprehensive analysis of the available data suggests a link between the migration and possible generation or destruction of hydrocarbons and metal-bearing fluids. Consequently, base metal exploration based on hydrocarbon exploration concepts is probably the most effective technique in such basins. The metal-hydrocarbon-sedimentary basin-plate tectonic association (analogous to Phanerozoic models) is a compelling outcome of this work on the Palaeo- to Mesoproterozoic Mount Isa Basin. Petroleum within the Bowthorn Block was apparently destroyed by hot brines that produced many ore deposits elsewhere in the basin.

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates the coefficient of performance (COP) of a hybrid liquid desiccant solar cooling system. This hybrid cooling system includes three sections: 1) a conventional air-conditioning section; 2) a liquid desiccant dehumidification section; and 3) an air mixture section. An air handling unit (AHU) with a mixture variable-air-volume design is included in the hybrid cooling system to control humidity. In the combined system, the air is first dehumidified in the dehumidifier and then mixed with ambient air by the AHU before entering the evaporator. Experiments using lithium chloride as the liquid desiccant have been carried out to evaluate the performance of the dehumidifier and regenerator. Based on the air mixture (AHU) design, models of the electrical coefficient of performance (ECOP), thermal coefficient of performance (TCOP) and whole-system coefficient of performance (COPsys) of the hybrid liquid desiccant solar cooling system were developed to evaluate the system's performance. These mathematical models can be used to describe the coefficient-of-performance trend under different ambient conditions, while also providing a convenient comparison with conventional air-conditioning systems. They give good explanations of the relationship between the models' performance predictions and ambient air parameters. The simulation results reveal that the coefficient of performance of a hybrid liquid desiccant solar cooling system depends substantially on ambient air and dehumidifier parameters. The liquid desiccant experiments also prove that the latent component of the total cooling load requirements can easily be met by using the liquid desiccant dehumidifier. While the cooling requirements can be met, the liquid desiccant system is still subject to hysteresis problems.
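The three performance metrics can be illustrated with their standard definitions; the thesis's exact model terms are not reproduced here, and the operating-point numbers below are hypothetical.

```python
def cop_metrics(q_cooling_kw, w_electric_kw, q_thermal_kw):
    """Standard COP definitions for a hybrid liquid desiccant solar
    cooling system (illustrative forms, not the thesis's exact models).

    q_cooling_kw  -- useful cooling delivered
    w_electric_kw -- electrical input (compressor, fans, pumps)
    q_thermal_kw  -- solar thermal input used to regenerate the desiccant
    """
    ecop = q_cooling_kw / w_electric_kw                    # electrical COP
    tcop = q_cooling_kw / q_thermal_kw                     # thermal COP
    cop_sys = q_cooling_kw / (w_electric_kw + q_thermal_kw)  # whole system
    return ecop, tcop, cop_sys

# Hypothetical operating point: 10 kW cooling, 2.5 kW electricity,
# 8 kW of solar-regenerated heat.
ecop, tcop, cop_sys = cop_metrics(10.0, 2.5, 8.0)
```

A conventional vapour-compression system would be compared on ECOP alone; COPsys shows how the free-but-not-costless solar heat dilutes the whole-system figure.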

Relevance:

30.00%

Publisher:

Abstract:

Background: The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Due to the problems caused by the uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment cannot be directly applied to whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has been successfully applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results: In this paper, the DL method is used to analyse the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions: The present method provides a new way of recovering the phylogeny of large dsDNA viruses and parvoviruses, and also gives some insights into the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used to recover the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses, which have a much smaller genome size.
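A minimal sketch of an alignment-free, composition-based distance is given below. It is a simplified stand-in for this family of methods, not the DL method itself, which additionally corrects observed word frequencies against a model-expected frequency before comparing sequences.

```python
from collections import Counter
import math

def composition_distance(seq_a, seq_b, k=3):
    """Cosine-style distance between k-mer composition vectors of two
    protein sequences. 0 means identical composition, 1 means maximally
    different. A simplified illustration of alignment-free comparison."""
    def freqs(seq):
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = max(sum(counts.values()), 1)
        return counts, total

    ca, na = freqs(seq_a)
    cb, nb = freqs(seq_b)
    # Dot product over the union of observed k-mers (Counter gives 0
    # for missing keys), then normalise to a cosine similarity.
    dot = sum((ca[w] / na) * (cb[w] / nb) for w in set(ca) | set(cb))
    norm_a = math.sqrt(sum((ca[w] / na) ** 2 for w in ca))
    norm_b = math.sqrt(sum((cb[w] / nb) ** 2 for w in cb))
    return (1.0 - dot / (norm_a * norm_b)) / 2.0
```

A whole-proteome study would compute this pairwise over concatenated proteomes and feed the resulting distance matrix to a tree-building method such as neighbour-joining.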

Relevance:

30.00%

Publisher:

Abstract:

Concerns regarding groundwater contamination with nitrate and the long-term sustainability of groundwater resources have prompted the development of a multi-layered three-dimensional (3D) geological model to characterise the aquifer geometry of the Wairau Plain, Marlborough District, New Zealand. The 3D geological model, which consists of eight litho-stratigraphic units, has subsequently been used to synthesise hydrogeological and hydrogeochemical data for different aquifers in an approach that aims to demonstrate how integrating water chemistry data within the physical framework of a 3D geological model can help to better understand and conceptualise groundwater systems in complex geological settings. Multivariate statistical techniques (e.g. Principal Component Analysis and Hierarchical Cluster Analysis) were applied to the groundwater chemistry data to identify hydrochemical facies which are characteristic of distinct evolutionary pathways and a common hydrologic history of the groundwaters. Principal Component Analysis on the hydrochemical data demonstrated that natural water-rock interactions, redox potential and human agricultural impact are the key controls on groundwater quality in the Wairau Plain. Hierarchical Cluster Analysis revealed distinct hydrochemical water quality groups in the Wairau Plain groundwater system. Visualisation of the results of the multivariate statistical analyses and of the distribution of groundwater nitrate concentrations in the context of aquifer lithology highlighted the link between groundwater chemistry and the lithology of the host aquifers. The methodology followed in this study can be applied in a variety of hydrogeological settings to synthesise geological, hydrogeological and hydrochemical data and present them in a format readily understood by a wide range of stakeholders. This enables more efficient communication of the results of scientific studies to the wider community.
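The PCA step can be sketched with a standardised data matrix and a singular value decomposition; the sample data below are synthetic placeholders, not the Wairau Plain measurements.

```python
import numpy as np

# Hypothetical hydrochemical data: rows = groundwater samples,
# columns = analytes (e.g. Ca, Mg, Na, Cl, HCO3, NO3) in mg/L.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))
X[:15, 5] += 3.0      # e.g. a nitrate-impacted group of samples

# Standardise each analyte (PCA on the correlation matrix, the usual
# choice when variables have mixed units and ranges).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD: component scores and the variance explained by each PC.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T                       # sample coordinates in PC space
explained = S**2 / np.sum(S**2)         # fraction of variance per PC
```

Hierarchical Cluster Analysis would then typically be run on the standardised data (or the leading PC scores), e.g. with Ward linkage, to delineate the hydrochemical facies groups.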

Relevance:

30.00%

Publisher:

Abstract:

Each financial year, concessions, benefits and incentives are delivered to taxpayers via the tax system. These concessions, benefits and incentives, referred to as tax expenditure, differ from direct expenditure because of their recurring fiscal impact without regular scrutiny through the federal budget process. There are approximately 270 different tax expenditures within the current tax regime, with total measured tax expenditure in the 2005-06 financial year estimated at around $42.1 billion, increasing to $52.7 billion by 2009-10. Each year, new tax expenditures are introduced, while existing tax expenditures are modified and deleted. In recognition of some of the problems associated with tax expenditure, a Tax Expenditure Statement, as required by the Charter of Budget Honesty Act 1998, is produced annually by the Australian Federal Treasury. The Statement details the various expenditures and measures, in the form of concessions, benefits and incentives provided to taxpayers by the Australian Government, and calculates the tax expenditure in terms of revenue forgone. A similar approach to reporting tax expenditure, with such a report being a legal requirement, is followed by most OECD countries. The current Tax Expenditure Statement lists 270 tax expenditures and, where it is able to, reports on the estimated pecuniary value of those expenditures. Apart from the annual Tax Expenditure Statement, there is very little other scrutiny of Australia's Federal tax expenditure program. While there have been various academic analyses of tax expenditure in Australia, when compared to the North American literature the Australian literature is still in its infancy.
In fact, one academic author who has contributed to tax expenditure analysis recently noted that there is 'remarkably little secondary literature which deals at any length with tax expenditures in the Australian context.' Given this perceived gap in the secondary literature, this paper examines the fundamental concept of tax expenditure and considers the role it plays in the current tax regime as a whole, along with the effects of the introduction of new tax expenditures. In doing so, tax expenditure is contrasted with direct expenditure. Analysis of tax expenditure versus direct expenditure already forms a sophisticated and comprehensive body of work, stemming from the US over the last three decades. As such, the title of this paper is rather misleading. However, given the lack of analysis in Australia, it is appropriate that this paper undertakes a consideration of tax expenditure versus direct expenditure in an Australian context. Given this proposition, rather than purporting to undertake a comprehensive analysis of tax expenditure, which has already been done, this paper discusses the substantive considerations of any such analysis, to enable further investigation into the tax expenditure regime both as a whole and into individual tax expenditure initiatives. While none of the propositions in this paper is new in a 'tax expenditure analysis' sense, this debate is a relatively new contribution to the Australian literature on tax policy. Before the issues relating to tax expenditure can be determined, it is necessary to consider what is meant by 'tax expenditure'. As such, part two of this paper defines 'tax expenditure'. Part three determines the framework within which tax expenditure can be analysed. It is suggested that tax expenditure must be evaluated within the framework of the design criteria of an income tax system, with the key features of equity, efficiency and simplicity.
Tax expenditure analysis can then be applied to deviations from the ideal tax base. Once it is established what is meant by tax expenditure, and the framework for evaluation is determined, it is possible to establish the substantive issues to be evaluated. This paper suggests that there are four broad areas worthy of investigation: economic efficiency, administrative efficiency, whether tax expenditure initiatives achieve their policy intent, and the impact on stakeholders. Given these areas of investigation, part four of this paper considers the issues relating to the economic efficiency of the tax expenditure regime, in particular the effect on resource allocation, the incentives for taxpayer behaviour and the distortions created by tax expenditures. Part five examines the notion of administrative efficiency in light of the fact that most tax expenditures could simply be delivered as direct expenditures. Part six explores the notion of policy intent and considers the two questions that need to be asked: whether a tax expenditure initiative reaches its target group, and whether the financial incentives are appropriate. Part seven examines the impact on stakeholders. Finally, part eight considers the future of tax expenditure analysis in Australia.

Relevance:

30.00%

Publisher:

Abstract:

Despite the significant contribution of the transport sector to the global economy and society, it is one of the largest sources of global energy consumption, greenhouse gas emissions and environmental pollution. A complete look at the whole-of-life environmental inventory of this sector is helpful in generating a holistic understanding of the contributory factors causing emissions. Previous studies were mainly based on segmental views which mostly compare the environmental impacts of different modes of transport, but very few consider impacts other than those of the operational phase. Ignoring the impacts of non-operational phases, e.g. manufacture, construction and maintenance, may not accurately reflect total contributions to emissions. Moreover, an integrated study covering all motorised modes of road transport is also needed to achieve a holistic estimation. The objective of this study is to develop a component-based life cycle inventory model which considers the impacts of both the operational and non-operational phases of the whole life, as well as different transport modes. In particular, the whole life cycle of road transport has been segmented into vehicle, infrastructure, fuel and operational components, and inventories have been conducted on each component. The inventory model has been demonstrated using the road transport of Singapore. Results show that total life cycle greenhouse gas emissions from the road transport sector of Singapore are 7.8 million tons per year, of which the operational phase contributes about 55% and the non-operational phases about 45%. Total criteria air pollutants are 46, 8.5, 33.6, 13.6 and 2.6 thousand tons per year for CO, SO2, NOx, VOC and PM10, respectively. From the findings, it can be deduced that stringent government policies on emission control measures have a significant impact on reducing environmental pollution.
In combating global warming and environmental pollution, promoting public transport over private modes is an effective and sustainable policy.
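The component-based inventory structure can be sketched as a simple aggregation. Only the total (7.8 Mt CO2-e/yr) and the roughly 55/45 operational split come from the abstract; the individual component values below are placeholders chosen to be consistent with those figures.

```python
# Component-based life cycle GHG inventory (illustrative structure).
# Values in Mt CO2-e per year; all except the implied operational
# figure are placeholders, not the study's component-level data.
components = {
    "operation (fuel combustion)":    4.29,
    "fuel production":                1.20,   # placeholder
    "vehicle manufacture":            1.10,   # placeholder
    "infrastructure and maintenance": 1.21,   # placeholder
}

total = sum(components.values())                           # whole life cycle
operational_share = components["operation (fuel combustion)"] / total
non_operational_share = 1.0 - operational_share
```

The same dictionary-of-components pattern extends naturally to the criteria pollutants (CO, SO2, NOx, VOC, PM10) by carrying one inventory vector per component instead of a scalar.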

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to bring leadership context into sharper focus and to suggest there are strong constraints on public leaders' discretion to lead in ways consistent with New Public Management (NPM) or New Public Leadership (NPL). Much of the existing public leadership research focuses on the individual leader and tends to give little attention to the influence of context. This lack of focus on leader context adversely affects our ability to build public leadership capacity. We draw on prior research to establish that: (1) there are strong contextual constraints on public leaders' capacity to lead in ways consistent with NPL; (2) public leaders are subject to contradictory messages, and for the most part these contradictions are unacknowledged and unresolved, the impact of which is confusion and informal power-politics; and (3) the task of leader transition from traditional leadership to new public leadership is very much underestimated and requires a new way of thinking about leadership development. On the basis of this analysis, we argue that public leaders find themselves between a rock and a hard place.

Relevance:

30.00%

Publisher:

Abstract:

Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based methods are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (energy from the damage source itself is utilised, so no external energy supply is needed) and the possibility of real-time monitoring (detecting cracks as they occur or grow) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of the AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors.
But complications arise as AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localisation. A major problem in the practical use of the AE technique is the presence of sources of AE other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from the crack activity; hence discrimination of signals to identify the sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal approval or have been shown suitable for all situations. The b-value analysis, which involves studying the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis), was investigated for suitability for damage quantification in ductile materials such as steel.
This was found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of its use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
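Two of the analysis steps named above, arrival-time source location and b-value estimation, can be sketched as follows. The 1-D two-sensor geometry, the assumed mode velocity and the dB-to-magnitude convention are illustrative assumptions, not the thesis's implementation.

```python
import math

def linear_source_location(sensor_spacing_m, wave_speed_mps, dt_s):
    """1-D location of an AE source between two sensors from the
    arrival-time difference dt = t_sensor1 - t_sensor2.
    Returns the distance from sensor 1. The wave speed used must be
    that of the propagation mode actually identified in the signals."""
    return (sensor_spacing_m + wave_speed_mps * dt_s) / 2.0

def b_value(amplitudes_db, a_min_db):
    """Maximum-likelihood (Aki) b-value from AE amplitudes, by analogy
    with the Gutenberg-Richter relation log10 N(>=A) = a - b*A.
    Dividing dB amplitudes by 20 maps them onto a magnitude-like
    scale, a convention often used in AE b-value studies."""
    mags = [a / 20.0 for a in amplitudes_db if a >= a_min_db]
    return math.log10(math.e) / (sum(mags) / len(mags) - a_min_db / 20.0)

# Hypothetical numbers: sensors 2 m apart, assumed mode speed 5000 m/s,
# wave arriving at sensor 1 first by 0.2 ms -> source ~0.5 m from sensor 1.
x = linear_source_location(2.0, 5000.0, -0.0002)
b = b_value([40, 45, 50, 55, 60], a_min_db=40)
```

A falling b-value during monitoring (relatively more high-amplitude events) is commonly read as damage localising into macro-cracking.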

Relevance:

30.00%

Publisher:

Abstract:

The concept of Six Sigma was initiated in the 1980s by Motorola. Since then it has been implemented in several manufacturing and service organizations. To date, Six Sigma implementation has mostly been limited to healthcare and financial services in the private sector. Its implementation is now gradually picking up in services such as call centers, education, and construction and related engineering, in both the private and public sectors. Through a literature review, a questionnaire survey and a multiple case study approach, the paper develops a conceptual framework to facilitate widening the scope of Six Sigma implementation in service organizations. Using grounded theory methodology, this study develops theory for Six Sigma implementation in service organizations. The study involves a questionnaire survey and case studies to understand the issues and build a conceptual framework. The survey was conducted among service organizations in Singapore and was exploratory in nature. The case studies involved three service organizations which had implemented Six Sigma. The objective was to explore and understand the issues highlighted by the survey and the literature. The findings confirm the inclusion of critical success factors, critical-to-quality characteristics, and a set of tools and techniques, as observed from the literature. In the case of key performance indicators, there are different interpretations in the literature and among industry practitioners. Some of the literature explains key performance indicators as performance metrics, whereas others treat them as key process input or output variables, which is similar to the interpretations of Six Sigma practitioners. The responses 'not relevant' and 'unknown to us' as reasons for not implementing Six Sigma show the need to understand the specific requirements of service organizations. Though much theoretical description of Six Sigma is available, there has been limited rigorous academic research on it.
This gap is far more pronounced for Six Sigma implementation in service organizations, where the theory is not yet mature. Identifying this need, the study contributes by undertaking a theory-building exercise and developing a conceptual framework to understand the issues involved in its implementation in service organizations.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Process bus networks are the next stage in the evolution of substation design, bringing digital technology to the high voltage switchyard. Benefits of process buses include support for non-conventional instrument transformers, improved disturbance recording and phasor measurement, and the removal of costly, and potentially hazardous, copper cabling from substation switchyards and control rooms. This paper examines the role a process bus plays in an IEC 61850 based Substation Automation System. Measurements taken from a process bus substation are used to develop an understanding of the network characteristics of "whole of substation" process buses. The concept of "coherent transmission" is presented and its impact on Ethernet switches is examined. Experiments based on substation observations are used to investigate in detail the behavior of Ethernet switches under sampled value traffic. Test methods that can be used to assess the adequacy of a network are proposed, and examples of the application and interpretation of these tests are provided. Once sampled value frames are queued by an Ethernet switch, the additional delay incurred by subsequent switches is minimal; this allows switches to be used in switchyards to further reduce communications cabling without significantly impacting operation. The performance and reliability of a process bus network operating with close to the theoretical maximum number of digital sampling units (merging units or electronic instrument transformers) were investigated with networking equipment from several vendors, and were demonstrated to be acceptable.
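To give a feel for why a "theoretical maximum number of digital sampling units" exists on a shared process bus, the following is a back-of-envelope sketch of aggregate sampled value bandwidth. The figures are assumptions, not values from the paper: a sampling rate of 80 samples per nominal 50 Hz cycle (typical of the IEC 61850-9-2LE guideline for protection) and an illustrative 126-byte sampled value frame.

```python
# Illustrative sketch only: estimate aggregate sampled value traffic for a
# "whole of substation" process bus. All numeric constants are assumptions
# (9-2LE-style protection rate, hypothetical frame size), not measurements
# from the paper.

NOMINAL_FREQ_HZ = 50          # nominal system frequency
SAMPLES_PER_CYCLE = 80        # assumed protection sampling rate (9-2LE style)
FRAME_BYTES = 126             # assumed sampled value Ethernet frame size
OVERHEAD_BYTES = 20           # preamble, start delimiter and inter-frame gap

def sv_bandwidth_mbps(n_merging_units: int) -> float:
    """Aggregate bandwidth (Mb/s) of n merging units' sampled value streams."""
    frames_per_sec = NOMINAL_FREQ_HZ * SAMPLES_PER_CYCLE * n_merging_units
    bits_per_sec = frames_per_sec * (FRAME_BYTES + OVERHEAD_BYTES) * 8
    return bits_per_sec / 1e6

# How many merging units fit on a 100 Mb/s link before it saturates?
for n in (1, 10, 20):
    print(n, "units:", round(sv_bandwidth_mbps(n), 2), "Mb/s")
```

Under these assumptions each merging unit consumes roughly 4.7 Mb/s, so a 100 Mb/s link saturates at around 21 units, which is the kind of hard ceiling the paper's "close to the theoretical maximum" experiments probe.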

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Barmah Forest virus (BFV) disease is one of the most widespread mosquito-borne diseases in Australia. The number of outbreaks and the incidence rate of BFV in Australia have attracted growing concern about the spatio-temporal complexity and underlying risk factors of BFV disease. A large number of notifications have been recorded continuously in Queensland since 1992, yet little is known about the spatial and temporal characteristics of the disease. I aim to use notification data to better understand the effects of climatic, demographic, socio-economic and ecological risk factors on the spatial epidemiology of BFV disease transmission, develop predictive risk models and forecast future disease risks under climate change scenarios. Computerised data files of daily notifications of BFV disease and climatic variables in Queensland during 1992-2008 were obtained from Queensland Health and the Australian Bureau of Meteorology, respectively. Projected climate data for the years 2025, 2050 and 2100 were obtained from the Commonwealth Scientific and Industrial Research Organisation (CSIRO). Data on socio-economic, demographic and ecological factors were also obtained from the relevant government departments as follows: 1) socio-economic and demographic data from the Australian Bureau of Statistics; 2) wetlands data from the Department of Environment and Resource Management; and 3) tidal readings from the Queensland Department of Transport and Main Roads. Disease notifications were geocoded, and spatial and temporal patterns of disease were investigated using geostatistics. Visualisation of BFV disease incidence rates through mapping reveals substantial spatio-temporal variation at the statistical local area (SLA) level over time. Results reveal high incidence rates of BFV disease along coastal areas compared with Queensland as a whole.
A Mantel-Haenszel chi-square analysis for trend reveals a statistically significant relationship between BFV disease incidence rates and age groups (χ² = 7587, p < 0.01). Semi-variogram analysis and smoothed maps created using interpolation techniques indicate that the pattern of spatial autocorrelation was not homogeneous across the state. A cluster analysis was used to detect hot spots/clusters of BFV disease at the SLA level. The most likely spatial and space-time clusters were detected at the same locations across coastal Queensland (p < 0.05). The study demonstrates heterogeneity of disease risk at the SLA level and reveals the spatial and temporal clustering of BFV disease in Queensland. Discriminant analysis was employed to establish a link between wetland classes, climate zones and BFV disease, because the importance of wetlands in the transmission of BFV disease remains unclear. The multivariable discriminant modelling analyses demonstrate that the wetland types saline 1, riverine and saline tidal influence were the most significant risk factors for BFV disease in all climate and buffer zones, while lacustrine, palustrine, estuarine, saline 2 and saline 3 wetlands were less important. The model accuracies were 76%, 98% and 100% for BFV risk in subtropical, tropical and temperate climate zones, respectively. This study demonstrates that BFV disease risk varied with wetland class and climate zone, and suggests that wetlands may act as potential breeding habitats for BFV vectors. Multivariable spatial regression models were applied to assess the impact of spatial climatic, socio-economic and tidal factors on BFV disease in Queensland. Spatial regression models were developed to account for spatial effects, and they generated superior estimates over a traditional regression model.
In the spatial regression models, BFV disease incidence shows an inverse relationship with minimum temperature, low tide and distance to coast, and a positive relationship with rainfall in coastal areas, whereas for Queensland as a whole the disease shows an inverse relationship with minimum temperature and high tide and a positive relationship with rainfall. This study determines the most significant spatial risk factors for BFV disease across Queensland. Empirical models were developed to forecast the future risk of BFV disease outbreaks in coastal Queensland using existing climatic, socio-economic and tidal conditions under climate change scenarios. Logistic regression models were developed using BFV disease outbreak data for the period 2000-2008. The most parsimonious model had high sensitivity, specificity and accuracy, and was used to estimate and forecast BFV disease outbreaks for the years 2025, 2050 and 2100 under climate change scenarios for Australia. Important contributions arising from this research are that: (i) it innovatively identifies high-risk coastal areas by creating buffers based on grid centroids and using fine-grained spatial units, i.e., mesh blocks; (ii) it uses a spatial regression method to account for spatial dependence and heterogeneity of the data in the study area; (iii) it determines a range of potential spatial risk factors for BFV disease; and (iv) it predicts the future risk of BFV disease outbreaks under climate change scenarios in Queensland, Australia. In conclusion, the thesis demonstrates that the distribution of BFV disease exhibits distinct spatial and temporal variation, influenced by a range of spatial risk factors including climatic, demographic, socio-economic, ecological and tidal variables. The thesis demonstrates that spatial regression methods can be applied to better understand the transmission dynamics of BFV disease and its risk factors.
The research findings show that disease notification data can be integrated with multi-factorial risk factor data to build predictive models and forecast future potential disease risks under climate change scenarios. This thesis may have implications for BFV disease control and prevention programs in Queensland.
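The outbreak-forecasting step above fits a logistic regression of outbreak occurrence on climatic and tidal predictors. The following is a minimal sketch of that kind of model on synthetic data; the predictor names mirror the abstract (rainfall, high tide), but the data, coefficients and fitting details are fabricated for illustration and are not the thesis's model.

```python
# Hedged sketch: logistic regression of outbreak occurrence on two synthetic
# predictors, fitted by plain gradient descent. Everything here is
# illustrative; the thesis's actual model, data and covariates differ.
import math
import random

random.seed(1)

def simulate(n=500):
    """Generate synthetic (features, outbreak) pairs with known signs:
    outbreaks more likely with rainfall, less likely with high tide."""
    data = []
    for _ in range(n):
        rainfall = random.uniform(0, 300)   # monthly rainfall, mm (synthetic)
        tide = random.uniform(0, 3)         # high-tide height, m (synthetic)
        logit = -4.0 + 0.02 * rainfall - 0.8 * tide
        p = 1 / (1 + math.exp(-logit))
        outbreak = 1 if random.random() < p else 0
        data.append(((rainfall / 300, tide / 3), outbreak))  # scale to [0, 1]
    return data

def fit(data, lr=0.5, epochs=2000):
    """Minimise logistic loss by batch gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        gw = [0.0, 0.0]
        gb = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y
            gw[0] += err * x[0]
            gw[1] += err * x[1]
            gb += err
        n = len(data)
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b

w, b = fit(simulate())
print("rainfall coef %+.2f, tide coef %+.2f" % (w[0], w[1]))
```

With enough data the fitted signs recover the simulated relationship (positive for rainfall, negative for tide), which is the qualitative pattern the abstract reports for coastal areas.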

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Whole-body computer control interfaces present new opportunities to engage children with games for learning. Stomp is a suite of educational games that use such a technology, allowing young children to use their whole bodies to interact with a digital environment projected on the floor. To maximise the effectiveness of this technology, tenets of self-determination theory (SDT) are applied to the design of Stomp experiences. By meeting user needs for competence, autonomy, and relatedness, our aim is to increase children's engagement with the Stomp learning platform. Analysis of Stomp's design suggests that these tenets are met. Observations from a case study of Stomp being used by young children show that they were highly engaged and motivated by it. This analysis demonstrates that continued application of SDT to Stomp will further enhance user engagement. It is also suggested that SDT, when applied more widely to other whole-body multi-user interfaces, could instil similar positive effects.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Small-angle and ultra-small-angle neutron scattering (SANS and USANS) measurements were performed on samples from the Triassic Montney tight gas reservoir in Western Canada in order to determine the applicability of these techniques for characterizing the full pore size spectrum and to gain insight into the nature of the pore structure and its control on permeability. The subject tight gas reservoir consists of a finely laminated siltstone sequence; extensive cementation and moderate clay content are the primary causes of low permeability. SANS/USANS experiments run at ambient pressure and temperature conditions on lithologically diverse sub-samples of three core plugs demonstrated that a broad pore size distribution could be interpreted from the data. Two interpretation methods were used to evaluate total porosity, pore size distribution and surface area, and the results were compared to independent estimates derived from helium porosimetry (connected porosity) and low-pressure N2 and CO2 adsorption (accessible surface area and pore size distribution). The pore structure of the three samples as interpreted from SANS/USANS is fairly uniform, with small differences in the small-pore range (<2000 Å), possibly related to differences in degree of cementation and mineralogy, in particular clay content. Total porosity interpreted from USANS/SANS is similar to (but systematically higher than) helium porosity measured on the whole core plug. Both methods were used to estimate the percentage of open porosity, expressed here as the ratio of connected porosity, as established from helium porosimetry, to total porosity, as estimated from SANS/USANS techniques. Open porosity appears to control permeability (determined using pressure- and pulse-decay techniques), with the highest permeability sample also having the highest percentage of open porosity.
Surface area, as calculated from low-pressure N2 and CO2 adsorption, is significantly less than surface area estimates from SANS/USANS, which is due in part to the limited accessibility of the gases to all pores. The similarity between the N2- and CO2-accessible surface areas suggests an absence of microporosity in these samples, in agreement with the SANS analysis. A core gamma ray profile run on the same core from which the core plug samples were taken correlates with profile permeability measurements run on the slabbed core. This correlation is related to clay content, which possibly controls the percentage of open porosity. Continued study of these effects will prove useful in log-core calibration efforts for tight gas.
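The open-porosity ratio described above is a simple quotient of the two porosity estimates. A minimal sketch, using placeholder values rather than the paper's measurements:

```python
# Minimal sketch of the open-porosity ratio used in the study: connected
# (helium) porosity divided by total (SANS/USANS) porosity. The sample
# values below are illustrative placeholders, not data from the paper.

def open_porosity_pct(helium_porosity: float, sans_total_porosity: float) -> float:
    """Percentage of total porosity that is connected (helium-accessible)."""
    if sans_total_porosity <= 0:
        raise ValueError("total porosity must be positive")
    return 100.0 * helium_porosity / sans_total_porosity

# Hypothetical core plugs: SANS/USANS total porosity is systematically
# higher than helium porosity, as the paper observes.
samples = {"plug_A": (0.048, 0.060), "plug_B": (0.035, 0.055)}
for name, (he_phi, sans_phi) in samples.items():
    print(name, round(open_porosity_pct(he_phi, sans_phi), 1), "% open")
```

Because helium only reaches connected pores while neutron scattering sees both connected and isolated porosity, the ratio is bounded above by 100%, and higher values would be expected to accompany higher permeability.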

Relevância:

30.00% 30.00%

Publicador:

Resumo:

As the first stage of power system restoration after a blackout, an optimal black-start scheme is very important for speeding up the whole restoration procedure. To date, much research has been done on generating or selecting an optimal black-start scheme through a single round of decision-making; less attention has been paid to improving the final decision-making results through a multiple-round decision-making procedure. In a group decision-making environment, the results evaluated by different black-start experts may differ significantly from one another. Thus, the consistency of black-start decision-making results can be regarded as an important indicator in assessing black-start group decision-making. Against this background, an intuitionistic fuzzy distance-based method is presented to analyse the consistency of black-start group decision-making results. Moreover, the weights of the black-start indices as well as the weights of the decision-making experts are modified in order to optimise the consistency of the group decision-making results. Finally, an actual example is presented to demonstrate the proposed method.
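One widely used distance between intuitionistic fuzzy evaluations is the normalized Hamming distance of Szmidt and Kacprzyk, which compares membership, non-membership and hesitation degrees element by element. The sketch below uses that distance to compare two experts' scores; the expert values are hypothetical, and the paper's exact distance measure and consistency index may differ.

```python
# Hedged sketch: normalized Hamming distance between intuitionistic fuzzy
# sets (Szmidt-Kacprzyk form), one plausible building block for the
# consistency analysis described. Expert scores below are hypothetical.

def ifs_hamming(a, b):
    """a, b: lists of (membership mu, non-membership nu) pairs, mu + nu <= 1.
    Returns the normalized Hamming distance in [0, 1]."""
    assert len(a) == len(b)
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(a, b):
        pi1 = 1 - mu1 - nu1   # hesitation degree of the first evaluation
        pi2 = 1 - mu2 - nu2   # hesitation degree of the second evaluation
        total += abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2)
    return total / (2 * len(a))

# Two experts' evaluations of three black-start indices (hypothetical):
expert1 = [(0.8, 0.1), (0.6, 0.3), (0.5, 0.4)]
expert2 = [(0.7, 0.2), (0.6, 0.2), (0.4, 0.4)]
print(round(ifs_hamming(expert1, expert2), 3))  # small distance = consistent
```

A consistency-optimisation procedure of the kind the paper describes could then adjust index or expert weights to shrink the pairwise distances between experts' evaluations.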