91 results for Positional number systems
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
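To make the KID's alarm rule described above concrete, the minimal sketch below applies a chi-squared threshold to the normalised periodogram of a Kalman innovation sequence. It is an illustration only: the Kalman filter itself is omitted, and the 10 Hz sampling rate, window length and significance level are assumptions made for this example rather than values from the thesis.

```python
# Hedged sketch: a simplified whiteness test on Kalman innovations, in the spirit of the KID.
import numpy as np
from scipy.stats import chi2

def innovation_alarm(innovation, noise_var, alpha=0.01):
    """Flag spectral peaks in a Kalman innovation sequence.

    For a valid model the innovation is white, and each normalised periodogram
    bin 2*I(f)/noise_var is approximately chi-squared with 2 degrees of freedom.
    A bin exceeding the (1 - alpha) quantile suggests a modal change at that frequency.
    """
    n = len(innovation)
    spectrum = np.abs(np.fft.rfft(innovation - innovation.mean()))**2 / n
    normalised = 2.0 * spectrum / noise_var
    threshold = chi2.ppf(1.0 - alpha / len(normalised), df=2)  # Bonferroni-style correction
    freqs = np.fft.rfftfreq(n, d=0.1)             # assumes 10 Hz sampling
    peaks = freqs[normalised > threshold]
    return len(peaks) > 0, peaks                  # (alarm, offending modal frequencies)

# Example: white innovation (no alarm) versus an innovation with a 0.5 Hz mode leaking through.
rng = np.random.default_rng(0)
white = rng.normal(0, 1, 600)
t = np.arange(600) * 0.1
faulty = white + 1.5 * np.sin(2 * np.pi * 0.5 * t)
print(innovation_alarm(white, 1.0)[0], innovation_alarm(faulty, 1.0)[0])
```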
Abstract:
This paper describes a number of techniques for GNSS navigation message authentication, and gives a detailed analysis of the security that navigation message authentication provides. The analysis takes into consideration the risks to critical applications that rely on GPS, including transportation, finance and telecommunication networks. We propose a number of cryptographic authentication schemes for navigation data; these schemes provide authenticity and integrity of the navigation data to the receiver. The performance of the schemes is quantified through software simulation, which enables the collection of authentication performance data for different data channels and assessment of the impact of the various schemes on the infrastructure and the receiver. Navigation message authentication schemes have been simulated at the proposed data rates of the Galileo and GPS services, and the resulting performance data are presented. The paper concludes with recommendations for the optimal implementation of navigation message authentication for Galileo and next-generation GPS systems.
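As a deliberately simplified illustration of what navigation message authentication involves at the receiver, the sketch below signs and verifies a navigation data block with ECDSA over P-256 using the Python cryptography library. The curve, hash and message layout are assumptions for illustration; they are not the specific schemes proposed or the data rates evaluated in the paper.

```python
# Hedged sketch: digital-signature authentication of a navigation message.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# System operator side: sign the broadcast navigation data (placeholder message layout).
private_key = ec.generate_private_key(ec.SECP256R1())
nav_message = b"ephemeris|clock-correction|ionospheric-model|week:2090|tow:345600"
signature = private_key.sign(nav_message, ec.ECDSA(hashes.SHA256()))

# Receiver side: verify authenticity and integrity before using the data.
public_key = private_key.public_key()
try:
    public_key.verify(signature, nav_message, ec.ECDSA(hashes.SHA256()))
    print("navigation message authenticated")
except InvalidSignature:
    print("authentication failed - discard message")
```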
Abstract:
With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively in a wide variety of system studies. Simulation is now proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design, but maintainability and modularity for easy understanding and further development, and portability across hardware platforms, are also promoted. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is, in particular, given to models for train movement, power supply systems and traction drives. These models have been successfully used to enable various ‘what-if’ issues to be resolved effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
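By way of illustration, the sketch below shows the core of a single-train movement simulation of the kind reviewed in the paper: numerical integration of train motion under a tractive-effort curve, a Davis-type resistance formula and a speed restriction. All coefficients and the control logic are placeholder assumptions, not parameters of any model described in the paper.

```python
# Hedged sketch: a minimal single-train movement simulation step.
def simulate_run(mass_kg=400e3, v_limit=25.0, track_length_m=2000.0, dt=0.5):
    """Integrate train motion with a simple tractive-effort and resistance model."""
    def tractive_effort(v):                      # constant-power region above 10 m/s (assumed)
        return 300e3 if v < 10.0 else 3.0e6 / v  # newtons
    def resistance(v):                           # Davis-type formula, illustrative coefficients
        return 4000.0 + 120.0 * v + 7.0 * v**2   # newtons
    t, v, x, log = 0.0, 0.0, 0.0, []
    while x < track_length_m:
        force = tractive_effort(v) if v < v_limit else 0.0   # coast once the speed limit is reached
        a = (force - resistance(v)) / mass_kg
        v = max(0.0, v + a * dt)
        x += v * dt
        t += dt
        log.append((t, x, v))
    return log

profile = simulate_run()
print(f"run time ~{profile[-1][0]:.0f} s, arrival speed {profile[-1][2]:.1f} m/s")
```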
Abstract:
A composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the emission distribution of real vehicle flow accurately. Hence, the model was able to quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on the vehicle fleet information, which not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure, as well as the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used to provide initial source definitions in future dispersion models.
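The composite line source idea can be sketched as follows: each segment's emissions are the sum, over driving modes, of the time vehicles spend in that mode multiplied by a mode-specific emission rate. The segment layout, dwell times and emission rates below are invented for illustration and are not the values used in the bus station study.

```python
# Hedged sketch of composite line source accounting per segment.
SEGMENTS = {                       # seconds spent per mode in each segment (assumed layout)
    "queue (idle)":       {"idle": 30.0, "accelerate": 0.0},
    "platform rear":      {"idle": 20.0, "accelerate": 2.0},
    "platform front":     {"idle": 5.0,  "accelerate": 8.0},
    "departure (cruise)": {"cruise": 6.0},
}
EMISSION_RATES = {                 # particle number per second per vehicle (placeholder values)
    "idle": 1.0e11, "accelerate": 2.0e13, "decelerate": 5.0e11, "cruise": 4.0e12,
}

def segment_emissions(vehicles_per_hour=60):
    """Particle-number emissions per hour in each segment of the composite line source."""
    totals = {}
    for segment, dwell in SEGMENTS.items():
        per_vehicle = sum(EMISSION_RATES[mode] * seconds for mode, seconds in dwell.items())
        totals[segment] = per_vehicle * vehicles_per_hour
    return totals

for segment, total in segment_emissions().items():
    print(f"{segment:>20}: {total:.2e} particles/h")
```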
Abstract:
A number of instrumented laboratory-scale soil embankment slopes were subjected to artificial rainfall until they failed. The factor of safety of each slope, based on real-time measurements of pore-water pressure (suction) and laboratory-measured soil properties, was calculated as the rainfall progressed. Based on the experimental measurements and slope stability analysis, it was observed that slope displacement measurements can be used to provide more accurate warning of slope failure. Further, moisture content/pore-water pressure measurements near the toe of the slope and the real-time factor of safety can also be used to predict rainfall-induced embankment failures with adequate accuracy.
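As an indication of how a real-time factor of safety can be computed from suction measurements, the sketch below uses the extended Mohr-Coulomb criterion for an infinite slope. The geometry and soil parameters are placeholders, and the paper's actual stability analysis method may differ.

```python
# Hedged sketch: factor of safety of an unsaturated infinite slope updated from measured suction.
import math

def factor_of_safety(suction_kpa, c_eff=2.0, phi_eff=30.0, phi_b=15.0,
                     unit_weight=18.0, depth=0.5, slope_deg=50.0):
    """FS = [c' + (sigma_n - u_a) tan(phi') + suction tan(phi_b)] / (gamma z sin(b) cos(b))."""
    b = math.radians(slope_deg)
    sigma_n = unit_weight * depth * math.cos(b) ** 2           # kPa, normal stress (u_a taken as 0)
    shear_stress = unit_weight * depth * math.sin(b) * math.cos(b)
    shear_strength = (c_eff + sigma_n * math.tan(math.radians(phi_eff))
                      + suction_kpa * math.tan(math.radians(phi_b)))
    return shear_strength / shear_stress

# As rainfall infiltrates, measured suction decays and the factor of safety drops.
for suction in (50.0, 20.0, 5.0, 0.0):
    print(f"suction {suction:4.0f} kPa -> FS = {factor_of_safety(suction):.2f}")
```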
Abstract:
This paper investigates the current turbulent state of copyright in the digital age, and explores the viability of alternative compensation systems that aim to achieve the same goals with fewer negative consequences for consumers and artists. To sustain existing business models associated with creative content, increased recourse to DRM (Digital Rights Management) technologies, designed to restrict access to and usage of digital content, is well underway. Considerable technical challenges associated with DRM systems necessitate increasingly aggressive recourse to the law. A number of controversial aspects of copyright enforcement are discussed and contrasted with those inherent in levy based compensation systems. Lateral exploration of the copyright dilemma may help prevent some undesirable societal impacts, but with powerful coalitions of creative, consumer electronics and information technology industries having enormous vested interest in current models, alternative schemes are frequently treated dismissively. This paper focuses on consideration of alternative models that better suit the digital era whilst achieving a more even balance in the copyright bargain.
Abstract:
The development of effective safety regulations for unmanned aircraft systems (UAS) is an issue of paramount concern for industry. The development of this framework is a prerequisite for greater UAS access to civil airspace and, subsequently, the continued growth of the UAS industry. The direct use of the existing conventionally piloted aircraft (CPA) airworthiness certification framework for the regulation of UAS has a number of limitations. The objective of this paper is to present one possible approach for the structuring of airworthiness regulations for civilian UAS. The proposed approach facilitates a more systematic, objective and justifiable method for managing the spectrum of risk associated with the diversity of UAS and their potential operations. A risk matrix is used to guide the development of an airworthiness certification matrix (ACM). The ACM provides a structured categorisation that facilitates the future tailoring of regulations proportionate to the levels of risk associated with the operation of the UAS. As a result, an objective and traceable link may be established between mandated regulations and the overarching objective for an equivalent level of safety to CPA. The ACM also facilitates the systematic consideration of a range of technical and operational mitigation strategies. For these reasons, the ACM is proposed as a suitable method for the structuring of an airworthiness certification framework for civil or commercially operated UAS (i.e., the UAS equivalent in function to the Part 21 regulations for civil CPA) and for the further structuring of requirements on the operation of UAS in un-segregated airspace.
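The sketch below illustrates, in simplified form, how a risk matrix can map an operation's severity and likelihood onto an airworthiness certification category in the spirit of the ACM. The scales, scores and category labels are invented for illustration and do not reproduce the matrix proposed in the paper.

```python
# Hedged sketch: an illustrative 5x5 risk matrix feeding a certification category lookup.
SEVERITY = {"negligible": 0, "minor": 1, "major": 2, "hazardous": 3, "catastrophic": 4}
LIKELIHOOD = {"extremely improbable": 0, "improbable": 1, "remote": 2, "probable": 3, "frequent": 4}

def certification_category(severity, likelihood):
    """Return an illustrative airworthiness certification category from the risk matrix."""
    score = SEVERITY[severity] + LIKELIHOOD[likelihood]
    if score <= 2:
        return "Category I  - minimal certification requirements"
    if score <= 4:
        return "Category II - standard requirements plus operational mitigations"
    if score <= 6:
        return "Category III - CPA-equivalent airworthiness requirements"
    return "Category IV - operation not permitted without design change"

print(certification_category("major", "remote"))            # e.g. small UAS over a sparsely populated area
print(certification_category("catastrophic", "probable"))   # e.g. large UAS over a populated area
```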
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests and correctly identify those that are not harmful. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The value of our approach was illustrated in a case study where several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
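The assessment principle can be sketched as follows: simulate pests whose introduction outcome is known from a toy stochastic model, combine ordinal component scores by summation or by multiplication, and compare the sensitivity and specificity of the resulting decisions. The generative model, thresholds and scale below are illustrative stand-ins for the probabilistic introduction model used in the article.

```python
# Hedged sketch: comparing sum-based and product-based scoring systems on simulated data.
import numpy as np

rng = np.random.default_rng(1)
N_PESTS, N_COMPONENTS, SCALE_POINTS = 5000, 4, 5

# Toy generative model: each component score (1..5) maps to a probability of passing that
# step of the introduction pathway; introduction occurs only if every step is passed.
scores = rng.integers(1, SCALE_POINTS + 1, size=(N_PESTS, N_COMPONENTS))
step_prob = scores / SCALE_POINTS
introduced = (rng.random(scores.shape) < step_prob).all(axis=1)

def sensitivity_specificity(risk_index, threshold):
    flagged = risk_index >= threshold
    sens = (flagged & introduced).sum() / introduced.sum()
    spec = (~flagged & ~introduced).sum() / (~introduced).sum()
    return sens, spec

sum_index = scores.sum(axis=1)       # sum-based combination of component scores
prod_index = scores.prod(axis=1)     # multiplication-based combination
print("sum-based:     sens=%.2f spec=%.2f" % sensitivity_specificity(sum_index, 16))
print("product-based: sens=%.2f spec=%.2f" % sensitivity_specificity(prod_index, 250))
```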
Abstract:
Social tags in Web 2.0 are becoming another important information source for describing the content of items as well as for profiling users’ topic preferences. However, as arbitrary words given by users, tags contain a lot of noise, such as tag synonyms, semantic ambiguity, and a large number of personal tags used by only one user, which makes it challenging to use tags effectively for item recommendation. To solve these problems, this paper proposes to use a set of related tags, along with their weights, to represent the semantic meaning of each tag for each user individually. Hybrid recommendation generation approaches based on the weighted tags are proposed. We have conducted experiments using a real-world dataset obtained from Amazon.com. The experimental results show that the proposed approaches outperform other state-of-the-art approaches.
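A minimal sketch of the weighted-tag idea is given below: a user's tag is expanded into a set of related tags weighted by item-level co-occurrence, and items are then scored against the expanded profile. The toy data, weighting rule and scoring rule are simplifications for illustration, not the paper's exact hybrid algorithm.

```python
# Hedged sketch: tag expansion via co-occurrence and item scoring with weighted tags.
from collections import Counter, defaultdict

ITEM_TAGS = {                      # toy catalogue (assumed data)
    "item1": {"python", "programming", "tutorial"},
    "item2": {"python", "data", "statistics"},
    "item3": {"cooking", "recipes"},
    "item4": {"programming", "java"},
}

def expand_tag(tag):
    """Represent a single tag as weighted related tags via item-level co-occurrence."""
    cooc = Counter()
    for tags in ITEM_TAGS.values():
        if tag in tags:
            cooc.update(t for t in tags if t != tag)
    total = sum(cooc.values()) or 1
    weights = {t: c / total for t, c in cooc.items()}
    weights[tag] = 1.0                       # the tag itself keeps full weight
    return weights

def recommend(user_tags, top_n=2):
    """Score items by overlap between the user's expanded tag profile and item tags."""
    profile = defaultdict(float)
    for tag in user_tags:
        for related, w in expand_tag(tag).items():
            profile[related] += w
    scores = {item: sum(profile.get(t, 0.0) for t in tags) for item, tags in ITEM_TAGS.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"python"}))    # e.g. ['item1', 'item2']
```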
Abstract:
This paper presents an approach to predicting the operating conditions of machines based on classification and regression trees (CART) and an adaptive neuro-fuzzy inference system (ANFIS), in association with a direct prediction strategy for multi-step-ahead prediction of time series. In this study, the number of available observations and the number of predicted steps are initially determined using the false nearest neighbour method and the auto mutual information technique, respectively. These values are subsequently utilised as inputs for the prediction models to forecast the future values of the machines’ operating conditions. The performance of the proposed approach is then evaluated using real trending data from a low-methane compressor. A comparative study of the predicted results obtained from the CART and ANFIS models is also carried out to appraise the prediction capability of these models. The results show that the ANFIS prediction model can track changes in machine conditions and has the potential to be used as a tool for machine fault prognosis.
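The direct multi-step-ahead strategy can be sketched with scikit-learn's CART implementation: one regression tree is trained per prediction step, each mapping the most recent lagged values to the value h steps ahead. The lag and horizon values below are placeholders; the paper determines them with the false nearest neighbour and auto mutual information methods, which are not reproduced here, and the ANFIS model is likewise omitted.

```python
# Hedged sketch: direct multi-step-ahead forecasting with one regression tree per step.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def direct_forecast(series, n_lags=6, horizon=5):
    """Train one tree per prediction step h, each mapping the last n_lags values to x[t+h]."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags - horizon)])
    forecasts = []
    last_window = series[-n_lags:].reshape(1, -1)
    for h in range(1, horizon + 1):
        y = series[n_lags + h - 1: n_lags + h - 1 + len(X)]   # target h steps ahead of each window
        model = DecisionTreeRegressor(max_depth=5).fit(X, y)
        forecasts.append(model.predict(last_window)[0])
    return forecasts

# Toy "machine condition" trend: slow drift plus a periodic component plus noise.
t = np.arange(500)
series = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 50) + 0.05 * np.random.default_rng(2).normal(size=500)
print(direct_forecast(series))
```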
Abstract:
This paper describes and evaluates the novel utility of network methods for understanding human interpersonal interactions within social neurobiological systems such as sports teams. We show how collective system networks are supported by the sum of interpersonal interactions that emerge from the activity of system agents (such as players in a sports team). To test this idea we trialled the methodology in analyses of intra-team collective behaviours in the team sport of water polo. We observed that the number of interactions between team members resulted in varied intra-team coordination patterns of play, differentiating between successful and unsuccessful performance outcomes. Future research on small-world network methodologies needs to formalise measures of node connections in analyses of collective behaviours in sports teams, to verify whether a high frequency of interactions between players is needed in order to achieve competitive performance outcomes.
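As a simple illustration of the network representation, the sketch below builds an interaction (passing) matrix for a hypothetical team and computes per-player interaction counts and overall network density. The player labels and counts are invented and do not correspond to the water polo data analysed in the paper.

```python
# Hedged sketch: a team interaction network from a passing matrix, with basic node-connection measures.
import numpy as np

players = ["GK", "D1", "D2", "CF", "W1", "W2", "P"]
# passes[i, j] = passes from players[i] to players[j] during one possession phase (toy data)
passes = np.array([
    [0, 3, 2, 0, 1, 1, 0],
    [2, 0, 4, 1, 2, 0, 0],
    [1, 3, 0, 2, 0, 1, 0],
    [0, 1, 1, 0, 2, 2, 3],
    [1, 2, 0, 3, 0, 1, 1],
    [0, 0, 1, 2, 1, 0, 2],
    [0, 0, 0, 2, 1, 1, 0],
])

interactions = passes + passes.T                     # undirected interaction counts
degree = interactions.sum(axis=1)                    # total interactions per player
density = np.count_nonzero(interactions) / (len(players) * (len(players) - 1))

for name, d in zip(players, degree):
    print(f"{name}: {d} interactions")
print(f"network density: {density:.2f}")
```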
Abstract:
Increasingly, national and international governments have a strong mandate to develop national e-health systems to enable delivery of much-needed healthcare services. Research is therefore needed into appropriate security and reliance structures for the development of health information systems that must comply with governmental and similar obligations. The protection of e-health information security is critical to the successful implementation of any e-health initiative. To address this, this paper proposes a security architecture for index-based e-health environments, in accordance with the broad outline of Australia’s National E-health Strategy and the National E-health Transition Authority (NEHTA)’s Connectivity Architecture. The proposal could, however, be equally applied to any distributed, index-based health information system involving referencing to disparate health information systems. The practicality of the proposed security architecture is supported by an experimental demonstration. This successful prototype completion demonstrates the comprehensibility of the proposed architecture, and the clarity and feasibility of the system specifications, in enabling ready development of such a system. The test vehicle has also indicated a number of parameters that need to be considered in the design of any national index-based e-health system with reasonable levels of system security. The paper also identifies the need for evaluation of the levels of education, training, and expertise required to create such a system.
Abstract:
This study examines the impact of utilising a Decision Support System (DSS) in a practical health planning study. Specifically, it presents a real-world case of a community-based initiative aiming to improve overall public health outcomes. Previous studies have emphasised that, because of a lack of effective information systems and an absence of frameworks for making informed decisions in health planning, it has become imperative to develop innovative approaches and methods in health planning practice. Online Geographical Information Systems (GIS) have been suggested as one of the innovative methods that can inform decision-makers and improve the overall health planning process. However, a number of gaps in knowledge have been identified within health planning practice: a lack of methods to develop these tools in a collaborative manner; a lack of capacity among health decision-makers to use GIS applications; and a lack of understanding about the potential impact of such systems on users. This study addresses the abovementioned gaps and introduces an online GIS-based Health Decision Support System (HDSS), which has been developed to improve collaborative health planning in the Logan-Beaudesert region of Queensland, Australia. The study demonstrates the participatory and iterative approach undertaken to design and develop the HDSS. It then explores the perceived user satisfaction and impact of the tool on a selected group of health decision-makers. Finally, it illustrates how decision-making processes have changed since its implementation. The overall findings suggest that the online GIS-based HDSS is an effective tool with the potential to play an important role in improving local community health planning practice in the future. However, the findings also indicate that decision-making processes are not merely informed by using the HDSS tool; the tool also appears to enhance the overall sense of collaboration in health planning practice. Thus, to support the Healthy Cities approach, communities will need to encourage decision-making based on the use of evidence, participation and consensus, which subsequently transfers into informed actions.
Abstract:
In an age where digital innovation knows no boundaries, research in the area of brain-computer interfaces and other neural interface devices goes where none has gone before. The possibilities are endless and, as dreams become reality, the implications of these amazing developments should be considered. Some of these new devices have been created to correct or minimise the effects of disease or injury, so the paper discusses some of the current research and development in the area, including neuroprosthetics. To assist researchers and academics in identifying some of the legal and ethical issues that might arise as a result of research and development of neural interface devices, using both non-invasive techniques and invasive procedures, the paper discusses a number of recent observations of authors in the field. The issue of enhancing human attributes by incorporating these new devices is also considered. Such enhancement may be regarded as freeing the mind from the constraints of the body, but there are legal and moral issues that researchers and academics would be well advised to contemplate as these new devices are developed and used. While different fact situations surround each of these new devices, and those that are yet to come, consideration of the legal and ethical landscape may assist researchers and academics in dealing effectively with matters that arise in these times of transition. Lawyers could seek to facilitate the resolution of legal disputes that arise in this area of research and development within the existing judicial and legislative frameworks. Whether these frameworks will suffice, or will need to change in order to enable effective resolution, is a broader question to be explored.
Abstract:
Fiber Bragg grating (FBG) sensor technology has been attracting substantial industrial interest for the last decade. FBG sensors have seen increasing acceptance and widespread use for structural sensing and health monitoring applications in composites, civil engineering, aerospace, marine, oil & gas, and smart structures. One transportation system that has benefitted tremendously from this technology is railways, where it is of the utmost importance to understand the structural and operating conditions of rails, as well as those of freight and passenger service cars, to ensure safe and reliable operation. Fiber-optic sensors, mostly in the form of FBGs, offer various important characteristics, such as EMI/RFI immunity, multiplexing capability, and very long-range interrogation (up to 230 km between the FBGs and the measurement unit), over conventional electrical sensors for the distinctive operational conditions in railways. FBG sensors are unique among fiber-optic sensors in that the measured information is wavelength-encoded, which provides self-referencing and renders their signals less susceptible to intensity fluctuations. In addition, FBGs are reflective sensors that can be interrogated from either end, providing redundancy to FBG sensing networks. These two unique features are particularly important for the railway industry, where safe and reliable operations are the major concerns. Furthermore, FBGs are very versatile, and transducers based on FBGs can be designed to measure a wide range of parameters such as acceleration and inclination. Consequently, a single interrogator can deal with a large number of FBG sensors to measure a multitude of parameters at different locations spanning a large area.
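As an example of how wavelength-encoded measurements are converted into engineering quantities, the sketch below applies the standard first-order Bragg-wavelength relation to recover strain after compensating for temperature. The coefficients are typical textbook values for silica fibre at 1550 nm, not values taken from this article.

```python
# Hedged sketch: converting an FBG Bragg-wavelength shift into strain using
# d(lambda)/lambda = (1 - p_e) * strain + thermal_coeff * dT.
def strain_from_shift(wavelength_shift_nm, delta_temp_c=0.0,
                      bragg_wavelength_nm=1550.0, photoelastic_coeff=0.22,
                      thermal_coeff_per_c=6.7e-6):
    """Return strain (in microstrain) after removing the temperature contribution."""
    relative_shift = wavelength_shift_nm / bragg_wavelength_nm
    thermal_part = thermal_coeff_per_c * delta_temp_c
    strain = (relative_shift - thermal_part) / (1.0 - photoelastic_coeff)
    return strain * 1e6           # microstrain

# A 0.12 nm shift at 1550 nm with a 5 degC rise corresponds to roughly 56 microstrain.
print(f"{strain_from_shift(0.12, delta_temp_c=5.0):.1f} microstrain")
```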