906 results for simple systems
Abstract:
Networked control over data networks has received increasing attention in recent years. Among the many problems in networked control systems (NCSs) is the need to reduce control latency and jitter and to deal with packet dropouts. This paper introduces our recent progress on a queuing communication architecture for real-time NCS applications, and simple strategies for dealing with packet dropouts. Case studies of a medium-scale process and of multiple small-scale processes are presented for TCP/IP-based real-time NCSs. Variations of network architecture design are modelled, simulated, and analysed to evaluate control latency and jitter performance. It is shown that a simple bandwidth upgrade or added hierarchy does not necessarily improve control latency and jitter. A co-design of network and control is necessary to maximise the real-time control performance of NCSs.
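As a toy illustration of how latency and jitter can be evaluated against network load in studies of this kind, the following sketch pushes periodic control packets through a single shared FIFO link with random cross-traffic. The link rate, packet size, and traffic model are illustrative assumptions, not the architecture studied in the paper.

```python
import random
import statistics

# Minimal sketch (not the paper's model): periodic control packets share a
# FIFO link with random cross-traffic; we estimate latency and jitter.
random.seed(1)

LINK_RATE = 1e6          # link bandwidth, bits/s (assumed)
CTRL_SIZE = 1000         # control packet size, bits (assumed)
PERIOD = 0.01            # control sampling period, s (assumed)

def simulate(cross_load):
    """Return (mean latency, jitter) for a given cross-traffic load in [0, 1)."""
    t_free = 0.0         # time at which the link next becomes idle
    latencies = []
    for k in range(5000):
        t = k * PERIOD
        # Cross-traffic queued ahead of this packet, as a random backlog (bits).
        backlog = random.expovariate(1.0) * cross_load * LINK_RATE * PERIOD
        start = max(t, t_free) + backlog / LINK_RATE
        done = start + CTRL_SIZE / LINK_RATE
        t_free = done
        latencies.append(done - t)
    return statistics.mean(latencies), statistics.stdev(latencies)

for load in (0.1, 0.5, 0.9):
    mean, jitter = simulate(load)
    print(f"load={load:.1f}  latency={mean*1e3:.2f} ms  jitter={jitter*1e3:.2f} ms")
```

Even in this toy model, jitter grows with cross-traffic load faster than a simple bandwidth increase can compensate, which is consistent with the abstract's observation that bandwidth upgrades alone do not guarantee better latency/jitter performance.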
Abstract:
Grid music systems provide discrete geometric methods for simplified music-making, using spatialised input to construct patterned music on a 2D matrix layout. While they are conceptually simple, grid systems may be layered to enable complex and satisfying musical results. Grid music systems have been applied at a range of scales, from small portable devices up to larger installations. In this paper we discuss the use of grid music systems in general and present an overview of the HarmonyGrid system we have developed as a new interactive performance system. We discuss a range of issues related to the design and use of larger-scale grid-based interactive performance systems such as the HarmonyGrid.
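To make the grid idea concrete, here is a minimal sketch with an assumed mapping (not the HarmonyGrid's actual one): columns of a 2D matrix act as time steps and rows select pitches, so toggling cells builds a repeating pattern.

```python
# Minimal sketch of a grid music system: an 8x8 on/off matrix where columns
# are time steps and rows are pitches from a pentatonic scale (assumptions,
# not the HarmonyGrid's actual mapping).
SCALE = [60, 62, 64, 67, 69, 72, 74, 76]    # MIDI notes, C major pentatonic

grid = [[False] * 8 for _ in range(8)]       # grid[row][col]
grid[0][0] = grid[2][2] = grid[4][4] = True  # toggle a few cells

def step_events(col):
    """MIDI notes sounding at time step `col`."""
    return [SCALE[row] for row in range(8) if grid[row][col]]

for col in range(8):                         # one pass over the pattern
    print(f"step {col}: notes {step_events(col)}")
```

Layering several such grids (one per musical parameter) is what gives grid systems their complexity despite the simplicity of each layer.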
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek an understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, 'Evaluating Information Systems', is broad, and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and that those resources need to be invested in a way that provides the greatest benefit to the organization. IT expenditure represents a substantial portion of any organization's investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment; evaluation is also important to prioritize possible improvements.

The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis - a 'management' emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS).

The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal - the Journal of Strategic Information Systems - to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real.

The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area, in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic 'Information Systems Success Measurement', then drills deeper to become even more focused in Part 4 on 'Benchmarking Information Systems'. In other words, the chapter drills down from Parts 1 and 2 (the value of IS), to Part 3 (measuring IS success), to Part 4 (benchmarking IS). While the commencing Parts (1 and 2) are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts (3 and 4) admittedly reflect the author's more specific interests. Thus the three chapter foci - value of IS, measuring IS success, and benchmarking IS - are not mutually exclusive; rather, each subsequent focus is in most respects a sub-set of the former.
Parts 1 and 2, 'the Value of IS', take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, 'Information System Success Measurement', focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs - System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use - later suggesting a seventh construct, Service Quality (DeLone and McLean 2003). These constructs have been used extensively, individually or in combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, 'Benchmarking Information Systems', drills deeper again, focusing more specifically on a measure of the IS that can be used as a 'benchmark'. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems. Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and system contexts.
Abstract:
This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah [1989. The dynamic effects of aggregate demand and supply disturbances. The American Economic Review 79, 655–673], and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto [2001. Estimating shocks and impulse response functions. Journal of Applied Econometrics 16, 371–387], Shapiro and Watson [1988. Sources of business cycle fluctuations. NBER Macroeconomics Annual 3, 111–148], King et al. [1991. Stochastic trends and economic fluctuations. American Economic Review 81, 819–840], Gali [1992. How well does the IS-LM model fit postwar US data? Quarterly Journal of Economics 107, 709–735; 1999. Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? American Economic Review 89, 249–271] and Fisher [2006. The dynamic effects of neutral and investment-specific technology shocks. Journal of Political Economy 114, 413–451].
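For orientation, the Blanchard-Quah scheme in the bivariate case identifies structural shocks through a long-run restriction on the moving-average representation. A standard statement (notation assumed here, not taken from the paper) is:

```latex
\begin{pmatrix} \Delta y_t \\ u_t \end{pmatrix}
  = C(L)\,\varepsilon_t
  = \sum_{j=0}^{\infty} C_j \,\varepsilon_{t-j},
\qquad
\varepsilon_t = \begin{pmatrix} \varepsilon_t^{s} \\ \varepsilon_t^{d} \end{pmatrix},
\qquad
C(1) = \sum_{j=0}^{\infty} C_j
  = \begin{pmatrix} c_{11} & 0 \\ c_{21} & c_{22} \end{pmatrix},
```

where the zero in C(1) says the transitory (demand) shock has no long-run effect on the level of output. The paper's generalization then shows that, in an error-correction setting, equations driven by known permanent shocks cannot contain error-correction terms, which frees those terms to serve as instruments.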
Abstract:
This paper presents a simple and intuitive approach to determining the kinematic parameters of a serial-link robot in Denavit–Hartenberg (DH) notation. Once a manipulator's kinematics is parameterized in this form, a large body of standard algorithms and code implementations for kinematics, dynamics, motion planning, and simulation becomes available. The proposed method has two parts. The first is the "walk through", a simple procedure that creates a string of elementary translations and rotations from the user-defined base coordinate frame to the end-effector. The second is an algebraic procedure that manipulates this string into a form that can be factorized as link transforms, which can be represented in standard or modified DH notation. The method allows for an arbitrary base and end-effector coordinate system, as well as an arbitrary zero-joint-angle pose. The algebraic procedure is amenable to computer algebra manipulation, and a Java program is available as supplementary downloadable material.
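A minimal numerical sketch of the "walk through" step for a planar two-revolute arm follows; the link lengths, joint angles, and the reduction of the DH link transform to Rz·Tx are assumptions for illustration, while the paper's actual contribution is the symbolic factorization of such strings into DH link transforms.

```python
import numpy as np

def rotz(q):
    """Homogeneous rotation about z by angle q."""
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def transx(a):
    """Homogeneous translation by a along x."""
    T = np.eye(4)
    T[0, 3] = a
    return T

a1, a2 = 1.0, 0.8      # link lengths (assumed)
q1, q2 = 0.3, -0.5     # joint angles (assumed)

# "Walk through" from base to end-effector as a string of elementary
# transforms: rotate q1, move a1 along x, rotate q2, move a2 along x.
T_walk = rotz(q1) @ transx(a1) @ rotz(q2) @ transx(a2)

# The standard DH link transform is Rz(theta) Tz(d) Tx(a) Rx(alpha); with
# d = alpha = 0 it reduces to Rz(theta) Tx(a), so this particular string
# factorizes directly into two DH links.
def dh_link(theta, a):
    return rotz(theta) @ transx(a)

T_dh = dh_link(q1, a1) @ dh_link(q2, a2)
assert np.allclose(T_walk, T_dh)
print(np.round(T_walk, 3))
```

For spatial mechanisms, the string also contains Tz and Rx factors, and rearranging it into the canonical DH ordering is exactly the algebraic step the paper automates.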
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For an ideally operating power system, these transient responses would have a "ring-down" time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such "poorly damped" modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but "negatively damped", catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One key limitation of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration), and a detection threshold is then set based on this statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

The Optimal Individual Mode Detector (OIMD): As discussed above, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that a change in any mode can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of the Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an "alarm" when a change occurs and to identify which modal frequency has given rise to the change. The alarm threshold is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15] (a minimal sketch of this whiteness test appears after this abstract). While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or a damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
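The following is a minimal sketch of the KID's whiteness-monitoring idea using a scalar AR(1) Kalman filter; the model, noise levels, window length, and threshold are illustrative assumptions, not the thesis's power-system formulation.

```python
import numpy as np

# Minimal sketch of the Kalman Innovation Detector idea: filter a signal
# with a fixed AR(1) Kalman model; when the true dynamics change, the
# innovations stop being white and their normalized squares exceed a
# chi-squared threshold.
rng = np.random.default_rng(0)

a_model = 0.9                    # AR(1) coefficient assumed by the filter
q, r = 0.1, 0.5                  # process / measurement noise variances

# Generate data whose dynamics change from a=0.9 to a=0.99 halfway through.
n = 2000
x = np.zeros(n)
for t in range(1, n):
    a_true = 0.9 if t < n // 2 else 0.99
    x[t] = a_true * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(n)

# Scalar Kalman filter with the fixed model; monitor innovations in windows.
xh, P = 0.0, 1.0
window, stat = 100, 0.0
for t in range(n):
    xh, P = a_model * xh, a_model**2 * P + q        # predict
    S = P + r                                       # innovation variance
    nu = y[t] - xh                                  # innovation
    K = P / S
    xh, P = xh + K * nu, (1 - K) * P                # update
    stat += nu**2 / S                               # sums to ~chi2(window)
    if (t + 1) % window == 0:
        # 99th percentile of chi2(100) is about 135.8 (tabulated value).
        flag = "ALARM" if stat > 135.8 else "ok"
        print(f"t={t+1:4d}  NIS sum={stat:7.1f}  {flag}")
        stat = 0.0
```

The full detector works in the frequency domain, so beyond raising the alarm it can also point to the modal frequency responsible, as described above.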
Abstract:
This paper investigates the control of an HVDC link, fed from an AC source through a controlled rectifier and feeding an AC line through a controlled inverter. The overall objective is to maintain the maximum possible link voltage at the inverter while regulating the link current. In this paper the practical feedback design issues are investigated with a view to obtaining simple, robust designs that are easy to evaluate for safety and operability. The investigations are applicable to back-to-back links used for frequency decoupling and to long DC lines. The design issues discussed include: (i) a review of overall system dynamics to establish the time scale of different feedback loops and to highlight feedback design issues; (ii) the concept of using the inverter firing angle control to regulate link current when the rectifier firing angle controller saturates; and (iii) the design issues for the individual controllers, including robust design for varying line conditions and the trade-off between controller complexity and the reduction of nonlinearity and disturbance effects.
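A minimal sketch of issue (ii), the current-control handoff: classical HVDC practice puts the rectifier on constant-current control and lets the inverter take over, with a small current margin, when the rectifier angle saturates. Gains, limits, names, and the sign conventions (glossed in the comments) below are illustrative assumptions, not the paper's design.

```python
# Sign convention assumed here: lowering the rectifier firing angle raises
# the DC voltage and hence the link current.
ALPHA_MIN, ALPHA_MAX = 5.0, 160.0   # rectifier firing-angle limits, deg
BETA_HOLD = 15.0                    # inverter angle for max link voltage, deg
I_MARGIN = 0.1                      # current margin, per unit

class PI:
    """PI controller with output clamping and simple anti-windup."""
    def __init__(self, kp, ki, lo, hi):
        self.kp, self.ki, self.lo, self.hi, self.i = kp, ki, lo, hi, 0.0
    def step(self, err, dt):
        self.i = min(max(self.i + self.ki * err * dt, self.lo), self.hi)
        return min(max(self.kp * err + self.i, self.lo), self.hi)

rect = PI(kp=30.0, ki=300.0, lo=ALPHA_MIN, hi=ALPHA_MAX)
inv = PI(kp=30.0, ki=300.0, lo=BETA_HOLD, hi=90.0)

def firing_commands(i_ref, i_meas, dt=1e-3):
    # Rectifier regulates current (current above order -> raise alpha).
    alpha = rect.step(i_meas - i_ref, dt)
    if alpha <= ALPHA_MIN + 1e-9:
        # Rectifier can no longer raise the current: the inverter controller
        # takes over, regulating to a slightly lower order (current margin).
        beta = inv.step(i_meas - (i_ref - I_MARGIN), dt)
    else:
        beta = BETA_HOLD    # inverter holds maximum-link-voltage operation
    return alpha, beta
```

The current margin prevents both converters from fighting over the same current order, which is the coordination problem the handoff concept addresses.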
Abstract:
With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies, and simulation has proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common; most applications focused on isolated parts of the railway system, and it is more appropriate to regard those applications as mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Advanced software design not only largely enhances the applicability of the simulators, but also encourages maintainability and modularity, for easy understanding and further development, and portability across hardware platforms. The objective of this paper is to review the development of a number of approaches to simulation models, with particular attention to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various 'what-if' issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
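For a flavour of the train-movement models reviewed, here is a minimal point-mass sketch with a Davis-type resistance curve, effort and speed limits, and Euler integration, producing a run time and energy figure of the "what-if" kind mentioned above. All coefficients are illustrative assumptions.

```python
# Minimal point-mass train-movement sketch (illustrative coefficients):
# tractive effort limited by adhesion and power, Davis resistance, and a
# speed restriction, integrated with simple Euler steps.
M = 200e3            # train mass, kg
P_MAX = 2e6          # power limit, W
F_MAX = 150e3        # adhesion-limited tractive effort, N
V_LIM = 25.0         # speed restriction, m/s
A, B, C = 2000.0, 30.0, 6.0   # Davis resistance r(v) = A + B*v + C*v^2, N

t, v, s, energy, dt = 0.0, 0.0, 0.0, 0.0, 0.5
while s < 5000.0:                          # run a 5 km section
    f = min(F_MAX, P_MAX / max(v, 0.1))    # effort: adhesion/power limited
    if v >= V_LIM:
        f = A + B * v + C * v * v          # cruise: balance resistance
    a = (f - (A + B * v + C * v * v)) / M
    v = max(0.0, v + a * dt)
    s += v * dt
    energy += f * v * dt
    t += dt
print(f"run time {t:.0f} s, energy {energy/3.6e6:.1f} kWh")
```

Real simulators of the kind reviewed couple such a movement model to power supply and traction drive models, which is where the subsystem interactions discussed above come in.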
Abstract:
This abstract explores the possibility of a grass-roots approach to engaging people in community change initiatives by designing simple interactive exploratory prototypes that support shared action, for use by communities over time. The prototype is gradually evolved in response to community use, fragments of data gathered through the prototype, and participant feedback, with the goal of building participation in community change initiatives. A case study of a system to support ridesharing is discussed, and the approach is compared and contrasted with a traditional IT systems procurement approach.
Abstract:
This paper presents the results of a pilot study examining the factors that impact most on the effective implementation of, and improvement to, Quality Management Systems (QMSs) amongst Indonesian construction companies. Nine critical factors were identified from an extensive literature review, and a survey was conducted of 23 respondents from three specific groups (Quality Managers, Project Managers, and Site Engineers) undertaking work in the Indonesian infrastructure construction sector. The data were initially analyzed using simple descriptive techniques. The study reveals that the different groups within the sector hold different opinions of the factors, whatever the degree of importance of each factor. However, the evaluation of construction project success and incentive schemes for high-performing staff are the two factors considered very important by most respondents in all three groups. In terms of their assessment of tools for measuring contractors' performance, additional QMS guidelines, techniques related to QMS practice provided by the Government, and benchmarking, a clear majority in each group regarded their usefulness as 'of some importance'.
Abstract:
A special transmit polarization signalling scheme is presented to alleviate the power reduction that results from polarization mismatch due to random antenna orientations. This is particularly useful for handheld mobile terminals, which are typically equipped with only a single linearly polarized antenna, since the average signal power is desensitized against receiver orientation. Numerical simulations also show adequate robustness against channel estimation errors.
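For background, the loss such a scheme combats is the standard polarization loss factor between two linearly polarized antennas misaligned by an angle θ (a textbook result, not taken from the paper):

```latex
\mathrm{PLF} \;=\; \lvert \hat{\rho}_{t} \cdot \hat{\rho}_{r} \rvert^{2} \;=\; \cos^{2}\theta ,
\qquad
\mathbb{E}_{\theta \sim \mathrm{U}[0,2\pi)}\!\left[\cos^{2}\theta\right] \;=\; \tfrac{1}{2} .
```

A fixed transmit polarization therefore averages a 3 dB loss over random orientations and fades completely at θ = 90°; a scheme that signals across transmit polarization states can avoid a persistent null at any single receiver orientation, which is the desensitization the abstract describes.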
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests and correctly screen out those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The interest of our approach is illustrated in a case study comparing several scoring systems. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcomes of scoring systems and to assess the accuracy of the decisions about positive and negative introductions. The results showed that ordinal scales with at most 5 or 6 points are sufficient and that multiplication-based scoring systems perform better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
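A minimal sketch of the evaluation principle, with a toy generative model in place of the paper's pest-introduction model: simulate latent risks, derive noisy ordinal item scores, combine them by sum and by product, and compare the resulting sensitivity and specificity.

```python
import numpy as np

# Toy evaluation of scoring systems (illustrative generative model, not the
# paper's): latent risks -> noisy ordinal item scores -> sum vs product
# combination -> sensitivity and specificity of the resulting decisions.
rng = np.random.default_rng(42)

n, items, levels = 10000, 4, 5
risk = rng.uniform(0, 1, n)                   # latent introduction risk
harmful = rng.uniform(0, 1, n) < risk         # "true" outcome

# Noisy ordinal scores in 1..levels, increasing with latent risk.
noise = rng.normal(0, 0.15, (n, items))
scores = np.clip(np.ceil((risk[:, None] + noise) * levels), 1, levels)

def sens_spec(stat, threshold):
    """Sensitivity and specificity of deciding 'harmful' when stat >= threshold."""
    pred = stat >= threshold
    sens = (pred & harmful).sum() / harmful.sum()
    spec = (~pred & ~harmful).sum() / (~harmful).sum()
    return sens, spec

s_sum = scores.sum(axis=1)
s_prod = scores.prod(axis=1)
print("sum:     sens=%.2f spec=%.2f" % sens_spec(s_sum, np.median(s_sum)))
print("product: sens=%.2f spec=%.2f" % sens_spec(s_prod, np.median(s_prod)))
```

Sweeping the threshold traces an ROC-style trade-off for each combination rule, which is how scoring systems with different scales and aggregation methods can be compared on equal footing.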
Abstract:
This research explores music in space, as experienced through performing and music-making with interactive systems. It explores how musical parameters may be presented spatially and displayed visually with a view to their exploration by a musician during performance. Spatial arrangements of musical components, especially pitches and harmonies, have been widely studied in the literature, but the current capabilities of interactive systems allow the improvisational exploration of these musical spaces as part of a performance practice. This research focuses on the quantised spatial organisation of musical parameters in what can be categorised as grid music systems (GMSs), and on interactive music systems based on them. The research explores and surveys existing and historical uses of GMSs, and develops and demonstrates the use of a novel grid music system designed for whole-body interaction. Grid music systems provide for the plotting of spatialised input to construct patterned music on a two-dimensional grid layout. GMSs are navigated to construct a sequence of parametric steps, for example a series of pitches, rhythmic values, a chord sequence, or terraced dynamic steps. While they are conceptually simple when controlling only one musical dimension, grid systems may be layered to enable complex and satisfying musical results. These systems have proved a viable, effective, accessible and engaging means of music-making for the general user as well as the musician. GMSs have been widely used in electronic and digital music technologies, where they have generally been applied to small portable devices and software systems such as step sequencers and drum machines. This research shows that by scaling up a grid music system, music-making and musical improvisation are enhanced, gaining several advantages: (1) Full-body location becomes the spatial input to the grid. The system becomes a partially immersive one in four related ways: spatially, graphically, sonically and musically. (2) Detection of body location by tracking enables hands-free operation, thereby allowing the playing of a musical instrument in addition to "playing" the grid system. (3) Visual information regarding musical parameters may be enhanced so that the performer may fully engage with existing spatial knowledge of musical materials. The result is that existing spatial knowledge is overlaid on, and combined with, music-space. Music-space is a new concept produced by the research. It is similar to notions of other musical spaces, including soundscape, acoustic space, Smalley's "circumspace" and "immersive space" (2007, 48-52), and Lotis's "ambiophony" (2003), but is rather more textural and "alive", and therefore very conducive to interaction. Music-space is that space occupied by music, set within normal space, which may be perceived by a person located within, or moving around in, that space. Music-space has a perceivable "texture" made of tensions and relaxations, and contains spatial patterns of these formed by musical elements such as notes, harmonies, and sounds, changing over time. The music may be performed by live musicians, created electronically, or be prerecorded. Large-scale GMSs have the capability not only to interactively display musical information as music-representative space, but to allow music-space to co-exist with it. Moving around the grid, the performer may interact in real time with musical materials in music-space, as they form over squares or move in paths. Additionally, the performer may sense the textural matrix of the music-space while immersed in surround sound covering the grid.

The HarmonyGrid is a new computer-based interactive performance system developed during this research that provides a generative music-making system intended to accompany, or play along with, an improvising musician. This large-scale GMS employs full-body motion tracking over a projected grid. Playing with the system creates an enhanced performance employing live interactive music along with graphical and spatial activity. Although one other experimental system provides certain aspects of immersive music-making, currently only the HarmonyGrid provides an environment in which to explore and experience music-space in a GMS.