886 results for operating systems
Abstract:
Information Systems researchers have employed a diversity of sometimes inconsistent measures of IS success, seldom explicating the rationale, thereby complicating the choice for future researchers. In response to these and other issues, Gable, Sedera and Chan introduced the IS-Impact measurement model. This model represents “the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. Although the IS-Impact model was rigorously validated in previous research, there is a need to further generalise and validate it in different contexts. This paper reports the findings of an IS-Impact model revalidation study at four state governments in Malaysia, with 232 users of a financial system that is currently used at eleven state governments in Malaysia. Data were analysed following the guidelines for formative measurement validation using SmartPLS. The PLS results supported the IS-Impact dimensions and measures, confirming the validity of the IS-Impact model in Malaysia. This indicates that the IS-Impact model is robust and can be used across different contexts.
Abstract:
Pipe insulation between the collector and storage tank on pumped storage (commonly called split) solar water heaters can be subject to high temperatures, with a maximum equal to the collector stagnation temperature. The frequency of occurrence of these temperatures depends on many factors, including climate, hot water demand, system size and efficiency. This paper outlines the findings of a computer modelling study to quantify the frequency of occurrence of pipe temperatures of 80 degrees Celsius or greater at the outlet of the collectors for these systems. The study will help insulation suppliers determine the suitability of their materials for this application. The TRNSYS program was used to model the performance of a common size of domestic split solar system, using both flat plate and evacuated tube selective surface collectors. Each system was modelled at a representative city in each of the 6 climate zones for Australia and New Zealand, according to AS/NZS4234 - Heated Water Systems - Calculation of energy consumption, and the ORER RECs calculation method. TRNSYS was used to predict the frequency of occurrence of the temperatures that the pipe insulation would be exposed to over an average year, for the hot water consumption patterns specified in AS/NZS4234, and for worst case conditions in each of the climate zones. The results show: * For selectively surfaced, flat plate collectors in the hottest location (Alice Springs) with a medium size hot water demand according to AS/NZS4234, the annual frequency of occurrence of temperatures at and above 80 degrees Celsius was 33 hours. The frequency of temperatures at and above 140 degrees Celsius was insignificant. * For evacuated tube collectors in the hottest location (Alice Springs), the annual frequency of temperatures at and above 80 degrees Celsius was 50 hours. Temperatures at and above 140 degrees Celsius were significant and were estimated to occur for more than 21 hours per year in this climate zone. Even in Melbourne, temperatures at and above 80 degrees Celsius can occur for 12 hours per year, and at and above 140 degrees Celsius for 5 hours per year. * The worst case identified was for evacuated tube collectors in Alice Springs, with mostly afternoon loads in January. Under these conditions, the frequency of temperatures at and above 80 degrees Celsius was 10 hours for this month alone, and temperatures at and above 140 degrees Celsius were predicted to occur for 5 hours in January.
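The exceedance figures quoted above come from binning simulated temperatures against thresholds. As a minimal sketch of that post-processing step (not the TRNSYS workflow itself, and with an invented hourly temperature series), counting hours at or above each threshold looks like:

```python
def exceedance_hours(hourly_temps, thresholds=(80.0, 140.0)):
    """Count how many hourly samples sit at or above each threshold."""
    return {t: sum(1 for temp in hourly_temps if temp >= t)
            for t in thresholds}

# Hypothetical hourly collector-outlet temperatures (degrees Celsius).
year = [60.0] * 8700 + [85.0] * 40 + [150.0] * 20
print(exceedance_hours(year))  # hours at/above 80 and at/above 140
```

With real simulation output, the same counting yields the annual frequency-of-occurrence figures reported for each climate zone.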
Abstract:
Appropriate pipe insulation on domestic, pumped storage (split) solar water heating systems forms an integral part of the energy conservation measures of well engineered systems. However, its importance over the life of the system is often overlooked. This study outlines the findings of computer modelling to quantify the energy and cost savings obtained by using pipe insulation between the collector and storage tank. Systems with a 270 litre storage tank, together with either selectively surfaced flat plate collectors (4 m2 area) or 30 evacuated tube collectors, were used. Insulation thicknesses of 13 mm and 15 mm, pipe runs both ways of 10, 15 and 20 metres, and both electric and gas boosting were all considered. The TRNSYS program was used to model the system performance at a representative city in each of the 6 climate zones for Australia and New Zealand, according to AS/NZS4234 – Heated Water Systems – Calculation of energy consumption and the ORER RECs calculation method. The results show: Energy savings from pipe insulation are very significant, even in mild climates such as Rockhampton. Across all climate zones, savings ranged from 0.16 to 3.5 GJ per system per year, or about 2 to 23 percent of the annual load. There is very little advantage in increasing the insulation thickness from 13 to 15 mm. For electricity at 19 c/kWh and gas at 2 c/MJ, cost savings of between $27 and $100 per year are achieved across the climate zones. Both energy and cost savings would increase in colder climates with increased system size, solar contribution and water temperatures. The pipe insulation substantially improves the solar contribution (or fraction) and Renewable Energy Certificates (RECs), as well as giving small savings in circulating pump running costs in milder climates. Solar contribution increased by up to 23 percentage points and RECs by over 7 in some cases.
The study highlights the need to install and maintain the integrity of appropriate pipe insulation on solar water heaters over their lifetime in Australia and New Zealand.
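The finding that 15 mm insulation saves little over 13 mm is consistent with the logarithmic thermal resistance of a cylindrical layer. A conduction-only sketch (film and pipe-wall resistances neglected; the conductivity, pipe radius and temperature difference below are illustrative assumptions, not values from the study):

```python
import math

def pipe_loss_w_per_m(k_ins, r_pipe, t_ins, delta_t):
    """Conduction-only heat loss per metre of insulated pipe (W/m):
    q = 2*pi*k*dT / ln(r_outer / r_inner)."""
    return 2 * math.pi * k_ins * delta_t / math.log((r_pipe + t_ins) / r_pipe)

# Illustrative values: k = 0.04 W/m.K, 10 mm pipe radius, 40 K difference.
q13 = pipe_loss_w_per_m(0.04, 0.010, 0.013, 40.0)
q15 = pipe_loss_w_per_m(0.04, 0.010, 0.015, 40.0)
print(round(q13, 2), round(q15, 2))  # the extra 2 mm trims the loss only slightly
```

Because the outer radius enters through a logarithm, each extra millimetre of insulation buys progressively less, which matches the reported 13 mm versus 15 mm result.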
Abstract:
In this paper a new approach is proposed for interpreting regional frequencies in multi-machine power systems. The method uses generator aggregation and system reduction based on the coherent generators in each area. The structure of the reduced system can be identified, and a Kalman estimator is designed for the reduced system to estimate the inter-area modes from synchronized phasor measurement data. The proposed method is tested on a six-machine, three-area test system, and the results show that the inter-area oscillations are estimated with high accuracy.
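The estimator in the paper is designed for the reduced multi-machine model; as a much-simplified illustration of the same predict-and-update filtering idea (a scalar random-walk state with synthetic measurements, not the paper's design), a Kalman loop looks like:

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: q is process noise variance,
    r is measurement noise variance, x0/p0 the initial state estimate
    and its variance."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q               # predict: state unchanged, uncertainty grows
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the innovation
        p *= (1 - k)         # shrink uncertainty
        estimates.append(x)
    return estimates

# Hypothetical noiseless measurements of a constant mode amplitude.
print(kalman_1d([1.0] * 50)[-1])  # converges toward 1.0
```

The paper's estimator works on a vector state identified from the coherency-reduced system, but the gain-weighted correction step is the same.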
Abstract:
Proposed transmission smart grids will use a digital platform for the automation of substations operating at voltage levels of 110 kV and above. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850-8-1 and IEC 61850-9-2 provide an interoperable solution to support multi-vendor digital process bus solutions, allowing the removal of potentially lethal voltages and damaging currents from substation control rooms, reducing the amount of cabling required in substations, and facilitating the adoption of non-conventional instrument transformers (NCITs). IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value (SV) digital process connections for smart substation automation. This paper describes a specific test and evaluation system, using real-time simulation, protection relays, PTPv2 time clocks and artificial network impairment, that is being used to investigate technical impediments to the adoption of SV process bus systems by transmission utilities. Knowing the limits of a digital process bus, especially when sampled values and NCITs are included, will enable utilities to make informed decisions regarding the adoption of this technology.
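PTPv2 disciplines slave clocks through a Sync/Delay_Req timestamp exchange. The core arithmetic of that exchange, assuming a symmetric network path (the timestamps in the example are invented), is:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 delay request-response: t1 = Sync sent (master clock),
    t2 = Sync received (slave clock), t3 = Delay_Req sent (slave),
    t4 = Delay_Req received (master). Assumes a symmetric path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way mean path delay
    return offset, delay

# Slave clock 5 units ahead of the master, path delay 2 units each way.
print(ptp_offset_and_delay(0.0, 7.0, 10.0, 7.0))  # (5.0, 2.0)
```

Path asymmetry introduced by network impairment violates the symmetric-delay assumption, which is one reason artificial impairment is part of the test system described in the paper.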
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance, and capacity may be predicted as the maximum uncongested flow achievable. Although macroscopic models are effective tools for design and analysis, they offer little insight into the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have outputs that could be validated against data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis, rather than the empiricism of the macroscopic models currently used, needed to underlie the form of the models, and the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for calibrating and validating the models under a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all observed manoeuvres were modelled, as some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model, and merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance, and to provide further insight into the nature of operations.
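The gap-acceptance mechanics described above can be sketched numerically. The fragment below draws major-stream headways from Cowan's M3 distribution and counts the on-ramp vehicles each gap can absorb; the flow, bunching proportion, critical gap and follow-on time are illustrative assumptions, not the calibrated values from the thesis:

```python
import random

def m3_headways(n, flow, prop_free, delta=1.0, seed=1):
    """Cowan's M3 headway model: a proportion prop_free of vehicles are
    free, with shifted-exponential headways; the rest travel bunched at
    the minimum headway delta (s). Requires flow * delta < 1, flow in veh/s."""
    rng = random.Random(seed)
    lam = prop_free * flow / (1.0 - flow * delta)  # decay rate of free headways
    return [delta + rng.expovariate(lam) if rng.random() < prop_free else delta
            for _ in range(n)]

def merges_per_gap(headways, critical_gap, follow_on):
    """Gap acceptance: one merge once the critical gap is met, plus one
    more for every additional follow-on time of gap."""
    return [int((h - critical_gap) / follow_on) + 1 if h >= critical_gap else 0
            for h in headways]

# Illustrative: 720 veh/h major stream, 70% free vehicles.
gaps = m3_headways(1000, flow=0.2, prop_free=0.7)
print(sum(merges_per_gap(gaps, critical_gap=3.0, follow_on=1.2)))
```

Summing merges over a simulated hour of headways gives a crude capacity estimate; the net limited priority treatment in the thesis modifies this by excluding lane changers from the major stream.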
Abstract:
This paper presents a preliminary crash avoidance framework for heavy equipment control systems. Safe equipment operation is a major concern on construction sites, since fatal on-site injuries are an industry-wide problem. The proposed framework has the potential to enable active safety in equipment operation. It contains algorithms for spatial modeling, object tracking, and path planning. Beyond generating spatial models in fractions of a second, these algorithms can successfully track objects in an environment and produce a collision-free 3D motion trajectory for equipment.
Abstract:
Linking real-time schedulability directly to Quality of Control (QoC), the ultimate goal of a control system, this paper proposes a hierarchical feedback QoC management framework, with the Fixed Priority (FP) and Earliest-Deadline-First (EDF) policies as plug-ins, for real-time control systems with multiple control tasks. The framework uses a task decomposition model for continuous QoC evaluation, even in overload conditions, and then employs heuristic rules to adjust the period of each control task for QoC improvement. If the total requested workload exceeds the desired value, global adaptation of control periods is triggered to maintain the workload. A sufficient stability condition is derived for a class of control systems with delay and with the period switching induced by the heuristic rules. Examples are given to demonstrate the proposed approach.
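The global adaptation step — stretching all control periods when the requested workload exceeds the schedulable level — can be sketched as a simple utilisation rescale. This is only an illustration of the idea (the task parameters are invented, and the paper's per-task heuristic rules are not reproduced):

```python
def rescale_periods(tasks, u_target):
    """tasks: list of (wcet, period) pairs. If the requested utilisation
    sum(wcet/period) exceeds u_target, enlarge every period by a common
    factor so the total utilisation drops back to u_target."""
    u = sum(c / t for c, t in tasks)
    if u <= u_target:
        return tasks                       # already schedulable: no change
    scale = u / u_target
    return [(c, t * scale) for c, t in tasks]

# Two hypothetical control tasks requesting 75% CPU against a 50% budget.
print(rescale_periods([(1.0, 2.0), (1.0, 4.0)], u_target=0.5))
```

Enlarging periods degrades each loop's QoC gracefully instead of missing deadlines, which is the trade-off the feedback framework manages.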
Abstract:
Video surveillance technology, based on Closed Circuit Television (CCTV) cameras, is one of the fastest growing markets in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capability over long periods of time. To overcome this limitation, it is necessary to have “intelligent” processes that can highlight the salient data and filter out normal conditions that do not pose a threat to security. Creating such intelligent systems requires an understanding of human behaviour, specifically suspicious behaviour. One of the challenges in achieving this is that human behaviour can only be understood correctly in the context in which it appears. Although context has been exploited in the general computer vision domain, it has not been widely used in automatic suspicious behaviour detection. It is therefore essential that context be formulated, stored and used by the system in order to understand human behaviour. Finally, since surveillance systems can be modelled as large-scale data stream systems, it is difficult to have a complete knowledge base; such systems need not only to continuously update their knowledge but also to retrieve the extracted information related to a given context. To address these issues, a context-based approach for detecting suspicious behaviour is proposed, in which contextual information is exploited to improve detection. The proposed approach utilises a data stream clustering algorithm to discover the behaviour classes and their frequencies of occurrence from incoming behaviour instances. Contextual information is then used in addition to this information to detect suspicious behaviour.
The proposed approach is able to detect observed, unobserved and contextual suspicious behaviour. Two case studies, using video feeds taken from the CAVIAR dataset and from Z-block building, Queensland University of Technology, are presented to test the proposed approach. These experiments show that, by using information about context, the proposed system makes more accurate detections, especially of behaviours that are suspicious only in some contexts while normal in others. Moreover, this information gives critical feedback to system designers for refining the system. Finally, the proposed modified CluStream algorithm enables the system both to continuously update its knowledge and to effectively retrieve the information learned in a given context. The outcomes from this research are: (a) a context-based framework for automatically detecting suspicious behaviour, which can be used by an intelligent video surveillance system in making decisions; (b) a modified CluStream data stream clustering algorithm, which continuously updates the system knowledge and can retrieve contextually related information effectively; and (c) an update-describe approach, which extends the existing interest-point-based human local motion features to the data stream environment.
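Stream clustering of the CluStream family relies on incremental micro-cluster summaries rather than storing raw instances. A minimal sketch of that bookkeeping (the generic CluStream feature-vector idea, not the thesis's modified version):

```python
class MicroCluster:
    """CluStream-style summary of a group of behaviour instances:
    count, per-dimension linear sum and squared sum. These suffice to
    recover the centroid (and spread) without keeping the raw points."""

    def __init__(self, point):
        self.n = 1
        self.ls = list(point)                 # linear sums
        self.ss = [x * x for x in point]      # squared sums

    def absorb(self, point):
        """Fold a new instance into the summary in O(dims) time."""
        self.n += 1
        for i, x in enumerate(point):
            self.ls[i] += x
            self.ss[i] += x * x

    def centroid(self):
        return [s / self.n for s in self.ls]

mc = MicroCluster([1.0, 2.0])
mc.absorb([3.0, 4.0])
print(mc.centroid())  # [2.0, 3.0]
```

Because the summaries are additive, clusters can be merged or aged out as the stream evolves, which is what lets the knowledge base update continuously.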
Abstract:
The application of variable structure control (VSC) for power system stabilization is studied in this paper. It is the application aspects and constraints of VSC that are of particular interest. A variable structure control methodology is proposed for power system stabilization and implemented using thyristor controlled series compensators. A three-machine power system is stabilized using a switching line control for large disturbances, which becomes a sliding control as the disturbance becomes smaller. The results demonstrate the effectiveness of the proposed methodology as a useful tool to suppress oscillations in power systems.
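The transition from switching control for large disturbances to sliding control for small ones is commonly realised with a boundary layer around the sliding surface. A hedged scalar sketch of that control law only (the gain and layer width are illustrative, and the surface dynamics and TCSC model are omitted):

```python
def vsc_control(s, gain, boundary):
    """Variable structure control with a boundary layer: bang-bang
    switching while the sliding-surface value s is large, and a linear
    (sliding-mode-like) law once |s| falls inside the boundary layer."""
    if abs(s) >= boundary:
        return -gain if s > 0 else gain   # switching-line control
    return -gain * s / boundary           # smooth control inside the layer

# Illustrative gain 5.0, boundary layer width 1.0.
print(vsc_control(2.0, 5.0, 1.0), vsc_control(0.5, 5.0, 1.0))
```

The boundary layer removes chattering near the surface, which matters when the actuator is a thyristor controlled series compensator with finite switching rates.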
Abstract:
In open railway markets, coordinating train schedules at an interchange station requires negotiation between two independent train operating companies to resolve their operational conflicts. This paper models the stakeholders as software agents and proposes an agent negotiation model to study their interaction. Three negotiation strategies have been devised to represent the possible objectives of the stakeholders, and these determine the agents' behaviour in proposing offers to the proponent. Empirical simulation results confirm that the use of the proposed negotiation strategies leads to outcomes that are consistent with the objectives of the stakeholders.
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use structured questionnaires in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests while screening out those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The interest of the approach is illustrated in a case study in which several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
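The comparison of aggregation rules rests on computing sensitivity and specificity from simulated scores. A minimal sketch of that evaluation step (the item scores below are invented; the article's stochastic pest-introduction model is not reproduced):

```python
import math

def aggregate(items, how):
    """Combine ordinal item scores by sum or by multiplication."""
    return sum(items) if how == "sum" else math.prod(items)

def sens_spec(harmful, harmless, threshold):
    """Sensitivity: share of harmful pests flagged (score >= threshold);
    specificity: share of harmless pests not flagged."""
    sens = sum(s >= threshold for s in harmful) / len(harmful)
    spec = sum(s < threshold for s in harmless) / len(harmless)
    return sens, spec

# Hypothetical 3-item assessments on a 1-5 scale, aggregated by product.
harmful = [aggregate(x, "product") for x in ([4, 5, 4], [3, 4, 5], [5, 5, 5])]
harmless = [aggregate(x, "product") for x in ([1, 2, 1], [2, 2, 1], [2, 3, 2])]
print(sens_spec(harmful, harmless, threshold=30))
```

Repeating this over many simulated pests, thresholds and scale sizes is what allows scoring systems to be ranked, as the article does for sum- versus multiplication-based aggregation.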
Abstract:
The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, allowing maintenance engineers to schedule a repair and acquire replacement components before the components actually fail. Although a variety of prognostic methodologies have been reported recently, their application in industry is still relatively new and mostly focused on the prediction of specific component degradations; furthermore, these methodologies require a sufficient number of fault indicators to accurately predict component faults. Better use of health indicators in prognostics, for effective interpretation of the machine degradation process, is therefore still needed, and major challenges for accurate long-term prediction of remaining useful life (RUL) remain to be addressed. Continuous development and improvement of machine health management systems and accurate long-term prediction of machine remnant life are required in real industry applications. This thesis presents an integrated diagnostics and prognostics framework based on health state probability estimation for accurate, long-term prediction of machine remnant life. In the proposed model, prior empirical (historical) knowledge is embedded in the integrated diagnostics and prognostics system for classification of impending faults in the machine system and accurate probability estimation of discrete degradation stages (health states). The methodology assumes that machine degradation consists of a series of degraded states (health states) that effectively represent the dynamic and stochastic process of machine failure.
The estimation of discrete health state probabilities for the prediction of machine remnant life is performed using classification algorithms. To select the appropriate classifier for health state probability estimation in the proposed model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault data for three different faults in a high pressure liquefied natural gas (HP-LNG) pump. As a result of this comparison study, support vector machines (SVMs) were employed for health state probability estimation in the prediction of machine failure in this research. The proposed prognostic methodology has been successfully tested and validated using a number of case studies, from simulation tests to real industry applications. The results from two actual failure case studies, using simulations and experiments, indicate that accurate estimation of health states is achievable and that the proposed method provides accurate long-term prediction of machine remnant life. In addition, the results of experimental tests show that the proposed model is capable of providing early warning of abnormal machine operating conditions by identifying the transitional states of machine fault conditions. Finally, the proposed prognostic model is validated through two industrial case studies. The optimal number of health states, which minimises the model training error without a significant decrease in prediction accuracy, was also examined through several health states of bearing failure. The results were very encouraging and show that the proposed prognostic model based on health state probability estimation has the potential to be used as a generic and scalable asset health estimation tool in industrial machinery.
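The core of such a framework is turning classifier outputs into health-state probabilities and a probability-weighted remnant life. As a toy stand-in for the SVM classifier (a distance-based soft assignment; the centroids and per-state RUL values are invented, not from the thesis):

```python
import math

def state_probabilities(x, centroids):
    """Toy soft classifier: assign a feature vector to discrete health
    states with weights that decay exponentially in distance, then
    normalise so the probabilities sum to one."""
    weights = [math.exp(-math.dist(x, c)) for c in centroids]
    total = sum(weights)
    return [w / total for w in weights]

def expected_rul(probs, state_rul):
    """Remnant life as the probability-weighted mean of per-state RULs."""
    return sum(p * r for p, r in zip(probs, state_rul))

# Hypothetical 2-D feature vector, two health states (healthy, failed).
p = state_probabilities([0.0, 0.0], [[0.0, 0.0], [10.0, 10.0]])
print(p, expected_rul(p, [100.0, 0.0]))
```

As the feature vector drifts between state centroids over time, the expected RUL decreases smoothly, which is how discrete health states yield a continuous remnant-life prediction.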
Abstract:
In this contribution, a stability analysis is presented for a dynamic voltage restorer (DVR) connected to a weak ac system containing a dynamic load, using continuation techniques and bifurcation theory. The system dynamics are explored through the continuation of periodic solutions of the associated dynamic equations. The switching process in the DVR converter is taken into account, through a suitable mathematical representation of the converter, in order to trace the stability regions. The stability regions in the Thevenin equivalent plane are computed, as are the stability regions in the space of control gains and the contour lines for different Floquet multipliers. Moreover, the DVR converter model employed in this contribution avoids the need for the very complicated iterative-map approaches used in conventional bifurcation analysis of converters. The continuation method and the DVR model can accommodate dynamic and nonlinear loads and any network topology, since the analysis is carried out directly from the state space equations. The bifurcation approach is shown to be both computationally efficient and robust, since it eliminates the need for numerically critical and long-lasting transient simulations.
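The stability of the periodic solutions is judged by their Floquet multipliers. For a two-state reduced model, the multipliers follow directly from the trace and determinant of the monodromy matrix; a hedged sketch of that check (the matrix entries below are illustrative, not from the DVR model):

```python
import cmath

def floquet_multipliers(m11, m12, m21, m22):
    """Eigenvalues of a 2x2 monodromy matrix via its trace/determinant."""
    tr, det = m11 + m22, m11 * m22 - m12 * m21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_stable(m11, m12, m21, m22):
    """A periodic orbit is asymptotically stable when every Floquet
    multiplier lies strictly inside the unit circle."""
    return all(abs(m) < 1.0 for m in floquet_multipliers(m11, m12, m21, m22))

# Illustrative monodromy matrices: a contracting orbit and an unstable one.
print(is_stable(0.5, 0.0, 0.0, 0.5), is_stable(2.0, 0.0, 0.0, 0.5))
```

The contour lines of constant multiplier magnitude mentioned in the abstract are obtained by tracking where |multiplier| crosses given levels as parameters vary along the continuation.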
Abstract:
The heterogeneous photocatalytic water purification process has gained wide attention due to its effectiveness in degrading and mineralizing recalcitrant organic compounds, as well as the possibility of utilizing the solar UV and visible light spectrum. This paper reviews and summarizes recently published work in the field of photocatalytic oxidation of toxic organic compounds, such as phenols and dyes, predominant in wastewater effluent. In this review, the effects of various operating parameters on the photocatalytic degradation of phenols and dyes are presented. Recent findings suggest that parameters such as the type and composition of the photocatalyst, light intensity, initial substrate concentration, amount of catalyst, pH of the reaction medium, ionic components in water, solvent type, oxidizing agents/electron acceptors, mode of catalyst application, and calcination temperature can all play an important role in the photocatalytic degradation of organic compounds in the water environment. Extensive research has focused on the enhancement of photocatalysis by modification of TiO2 through metal, non-metal and ion doping. Recent advances in TiO2 photocatalysis for the degradation of various phenols and dyes are also highlighted in this review.
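The effect of initial substrate concentration reviewed above is usually described by Langmuir-Hinshelwood kinetics, r = k_r*K*C / (1 + K*C), which is widely fitted to photocatalytic degradation data. A small sketch with illustrative constants (not values from any of the reviewed studies):

```python
def lh_rate(k_r, K, c):
    """Langmuir-Hinshelwood rate: approximately first order in C at low
    concentration, saturating at the limiting rate k_r at high C."""
    return k_r * K * c / (1.0 + K * c)

# Illustrative constants: k_r = 2.0 (limiting rate), K = 1.0 (adsorption).
print(lh_rate(2.0, 1.0, 0.01))    # low C: nearly proportional to C
print(lh_rate(2.0, 1.0, 1000.0))  # high C: approaches k_r
```

This saturation is why increasing the initial substrate concentration beyond a point no longer increases the observed degradation rate, one of the parameter effects the review discusses.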