68 results for dynamic systems
Abstract:
The hybrid test method is a relatively recently developed dynamic testing technique that combines numerical modelling with simultaneous physical testing. The concept of substructuring allows the critical or highly nonlinear part of the structure, which is difficult to model numerically with accuracy, to be physically tested, whilst the remainder of the structure, which has a more predictable response, is numerically modelled. In this paper, a substructured soft real-time hybrid test is evaluated as an accurate means of performing seismic tests of complex structures. The structure analysed is a three-storey, two-by-one bay concentrically braced frame (CBF) steel structure subjected to seismic excitation. A ground-storey braced-frame substructure, whose response is critical to the overall response of the structure, is tested, whilst the remainder of the structure is numerically modelled. OpenSees is used for the numerical modelling and OpenFresco for the communication between the test equipment and the numerical model. A novel approach using OpenFresco to define the complex numerical substructure of an X-braced frame within a hybrid test is also presented. The results of the hybrid tests are compared with purely numerical models using OpenSees and with a simulated test using a combination of OpenSees and OpenFresco. The comparative results indicate that the test method provides an accurate and cost-effective procedure for performing full-scale seismic tests of complex structural systems.
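As an illustration of the substructuring concept (a minimal sketch only, not the paper's OpenSees/OpenFresco implementation; all names, values and the linear "specimen" stand-in are hypothetical), the following hybrid-simulation stepping loop obtains the restoring force of the critical storey from a "physical" callback while the remaining storey is modelled numerically:

```python
import numpy as np

# Hypothetical 2-DOF shear frame: storey 1 is the "physical" substructure,
# storey 2 is modelled numerically; masses, stiffnesses and the ground
# motion are illustrative values only.
m = np.array([1000.0, 1000.0])   # storey masses [kg]
k2 = 2.0e6                       # numerical storey stiffness [N/m]
dt = 0.001                       # integration time step [s]

def physical_substructure(drift):
    # Stand-in for the tested braced-frame specimen: in a real hybrid test
    # the command displacement is sent to the actuators and the measured
    # restoring force is returned (a linear spring emulates it here).
    return 3.0e6 * drift

def restoring_forces(u):
    f1 = physical_substructure(u[0])   # "measured" storey-1 shear
    f2 = k2 * (u[1] - u[0])            # numerically modelled storey-2 shear
    return np.array([f1 - f2, f2])

# Explicit central-difference stepping under ground acceleration ag(t),
# the scheme commonly used in real-time hybrid simulation.
u_prev, u = np.zeros(2), np.zeros(2)
for n in range(5000):
    ag = 0.5 * np.sin(2 * np.pi * 2.0 * n * dt)   # toy ground motion [m/s^2]
    acc = -restoring_forces(u) / m - ag
    u_prev, u = u, 2 * u - u_prev + dt**2 * acc
```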
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism, using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs, defined as networks of software components, into other forms that are better suited to parallel execution. The programmer will be aided by high-level cost information integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic-skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined, for example, in terms of CPU/GPU combinations. To achieve this, the ParaPhrase approach will map components to the available hardware at link time, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, and so on. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in reducing sharing at a high level of abstraction, and so reduce or even eliminate the costs usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
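To give a flavour of the algorithmic-skeleton approach (an illustrative sketch; ParaPhrase itself targets networks of software components, not Python functions), here is a task "farm" defined once and reusable for any worker:

```python
from concurrent.futures import ProcessPoolExecutor

def farm(worker, tasks, n_workers=4):
    # "Farm" skeleton: the same worker is applied to independent tasks in
    # parallel; the mapping onto available hardware is hidden by the skeleton.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

if __name__ == "__main__":
    print(farm(abs, [-3, 1, -4, 1, -5]))   # -> [3, 1, 4, 1, 5]
```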
Abstract:
Modern networks are large, highly complex and dynamic, and many of them are further complicated by the mobility of the agents that comprise them. It is difficult, or even impossible, to manage such systems centrally in an efficient manner, so it is imperative that they attain a degree of self-management. Self-healing, i.e. the capability of a system in a good state to recover to another good state in the face of an attack, is desirable for such systems. In this paper, we discuss the self-healing model for dynamic reconfigurable systems. In this model, an omniscient adversary inserts or deletes nodes from a network, and the algorithm responds by adding a limited number of edges in order to maintain invariants of the network. We survey some of the results in this model and argue for their applicability and for further extensions of the results and the model. We also revisit some of the techniques used in our earlier work; in particular, we examine the idea of maintaining virtual graphs mapped over the existing network and argue that this may be a useful technique in many problem domains.
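A minimal sketch of the healing step described above, under the simplifying assumptions that the invariant to maintain is connectivity and that a deletion is healed by linking the deleted node's former neighbours into a cycle, which adds at most deg(v) edges (one of several strategies in this line of work; the function names are hypothetical):

```python
import networkx as nx

def heal_deletion(g, node):
    # The adversary deletes `node`; the healing algorithm responds by
    # linking its former neighbours into a cycle, adding at most
    # deg(node) edges while restoring connectivity.
    neighbours = sorted(g.neighbors(node))
    g.remove_node(node)
    for a, b in zip(neighbours, neighbours[1:] + neighbours[:1]):
        if a != b and not g.has_edge(a, b):
            g.add_edge(a, b)

g = nx.star_graph(5)       # node 0 is a hub: deleting it disconnects the rest
heal_deletion(g, 0)
assert nx.is_connected(g)  # the connectivity invariant is maintained
```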
Abstract:
Purpose
This article aims to analyze the role of performance management systems (PMS) in supporting public value strategies.
Design/methodology/approach
This article draws on the public value dynamic model by Horner and Hutton (2010). It presents the results of a case study of the implementation of a PMS model, the ‘Value Pyramid’ (VP).
Findings
The results stress the need for an improved conceptualization of PMS within public value strategy. Through experimentation with the VP, the case site was able to measure and visualize what it considered public value, and to reflect on the internal and external causes of both the creation and the destruction of public value.
Research limitations/implications
This article is limited to a single case study, albeit an in-depth and longitudinal one.
Originality/value
This article is one of the first to attempt to understand the role of PMS within the public value strategy framework, answering the call of Benington and Moore (2010) to consider public value from an accounting perspective.
Abstract:
In this paper, metrics for assessing the performance of directional modulation (DM) physical-layer secure wireless systems are discussed. DM systems are shown to fall into two categories, static and dynamic, and the behavior of each type is discussed for QPSK modulation. Besides EVM-like and BER metrics, the secrecy rate, as used in the information theory community, is also derived for the purpose of evaluating this QPSK DM system.
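For orientation, the secrecy rate referred to above is, in its textbook wiretap-channel form (a standard relation, not the paper's specific derivation), the rate advantage of the legitimate receiver (B) over the eavesdropper (E); for Gaussian channels it reduces to a difference of Shannon capacities:

```latex
R_s = \max\bigl(0,\; I(X;Y_B) - I(X;Y_E)\bigr),
\qquad
C_s = \Bigl[\log_2\!\bigl(1+\mathrm{SNR}_B\bigr) - \log_2\!\bigl(1+\mathrm{SNR}_E\bigr)\Bigr]^{+}
```

A DM transmitter aims to preserve the constellation only along the intended direction, degrading SNR_E (and distorting the constellation) elsewhere.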
Abstract:
Power capping is an essential function for efficient power budgeting and cost management on modern server systems. Contemporary server processors operate under power caps by using dynamic voltage and frequency scaling (DVFS). However, these processors are often deployed in non-uniform memory access (NUMA) architectures, where thread allocation between cores may significantly affect performance and power consumption. This paper proposes a method that maximizes performance under power caps on NUMA systems by dynamically optimizing two knobs: DVFS and thread allocation. The method selects the optimal combination of the two knobs using models based on an artificial neural network (ANN) that captures the nonlinear effect of thread allocation on performance. We implement the proposed method as a runtime system and evaluate it with twelve multithreaded benchmarks on a real AMD Opteron-based NUMA system. The evaluation results show that our method outperforms a naive technique that optimizes only DVFS by up to 67.1% under a power cap.
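A minimal sketch of the two-knob optimisation described above, with hypothetical stand-in predictors in place of the trained ANN models (the P-state list, thread counts and model shapes are all placeholders, not the paper's runtime system):

```python
from itertools import product

N_THREADS = 8  # hypothetical: threads to place on NUMA node 0 vs. node 1

def predict_perf(freq, local):
    # Stand-in for the trained ANN: nonlinear in placement to mimic the
    # trade-off between local-node contention and remote-memory latency.
    remote = N_THREADS - local
    return freq * (N_THREADS - 0.05 * local**2 - 0.2 * remote)

def predict_power(freq, local):
    # Stand-in power model: dynamic power grows superlinearly with frequency.
    return 20.0 + 3.0 * freq**2 + 0.5 * local

def best_config(power_cap):
    freqs = [1.4, 1.8, 2.2, 2.6]  # hypothetical available P-states [GHz]
    feasible = [(f, l) for f, l in product(freqs, range(N_THREADS + 1))
                if predict_power(f, l) <= power_cap]
    # pick the feasible (DVFS level, placement) pair with best predicted perf
    return max(feasible, key=lambda c: predict_perf(*c))

print(best_config(power_cap=40.0))   # -> (2.2, 2)
```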
Abstract:
Today's multimedia electronic era is driven by the increasing demand for small multifunctional devices able to support diverse services. Unfortunately, the high levels of transistor integration and performance required by such devices lead to an unprecedented increase in on-chip power that significantly limits battery lifetime and even poses reliability concerns. Several techniques have been developed to address this power increase, but voltage over-scaling (VOS) is considered one of the most effective due to the quadratic dependence of dynamic power consumption on supply voltage. However, VOS may not always be applicable, since it increases the delay in all paths of a system and may limit the high performance required by today's complex applications. Its application is further complicated by the fact that it exacerbates the variations in transistor characteristics imposed by their tiny size, which can lead to large delay and leakage variations, making it difficult to meet delay and power budgets. This paper presents a review of various cross-layer design options that can provide solutions for dynamic voltage over-scaling and can potentially assist in meeting the strict power budgets and yield/quality requirements of future systems. © 2011 IEEE.
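The quadratic dependence at issue is the standard dynamic-power relation (with α the activity factor, C the switched capacitance and f the clock frequency); halving the supply voltage alone cuts dynamic power by a factor of four:

```latex
P_{\mathrm{dyn}} = \alpha\, C\, V_{dd}^{2}\, f,
\qquad
P_{\mathrm{dyn}}\!\left(V_{dd}/2\right) = \tfrac{1}{4}\, P_{\mathrm{dyn}}(V_{dd})
```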
Abstract:
Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests in particular applied to the analysis of crime factors. Pairwise relationships between factors have also been studied extensively, including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour, and as such a greater level of insight into its complex nature is needed. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic-algorithm and dynamic reduct-based techniques for reduct identification, and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). The identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting hidden and intangible relationships between crime factors.
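A minimal sketch of the rough-set machinery such methods build on: the dependency degree below measures how well a set of condition attributes determines the decision, and a reduct is a minimal attribute subset preserving it (toy decision table and attribute names, not the article's data):

```python
def blocks(rows, attrs):
    # Partition objects into indiscernibility classes w.r.t. `attrs`.
    part = {}
    for i, row in enumerate(rows):
        part.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return part.values()

def dependency(rows, attrs, decision):
    # Fraction of objects whose class is "pure" w.r.t. the decision;
    # a reduct must match the dependency degree of the full attribute set.
    pure = sum(len(b) for b in blocks(rows, attrs)
               if len({rows[i][decision] for i in b}) == 1)
    return pure / len(rows)

# Toy decision table: (age band, impulsivity, environment) -> violent?
rows = [
    {"age": "young", "imp": "high", "env": "poor", "violent": 1},
    {"age": "young", "imp": "low",  "env": "poor", "violent": 0},
    {"age": "old",   "imp": "high", "env": "good", "violent": 0},
    {"age": "old",   "imp": "low",  "env": "good", "violent": 0},
]
full = dependency(rows, ("age", "imp", "env"), "violent")     # 1.0
print(dependency(rows, ("age", "imp"), "violent") == full)    # subset suffices
```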
Abstract:
In this paper, the evolution of a time-domain dynamic identification technique based on a statistical moment approach is presented. The technique can be applied to structures under base random excitation in both the linear and the nonlinear regime. By applying Itô stochastic calculus, algebraic equations can be obtained that depend on the statistical moments of the response of the system to be identified. Such equations can be used for the dynamic identification of the mechanical parameters and of the input. Unlike many techniques in the literature, these equations make it possible to identify the dissipation characteristics independently of the input. The paper first presents the original formulation of the technique, applicable to nonlinear systems and based on a restricted class of potential models. A second formulation is then described, applicable to any kind of linear system and based on a class of linear models characterized by a mass-proportional damping matrix.
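To give the flavour of such algebraic moment equations, consider the textbook case of a linear single-degree-of-freedom oscillator $m\ddot{X} + c\dot{X} + kX = W(t)$ under Gaussian white noise $W$ with two-sided spectral density $S_0$ (an illustrative special case, not the paper's general formulation). Itô calculus applied to the stationary response gives

```latex
E[X\dot{X}] = 0,
\qquad
E[\dot{X}^2] = \frac{k}{m}\,E[X^2],
\qquad
E[\dot{X}^2] = \frac{\pi S_0}{m\,c},
```

so the ratio of measured response moments yields the stiffness-to-mass ratio directly, while the damping $c$ follows from the last relation given the input intensity $S_0$ and the mass.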
Abstract:
This paper addresses the problem of effective in situ measurement of real-time strain for bridge weigh-in-motion in reinforced concrete bridge structures through the use of optical fiber sensor systems. Through a series of tests coupled with dynamic loading, the performance of fiber Bragg grating-based sensor systems with various amplification techniques was investigated. In recent years, structural health monitoring (SHM) systems have been developed to monitor bridge deterioration and to assess load levels, and hence to extend bridge life and safety. Conventional SHM systems based on measuring strain can be used to improve knowledge of a bridge's capacity to resist loads, but generally give no information on the causes of any increase in stresses. It is therefore necessary to find accurate sensors capable of capturing peak strains under dynamic load, together with suitable methods for attaching these strain sensors to existing and new bridge structures. Additionally, it is important to ensure accurate strain transfer between the concrete or steel substrate, the adhesive layer, and the strain sensor. The results show the benefits of optical fiber networks under these circumstances and their ability to deliver data when conventional sensors cannot capture accurate strains and/or peak strains.
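For context, the standard fiber Bragg grating sensing relation (textbook form, not specific to this paper) links the measured Bragg wavelength shift to the strain and temperature experienced by the grating, which is why accurate strain transfer through the adhesive layer is critical:

```latex
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon + (\alpha_\Lambda + \alpha_n)\,\Delta T
```

Here $p_e$ is the effective photo-elastic coefficient, $\alpha_\Lambda$ the thermal expansion coefficient and $\alpha_n$ the thermo-optic coefficient of the fiber.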
Abstract:
Economic and environmental load dispatch aims to determine the amount of electricity generated by power plants to meet load demand while minimizing fossil fuel costs and air pollution emissions, subject to operational and licensing requirements. These two scheduling problems are commonly formulated with non-smooth cost functions that account for various effects and constraints, such as the valve-point effect, power balance and ramp-rate limits. The expected increase in plug-in electric vehicles is likely to have a significant impact on the power system due to high charging power consumption and significant uncertainty in charging times. In this paper, multiple electric vehicle charging profiles are comparatively integrated into a 24-hour load demand in an economic and environmental dispatch model. Self-learning teaching-learning-based optimization (TLBO) is employed to solve the resulting non-convex, non-linear dispatch problems. Numerical results on well-known benchmark functions, as well as on test systems with different scales of generation units, show the effectiveness of the new scheduling method.
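The valve-point effect mentioned above is what makes the fuel-cost curves non-smooth; the form commonly used in the dispatch literature superimposes a rectified sinusoid on the quadratic cost of each unit $i$:

```latex
F_i(P_i) = a_i + b_i P_i + c_i P_i^{2}
         + \bigl|\, e_i \sin\!\bigl(f_i\,(P_i^{\min} - P_i)\bigr) \bigr|
```

with $a_i, b_i, c_i$ the quadratic fuel-cost coefficients and $e_i, f_i$ the valve-point coefficients; the absolute-value term introduces the ripples that defeat gradient-based solvers and motivate meta-heuristics such as TLBO.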
Abstract:
Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If tailpipe point emissions could be managed centrally without reducing commercial and personal user functionality, then one of the most attractive solutions for achieving a significant reduction of emissions in the transport sector would be the mass deployment of electric vehicles. Though electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and supportive government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries also have the ability to act as energy storage points on the distribution system. This dual charging-and-storage impact of many small, uncontrollable kW-scale loads, on a distribution system that was never designed for such operation, has the potential to be detrimental to grid balancing, particularly since consumers will want maximum flexibility. Intelligent scheduling methods, if established correctly, could smoothly integrate electric vehicles onto the grid. Such methods can help to avoid the cycling of large combustion plants and the use of expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and prevent overloading of the distribution system that would reduce power quality. In this paper, scheduling methods to integrate plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. Various existing approaches, covering analytical scheduling, conventional optimisation methods (e.g. linear and non-linear mixed-integer programming and dynamic programming), game theory, and meta-heuristic algorithms including the genetic algorithm and particle swarm optimisation, are all comprehensively surveyed, offering a systematic reference for grid scheduling that considers intelligent electric vehicle integration.
Abstract:
Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to its non-convex characteristics and large dimensionality, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems that are, for example, nonlinear, large-scale, non-convex and discontinuous. The hybrid algorithm uses the clonal selection algorithm (CSA) as a local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations used as hyper-mutation operators in the CSA. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system, guaranteeing the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of solution quality and convergence characteristics.
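A minimal sketch of the DE/best/1 variant named in the results, in generic form (the clonal-selection hyper-mutation and the constraint-repair step for power balance are noted only as comments; all parameter values are illustrative):

```python
import numpy as np

def de_best_1(cost, lo, hi, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(x) for x in pop])
    for _ in range(gens):
        best = pop[fit.argmin()]
        for i in range(pop_size):
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            # DE/best/1 mutation: perturb the current best individual
            v = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
            # binomial crossover with the target vector
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, v, pop[i])
            # (CSDE would additionally clone and hyper-mutate the best
            #  individual here; a DELD solver would also repair `trial`
            #  to meet power-balance and ramp-rate constraints first)
            f_trial = cost(trial)
            if f_trial < fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# toy usage: 5-unit "dispatch" with a smooth quadratic cost surrogate
lo, hi = np.zeros(5), np.full(5, 10.0)
x_best, f_best = de_best_1(lambda p: float(np.sum((p - 3.0) ** 2)), lo, hi)
```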
Abstract:
Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions such as embedded dynamic memories (eDRAM) can provide higher density and/or reduced power consumption. One major challenge impeding the widespread adoption of eDRAM is that it requires frequent refreshes, which can reduce the availability of the memory in periods of high activity and consume a significant amount of power. Reducing the refresh rate can lower this power overhead, but if refreshes are not performed in a timely manner, some cells can lose their content, potentially resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use cases are in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
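To put the 177-error figure in perspective, simple arithmetic on the reported numbers (assuming each error is a single faulty bit cell) gives

```latex
\frac{177}{16 \times 1024 \times 8} = \frac{177}{131072} \approx 0.14\%,
```

i.e. roughly one tolerated faulty cell per 740 bits, at a worst-case output-quality loss of 11%.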