937 results for Distribution transformer modeling
Abstract:
Series reactors are used in distribution grids to reduce the short-circuit fault level. Among the disadvantages of these devices are the voltage drop produced across the reactor and the steep rate of rise of the transient recovery voltage (TRV), which generally exceeds the rating of the associated circuit breaker. Simulations were performed to compare the characteristics of a saturated-core High-Temperature Superconducting Fault Current Limiter (HTS FCL) and a series reactor. The design of the HTS FCL was optimized using an evolutionary algorithm, and the resulting Pareto frontier curve of optimal solutions is presented in this paper. The results show that the steady-state impedance of an HTS FCL is significantly lower than that of a series reactor for the same level of fault current limiting. Tests performed on a prototype 11 kV HTS FCL confirm the theoretical results. The respective transient recovery voltages of the HTS FCL and an air-core reactor of comparable fault current limiting capability are also determined. The results show that the saturated-core HTS FCL has a significantly smaller effect on the rate of rise of the circuit breaker TRV than the air-core reactor. The simulation results are validated with short-circuit test results.
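The abstract above mentions an evolutionary optimization producing a Pareto frontier of candidate HTS FCL designs. As a minimal, hypothetical illustration of the underlying idea (not the authors' algorithm or data), the sketch below extracts the non-dominated set from a list of candidate designs scored on two objectives that are both to be minimized; the objective names and values are assumptions for illustration only.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (both objectives minimized)."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point is <= p in every objective and < p in at least one
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

# Hypothetical candidate designs: (steady-state impedance, let-through fault current).
designs = [(0.5, 9.0), (0.8, 7.5), (1.2, 7.0), (0.6, 8.0), (1.0, 8.5)]
print(pareto_front(designs))  # the last design is dominated and dropped
```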
Abstract:
This work identifies the limitations of n-way data analysis techniques on multidimensional stream data, such as Internet chat room communication data, and establishes a link between data collection and the performance of these techniques. Its contributions are twofold. First, it extends data analysis to multiple dimensions by constructing n-way data arrays known as high-order tensors. Chat room tensors are generated by a simulator which collects and models actual communication data. The accuracy of the model is determined by the Kolmogorov-Smirnov goodness-of-fit test, which compares the simulation data with the observed (real) data. Second, a detailed computational comparison is performed to test several data analysis techniques, including SVD [1], and multi-way techniques, including Tucker1, Tucker3 [2], and PARAFAC [3].
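As a small illustration of the goodness-of-fit step described above (with made-up data, not the paper's chat room measurements), a two-sample Kolmogorov-Smirnov test can compare simulator output against observed samples; here both samples are drawn from assumed exponential inter-message-time distributions purely for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
observed = rng.exponential(scale=2.0, size=500)   # stand-in for real inter-message times
simulated = rng.exponential(scale=2.1, size=500)  # stand-in for simulator output

# Small statistic / large p-value: the simulated distribution is
# statistically consistent with the observed one.
stat, p_value = stats.ks_2samp(observed, simulated)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```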
Abstract:
Previous research on construction innovation has commonly recognized the importance of the organizational climate and of key individuals, often called “champions,” for the success of innovation. However, it rarely focuses on the role of participants at the project level or addresses the dynamics of construction innovation. This paper therefore presents a dynamic innovation model developed using the concept of system dynamics. The model incorporates the influence of several individual and situational factors and highlights two critical elements that drive construction innovations: (1) normative pressure created by project managers through their championing behavior, and (2) instrumental motivation of team members facilitated by a supportive organizational climate. The model is qualified empirically, using the results of a survey of project managers and their project team members working for general contractors in Singapore, by assessing causal relationships for key model variables. Finally, the paper discusses the implications of the model structure for fostering construction innovations.
Abstract:
Advances in technology introduce new application areas for sensor networks. The foreseeable wide deployment of mission-critical sensor networks raises security concerns. Securing large-scale, densely deployed, infrastructure-less wireless networks of resource-limited sensor nodes requires efficient key distribution and management mechanisms. We consider distributed and hierarchical wireless sensor networks where unicast, multicast and broadcast types of communication can take place. We evaluate deterministic, probabilistic and hybrid types of key pre-distribution and dynamic key generation algorithms for distributing pair-wise, group-wise and network-wise keys.
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a sensor deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pair-wise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighbouring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighbouring nodes on the path has a key in common. The problem in such a solution is to decide on the key-chain size and the key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The length of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on combinatorial design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.
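One classical symmetric block design that such combinatorial constructions can draw on is the finite projective plane, in which any two blocks share exactly one point; whether this is the specific design used in the paper is not stated here, so the sketch below is only an illustrative assumption. For a prime order q it builds a key-pool of q^2 + q + 1 keys and key-chains of q + 1 keys such that any two nodes share exactly one key.

```python
def projective_points(q):
    """Normalized homogeneous coordinates of PG(2, q), q prime: q^2 + q + 1 points."""
    pts = [(1, y, z) for y in range(q) for z in range(q)]
    pts += [(0, 1, z) for z in range(q)]
    pts += [(0, 0, 1)]
    return pts

def key_chains(q):
    """One key-chain per node; the keys are the points incident to the node's line."""
    pts = projective_points(q)   # the key-pool
    chains = []
    for a, b, c in pts:          # by duality, lines use the same coordinate set
        chains.append({i for i, (x, y, z) in enumerate(pts)
                       if (a * x + b * y + c * z) % q == 0})
    return chains

chains = key_chains(7)           # 57-key pool, key-chains of 8 keys
assert all(len(ci & cj) == 1     # any two nodes share exactly one key
           for i, ci in enumerate(chains) for cj in chains[i + 1:])
```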
Abstract:
Several approaches have been introduced in the literature for active noise control (ANC) systems. Since the filtered-x least-mean-square (FxLMS) algorithm appears to be the best choice as a controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. As a first novelty, this paper proposes a new version of the FxLMS algorithm. In many ANC applications, an online secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. As a second novelty, this paper proposes a new approach for online secondary path modeling based on a new variable-step-size (VSS) LMS algorithm in feedforward ANC systems. The proposed algorithm is designed so that the noise injection is stopped at the optimum point, when the modeling accuracy is sufficient. In this approach, a sudden change in the secondary path during operation makes the algorithm reactivate injection of the white noise to re-adjust the secondary path estimate. Comparative simulation results shown in this paper indicate the effectiveness of the proposed approach in reducing both narrow-band and broad-band noise. In addition, the proposed ANC system is robust against sudden changes of the secondary path model.
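For readers unfamiliar with the baseline controller mentioned above, the following is a minimal feedforward FxLMS sketch with assumed primary and secondary paths and a fixed (offline) secondary-path estimate; it illustrates only the standard update w(n+1) = w(n) + mu * e(n) * x'(n), not the modified algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([0.8, 0.6, -0.2])   # assumed primary acoustic path (illustrative)
S = np.array([0.6, 0.4])         # assumed secondary path
S_hat = S.copy()                 # offline estimate, here taken as exact
L, mu, N = 16, 0.01, 20000
w = np.zeros(L)                  # adaptive controller taps

x = rng.standard_normal(N)       # broadband reference noise
d = np.convolve(x, P)[:N]        # disturbance at the error microphone

x_buf, fx_buf, y_buf = np.zeros(L), np.zeros(L), np.zeros(len(S))
for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                              # anti-noise sample
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] - S @ y_buf                       # residual noise at the error mic
    fx = S_hat @ x_buf[:len(S_hat)]            # reference filtered by the path estimate
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w += mu * e * fx_buf                       # FxLMS weight update
```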
Abstract:
Carbon fibre reinforced polymer (CFRP) sheets have many outstanding properties, such as high strength, high elastic modulus, light weight and good durability, which make them a suitable alternative to steel in strengthening work. This paper describes the ultimate load-carrying capacity of steel hollow sections at the effective bond length in terms of their cross-sectional area, and the stress distribution within the bond region for different numbers of CFRP layers. It was found that the ultimate tensile load varied with the size and orientation of the unidirectional CFRP layers. Along with these tests, nonlinear finite element (FE) analysis was also performed to validate the ultimate load-carrying capacity for the different cross sections. The ultimate loads predicted by the FE analysis are found to be very close to the laboratory test results. The validated model has been used to determine the stress distribution at the bond joint for different CFRP orientations. This research shows the effect of the stress distribution and identifies suitable wrapping layers for the strengthening of steel hollow sections in tension.
Abstract:
Graphene promises many novel applications in nanoscale electronics and sustainable energy owing to its unique electronic properties. Computational exploration of electronic functionality, and of how it varies with architecture and doping, presently runs ahead of experimental synthesis, yet it provides insights into the types of structures that may prove profitable for targeted experimental synthesis and characterization. We present here a summary of our understanding of the important aspects of dimension, band gap, defect, and interfacial engineering of graphene based on state-of-the-art ab initio approaches. Some of the most recent experimental achievements relevant for future theoretical exploration are also covered.
Abstract:
Transient hyperopic refractive shifts occur on a timescale of weeks in some patients after the initiation of therapy for hyperglycemia, and are usually followed by recovery to the original refraction. A possible lenticular origin of these changes is considered in terms of a paraxial gradient index model. Assuming that the lens thickness and curvatures remain unchanged, as observed in practice, it appears possible to account for initial hyperopic refractive shifts of up to a few diopters by a reduction in refractive index near the lens center and an alteration in the rate of change between center and surface, such that most of the index change occurs closer to the lens surface. Restoration of the original refraction depends on a further change in the refractive index distribution, with more gradual changes in refractive index from the lens center to its surface. Modeling limitations are discussed.
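As a purely illustrative parameterization (an assumption of this summary, not the model actually fitted in the paper), the qualitative behaviour described above can be captured by a power-law gradient index profile:

```latex
% Illustrative power-law profile between center index n_c and surface index n_s
% over a semi-axis a; the exponent p controls where the index change is concentrated.
\[
  n(r) \;=\; n_s + (n_c - n_s)\left[1 - \left(\tfrac{r}{a}\right)^{p}\right],
  \qquad 0 \le r \le a .
\]
% Lowering n_c reduces the lens power (a hyperopic shift), while increasing p
% pushes most of the index change toward the lens surface.
```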
Abstract:
The success of contemporary organizations depends on their ability to make appropriate decisions. Making appropriate decisions is inevitably bound to the availability and provision of relevant information. Information systems should be able to provide this information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski’s information set and subset-model, we will give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of specifying effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings. A short example illustrates the usefulness of a conceptual data modeling technique for the specification of information systems.
Abstract:
In a business environment, making the right decisions is vital for the success of a company. Making the right decisions is inevitably bound to the availability and provision of relevant information. Information systems are supposed to provide this information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski’s information set and subset-model, we will give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of developing effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings.
Abstract:
It is commonly assumed that rates of accumulation of organic-rich strata have varied through geologic time, with some periods being particularly favourable for the accumulation of petroleum source rocks or coals. A rigorous analysis of the validity of such an assumption requires consideration of the basic fact that sedimentary rocks have been lost through geologic time to erosion and metamorphism; consequently, their present-day global abundance decreases with their geologic age. Measurements of the global abundance of coal-bearing strata suggest that conditions for coal accumulation were exceptionally favourable during the late Carboniferous. Strata of this age constitute 21% of the world's coal-bearing strata. Global rates of coal accumulation appear to have been relatively constant since the end of the Carboniferous, with the exception of the Triassic, which contains only 1.75% of the world's coal-bearing strata. Estimation of the global amount of discovered oil by age of the source rock shows that 58% of the world's oil has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. Although most geologic periods were favourable for oil source-rock accumulation, the mid-Permian to mid-Jurassic appears to have been particularly unfavourable, accounting for less than 2% of the world's oil. Estimation of the global amount of discovered natural gas by age of the source rock shows that 48% of the world's gas has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. The Silurian and Late Carboniferous were particularly favourable for gas source-rock accumulation, accounting for 12.9% and 6.9% of the world's gas respectively. By contrast, Permian and Triassic source rocks account for only 1.7% of the world's natural gas. Rather than invoking global climatic or oceanic events to explain the relative abundance of organic-rich sediments through time, examination of the data suggests that the more critical control is tectonic. The majority of coals are associated with foreland basins, and the majority of oil-prone source rocks are associated with rifting. The relative abundance of these types of basin through time determines the abundance and location of coals and petroleum source rocks.
Abstract:
This paper proposes a new method for online secondary path modeling in feedback active noise control (ANC) systems. In practical cases, the secondary path is usually time varying. For these cases, online modeling of the secondary path is required to ensure convergence of the system. In the literature, secondary path estimation is usually performed offline, prior to online modeling, whereas in the proposed system there is no need for offline estimation. The proposed method consists of two parts: a noise controller based on an FxLMS algorithm, and a variable step size (VSS) LMS algorithm used to adapt the modeling filter to the secondary path. In order to achieve faster convergence and more accurate modeling, the VSS-LMS algorithm is stopped at the optimum point. The computer simulation results shown in this paper indicate the effectiveness of the proposed method.
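The specific variable-step-size rule of the paper is not reproduced here; as a hypothetical stand-in, the sketch below identifies an assumed secondary path with an LMS filter whose step size follows the well-known Kwong-Johnston error-driven rule, and stops injecting the white training noise once the smoothed modeling-error power drops below a threshold, mirroring the "stop at the optimum point" idea.

```python
import numpy as np

rng = np.random.default_rng(1)
S = np.array([0.9, 0.5, -0.3])          # assumed (unknown) secondary path
M = 8                                   # modeling-filter length
s_hat = np.zeros(M)
mu, mu_min, mu_max = 1e-2, 1e-4, 5e-2   # step-size bounds (illustrative)
alpha, gamma = 0.97, 5e-3               # Kwong-Johnston VSS parameters (stand-in rule)
err_pow, lam, stop_thresh = 1.0, 0.99, 1e-3

v_buf = np.zeros(M)
for n in range(50_000):
    v = rng.standard_normal()           # white training noise injected through S
    v_buf = np.roll(v_buf, 1); v_buf[0] = v
    d = S @ v_buf[:len(S)]              # response of the true secondary path
    f = d - s_hat @ v_buf               # modeling error
    mu = float(np.clip(alpha * mu + gamma * f * f, mu_min, mu_max))
    s_hat += mu * f * v_buf             # VSS-LMS update of the path estimate
    err_pow = lam * err_pow + (1 - lam) * f * f
    if err_pow < stop_thresh:           # modeling accurate enough: stop injection
        break
```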
Abstract:
Crowds of noncombatants play a large and increasingly recognized role in modern military operations and often create substantial difficulties for the combatant forces involved. However, realistic models of crowds are essentially absent from current military simulations. To address this problem, the authors are developing a crowd simulation capable of generating crowds of noncombatant civilians that exhibit a variety of realistic individual and group behaviors at differing levels of fidelity. The crowd simulation is interoperable with existing military simulations using a standard, distributed simulation architecture. Commercial game technology is used in the crowd simulation to model both urban terrain and the physical behaviors of the human characters that make up the crowd. The objective of this article is to present the design and development process of a simulation that integrates commercially available game technology with current military simulations to generate realistic and believable crowd behavior.
Abstract:
This paper describes an analysis of construction project bids to determine (a) the global distribution of bids, and (b) the factors influencing that distribution. Using a battery of 11 test statistics, the global distribution of bids was found to be approximated by a three-parameter lognormal distribution. No global spread parameter was found. A multivariate analysis revealed the year of tender to be the major influencing factor. Consideration of the construction order, tender price and output indices leads to the conclusion that the distributional spread reflected the degree of difference in pricing policies between bidders, and that the skewness of the distributions reflected the degree of competition. The paper concludes with a tentative model of the causal relationships between the factors and the distributional characteristics involved.
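As a brief, hypothetical illustration of the distributional model named above (synthetic bid data, not the study's dataset), scipy's lognorm already carries the third (threshold) parameter as loc, so a three-parameter fit and one goodness-of-fit check might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic standardized bids (bid / lowest bid), for illustration only.
bids = stats.lognorm.rvs(s=0.15, loc=0.95, scale=0.12, size=200, random_state=rng)

# Three-parameter lognormal fit: `s` is the log-space spread, `loc` the threshold;
# the keyword loc here is only a starting guess for the optimizer.
s, loc, scale = stats.lognorm.fit(bids, loc=bids.min() - 0.05)

# One member of a battery of goodness-of-fit tests: Kolmogorov-Smirnov.
ks_stat, p_value = stats.kstest(bids, "lognorm", args=(s, loc, scale))
print(f"shape={s:.3f}  loc={loc:.3f}  scale={scale:.3f}  KS stat={ks_stat:.3f} p={p_value:.3f}")
```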