962 results for Systems identification
Dynamic method of stiffness identification in impacting systems for percussive drilling applications
Abstract:
Peer reviewed
Abstract:
We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities and the Beijing Nova Programme.
Abstract:
Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of voltages and currents but also the underlying oscillations in a power system. Such access to dynamic data provides both strong motivation and a useful tool to explore dynamic data-driven applications in power systems. To fulfill this goal, this dissertation focuses on three areas: (1) developing accurate dynamic load models and updating their variable parameters based on measurement data; (2) applying advanced nonlinear filtering concepts and technologies to real-time identification of power system models; and (3) addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can form an accurate portrait of the ongoing phenomena in an electric power system and thereby further improve state estimation, stability analysis, and real-time operation.
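As a rough illustration of the balanced truncation step mentioned in this abstract (a generic square-root implementation, not the dissertation's code), the sketch below reduces a stable linearized model (A, B, C) using controllability and observability Gramians; the random test system and the reduced order r = 4 are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI model (A, B, C) to order r."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    P, Q = (P + P.T) / 2, (Q + Q.T) / 2          # symmetrize against round-off
    Lp = cholesky(P, lower=True)                 # assumes P, Q are positive definite
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                    # s holds the Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt.T[:, :r] @ S                     # right projection
    Ti = S @ U[:, :r].T @ Lq.T                   # left projection, Ti @ T = I_r
    return Ti @ A @ T, Ti @ B, C @ T, s

# Illustrative use on a random stable 20th-order system reduced to order 4
rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M - (np.max(np.linalg.eigvals(M).real) + 1.0) * np.eye(n)   # shift to enforce stability
B = rng.standard_normal((n, 2))
C = rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
print("leading Hankel singular values:", np.round(hsv[:6], 4))
```

The size of the discarded Hankel singular values indicates how much dynamic behavior is lost by the truncation.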
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and solving it with conventional mathematical methods is often challenging due to its ill-posed nature. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), together with numerical analysis techniques to properly simulate the geomechanical system. A widely used layered pavement analysis program, ILLI-PAVE, was employed to analyze various flexible pavement types, including full-depth asphalt and conventional flexible pavements built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate, as transportation geomaterials, were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster approximations of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with predictions obtained from the numerical simulations to develop the SOFTSYS models. The solution of the inverse problem for multi-layered pavements is computationally hard and often infeasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results: thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from the SOFTSYS models, and the differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability in the FWD tests. The backcalculated asphalt concrete layer thickness results matched better for full-depth asphalt pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
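As a rough sketch of the hybrid backcalculation idea described above (not SOFTSYS itself), the code below pairs a toy surrogate standing in for the trained ANN/ILLI-PAVE response with a simple genetic algorithm that evolves layer parameters until the predicted deflection basin matches the "measured" one; the function surrogate_deflections, the parameter bounds, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_deflections(params):
    """Placeholder for the trained ANN surrogate: maps layer parameters
    (HMA thickness and layer moduli here) to a 7-sensor deflection basin."""
    h_ac, e_ac, e_base, e_subg = params
    r = np.array([0.0, 0.2, 0.3, 0.45, 0.6, 0.9, 1.5])      # sensor offsets (m), illustrative
    stiffness = h_ac * e_ac + 0.3 * e_base + 0.1 * e_subg
    return 70.0 / (stiffness * (1.0 + 3.0 * r))              # synthetic basin, arbitrary units

bounds = np.array([[0.10, 0.40],    # HMA thickness (m)
                   [2.0, 20.0],     # HMA modulus (GPa)
                   [0.1, 1.0],      # base modulus (GPa)
                   [0.02, 0.30]])   # subgrade modulus (GPa)

measured = surrogate_deflections(np.array([0.25, 8.0, 0.4, 0.08]))   # pretend FWD basin

def fitness(pop):
    """Negative RMS mismatch between predicted and measured deflection basins."""
    preds = np.array([surrogate_deflections(p) for p in pop])
    return -np.sqrt(((preds - measured) ** 2).mean(axis=1))

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 4))
for gen in range(200):
    fit = fitness(pop)
    i, j = rng.integers(0, len(pop), (2, len(pop)))           # tournament selection
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    alpha = rng.uniform(size=(len(pop), 1))                   # arithmetic crossover
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(0, 0.02, children.shape) * (bounds[:, 1] - bounds[:, 0])
    children[0] = pop[np.argmax(fit)]                         # keep the current best (elitism)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax(fitness(pop))]
print("backcalculated parameters:", np.round(best, 3))
```

Because different parameter combinations can produce nearly identical basins, the recovered vector is not unique, which mirrors the non-uniqueness issue the abstract highlights.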
Abstract:
Part 4: Transition Towards Product-Service Systems
Abstract:
Power system policies are broadly on track to escalate the use of renewable energy resources in electric power generation. Integration of dispersed generation into the utility network not only intensifies the benefits of renewable generation but also introduces further advantages, such as power quality enhancement and freedom of power generation for consumers. However, the issues arising from the integration of distributed generators into the existing utility grid are as significant as its benefits, and they are aggravated as the number of grid-connected distributed generators increases. Power quality demands therefore become stricter to ensure a safe and proper advancement towards the emerging smart grid. In this regard, system protection is the area most affected as the grid-connected distributed generation share in electricity generation increases. Islanding detection, amongst all protection issues, is the most important concern for a power system with high penetration of distributed sources. Islanding occurs when a portion of the distribution network that includes one or more distributed generation units and local loads is disconnected from the remaining portion of the grid; upon formation, the power island remains energized due to the presence of one or more distributed sources. This thesis introduces a new islanding detection technique, based on an enhanced multi-layer scheme, that shows superior performance over existing techniques. It provides improved solutions for the safety and protection of power systems and of distributed sources capable of operating in grid-connected mode. The proposed active method offers a negligible non-detection zone and is applicable to micro-grids with a number of distributed generation sources without sacrificing the dynamic response of the system. In addition, the information obtained from the proposed scheme allows for a smooth transition to stand-alone operation if required. The proposed technique paves the path towards a comprehensive protection solution for future power networks. The method is converter-resident, and all power conversion systems operating on power electronics converters can benefit from it. The theoretical analysis is presented, and extensive simulation results confirm the validity of the analytical work.
Abstract:
The thesis has been carried out within the SHAPE Project, "Predicting Strength Changes in Bridges from Frequency Data: Safety, Hazard, and Poly-harmonic Evaluation" (ERA-NET Plus Infravation Call 2014), which dealt with the structural assessment of existing bridges and of laboratory structural reproductions through vibration-based monitoring systems, detecting changes in their natural frequencies and correlating them with the occurrence of damage. The main purpose of this PhD dissertation has been the detection of variations in the main natural frequencies as a consequence of a pre-established damage configuration imposed on a structure. Firstly, the effect of local damage on the modal features is discussed, mainly for a steel frame and a composite steel-concrete bridge. Concerning the variation of the fundamental frequency of the small bridge, two local damage scenarios of increasing severity were investigated, and a comparison with a 3D FE model is presented, establishing a link between the dynamic properties and the damage features. Then, moving towards a diffuse damage pattern, four concrete beams and a small concrete deck were loaded up to yielding of the steel reinforcement. The stiffness deterioration in terms of frequency shifts was reconsidered by collecting a large set of dynamic experiments on simply supported R.C. beams discussed in the literature; the comparison of the load-frequency curves showed significant agreement among all the experiments. Thus, in the framework of damage mechanics, the "breathing cracks" phenomenon is discussed, leading to an analytical formula able to explain the frequency decay observed experimentally. Lastly, dynamic investigations of two existing bridges and the corresponding FE models are presented in Chapter 4. Moreover, for the bridge in Bologna, two prototypes of a network of accelerometers were installed, and the data from a few months of monitoring are discussed.
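One common way to extract the natural frequencies that such vibration-based monitoring tracks is peak-picking on the power spectral density of the accelerometer records; the snippet below is a generic sketch under that assumption (it is not the SHAPE processing chain), with the synthetic signal, sampling rate, and peak-prominence threshold chosen arbitrarily.

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 200.0                         # sampling rate (Hz), illustrative
t = np.arange(0, 600, 1 / fs)      # ten minutes of acceleration data

# Placeholder record: two modes (2.1 Hz and 6.8 Hz) buried in noise;
# in practice this would be the measured bridge acceleration.
acc = (np.sin(2 * np.pi * 2.1 * t) + 0.4 * np.sin(2 * np.pi * 6.8 * t)
       + 0.5 * np.random.default_rng(1).standard_normal(t.size))

# Welch PSD and peak picking to estimate the natural frequencies
f, pxx = welch(acc, fs=fs, nperseg=8192)
peaks, _ = find_peaks(pxx, prominence=0.1 * pxx.max())
print("estimated natural frequencies (Hz):", np.round(f[peaks], 2))
```

Repeating this over successive load steps or monitoring periods yields the kind of load-frequency curves discussed above.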
Abstract:
Context. Close binary supersoft X-ray sources (CBSS) are binary systems that contain a white dwarf with stable nuclear burning on its surface. These sources, first discovered in the Magellanic Clouds, have high accretion rates and near-Eddington luminosities (10^37-10^38 erg s^-1) with high temperatures (T = 2-7 x 10^5 K). Aims. The total number of known objects in the MC is still small and, in our Galaxy, even smaller. We observed the field of the unidentified transient supersoft X-ray source RX J0527.8-6954 in order to identify its optical counterpart. Methods. The observation was made with the IFU-GMOS on the Gemini South telescope with the purpose of identifying stars with possible He II or Balmer emission, or else of observing extended nebular jets or ionization cones, features that may be expected in CBSS. Results. The X-ray source is identified with a B5e V star that is associated with subarcsecond extended H alpha emission, possibly bipolar. Conclusions. If the primary star is a white dwarf, as suggested by the supersoft X-ray spectrum, the expected orbital period exceeds 21 h; therefore, we believe that the 9.4 h period found so far is not associated with this system.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies, and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variations in the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in network identification, presenting very good results with small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
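A minimal sketch of aspect (3), comparing an identified network against the original AGN, is shown below: a directed Erdos-Renyi graph is generated with networkx, a perturbed copy stands in for the inferred network (the actual feature-selection inference on simulated expression data is not reproduced here), and edge recovery is scored by precision and recall; the network size, edge probability, and perturbation rates are arbitrary.

```python
import random
import networkx as nx

random.seed(0)

# Original AGN: directed Erdos-Renyi graph (uniformly random model)
n, p = 50, 0.08
original = nx.gnp_random_graph(n, p, seed=0, directed=True)

# Stand-in for an inferred network: copy the original, then drop some true
# edges and add some spurious ones (the real framework would instead run the
# feature-selection-based inference on the simulated expression data).
inferred = original.copy()
for u, v in list(inferred.edges()):
    if random.random() < 0.2:              # 20% missed edges
        inferred.remove_edge(u, v)
for _ in range(30):                        # spurious edges
    u, v = random.sample(range(n), 2)
    inferred.add_edge(u, v)

true_edges = set(original.edges())
found_edges = set(inferred.edges())
tp = len(true_edges & found_edges)
precision = tp / len(found_edges)
recall = tp / len(true_edges)
print(f"precision={precision:.2f}  recall={recall:.2f}")
```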
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimation obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
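The Identification Index of Stage 1 follows directly from its definition; the small function below is a sketch that assumes the normalized residuals of the measurements adjacent to a branch are already available, with an illustrative threshold of 3.0.

```python
import numpy as np

def identification_index(adjacent_residuals, threshold=3.0):
    """Ratio of adjacent measurements whose normalized residuals exceed the
    threshold to the total number of measurements adjacent to the branch."""
    r = np.abs(np.asarray(adjacent_residuals, dtype=float))
    return np.count_nonzero(r > threshold) / r.size

# Example: normalized residuals of the measurements adjacent to one branch
ii = identification_index([0.8, 4.2, 3.5, 1.1, 0.3])
print(f"II = {ii:.2f}")   # 2 of 5 residuals above 3.0 -> II = 0.40
```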
Abstract:
This work proposes a method based on both preprocessing and data mining with the objective of identifying harmonic current sources in residential consumers. In addition, the methodology can also be applied to identify linear and nonlinear loads. It should be emphasized that the entire database was obtained through laboratory tests, i.e., real data were acquired from residential loads. The residential system created in the laboratory was fed by a configurable power source, and the loads and power quality analyzers were connected at its output (all measurements were stored on a microcomputer). The data were then submitted to preprocessing, based on attribute selection techniques, in order to reduce the complexity of identifying the loads. A new database was generated keeping only the selected attributes, and Artificial Neural Networks were trained to perform the load identification. To validate the proposed methodology, the loads were fed both under ideal conditions (without harmonics) and with harmonic voltages within pre-established limits. These limits are in accordance with IEEE Std. 519-1992 and PRODIST (the energy distribution procedures employed by Brazilian utilities). The results obtained validate the proposed methodology and furnish a method that can serve as an alternative to conventional methods.
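A generic version of this select-attributes-then-train pipeline could look like the scikit-learn sketch below; the random feature matrix, the labels, and the choice of SelectKBest with an MLP are placeholders rather than the paper's actual attribute-selection technique, network architecture, or laboratory data.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: rows = measurement snapshots of a residential load,
# columns = attributes (e.g., harmonic magnitudes, THD, power factor);
# labels distinguish linear from nonlinear loads.
X = rng.normal(size=(600, 40))
y = rng.integers(0, 2, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Attribute selection (keep the 10 most informative attributes),
# followed by an MLP trained on the reduced database.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=10),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```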
Abstract:
One of the goals of an e-learning environment is to meet the individual needs of students during the learning process. The adaptation of contents, activities, and tools into different visualizations or a variety of content types is an important feature of such environments, giving users the sense that the same system offers workplaces suited to their profiles. Nevertheless, to achieve an efficient personalization process, it is important to investigate aspects of student behaviour while considering the context in which the interaction happens. The goal of this paper is to present an approach to identifying the student learning profile by analyzing the context of interaction. In addition, analyzing the learning profile along different dimensions allows the system to deal with different learning focuses.
Abstract:
Among several sources of process variability, valve friction and inadequate controller tuning are considered two of the most prevalent. Friction quantification methods can be applied to the development of model-based compensators or to diagnose valves that need repair, whereas accurate process models can be used in controller retuning. This paper extends existing methods that jointly estimate the friction and process parameters, so that a nonlinear structure is adopted to represent the process model. The developed estimation algorithm is tested with three different data sources: a simulated first-order-plus-dead-time process, a hybrid setup (composed of a real valve and a simulated pH neutralization process), and three industrial datasets corresponding to real control loops. The results demonstrate that the friction is accurately quantified and that "good" process models are estimated in several situations. Furthermore, when a nonlinear process model is considered, the proposed extension presents significant advantages: (i) greater accuracy in friction quantification and (ii) reasonable estimates of the nonlinear steady-state characteristics of the process. (C) 2010 Elsevier Ltd. All rights reserved.
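To make the joint friction-and-process estimation concrete, here is a toy sketch (not the paper's algorithm): data are generated by driving a discrete first-order-plus-dead-time process through a one-parameter stick-band valve model, and the friction band is recovered by a grid search combined with linear least squares for the process parameters; the friction model and all numerical values are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def valve(u, d):
    """One-parameter stick-band valve model: the stem only moves when the
    controller output departs from the current stem position by more than d."""
    x = np.empty_like(u)
    x[0] = u[0]
    for k in range(1, len(u)):
        x[k] = u[k] if abs(u[k] - x[k - 1]) > d else x[k - 1]
    return x

def fit_process(x, y, delay=5):
    """Given a valve-position candidate, fit y[k] = a*y[k-1] + b*x[k-delay]
    by linear least squares and return the parameters and residual norm."""
    k = np.arange(delay + 1, len(y))
    A = np.column_stack([y[k - 1], x[k - delay]])
    (a, b), *_ = np.linalg.lstsq(A, y[k], rcond=None)
    return a, b, np.linalg.norm(A @ np.array([a, b]) - y[k])

# Synthetic data: friction band d = 0.5, process a = 0.9, b = 0.2, delay = 5
u = 2.0 * np.sin(2 * np.pi * np.arange(2000) / 400)
x_true = valve(u, 0.5)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.9 * y[k - 1] + 0.2 * x_true[max(k - 5, 0)]
y += 0.01 * rng.standard_normal(u.size)

# Joint estimation: grid over the friction band, linear LS for the process
grid = np.linspace(0.0, 1.5, 151)
fits = [(dc, *fit_process(valve(u, dc), y)) for dc in grid]
d_hat, a_hat, b_hat, _ = min(fits, key=lambda t: t[-1])
print(f"estimated friction band d={d_hat:.2f}, process a={a_hat:.3f}, b={b_hat:.3f}")
```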
Abstract:
Functional magnetic resonance imaging (fMRI) has become an important tool in neuroscience due to its noninvasiveness and high spatial resolution compared to other methods such as PET or EEG. Characterization of neural connectivity has been the aim of several cognitive studies, as the interactions among cortical areas lie at the heart of many brain dysfunctions and mental disorders. Several methods, such as correlation analysis, structural equation modeling, and dynamic causal models, have been proposed to quantify connectivity strength. An important concept related to connectivity modeling is Granger causality, which is one of the most popular definitions for the measure of directional dependence between time series. In this article, we propose the application of partial directed coherence (PDC) to the connectivity analysis of multisubject fMRI data using a multivariate bootstrap. PDC is a frequency-domain counterpart of Granger causality and has become a very prominent tool in EEG studies. The achieved frequency decomposition of connectivity is useful in separating interactions between neural modules from those originating in scanner noise, breathing, and heartbeat. A real fMRI dataset of six subjects executing a language processing protocol was used for the connectivity analysis. Hum Brain Mapp 30:452-461, 2009. (C) 2007 Wiley-Liss, Inc.
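Partial directed coherence itself is simple to compute once a multivariate autoregressive model has been fitted to the time series; the function below is a generic sketch of that computation (it does not reproduce the article's multivariate bootstrap), assuming the MVAR coefficient matrices are already available, e.g. from a statsmodels VAR fit, and using arbitrary coefficients for illustration.

```python
import numpy as np

def partial_directed_coherence(coefs, freqs):
    """coefs: MVAR coefficient matrices, shape (p, k, k), e.g. the .coefs
    attribute of a fitted statsmodels VAR model; freqs: normalized
    frequencies in [0, 0.5). Returns pdc with shape (len(freqs), k, k),
    where pdc[f, i, j] measures the directed influence of series j on
    series i at frequency f."""
    p, k, _ = coefs.shape
    pdc = np.zeros((len(freqs), k, k))
    for n, f in enumerate(freqs):
        # A_bar(f) = I - sum_r A_r * exp(-i 2 pi f r)
        A_bar = np.eye(k, dtype=complex)
        for r in range(p):
            A_bar -= coefs[r] * np.exp(-2j * np.pi * f * (r + 1))
        denom = np.sqrt((np.abs(A_bar) ** 2).sum(axis=0))   # column-wise norm
        pdc[n] = np.abs(A_bar) / denom
    return pdc

# Illustrative use with arbitrary coefficients for 3 channels, model order 2
rng = np.random.default_rng(0)
coefs = 0.2 * rng.standard_normal((2, 3, 3))
freqs = np.linspace(0.0, 0.5, 64, endpoint=False)
pdc = partial_directed_coherence(coefs, freqs)
print(pdc.shape)   # (64, 3, 3)
```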
Abstract:
This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module that can be applied to many complex plants in which a certain variable cannot be directly measured. It is implemented based on a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure, and the premise parameters are defined based on the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from other available variables. Tests have been performed using a simulated pH neutralization plant, and the results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller. (c) 2007, ISA. Published by Elsevier Ltd. All rights reserved.
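A rough sketch of the modeling side of such a soft sensor is given below: a tiny fuzzy c-means loop stands in for the FCM step, Gaussian premise memberships are centered at the resulting cluster centers, and affine Takagi-Sugeno-Kang consequents are fitted by weighted least squares; this is a generic illustration with synthetic data, not the "Limited Rules" algorithm or the embedded firmware.

```python
import numpy as np

rng = np.random.default_rng(0)

def fcm(X, c, m=2.0, iters=100):
    """Tiny fuzzy c-means: returns cluster centers for the premise part."""
    U = rng.dirichlet(np.ones(c), size=len(X))            # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    return centers

def firing_strengths(X, centers, sigma=1.0):
    """Gaussian premise memberships centered at the FCM cluster centers."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def tsk_fit(X, y, centers, sigma=1.0):
    """Fit affine TSK consequents, one per rule, by weighted least squares."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w = firing_strengths(X, centers, sigma)
    theta = []
    for j in range(len(centers)):
        sw = np.sqrt(w[:, j])[:, None]
        theta.append(np.linalg.lstsq(Xb * sw, y * sw[:, 0], rcond=None)[0])
    return np.array(theta)

def tsk_predict(X, centers, theta, sigma=1.0):
    """Weighted average of the local linear models (TSK defuzzification)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w = firing_strengths(X, centers, sigma)
    return (w * (Xb @ theta.T)).sum(axis=1)

# Placeholder data: secondary variables -> hard-to-measure target (e.g., pH)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(400)

centers = fcm(X, c=5)
theta = tsk_fit(X, y, centers)
pred = tsk_predict(X, centers, theta)
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

In an embedded setting, only firing_strengths and tsk_predict would need to run online; the clustering and consequent fitting can be done offline.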