14 results for Based structure model
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Master's degree in Auditing
Abstract:
This paper presents the design and implementation of direct power controllers for three-phase matrix converters (MC) operating as Unified Power Flow Controllers (UPFC). Theoretical principles of the decoupled linear power controllers of the MC-UPFC, designed to minimize the cross-coupling between active and reactive power control, are established. From the matrix-converter-based UPFC model with a modified Venturini high-frequency PWM modulator, decoupled controllers for direct control of the transmission-line active (P) and reactive (Q) power are synthesized. Simulation results, obtained from Matlab/Simulink, are presented to confirm the proposed approach. The results show decoupled power control, zero-error tracking, and fast responses with no overshoot and no steady-state error.
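The decoupled direct power control idea can be illustrated with a toy simulation. The sketch below is not the paper's controller synthesis (which starts from the MC-UPFC model with a modified Venturini PWM modulator); it only runs two independent PI loops on hypothetical first-order power channels, with made-up gains and time constants, to show what decoupled, zero-steady-state-error P and Q tracking looks like.

    import numpy as np

    # Illustrative only: two independent PI loops tracking P and Q references on
    # simplified first-order power channels. All gains and time constants below
    # are hypothetical and do not come from the paper.
    dt, T_sim = 1e-4, 0.1          # time step and horizon [s]
    tau, k_plant = 5e-3, 1.0       # hypothetical channel time constant and gain
    kp, ki = 2.0, 400.0            # hypothetical PI gains

    def pi_step(err, integ):
        integ += err * dt
        return kp * err + ki * integ, integ

    P = Q = 0.0                    # controlled powers (per unit)
    iP = iQ = 0.0                  # integrator states
    P_ref, Q_ref = 1.0, -0.3       # step references

    for _ in range(int(T_sim / dt)):
        uP, iP = pi_step(P_ref - P, iP)
        uQ, iQ = pi_step(Q_ref - Q, iQ)
        # decoupled channels: each control action affects only its own power
        P += dt / tau * (k_plant * uP - P)
        Q += dt / tau * (k_plant * uQ - Q)

    print(f"P = {P:.3f} (ref {P_ref}), Q = {Q:.3f} (ref {Q_ref})")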
Abstract:
Model updating methods often neglect the fact that all physical structures are damped. Such a simplification eases the structural modelling approach, although it compromises the accuracy of the predictions of the structural dynamic behaviour. In the present work, the authors address the problem of finite element (FE) model updating based on measured frequency response functions (FRFs), considering damping. The proposed procedure is based on the complex experimental data, which contain information related to the damped FE model parameters, and has the advantage of requiring no prior knowledge about the structure or content of the damping matrix, demanding only the definition of the damping type. Numerical simulations are performed to establish the applicability of the proposed damped FE model updating technique, and the results are discussed in terms of the correlation between the simulated experimental complex FRFs and those obtained from the updated FE model.
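As a rough illustration of FRF-based updating with complex data, the sketch below fits the stiffness and damping parameters of a two-degree-of-freedom model to a "measured" complex FRF by least squares on the stacked real and imaginary residuals. It is only a toy surrogate under an assumed proportional (Rayleigh) damping form, not the authors' procedure, which requires no assumption about the damping matrix content.

    import numpy as np
    from scipy.optimize import least_squares

    # Toy surrogate: update k1, k2 and Rayleigh coefficients a, b of a 2-DOF model
    # so its complex receptance H11(w) matches a "measured" FRF. All values are
    # hypothetical.
    M = np.diag([1.0, 1.0])
    freqs = np.linspace(1.0, 60.0, 200) * 2 * np.pi   # rad/s

    def frf(params, w):
        k1, k2, a, b = params
        K = np.array([[k1 + k2, -k2], [-k2, k2]])
        C = a * M + b * K                              # assumed damping model
        return np.array([np.linalg.inv(-wi**2 * M + 1j * wi * C + K)[0, 0] for wi in w])

    true_p = np.array([2000.0, 1500.0, 0.5, 1e-4])     # "experimental" structure
    start_p = np.array([1500.0, 1000.0, 0.5, 1e-4])    # initial FE estimate
    H_meas = frf(true_p, freqs)

    def residual(p):
        r = frf(p, freqs) - H_meas
        return np.concatenate([r.real, r.imag])        # stack real/imaginary parts

    print("updated parameters:", least_squares(residual, start_p).x)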
Abstract:
Novel alternating copolymers comprising biscalix[4]arene-p-phenylene ethynylene and m-phenylene ethynylene units (CALIX-m-PPE) were synthesized using the Sonogashira-Hagihara cross-coupling polymerization. Good isolated yields (60-80%) were achieved for the polymers, which show Mn ranging from 1.4 × 10^4 to 5.1 × 10^4 g mol^-1 (gel permeation chromatography analysis), depending on the specific polymerization conditions. The structural analysis of CALIX-m-PPE was performed by 1H, 13C, 13C-1H heteronuclear single quantum correlation (HSQC), 13C-1H heteronuclear multiple bond correlation (HMBC), correlation spectroscopy (COSY), and nuclear Overhauser effect spectroscopy (NOESY), in addition to Fourier transform infrared spectroscopy and microanalysis, allowing its full characterization. Depending on the reaction setup, variable amounts (16-45%) of diyne units were found in the polymers, although their photophysical properties are essentially the same. It is demonstrated that CALIX-m-PPE does not form ground- or excited-state interchain interactions owing to the highly crowded environment of the main chain imparted by both calix[4]arene side units, which behave as insulators inhibiting main-chain π-π stacking. It was also found that the luminescent properties of CALIX-m-PPE are markedly different from those of an all-p-linked phenylene ethynylene copolymer (CALIX-p-PPE) previously reported. The unexpected appearance of a low-energy emission band at 426 nm, in addition to the locally excited-state emission (365 nm), together with a quite low fluorescence quantum yield (Φ = 0.02) and a double-exponential decay dynamics, led to the formulation of an intramolecular exciplex as the new emissive species.
Abstract:
In this paper, a realistic directional channel model that extends the COST 273 channel model is presented. The model uses a strategy based on the generation of clusters of scatterers and visibility regions, with increased realism due to the introduction of terrain and clutter information. New approaches for path-loss prediction and line-of-sight modeling are considered, affecting the cluster path gain model implementation. The new model was implemented using terrain, clutter, street and user mobility information for the city of Lisbon, Portugal. Some of the model's outputs are presented, mainly path loss and small/large-scale fading statistics.
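The flavour of adding terrain and clutter awareness to path-loss and line-of-sight modelling can be sketched very simply; the snippet below uses a generic log-distance path-loss law with a clutter-class offset and a distance-decaying LOS probability. The classes, exponents and offsets are hypothetical placeholders, not the extended COST 273 formulation used in the paper.

    import numpy as np

    # Hypothetical clutter-class corrections (dB); not the paper's values.
    CLUTTER_OFFSET_DB = {"open": 0.0, "suburban": 6.0, "dense_urban": 14.0}

    def path_loss_db(d_m, f_mhz, clutter, n=3.5, d0=1.0):
        """Log-distance path loss with a free-space reference at d0 metres."""
        fspl_d0 = 20 * np.log10(d0) + 20 * np.log10(f_mhz) - 27.55
        return fspl_d0 + 10 * n * np.log10(d_m / d0) + CLUTTER_OFFSET_DB[clutter]

    def los_probability(d_m, d_break=50.0):
        """Simple distance-decaying line-of-sight probability (illustrative)."""
        return np.exp(-d_m / d_break)

    rng = np.random.default_rng(0)
    d = 300.0                                   # link distance [m]
    pl = path_loss_db(d, 2600.0, "dense_urban")
    los = rng.random() < los_probability(d)
    print(f"path loss = {pl:.1f} dB, LOS drawn: {los}")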
Abstract:
A new approach is proposed, based on a tool-assisted methodology for creating new products in the automobile industry from previously defined processes and experiences, and inspired by a set of best practices or principles: it is based on high-level models or specifications; it is centred on a component-based architecture; and it is based on generative programming techniques. In essence, this approach follows the MDA (Model Driven Architecture) philosophy, with some specific characteristics. We propose a repository that keeps related information, such as models, applications, design information, generated artifacts and even information concerning the development process itself (e.g., generation steps, tests and integration milestones). Generically, the methodology receives the users' requirements for a new product (e.g., functional, non-functional, product specification) as its main inputs and produces a set of artifacts (e.g., design parts, process validation output) as its main output, which will be integrated into the engineering design tool (e.g., a CAD system), facilitating the work.
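One possible shape for such a repository, keeping requirements, models, generated artifacts and development-process records together, is sketched below. All class and field names are hypothetical illustrations of the information the abstract says the repository should hold, not an implementation from the paper.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Artifact:
        name: str
        kind: str            # e.g. "design part", "process validation output"
        path: str

    @dataclass
    class ProcessRecord:
        step: str            # e.g. "generation", "test", "integration milestone"
        notes: str = ""

    @dataclass
    class ProductEntry:
        requirements: List[str]                                 # functional / non-functional inputs
        model_files: List[str] = field(default_factory=list)    # high-level models / specifications
        artifacts: List[Artifact] = field(default_factory=list)
        process: List[ProcessRecord] = field(default_factory=list)

    entry = ProductEntry(requirements=["functional: seat height adjustment"])
    entry.artifacts.append(Artifact("bracket_v1", "design part", "out/bracket_v1.step"))
    entry.process.append(ProcessRecord("generation", "artifacts generated from the model"))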
Abstract:
Solution enthalpies of 1,4-dioxane have been obtained in 15 protic and aprotic solvents at 298.15 K. By breaking down the overall process through Solomonov's methodology, the cavity term was calculated and the interaction enthalpies (ΔH_int) were determined. The main factors involved in the interaction enthalpy were identified and quantified using a QSPR approach based on the TAKA model equation. The relevant descriptors were found to be π* and β, which showed, respectively, exothermic and endothermic contributions. The magnitude of the π* coefficient points toward non-specific solute-solvent interactions playing a major role in the solution process. The positive value of the β coefficient reflects the endothermic character of the solvents' hydrogen-bond acceptor (HBA) basicity contribution, indicating that solvent molecules engaged in hydrogen bonding preferentially interact with each other rather than with 1,4-dioxane. (C) 2013 Elsevier B.V. All rights reserved.
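The reported contributions can be summarized in a generic linear form in the two descriptors (the coefficients are symbolic here; the fitted values are those reported in the paper):

    \Delta H_{\mathrm{int}} = a_0 + a_{\pi^*}\,\pi^* + a_{\beta}\,\beta

with a_{\pi^*} < 0 accounting for the exothermic non-specific solute-solvent interactions and a_{\beta} > 0 for the endothermic HBA basicity contribution.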
Abstract:
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted to the data by maximum likelihood, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion. © 2014 Springer-Verlag Berlin Heidelberg.
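The idea of letting an external categorical variable inform the choice of the number of clusters can be mimicked with a crude surrogate score: fit a mixture for each candidate K, then combine the usual model-fit criterion with a plug-in log-likelihood of the external labels given the cluster partition. The snippet below is only that surrogate, not the integrated joint likelihood criterion derived in the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0.0, 3.0, 6.0)])
    external = np.repeat([0, 0, 1], 100)            # external categorical variable

    def external_loglik(z, u):
        """Plug-in log-likelihood of external labels u given cluster labels z."""
        ll = 0.0
        for k in np.unique(z):
            uk = u[z == k]
            probs = np.bincount(uk, minlength=u.max() + 1) / len(uk)
            ll += np.sum(np.log(probs[uk] + 1e-12))
        return ll

    for K in range(1, 6):
        gm = GaussianMixture(n_components=K, random_state=0).fit(X)
        z = gm.predict(X)
        # the factor 2 puts the plug-in log-likelihood on the same scale as -BIC
        score = -gm.bic(X) + 2 * external_loglik(z, external)   # higher is better
        print(f"K={K}: surrogate score = {score:.1f}")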
Abstract:
We propose a 3-D gravity model for the volcanic structure of the island of Maio (Cape Verde archipelago) with the objective of solving some open questions concerning the geometry and depth of the intrusive Central Igneous Complex. A gravity survey was made covering almost the entire surface of the island. The gravity data were inverted through a non-linear 3-D approach which provided a model constructed in a random growth process. The residual Bouguer gravity field shows a single positive anomaly presenting an elliptic shape with a NW-SE trending long axis. This Bouguer gravity anomaly is slightly off-centred with respect to the island, but its outline is concordant with the surface exposure of the Central Igneous Complex. The gravimetric modelling shows a high-density volume whose centre of mass is about 4500 m deep. With increasing depth, and despite the restricted gravimetric resolution, the horizontal sections of the model suggest the presence of two distinct bodies, whose relative position accounts for the elongated shape of the high positive Bouguer gravity anomaly. These bodies are interpreted as magma chambers whose coeval volcanic counterparts are no longer preserved. The orientation defined by the two bodies is similar to that of other structures known in the southern group of the Cape Verde islands, thus suggesting a possible structural control constraining the location of the plutonic intrusions.
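The forward step behind this kind of inversion, the gravity effect of a body assembled from elementary density cells, can be sketched with point masses. The geometry and density contrast below are hypothetical, not the Maio model; a real random-growth inversion would iterate this forward computation while adding cells.

    import numpy as np

    G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

    def gz_point_masses(stations, cells, d_rho, cell_vol):
        """Vertical (downward) gravity anomaly [mGal] at stations, from point-mass cells."""
        gz = np.zeros(len(stations))
        for c in cells:
            dx = stations - c                              # station minus cell position (z up)
            r = np.linalg.norm(dx, axis=1)
            gz += G * d_rho * cell_vol * dx[:, 2] / r**3   # downward component
        return gz * 1e5                                    # m/s^2 -> mGal

    # Hypothetical single body centred about 4500 m below a line of stations.
    stations = np.array([[x, 0.0, 0.0] for x in np.linspace(-6000, 6000, 7)])
    cells = np.array([[0.0, 0.0, -4500.0]])
    print(gz_point_masses(stations, cells, d_rho=300.0, cell_vol=1e9))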
Abstract:
This paper deals with a hierarchical structure composed of an event-based supervisor in a higher level and two distinct proportional integral (PI) controllers in a lower level. The controllers are applied to a variable-speed wind energy conversion system with a doubly-fed induction generator, namely, the fuzzy PI control and the fractional-order PI control. The event-based supervisor analyses the operation state of the wind energy conversion system among four possible operational states: park, start-up, generating, or brake, and sends the operation state to the controllers in the lower level. In the start-up state, the controllers act only on the electric torque while the pitch angle is kept equal to zero. In the generating state, the controllers must act on the pitch angle of the blades in order to maintain the electric power around the nominal value, thus ensuring that the safety conditions required for integration in the electric grid are met. Comparisons between fuzzy PI and fractional-order PI pitch controllers applied to a wind turbine benchmark model are given, and simulation results obtained with Matlab/Simulink are shown. From a closed-loop point of view, the fuzzy PI controller allows a smoother response at the expense of a larger number of pitch-angle variations, implying frequent switches between operational states. On the other hand, the fractional-order PI controller allows an oscillatory response with less control effort, reducing switches between operational states. (C) 2015 Elsevier Ltd. All rights reserved.
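The hierarchical structure can be caricatured with a small state machine for the supervisor and a plain PI pitch controller for the generating state. The thresholds, gains and events below are hypothetical placeholders, not the benchmark values or the fuzzy/fractional-order controllers compared in the paper.

    PARK, START_UP, GENERATING, BRAKE = "park", "start-up", "generating", "brake"

    def supervisor(state, wind_speed, power):
        """Next operational state from simple hypothetical events (power in p.u.)."""
        if wind_speed < 3.0 or wind_speed > 25.0:
            return PARK if state in (PARK, BRAKE) else BRAKE
        if state in (PARK, BRAKE):
            return START_UP
        if state == START_UP and power > 0.2:
            return GENERATING
        return state

    class PI:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt, self.integ = kp, ki, dt, 0.0
        def __call__(self, err):
            self.integ += err * self.dt
            return self.kp * err + self.ki * self.integ

    pitch_pi = PI(kp=4.0, ki=1.0, dt=0.01)
    state = PARK
    for wind, power in [(2.0, 0.0), (8.0, 0.1), (12.0, 0.6), (14.0, 1.05)]:
        state = supervisor(state, wind, power)
        # pitch acts only in the generating state, to keep power near nominal (1 p.u.)
        pitch_cmd = max(0.0, pitch_pi(power - 1.0)) if state == GENERATING else 0.0
        print(f"wind={wind:4.1f} m/s  state={state:10s}  pitch={pitch_cmd:.2f} deg")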
Abstract:
Solution enthalpies of 18-crown-6 have been obtained for a set of 14 protic and aprotic solvents at 298.15 K. The complementary use of Solomonov's methodology and a QSPR-based approach allowed the identification of the most significant solvent descriptors that model the interaction enthalpy contribution of the solution process (ΔH_int(A/S)). Results were compared with data previously obtained for 1,4-dioxane. Although the interaction enthalpies of 18-crown-6 correlate well with those of 1,4-dioxane, the magnitude of the most relevant parameters, π* and β, is almost three times higher for 18-crown-6. This is rationalized in terms of the impact of the solute's volume on the solution processes of both compounds. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Dissertation submitted for the Master's degree in Electrical Engineering, Energy branch
Abstract:
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted to the data by maximum likelihood, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
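The pure-pixel extraction step described above can be sketched compactly: generate a toy linear mixture, then repeatedly project the data onto a random direction orthogonal to the endmembers already found and keep the pixel at the extreme of the projection. This is only a simplified illustration of that projection idea; it omits the signal-subspace identification and the SNR-dependent projections of the published VCA algorithm.

    import numpy as np

    def extract_endmembers(R, p, seed=0):
        """R: (bands x pixels) data matrix; p: number of endmembers to extract."""
        rng = np.random.default_rng(seed)
        bands, _ = R.shape
        E = np.zeros((bands, 0))                   # endmembers found so far
        idx = []
        for _ in range(p):
            w = rng.standard_normal(bands)         # random direction ...
            if E.shape[1] > 0:                     # ... made orthogonal to span(E)
                Q, _ = np.linalg.qr(E)
                w = w - Q @ (Q.T @ w)
            w /= np.linalg.norm(w)
            proj = w @ R                           # project every pixel onto w
            k = int(np.argmax(np.abs(proj)))       # extreme of the projection
            idx.append(k)
            E = np.column_stack([E, R[:, k]])
        return E, idx

    # Toy linear mixture: 3 endmembers, abundances summing to one, pure pixels present.
    rng = np.random.default_rng(1)
    M = rng.uniform(0, 1, size=(50, 3))            # endmember signatures
    A = rng.dirichlet(np.ones(3), size=500).T      # abundance fractions
    A[:, :3] = np.eye(3)                           # guarantee one pure pixel per endmember
    R = M @ A + 1e-3 * rng.standard_normal((50, 500))
    E, idx = extract_endmembers(R, 3)
    print("indices of extracted (near-pure) pixels:", idx)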