24 results for Infrared fixed point
Abstract:
This paper addresses offshore wind energy conversion systems installed in deep water, equipped with a permanent magnet synchronous generator and a back-to-back neutral-point-clamped full-power converter with an AC link. The drive train is represented by a five-mass model that incorporates the dynamics of the structure and the tower in order to emulate the effect of the moving surface. Two options are considered to equip the conversion system, a three-level converter and a four-level converter, both combined with a fractional-order control strategy. Simulation studies are carried out to assess the quality of the energy injected into the electric grid. Finally, conclusions are presented. © 2014 Elsevier Ltd. All rights reserved.
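As a point of reference for the fractional-order control strategy mentioned in this abstract, the sketch below shows the Grünwald-Letnikov approximation of a fractional-order derivative, the discretization such controllers commonly build on. This is an illustration of ours, not the paper's implementation; `mu` is the fractional order and `h` the sampling step.

```python
import numpy as np

def gl_fractional_derivative(f, mu, h):
    """Grunwald-Letnikov approximation of the mu-th order derivative
    of a uniformly sampled signal f (1-D array) with step h."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - mu) / k   # (-1)^k * binom(mu, k)
    # d[k] = h^(-mu) * sum_j w[j] * f[k - j]  (O(n^2); fine for a sketch)
    d = np.array([w[: k + 1] @ f[k::-1] for k in range(n)])
    return d / h**mu

# Example: half-order (mu = 0.5) derivative of a ramp, as a
# fractional-order PI controller would use on its error signal.
t = np.linspace(0.0, 1.0, 101)
print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1])
```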
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most buildings are constructed from planar surfaces, recognizing those surfaces paves the way toward automated generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results show that the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud, with only a small distance to the actual configurations. © 2014 IEEE.
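To make the idea concrete, here is a minimal sketch of a plane-fitting fitness of the kind a GA could maximize over candidate planes. The logarithmic weighting is our guess at the flavour of the paper's objective, not its published formula, and all names are illustrative.

```python
import numpy as np

def plane_fitness(points, normal, d, eps=0.05):
    """Illustrative fitness for a candidate plane n.x + d = 0.

    points : (N, 3) point cloud; eps : inlier distance threshold.
    """
    normal = normal / np.linalg.norm(normal)
    dist = np.abs(points @ normal + d)   # point-to-plane distances
    inliers = dist < eps                 # points close to the plane
    if not inliers.any():
        return 0.0
    # Reward many close points with diminishing returns
    # (log-proportional), penalize residual scatter.
    return np.log1p(inliers.sum()) - dist[inliers].mean()
```

A GA individual would encode the plane parameters (normal, d), and the population evolves toward the planes scoring highest under this objective.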
Abstract:
Purpose - The purpose of this paper is to discuss the linear solution of equality-constrained problems using the frontal solution method without explicit assembling. Design/methodology/approach - A rewritten frontal solution method with a priori pivot and front sequences; OpenMP parallelization, nearly linear scaling (in elimination and substitution) up to 40 threads; constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of solving problems with a large number of constraints, matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints. A priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the frontal solver. Use of OpenMP in the main frontal loop, now quantified. Availability of software.
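For context on the equality constraints and Lagrange multipliers the abstract mentions, the sketch below shows the textbook global saddle-point (KKT) system such constraints generate. The paper's contribution is precisely to avoid forming this system globally, enforcing constraints inside the frontal elimination instead; the code is an illustration of ours, in Python rather than the paper's Fortran 2003.

```python
import numpy as np

def solve_with_constraints(K, f, C, g):
    """Solve K u = f subject to C u = g via Lagrange multipliers,
    forming the dense saddle-point system [[K, C^T], [C, 0]]."""
    n, m = K.shape[0], C.shape[0]
    A = np.block([[K, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([f, g])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]          # unknowns, multipliers
```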
Abstract:
Final Master's Project for obtaining the Master's degree in Electronics and Telecommunications Engineering
Abstract:
Abstract I (Pedagogical Practice) - The Specialized Music Education internship carried out in the current school year took place at the Academia de Música de Lisboa, in three classes. Several challenges arose over the course of the year, such as the instability of the classes, the lack of a board in the classroom in some lessons, and my limited prior teaching experience. The internship made it possible to try out activities and strategies learned in the master's course units and encouraged an attitude of regular reflection on the pedagogical choices made and on the students' responses. The feedback from the teachers of the course unit Didáctica do Ensino Especializado was also essential in raising awareness of aspects of my teaching approach that needed to change: making activities more formative and less evaluative, giving more feedback, not advancing to another level while a task is not yet consolidated, not modifying instructions so quickly, taking care with the visual presentation of rhythmic cells, and thinking of solutions for when students are tired. It was also important to reflect on the lesson plans prepared throughout the year and on what would not be done the same way again, namely the introduction of rhythmic cells, harmonic functions, and cadences. During this year an effort was made to improve these aspects; however, it was not yet possible to implement all the changes. In any case, this reflection is a good starting point for planning the next year and an example of the attitude that should accompany me throughout my work as a teacher.
Abstract:
Final Master's Project for obtaining the Master's degree in Chemical and Biological Engineering
Abstract:
The development of biopharmaceutical manufacturing processes presents critical constraints, the major one being that these molecules are synthesized by living cells, whose behavior is inherently variable due to their high sensitivity to small fluctuations in the cultivation environment. To speed up the development process and to control this critical manufacturing step, it is therefore relevant to develop high-throughput and in situ monitoring techniques, respectively. Here, high-throughput mid-infrared (MIR) spectral analysis of dehydrated cell pellets and in situ near-infrared (NIR) spectral analysis of the whole culture broth were compared for monitoring plasmid production in recombinant Escherichia coli cultures. Good partial least squares (PLS) regression models were built, based on either MIR or NIR spectral data, yielding high coefficients of determination (R²) and low predictive errors (root mean square error, RMSE) for estimating host cell growth, plasmid production, carbon source consumption (glucose and glycerol), and by-product (acetate) production and consumption. The predictive errors for biomass, plasmid, glucose, glycerol, and acetate based on MIR data were 0.7 g/L, 9 mg/L, 0.3 g/L, 0.4 g/L, and 0.4 g/L, respectively, whereas for NIR data they were 0.4 g/L, 8 mg/L, 0.3 g/L, 0.2 g/L, and 0.4 g/L, respectively. The models obtained are robust, as they remain valid for cultivations conducted with different media compositions and different cultivation strategies (batch and fed-batch). Besides being conducted in situ with a sterilized fiber optic probe, NIR spectroscopy allows building PLS models for estimating plasmid, glucose, and acetate that are as accurate as those obtained from the high-throughput MIR setup, and better models for estimating biomass and glycerol, reducing the RMSE by 57% and 50%, respectively, relative to the MIR setup. However, MIR spectroscopy could be a valid alternative for optimization protocols, given possible space constraints or the high costs associated with multi-fiber-optic probes for multi-bioreactor systems; in this case, MIR could be conducted in a high-throughput manner, analyzing hundreds of culture samples in a rapid and automatic mode.
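For readers unfamiliar with the chemometric workflow behind such results, here is a minimal PLS calibration sketch using scikit-learn's PLSRegression. The data are synthetic stand-ins; variable names and sizes are ours, not the paper's.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical stand-ins for preprocessed spectra and reference values:
# X: (samples, wavenumbers) MIR or NIR spectra, y: e.g. biomass in g/L.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 200))
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(60)

pls = PLSRegression(n_components=5)   # latent variables; tuned by cross-validation in practice
pls.fit(X[:40], y[:40])               # calibration set
y_hat = pls.predict(X[40:]).ravel()   # predictions on the validation set
rmse = np.sqrt(np.mean((y[40:] - y_hat) ** 2))
print(f"RMSEP: {rmse:.3f} g/L")
```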
Abstract:
Infrared spectroscopy, in either the near- or mid-infrared (NIR/MIR) region of the spectrum, has gained great acceptance in industry for bioprocess monitoring in line with Process Analytical Technology, owing to its rapid, economical, and highly sensitive mode of application and its versatility. Given the relevance of cyprosin (mostly for the dairy industry), and since NIR and MIR spectroscopy present specific characteristics that may ultimately complement each other, in the present work these techniques were compared for monitoring and characterizing recombinant cyprosin production by Saccharomyces cerevisiae, by in situ analysis and by at-line high-throughput analysis, respectively. Partial least squares regression models relating NIR and MIR spectral features to biomass, cyprosin activity, specific activity, glucose, galactose, ethanol, and acetate concentration were developed, all presenting, in general, high regression coefficients and low prediction errors. For biomass and glucose, slightly better models were achieved by in situ NIR spectroscopic analysis, while for cyprosin activity and specific activity, slightly better models were achieved by at-line MIR spectroscopic analysis. Both techniques therefore enabled monitoring of the highly dynamic cyprosin production bioprocess, providing more efficient platforms for bioprocess optimization and control.
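A typical preprocessing step preceding such PLS models is a Savitzky-Golay derivative followed by standard normal variate (SNV) scaling; the abstract does not specify the authors' exact pipeline, so the sketch below is a common-practice illustration only.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectra(X):
    """Savitzky-Golay first derivative plus SNV scaling.

    X : (samples, wavelengths) raw absorbance spectra.
    """
    # Smooth and differentiate each spectrum along the wavelength axis.
    X = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)
    # SNV: center and scale each spectrum individually.
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
```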
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
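To fix notation, the linear mixing model described above, together with the constraints that give rise to the simplex, can be written as follows (the notation is ours, consistent with the description in the text):

$$\mathbf{x} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n}, \qquad \alpha_i \ge 0, \qquad \sum_{i=1}^{p} \alpha_i = 1,$$

where $\mathbf{x}$ is the observed spectral vector over $L$ bands, $\mathbf{M} = [\mathbf{m}_1, \dots, \mathbf{m}_p]$ collects the $p$ endmember signatures, $\boldsymbol{\alpha}$ holds the abundance fractions, and $\mathbf{n}$ accounts for noise. The nonnegativity and sum-to-one constraints are precisely what confine the noise-free data to a simplex whose vertices are the endmember signatures.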
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of $n$ data points in a $d$-dimensional space with a computational complexity of $O(n^{\lfloor d/2 \rfloor + 1})$, where $\lfloor x \rfloor$ is the largest integer less than or equal to $x$ and $n$ is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a $\log(\cdot)$ law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in $p$ spectral dimensions, the $p$-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules, including an exemplar selector, an adaptive learner, a demixer, a knowledge base or spectral library, and a spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
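Before moving on, a minimal sketch of the PPI skewer-projection scoring described above may help; it assumes a data matrix X of shape (bands, pixels), already MNF-reduced, and the function name is ours.

```python
import numpy as np

def ppi_scores(X, num_skewers=1000, seed=0):
    """Count how often each pixel is an extreme of a random projection.

    X : (bands, pixels) spectral vectors (assumed MNF-reduced).
    Returns one integer score per pixel; high scores suggest purity.
    """
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    scores = np.zeros(pixels, dtype=int)
    for _ in range(num_skewers):
        skewer = rng.standard_normal(bands)   # random direction (skewer)
        proj = skewer @ X                     # project every pixel onto it
        scores[np.argmin(proj)] += 1          # record both extremes
        scores[np.argmax(proj)] += 1
    return scores
```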
In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than that of N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
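A toy version of the iterative orthogonal-projection step at the heart of VCA is sketched below, assuming noise-free data containing at least one pure pixel per endmember. This is an illustration of the idea just described, not the authors' reference implementation.

```python
import numpy as np

def vca_like(X, p, seed=0):
    """Toy endmember extraction by iterative orthogonal projection.

    X : (bands, pixels) data with at least one pure pixel per
        endmember; p : number of endmembers to extract.
    Returns the column indices of the extracted (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    bands, _ = X.shape
    E = np.zeros((bands, p))                 # endmembers found so far
    indices = []
    for i in range(p):
        w = rng.standard_normal(bands)       # random direction
        if i > 0:
            # Project w onto the orthogonal complement of span(E[:, :i]).
            Q, _ = np.linalg.qr(E[:, :i])
            w -= Q @ (Q.T @ w)
        proj = w @ X                         # project every pixel onto w
        idx = int(np.argmax(np.abs(proj)))   # extreme of the projection
        E[:, i] = X[:, idx]                  # new endmember signature
        indices.append(idx)
    return indices
```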