929 results for Linear and nonlinear methods
Abstract:
A numerical comparison is performed between three third-order methods with the same structure, namely the BSC, Halley, and Euler–Chebyshev methods. Since the behavior of an iterative method applied to a nonlinear equation can be highly sensitive to the starting points, the numerical comparison is carried out on the basins of attraction in the complex plane, allowing for complex starting points and for complex roots. Several examples of algebraic and transcendental equations are presented.
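As a concrete illustration of such a comparison, the sketch below colors the complex plane by the root to which Halley's method converges, using f(z) = z^3 - 1 as a stand-in test equation (an illustrative choice; the paper's actual examples are not reproduced here, and the BSC or Euler–Chebyshev iterations would be swapped in the same way):

```python
import numpy as np

# Basin-of-attraction sketch for Halley's method on f(z) = z**3 - 1.
f   = lambda z: z**3 - 1
df  = lambda z: 3 * z**2
d2f = lambda z: 6 * z
roots = np.exp(2j * np.pi * np.arange(3) / 3)    # the three cube roots of unity

re, im = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
z = re + 1j * im                                 # grid of complex starting points
with np.errstate(all="ignore"):                  # tolerate non-convergent points
    for _ in range(30):
        fz, dfz = f(z), df(z)
        z = z - 2 * fz * dfz / (2 * dfz**2 - fz * d2f(z))   # Halley step

# Label every starting point by the nearest root it reached.
basin = np.argmin(np.abs(z[..., None] - roots[None, None, :]), axis=-1)
```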
Abstract:
The introduction of new distributed energy resources, based on intermittent natural power sources, into power systems imposes the development of adequate new operation management and control methods. This paper proposes a short-term Energy Resource Management (ERM) methodology performed in two phases. The first addresses the hour-ahead ERM scheduling and the second deals with the five-minute-ahead ERM scheduling. Both phases consider the day-ahead resource scheduling solution. The ERM scheduling is formulated as an optimization problem that aims to minimize the operation costs from the point of view of a virtual power player that manages the network and the existing resources. The optimization problem is solved by a deterministic mixed-integer nonlinear programming approach and by a heuristic approach based on genetic algorithms. A case study considering a distribution network with 33 buses, 66 distributed generation units, 32 loads with demand response contracts, and 7 storage units has been implemented in a PSCAD-based simulator, developed within the scope of the presented work, in order to validate the proposed short-term ERM methodology while considering the dynamic power system behavior.
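In generic form (the notation here is illustrative, not the paper's exact formulation), the ERM scheduling objective minimizes the virtual power player's operation costs over the scheduling horizon:

$$\min \; \sum_{t}\Big(\sum_{g} c_{g}\,P_{g,t} \;+\; \sum_{d} c_{d}\,P_{d,t}^{\mathrm{DR}} \;+\; \sum_{s} c_{s}\,P_{s,t}^{\mathrm{dch}}\Big)$$

subject to power balance, network, generation, demand response, and storage constraints, where the $c$ terms are the unit costs of distributed generation, demand response, and storage discharge, and the $P$ terms are the scheduled powers in period $t$.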
Abstract:
Mathematical Programs with Complementarity Constraints (MPCC) find application in many fields. Since the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) and the Mangasarian–Fromovitz Constraint Qualification (MFCQ) at every feasible point, nonlinear programming theory may not be directly applied to MPCC. However, an MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of them, the Inexact Restoration (IR) approach, performs two independent phases in each iteration: the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase, two strategies were implemented, depending on the features of the constraints. One gives more importance to the complementarity constraints, while the other prioritizes the equality and inequality constraints, neglecting the complementarity ones. The optimality phase uses the same approach in both algorithm versions. The algorithms were implemented in MATLAB and the test problems are from the MacMPEC collection.
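In its standard form (generic notation, not tied to the algorithms above), an MPCC reads

$$\begin{aligned} \min_{x}\ & f(x)\\ \text{s.t.}\ & g(x) \le 0, \quad h(x) = 0,\\ & 0 \le G(x) \;\perp\; H(x) \ge 0, \end{aligned}$$

where the complementarity condition requires $G_i(x)\,H_i(x) = 0$ componentwise. Because at any feasible point the gradients of the active constraints in the complementarity pair are linearly dependent, LICQ and MFCQ fail, as noted above.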
Abstract:
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is often not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods in which the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
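A minimal sketch of the filter mechanism described above, in generic Python (this is the textbook dominance rule; the paper's exact acceptance test may differ):

```python
# A filter stores pairs (h, f) = (infeasibility measure, objective value).
def dominates(a, b):
    """Pair a dominates pair b if it is no worse in both measures."""
    return a[0] <= b[0] and a[1] <= b[1]

def filter_accepts(filt, h_new, f_new):
    """A trial point is acceptable if no stored pair dominates it."""
    return not any(dominates(entry, (h_new, f_new)) for entry in filt)

def filter_add(filt, h_new, f_new):
    """Insert the new pair and discard entries it dominates."""
    filt[:] = [e for e in filt if not dominates((h_new, f_new), e)]
    filt.append((h_new, f_new))
```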
Abstract:
OBJECTIVE To examine factors associated with social participation and their relationship with self-perceived well-being in older adults. METHODS This study was based on data obtained from the National Socioeconomic Characterization (CASEN) Survey conducted in Chile, in 2011, on a probability sample of households. We examined information on 31,428 older adults living in these households. Descriptive and explanatory analyses were performed using linear and multivariate logistic regression models. We assessed the respondents' participation in different types of associations: egotropic, sociotropic, and religious. RESULTS Social participation increased with advancing age and then declined after the age of 80. The main finding of this study was that family social capital is a major determinant of the social participation of older adults. Their involvement was associated with high levels of self-perceived subjective well-being. We identified four settings as sources of social participation: home-based, rural community-based, social policy programs, and religious. Older adults were significantly more likely to participate when other members of the household were also involved in social activities, evidencing an intergenerational transmission of social participation. Rural communities, especially territorial associations, were the most favorable setting for participation. There has been a steady increase in the rates of involvement of older adults in social groups in Chile, especially after retirement. Religiosity remains a major determinant of associativism. The proportion of participation was higher among older women than men, but these proportions became equal after the age of 80. CONCLUSIONS Self-perceived subjective well-being depends not only on objective factors such as health and income, but also on active participation in social life, measured as participation in associations, though its effects are moderate.
Abstract:
In nonlinear optimization, penalty and barrier methods are normally used to solve constrained problems. There are several penalty/barrier methods, and they are used in many areas, from engineering to economy, through biology, chemistry, and physics, among others. In these areas, optimization problems often arise in which the involved functions (objective and constraints) are non-smooth and/or their derivatives are unknown. In this work, some penalty/barrier functions are tested and compared, using derivative-free methods, namely direct search methods, in the internal process. This work is part of a larger project involving the development of an Application Programming Interface that implements several optimization methods, to be used in applications that need to solve constrained and/or unconstrained nonlinear optimization problems. Besides its use in applied mathematics research, it is also intended for engineering software packages.
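As a minimal sketch of the idea, the code below wraps a quadratic penalty around a derivative-free inner solver (SciPy's Nelder-Mead direct search). The problem, penalty form, and parameter values are illustrative assumptions, not the API described in the work:

```python
import numpy as np
from scipy.optimize import minimize

def solve_penalized(f, g_list, x0, mu=1.0, growth=10.0, outer_iters=6):
    """Minimize f(x) s.t. g(x) <= 0 for every g in g_list,
    via a quadratic penalty and a direct search inner method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        def penalized(x):
            viol = sum(max(0.0, g(x)) ** 2 for g in g_list)
            return f(x) + mu * viol
        x = minimize(penalized, x, method="Nelder-Mead").x
        mu *= growth                      # tighten the penalty each outer cycle
    return x

# Example: minimize (x0-1)^2 + (x1-2)^2 subject to x0 + x1 <= 2.
sol = solve_penalized(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                      [lambda x: x[0] + x[1] - 2], x0=[0.0, 0.0])
```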
Abstract:
This paper addresses the use of multidimensional scaling in the evaluation of controller performance. Several nonlinear systems are analyzed based on the closed-loop time response under the action of a reference step input signal. Three alternative performance indices, based on the time response, Fourier analysis, and mutual information, are tested. The numerical experiments demonstrate the feasibility of the proposed methodology and motivate its extension to other performance measures and new classes of nonlinearities.
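A minimal sketch of the embedding step, assuming a precomputed dissimilarity matrix between closed-loop step responses (the Euclidean distance used here is a placeholder for the time-response, Fourier, and mutual-information indices in the paper):

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder data: 8 controllers, each with a 200-sample step response.
responses = np.random.rand(8, 200)

# Pairwise dissimilarities between responses (simple Euclidean choice).
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)

# Embed the controllers in 2-D so similar behaviors cluster together.
coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)
```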
Abstract:
This contribution introduces the fundamental mathematical aspects of fractional calculus (FC) and discusses some of their consequences. Based on the FC concepts, the chapter reviews the main approaches for implementing fractional operators and discusses the adoption of FC in control systems. Finally, some applications in the areas of modeling and control are presented, namely fractional PIDs, heat diffusion systems, electromagnetism, fractional electrical impedances, evolutionary algorithms, robotics, and nonlinear system control.
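Among the applications listed, the fractional PID generalizes the classical controller by allowing non-integer integration and differentiation orders; its standard transfer function, as commonly written in the FC literature, is

$$C(s) = K_p + K_i\,s^{-\lambda} + K_d\,s^{\mu}, \qquad \lambda, \mu > 0,$$

which reduces to the classical PID for $\lambda = \mu = 1$ and gives two extra tuning degrees of freedom.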
Abstract:
Final master's project for obtaining the degree of Master in Mechanical Engineering.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
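Under the linear mixing model discussed above, each observed spectral vector can be written, in the notation common to this literature, as

$$\mathbf{y} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n}, \qquad \alpha_i \ge 0,\ \ \sum_{i=1}^{p} \alpha_i = 1,$$

where the columns of $\mathbf{M}$ are the $p$ endmember signatures, $\boldsymbol{\alpha}$ is the vector of abundance fractions, and $\mathbf{n}$ models noise. The sum-to-one constraint is precisely the source of the statistical dependence among abundances that hinders ICA.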
The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
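A minimal sketch of the pure-pixel extraction loop described above, as a simplification of VCA (random directions stand in for the paper's projection choices, and the signal-subspace estimation step is omitted):

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Illustrative pure-pixel extraction in the spirit of VCA.

    Y : (bands, pixels) data matrix, assumed to follow the linear
        mixing model with at least one pure pixel per endmember.
    p : number of endmembers to extract.
    """
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    E = np.zeros((bands, p))              # extracted endmember signatures
    f = rng.standard_normal(bands)        # initial projection direction
    for i in range(p):
        proj = f @ Y                      # project every pixel onto f
        E[:, i] = Y[:, np.argmax(np.abs(proj))]   # extreme pixel = new endmember
        # Next direction: orthogonal to the subspace spanned by the
        # endmembers found so far (least-squares residual of a random vector).
        A = E[:, : i + 1]
        r = rng.standard_normal(bands)
        f = r - A @ np.linalg.lstsq(A, r, rcond=None)[0]
    return E
```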
Abstract:
The sustainability of the energy system is crucial for the economic and social development of present and future societies. To ensure the proper operation of power systems, action is typically taken on generation and on the transmission and distribution networks. However, the growing integration of distributed generation, mainly in medium- and low-voltage distribution networks, the liberalization of energy markets, the development of energy storage mechanisms, the development of automated load control systems, and the technological advances in communication infrastructures impose the development of new methods for the management and control of power systems. The contribution of this work is the development of an energy resource management methodology in a SmartGrid context, considering an entity designated as a VPP that manages a set of installations (generation units, consumers, and storage units) and, in some cases, is responsible for managing part of the electricity network. The developed methods account for the intensive penetration of distributed generation, the emergence of Demand Response programs, and the development of new storage systems. Hierarchical levels of control and decision making are also proposed, managed by entities that act in an environment of cooperation but also of competition among themselves. The proposed methodology was developed using deterministic techniques, namely mixed-integer nonlinear programming, considering three distinct objective functions (minimum costs, minimum emissions, and minimum load curtailment), which were later combined into a global objective function, allowing the Pareto optima to be determined. The locational marginal costs at each bus are also determined, and the uncertainties of the input data, namely generation and consumption, are considered. The VPP thus has at its disposal a set of solutions that allow it to make better-founded decisions, in accordance with its operating profile. Two case studies are presented. The first uses a 32-bus distribution network published by Baran & Wu. The second case study uses a 114-bus distribution network adapted from the IEEE 123-bus network.
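One standard way to obtain Pareto-optimal solutions from three objectives of this kind is a weighted sum; in generic form (the weights and symbols here are illustrative, not the thesis's exact notation):

$$\min_{x}\; F(x) = w_1\,C(x) + w_2\,E(x) + w_3\,L(x), \qquad w_i \ge 0,\ \ \sum_{i} w_i = 1,$$

where $C$, $E$, and $L$ denote operation cost, emissions, and load curtailment; sweeping the weights traces out points on the Pareto front among which the VPP can choose according to its operating profile.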
Abstract:
The theory of fractional calculus goes back to the beginnings of the theory of differential calculus, but its inherent complexity postponed the application of the associated concepts. In the last decade, progress in the areas of chaos and fractals revealed subtle relationships with fractional calculus, leading to increasing interest in the development of the new paradigm. In the area of automatic control, preliminary work has already been carried out, but the proposed algorithms are restricted to the frequency domain. This paper discusses the design of fractional-order discrete-time controllers. The algorithms studied adopt the time domain, which makes them suited for z-transform analysis and discrete-time implementation. The performance of discrete-time fractional-order controllers with linear and nonlinear systems is also investigated.
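A standard route to such discrete-time implementations (shown here as the common Grünwald–Letnikov-based form; the paper's exact discretization is not reproduced) expands the fractional differencing operator as a generalized binomial series in the z-domain:

$$D^{\alpha} \approx \frac{1}{T^{\alpha}}\left(1 - z^{-1}\right)^{\alpha} = \frac{1}{T^{\alpha}} \sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k} z^{-k},$$

where $T$ is the sampling period and the series is truncated to a finite memory length in practice, yielding an FIR-type approximation suited to real-time control.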
Abstract:
Magnetic levitation has been a widely investigated topic, mainly due to its use in railway transport systems. It is the ideal method when an application requires the absence of physical contact, or when it is convenient, in energy terms, to eliminate friction. The working principle is simple: an electromagnet creates a force on a ferromagnetic object that counteracts gravity. However, an attraction-based levitation system is unstable and nonlinear, which means a controller must be implemented to satisfy the desired stability characteristics. This project describes the theoretical and practical procedures followed in building an electromagnetic levitation system: from the physical design of the system, such as sensor selection, signal conditioning, and construction of the electromagnet, to the mathematical procedures that allowed the system to be modeled and controllers to be designed. The classical controllers, such as the PID and the phase-lead controller, were designed using the root-locus technique. The design of the fuzzy controller, by contrast, made no use of the system model or of any mathematical relationship between the variables. This control technique stood out for its simplicity and speed of implementation, providing good system performance. In the final part of the report, the results obtained with the different control methods are analyzed and the respective conclusions are presented. These results show that, for this system, the fuzzy controller presents the best performance relative to the other methods, both in the transient response and in steady state.
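A common first-principles model for this kind of attraction levitator (the symbols here are generic, not taken from the report) is

$$m\,\ddot{x} = m g - k\,\frac{i^2}{x^2},$$

where $x$ is the air gap, $i$ the coil current, $m$ the levitated mass, and $k$ a constructive constant of the electromagnet. Linearizing around an equilibrium $(x_0, i_0)$ yields a transfer function with a right-half-plane pole, which is the instability that motivates the root-locus controller designs mentioned above.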
Abstract:
This study aims to assess the association of schistosomiasis and hookworm infection with hemoglobin levels of schoolchildren in northern Mozambique. Through a cross-sectional survey, 1,015 children aged five to 12 years in the provinces of Nampula, Cabo Delgado, and Niassa were studied. Hookworm infection and urinary schistosomiasis were diagnosed through the Ritchie and filtration methods, with prevalences of 31.3% and 59.1%, respectively. Hemoglobin levels were obtained with a portable photometer (Hemocue®). The average hemoglobin concentration was 10.8 ± 1.42 g/dL, and 62.1% of the children presented levels below 11.5 g/dL; 11.8% of the total number of children had hemoglobin levels below 9 g/dL. A multiple linear regression analysis demonstrated negative associations between hemoglobin levels and ancylostomiasis, restricted to the province of Cabo Delgado (β = -0.55; p < 0.001), where an independent association between hemoglobin levels and urinary schistosomiasis was also observed (β = -0.35; p = 0.016). The logistic regression model indicated that hookworm infection is a predictor of mild (OR = 1.87; 95% CI = 1.17-3.00) and moderate/severe anemia (OR = 2.71; 95% CI = 1.50-4.89). We conclude that, in the province of Cabo Delgado, hookworm and Schistosoma haematobium infections negatively influence hemoglobin levels in schoolchildren. Periodical deworming should be considered in the region. Health education and improvements in sanitary infrastructure could achieve long-term and sustainable reductions in the prevalence rates of soil-transmitted helminthiases and schistosomiasis.
Abstract:
As an introduction to a series of articles focused on the exploration of particular tools and/or methods to bring together digital technology and historical research, the aim of this paper is mainly to highlight and discuss to what extent those methodological approaches can contribute to improving the analytical and interpretative capabilities available to historians. At a moment when the digital world presents us with an ever-increasing variety of tools to perform extraction, analysis, and visualization of large amounts of text, we thought it would be relevant to bring the digital closer to the vast historical academic community. Rather than repeating the idea of a digital revolution in historical research, recurrent in the literature since the 1980s, the aim was to show the validity and usefulness of digital tools and methods as another set of highly relevant tools that historians should consider. To this end, several case studies were used, combining the exploration of specific themes of historical knowledge with the development or discussion of digital methodologies, in order to highlight some changes and challenges that, in our opinion, are already affecting historians' work, such as a greater focus on interdisciplinarity and collaborative work, and the need for the communication of historical knowledge to become more interactive.