9 results for linear ordering problem
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
This paper reports the reconstruction of the contamination history of a large South American industrial coastal area (Santos Estuary, Brazil) using linear alkylbenzenes (LABs). Three sediment cores were dated by 137Cs. Concentrations in surficial layers were comparable to the midrange concentrations reported for coastal sediments worldwide. LAB concentrations increased towards the surface, indicating increased waste discharges into the estuary in recent decades. The highest concentration values occurred in the early 1970s, a time of intense industrial activity and marked population growth. The decreased LAB concentration in the late 1970s was assumed to be the result of the world oil crisis. Treatment of industrial effluents, which began in 1984, was reflected in decreased LAB levels. Microbial degradation of LABs may be more intense in the industrial area sediments. The results show that industrial and domestic waste discharges are a historical problem in the area. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
This work addresses the problem of robust model predictive control (MPC) of systems with model uncertainty. The case of zone control of multi-variable stable systems with multiple time delays is considered. The usual approach to this kind of problem is to include a non-linear cost constraint in the control problem. The control action is then obtained at each sampling time as the solution to a non-linear programming (NLP) problem that, for high-order systems, can be computationally expensive. Here, the robust MPC problem is formulated as a linear matrix inequality (LMI) problem that can be solved in real time with a fraction of the computational effort. The proposed approach is compared with the conventional robust MPC and tested through the simulation of a reactor system from the process industry.
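As a rough illustration of why an LMI formulation is computationally attractive, the sketch below poses a simple robustness condition over two illustrative vertex models as an LMI feasibility problem in CVXPY and hands it to an SDP solver. It is a generic common-quadratic-Lyapunov test, not the zone-control MPC formulation of the paper, and all matrices are placeholders.

```python
import cvxpy as cp
import numpy as np

# Two illustrative vertex models of an uncertain discrete-time plant.
A1 = np.array([[0.9, 0.2],
               [0.0, 0.7]])
A2 = np.array([[0.8, 0.3],
               [0.1, 0.6]])

n = 2
eps = 1e-6
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    # Common quadratic Lyapunov condition A' P A - P < 0 at every vertex.
    constraints.append(A.T @ P @ A - P << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # "optimal" certifies feasibility of the LMIs
```

Feasibility problems of this form are convex and are handled by off-the-shelf semidefinite solvers, which is what makes the contrast with an NLP-based robust MPC plausible.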
Abstract:
Linear parameter-varying (LPV) control is a model-based control technique that takes into account time-varying parameters of the plant. In the case of rotating systems supported by lubricated bearings, the dynamic characteristics of the bearings change in time as a function of the rotating speed. Hence, LPV control can tackle the problem of run-up and run-down operational conditions, when the dynamic characteristics of the rotating system change significantly in time due to the bearings and high vibration levels occur. In this work, the LPV control design for a flexible shaft supported by plain journal bearings is presented. The model used in the LPV control design is updated from unbalance response experimental results, and dynamic coefficients for the entire range of rotating speeds are obtained by numerical optimization. Experimental implementation of the designed LPV control resulted in a strong reduction of vibration amplitudes when crossing the critical speed, without affecting system behavior at sub- or supercritical speeds. (C) 2012 Elsevier Ltd. All rights reserved.
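The scheduling idea behind LPV control can be sketched very simply: parameters tabulated over rotating speed (here bearing coefficients and a pre-designed gain) are interpolated at the measured speed. The snippet below is only a schematic illustration with placeholder numbers, not the identified model or the controller of this work.

```python
import numpy as np

speeds = np.array([1000.0, 2000.0, 3000.0, 4000.0])   # rotating-speed grid (rev/min)
k_bearing = np.array([2.0e6, 3.5e6, 5.5e6, 8.0e6])    # bearing stiffness (N/m), placeholder
c_bearing = np.array([4.0e3, 3.0e3, 2.2e3, 1.8e3])    # bearing damping (N s/m), placeholder
K_ctrl = np.array([[120.0, 8.0], [150.0, 10.0],
                   [190.0, 13.0], [240.0, 17.0]])     # gains designed per grid speed, placeholder

def scheduled_gain(speed_rpm):
    """Interpolate the pre-designed feedback gain at the measured rotating speed."""
    return np.array([np.interp(speed_rpm, speeds, K_ctrl[:, j])
                     for j in range(K_ctrl.shape[1])])

print(scheduled_gain(2500.0))   # gain applied while running up between grid points
```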
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ_∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of the seeds on the output.
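The relations among these cut energies are easy to illustrate numerically. The short Python sketch below (a toy illustration with made-up boundary weights, not the paper's implementation) computes the ℓ_1, ℓ_q and ℓ_∞ energies of a fixed boundary and checks the w → w^q substitution mentioned above.

```python
import numpy as np

def cut_energy(boundary_weights, q):
    """l_q energy of the weights F_P of the edges on the boundary of an object P."""
    w = np.asarray(boundary_weights, dtype=float)
    return w.max() if np.isinf(q) else (w ** q).sum() ** (1.0 / q)

w = [0.2, 0.9, 0.4]            # weights of the edges crossing the object boundary (toy data)
print(cut_energy(w, 1))        # l_1 energy: what min-cut/max-flow (GC_sum) minimizes
print(cut_energy(w, np.inf))   # l_inf energy: what GC_max minimizes

# The substitution mentioned in the abstract: minimizing the l_q energy with weights w
# gives the same minimizers as minimizing the l_1 energy with weights w**q, since the
# two objectives differ only by the monotone map t -> t**(1/q).
q = 3
print(cut_energy(w, q) ** q, cut_energy(np.array(w) ** q, 1))  # equal values
```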
Abstract:
This paper addresses the numerical solution of random crack propagation problems by coupling the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled using the BEM, due to its mesh reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared in application to some crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem nonlinearity. The computational cost of direct coupling was shown to be a fraction of the cost of the response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
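The direct-coupling idea, in which the limit state function stays implicit and its gradients come from the numerical response, can be sketched with a basic FORM/HLRF iteration using finite-difference gradients. The example below uses a simple analytic limit state as a stand-in for the BEM response; it illustrates the reliability iteration only, not the paper's coupled BEM code.

```python
import numpy as np

def form_hlrf(g, n, tol=1e-6, max_iter=50, h=1e-6):
    """Basic HLRF iteration in standard normal space. Gradients are taken by
    finite differences of g, standing in for gradients computed directly from
    a numerical (e.g. BEM) mechanical response."""
    u = np.zeros(n)
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * np.eye(n)[i]) - gu) / h for i in range(n)])
        u_new = (grad @ u - gu) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u   # reliability index beta and design point

# Illustrative limit state in standard normal space: g(u) = 3 - u1 - u2.
beta, u_star = form_hlrf(lambda u: 3.0 - u[0] - u[1], n=2)
print(beta)   # ~ 3/sqrt(2)
```

In the paper's setting, g would be evaluated through the BEM crack propagation model instead of a closed-form expression.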
Abstract:
Considerable effort has been made in recent years to optimize material properties for magnetic hyperthermia applications. However, due to the complexity of the problem, several aspects pertaining to the combined influence of the different parameters involved still remain unclear. In this paper, we discuss in detail the role of the magnetic anisotropy in the specific absorption rate of cobalt-ferrite nanoparticles with diameters ranging from 3 to 14 nm. The structural characterization was carried out using X-ray diffraction and Rietveld analysis, and all relevant magnetic parameters were extracted from vibrating sample magnetometry. Hyperthermia investigations were performed at 500 kHz with a sinusoidal magnetic field amplitude of up to 68 Oe. The specific absorption rate was investigated as a function of the coercive field, saturation magnetization, particle size, and magnetic anisotropy. The experimental results were also compared with theoretical predictions from linear response theory and dynamic hysteresis simulations, and exceptional agreement was found in both cases. Our results show that the specific absorption rate has a narrow and pronounced maximum for intermediate anisotropy values. This not only highlights the importance of this parameter but also shows that, to obtain optimum efficiency in hyperthermia applications, it is necessary to carefully tailor the material properties during the synthesis process. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4729271]
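For readers who want the linear-response-theory estimate in computable form, the sketch below evaluates the standard Rosensweig-type expressions for the dissipated power and the resulting specific absorption rate. All material parameters are illustrative placeholders (they are not the cobalt-ferrite values measured in the paper); only the drive frequency and field amplitude follow the abstract.

```python
import numpy as np

mu0 = 4e-7 * np.pi              # vacuum permeability (T m/A)
kB = 1.380649e-23               # Boltzmann constant (J/K)
Temp = 300.0                    # temperature (K)
f = 500e3                       # field frequency (Hz), as in the abstract
H0 = 68.0 * 1e3 / (4 * np.pi)   # 68 Oe converted to A/m
d = 10e-9                       # magnetic core diameter (m), placeholder
d_h = 15e-9                     # hydrodynamic diameter (m), placeholder
K = 2.0e5                       # effective anisotropy constant (J/m^3), placeholder
Ms = 3.5e5                      # saturation magnetization (A/m), placeholder
tau0 = 1e-9                     # attempt time (s)
eta = 1e-3                      # carrier fluid viscosity (Pa s)
rho = 5.3e3                     # particle density (kg/m^3), placeholder

V = np.pi * d**3 / 6
V_h = np.pi * d_h**3 / 6
tau_N = tau0 * np.exp(K * V / (kB * Temp))     # Neel relaxation time
tau_B = 3 * eta * V_h / (kB * Temp)            # Brownian relaxation time
tau = tau_N * tau_B / (tau_N + tau_B)          # effective relaxation time
chi0 = mu0 * Ms**2 * V / (3 * kB * Temp)       # equilibrium susceptibility (magnetic material, volume fraction omitted)
w = 2 * np.pi * f
chi_im = chi0 * w * tau / (1 + (w * tau)**2)   # out-of-phase susceptibility
P = mu0 * np.pi * f * chi_im * H0**2           # dissipated power per unit magnetic volume (W/m^3)
print("SAR estimate (W/kg):", P / rho)
```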
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum-variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper by presenting a numerical example of a multi-period portfolio selection problem with regime switching, in which it is desired to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio greater than some minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
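A single-period, deterministic analogue of the constrained criterion, minimizing portfolio variance subject to a floor on the expected value, can be written as a small convex program. The sketch below uses CVXPY with made-up data; it omits the multi-period dynamics, Markov jumps, and multiplicative noises that are the actual subject of the paper.

```python
import cvxpy as cp
import numpy as np

mu = np.array([0.05, 0.08, 0.12])           # expected asset returns (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])      # return covariance matrix (illustrative)
r_min = 0.07                                # minimum expected portfolio return

w = cp.Variable(3)
objective = cp.Minimize(cp.quad_form(w, Sigma))          # portfolio variance
constraints = [cp.sum(w) == 1, w >= 0, mu @ w >= r_min]  # budget, no shorting, return floor
cp.Problem(objective, constraints).solve()
print(w.value, float(mu @ w.value))
```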
Abstract:
A systematic approach to modelling nonlinear systems using norm-bounded linear differential inclusions (NLDIs) is proposed in this paper. The resulting NLDI model is suitable for the application of linear control design techniques; therefore, it is possible to fulfill certain specifications for the underlying nonlinear system, within an operating region of interest in the state space, using a linear controller designed for this NLDI model. Hence, a procedure to design a dynamic output feedback controller for the NLDI model is also proposed in this paper. One of the main contributions of the proposed modelling and control approach is the use of the mean-value theorem to represent the nonlinear system by a linear parameter-varying model, which is then mapped into a polytopic linear differential inclusion (PLDI) within the region of interest. To avoid the combinatorial problem that is inherent to polytopic models for medium- and large-sized systems, the PLDI is transformed into an NLDI, and the whole process is carried out ensuring that all trajectories of the underlying nonlinear system are also trajectories of the resulting NLDI within the operating region of interest. Furthermore, it is also possible to choose a particular structure for the NLDI parameters to reduce the conservatism in the representation of the nonlinear system by the NLDI model, and this feature is another important contribution of this paper. Once the NLDI representation of the nonlinear system is obtained, the paper proposes the application of a linear control design method to this representation. The design is based on quadratic Lyapunov functions and formulated as a search problem over a set of bilinear matrix inequalities (BMIs), which is solved using a two-step separation procedure that maps the BMIs into a set of corresponding linear matrix inequalities. Two numerical examples are given to demonstrate the effectiveness of the proposed approach.
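The mean-value-theorem bounding step can be illustrated on a toy pendulum-like system: the nonlinear entry of the Jacobian is bounded over an operating region and the variation is absorbed into a norm-bounded term A0 + B Δ(t) C with |Δ| ≤ 1. The sketch below is a schematic illustration of that idea only, not the paper's modelling procedure or its controller design.

```python
import numpy as np

# Pendulum-like system: x1' = x2, x2' = -sin(x1) - 0.5*x2.
# By the mean-value theorem, -sin(x1) = -cos(xi)*x1 for some xi between 0 and x1,
# so over the region |x1| <= x_max the (2,1) entry lies in an interval.
x_max = np.pi / 3
a_lo, a_hi = -1.0, -np.cos(x_max)      # range of -cos(xi) on the region
a_mid = 0.5 * (a_lo + a_hi)            # nominal value
a_dev = 0.5 * (a_hi - a_lo)            # radius of the variation

A0 = np.array([[0.0, 1.0],
               [a_mid, -0.5]])         # nominal linear model
B = np.array([[0.0], [a_dev]])         # the uncertain entry is packaged as
C = np.array([[1.0, 0.0]])             # B * Delta * C with scalar |Delta| <= 1

# Every trajectory of the nonlinear system inside the region is also a trajectory of
# x' = (A0 + B*Delta(t)*C) x for some |Delta(t)| <= 1; the two vertices are:
for delta in (-1.0, 1.0):
    print(A0 + delta * B @ C)
```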
Abstract:
Setup operations are significant in some production environments, and production plans must take into account features such as setup state conservation across periods through setup carryover and crossover. Modelling setup crossover allows more flexible decisions and is essential for problems with long setup times. This paper proposes two models for the capacitated lot-sizing problem with backlogging and setup carryover and crossover. The first is in line with other models from the literature, whereas the second considers a disaggregated setup variable, which tracks the starting and completion times of the setup operation. This innovative approach permits a more compact formulation. Computational results show that the proposed models outperform other state-of-the-art formulations.
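As a point of reference for the formulations discussed, the sketch below states a plain single-item capacitated lot-sizing model with backlogging and setup variables in PuLP. It deliberately omits setup carryover and crossover, which are the features the paper's two models add, and all data are illustrative.

```python
import pulp

T = 6
d = [40, 60, 30, 80, 20, 70]          # demand per period (illustrative)
cap, sc, hc, bc = 100, 50.0, 1.0, 5.0 # capacity, setup, holding, backlog costs (illustrative)

m = pulp.LpProblem("clsp_backlog", pulp.LpMinimize)
x = pulp.LpVariable.dicts("prod", range(T), lowBound=0)     # production quantity
I = pulp.LpVariable.dicts("inv", range(T), lowBound=0)      # end-of-period inventory
B = pulp.LpVariable.dicts("back", range(T), lowBound=0)     # end-of-period backlog
y = pulp.LpVariable.dicts("setup", range(T), cat="Binary")  # setup indicator

m += pulp.lpSum(sc * y[t] + hc * I[t] + bc * B[t] for t in range(T))
for t in range(T):
    prev_I = I[t - 1] if t > 0 else 0
    prev_B = B[t - 1] if t > 0 else 0
    m += prev_I - prev_B + x[t] - d[t] == I[t] - B[t]   # inventory/backlog balance
    m += x[t] <= cap * y[t]                             # production only if set up
m += B[T - 1] == 0                                      # all demand met by horizon end

m.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```

Setup carryover and crossover would add linking variables between consecutive periods on top of this base model, which is what the two proposed formulations handle in different ways.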