842 results for linear matrix inequality (LMI) optimization


Relevance:

30.00%

Publisher:

Abstract:

The high performance computing community has traditionally focused exclusively on reducing execution time, but in recent years the optimization of energy consumption has become a major concern. Reducing energy usage without degrading performance requires the adoption of energy-efficient hardware platforms together with the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation in many scientific and engineering problems. Its relevance has motivated a considerable amount of work, and consequently high performance solvers are available for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal delivers significant savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
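For reference, the following is a minimal NumPy sketch of the Gauss-Huard algorithm, a column-pivoted variant of Gauss-Jordan elimination with the same flop count as an LU-based solve; the paper's CPU-GPU kernels and memory-access optimizations are not reproduced here.

```python
import numpy as np

def gauss_huard_solve(A, b):
    """Solve A x = b by the Gauss-Huard algorithm (column-pivoted
    Gauss-Jordan with the same flop count as an LU solve)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.asarray(b, dtype=float).reshape(n, 1)])  # [A | b]
    perm = np.arange(n)                        # records column swaps

    for k in range(n):
        # Zero the left part of row k using the identity block already
        # built in rows 0..k-1, updating the rest of the row.
        M[k, k:] -= M[k, :k] @ M[:k, k:]
        M[k, :k] = 0.0
        # Column pivoting: move the largest entry of row k to the diagonal.
        p = k + np.argmax(np.abs(M[k, k:n]))
        if p != k:
            M[:, [k, p]] = M[:, [p, k]]
            perm[[k, p]] = perm[[p, k]]
        # Normalize row k, then annihilate column k in the rows above.
        M[k, k:] /= M[k, k]
        M[:k, k:] -= np.outer(M[:k, k], M[k, k:])

    x = np.empty(n)
    x[perm] = M[:, n]                          # undo the column permutation
    return x

A = np.random.rand(5, 5) + 5 * np.eye(5)       # well-conditioned test matrix
b = np.random.rand(5)
print(np.allclose(gauss_huard_solve(A, b), np.linalg.solve(A, b)))  # True
```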

Relevance:

30.00%

Publisher:

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method to a family of ill-posed linear inverse problems. Given observations of an unknown quantity of interest and the corresponding observation operators, these inverse problems concern the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, the so-called Hierarchical Reconstruction (HR) method, whose first introduction can be traced back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. The sum of all the hierarchical terms provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared with Tikhonov regularization on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control over the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting these claimed advantages of the HR method over Tikhonov regularization.
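As a rough illustration of the idea, here is a small NumPy sketch, under the assumption of a plain quadratic stabilizing function ||u||², of Tikhonov regularization and its hierarchical, dyadic-scale generalization; the thesis treats more general stabilizing functions and conditions on the observation operator.

```python
import numpy as np

def tikhonov(K, f, lam):
    """One Tikhonov step: argmin_u ||K u - f||^2 + lam * ||u||^2."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ f)

def hierarchical_reconstruction(K, f, lam0=1.0, levels=8):
    """HR sketch: at each level, apply Tikhonov to the current
    hierarchical residual with a halved regularization parameter
    (a dyadic ladder of scales) and accumulate the terms."""
    u_sum = np.zeros(K.shape[1])
    lam = lam0
    for _ in range(levels):
        residual = f - K @ u_sum              # previous hierarchical residual
        u_sum += tikhonov(K, residual, lam)   # finer-scale hierarchical term
        lam /= 2.0
    return u_sum
```

Each halving of the regularization parameter admits finer-scale detail, and stopping at a given level controls the distance between the hierarchical sum and the unknown.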

Relevance:

30.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how to compute the error covariance of the model state. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied that avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found the VEnKF results to be more realistic, without the numerical artifacts present in the pure simulation.

Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines that handle input and output. Apart from being simple to implement, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized; nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem, and with DA this will help, for instance, in better understanding environmental hazard variables. We also found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble size limit lead toward the emerging area of Reduced Order Modeling (ROM): to save computational resources, running the full-blown model is avoided. Applying ROM with the non-intrusive DA approach may yield a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
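A minimal sketch of the non-intrusive, file-based coupling loop described above; the executable names and file formats below are hypothetical stand-ins, since the actual COHERENS and VEnKF I/O conventions are not given in the abstract.

```python
import subprocess
import numpy as np

# Hypothetical executable names and file formats; the actual COHERENS
# and VEnKF I/O conventions are not specified in the abstract.
MODEL_CMD = ["./model", "state_in.npy", "state_out.npy"]
DA_CMD = ["./venkf", "forecast.npy", "obs.npy", "analysis.npy"]

def assimilation_cycle(n_cycles):
    for _ in range(n_cycles):
        # 1. Advance the model to the next observation time; the model
        #    only needs a few extra lines to read/write these files.
        subprocess.run(MODEL_CMD, check=True)
        # 2. Hand the forecast to the DA procedure through files.
        np.save("forecast.npy", np.load("state_out.npy"))
        subprocess.run(DA_CMD, check=True)   # writes analysis.npy
        # 3. Feed the analysis back as the next initial state.
        np.save("state_in.npy", np.load("analysis.npy"))
```

Because model and DA procedure only meet through files, either side can be replaced or rewritten in another language without touching the other.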

Relevance:

30.00%

Publisher:

Abstract:

Municipal management in any country requires planning and a balanced allocation of resources. In Brazil, the Law of Budgetary Guidelines (LDO) guides municipal managers toward that balance. This research develops a model that seeks a balanced allocation of public resources in Brazilian municipalities, taking the LDO as a parameter. As a first step, statistical techniques and multicriteria analysis are used to define allocation strategies based on technical input from the municipal manager. In a second step, a linear programming based optimization is presented, in which the objective function is derived from the preferences of the manager and his staff over the results. The statistical representation supports the multicriteria development in the definition of replacement rates through time series. The multicriteria analysis was structured by defining the criteria and alternatives and applying the UTASTAR method to calculate replacement rates. After these initial settings, a linear programming application was developed to find the optimal allocation for the execution of the municipal budget. Budget data from a municipality in southwestern Paraná were used in the application of the model and the analysis of its results.
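As an illustration of the second step, a toy linear program in SciPy; the weights, bounds, and budget figures are hypothetical, standing in for UTASTAR-derived preferences and LDO-type constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for three budget functions (e.g. health, education,
# infrastructure): preference weights standing in for UTASTAR-derived
# replacement rates, and minimum shares standing in for LDO-type rules.
weights = np.array([0.5, 0.3, 0.2])
total_budget = 100.0

res = linprog(
    c=-weights,                                  # linprog minimizes
    A_ub=[[1.0, 1.0, 1.0]], b_ub=[total_budget], # spend at most the budget
    bounds=[(20, None), (15, None), (5, None)],  # legal minimum allocations
)
print(res.x)  # optimal allocation per budget function
```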

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we propose several advances in the numerical and computational algorithms used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods both for global dynamic estimation of the coronal electron density and for estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points of view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures such as coronal mass ejections, that couples the tomographic imaging problem to a phase-field-based level set method. This formulation makes the 3D tomography of coronal mass ejections from limited observations feasible. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
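A small sketch of linear equality constrained estimation in the least-squares setting, solved through the KKT system; this is a toy stand-in, not the thesis's dynamic estimation machinery.

```python
import numpy as np

def equality_constrained_lsq(A, y, C, d):
    """min ||A x - y||^2  subject to  C x = d, via the KKT system
    [A^T A  C^T; C  0] [x; lam] = [A^T y; d]."""
    n, m = A.shape[1], C.shape[0]
    KKT = np.block([[A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ y, d])
    return np.linalg.solve(KKT, rhs)[:n]   # drop the multipliers

# Toy use: force the estimate to match a known total (e.g. a conserved
# integral of the estimated quantity).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
x_true = rng.normal(size=10)
y = A @ x_true + 0.01 * rng.normal(size=40)
C, d = np.ones((1, 10)), np.array([x_true.sum()])
x_hat = equality_constrained_lsq(A, y, C, d)
print(C @ x_hat - d)   # constraint satisfied to machine precision
```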

Relevance:

30.00%

Publisher:

Abstract:

The flow rates of the drying and nebulizing gases, the heat block and desolvation line temperatures, and the interface voltage are electrospray ionization parameters that may enhance the sensitivity of the mass spectrometer. The conditions giving higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen for significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors influencing the intensity signal for all pharmaceuticals. This fractional factorial design was projected onto a full 2² factorial design with center points. The lack-of-fit test proved significant, so a central composite face-centered design was then conducted. Finally, stepwise multiple linear regression and a subsequent optimization problem were carried out. Two main drug clusters were found with respect to the signal intensities across all runs of the augmented factorial design: p-aminophenol, salicylic acid, and nimesulide constitute one cluster, showing much higher sensitivity than the remaining drugs; the other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. The instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate; the only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.
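A brief sketch of the screening-to-factorial step with hypothetical intensities: fitting a 2² factorial model with center points by least squares and checking curvature, which is what motivates moving on to the face-centred composite design.

```python
import numpy as np

# 2^2 full factorial in coded units (interface voltage, nebulizing gas
# flow) augmented with center points; the intensities are hypothetical.
runs = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                 [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([105.0, 160.0, 130.0, 210.0, 150.0, 148.0, 152.0])

# Linear model with interaction: b0 + b1*V + b2*G + b12*V*G.
X = np.column_stack([np.ones(len(runs)), runs[:, 0], runs[:, 1],
                     runs[:, 0] * runs[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)

# Curvature check: a large gap between the factorial-point mean and the
# center-point mean signals lack of fit for the first-order model.
print(y[:4].mean() - y[4:].mean())
```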

Relevance:

30.00%

Publisher:

Abstract:

There is scientific evidence demonstrating the benefits of mushroom ingestion due to their richness in bioactive compounds such as mycosterols, in particular ergosterol [1]. Agaricus bisporus L. is the most consumed mushroom worldwide, presenting 90% ergosterol in its sterol fraction [2]. It is thus an interesting matrix from which to obtain ergosterol, a molecule with high commercial value. According to the literature, ergosterol concentration can vary between 3 and 9 mg per g of dried mushroom. Nowadays, traditional methods such as maceration and Soxhlet extraction are being replaced by emerging methodologies such as ultrasound-assisted (UAE) and microwave-assisted extraction (MAE) in order to decrease the amount of solvent used and the extraction time while increasing the extraction yield [2]. In the present work, A. bisporus was extracted varying several parameters relevant to UAE and MAE. UAE: solvent type (hexane and ethanol), ultrasound amplitude (50-100%) and sonication time (5-15 min); MAE: solvent fixed as ethanol, time (0-20 min), temperature (60-210 °C) and solid-liquid ratio (1-20 g/L). Moreover, to reduce process complexity, the pertinence of a saponification step was evaluated. Response surface methodology was applied to generate mathematical models that allow maximizing and optimizing the response variables that influence the extraction of ergosterol. Concerning UAE, ethanol proved to be the best solvent for achieving higher levels of ergosterol (671.5 ± 0.5 mg/100 g dw, at 75% amplitude for 15 min), whereas hexane was only able to extract 152.2 ± 0.2 mg/100 g dw under the same conditions. Nevertheless, the hexane extract showed higher purity (11%) than the ethanol counterpart (4%). Furthermore, in the case of the ethanolic extract, the saponification step increased its purity to 21%, while for the hexane extract the purity was similar; in fact, hexane presents higher selectivity for lipophilic compounds than ethanol. Regarding the MAE technique, the results showed that the optimal conditions (19 ± 3 min, 133 ± 12 °C and 1.6 ± 0.5 g/L) yielded higher ergosterol extraction levels (556 ± 26 mg/100 g dw). The values obtained with MAE are close to those obtained with conventional Soxhlet extraction (676 ± 3 mg/100 g dw) and with UAE. Overall, UAE and MAE proved to be efficient technologies for maximizing ergosterol extraction yields.
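A sketch of the response surface step under hypothetical MAE data: fit a full quadratic in time and temperature, then locate the stationary point of the fitted surface.

```python
import numpy as np

# Hypothetical face-centred design in time (min) and temperature (°C)
# with ergosterol yields (mg/100 g dw); the real CCD data are not given.
t = np.array([5, 5, 20, 20, 5, 20, 12.5, 12.5, 12.5, 12.5])
T = np.array([80, 180, 80, 180, 130, 130, 80, 180, 130, 130])
y = np.array([300, 420, 380, 470, 400, 480, 360, 500, 540, 535.0])

# Full quadratic response surface: b0 + b1*t + b2*T + b3*t^2 + b4*T^2 + b5*t*T.
X = np.column_stack([np.ones_like(t, dtype=float), t, T, t**2, T**2, t * T])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: solve grad = 0; a maximum only if H is negative
# definite (worth checking before trusting the optimum).
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
print(np.linalg.solve(H, -b[1:3]))   # candidate optimal (time, temperature)
```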

Relevance:

30.00%

Publisher:

Abstract:

Let G be a simple graph on n vertices with e(G) edges. Consider the signless Laplacian Q(G) = D + A, where A is the adjacency matrix and D is the diagonal matrix of vertex degrees of G. Let q1(G) and q2(G) be the first and second largest eigenvalues of Q(G), respectively, and denote by S_n^+ the star graph with an additional edge. It is proved that the inequality q1(G) + q2(G) ≤ e(G) + 3 is tighter for the graph S_n^+ than for all other firefly graphs, and also tighter for S_n^+ than for the graphs K_k ∨ K_{n−k} recently presented by Ashraf, Omidi and Tayfeh-Rezaie. It is also conjectured that S_n^+ minimizes f(G) = e(G) − q1(G) − q2(G) among all graphs G on n vertices.
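A quick numerical check of these quantities with NetworkX and NumPy: build S_n^+, form Q = D + A, and compare q1 + q2 with e(G) + 3.

```python
import numpy as np
import networkx as nx

def q1_q2(G):
    """Two largest eigenvalues of the signless Laplacian Q = D + A."""
    A = nx.to_numpy_array(G)
    Q = np.diag(A.sum(axis=1)) + A
    eig = np.sort(np.linalg.eigvalsh(Q))[::-1]
    return eig[0], eig[1]

# S_n^+: the star on n vertices plus one extra edge between two leaves.
n = 8
G = nx.star_graph(n - 1)   # node 0 is the centre, 1..n-1 are leaves
G.add_edge(1, 2)           # the additional edge
q1, q2 = q1_q2(G)
e = G.number_of_edges()
print(q1 + q2, e + 3)      # inequality q1 + q2 <= e(G) + 3
print(e - q1 - q2)         # the conjectured minimand f(G)
```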

Relevance:

30.00%

Publisher:

Abstract:

International audience

Relevance:

30.00%

Publisher:

Abstract:

Optimization of Carnobacterium divergens V41 growth and bacteriocin activity in a culture medium deprived of animal protein, needed for food bioprotection, was performed using a statistical approach. In a screening experiment, twelve factors (pH, temperature, carbohydrates, NaCl, yeast extract, soy peptone, sodium acetate, ammonium citrate, magnesium sulphate, manganese sulphate, ascorbic acid and thiamine) were tested for their influence on maximal growth and bacteriocin activity using a two-level incomplete factorial design with 192 experiments performed in microtiter plate wells. Based on the results, a basic medium was developed and three variables (pH, temperature and carbohydrate concentration) were selected for a scale-up study in a bioreactor. A 2³ full factorial design was performed, allowing the estimation of the linear effects of the factors and all first-order interactions. The best conditions for cell production were obtained with a temperature of 15 °C and a carbohydrate concentration of 20 g/l whatever the pH (in the range 6.5-8), and the best conditions for bacteriocin activity were obtained at 15 °C and pH 6.5 whatever the carbohydrate concentration (in the range 2-20 g/l). The predicted final count of C. divergens V41 and the bacteriocin activity under the optimized conditions (15 °C, pH 6.5, 20 g/l carbohydrates) were 2.4 × 10¹⁰ CFU/ml and 819200 AU/ml, respectively. C. divergens V41 cells cultivated under the optimized conditions were able to grow in cold-smoked salmon and totally inhibited the growth of Listeria monocytogenes (< 50 CFU g⁻¹) during five weeks of vacuum storage at 4 °C and 8 °C.

Relevance:

30.00%

Publisher:

Abstract:

Matrix converters convert a three-phase alternating-current power supply to a power supply of a different peak voltage and frequency, and are an emerging technology in a wide variety of applications. However, they are susceptible to an instability, whose behaviour is examined herein. The desired “steady-state” mode of operation of the matrix converter becomes unstable in a Hopf bifurcation as the output/input voltage transfer ratio, q, is increased through some threshold value, qc. Through weakly nonlinear analysis and direct numerical simulation of an averaged model, we show that this bifurcation is subcritical for typical parameter values, leading to hysteresis in the transition to the oscillatory state: there may thus be undesirable large-amplitude oscillations in the output voltages even when q is below the linear stability threshold value qc.
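The hysteresis mechanism can be illustrated with the normal form of a subcritical Hopf bifurcation for the oscillation amplitude; this is a generic sketch, with mu standing in for q − qc, not the averaged matrix-converter model itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Amplitude equation of a subcritical Hopf bifurcation:
#   dr/dt = mu*r + r^3 - r^5,
# where mu plays the role of q - qc.
def rdot(t, r, mu):
    return mu * r + r**3 - r**5

for mu, r0 in [(-0.1, 0.01), (-0.1, 0.8), (0.1, 0.01)]:
    sol = solve_ivp(rdot, [0, 200], [r0], args=(mu,))
    print(mu, r0, sol.y[0, -1].round(3))
# For mu < 0 a small perturbation decays, but a large one latches onto a
# finite-amplitude branch: large oscillations below the linear threshold.
```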

Relevance:

30.00%

Publisher:

Abstract:

We present a general multistage stochastic mixed 0-1 problem in which uncertainty appears everywhere: in the objective function, the constraint matrix and the right-hand side. The uncertainty is represented by a scenario tree, which may be symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Due to the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This so-called risk neutral approach has the drawback of providing a solution that ignores the variance of the objective value across scenarios and, hence, the occurrence of scenarios with an objective value below the expected one. Alternatively, we present several approaches for risk averse management: a scenario immunization strategy; optimization of the well-known Value-at-Risk (VaR) and several variants of Conditional Value-at-Risk strategies; optimization of the expected mean minus the weighted probability that a "bad" scenario occurs for the solution provided by the model; optimization of the objective function expected value subject to stochastic dominance constraints (SDC), for a set of profiles given by pairs of threshold objective values together with either bounds on the probability of not reaching the thresholds or the expected shortfall over them; and optimization of a mixture of the VaR and SDC strategies.
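As one concrete example of the risk-averse measures listed, here is a minimal Rockafellar-Uryasev linear program for Conditional Value-at-Risk over a hypothetical scenario set (continuous variables only; the mixed 0-1 structure of the full model is omitted).

```python
import numpy as np
from scipy.optimize import linprog

# Rockafellar-Uryasev LP for Conditional Value-at-Risk over hypothetical
# scenario losses; decision variables x live on the unit simplex.
rng = np.random.default_rng(0)
S, n, beta = 200, 3, 0.95
loss = rng.normal(size=(S, n))      # loss of each activity per scenario

# Variables: x (n), alpha (1), z (S); CVaR = alpha + mean(z) / (1 - beta).
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])
# z_s >= loss_s . x - alpha   <=>   loss_s . x - alpha - z_s <= 0
A_ub = np.hstack([loss, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n], res.fun)           # decision weights and minimal CVaR
```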

Relevance:

30.00%

Publisher:

Abstract:

Over recent years the structural ceramics industry in Brazil has found a very favorable market for growth. However, difficulties related to productivity and product quality are partially inhibiting this growth. One alternative for addressing these problems, and thus enabling the pottery industry's full development, is the substitution of natural gas for the firewood used in the firing process. To contribute to this process of technological innovation, this work studies the effect of the combined use of phyllite and kaolin waste on the properties of a clay ceramic matrix, verifying the possible benefits these raw materials can bring to the final product, as well as their potential to reduce the heat load needed to obtain products of equal or superior quality. The study was divided into two steps: characterization of the materials and study of the formulations. Two clays, a phyllite and a kaolin waste were characterized by the following techniques: laser granulometry, plasticity index by Atterberg limits, X-ray fluorescence, X-ray diffraction, mineralogical composition by Rietveld refinement, and thermogravimetric and differential thermal analysis. To study the formulations, specifically to evaluate the technological properties of the pieces, an experimental design was used that combined a three-component mixture design (standard mass × phyllite × kaolin waste) with a 2³ factorial design with center point for the thermal processing parameters. The experiment was performed with restricted strip-plot randomization. In total, 13 compositional points were investigated within the following constraints: phyllite ≤ 20% by weight, kaolin waste ≤ 40% by weight, and standard mass ≥ 60% by weight. The thermal parameters were used at the following levels: 750 and 950 °C for the firing temperature, 5 and 15 °C/min for the heating rate, and 15 and 45 min for the dwell time. The results showed that the introduction of phyllite and/or kaolin waste into the ceramic body produced a number of benefits in the properties of the final product, such as decreased water absorption, apparent porosity and linear firing shrinkage, besides increases in apparent specific mass and in the mechanical properties of the pieces. The best results were obtained at the compositional points where the sum of the phyllite and kaolin waste contents was maximal (40% by weight) and at a firing temperature of 950 °C. Regarding the prospect of savings in the heat energy required to form the desired microstructure, the phyllite and the kaolin waste, having small particle sizes and mineralogical constitutions that include fluxing phases, contributed to the optimization of the firing cycle.