34 results for test case optimization
Abstract:
This work presents a case study on technology assessment for power quality improvement devices. A system compatibility test protocol for power quality mitigation devices was developed to evaluate the functionality of three-phase voltage restoration devices. To validate this test protocol, the micro-DVR, a reduced-power development platform for DVR (dynamic voltage restorer) devices, was tested, and the results are discussed against voltage disturbance standards. (C) 2011 Elsevier B.V. All rights reserved.
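To make concrete the kind of check such a protocol automates, here is a minimal sketch of voltage-sag detection on a sampled waveform, assuming the common IEEE 1159 convention that a sag is an RMS drop to between 0.1 and 0.9 pu; the signal, sampling rate, and injected disturbance are illustrative and not taken from the paper.

```python
import numpy as np

def rms_profile(v, samples_per_cycle):
    """Half-cycle sliding RMS of a sampled voltage waveform (pu)."""
    w = samples_per_cycle // 2
    return np.sqrt(np.convolve(v**2, np.ones(w) / w, mode="valid"))

# Hypothetical test signal: 1 pu sine with a 0.5 pu sag injected mid-record.
fs, f0, cycles = 15360, 60, 30            # sampling rate, fundamental, length
t = np.arange(cycles * fs // f0) / fs
v = np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
v[len(v)//3 : 2*len(v)//3] *= 0.5         # disturbance a DVR should mitigate

rms = rms_profile(v, fs // f0)
sag = rms < 0.9                            # IEEE 1159 sag threshold (0.1-0.9 pu)
print(f"sag duration: {sag.sum() / fs * 1e3:.1f} ms, "
      f"residual voltage: {rms.min():.2f} pu")
```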
Abstract:
This paper addresses the use of optimization techniques in the design of a steel riser. Two methods are used: the genetic algorithm, which imitates the process of natural selection, and simulated annealing, which is based on the annealing process of a metal. Both are capable of searching a given solution space for the best feasible riser configuration according to predefined criteria. Optimization issues such as problem codification, parameter selection, definition of the objective function, and constraints are discussed. A comparison between the results obtained for economic and structural objective functions is made for a case study. Parallelization of the optimization methods is also addressed. [DOI: 10.1115/1.4001955]
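As a hedged illustration of the genetic-algorithm side of this workflow (not the authors' codification or cost model), the sketch below evolves a vector of riser segment wall thicknesses against a penalized objective; the material-cost term and the feasibility rule are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codification: an individual is a vector of segment wall
# thicknesses (m). Cost grows with steel volume; a penalty enforces a
# stand-in structural constraint (thicker walls assumed feasible).
N_SEG, POP, GENS = 10, 40, 200
LO, HI = 0.02, 0.08

def objective(x):
    cost = x.sum()                                     # economic objective
    violation = np.maximum(0.03 - x, 0).sum()          # toy feasibility rule
    return cost + 100.0 * violation                    # penalized objective

pop = rng.uniform(LO, HI, (POP, N_SEG))
for _ in range(GENS):
    fit = np.apply_along_axis(objective, 1, pop)
    parents = pop[np.argsort(fit)[:POP // 2]]          # truncation selection
    cut = rng.integers(1, N_SEG, POP // 2)             # one-point crossover
    kids = np.where(np.arange(N_SEG) < cut[:, None],
                    parents, parents[::-1])
    kids += rng.normal(0, 0.002, kids.shape)           # Gaussian mutation
    pop = np.clip(np.vstack([parents, kids]), LO, HI)

best = pop[np.argmin(np.apply_along_axis(objective, 1, pop))]
print("best thicknesses:", np.round(best, 4))
```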
Abstract:
Load cells are used extensively in engineering fields. This paper describes a novel structural optimization method for single- and multi-axis load cell structures. First, we briefly explain the topology optimization approach based on the solid isotropic material with penalization (SIMP) method. Next, we clarify the mechanical requirements and design specifications of the single- and multi-axis load cell structures, which are formulated as an objective function. In the case of multi-axis load cell structures, a methodology based on singular value decomposition is used. The sensitivities of the objective function with respect to the design variables are then formulated. On the basis of these formulations, an optimization algorithm is constructed using finite element methods and the method of moving asymptotes (MMA). Finally, we examine the characteristics of the optimization formulations and the resulting optimal configurations, and confirm the usefulness of the proposed methodology for the optimization of single- and multi-axis load cell structures.
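The SIMP interpolation named above has a compact closed form, E(rho) = E_min + rho^p (E_0 - E_min), whose derivative p rho^(p-1) (E_0 - E_min) is exactly the kind of sensitivity an MMA update consumes. A minimal sketch with illustrative steel-like values (the moduli and penalization exponent are standard defaults, not the paper's):

```python
import numpy as np

E0, EMIN, P = 210e9, 1e-4 * 210e9, 3.0   # solid modulus, void floor, penalty

def simp_modulus(rho):
    """SIMP interpolation: intermediate densities are penalized so the
    optimizer is driven toward 0/1 (void/solid) material layouts."""
    return EMIN + rho**P * (E0 - EMIN)

def simp_sensitivity(rho):
    """d E / d rho, the term fed to a gradient-based updater such as MMA."""
    return P * rho**(P - 1) * (E0 - EMIN)

rho = np.linspace(0, 1, 5)
print("relative stiffness:", np.round(simp_modulus(rho) / E0, 4))
```

Note how rho = 0.5 yields only ~12.5% of the solid stiffness at p = 3, which is what makes "gray" material uneconomical for the optimizer.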
Abstract:
Simulated annealing (SA) is an optimization technique that can process cost functions with arbitrary degrees of nonlinearity, discontinuity, and stochasticity, and can handle arbitrary boundary conditions and constraints imposed on these cost functions. Here, the SA technique is applied to the problem of robot path planning. Three situations are considered: the path represented as a polyline, as a Bezier curve, and as a spline-interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate. (C) 2010 Elsevier Ltd. All rights reserved.
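One plausible reading of the sensitivity idea is sketched below on a toy polyline planner: a finite-difference sensitivity is computed per parameter at each iteration and used to damp the proposal step of highly sensitive parameters, which tends to raise the acceptance rate. The obstacle model, weights, and cooling schedule are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
obstacle, radius = np.array([0.5, 0.5]), 0.2

def cost(wp):
    pts = np.vstack([start, wp.reshape(-1, 2), goal])
    length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    d = np.linalg.norm(pts - obstacle, axis=1)          # obstacle clearance
    return length + 50.0 * np.maximum(radius - d, 0).sum()

x = rng.uniform(0, 1, 6)                  # 3 free waypoints of the polyline
T, c = 1.0, cost(x)
for _ in range(3000):
    # finite-difference sensitivity of each continuous parameter
    sens = np.array([abs(cost(x + 1e-3 * e) - c) / 1e-3
                     for e in np.eye(x.size)])
    step = 0.1 / (1.0 + sens / (sens.mean() + 1e-12))   # damp sensitive params
    cand = x + rng.normal(0, step * T)
    cc = cost(cand)
    if cc < c or rng.random() < np.exp((c - cc) / T):   # Metropolis rule
        x, c = cand, cc
    T *= 0.999                                          # geometric cooling
print(f"final path cost: {c:.3f}")
```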
Abstract:
The cost of a new ship design depends heavily on the principal dimensions of the ship; however, minimizing the dimensions often conflicts with minimizing the oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and coefficients of form of tankers via a genetic algorithm. A multi-objective optimization problem was formulated using two objective attributes in the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of the study, three real ships are used as case studies. [DOI: 10.1115/1.4002740]
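The "nondominated Pareto frontier" step has a standard definition worth making explicit: a design is kept if no other design is at least as good in both objectives and strictly better in one. A minimal sketch with made-up (cost, outflow) pairs:

```python
import numpy as np

def pareto_front(points):
    """Indices of nondominated points when minimizing all objectives."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical designs evaluated on (total cost, mean oil outflow)
designs = np.array([[1.00, 0.80], [0.90, 0.95], [1.20, 0.60],
                    [1.05, 0.85], [0.95, 0.90]])
print("nondominated designs:", pareto_front(designs))   # -> [0, 1, 2, 4]
```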
Abstract:
Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimal set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability, with additional constraints that force the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady-state model used in the target calculation is also considered. The dynamic part of the MPC model is likewise considered unknown but is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
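A minimal sketch of the intermediate (target calculation) layer, assuming a toy steady-state gain model y = G u and input bounds; the gains, set points, and bounds are invented for illustration, and a general-purpose solver stands in for a dedicated QP routine.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical steady-state gain model y = G @ u linking inputs to outputs.
G = np.array([[1.0, 0.4],
              [0.2, 1.5]])
y_rto = np.array([2.0, 3.0])              # set points from the RTO layer
u_lo, u_hi = np.array([0.0, 0.0]), np.array([1.5, 1.5])

def dist(u):
    """Squared distance of the reachable steady state to the RTO optimum."""
    e = G @ u - y_rto
    return e @ e

res = minimize(dist, x0=np.zeros(2), method="SLSQP",
               bounds=list(zip(u_lo, u_hi)))
u_t = res.x
print("input target:", np.round(u_t, 3),
      "output target for MPC:", np.round(G @ u_t, 3))
```

Because y_rto here is unreachable within the input bounds, the layer returns the closest reachable target, which is exactly its role between RTO and MPC.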
Abstract:
This paper studies a simplified methodology for integrating the real-time optimization (RTO) of a continuous system into the model predictive controller in a one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal steady-state conditions of the process are sought using a rigorous non-linear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach may be comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear program with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
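A hedged, one-step sketch of the core idea: the economic gradient, evaluated on a (here, toy) non-linear model at the current operating point, enters the controller cost as a linear term, so the problem stays quadratic in the control move. All models, weights, and set points below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative one-step horizon: linear plant model y = a*u for tracking,
# toy non-linear steady-state economics f_eco(u) to be minimized.
a, y_sp, w = 2.0, 1.0, 0.3

def f_eco(u):                    # stand-in economic objective
    return (u - 0.8)**2 + 0.1 * u**3

def grad_f_eco(u, h=1e-6):       # gradient supplied to the controller cost
    return (f_eco(u + h) - f_eco(u - h)) / (2 * h)

def J(du, u_prev=0.2):
    u = u_prev + du[0]
    tracking = (a * u - y_sp)**2                    # quadratic MPC term
    economic = w * grad_f_eco(u_prev) * du[0]       # linear economic term
    return tracking + economic

res = minimize(J, x0=[0.0])
print(f"control move: {res.x[0]:.3f}")
```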
Abstract:
Highly redundant or statically indeterminate structures, such as cable-stayed bridges, are of particular concern to the engineering community because of the complex parameters that must be taken into account in health monitoring. The purpose of this study was to verify the reliability and practicability of using GPS to characterize the dynamic oscillations of small-span bridges. The test was carried out on a cable-stayed wood footbridge at the Escola de Engenharia de São Carlos, Universidade de São Paulo, Brazil. Initially, a static load trial was carried out to estimate the deck amplitude and oscillation frequency. Next, a calibration trial was carried out by applying a well-known oscillation to the rover antenna to check the detectable limits of the method in that environment. Finally, a dynamic load trial was carried out using GPS and a displacement transducer to measure the deck oscillation, the transducer serving only to confirm the GPS results. The results show that the frequencies and amplitude displacements obtained by GPS are in good agreement with the displacement transducer responses. GPS can thus be used as a reliable tool to characterize the dynamic behavior of structures such as cable-stayed footbridges undergoing dynamic loads.
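Extracting the oscillation frequency and amplitude from a GPS displacement record is a standard spectral task; a minimal sketch on a synthetic record (the 20 Hz rate, 2.1 Hz mode, and 5 mm amplitude are invented stand-ins, not the footbridge's values):

```python
import numpy as np

# Synthetic stand-in for a GPS displacement record: a 2.1 Hz deck
# oscillation of 5 mm amplitude plus measurement noise.
fs, dur = 20.0, 60.0                       # 20 Hz GPS rate, 60 s record
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
disp = 5.0 * np.sin(2 * np.pi * 2.1 * t) + rng.normal(0, 1.0, t.size)

spec = np.abs(np.fft.rfft(disp - disp.mean()))
freqs = np.fft.rfftfreq(disp.size, 1 / fs)
peak = freqs[np.argmax(spec)]
amp = 2 * spec.max() / disp.size           # single-sided amplitude estimate
print(f"dominant frequency: {peak:.2f} Hz, amplitude: {amp:.1f} mm")
```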
Abstract:
Desserts made with soy cream, which are oil-in-water emulsions, are widely consumed by lactose-intolerant individuals in Brazil. This study therefore used response surface methodology (RSM) to optimize the sensory attributes of a soy-based emulsion over a range of pink guava juice (GJ: 22% to 32%) and soy protein (SP: 1% to 3%) contents. Water-holding capacity (WHC) and backscattering were analyzed after 72 h of storage at 7 °C. Furthermore, a rating test was performed to determine the degree of liking of color, taste, creaminess, appearance, and overall acceptability. The data showed that the samples were stable against gravity and storage. The models developed by RSM adequately described the creaminess, taste, and appearance of the emulsions. The response surface of the desirability function was used successfully to optimize the sensory properties of the dairy-free emulsions, suggesting that a product with 30.35% GJ and 3% SP was the best combination of these components. The optimized sample presented suitable sensory properties, in addition to being a source of dietary fiber, iron, copper, and ascorbic acid.
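The desirability-function step combines several fitted responses into one surface to maximize. A minimal sketch, assuming Derringer-type "larger is better" desirabilities and wholly invented quadratic response models (the real fitted coefficients are not given in the abstract):

```python
import numpy as np

def desirability(y, lo, hi):
    """'Larger is better' desirability on a rating scale [lo, hi]."""
    return np.clip((y - lo) / (hi - lo), 0, 1)

# Hypothetical fitted RSM models for two sensory responses (1-9 scale)
def creaminess(gj, sp):
    return 5.0 + 0.08 * (gj - 27) + 0.9 * (sp - 2) - 0.05 * (gj - 27)**2

def taste(gj, sp):
    return 6.0 + 0.05 * (gj - 27) - 0.02 * (gj - 27)**2 + 0.3 * (sp - 2)

gj = np.linspace(22, 32, 101)              # guava juice range (%)
sp = np.linspace(1, 3, 41)                 # soy protein range (%)
GJ, SP = np.meshgrid(gj, sp)
D = np.sqrt(desirability(creaminess(GJ, SP), 1, 9) *
            desirability(taste(GJ, SP), 1, 9))     # geometric mean
i, j = np.unravel_index(D.argmax(), D.shape)
print(f"optimum near GJ={GJ[i, j]:.2f}%, SP={SP[i, j]:.2f}%")
```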
Abstract:
An abnormal heart-rate (HR) response during or after a graded exercise test has been recognized as a strong and independent predictor of all-cause mortality in healthy and diseased subjects. The purpose of the present study was to evaluate the HR response during exercise in women with systemic lupus erythematosus (SLE). In this case-control study, 22 women with SLE (age 29.5 ± 1.1 years) were compared with 20 gender-, BMI-, and age-matched healthy subjects (age 26.5 ± 1.4 years). A treadmill cardiorespiratory test was performed, and the HR response during exercise was evaluated by the chronotropic reserve (CR). HR recovery (ΔHRR) was defined as the difference between HR at peak exercise and at both the first (ΔHRR1) and second (ΔHRR2) minutes after exercise. SLE patients presented lower peak VO2 than healthy subjects (27.6 ± 0.9 vs. 36.7 ± 1.1 ml/kg/min, p = 0.001). Additionally, SLE patients demonstrated lower CR (71.8 ± 2.4 vs. 98.2 ± 2.6%, p = 0.001), ΔHRR1 (22.1 ± 2.5 vs. 32.4 ± 2.2%, p = 0.004), and ΔHRR2 (39.1 ± 2.9 vs. 50.8 ± 2.5%, p = 0.001) than their healthy peers. In conclusion, SLE patients present an abnormal HR response to exercise, characterized by chronotropic incompetence and delayed ΔHRR. Lupus (2011) 20, 717-720.
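For readers unfamiliar with these indices, a small worked sketch assuming the commonly used definitions: CR as the percentage of heart-rate reserve achieved with predicted maximal HR taken as 220 − age, and ΔHRR as peak HR minus the HR at a given recovery minute. The paper's exact formulas may differ (it reports ΔHRR in %), and the subject values below are invented.

```python
def chronotropic_reserve(hr_rest, hr_peak, age):
    """Percent of heart-rate reserve used, assuming the common
    220 - age estimate of predicted maximal HR."""
    return 100 * (hr_peak - hr_rest) / ((220 - age) - hr_rest)

def hr_recovery(hr_peak, hr_after):
    """Absolute HR recovery (beats/min) at a given minute post-exercise."""
    return hr_peak - hr_after

# Hypothetical subject: resting 75 bpm, peak 160 bpm at age 30, 138 bpm at 1 min
print(f"CR = {chronotropic_reserve(75, 160, 30):.1f}%")   # -> 73.9%
print(f"dHRR1 = {hr_recovery(160, 138)} bpm")             # -> 22 bpm
```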
Abstract:
There is a positive correlation between the intensity of use of a given antibiotic and the prevalence of resistant strains. The more one treats, the more patients infected with resistant strains appear and, as a consequence, the higher the mortality due to the infection and the longer the hospitalization time. In contrast, the less one treats, the higher the mortality rates and the longer the hospitalization times of patients infected with sensitive strains that could have been successfully treated. The hypothesis proposed in this paper is an attempt to resolve this conflict: there must be an optimum treatment intensity that minimizes both the additional mortality and the hospitalization time due to infection by both sensitive and resistant bacterial strains. To test this hypothesis, we applied a simple mathematical model that allowed us to estimate the optimum proportion of patients to be treated in order to minimize the total number of deaths and the hospitalization time due to the infection in a hospital setting. (C) 2007 Elsevier Inc. All rights reserved.
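The trade-off admits a very compact numerical illustration: deaths among sensitive-strain patients fall with the treated proportion p, while deaths from selected resistant strains rise with it, giving an interior optimum. The functional forms and rates below are toy stand-ins, not the paper's model.

```python
import numpy as np

# Toy trade-off over the treated proportion p: all rates are made up.
p = np.linspace(0, 1, 501)
deaths_sensitive = 0.30 * (1 - 0.8 * p)        # treatment saves sensitive cases
resistant_share = 0.05 + 0.6 * p**2            # resistance selected by use
deaths_resistant = 0.40 * resistant_share
total = deaths_sensitive + deaths_resistant

p_opt = p[np.argmin(total)]
print(f"optimum treatment proportion ~ {p_opt:.2f}")   # -> ~0.50 here
```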
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed with our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. The images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique recovers the parameters of the sources with an accuracy similar to that of the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to indicate quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that the cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As with any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
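The cross-entropy method itself is simple to state: sample parameter vectors from a Gaussian, keep the elite fraction under the performance function, and refit the Gaussian to the elite. A minimal sketch fitting a single circular Gaussian source (four parameters rather than the paper's six-per-source elliptical model) to a synthetic noisy image:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observed image": one circular Gaussian source plus noise.
x, y = np.meshgrid(np.arange(32), np.arange(32))
def model(p):                      # p = (x0, y0, amplitude, width)
    return p[2] * np.exp(-((x - p[0])**2 + (y - p[1])**2) / (2 * p[3]**2))
obs = model([14.0, 19.0, 5.0, 2.5]) + rng.normal(0, 0.1, x.shape)

def perf(p):                       # sum of squared residuals to minimize
    return np.sum((model(p) - obs)**2)

mu = np.array([16.0, 16.0, 1.0, 4.0])     # initial parameter guess
sigma = np.array([8.0, 8.0, 3.0, 2.0])
for _ in range(40):
    samples = rng.normal(mu, sigma, (200, 4))
    elite = samples[np.argsort([perf(s) for s in samples])[:20]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3   # CE update
print("recovered (x0, y0, A, w):", np.round(mu, 2))
```

The sampling-and-elite structure is what lets the method also score how many components an image supports: adding a component that the data do not need fails to improve the converged performance.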
Abstract:
Aim: To investigate areas of endemism within the distribution of Oswaldella species in the Southern Ocean, thereby testing previous hypotheses and proposing alternative scenarios for Antarctic evolution.
Location: Southern Ocean, Antarctic and sub-Antarctic waters of southern South America.
Methods: We prepared a database for the 31 currently known species of the Antarctic genus Oswaldella, which includes geographical locations gathered from published taxonomic studies as well as materials from museums and expeditions. A parsimony analysis of endemicity (PAE) was used to test hypotheses of distribution patterns.
Results: Four areas of endemism are hypothesized: southern South America, two high Antarctic areas (eastern and western), and a larger area, mainly in western Antarctica at lower latitudes and including insular areas (but not the Balleny Islands).
Main conclusions: The results support, in part, previous hypotheses for the Southern Ocean region, while providing more detailed resolution. The areas of endemism may reflect both historical and ecological processes that influenced the Antarctic biota. The Magellanic area reflects the well-known affinities of the Antarctic biota with that of South America and may be a consequence of dispersal through deeper (and colder) waters, followed by speciation. The second area, the largest one, encompasses most of the insular faunas and may also be associated with deeper waters formed since 43 Ma. The third area may be explained by the development of seaways in the circum-Antarctic region beginning 50 Ma. Finally, the fourth zone, with a very poor fauna, coincides with the opening of the Tasman Strait and the formation of the Australo-Antarctic Gulf, associated with a minor wind-driven current.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic block generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
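To ground the pole-optimization idea on the simplest case the paper mentions (a Laguerre basis), here is a hedged sketch: input-output data are filtered through a discrete Laguerre cascade, the linear coefficients are fit by least squares, and the single pole is optimized over the residual error. Note this uses a scalar search rather than the paper's analytic back-propagation-through-time gradients with Levenberg-Marquardt, and the "unknown" system is a toy.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

def laguerre_outputs(u, a, n_filters):
    """Filter input u through a discrete Laguerre basis with pole a."""
    g = np.sqrt(1 - a**2)
    x = lfilter([0, g], [1, -a], u)            # first-order low-pass stage
    outs = [x]
    for _ in range(n_filters - 1):
        x = lfilter([-a, 1], [1, -a], x)       # all-pass cascade stage
        outs.append(x)
    return np.column_stack(outs)

# Synthetic system to model (input-output data only, as in the paper)
u = rng.normal(size=2000)
y = lfilter([0.0, 0.2, 0.1], [1, -0.7], u)     # "unknown" linear system

def fit_error(a):
    phi = laguerre_outputs(u, a, 4)
    c, *_ = np.linalg.lstsq(phi, y, rcond=None)   # linear-in-c least squares
    return np.sum((phi @ c - y)**2)

res = minimize_scalar(fit_error, bounds=(0.01, 0.99), method="bounded")
print(f"optimized Laguerre pole: {res.x:.3f}")    # near the true pole 0.7
```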
Abstract:
In this paper, we consider the classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can be smaller than that of the specification FSM; previous work deals only with the case in which the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization gives the test designer more options: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but still guaranteed, fault coverage can be generated instead. The second generalization is that tests can be generated starting from a user-defined test suite, incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. Experimental results indicate that the proposed algorithm allows a trade-off between the length and the fault coverage of test suites.
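A toy sketch of the extend-until-covered loop, using mutation coverage over single-output-fault mutants as a stand-in fault model; the FSM, the fault model, and the greedy extension below are illustrative assumptions, not the paper's algorithm or completeness conditions.

```python
from itertools import product

# Deterministic Mealy FSM: (state, input) -> (next_state, output)
SPEC = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}

def run(fsm, seq, init="s0"):
    state, out = init, []
    for sym in seq:
        state, o = fsm[(state, sym)]
        out.append(o)
    return out

def mutants(fsm):
    """All single-output-fault mutants of the specification."""
    for key in fsm:
        nxt, out = fsm[key]
        m = dict(fsm)
        m[key] = (nxt, 1 - out)
        yield m

def coverage(tests):
    ms = list(mutants(SPEC))
    killed = sum(any(run(m, t) != run(SPEC, t) for t in tests) for m in ms)
    return killed / len(ms)

# Extend a user-defined suite until full (single-fault) coverage is reached
suite = [("a", "a")]
for cand in product("ab", repeat=3):
    if coverage(suite) == 1.0:
        break
    if coverage(suite + [cand]) > coverage(suite):
        suite.append(cand)
print("suite:", suite, "coverage:", coverage(suite))
```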