845 results for constraint optimization
Abstract:
Objective: A new protocol for fixation and slide preservation was evaluated in order to improve the quality of immunocytochemical reactions on cytology slides. Methods: The quality of immunoreactions was evaluated retrospectively on 186 cytology slides (130 direct smears, 56 cytospins) prepared from different cytology samples. Ninety-three of the slides were air dried, stored at -20 °C and fixed in acetone for 10 minutes (Protocol 1), whereas the other 93 were immediately fixed in methanol at -20 °C for at least 30 minutes, subsequently protected with polyethylene glycol (PEG) and stored at room temperature (Protocol 2). Immunocytochemical staining, with eight primary antibodies, was performed on a Ventana BenchMark Ultra instrument using the UltraView Universal DAB Detection Kit. The following parameters were evaluated for each immunoreaction: morphology preservation, intensity of specific staining, background and counterstain. The slides were blinded and independently scored by four observers, with marks from 0 to 20. Results: The quality of immunoreactions was better on methanol-fixed slides protected with PEG than on air-dried slides stored in the freezer: mean score 14.44 ± 3.58 versus 11.02 ± 3.86, respectively (P < 0.001). Conclusion: Immediate fixation of cytology slides in cold methanol with subsequent application of PEG is an easy and straightforward procedure that improves the quality of immunocytochemical reactions and allows storage of the slides at room temperature.
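As a rough sanity check of the reported comparison, Welch's t-test can be reproduced directly from the published summary statistics; the snippet below assumes one score per slide (n = 93 per protocol), which may differ from the paper's exact analysis.

```python
# Sanity-check the reported comparison of mean quality scores using
# Welch's t-test on the published summary statistics. Assumes one score
# per slide (n = 93 per protocol); the paper's exact test may differ.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=14.44, std1=3.58, nobs1=93,   # Protocol 2: methanol + PEG
    mean2=11.02, std2=3.86, nobs2=93,   # Protocol 1: air-dried, frozen
    equal_var=False,                    # Welch's variant
)
print(f"t = {t:.2f}, p = {p:.2e}")      # p << 0.001, consistent with the abstract
```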
Abstract:
Screening of topologies developed by hierarchical heuristic procedures can be carried out by comparing their optimal performance. In this work we exploit mono-objective process optimization using two algorithms, simulated annealing and tabu search, and four different objective functions: two of the net present value type (one of them including environmental costs) and two of the global potential impact type. The hydrodealkylation of toluene to produce benzene was used as a case study, considering five topologies of different complexity, obtained mainly by including or excluding liquid recycling and heat integration. The performance of the algorithms together with the objective functions was observed, analyzed and discussed from various perspectives: average deviation of results for each algorithm, capacity for producing high-purity product, screening of topologies, robustness of the objective functions in screening of topologies, trade-offs between economic and environmental objective functions, and variability of the optimum solutions.
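For readers unfamiliar with the first of these algorithms, the following is a minimal sketch of a simulated annealing loop for mono-objective minimization; the objective and neighbourhood move are placeholders, not the hydrodealkylation flowsheet model.

```python
# Minimal simulated annealing loop of the kind used for mono-objective
# process optimization. The objective below is a placeholder; in the paper
# it would be an NPV- or potential-impact-type function of the flowsheet.
import math, random

def objective(x):                      # placeholder black-box objective (minimize)
    return (x - 2.0) ** 2 + math.sin(5 * x)

def simulated_annealing(x, t0=10.0, cooling=0.95, iters=2000):
    best, fbest = x, objective(x)
    f, t = fbest, t0
    for _ in range(iters):
        cand = x + random.uniform(-0.5, 0.5)      # neighbour move
        fc = objective(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < f or random.random() < math.exp((f - fc) / t):
            x, f = cand, fc
            if f < fbest:
                best, fbest = x, f
        t *= cooling                              # geometric cooling schedule
    return best, fbest

print(simulated_annealing(0.0))
```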
Abstract:
Multi-objective particle swarm optimization (MOPSO) is a search algorithm based on social behavior. Most existing multi-objective particle swarm optimization schemes are based on Pareto optimality and aim to obtain a representative non-dominated Pareto front for a given problem. Several approaches have been proposed to study the convergence and performance of the algorithm, particularly by assessing the final results. In the present paper a different approach is proposed: using Shannon entropy to analyze the MOPSO dynamics along the algorithm execution. The results indicate that Shannon entropy can be used as an indicator of diversity and convergence for MOPSO problems.
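One plausible way to turn this idea into code is to histogram the swarm along a dimension at each iteration and compute H = −Σ p_i log p_i; the binning scheme below is an assumption for illustration, not necessarily the paper's exact formulation.

```python
# One possible Shannon-entropy diversity indicator for a swarm: histogram
# the particles along one dimension and compute H = -sum(p * log p).
# The binning scheme is an assumption, not necessarily the paper's exact one.
import numpy as np

def swarm_entropy(positions, bins=10):
    counts, _ = np.histogram(positions, bins=bins)
    p = counts[counts > 0] / counts.sum()         # empirical bin probabilities
    return float(-(p * np.log(p)).sum())

diverse = np.random.uniform(0, 1, 100)            # spread-out swarm
converged = np.random.normal(0.5, 0.01, 100)      # collapsed swarm
print(swarm_entropy(diverse), swarm_entropy(converged))  # high vs. low entropy
```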
Abstract:
An optimised version of the Quick, Easy, Cheap, Effective, Rugged and Safe (QuEChERS) method for the simultaneous determination of 14 organochlorine pesticides in carrots was developed using gas chromatography coupled with an electron-capture detector (GC-ECD) and confirmation by gas chromatography tandem mass spectrometry (GC-MS/MS). A citrate-buffered version of QuEChERS was applied for the extraction of the organochlorine pesticides, and for the extract clean-up, primary secondary amine, octadecyl-bonded silica (C18), magnesium sulphate (MgSO4) and graphitized carbon black were used as sorbents. The GC-ECD determination of the target compounds was achieved in less than 20 min. The limits of detection were below the EU maximum residue limits (MRLs) for carrots, 10–50 μg kg−1, while the limit of quantification exceeded 10 μg kg−1 for hexachlorobenzene (HCB). The introduction of a sonication step was shown to improve the recoveries. The overall average recoveries in carrots, at the four tested levels (60, 80, 100 and 140 μg kg−1), ranged from 66 to 111%, with relative standard deviations in the range of 2–15% (n = 3) for all analytes, with the exception of HCB. The method was applied to the analysis of 21 carrot samples from different Portuguese regions; β-HCH was the pesticide most frequently found, at concentrations ranging from below the limit of quantification to 14.6 μg kg−1. Only one sample had a pesticide residue (β-HCH) above the MRL, at 14.6 μg kg−1. This methodology combines the advantages of both QuEChERS and GC-ECD, producing a very rapid, sensitive and reliable procedure that can be applied in routine analytical laboratories.
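For reference, recoveries and relative standard deviations in a spiking experiment of this kind are computed as below; the replicate values are illustrative, not data from the study.

```python
# How recovery (%) and relative standard deviation are typically computed in
# a spiking experiment like this one. The replicate values are illustrative,
# not data from the paper.
import statistics

spiked_level = 100.0                               # μg/kg added to the blank matrix
measured = [92.3, 98.1, 95.4]                      # n = 3 replicate determinations

recoveries = [100.0 * m / spiked_level for m in measured]
mean_rec = statistics.mean(recoveries)
rsd = 100.0 * statistics.stdev(recoveries) / mean_rec

print(f"recovery = {mean_rec:.1f}%, RSD = {rsd:.1f}% (n = 3)")
```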
Abstract:
The present work describes the optimization of a short-term assay, based on the inhibition of the esterase activity of the alga Pseudokirchneriella subcapitata, in a microplate format. The optimization of the staining procedure showed that incubation of the algal cells with 20 μmol L−1 fluorescein diacetate (FDA) for 40 min allowed discrimination between metabolically active and inactive cells. The short-term assay was tested using Cu as toxicant. For this purpose, algal cells in the exponential or stationary phase of growth were exposed to the heavy metal under growing conditions. After 3 or 6 h, cells were stained with FDA using the optimized procedure. For Cu, the 3- and 6-h EC50 values, based on the inhibition of the esterase activity of algal cells in the exponential phase of growth, were 209 and 130 μg L−1, respectively. P. subcapitata cells in the stationary phase of growth displayed higher effective concentration values than those observed in the exponential phase: the 3- and 6-h EC50 values for Cu, for cells in the stationary phase, were 443 and 268 μg L−1, respectively. This short-term microplate assay proved to be a rapid means of testing toxicity using the alga P. subcapitata. The small volume required, the simplicity of the assay (no washing steps) and the automatic reading of the fluorescence make the assay particularly well suited to evaluating the toxicity of a large number of environmental samples.
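An EC50 of this kind is typically estimated by fitting a log-logistic dose-response curve to the inhibition data; the sketch below uses illustrative data points, not the study's measurements.

```python
# Sketch of how an EC50 is typically estimated from inhibition data: fit a
# log-logistic dose-response curve and read off the midpoint. The data points
# are illustrative, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def logistic(c, ec50, hill):
    # fraction of esterase activity remaining at Cu concentration c (μg/L)
    return 1.0 / (1.0 + (c / ec50) ** hill)

conc = np.array([25, 50, 100, 200, 400, 800], dtype=float)   # μg/L Cu
activity = np.array([0.95, 0.85, 0.65, 0.48, 0.25, 0.10])    # relative activity

(ec50, hill), _ = curve_fit(logistic, conc, activity, p0=[200.0, 1.0])
print(f"EC50 ≈ {ec50:.0f} μg/L (Hill slope {hill:.2f})")
```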
Abstract:
Competitive electricity markets have arisen as a result of power-sector restructuring and power-system deregulation. The players participating in competitive electricity markets must define strategies and make decisions using all the available information and business opportunities.
Abstract:
It is generally challenging to determine the end-to-end delays of applications while maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
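The core duality idea can be illustrated on a toy deadline-splitting problem: each subtask solves a local subproblem given a price on the shared deadline budget, and a subgradient step updates the price. The utilities and step size below are assumptions for illustration.

```python
# A toy version of the Lagrangian-duality idea: split an end-to-end deadline D
# among subtasks to maximize aggregate utility sum_i U_i(d_i) subject to
# sum_i d_i <= D. Each subtask solves its own subproblem given the price
# lambda; a subgradient step updates lambda. Utilities here are illustrative.
D = 10.0                                   # end-to-end deadline
w = [1.0, 2.0, 4.0]                        # per-subtask utility weights: U_i = w_i*log(d_i)

lam = 1.0                                  # dual variable (price of deadline budget)
for it in range(200):
    # each subtask independently maximizes w_i*log(d_i) - lam*d_i  =>  d_i = w_i/lam
    d = [wi / lam for wi in w]
    # subgradient of the dual: constraint violation sum(d) - D
    lam = max(1e-6, lam + 0.05 * (sum(d) - D))

print([round(di, 2) for di in d], round(sum(d), 2))   # allocations sum to ~D
```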
Abstract:
Due to usage conditions, hazardous environments or intentional causes, physical and virtual systems are subject to faults in their components, which may affect their overall behaviour. In a 'black-box' agent modelled by a set of propositional logic rules, in which just a subset of components is externally visible, such faults may only be recognised by examining some output function of the agent. A (fault-free) model of the agent's system provides the expected output for a given input. If the real output differs from that predicted output, then the system is faulty. However, some faults may only become apparent in the system output when appropriate inputs are given. A number of problems regarding both testing and diagnosis thus arise, such as testing a fault, testing the whole system, finding possible faults and differentiating them to locate the correct one. The corresponding optimisation problems of finding solutions that require minimum resources are also very relevant in industry, as is minimal diagnosis. In this dissertation we use a well-established set of benchmark circuits to address such diagnostic problems, and we propose and develop models with different logics that we formalise and generalise as much as possible. We also prove that all techniques generalise to agents and to multiple faults. The developed multi-valued logics extend the usual Boolean logic (suitable for fault-free models) by encoding values with some dependency (usually on faults); such logics thus allow modelling an arbitrary number of diagnostic theories. Each problem is subsequently solved with CLP solvers that we implement and discuss, together with a new efficient search technique that we present. We compare our results with other approaches such as SAT (which requires substantial duplication of circuits), showing the effectiveness of constraints over multi-valued logics, as well as the adequacy of a general set constraint solver (with special inferences over set functions such as cardinality) on other problems. In addition, for an optimisation problem, we integrate local search with a constructive approach (branch-and-bound) using a variety of logics to improve an existing efficient tool based on SAT and ILP.
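On a toy scale, the model-based diagnosis setting reads as follows: inject candidate faults into a fault-free circuit model and keep the ones consistent with the observed input/output behaviour. The brute-force sketch below only conveys the idea; the dissertation instead encodes it with constraint solvers over multi-valued logics.

```python
# A tiny brute-force illustration of model-based diagnosis on a 2-gate
# circuit: inject candidate stuck-at faults into a fault-free model and keep
# those consistent with an observed input/output pair. The dissertation uses
# constraint solvers over multi-valued logics; this exhaustive check only
# conveys the idea on a toy scale.
def circuit(a, b, c, fault=None):
    g1 = a and b                       # AND gate
    if fault == "g1_stuck_0": g1 = False
    if fault == "g1_stuck_1": g1 = True
    out = g1 or c                      # OR gate
    if fault == "out_stuck_0": out = False
    if fault == "out_stuck_1": out = True
    return out

faults = [None, "g1_stuck_0", "g1_stuck_1", "out_stuck_0", "out_stuck_1"]
observed_in, observed_out = (True, True, False), False   # expected True: faulty!

diagnoses = [f for f in faults if circuit(*observed_in, fault=f) == observed_out]
print(diagnoses)                       # candidate faults explaining the observation
```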
Abstract:
The basic motivation of this work was the integration of biophysical models within the interval constraints framework for decision support. Comparing the major features of biophysical models with the expressive power of the existing interval constraints framework, it was clear that the most important inadequacy was related to the representation of differential equations. System dynamics is often modelled through differential equations, but there was no way of expressing a differential equation as a constraint and integrating it within the constraints framework. Consequently, the goal of this work is focussed on the integration of ordinary differential equations within the interval constraints framework, which for this purpose is extended with the new formalism of Constraint Satisfaction Differential Problems. This framework allows the specification of ordinary differential equations, together with related information, by means of constraints, and provides efficient propagation techniques for pruning the domains of their variables. This enables the integration of all such information in a single constraint whose variables may subsequently be used in other constraints of the model. The specific method used for pruning its variable domains can then be combined with the pruning methods associated with the other constraints in an overall propagation algorithm for reducing the bounds of all model variables. The application of the constraint propagation algorithm for pruning the variable domains, that is, the enforcement of local consistency, turned out to be insufficient to support decisions in practical problems that include differential equations. The domain pruning achieved is not, in general, sufficient to allow safe decisions, and the main reason derives from the non-linearity of the differential equations. Consequently, as a complementary goal, this work proposes a new strong consistency criterion, Global Hull-consistency, particularly suited to decision support with differential models, which presents an adequate trade-off between domain pruning and computational effort. Several alternative algorithms are proposed for enforcing Global Hull-consistency and, due to their complexity, an effort was made to provide implementations able to supply any-time pruning results. Since the consistency criterion depends on the existence of canonical solutions, a local search approach is proposed that can be integrated with constraint propagation in continuous domains and, in particular, with the enforcing algorithms, to anticipate the finding of canonical solutions. The last goal of this work is the validation of the approach as an important contribution to the integration of biophysical models within decision support. Consequently, a prototype application that integrates all the proposed extensions to the interval constraints framework was developed and used to solve problems in different biophysical domains.
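To give a flavour of what an interval enclosure of an ODE looks like, the sketch below propagates bounds on x(t) for x′ = −x with a crude interval Euler step; real validated methods also bound the truncation error, which this illustration ignores.

```python
# A rough sketch of the interval idea behind embedding an ODE in a constraints
# framework: propagate an interval enclosure [lo, hi] of x(t) for x' = f(x)
# with a crude interval Euler step. Real implementations bound the truncation
# error; this illustration ignores it and only shows the enclosure mechanics.
def f_bounds(lo, hi):
    # interval extension of f(x) = -x (monotone decreasing), so bounds swap
    return -hi, -lo

def interval_euler(x_lo, x_hi, h, steps):
    for _ in range(steps):
        d_lo, d_hi = f_bounds(x_lo, x_hi)
        x_lo, x_hi = x_lo + h * d_lo, x_hi + h * d_hi
    return x_lo, x_hi

# initial uncertainty x(0) in [0.9, 1.1]; enclosure of x(1) for x' = -x
print(interval_euler(0.9, 1.1, 0.01, 100))   # contains exp(-1) * [0.9, 1.1]
```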
Abstract:
This paper proposes a stochastic mixed-integer linear approach to deal with a short-term unit commitment problem with uncertainty in a deregulated electricity market that includes day-ahead bidding and bilateral contracts. The proposed approach considers the typical operating constraints on the thermal units and a spinning reserve. The uncertainty is due to the electricity prices, which are modeled by a scenario set, keeping the computation acceptable. Moreover, emission allowances are considered, allowing environmental constraints to be taken into account. A case study is presented to illustrate the usefulness of the proposed approach, and an assessment of the cost of the spinning reserve is obtained by comparing the situations with and without spinning reserve.
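Structurally, a scenario-based stochastic unit commitment looks like the toy model below: a commitment decision shared across scenarios, a reserve feasibility check, and an expected profit over the price scenarios. All costs, capacities and prices are illustrative, not the paper's case-study data.

```python
# A toy scenario-based stochastic unit commitment: choose on/off status for
# two thermal units to maximize expected profit over price scenarios, subject
# to a spinning-reserve requirement. Costs, capacities and prices are
# illustrative, not the paper's case-study data.
from itertools import product

cap   = [100.0, 60.0]          # MW capacity per unit
cost  = [20.0, 28.0]           # marginal cost (EUR/MWh)
fixed = [500.0, 200.0]         # start-up/fixed cost if committed (EUR)
demand, reserve = 80.0, 15.0   # load and spinning-reserve requirement (MW)
scenarios = [(0.5, 26.0), (0.5, 40.0)]   # (probability, price EUR/MWh)

best = None
for u in product([0, 1], repeat=2):                       # commitment decision
    if sum(ui * ci for ui, ci in zip(u, cap)) < demand + reserve:
        continue                                          # infeasible: no reserve
    exp_profit = -sum(ui * fi for ui, fi in zip(u, fixed))
    for prob, price in scenarios:
        # dispatch committed units cheapest-first up to demand
        left, op_cost = demand, 0.0
        for ui, ci, mi in sorted(zip(u, cost, cap), key=lambda t: t[1]):
            g = min(left, ui * mi)
            op_cost += g * ci
            left -= g
        exp_profit += prob * (price * demand - op_cost)
    if best is None or exp_profit > best[0]:
        best = (exp_profit, u)
print(best)                    # best expected profit and commitment decision
```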
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality of service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. Subtasks are subject to unpredictable failures in many practical environments, and this makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability. Using this notion we define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K helps to trade off achievable QoS against robustness. The proposed robust framework produces optimal solutions through distributed computations on the basis of Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee sub-optimal QoS that remains feasible even in the presence of random failures.
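The probabilistic flavour of such a guarantee can be seen with independent failures: if each of n subtasks fails with probability p, the chance that more than K fail is a binomial tail, and K directly tunes the trade-off. The numbers below are illustrative; the paper's schedulability model is more detailed.

```python
# The flavour of a K-robustness guarantee: if each of n subtasks fails
# independently with probability p, the chance that more than K fail (i.e.
# that the K-failure bound is exceeded) is a binomial tail. Numbers are
# illustrative; the paper's schedulability model is more detailed.
from math import comb

def prob_more_than_k_failures(n, p, k):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

n, p = 20, 0.05
for k in range(4):
    print(f"K={k}: P(>{k} failures) = {prob_more_than_k_failures(n, p, k):.4f}")
```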
Abstract:
The foot and the ankle are small structures commonly affected by disorders, and their complex anatomy represents a significant diagnostic challenge. SPECT/CT image fusion can provide the anatomical and bone-structure information missing from functional imaging, which is particularly useful for increasing diagnostic certainty in bone pathology. However, owing to the duration of SPECT acquisition, patients' involuntary movements may lead to misalignment between the SPECT and CT images. Patient motion can be reduced using a dedicated patient support. We aimed to design an ankle- and foot-immobilizing device and measure its efficacy at improving image fusion. Methods: We enrolled 20 patients undergoing distal lower-limb SPECT/CT of the ankle and foot with and without a foot holder. The misalignment between the SPECT and CT images was computed by manually measuring 14 fiducial markers chosen among anatomical landmarks also visible on bone scintigraphy. Analysis of variance was performed for statistical analysis. Results: The absolute average difference without and with the support was 5.1 ± 5.2 mm (mean ± SD) and 3.1 ± 2.7 mm, respectively, a significant difference (p < 0.001). Conclusion: The introduction of the foot holder significantly decreases misalignment between the SPECT and CT images, which may have a clinical impact on the precise localization of foot and ankle pathology.
Abstract:
Optimization problems arise in science, engineering, economics, etc., and we need to find the best solution for each situation. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the functions involved are nonlinear and their derivatives are not known or are very difficult to calculate, such methods are scarcer. These kinds of functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which do not require derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used: they transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solving of optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
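A minimal sketch of the classic quadratic-penalty scheme described here, assuming a derivative-free inner solver (Nelder-Mead) and a simple illustrative test problem:

```python
# A minimal quadratic-penalty loop: fold the constraint violation into the
# objective and solve the resulting unconstrained problems with a
# derivative-free method (Nelder-Mead), increasing the penalty parameter
# between iterations. The test problem is illustrative.
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # objective: min (x0-1)^2 + (x1-2)^2
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

def g(x):                                   # constraint: x0 + x1 - 1 <= 0
    return x[0] + x[1] - 1

x, mu = np.array([0.0, 0.0]), 1.0
for _ in range(8):
    penalized = lambda z: f(z) + mu * max(0.0, g(z)) ** 2
    x = minimize(penalized, x, method="Nelder-Mead").x
    mu *= 10.0                              # tighten the penalty dynamically
print(x, g(x))                              # approaches the constrained optimum (0, 1)
```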
Abstract:
Penalty and barrier methods are normally used to solve constrained nonlinear optimization problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known, which means that optimization methods based on derivatives cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems; it includes penalty and barrier methods. In this work a new penalty function, based on fuzzy logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: a light penalization when the violation of the constraints is low and a heavy penalization when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two classic penalty/barrier functions are presented. From these results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
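A sketch of what a progressive, fuzzy-style penalty might look like: the violation magnitude is fuzzified into "low", "medium" and "high" membership degrees tied to increasingly severe weights, and the defuzzified output grows smoothly with the violation. The membership shapes and weights are assumptions for illustration, not the paper's rule base.

```python
# A sketch of a progressive, fuzzy-logic-style penalty: the violation is
# fuzzified into "low", "medium" and "high" membership degrees, each tied to
# an increasingly severe penalty weight, and the defuzzified output grows
# smoothly with the violation. Membership shapes and weights are assumptions
# for illustration, not the paper's rule base.
def membership(v):
    # triangular memberships for violation magnitude v >= 0
    low    = max(0.0, 1.0 - v / 1.0)
    medium = max(0.0, min(v / 1.0, 2.0 - v))
    high   = max(0.0, min((v - 1.0) / 1.0, 1.0))
    return low, medium, high

def fuzzy_penalty(v, weights=(1.0, 10.0, 100.0)):
    degrees = membership(min(v, 2.0))          # clip to the universe of discourse
    num = sum(d * w for d, w in zip(degrees, weights))
    den = sum(degrees) or 1.0
    return v * num / den                       # weighted-average defuzzification

for v in (0.1, 0.5, 1.0, 1.5, 2.0):
    print(v, round(fuzzy_penalty(v), 2))       # penalty grows faster than violation
```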
Abstract:
Direct search optimization methods are needed to solve optimization problems where the objective and/or constraint functions may be non-differentiable, non-convex, or may have analytical expressions that cannot be determined, either because of their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, can be modelled by functions with complex expressions or by noisy functions, and it is impossible or very difficult to calculate their derivatives. Direct search optimization methods use only function values and do not need derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access to it using Web Services are both presented. Remote access to the API has the advantage of always providing the latest version of the API. For users who simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods into applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.
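The kind of derivative-free algorithm such an API exposes can be sketched with a compass (coordinate) search, which polls function values along each axis and shrinks the step when no direction improves; the example is written in Python for illustration, while the API described in the work is Java.

```python
# A compass (coordinate) search: a classic direct search method that only
# samples function values, halving the step size whenever no axis direction
# improves. Illustrative of the algorithms such an API exposes.
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                cand = list(x)
                cand[i] += s
                fc = f(cand)
                if fc < fx:                  # accept the first improving poll
                    x, fx, improved = cand, fc, True
                    break
        if not improved:
            step *= 0.5                      # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(compass_search(rosenbrock, [-1.2, 1.0]))   # approaches the optimum (1, 1)
```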