937 results for ISE and ITSE optimization
Abstract:
Purpose Dasatinib is a BCR-ABL inhibitor, 325-fold more potent than imatinib against unmutated BCR-ABL in vitro. Phase II studies have demonstrated efficacy and safety with dasatinib 70 mg twice daily in chronic-phase (CP) chronic myelogenous leukemia (CML) after imatinib treatment failure. In phase I, responses occurred with once-daily administration despite only intermittent BCR-ABL inhibition. Once-daily treatment resulted in less toxicity, suggesting that toxicity results from continuous inhibition of unintended targets. Here, a dose- and schedule-optimization study is reported. Patients and Methods In this open-label phase III trial, 670 patients with imatinib-resistant or -intolerant CP-CML were randomly assigned 1:1:1:1 to four dasatinib treatment groups: 100 mg once daily, 50 mg twice daily, 140 mg once daily, or 70 mg twice daily. Results With a minimum follow-up of 6 months (median treatment duration, 8 months; range, < 1 to 15 months), marked and comparable hematologic (complete, 86% to 92%) and cytogenetic (major, 54% to 59%; complete, 41% to 45%) response rates were observed across the four groups. Time to and duration of cytogenetic response were similar, as was progression-free survival (8% to 11% of patients experienced disease progression or died). Compared with the approved 70-mg twice-daily regimen, dasatinib 100 mg once daily resulted in significantly lower rates of pleural effusion (all grades, 7% v 16%; P = .024) and grade 3 to 4 thrombocytopenia (22% v 37%; P = .004), and fewer patients required dose interruption (51% v 68%), reduction (30% v 55%), or discontinuation (16% v 23%). Conclusion Dasatinib 100 mg once daily retains the efficacy of 70 mg twice daily with less toxicity. Intermittent target inhibition with tyrosine kinase inhibitors may preserve efficacy and reduce adverse events.
Abstract:
The performance of metaheuristics is highly dependent on parameters that need to be tuned. Parameter tuning can provide greater flexibility and robustness, but it requires careful initialization. Deciding which parameter settings to use is not obvious: the values depend mainly on the problem, the instance to be solved, the search time available for solving the problem, and the required solution quality. This paper proposes a learning module for the autonomous parameterization of metaheuristics, integrated into a Multi-Agent System for the resolution of dynamic scheduling problems. The proposed learning module is inspired by the Autonomic Computing concept of Self-Optimization, which states that systems must continuously and proactively improve their performance. The learning is implemented with Case-based Reasoning, which uses previous similar data to solve new cases, under the assumption that similar cases have similar solutions. After a literature review of the topics involved, both the AutoDynAgents system and the Self-Optimization module are described. Finally, a computational study is presented in which the proposed module is evaluated, the results obtained are compared with previous ones, conclusions are drawn, and future work is outlined. This proposal is expected to be a significant contribution to the self-parameterization of metaheuristics and to the resolution of scheduling problems in dynamic environments.
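As a rough illustration of the Case-based Reasoning cycle described above, the sketch below retrieves the metaheuristic parameters of the most similar stored case using nearest-neighbour retrieval. The feature set, case-base contents and parameter names are hypothetical; the AutoDynAgents module itself is not reproduced here.

```python
import numpy as np

# Each stored case pairs problem features (e.g., number of jobs, number of
# machines, due-date tightness) with the metaheuristic parameters that
# performed well on that instance. All values are hypothetical.
case_base = [
    {"features": np.array([20.0, 5.0, 0.3]),
     "params": {"pop_size": 60, "mutation_rate": 0.05}},
    {"features": np.array([50.0, 10.0, 0.7]),
     "params": {"pop_size": 120, "mutation_rate": 0.02}},
]

def retrieve(features, cases):
    """Retrieve step of the CBR cycle: return the parameters of the most
    similar stored case (1-nearest-neighbour on Euclidean distance)."""
    distances = [np.linalg.norm(features - c["features"]) for c in cases]
    return cases[int(np.argmin(distances))]["params"]

# Reuse step: a new scheduling instance adopts the parameters of its closest case.
new_instance = np.array([45.0, 8.0, 0.6])
print(retrieve(new_instance, case_base))
```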
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Over the last two decades, the research and development of legged locomotion robots has grown steadily. Legged systems present major advantages compared with ‘traditional’ vehicles, because they allow locomotion in terrain inaccessible to vehicles with wheels and tracks. However, the robustness of legged robots, and especially their energy consumption, among other aspects, still lags behind mechanisms that use wheels and tracks. Therefore, at the present state of development, several aspects need to be improved and optimized. With these ideas in mind, this paper reviews the literature on different methods adopted for the optimization of the structure and locomotion gaits of walking robots. Among the distinct strategies often used for these tasks are approaches such as the mimicking of biological animals, the use of evolutionary schemes to find optimal parameters and structures, the adoption of sound mechanical design rules, and the optimization of power-based indexes.
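As one example of the power-based indexes mentioned above, the following sketch computes the specific resistance (cost of transport) from sampled joint torques and velocities. The choice of index and the signal layout are assumptions for illustration, not the formulation used by any specific reviewed work.

```python
import numpy as np

def specific_resistance(torques, velocities, mass, speed, g=9.81):
    """Specific resistance (cost of transport): mean absolute mechanical power
    over all actuated joints divided by m*g*v.
    torques, velocities: arrays of shape (samples, joints)."""
    power = np.sum(np.abs(torques * velocities), axis=1)
    return np.mean(power) / (mass * g * speed)

# Hypothetical example: a 10 kg hexapod with 12 joints walking at 0.2 m/s.
rng = np.random.default_rng(1)
tau = rng.normal(0.0, 2.0, size=(500, 12))     # joint torques [N*m]
omega = rng.normal(0.0, 1.0, size=(500, 12))   # joint velocities [rad/s]
print(specific_resistance(tau, omega, mass=10.0, speed=0.2))
```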
Abstract:
Search optimization methods are needed to solve optimization problems in which the objective function and/or constraint functions may be non-differentiable or non-convex, or for which analytical expressions cannot be determined, either due to their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, can be modelled by functions with complex expressions or by noisy functions, and it may be impossible or very difficult to calculate their derivatives. Direct search optimization methods only use function values and do not need derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access using Web Services are both presented. Remote access to the API has the advantage of always providing the latest version of the API. For users who simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods into applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.
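To illustrate the kind of derivative-free method such an API could include, the sketch below implements a simple coordinate (compass) search that relies only on function values. It is a generic textbook-style example written in Python for brevity, not the actual Java API.

```python
import numpy as np

def coordinate_search(f, x0, step=0.5, tol=1e-6, max_iter=1000):
    """Derivative-free coordinate (compass) search: probe +/- step along
    each coordinate, keep improving points, and shrink the step when no
    direction improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5            # shrink the pattern
            if step < tol:
                break
    return x, fx

# Example: unconstrained Rosenbrock minimisation without derivatives.
rosen = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(coordinate_search(rosen, [-1.2, 1.0]))
```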
Abstract:
Nonlinear optimization problems are common in many engineering fields. Due to their characteristics, the objective function of some problems may not be differentiable, or its derivatives may have complex expressions. There are even cases where an analytical expression of the objective function cannot be determined, either due to its complexity or its cost (monetary, computational, time, ...). In these cases, nonlinear optimization methods must be used. An API including several methods and algorithms to solve constrained and unconstrained optimization problems was implemented. This API can be accessed not only in the traditional way, by installing it on the developer's and/or user's computer, but also remotely using Web Services. As long as there is a network connection to the server where the API is installed, applications always access the latest API version. A Web-based application using the proposed API was also developed. This application is intended for users who do not want to integrate the methods into applications and simply want a tool to solve nonlinear optimization problems.
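A minimal sketch of the remote-access idea follows: the client posts a problem description to the server and reads back the solution, so it always runs against the latest deployed version. The endpoint URL, payload fields and response format are hypothetical, and the HTTP/JSON style is only one possible realisation of a Web Services interface.

```python
import requests  # third-party HTTP client, assumed available

# Hypothetical endpoint and payload: the client never bundles the solver,
# it only describes the problem and reads back the computed solution.
payload = {
    "objective": "(1 - x1)^2 + 100*(x2 - x1^2)^2",  # parsed on the server side
    "variables": ["x1", "x2"],
    "initial_point": [-1.2, 1.0],
    "method": "coordinate_search",
}
response = requests.post("http://solver.example.org/api/optimize", json=payload)
print(response.json())  # e.g. {"x": [...], "f": ...}
```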
Abstract:
Adhesive bonding as a joining or repair method has wide application in many industries. Repairs with bonded patches are often carried out to re-establish the stiffness at critical regions or at spots of corrosion and/or fatigue cracks. Single- and double-strap repairs (SS and DS, respectively) are a viable repair option. In SS repairs, a patch is adhesively bonded on one of the structure's faces. SS repairs are easy to execute, but the load eccentricity leads to peak peel stresses at the overlap edges. DS repairs involve the use of two patches, one on each face of the structure. These are more efficient than SS repairs, due to the doubling of the bonding area and the suppression of the transverse deflection of the adherends. Shear stresses also become more uniform as a result of smaller differential straining. The experimental and Finite Element (FE) study presented here for strength prediction and design optimization of bonded repairs includes SS and DS solutions with different values of overlap length (LO): 10, 20 and 30 mm. The failure strengths of the SS and DS repairs were compared with FE results obtained with the Abaqus® FE software. A Cohesive Zone Model (CZM) with a triangular shape in pure tensile and shear modes, including the mixed-mode possibility for crack growth, was used to simulate fracture of the adhesive layer. A good agreement was found between the experiments and the FE simulations on the failure modes, elastic stiffness and strength of the repairs, showing the effectiveness and applicability of the proposed FE technique in predicting the strength of bonded repairs. Furthermore, some optimization principles were proposed for repairing structures with adhesively-bonded patches, which will allow repair designers to effectively design bonded repairs.
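For reference, a triangular (bilinear) traction-separation law of the kind used in the CZM can be written as a short function: a linear elastic branch up to the cohesive strength, followed by linear softening so that the area under the curve equals the fracture toughness. The numerical values below are illustrative placeholders, not the properties measured in the study.

```python
def triangular_traction(delta, K=1e6, t_max=20.0, G_c=0.5):
    """Triangular (bilinear) cohesive law for a single mode.
    K: initial (penalty) stiffness, t_max: cohesive strength,
    G_c: fracture toughness; all values are illustrative only."""
    delta_0 = t_max / K           # separation at damage onset
    delta_f = 2.0 * G_c / t_max   # separation at complete failure (area = G_c)
    if delta <= delta_0:
        return K * delta                                        # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta_0)  # linear softening
    return 0.0                                                  # fully failed

# Traction at a few separations along the law (units consistent with the inputs).
for d in (1e-5, 1e-4, 5e-2, 6e-2):
    print(d, triangular_traction(d))
```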
Abstract:
The trajectory planning of redundant robots is an important area of research and efficient optimization algorithms have been investigated in the last years. This paper presents a new technique that combines the closed-loop pseudoinverse method with genetic algorithms. In this case the trajectory planning is formulated as an optimization problem with constraints.
Abstract:
The trajectory planning of redundant robots is an important area of research, and efficient optimization algorithms are needed. The pseudoinverse control is not repeatable, causing a drift in joint space that is undesirable for physical control. This paper presents a new technique that combines the closed-loop pseudoinverse method with genetic algorithms, leading to an optimization criterion for repeatable control of redundant manipulators and avoiding the joint angle drift problem. Computer simulations based on redundant and hyper-redundant planar manipulators show that, when the end-effector traces a closed path in the workspace, the robot returns to its initial configuration. The solution is repeatable for a workspace with and without obstacles in the sense that, after executing several cycles, the initial and final states of the manipulator are very close.
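A minimal sketch of the closed-loop pseudoinverse scheme (without the genetic algorithm layer) for an assumed 3R planar redundant manipulator is given below; the link lengths, gain and circular reference path are illustrative assumptions.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.6])  # assumed link lengths of a 3R planar arm

def fk(q):
    """Forward kinematics: end-effector position for joint angles q."""
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(q):
    """2x3 Jacobian of the planar arm."""
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

# Closed-loop pseudoinverse: q_dot = J^+ (xd_dot + K * (xd - x)).
q, K, dt = np.array([0.3, 0.4, 0.5]), 10.0, 0.01
for k in range(1000):                 # ~10 cycles of a 1 s circular path
    t = k * dt
    xd = np.array([1.5 + 0.3 * np.cos(2 * np.pi * t),
                   0.5 + 0.3 * np.sin(2 * np.pi * t)])
    xd_dot = np.array([-0.6 * np.pi * np.sin(2 * np.pi * t),
                        0.6 * np.pi * np.cos(2 * np.pi * t)])
    e = xd - fk(q)
    q_dot = np.linalg.pinv(jacobian(q)) @ (xd_dot + K * e)
    q = q + q_dot * dt                # Euler integration of the joint motion
print(q)                              # joint configuration after the cycles
```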
Abstract:
Fuzzy logic controllers (FLCs) are intelligent systems, based on heuristic knowledge, that have been widely applied in numerous areas of everyday life. They can be used to describe a linear or nonlinear system and are suitable when a real system is not known or its model is too difficult to obtain. FLCs provide a formal methodology for representing, manipulating and implementing human heuristic knowledge on how to control a system. These controllers can be seen as artificial decision makers that operate in a closed-loop system, in real time. The main aim of this work was to develop a single optimal fuzzy controller, easily adaptable to a wide range of systems, from simple to complex and from linear to nonlinear, and able to control all these systems. Due to their efficiency in searching for and finding optimal solutions to highly complex problems, genetic algorithms (GAs) were used to tune the FLC by finding the parameters that yield the best responses. The work was performed using the MATLAB/SIMULINK software, a very useful tool that provides an easy way to test and analyse the FLC, the PID and the GAs in the same environment. A Fuzzy PID controller (FL-PID), namely the Fuzzy PD+I, was therefore proposed. The controller was compared with the classical PID controller tuned with the heuristic Ziegler-Nichols tuning method, the optimal Zhuang-Atherton tuning method and the GA method itself. The IAE, ISE, ITAE and ITSE criteria, used as the GA fitness functions, were applied to compare the performance of the controllers used in this work. Overall, and for most systems, the FL-PID results tuned with GAs were very satisfactory. Moreover, in some cases the results were substantially better than those of the other PID controllers. The best system responses were obtained with the IAE and ITAE criteria used to tune the FL-PID and PID controllers.
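The four integral performance criteria used as GA fitness functions have standard definitions (IAE = ∫|e|dt, ISE = ∫e²dt, ITAE = ∫t·|e|dt, ITSE = ∫t·e²dt). A short sketch that evaluates them for a sampled error signal is shown below; the first-order error trace is a hypothetical example, not one of the systems studied.

```python
import numpy as np

def performance_indices(t, e):
    """Integral performance criteria for a uniformly sampled error signal e(t):
    IAE = int |e| dt, ISE = int e^2 dt, ITAE = int t*|e| dt, ITSE = int t*e^2 dt
    (simple rectangular integration)."""
    dt = t[1] - t[0]
    iae = np.sum(np.abs(e)) * dt
    ise = np.sum(e ** 2) * dt
    itae = np.sum(t * np.abs(e)) * dt
    itse = np.sum(t * e ** 2) * dt
    return iae, ise, itae, itse

# Hypothetical error trace: setpoint minus output of a first-order lag.
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-t / 1.5)
print(performance_indices(t, e))
```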
Abstract:
European Journal of Operational Research, No. 73 (1994)
Optimization of fMRI Processing Parameters for Simultaneous Acquisition of EEG/fMRI in Focal Epilepsy
Abstract:
In the context of focal epilepsy, the simultaneous combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) holds great promise as a technique by which the hemodynamic correlates of interictal spikes detected on scalp EEG can be identified. Traditional EEG recordings have not been able to overcome the difficulty of correlating ictal clinical symptoms with an onset in particular areas of the lobes, which creates the need to map the epileptogenic cortical regions more precisely. fMRI, on the other hand, has suggested localizations more consistent with the detected ictal clinical manifestations. This study was developed to improve knowledge of how the parameters involved in processing the physical and mathematical data produced by the EEG/fMRI technique influence the final results. Accuracy was evaluated by comparing the BOLD results with: the high-resolution EEG maps; the malformative lesions detected in the T1-weighted MR images; and the anatomical localizations of the diagnosed symptomatology of each studied patient. The optimization of the set of parameters used will provide an important contribution to the diagnosis of epileptogenic foci in patients included in an epilepsy surgery evaluation program. The results obtained allowed us to conclude that, by associating the BOLD effect with interictal spikes, the epileptogenic areas are mapped to localizations different from those obtained by the EEG maps representing the electrical potential distribution across the scalp; and that there is an important and solid link between the variation of particular parameters (manipulated during the fMRI data processing) and the optimization of the final results, among which smoothing, deleted volumes, the HRF (used to convolve with the activation design), and the shape of the Gamma function can certainly be emphasized.
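As an illustration of one of the processing steps discussed (convolving the activation design with a gamma-shaped HRF), the sketch below builds a BOLD predictor from hypothetical interictal-spike onsets. The HRF shape and scale, the sampling step and the spike times are assumptions, not the parameters optimized in the study.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.5                                   # sampling step in seconds (assumed)
t = np.arange(0.0, 30.0, dt)
hrf = gamma.pdf(t, a=6, scale=1.0)         # gamma-shaped HRF (illustrative shape)
hrf /= hrf.sum()                           # normalise to unit area

spikes = np.zeros(600)                     # 300 s of scan time at dt = 0.5 s
spikes[[40, 180, 355]] = 1.0               # hypothetical interictal-spike onsets

# BOLD predictor for the GLM: spike train convolved with the HRF.
regressor = np.convolve(spikes, hrf)[:len(spikes)]
print(regressor.max())
```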
Abstract:
Dissertation submitted to obtain the Master's degree in Biotechnology
Abstract:
Optimization is a very important field concerned with obtaining the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem, whereas local search techniques, which are used more often, try to find a locally optimal solution within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms that cover the solution sets of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we look for a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques, mainly using the steepest descent technique with different characteristics, such as penalty calculation and step length. We implement two main local search algorithms. We use “Procure”, a constraint reasoning and global optimization framework, to implement our techniques, and we then present our results over a set of benchmarks.
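A minimal sketch of the local search step is shown below: steepest descent applied inside a single interval box, with iterates projected back onto the box bounds. The objective, step length and box are illustrative; the actual implementation in the “Procure” framework is not reproduced here.

```python
import numpy as np

def box_steepest_descent(grad, x0, lower, upper, alpha=0.05, iters=200):
    """Steepest descent restricted to an interval box [lower, upper]
    (e.g., one box produced by the branch-and-prune covering)."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - alpha * grad(x), lower, upper)  # project back into the box
    return x

# Example: minimise (x-1.5)^2 + (y+0.5)^2 inside the box [0, 2] x [0, 2].
grad = lambda v: np.array([2 * (v[0] - 1.5), 2 * (v[1] + 0.5)])
print(box_steepest_descent(grad, [1.0, 1.0], lower=0.0, upper=2.0))
```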
Abstract:
A potentially renewable and sustainable source of energy is the chemical energy associated with the solvation of salts. Mixing two aqueous streams with different saline concentrations is spontaneous and releases energy. The global theoretically obtainable power from salinity gradient energy due to the discharge of the World's rivers into the oceans has been estimated to be within the range of 1.4-2.6 TW. Reverse electrodialysis (RED) is one of the emerging, membrane-based technologies for harvesting salinity gradient energy. A common RED stack is composed of alternately arranged cation- and anion-exchange membranes, stacked between two electrodes. The compartments between the membranes are alternately fed with concentrated (e.g., sea water) and dilute (e.g., river water) saline solutions. Migration of the respective counter-ions through the membranes leads to an ionic current between the electrodes, where an appropriate redox pair converts the chemical salinity gradient energy into electrical energy. Given the importance of new sources of energy for power generation, the present study aims at better understanding and solving current challenges associated with RED stack design, fluid dynamics, ionic mass transfer and long-term RED stack performance with natural saline solutions as feedwaters. Chronopotentiometry was used to determine the diffusion boundary layer (DBL) thickness from diffusion relaxation data, and the flow entrance effects on mass transfer were found to enable an increase in power generation in RED stacks. Increasing the linear flow velocity also leads to a decrease of the DBL thickness, but at the cost of a higher pressure drop. The pressure drop inside RED stacks was successfully simulated by the developed mathematical model, which includes the contribution of several pressure drops that had not been considered until now. The effect of each pressure drop on the RED stack performance was identified and rationalized, and guidelines for the planning and/or optimization of RED stacks were derived. The design of new profiled membranes, with a chevron corrugation structure, was proposed using computational fluid dynamics (CFD) modeling. The performance of the suggested corrugation geometry was compared with already existing ones, as well as with the use of conductive and non-conductive spacers. According to the estimations, the use of chevron structures grants the highest net power density values, offering the best compromise between the mass transfer coefficient and the pressure drop. Finally, long-term experiments with natural waters were performed, during which fouling occurred. For the first time, 2D fluorescence spectroscopy was used to monitor RED stack performance, with a dedicated focus on following fouling on the ion-exchange membrane surfaces. To extract relevant information from the fluorescence spectra, parallel factor analysis (PARAFAC) was performed. The information obtained was then used to predict net power density, stack electric resistance and pressure drop with multivariate statistical models based on projection to latent structures (PLS) modeling. The use in such models of 2D fluorescence data, containing hidden information about fouling on the membrane surfaces that is extractable by PARAFAC, considerably improved the fit of the models to the experimental data.
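As a rough illustration of the final modeling step, the sketch below fits a PLS regression that maps fluorescence-derived features to net power density using scikit-learn. The feature matrix and response are synthetic placeholders, and the PARAFAC decomposition of the 2D fluorescence spectra is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: hypothetical fluorescence-derived component scores (e.g., PARAFAC scores)
# plus operating variables; y: measured net power density. Synthetic data only.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = X @ np.array([0.8, -0.3, 0.1, 0.0, 0.5, -0.2]) + 0.05 * rng.normal(size=40)

pls = PLSRegression(n_components=3)   # projection to latent structures
pls.fit(X, y)
y_hat = pls.predict(X).ravel()        # predicted net power density
print(np.corrcoef(y, y_hat)[0, 1])    # fit quality on the synthetic data
```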