928 results for Unconstrained and convex optimization
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has the largest index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
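In the linear special case the priority indices reduce to Smith's rule: serve classes in decreasing order of holding-cost rate per unit of expected processing time. A minimal sketch of that special case only (the extended-class indices for convex costs described in the abstract are not reproduced here), with hypothetical class data:

```python
# Smith's rule (linear holding costs): rank classes by cost rate / E[processing time].
# The class data below are hypothetical, for illustration only.
classes = {
    "A": {"cost_rate": 4.0, "mean_proc_time": 2.0},
    "B": {"cost_rate": 3.0, "mean_proc_time": 0.5},
    "C": {"cost_rate": 1.0, "mean_proc_time": 1.0},
}

def smith_index(c):
    # Smith's index: c_k / E[S_k]
    return c["cost_rate"] / c["mean_proc_time"]

priority_order = sorted(classes, key=lambda k: smith_index(classes[k]), reverse=True)
print(priority_order)  # ['B', 'A', 'C'] -- highest index served first
```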
Abstract:
We characterize the Walrasian allocations correspondence, in classes of exchange economies with smooth and convex preferences, by means of consistency requirements and other axioms. We present three characterization results, all of which require consistency, converse consistency and standard axioms. Two characterizations hold also on domains with a finite number of potential agents: one of them requires envy freeness (with respect to trades) and the other requires core selection. A third characterization, which requires core selection, applies only to a variable-number-of-agents domain, but is valid also when the domain includes only a small variety of preferences.
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems, where multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
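The adaptive-greedy idea can be pictured as peeling classes off one at a time, each receiving an index computed from the classes not yet ranked. The skeleton below is a heavily hedged sketch of that generic pattern only: the model-specific quantity `marginal_rate` (e.g. a reduced cost per unit of reduced work) and the exact selection criterion are placeholders, not the algorithm defined in the paper.

```python
# Hedged skeleton of a Klimov-style adaptive-greedy index computation.
# `marginal_rate(k, remaining)` is a hypothetical callback supplying the
# model-specific ratio for class k given the set of not-yet-ranked classes.
def adaptive_greedy(classes, marginal_rate):
    remaining = set(classes)
    indices, order = {}, []
    while remaining:
        # select the class attaining the extremal current rate among those left
        k_star = max(remaining, key=lambda k: marginal_rate(k, frozenset(remaining)))
        indices[k_star] = marginal_rate(k_star, frozenset(remaining))
        order.append(k_star)
        remaining.remove(k_star)
    return order, indices  # candidate priority order and class-ranking indices
```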
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies and, more recently, non-local means. Although TV energies are quite attractive because of their ability in edge preservation, only standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
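The two quoted rates are two views of the same bound: a 1/n² decay of the objective gap means accuracy ε is reached in on the order of 1/√ε iterations, whereas a 1/n decay needs on the order of 1/ε iterations.

```latex
% Relation between per-iteration rate and iteration complexity
F(x_n)-F^\star \le \frac{C}{n^2}
  \;\Longrightarrow\; n \ge \sqrt{C/\varepsilon}\ \text{suffices}
  \quad (O(1/\sqrt{\varepsilon})\ \text{iterations}),
\qquad
F(x_n)-F^\star \le \frac{C}{n}
  \;\Longrightarrow\; n \ge C/\varepsilon
  \quad (O(1/\varepsilon)\ \text{iterations}).
```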
Abstract:
3 Summary
3.1 English
The pharmaceutical industry has been facing several challenges during the last years, and the optimization of its drug discovery pipeline is believed to be the only viable solution. High-throughput techniques do participate actively in this optimization, especially when complemented by computational approaches aiming at rationalizing the enormous amount of information that they can produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both heavily rely on the prediction of the molecular interaction (docking) occurring between drug-like molecules and a therapeutically relevant target. Several software packages are available to this end, but despite the very promising picture drawn in most benchmarks, they still hold several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from being solved, and there is now a need for methods able to identify binding modes with a high accuracy, which is essential to reliably compute the binding free energy of the ligand. This quantity is directly linked to its affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs. In this thesis, a new docking software aiming at this goal is presented: EADock. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of the diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and conversely to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic, as correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure. EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase, and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to understand the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed, and led to the successful design of new peptidic ligands for the α5β1 integrin and for the human PPARα. In both cases, the designed peptides presented activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively.
3.2 French
The recent difficulties of the pharmaceutical industry seem to be solvable only through the optimization of its drug development process. This increasingly involves so-called high-throughput techniques, which are particularly effective when coupled with the computational tools needed to manage the mass of data they produce. In silico approaches such as virtual screening or the rational design of new molecules are now used routinely. Both rely on the ability to predict the details of the molecular interaction between a drug-like molecule and a target protein of therapeutic interest. Benchmarks of the software addressing this prediction are flattering, but several problems remain. The recent literature tends to question their reliability, pointing to an emerging need for more accurate approaches to the binding mode. This accuracy is essential for computing the binding free energy, which is directly related to the affinity of the drug candidate for the target protein and indirectly related to its biological activity. Accurate prediction is of particular importance for the discovery and optimization of new active molecules. This thesis presents a new program, EADock, built around such accuracy. This hybrid evolutionary algorithm uses two selection pressures, combined with sophisticated diversity management. EADock relies on CHARMM for energy calculations and the handling of atomic coordinates. It was validated on 37 crystallized protein-ligand complexes involving 11 different proteins. The search space was extended to a sphere of 15 Å radius around the center of mass of the crystallized ligand, and, unlike the usual benchmarks, the algorithm started from optimized solutions with an RMSD of up to 10 Å from the crystal structure. This validation highlighted the efficiency of our search heuristic, as binding modes with an RMSD below 2 Å from the crystal structure were ranked first for 68% of the complexes. When the five best solutions are taken into account, the success rate climbs to 78%, and to 92% when the whole last generation is considered. Most prediction failures can be attributed to the presence of crystal contacts. EADock has since been used to understand the molecular mechanisms involved in the regulation of the Na,K-ATPase and in the activation of the peroxisome proliferator-activated receptor α (PPARα). It also made it possible to describe the interaction of commonly encountered pollutants with PPARγ, as well as the influence of the biotransformation of Imatinib (an anticancer drug) on its binding to the Bcr-Abl kinase. An approach based on predicting the interactions of molecular fragments with a target protein is also proposed. It led to the discovery of new peptidic ligands of PPARα and of the α5β1 integrin. In both cases, the activity of these new peptides is comparable to that of well-established ligands, such as Wy14,643 for the former and the anticancer drug Cilengitide for the latter.
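Success in this benchmark is judged by the RMSD of a predicted pose to the crystal pose (below 2 Å counts as correct). A minimal illustration of that metric only (not EADock code), assuming the two poses share the same atom ordering and are already in the same frame:

```python
import numpy as np

def rmsd(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) coordinate arrays
    with identical atom ordering (no superposition is performed here)."""
    diff = pose_a - pose_b
    return float(np.sqrt((diff * diff).sum(axis=1).mean()))

# Hypothetical 3-atom example: each atom displaced by 1 A along x -> RMSD = 1.0
a = np.zeros((3, 3))
b = a.copy(); b[:, 0] += 1.0
print(rmsd(a, b))  # 1.0
```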
Abstract:
The Mechanistic-Empirical Pavement Design Guide (MEPDG) was developed under National Cooperative Highway Research Program (NCHRP) Project 1-37A as a novel mechanistic-empirical procedure for the analysis and design of pavements. The MEPDG was subsequently supported by AASHTO’s DARWin-ME and most recently marketed as AASHTOWare Pavement ME Design software as of February 2013. Although the core design process and computational engine have remained the same over the years, some enhancements to the pavement performance prediction models have been implemented along with other documented changes as the MEPDG transitioned to AASHTOWare Pavement ME Design software. Preliminary studies were carried out to determine possible differences between AASHTOWare Pavement ME Design, MEPDG (version 1.1), and DARWin-ME (version 1.1) performance predictions for new jointed plain concrete pavement (JPCP), new hot mix asphalt (HMA), and HMA over JPCP systems. Differences were indeed observed between the pavement performance predictions produced by these different software versions. Further investigation was needed to verify these differences and to evaluate whether identified local calibration factors from the latest MEPDG (version 1.1) were acceptable for use with the latest version (version 2.1.24) of AASHTOWare Pavement ME Design at the time this research was conducted. Therefore, the primary objective of this research was to examine AASHTOWare Pavement ME Design performance predictions using previously identified MEPDG calibration factors (through InTrans Project 11-401) and, if needed, refine the local calibration coefficients of AASHTOWare Pavement ME Design pavement performance predictions for Iowa pavement systems using linear and nonlinear optimization procedures. A total of 130 representative sections across Iowa consisting of JPCP, new HMA, and HMA over JPCP sections were used. The local calibration results of AASHTOWare Pavement ME Design are presented and compared with national and locally calibrated MEPDG models.
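Local calibration of a performance model generally amounts to choosing calibration coefficients that minimize the discrepancy between measured and predicted distress. A hedged least-squares sketch with hypothetical data and a hypothetical two-coefficient transfer function (not the AASHTOWare Pavement ME Design models themselves):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured distress and hypothetical uncalibrated model output.
measured = np.array([0.10, 0.22, 0.35, 0.48, 0.60])
predicted_uncal = np.array([0.08, 0.18, 0.30, 0.45, 0.58])

def residuals(coeffs):
    c1, c2 = coeffs
    # toy calibration transfer function: scaled power law on the uncalibrated prediction
    calibrated = c1 * predicted_uncal ** c2
    return calibrated - measured

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)  # locally calibrated coefficients (c1, c2)
```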
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm most efficient in dealing with a particular representation may be less efficient in dealing with other representations. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with possible variables including the centers, widths, and weights of the basis functions, and with the control parameters either kept fixed or adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the differential evolution algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using all-fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
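For reference, a minimal classic DE/rand/1/bin iteration with fixed F and CR and a hypothetical sphere objective is sketched below; the thesis replaces such fixed settings with fuzzy-adapted ones, which this sketch does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):          # hypothetical test objective
    return float(np.sum(x * x))

dim, pop_size, F, CR = 5, 20, 0.8, 0.9
pop = rng.uniform(-5, 5, (pop_size, dim))
fitness = np.array([sphere(ind) for ind in pop])

for _ in range(200):                       # generations
    for i in range(pop_size):
        # DE/rand/1 mutation: three distinct random individuals, none equal to i
        r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover, forcing at least one gene from the mutant
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        # greedy selection
        f_trial = sphere(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial

print(fitness.min())  # best objective value found
```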
Establishing intercompany relationships: Motives and methods for successful collaborative engagement
Abstract:
This study explores the early phases of intercompany relationship building, which is a very important topic for purchasing and business development practitioners as well as for companies' upper management. There is a lot of evidence that proper engagement with markets increases a company's potential for achieving business success. Taking full advantage of market possibilities requires, however, a holistic view of managing the related decision-making chain. Most of the literature, as well as companies' business processes, lacks this holism. Typically they observe the process from the perspective of individual stages and thus lead to discontinuity and sub-optimization. This study contains a comprehensive introduction to and evaluation of the literature related to the various steps of the decision-making process. It is studied from a holistic perspective of determining a company's vertical integration position within its demand/supply network context; translating the vertical integration objectives into feasible strategies and objectives; and operationalizing the decisions made through engagement with collaborative intercompany relationships. The empirical part of the research has been conducted in two sections. First, the phenomenon of intercompany engagement is studied using two complementary case studies. Secondly, a survey has been conducted among the purchasing and business development managers of several electronics manufacturing companies to analyze the processes, decision-making criteria and success factors of engagement for collaboration. The aim has been to identify the reasons why companies and their management act the way they do. As a combination of theoretical and empirical research, an analysis has been produced of what would be an ideal way of engaging with markets. Based on the respective findings, the study concludes by proposing a holistic framework for successful engagement. The evidence presented throughout the study demonstrates clear gaps, discontinuities and limitations in both current research and practical purchasing decision-making chains. The most significant discontinuity is the identified disconnection between the supplier selection process and related criteria, on the one hand, and the relationship success factors on the other.
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has been quite attracted by Total Variation energies because of their ability to preserve edges, but only standard explicit steepest gradient techniques have been applied for optimization. In a preliminary work, it has been shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. Firstly, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Secondly, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms to residual registration errors and we also present a novel strategy for automatically selecting the weight of the regularization relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
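A typical variational formulation of the SR problem described here (hedged: the exact operators and weighting used by the authors may differ) combines a data-fidelity term over the LR slices with a TV prior whose weight λ is the quantity selected automatically in this work:

```latex
% x: HR volume to recover; y_k: k-th LR image; M_k: motion/registration,
% B_k: slice blur (PSF), D_k: downsampling; lambda: regularization weight.
\hat{x} \;=\; \arg\min_{x}\; \sum_{k} \tfrac{1}{2}\,\| D_k B_k M_k\, x - y_k \|_2^2
\;+\; \lambda \,\mathrm{TV}(x),
\qquad
\mathrm{TV}(x) \;=\; \sum_{i} \|\nabla x_i\|_2 .
```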
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3] and, more recently, non-local means [4]. Although TV energies are quite attractive because of their ability in edge preservation, only standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
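The acceleration claimed here is of the Nesterov/FISTA type for composite convex problems. Below is a generic, minimal FISTA sketch for min f(x) + g(x), with an ℓ1 proximal step standing in for the more involved TV proximal operator used in the paper; it illustrates the O(1/n²) scheme, not the authors' implementation.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=200):
    """Generic FISTA: minimizes f(x) + g(x) with f smooth (gradient `grad_f`,
    step <= 1/L) and g having an easy proximal operator `prox_g(v, step)`."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy problem: least squares + l1 (soft-thresholding prox), hypothetical data.
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((30, 10)), rng.standard_normal(30), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_l1 = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
x_hat = fista(grad_f, prox_l1, np.zeros(10), step)
```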
A new approach to segmentation based on fusing circumscribed contours, region growing and clustering
Abstract:
One of the major problems in machine vision is the segmentation of images of natural scenes. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The main contours of the scene are detected and used to guide the subsequent region growing process. The algorithm places a number of seeds at both sides of a contour, allowing a set of concurrent growing processes to be started. A prior analysis of the seeds makes it possible to adjust the homogeneity criterion to the regions' characteristics. A new homogeneity criterion based on clustering analysis and convex hull construction is proposed.
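To give a flavor of the seeded growing step, here is a hedged minimal sketch using a plain intensity-difference homogeneity test on a hypothetical grayscale image; the clustering/convex-hull criterion proposed in the paper is not reproduced.

```python
from collections import deque
import numpy as np

def grow_region(image, seed, tol):
    """4-connected region growing from `seed`: a pixel joins the region if its
    intensity differs from the running region mean by at most `tol`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Hypothetical example: a bright square on a dark background.
img = np.zeros((20, 20)); img[5:15, 5:15] = 200.0
print(grow_region(img, (10, 10), tol=20).sum())  # 100 pixels recovered
```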
Abstract:
The goal of this Master's thesis is to develop and analyze an optimization method for finding the geometry of classical horizontal-axis wind turbine blades based on a set of criteria. The thesis develops a technique that allows the designer to determine the weight of factors such as the power coefficient, the sound pressure level and the cost function in the overall process of blade shape optimization. The optimization technique applies the desirability function, which had not previously been used for this kind of technical problem, and in this sense the work can claim originality. To make the analysis and optimization process more convenient, a software application was developed.
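Composite desirability optimization typically follows the Derringer–Suich form (hedged: the thesis may use a variant): each response, e.g. power coefficient (larger is better) or sound pressure level and cost (smaller is better), is mapped to a desirability in [0, 1] and the weighted geometric mean is maximized.

```latex
% Larger-is-better response y (lower bound L, target T, shape parameter r):
d(y) = \begin{cases} 0, & y < L \\[2pt]
\left(\dfrac{y-L}{T-L}\right)^{r}, & L \le y \le T \\[6pt]
1, & y > T \end{cases}
\qquad
% Overall desirability to maximize, with per-criterion weights w_i:
D = \left(\prod_{i=1}^{m} d_i^{\,w_i}\right)^{1/\sum_i w_i}.
```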
Abstract:
Cutting of thick section stainless steel and mild steel, and medium section aluminium using the high power ytterbium fibre laser has been experimentally investigated in this study. Theoretical models of the laser power requirement for cutting of a metal workpiece and the melt removal rate were also developed. The calculated laser power requirement was correlated to the laser power used for the cutting of a 10 mm stainless steel workpiece and a 15 mm mild steel workpiece using the ytterbium fibre laser and the CO2 laser. Nitrogen assist gas was used for cutting of stainless steel and oxygen was used for mild steel cutting. It was found that the incident laser power required for cutting at a given cutting speed was lower for fibre laser cutting than for CO2 laser cutting, indicating a higher absorptivity of the fibre laser beam by the workpiece and higher melting efficiency for the fibre laser beam than for the CO2 laser beam. The difficulty in achieving an efficient melt removal during high speed cutting of the 15 mm mild steel workpiece with oxygen assist gas using the ytterbium fibre laser can be attributed to the high melting efficiency of the ytterbium fibre laser. The calculated melt flow velocity and melt film thickness correlated well with the location of the boundary layer separation point on the 10 mm stainless steel cut edges. An increase in the melt film thickness caused by deceleration of the melt particles in the boundary layer by the viscous shear forces results in the flow separation. The melt flow velocity increases with an increase in assist gas pressure and cut kerf width, resulting in a reduction in the melt film thickness, and the boundary layer separation point moves closer to the bottom cut edge. The cut edge quality was examined by visual inspection of the cut samples and measurement of the cut kerf width, boundary layer separation point, cut edge squareness (perpendicularity) deviation, and cut edge surface roughness as output quality factors. Different regions of cut edge quality in 10 mm stainless steel and 4 mm aluminium workpieces were defined for different combinations of cutting speed and laser power. Optimization of processing parameters for a high cut edge quality in 10 mm stainless steel was demonstrated.
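A common first-order energy balance behind such a power-requirement model (hedged: not necessarily the exact model developed in this thesis; it neglects conduction losses and any exothermic contribution of the oxygen reaction) equates the absorbed beam power to the rate at which kerf material must be heated and melted. The finding that the fibre laser needs less incident power at a given speed then corresponds to a larger coupling efficiency η.

```latex
% t: sheet thickness, w: kerf width, v: cutting speed, rho: density,
% c_p: specific heat, T_m - T_0: heating to the melting point,
% L_m: latent heat of fusion, eta: absorptivity (beam coupling efficiency).
\eta\, P_{\mathrm{laser}} \;\approx\; \rho\, t\, w\, v\,
\bigl[c_p\,(T_m - T_0) + L_m\bigr].
```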
Abstract:
In any decision making under uncertainty, the goal is mostly to minimize the expected cost. The minimization of cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many models in practice contain complex and varying parameters that cannot easily be taken into account using the usual deterministic methods of optimization. Thus, it is very important to look for other methods that can be used to get insight into such models. The MCMC method is one of the practical methods that can be used for optimization of stochastic models under uncertainty. This method is based on simulation, which provides a general methodology that can be applied in nonlinear and non-Gaussian state models. The MCMC method is very important for practical applications because it is a unified estimation procedure which simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters given the data measurements. The MCMC method is faster in terms of computing time when compared to other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC and stochastic optimization methods. Then an example is given of how MCMC can be applied to maximize production at a minimum cost in a chemical reaction process. It is observed that this method performs well in optimizing the given cost function, with very high certainty.
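A minimal random-walk Metropolis sketch of the idea (hedged: a toy one-dimensional cost under noise, not the chemical-reaction model of the thesis): the chain targets a density proportional to exp(-E[cost]/T), so it concentrates where the estimated expected cost is low, and the best visited point is returned.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_cost(x, n_samples=200):
    """Monte Carlo estimate of an expected cost under uncertainty
    (hypothetical: quadratic cost with a randomly shifted optimum)."""
    shift = rng.normal(0.0, 0.3, n_samples)      # uncertain parameter
    return float(np.mean((x - 1.5 + shift) ** 2))

T, x, cost = 0.1, 0.0, noisy_cost(0.0)           # temperature, initial state, initial cost
best_x, best_cost = x, cost
for _ in range(5000):
    x_prop = x + rng.normal(0.0, 0.2)            # random-walk proposal
    cost_prop = noisy_cost(x_prop)
    # Metropolis acceptance for the target exp(-cost/T)
    if np.log(rng.random()) < (cost - cost_prop) / T:
        x, cost = x_prop, cost_prop
        if cost < best_cost:
            best_x, best_cost = x, cost
print(best_x)  # should be near 1.5, the minimizer of the expected cost
```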
Abstract:
Thirty heads with the neck segment of Caiman latirostris were used. The animals came from a breeding facility called Mister Caiman, under the authorization of the Brazilian Institute of Environment and Renewable Natural Resources (Ibama). Animals were sacrificed according to the slaughtering routine of the abattoir, and the heads were sectioned at the level of the third cervical vertebra. The arterial system was washed with cold saline solution, with drainage through the jugular veins. Subsequently, the system was filled with red-colored latex injection. The specimens were then fixed in 20% formaldehyde for seven days. The brains were removed together with a segment of the spinal cord; the dura mater was removed and the arteries were dissected. At the level of the hypophysis, the internal carotid artery gave off a rostral branch and a short caudal branch, continuing naturally as the caudal cerebral artery. This artery projected laterodorsally and, as it passed over the optic tract, gave off its first (I) central branch. It penetrated the transverse cerebral fissure, emitting the diencephalic artery and then its second (II) central branch. Still inside the fissure, it gave origin to occipital hemispheric branches and a pineal branch. It emerged from the transverse cerebral fissure over the occipital pole of the cerebral hemisphere and projected rostrally, sagittally along the longitudinal cerebral fissure, as the interhemispheric artery. This artery gave off medial and convex hemispheric branches to the respective surfaces of the cerebral hemispheres and anastomosed with its contralateral homologue, forming the common ethmoidal artery. This artery entered the fissure between the olfactory peduncles, emerging ventrally and dividing into right and left ethmoidal arteries, which progressed towards the nasal cavities, vascularizing them. The territory of the caudal cerebral artery included the most caudal area of the base of the cerebral hemisphere, its convex surface, the olfactory peduncles and bulbs, the choroid plexuses, and the diencephalon with its parietal organs.