840 results for multimodal optimization
Abstract:
Metaheuristic methods have become increasingly popular approaches for solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima places additional demands on algorithm design, which must be effective in both respects in the context of multimodal optimization. In this thesis, several different multimodal optimization algorithms are studied with regard to how their implementation of the global and local search phases affects their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is generally not thoroughly understood, the research relies heavily on experimental studies to determine the properties of the different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and sufficiently varied problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As part of this thesis, such a framework for generating tunable test functions for experimentally evaluating multimodal optimization methods is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase can boost performance significantly, but a weak local phase may invalidate the advantages gained from the global phase.
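To make the global/local split concrete, here is a minimal sketch (not one of the thesis's proposed algorithms): a basic Differential Evolution global phase whose final population is polished by a Nelder-Mead local search, with a simple distance threshold standing in for niching. The test function, population size, and threshold are illustrative assumptions.

```python
# Minimal sketch of a DE global phase followed by a local refinement phase.
import numpy as np
from scipy.optimize import minimize

def f(x):  # illustrative multimodal test function with several minima
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 1))          # global phase: DE population
F, CR = 0.7, 0.9
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.where(rng.random(1) < CR, a + F * (b - c), pop[i])
        if f(trial) < f(pop[i]):
            pop[i] = trial

# local phase: polish each survivor, then keep distinct optima (distance-based niching)
polished = [minimize(f, x, method="Nelder-Mead").x for x in pop]
optima = []
for x in polished:
    if all(np.linalg.norm(x - y) > 0.5 for y in optima):
        optima.append(x)
print(sorted(float(x[0]) for x in optima))
```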
Abstract:
The globally increasing number of disaster victims poses a complex challenge for disaster management authorities. Moreover, to accomplish a successful transition between preparedness and response, it is important to consider the different features inherent to each type of disaster. Floods are among the most frequent and harmful disasters, hence the need for a disaster-preparedness tool that supports efficient and effective flood management. The purpose of this article is to introduce a method that simultaneously defines the proper location of shelters and distribution centers, along with the allocation of prepositioned goods and the distribution decisions required to satisfy flood victims. The tool combines a raster geographical information system (GIS) with an optimization model. The GIS determines the flood hazard of the city areas in order to assess the flood situation and to discard floodable facilities. Then, the multi-commodity multimodal optimization model is solved to obtain the Pareto frontier of two criteria: distance and cost. The methodology was applied to a case study of the 2007 flood in Villahermosa, Mexico, and the results were compared to an optimized scenario of the guidelines followed by the Mexican authorities, showing that the value of the performance measures was improved by the developed method. Furthermore, the results showed that adequate care can be provided to the affected population with fewer facilities than the current approach requires, and demonstrated the advantages of considering more than one distribution center for relief prepositioning.
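The bi-criteria idea can be illustrated with a deliberately tiny sketch: enumerate candidate distribution-center subsets, drop sites flagged as floodable by the GIS hazard layer, and keep the non-dominated (distance, cost) pairs. All site data below are hypothetical placeholders, not values from the case study.

```python
# Toy sketch of the distance-vs-cost Pareto idea with hypothetical data.
from itertools import combinations

sites = {"A": (3, 100), "B": (5, 60), "C": (2, 150)}   # site -> (avg distance, opening cost)
floodable = {"C"}                                       # ruled out by the GIS hazard layer

candidates = [s for s in sites if s not in floodable]
solutions = []
for k in range(1, len(candidates) + 1):
    for combo in combinations(candidates, k):
        dist = min(sites[s][0] for s in combo)          # best reachable center
        cost = sum(sites[s][1] for s in combo)
        solutions.append((dist, cost, combo))

# keep the non-dominated (distance, cost) pairs
pareto = [s for s in solutions
          if not any(o[0] <= s[0] and o[1] <= s[1] and o[:2] != s[:2] for o in solutions)]
print(pareto)
```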
Abstract:
Monitoring and management of intracranial pressure (ICP) and cerebral perfusion pressure (CPP) is a standard of care after traumatic brain injury (TBI). However, the pathophysiology of so-called secondary brain injury, i.e., the cascade of potentially deleterious events that occurs in the early phase following the initial cerebral insult after TBI, is complex, involving a subtle interplay between cerebral blood flow (CBF), oxygen delivery and utilization, and the supply of the main cerebral energy substrate (glucose) to the injured brain. Regulation of this interplay depends on the type of injury and may vary individually and over time. In this setting, patient management can be a challenging task, and standard ICP/CPP monitoring may become insufficient to prevent secondary brain injury. Growing clinical evidence demonstrates that so-called multimodal brain monitoring, including brain tissue oxygen (PbtO2), cerebral microdialysis, and transcranial Doppler among others, may help optimize CBF and the delivery of oxygen and energy substrate at the bedside, thereby improving the management of secondary brain injury. Looking beyond ICP and CPP, and applying a multimodal therapeutic approach to optimize CBF, oxygen delivery, and brain energy supply, may eventually improve the overall care of patients with head injury. This review summarizes some of the important pathophysiological determinants of secondary cerebral damage after TBI and discusses novel approaches to optimize CBF and provide adequate oxygen and energy supply to the injured brain using multimodal brain monitoring.
Abstract:
Atherosclerosis is a disease that, through the accumulation of lipid plaques, causes hardening of the arterial wall and narrowing of the lumen. These lesions are generally located on the coronary, carotid, aortic, renal, digestive, and peripheral arterial segments. Among peripheral locations, involvement of the lower limbs is particularly frequent. The severity of these arterial lesions is commonly assessed by the degree of stenosis (>50% reduction of the lumen diameter) using angiography, magnetic resonance imaging (MRI), computed tomography, or ultrasound. However, to plan a surgical intervention, a 3D geometric representation of the artery is preferable. Cross-sectional imaging methods (MRI and computed tomography) are very effective at generating good-quality three-dimensional images, but their use is expensive and invasive for patients. 3D ultrasound is a very promising imaging avenue for locating and quantifying stenoses. This imaging modality offers distinct advantages such as convenience, low cost, a non-invasive diagnosis (no irradiation or nephrotoxic contrast agent), and the option of Doppler analysis to quantify blood flow. Since medical robots have already been used successfully in surgery and orthopedics, our team designed a new robotic 3D ultrasound system to detect and quantify lower-limb stenoses. With this new technology, a radiologist manually teaches the robot an ultrasound sweep of the vessel of interest. The robot then repeats the learned trajectory with very high precision, simultaneously controls the ultrasound image acquisition at a constant sampling step, and safely limits the force applied by the probe on the patient's skin. Consequently, reconstructing a 3D arterial geometry of the lower limbs from this system could allow highly reliable localization and quantification of stenoses. The objective of this research project was therefore to validate and optimize this robotic 3D ultrasound imaging system. The reliability of a 3D geometry reconstructed from a robotic reference frame depends heavily on positioning accuracy and on the calibration procedure. Accordingly, the positioning accuracy of the robotic arm was evaluated throughout its workspace with a phantom specially designed to simulate the configuration of lower-limb arteries (article 1, chapter 3). In addition, a Z-shaped crossed-wire phantom was designed to ensure precise calibration of the robotic system (article 2, chapter 4). These optimized methods were used to validate the system for clinical application and to find the transformation that converts the coordinates of the 2D ultrasound image into the Cartesian frame of the robotic arm. With these results, any object swept by the robotic system can be characterized for an adequate 3D reconstruction. Vascular phantoms compatible with several imaging modalities were used to simulate different arterial representations of the lower limbs (article 2, chapter 4; article 3, chapter 5). The reconstructed geometries were validated through comparative analyses. The accuracy in locating and quantifying stenoses with this robotic 3D ultrasound imaging system was also determined. These evaluations were performed in vivo to assess the potential of the clinical use of such a system (article 3, chapter 5).
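The core calibration product described above, the transformation taking 2D ultrasound pixel coordinates into the robot's Cartesian frame, can be sketched with homogeneous transforms. The scale factors and matrices below are placeholders, not the calibrated values from the thesis.

```python
# Illustrative pixel-to-robot-frame mapping via homogeneous transforms.
import numpy as np

sx, sy = 0.1, 0.1                       # mm per pixel (assumed scale factors)
T_probe_image = np.array([              # image plane -> probe frame
    [sx, 0, 0, 0],
    [0, sy, 0, 0],
    [0,  0, 1, 0],
    [0,  0, 0, 1],
])
T_robot_probe = np.eye(4)               # probe -> robot base, from forward kinematics
T_robot_probe[:3, 3] = [120.0, 35.0, 90.0]   # example translation, mm

u, v = 256, 128                         # pixel coordinates of a vessel-wall point
p_image = np.array([u, v, 0, 1.0])
p_robot = T_robot_probe @ T_probe_image @ p_image
print(p_robot[:3])                      # 3D position used for the reconstruction
```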
Abstract:
Global optimization seeks a minimum or maximum of a multimodal function over a discrete or continuous domain. In this paper, we propose a hybrid heuristic, based on the CGRASP and GENCAN methods, for finding approximate solutions to continuous global optimization problems subject to box constraints. Experimental results illustrate the relative effectiveness of CGRASP-GENCAN on a set of benchmark multimodal test functions.
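The structure of such a hybrid can be sketched as a multi-start loop: a randomized global construction step (greatly simplified here relative to CGRASP) followed by an efficient box-constrained local solver. Since GENCAN is not available in scipy, L-BFGS-B stands in for it.

```python
# Simplified multi-start sketch of the global-construction / local-solver split.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):                        # standard multimodal benchmark
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

lo, hi, dim = -5.12, 5.12, 2
rng = np.random.default_rng(1)
best = None
for _ in range(50):                      # GRASP-style restarts
    x0 = rng.uniform(lo, hi, dim)        # randomized construction (simplified)
    res = minimize(rastrigin, x0, method="L-BFGS-B", bounds=[(lo, hi)] * dim)
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```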
Abstract:
This article presents a mathematical logistics optimization model for the multimodal transport of agricultural harvests through the Center-West corridor. The tool was developed in the context of three broad research projects funded by FINEP and carried out by a group of universities. The model, known generically as a multi-commodity minimum-cost flow model, optimizes network flows for sugar, ethanol, corn, soybean, soybean oil, soybean meal, and wheat. The proposed model for estimating inter-regional flows proved to be a feasible tool for assessing the potential use of multimodality. The analysis of these results provides important input for selecting sites with potential for installing cargo-transfer mechanisms and equipment, in addition to helping size such infrastructure. Another important result of the developed tool is the identification of cargo zones whose traffic could be captured by railways, waterways, and pipelines; that is, it enables the identification of regions with potential for multimodal transport.
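The minimum-cost network-flow idea at the heart of the model can be illustrated with a single-commodity simplification (the real model is multi-commodity). The nodes, capacities, and unit costs below are hypothetical.

```python
# Single-commodity simplification: grain moves from a producing zone to a port
# over road and rail arcs, and flow shifts to the cheaper rail corridor.
import networkx as nx

G = nx.DiGraph()
G.add_node("farm", demand=-100)                           # supply of 100 units
G.add_node("port", demand=100)                            # demand of 100 units
G.add_node("rail_hub", demand=0)                          # cargo-transfer point
G.add_edge("farm", "port", capacity=100, weight=9)        # direct road, expensive
G.add_edge("farm", "rail_hub", capacity=80, weight=2)     # road leg to transfer point
G.add_edge("rail_hub", "port", capacity=80, weight=3)     # cheaper rail leg

flow = nx.min_cost_flow(G)
print(flow)                              # most flow takes the rail corridor
print(nx.cost_of_flow(G, flow))
```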
Abstract:
Dynamic importance weighting is proposed as a Monte Carlo method that can sample relevant parts of the configuration space even in the presence of many steep energy minima. The method relies on an additional dynamic variable (the importance weight) to help the system overcome steep barriers. A non-Metropolis theory is developed for the construction of such weighted samplers. Algorithms based on this method are designed for simulation and global optimization tasks arising from multimodal sampling, neural network training, and the traveling salesman problem. Numerical tests on these problems confirm the effectiveness of the method.
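A hedged sketch of one standard dynamic-weighting variant (an R-type move) shows the mechanism: the running importance weight w inflates on rejection, which eventually pushes the chain over barriers that would trap a plain Metropolis sampler. The bimodal target and tuning constants are illustrative assumptions, not the paper's test problems.

```python
# Sketch of an R-type dynamic-weighting move on a bimodal target.
import numpy as np

def pi(x):                                # bimodal target separated by a barrier
    return np.exp(-((x - 2) ** 2) / 0.5) + np.exp(-((x + 2) ** 2) / 0.5)

rng = np.random.default_rng(2)
x, w, theta = 2.0, 1.0, 1.0
samples, weights = [], []
for _ in range(20000):
    y = x + rng.normal(0, 1.0)            # symmetric random-walk proposal
    r = w * pi(y) / pi(x)                 # importance-weighted Metropolis ratio
    p = r / (r + theta)
    if rng.random() < p:                  # accept: move and reset weight
        x, w = y, r + theta
    else:                                 # reject: inflate weight to keep invariance
        w = w * (r + theta) / theta
    samples.append(x)
    weights.append(w)

s, wts = np.array(samples), np.array(weights)
print("weighted mass near each mode:",
      wts[s > 0].sum() / wts.sum(), wts[s < 0].sum() / wts.sum())
```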
Abstract:
Part 18: Optimization in Collaborative Networks
Abstract:
Insulin was used as a model protein to develop innovative Solid Lipid Nanoparticles (SLNs) for the delivery of hydrophilic biotech drugs, with potential use in medicinal chemistry. The SLNs were prepared by double emulsion with the purpose of promoting stability and enhancing protein bioavailability. Softisan®100 was selected as the solid lipid matrix. The surfactant (Tween®80, Span®80 and Lipoid®S75) and insulin levels were chosen by applying a 2² factorial design with a triplicated central point, evaluating the influence on the dependent variables polydispersity index (PI), mean particle size (z-AVE), zeta potential (ZP) and encapsulation efficiency (EE), with the factorial design analyzed by the ANOVA test. Thermodynamic stability, polymorphism and matrix crystallinity were checked by Differential Scanning Calorimetry (DSC) and Wide Angle X-ray Diffraction (WAXD), whereas the toxicity of the SLNs was checked in HepG2 and Caco-2 cells. Results showed a mean particle size (z-AVE) between 294.6 nm and 627.0 nm, a PI in the range of 0.425-0.750, a ZP of about -3 mV, and an EE between 38.39% and 81.20%. After tempering the bulk lipid (mimicking the end of the production process), the lipid showed amorphous characteristics, with a melting point of ca. 30 °C. The toxicity of the SLNs was evaluated in two distinct cell lines (HepG2 and Caco-2), showing concentration-dependent toxicity in HepG2 cells, while no toxicity was observed in Caco-2 cells. The SLNs were stable for 24 h in an in vitro human serum albumin (HSA) solution. The resulting SLNs fabricated by double emulsion may provide a promising approach for the administration of protein therapeutics and antigens.
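For illustration, a 2² factorial design with a triplicated center point can be generated in a few lines; the coded factor names below are placeholders for the study's actual surfactant and insulin settings.

```python
# Sketch of a 2^2 full factorial design with three center-point replicates.
from itertools import product

factors = {"surfactant_level": (-1, 1), "insulin_level": (-1, 1)}   # coded low/high
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
runs += [{f: 0 for f in factors}] * 3          # triplicated center point

for i, run in enumerate(runs, 1):
    print(i, run)
# each run would then be prepared and measured for z-AVE, PI, ZP and EE,
# and factor effects tested by ANOVA on the fitted linear model.
```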
Abstract:
Response surface methodology based on a Box-Behnken design (BBD) was successfully applied to optimize the operating conditions of the electrochemical oxidation of sanitary landfill leachate, aiming to make this method feasible for scale-up. Landfill leachate was treated in a continuous batch-recirculation system using a dimensionally stable anode (DSA®) coated with a Ti/TiO2 and RuO2 oxide film. The effects of three variables, current density (mA/cm²), treatment time (min), and supporting electrolyte dosage (mol/L), on total organic carbon removal were evaluated. Optimized conditions with the highest desirability were obtained at 244.11 mA/cm², 41.78 min, and 0.07 mol/L of NaCl, and at 242.84 mA/cm², 37.07 min, and 0.07 mol/L of Na2SO4. Under the optimal conditions, 54.99% chemical oxygen demand (COD) and 71.07% ammonia nitrogen (NH3-N) removal were achieved with NaCl, and 45.50% COD and 62.13% NH3-N removal with Na2SO4. A new kinetic model derived from the relation between the BBD and kinetic modeling was also proposed.
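As a schematic of the workflow, the sketch below generates the 15-run Box-Behnken design for three coded factors, fits a full quadratic response surface by least squares, and maximizes it over the cube; the response values are synthetic stand-ins for the measured removals.

```python
# Box-Behnken design, quadratic response-surface fit, and optimization sketch.
import numpy as np
from scipy.optimize import minimize

# BBD points for 3 coded factors: pairs at +/-1 with the third factor at 0,
# plus three center replicates (15 runs total)
pts = []
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a in (-1, 1):
        for b in (-1, 1):
            p = [0.0, 0.0, 0.0]
            p[i], p[j] = a, b
            pts.append(p)
pts += [[0.0, 0.0, 0.0]] * 3
X = np.array(pts)

def quad_terms(x):                       # full quadratic model matrix
    x1, x2, x3 = x.T
    return np.column_stack([np.ones_like(x1), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# synthetic stand-in for the measured response at each design point
y = 50 + 2 * X[:, 0] - 5 * X[:, 0] ** 2 - 3 * X[:, 1] ** 2 - 4 * X[:, 2] ** 2
beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)

res = minimize(lambda x: -(quad_terms(x[None, :]) @ beta)[0],
               np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimum in coded units:", res.x.round(3))
```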
Abstract:
Mutations in the SPG4 gene (SPG4-HSP) are the most frequent cause of hereditary spastic paraplegia, but the extent of the neurodegeneration related to the disease is not yet known. Our objective was therefore to identify regions of the central nervous system damaged in patients with SPG4-HSP using a multimodal neuroimaging approach. In addition, we aimed to identify possible clinical correlates of such damage. Eleven patients (mean age 46.0 ± 15.0 years, 8 men) with molecular confirmation of hereditary spastic paraplegia and 23 matched healthy controls (mean age 51.4 ± 14.1 years, 17 men) underwent MRI scans in a 3T scanner. We used 3D T1 images to perform volumetric measurements of the brain and spinal cord. We then performed tract-based spatial statistics and tractography analyses of diffusion tensor images to assess the microstructural integrity of white matter tracts. Disease severity was quantified with the Spastic Paraplegia Rating Scale. Correlations were then carried out between MRI metrics and clinical data. Volumetric analyses did not identify macroscopic abnormalities in the brain of hereditary spastic paraplegia patients. In contrast, we found extensive fractional anisotropy reduction in the corticospinal tracts, cingulate gyri, and splenium of the corpus callosum. Spinal cord morphometry identified atrophy without flattening in the group of patients with hereditary spastic paraplegia. Fractional anisotropy of the corpus callosum and pyramidal tracts correlated with disease severity. Hereditary spastic paraplegia is characterized by relative sparing of the cortical mantle and remarkable damage to the distal portions of the corticospinal tracts, extending into the spinal cord.
Abstract:
Objective: The biochemical alterations between inflammatory fibrous hyperplasia (IFH) and normal tissues of the buccal mucosa were probed using the FT-Raman spectroscopy technique. The aim was to find the minimal set of Raman bands that would furnish the best discrimination. Background: Raman-based optical biopsy is a widely recognized potential technique for noninvasive real-time diagnosis. However, few studies have been devoted to the discrimination of very common subtle or early pathologic states, such as the inflammatory processes that are always present on, for example, cancer lesion borders. Methods: Seventy IFH spectra from 14 patients were compared with 30 normal-tissue spectra from six patients. The statistical analysis was performed with principal components analysis and soft independent modeling of class analogy using cross-validated, leave-one-out methods. Results: Bands close to 574, 1,100, 1,250 to 1,350, and 1,500 cm⁻¹ (mainly amino acid and collagen bands) showed the main intragroup variations, which are due to the acanthosis process in the IFH epithelium. The 1,200 (C-C aromatic/DNA), 1,350 (CH₂ bending/collagen I), and 1,730 cm⁻¹ (collagen III) regions presented the main intergroup variations. This finding was interpreted as originating in an extracellular matrix degeneration process occurring in the inflammatory tissues. The statistical analysis indicated that the best discrimination capability (sensitivity of 95% and specificity of 100%) was obtained using the 530-580 cm⁻¹ spectral region. Conclusions: The existence of this narrow spectral window enabling normal versus inflammatory diagnosis also has useful implications for an in vivo dispersive Raman setup for clinical applications.
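The analysis pipeline (dimensionality reduction plus leave-one-out validation) can be mimicked on synthetic spectra as below; a nearest-centroid rule stands in for SIMCA, and the simulated band shift near 574 cm⁻¹ is an illustrative assumption.

```python
# Schematic PCA + leave-one-out classification on synthetic Raman-like spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
wavenumbers = np.linspace(530, 580, 50)                 # the discriminant window
normal = rng.normal(1.0, 0.05, (30, 50))                # 30 "normal" spectra
ifh = rng.normal(1.0, 0.05, (70, 50))                   # 70 "IFH" spectra
ifh += 0.2 * np.exp(-((wavenumbers - 574) ** 2) / 20)   # simulated 574 cm^-1 band

X = np.vstack([normal, ifh])
y = np.array([0] * 30 + [1] * 70)

clf = make_pipeline(PCA(n_components=3), NearestCentroid())
correct = 0
for train, test in LeaveOneOut().split(X):              # leave-one-out validation
    clf.fit(X[train], y[train])
    correct += clf.predict(X[test])[0] == y[test][0]
print("leave-one-out accuracy:", correct / len(y))
```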
Abstract:
Blends of milk fat and canola oil (MF:CNO) were enzymatically interesterified (EIE) by Rhizopus oryzae lipase immobilized on a polysiloxane-polyvinyl alcohol (SiO2-PVA) composite, in a solvent-free system. A central composite design (CCD) was used to optimize the reaction, considering the effects of different mass fractions of binary MF:CNO blends (50:50, 65:35 and 80:20) and temperatures (45, 55 and 65 °C) on the composition and texture properties of the interesterified products, taking the interesterification degree (ID) and the consistency (at 10 °C) as response variables. For the ID response, both the mass fraction of milk fat in the blend and the temperature were significant, while for the consistency only the mass fraction of milk fat was significant. Empirical models for ID and consistency were obtained that allowed establishing the best interesterification conditions: a blend with 65% milk fat and 35% canola oil, at a temperature of 45 °C. Under these conditions, the ID was 19.77% and the consistency at 10 °C was 56,290 Pa. This eco-friendly process demonstrated that a product could be obtained with the desirable milk fat flavour and better spreadability under refrigerated conditions.
Abstract:
The structural engineering community in Brazil faces new challenges with the recent occurrence of high-intensity tornados. Satellite surveillance data show that the area covering the south-east of Brazil, Uruguay, and part of Argentina is one of the world's most tornado-prone areas, second only to the infamous tornado alley in the central United States. The design of structures subject to tornado winds is a typical example of decision making in the presence of uncertainty. Structural design involves finding a good balance between the competing goals of safety and economy, and this paper presents a methodology to find the optimum balance between these goals in the presence of uncertainty. Reliability-based risk optimization is used to find the optimal safety coefficient that minimizes the total expected cost of a steel frame communications tower subject to extreme storm and tornado wind loads. The technique is not new, but it is applied here to a practical problem of increasing interest to Brazilian structural engineers. The problem is formulated in the partial safety factor format used in current design codes, with an additional partial factor introduced to serve as the optimization variable. The expected cost of failure (or risk) is defined as the product of a limit state exceedance probability and a limit state exceedance cost; these costs include the costs of repairing, rebuilding, and paying compensation for injury and loss of life. The total expected failure cost is the sum of the individual expected costs over all failure modes. The steel frame communications tower studied here has become very common in Brazil due to increasing mobile phone coverage. The study shows that the optimum reliability is strongly dependent on the cost (or consequences) of failure. Since failure consequences depend on the actual tower location, it turns out that different optimum designs should be used in different locations. Failure consequences are also different for the different parties involved in the design, construction, and operation of the tower; hence, it is important that risk is well understood by the parties involved, so that proper contracts can be made. The investigation shows that when non-structural terms dominate design costs (e.g., in residential or office buildings) it is not too costly to over-design; this observation is in agreement with the observed practice for non-optimized structural systems. In this situation, it is much easier to lose money by under-design. When structural material cost is a significant part of the design cost (e.g., a concrete dam or bridge), one is likely to lose significant money by over-design. In this situation, a cost-risk-benefit optimization analysis is highly recommended. Finally, the study also shows that under time-varying loads such as tornados, the optimum reliability is strongly dependent on the selected design life.
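The trade-off driving the optimization can be stylized in a few lines: construction cost grows with the extra partial safety factor while expected failure cost shrinks with it, and the optimum minimizes their sum. The cost figures and the reliability-index model below are illustrative, not values from the paper.

```python
# Stylized reliability-based risk optimization over a partial safety factor.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

C_build0, c_material, C_fail = 100.0, 40.0, 50000.0   # hypothetical cost figures

def p_failure(lam):
    beta = 2.0 + 3.0 * (lam - 1.0)     # toy model: reliability index grows with lam
    return norm.sf(beta)               # limit state exceedance probability

def total_expected_cost(lam):
    # construction cost + expected failure cost (probability times consequence)
    return C_build0 + c_material * (lam - 1.0) + p_failure(lam) * C_fail

res = minimize_scalar(total_expected_cost, bounds=(1.0, 2.0), method="bounded")
print("optimum extra safety factor:", round(res.x, 3),
      " Pf at optimum:", p_failure(res.x))
```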
Abstract:
This paper presents a rational approach to the design of a catamaran's hydrofoil, applied within a modern context of multidisciplinary optimization. The approach includes the use of response surfaces represented by neural networks and a distributed programming environment that increases the optimization speed. A rational approach to the problem simplifies the complex optimization model; when combined with the distributed dynamic training used for the response surfaces, this model increases the efficiency of the process. The results achieved using this approach have justified this publication.
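A minimal sketch of the surrogate idea, under the assumption that a cheap mock stands in for the expensive hydrofoil analysis: sample the design space, fit a small neural-network response surface, and let the optimizer query the surrogate instead of the simulation.

```python
# Surrogate-assisted optimization sketch: neural-network response surface.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def expensive_sim(x):        # stand-in for the hydrodynamic analysis code
    return (x[0] - 0.3) ** 2 + np.sin(5 * x[1]) * 0.1

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (200, 2))                       # sampled design points
y = np.array([expensive_sim(x) for x in X])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, y)          # response surface

res = minimize(lambda x: net.predict(x[None, :])[0], np.array([0.5, 0.5]),
               bounds=[(0, 1), (0, 1)])               # optimize the cheap surrogate
print("surrogate optimum:", res.x, " true value:", expensive_sim(res.x))
```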