947 results for Bio-inspired optimization techniques
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly requested in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays fundamental ground on which to build the foundations of successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. However, from the application point of view, the rate of theoretical development does not seem able to keep pace with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing us to address instances with dimensions that were not tractable before. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claim by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to 40X.
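As an illustration of the kind of parallelism being exposed (a minimal sketch under assumed data structures, not the thesis' actual implementation), the relaxation stage of a Bellman-Ford-style dynamic program for the SSSPP can be split across workers, since all relaxations within a stage are independent and only the update step needs to be serialized:

    # Minimal sketch, assuming a simple edge-list graph; not the thesis' parallel code.
    from concurrent.futures import ThreadPoolExecutor

    def parallel_sssp(num_nodes, edges, source, workers=4):
        """edges: list of (u, v, weight) tuples; returns the distance labels."""
        INF = float("inf")
        dist = [INF] * num_nodes
        dist[source] = 0.0
        chunks = [edges[i::workers] for i in range(workers)]

        def relax(chunk):
            # Each worker scans its chunk and proposes improved labels.
            return [(v, dist[u] + w) for u, v, w in chunk
                    if dist[u] != INF and dist[u] + w < dist[v]]

        with ThreadPoolExecutor(max_workers=workers) as pool:
            for _ in range(num_nodes - 1):            # at most |V|-1 stages
                proposals = [p for part in pool.map(relax, chunks) for p in part]
                if not proposals:
                    break                             # converged early
                for v, d in proposals:                # serial update step
                    if d < dist[v]:
                        dist[v] = d
        return dist

    # Example: distances from node 0 on a small assumed graph -> [0.0, 2.0, 3.0, 4.0]
    print(parallel_sssp(4, [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 5.0), (2, 3, 1.0)], 0))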
Abstract:
Nowadays the number of hip joint arthroplasty operations continues to increase because the elderly population is growing. Moreover, global life expectancy is increasing and people are adopting a more active way of life. For these reasons, the demand for implant revision operations is becoming more frequent. The procedure involves the surgical removal of the old implant and its substitution with a new one. Every time a new implant is inserted, it alters the internal strain distribution of the femur, jeopardizing the remodeling process and possibly leading to bone tissue loss. This is of major concern, particularly in the proximal Gruen zones, which are considered critical for implant stability and longevity. Today, different implant designs exist on the market; however, there is no clear understanding of which implant design parameters best achieve optimal mechanical conditions. The aim of this study is to investigate the stress shielding effect generated by different implant design parameters on the proximal femur, evaluating which ranges of those parameters lead to the most physiological conditions.
Abstract:
The first part of this essay investigates the available and promising technologies for biogas and bio-hydrogen production from the anaerobic digestion of different organic substrates. It aims to show all the peculiarities of this complicated process, such as continuity, number of stages, moisture, biomass preservation and feeding rate. The main outcome of this part is an awareness of the huge number of reactor configurations, each suitable for only a few types of substrate and circumstance. Among the most remarkable options are, first of all, wet continuous stirred tank reactors (CSTR), well suited to the high waste production rates of urbanised and industrialised areas. Then there is the up-flow anaerobic sludge blanket reactor (UASB), aimed at biomass preservation in the case of highly heterogeneous feedstock, which can also be treated in a wise co-digestion scheme. On the other hand, smaller and scattered rural settings can be served either by wet low-rate digesters for homogeneous agricultural by-products (e.g. fixed-dome) or by cheap dry batch reactors for lignocellulosic waste and energy crops (e.g. hybrid batch-UASB). The biological and technical aspects raised in the first chapters are later supported with bibliographic research on the important and varied large-scale applications of the products of anaerobic digestion. After the upgrading techniques, particular care is devoted to their importance as biofuels, highlighting a further and more flexible solution consisting of reforming to syngas. Electricity generation and the associated heat conversion are then presented, stressing the high potential of fuel cells (FC) as electricity converters. Last but not least, both use as a vehicle fuel and injection into the gas grid are considered promising applications. Consideration of the still important issues of bio-hydrogen management (e.g. storage and delivery) leads to the conclusion that it would be far more challenging to implement than bio-methane, which can potentially “inherit” the assets of the similar fossil natural gas. Building on the gathered knowledge, a chapter is devoted to the energetic and financial study of a hybrid power system supplied by biogas and made up of different pieces of equipment (natural gas thermocatalytic unit, molten carbonate fuel cell and combined-cycle gas turbine). A parallel analysis of a bio-methane-fed CCGT system is carried out in order to compare the two solutions. Both studies show that the apparent inconvenience of the hybrid system actually emphasises the importance of extending the computations to a broader scope, i.e. the upstream processes for biofuel production and the environmental and social drawbacks of fossil-derived emissions. Thanks to this “boundary widening”, one can recognise the hidden benefits of the hybrid system over the CCGT system.
Abstract:
This thesis aims to show the process of localizing a French website, that of the Parc de loisir du Lac de Maine, into Italian. In particular, the goal of the thesis is to demonstrate that, when speaking of localization for the Web, two essential factors must be taken into account, both of which contribute decisively to a site's success on the Internet. On one side is website usability, also known as Web ergonomics, whose objective is to make websites easier for the end user to use, so that the user's approach to the site is intuitive and simple. On the other side is search engine optimization, commonly called "SEO" after its English acronym, which seeks to identify the best techniques for optimizing the visibility of a website in search engine results pages. By improving the positioning of a web page in search engine results pages, the site has far more opportunities to increase its traffic and, therefore, its success. The first chapter of this thesis introduces localization with a theoretical approach that illustrates its main characteristics; it also contains references to the birth and origin of localization. It also introduces the domain of the site to be localized, namely tourism, underlining the importance of the specialized language of tourism. The second chapter is devoted to search engine optimization and Web ergonomics. Finally, the last chapter is devoted to the localization work on the Parc website: the site and its optimization and ergonomics problems are analyzed, and all the phases of the localization process are shown, including the integration of several techniques aimed at improving ease of use for end users as well as the site's positioning in search engine results pages.
Abstract:
Polylactic acid (PLA) is a bio-derived, biodegradable polymer with a number of mechanical properties similar to commodity plastics like polyethylene (PE) and polyethylene terephthalate (PETE). There has recently been great interest in using PLA to replace these typical petroleum-derived polymers because of the developing trend toward more sustainable materials and technologies. However, PLA's inherently slow crystallization behavior is not compatible with prototypical polymer processing techniques such as molding and extrusion, which in turn inhibits its widespread use in industrial applications. In order to make PLA a commercially viable material, it must be processed in such a way that its tendency to form crystals is enhanced. The industry standard for producing PLA products is twin screw extrusion (TSE), where polymer pellets are fed into a heated extruder, mixed at a temperature above the melting temperature, and molded into a desired shape. A relatively novel processing technique called solid-state shear pulverization (SSSP) processes the polymer in the solid state so that nucleation sites can develop and fast crystallization can occur. SSSP has also been found to enhance the mechanical properties of a material, but its powder output form is undesirable in industry. A new process called solid-state/melt extrusion (SSME), developed at Bucknell University, combines the TSE and SSSP processes in one instrument. This technique has proven to produce moldable polymer products with increased mechanical strength. This thesis first investigated the effects of the TSE, SSSP, and SSME polymer processing techniques on PLA, seeking to determine the process that yields products with the most enhanced thermal and mechanical properties. For characterization, percent crystallinity, crystallization half time, storage modulus, softening temperature, degradation temperature and molecular weight were analyzed for all samples. Through these characterization techniques, it was observed that SSME-processed PLA had enhanced properties relative to TSE- and SSSP-processed PLA. Based on these findings, an optimization study for SSME-processed PLA was conducted in which throughput and screw design were varied. The optimization study determined that PLA processed with a low flow rate and a moderate screw design in an SSME process produced the polymer product with the largest increase in thermal properties and the highest retention of polymer structure relative to TSE-, SSSP-, and all other SSME-processed PLA. It was concluded that the SSSP part of the process scissions polymer chains, creating defects within the material, while the TSE part allows these defects to be mixed thoroughly throughout the sample. The study showed that a proper SSME setup allows for both an increase in nucleation sites within the polymer and sufficient mixing, which in turn leads to the development of a large amount of crystals in a short period of time.
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
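For reference, the quantity being approximated can be stated as a short sketch (an illustrative software computation over a complete trace, not the hardware sampling scheme proposed in this report):

    # Reuse distance of an access: number of distinct addresses touched since the
    # previous access to the same address. Illustrative O(n^2) computation.
    def reuse_distances(trace):
        last_seen = {}            # address -> index of its previous access
        distances = []            # one entry per access; None = first (cold) access
        for i, addr in enumerate(trace):
            if addr in last_seen:
                window = trace[last_seen[addr] + 1:i]
                distances.append(len(set(window)))
            else:
                distances.append(None)
            last_seen[addr] = i
        return distances

    # Example: in the trace a b c a, the second access to 'a' has reuse distance 2.
    print(reuse_distances(["a", "b", "c", "a"]))   # [None, None, None, 2]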
Abstract:
This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection while the parallel plate approximation model is incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
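A generic genetic-algorithm loop of the kind used for such design optimization might look as follows (an illustrative sketch with placeholder fitness, bounds and operator choices; the actual electrostatic sensor model and design variables are not reproduced here):

    # Sketch only: one shared bound range for all design variables, truncation
    # selection, uniform crossover and random-reset mutation.
    import random

    def genetic_search(fitness, n_vars, bounds, pop_size=30, generations=15):
        """fitness(x) -> objective value; lower is better in this sketch."""
        pop = [[random.uniform(*bounds) for _ in range(n_vars)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)                         # best designs first
            parents = pop[:pop_size // 2]                 # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
                if random.random() < 0.2:                            # mutation
                    i = random.randrange(n_vars)
                    child[i] = random.uniform(*bounds)
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)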
Abstract:
Understanding how a living cell behaves has become a very important topic in today's research. Hence, different sensors and testing devices have been designed to test the mechanical properties of living cells. This thesis presents a method of micro-fabricating a bio-MEMS-based force sensor used to measure the force response of living cells. Initially, the basic concepts of MEMS are discussed and the different micro-fabrication techniques used to manufacture various MEMS devices are described. Many MEMS-based devices have been manufactured and employed for testing nano-materials and bio-materials, and each of the MEMS-based devices described in this thesis uses a novel concept for testing its specimens. The different specimens tested are nano-tubes, nano-wires, thin-film membranes and biological living cells; hence, these different devices used for material testing and cell mechanics are explained. The micro-fabrication techniques used to fabricate this force sensor are described, and the experiments performed to successfully characterize each step in the fabrication are explained. The fabrication of this force sensor is based on the facilities available at Michigan Technological University. Some interesting and uncommon MEMS phenomena were observed during this fabrication and are shown in multiple SEM images.
Abstract:
The objective of this research is to develop sustainable wood-blend bioasphalt and to characterize the atomic, molecular and bulk-scale behavior necessary to produce advanced asphalt paving mixtures. Bioasphalt was manufactured from Aspen, Basswood, Red Maple, Balsam, Maple, Pine, Beech and Magnolia wood via a 25 kWt fast-pyrolysis plant at 500 °C and refined into two distinct end forms: non-treated (5.54% moisture) and treated bioasphalt (1% moisture). Michigan petroleum-based asphalt, Performance Grade (PG) 58-28, was modified with 2, 5 and 10% bioasphalt by weight of base asphalt and characterized with gas chromatography-mass spectroscopy (GC-MS), Fourier transform infrared (FTIR) spectroscopy and automated flocculation titrimetry (AFT) techniques. The GC-MS method was used to characterize the carbon-hydrogen-nitrogen (CHN) elemental ratio, while the FTIR and AFT were used to characterize the oxidative aging performance and the solubility parameters, respectively. For rheological characterization, the rotational viscosity, dynamic shear modulus and flexural bending methods were used to evaluate the low-, intermediate- and high-temperature performance of the bio-modified asphalt materials. Fifty-four 5E3 (maximum of 3 million expected equivalent standard axle traffic loads) asphalt paving mixes were then prepared and characterized to investigate their laboratory permanent deformation, dynamic mix stiffness, moisture susceptibility, workability and constructability performance. From the research investigations, it was concluded that: 1) levo, 2,6-dimethoxyphenol, 2-methoxy-4-vinylphenol, 2-methyl-1,2-cyclopentanedione and 4-allyl-2,6-dimethoxyphenol are the dominant chemical functional groups; 2) bioasphalt increases the viscosity and dynamic shear modulus of traditional asphalt binders; 3) bio-modified petroleum asphalt can provide low-temperature cracking resistance benefits at -18 °C but is susceptible to cracking at -24 °C; 4) carbonyl and sulphoxide oxidation in petroleum-based asphalt increases with increasing bioasphalt modifier content; 5) bioasphalt causes the asphaltene fractions in petroleum-based asphalt to precipitate out of the solvent maltene fractions; 6) there is no definite improvement or decline in the dynamic mix behavior of bio-modified mixes at low temperatures; 7) bio-modified asphalt mixes exhibit better rutting performance than traditional asphalt mixes; 8) bio-modified asphalt mixes have lower susceptibility to moisture damage; and 9) more field compaction energy is needed to compact bio-modified mixes.
Abstract:
The problem of the optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems where the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations. Full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have different sizes from their initial sizes. The process repeats, increasing the size of the sub-populations of fitter solutions. Both techniques are applied to several MGADSM problems. They have the capability to determine the number of swing-bys, the planets to swing by, launch and arrival dates, and the number of deep space maneuvers as well as their locations, magnitudes, and directions, in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the global trajectory optimization competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved. The J2 perturbation and zonal coverage are considered to design repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced. The orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically every desired number of days.
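The hidden-genes mechanism can be sketched in a few lines (an illustration of the idea only, not the thesis code): every chromosome keeps its full, fixed length for the genetic operators, while the cost function only sees the genes currently flagged as effective.

    # Sketch, assuming a simple list-of-floats chromosome and a caller-supplied
    # cost function over the active genes only.
    import random

    def evaluate(chromosome, activity_flags, cost_of_active):
        active_genes = [g for g, on in zip(chromosome, activity_flags) if on]
        return cost_of_active(active_genes)     # hidden genes are ignored here

    def crossover(parent_a, parent_b):
        # Full-length chromosomes undergo standard genetic operations,
        # regardless of which genes are currently hidden.
        cut = random.randrange(1, len(parent_a))
        return parent_a[:cut] + parent_b[cut:]

    # Example: a 6-gene chromosome where only the first 3 genes are effective;
    # the cost function never sees the hidden tail.
    chrom = [1.0, 2.0, 3.0, 9.9, 9.9, 9.9]
    flags = [True, True, True, False, False, False]
    print(evaluate(chrom, flags, cost_of_active=sum))   # 6.0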
Abstract:
Micro-scale, two-phase flow is found in a variety of devices such as lab-on-a-chip systems, bio-chips, micro-heat exchangers, and fuel cells. Knowledge of the fluid behavior near the dynamic gas-liquid interface is required for developing accurate predictive models. Light is distorted near a curved gas-liquid interface, preventing accurate measurement of interfacial shape and internal liquid velocities. This research focused on the development of experimental methods designed to isolate and probe dynamic liquid films and to measure velocity fields near a moving gas-liquid interface. A high-speed, reflectance, swept-field confocal (RSFC) imaging system was developed for imaging near curved surfaces. Experimental studies of the dynamic gas-liquid interface of micro-scale, two-phase flow were conducted in three phases. Dynamic liquid film thicknesses of segmented, two-phase flow were measured using the RSFC and compared to a classic film thickness deposition model. Flow fields near a steadily moving meniscus were measured using the RSFC and particle tracking velocimetry. The RSFC provided high-speed imaging near the menisci without distortion caused by the gas-liquid interface. Finally, interfacial morphology for internal two-phase flow and droplet evaporation was measured using interferograms produced by the RSFC imaging technique. Each technique can be used independently or simultaneously.
Abstract:
In this paper, a computer-aided diagnostic (CAD) system for the classification of hepatic lesions from computed tomography (CT) images is presented. Regions of interest (ROIs) taken from nonenhanced CT images of normal liver, hepatic cysts, hemangiomas, and hepatocellular carcinomas have been used as input to the system. The proposed system consists of two modules: the feature extraction and the classification modules. The feature extraction module calculates the average gray level and 48 texture characteristics, which are derived from the spatial gray-level co-occurrence matrices obtained from the ROIs. The classifier module consists of three sequentially placed feed-forward neural networks (NNs). The first NN classifies liver regions as normal or pathological. The pathological liver regions are characterized by the second NN as cyst or "other disease." The third NN classifies "other disease" into hemangioma or hepatocellular carcinoma. Three feature selection techniques have been applied to each individual NN: sequential forward selection, sequential floating forward selection, and a genetic algorithm for feature selection. The comparative study of the above dimensionality reduction methods shows that genetic algorithms result in lower-dimension feature vectors and improved classification performance.
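Of the three feature selection techniques, sequential forward selection is easily sketched (a generic illustration with a placeholder scoring function, not the actual CAD system):

    # Sketch: greedily add the feature that most improves a classification score.
    # score(subset) -> performance estimate (higher is better); score([]) should
    # return a baseline value such as 0.
    def sequential_forward_selection(features, score, max_features):
        selected = []
        while len(selected) < max_features:
            remaining = [f for f in features if f not in selected]
            if not remaining:
                break
            best = max(remaining, key=lambda f: score(selected + [f]))
            if score(selected + [best]) <= score(selected):
                break                      # no further improvement
            selected.append(best)
        return selected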
Abstract:
The North Atlantic spring bloom is one of the main events that lead to carbon export to the deep ocean and drive oceanic uptake of CO2 from the atmosphere. Here we use a suite of physical, bio-optical and chemical measurements made during the 2008 spring bloom to optimize and compare three different models of biological carbon export. The observations are from a Lagrangian float that operated south of Iceland from early April to late June, and were calibrated with ship-based measurements. The simplest model is representative of typical NPZD models used for the North Atlantic, while the most complex model explicitly includes diatoms and the formation of fast-sinking diatom aggregates and cysts under silicate limitation. We carried out a variational optimization and error analysis for the biological parameters of all three models, and compared their ability to replicate the observations. The observations were sufficient to constrain most phytoplankton-related model parameters to accuracies of better than 15%. However, the lack of zooplankton observations leads to large uncertainties in the model parameters for grazing. The simulated vertical carbon flux at 100 m depth is similar between models and agrees well with available observations, but at 600 m the simulated flux is larger by a factor of 2.5 to 4.5 for the model with diatom aggregation. While none of the models can be formally rejected based on their misfit with the available observations, the model that includes export by diatom aggregation has a statistically significantly better fit to the observations and more accurately represents the mechanisms and timing of carbon export, based on observations not included in the optimization. Thus models that accurately simulate the upper 100 m do not necessarily accurately simulate export to deeper depths.
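The variational optimization amounts to choosing biological parameters that minimize a model-data misfit; a toy sketch of that idea (with an assumed one-compartment bloom model and synthetic data, not the actual NPZD code or float observations) is:

    # Sketch: fit (growth rate, initial biomass) of a toy exponential bloom model
    # by minimizing a weighted least-squares misfit against synthetic observations.
    import numpy as np
    from scipy.optimize import minimize

    def simulate(params, t):
        mu, p0 = params                        # growth rate [1/day], initial biomass
        return p0 * np.exp(mu * t)

    def misfit(params, t, obs, sigma=1.0):
        residual = (simulate(params, t) - obs) / sigma
        return 0.5 * np.sum(residual ** 2)     # variational (least-squares) cost

    np.random.seed(0)
    t = np.linspace(0, 30, 16)                 # days
    obs = 0.5 * np.exp(0.15 * t) + np.random.normal(0, 0.2, t.size)
    best = minimize(misfit, x0=[0.1, 1.0], args=(t, obs), method="Nelder-Mead")
    print(best.x)                              # optimized (mu, p0), close to (0.15, 0.5)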
Abstract:
The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, a short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts with short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction. The VOI-based reconstruction produced 43% less least-squares error inside the VOI and 84% less error throughout the image in a study designed to simulate target motion. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
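The VOI idea can be illustrated with a toy least-squares reconstruction in which only voxels inside the VOI mask are updated while the rest are held at a prior value (an assumption-laden sketch, not the dissertation's reconstruction algorithm):

    # Sketch: gradient descent on ||A x - b||^2 restricted to VOI voxels.
    import numpy as np

    def voi_reconstruction(A, b, voi_mask, x_prior, iters=200, step=None):
        x = x_prior.astype(float)
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe gradient step size
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x[voi_mask] -= step * grad[voi_mask]      # update VOI voxels only
        return x

    # Toy usage with an assumed 2-voxel "image": only the first voxel is in the VOI.
    A = np.array([[1.0, 0.5], [0.0, 1.0]])
    b = np.array([2.0, 1.0])
    print(voi_reconstruction(A, b, np.array([True, False]), np.array([0.0, 1.0])))
    # approaches [1.5, 1.0]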
Abstract:
Purpose: In this work, we present the analysis, design and optimization of an experimental device recently developed in the UK, called the 'GP' Thrombus Aspiration Device (GPTAD). This device has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Method: To obtain the minimum pressure necessary to extract the clot and to optimize the device, we have simulated the performance of the GPTAD, analysing the effects of resistances, compliances and inertances. We model a range of diameters for the GPTAD, considering different forces of adhesion of the blood clot to the artery wall and different lengths of blood clot. In each case we determine the optimum pressure required to extract the blood clot from the artery using the GPTAD, which is attached at its proximal end to a suction pump. Result: We then compare the results of our mathematical modelling to measurements made in the laboratory using plastic tube models of arteries of comparable diameter. We use abattoir porcine blood clots, which are extracted using the GPTAD. The suction pressures required for such clot extraction in the plastic tube models compare favourably with those predicted by the mathematical modelling. Discussion & Conclusion: We conclude, therefore, that mathematical modelling is a useful technique for predicting the performance of the GPTAD and may potentially be used in optimising the design of the device.
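A highly simplified, illustrative force balance (not the authors' lumped resistance-compliance-inertance model, and with assumed parameter values) conveys what the minimum suction pressure must overcome: the clot's adhesion to the artery wall plus the viscous pressure drop along the clot.

    # Sketch: estimate the suction pressure needed to free and move a clot.
    import math

    def minimum_suction_pressure(d, L_clot, tau_adh, mu, Q):
        """d: lumen diameter [m], L_clot: clot length [m],
        tau_adh: assumed adhesion shear stress [Pa], mu: blood viscosity [Pa s],
        Q: target extraction flow rate [m^3/s]. Returns pressure in Pa."""
        area = math.pi * d ** 2 / 4.0
        p_adhesion = tau_adh * (math.pi * d * L_clot) / area       # wall shear over cross-section
        p_viscous = 128.0 * mu * Q * L_clot / (math.pi * d ** 4)   # Poiseuille pressure drop
        return p_adhesion + p_viscous

    # Example with assumed values: 3 mm artery, 10 mm clot -> roughly 6.7 kPa.
    print(minimum_suction_pressure(3e-3, 10e-3, 500.0, 3.5e-3, 1e-6))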