6 results for Simulated annealing algorithms

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized.

A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is also proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum.

The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches in terms of solution quality and run time.

The decomposition approach improved the lower bounds (i.e., the linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run times are very short.
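As a concrete illustration of the kind of metaheuristic this abstract describes, the sketch below applies simulated annealing to the batching problem: jobs are moved between capacity-feasible batches, and the resulting makespan on identical machines drives the acceptance rule. This is a minimal reconstruction for illustration only; the job representation, the neighborhood move, and the cooling schedule are all assumptions, not the study's implementation.

```python
# Minimal simulated-annealing sketch for batching jobs on identical batch
# machines to minimize makespan. Illustrative only; all parameters assumed.
import math
import random

def makespan(batches, n_machines):
    """Each batch is a list of (size, proc_time, ready_time) tuples.
    Batch processing time = longest job; batch ready time = latest arrival.
    Batches are assigned greedily to the earliest-available machine."""
    avail = [0.0] * n_machines
    for batch in sorted(batches, key=lambda b: max(j[2] for j in b)):
        ready = max(j[2] for j in batch)
        proc = max(j[1] for j in batch)
        m = min(range(n_machines), key=lambda i: avail[i])
        avail[m] = max(avail[m], ready) + proc
    return max(avail)

def neighbor(batches, capacity):
    """Move one random job to another (or a new) batch, respecting capacity."""
    new = [list(b) for b in batches]
    src = random.randrange(len(new))
    job = new[src].pop(random.randrange(len(new[src])))
    dst = random.randrange(len(new) + 1)
    if dst == len(new):
        new.append([job])                       # open a new batch
    elif sum(j[0] for j in new[dst]) + job[0] <= capacity:
        new[dst].append(job)                    # move into an existing batch
    else:
        new[src].append(job)                    # move infeasible; undo
    return [b for b in new if b]                # drop emptied batches

def anneal(jobs, capacity, n_machines, t0=10.0, alpha=0.999, iters=5000):
    current = [[j] for j in jobs]               # start with singleton batches
    best, t = current, t0
    for _ in range(iters):
        cand = neighbor(current, capacity)
        delta = makespan(cand, n_machines) - makespan(current, n_machines)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if makespan(current, n_machines) < makespan(best, n_machines):
                best = current
        t *= alpha                              # geometric cooling
    return best, makespan(best, n_machines)
```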

Relevance:

90.00%

Publisher:

Abstract:

A job shop with one batch processing machine and several discrete machines is analyzed. Given a set of jobs, their process routes, processing requirements, and sizes, the objective is to schedule the jobs such that the makespan is minimized. The batch processing machine can process a batch of jobs as long as the machine capacity is not violated, and the batch processing time is equal to the longest processing time among the jobs in the batch. The problem under study can be represented as Jm:batch:Cmax. If no batches were formed, it would reduce to the classical job shop scheduling problem (i.e., Jm::Cmax), which is known to be NP-hard. This research extends the scheduling literature by combining Jm::Cmax with batch processing. The primary contributions are a mathematical formulation, a new network representation, and several solution approaches. The problem under study is observed widely in metal working and other industries, but has received limited attention due to its complexity. A novel network representation of the problem using disjunctive and conjunctive arcs, and a mathematical formulation, are proposed to minimize the makespan. In addition, several algorithms, including batch-forming heuristics, dispatching rules, a Modified Shifting Bottleneck procedure, Tabu Search (TS), and Simulated Annealing (SA), were developed and implemented. An experimental study was conducted to evaluate the proposed heuristics, and the results were compared to those from a commercial solver (CPLEX). TS and SA, using MWKR-FF to construct the initial solution, gave the best solutions among all the proposed heuristics. Their results were close to those of CPLEX, and for some larger instances (with more than 225 total operations) they were competitive in terms of solution quality and run time. For some larger problem instances, CPLEX was unable to report a feasible solution even after running for several hours. Between SA and TS, the experimental study indicated that SA produced a better average Cmax across all instances. The proposed solution approaches will help practitioners schedule a job shop (with both discrete and batch processing machines) more efficiently; they are easy to implement and require short run times to solve large problem instances.
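For illustration, here is a minimal sketch of a batch-forming rule in the spirit of the MWKR-FF initial solution mentioned above (Most Work Remaining, First Fit): jobs queued at the batch machine are ordered by total remaining processing work and placed first-fit into capacity-feasible batches. The field names and the exact rule are assumptions, not the thesis's implementation.

```python
# Hypothetical MWKR-FF-style batch former; job fields are assumed.
def mwkr_first_fit(jobs, capacity):
    """jobs: list of dicts with 'size', 'proc_time', and 'work_remaining'
    (total processing time left on the job's route).
    Returns a list of batches (lists of jobs)."""
    batches = []
    # Prioritize jobs with the most remaining work on their routes.
    for job in sorted(jobs, key=lambda j: j["work_remaining"], reverse=True):
        for batch in batches:
            if sum(j["size"] for j in batch) + job["size"] <= capacity:
                batch.append(job)        # first existing batch with room
                break
        else:
            batches.append([job])        # no room anywhere: open a new batch
    return batches

def batch_time(batch):
    """A batch's processing time is the longest job in it."""
    return max(j["proc_time"] for j in batch)
```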

Relevance:

80.00%

Publisher:

Abstract:

An emergency is a deviation from a planned course of events that endangers people, property, or the environment. It can be described as an unexpected event that causes economic damage, destruction, and human suffering. When a disaster happens, emergency managers are expected to have a response plan for the most likely disaster scenarios. Unlike earthquakes and terrorist attacks, a hurricane response plan can be activated ahead of time, since a hurricane is predicted at least five days before it makes landfall. This research looked into the logistics aspects of the problem in an attempt to develop a hurricane relief distribution network model. We addressed the problem of how to efficiently and effectively deliver basic relief goods to victims of a hurricane disaster: specifically, where to preposition State Staging Areas (SSAs), which Points of Distribution (PODs) to activate, and how to allocate commodities to each POD. Previous research has addressed several of these issues, but without incorporating the random behavior of the hurricane's intensity and path. This research presents a stochastic meta-model that deals with the location of SSAs and the allocation of commodities. The novelty of the model is that it treats the strength and path of the hurricane as stochastic processes and models them as discrete Markov chains. The demand is also treated as a stochastic parameter because it depends on the stochastic behavior of the hurricane. However, for the meta-model, the demand is an input that is determined using Hazards United States (HAZUS), a software package developed by the Federal Emergency Management Agency (FEMA) that estimates losses due to hurricanes and floods. A solution heuristic has been developed based on simulated annealing. Since the meta-model is a multi-objective problem, the heuristic is a multi-objective simulated annealing (MOSA), in which the initial solution and the cooling rate were determined via a design of experiments. The experiments showed that the initial temperature (T0) is irrelevant, but that the temperature reduction factor (δ) must be very gradual. Assessment of the meta-model indicates that the Markov chains performed as well as or better than forecasts made by the National Hurricane Center (NHC). Tests of the MOSA showed that it provides solutions in an efficient manner. Finally, an illustrative example shows that the meta-model is practical.
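To make the stochastic ingredient concrete, the sketch below shows how hurricane intensity can be modeled as a discrete Markov chain and sampled to generate scenarios. The five-state space and the transition matrix are invented placeholders, not values from the dissertation or the National Hurricane Center.

```python
# Illustrative discrete Markov chain over hurricane intensity categories.
# States and probabilities are made-up placeholders.
import random

STATES = ["Cat1", "Cat2", "Cat3", "Cat4", "Cat5"]
# P[i][j] = probability of moving from state i to state j in one time step;
# each row sums to 1.
P = [
    [0.70, 0.20, 0.10, 0.00, 0.00],
    [0.15, 0.60, 0.20, 0.05, 0.00],
    [0.05, 0.15, 0.55, 0.20, 0.05],
    [0.00, 0.10, 0.20, 0.55, 0.15],
    [0.00, 0.00, 0.10, 0.30, 0.60],
]

def sample_path(start, steps):
    """Sample one realization of the intensity process over `steps` periods."""
    state = STATES.index(start)
    path = [start]
    for _ in range(steps):
        state = random.choices(range(len(STATES)), weights=P[state])[0]
        path.append(STATES[state])
    return path

# One simulated 5-period intensity scenario starting at Category 3:
print(sample_path("Cat3", 5))
```

Sampling many such paths (and analogous paths for the storm track) yields the scenario set over which a multi-objective heuristic like MOSA evaluates candidate SSA locations.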

Relevance:

80.00%

Publisher:

Abstract:

Chloroperoxidase (CPO), a 298-residue glycosylated protein from the fungus Caldariomyces fumago, is probably the most versatile heme enzyme yet discovered. Interest in CPO as a catalyst is based on its power to produce enantiomerically enriched products. Recent research has focused on the ability of CPO to epoxidize alkenes with high regioselectivity and enantioselectivity as an efficient and environmentally benign alternative to traditional synthetic routes. There has been little work on the nature of ligand binding, which probably controls the regio- and enantiospecificity of CPO; consequently, it is here that we focus our work. We report docking calculations and computer simulations aimed at predicting the enantiospecificity of CPO-catalyzed epoxidation of three model substrates. On the basis of this work, candidate mutations to improve the efficiency of CPO are predicted. To accomplish these aims, a simulated annealing and molecular dynamics protocol is developed to sample potentially reactive substrate/CPO complexes.
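As a rough illustration of the sampling idea, the sketch below shows a Metropolis-style acceptance rule driving a simulated-annealing search over substrate poses. The energy function and move generator are placeholder callables; the actual protocol couples annealing with molecular dynamics and is not reproduced here.

```python
# Metropolis-style simulated annealing over candidate binding poses.
# `energy` and `perturb` are placeholder callables supplied by the caller.
import math
import random

def accept(e_old, e_new, temperature):
    """Metropolis criterion: always accept downhill moves; accept uphill
    moves with probability exp(-dE / T), so high T explores broadly and
    low T settles into low-energy complexes."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / temperature)

def anneal_pose(pose, energy, perturb, t_start=300.0, t_end=10.0, steps=1000):
    t = t_start
    cool = (t_end / t_start) ** (1.0 / steps)   # geometric cooling factor
    e = energy(pose)
    for _ in range(steps):
        candidate = perturb(pose)               # e.g. random rigid-body move
        e_cand = energy(candidate)
        if accept(e, e_cand, t):
            pose, e = candidate, e_cand
        t *= cool
    return pose, e
```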

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, the rapid development of optical spectroscopy for tissue diagnosis has been indicative of its high clinical value. The goal of this research is to prove the feasibility of using diffuse reflectance spectroscopy and fluorescence spectroscopy to assess myocardial infarction (MI) in vivo. The proposed optical technique was designed to be an intra-operative guidance tool that can provide surgeons and researchers with useful information about the condition of an infarct.

To gain insight into the pathophysiological characteristics of an infarct, two novel spectral analysis algorithms were developed to interpret diffuse reflectance spectra. The algorithms are based on the unique absorption properties of hemoglobin and retrieve regional hemoglobin oxygen saturation and concentration in tissue from diffuse reflectance spectra. The algorithms were evaluated and validated using both simulated data and actual experimental data.

Finally, the hypothesis of the study was validated using a rabbit model of MI, in which the MI was induced by ligation of a major coronary artery of the left ventricle. Three to four weeks after the MI was induced, the extent of myocardial tissue injury and the evolution of the wound healing process were investigated using the proposed spectroscopic methodology as well as histology. The correlations between spectral alterations and histopathological features of the MI were analyzed statistically.

The results of this PhD study demonstrate the applicability of the proposed optical methodology for assessing myocardial tissue damage induced by MI in vivo. The results of the spectral analysis suggest that connective tissue proliferation induced by MI significantly alters the characteristics of the diffuse reflectance and fluorescence spectra, and that the magnitude of the alterations can be quantitatively related to the severity and extent of the proliferation.
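As a schematic of the hemoglobin-based idea (not the dissertation's algorithms), the sketch below fits a measured attenuation spectrum as a linear mix of oxy- and deoxyhemoglobin extinction spectra and derives oxygen saturation from the fitted concentrations. Real analyses use tabulated extinction coefficients and a light-transport model rather than this direct fit; every name here is an assumption.

```python
# Linear spectral unmixing sketch: attenuation ~ c_HbO2*eps_HbO2 + c_Hb*eps_Hb
# + baseline, solved by least squares; SO2 = c_HbO2 / (c_HbO2 + c_Hb).
import numpy as np

def fit_so2(reflectance, eps_hbo2, eps_hb):
    """reflectance: (n,) measured diffuse reflectance at n wavelengths;
    eps_hbo2, eps_hb: (n,) molar extinction coefficients at those wavelengths
    (e.g. from a published compilation). Returns (SO2, relative total Hb)."""
    attenuation = -np.log(np.asarray(reflectance))   # Beer-Lambert-style
    # Design matrix: [HbO2, Hb, constant baseline]
    A = np.column_stack([eps_hbo2, eps_hb, np.ones_like(eps_hbo2)])
    (c_hbo2, c_hb, _), *_ = np.linalg.lstsq(A, attenuation, rcond=None)
    total = c_hbo2 + c_hb                            # proportional to [tHb]
    so2 = c_hbo2 / total if total > 0 else float("nan")
    return so2, total
```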

Relevance:

30.00%

Publisher:

Abstract:

Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used for estimating the radiation dose for a therapy, evaluating the progression of a disease, and serving as a prognostic indicator for predicting outcome. PET images have low resolution and high noise, and are affected by the partial volume effect (PVE); manually segmenting each tumor is cumbersome and hard to reproduce. To address these problems, I developed the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF) and does not require optimization for a specific camera. The algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution, and on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBRs) and different acquisition times (1-5 min). The algorithm was also applied to ten clinical datasets, where the results were compared with manual segmentation and with the fixed-percentage thresholding methods T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average errors in the FV and mAC calculations were 30% and -35%, respectively, for the 0.5 ml tumor, and about 5% for the 16 ml tumor. The overall FV error was about 10% for heterogeneous tumors in the physical and simulated phantom data. For the clinical images, the FV and mAC errors relative to manual segmentation were around -17% and 15%, respectively. In summary, the algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF, and it can also improve dose estimation and treatment planning.
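To convey the flavor of such an iterative segmentation loop, the sketch below re-estimates foreground and background statistics and updates the threshold until the contour stabilizes (an ISODATA-style rule). The actual IDTS algorithm also interleaves a deconvolution step to correct for the PVE; that step is omitted here, and all parameters below are assumptions.

```python
# Generic iterative-thresholding sketch for PET tumor delineation.
# Illustrative only; not the IDTS implementation.
import numpy as np

def iterative_threshold(image, voxel_volume_ml, tol=1e-3, max_iter=100):
    """image: 3-D activity array; voxel_volume_ml: volume of one voxel in ml.
    Returns (mask, functional volume in ml, mean activity concentration)."""
    thresh = 0.5 * image.max()              # initial guess: 50% of max
    for _ in range(max_iter):
        mask = image >= thresh
        fg = image[mask].mean()             # mean activity inside the contour
        bg = image[~mask].mean()            # background activity outside it
        new_thresh = 0.5 * (fg + bg)        # midpoint update (ISODATA-style)
        if abs(new_thresh - thresh) < tol * thresh:
            break                           # contour has stabilized
        thresh = new_thresh
    fv = mask.sum() * voxel_volume_ml       # functional volume (FV)
    mac = image[mask].mean()                # mean activity concentration (mAC)
    return mask, fv, mac
```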
