975 results for Optimization analysis
Abstract:
Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
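For orientation, the NPWE detectability index referred to above combines the task function, the pre-sampling MTF, the eye filter and the normalized NPS. The following is a minimal numerical sketch of that calculation; the Gaussian MTF, the NNPS shape, the contrast, the disc diameter and the eye-filter constant are placeholder assumptions, not values from the study.

```python
"""Illustrative NPWE detectability index (d') calculation.

A minimal sketch of the non-prewhitened-with-eye-filter (NPWE) model
observer described in the abstract.  The MTF, NNPS, contrast, disc
diameter and eye-filter constant below are placeholder values chosen
only to make the script runnable; they are NOT data from the study.
"""
import numpy as np
from scipy.special import j1

# Radial spatial-frequency axis (cycles/mm); 2D integrals done as 2*pi*f df
f = np.linspace(1e-4, 10.0, 2000)
df = f[1] - f[0]

# Placeholder system characteristics
mtf = np.exp(-(f / 3.0) ** 2)            # pre-sampling MTF (assumed Gaussian)
nnps = 1e-5 * (1.0 + 0.5 * np.exp(-f))   # normalized NPS (assumed shape), mm^2
contrast = 0.15                          # image contrast of the detail
diameter = 0.25                          # disc diameter in mm

# Task function: contrast times the Fourier transform of a disc of that diameter
area = np.pi * diameter ** 2 / 4.0
w = contrast * area * 2.0 * j1(np.pi * diameter * f) / (np.pi * diameter * f)

# Eye filter E(f) ~ f^1.3 * exp(-c f^2); c set by an assumed viewing geometry
c = 0.35
eye = f ** 1.3 * np.exp(-c * f ** 2)

# NPWE detectability index (radially symmetric 2D integrals)
num = (2 * np.pi * np.sum(w ** 2 * mtf ** 2 * eye ** 2 * f) * df) ** 2
den = 2 * np.pi * np.sum(w ** 2 * mtf ** 2 * eye ** 4 * nnps * f) * df
d_prime = np.sqrt(num / den)
print(f"d' = {d_prime:.2f}")
```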
Abstract:
The Mechanistic-Empirical Pavement Design Guide (MEPDG) was developed under National Cooperative Highway Research Program (NCHRP) Project 1-37A as a novel mechanistic-empirical procedure for the analysis and design of pavements. The MEPDG was subsequently supported by AASHTO’s DARWin-ME and most recently marketed as AASHTOWare Pavement ME Design software as of February 2013. Although the core design process and computational engine have remained the same over the years, some enhancements to the pavement performance prediction models have been implemented along with other documented changes as the MEPDG transitioned to AASHTOWare Pavement ME Design software. Preliminary studies were carried out to determine possible differences between AASHTOWare Pavement ME Design, MEPDG (version 1.1), and DARWin-ME (version 1.1) performance predictions for new jointed plain concrete pavement (JPCP), new hot mix asphalt (HMA), and HMA over JPCP systems. Differences were indeed observed between the pavement performance predictions produced by these different software versions. Further investigation was needed to verify these differences and to evaluate whether identified local calibration factors from the latest MEPDG (version 1.1) were acceptable for use with the latest version (version 2.1.24) of AASHTOWare Pavement ME Design at the time this research was conducted. Therefore, the primary objective of this research was to examine AASHTOWare Pavement ME Design performance predictions using previously identified MEPDG calibration factors (through InTrans Project 11-401) and, if needed, refine the local calibration coefficients of AASHTOWare Pavement ME Design pavement performance predictions for Iowa pavement systems using linear and nonlinear optimization procedures. A total of 130 representative sections across Iowa consisting of JPCP, new HMA, and HMA over JPCP sections were used. The local calibration results of AASHTOWare Pavement ME Design are presented and compared with national and locally calibrated MEPDG models.
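The local calibration step described above essentially adjusts model coefficients so that predicted distress matches measured distress over the section set. A minimal sketch of that idea with nonlinear least squares is given below; the power-law transfer function and the synthetic data are placeholders, not the Pavement ME Design models or the Iowa section data.

```python
"""Illustrative local calibration of a distress transfer function.

A minimal sketch of the kind of nonlinear optimization used to refine
local calibration coefficients: choose (beta1, beta2) so that predicted
distress best matches measured distress over a set of sections.  The
power-law transfer function and the synthetic data are placeholders,
not the actual Pavement ME Design models or Iowa data.
"""
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic "mechanistic response" (e.g., accumulated damage) per section
damage = rng.uniform(0.05, 0.8, size=130)

# Synthetic measured distress generated with "true" coefficients plus noise
beta_true = (12.0, 1.4)
measured = beta_true[0] * damage ** beta_true[1] + rng.normal(0, 0.3, damage.size)

def residuals(beta):
    """Predicted minus measured distress for coefficients beta = (b1, b2)."""
    predicted = beta[0] * damage ** beta[1]
    return predicted - measured

fit = least_squares(residuals, x0=[1.0, 1.0])
print("calibrated coefficients:", fit.x)
print("sum of squared error   :", 2 * fit.cost)
```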
Abstract:
This work presents methods for measuring fatigue loading, for post-processing the measurement data, and for fatigue design. The methods were applied to a forest machine loader, a welded structure subjected to fatigue loading. The theoretical part describes the fatigue phenomenon and fatigue design methods, as well as methods for load identification and for post-processing of measurements. Alongside the most commonly used fatigue design methods, a reliability-based fatigue design method is presented. Because of weight and service life requirements, taking fatigue into account is particularly important in loader design. These structures are characterized by certain welded details that are necessary for their function and that often determine the service life of the whole structure. Since these problem areas can usually be identified already at the design stage, the service life of the whole structure can often be improved considerably through the detailing of these features. Optimization of these details is partly possible without load spectrum data, but in most cases load identification is a prerequisite for finding the best solution. At present, the best way to identify the actual fatigue loading is long-term field measurement, in which the loads acting on the structure are determined by means of strain gauges. Load identification is especially important when the service life of the structure is to be determined. Fatigue and fatigue loading are, however, statistical variables, and an exact service life cannot be determined for an individual structure. Using statistical methods it is nevertheless possible to determine the risk of failure of a structure. When the risk of failure is calculated for a large number of individual structures, quite accurate predictions of the number of possible failures can be made. Load spectrum data can then be of wider benefit beyond ordinary design, for example in warranty handling. In this work the presented theories were applied in practice to the fatigue analysis of the boom assembly of a forest harvester. The loads on this structure were measured for a total of 35 hours over a two-week period, and the statistical distribution of the fatigue loading was calculated for the example case from these measurements. However, no conclusions could be drawn about the loading over the whole life cycle of the product or about the loading of other similar products, because the measured sample was relatively short and was limited to one operator and a few work sites. For testing the methods, the sample was nevertheless sufficient. The load spectrum data were also used to establish quality specifications for the example case: a fracture mechanics based method was used to estimate the maximum allowable size of possible casting defects in the harvester column casting. The need for reliability-based design procedures appears to be increasing, so efficient use of long-term field measurements will be a central part of fatigue design in the near future. The methods could be made more effective by combining the load spectrum with known quantities that correlate with the loading, such as the diameter of the tree being handled. True product-specific statistical load distributions could possibly be established more efficiently if, for example, the dependence of the loading on forest type could first be determined.
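To make the damage-accumulation step concrete, the sketch below shows how a measured cycle histogram (for example from rainflow counting of strain-gauge data) can be turned into a Miner's-rule damage sum with an S-N curve and scaled from the measurement period to a target service life; the stress ranges, cycle counts, FAT class and durations are illustrative assumptions, not data from the thesis.

```python
"""Illustrative fatigue damage estimate from measured load cycles.

A minimal sketch of turning cycle counts into a damage sum with an
S-N curve and Miner's rule, then scaling the measured period to a
target service life.  All numbers are made up for illustration.
"""
import numpy as np

# Cycle histogram from the measurement period (stress range in MPa, count)
stress_ranges = np.array([20.0, 40.0, 60.0, 80.0, 120.0])
cycle_counts  = np.array([50000, 12000, 3000, 600, 40])

# S-N curve of a welded detail: N = C / (delta_sigma ** m)
# using the common definition C = 2e6 * FAT**m with slope m = 3 (assumption)
FAT, m = 80.0, 3.0
C = 2e6 * FAT ** m

def damage(ranges, counts):
    """Miner's rule damage sum for the given cycle histogram."""
    n_allowed = C / ranges ** m
    return np.sum(counts / n_allowed)

d_measured = damage(stress_ranges, cycle_counts)

# Scale the 35 h measurement to an assumed service life of 20 000 h
hours_measured, hours_target = 35.0, 20000.0
d_life = d_measured * hours_target / hours_measured

print(f"damage in measured period : {d_measured:.4f}")
print(f"extrapolated life damage  : {d_life:.2f} (failure expected if > 1)")
```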
Abstract:
In this thesis, cleaning of ceramic filter media was studied. Mechanisms of fouling and dissolution of iron compounds, as well as methods for cleaning ceramic membranes fouled by iron deposits, were studied in the literature part. Cleaning agents and different cleaning methods were examined more closely in the experimental part of the thesis. Pyrite is found in geologic strata. It is oxidized to form ferrous ions, Fe(II), and ferric ions, Fe(III); Fe(III) is further hydrolysed to form ferric hydroxide. Hematite and goethite, for instance, are naturally occurring iron oxides and hydroxides. In contact with filter media, they can cause severe fouling that common cleaning techniques are not able to remove. Mechanisms for the dissolution of iron oxides include the ligand-promoted pathway and the proton-promoted pathway. The dissolution can also be reductive or non-reductive. The most efficient mechanism is the ligand-promoted reductive mechanism, which comprises two stages: the induction period and the autocatalytic dissolution. Reducing agents (such as hydroquinone and hydroxylamine hydrochloride), chelating agents (such as EDTA) and organic acids are used for the removal of iron compounds. Oxalic acid is the most effective known cleaning agent for iron deposits. Since formulations are often more effective than organic acids, reducing agents or chelating agents alone, the citrate-bicarbonate-dithionite system, among others, is well studied in the literature. Cleaning can also be enhanced with ultrasound and backpulsing. In the experimental part, oxalic acid and nitric acid were studied alone and in combinations. Citric acid and ascorbic acid, among other chemicals, were also tested. Soaking experiments, experiments with ultrasound and experiments with alternative methods of applying the cleaning solution to the filter samples were carried out. Permeability and ISO Brightness measurements were performed to examine the influence of the cleaning methods on the samples. Inductively coupled plasma optical emission spectroscopy (ICP-OES) analysis of the solutions was carried out to determine the dissolved metals.
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned; therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant, and achieving the minimum current requires control of this reference. An on-line method that minimizes the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
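The initial rotor angle estimation mentioned above amounts to fitting a rotor-angle-dependent inductance model to inductances measured in several directions. A minimal sketch with nonlinear least squares follows; the model L(gamma) = L0 - dL*cos(2*(gamma - theta)) and all numbers are assumptions for illustration, not the thesis's exact model or data.

```python
"""Illustrative initial rotor angle estimation for a PMSM.

A minimal sketch of the idea in the abstract: measure the stator
inductance in several directions and fit a rotor-angle-dependent
inductance model to the measurements with nonlinear least squares.
The model and all numbers are illustrative assumptions.
"""
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Measurement directions (electrical angle of the excitation) and a "true" machine
directions = np.deg2rad(np.arange(0, 180, 15))
L0_true, dL_true, theta_true = 8e-3, 2e-3, np.deg2rad(37.0)
measured_L = (L0_true - dL_true * np.cos(2 * (directions - theta_true))
              + rng.normal(0, 5e-5, directions.size))

def residuals(p):
    """Model-minus-measurement residuals; p = (L0, dL, theta)."""
    L0, dL, theta = p
    return L0 - dL * np.cos(2 * (directions - theta)) - measured_L

fit = least_squares(residuals, x0=[5e-3, 1e-3, 0.0])
L0_hat, dL_hat, theta_hat = fit.x
print(f"estimated rotor angle: {np.rad2deg(theta_hat) % 180:.1f} deg "
      f"(true {np.rad2deg(theta_true):.1f} deg; a 180 deg ambiguity remains)")
```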
Abstract:
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps to overcome some of the numerical difficulties that arise during the global optimization task.
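As a concrete illustration of the recasting idea (our own example, not one taken from the paper), a saturable Hill-type rate can be brought into power-law form by introducing an auxiliary variable:

```latex
% Recasting a saturable (Hill-type) rate into power-law form (illustrative example)
v \;=\; \frac{V\,X^{n}}{K^{n}+X^{n}}
\quad\xrightarrow{\;Z \,\equiv\, K^{n}+X^{n}\;}\quad
v \;=\; V\,X^{n}\,Z^{-1},
\qquad
\frac{dZ}{dt} \;=\; n\,X^{\,n-1}\,\frac{dX}{dt}.
```

The recast system contains only products of power laws, which is the structure that the GMA global optimization algorithms exploit.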
Abstract:
Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes the ethanol production in the fermentation of Saccharomyces cerevisiae.
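A minimal sketch of the epsilon-constraint strategy on a toy two-objective problem is given below: one objective is optimized while the other is turned into an inequality constraint whose bound is swept to trace Pareto-optimal alternatives. The power-law objectives, variable bounds and epsilon grid are illustrative assumptions, not the GMA model of the case study.

```python
"""Illustrative epsilon-constraint sweep for a two-objective problem.

One objective is maximized while the other is kept above a bound
(epsilon) that is swept to generate Pareto-optimal alternatives.
The toy power-law objectives and all numbers are assumptions.
"""
import numpy as np
from scipy.optimize import minimize

def f_product(x):
    """Objective 1 (to maximize): toy power-law 'product synthesis rate'."""
    return x[0] ** 0.5 * x[1] ** 0.8

def f_growth(x):
    """Objective 2 (kept above epsilon): toy power-law 'growth rate'."""
    return x[0] ** 0.3 * x[2] ** 0.6

bounds = [(0.1, 5.0)] * 3          # allowed fold-changes of enzyme activities
pareto = []
for eps in np.linspace(0.5, 2.0, 7):
    res = minimize(lambda x: -f_product(x),            # maximize objective 1
                   x0=[1.0, 1.0, 1.0],
                   bounds=bounds,
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: f_growth(x) - e}],
                   method="SLSQP")
    if res.success:
        pareto.append((eps, f_product(res.x), f_growth(res.x)))

for eps, fp, fg in pareto:
    print(f"eps={eps:.2f}  product={fp:.3f}  growth={fg:.3f}")
```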
Abstract:
Drug combinations can improve angiostatic cancer treatment efficacy and enable the reduction of side effects and drug resistance. Combining drugs is non-trivial due to the high number of possibilities. We applied a feedback system control (FSC) technique with a population-based stochastic search algorithm to navigate through the large parametric space of nine angiostatic drugs at four concentrations to identify optimal low-dose drug combinations. This involved an iterative approach of in vitro testing of endothelial cell viability and algorithm-based analysis. The optimal synergistic drug combination, containing erlotinib, BEZ-235 and RAPTA-C, was reached in a small number of iterations. The final drug combinations showed enhanced endothelial cell specificity, synergistically inhibited proliferation (p < 0.001) but not migration of endothelial cells, and caused increased numbers of endothelial cells to undergo apoptosis (p < 0.01). Successful translation of this drug combination was achieved in two preclinical in vivo tumor models. Tumor growth was inhibited synergistically and significantly (p < 0.05 and p < 0.01, respectively) using reduced drug doses as compared to optimal single-drug concentrations. Under the applied conditions, single-drug monotherapies had no or negligible activity in these models. We suggest that FSC can be used for rapid identification of effective, reduced-dose, multi-drug combinations for the treatment of cancer and other diseases.
Abstract:
The freight wagons used on railways are ageing very rapidly; this applies to Russia, Finland and Sweden as well as to Europe more widely. In Russia and Europe, a large number of wagons are in use that have already exceeded their recommended service life. They are nevertheless used for transport, because not enough new wagons are available to replace them. The newest wagons have usually been acquired by wagon leasing companies or by new railway operators - this applies particularly to Russia, where wagon leasing has become a very popular alternative. Forecasts indicate that the wagon shortage will grow at least until 2010, and if the popularity of rail as a freight transport mode increases, the strengthening demand for wagons will continue considerably longer. The state of the European and Russian wagon fleets is also reflected in the problems of the engineering industry serving them - in general, European companies in this sector are weakly profitable and their revenues are hardly growing; Russian and Ukrainian companies have been in the same situation, although in the very recent years the situation has improved for some of them. When the revenue, profit and shareholder value of companies in these regions are compared with their US competitors, the performance of the latter is seen to be considerably better, and these companies are also able to pay dividends to their owners. The purpose of this study was to develop a new type of freight wagon for traffic between Finland and Russia, and possibly also China. The wagon type should be able to serve multiple uses, carrying both raw materials and containers, and thus balance the transport weight problem caused by the different cargo flows. As the basis for the development work we used a database of over 1,000 Russian wagon types, from which we selected the wagons most suitable for container transport using the Data Envelopment Analysis method (about 40 wagon types were examined more closely), leaving as little empty space in the train as possible while still being able to carry the selected container load. Since load-carrying capacity is rarely a problem in Russian wagons, the comparison can be made on the basis of the length and total weight of the freight train. After simulating a wagon type suitable for combined transport in a transport network found in practice (e.g. raw timber to Finland or China and containers back towards Russia), we found that a shorter wagon length offers a cost advantage, especially in raw material transport, but also because the number of stops at border crossings may be reduced. A shorter wagon type is also more flexible with respect to different container lengths (the use of the 40-foot container has become more common in recent years). At the end of the work we propose a network-based approach for the production of the new wagon type, in which part of the wagon would be manufactured in Finland and part in Russia and/or Ukraine. The wagon type should be registered in Russia, since it can then be used in traffic between Finland and Russia and, where applicable, between Russia and China.
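For readers unfamiliar with the screening step, the sketch below shows an input-oriented CCR Data Envelopment Analysis score computed with a small linear program per wagon type; the inputs (length, tare weight), the output (container capacity in TEU) and all numbers are hypothetical, not the database used in the study.

```python
"""Illustrative input-oriented CCR DEA efficiency calculation.

Each wagon type (DMU) is scored by solving a small linear program.
The inputs, outputs and numbers below are made up for a handful of
hypothetical wagon types, not the database used in the study.
"""
import numpy as np
from scipy.optimize import linprog

# rows = wagon types; inputs = [length m, tare t]; output = [capacity, TEU]
X = np.array([[19.6, 22.0], [14.6, 20.5], [25.5, 27.0], [13.9, 18.0]])  # inputs
Y = np.array([[3.0], [2.0], [4.0], [2.0]])                              # outputs
n, n_in, n_out = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o (1.0 means efficient)."""
    # decision variables: theta, lambda_1..lambda_n
    c = np.concatenate(([1.0], np.zeros(n)))
    A_ub, b_ub = [], []
    for i in range(n_in):            # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(n_out):           # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun

for o in range(n):
    print(f"wagon type {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```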
Abstract:
In Spain, inspections of pleasure boats can be carried out by collaborating inspection entities, which must be authorized by the Maritime Administration. This authorization allows them to perform effective inspections and technical controls of recreational craft. Recreational craft are subject to surveys based on the registration list and on the material used in the hull. In addition, the safety equipment required on a recreational boat depends on the distance it is authorized to navigate. Drawing on data obtained from inspections of recreational craft, this paper analyzes information about hulls in dry and afloat conditions, about rescue and safety equipment, and about other nautical equipment, and examines how the different verifications performed during inspections can be improved. All this information points to several aspects relevant to the optimization of the inspection process, the ultimate aim being to increase efficiency and effectiveness and to ensure greater safety of recreational craft.
Abstract:
Objective: The present study is aimed at contributing to identify the most appropriate OSEM parameters to generate myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The present study included 28 adult patients who underwent myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, who took their diagnostic value into consideration and indicated the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all classes of body mass index; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, supporting the diagnosis and, consequently, appropriate and effective treatment for the patient.
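For context, OSEM applies the standard EM update over subsets of projections; with a_{ij} the system matrix, y_i the measured counts and S_b the b-th subset, one sub-iteration reads (standard form, not specific to this study):

```latex
% OSEM update for voxel j using projection subset S_b (standard form)
\lambda_j^{(k,\,b+1)} \;=\;
\frac{\lambda_j^{(k,\,b)}}{\sum_{i \in S_b} a_{ij}}
\sum_{i \in S_b} a_{ij}\,
\frac{y_i}{\sum_{j'} a_{ij'}\,\lambda_{j'}^{(k,\,b)}}.
```

One full iteration cycles through all subsets, so the product of the numbers of iterations and subsets largely determines how far the reconstruction is driven toward the noisier maximum-likelihood solution, which is why the 4 x 4 and 6 x 4 combinations behave differently.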
Abstract:
The threats caused by global warming motivate different stakeholders to deal with and control them. This Master's thesis focuses on analyzing carbon trade permits in an optimization framework. The studied model determines the optimal emission and uncertainty levels that minimize the total cost. Research questions are formulated and answered using different optimization tools. The model is developed and calibrated using available, consistent data in the area of carbon emission technology and control; data and some basic modeling assumptions were extracted from reports and the existing literature. The data collected from the countries in the Kyoto treaty are used to estimate the cost functions. The theory and methods of constrained optimization are briefly presented. A two-level optimization problem (individual and between the parties) is analyzed using several optimization methods. The combined cost optimization between the parties leads to a multivariate model and calls for advanced techniques; the Lagrangian approach, Sequential Quadratic Programming and the Differential Evolution (DE) algorithm are referred to. The role of inherent measurement uncertainty in the monitoring of emissions is discussed, and we briefly investigate an approach in which emission uncertainty is described in a stochastic framework. MATLAB software has been used to provide visualizations, including the relationship between decision variables and objective function values. Interpretations in the context of carbon trading are briefly presented. Suggestions for future work are given in stochastic modeling, emission trading and the coupled analysis of energy prices and carbon permits.
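The combined cost optimization between parties can be illustrated with a minimal two-party sketch: emission reductions are chosen to meet a joint target at minimum total abatement cost, which equalizes marginal costs - the situation that permit trading is meant to reproduce. The quadratic cost functions and the target below are assumptions for illustration, not the calibrated model of the thesis (which used MATLAB); the sketch uses Python.

```python
"""Illustrative combined cost minimization for two emitting parties.

Choose emission reductions r1, r2 that meet a joint reduction target
at minimum total abatement cost.  The quadratic cost functions and the
target are illustrative assumptions.
"""
import numpy as np
from scipy.optimize import minimize

a = np.array([2.0, 5.0])      # cost coefficients of the two parties
R_target = 10.0               # required joint emission reduction

def total_cost(r):
    """Sum of the parties' (assumed quadratic) abatement costs."""
    return float(np.sum(a * r ** 2))

res = minimize(total_cost,
               x0=[R_target / 2, R_target / 2],
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq",
                             "fun": lambda r: np.sum(r) - R_target}],
               method="SLSQP")

r1, r2 = res.x
print(f"optimal reductions: r1={r1:.2f}, r2={r2:.2f}, total cost={res.fun:.1f}")
print(f"marginal costs (should be equal): {2*a[0]*r1:.2f} vs {2*a[1]*r2:.2f}")
```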
Abstract:
Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, onto non-silicon material substrates with high accuracy, superior precision and high throughput. Microchannels are typical features used in medical devices for dosing medication into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling have been evaluated by conducting experiments, regression modeling and response surface methodology. In the machining experiments using micromilling, arrays of microchannels are fabricated on aluminium and titanium plates, and the feature size and accuracy (width and depth) and surface roughness are measured. Multicriteria decision making for material and process parameter selection for a desired accuracy is investigated using the particle swarm optimization (PSO) method, an evolutionary computation method related to genetic algorithms (GA). Appropriate regression models are utilized within the PSO, and optimum selection of micromilling parameters for microchannel feature accuracy and surface roughness is performed. An analysis of optimal micromachining parameters in the decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations, and it can be expanded to other applications.
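A minimal particle swarm optimization sketch in the spirit of the study is shown below: a fitted regression surrogate (here a made-up quadratic response surface for surface roughness over two scaled parameters) is minimized by a small swarm. The surrogate coefficients and the PSO constants are illustrative assumptions, not the paper's models.

```python
"""Illustrative particle swarm optimization over a regression surrogate.

A small swarm minimizes an assumed regression model of surface
roughness Ra(speed, feed), both parameters scaled to [0, 1].
All coefficients and PSO constants are illustrative assumptions.
"""
import numpy as np

rng = np.random.default_rng(2)

def roughness(x):
    """Assumed regression surrogate Ra(speed, feed); minimum near (0.7, 0.3)."""
    s, f = x[..., 0], x[..., 1]
    return 0.8 + 1.5 * (s - 0.7) ** 2 + 2.0 * (f - 0.3) ** 2 + 0.5 * s * f

# PSO parameters (typical textbook values)
n_particles, n_iter = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5

pos = rng.uniform(0.0, 1.0, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = roughness(pbest)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)          # keep parameters in bounds
    vals = roughness(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best scaled parameters: {gbest}, predicted Ra: {roughness(gbest):.3f}")
```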
Abstract:
The uncertainty of any analytical determination depends on analysis and sampling. Uncertainty arising from sampling is usually not controlled, and methods for its evaluation are still little known. Pierre Gy’s sampling theory is currently the most complete theory about sampling, and it also takes the design of the sampling equipment into account. Guides dealing with the practical issues of sampling also exist, published by international organizations such as EURACHEM, IUPAC (International Union of Pure and Applied Chemistry) and ISO (International Organization for Standardization). In this work Gy’s sampling theory was applied to several cases, including the analysis of chromite concentration estimated on SEM (Scanning Electron Microscope) images and the estimation of the total uncertainty of a drug dissolution procedure. The results clearly show that Gy’s sampling theory can be utilized in both of the above-mentioned cases and that the uncertainties achieved are reliable. Variographic experiments introduced in Gy’s sampling theory are beneficially applied in analyzing the uncertainty of auto-correlated data sets such as industrial process data and environmental discharges. The periodic behaviour of these kinds of processes can be observed by variographic analysis as well as with fast Fourier transformation and auto-correlation functions. With variographic analysis, the uncertainties are estimated as a function of the sampling interval. This is advantageous when environmental data or process data are analyzed, as it can easily be estimated how the sampling interval affects the overall uncertainty. If the sampling frequency is too high, unnecessary resources will be used; on the other hand, if the frequency is too low, the uncertainty of the determination may be unacceptably high. Variographic methods can also be utilized to estimate the uncertainty of spectral data produced by modern instruments. Since spectral data are multivariate, methods such as Principal Component Analysis (PCA) are needed when the data are analyzed. Optimization of a sampling plan increases the reliability of the analytical process, which may ultimately have beneficial effects on the economics of chemical analysis.
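The variographic calculation referred to above is simple to reproduce: the experimental variogram V(j) is half the mean squared difference between observations j sampling intervals apart. A minimal sketch on a synthetic periodic process series follows; the data are for illustration only.

```python
"""Illustrative variographic analysis of an autocorrelated process series.

Experimental variogram V(j) = 1/(2(N-j)) * sum_i (x[i+j] - x[i])^2,
computed as a function of the lag j (the sampling interval).
The synthetic periodic-plus-noise series is only for illustration.
"""
import numpy as np

rng = np.random.default_rng(3)

# Synthetic process data: periodic variation plus random noise
t = np.arange(500)
x = 10.0 + 1.5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

def variogram(series, max_lag):
    """Experimental variogram V(j) for lags j = 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    v = np.array([np.mean((series[j:] - series[:-j]) ** 2) / 2.0 for j in lags])
    return lags, v

lags, v = variogram(x, 60)
for j in (1, 6, 12, 24, 48):
    print(f"lag {j:3d}: V = {v[j - 1]:.3f}")
```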
Abstract:
Methane combustion was studied using the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science because the computational effort can be notably reduced. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in the chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against the Simplex and Levenberg-Marquardt methods, the most widely used methods for such optimization problems.
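The inverse problem can be illustrated with a minimal sketch in which rate constants are recovered from noisy [CO] data by Levenberg-Marquardt least squares; the pseudo-first-order two-step model (fuel -> CO -> CO2) and the synthetic data are placeholders, not the Westbrook and Dryer mechanism itself or the paper's data.

```python
"""Illustrative Levenberg-Marquardt recovery of rate constants from [CO] data.

Rate constants are estimated by minimizing the misfit between a toy
kinetic model and noisy synthetic measurements.  The pseudo-first-order
two-step model and the data are placeholders only.
"""
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def co_profile(k, t):
    """[CO](t) from a toy two-step model: d[F]/dt=-k1[F], d[CO]/dt=k1[F]-k2[CO]."""
    k1, k2 = k
    def rhs(_, y):
        fuel, co = y
        return [-k1 * fuel, k1 * fuel - k2 * co]
    sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-8)
    return sol.y[1]

t = np.linspace(0.0, 5.0, 40)
k_true = np.array([2.0, 0.8])
rng = np.random.default_rng(4)
co_measured = co_profile(k_true, t) + rng.normal(0, 0.01, t.size)  # noisy data

# Levenberg-Marquardt fit of the rate constants
fit = least_squares(lambda k: co_profile(k, t) - co_measured,
                    x0=[1.0, 1.0], method="lm")
print("true rate constants     :", k_true)
print("estimated rate constants:", fit.x)
```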