957 results for Optimization algorithms
Abstract:
Computed tomography (CT) is a modality of choice for the study of the musculoskeletal system for various indications including the study of bone, calcifications, internal derangements of joints (with CT arthrography), as well as periprosthetic complications. However, CT remains intrinsically limited by the fact that it exposes patients to ionizing radiation. Scanning protocols need to be optimized to achieve diagnostic image quality at the lowest radiation dose possible. In this optimization process, the radiologist needs to be familiar with the parameters used to quantify radiation dose and image quality. CT imaging of the musculoskeletal system has certain specificities including the focus on high-contrast objects (e.g., in CT of bone or CT arthrography). These characteristics need to be taken into account when defining a strategy to optimize dose and when choosing the best combination of scanning parameters. In the first part of this review, we present the parameters used for the evaluation and quantification of radiation dose and image quality. In the second part, we discuss different strategies to optimize radiation dose and image quality at CT, with a focus on the musculoskeletal system and the use of novel iterative reconstruction techniques.
Abstract:
Mapping the microstructure properties of local tissues in the brain is crucial to understanding any pathological condition from a biological perspective. Most existing techniques for estimating the microstructure of the white matter assume a single axon orientation, whereas numerous regions of the brain actually present a fiber-crossing configuration. The purpose of the present study is to extend a recent convex optimization framework to recover microstructure parameters in regions with multiple fibers.
Abstract:
In diffusion MRI, traditional tractography algorithms do not recover truly quantitative tractograms and the structural connectivity has to be estimated indirectly by counting the number of fiber tracts or averaging scalar maps along them. Recently, global and efficient methods have emerged to estimate more quantitative tractograms by combining tractography with local models for the diffusion signal, like the Convex Optimization Modeling for Microstructure Informed Tractography (COMMIT) framework. In this abstract, we show the importance of using both (i) proper multi-compartment diffusion models and (ii) adequate multi-shell acquisitions, in order to evaluate the accuracy and the biological plausibility of the tractograms.
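The core idea behind COMMIT-style quantitative tractography is that the measured diffusion signal is modeled as a linear combination of per-streamline contributions, and the non-negative streamline weights are recovered by convex optimization. The toy operator, data, and projected-gradient solver below are illustrative choices for this non-negative least-squares formulation, not the actual COMMIT implementation:

```python
def nnls_projected_gradient(A, y, steps=5000, lr=0.01):
    """Solve min_x ||A x - y||^2 subject to x >= 0 by projected gradient descent."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient g = 2 A^T r
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by projection onto the non-negative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Toy example: two "streamlines" whose columns of A mix to explain the signal y,
# which here was generated with true weights (2, 3).
A = [[1.0, 0.0],
     [1.0, 1.0],
     [0.0, 1.0]]
y = [2.0, 5.0, 3.0]
x = nnls_projected_gradient(A, y)
```

The non-negativity constraint is what makes the recovered weights interpretable as streamline contributions; in practice the columns of A would come from the chosen multi-compartment diffusion model evaluated along each candidate tract.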
Abstract:
Objective: The present study aims to contribute to identifying the most appropriate OSEM parameters for generating myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The study included 28 adult patients who underwent myocardial perfusion imaging at a public hospital. The OSEM method was used for image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, who considered their diagnostic value and indicated the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all body mass index classes; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in generating images with better diagnostic quality, supporting the diagnosis and, consequently, appropriate and effective treatment for the patient.
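The iterations-and-subsets trade-off studied above comes from the structure of the OSEM algorithm itself: each iteration sweeps an ordered sequence of measurement subsets, applying the multiplicative MLEM update restricted to one subset at a time. A minimal sketch on a toy linear system (the 4x2 system matrix and data below are illustrative, not clinical):

```python
def osem(A, y, subsets, n_iter):
    """Ordered-subsets EM: one iteration applies the MLEM multiplicative
    update once per subset, so n_iter * len(subsets) updates in total."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                    # positive initial estimate
    for _ in range(n_iter):
        for S in subsets:
            # forward projection over this subset's measurements
            fp = {i: sum(A[i][j] * x[j] for j in range(n)) for i in S}
            for j in range(n):
                sens = sum(A[i][j] for i in S)       # subset sensitivity
                if sens > 0:
                    backproj = sum(A[i][j] * y[i] / fp[i] for i in S if fp[i] > 0)
                    x[j] *= backproj / sens          # multiplicative update
    return x

# Toy consistent data generated from a known activity distribution (3, 1).
A = [[1.0, 0.2],
     [0.2, 1.0],
     [0.5, 0.5],
     [0.3, 0.7]]
x_true = [3.0, 1.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(4)]
x_rec = osem(A, y, subsets=[[0, 2], [1, 3]], n_iter=100)
```

More subsets accelerate early convergence (fewer iterations needed for the same effective number of updates), while too many iterations amplify noise; this is why the study compares combinations such as 4x4 and 6x4 rather than a single parameter.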
Abstract:
In this thesis, programmatic, application-layer means of improving energy efficiency in the VoIP application domain are studied. The work concentrates on optimizations suitable for VoIP implementations using SIP and IEEE 802.11 technologies. Energy-saving optimizations can affect perceived call quality, so energy-saving means are studied together with the factors affecting perceived call quality. The thesis gives a general view of the topic. Based on theory, adaptive optimization schemes for dynamically controlling the application's operation are proposed. A runtime quality model, capable of being integrated into the optimization schemes, is developed for VoIP call quality estimation. Based on the proposed optimization schemes, power consumption measurements were performed to determine the achievable advantages. The measurement results show that a reduction in power consumption can be achieved with the help of adaptive optimization schemes.
Abstract:
The Russian and Baltic electricity markets are in the process of reform and development toward a competitive and transparent market. The Nordic market is also undergoing changes on the way to market integration. Old structures and practices have expired, while new laws and rules are coming into force. This master's thesis describes the structure and functioning of wholesale electricity markets and the cross-border connections between countries. Additionally, methods of cross-border trading using different methods of capacity allocation are described. The main goal of the thesis is to study the current situation in different electricity markets and observe the changes coming into force, as well as to forecast capacity and electricity balances, in order to optimize short-term power trading between countries and estimate the possible profit for the company.
Abstract:
Network virtualisation is considerably gaining attention as a solution to ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
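The "evaluative feedback" loop described above can be sketched as independent value-learning agents, one per substrate node, each choosing a discrete allocation level and learning from a reward that penalises both unmet demand and idle reservation. The demands, allocation levels, and reward shape below are toy assumptions for illustration, not the paper's actual simulation setup:

```python
import random

def train_agents(demands, levels, episodes=2000, eps=0.1, alpha=0.1, seed=0):
    """One independent learning agent per substrate node: each maintains
    action values over discrete allocation levels and updates them from
    evaluative feedback, choosing actions epsilon-greedily."""
    rng = random.Random(seed)
    q = [[0.0] * len(levels) for _ in demands]        # action values per agent
    for _ in range(episodes):
        for agent, demand in enumerate(demands):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(levels))
            else:
                a = max(range(len(levels)), key=lambda k: q[agent][k])
            alloc = levels[a]
            # penalise unmet demand heavily, idle reservation lightly
            reward = -(10.0 * max(0.0, demand - alloc)
                       + 1.0 * max(0.0, alloc - demand))
            q[agent][a] += alpha * (reward - q[agent][a])
    # greedy policy after training: one allocation level per node
    return [levels[max(range(len(levels)), key=lambda k: q[a][k])]
            for a in range(len(demands))]

# Two substrate nodes with different virtual-node demands.
policy = train_agents(demands=[0.3, 0.7], levels=[0.25, 0.5, 0.75, 1.0])
```

Each agent learns to reserve the smallest level that still covers its demand, which mirrors the paper's goal of keeping only the required resources reserved while protecting quality of service.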
Abstract:
In this undergraduate thesis (TFG), the results of a comparison between different methods of obtaining a recombinant protein, by orthologous and heterologous expression, are presented. This study will help us identify the best way to express and purify a recombinant protein to be used in biotechnology applications. In the first part of the project, the goal was to find the best expression and purification system for obtaining the recombinant protein of interest. To achieve this objective, expression systems in bacteria and in yeast were designed. The DNA was cloned into two different expression vectors to create a fusion protein with two different tags, and expression of the protein was induced by IPTG or glucose. Additionally, in yeast, two promoters were used to express the protein: the one corresponding to the protein itself (orthologous expression) and the ENO2 promoter (heterologous expression). The protein of interest is an NAD-dependent enzyme, so its specific activity was subsequently evaluated by coenzyme conversion. The results of the TFG suggest that, comparing the model organisms, bacteria are more efficient than yeast because the quantity of protein obtained is higher and better purified. Regarding yeast, comparing the two expression mechanisms that were designed, heterologous expression works much better than orthologous expression, so if we want to use yeast as the expression model for the protein of interest, ENO2 will be the best option. Finally, the enzymatic assays, performed to compare the effectiveness of the different expression mechanisms with respect to protein activity, revealed that the protein purified in yeast had more activity in converting the NAD coenzyme.
Abstract:
In this master's thesis, a four-stage 1 MWe steam turbine prototype was optimized using evolutionary algorithms, and the cost benefits obtained from the optimization were studied. The DE algorithm was used for the optimization. The optimization was made to work, but due to the nature of the calculation application used (models based on semi-empirical equations), the accuracy of the optimization compared with verification modelling performed with CFD was somewhat lower than hoped. This inaccuracy in the results would hardly have been avoidable, since the problem stemmed from the initial assumptions of the semi-empirical calculation models and from uncertainty about the absolute ranges of validity of the fitted correlations. For the success of the optimization, however, such algebraic modelling was necessary, because CFD calculations, for example, could not possibly have been performed at every optimization step. During the optimization, problems nevertheless arose with the sufficiency of computing power and with finding a suitable penalty model that would keep the algorithm within the mathematically permitted region without restricting the progress of the optimization too much. The remaining problems were due to the novelty of the application and to precision issues in handling the ranges of validity of the fitted correlations. Although the accuracy of the optimization results did not fully meet the target, they nevertheless had a favourable guiding effect on machine design. The optimization performed with the DE algorithm yielded about 2.2% more power from the turbine, which translates into a cost benefit of about 15,000 € per machine. This is a very significant per-machine cost benefit for the company. In the end, it can perhaps be said that evolutionary algorithms were not at their best in optimizing a prototype product. Evolutionary algorithms hold enormous potential for the optimization of technical devices, but they require a mature application area that is already known extremely well, or that is simple and can be computed exhaustively.
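The DE algorithm used in the thesis follows the standard differential evolution scheme: mutate with scaled difference vectors, apply binomial crossover, and keep the trial vector only if it is no worse. The sketch below is generic textbook DE/rand/1/bin applied to a toy quadratic surrogate (minimum at (1, 2)), not the thesis's turbine model:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=1):
    """Basic DE/rand/1/bin: difference-vector mutation, binomial crossover,
    greedy one-to-one replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct population members, all different from i
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            jr = rng.randrange(dim)            # at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)    # clip to the feasible box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                  # greedy replacement
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda k: cost[k])
    return pop[best], cost[best]

x_best, f_best = differential_evolution(
    lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2, [(-5, 5), (-5, 5)])
```

Because DE only needs objective-function values, it works with black-box evaluators such as the semi-empirical turbine model; the cost of that flexibility is many function evaluations per generation, which is why per-step CFD runs were infeasible.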
Abstract:
In the literature on housing market areas, different approaches to defining them can be found, for example using travel-to-work areas and, more recently, migration data. Here we propose a simple exercise to shed light on which approach performs better. Using regional data from Catalonia, Spain, we have computed housing market areas with both commuting data and migration data. In order to decide which procedure performs better, we have looked at the uniformity of prices within areas. The main finding is that commuting-based algorithms produce more homogeneous areas in terms of housing prices.
Abstract:
The threats posed by global warming motivate different stakeholders to deal with and control them. This master's thesis focuses on analyzing carbon trade permits in an optimization framework. The studied model determines the optimal emission and uncertainty levels that minimize the total cost. Research questions are formulated and answered using different optimization tools. The model is developed and calibrated using the available consistent data in the area of carbon emission technology and control. Data and some basic modeling assumptions were extracted from reports and the existing literature. The data collected from the countries in the Kyoto treaty are used to estimate the cost functions. The theory and methods of constrained optimization are briefly presented. A two-level optimization problem (within and between the parties) is analyzed using several optimization methods. The combined cost optimization between the parties leads to a multivariate model and calls for advanced techniques; the Lagrangian method, Sequential Quadratic Programming, and the Differential Evolution (DE) algorithm are referred to. The role of inherent measurement uncertainty in the monitoring of emissions is discussed. We briefly investigate an approach in which emission uncertainty would be described in a stochastic framework. MATLAB software has been used to provide visualizations, including the relationship between decision variables and objective function values. Interpretations in the context of carbon trading are briefly presented. Suggestions for future work are given in stochastic modeling, emission trading, and the coupled analysis of energy prices and carbon permits.
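The Lagrangian treatment of combined cost optimization between parties has a well-known structure: minimizing total abatement cost subject to an overall emission cap forces the parties' marginal abatement costs to equalize at the shadow price of the cap. The sketch below illustrates this equi-marginal principle with assumed quadratic cost functions and made-up baseline numbers; it is not the thesis's calibrated model:

```python
def optimal_allocation(bases, coeffs, cap):
    """Minimize sum_i a_i * (base_i - e_i)^2 subject to sum_i e_i = cap.
    The first-order (KKT) conditions give equal marginal abatement costs
    2 * a_i * (base_i - e_i) = lam for all parties; solve for the shadow
    price lam by bisection on the cap constraint."""
    def total_emissions(lam):
        # each party abates until its marginal cost equals lam
        return sum(b - lam / (2.0 * a) for b, a in zip(bases, coeffs))

    lo, hi = 0.0, 1e6
    for _ in range(200):              # bisection on the shadow price
        mid = 0.5 * (lo + hi)
        if total_emissions(mid) > cap:
            lo = mid                  # need a higher price to abate more
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [b - lam / (2.0 * a) for b, a in zip(bases, coeffs)], lam

# Hypothetical two-party example: baseline emissions 10 and 8, a joint cap of 12.
alloc, price = optimal_allocation(bases=[10.0, 8.0], coeffs=[1.0, 2.0], cap=12.0)
```

Here the shadow price is also the equilibrium permit price: any party whose marginal abatement cost exceeds it buys permits, and any party below it sells, which is the economic rationale for trading between the parties.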