851 results for multivariate optimization
Abstract:
Computed tomography (CT) is a modality of choice for the study of the musculoskeletal system for various indications, including the study of bone, calcifications, internal derangements of joints (with CT arthrography), and periprosthetic complications. However, CT remains intrinsically limited by the fact that it exposes patients to ionizing radiation. Scanning protocols therefore need to be optimized to achieve diagnostic image quality at the lowest possible radiation dose. In this optimization process, the radiologist needs to be familiar with the parameters used to quantify radiation dose and image quality. CT imaging of the musculoskeletal system has certain specificities, including the focus on high-contrast objects (e.g., in CT of bone or CT arthrography). These characteristics need to be taken into account when defining a strategy to optimize dose and when choosing the best combination of scanning parameters. In the first part of this review, we present the parameters used for the evaluation and quantification of radiation dose and image quality. In the second part, we discuss different strategies to optimize radiation dose and image quality in CT, with a focus on the musculoskeletal system and the use of novel iterative reconstruction techniques.
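The dose quantities such a review typically covers can be illustrated with a minimal sketch: the dose-length product (DLP) integrates CTDIvol over the scan length, and an effective-dose estimate is obtained from the DLP with a region-dependent k-factor. The k value below is illustrative only, not a clinical reference.

```python
# Minimal sketch of common CT dose quantities (illustrative values only).

def dlp(ctdi_vol_mgy, scan_length_cm):
    """Dose-length product: CTDIvol (mGy) integrated over scan length (cm)."""
    return ctdi_vol_mgy * scan_length_cm

def effective_dose(dlp_mgy_cm, k_msv_per_mgy_cm):
    """Effective dose (mSv) estimated from DLP with a region-specific k-factor."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

d = dlp(ctdi_vol_mgy=10.0, scan_length_cm=15.0)   # 150.0 mGy*cm
e = effective_dose(d, k_msv_per_mgy_cm=0.0008)    # illustrative k for an extremity scan
print(d, round(e, 3))
```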
Resumo:
Mapping the microstructure properties of the local tissues in the brain is crucial to understand any pathological condition from a biological perspective. Most of the existing techniques to estimate the microstructure of the white matter assume a single axon orientation whereas numerous regions of the brain actually present a fiber-crossing configuration. The purpose of the present study is to extend a recent convex optimization framework to recover microstructure parameters in regions with multiple fibers.
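The idea behind convex microstructure fitting in crossing-fiber voxels can be sketched as follows: the signal is modeled as a non-negative combination of per-orientation response atoms, and the weights are recovered by a convex fit (here a projected-gradient non-negative least squares). The dictionary and signal below are synthetic stand-ins, not real diffusion MRI data or the study's actual framework.

```python
import numpy as np

def nnls_pg(A, y, iters=2000, lr=None):
    """Minimize ||A w - y||^2 subject to w >= 0 via projected gradient."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step 1/L, L = sigma_max(A)^2
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = np.maximum(0.0, w - lr * A.T @ (A @ w - y))  # gradient step + projection
    return w

rng = np.random.default_rng(0)
A = rng.random((30, 5))                        # 5 candidate fiber-response atoms
w_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])   # two crossing fiber populations
y = A @ w_true                                 # noiseless synthetic signal
w = nnls_pg(A, y)
print(np.round(w, 2))
```

Because the problem is convex, the recovered weights coincide with the generating ones when the dictionary has full column rank and the signal is noiseless.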
Abstract:
Objective: The present study aims to help identify the most appropriate OSEM parameters for generating myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was used for image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, who considered their diagnostic value and indicated the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all body mass index classes; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in generating images with better diagnostic quality, supporting the diagnosis and, consequently, appropriate and effective treatment for the patient.
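In OSEM, each iteration applies one image update per subset, so reconstructions are often compared through the effective number of updates, iterations × subsets. The grid below is illustrative; the six exact combinations evaluated in the study are not reproduced here.

```python
# Sketch: comparing OSEM parameter combinations by effective update count.

def effective_updates(iterations, subsets):
    """One update per subset per iteration."""
    return iterations * subsets

grid = [(2, 4), (4, 4), (6, 4), (4, 8), (6, 8), (8, 8)]  # illustrative grid
for it, sub in grid:
    print(f"{it} iterations x {sub} subsets -> {effective_updates(it, sub)} updates")
```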
Abstract:
This thesis studies programmatic, application-layer means of improving energy efficiency in the VoIP application domain. The work concentrates on optimizations suitable for VoIP implementations using SIP and IEEE 802.11 technologies. Energy-saving optimizations can affect perceived call quality, so energy-saving means are studied together with the factors affecting perceived call quality. The thesis gives a general view of the topic. Based on theory, adaptive optimization schemes for dynamically controlling the application's operation are proposed. A runtime quality model, capable of being integrated into the optimization schemes, is developed for estimating VoIP call quality. Based on the proposed optimization schemes, power consumption measurements are carried out to determine the achievable gains. The measurement results show that a reduction in power consumption can be achieved with the help of adaptive optimization schemes.
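One common choice for a runtime call-quality estimate of this kind is the ITU-T G.107 E-model, which maps a transmission rating R to an estimated MOS. This is a minimal sketch of that mapping only; the thesis's actual quality model and the computation of R itself (from codec, delay, and loss impairments) are not reproduced here.

```python
# E-model (ITU-T G.107) mapping from rating R to estimated MOS.

def r_to_mos(r):
    """Map E-model rating R to estimated MOS, clamped to [1.0, 4.5]."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

print(round(r_to_mos(93.2), 2))  # default R of the E-model -> 4.41
```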
Abstract:
The Russian and Baltic electricity markets are being reformed and developed toward a competitive and transparent market. The Nordic market is also undergoing changes on the way to market integration. Old structures and practices have expired, while new laws and rules have come into force. This master's thesis describes the structure and functioning of wholesale electricity markets and the cross-border connections between countries. It also presents methods of cross-border trading under different capacity-allocation mechanisms. The main goal of the thesis is to study the current situation in the different electricity markets and the changes coming into force, as well as the forecast capacity and electricity balances, in order to optimize short-term power trading between countries and estimate the possible profit for the company.
Abstract:
This thesis (TFG) presents a comparison between different methods of obtaining a recombinant protein by orthologous and heterologous expression. The study helps identify the best way to express and purify a recombinant protein intended for biotechnology applications. In the first part of the project, the goal was to find the best expression and purification system for the recombinant protein of interest. To this end, expression systems were designed in bacteria and in yeast. The DNA was cloned into two different expression vectors to create fusion proteins with two different tags, and protein expression was induced by IPTG or glucose. Additionally, in yeast, two promoters were used to express the protein: the protein's own promoter (orthologous expression) and the ENO2 promoter (heterologous expression). Since the protein of interest is a NAD-dependent enzyme, its specific activity was subsequently evaluated by coenzyme conversion. The results of the TFG suggest that, comparing the model organisms, bacteria are more efficient than yeast because the quantity of protein obtained is higher and the purification better. Regarding yeast, comparing the two expression mechanisms that were designed, heterologous expression works much better than orthologous expression, so if yeast is used as the expression model for the protein of interest, ENO2 is the best option. Finally, the enzymatic assays, performed to compare the effectiveness of the different expression mechanisms with respect to protein activity, revealed that the protein purified from yeast had more activity in converting the NAD coenzyme.
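The specific-activity calculation behind a NAD(H)-coupled assay can be sketched from the Beer-Lambert law: NADH absorbs at 340 nm (molar absorptivity 6220 M⁻¹cm⁻¹), so the rate of absorbance change gives the rate of coenzyme conversion. The volumes and protein amounts below are illustrative, not the TFG's actual values.

```python
EPSILON_NADH = 6220.0  # M^-1 cm^-1 at 340 nm

def specific_activity(dA_per_min, path_cm, assay_vol_ml, protein_mg):
    """Return U/mg, where 1 U = 1 umol of NAD(H) converted per minute."""
    rate_M_per_min = dA_per_min / (EPSILON_NADH * path_cm)       # mol/L/min
    umol_per_min = rate_M_per_min * 1e6 * (assay_vol_ml / 1000.0)  # umol/min in the cuvette
    return umol_per_min / protein_mg

# 0.311 A/min in a 1 mL assay, 1 cm path, 0.05 mg protein -> 1.0 U/mg
print(round(specific_activity(0.311, 1.0, 1.0, 0.05), 3))
```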
Abstract:
Neural networks are a set of mathematical methods and computer programs designed to simulate the information processing and knowledge acquisition of the human brain. In recent years their application in chemistry has increased significantly, owing to their suitability for modeling complex systems. The basic principles of two types of neural networks, multi-layer perceptrons and radial basis functions, are introduced, as well as a pruning approach to architecture optimization. Two analytical applications based on near-infrared spectroscopy are presented: the first for determining the nitrogen content in wheat leaves using multi-layer perceptron networks, and the second for determining BRIX in sugar cane juices using radial basis function networks.
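A radial-basis-function network of the kind used for such calibrations can be sketched as Gaussian units on fixed centers followed by a linear output layer fit by least squares. The data below are a synthetic nonlinear response, not NIR spectra, and the centers/width are illustrative choices.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF design matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                       # stand-in nonlinear response
centers = np.linspace(-1, 1, 10)[:, None]     # fixed RBF centers
Phi = rbf_features(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
pred = Phi @ w
print(round(float(np.abs(pred - y).max()), 3))
```

Fixing the centers makes the output-layer fit a linear least-squares problem, which is one reason RBF networks train quickly compared with fully trained multi-layer perceptrons.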
Abstract:
This paper is a translation of the IUPAC nomenclature document by K. Danzer and L. A. Currie (Pure Appl. Chem., 1998, 70(4), 993-1014). Its goal is to establish a uniform and meaningful approach to terminology (in Portuguese), notation, and formulation for calibration in analytical chemistry. In this first part, the general fundamentals of calibration are presented, for relationships of both qualitative and quantitative variables (relations between variables characterizing certain types of analytes and the measured function, on the one hand, and between variables characterizing the amount or concentration of the chemical species and the intensities of the measured signals, on the other). On this basis, the fundamentals of common single-component calibration (univariate calibration), which models the relationship y = f(x) between the signal intensities y and the amounts or concentrations x of the analyte under given conditions, are presented. Additional papers will deal with more extensive relationships between several intensities and analyte contents, namely multivariate calibration, and with optimization and experimental design.
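The univariate calibration model y = f(x) described above reduces, in its simplest linear form, to an ordinary least-squares line fit to standards, followed by inversion to predict an unknown concentration from its signal. The standards below are illustrative numbers, not from the document.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # standard concentrations
y = np.array([0.02, 0.52, 1.01, 2.03, 4.00])  # measured signal intensities

slope, intercept = np.polyfit(x, y, 1)        # fit y = slope*x + intercept
x_unknown = (1.50 - intercept) / slope        # invert calibration for a signal of 1.50
print(round(slope, 3), round(intercept, 3), round(float(x_unknown), 2))
```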
Abstract:
The last two decades have seen rapid change in the global economic and financial situation; economic conditions in many small and large underdeveloped countries started to improve, and they became recognized as emerging markets. This led to growth in global investments in these countries, partly spurred by expectations of higher returns, favorable risk-return opportunities, and better diversification alternatives for global investors. This process, however, has not been without problems, and it has emphasized the need for more information on these markets. In particular, the liberalization of financial markets around the world, the globalization of trade and companies, the recent formation of economic and regional blocks, and the rapid development of underdeveloped countries during the last two decades have brought a major challenge to the financial world and researchers alike. This doctoral dissertation studies one of the largest emerging markets, namely Russia. The reasons the Russian equity market is worth investigating include, among other factors, its sheer size, its rapid and robust economic growth since the turn of the millennium, its future prospects for international investors, and a number of important financial reforms implemented since the early 1990s. Another interesting feature of the Russian economy, which gives further motivation for studying this market, is Russia's 1998 financial crisis, considered one of the worst crises in recent times, affecting both developed and developing economies. Special attention is therefore paid to the 1998 financial crisis throughout this dissertation. The thesis covers the period from the birth of the modern Russian financial markets to the present day, with special attention to international linkages and the 1998 financial crisis. The study first identifies the risks associated with the Russian market and then deals with their pricing issues.
Finally, some insights into portfolio construction within the Russian market are presented. The first research paper of this dissertation considers the linkage of the Russian equity market to the world equity market by examining the international transmission of Russia's 1998 financial crisis using the GARCH-BEKK model proposed by Engle and Kroner. The empirical results show evidence of a direct linkage between the Russian equity market and the world market in terms of both returns and volatility. However, the weakness of the linkage suggests that the Russian equity market was only partially integrated into the world market, even though contagion can be clearly seen during the crisis period. The second and third papers, co-authored with Mika Vaihekoski, investigate whether global, local, and currency risks are priced in the Russian stock market from a US investor's point of view. Furthermore, the dynamics of these sources of risk are studied, i.e., whether the prices of the global and local risk factors are constant or time-varying. We utilize the multivariate GARCH-M framework of De Santis and Gérard (1998). Like them, we find the price of global market risk to be time-varying. Currency risk is also found to be priced and highly time-varying in the Russian market. Moreover, our results suggest that the Russian market is partially segmented and that local risk is also priced in the market. The model also implies that the biggest impact on the US market risk premium comes from the world risk component, whereas the Russian risk premium is on average driven mostly by the local and currency components. The purpose of the fourth paper is to examine the relationship between the Russian stock and bond markets. The objective is to examine whether the correlations between the two asset classes are time-varying, using multivariate conditional volatility models.
The Constant Conditional Correlation model of Bollerslev (1990), the Dynamic Conditional Correlation model of Engle (2002), and the asymmetric version of the Dynamic Conditional Correlation model of Cappiello et al. (2006) are used in the analysis. The empirical results do not support the assumption of constant conditional correlation: there is clear evidence of time-varying correlations between the Russian stock and bond markets, and both asset markets exhibit positive asymmetries. The implications of the results in this dissertation are useful for companies and international investors interested in investing in Russia. The results give useful insights to those involved in minimising or managing financial risk exposures, such as portfolio managers, international investors, risk analysts, and financial researchers. When portfolio managers aim to optimize the risk-return relationship, the results indicate that, at least in the case of Russia, one should account for the local market as well as currency risk when calculating the key inputs for the optimization. In addition, the pricing of exchange rate risk implies that exchange rate exposure is partly non-diversifiable and that investors are compensated for bearing this risk. Likewise, the international transmission of stock market volatility can profoundly influence corporate capital budgeting decisions, investors' investment decisions, and other business cycle variables. Finally, the weak integration of the Russian market and the low correlations between the Russian stock and bond markets offer good opportunities for international investors to diversify their portfolios.
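All the multivariate models above (BEKK, DCC, and its asymmetric variant) build on the univariate GARCH(1,1) variance recursion, in which conditional variance responds to last period's squared shock and last period's variance. The sketch below shows only that recursion with illustrative parameters and simulated returns; the full multivariate machinery and estimation are omitted.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variances h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = omega / (1.0 - alpha - beta)  # seed with the unconditional variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(2)
r = rng.standard_normal(500) * 0.01                  # simulated daily returns
h = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.90)
print(round(float(h.mean()) * 1e5, 2))               # mean conditional variance (x1e-5)
```

The persistence alpha + beta < 1 keeps the process covariance-stationary, which is what makes the unconditional-variance seed well defined.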
Abstract:
Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, onto non-silicon material substrates with high accuracy, superior precision, and high throughput. Microchannels are typical features of medical devices used for dosing medication into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling were evaluated through experiments, regression modeling, and response surface methodology. In the machining experiments, arrays of microchannels were fabricated by micromilling on aluminium and titanium plates, and the feature size and accuracy (width and depth) and surface roughness were measured. Multicriteria decision making for selecting material and process parameters for a desired accuracy was investigated using the particle swarm optimization (PSO) method, an evolutionary computation method inspired by genetic algorithms (GA). Appropriate regression models were utilized within the PSO, and optimum selection of micromilling parameters for microchannel feature accuracy and surface roughness was performed. An analysis of optimal micromachining parameters in the decision variable space was also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization, and it can be expanded to other applications.
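The particle swarm optimization step can be sketched in its standard form: each particle keeps a personal best, the swarm shares a global best, and velocities combine inertia with attraction toward both. The objective below is a simple test function standing in for the study's fitted regression models, and the hyperparameters are common textbook defaults, not the study's settings.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))              # particle positions
    v = np.zeros((n, dim))                        # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)        # personal best values
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Stand-in objective: sphere function with minimum at (1, 1).
best, val = pso(lambda p: float(((p - 1.0) ** 2).sum()), dim=2)
print(np.round(best, 3), round(val, 6))
```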
Abstract:
The objective of the thesis was to examine the possibilities of designing better-performing nozzles for the heatset drying oven at Forest Pilot Center. To achieve this, two predesigned nozzle types, along with replicas of the current nozzles in the heatset drying oven, were tested on a pilot-scale dryer. During the runnability trials, the pilot dryer was installed between the last printing unit and the drying oven, and the two sets of predesigned nozzles were consecutively installed in the dryer. Four web tension values and four different impingement air velocities were used, and the web behavior at each trial point was evaluated and recorded. The runnability in all trial conditions was adequate or even good. During the heat transfer trials, each nozzle type was tested at a minimum of two different nozzle-to-surface distances and four different impingement air velocities. In a test situation, an aluminum plate fitted with thermocouples was set below a nozzle and the temperature measurement of each block was logged. From the measurements, a heat transfer coefficient profile for the nozzle was calculated, so the performance of each nozzle type in the tested conditions could be rated and compared. The results verified that the predesigned, simpler nozzles were better than the replicas. For runnability reasons, the current nozzles have rows of inclined orifices on their leading and trailing edges. These were believed to deteriorate the overall performance of the nozzle, and trials were conducted to test this hypothesis: the perpendicular and inclined orifices of a replica nozzle were consecutively taped shut, the performance of each modified nozzle was measured as before, and the results were compared with the performance of the whole nozzle. It was found that beyond a certain nozzle-to-surface distance the jets from the two orifice rows collide, which deteriorates the heat transfer.
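One way such a logged plate-temperature history can be turned into a local heat transfer coefficient is to treat each thermocouple block as a lumped capacitance heated by the impinging jet and invert the exponential response. This is a sketch under that assumption; the block dimensions, temperatures, and times below are illustrative, not the thesis's measured data.

```python
import math

def h_from_lumped(T0, Tinf, T_t, t_s, mass_kg, cp, area_m2):
    """Invert T(t) = Tinf + (T0 - Tinf)*exp(-h*A*t/(m*cp)) for h (W/m^2/K)."""
    theta = (T_t - Tinf) / (T0 - Tinf)           # dimensionless temperature
    return -mass_kg * cp * math.log(theta) / (area_m2 * t_s)

# Illustrative aluminium block: 20 x 20 x 5 mm, rho = 2700 kg/m^3, cp = 900 J/(kg K)
m = 2700 * 0.02 * 0.02 * 0.005
h = h_from_lumped(T0=20.0, Tinf=150.0, T_t=80.0, t_s=30.0,
                  mass_kg=m, cp=900.0, area_m2=0.02 * 0.02)
print(round(h, 1))  # ~250.7 W/m^2/K
```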
Abstract:
This study examines ways to increase power generation in pulp mills. The main purpose was to identify and verify the best options for power generation growth. The literature part of the study presents the operation of the pulp mill's energy departments and the energy consumption and generation of the recovery and power boilers. The second chapter of this part describes the main directions for increasing electricity generation: raising the black liquor dry solids content, increasing the main steam parameters, flue gas heat recovery technologies, and feed water and combustion air preheating. The third chapter of the literature part presents the possible technical, environmental, and corrosion risks arising from the described alternatives. In the experimental part, the calculations and results of the possible models are presented. The possible combinations of alternatives were assembled into 44 models of the energy pulp mill. The goal of this part was to determine the extra electricity generation obtained with each alternative and to estimate the profitability of the generated models. The calculations were made with the computer program PROSIM. In the conclusions, the results are evaluated on the basis of the extra electricity generation and equipment design data of the models. The profitability of the cases is verified through their payback periods and additional incomes.
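The profitability check used to rank such models reduces, in its simplest form, to a payback-period calculation: investment divided by the annual extra income from the additional electricity generated. The figures below are illustrative, not the study's results.

```python
# Simple payback period for an extra-generation investment (illustrative numbers).

def payback_years(investment_eur, extra_mwh_per_year, price_eur_per_mwh):
    """Years to recover the investment from extra electricity sales."""
    annual_income = extra_mwh_per_year * price_eur_per_mwh
    return investment_eur / annual_income

print(round(payback_years(2_000_000, 10_000, 50.0), 1))  # -> 4.0 years
```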