926 results for ASSESSMENT MODELS


Relevance:

30.00%

Publisher:

Abstract:

Since the development of large-scale power grid interconnections and power markets, research on available transfer capability (ATC) has attracted great attention. The challenges for accurate assessment of ATC originate from the numerous uncertainties in the electricity generation, transmission, distribution and utilization sectors. Power system uncertainties can be mainly described as two types: randomness and fuzziness. However, the traditional transmission reliability margin (TRM) approach only considers randomness. Based on credibility theory, this paper first builds models of generators, transmission lines and loads according to their features of both randomness and fuzziness. A random fuzzy simulation is then applied, along with a novel method proposed for ATC assessment, in which both randomness and fuzziness are considered. The bootstrap method and a multi-core parallel computing technique are introduced to enhance the processing speed. By implementing the simulation for the IEEE 30-bus system and a real-life system located in Northwest China, the viability of the models and the proposed method is verified.
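
The bootstrap step lends itself to a short illustration. Below is a minimal sketch that resamples hypothetical ATC values (stand-ins for the output of a random fuzzy simulation; the credibility-theoretic generator, line and load models are not reproduced) to estimate a confidence interval. The paper's multi-core step would simply parallelize the resampling loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ATC samples (MW), standing in for the output of a random fuzzy
# simulation of generators, transmission lines and loads.
atc_samples = rng.normal(loc=450.0, scale=35.0, size=2000)

def bootstrap_ci(samples, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean ATC."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(samples, size=samples.size, replace=True)
        means[b] = resample.mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return samples.mean(), (lo, hi)

mean_atc, ci = bootstrap_ci(atc_samples)
print(f"ATC estimate: {mean_atc:.1f} MW, 95% CI {ci[0]:.1f}-{ci[1]:.1f} MW")
```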

Relevance:

30.00%

Publisher:

Abstract:

Small and Medium Enterprises (SMEs) play an important part in the economy of any country. Initially, a flat management hierarchy, quick response to market changes and cost competitiveness were seen as the competitive characteristics of an SME. Recently, in developed economies, technological capabilities (TCs) management (managing existing and developing or assimilating new technological capabilities for continuous process and product innovations) has become important for both large organisations and SMEs to achieve sustained competitiveness. Therefore, various technological innovation capability (TIC) models have been developed at firm level to assess firms' innovation capability level. The output of these models helps policy makers and firm managers devise policies for deepening a firm's technical knowledge generation, acquisition and exploitation capabilities for a sustained technological competitive edge. However, in developing countries TCs management is more a matter of TCs upgrading: acquisition of TCs from abroad, and then assimilating, innovating and exploiting them. Most of the TIC models for developing countries delineate the level of TIC required as firms move from the acquisition to the innovative level. However, these models do not provide tools for assessing the existing level of TIC of a firm and the various factors affecting TIC, to support practical interventions for TCs upgrading of firms towards improved or new processes and products. Recently, the Government of Pakistan (GOP) has realised the importance of TCs upgrading in SMEs, especially export-oriented ones, for their sustained competitiveness. The GOP has launched various initiatives with local and foreign assistance to identify ways and means of upgrading local SMEs' capabilities. This research targets this gap and develops a TIC assessment model for identifying the existing level of TIC of manufacturing SMEs located in clusters in Sialkot, Pakistan. SME executives in three different export-oriented clusters at Sialkot were interviewed to analyse the technological capabilities development initiatives (CDIs) taken by them to develop and upgrade their firms' TCs. Data analysed at the CDI, firm, cluster and cross-cluster level first helped classify the interviewed firms as leaders, followers and reactors, with leader firms claiming to introduce mostly new CDIs to their cluster. Second, the data analysis showed that most interviewed leader firms exhibited 'learning by interacting' and 'learning by training' capabilities for expertise acquisition from customers and international consultants. However, these leader firms did not show much evidence of learning by using, reverse engineering and R&D capabilities, which according to the extant literature are necessary for upgrading the existing TIC level and thus the TCs of a firm for better value-added processes and products. The research results are supported by the extant literature on the Sialkot clusters. In sum, a TIC assessment model was developed in this research which qualitatively identifies interviewed firms' TIC levels and the factors affecting them, and is validated by existing literature on the interviewed Sialkot clusters. Further, the research gives policy-level recommendations for TIC, and thus TCs, upgrading at firm and cluster level for targeting better value-added markets.

Relevance:

30.00%

Publisher:

Abstract:

This study presents a categorization of the different structural models of Public-Private Partnership (PPP) projects, with a brief systematic overview of each type. The typologies are useful for weighing the available options when decisions on PPP project structures are made. The models can be typified from several perspectives. Based on the key purpose of the partnership, efficiency-, quality- and financing-focused models are the most widespread; from a risk-sharing point of view, BOT, DBFO and concession models are most typical; regarding the regulation of returns, price-cap, 'open book', fixed-price and shadow-pricing models are most common. Based on an analytical assessment of these, the study concludes that actual projects are mostly a combination of the theoretical types, as required by the given case. Therefore, in practice it is more appropriate to approach PPP as a constantly shaping concept, adjustable to particular conditions, rather than through fixed typologies. Supporting the approaches to the regulation of returns, an appendix of the study summarizes the different methods for estimating fair returns, a critical issue in PPP projects.

Relevance:

30.00%

Publisher:

Abstract:

Miami-Dade County implemented a series of water conservation programs, which included rebate/exchange incentives to encourage the use of high-efficiency aerators (AR), showerheads (SH), toilets (HET) and clothes washers (HEW), in response to environmental sustainability concerns in urban areas. This study first used panel data analysis of water consumption to evaluate the performance and actual water savings of the individual programs. An integrated water demand model was also developed to incorporate properties' physical characteristics into the water consumption profiles. A life cycle assessment (with emphasis on the end-use stage of the water system) of water-intensive appliances was conducted to determine the environmental impacts of each practice. Approximately 6 to 10% of water was saved in the first and second years of implementation of high-efficiency appliances, with continuing savings in the third and fourth years. Water savings (gallons per household per day) for the water-efficient appliances were 28 (11.1%) for SH, 34.7 (13.3%) for HET, and 39.7 (14.5%) for HEW. Furthermore, the estimated contributions of high-efficiency appliances to reducing water demand in the integrated water demand model were between 5 and 19% (highest in the AR program). Results indicated that adoption of more than one type of water-efficient appliance could significantly reduce residential water demand. For the sustainable water management strategies, the appropriate water conservation rate was projected to be 1 to 2 million gallons per day (MGD) through 2030. With 2 MGD of water savings, the estimated per capita water use could be reduced from approximately 140 to 122 gallons per capita per day (GPCD). Additional efforts are needed to reduce water demand to US EPA's "Water Sense" conservation level of 70 GPCD by 2030. Life cycle assessment results showed that environmental impacts (water and energy demands and greenhouse gas emissions) from the end-use and demand phases are the most significant within the water system, particularly due to water heating (73% for clothes washers and 93% for showerheads). Estimates of the optimal lifespan of the appliances (8 to 21 years) imply that earlier replacement with efficient models is encouraged in order to minimize the environmental impacts of current practice.
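
The per-capita projection above is a unit conversion; a short sketch, treating the service population as a hypothetical input back-calculated from the 140 to 122 GPCD example:

```python
# Convert a projected conservation rate (MGD) into a per-capita reduction (GPCD).
baseline_gpcd = 140.0          # gallons per capita per day before conservation
savings_mgd = 2.0              # projected water savings, million gallons per day
service_population = 111_000   # hypothetical; implied by the 140 -> 122 GPCD example

delta_gpcd = savings_mgd * 1_000_000 / service_population
projected_gpcd = baseline_gpcd - delta_gpcd
print(f"Projected demand: {projected_gpcd:.0f} GPCD (reduction of {delta_gpcd:.0f} GPCD)")
# Reaching the EPA "Water Sense" level of ~70 GPCD would require far larger savings.
```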

Relevance:

30.00%

Publisher:

Abstract:

Urban growth models have been used for decades to forecast urban development in metropolitan areas. Since the 1990s cellular automata, with simple computational rules and an explicitly spatial architecture, have been heavily utilized in this endeavor. One such cellular-automata-based model, SLEUTH, has been successfully applied around the world to better understand and forecast not only urban growth but also other forms of land-use and land-cover change, but like other models it must be fed important information about which particular lands in the modeled area are available for development. Some of these lands fall into growth-exclusion categories that are difficult to quantify because their function is dictated by policy. One such category includes voluntary differential assessment programs, whereby farmers agree not to develop their lands in exchange for significant tax breaks. Since they are voluntary, today's excluded lands may become available for development at some point in the future. Mapping the shifting mosaic of parcels that are enrolled in such programs allows this information to be used in modeling and forecasting. In this study, we added information about California's Williamson Act into SLEUTH's excluded layer for Tulare County. Assumptions about the voluntary differential assessments were used to create a sophisticated excluded layer that was fed into SLEUTH's urban growth forecasting routine. The results demonstrate a successful execution of this method and also yielded high goodness-of-fit metrics both for the calibration of enrollment termination and for the urban growth modeling itself.
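
A minimal sketch of the kind of probabilistic excluded layer described here, using hypothetical enrollment rasters and an assumed exclusion weight (SLEUTH conventionally treats higher excluded-layer values as less available for urbanization; the weights actually used for Tulare County are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rasters for a small study area (rows x cols).
enrolled = rng.random((100, 100)) < 0.3     # True where a Williamson Act parcel is enrolled
protected = rng.random((100, 100)) < 0.05   # permanently excluded lands (e.g., public land)

# Build the excluded layer: 100 = fully excluded, 0 = fully available.
excluded = np.zeros((100, 100), dtype=np.uint8)
excluded[protected] = 100
# Enrolled parcels are only *probably* unavailable, since enrollment is voluntary
# and contracts can be non-renewed; give them an intermediate exclusion weight.
excluded[enrolled & ~protected] = 70        # hypothetical weight, not the study's value

np.save("excluded_layer.npy", excluded)     # would be exported as a GeoTIFF in practice
```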

Relevance:

30.00%

Publisher:

Abstract:

Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models by using internal and external cross-validation techniques; and (3) to quantify the model uncertainties through Taylor and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multi-linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (Fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models, but Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate highly with NCl for several classes of DBPs; and (4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models is contributed mostly by NCl for all DBP classes.
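
A minimal sketch of the MLR-with-k-fold-cross-validation step, using synthetic descriptors (NCl, NC, EHOMO) and a synthetic ELUMO response in place of the thesis's DBP dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(2)
n = 60

# Synthetic descriptors: chlorine count, carbon count, E_HOMO (eV).
X = np.column_stack([
    rng.integers(1, 6, n),          # N_Cl
    rng.integers(1, 8, n),          # N_C
    rng.normal(-9.5, 0.5, n),       # E_HOMO
])
# Synthetic response: E_LUMO dominated by N_Cl, as the thesis reports.
y = -0.35 * X[:, 0] - 0.05 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, n)

model = LinearRegression()
# Leave-many-out / k-fold cross-validation (here k = 5), as recommended over LOO alone.
q2 = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0), scoring="r2")
print("Per-fold cross-validated R^2:", np.round(q2, 3))
```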

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of cell electronics has led to its integration into medicine, either by physically interfacing electronic devices with biological systems or by using electronics for both detection and characterization of biological materials. In this dissertation, an electrical impedance sensor (EIS) was used to measure electrode-surface impedance changes in cell samples, in order to assess the human and environmental toxicity of nanoscale materials in 2D and 3D cell culture models. The impedimetric response of human lung fibroblasts and rainbow trout gill epithelial cells exposed to various nanomaterials was tested to determine the kinetic effects of the materials on the cells and to demonstrate the biosensor's ability to monitor nanotoxicity in real time. Further, the EIS allowed rapid, real-time and multi-sample analysis, creating a versatile, noninvasive tool that is able to provide quantitative information on alterations in cellular function. We then extended the unique capabilities of the EIS to real-time analysis of cancer cell responses to externally applied, low-intensity alternating electric fields at different intermediate frequencies. Decreases in the growth profiles of ovarian and breast cancer cells were observed with the application of 200 and 100 kHz fields, respectively, indicating specific inhibitory effects on dividing cells in culture, in contrast to the non-cancerous HUVECs and mammary epithelial cells. We then sought to enhance the effects of the electric field by altering the cancer cells' electronegative membrane properties with HER2-antibody-functionalized nanoparticles. An Annexin V/EthD-III assay and zeta potential measurements were performed to determine the cell death mechanism, indicating apoptosis and a decrease in zeta potential with the incorporation of the nanoparticles. With more negatively charged HER2-AuNPs attached to the cancer cell membrane, the decrease in membrane potential would leave the cells more vulnerable to the detrimental effects of the applied electric field owing to the decrease in surface charge. Therefore, by altering the cell membrane potential, one could possibly control the fate of the cell. This whole-cell-based biosensor will enhance our understanding of the responsiveness of cancer cells to electric field therapy and demonstrates potential therapeutic opportunities for electric field therapy in the treatment of cancer.

Relevance:

30.00%

Publisher:

Abstract:

We developed diatom-based prediction models of hydrology and periphyton abundance to inform assessment tools for a hydrologically managed wetland. Because hydrology is an important driver of ecosystem change, hydrologic alterations by restoration efforts could modify biological responses, such as periphyton characteristics. In karstic wetlands, diatoms are particularly important components of mat-forming calcareous periphyton assemblages that both respond and contribute to the structural organization and function of the periphyton matrix. We examined the distribution of diatoms across the Florida Everglades landscape and found hydroperiod and periphyton biovolume were strongly correlated with assemblage composition. We present species optima and tolerances for hydroperiod and periphyton biovolume, for use in interpreting the directionality of change in these important variables. Predictions of these variables were mapped to visualize landscape-scale spatial patterns in a dominant driver of change in this ecosystem (hydroperiod) and an ecosystem-level response metric of hydrologic change (periphyton biovolume). Specific diatom assemblages inhabiting periphyton mats of differing abundance can be used to infer past conditions and inform management decisions based on how assemblages are changing. This study captures diatom responses to wide gradients of hydrology and periphyton characteristics to inform ecosystem-scale bioassessment efforts in a large wetland.
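
Species optima and tolerances of the kind reported here are commonly computed by abundance-weighted averaging; a minimal sketch with hypothetical hydroperiod and abundance data (the study's calibration procedure may differ in detail):

```python
import numpy as np

# Hypothetical inputs: hydroperiod (days) at 8 sites and relative abundance of one
# diatom taxon at those sites.
hydroperiod = np.array([60, 120, 180, 240, 270, 300, 330, 360], dtype=float)
abundance   = np.array([ 0,   2,   8,  20,  25,  18,   6,   1], dtype=float)

optimum = np.sum(abundance * hydroperiod) / np.sum(abundance)          # weighted mean
tolerance = np.sqrt(np.sum(abundance * (hydroperiod - optimum) ** 2)
                    / np.sum(abundance))                               # weighted SD
print(f"Hydroperiod optimum ~{optimum:.0f} days, tolerance ~{tolerance:.0f} days")
```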

Relevance:

30.00%

Publisher:

Abstract:

Disasters are complex events characterized by damage to key infrastructure and population displacements into disaster shelters. Assessing the living environment in shelters during disasters is a crucial health security concern. Until now, knowledge of jurisdictions' familiarity and preparedness regarding those assessment methods, or of the deficiencies found in shelters, has been limited. A cross-sectional survey (STUSA survey) ascertained knowledge and preparedness for those assessments in all 50 states, DC, and 5 US territories. Descriptive analysis of overall knowledge and preparedness was performed. Fisher's exact tests analyzed differences between two groups: jurisdiction type and population size. Two logistic regression models analyzed earthquake and hurricane risks as predictors of knowledge and preparedness. A convenience sample of state shelter assessment records (n=116) was analyzed to describe environmental health deficiencies found during selected events. Overall, 55 (98%) of jurisdictions (states and territories) responded and appeared to be knowledgeable about these assessments (states 92%, territories 100%, p = 1.000), and engaged in disaster planning with shelter partners (states 96%, territories 83%, p = 0.564). Few had shelter assessment procedures (states 53%, territories 50%, p = 1.000) or training in disaster shelter assessments (states 41%, territories 60%, p = 0.638). Knowledge and preparedness were not predicted by disaster risks, population size, or jurisdiction type in either model. Knowledge model: hurricane (adjusted OR 0.69, 95% C.I. 0.06-7.88); earthquake (OR 0.82, 95% C.I. 0.17-4.06); both risks (OR 1.44, 95% C.I. 0.24-8.63). Preparedness model: hurricane (OR 1.91, 95% C.I. 0.06-20.69); earthquake (OR 0.47, 95% C.I. 0.7-3.17); both risks (OR 0.50, 95% C.I. 0.06-3.94). Environmental health deficiencies documented in shelter assessments occurred mostly in sanitation (30%), facility (17%), food (15%), and sleeping areas (12%), and during ice storms and tornadoes. More research is needed on environmental health assessments of disaster shelters, particularly in areas that may provide better insight into the living environment of all shelter occupants and potential effects on disaster morbidity and mortality, and to evaluate the effectiveness and usefulness of these assessment methods and of the data available on environmental health deficiencies for risk management, in order to protect those at greater risk in shelter facilities during disasters.
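
A minimal sketch of the two named analyses, Fisher's exact test and logistic regression with odds ratios, using made-up counts and outcomes rather than the STUSA survey data:

```python
import numpy as np
from scipy.stats import fisher_exact
import statsmodels.api as sm

# Fisher's exact test: knowledgeable vs not, by jurisdiction type (hypothetical counts).
table = np.array([[46, 4],    # states:      knowledgeable / not knowledgeable
                  [ 5, 0]])   # territories: knowledgeable / not knowledgeable
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact p = {p_value:.3f}")

# Logistic regression: knowledge (0/1) predicted by hurricane and earthquake risk flags.
rng = np.random.default_rng(3)
hurricane = rng.integers(0, 2, 55)
earthquake = rng.integers(0, 2, 55)
knowledge = rng.integers(0, 2, 55)          # hypothetical outcomes
X = sm.add_constant(np.column_stack([hurricane, earthquake]))
fit = sm.Logit(knowledge, X).fit(disp=False)
print("Adjusted odds ratios:", np.round(np.exp(fit.params[1:]), 2))
```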

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigated the risk of accidental release of hydrocarbons during transportation and storage. Transportation of hydrocarbons from an offshore platform to processing units through subsea pipelines involves a risk of release due to pipeline leakage resulting from corrosion, plastic deformation caused by seabed shakedown, or damage from contact with a drifting iceberg. The environmental impacts of hydrocarbon dispersion can be severe, and the overall safety and economic concerns of pipeline leakage in the subsea environment are immense. A large leak can be detected by employing conventional technology such as radar, intelligent pigging or chemical tracers, but in a remote location such as the subsea or the Arctic, a small chronic leak may remain undetected for a period of time. In the case of storage, an accidental release of hydrocarbons from a storage tank could lead to a pool fire, which could further escalate into domino effects; such a chain of accidents may have extremely severe consequences. Analysis of past accident scenarios shows that more than half of industrial domino accidents involved fire as the primary event, with contributions from other factors such as wind speed and direction, fuel type and engulfment of the compound. In this thesis, a computational fluid dynamics (CFD) approach is taken to model the subsea pipeline leak and the pool fire from a storage tank. The commercial software package ANSYS FLUENT Workbench 15 is used to model the subsea pipeline leakage. The CFD simulation results for four different fluids showed that the static pressure and pressure gradient along the axial length of the pipeline have a sharp signature variation near the leak orifice at steady-state conditions. A transient simulation is performed to obtain the acoustic signature of the pipe near the leak orifice. The power spectral density (PSD) of the acoustic signal is strong near the leak orifice and dissipates as the distance and orientation from the leak orifice increase; high-pressure fluid flow generates more noise than low-pressure flow. To model the pool fire from the storage tank, ANSYS CFX Workbench 14 is used. The CFD results show that wind speed has a significant influence on the behavior of the pool fire and its domino effects. Radiation contours are also obtained from CFD post-processing, which can be applied in risk analysis. The outcome of this study will be helpful for better understanding the domino effects of pool fires in the complex geometrical settings of process industries. Approaches to reducing and preventing these risks are discussed based on the results obtained from the numerical simulations.
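
A minimal sketch of the power spectral density computation, applied to a synthetic pressure signal standing in for the CFD-derived acoustic signature near the leak orifice:

```python
import numpy as np
from scipy.signal import welch

fs = 10_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(4)
# Synthetic near-orifice signal: a tonal component plus broadband leak noise.
pressure = 0.5 * np.sin(2 * np.pi * 1200 * t) + rng.normal(0, 0.2, t.size)

freqs, psd = welch(pressure, fs=fs, nperseg=1024)
print(f"Peak PSD at ~{freqs[np.argmax(psd)]:.0f} Hz")
# In the study, the PSD amplitude drops as distance/orientation from the orifice increase.
```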

Relevance:

30.00%

Publisher:

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy, have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights reflecting OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
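
A minimal sketch of the site-scoring tradeoff, with hypothetical species densities, sensitivity weights, and profitability proxies in place of the dissertation's inputs:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_species = 200, 5

density = rng.gamma(2.0, 1.0, (n_sites, n_species))     # birds per km^2 per site
sensitivity = np.array([0.9, 0.7, 0.4, 0.6, 0.2])       # collision/displacement weights
wind_speed = rng.normal(8.5, 1.0, n_sites)               # m/s at 90 m hub height
dist_to_grid = rng.uniform(5, 80, n_sites)               # km

# Conservation risk: sensitivity-weighted bird density summed across species.
risk = density @ sensitivity
# Profitability proxy: higher wind speed, shorter transmission distance.
profit = wind_speed - 0.02 * dist_to_grid

# Simple screening of the tradeoff: below-median risk and above-median profit.
candidates = np.flatnonzero((risk < np.median(risk)) & (profit > np.median(profit)))
print(f"{candidates.size} candidate sites out of {n_sites}")
```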

Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and is then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to the conservation of cetaceans versus costs to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
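
A minimal sketch of the least-cost routing step on a hypothetical resistance surface; scikit-image's route_through_array is used here purely as a stand-in for whatever least-cost implementation the dissertation employed:

```python
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(6)

# Hypothetical resistance surface: baseline transit cost plus a whale-density penalty.
whale_density = rng.gamma(1.5, 1.0, (120, 160))
multiplier = 5.0                               # varying this trades distance against risk
cost_surface = 1.0 + multiplier * whale_density

start, end = (10, 5), (110, 150)               # e.g., a port and a study-area entrance
route, total_cost = route_through_array(cost_surface, start, end,
                                        fully_connected=True, geometric=True)
print(f"Route length: {len(route)} cells, cumulative cost: {total_cost:.1f}")
```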

Essential to the input of these decision frameworks are distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred to estimate potential biological removal, per Marine Mammal Protection Act requirements in the U.S., all the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
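
A minimal sketch of the threshold-selection step, choosing the cut-off that minimizes the combined false positive and false negative rates (equivalently, maximizes Youden's J) on hypothetical predictions:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
# Hypothetical presence/absence labels and GAM-style predicted probabilities.
y_true = rng.integers(0, 2, 500)
y_prob = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, 500), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
j = tpr - fpr                                  # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"Optimal threshold ~{thresholds[best]:.2f} "
      f"(FPR {fpr[best]:.2f}, FNR {1 - tpr[best]:.2f})")
presence_map = (y_prob >= thresholds[best]).astype(int)   # presence vs absence calls
```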

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent for gaming conservation, industry and stakeholders towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance:

30.00%

Publisher:

Abstract:

Abstract

The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that yield richer functional heterogeneity information.

This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for particular radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and simulated accelerated k-space acquisitions were generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: (1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; (2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of PK maps generated from the undersampled data in reference to the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images that were reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
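
A minimal sketch of the Cartesian random undersampling idea at an acceleration factor of about four, with a simple zero-filled reconstruction; the study's iterative CS reconstruction with the TGV penalty is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)
image = rng.normal(0, 1, (128, 128))                    # stand-in for one DCE frame

k_full = np.fft.fftshift(np.fft.fft2(image))            # fully sampled k-space

# Cartesian random sampling of phase-encode lines at ~25% (acceleration factor ~4),
# always keeping the low-frequency centre lines.
n_lines = image.shape[0]
keep = rng.random(n_lines) < 0.25
keep[n_lines // 2 - 8 : n_lines // 2 + 8] = True
mask = np.zeros_like(k_full, dtype=bool)
mask[keep, :] = True

k_under = np.where(mask, k_full, 0)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
print(f"Sampled fraction: {mask.mean():.2f}")
# A CS reconstruction would replace the zero filling with iterative minimization of a
# data-fidelity term plus a sparsity/TGV regularization term.
```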

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better accuracy and efficiency. This method is based on a derivative-based deformation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and different noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, its calculation efficiency was superior to current methods by roughly two orders of magnitude (10²). In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
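
The thesis's derivative-based reformulation is not reproduced here, but the widely used linear least-squares form of the Tofts model, Ct(t) = Ktrans·∫Cp dt − kep·∫Ct dt, conveys the idea of solving the PK parameters as a linear problem; a sketch on simulated curves:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

dt = 1.0                                         # s, ~1 s temporal resolution
t = np.arange(0, 300, dt)
cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)        # simple simulated arterial input (mM)

ktrans_true, kep_true = 0.25 / 60, 0.60 / 60     # ground-truth values, 1/s
# Forward Tofts model: Ct(t) = Ktrans * [Cp convolved with exp(-kep*t)](t)
ct = ktrans_true * np.convolve(cp, np.exp(-kep_true * t))[: t.size] * dt

# Linear form: Ct = Ktrans*int(Cp) - kep*int(Ct)  ->  solve [int(Cp), -int(Ct)] x = Ct
A = np.column_stack([
    cumulative_trapezoid(cp, t, initial=0),
    -cumulative_trapezoid(ct, t, initial=0),
])
ktrans_fit, kep_fit = np.linalg.lstsq(A, ct, rcond=None)[0]
print(f"Ktrans = {ktrans_fit * 60:.3f} /min, kep = {kep_fit * 60:.3f} /min")
```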

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology development along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is motivated by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map showed significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It is intended to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the same small-animal experiment, the selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using dynamic FSD parameters, the treatment/control group classification after the first treatment fraction was improved compared with using conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
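
The Rényi dimension family generalizes the box-counting (capacity) dimension; a minimal box-counting sketch on a hypothetical binarized parameter map gives the basic idea (this is not the study's GLLPM or FSD implementation):

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical binarized PK-parameter map (e.g., voxels above a Ktrans threshold).
binary_map = rng.random((256, 256)) < 0.2

def box_count(img, box_sizes):
    """Count occupied boxes at each box size."""
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    return np.array(counts)

sizes = np.array([2, 4, 8, 16, 32])
counts = box_count(binary_map, sizes)
# Capacity dimension D0: slope of log N(s) versus log(1/s).
D0 = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
print(f"Box-counting dimension ~{D0:.2f}")
```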

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from these two models. When evaluated within the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.

In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented works could benefit the future DCE-MRI routine clinical application in radiotherapy assessment.

Relevance:

30.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the "task-based" definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well defined imaging task (e.g., detection of subtle lesions), (2) an "observer" to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
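
A minimal sketch of the non-prewhitening matched filter figure of merit, d'² = (Σ|ΔS|²)² / Σ(|ΔS|⁴·NPS), evaluated in the Fourier domain for a hypothetical lesion signal and an assumed noise power spectrum (scaling factors are illustrative only):

```python
import numpy as np

# Hypothetical expected signal: a small low-contrast disc (difference image, HU).
n = 128
y, x = np.mgrid[:n, :n] - n // 2
signal = 20.0 * (np.hypot(x, y) <= 4)                 # 20 HU, ~4-pixel radius lesion

S = np.abs(np.fft.fftshift(np.fft.fft2(signal))) ** 2 # |dS(u,v)|^2

# Hypothetical noise power spectrum: mid-frequency peaked, as in filtered back-projection.
r = np.hypot(x, y) / (n // 2)
nps = 50.0 * r * np.exp(-3 * r)                       # arbitrary units
nps[nps == 0] = 1e-6

dprime = np.sum(S) / np.sqrt(np.sum(S ** 2 * nps))    # d' of the NPW observer
print(f"NPW detectability index d' ~ {dprime:.1f}")
```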

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
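
A minimal sketch of the standard ensemble NPS estimate from repeated scans, NPS = (ΔxΔy/NxNy)·⟨|DFT(ROI − ensemble mean)|²⟩, on simulated noise; the thesis's extension to irregularly shaped ROIs is not reproduced:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(10)
pixel = 0.5                                    # mm, assumed pixel spacing
n_scans, roi = 50, 64

# Hypothetical repeated ROIs: correlated quantum noise made by smoothing white noise.
kernel = np.outer(np.hanning(5), np.hanning(5))
kernel /= kernel.sum()
stack = np.array([fftconvolve(rng.normal(0, 10, (roi, roi)), kernel, mode="same")
                  for _ in range(n_scans)])

noise = stack - stack.mean(axis=0)             # subtract ensemble mean (removes structure)
nps = (pixel ** 2 / (roi * roi)) * np.mean(
    np.abs(np.fft.fft2(noise)) ** 2, axis=0)   # 2-D noise power spectrum
variance = nps.sum() / (pixel ** 2 * roi * roi)
print(f"Integrated noise variance ~ {variance:.2f} HU^2")
```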

To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called "Clustered Lumpy Background" texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
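
A minimal sketch of an analytic lesion model of this kind (size, contrast, soft edge profile) voxelized and added to a background image; the exact parameterization used in the dissertation is not reproduced:

```python
import numpy as np

def lesion_model(shape, center, radius_mm, contrast_hu, edge_mm, pixel_mm=0.7):
    """Radially symmetric lesion with a sigmoid edge profile (hypothetical form)."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1]) * pixel_mm
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

background = np.full((256, 256), 60.0)                     # stand-in liver background, HU
hybrid = background + lesion_model(background.shape, center=(128, 140),
                                   radius_mm=6.0, contrast_hu=-15.0, edge_mm=1.0)
print(f"Lesion centre value: {hybrid[128, 140]:.1f} HU")   # ~45 HU for a -15 HU lesion
```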

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.


Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on tectonic geomorphology and the response of the Ken River catchment to postulated tectonic forcing along a NE-striking monocline fold in the Panna region, Madhya Pradesh, India. Peninsular India is underlain by three northeast-trending paleotopographic ridges of Precambrian Indian basement, bounded by crustal-scale faults. Of particular interest is the Pokhara lineament, a crustal-scale fault that defines the eastern edge of the Faizabad ridge, a paleotopographic high cored by the Archean Bundelkhand craton. The Pokhara lineament coincides with the monocline structure developed in the Proterozoic Vindhyan Supergroup rocks along the Bundelkhand cratonic margin. A peculiar, deeply incised meander-like feature, preserved along the Ken River where it flows through the monocline, may be intimately related to the tectonic regime of this system. This thesis examines 41 longitudinal stream profiles across the length of the monocline structure to identify any tectonic signals generated by recent surface uplift above the Pokhara lineament, and investigates the evolution of the Ken River catchment in response to the growth of the monocline fold. Digital elevation models (DEMs) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were used to delineate a series of tributary watersheds and extract individual stream profiles, which were imported into MATLAB for analysis. Regression limits were chosen to define distinct channel segments, and knickpoints were defined at breaks between channel segments where there was a discrete change in the steepness of the channel profile. The longitudinal channel profiles exhibit the characteristics of a fluvial system in a transient state. There is a significant downstream increase in the normalized steepness index in the channel profiles, as well as a general increase in concavity downstream, with some channels exhibiting convex, over-steepened segments. Normalized steepness indices and uppermost knickpoint elevations are on average much higher in streams along the southwest segment of the monocline than in streams along the northeast segment. Most channel profiles have two to three knickpoints, predominantly exhibiting slope-break morphology. These data have important implications for recent surface uplift above the Pokhara lineament. Furthermore, geomorphic features preserved along the Ken River suggest that it is an antecedent river. The incised meander-like feature appears to be the abandoned valley of a former Ken River course that was captured during the evolution of the landscape by what is the present-day Ken River.
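
A minimal sketch of how a normalized steepness index is typically extracted from slope-area data, S = ksn·A^(−θref) with a reference concavity of 0.45, using synthetic channel data rather than the ASTER-derived profiles:

```python
import numpy as np

rng = np.random.default_rng(11)
theta_ref = 0.45                                   # reference concavity

# Synthetic slope-area data for one channel segment.
area = np.logspace(6, 9, 40)                       # drainage area, m^2
ksn_true = 80.0
slope = ksn_true * area ** (-theta_ref) * np.exp(rng.normal(0, 0.1, area.size))

# Regress log S on log A: the slope gives -theta, the intercept gives log(ksn).
coeffs = np.polyfit(np.log10(area), np.log10(slope), 1)
theta_fit, ksn_fit = -coeffs[0], 10 ** coeffs[1]
print(f"Fitted concavity ~{theta_fit:.2f}, steepness index ~{ksn_fit:.0f}")

# Normalized ksn: refit with the exponent fixed at the reference concavity.
ksn_norm = 10 ** np.mean(np.log10(slope) + theta_ref * np.log10(area))
print(f"Normalized ksn (theta fixed at {theta_ref}) ~{ksn_norm:.0f}")
```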