187 results for interval-valued fuzzy set
Abstract:
Purpose: This study evaluated the impact of patient set-up errors on the probability of pulmonary and cardiac complications in the irradiation of left-sided breast cancer.

Methods and Materials: Using the NTCP algorithm of the CMS XiO Version 4.6 radiotherapy planning system (CMS Inc., St Louis, MO) and the Lyman-Kutcher-Burman (LKB) model, we calculated the DVH indices for the ipsilateral lung and heart and the resultant normal tissue complication probabilities (NTCP) for radiation-induced pneumonitis and excess cardiac mortality in 12 left-sided breast cancer patients.

Results: Isocenter shifts in the posterior direction had the greatest effect on the lung V20, heart V25, and the mean and maximum doses to the lung and the heart. Dose-volume histogram (DVH) results show that the ipsilateral lung V20 tolerance was exceeded in 58% of the patients after 1 cm posterior shifts. Similarly, the heart V25 tolerance was exceeded after 1 cm antero-posterior and left-right isocentric shifts in 70% of the patients. The baseline NTCPs for radiation-induced pneumonitis ranged from 0.73% to 3.4%, with a mean value of 1.7%. The maximum reported NTCP for radiation-induced pneumonitis was 5.8% (mean 2.6%) after a 1 cm posterior isocentric shift. The NTCP for excess cardiac mortality was 0% in all 12 patients before and after set-up error simulations.

Conclusions: Set-up errors in left-sided breast cancer patients have a statistically significant impact on the lung NTCPs and DVH indices. However, with a central lung distance of 3 cm or less (CLD < 3 cm) and a maximum heart distance of 1.5 cm or less (MHD < 1.5 cm), the treatment plans could tolerate set-up errors of up to 1 cm without any change in the NTCP to the heart.
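The LKB model used above maps an equivalent uniform dose (EUD) to a complication probability through the standard normal CDF. A minimal sketch, assuming commonly quoted lung-pneumonitis parameters rather than the values fitted in this study:

```python
import math

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit function of the normalized
    deviation of the equivalent uniform dose (EUD) from TD50."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical parameters: TD50 = 24.5 Gy, m = 0.18 are literature values
# assumed here for illustration, not taken from this study.
ntcp_baseline = lkb_ntcp(14.0, 24.5, 0.18)
ntcp_shifted = lkb_ntcp(17.0, 24.5, 0.18)  # higher lung EUD after a posterior shift
```

A posterior isocenter shift that raises the lung EUD moves the argument of the CDF toward zero, which is why the pneumonitis NTCP grows with set-up error.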
Abstract:
OBJECTIVE Corneal confocal microscopy is a novel diagnostic technique for the detection of nerve damage and repair in a range of peripheral neuropathies, in particular diabetic neuropathy. Normative reference values are required to enable clinical translation and wider use of this technique. We have therefore undertaken a multicenter collaboration to provide worldwide age-adjusted normative values of corneal nerve fiber parameters.

RESEARCH DESIGN AND METHODS A total of 1,965 corneal nerve images from 343 healthy volunteers were pooled from six clinical academic centers. All subjects underwent examination with the Heidelberg Retina Tomograph corneal confocal microscope. Images of the central corneal subbasal nerve plexus were acquired by each center using a standard protocol and analyzed by three trained examiners using manual tracing and semiautomated software (CCMetrics). Age trends were established using simple linear regression, and normative reference values for corneal nerve fiber density (CNFD), corneal nerve fiber branch density (CNBD), corneal nerve fiber length (CNFL), and corneal nerve fiber tortuosity (CNFT) were calculated using quantile regression analysis.

RESULTS There was a significant linear age-dependent decrease in CNFD (-0.164 no./mm² per year for men, P < 0.01, and -0.161 no./mm² per year for women, P < 0.01). There was no change with age in CNBD (0.192 no./mm² per year for men, P = 0.26, and -0.050 no./mm² per year for women, P = 0.78). CNFL decreased in men (-0.045 mm/mm² per year, P = 0.07) and women (-0.060 mm/mm² per year, P = 0.02). CNFT increased with age in men (0.044 per year, P < 0.01) and women (0.046 per year, P < 0.01). Height, weight, and BMI did not influence the 5th-percentile normative values for any corneal nerve parameter.
CONCLUSIONS This study provides robust worldwide normative reference values for corneal nerve parameters to be used in research and clinical practice in the study of diabetic and other peripheral neuropathies.
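The trend-plus-lower-percentile procedure described above can be sketched numerically. This is a simplified stand-in, assuming synthetic data and using ordinary least squares plus an empirical 5th percentile of the residuals in place of full quantile regression:

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 500)
# Synthetic CNFD values (no./mm^2) with an age slope near the reported
# -0.16 per year; purely illustrative, not the study's data.
cnfd = 35.0 - 0.16 * age + rng.normal(0.0, 3.0, 500)

# Simple linear regression for the age trend
slope, intercept = np.polyfit(age, cnfd, 1)

# Empirical 5th percentile of the residuals gives a crude
# age-adjusted lower normative bound
residuals = cnfd - (intercept + slope * age)
offset_5th = np.percentile(residuals, 5)

def lower_normative_cnfd(a):
    """Approximate 5th-percentile normative CNFD at age a."""
    return intercept + slope * a + offset_5th
```

Quantile regression proper fits the 5th-percentile line directly by minimizing a pinball loss, which is preferable when the residual spread itself changes with age.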
Abstract:
With the rapid development of various technologies and applications in smart grid implementation, demand response has attracted growing research interest because of its potential to enhance power grid reliability while reducing system operation costs. This paper presents a new demand response model with elastic economic dispatch in a locational marginal pricing market. It models system economic dispatch as a feedback control process and introduces a flexible, adjustable load cost as a control signal to adjust demand response. Compared with the conventional “one-time-use” static load dispatch model, this dynamic feedback demand response model can adjust the load to a desired level in a finite number of time steps, and a proof of convergence is provided. In addition, Monte Carlo simulation and boundary calculation using interval mathematics are applied to describe the uncertainty of end-users' responses to an independent system operator's expected dispatch. A numerical analysis based on the modified Pennsylvania-Jersey-Maryland power pool five-bus system is introduced for simulation, and the results verify the effectiveness of the proposed model. System operators may use the proposed model to obtain insights into demand response processes for their decision-making regarding system load levels and operation conditions.
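The finite-step convergence idea can be illustrated with a proportional feedback sketch. The scalar load model, gain, and numeric values below are assumptions for illustration, not the paper's dispatch formulation:

```python
def dispatch_feedback(load, target, gain=0.5, max_steps=50, tol=1e-3):
    """Drive aggregate load toward the operator's desired level using a
    price-like control signal proportional to the mismatch. With 0 < gain < 1
    the error shrinks geometrically, so the target is reached (within tol)
    in a finite number of steps."""
    history = [load]
    for _ in range(max_steps):
        if abs(target - load) < tol:
            break
        signal = gain * (target - load)  # adjustable load-cost control signal
        load += signal                   # elastic demand responds to the signal
        history.append(load)
    return load, history

final_load, trajectory = dispatch_feedback(load=120.0, target=100.0)
```

Each iteration halves the remaining mismatch here, so roughly log2(initial error / tol) steps suffice; the paper's interval-mathematics bounds would wrap this trajectory in an uncertainty band reflecting end-user response.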
Abstract:
Background: Smoking and physical inactivity are major risk factors for heart disease. Linking strategies that promote improvements in fitness with assistance in quitting smoking has the potential to address both risk factors simultaneously. The objective of this study is to compare the effects of two exercise interventions (high-intensity interval training (HIIT) and lifestyle physical activity) on smoking cessation in female smokers.

Method/design: This study will use a randomised controlled trial design. Participants: women aged 18–55 years who smoke ≥ 5 cigarettes/day and want to quit smoking. Intervention: all participants will receive usual care for quitting smoking. Group 1 will complete two gym-based supervised HIIT sessions/week and one home-based HIIT session/week. At each training session, participants will be asked to complete four 4-min (4 × 4 min) intervals at approximately 90% of maximum heart rate, interspersed with 3-min recovery periods. Group 2 participants will receive a resource pack and pedometer, and will be asked to use the 10,000 steps log book to record steps and other physical activities. The aim will be to increase daily steps to 10,000 steps/day. Analysis will be intention-to-treat, and measures will include smoking cessation, withdrawal and cravings, fitness, physical activity, and well-being.

Discussion: The study builds on previous research suggesting that exercise intensity may influence the efficacy of exercise as a smoking cessation intervention. The hypothesis is that HIIT will improve fitness and assist women to quit smoking.
Abstract:
State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
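On a one-dimensional toy example, the KDE-plus-f-divergence pipeline can be sketched as follows, using the Kullback-Leibler divergence (one member of the Csiszár family) evaluated on a discretized grid. The data, bandwidth, and grid are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def kde(samples, x, h=0.5):
    """Gaussian kernel density estimate of the samples, evaluated at points x."""
    diff = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diff**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

def kl_divergence(p, q, eps=1e-12):
    """Discrete approximation of the KL f-divergence between two densities
    sampled on a common grid (renormalized to sum to one)."""
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 300)   # toy "image-set" A features
b = rng.normal(2.0, 1.0, 300)   # toy "image-set" B features
grid = np.linspace(-5.0, 7.0, 400)

d_ab = kl_divergence(kde(a, grid), kde(b, grid))  # dissimilar sets: large
d_aa = kl_divergence(kde(a, grid), kde(a, grid))  # identical sets: zero
```

In the paper these divergences feed positive definite kernels on the statistical manifold, so a kernel classifier can operate directly on image-set PDFs.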
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and was able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a probability of 0.93 of a substantially greater improvement in running economy, and a probability greater than 0.96 that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
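Probabilistic statements like "0.96 that LHTL yields a substantially greater increase" are simple functionals of posterior draws. A minimal sketch, assuming synthetic normal draws in place of actual MCMC output; the location, scale, and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic posterior draws for the LHTL-minus-IHE difference in
# hemoglobin-mass change, standing in for real MCMC output.
draws = rng.normal(loc=3.0, scale=1.5, size=100_000)

threshold = 1.0  # assumed smallest "substantial" increase
p_substantial = float(np.mean(draws > threshold))    # P(effect > threshold | data)
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])  # 95% credible interval
```

Because the probability is computed directly from the posterior, no asymptotic approximation or multiplicity correction is needed; each comparison of interest is just another function of the same draws.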
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the 'wet' areas where trees and houses are partly covered by water. This can be regarded as a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label each pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, the two most important issues in spectral unmixing are investigated: the selection of endmembers and the modelling of the primary classes for unmixing. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed through error analysis in flood mapping using MODIS, Landsat, and WorldView-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results.
Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. As it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
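The linear-unmixing step underlying these comparisons can be sketched with a sum-to-one constrained least-squares solve. The toy three-band endmember spectra below are invented, and this sketch omits both the nonnegativity constraint and the pixel-dependent endmember selection that the thesis develops:

```python
import numpy as np

def unmix(pixel, endmembers, weight=100.0):
    """Linear spectral unmixing with a (softly enforced) sum-to-one constraint,
    via an augmented least-squares system. endmembers is a
    (bands x classes) matrix whose columns are pure-class spectra."""
    bands, classes = endmembers.shape
    # Append a heavily weighted row of ones so the fractions sum to ~1.
    A = np.vstack([endmembers, weight * np.ones((1, classes))])
    b = np.append(pixel, weight * 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# Hypothetical 3-band reflectances for water, vegetation, soil (columns)
E = np.array([[0.05, 0.10, 0.30],
              [0.03, 0.45, 0.25],
              [0.02, 0.30, 0.40]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]  # a "wet" pixel: 70% water, 30% vegetation
f = unmix(mixed, E)                    # recovers roughly [0.7, 0.3, 0.0]
```

If soil were dropped from the endmember set for this pixel, as the proposed pixel-dependent endmember index matrix would do, the smaller system avoids smearing abundance onto classes the pixel does not actually contain.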