Abstract:
Capillary electrophoresis (CE) is a modern analytical technique in which electrokinetic separation, driven by high voltage, takes place inside small capillaries. In this dissertation, several advanced capillary electrophoresis methods are presented using different CE approaches, with UV and mass spectrometry utilized as the detection methods. Capillary electrochromatography (CEC), one of the CE modes, is a recently developed technique that is a hybrid of capillary electrophoresis and high-performance liquid chromatography (HPLC) and exhibits advantages of both techniques. In Chapter 2, a monolithic capillary column is fabricated using an in situ photoinitiated polymerization method. The column was then applied to the separation of six antidepressant compounds. Meanwhile, a simple chiral separation method is developed and presented in Chapter 3. Beta-cyclodextrin was utilized to achieve chiral separation. Not only were twelve cathinone analytes separated, but isomers of several analytes were also enantiomerically separated. To better characterize the molecular information on the analytes, a TOF-MS system was coupled with the CE. A sheath liquid and a partial filling technique (PFT) were employed to reduce contamination of the MS ionization source, and accurate molecular information was obtained. It is necessary to propose, develop, and optimize new techniques that are suitable for trace-level analysis of samples in forensic, pharmaceutical, and environmental applications. Capillary electrophoresis (CE) was selected for this task, as it requires smaller amounts of sample, simplifies sample preparation, and has the flexibility to separate neutral and charged molecules as well as enantiomers. Overall, the study demonstrates the versatility of capillary electrophoresis methods in forensic, pharmaceutical, and environmental applications.
Abstract:
The task of expression undertaken by the performer falls largely on the guitarist's right hand. Aware of this fact, past and present masters have left their contributions to the development of right-hand technique. It is clear that, with rare exceptions, pedagogical and interpretative proposals have so far approached the attack on the strings through flexion of the fingers. This work, however, presents a technical resource called imalt, which includes the extension movement in the attack action. Some techniques used in specific circumstances, such as the dedillo, the alzapúa, the tremolo, and the rasgueado, also use extension movements in the attack; they are put in perspective with the imalt, providing a panoramic view of their individual characteristics. The use of imalt in the traditional guitar repertoire is exemplified in Villa-Lobos, Ponce, and Brouwer. Three pieces were composed for this work: Shravana, Alegoria, and Vandana. Compositional techniques such as melodic contour application and ostinato were reviewed and used in the preparation of these compositions. A detailed record of the compositional trajectory is presented, following the Model for Compositional Process Accompaniment according to Silva (2007). Some events that have brought the imalt into evidence are reported, such as the launch and distribution of the Compact Disc (CD) Imalt, the publication of scores, and interviews. Finally, concluding comments are presented, pointing to possibilities opened up by this work.
Abstract:
STATEMENT OF PROBLEM: A number of methods have been described for the fabrication of complete dentures. There are 2 common ways to make conventional complete dentures: a traditional method and a simplified method. PURPOSE: The purpose of this study was to conduct a systematic review to compare the efficiency of simplified and traditional methods for the fabrication of complete dentures. MATERIAL AND METHODS: The review was conducted by 3 independent reviewers and included articles published up to December 2013. Three electronic databases were searched: MEDLINE-PubMed, The Cochrane Library, and ISI Web of Science. A manual search also was performed to identify clinical trials of simplified versus traditional fabrication of complete dentures. RESULTS: Six articles were classified as randomized controlled clinical trials and were included in this review. The majority of the selected articles analyzed general satisfaction, denture stability, chewing ability and function, comfort, hygiene, esthetics, speech function, quality of life, cost, and fabrication time. CONCLUSIONS: Although the studies reviewed demonstrate some advantages of simplified over traditional prostheses, such as lower cost and clinical time, good chewing efficiency, and a positive effect on the quality of life, the reports related the use of different simplified methods for the fabrication of complete dentures. Additional randomized controlled trials that used similar simplified techniques for the fabrication of complete dentures should be performed with larger sample sizes and longer follow-up periods.
Abstract:
In the present work, the deviations of the solubility of CO2, CH4, and N2 at 30 °C in the mixed gases (CO2/CH4) and (CO2/N2) from pure-gas behavior were studied using the dual-mode model over a wide range of equilibrium compositions and pressures in two glassy polymers. The first was PI-DAR, the polyimide formed by the reaction between 4,6-diaminoresorcinol dihydrochloride (DAR-Cl) and 2,2'-bis-(3,4-dicarboxyphenyl) hexafluoropropane dianhydride (6FDA). The other glassy polymer was TR-DAR, the corresponding thermally rearranged polymer of PI-DAR. Mixed-gas sorption experiments for the gas mixture (CO2/CH4) in TR-DAR at 30 °C were also carried out in order to assess how accurately the dual-mode model predicts the true mixed-gas behavior. The experiments were conducted on a pressure-decay apparatus coupled with a gas chromatography column. In addition, the solubility of CO2 and CH4 in two rubbery polymers at 30 °C in the mixed gas (CO2/CH4) was modelled using the Sanchez-Lacombe equation of state at various equilibrium compositions and pressures. These two rubbery polymers were cross-linked poly(ethylene oxide) (XLPEO) and poly(dimethylsiloxane) (PDMS). Moreover, data on the sorption of CO2 and CH4 in liquid methyl diethanolamine (MDEA), collected from the literature [65-67], were used to determine the deviations of the mixed-gas sorption behavior from that of the pure gases. It was observed that competition effects between the penetrants prevailed in the glassy polymers, while swelling effects were predominant in the rubbery polymers above a certain CO2 fugacity. It was also found that the dual-mode model predicted the sorption of CH4 in the mixed gas well at small pressures but, in general, failed to predict the actual sorption of the penetrants in the mixed gas.
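For reference, the dual-mode model used above takes the following standard form for component A of a binary (A, B) mixture, combining Henry's-law dissolution with competitive Langmuir hole-filling (the notation here is mine, not the thesis's):

```latex
% C_A: concentration of A sorbed in the glassy polymer
% k_D: Henry's-law constant; C'_H: Langmuir hole capacity;
% b: affinity constant; p: partial pressure (or fugacity)
\[
  C_A \;=\; k_{D,A}\, p_A \;+\; \frac{C'_{H,A}\, b_A\, p_A}{1 + b_A p_A + b_B p_B}
\]
```

The competitive term in the denominator is what produces the mixed-gas suppression of sorption relative to the pure-gas isotherm.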
Abstract:
The composition and abundance of algal pigments provide information on phytoplankton community characteristics such as photoacclimation, overall biomass and taxonomic composition. In particular, pigments play a major role in photoprotection and in the light-driven part of photosynthesis. Most phytoplankton pigments can be measured by high-performance liquid chromatography (HPLC) techniques applied to filtered water samples. This method, as well as other laboratory analyses, is time consuming and therefore limits the number of samples that can be processed in a given time. In order to receive information on phytoplankton pigment composition with a higher temporal and spatial resolution, we have developed a method to assess pigment concentrations from continuous optical measurements. The method applies an empirical orthogonal function (EOF) analysis to remote-sensing reflectance data derived from ship-based hyperspectral underwater radiometry and from multispectral satellite data (using the Medium Resolution Imaging Spectrometer - MERIS - Polymer product developed by Steinmetz et al., 2011, doi:10.1364/OE.19.009783) measured in the Atlantic Ocean. Subsequently we developed multiple linear regression models with measured (collocated) pigment concentrations as the response variable and EOF loadings as predictor variables. The model results show that surface concentrations of a suite of pigments and pigment groups can be well predicted from the ship-based reflectance measurements, even when only a multispectral resolution is chosen (i.e., eight bands, similar to those used by MERIS). Based on the MERIS reflectance data, concentrations of total and monovinyl chlorophyll a and the groups of photoprotective and photosynthetic carotenoids can be predicted with high quality. As a demonstration of the utility of the approach, the fitted model based on satellite reflectance data as input was applied to 1 month of MERIS Polymer data to predict the concentration of those pigment groups for the whole eastern tropical Atlantic area. Bootstrapping explorations of cross-validation error indicate that the method can produce reliable predictions with relatively small data sets (e.g., < 50 collocated values of reflectance and pigment concentration). The method allows for the derivation of time series from continuous reflectance data of various pigment groups at various regions, which can be used to study variability and change of phytoplankton composition and photophysiology.
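A minimal sketch of this EOF-plus-regression chain follows, assuming standardized reflectance spectra as input; the array names and synthetic data are placeholders, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
Rrs = rng.random((60, 8))      # (samples, bands): e.g., 8 MERIS-like bands
pigment = rng.random(60) + 0.1 # collocated HPLC pigment concentrations

# EOF analysis: PCA of standardized reflectance spectra
X = (Rrs - Rrs.mean(axis=0)) / Rrs.std(axis=0)
scores = PCA(n_components=4).fit_transform(X)  # EOF scores ("loadings" above)

# Multiple linear regression of log concentration on the EOF predictors
model = LinearRegression().fit(scores, np.log(pigment))
r2 = cross_val_score(model, scores, np.log(pigment), cv=5)
print("cross-validated R^2:", r2.mean())
```

Cross-validation of this kind is what underlies the bootstrapping error estimates mentioned above.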
Abstract:
Acknowledgments The authors are very grateful to Mr. Fabiano Bielefeld Nardotto, owner of the Tabapuã dos Pireneus farm, for allowing our free movement around the farm and collection of soil samples, as well as providing information about soybean cultivation. The authors also thank Dr. Plínio de Camargo, who performed the isotopic analysis in the CENA laboratory at the University of São Paulo (USP). This work was supported by grants from the National Council of Technological and Scientific Development (CNPq), Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES), and Foundation for Research Support of Distrito Federal (FAP-DF).
Abstract:
Purpose: To study the concentrations of diadenosine polyphosphates in the ocular surface after PRK and LASIK. Methods: Sixty-one patients (30 males and 31 females) aged 20 to 63 years (34.04 ± 9.13 years) were recruited at the Balear Institute of Ophthalmology, Palma de Mallorca, Spain. LASIK was performed on 92 eyes of 46 patients and PRK on 25 eyes of 15 patients. Variations in the levels of the diadenosine polyphosphates Ap4A and Ap5A, Schirmer I (Jones test), TBUT, and corneal staining, together with the Dry Eye Questionnaire to evaluate discomfort and dryness, were studied. All tests were performed at the preoperative visit and at 1-day, 2-week, 1-month, and 3-month postoperative visits. Results: Ap4A showed a 5-fold and a 3.5-fold increase at the 1-day visit for LASIK and PRK, respectively. LASIK patients continued to have statistically significantly higher concentrations (p = 0.01) throughout the follow-up. Ap5A showed no significant differences at any visit. Tear volume decreased during the 3 months in LASIK; the PRK cases had normal volume at 1 month. TBUT in LASIK increased at the 1-day visit (p = 0.002) and decreased from 2 weeks onwards; for PRK, it decreased by 35% at the 1-day visit and remained reduced for a month. Discomfort only increased at the 1-day visit (p = 0.007). Dryness frequency was similar at all visits. Conclusions: Ap4A levels are increased in refractive surgery patients only during the first day after surgery. This increase suggests that Ap4A may help accelerate the healing process.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared with other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) A well defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologists or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
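For concreteness, here is a minimal sketch of how the two observer-model families compared above can be computed from stacks of signal-present and signal-absent ROIs; the array names, shapes, and channel matrix are assumptions for illustration, not the dissertation's code:

```python
import numpy as np

def npw_dprime(sp, sa):
    """Non-prewhitening matched filter: the template is the mean signal,
    and d' is computed from template-response statistics.
    sp/sa: (n, h, w) stacks of signal-present / signal-absent ROIs."""
    w = (sp.mean(axis=0) - sa.mean(axis=0)).ravel()   # expected-signal template
    t1 = sp.reshape(len(sp), -1) @ w                  # responses, signal present
    t0 = sa.reshape(len(sa), -1) @ w                  # responses, signal absent
    return (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))

def cho_dprime(sp, sa, channels):
    """Channelized Hotelling observer.
    channels: (h*w, n_ch) matrix, e.g. Gabor or Laguerre-Gauss channels."""
    v1 = sp.reshape(len(sp), -1) @ channels           # channel outputs
    v0 = sa.reshape(len(sa), -1) @ channels
    K = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    return float(np.sqrt(dv @ np.linalg.solve(K, dv)))  # d'^2 = dv' K^-1 dv
```

The prewhitening step (the K^-1 in the CHO) is what lets that family account for correlated noise, which is one reason such models track human performance better than CNR.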
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
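For orientation, the standard ensemble NPS estimator for square ROIs from repeated scans looks as follows; the dissertation's irregular-ROI method generalizes this, and the sketch below is only the textbook square-ROI version with illustrative names:

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Ensemble 2D noise power spectrum from repeated scans.
    rois: (n, N, N) stack of co-located square ROIs; subtracting the
    ensemble mean isolates quantum noise, as in the image-subtraction
    approach described above."""
    noise = rois - rois.mean(axis=0)              # remove deterministic signal
    n, Ny, Nx = noise.shape
    spectra = np.abs(np.fft.fft2(noise)) ** 2     # periodogram per realization
    nps = spectra.mean(axis=0) * pixel_size_mm ** 2 / (Nx * Ny)
    return np.fft.fftshift(nps) * n / (n - 1)     # correct mean-subtraction bias
```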
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called "Clustered Lumpy Background" texture-synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
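To make the idea of an analytical lesion model concrete, here is a minimal sketch of one common radially symmetric form (a "designer nodule" profile); the function name, parameters, and profile choice are illustrative assumptions, not the dissertation's actual models:

```python
import numpy as np

def lesion_model(shape, center, radius_px, contrast_hu, edge_n=1.5):
    """Voxelize a radially symmetric lesion with analytic size, contrast,
    and edge profile: contrast * (1 - (r/R)^2)^edge_n inside radius R.
    Larger edge_n gives a softer edge."""
    zz, yy, xx = np.indices(shape)
    r2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
          + (xx - center[2]) ** 2) / radius_px ** 2
    profile = np.clip(1.0 - r2, 0.0, None) ** edge_n   # zero outside the lesion
    return contrast_hu * profile

# A "hybrid" image: add the voxelized model to a real patient volume, e.g.
# hybrid = patient_volume + lesion_model(patient_volume.shape, (40, 256, 256), 8, -15.0)
```

Because the inserted morphology and location are known exactly, such hybrid images provide ground truth for detectability and estimability studies.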
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The use of structural health monitoring of civil structures is ever expanding; by assessing the dynamical condition of structures, informed maintenance management can be conducted at both individual and network levels. With the continued growth of information-age technology, the potential arises for smart monitoring systems to be integrated with civil infrastructure to provide efficient information on the condition of a structure. The focus of this thesis is the integration of smart technology with civil infrastructure for the purposes of structural health monitoring. The technology considered in this regard comprises devices based on energy harvesting materials. While there has been considerable focus on the development and optimisation of such devices under steady-state loading conditions, their applications for civil infrastructure are less well known. Although research is still at an initial stage, studies into such applications are very promising. Using the dynamical response of structures to a variety of loading conditions, the energy harvesting outputs from such devices are established and the potential power output determined. Through a power-variance output approach, damage detection of deteriorating structures using the energy harvesting devices is investigated (a sketch of this indicator follows below). Further applications of the integration of energy harvesting devices with civil infrastructure investigated by this research include the use of the power output as an indicator for control. Four approaches are undertaken to determine the potential applications arising from integrating smart technology with civil infrastructure, namely:
• Theoretical analysis to determine the applications of energy harvesting devices for vibration-based health monitoring of civil infrastructure.
• Laboratory experimentation to verify the performance of different energy harvesting configurations for civil infrastructure applications.
• Scaled model testing as a method to experimentally validate the integration of the energy harvesting devices with civil infrastructure.
• Full-scale deployment of an energy harvesting device on a bridge structure.
These four approaches validate the application of energy harvesting technology with civil infrastructure from a theoretical, experimental, and practical perspective.
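A minimal sketch of a power-variance damage indicator of the kind named above; the window length and the idea of thresholding the statistic are assumptions for illustration, not the thesis's exact formulation:

```python
import numpy as np

def power_variance(power, window=256):
    """Moving variance of the harvested-power signal. Sustained shifts in
    this statistic can flag changes in the structure's dynamic response,
    i.e. possible deterioration."""
    p = np.asarray(power, dtype=float)
    out = np.empty(p.size - window + 1)
    for i in range(out.size):
        out[i] = p[i:i + window].var()
    return out

# Usage sketch: compare the indicator from a baseline (healthy) record
# against later records; a persistent departure suggests damage.
# baseline = power_variance(healthy_power); current = power_variance(new_power)
```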
Abstract:
Mineralogical and chemical analyses performed on 67 ferromanganese nodules from widely varying locations and depths within the marine environment of the Pacific Ocean indicate that the minor element composition is controlled by the mineralogy and that the formation of the mineral phases is depth dependent. The pressure effect upon the thermodynamics or kinetics of mineral formation is suggested as the governing agent in the depth dependence of the mineralogy. The minor elements, Pb and Co, appear concentrated in the δMnO2 phase, whereas Cu and Ni are more or less excluded from this phase. In the manganites, Pb and Co are relatively low in concentration, whereas Cu and Ni are spread over a wide range of values. The oxidation of Pb and Co from divalent forms in sea water to higher states can explain their concentration in the δMnO2 phase.
Abstract:
The control of radioactive backgrounds will be key in the search for neutrinoless double beta decay at the SNO+ experiment. Several aspects of the SNO+ backgrounds have been studied. The SNO+ tellurium purification process may require ultra-low-background ethanol as a reagent. A low-background assay technique for ethanol was developed and used to identify a source of ethanol with measured 238U and 232Th concentrations below 2.8 × 10^-13 g/g and 10^-14 g/g, respectively. It was also determined that at least 99.997% of the ethanol can be removed from the purified tellurium using forced air flow in order to reduce 14C contamination. In addition, a quality-control technique using an oxygen sensor was studied to monitor 222Rn contamination due to air leaking into the SNO+ scintillator during transport. The expected sensitivity of the technique is 0.1 mBq/L or better, depending on the oxygen sensor used. Finally, the dependence of SNO+ neutrinoless double beta decay sensitivity on internal background levels was studied using Monte Carlo simulation. The half-life limit for neutrinoless double beta decay of 130Te after 3 years of operation was found to be 4.8 × 10^25 years under default conditions.
Abstract:
The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/), which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) in order to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. Our component automatically classifies documents into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), but also into the categories from a high-level serious games categorization provisionally developed by RAGE. English and French are already covered by the provided web service, and the framework can be extended to support additional languages.
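As a rough illustration of the kind of automated categorization the component performs, here is a generic supervised baseline; this TF-IDF-plus-linear-classifier sketch stands in for ReaderBench's actual CNA-based pipeline, which is not reproduced here, and the documents and category labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training set: documents paired with (hypothetical) ACM CCS categories.
docs = ["shader pipelines and GPU rasterization",
        "gradient descent for deep neural networks",
        "sorting algorithms and complexity bounds"]
labels = ["Computing methodologies~Computer graphics",
          "Computing methodologies~Machine learning",
          "Theory of computation~Design and analysis of algorithms"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["training convolutional networks with backpropagation"]))
```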
Abstract:
BACKGROUND: Prostate cancer might have high radiation-fraction sensitivity that would give a therapeutic advantage to hypofractionated treatment. We present a pre-planned analysis of the efficacy and side-effects of a randomised trial comparing conventional and hypofractionated radiotherapy after 5 years' follow-up.
METHODS: CHHiP is a randomised, phase 3, non-inferiority trial that recruited men with localised prostate cancer (pT1b-T3aN0M0). Patients were randomly assigned (1:1:1) to conventional (74 Gy delivered in 37 fractions over 7·4 weeks) or one of two hypofractionated schedules (60 Gy in 20 fractions over 4 weeks or 57 Gy in 19 fractions over 3·8 weeks) all delivered with intensity-modulated techniques. Most patients were given radiotherapy with 3-6 months of neoadjuvant and concurrent androgen suppression. Randomisation was by computer-generated random permuted blocks, stratified by National Comprehensive Cancer Network (NCCN) risk group and radiotherapy treatment centre, and treatment allocation was not masked. The primary endpoint was time to biochemical or clinical failure; the critical hazard ratio (HR) for non-inferiority was 1·208. Analysis was by intention to treat. Long-term follow-up continues. The CHHiP trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN97182923.
FINDINGS: Between Oct 18, 2002, and June 17, 2011, 3216 men were enrolled from 71 centres and randomly assigned (74 Gy group, 1065 patients; 60 Gy group, 1074 patients; 57 Gy group, 1077 patients). Median follow-up was 62·4 months (IQR 53·9-77·0). The proportion of patients who were biochemical or clinical failure free at 5 years was 88·3% (95% CI 86·0-90·2) in the 74 Gy group, 90·6% (88·5-92·3) in the 60 Gy group, and 85·9% (83·4-88·0) in the 57 Gy group. 60 Gy was non-inferior to 74 Gy (HR 0·84 [90% CI 0·68-1·03], pNI=0·0018) but non-inferiority could not be claimed for 57 Gy compared with 74 Gy (HR 1·20 [0·99-1·46], pNI=0·48). Long-term side-effects were similar in the hypofractionated groups compared with the conventional group. There were no significant differences in either the proportion or cumulative incidence of side-effects 5 years after treatment using three clinician-reported as well as patient-reported outcome measures. The estimated cumulative 5 year incidence of Radiation Therapy Oncology Group (RTOG) grade 2 or worse bowel and bladder adverse events was 13·7% (111 events) and 9·1% (66 events) in the 74 Gy group, 11·9% (105 events) and 11·7% (88 events) in the 60 Gy group, 11·3% (95 events) and 6·6% (57 events) in the 57 Gy group, respectively. No treatment-related deaths were reported.
INTERPRETATION: Hypofractionated radiotherapy using 60 Gy in 20 fractions is non-inferior to conventional fractionation using 74 Gy in 37 fractions and is recommended as a new standard of care for external-beam radiotherapy of localised prostate cancer.
FUNDING: Cancer Research UK, Department of Health, and the National Institute for Health Research Cancer Research Network.
Abstract:
Transient simulations are widely used in studying past climate, as they allow better comparison with existing proxy data. However, multi-millennial transient simulations using coupled climate models are usually computationally very expensive. As a result, several acceleration techniques are implemented when using numerical simulations to recreate past climate. In this study, we compare the results from transient simulations of the present and the last interglacial, with and without acceleration of the orbital forcing, using the comprehensive coupled climate model CCSM3 (Community Climate System Model 3). Our study shows that in low-latitude regions, the simulation of long-term variations in interglacial surface climate is not significantly affected by the use of the acceleration technique (with an acceleration factor of 10); hence, large-scale model-data comparison of surface variables is not hampered. However, in high-latitude regions where the surface climate has a direct connection to the deep ocean, e.g. in the Southern Ocean or the Nordic Seas, acceleration-induced biases in sea-surface temperature evolution may occur, with potential influence on the dynamics of the overlying atmosphere. The data provided here are from both accelerated and non-accelerated runs, as decadal mean values.
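A minimal sketch of the orbital-forcing acceleration referred to above; the function, its arguments, and the start date are illustrative, not the study's CCSM3 implementation. Each simulated model year advances the orbital parameters by `factor` calendar years, so 10,000 forcing years are traversed in 1,000 model years when the factor is 10:

```python
def orbital_year_bp(model_year, start_year_bp=130_000, factor=10):
    """Orbital-forcing year (in years before present) applied at a given
    model year: each simulated year advances the forcing (eccentricity,
    obliquity, precession) by `factor` years."""
    return start_year_bp - factor * model_year

# After 1,000 simulated years the model sees the orbital configuration
# of 120,000 years BP, compressing 10,000 forcing years into 1,000.
print(orbital_year_bp(1_000))  # -> 120000
```

The slowly responding deep ocean cannot keep pace with forcing compressed this way, which is one way to understand the high-latitude biases reported above.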