26 results for Data pre-processing
Abstract:
BACKGROUND: Patient-reported outcomes (PROs) might detect more toxic effects of radiotherapy than do clinician-reported outcomes. We did a quality of life (QoL) substudy to assess PROs up to 24 months after conventionally fractionated or hypofractionated radiotherapy in the Conventional or Hypofractionated High Dose Intensity Modulated Radiotherapy in Prostate Cancer (CHHiP) trial.
METHODS: The CHHiP trial is a randomised, non-inferiority phase 3 trial done in 71 centres, of which 57 UK hospitals took part in the QoL substudy. Men with localised prostate cancer who were undergoing radiotherapy were eligible for trial entry if they had histologically confirmed T1b-T3aN0M0 prostate cancer, an estimated risk of seminal vesicle involvement less than 30%, prostate-specific antigen concentration less than 30 ng/mL, and a WHO performance status of 0 or 1. Participants were randomly assigned (1:1:1) to receive a standard fractionation schedule of 74 Gy in 37 fractions or one of two hypofractionated schedules: 60 Gy in 20 fractions or 57 Gy in 19 fractions. Randomisation was done with computer-generated permuted block sizes of six and nine, stratified by centre and National Comprehensive Cancer Network (NCCN) risk group. Treatment allocation was not masked. UCLA Prostate Cancer Index (UCLA-PCI), including Short Form (SF)-36 and Functional Assessment of Cancer Therapy-Prostate (FACT-P), or Expanded Prostate Cancer Index Composite (EPIC) and SF-12 quality-of-life questionnaires were completed at baseline, pre-radiotherapy, 10 weeks post-radiotherapy, and 6, 12, 18, and 24 months post-radiotherapy. The CHHiP trial completed accrual on June 16, 2011, and the QoL substudy was closed to further recruitment on Nov 1, 2009. Analysis was on an intention-to-treat basis. The primary endpoint of the QoL substudy was overall bowel bother and comparisons between fractionation groups were done at 24 months post-radiotherapy. The CHHiP trial is registered with ISRCTN registry, number ISRCTN97182923.
FINDINGS: 2100 participants in the CHHiP trial consented to be included in the QoL substudy: 696 assigned to the 74 Gy schedule, 698 assigned to the 60 Gy schedule, and 706 assigned to the 57 Gy schedule. Of these individuals, 1659 (79%) provided data pre-radiotherapy and 1444 (69%) provided data at 24 months after radiotherapy. Median follow-up was 50·0 months (IQR 38·4-64·2) on April 9, 2014, which was the most recent follow-up measurement of all data collected before the QoL data were analysed in September, 2014. Comparison of the 74 Gy in 37 fractions, 60 Gy in 20 fractions, and 57 Gy in 19 fractions groups at 2 years showed no overall bowel bother in 269 (66%), 266 (65%), and 282 (65%) men; very small bother in 92 (22%), 91 (22%), and 93 (21%) men; small bother in 26 (6%), 28 (7%), and 38 (9%) men; moderate bother in 19 (5%), 23 (6%), and 21 (5%) men; and severe bother in four (<1%), three (<1%), and three (<1%) men, respectively (74 Gy vs 60 Gy, ptrend=0·64; 74 Gy vs 57 Gy, ptrend=0·59). We saw no differences between treatment groups in change of bowel bother score from baseline or pre-radiotherapy to 24 months.
INTERPRETATION: The incidence of patient-reported bowel symptoms was low and similar between patients in the 74 Gy control group and the hypofractionated groups up to 24 months after radiotherapy. If efficacy outcomes from CHHiP show non-inferiority for hypofractionated treatments, these findings will add to the growing evidence for moderately hypofractionated radiotherapy schedules becoming the standard treatment for localised prostate cancer.
FUNDING: Cancer Research UK, Department of Health, and the National Institute for Health Research Cancer Research Network.
Abstract:
Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.
Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open-source language engineering framework GATE and aimed at predicting chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested on sets of 635 and 163 manually classified or annotated reports, respectively, from the Northern Ireland Cancer Registry.
Results: The best result of 99.4% accuracy – which included only one semi-structured report predicted as unstructured – was produced by the layout classifier with the k-nearest-neighbour algorithm, using the binary term occurrence word vector type with stopword filter and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports, precision ranged from 0.97 to 0.94 and recall from 0.92 to 0.83, while for unstructured reports precision ranged from 0.91 to 0.64 and recall from 0.68 to 0.41. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones.
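As a rough illustration of the layout-classification step, the sketch below uses scikit-learn in place of RapidMiner: binary term-occurrence vectors with an English stop-word filter and pruning of infrequent terms, fed to a k-nearest-neighbour classifier. The sample reports, labels and value of k are placeholders, not data from the study.

```python
# Minimal sketch of the layout classifier, assuming scikit-learn in place of
# RapidMiner; the reports, labels and k below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

reports = [
    "Tumour size: 22 mm. ER: positive. Nodes: 2/14 positive.",
    "The specimen shows an invasive ductal carcinoma with associated DCIS ...",
]
layouts = ["semi-structured", "unstructured"]  # manual annotations

layout_clf = make_pipeline(
    CountVectorizer(binary=True,           # binary term occurrence
                    stop_words="english",  # stop-word filter
                    min_df=1),             # pruning of infrequent terms
    KNeighborsClassifier(n_neighbors=1),   # k would be tuned on the real set
)
layout_clf.fit(reports, layouts)
print(layout_clf.predict(["ER status: negative. Tumour size: 15 mm."]))
```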
Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
Abstract:
Dynamic power consumption is highly dependent on interconnect, so clever mapping of digital signal processing algorithms to parallelised realisations with data locality is vital. This is a particular problem for fast algorithm implementations where, typically, designers will have sacrificed circuit structure for efficiency in software implementation. This study outlines an approach for reducing the dynamic power consumption of a class of fast algorithms by minimising the index space separation; this allows the generation of field programmable gate array (FPGA) implementations with reduced power consumption. It is shown how a 50% reduction in relative index space separation results in measured power gains of 36% and 37% over a Cooley-Tukey Fast Fourier Transform (FFT)-based solution, from actual power measurements for a Xilinx Virtex-II FPGA implementation and circuit measurements for a Xilinx Virtex-5 implementation respectively. The authors show the generality of the approach by applying it to a number of other fast algorithms, namely the discrete cosine, discrete Hartley and Walsh-Hadamard transforms.
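To make the idea of index space separation concrete, the sketch below computes one simple per-stage measure (the sum of |i - j| over the input index pairs of each butterfly) for an iterative radix-2 Cooley-Tukey FFT. This particular metric and mapping are illustrative assumptions, not the formulation used in the paper.

```python
# Illustrative only: a plausible per-stage "index space separation" measure for
# an iterative radix-2 Cooley-Tukey FFT; not the paper's metric or mapping.
def butterfly_index_separation(n):
    """Sum of |i - j| over all butterfly input pairs (i, j) in each stage."""
    assert n & (n - 1) == 0 and n > 1, "n must be a power of two"
    per_stage = []
    stride = 1
    while stride < n:
        pairs = [(i, i + stride)
                 for start in range(0, n, 2 * stride)
                 for i in range(start, start + stride)]
        per_stage.append(sum(j - i for i, j in pairs))
        stride *= 2
    return per_stage

print(butterfly_index_separation(8))  # [4, 8, 16]: later stages dominate
```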
Abstract:
The highly structured nature of many digital signal processing operations allows these to be directly implemented as regular VLSI circuits. This feature has been successfully exploited in the design of a number of commercial chips, some examples of which are described. While many of the architectures on which such chips are based were originally derived on a heuristic basis, there is increasing interest in the development of systematic design techniques for the direct mapping of computations onto regular VLSI arrays. The purpose of this paper is to show how the technique proposed by Kung can be readily extended to the design of VLSI signal processing chips where the organisation of computations at the level of individual data bits is of paramount importance. The technique in question allows architectures to be derived using the projection and retiming of data dependence graphs.
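A minimal sketch of this kind of space-time mapping is given below, assuming an FIR filter as the example computation, with projection p = j and schedule t = i + j; these are standard illustrative choices, not taken from the paper.

```python
# Minimal sketch: executing the 2-D dependence graph of an FIR filter under the
# space-time mapping processor p = j, time t = i + j (illustrative choices).
import numpy as np

def systolic_fir(x, w):
    """Evaluate y[i] = sum_j w[j] * x[i - j] by firing graph node (i, j)
    on processor p = j at time step t = i + j."""
    n, k = len(x), len(w)
    y = np.zeros(n)
    for t in range(n + k - 1):            # global schedule
        for j in range(k):                # processor index p = j
            i = t - j                     # node (i, j) scheduled at time t
            if 0 <= i < n and i - j >= 0:
                y[i] += w[j] * x[i - j]   # one multiply-accumulate per node
    return y

x = np.arange(8.0)
w = np.array([0.5, 0.25, 0.25])
assert np.allclose(systolic_fir(x, w), np.convolve(x, w)[:len(x)])
```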
Abstract:
Data registration refers to a series of techniques for matching, or bringing into alignment, similar objects or datasets. These techniques enjoy widespread use in a wide variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level, block-based and feature-based methods to Fourier-domain methods.
This book focuses on providing algorithms and image and video techniques for registration, together with quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
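As a concrete example of a Fourier-domain method of the kind covered, the sketch below implements textbook phase correlation to recover a translational offset between two images; it assumes NumPy and is not drawn from the book itself.

```python
# Minimal sketch of phase-correlation registration (a generic Fourier-domain
# method), assuming NumPy; not code from the book.
import numpy as np

def phase_correlation(a, b):
    """Return the integer (row, col) shift d such that rolling b by d gives a."""
    cross_power = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross_power /= np.abs(cross_power) + 1e-12   # normalise; epsilon avoids /0
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks beyond half the image size to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(moved, img))   # expected (5, -3)
```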
Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives
Abstract:
An outlier-removal-based data cleaning technique is proposed to clean manually pre-segmented human skin data in colour images. The three-dimensional colour data is projected onto three two-dimensional planes, from which outliers are removed. The cleaned two-dimensional projections are merged to yield clean three-dimensional RGB data, which is finally used to build a look-up table and a single Gaussian classifier for the purpose of human skin detection in colour images.
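A minimal sketch of this style of cleaning is given below, assuming NumPy. The outlier rule (Mahalanobis distance in each two-dimensional projection) and the 3.0 threshold are illustrative assumptions, since the abstract does not state the criterion used.

```python
# Minimal sketch, assuming NumPy; the Mahalanobis-distance outlier rule and
# threshold are illustrative assumptions, not the method's stated criterion.
import numpy as np

def clean_skin_samples(rgb, thresh=3.0):
    """Keep skin-pixel samples that are inliers in all three 2-D projections.
    rgb: (N, 3) array of pre-segmented skin samples."""
    keep = np.ones(len(rgb), dtype=bool)
    for i, j in [(0, 1), (0, 2), (1, 2)]:            # RG, RB and GB planes
        plane = rgb[:, [i, j]].astype(float)
        d = plane - plane.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(plane, rowvar=False))
        maha = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
        keep &= maha < thresh                        # drop projected outliers
    return rgb[keep]

def fit_single_gaussian(rgb):
    """Fit the single Gaussian skin model (mean and covariance in RGB)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb.mean(axis=0), np.cov(rgb, rowvar=False)
```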
Abstract:
Current data-intensive image processing applications push traditional embedded architectures to their limits. FPGA-based hardware acceleration is a potential solution, but the programmability gap and time-consuming HDL design flow are significant. The proposed research approach, to develop an FPGA-based programmable hardware acceleration platform that uses a large number of Streaming Image processing Processors (SIPPro), potentially addresses these issues. SIPPro is a pipelined, in-order soft-core processor architecture with specific optimisations for image processing applications. Each SIPPro core uses 1 DSP48, 2 Block RAMs and 370 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. It is an area-efficient, scalable and high-performance soft-core architecture capable of delivering 530 MIPS per core using a Xilinx Zynq SoC (ZC7Z020-3). To evaluate the feasibility of the proposed architecture, a Traffic Sign Recognition (TSR) algorithm has been prototyped on a Zedboard, with the color and morphology operations accelerated using multiple SIPPro cores. Simulation and experimental results demonstrate that the processing platform is able to achieve speedups of 15 and 33 times for color filtering and morphology operations respectively, with significantly reduced design effort and time.
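As a plain-software reference for the two operations accelerated on the SIPPro cores, the sketch below performs a colour filter and binary morphology with NumPy/SciPy; the red-sign HSV thresholds and the 3x3 structuring element are illustrative assumptions, not taken from the paper.

```python
# Minimal software sketch of the color-filtering and morphology steps; the HSV
# thresholds and structuring element are illustrative assumptions.
import numpy as np
from scipy import ndimage

def red_sign_mask(hsv):
    """Color filter: boolean mask of 'red' pixels in an HSV image
    (H in [0, 180), S and V in [0, 255], OpenCV-style ranges assumed)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    red_hue = (h < 10) | (h > 170)
    return red_hue & (s > 100) & (v > 50)

def clean_mask(mask):
    """Morphology: opening then closing with a 3x3 structuring element to
    remove speckle noise and fill small holes in the binary mask."""
    se = np.ones((3, 3), dtype=bool)
    opened = ndimage.binary_opening(mask, structure=se)
    return ndimage.binary_closing(opened, structure=se)
```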
Abstract:
AIMS: Mutation detection accuracy has been described extensively; however, it is surprising that pre-PCR processing of formalin-fixed paraffin-embedded (FFPE) samples has not been systematically assessed in a clinical context. We designed a RING trial to (i) investigate pre-PCR variability, (ii) correlate pre-PCR variation with EGFR/BRAF mutation testing accuracy and (iii) investigate causes for the observed variation. METHODS: 13 molecular pathology laboratories were recruited. 104 blinded FFPE curls, including engineered FFPE curls, cell-negative FFPE curls and control FFPE tissue samples, were distributed to participants for pre-PCR processing and mutation detection. Follow-up analysis was performed to assess sample purity, DNA integrity and DNA quantitation. RESULTS: The rate of mutation detection failure was 11.9%. Of these failures, 80% were attributed to pre-PCR error. Significant differences in DNA yields across all samples were seen using analysis of variance (p