26 results for Evaluation criteria
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
The second round of the NIST-run public competition is underway to find a new hash algorithm for inclusion in the NIST Secure Hash Standard (SHA-3). This paper presents full hardware implementations of all of the second-round candidates with all of their variants. In order to determine their computational efficiency, an important aspect of NIST's round-two evaluation criteria, this paper gives an area/speed comparison of each design both with and without a hardware interface, thereby giving an overall impression of their performance in resource-constrained and resource-abundant environments. The implementation results are provided for a Virtex-5 FPGA device. The efficiency of the architectures for the hash functions is compared in terms of throughput per unit area. To the best of the authors' knowledge, this is the first work to date to present hardware designs which test for all message digest sizes (224, 256, 384, 512), and also the only work to include the padding as part of the hardware for the SHA-3 hash functions.
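The throughput-per-unit-area figure of merit used in the comparison can be sketched as follows; the block size, clock frequency, cycle count and slice count below are hypothetical placeholders for illustration, not measured results from the paper.

```python
def throughput_mbps(block_bits: int, clock_mhz: float, cycles_per_block: int) -> float:
    """Throughput in Mbit/s: bits absorbed per message block times blocks per microsecond."""
    return block_bits * clock_mhz / cycles_per_block

def efficiency(throughput: float, area_slices: int) -> float:
    """Throughput per unit area, in Mbit/s per FPGA slice."""
    return throughput / area_slices

# Hypothetical figures (illustration only): a 512-bit block absorbed in 64 cycles
# at 100 MHz, on a design occupying 1000 slices.
tp = throughput_mbps(block_bits=512, clock_mhz=100.0, cycles_per_block=64)
print(efficiency(tp, area_slices=1000))  # → 0.8
```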
Abstract:
Making room for new marine uses and safeguarding more traditional uses, without degrading the marine environment, will require the adoption of new integrated management strategies. Current management frameworks do not facilitate the integrated management of all marine activities occurring in one area. To address this issue, the government developed Harnessing Our Ocean Wealth – An Integrated Marine Plan (IMP) for Ireland. Harnessing Our Ocean Wealth presents a ‘roadmap’ for adopting an integrated approach to marine governance and for achieving the Government’s ambitious targets for the maritime sector, including: exceeding €6.4 billion turnover annually by 2020, and doubling its contribution to GDP to 2.4% by 2030. As part of this roadmap, Harnessing Our Ocean Wealth endorses the development of an appropriate Marine Spatial Planning (MSP) Framework. One way to develop an MSP Framework is to learn from early adopters. Critical assessments of key elements of MSP as implemented in early initiatives can serve to inform the development of an appropriate framework. The aim of this project is to contribute to the development of this framework by reporting on MSP best practice relevant to Ireland. Case study selection and evaluation criteria are outlined in the next section. This is followed by a presentation of case study findings. The final section of the report focuses on outlining how the lessons could be transferred to the Irish context.
Abstract:
BACKGROUND: Prostate cancer is a heterogeneous disease, but current treatments are not based on molecular stratification. We hypothesized that metastatic, castration-resistant prostate cancers with DNA-repair defects would respond to poly(adenosine diphosphate [ADP]-ribose) polymerase (PARP) inhibition with olaparib.
METHODS: We conducted a phase 2 trial in which patients with metastatic, castration-resistant prostate cancer were treated with olaparib tablets at a dose of 400 mg twice a day. The primary end point was the response rate, defined either as an objective response according to Response Evaluation Criteria in Solid Tumors, version 1.1, or as a reduction of at least 50% in the prostate-specific antigen level or a confirmed reduction in the circulating tumor-cell count from 5 or more cells per 7.5 ml of blood to less than 5 cells per 7.5 ml. Targeted next-generation sequencing, exome and transcriptome analysis, and digital polymerase-chain-reaction testing were performed on samples from mandated tumor biopsies.
RESULTS: Overall, 50 patients were enrolled; all had received prior treatment with docetaxel, 49 (98%) had received abiraterone or enzalutamide, and 29 (58%) had received cabazitaxel. Sixteen of 49 patients who could be evaluated had a response (33%; 95% confidence interval, 20 to 48), with 12 patients receiving the study treatment for more than 6 months. Next-generation sequencing identified homozygous deletions, deleterious mutations, or both in DNA-repair genes--including BRCA1/2, ATM, Fanconi's anemia genes, and CHEK2--in 16 of 49 patients who could be evaluated (33%). Of these 16 patients, 14 (88%) had a response to olaparib, including all 7 patients with BRCA2 loss (4 with biallelic somatic loss, and 3 with germline mutations) and 4 of 5 with ATM aberrations. The specificity of the biomarker suite was 94%. Anemia (in 10 of the 50 patients [20%]) and fatigue (in 6 [12%]) were the most common grade 3 or 4 adverse events, findings that are consistent with previous studies of olaparib.
CONCLUSIONS: Treatment with the PARP inhibitor olaparib in patients whose prostate cancers were no longer responding to standard treatments and who had defects in DNA-repair genes led to a high response rate. (Funded by Cancer Research UK and others; ClinicalTrials.gov number, NCT01682772; Cancer Research UK number, CRUK/11/029.).
Abstract:
We present a new wrapper feature selection algorithm for human detection. This algorithm is a hybrid feature selection approach combining the benefits of filter and wrapper methods. It allows the selection of an optimal feature vector that well represents the shapes of the subjects in the images. In detail, the proposed feature selection algorithm adopts the k-fold subsampling and sequential backward elimination approach, while the standard linear support vector machine (SVM) is used as the classifier for human detection. We apply the proposed algorithm to the publicly accessible INRIA and ETH pedestrian full image datasets with the PASCAL VOC evaluation criteria. Compared to other state-of-the-art algorithms, our feature selection based approach can improve the detection speed of the SVM classifier by over 50% with up to 2% better detection accuracy. Our algorithm also outperforms the equivalent systems introduced in the deformable part model approach with around 9% improvement in the detection accuracy.
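The sequential backward elimination step can be sketched generically as follows; the scoring callable stands in for the k-fold cross-validated SVM accuracy used in the paper, and the toy scorer (which prefers the feature subset {0, 2} and penalises extra features) is purely illustrative.

```python
def backward_eliminate(features, score, min_features=1):
    """Greedy sequential backward elimination: starting from the full feature
    set, repeatedly drop the feature whose removal gives the best score, and
    stop once every removal would make the score worse."""
    current = list(features)
    best = score(current)
    while len(current) > min_features:
        trials = [(score([f for f in current if f != g]), g) for g in current]
        s, g = max(trials)          # best score achievable by dropping one feature
        if s < best:
            break                   # any further removal degrades the score
        best = s
        current.remove(g)
    return current, best

def toy_score(subset):
    """Illustrative scorer: rewards keeping features 0 and 2, penalises size."""
    return (1.0 if {0, 2} <= set(subset) else 0.0) - 0.01 * len(subset)

selected, best = backward_eliminate([0, 1, 2, 3], toy_score)
print(selected)  # → [0, 2]
```

In the paper's setting the scorer would retrain the linear SVM on each candidate subset under k-fold subsampling, so each elimination round costs one cross-validation per remaining feature.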
Abstract:
Objective To develop a provisional definition for the evaluation of response to therapy in juvenile dermatomyositis (DM) based on the Paediatric Rheumatology International Trials Organisation juvenile DM core set of variables. Methods Thirty-seven experienced pediatric rheumatologists from 27 countries achieved consensus on 128 difficult patient profiles as clinically improved or not improved using a stepwise approach (patient's rating, statistical analysis, definition selection). Using the physicians' consensus ratings as the “gold standard measure,” chi-square, sensitivity, specificity, false-positive and false-negative rates, area under the receiver operating characteristic curve, and kappa agreement for candidate definitions of improvement were calculated. Definitions with kappa values >0.8 were multiplied by the face validity score to select the top definitions. Results The top definition of improvement was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 1 of the remaining worsening by more than 30%, which cannot be muscle strength. The second-highest scoring definition was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 2 of the remaining worsening by more than 25%, which cannot be muscle strength (definition P1 selected by the International Myositis Assessment and Clinical Studies group). The third is similar to the second with the maximum amount of worsening set to 30%. This indicates convergent validity of the process. Conclusion We propose a provisional data-driven definition of improvement that reflects well the consensus rating of experienced clinicians, which incorporates clinically meaningful change in core set variables in a composite end point for the evaluation of global response to therapy in juvenile DM.
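The top definition can be expressed as a simple predicate. The variable names below are hypothetical, and the convention that all six core set variables are scaled so that a decrease means improvement is an assumption made for illustration, not part of the published definition.

```python
def improved(baseline, follow_up, muscle_key="muscle_strength"):
    """Top provisional definition sketched as a predicate: at least 20%
    improvement from baseline in 3 of 6 core set variables, with no more than
    1 of the remaining variables worsening by more than 30%, and that one
    variable may not be muscle strength. Assumes every variable is scaled so
    that lower values mean improvement."""
    pct = {k: 100.0 * (follow_up[k] - baseline[k]) / baseline[k] for k in baseline}
    improvements = [k for k, c in pct.items() if c <= -20.0]   # improved >= 20%
    worsenings = [k for k, c in pct.items() if c > 30.0]       # worsened  > 30%
    return (len(improvements) >= 3
            and len(worsenings) <= 1
            and muscle_key not in worsenings)

# Illustrative core set (names hypothetical), all scored so lower = better:
base = {"muscle_strength": 100, "physician_global": 100, "patient_global": 100,
        "function": 100, "enzymes": 100, "overall_disease": 100}
good = dict(base, muscle_strength=75, physician_global=78, patient_global=70)
bad = dict(base, physician_global=78, patient_global=70, function=75,
           muscle_strength=140)  # muscle strength worsened by 40%
print(improved(base, good))  # → True
print(improved(base, bad))   # → False
```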
Abstract:
The complexity of sustainable development means that it is often difficult to evaluate and communicate the concept effectively. One standard method to reduce complexity and improve communication, while maintaining scientific objectivity, is to use selected indicators. The aim of this paper is to describe and evaluate a process of public participation in the selection of sustainable development indicators that utilised the Q-method for discourse analysis. The Q-method was utilised to combine public opinion with technical expertise to create a list of technically robust indicators that would be relevant to the public. The method comprises statement collection, statement analysis, Q-sorts and Q-sort analysis. The results of the Q-method generated a list of statements for which a preliminary list of indicators was then developed by a team of experts from the fields of environmental science, sustainable development and psychology. Subsequently, members of the public evaluated the preliminary list of indicators to select a final list of indicators that were both technically sound and incorporated the views of the public. The utilisation of the Q-method in this process was evaluated using previously published criteria. The application of the Q-method in this context needs to be considered not only by the quality of the indicators developed, but also from the perspective of the benefit of the process to the participants. It was concluded that the Q-method provided an effective framework for public participation in the selection of indicators, as it allowed the public to discuss sustainable development in familiar language and in the context of their daily lives. By combining this information with expert input, a list of technically robust indicators that resonate with the public was developed.
The results demonstrated that many citizens are not aware of sustainable development, and if it is to be successfully communicated to them, then indicators and policy need to be couched in terms familiar and relevant to citizens and communities. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The change of supply chain relationships from the traditional adversarial to the collaborative has been increasing in the UK construction industry. To reflect this change, some attempts have been made to establish models for measuring and improving supply chain relationships in construction. However, there are obvious deficiencies in these existing models. This highlights the need for a systematic model for the assessment of construction supply chain relationships. Based on a review of the literature and an expert group discussion, an assessment framework is developed in this paper, which consists of assessment criteria, relationship levels, detailed descriptions, assessment classes and assessment procedures. The proposed framework is evaluated through expert interviews and case studies. This framework provides a roadmap for the improvement of supply chain relationships. It can help construction organisations to position their current relationship and identify key areas for relationship improvement in the future.
Abstract:
Ecological coherence is a multifaceted conservation objective that includes some potentially conflicting concepts. These concepts include the extent to which the network maximises diversity (including genetic diversity) and the extent to which protected areas interact with non-reserve locations. To examine the consequences of different selection criteria, the preferred location to complement protected sites was examined using samples taken from four locations around each of two marine protected areas: Strangford Lough and Lough Hyne, Ireland. Three different measures of genetic distance were used: FST, Dest and a measure of allelic dissimilarity, along with a direct assessment of the total number of alleles in different candidate networks. Standardized site scores were used for comparisons across methods and selection criteria. The average score for Castlehaven, a site relatively close to Lough Hyne, was highest, implying that this site would capture the most genetic diversity while ensuring the highest degree of interaction between protected and unprotected sites. Patterns around Strangford Lough were more ambiguous, potentially reflecting the weaker genetic structure around this protected area in comparison to Lough Hyne. Similar patterns were found across species with different dispersal capacities, indicating that methods based on genetic distance could be used to help maximise ecological coherence in reserve networks.
Highlights:
- Ecological coherence is a key component of marine protected area network design.
- Coherence contains a number of competing concepts.
- Genetic information from field populations can help guide assessments of coherence.
- Average choice across different concepts of coherence was consistent among species.
- Measures can be combined to compare the coherence of different network designs.
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings, by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
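The comparisons against the £30,000-per-QALY threshold above use the standard incremental cost-effectiveness ratio, ICER = Δcost / ΔQALY. A minimal sketch, with entirely hypothetical pathway figures rather than values from the study:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained
    by the new pathway over the reference pathway."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly <= 0:
        # No QALY gain: dominated if it also costs more, dominant if cheaper.
        return float("inf") if d_cost > 0 else float("-inf")
    return d_cost / d_qaly

# Hypothetical figures (illustration only): a pathway costing £15,000 more
# per patient and yielding 0.5 extra QALYs sits exactly at the threshold.
print(icer(cost_new=16000.0, qaly_new=10.5, cost_ref=1000.0, qaly_ref=10.0))  # → 30000.0
```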
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, without treatment the mean time to blindness in at least one eye was approximately 23 years, compared to 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds with a screening interval of 10 years to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age, thus, appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective.
Screening using a test with initial automated classification followed by assessment by a specialised optometrist, for test positives, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce large numbers of false-positive referrals. The findings that population screening is unlikely to be cost-effective are based on an economic model whose parameter estimates have considerable uncertainty, in particular, if rate of progression and/or costs of visual impairment are higher than estimated then screening could be cost-effective. Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. Procedures for identifying those at risk, for quality assuring the programme, as well as adequate service provision for those screened positive would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination, and improving the performance of current testing by either refining practice or adding in a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. 
Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
Abstract:
Pregnancy and the postpartum period is a time of increased vulnerability for retention of excess body fat in women. Breastfeeding (BF) has been shown to have many health benefits for both mother and baby; however, its role in postpartum weight management is unclear. Our aim was to systematically review and critically appraise the literature published to date in relation to the impact of BF on postpartum weight change, weight retention and maternal body composition. Electronic literature searches were carried out using MEDLINE, EMBASE, PubMed, Web of Science, BIOSIS, CINAHL and British Nursing Index. The search covered publications up to 12 June 2012 and included observational studies (prospective and retrospective) carried out in BF mothers (either exclusively or as a subgroup), who were 2 years postpartum and with a body mass index (BMI) > 18.5 kg/m², with an outcome measure of change in weight (including weight retention) and/or body composition. Thirty-seven prospective studies and eight retrospective studies were identified that met the selection criteria; studies were stratified according to study design and outcome measure. Overall, studies were heterogeneous, particularly in relation to sample size, measurement time points and in the classification of BF and postpartum weight change. The majority of studies reported little or no association between BF and weight change (n=27, 63%) or change in body composition (n=16, 89%), although this seemed to depend on the measurement time points and BF intensity. However, of the five studies that were considered to be of high methodological quality, four studies demonstrated a positive association between BF and weight change. This systematic review highlights the difficulties of examining the association between BF and weight management in observational research.
Although the available evidence challenges the widely held belief that BF promotes weight loss, more robust studies are needed to reliably assess the impact of BF on postpartum weight management. International Journal of Obesity advance online publication, 20 August 2013; doi:10.1038/ijo.2013.132.