909 results for EVALUATION MODEL
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and an index of visual field status). Based on the average of two IOP readings obtained by tonometry, true change was detectable at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
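To make the dominance and ICER logic above concrete, the following Python sketch ranks pathways on a cost-effectiveness frontier; the cost and QALY figures are invented placeholders chosen only to mirror the qualitative pattern reported, not results from this study.

```python
# Sketch: incremental cost-effectiveness analysis across monitoring pathways.
# Costs and QALYs are illustrative placeholders, not the study's results.

def incremental_analysis(pathways):
    """Sort pathways by cost, flag dominated ones, and compute ICERs.

    pathways: list of (name, cost, qalys) tuples. A pathway is (strongly)
    dominated if a cheaper pathway yields at least as many QALYs.
    """
    ranked = sorted(pathways, key=lambda p: p[1])  # ascending cost
    results = []
    last_cost, last_qalys = None, None
    for name, cost, qalys in ranked:
        if last_qalys is not None and qalys <= last_qalys:
            results.append((name, cost, qalys, "dominated"))
            continue
        if last_cost is None:
            icer = None  # cheapest non-dominated option: no comparator
        else:
            icer = (cost - last_cost) / (qalys - last_qalys)
        results.append((name, cost, qalys, icer))
        last_cost, last_qalys = cost, qalys
    return results

# Hypothetical per-patient inputs for the five pathways compared above.
pathways = [
    ("treat all",          1200.0, 14.100),
    ("community biennial", 1500.0, 14.105),
    ("hospital biennial",  1700.0, 14.110),
    ("NICE conservative",  2100.0, 14.108),
    ("NICE intensive",     2600.0, 14.112),
]

for name, cost, qalys, verdict in incremental_analysis(pathways):
    label = verdict if isinstance(verdict, str) else (
        "reference" if verdict is None else f"ICER £{verdict:,.0f}/QALY")
    print(f"{name:20s} £{cost:6.0f}  {qalys:7.3f} QALYs  {label}")
```

A pathway is listed as dominated when a cheaper pathway already yields at least as many QALYs; otherwise its ICER is computed against the previous pathway on the frontier, which is how NICE-based pathways can end up either dominated or priced well above £30,000 per QALY.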
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored, highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, the mean time to blindness in at least one eye without treatment was approximately 23 years, compared with 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds, with a screening interval of 10 years, to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age thus appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification, followed by assessment of test positives by a specialised optometrist, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence, assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce the large number of false-positive referrals. The findings that population screening is unlikely to be cost-effective are based on an economic model whose parameter estimates have considerable uncertainty; in particular, if the rate of progression and/or the costs of visual impairment are higher than estimated, then screening could be cost-effective. Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be.
Procedures for identifying those at risk and for quality assuring the programme, as well as adequate service provision for those who screen positive, would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination and by improving the performance of current testing, either by refining practice or by adding a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model: by conducting a feasibility study of interventions to improve detection, by obtaining further data on the costs of blindness, the risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
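A minimal illustration of the Markov submodel idea used in such evaluations: a cohort moves annually between severity states, with earlier (screen-prompted) treatment modelled as a lower progression probability. All transition probabilities and the mortality rate below are assumed for illustration; only the 23- versus 35-year time-to-blindness contrast is taken from the abstract.

```python
import numpy as np

# States of an illustrative glaucoma progression Markov model.
states = ["no_OAG", "OAG_mild", "OAG_severe", "blind", "dead"]

def annual_matrix(prog):
    """Annual transition matrix; prog is the probability of moving one
    severity step per year (treatment is modelled as a smaller prog)."""
    d = 0.01  # assumed annual mortality, all states except 'dead'
    return np.array([
        # to: no_OAG,       mild,         severe,       blind,  dead
        [1 - 0.005 - d, 0.005,        0.0,          0.0,    d],   # no_OAG
        [0.0,           1 - prog - d, prog,         0.0,    d],   # mild
        [0.0,           0.0,          1 - prog - d, prog,   d],   # severe
        [0.0,           0.0,          0.0,          1 - d,  d],   # blind
        [0.0,           0.0,          0.0,          0.0,    1.0], # dead
    ])

def simulate(prog, years=30):
    """Run the cohort forward and accumulate expected years spent blind."""
    cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # all start disease-free
    M = annual_matrix(prog)
    blind_years = 0.0
    for _ in range(years):
        cohort = cohort @ M
        blind_years += cohort[3]
    return cohort, blind_years

# Treatment roughly slows progression (abstract: mean time to blindness in
# at least one eye ~23 years untreated vs ~35 years treated).
untreated = simulate(prog=0.06)
treated = simulate(prog=0.04)
print("expected years blind per person, untreated vs treated:",
      round(untreated[1], 3), round(treated[1], 3))
```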
Abstract:
Here we report two novel 17-mer amidated linear peptides (TsAP-1 and TsAP-2) whose structures were deduced from cDNAs cloned from a venom-derived cDNA library of the Brazilian yellow scorpion, Tityus serrulatus. Both mature peptides were structurally characterised following their location in chromatographic fractions of venom, and synthetic replicates of each were subjected to a range of biological assays. The peptides were each active against model test micro-organisms, but with different potencies. TsAP-1 was of low potency against all three test organisms (MICs 120-160 µM), whereas TsAP-2 was of high potency against the Gram-positive bacterium Staphylococcus aureus (MIC 5 µM) and the yeast Candida albicans (10 µM). Haemolytic activity of TsAP-1 was low (4% at 160 µM); in contrast, that of TsAP-2 was considerably higher (18% at 20 µM). Substitution of four neutral amino acid residues with Lys residues in each peptide had dramatic effects on their antimicrobial potencies and haemolytic activities, particularly those of TsAP-1. The MICs of the enhanced cationic analogue (TsAP-S1) were 2.5 µM for S. aureus and C. albicans and 5 µM for E. coli, but with an associated large increase in haemolytic activity (30% at 5 µM). The same Lys residue substitutions in TsAP-2 produced a dramatic effect on its MIC for E. coli, lowering this from >320 µM to 5 µM. TsAP-1 was ineffective against three of the five human cancer cell lines tested, while TsAP-2 inhibited the growth of all five. Lys residue substitution of both peptides enhanced their potency against all five cell lines, with TsAP-S2 being the most potent, with IC50 values ranging between 0.83 and 2.0 µM. TsAP-1 and TsAP-2 are novel scorpion venom peptides with broad-spectrum antimicrobial and anticancer activities, the potencies of which can be significantly enhanced by increasing their cationicity.
Abstract:
The aspiration that spatial planning should act as the main coordinating function for the transition to a sustainable society is grounded in the assumption that it is capable of incorporating a strong evidence base of environmental accounting for policy, coupled with opportunities for open, deliberative decision-making. While there are a number of increasingly sophisticated methods (such as material flow analysis and ecological footprinting) that can be used to longitudinally determine the impact of policy, there are fewer that can provide a robust spatial assessment of sustainability policy. In this paper, we introduce the Spatial Allocation of Material Flow Analysis (SAMFA) model, which uses the concept of socio-economic metabolism to extrapolate, at multiple spatial levels, the impact of local consumption patterns that may occur as a result of the local spatial planning process. The initial application of the SAMFA model is based on County Kildare in the Republic of Ireland, through spatio-temporal simulation and visualisation of construction material flows and associated energy use in the housing sector. Thus, while we focus on an Irish case study, the model is applicable to spatial planning and sustainability research more generally. Through the development and evaluation of alternative scenarios, the model appears to be successful in its prediction of the cumulative resource and energy impacts arising from consumption and development patterns. This leads to some important insights in relation to the differential spatial distribution of disaggregated material balance and energy use: for example, that rural areas have greater resource accumulation (and are therefore in a sense "less sustainable") than urban areas, confirming that rural housing in Ireland is both more material and more energy intensive. The model therefore has the potential to identify hotspots of higher material and energy use, which can be addressed through targeted planning initiatives or focussed community engagement. Furthermore, because the model allows manipulation of different policy criteria (increased density, urban conservation, etc.), it can also act as an effective basis for multi-stakeholder engagement.
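The core allocation step of a model like SAMFA can be sketched as disaggregating county-level construction material and energy flows to small areas in proportion to projected housing output, with a higher assumed intensity for rural dwellings (the pattern the paper reports). The zone names and all coefficients below are illustrative assumptions, not SAMFA's calibrated values.

```python
# Sketch: spatial disaggregation of housing material/energy flows.
# (zone, projected dwelling completions, is_rural) - illustrative only.
zones = [
    ("Naas urban",     1200, False),
    ("Maynooth urban",  900, False),
    ("Athy rural",      300, True),
    ("Kilcock rural",   250, True),
]

# Assumed per-dwelling intensities; rural housing is taken as more
# material and energy intensive, as the paper's results suggest.
TONNES_PER_DWELLING = {"urban": 250.0, "rural": 330.0}
GJ_PER_DWELLING = {"urban": 450.0, "rural": 560.0}

def allocate(zones):
    """Allocate material mass and energy use to each zone."""
    rows = []
    for name, dwellings, rural in zones:
        kind = "rural" if rural else "urban"
        rows.append({
            "zone": name,
            "materials_t": dwellings * TONNES_PER_DWELLING[kind],
            "energy_GJ": dwellings * GJ_PER_DWELLING[kind],
        })
    return rows

for r in allocate(zones):
    print(f"{r['zone']:16s} {r['materials_t']:10.0f} t {r['energy_GJ']:12.0f} GJ")
```

Comparing per-dwelling figures across zones is what surfaces the urban/rural "hotspot" contrast the paper describes; scenario testing amounts to re-running the allocation with altered intensities or dwelling projections.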
Abstract:
The hybrid test method is a relatively recently developed dynamic testing technique that uses numerical modelling combined with simultaneous physical testing. The concept of substructuring allows the critical or highly nonlinear part of the structure, which is difficult to model numerically with accuracy, to be physically tested, whilst the remainder of the structure, which has a more predictable response, is numerically modelled. In this paper, a substructured soft real-time hybrid test is evaluated as an accurate means of performing seismic tests of complex structures. The structure analysed is a three-storey, two-by-one bay concentrically braced frame (CBF) steel structure subjected to seismic excitation. A ground-storey braced-frame substructure, whose response is critical to the overall response of the structure, is tested, whilst the remainder of the structure is numerically modelled. OpenSees is used for numerical modelling and OpenFresco is used for the communication between the test equipment and the numerical model. A novel approach using OpenFresco to define the complex numerical substructure of an X-braced frame within a hybrid test is also presented. The results of the hybrid tests are compared to purely numerical models using OpenSees and a simulated test using a combination of OpenSees and OpenFresco. The comparative results indicate that the test method provides an accurate and cost-effective procedure for performing full-scale seismic tests of complex structural systems.
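The substructured hybrid-test loop can be illustrated with a minimal sketch: a time-stepping integrator for a three-storey shear model in which the ground-storey restoring force comes from a "physical substructure" call, here stubbed as a bilinear spring. In a real soft real-time test that call would pass the command displacement through middleware such as OpenFresco to actuators and return the measured force; all structural parameters below are assumed.

```python
import numpy as np

def physical_substructure(u1):
    """Stub for the tested ground-storey braced frame: bilinear restoring
    force. In a real hybrid test this would command actuators and read
    back the measured force from load cells."""
    k1, fy = 8.0e7, 2.5e5            # N/m stiffness, N yield force (assumed)
    return float(np.clip(k1 * u1, -fy, fy))

# Three-storey shear building: masses and numerically modelled upper storeys.
m = np.array([4.0e4, 4.0e4, 3.0e4])  # kg
k2, k3 = 6.0e7, 5.0e7                 # N/m, storeys 2 and 3
dt = 0.001                            # s, explicit integration step
ag = 2.0 * np.sin(2 * np.pi * 1.5 * np.arange(0, 10, dt))  # toy ground motion

u = np.zeros(3)
v = np.zeros(3)
for a_g in ag:
    # Storey shears: storey 1 from the (stubbed) physical test,
    # storeys 2 and 3 from the numerical model.
    s1 = physical_substructure(u[0])
    s2 = k2 * (u[1] - u[0])
    s3 = k3 * (u[2] - u[1])
    r = np.array([s1 - s2, s2 - s3, s3])  # restoring force on each mass
    acc = -a_g - r / m                    # equation of motion per storey
    v += acc * dt                         # semi-implicit Euler update
    u += v * dt

print("final roof displacement [m]:", u[2])
```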
Abstract:
Background: Health care organisations worldwide are faced with the need to develop and implement strategic organisational plans to meet the challenges of modern health care. There is a need for models for developing, implementing and evaluating strategic plans that engage practitioners and make a measurable difference to the patients they serve. These presentations describe the development, implementation and evaluation of such a model by a team of senior nurses and practice developers, to underpin a strategy for nursing and midwifery in an acute hospital trust.
Developing a strategy: The PARIHS (Promoting Action on Research Implementation in Health Services) conceptual framework (Kitson et al, 1998) proposes that successful implementation of change in practice is a function of the interplay of three core elements: the level of evidence supporting the proposed change; the context or environment in which the change takes place; and the way in which change is facilitated. We chose to draw on this framework to develop our strategy and implementation plan (O'Halloran, Martin and Connolly, 2005). At the centre of the plan are ward managers. These professionals provide leadership for the majority of staff in the trust and so were seen as a key group in the implementation process.
Abstract:
The safety of our food is an essential requirement of society. One well-recognised threat is that of chemical contamination of our food, where low-molecular-weight compounds such as biotoxins, drug residues and pesticides are present. Low-cost, rapid screening procedures are sought to discriminate the suspect samples from the population, thus selecting only those to be forwarded for confirmatory analysis. Many biosensor assays have been developed as screening tools in food contaminant analysis, but these tend to be based on electrochemistry, fluorescence or surface plasmon resonance. An alternative approach is the use of biolayer interferometry, which has become established in drug discovery and life science studies but is only now emerging as a potential tool in the analysis of food contaminants. A biolayer interferometry biosensor was assessed using domoic acid as a model compound. Instrument repeatability was tested by simultaneously producing six calibration curves, showing replicate repeatability (n = 2) ranging from 0.1 to 6.5 % CV and individual concentration measurements (n = 12) ranging from 4.3 to 9.3 % CV, giving a calibration curve midpoint of 7.5 ng/ml (2.3 % CV, n = 6). Reproducibility was assessed by producing three calibration curves on different days, giving a midpoint of 7.5 ng/ml (3.4 % CV, n = 3). It was further shown, using assay development techniques, that the calibration curve midpoint could be adjusted from 10.4 to 1.9 ng/ml by varying assay parameters before the simultaneous construction of three calibration curves in matrix and buffer. Sensitivity of the assay compared favourably with previously published biosensor data for domoic acid. © 2013 Springer-Verlag Berlin Heidelberg.
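How a calibration midpoint and % CV of the kind quoted above are typically derived can be sketched with a four-parameter logistic fit to standards; the concentrations, signals and replicate midpoints below are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4-parameter logistic: a = zero-dose asymptote, d = high-dose
    asymptote, c = curve midpoint, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Illustrative competitive-format standards: signal falls as analyte rises.
conc = np.array([0.5, 1, 2.5, 5, 10, 25, 50, 100])                    # ng/ml
signal = np.array([1.95, 1.90, 1.75, 1.55, 1.20, 0.80, 0.55, 0.40])   # nm shift

popt, _ = curve_fit(four_pl, conc, signal, p0=[2.0, 1.0, 7.5, 0.3])
a, b, c, d = popt
print(f"calibration midpoint ~ {c:.1f} ng/ml")

# Percent CV across replicate curve midpoints (e.g. day-to-day curves).
midpoints = np.array([7.4, 7.6, 7.5])  # illustrative replicates
cv = 100 * midpoints.std(ddof=1) / midpoints.mean()
print(f"midpoint reproducibility: {cv:.1f} % CV (n = {midpoints.size})")
```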
Abstract:
We report a simple and facile methodology for constructing Pt (6.3 mm × 50 µm) and Cu (6.3 mm × 30 µm) annular microband electrodes for use in room temperature ionic liquids (RTILs) and propose their use for amperometric gas sensing. The suitability of microband electrodes for use in electrochemical analysis was examined in experiments on two systems. The first system, studied to validate the electrochemical responses of the annular microband electrode, was decamethylferrocene (DmFc), a stable internal reference probe commonly used in ionic liquids, in [Pmim][NTf2]; here the diffusion coefficients of DmFc and DmFc+ and the standard electron transfer rate constant for the DmFc/DmFc+ couple were determined by fitting chronoamperometric and cyclic voltammetric responses with relevant simulations. These values were independently compared with those collected from a commercially available Pt microdisc electrode, with excellent agreement. The second system focuses on O2 reduction in [Pmim][NTf2], which is used as a model for gas sensing. The diffusion coefficients of O2 and O2−, and the electron transfer rate constant, were again obtained using chronoamperometry and cyclic voltammetry along with simulations. Results determined from the home-made microbands are again consistent with those evaluated from the commercially available Pt microdisc electrode. These observations indicate that the fabricated annular microband electrodes are suitable for quantitative measurements. Further, the successful use of the Cu electrodes in the O2 system suggests a cheap, disposable sensor for gas detection. © 2013 Elsevier B.V. All rights reserved.
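The diffusion-coefficient extraction step can be illustrated for the simplest planar-diffusion (Cottrell) case; microband and microdisc geometries require geometry-specific expressions, which is why the authors fit numerical simulations, but the sketch below shows the principle. All values are assumed.

```python
import numpy as np

# Cottrell equation: i(t) = n F A c sqrt(D / (pi t)) for planar diffusion.
F = 96485.0   # Faraday constant, C/mol
n = 1         # electrons transferred
A = 7.85e-7   # electrode area, m^2 (1 mm diameter disc, assumed)
c = 1.0       # bulk concentration, mol/m^3 (1 mM)

t = np.linspace(0.05, 1.0, 50)  # s, sampled transient
D_true = 4.0e-10                # m^2/s, plausible for a solute in an RTIL
i = n * F * A * c * np.sqrt(D_true / (np.pi * t))  # synthetic transient

# Cottrell plot: i versus t^(-1/2) is linear with slope n F A c sqrt(D/pi),
# so D is recovered from the fitted slope.
slope = np.polyfit(t ** -0.5, i, 1)[0]
D_est = np.pi * (slope / (n * F * A * c)) ** 2
print(f"recovered D ~ {D_est:.2e} m^2/s")
```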
Abstract:
Drilling of Ti6Al4V is investigated experimentally and numerically. A 3D finite element model was developed, based on a Lagrangian approach, using the commercial finite element software ABAQUS/Explicit. The complex 3D drill geometry is included in the model. The drilling process simulations were performed at combinations of three cutting speeds and four feed rates. The effects of the cutting parameters on the induced thrust force and torque are predicted by the developed model. For validation purposes, experimental trials were performed under conditions similar to the simulations. The forces and torques measured during the experiments are compared with the results of the finite element analysis, and the agreement between the experimental and FE values for force and torque is very good. Moreover, the surface roughness of the holes was measured to characterise the machining. Copyright © 2013 Inderscience Enterprises Ltd.
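One common way to summarise how thrust force scales with cutting parameters across such a 3 × 4 test grid, and to compare FE predictions with measurements on the same footing, is an empirical power-law fit; the force values below are assumed for illustration, not the study's data.

```python
import numpy as np

# Fit measured drilling thrust to F = C * f^a * v^b
# (f = feed in mm/rev, v = cutting speed in m/min).
feeds = np.array([0.05, 0.10, 0.15, 0.20] * 3)    # 4 feed rates
speeds = np.repeat([20.0, 30.0, 40.0], 4)          # 3 cutting speeds
thrust = np.array([420, 680, 900, 1100,
                   400, 650, 870, 1060,
                   385, 630, 845, 1030], dtype=float)  # N, assumed data

# Linearise: ln F = ln C + a ln f + b ln v, then ordinary least squares.
X = np.column_stack([np.ones_like(feeds), np.log(feeds), np.log(speeds)])
coef, *_ = np.linalg.lstsq(X, np.log(thrust), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"F ~ {C:.0f} * f^{a:.2f} * v^{b:.2f}")
```

The fitted exponents make the qualitative finding explicit: thrust is dominated by feed rate (a close to 1 here) and only weakly sensitive to cutting speed (small negative b).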
Abstract:
BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) due to abnormal blood vessels developing that leak fluid and blood at the macula.
OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.
DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).
REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.
INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).
COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry. Reference standard: fundus fluorescein angiography (FFA). Risk of bias was assessed using quality assessment of diagnostic accuracy studies, version 2. Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and cost-utility analysis conducted. NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted (3.5%). Deterministic and probabilistic sensitivity analyses were performed.
RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] were 88% (46% to 98%) and 78% (64% to 88%), respectively. For monitoring, the pooled sensitivity and specificity (95% CI) were 85% (72% to 93%) and 48% (30% to 67%), respectively. The FFA for diagnosis and nurse-technician-led monitoring strategy had the lowest cost (£39,769; QALYs 10.473) and dominated all others except FFA for diagnosis and ophthalmologist-led monitoring (£44,649; QALYs 10.575; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at a £30,000 willingness-to-pay threshold.
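As a worked example of what these pooled accuracy estimates imply in the modelled population, the sketch below converts the diagnostic sensitivity (88%) and specificity (78%) into predictive values at the Markov model's 70% nAMD prevalence; only the arithmetic is added here, no new study data.

```python
def predictive_values(sens, spec, prev):
    """Convert sensitivity/specificity to PPV and NPV at a given prevalence."""
    tp = sens * prev              # true positives per person tested
    fn = (1 - sens) * prev        # false negatives
    fp = (1 - spec) * (1 - prev)  # false positives
    tn = spec * (1 - prev)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sens=0.88, spec=0.78, prev=0.70)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
# At 70% prevalence: PPV ~ 0.90 but NPV ~ 0.74, i.e. a negative OCT
# still leaves a substantial chance of disease, consistent with OCT
# alone being insufficient for diagnosis.
```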
LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.
CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements), through a preference-based study.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
Abstract:
Comprehensive testing for asymptomatic sexually transmitted infections in Northern Ireland has traditionally been provided by genitourinary medicine clinics. As patient demand for services has increased while budgets have remained limited, there has been increasing difficulty in accommodating this demand. In May 2013, the newly commissioned specialist Sexual Health service in the South Eastern Trust sought to pilot a new model of care working alongside a GP partnership of 12 practices. A training programme was developed to enable GPs and practice nurses to deliver Level 1 sexual health care to heterosexual patients aged >16 years, in accordance with the standards of BASHH. A comprehensive care pathway and a dedicated community health advisor supported this new model, with close liaison between primary and secondary care. Testing for Chlamydia, gonorrhoea, HIV and syphilis was offered. The aims of the pilot were achieved, namely to provide accessible, cost-effective sexual health care within a framework of robust clinical governance. Furthermore, it uncovered a high positivity rate for Chlamydia, especially in young men attending their general practice, and demonstrated a high level of patient satisfaction. Moreover, the capacity of secondary care to deliver Levels 2 and 3 services was increased.
Abstract:
We introduce a task-based programming model and runtime system that exploit the observation that not all parts of a program are equally significant for the accuracy of the end-result, in order to trade off the quality of program outputs for increased energy-efficiency. This is done in a structured and flexible way, allowing for easy exploitation of different points in the quality/energy space, without adversely affecting application performance. The runtime system can apply a number of different policies to decide whether it will execute less-significant tasks accurately or approximately.
The experimental evaluation indicates that our system can achieve an energy reduction of up to 83% compared with a fully accurate execution and up to 35% compared with an approximate version employing loop perforation. At the same time, our approach always results in graceful quality degradation.
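The significance-driven scheduling idea can be sketched as follows: each task carries a significance tag plus accurate and approximate implementations, and a runtime policy decides per task which version to execute. The API and the simple top-fraction policy below are illustrative, not the paper's actual runtime; the approximate path uses loop perforation, the baseline mentioned above.

```python
import random

class Task:
    """A task with a programmer-supplied significance tag (0.0 droppable
    .. 1.0 critical) and two implementations of the same computation."""
    def __init__(self, significance, accurate_fn, approx_fn):
        self.significance = significance
        self.accurate_fn = accurate_fn
        self.approx_fn = approx_fn

def run(tasks, ratio):
    """Illustrative policy: run the top `ratio` fraction of tasks (by
    significance) accurately; run the rest approximately to save energy."""
    ranked = sorted(tasks, key=lambda t: t.significance, reverse=True)
    cutoff = int(len(ranked) * ratio)
    return [t.accurate_fn() if i < cutoff else t.approx_fn()
            for i, t in enumerate(ranked)]

# Toy workload: accurate summation vs a perforated version that samples
# every 4th element and rescales (a form of loop perforation).
data = [random.random() for _ in range(1000)]
tasks = [
    Task(1.0, lambda: sum(data), lambda: 4 * sum(data[::4])),  # critical
    Task(0.2, lambda: sum(data), lambda: 4 * sum(data[::4])),  # less significant
]
for result in run(tasks, ratio=0.5):
    print(f"{result:.2f}")
```

Lowering `ratio` trades output quality for energy: less-significant tasks degrade first, which is what yields the graceful quality degradation described above.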
Abstract:
The rock/atmosphere interface is inhabited by a complex microbial community including bacteria, algae and fungi. These communities are prominent biodeterioration agents and remarkably influence the status of stone monuments and buildings. Deeper comprehension of natural biodeterioration processes on stone surfaces has brought about a concept of complex microbial communities referred to as "subaerial biofilms". The practical implication of biofilm formation is that control strategies must be devised both for testing the susceptibility of the organisms within the biofilm and for treating the established biofilm. Model multi-species biofilms associated with mineral surfaces that are frequently refractory to conventional treatment have been used as test targets. A combination of scanning microscopy and image analysis was applied along with traditional cultivation methods and fluorescent activity stains. Such a polyphasic approach allowed a comprehensive quantitative evaluation of biofilm status and development. Effective treatment strategies incorporating chemical and physical agents have been demonstrated to prevent biofilm growth in vitro. Model biofilm growth on an inorganic support was significantly reduced by a combination of PDT and biocides.
Abstract:
As the emphasis on initiatives that can improve environmental efficiency while simultaneously maintaining economic viability has escalated in recent years, attention has turned to more radical concepts of operation. In particular, the cruiser–feeder concept has shown potential for a new-generation, environmentally friendly air-transport system to alleviate the growing pressure on the passenger air-transportation network. However, a full evaluation of realizable benefits is needed to determine how the design and operation of potential feeder-aircraft configurations impact the feasibility of the overall concept. This paper presents an analysis of a cruiser–feeder concept, in which fuel is transferred between the feeder and the cruiser in an aerial-refueling configuration to extend range while reducing cruiser weight, compared against the effects of escalating existing technology levels while retaining existing passenger levels. Up to 14% fuel-burn and 12% operating-cost savings can be achieved when compared with a similar technology-level aircraft concept without aerial refueling, representing up to 26% in fuel burn and 25% in total operating cost over the existing operational model at today's standard fleet technology and performance. However, these potential savings are not uniformly distributed across the network, and the system is highly sensitive to the routes serviced, with reductions in revenue-generation potential observed across the network for aerial-refueling operations due to reductions in passenger revenue.
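The mechanism behind the fuel-burn saving can be illustrated with the Breguet range equation: a cruiser topped up mid-route never has to lift the full mission fuel at take-off, so its average cruise weight, and hence its burn, is lower. The aircraft parameters below are generic assumptions, not the paper's cruiser–feeder design data, and the feeder's own fuel burn is ignored in this toy comparison.

```python
import math

def fuel_for_range(range_km, w_end_kg, LD=18.0, tsfc=1.6e-5, v=250.0):
    """Fuel burned over a cruise segment ending at weight w_end_kg, from the
    Breguet range equation R = (V L/D)/(g c) * ln(W_start / W_end).
    tsfc in kg/(N*s), v in m/s, LD = lift-to-drag ratio (all assumed)."""
    r = range_km * 1000.0
    g = 9.81
    w_start = w_end_kg * math.exp(r * g * tsfc / (v * LD))
    return w_start - w_end_kg

W_EMPTY_PLUS_PAYLOAD = 180_000.0  # kg at end of cruise (assumed)

# Non-stop mission: one 9000 km cruise segment carrying all fuel from the start.
nonstop = fuel_for_range(9000, W_EMPTY_PLUS_PAYLOAD)

# Aerial refuelling: two 4500 km segments, each starting with only the fuel
# needed for that segment.
refuelled = 2 * fuel_for_range(4500, W_EMPTY_PLUS_PAYLOAD)

print(f"non-stop fuel burn:    {nonstop:,.0f} kg")
print(f"with mid-route top-up: {refuelled:,.0f} kg "
      f"({100 * (nonstop - refuelled) / nonstop:.1f}% less)")
```

With these assumptions the saving comes out at roughly 8%, the same order as the 14% the study reports once the full network effects are included.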
Abstract:
Side-channel analysis of cryptographic systems can allow for the recovery of secret information by an adversary even where the underlying algorithms have been shown to be provably secure. This is achieved by exploiting the unintentional leakages inherent in the underlying implementation of the algorithm in software or hardware. Within this field of research, a class of attacks known as profiling attacks (more specifically, as used here, template attacks) has been shown to be extremely efficient at extracting secret keys. Template attacks assume a strong adversarial model, in that an attacker has an identical device with which to profile the power consumption of various operations. This profile can then be used to efficiently attack the target device. Inherent in this assumption is that the power consumption across the devices under test is somewhat similar. This central tenet of the attack is largely unexplored in the literature, with the research community generally performing the profiling stage on the same device as that being attacked. This is beneficial for evaluation or penetration testing, as it is essentially the best-case scenario for an attacker, where the model built during the profiling stage matches that of the target device exactly; however, it is not necessarily a reflection of how the attack will work in reality. In this work, a large-scale evaluation of this assumption is performed, comparing the key-recovery performance across 20 identical smart-cards when performing a profiling attack.
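A minimal sketch of the two phases of a template attack, under a toy Hamming-weight leakage model: profiling builds per-class Gaussian templates from labelled traces, and the attack classifies a fresh trace by likelihood. In the cross-device setting evaluated here, build_templates would run on the profiling card and attack on traces captured from a different card; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(x):
    return bin(x).count("1")

def leak(value, n_samples=5, noise=0.5):
    """Toy leakage: trace samples centred on the Hamming weight of `value`,
    plus Gaussian noise (a stand-in for measured power samples)."""
    return hamming_weight(value) + rng.normal(0, noise, n_samples)

def build_templates(n_traces=2000):
    """Profiling phase: per leakage class (Hamming weight), estimate the
    mean trace and a pooled noise variance from labelled traces."""
    by_hw = {hw: [] for hw in range(9)}
    for _ in range(n_traces):
        v = int(rng.integers(0, 256))
        by_hw[hamming_weight(v)].append(leak(v))
    return {hw: (np.mean(traces, axis=0), np.var(np.concatenate(traces)))
            for hw, traces in by_hw.items()}

def attack(trace, templates):
    """Attack phase: score each class by Gaussian log-likelihood and
    return the best-matching Hamming weight."""
    scores = {hw: -np.sum((trace - mu) ** 2) / (2 * var)
              for hw, (mu, var) in templates.items()}
    return max(scores, key=scores.get)

templates = build_templates()
secret = 0x2B  # hypothetical intermediate value, Hamming weight 4
guess = attack(leak(secret), templates)
print("true HW:", hamming_weight(secret), "recovered HW:", guess)
```

The cross-device question the study examines corresponds to a mean or variance mismatch between the templates and the attack traces: if each card's leakage is offset or scaled differently, templates profiled on one card degrade when applied to another.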