39 results for comparison method
in Aston University Research Archive
Abstract:
Purpose: To determine the most appropriate analysis technique for the differentiation of multifocal intraocular lens (MIOL) designs using defocus curve assessment of visual capability. Methods: Four groups of fifteen subjects were implanted bilaterally with either monofocal intraocular lenses, refractive MIOLs, diffractive MIOLs, or a combination of refractive and diffractive MIOLs. Defocus curves between -5.0 D and +1.5 D were evaluated using absolute and relative depth-of-focus methods, the direct comparison method and a new 'Area-of-focus' metric. The results were correlated with a subjective perception of near and intermediate vision. Results: Neither depth-of-focus method of analysis was sensitive enough to differentiate between MIOL groups (p>0.05). The direct comparison method indicated that the refractive MIOL group performed better at +1.00, -1.00 and -1.50 D and worse at -3.00, -3.50, -4.00 and -5.00 D compared to the diffractive MIOL group (p
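The abstract does not define how the 'Area-of-focus' metric is computed. As a minimal illustrative sketch only (synthetic acuity values, an assumed 0.3 logMAR criterion, and trapezoidal integration are all my assumptions, not the study's method), one plausible reading is the area between a defocus curve and an acuity criterion:

```python
import numpy as np

# Hypothetical defocus curve: visual acuity (logMAR) at defocus levels
# from -5.0 D to +1.5 D in 0.5 D steps. Values are illustrative only.
defocus = np.arange(-5.0, 2.0, 0.5)  # dioptres, 14 points
acuity = np.array([0.60, 0.55, 0.50, 0.45, 0.35, 0.30, 0.20,
                   0.10, 0.05, 0.00, 0.05, 0.10, 0.30, 0.50])  # logMAR

# Sketch of an area-of-focus metric: integrate how much acuity is
# better than a criterion across the defocus range (lower logMAR is
# better, so gain = criterion - acuity, floored at zero).
criterion = 0.3
gain = np.clip(criterion - acuity, 0.0, None)

# Trapezoidal integration over the defocus axis (units: logMAR*D).
area = float(np.sum((gain[:-1] + gain[1:]) / 2 * np.diff(defocus)))
print(round(area, 3))
```

A larger area would indicate a wider range of defocus over which acceptable acuity is maintained, which is the kind of single-number summary such a metric presumably aims to provide.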
Abstract:
Measurements of neutron and gamma dose rates in mixed radiation fields, and of gamma dose rates from calibrated gamma sources, were performed using an NE213 liquid scintillation counter with a pulse shape discrimination technique based on the charge comparison method. A computer program was used to analyse the experimental data. The radiation field was obtained from a 241Am-9Be source. There was general agreement between measured and calculated neutron and gamma dose rates in the mixed radiation field, but some disagreement in the measurements of gamma dose rates for gamma sources, due to the dark current of the photomultiplier and the perturbation of the radiation field by the detector. An optical fibre bundle was used to couple an NE213 scintillator to a photomultiplier, in an attempt to minimise these effects. This produced an improvement in the results for gamma sources. However, the optically coupled detector system could not be used for neutron and gamma dose rate measurements in mixed radiation fields: the pulse shape discrimination system became ineffective as a consequence of the slower time response of the detector system.
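The charge comparison method exploits the fact that, in NE213, neutron (proton-recoil) events carry a larger slow scintillation component than gamma (electron) events, so the ratio of charge in the pulse tail to the total charge separates the two. A minimal sketch with synthetic two-exponential pulses (the decay constants and gate positions are illustrative assumptions, not the thesis's calibration):

```python
import numpy as np

def tail_total_ratio(pulse, tail_start=20):
    """Fraction of the integrated pulse charge arriving after
    `tail_start` samples; the gate position is illustrative."""
    return pulse[tail_start:].sum() / pulse.sum()

t = np.arange(200)  # sample index (arbitrary time units)

# Two-exponential pulse model: common fast component plus a slow
# component whose weight is larger for neutron events.
gamma_pulse = np.exp(-t / 5.0) + 0.02 * np.exp(-t / 60.0)
neutron_pulse = np.exp(-t / 5.0) + 0.15 * np.exp(-t / 60.0)

r_gamma = tail_total_ratio(gamma_pulse)
r_neutron = tail_total_ratio(neutron_pulse)

# Neutron events carry more charge in the tail, so a threshold on the
# ratio can reject gamma counts.
assert r_neutron > r_gamma
```

In practice the two populations are separated by a cut on this ratio (often as a function of total pulse height), which is how the gamma counts are rejected.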
Abstract:
In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons, were measured to test nuclear data and calculational methods for D-T fusion reactor neutronics. The spectra were measured with an organic scintillator, using a pulse shape discrimination technique based on a charge comparison method to reject gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was the T(d,n)4He reaction, obtained by bombarding a T-Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinates neutron transport code ANISN with the ZZ-FEWG 1/31-1F cross-section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data and to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. ANISN calculations carried out with P3-S8 approximations, together with representation of the slab assemblies by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data; hence the cross-section set appears satisfactory and, for the materials tested, needs no modification in the range 14.1 MeV to 2 MeV. It would also be possible to carry out a study of fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
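The Gaussian smoothing step broadens each point of the calculated spectrum so that it can be compared with the finite-resolution measurement. A minimal sketch of that idea (the energy grid, resolution width, and toy spectrum below are illustrative assumptions, not the programme described in the abstract):

```python
import numpy as np

def gaussian_smooth(energies, spectrum, sigma):
    """Spread each calculated point over a normalised Gaussian of
    width `sigma`, mimicking detector energy resolution."""
    out = np.zeros_like(spectrum, dtype=float)
    for e, y in zip(energies, spectrum):
        kernel = np.exp(-0.5 * ((energies - e) / sigma) ** 2)
        out += y * kernel / kernel.sum()  # each kernel sums to 1
    return out

E = np.linspace(2.0, 14.1, 122)  # MeV grid, 0.1 MeV steps (illustrative)

# Toy calculated spectrum: a flat continuum plus a 14.1 MeV source line.
calc = np.where(np.abs(E - 14.1) < 0.05, 1.0, 0.05)

smoothed = gaussian_smooth(E, calc, sigma=0.3)  # 0.3 MeV resolution, assumed

# Normalised kernels conserve the integral: smoothing redistributes
# counts between bins but does not create or destroy them.
assert np.isclose(smoothed.sum(), calc.sum())
```

Because each kernel is normalised before being accumulated, the total count is conserved and only the shape is broadened, which is the property needed when matching calculation to experiment.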
Abstract:
Objective: The Any Qualified Provider framework in the National Health Service has changed the way adult audiology services are offered in England. Under the new rules, patients are offered a choice of geographical location and audiology provider. This study aimed to explore how choices in treatment are presented and to identify what information patients need when seeking help with hearing loss. Design: This study adopted the qualitative methods of ethnographic observation and focus group interviews to identify information needed prior to, and during, help-seeking. Observational and focus group data were analysed using the constant comparison method of grounded theory. Study sample: Participants were recruited from a community Health and Social Care Trust in the west of England. This service incorporates both an Audiology and a Hearing Therapy service. Twenty-seven participants were involved in focus groups or interviews. Results: Participants received little information beyond the details of hearing aids, and reported little information that was not directly related to hearing-aid uptake. Conclusions: Participant preferences were not explored, and the limited information resulted in decisions that were clinician-led. The gaps in information reflect previous data on clinician communication and highlight the need for consistent information on a range of interventions to manage hearing loss.
Abstract:
Background: Evaluation of anterior chamber depth (ACD) can potentially identify those patients at risk of angle-closure glaucoma. We aimed to: compare van Herick's limbal chamber depth (LCDvh) grades with LCDorb grades calculated from the Orbscan anterior chamber angle values; determine ACD by Smith's technique and compare it to Orbscan ACD; and calculate a constant for Smith's technique using Orbscan ACD. Methods: Eighty participants free from eye disease underwent LCDvh grading, Smith's technique ACD, and Orbscan anterior chamber angle and ACD measurement. Results: LCDvh overestimated grades by a mean of 0.25 (coefficient of repeatability [CR] 1.59) compared to LCDorb. Smith's technique (constants 1.40 and 1.31) overestimated ACD by a mean of 0.33 mm (CR 0.82) and 0.12 mm (CR 0.79) respectively, compared to Orbscan. Using linear regression, we determined a constant of 1.22 for Smith's slit-length method. Conclusions: Smith's technique (constant 1.31) provided an ACD closer to that found with Orbscan than a constant of 1.40 or LCDvh. Our findings also suggest that Smith's technique would produce values closer to those obtained with Orbscan by using a constant of 1.22.
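Smith's technique estimates ACD as a constant multiplied by a measured slit-beam length, so a constant can be derived by regressing the reference (Orbscan) ACD on the slit-length measurement. A minimal sketch with synthetic data (the simulated measurements, noise level, and regression through the origin are illustrative assumptions, not the study's data):

```python
import numpy as np

# Synthetic example: 80 participants, as in the study, but the
# measurements below are simulated, not the study's data.
rng = np.random.default_rng(0)
slit_length = rng.uniform(2.0, 3.2, size=80)               # mm, assumed range
orbscan_acd = 1.22 * slit_length + rng.normal(0, 0.05, 80)  # mm, simulated

# Least-squares slope through the origin for the model
# ACD = constant * slit_length:  c = sum(x*y) / sum(x*x)
constant = float(np.sum(slit_length * orbscan_acd) / np.sum(slit_length**2))
print(round(constant, 2))
```

With real paired measurements in place of the simulated arrays, the same one-line regression recovers a data-driven constant of the kind the authors report (1.22).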
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Background: Medicines reconciliation (identifying and maintaining an accurate list of a patient's current medications) should be undertaken at all transitions of care and be available to all patients. Objective: A self-completion web survey was conducted for chief pharmacists (or equivalent) to evaluate medicines reconciliation levels in secondary care mental health organisations. Setting: The survey was sent to secondary care mental health organisations in England, Scotland, Northern Ireland and Wales. Method: The survey was launched via Bristol Online Surveys. Quantitative data were analysed using descriptive statistics, and qualitative data were collected through respondents' free-text answers to specific questions. Main outcome measures: To investigate how medicines reconciliation is delivered, incorporate a clear description of the role of pharmacy staff, and identify areas of concern. Results: Forty-two surveys (52% response rate) were completed. Thirty-seven (88.1%) organisations have a formal policy for medicines reconciliation with defined steps. Results show that the pharmacy team (pharmacists and pharmacy technicians) are the main professionals involved in medicines reconciliation, with a high rate of doctors also involved. Training procedures frequently include an induction by pharmacy for doctors, whilst the pharmacy team are generally trained by another member of pharmacy. Mental health organisations estimate that nearly 80% of medicines reconciliation is carried out within 24 h of admission. A full medicines reconciliation is not carried out on patient transfer between mental health wards; instead, quicker and less exhaustive variations are implemented. 71.4% of organisations estimate that pharmacy staff conduct daily medicines reconciliation for acute admission wards (Monday to Friday). However, only 38% of organisations report that pharmacy staff reconcile patients' medication for other teams that admit from primary care.
Conclusion: Most mental health organisations appear to be complying with NICE guidance on medicines reconciliation for their acute admission wards. However, medicines reconciliation is conducted less frequently on other units that admit from primary care, and is rarely completed on transfer when the medication significantly differs from that on admission. Formal training and competency assessments on medicines reconciliation should be considered, as current training varies and adherence to best practice is questionable.
Abstract:
The overall objective of this work was to compare the effect of pre-treatment and catalysts on the quality of liquid products from fast pyrolysis of biomass. This study investigated the upgrading of bio-oil in terms of its quality as a bio-fuel and/or source of chemicals. Bio-oil used directly as a biofuel for heat or power needs to be improved, particularly in terms of temperature sensitivity, oxygen content, chemical instability, solid content, and heating values. Chemicals produced from bio-oil need to be able to meet product specifications for market acceptability. There were two main objectives in this research. The first was to examine the influence of pre-treatment of biomass on the fast pyrolysis process and liquid quality. The relationship between the method of pre-treatment of the biomass feedstock and fast pyrolysis oil quality was studied. The thermal decomposition behaviour of untreated and pretreated feedstocks was studied using TGA (thermogravimetric analysis) and Py-GC/MS (pyroprobe gas chromatography/mass spectrometry). Laboratory-scale reactors (100 g/h, 300 g/h, 1 kg/h) were used to process untreated and pretreated feedstocks by fast pyrolysis. The second objective was to study the influence of numerous catalysts on fast pyrolysis liquids from wheat straw. The first step applied analytical pyrolysis (Py-GC/MS) to determine which catalysts had an effect on the fast pyrolysis liquid, in order to select catalysts for further laboratory fast pyrolysis. The effect of activation, temperature, and biomass pre-treatment on the catalysts was also investigated. Laboratory experiments were also conducted using the existing 300 g/h fluidised bed reactor system with a secondary catalytic fixed bed reactor. The screening of catalysts showed that CoMo was a highly active catalyst, which particularly reduced the higher molecular weight products of fast pyrolysis. From these screening tests, the CoMo catalyst was selected for larger-scale laboratory experiments.
Pre-treatment had a significant effect on the thermal decomposition of the biomass, as well as on the composition of the pyrolysis products and the proportion of key components in the bio-oil. Torrefaction proved to have a mild influence on pyrolysis products compared with aquathermolysis and steam pre-treatment.
Abstract:
Bone marrow mesenchymal stem cells (MSCs) promote nerve growth and functional recovery in animal models of spinal cord injury (SCI) to varying levels. The authors have tested high-content screening to examine the effects of MSC-conditioned medium (MSC-CM) on neurite outgrowth from the human neuroblastoma cell line SH-SY5Y and from explants of chick dorsal root ganglia (DRG). These analyses were compared to previously published methods that involved hand-tracing individual neurites. Both methods demonstrated that MSC-CM promoted neurite outgrowth. Each showed that the proportion of SH-SY5Y cells with neurites increased by ~200% in MSC-CM within 48 h, and that the number of neurites per SH-SY5Y cell was significantly increased in MSC-CM compared with control medium. For high-content screening, the analysis was performed within minutes, testing multiple samples of MSC-CM and in each case measuring >15,000 SH-SY5Y cells. In contrast, the manual measurement of neurite outgrowth from >200 SH-SY5Y cells in a single sample of MSC-CM took at least 1 h. High-content analysis provided additional measures of increased neurite branching in MSC-CM compared with control medium. MSC-CM was also found to stimulate neurite outgrowth in DRG explants using either method. The application of the high-content analysis was less well optimised for measuring neurite outgrowth from DRG explants than from SH-SY5Y cells.
Abstract:
A novel dissolution method was developed, suitable for powder mixtures, based on the USP basket apparatus. The baskets were modified such that the powder mixtures were retained within the baskets and not dispersed, a potential difficulty that may arise when using conventional USP basket and paddle apparatus. The advantages of this method were that the components of the mixtures were maintained in close proximity, maximizing any drug:excipient interaction and leading to more linear dissolution profiles. Two weakly acidic model drugs, ibuprofen and acetaminophen, and a selection of pharmaceutical excipients, including potential dissolution-enhancing alkalizing agents, were chosen for investigation. Dissolution profiles were obtained for simple physical mixtures. The f1 fit factor values, calculated using pure drug as the reference material, demonstrated a trend in line with expectations, with several dissolution enhancers apparent for both drugs. Also, the dissolution rates were linear over substantial parts of the profiles. For both drugs, a rank order comparison between the f1 fit factor and calculated dissolution rate, obtained from the linear section of the dissolution profile, demonstrated a correlation using a significance level of P=0.05. The method was proven to be suitable for discriminating between the effects of excipients on the dissolution of the model drugs. The method design produced dissolution profiles where the dissolution rate was linear for a substantial time, allowing determination of the dissolution rate without mathematical transformation of the data. This method may be suitable as a preliminary excipient-screening tool in the drug formulation development process.
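The f1 fit (difference) factor used above is a standard profile-comparison metric: f1 = 100 × Σ|R_t − T_t| / Σ R_t, where R and T are the reference and test percentage-dissolved values at matched time points, and values near zero indicate similar profiles. A minimal sketch (the dissolution percentages below are illustrative, not the study's data):

```python
import numpy as np

def f1_fit_factor(reference, test):
    """f1 difference factor: 100 * sum|R_t - T_t| / sum(R_t).
    Profiles must be sampled at the same time points."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    return float(100.0 * np.abs(reference - test).sum() / reference.sum())

# Illustrative % dissolved at matched time points; in the study the
# reference would be pure drug and the test a drug:excipient mixture.
pure_drug  = [12, 25, 41, 58, 72, 85]
with_excip = [18, 34, 52, 69, 83, 93]

print(round(f1_fit_factor(pure_drug, with_excip), 1))
```

Ranking mixtures by f1 against the pure-drug reference, as the authors do, flags excipients whose profiles diverge most from the drug alone (here, a dissolution enhancer would show a larger f1 driven by consistently higher test values).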
Abstract:
Recently, we introduced a new 'GLM-beamformer' technique for MEG analysis that enables accurate localisation of both phase-locked and non-phase-locked neuromagnetic effects, and their representation as statistical parametric maps (SPMs). This provides a useful framework for comparison of the full range of MEG responses with fMRI BOLD results. This paper reports a 'proof of principle' study using a simple visual paradigm (static checkerboard). The five subjects each underwent both MEG and fMRI paradigms. We demonstrate, for the first time, the presence of a sustained (DC) field in the visual cortex, and its co-localisation with the visual BOLD response. The GLM-beamformer analysis method is also used to investigate the main non-phase-locked oscillatory effects: an event-related desynchronisation (ERD) in the alpha band (8-13 Hz) and an event-related synchronisation (ERS) in the gamma band (55-70 Hz). We show, using SPMs and virtual electrode traces, the spatio-temporal covariance of these effects with the visual BOLD response. Comparisons between MEG and fMRI data sets generally focus on the relationship between the BOLD response and the transient evoked response. Here, we show that the stationary field and changes in oscillatory power are also important contributors to the BOLD response, and should be included in future studies on the relationship between neuronal activation and the haemodynamic response. © 2005 Elsevier Inc. All rights reserved.
Abstract:
In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double well potential model. It is demonstrated that the variational smoother provides us a very accurate estimate of mean path while conditional variance is slightly underestimated. We conclude with some remarks as to the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
Abstract:
OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation to detect patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33-0.65 for both practice nurses and general practitioners, depending on the calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47-0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%) and the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%).
CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically, to allow the use of the most appropriate calculation tools.
Abstract:
Two contrasting multivariate statistical methods, viz. principal components analysis (PCA) and cluster analysis, were applied to the study of neuropathological variations between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variations between AD cases. These variations were related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous, with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis in the study of neuropathological variations between AD cases.
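PCA of a cases-by-variables matrix like the one described (78 cases × 47 variables) can be computed from the singular value decomposition of the mean-centred data. A minimal sketch on a random matrix of the same shape (the data are illustrative, not the study's measurements):

```python
import numpy as np

# Illustrative stand-in for the study's data matrix:
# 78 AD cases (rows) x 47 neuropathological variables (columns).
rng = np.random.default_rng(42)
X = rng.normal(size=(78, 47))

# PCA via SVD of the column-centred matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                    # component scores, one row per case
explained = s**2 / np.sum(s**2)   # proportion of variance per component

# If cases formed distinct subtypes, their scores would cluster into
# separate groups; a continuous spread of scores in the leading
# components (as PCA suggested here) argues against discrete subtypes.
assert scores.shape == (78, 47)
assert np.isclose(explained.sum(), 1.0)
```

Plotting the first two or three columns of `scores` is the usual way to judge, by eye, whether the between-case variation is clustered or continuous, which is exactly the distinction the abstract draws between the two methods.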
Abstract:
Counts of Pick bodies (PB), Pick cells (PC), senile plaques (SP) and neurofibrillary tangles (NFT) were made in the frontal and temporal cortex from patients with Pick's disease (PD). Lesions were stained histologically with hematoxylin and eosin (HE) and the Bielschowsky silver impregnation method, and labeled immunohistochemically with antibodies raised to ubiquitin and tau. The greatest numbers of PB were revealed by immunohistochemistry. Counts of PB revealed by ubiquitin and tau were highly positively correlated, which suggested that the two antibodies recognized virtually identical populations of PB. The greatest numbers of PC were revealed by HE, followed by the anti-ubiquitin antibody. However, the correlation between counts was poor, suggesting that HE and ubiquitin revealed different populations of PC. The greatest numbers of SP and NFT were revealed by the Bielschowsky method, indicating the presence of Alzheimer-type lesions not revealed by the immunohistochemistry. In addition, more NFT were revealed by the anti-ubiquitin than by the anti-tau antibody. The data suggested that in PD: (i) the anti-ubiquitin and anti-tau antibodies were equally effective at labeling PB; (ii) both HE and anti-ubiquitin should be used to quantitate PC; and (iii) the Bielschowsky method should be used to quantitate SP and NFT.