29 results for Experience Sampling Methods
Abstract:
The use of Diagnosis Related Groups (DRG) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to gradually phase in the use of the DRG system for budget allocation by using blended hospital-specific and national DRG case-mix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital-specific costs was to decrease, while the percentage based on DRG case-mix was to increase. This was scheduled to continue until 1995, when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third party payers are currently paying based on DRGs, the adoption of DRG case-mix as a National Health Service budget-setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget-setting criterion as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992–1994, using both parametric and nonparametric frontier models. We find evidence that the DRG payment system does appear to have had a positive impact on the productivity and technical efficiency of some commonly employed diagnostic technologies in Portugal during this time span.
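The productivity decomposition mentioned here (technological change and technical efficiency change) is conventionally captured by the output-based Malmquist index; a standard formulation, assumed rather than quoted from the paper, is:

```latex
% Malmquist productivity index between periods t and t+1, decomposed into
% technical efficiency change (first factor) and technological change
% (second, geometric-mean factor); D^t is the period-t distance function.
M_{t,t+1} = \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t}(x^{t}, y^{t})}
\times
\left[ \frac{D^{t}(x^{t+1}, y^{t+1})}{D^{t+1}(x^{t+1}, y^{t+1})}
\cdot \frac{D^{t}(x^{t}, y^{t})}{D^{t+1}(x^{t}, y^{t})} \right]^{1/2}
```

A value above one indicates productivity growth; the two factors attribute it to catching up with the frontier versus movement of the frontier itself.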
Abstract:
In any investigation in optometry involving more than two treatment or patient groups, an investigator should be using ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the treatment variation can be partitioned to test these effects directly. If 'post-hoc' tests are used, then an experimenter should examine the degree of protection offered by the test against the possibilities of making either a type 1 or a type 2 error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to a specific experimental design. The uses of some of the most common forms of ANOVA in optometry have been described in a further article. If in any doubt, an investigator should consult a statistician with experience of the analysis of experiments in optometry, since once embarked upon an experiment with an unsuitable design, there may be little that a statistician can do to help.
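A minimal sketch of the single-classification (one-way) ANOVA described above, using hypothetical illustration data (the article itself presents no code):

```python
# One-way ANOVA for three treatment groups; values are invented for illustration.
from scipy import stats

treatment_a = [12.1, 11.8, 12.5, 12.0, 11.9]
treatment_b = [13.0, 12.7, 13.4, 12.9, 13.1]
treatment_c = [11.2, 11.0, 11.5, 11.3, 11.1]

# Tests the null hypothesis that all group means are equal.
f_stat, p_value = stats.f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F would then be followed by the planned comparisons or protected post-hoc tests the article discusses.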
Abstract:
PCA/FA is a method of analyzing complex data sets in which there are no clearly defined X or Y variables. It has multiple uses, including the study of the pattern of variation between individual entities such as patients with particular disorders, and the detailed study of descriptive variables. In most applications, variables are related to a smaller number of ‘factors’ or PCs that account for the maximum variance in the data and hence may explain important trends among the variables. An increasingly important application of the method is in the ‘validation’ of questionnaires that attempt to relate subjective aspects of a patient's experience to more objective measures of vision.
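A minimal sketch of the kind of analysis described, applied to a hypothetical questionnaire data set (rows are patients, columns are item scores; all values are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
scores = rng.normal(size=(50, 10))   # 50 patients x 10 questionnaire items

pca = PCA(n_components=3)            # retain the 3 highest-variance components
factors = pca.fit_transform(scores)  # each patient scored on each component

print(pca.explained_variance_ratio_) # variance accounted for by each component
print(pca.components_)               # loadings relating items to components
```

In questionnaire validation, the loadings indicate which items cluster onto a common underlying factor.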
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant head permeameter test in-situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at a 0.4 m depth at the outlet. According to dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow-path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow-path. The overall volumetric efficiency of the clogged system was 71%, and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
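The in-situ device described above recreates the constant head permeameter test, which recovers hydraulic conductivity from Darcy's law; the standard relation (not stated in the abstract) is:

```latex
% Constant-head permeameter relation (standard Darcy form): K is hydraulic
% conductivity, Q the steady volumetric flow rate, L the flow-path length,
% A the cross-sectional area and \Delta h the imposed head difference.
K = \frac{Q \, L}{A \, \Delta h}
```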
Abstract:
Purpose - To assess clinical outcomes and subjective experience after bilateral implantation of a diffractive trifocal intraocular lens (IOL). Setting - Midland Eye Institute, Solihull, United Kingdom. Design - Cohort study. Methods - Patients had bilateral implantation of Finevision trifocal IOLs. Uncorrected distance visual acuity, corrected distance visual acuity (CDVA), and manifest refraction were measured 2 months postoperatively. Defocus curves were assessed under photopic and mesopic conditions over a range of +1.50 to -4.00 diopters (D) in 0.50 D steps. Contrast sensitivity function was assessed under photopic conditions. Halometry was used to measure the angular size of monocular and binocular photopic scotomas arising from a glare source. Patient satisfaction with uncorrected near vision was assessed using the Near Activity Visual Questionnaire (NAVQ). Results - The mean monocular CDVA was 0.08 ± 0.08 (SD) logMAR and the mean binocular CDVA was 0.06 ± 0.08 logMAR. Defocus curve testing showed an extended range of clear vision from +1.00 to -2.50 D defocus, with a significant difference in acuity between photopic conditions and mesopic conditions at -1.50 D defocus only. Photopic contrast sensitivity was significantly better binocularly than monocularly at all spatial frequencies. Halometry showed a glare scotoma of a mean size similar to that in previous studies of multifocal and accommodating IOLs; there were no subjective complaints of dysphotopsia. The mean NAVQ Rasch score for satisfaction with near vision was 15.9 ± 10.7 logits. Conclusions - The trifocal IOL implanted binocularly produced good distance visual acuity and near and intermediate visual function. Patients were very satisfied with their uncorrected near vision.
Abstract:
In this thesis I contribute to the understanding of the experience of living with Age-Related Macular Degeneration (AMD) and its impact on quality of life through the use of a pragmatically guided mixed methods approach. AMD is a condition resulting in the loss of central vision in old age which can have a huge impact on the lives of patients. This thesis includes: literature reviewing; qualitative meta-synthesis; surveys and descriptive statistics; observation; and analysis of in-depth interviewing, in order to build a picture of what it is like for older people to live with AMD. I present the findings from six separate studies, each designed to answer specific research questions. I begin with a mixed methods study to determine how well the most commonly used measure of quality of life for AMD patients represents patient experiences. I then go on to investigate the experiences of patients with AMD through a meta-synthesis of qualitative research, and finally present four of my own empirical studies: three investigate the experiences of patients with different types of AMD (early dry AMD, treatable wet AMD and advanced wet AMD), and the final study investigates what it is like for a couple living together with AMD. Throughout the qualitative studies I use Interpretative Phenomenological Analysis (IPA) to develop an understanding of the experiences and life contexts of patients with AMD. Through rigorous analysis, I identify a range of themes which highlight the shared and divergent experiences of individuals with AMD and the need to acknowledge patients' past, present and potential future life contexts and experiences when providing services to older people with AMD. I relate the findings of the six studies to the wider psychological literature on chronic illness and make recommendations for services for patients with AMD to be provided holistically within a lifeworld-led health care model.
Abstract:
The CASE Award PhD is a relatively new approach to completing academic research degrees, aligning the ideals of comprehensive research training and cross-collaboration between academics and organisations. As the initial wave of CASE funded PhD research begins to near completion, and indeed becomes evident through the publication of results, now is an appropriate time to begin evaluating how to successfully deliver a CASE PhD, and to analyse best practice approaches to completing a CASE Award with an organisation. This article intends to offer insight into the CASE PhD process, with a focus on methods of communication to successfully implement this kind of research in collaboration with an organisation.
Abstract:
Objective: To characterize the population pharmacokinetics of canrenone following administration of potassium canrenoate (K-canrenoate) in paediatric patients. Methods: Data were collected prospectively from 37 paediatric patients (median weight 2.9 kg, age range 2 days–0.85 years) who received intravenous K-canrenoate for management of retained fluids, for example in heart failure and chronic lung disease. Dried blood spot (DBS) samples (n = 213) from these patients were analysed for canrenone content and the data subjected to pharmacokinetic analysis using nonlinear mixed-effects modelling. Another group of patients (n = 16), who had 71 matching plasma and DBS samples, was analysed separately to compare canrenone pharmacokinetic parameters obtained using the two different matrices. Results: A one-compartment model best described the DBS data. Significant covariates were weight, postmenstrual age (PMA) and gestational age (GA). The final population models for canrenone clearance (CL/F) and volume of distribution (V/F) in DBS were CL/F (l/h) = 12.86 × (WT/70)^0.75 × e^(0.066 × (PMA − 40)) and V/F (l) = 603.30 × (WT/70) × (GA/40)^1.89, where weight (WT) is in kilograms. The corresponding values of CL/F and V/F in a patient with a median weight of 2.9 kg are 1.11 l/h and 20.48 l, respectively. The estimated half-life of canrenone based on DBS concentrations was similar to that based on matched plasma concentrations (19.99 and 19.37 h, respectively, in a 70 kg patient). Conclusion: The range of estimated CL/F in DBS for the study population was 0.12–9.62 l/h; hence, bodyweight-based dosage adjustment of K-canrenoate appears necessary. However, a dosing scheme that takes into consideration both weight and age (PMA/gestational age) of paediatric patients seems more appropriate.
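As a quick numerical check of the reported typical values, the sketch below evaluates the published model equations; the PMA and GA values used are assumptions chosen to be plausible for the median 2.9 kg patient, not covariates reported in the abstract:

```python
import math

def cl_f(wt_kg: float, pma_wk: float) -> float:
    """Clearance CL/F in l/h: 12.86 * (WT/70)^0.75 * exp(0.066 * (PMA - 40))."""
    return 12.86 * (wt_kg / 70.0) ** 0.75 * math.exp(0.066 * (pma_wk - 40.0))

def v_f(wt_kg: float, ga_wk: float) -> float:
    """Volume of distribution V/F in l: 603.30 * (WT/70) * (GA/40)^1.89."""
    return 603.30 * (wt_kg / 70.0) * (ga_wk / 40.0) ** 1.89

# Hypothetical covariates for the median 2.9 kg patient (assumed, not reported):
print(cl_f(2.9, pma_wk=39.0))  # ~1.11 l/h, matching the quoted estimate
print(v_f(2.9, ga_wk=36.0))    # ~20.48 l, matching the quoted estimate
```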
Abstract:
Background: The NHS Health Check was designed by the UK Department of Health to address the increased prevalence of cardiovascular disease by identifying risk levels and facilitating behaviour change. It comprised biomedical testing, personalised advice and lifestyle support. The objective of the study was to explore Health Care Professionals' (HCPs) and patients' experiences of delivering and receiving the NHS Health Check in an inner-city region of England. Methods: Patients and HCPs in primary care were interviewed using semi-structured schedules. Data were analysed using Thematic Analysis. Results: Four themes were identified. Firstly, Health Check as a test of 'roadworthiness' for people. The roadworthiness metaphor resonated with some patients but it signified a passive stance toward illness. Some patients described the check as useful in the theme, Health Check as revelatory. HCPs found visual aids demonstrating levels of salt/fat/sugar in everyday foods and a 'traffic light' tape measure helpful in communicating such 'revelations' with patients. Being SMART and following the protocol revealed that few HCPs used SMART goals and few patients spoke of them. HCPs require training to understand their rationale compared with traditional advice-giving. The need for further follow-up revealed disparity in follow-ups, and patients were not systematically monitored over time. Conclusions: HCPs' training needs to include the use and evidence of the effectiveness of SMART goals in changing health behaviours. The significance of fidelity to protocol needs to be communicated to HCPs and commissioners to ensure consistency. Monitoring and measurement of follow-up, e.g., tracking of referrals, need to be resourced to provide evidence of the success of the NHS Health Check in terms of healthier lifestyles and reduced CVD risk.
Abstract:
Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using visual grading, and to compare the subjectively derived grades to an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. 86 monochromatic retinal images with the optic nerve head centred (52 healthy volunteers) were obtained using a Zeiss FF450+ fundus camera. Arterio-venous ratios (AVR), central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were calculated semi-automatically on three separate occasions by a single observer using the software VesselMap (Imedos Systems, Jena, Germany). Following the automated grading, three examiners graded the AVR visually on three separate occasions in order to assess their agreement. Results: Reproducibility of the semi-automatic parameters was excellent (ICCs: 0.97 (CRAE), 0.985 (CRVE) and 0.952 (AVR)). However, visual grading of AVR showed inter-grader differences as well as discrepancies between subjectively derived and objectively calculated AVR (all p < 0.000001). Conclusion: Grader education and experience lead to inter-grader differences but, more importantly, subjective grading is not capable of picking up subtle differences across healthy individuals and does not represent the true AVR when compared with an objective assessment method. Advances in technology mean we no longer need to rely on ophthalmoscopic evaluation but can capture and store fundus images with retinal cameras, enabling vessel calibre to be measured more accurately than by visual estimation; hence objective measurement should be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibres. © 2014 Spanish General Council of Optometry.
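A sketch of the repeatability analysis described above: intraclass correlation (ICC) for semi-automated AVR measured on three occasions. The AVR values below are hypothetical; the study's VesselMap output format is not shown here.

```python
import pandas as pd
import pingouin as pg

images = [1, 2, 3, 4, 5, 6]
data = pd.DataFrame({
    "image":    [i for i in images for _ in range(3)],   # each image rated 3 times
    "occasion": ["t1", "t2", "t3"] * len(images),
    "avr":      [0.72, 0.71, 0.73, 0.65, 0.66, 0.65,
                 0.80, 0.79, 0.81, 0.70, 0.70, 0.71,
                 0.62, 0.63, 0.62, 0.76, 0.75, 0.77],
})

# Computes the standard ICC variants; the repeatability-oriented forms are
# typically the ones reported for a single observer re-measuring images.
icc = pg.intraclass_corr(data=data, targets="image",
                         raters="occasion", ratings="avr")
print(icc[["Type", "ICC", "CI95%"]])
```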
Abstract:
This paper advances a philosophically informed rationale for the broader, reflexive and practical application of arts-based methods to benefit research, practice and pedagogy. It addresses the complexity and diversity of learning and knowing, foregrounding a cohabitative position and recognition of a plurality of research approaches, tailored and responsive to context. Appreciation of art and aesthetic experience is situated in the everyday, underpinned by multi-layered exemplars of pragmatic visual-arts narrative inquiry undertaken in the third, creative and communications sectors. Discussion considers semi-guided use of arts-based methods as a conduit for topic engagement, reflection and intersubjective agreement, alongside observation and interpretation of organically employed approaches used by participants within daily norms. Techniques span handcrafted (drawing), digital (photography), hybrid (cartooning), performance dimensions (improvised installations) and music (metaphor and structure). The process of creation, the artefact/outcome produced and experiences of consummation are all significant, with specific reflexivity impacts. Exploring methodology and epistemology, both the "doing" and its interpretation are explicated to inform method selection, replication, utility, evaluation and development of cross-media skills literacy. Approaches are found to be engaging, accessible and empowering, with nuanced capabilities to alter relationships with phenomena, experiences and people. By building a discursive space that reduces barriers, emancipation, interaction, polyphony, letting-go and the progressive unfolding of thoughts are supported, benefiting ways of knowing, narrative (re)construction, sensory perception and capacities to act. This can also present underexplored researcher risks with respect to emotion work, self-disclosure, identity and agenda. The paper therefore elucidates complex, intricate relationships between form and content, the represented and the representation or performance, researcher and participant, and the self and other. This benefits understanding of phenomena including personal experience, sensitive issues, empowerment, identity, transition and liminality. Observations are relevant to qualitative and mixed methods researchers and a multidisciplinary audience, with explicit identification of challenges, opportunities and implications.
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied to act as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user's data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum.
The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
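As a rough illustration of a continuity metric of this kind, the sketch below combines pause duration and pause frequency into a single no-reference score; this is an assumed simplification for illustration, not the thesis's actual Pause Intensity formulation:

```python
# Illustrative no-reference continuity score in the spirit of Pause Intensity.
# NOTE: an assumed simplification, not the exact definition from the thesis.
def pause_intensity(pauses_s: list[float], playback_s: float) -> float:
    if playback_s <= 0:
        raise ValueError("playback duration must be positive")
    pause_ratio = sum(pauses_s) / playback_s   # duration effect: fraction of time paused
    return pause_ratio * len(pauses_s)         # frequency effect: number of pauses

# Example: three pauses (2 s, 1 s and 3 s) during 120 s of playback.
print(pause_intensity([2.0, 1.0, 3.0], playback_s=120.0))  # 0.15
```

The point of such a joint score is that many short pauses and a few long ones can be equally damaging to perceived continuity, which is what the subjective tests against Mean Opinion Score are designed to confirm.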
Abstract:
Abstract (provisional): Background: Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice, therefore helping individuals overcome problems and respond appropriately is important. There is little understanding of what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). Methods: The accounts of 3 medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case-by-case, allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. Results: The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own individual combination of problems, but experienced remediation as a one-dimensional intervention focused only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. Conclusions: These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are a part of the picture.
Abstract:
One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface does not directly indicate the location of the active neuronal assemblies. This is an expression of the ambiguity of the underlying static electromagnetic inverse problem, partly due to the relatively limited number of independent measures available: a given electric potential distribution recorded at the scalp can be explained by the activity of infinitely many different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths, given the geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. Head models vary from the computationally simpler spherical models (three or four concentric spheres) to realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models, though computationally intensive and difficult to implement, can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions about the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of the sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004). The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided the spatial distribution is uniform. The second factor relates to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
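In the generic linear formulation that underlies ESI (a standard statement, assumed here rather than quoted from Michel et al., 2004), the forward problem and a regularised inverse estimate can be written as:

```latex
% Forward problem: scalp potentials v arise from source currents j through
% the lead-field matrix L (fixed by the head model), plus measurement noise n.
\mathbf{v} = \mathbf{L}\,\mathbf{j} + \mathbf{n}

% With far more candidate sources than electrodes, L has a non-trivial null
% space, so a regularised (e.g. minimum-norm) inverse selects one of the
% infinitely many compatible source configurations:
\hat{\mathbf{j}} = \mathbf{L}^{\top} \left( \mathbf{L}\mathbf{L}^{\top} + \lambda \mathbf{I} \right)^{-1} \mathbf{v}
```

The choice of regularisation embodies the statistical and anatomical assumptions the passage refers to, and the head model determines L itself.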