793 results for PHARMACY-BASED MEASURES
Abstract:
This paper examines the use of trajectory distance measures and clustering techniques to define normal
and abnormal trajectories in the context of pedestrian tracking in public spaces. To detect abnormal
trajectories, we first define what is meant by a normal trajectory in a given scene. Every trajectory
that deviates from this normality is then classified as abnormal. By combining Dynamic Time Warping with a
modified K-Means algorithm for arbitrary-length data series, we have developed an algorithm for trajectory
clustering and abnormality detection. The final system achieves overall accuracies of 83% and 75%
when tested on two different standard datasets.
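The core distance measure the abstract names, Dynamic Time Warping, can be sketched for two 2-D trajectories of different lengths as below. This is a minimal illustrative implementation of standard DTW, not the authors' code; the function name and the Euclidean point distance are assumptions.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic Time Warping distance between two trajectories of possibly
    different lengths; each trajectory is an (n, 2) array of (x, y) points.
    Illustrative sketch, not the paper's implementation."""
    n, m = len(traj_a), len(traj_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])  # point distance
            cost[i, j] = d + min(cost[i - 1, j],       # step in traj_a only
                                 cost[i, j - 1],       # step in traj_b only
                                 cost[i - 1, j - 1])   # step in both
    return cost[n, m]
```

Because DTW handles arbitrary-length series, the resulting pairwise distances can feed a K-Means variant that uses medoid-style cluster representatives instead of coordinate-wise means.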
Abstract:
OBJECTIVES: The aim of this study was to describe the epidemiology of Ebstein's anomaly in Europe and its association with maternal health and medication exposure during pregnancy.
DESIGN: We carried out a descriptive epidemiological analysis of population-based data.
SETTING: We included data from 15 European Surveillance of Congenital Anomalies Congenital Anomaly Registries in 12 European countries, with a population of 5.6 million births during 1982-2011.
PARTICIPANTS: Cases included live births, fetal deaths from 20 weeks gestation, and terminations of pregnancy for fetal anomaly.
MAIN OUTCOME MEASURES: We estimated total prevalence per 10,000 births. Odds ratios for exposure to maternal illnesses/medications in the first trimester of pregnancy were calculated by comparing Ebstein's anomaly cases with cardiac and non-cardiac malformed controls, excluding cases with genetic syndromes and adjusting for time period and country.
RESULTS: In total, 264 Ebstein's anomaly cases were recorded; 81% were live births, 2% of which were diagnosed after the 1st year of life; 54% of cases with Ebstein's anomaly or a co-existing congenital anomaly were prenatally diagnosed. Total prevalence rose over time from 0.29 (95% confidence interval (CI) 0.20-0.41) to 0.48 (95% CI 0.40-0.57) (p<0.01). In all, nine cases were exposed to maternal mental health conditions/medications (adjusted odds ratio (adjOR) 2.64, 95% CI 1.33-5.21) compared with cardiac controls. Cases were more likely to be exposed to maternal β-thalassemia (adjOR 10.5, 95% CI 3.13-35.3, n=3) and haemorrhage in early pregnancy (adjOR 1.77, 95% CI 0.93-3.38, n=11) compared with cardiac controls.
CONCLUSIONS: The increasing prevalence of Ebstein's anomaly may be related to better and earlier diagnosis. Our data suggest that Ebstein's anomaly is associated with maternal mental health problems generally rather than lithium or benzodiazepines specifically; therefore, changing or stopping medications may not be preventative. We found new associations requiring confirmation.
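For readers unfamiliar with the case-control arithmetic behind figures such as "adjOR 2.64, 95% CI 1.33-5.21", a crude (unadjusted) odds ratio with a Woolf-type confidence interval can be computed from a 2x2 table as below. The counts are hypothetical, and the study's reported ORs were additionally adjusted for time period and country, which this sketch omits.

```python
import math

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Unadjusted odds ratio with a 95% CI (Woolf method on the log scale).
    Hypothetical illustration only; does not reproduce the paper's adjusted models."""
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                   + 1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```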
Abstract:
1. Genomewide association studies (GWAS) enable detailed dissections of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured. It is therefore essential that a method for GWAS includes the process of repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model and at the same time is computationally fast enough to be useful. 2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is first estimated and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden. 3. The method we present here was successful in estimating permanent environmental effects from simulated repeated measures data. Additionally, we found that especially for variable phenotypes having large variation between years, the repeated measurements model has a substantial increase in power compared to a model using average phenotypes as a response. 4. The method is available in the R package RepeatABEL. It increases the power in GWAS having repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
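The two-stage estimator the abstract describes, estimating the covariance structure first and then fitting each SNP effect by generalized least squares, can be sketched as below. This is an illustrative reduction, not RepeatABEL's code; in practice V is assembled from the estimated polygenic and permanent environmental variance components.

```python
import numpy as np

def gls_snp_effect(y, X, V):
    """Generalized least squares estimate of fixed effects (e.g. an intercept
    and a SNP genotype column in X) given a pre-estimated covariance matrix V
    of the repeated phenotype records:
        beta_hat = (X' V^-1 X)^-1 X' V^-1 y
    Sketch under simplified assumptions, not RepeatABEL's implementation."""
    Vinv = np.linalg.inv(V)
    XtVinv = X.T @ Vinv
    return np.linalg.solve(XtVinv @ X, XtVinv @ y)
```

With V fixed, only the cheap GLS step is repeated per SNP, which is what keeps a genome-wide scan over thousands of SNPs computationally feasible.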
Abstract:
In many product categories, unit prices facilitate price comparisons across brands and package sizes; this enables consumers to identify the products that provide the greatest value. However, in other product categories, unit prices may be confusing. This is because there are two types of unit pricing: measure-based and usage-based. Measure-based unit prices are what the name implies: price is expressed in cents or dollars per unit of measure (e.g., ounce). Usage-based unit prices, on the other hand, are expressed in cents or dollars per use (e.g., wash load or serving). The results of this study show that in two different product categories (laundry detergent and dry breakfast cereal), measure-based unit prices reduced consumers' ability to identify higher-value products, whereas providing a usage-based unit price increased their ability to identify product value. When provided with both a measure-based and a usage-based unit price, respondents did not perform as well as when they were provided only a usage-based unit price, additional evidence that the measure-based unit price hindered consumers' comparisons. Finally, two potential moderators, education about the meaning of the two measures and having to rank-order the options in the choice set in terms of value before choosing, did not eliminate these effects.
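The distinction between the two unit-price types is simple arithmetic, which the hypothetical detergent example below makes concrete (all prices and quantities are invented for illustration): a concentrated detergent can look expensive per ounce yet cheap per wash load.

```python
def measure_based_unit_price(price, ounces):
    """Price per unit of measure (e.g. dollars per ounce)."""
    return price / ounces

def usage_based_unit_price(price, uses):
    """Price per use (e.g. dollars per wash load)."""
    return price / uses

# Hypothetical products: a concentrate and a regular detergent.
concentrate = {"price": 10.0, "ounces": 50, "loads": 40}
regular = {"price": 8.0, "ounces": 100, "loads": 25}
```

Here the concentrate costs more per ounce (0.20 vs 0.08) but less per load (0.25 vs 0.32), so the two unit-price types point the shopper at different "best value" products.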
Abstract:
The current study was carried out to evaluate the impact of a well-being curriculum based on existing knowledge of themes within positive psychology (PP) which contribute to well-being. The Positive Well-Being Curriculum consists of twelve ninety-minute sessions delivered weekly during a school term. The twelve well-being sessions fit into four domains: positive experience, positive emotions, positive relationships, and achievement and meaning (Seligman, 2007). The objectives of the study were to test the practical implications of running a well-being curriculum, to develop a range of activities within each domain, and to evaluate the impact on student well-being with regard to life satisfaction, positive affectivity and subjective happiness. A pilot was carried out as preparation for the main mixed-method intervention study, which was conducted in two London primary schools. Pre- and post-intervention data were collected using standardised measures, focus groups and one-to-one interviews. Findings from the pilot demonstrated a significant increase in well-being, as shown by increases in life satisfaction, positive affect and subjective happiness. Additional information was gathered which informed the content and implementation of the curriculum in the main study. The experience of taking part in the study, as evidenced through qualitative and quantitative results, indicates that the Positive Well-Being Curriculum was perceived by participating teachers and children to contribute positively to the well-being of the children. These findings will be of interest to educational psychologists, as there is increasing interest from schools in creative and validated resources to support and enhance the well-being of all children. A number of useful insights were developed about the usefulness of the curriculum for children in a variety of educational settings.
Abstract:
This thesis examines the impact on child and adolescent psychotherapists within CAMHS of the introduction of routine outcome measures (ROMs) associated with the Children and Young People's Improving Access to Psychological Therapies programme (CYP-IAPT). All CAMHS therapists working within a particular NHS mental health Trust were required to trial CYP-IAPT ROMs as part of their everyday clinical practice from October 2013 to September 2014. During this period considerable freedom was allowed as to which of the measures each therapist used and at what frequency. In order to assess the impact of CYP-IAPT ROMs on child psychotherapy, I conducted semi-structured interviews with eight psychotherapists within a particular CAMHS partnership within one NHS Trust. Each statement was coded and grouped according to whether it related to initial (generic) assessment, goal setting / monitoring, monitoring ongoing progress, therapeutic alliance, or to issues concerning how data might be used or interpreted by managers and commissioners. Analysis of the interviews revealed greatest concern about session-by-session ROMs, as these are felt to impact most significantly on psychotherapy; therapists felt that session-by-session ROMs do not take account of negative transference relationships, are overly repetitive, and are used to reward or punish the therapist. Measures used at assessment and review were viewed as most compatible with psychotherapy, although often experienced as excessively time-consuming. The Goal Based Outcome Measure was generally experienced as compatible with psychotherapy so long as goals are formed collaboratively between therapist and young person. There was considerable anxiety about how data may be (mis)used and (mis)interpreted by managers and commissioners, for example to end treatment prematurely, to trigger a change of therapist in the face of negative ROMs data, or to damage psychotherapy.
The use of ROMs for short-term and generic work was experienced as less intrusive and contentious.
Abstract:
Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70 and 81) across a range of mAs values. The image acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each of the images using ImageJ software, and 14 observers performed image scoring. Images were scored according to the observer's evaluation of objects visualized within the phantom. Results: The R2 values of the non-linear relationship between objective visibility score and CNR (60 kVp R2 = 0.902; 70 kVp R2 = 0.913; 81 kVp R2 = 0.757) demonstrate a better fit for all three kVp settings than the linear R2 values. As CNR increases for all kVp settings, the object visibility also increases. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp, when compared with 70 or 81 kVp; the CNR response to exposure is similar. Pearson r was calculated to assess the correlation between score, OV, SNR and CNR. None of the correlations reached a level of statistical significance (p>0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR at all three tube potentials are similar, resulting in a strong positive relationship between CNR and object visibility score. At low doses the impact of radiographic noise does not have a strong influence on object visibility scores, because in noisy images objects could still be identified.
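The SNR and CNR figures in the abstract come from region-of-interest statistics of the kind ImageJ reports. A common formulation is sketched below on NumPy arrays standing in for ROI pixel values; the exact ROI definitions and any variant formulas used in the study are not stated, so treat this as one standard convention.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean pixel value
    divided by the standard deviation of the pixel values."""
    return roi.mean() / roi.std()

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio: difference of ROI means divided by the
    background noise (one common convention among several)."""
    return abs(roi_object.mean() - roi_background.mean()) / roi_background.std()
```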
Abstract:
Occupational exposure assessment can be a challenge due to several factors, the most important being the associated costs and the dependence of the results on the conditions at the time of sampling. Conducting a task-based exposure assessment allows better control measures to be defined to eliminate or reduce exposure, since it more easily identifies the tasks with higher exposure. A research study was developed to show the importance of task-based exposure assessment in four different settings (bakery, horsemanship, waste sorting and cork industry). Measurements were performed using portable direct-reading hand-held equipment and were conducted near the workers' noses during task performance. For each task, measurements of approximately 5 minutes were taken. It was possible to detect the task in each setting that was responsible for higher particle exposure, allowing priorities to be defined regarding investments in preventive and protection measures.
Computer-based tools for assessing micro-longitudinal patterns of cognitive function in older adults
Abstract:
Patterns of cognitive change over micro-longitudinal timescales (i.e., ranging from hours to days) are associated with a wide range of age-related health and functional outcomes. However, practical issues with conducting high-frequency assessments make investigations of micro-longitudinal cognition costly and burdensome to run. One way of addressing this is to develop cognitive assessments that can be performed by older adults, in their own homes, without a researcher being present. Here, we address the question of whether reliable and valid cognitive data can be collected over micro-longitudinal timescales using unsupervised cognitive tests. In study 1, 48 older adults completed two touchscreen cognitive tests, on three occasions, in controlled conditions, alongside a battery of standard tests of cognitive functions. In study 2, 40 older adults completed the same two computerized tasks on multiple occasions, over three separate week-long periods, in their own homes, without a researcher present. Here, the tasks were incorporated into a wider touchscreen system (Novel Assessment of Nutrition and Ageing (NANA)) developed to assess multiple domains of health and behavior. Standard tests of cognitive function were also administered prior to participants using the NANA system. Performance on the two "NANA" cognitive tasks showed convergent validity with, and similar levels of reliability to, the standard cognitive battery in both studies. Completion and accuracy rates were also very high. These results show that reliable and valid cognitive data can be collected from older adults using unsupervised computerized tests, thus affording new opportunities for the investigation of cognitive function.
Abstract:
Background: The transport of children in ground ambulances is a rarely studied topic worldwide. The ambulance vehicle is a unique and complex environment with particular challenges for the safe, correct and effective transportation of patients. Unlike the well-developed and readily available guidelines on the safe transportation of a child in motor vehicles, there is a lack of consistent specifications for transporting children in ambulances. Nurses are called daily to transfer children to hospitals or other care centers, so safe transport practices should be a major concern. Purpose: To identify the safety precautions and specific measures used by nurses and firefighters in the transport of children in ground ambulances, and to identify what knowledge these professionals had about safe modes of transporting children in ground ambulances. Methods: An exploratory-descriptive study with quantitative analysis was conducted. A questionnaire was completed by 135 nurses and firefighters / ambulance crew, based on 4 possible child transport scenarios proposed by the NHTSA (National Highway Traffic Safety Administration), and covered 5 different age groups (newborns; 1 to 12 months; 1 to 3 years old; 4 to 7 years old; and 8 to 12 years old). Results: The main results showed a variety of safety measures used by the professionals and a significant difference between their actual mode of transportation and the mode they consider to be ideal with regard to safety goals. In addition, findings showed that the scores for what ambulance crews do in the considered scenarios mostly reflect satisfactory levels of transportation rather than optimum levels of safety, according to NHTSA recommendations. Variables such as gender, educational qualifications, occupational group and the location where professionals work seem to influence the transport options.
Female professionals and nurses from pediatric units appear to transport children in ground ambulances more safely than other professionals. Conclusion: Several professionals reported being unaware of the safest transportation options for children in ambulances and did not know of the existence of specific recommendations for this type of transportation. The dispersion of the results suggests the need for investment in professional training and further regulation of this type of transportation.
Abstract:
Objective: In the setting of the increasing use of closed systems for the reconstitution and preparation of these drugs, we intend to analyze the correct use of these systems in the Hospital Pharmacy, with the objective of minimizing the risks of exposure not only for those professionals directly involved, but also for all the staff in the unit, also taking into account efficiency criteria. Method: Since some systems protect against aerosol formation but not against vapours, we decided to review which cytostatics should be prepared using an awl with an air inlet valve, in order to implement a new working procedure. We reviewed the formulations available in our hospital against the following criteria: method of administration, excipients, and potential hazard for the staff handling them. We measured the diameters of the vials. We selected drugs with Level 1 Risk and also those including alcohol-based excipients, which could generate vapours. Outcomes: Of the 66 reviewed formulations, we concluded that 11 drugs should be reconstituted with this type of awl: busulfan, cabazitaxel, carmustine, cyclophosphamide, eribulin, etoposide, fotemustine, melphalan, paclitaxel, temsirolimus and thiotepa; these represented 18% of the total volume of formulations. Conclusions: The selection of healthcare products must be done at the Hospital Pharmacy, because using a system with an air inlet valve only for the selected drugs led to savings and a more efficient use of materials. In our experience, we confirmed that the use of the needle could only be avoided when the awl could adapt to the different formulations of cytostatics, and this is only possible when different types of awls are available. Besides, connections were only really closed when a single awl was used for each vial. The change in working methodology when handling these drugs, as a result of this study, will allow us to start different studies on environmental contamination as a future line of work.
Abstract:
Objective: To evaluate the reliability of a peer evaluation instrument in a longitudinal team-based learning setting. Methods: Student pharmacists were instructed to evaluate the contributions of their peers. Evaluations were analyzed for the variance of the scores by identifying low, medium, and high scores. Agreement between performance ratings within each group of students was assessed via intra-class correlation coefficient (ICC). Results: We found little variation in the standard deviation (SD) based on the score means among the high, medium, and low scores within each group. The lack of variation in SD of results between groups suggests that the peer evaluation instrument produces precise results. The ICC showed strong concordance among raters. Conclusions: Findings suggest that our student peer evaluation instrument provides a reliable method for peer assessment in team-based learning settings.
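The agreement statistic the abstract relies on, the intra-class correlation coefficient, can be computed from a subjects-by-raters score matrix. The sketch below implements a one-way random-effects ICC(1); the abstract does not state which ICC form the study used, so this is one common variant for illustration.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1) from an (n_subjects, k_raters) matrix.
    Simplified sketch; the study's exact ICC form is not specified."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-subjects and within-subjects mean squares
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 1 corresponds to the "strong concordance among raters" the study reports; values near 0 would indicate that rater disagreement swamps true between-student differences.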
Abstract:
A large number of heuristic algorithms have been developed over the years aimed at solving examination timetabling problems. However, many of these algorithms have been developed specifically to solve one particular problem instance or a small subset of instances related to a given real-life problem. Our aim is to develop a more general system which, when given any exam timetabling problem, will produce results comparable to those of a specially designed heuristic for that problem. We are investigating a case-based reasoning (CBR) technique to select from a set of algorithms which have been applied successfully to similar problem instances in the past. The assumption in CBR is that similar problems have similar solutions. For our system, the assumption is that an algorithm used to find a good solution to one problem will also produce a good result for a similar problem. The key to the success of the system will be our definition of similarity between two exam timetabling problems. The study will be carried out by running a series of tests using a simple simulated annealing algorithm on a range of problems with differing levels of similarity and examining the data sets in detail. In this paper an initial investigation of the key factors involved in this measure is presented, with a discussion of how the definition of 'good' impacts on it.
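The baseline solver named in the abstract, simulated annealing, follows a standard accept/cool loop that can be sketched generically as below. The parameter values and function names are illustrative assumptions, not the paper's settings; the cost and neighbor callables would encode a specific timetabling instance.

```python
import math
import random

def simulated_annealing(cost, neighbor, initial, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: accept improving moves always, worsening
    moves with probability exp(-delta / t), and cool t geometrically.
    Illustrative sketch; parameters are assumptions, not the paper's."""
    rng = random.Random(seed)
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best
```

In the CBR system sketched by the paper, such a solver would be one entry in the case base, retrieved when a new instance is judged similar to instances it previously solved well.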
Abstract:
The task-based approach involves identifying all the tasks performed in each workplace, with the aim of refining the exposure characterization. The starting point of this approach is the recognition that only through a more detailed and comprehensive understanding of tasks is it possible to understand, in more detail, the exposure scenario. In addition, it also allows identification of the most suitable risk management measures. This approach can also be used when there is a need to identify workplace surfaces for sampling chemicals for which the dermal exposure route is the most important. In this case it is possible to identify, through detailed observation of task performance, the surfaces that involve the most frequent contact by the workers and can be contaminated. The objective is to identify the surfaces to sample when performing occupational exposure assessment of antineoplastic agents, with surface selection based on the task-based approach.