944 results for Quality criteria


Relevance: 60.00%

Abstract:

The use of nonstandardized and inadequately validated outcome measures in atopic eczema trials is a major obstacle to practising evidence-based dermatology. The Harmonising Outcome Measures for Eczema (HOME) initiative is an international multiprofessional group dedicated to atopic eczema outcomes research. In June 2011, the HOME initiative conducted a consensus study involving 43 individuals from 10 countries, representing different stakeholders (patients, clinicians, methodologists, pharmaceutical industry) to determine core outcome domains for atopic eczema trials, to define quality criteria for atopic eczema outcome measures and to prioritize topics for atopic eczema outcomes research. Delegates were given evidence-based information, followed by structured group discussion and anonymous consensus voting. Consensus was achieved to include clinical signs, symptoms, long-term control of flares and quality of life in the core set of outcome domains for atopic eczema trials. The HOME initiative strongly recommends including and reporting these core outcome domains as primary or secondary endpoints in all future atopic eczema trials. Measures of these core outcome domains need to be valid, sensitive to change and feasible. Prioritized topics of the HOME initiative are the identification and development of the most appropriate instruments for the four core outcome domains. HOME is open to anyone with an interest in atopic eczema outcomes research.

Relevance: 60.00%

Abstract:

Technology scaling increasingly exposes the complexity and non-ideal electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in the growing dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. By combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed, and design guidelines are provided that support the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
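The wavelet-refinement idea can be illustrated with a minimal sketch (assumptions: PyWavelets, a toy 2D potential, a "db2" wavelet and an ad hoc 10% threshold; none of these are the thesis' actual WAM choices). Detail coefficients concentrate where the sampled solution varies sharply, and exactly those cells are flagged for refinement:

```python
import numpy as np
import pywt  # PyWavelets

# Toy "device solution": smooth background plus a steep junction-like front.
x = np.linspace(0.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
phi = np.tanh((X - 0.5) / 0.02) + 0.1 * np.sin(np.pi * Y)

# One-level 2D wavelet decomposition; detail coefficients measure local
# non-smoothness of the field at the finest resolved scale.
cA, (cH, cV, cD) = pywt.dwt2(phi, "db2")
detail = np.sqrt(cH**2 + cV**2 + cD**2)

# Flag coarse cells whose detail magnitude exceeds a fraction of the maximum;
# these are the regions a WAM-style scheme would refine further.
refine = detail > 0.1 * detail.max()
print(f"fraction of cells flagged for refinement: {refine.mean():.1%}")
```

On this toy field, only the narrow band around the junction is flagged, which is the mechanism by which a wavelet-driven scheme keeps the overall node count low.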

Relevance: 60.00%

Abstract:

This study validates an instrument for assessing the patient-therapist attachment (Client Attachment to Therapist Scale, CATS; Mallinckrodt, Coble & Gantt, 1995) and tests hypotheses on the relationships between self-efficacy expectations, general attachment style, the therapeutic relationship (or, more specifically, satisfaction with therapy), patient-therapist attachment and treatment outcome in drug-dependent patients in inpatient post-acute treatment. The instrument validation (one-week retest) included 119 patients from 2 clinics and 13 experts. The psychometric quality criteria of the instrument proved very satisfactory. 365 patients and 27 therapists from 4 clinics took part in the naturalistic therapy evaluation study (pre-, process and post-treatment measurements: T0, T1, T2). Overall, 44.1% of the patients completed their inpatient stay as planned. On the patient side, age and primary diagnosis proved to be predictors of treatment outcome; on the therapist side, the therapeutic orientation practised. Self-efficacy expectations, general attachment style, patient-therapist attachment and satisfaction with therapy were not suitable for predicting treatment outcome. Self-efficacy expectations, which were far below average at T0, increased over the intervention period, with a moderating effect of the patient-therapist attachment being observed. There was a high prevalence of insecure general attachment styles, which did not change over the course of therapy. Patients' satisfaction with the therapy increased from T1 to T2. Interrater concordance (patient/therapist) in the assessment of the patient-therapist attachment increased slightly from T1 to T2. In contrast, patients and therapists rated satisfaction with therapy very differently at both measurement points. The good psychometric properties of the CATS argue for the superiority of this instrument over the scale assessing satisfaction with therapy. Patient-therapist attachment should therefore be investigated with this instrument in further research on other patient populations, in order to allow generalizable conclusions about its validity.

Relevance: 60.00%

Abstract:

In Switzerland, as elsewhere, structured cardiac rehabilitation dates back to the late 1960s and was initially available only in rehabilitation clinics. In 1972 the first ambulatory rehabilitation programs became available to patients in Zurich and Bern. In the following years, in addition to the increasing number of inpatient rehabilitation centers, more and more ambulatory rehabilitation programs were developed, especially in the densely populated Mittelland (Swiss Plateau) of German- and French-speaking Switzerland. In 1985 the Swiss Working Group of Cardiac Rehabilitation (SAKR) was initiated as an official working group of the Swiss Society of Cardiology, and one of its first tasks was to establish a list of the institutions for cardiac rehabilitation in Switzerland. At that time there were 42 rehabilitation programs for a population of approximately 6.5 million, 21 inpatient and 21 ambulatory; however, 90% of the patients were in inpatient programs. In 1992 the SAKR defined the quality criteria to be applied for official recognition of institutions for cardiac rehabilitation in Switzerland. Owing to these criteria, and to the closure of a growing number of mountain rehabilitation clinics, the number of inpatient rehabilitation centers decreased from 21 to 11 between 1989 and 2011, whereas the number of ambulatory programs increased from 21 to 51. Some ambulatory rehabilitation centers are organized by local medical groups, but most have integrated their activities into local hospitals. The trend shows a developing preference for ambulatory rehabilitation; however, the growing number of elderly, polymorbid patients will still need care in inpatient programs.

Relevance: 60.00%

Abstract:

As education providers increasingly integrate digital learning media into their education processes, the need for the systematic management of learning materials and learning arrangements becomes clearer. Digital repositories, often called Learning Object Repositories (LOR), promise to provide an answer to this challenge. This article is composed of two parts. In this part, we derive technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. We review the evolution of learning object repositories and discuss their core features in the context of pedagogical requirements, information quality demands, and e-learning technology standards. We conclude with an outlook on Part 2, which presents concrete technical solutions, in particular networked repository architectures.

Relevance: 60.00%

Abstract:

In Part 1 of this article we discussed the need for information quality and the systematic management of learning materials and learning arrangements. Digital repositories, often called Learning Object Repositories (LOR), were introduced as a promising answer to this challenge. We also derived technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. This second part presents technical solutions that particularly address the demands of open education movements, which aspire to a global reuse and sharing culture. From this viewpoint, we develop core requirements for scalable network architectures for educational content management. We then present edu-sharing, an advanced example of a network of homogeneous repositories for learning resources, and discuss related technology. We conclude with an outlook on emerging developments towards open and networked system architectures in e-learning.
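To make the networked-repository idea concrete, here is a hedged, in-memory sketch (the repository contents, identifiers and the rank-free merge are invented for illustration; this is not the edu-sharing API): a keyword query is fanned out to several homogeneous repository nodes and the merged hit list is deduplicated by object identifier.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LearningObject:
    identifier: str
    title: str
    keywords: frozenset

# Two toy repository nodes; "a:1" is shared between them, as objects in a
# network of homogeneous repositories may be.
repo_a = [LearningObject("a:1", "Intro to Fractions", frozenset({"math", "fractions"}))]
repo_b = [
    LearningObject("b:7", "Fraction Worksheets", frozenset({"fractions", "exercises"})),
    LearningObject("a:1", "Intro to Fractions", frozenset({"math", "fractions"})),
]

def federated_search(nodes, keyword):
    """Fan the query out to every repository node and merge hits by identifier."""
    hits = {}
    for node in nodes:
        for obj in node:
            if keyword in obj.keywords:
                hits[obj.identifier] = obj  # deduplicates objects shared by nodes
    return list(hits.values())

for obj in federated_search([repo_a, repo_b], "fractions"):
    print(obj.identifier, obj.title)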

Relevance: 60.00%

Abstract:

Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Examination (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. The literature on quality control of SPs' performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument, inspired by others [5], and used it in our high-stakes exam. Project description SP trainers from five medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (e.g. adequate verbal and non-verbal expression) and standardization (e.g. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). Outcome High quality of SPs' patient portrayal during the exam was documented: more than 90% of SP performances were rated as completely correct or sufficient. An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions during medical tests increased from 88% to 95%; together with the 4% of performances rated sufficient, 99% of the reactions in medical tests met the standards of the exam. SP educators reported that the use of the instrument itself improved SPs' performance. Disadvantages mentioned were the high level of concentration needed to observe all criteria and the cumbersome handling of the paper-based forms. Discussion We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe that the high level of concentration required to use the instrument is well invested, considering the observed enhancement of performance. The development of an iPad-based application is planned to address the cumbersome handling of the paper forms.

Relevance: 60.00%

Abstract:

Aims To explore the impact of the functional severity of coronary artery stenosis on changes in myocardial oxygenation during pharmacological vasodilation, using oxygenation-sensitive cardiovascular magnetic resonance (OS-CMR) imaging and invasive fractional flow reserve (FFR). FFR is considered a reference standard for assessing the haemodynamic relevance of coronary artery stenosis; yet the relationship of FFR to changes in myocardial oxygenation during vasodilator stress, and thus to an objective marker of ischaemia at the tissue level, is not well understood. Methods and results We prospectively recruited 64 patients with suspected or known coronary artery disease undergoing invasive angiography. FFR was measured in intermediate coronary artery stenoses. OS-CMR images were acquired using a T2*-sensitive sequence before and after adenosine-induced vasodilation, with myocardial segments matched to angiography. Very strict image quality criteria were defined to ensure the validity of the results. FFR was performed in 37 patients. Because of the strict image quality criteria, 41% of segments had to be excluded, leaving 29/64 patients for the blinded OS-CMR analysis. Coronary territories with an associated FFR of <0.80 showed a lack of increase in myocardial oxygenation [mean change in signal intensity (ΔSI) −0.49%; 95% confidence interval (CI) −3.78 to 2.78 vs. +7.30%; 95% CI 4.08 to 10.64; P < 0.001]. An FFR of <0.54 best predicted a complete lack of a vasodilator-induced oxygenation increase (sensitivity 71%, specificity 75%). An OS-CMR ΔSI of <4.78% identified an FFR of <0.80 with a sensitivity of 86% and a specificity of 92%. Conclusion An FFR of <0.80 is associated with a lack of an adenosine-inducible increase in oxygenation of the dependent coronary territory, while a complete lack of such an increase was best predicted by an FFR of <0.54. Further studies are warranted to identify clinically meaningful cut-off values for FFR measurements and to assess the utility of OS-CMR as an alternative clinical tool for assessing the functional relevance of coronary artery stenosis.
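The reported diagnostic performance reduces to evaluating the ΔSI cut-off against FFR < 0.80 as the reference standard. The sketch below recomputes sensitivity and specificity of such a threshold on synthetic data (the linear ΔSI-FFR relation, the noise level and the sample are assumptions, not the study's data; only the two cut-offs come from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ffr = rng.uniform(0.4, 1.0, n)
significant = ffr < 0.80                    # reference standard: relevant stenosis
# Assume the oxygenation response (ΔSI, %) degrades with stenosis severity.
delta_si = 15.0 * (ffr - 0.55) + rng.normal(0.0, 3.0, n)

test_positive = delta_si < 4.78             # the reported ΔSI cut-off
tp = np.sum(test_positive & significant)    # true positives
fn = np.sum(~test_positive & significant)   # false negatives
tn = np.sum(~test_positive & ~significant)  # true negatives
fp = np.sum(test_positive & ~significant)   # false positives

print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```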

Relevance: 60.00%

Abstract:

The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. Spurious spectral lines have long been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system on the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations of a global network of 92 combined GPS/GLONASS receivers in the years 2009–2011 were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that for GPS and GLONASS satellites only even-order short-period harmonic perturbations occur along the Sun-satellite direction, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in a third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series, the misclosures at the midnight epochs of the daily orbital arcs, and the scale parameters of Helmert transformations of station coordinates serve as quality criteria. The old and updated ECOM are additionally validated with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which substantially reduces the spurious signals in the geocenter z coordinate (by about a factor of 2–6), reduces the orbit misclosures at the day boundaries by about 10%, slightly improves the consistency of the estimated ERPs with the IERS 08 C04 Earth rotation series, and substantially reduces the systematics in the SLR validation of the GNSS orbits.
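A minimal sketch of what such an extended parameterization looks like, following the harmonic-order finding above (even orders along the Sun-satellite direction D, odd orders along B, a constant along the solar panel axis Y; the expansion is truncated here at the lowest orders and the coefficient values are placeholders, not estimated parameters):

```python
import numpy as np

def ecom_extended_accel(du, D0, D2c, D2s, Y0, B0, B1c, B1s):
    """SRP acceleration components in the D/Y/B frame of the satellite.

    du: argument of latitude of the satellite relative to the Sun (rad).
    Returns (aD, aY, aB) in the units of the coefficients (e.g. nm/s^2).
    """
    aD = D0 + D2c * np.cos(2 * du) + D2s * np.sin(2 * du)  # even orders only
    aY = Y0                                                # constant term
    aB = B0 + B1c * np.cos(du) + B1s * np.sin(du)          # odd orders only
    return aD, aY, aB

# Evaluate over one revolution of the Sun-relative argument of latitude,
# with placeholder coefficients.
for du in np.linspace(0.0, 2.0 * np.pi, 5):
    aD, aY, aB = ecom_extended_accel(du, -100.0, 1.0, 0.5, 0.5, 1.0, 0.2, 0.1)
    print(f"du = {du:.2f} rad: aD = {aD:.2f}, aY = {aY:.2f}, aB = {aB:.2f}")
```

In an orbit determination run these coefficients would be estimated per arc together with the orbital elements; the point of restricting the harmonic orders is to absorb the real SRP signal without leaking power into geocenter and ERP estimates.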

Relevance: 60.00%

Abstract:

BACKGROUND After the introduction of instruments for benchmarking, certification and a national guideline for acute pain management, the aim of this study was to describe the current structure, processes and quality of German acute pain services (APS). METHODS All directors of German departments of anaesthesiology were invited to complete a postal questionnaire on the structures and processes of acute pain management. The survey asked about staff, techniques and quality criteria, enabling a comparison with previous data from 1999 and with surveys from other countries. RESULTS Four hundred and eight (46%) questionnaires were returned. APS have increased considerably and are now available in 81% of the hospitals, mainly anaesthesia-based. However, only 45% fulfilled the minimum quality criteria, such as the assignment of personnel, the organization of patient care during nights and weekends, written protocols for postoperative pain management, and regular assessment and documentation of pain scores. Staff resources varied considerably but had increased compared with 1999. Two daily rounds were performed in 71% of APS, either by physicians and nurses (42%), by physicians only (25%) or by supervised nurses (31%). Most personnel assigned to the APS carried out this work alongside other duties. Only 53% of the hospitals had an integrated rotation for training their specialty trainees. CONCLUSIONS The availability of APS in Germany and other countries has increased over the last decade; however, the quality of nearly half of the APS is questionable. Against the disillusioning background of recently reported unfavourable pain-related patient outcomes, the structures, organization and quality of APS should be revisited.

Relevance: 60.00%

Abstract:

PURPOSE To investigate whether image registration of diffusion tensor imaging (DTI) allows respiratory triggering to be omitted for both transplanted and native kidneys. MATERIALS AND METHODS Nine kidney transplant recipients and eight healthy volunteers underwent renal DTI on a 3T scanner with and without respiratory triggering. DTI images were registered using a multimodal nonrigid registration algorithm. The apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA) were determined. The relative root mean square error (RMSE) of the fitting and the standard deviations of the derived parameters within the regions of interest (SDROI) were evaluated as quality criteria. RESULTS Registration significantly reduced the RMSE of all DTI-derived parameters of triggered and nontriggered measurements in the cortex and medulla of both transplanted and native kidneys (P < 0.05 for all). In addition, SDROI values were lower with registration for all 16 parameters in transplanted kidneys (14 of 16 SDROI values significantly reduced, P < 0.04) and for 15 of 16 parameters in native kidneys (9 of 16 SDROI values significantly reduced, P < 0.05). Comparing triggered versus nontriggered DTI in transplanted kidneys revealed no significant difference in RMSE (P > 0.14) or SDROI (P > 0.13) for any parameter. In contrast, in native kidneys the relative RMSE from triggered scans was significantly lower than that from nontriggered scans (P < 0.02), while SDROI was slightly higher in triggered than in nontriggered measurements in 15 out of 16 comparisons (significantly for two, P < 0.05). CONCLUSION Registration improves the quality of DTI in native and transplanted kidneys. Diffusion parameters in renal allografts can be measured without respiratory triggering. In native kidneys, respiratory triggering appears advantageous. J. Magn. Reson. Imaging 2016.
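The two quality criteria, relative RMSE of the signal fit and the within-ROI standard deviation of a derived parameter (SDROI), are easy to reproduce on synthetic data. Below is a hedged sketch using a simple mono-exponential ADC fit (the b-values, noise level and ROI size are assumptions; the study fits a richer diffusion/perfusion model):

```python
import numpy as np

b = np.array([0.0, 50.0, 300.0, 700.0])   # b-values in s/mm^2 (assumed)
rng = np.random.default_rng(1)
adc_true = 2.0e-3                          # mm^2/s, a cortex-like value
signal = 1000.0 * np.exp(-adc_true * b)    # mono-exponential decay
roi = signal * rng.normal(1.0, 0.02, (50, b.size))  # 50 noisy ROI voxels

# Log-linear least-squares fit per voxel: log S = log S0 - ADC * b.
coeffs = np.polyfit(b, np.log(roi).T, 1)   # one column of y per voxel
adc = -coeffs[0]
s0 = np.exp(coeffs[1])

# Quality criterion 1: relative RMSE of the fit, per voxel.
pred = s0[:, None] * np.exp(-np.outer(adc, b))
rel_rmse = np.sqrt(np.mean((roi - pred) ** 2, axis=1)) / roi.mean(axis=1)

print(f"mean relative RMSE of the fit: {rel_rmse.mean():.4f}")
# Quality criterion 2: SD of the derived parameter within the ROI.
print(f"SD_ROI of ADC: {adc.std():.2e} mm^2/s")
```

Registration reduces both quantities in the study because it realigns voxels across b-values before fitting, so less of the residual is motion and more of the ROI spread is genuine tissue variation.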

Relevance: 60.00%

Abstract:

BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random-effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a first-order random walk was used to account for multiple follow-up outcome data within a trial. Preparations with different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates, assuming linearity on the log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes were considered, concerning seven different NSAIDs or paracetamol at specific daily doses, or placebo. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo. For six interventions (diclofenac 150 mg/day; etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day; and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference to placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] −0.37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES −0.57, 95% credibility interval [CrI] −0.69 to −0.46) and etoricoxib 60 mg/day (ES −0.58, −0.73 to −0.43) had the highest probability of being the best intervention, both with 100% probability of reaching the minimum clinically important difference. Treatment effects increased as drug dose increased, but the corresponding tests for a linear dose effect were significant only for celecoxib (p=0.030), diclofenac (p=0.031), and naproxen (p=0.026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and when accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis, irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients.
FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
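One of the decision rules above, the probability that a preparation's effect versus placebo is at or below the minimum clinically important effect (ES −0.37), is a simple functional of the posterior. A hedged sketch with simulated posterior draws (the normal posterior roughly matching the reported diclofenac 150 mg/day ES and CrI is an assumption; in the study these draws come from the Bayesian network meta-analysis itself):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated posterior draws for an intervention's effect size vs. placebo,
# roughly matching the reported diclofenac 150 mg/day result
# (ES -0.57, 95% CrI -0.69 to -0.46).
posterior_es = rng.normal(-0.57, 0.06, 10_000)

mcid = -0.37  # prespecified minimum clinically important effect for pain
p_reaches_mcid = np.mean(posterior_es <= mcid)
print(f"P(effect at or below MCID {mcid}): {p_reaches_mcid:.3f}")
```

With this posterior the probability is essentially 1.0, consistent with the reported "100% probability to reach the minimum clinically important difference".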

Relevance: 60.00%

Abstract:

Inefficiencies in the management of healthcare waste can give rise to undesirable health effects such as transmission of infections and environmental pollution within and beyond the health facilities generating these wastes. Factors such as the prevalence of disease, conflict, and the outflow of intellectual capacity make low-income countries more susceptible to these adverse health effects. The purpose of this systematic review was to describe the effectiveness of interventions geared towards better managing the generation, collection, transport, treatment and disposal of medical waste, as they have been applied in low- and middle-income countries. Using a systematic search strategy and evaluation of study quality, this study reviewed the literature for published studies on healthcare waste management interventions carried out in developing countries, specifically low- and lower-middle-income countries, from 2000 to the present. From an initially identified set of 829 studies, only three ultimately met all inclusion, exclusion and quality criteria. A multi-component intervention in the Syrian Arab Republic, conducted in 2007, aimed to improve waste segregation practice in a hospital setting; it increased the use of segregation boxes and reduced rates of sharps injury among staff. Another study, conducted in 2008, trained medical students as monitors of waste segregation practice in an Indian teaching hospital; practice improved in wards and laboratories but not in the intensive care units. The third study, performed in 2008 in China, consisted of modifying the components of a medical waste incinerator to improve efficiency and reduce stack emissions; the gaseous pollutants emitted, except polychlorinated dibenzofurans (PCDF), were below US EPA permissible limits, while heavy metal residues in the fly ash remained unchanged. Owing to the paucity of well-designed studies, there is insufficient evidence in the literature to draw conclusions on the effectiveness of interventions in low-income settings. There is suggestive but insufficient evidence that multi-component interventions aimed at improving waste segregation through behavior modification, provision of segregation tools and training of monitors are effective in low-income settings.

Relevance: 60.00%

Abstract:

Gaining valid answers to so-called sensitive questions is an age-old problem in survey research. Various techniques have been developed to guarantee anonymity and minimize the respondent's feelings of jeopardy. Two such techniques are the randomized response technique (RRT) and the unmatched count technique (UCT). In this study we evaluate the effectiveness of different implementations of the RRT (using a forced-response design) in a computer-assisted setting and also compare the use of the RRT to that of the UCT. The techniques are evaluated according to various quality criteria, such as the prevalence estimates they provide, the ease of their use, and respondent trust in the techniques. Our results indicate that the RRTs are problematic in several respects, such as the limited trust they inspire and non-response, and that the RRT estimates are unreliable due to a strong false "no" bias, especially for the more sensitive questions. The UCT, however, performed well compared to the RRTs on all the evaluated measures. The UCT estimates also had more face validity than the RRT estimates. We conclude that the UCT is a promising alternative to RRT in self-administered surveys and that future research should be directed towards evaluating and improving the technique.
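Both techniques reduce to simple prevalence estimators, which the following hedged sketch simulates (the design probabilities, list length and true prevalence are illustrative, not the study's): under a forced-response RRT the observed "yes" rate equals p_true·π + p_yes and is inverted for π; under the UCT, π is estimated as the difference in mean item counts between the treatment list (sensitive item included) and the control list.

```python
import numpy as np

rng = np.random.default_rng(7)
true_prev = 0.20          # true prevalence of the sensitive attribute
n = 2000                  # respondents per survey mode

# --- Forced-response RRT: with probability p_true answer truthfully,
# with p_yes give a forced "yes", otherwise a forced "no" (e.g. via dice).
p_true, p_yes = 2 / 3, 1 / 6
carrier = rng.random(n) < true_prev
u = rng.random(n)
answer = np.where(u < p_true, carrier, u < p_true + p_yes)
# E[answer] = p_true * prevalence + p_yes, so invert:
rrt_est = (answer.mean() - p_yes) / p_true

# --- UCT: a control group counts 4 innocuous items; the treatment group
# additionally counts the sensitive item. Prevalence = difference in means.
m = 2 * n
treat = rng.random(m) < 0.5
base_counts = rng.binomial(4, 0.5, m)
counts = base_counts + ((rng.random(m) < true_prev) & treat).astype(int)
uct_est = counts[treat].mean() - counts[~treat].mean()

print(f"RRT estimate: {rrt_est:.3f}   UCT estimate: {uct_est:.3f}")
```

With honest respondents both estimators recover the true prevalence; the false "no" bias reported for the RRT corresponds to some carriers answering "no" when instructed to answer truthfully, which this idealized simulation does not model.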

Relevance: 60.00%

Abstract:

The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
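As a concrete instance of the setup (a hedged sketch: the choice of r and p and the relative-squared-error loss are illustrative, and the estimator shown is Haldane's classical (r−1)/(N−1) rather than the paper's minimax constructions): draw Bernoulli(p) trials until r successes occur, estimate p from the random trial count N, and approximate the risk by Monte Carlo.

```python
import numpy as np

def inverse_binomial_estimate(p, r, rng):
    # Trials needed for the r-th success = sum of r geometric waiting times.
    n_trials = rng.geometric(p, size=r).sum()
    return (r - 1) / (n_trials - 1)        # Haldane's unbiased estimator

rng = np.random.default_rng(3)
p, r = 0.01, 10          # small p: the asymptotic regime studied in the paper
estimates = np.array([inverse_binomial_estimate(p, r, rng) for _ in range(5000)])

# Risk under one loss function satisfying the usual assumptions:
# relative squared error, a common quality criterion for estimating small p.
risk = np.mean(((estimates - p) / p) ** 2)
print(f"mean estimate: {estimates.mean():.5f} (true p = {p})")
print(f"empirical risk (mean relative squared error): {risk:.4f}")
```

The attraction of inverse binomial sampling for small p is visible here: the relative error of the estimate is controlled by the fixed success count r rather than by p itself, which is what makes asymptotic (p → 0) risk statements of the kind studied in the paper meaningful.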