889 results for Standardized tools


Relevance:

100.00%

Publisher:

Abstract:

The IEEE 1451 standard is intended to address the problem of smart transducer interfacing in network environments. Proprietary hardware and software is usually a very efficient way to implement the IEEE 1451 standard, although it can be expensive and inflexible. In contrast, this paper proposes the use of open and standardized tools to implement the IEEE 1451 standard. Tools such as the Java and Python programming languages, Linux, programmable logic technology, personal computer resources and the Ethernet architecture were integrated in order to construct a network node based on the IEEE 1451 standards. The node can be applied in systems based on the client-server communication model. The evaluation of the employed tools and experimental results are presented. © 2005 IEEE.

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: Standard indicators of quality of care have been developed in the United States. Limited information exists about quality of care in countries with universal health care coverage. OBJECTIVE: To assess the quality of preventive care and care for cardiovascular risk factors in a country with universal health care coverage. DESIGN AND PARTICIPANTS: Retrospective cohort of a random sample of 1,002 patients aged 50-80 years followed for 2 years from all Swiss university primary care settings. MAIN MEASURES: We used indicators derived from RAND's Quality Assessment Tools. Each indicator was scored by dividing the number of episodes when recommended care was delivered by the number of times patients were eligible for indicators. Aggregate scores were calculated by taking into account the number of eligible patients for each indicator. KEY RESULTS: Overall, patients (44% women) received 69% of recommended preventive care, but rates differed by indicators. Indicators assessing annual blood pressure and weight measurements (both 95%) were more likely to be met than indicators assessing smoking cessation counseling (72%), breast (40%) and colon cancer screening (35%; all p < 0.001 for comparisons with blood pressure and weight measurements). Eighty-three percent of patients received the recommended care for cardiovascular risk factors, including > 75% for hypertension, dyslipidemia and diabetes. However, foot examination was performed only in 50% of patients with diabetes. Prevention indicators were more likely to be met in men (72.2% vs 65.3% in women, p < 0.001) and patients < 65 years (70.1% vs 68.0% in those ≥65 years, p = 0.047). CONCLUSIONS: Using standardized tools, these adults received 69% of recommended preventive care and 83% of care for cardiovascular risk factors in Switzerland, a country with universal coverage. Prevention indicator rates were lower for women and the elderly, and for cancer screening. Our study helps pave the way for targeted quality improvement initiatives and broader assessment of health care in Continental Europe.
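
To make the scoring method concrete: below is a minimal sketch of the indicator scoring described in this abstract, where each indicator score is the number of delivered episodes over eligible episodes, and the aggregate weights indicators by eligibility. The indicator names and counts are invented for illustration and are not the study's data.

```python
# Minimal sketch of RAND-style indicator scoring. Names and counts below
# are hypothetical illustrations, not the study's actual dataset.

indicators = {
    # indicator: (episodes where recommended care was delivered,
    #             episodes where patients were eligible)
    "blood_pressure_measurement": (950, 1000),
    "smoking_cessation_counseling": (216, 300),
    "colon_cancer_screening": (175, 500),
}

# Per-indicator score: delivered episodes / eligible episodes.
scores = {name: done / eligible
          for name, (done, eligible) in indicators.items()}

# Aggregate score weighted by eligibility per indicator,
# i.e. total delivered care over total eligibility.
total_done = sum(done for done, _ in indicators.values())
total_eligible = sum(eligible for _, eligible in indicators.values())
aggregate = total_done / total_eligible

for name, score in scores.items():
    print(f"{name}: {score:.1%}")
print(f"aggregate: {aggregate:.1%}")
```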

Relevance:

30.00%

Publisher:

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
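
As an illustration of the Bayesian a posteriori adjustment these tools perform: a minimal sketch that computes a maximum a posteriori (MAP) estimate of an individual clearance from one measured concentration, then derives an adjusted dose. The one-compartment model, population priors, and measurements are hypothetical and do not reproduce any of the reviewed programs' algorithms.

```python
# Hedged sketch of Bayesian a posteriori dose individualization: find the
# MAP individual clearance given one measured concentration, then solve for
# the dose that hits a target. All model parameters and values are invented.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical population PK priors (steady-state average concentration model)
CL_POP, OMEGA_CL = 4.0, 0.3   # typical clearance (L/h), between-subject SD (log scale)
SIGMA = 0.15                  # residual error SD (log scale)

dose, tau = 500.0, 12.0       # dose (mg) and dosing interval (h)
c_obs = 18.0                  # measured steady-state average concentration (mg/L)

def neg_log_posterior(log_cl):
    cl = np.exp(log_cl)
    c_pred = dose / (cl * tau)  # steady-state average concentration
    prior = ((log_cl - np.log(CL_POP)) / OMEGA_CL) ** 2
    lik = ((np.log(c_obs) - np.log(c_pred)) / SIGMA) ** 2
    return prior + lik

cl_map = np.exp(minimize_scalar(neg_log_posterior, bracket=(0.0, 3.0)).x)
target = 15.0                        # target average concentration (mg/L)
dose_new = target * cl_map * tau     # a posteriori adjusted dose
print(f"MAP clearance {cl_map:.2f} L/h -> {dose_new:.0f} mg every {tau:.0f} h")
```

The MAP estimate shrinks the naive single-sample estimate toward the population prior, which is what distinguishes Bayesian adjustment from fitting the individual measurement alone.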

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents the gold standard TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this assignment. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, representing a comprehensive review of the available software's characteristics. The numbers of drugs handled vary widely, and eight programs offer users the ability to add their own drug models. Ten computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while nine are also able to suggest a priori dosage regimens (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
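
To illustrate the weighted evaluation grid used in this benchmarking: a minimal sketch that combines per-criterion scores with weighting factors into a ranking. The criteria mirror those named in the abstract, but the weights and raw scores are invented placeholders, not the study's actual values.

```python
# Hedged sketch of the weighted scoring grid: each program gets a score per
# criterion, and a weighting factor reflects each criterion's relative
# importance. Weights and raw scores below are hypothetical.
weights = {
    "pharmacokinetic_relevance": 0.35,
    "user_friendliness": 0.25,
    "computing_aspects": 0.20,
    "interfacing": 0.10,
    "storage": 0.10,
}

raw_scores = {  # raw criterion scores on a 0-10 scale (invented values)
    "ProgramA": {"pharmacokinetic_relevance": 9, "user_friendliness": 8,
                 "computing_aspects": 7, "interfacing": 6, "storage": 7},
    "ProgramB": {"pharmacokinetic_relevance": 7, "user_friendliness": 9,
                 "computing_aspects": 8, "interfacing": 5, "storage": 6},
}

# Weighted total per program, then rank from best to worst.
totals = {prog: sum(weights[c] * s for c, s in scores.items())
          for prog, scores in raw_scores.items()}
for rank, (prog, total) in enumerate(
        sorted(totals.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank}. {prog}: {total:.2f}")
```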

Relevance:

30.00%

Publisher:

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, representing a comprehensive review of the available software's characteristics. The numbers of drugs handled vary from 2 to more than 180, and integration of different population types is available in some programs. Moreover, eight programs offer the ability to add new drug models based on population PK data. Ten computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while nine are also able to suggest a priori dosage regimens based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.

Relevance:

30.00%

Publisher:

Abstract:

Soluble MHC-peptide complexes, commonly known as tetramers, allow the detection and isolation of antigen-specific T cells. Although other types of soluble MHC-peptide complexes have been introduced, the most commonly used MHC class I staining reagents are those originally described by Altman and Davis. As these reagents have become an essential tool for T cell analysis, it is important to have a large repertoire of such reagents to cover a broad range of applications in cancer research and clinical trials. Our tetramer collection currently comprises 228 human and 60 mouse tetramers and new reagents are continuously being added. For the MHC II tetramers, the list currently contains 21 human (HLA-DR, DQ and DP) and 5 mouse (I-A(b)) tetramers. Quantitative enumeration of antigen-specific T cells by tetramer staining, especially at low frequencies, critically depends on the quality of the tetramers and on the staining procedures. For conclusive longitudinal monitoring, standardized reagents and analysis protocols need to be used. This is especially true for the monitoring of antigen-specific CD4+ T cells, as there are large variations in the quality of MHC II tetramers and staining conditions. This commentary provides an overview of our tetramer collection and indications on how tetramers should be used to obtain optimal results.

Relevance:

30.00%

Publisher:

Abstract:

Pressing global environmental problems highlight the need to develop tools to measure progress towards "sustainability." However, some argue that any such attempt inevitably reflects the views of those creating such tools and only produces highly contested notions of "reality." To explore this tension, we critically assess the Environmental Sustainability Index (ESI), a well-publicized product of the World Economic Forum that is designed to measure 'sustainability' by ranking nations on league tables based on extensive databases of environmental indicators. By recreating this index, and then using statistical tools (principal components analysis) to test relations between various components of the index, we challenge the ways in which countries are ranked in the ESI. Based on this analysis, we suggest (1) that the approach taken to aggregate, interpret and present the ESI creates a misleading impression that Western countries are more sustainable than the developing world; (2) that unaccounted methodological biases allowed the authors of the ESI to over-generalize the relative 'sustainability' of different countries; and (3) that this has resulted in simplistic conclusions on the relation between economic growth and environmental sustainability. This criticism should not be interpreted as a call for the abandonment of efforts to create standardized comparable data. Instead, this paper proposes that indicator selection and data collection should draw on a range of voices, including local stakeholders as well as international experts. We also propose that aggregating data into final league ranking tables is too prone to error and creates the illusion of absolute and categorical interpretations. © 2004 Elsevier Ltd. All rights reserved.
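
For readers unfamiliar with the statistical step mentioned here: a minimal sketch of applying principal components analysis to a country-by-indicator matrix to probe whether a single aggregate ranking is defensible. The data are random placeholders, not the actual ESI indicators.

```python
# Hedged sketch of the kind of check described above: run PCA over a
# countries-by-indicators matrix and inspect how much variance the leading
# components capture and how indicators load on them. If no single component
# dominates, collapsing the data into one league-table rank is suspect.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 20))   # 140 countries x 20 indicators (synthetic)

X_std = StandardScaler().fit_transform(X)  # put indicators on a common scale
pca = PCA(n_components=5).fit(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# Loadings: how strongly each indicator contributes to each component.
loadings = pca.components_.T
print("first-component loadings:", np.round(loadings[:, 0], 2))
```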

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

Bovine besnoitiosis is considered an emerging chronic and debilitating disease in Europe. Many infections remain subclinical, and the only sign of disease is the presence of parasitic cysts in the sclera and conjunctiva. Serological tests are useful for detecting asymptomatic cattle/sub-clinical infections for control purposes, as there are no effective drugs or vaccines. For this purpose, diagnostic tools need to be further standardized. Thus, the aim of this study was to compare the serological tests available in Europe in a multi-centred study. A coded panel of 241 well-characterized sera from infected and non-infected bovines was provided by all participants (SALUVET-Madrid, FLI-Wusterhausen, ENV-Toulouse, IPB-Berne). The tests evaluated were as follows: an in-house ELISA, three commercial ELISAs (INGEZIM BES 12.BES.K1 INGENASA, PrioCHECK Besnoitia Ab V2.0, ID Screen Besnoitia indirect IDVET), two IFATs and seven Western blot tests (tachyzoite and bradyzoite extracts under reducing and non-reducing conditions). Two different definitions of a gold standard were used: (i) the result of the majority of tests ('Majority of tests') and (ii) the majority of test results plus pre-test information based on clinical signs ('Majority of tests plus pre-test info'). Relative to the gold standard 'Majority of tests', almost 100% sensitivity (Se) and specificity (Sp) were obtained with the SALUVET-Madrid and FLI-Wusterhausen tachyzoite- and bradyzoite-based Western blot tests under non-reducing conditions. Among the ELISAs, PrioCHECK Besnoitia Ab V2.0 showed 100% Se and 98.8% Sp, whereas ID Screen Besnoitia indirect IDVET showed 97.2% Se and 100% Sp. The in-house ELISA and INGEZIM BES 12.BES.K1 INGENASA showed 97.3% and 97.2% Se, and 94.6% and 93.0% Sp, respectively. IFAT FLI-Wusterhausen performed better than IFAT SALUVET-Madrid, with 100% Se and 95.4% Sp. Relative to the gold standard 'Majority of tests plus pre-test info', Sp significantly decreased; this result was expected because of the existence of seronegative animals with clinical signs. All ELISAs performed very well and could be used in epidemiological studies; however, the Western blot tests performed better and could be employed as a posteriori tests for control purposes in the case of uncertain results from valuable samples.
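
As a concrete illustration of how sensitivity and specificity are computed against a majority-of-tests gold standard: a minimal sketch over a tiny invented serum panel, not the study's 241-sample dataset.

```python
# Hedged sketch of Se/Sp against a gold standard defined as the majority
# call across tests. The panel below is a small invented example.
from collections import Counter

# Each row: results of three tests (True = positive) for one serum sample.
panel = [
    (True, True, False),
    (True, True, True),
    (False, False, False),
    (False, True, False),
    (True, False, True),
]

# Gold standard: the majority call across all tests for each sample.
gold = [Counter(row)[True] > len(row) / 2 for row in panel]

def se_sp(test_index):
    """Sensitivity and specificity of one test vs the majority standard."""
    tp = sum(1 for row, g in zip(panel, gold) if g and row[test_index])
    fn = sum(1 for row, g in zip(panel, gold) if g and not row[test_index])
    tn = sum(1 for row, g in zip(panel, gold) if not g and not row[test_index])
    fp = sum(1 for row, g in zip(panel, gold) if not g and row[test_index])
    return tp / (tp + fn), tn / (tn + fp)

for i in range(3):
    se, sp = se_sp(i)
    print(f"test {i}: Se {se:.1%}, Sp {sp:.1%}")
```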

Relevance:

30.00%

Publisher:

Abstract:

Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment and, ultimately, ensuring that software products, contents, and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process into which they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack any systematic guidance when facing the implementation of accessibility evaluation functionalities. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process) and providing templates of these artifacts tailored to accessibility evaluation tools. The work presented especially considers the work in progress in this area by the W3C/WAI Evaluation and Report Working Group (ERT WG).

Relevance:

30.00%

Publisher:

Abstract:

There is a national need to increase the STEM-related workforce. Among the factors leading towards STEM careers is the number of advanced high school mathematics and science courses students complete. Florida's enrollment patterns in STEM-related Advanced Placement (AP) courses, however, reveal that only a small percentage of students enroll in these classes. Therefore, screening tools are needed to find more students who are academically ready for these courses yet have not been identified. The purpose of this study was to investigate the extent to which scores from a national standardized test, the Preliminary Scholastic Assessment Test/National Merit Scholarship Qualifying Test (PSAT/NMSQT), in conjunction with and compared to a state-mandated standardized test, the Florida Comprehensive Assessment Test (FCAT), are related to selected AP exam performance in Seminole County Public Schools. An ex post facto correlational study was conducted using 6,189 student records from the 2010-2012 academic years. Multiple regression analyses using simultaneous full-model testing showed moderate to strong relationships, differing by course, between predictor scores and exam scores in eight of the nine AP courses examined (i.e., Biology, Environmental Science, Chemistry, Physics B, Physics C Electrical, Physics C Mechanical, Statistics, and Calculus AB and BC). For example, the significant unique contribution to overall variance in AP scores was a linear combination of PSAT Math (M), Critical Reading (CR) and FCAT Reading (R) for Biology and Environmental Science. Moderate relationships for Chemistry included a linear combination of PSAT M, PSAT W (Writing) and FCAT M; a combination of FCAT M and PSAT M was most significantly associated with Calculus AB performance. These findings have implications for both research and practice. FCAT scores, in conjunction with PSAT scores, can potentially be used for specific STEM-related AP courses as part of a systematic approach to AP course identification and placement. For courses with moderate to strong relationships, validation studies and the development of expectancy tables, which estimate the probability of successful performance on these AP exams, are recommended. The findings also establish a need to examine other related research issues, including, but not limited to, extensive longitudinal studies and analyses of other available or prospective standardized test scores.
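
To make the analytic approach concrete: a minimal sketch of a simultaneous multiple regression predicting an AP exam score from PSAT and FCAT subscores entered together, in the spirit of the full-model testing described. All data below are synthetic placeholders, not the study's 6,189 records.

```python
# Hedged sketch of a simultaneous (full-model) multiple regression: all
# predictors are entered at once, then the overall F test and per-predictor
# coefficients are inspected. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
psat_math = rng.normal(50, 10, n)     # PSAT Math (synthetic scale)
psat_cr = rng.normal(50, 10, n)       # PSAT Critical Reading
fcat_read = rng.normal(300, 40, n)    # FCAT Reading
ap_score = (0.04 * psat_math + 0.03 * psat_cr + 0.004 * fcat_read
            + rng.normal(0, 0.8, n))  # synthetic AP exam score

X = sm.add_constant(np.column_stack([psat_math, psat_cr, fcat_read]))
model = sm.OLS(ap_score, X).fit()

# Full-model fit plus each predictor's estimated contribution.
print(f"R^2 = {model.rsquared:.3f}, F p-value = {model.f_pvalue:.2g}")
print("coefficients:", np.round(model.params, 4))
```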

Relevance:

30.00%

Publisher:

Abstract:

While substance use problems are considered to be common in medical settings, they are not systematically assessed and diagnosed for treatment management. Research data suggest that the majority of individuals with a substance use disorder either do not use treatment or delay treatment-seeking for over a decade. The separation of substance abuse services from mainstream medical care and a lack of preventive services for substance abuse in primary care can contribute to under-detection of substance use problems. When fully enacted in 2014, the Patient Protection and Affordable Care Act 2010 will address these barriers by supporting preventive services for substance abuse (screening, counseling) and integration of substance abuse care with primary care. One key factor that can help to achieve this goal is to incorporate the standardized screeners or common data elements for substance use and related disorders into the electronic health records (EHR) system in the health care setting. Incentives for care providers to adopt an EHR system for meaningful use are part of the Health Information Technology for Economic and Clinical Health Act 2009. This commentary focuses on recent evidence about routine screening and intervention for alcohol/drug use and related disorders in primary care. Federal efforts in developing common data elements for use as screeners for substance use and related disorders are described. A pressing need for empirical data on screening, brief intervention, and referral to treatment (SBIRT) for drug-related disorders to inform SBIRT and related EHR efforts is highlighted.