991 results for Databases -- Evaluation
Abstract:
Final-year project (TFC) report analysing the SQL:1999 standard and comparing it with PostgreSQL and Oracle.
Abstract:
This study examines current and forthcoming measures related to the exchange of data and information in EU Justice and Home Affairs policies, with a focus on the ‘smart borders’ initiative. It argues that the growing reliance on such schemes shows no sign of being reversed and asks whether current and forthcoming proposals are necessary and original. It outlines the main challenges raised by the proposals, including issues related to the right to data protection, but also to privacy and non-discrimination.
Abstract:
Knowledge sharing is an essential component of effective knowledge management. However, evaluation apprehension, the fear that one's work may be critiqued, can inhibit knowledge sharing. Using the general framework of social exchange theory, we examined the effects of evaluation apprehension and the perceived benefit of knowledge sharing (such as enhanced reputation) on employees' knowledge sharing intentions in two contexts: interpersonal (i.e., through direct contact between two employees) and database (i.e., via repositories). Evaluation apprehension was negatively associated with knowledge sharing intentions in both contexts, while perceived benefit was positively associated with knowledge sharing intentions only in the database context. Moreover, compared to the interpersonal context, evaluation apprehension was higher and knowledge sharing lower in the database context. Finally, the negative effects of evaluation apprehension on knowledge sharing intentions were more pronounced when perceived benefits were low than when perceived benefits were high.
Abstract:
Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An Artificial Neural Network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN produced a classification of the digital images comparable to the optical scores. The ANN correctly classified the majority of the follicles, reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, the sensitivity increased to 90% and the specificity reached a maximum of 92%. These results demonstrate that digital image analysis combined with an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the magnitude of diagnostic and classification error to be measured, thereby making comparisons between databases feasible.
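For reference, a minimal Python sketch of how sensitivity and specificity figures such as those reported above are computed from binary scores; the function name and example arrays are hypothetical and are not the study's data.

```python
# Hypothetical illustration: compare ANN predictions against the reference
# (optical) scores and compute sensitivity and specificity.
def sensitivity_specificity(y_true, y_pred, positive=1):
    """Sensitivity and specificity for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Made-up binary depletion scores (1 = depleted, 0 = normal).
optical = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
ann     = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]
print(sensitivity_specificity(optical, ann))
```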
Abstract:
This publication is a support and resource document for the "National Action Plan for Promotion, Prevention and Early Intervention for Mental Health 2000". It includes indicators, measurement tools and databases relevant to assessing the implementation of the outcomes and strategies identified in the action plan.
Abstract:
With the proliferation of relational database programs for PCs and other platforms, many business end-users are creating, maintaining, and querying their own databases. More importantly, business end-users use the output of these queries as the basis for operational, tactical, and strategic decisions. Inaccurate data reduce the expected quality of these decisions. Implementing various input validation controls, including higher levels of normalisation, can reduce the number of data anomalies entering the databases. Even in well-maintained databases, however, data anomalies will still accumulate. To improve data quality, databases can be queried periodically to locate and correct anomalies. This paper reports the results of two experiments that investigated the effects of different data structures on business end-users' ability to detect data anomalies in a relational database. The results demonstrate that both unnormalised structures and higher levels of normalisation lower the effectiveness and efficiency of queries relative to first normal form. First normal form databases appear to provide the most effective and efficient data structure for business end-users formulating queries to detect data anomalies.
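As an illustration of the kind of anomaly-detection query described above (not taken from the paper; the table and column names below are hypothetical), the following Python/SQLite sketch flags rows in a first normal form table where one customer_id maps to more than one customer_name:

```python
import sqlite3

# Hypothetical first-normal-form table; names are illustrative, not from the study.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (
    order_id INTEGER, customer_id INTEGER, customer_name TEXT, amount REAL)""")
con.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, 100, "Acme Ltd", 250.0),
     (2, 100, "Acme Limited", 90.0),   # anomaly: same customer_id, different name
     (3, 200, "Globex", 40.0)])

# Periodic anomaly-detection query: flag violations of the functional
# dependency customer_id -> customer_name so they can be corrected.
anomalies = con.execute("""
    SELECT customer_id, COUNT(DISTINCT customer_name) AS name_variants
    FROM orders
    GROUP BY customer_id
    HAVING COUNT(DISTINCT customer_name) > 1""").fetchall()
print(anomalies)   # [(100, 2)]
```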
Abstract:
The data indispensable for carrying out the comprehensive, multi-faceted process of medical technology assessment (MTA) should be collected from a variety of sources. The authors distinguish between type "A" general data, useful for assessment but collected without this specific aim, and type "B" data. Registries of health care procedures or of diseases, as well as clinical databases, are cited as examples of type "B" data specifically relating to MTA. Since demographic methods are important for evaluating the long-term effects of medical technologies, examples of sources of type "A" data are presented. Their significance for health policy making is discussed.
Abstract:
Introduction: Preventing drug incompatibilities has a high impact on the safety of drug therapy. Although there are no international guidelines for managing drug incompatibilities, different decision-support tools such as handbooks, cross-tables and databases are available. In a previous study, two decision-support tools had been pre-selected by pharmacists as fitting nurses' needs on the wards [1]. The objective of this study was to have both tools evaluated by nurses to determine which would be the most suitable for their daily practice.
Materials & Methods: The evaluated tools were: (1) a cross-table of drug pairs (http://files.chuv.ch/internet-docs/pha/medicaments/pha_phatab_compatibilitessip.pdf) and (2) a colour-table (a colour for each drug according to its pH: red = acid; blue = basic; yellow = neutral; black = to be infused alone) [2]. The tools were assessed by 48 nurses in 5 units (PICU, adult and geriatric intensive care, surgery, onco-hematology) using a standardized form [1]. The scientific accuracy of the tools was evaluated by determining the compatibility of five drug pairs (rate of correct answers according to Trissel's Handbook on Injectable Drugs; chi-square test). Their ergonomics, design, reliability and applicability were estimated using visual analogue scales (VAS 0-10; 0 = null, 10 = excellent). Results are expressed as the median and interquartile range (IQR25, IQR75; Wilcoxon rank sum test).
Results: The rate of correct answers was above 90% for both tools (cross-table 96.2% vs colour-table 92.5%, p > 0.05). Ergonomics and applicability were rated higher for the cross-table [7.1 (IQR25 4.0, IQR75 8.0) vs 5.0 (IQR25 2.7, IQR75 7.0), p = 0.025; and 8.3 (IQR25 7.4, IQR75 9.2) vs 7.6 (IQR25 5.9, IQR75 8.8), p = 0.047, respectively]. The design of the colour-table was judged better [4.6 (IQR25 2.9, IQR75 7.1) vs 7.1 (IQR25 5.4, IQR75 8.4), p = 0.002]. No difference was observed in terms of reliability [7.3 (IQR25 6.5, IQR75 8.4) vs 6.7 (IQR25 5.0, IQR75 8.6), p > 0.05]. The cross-table was preferred overall by 65% of the nurses (27% colour-table, 8% undetermined), and 68% would like to have this decision-support tool available for their daily practice.
Discussion & Conclusion: Both tools showed the same accuracy in assessing drug compatibility. In terms of ergonomics and applicability, the cross-table was better than the colour-table, and it was preferred by the nurses for their daily practice. The cross-table will be implemented in our hospital as a decision-support tool to help nurses manage drug incompatibilities.
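For readers unfamiliar with the tests named above, here is a hedged Python sketch of how such comparisons are typically computed with scipy; the counts and VAS scores below are invented and do not reproduce the study's data.

```python
# Hypothetical illustration of the statistics named in the abstract.
import numpy as np
from scipy import stats

# Chi-square test on rates of correct compatibility answers
# (correct/incorrect counts per tool are made up).
contingency = np.array([[231, 9],    # cross-table: correct, incorrect
                        [222, 18]])  # colour-table: correct, incorrect
chi2, p_acc, dof, _ = stats.chi2_contingency(contingency)

# Wilcoxon rank sum test on VAS ergonomics scores (0-10) for the two tools.
vas_cross  = np.array([7.1, 8.0, 4.0, 6.5, 7.8, 5.9, 8.2])
vas_colour = np.array([5.0, 2.7, 7.0, 4.2, 6.1, 3.9, 5.5])
w, p_vas = stats.ranksums(vas_cross, vas_colour)

print(f"accuracy: chi2={chi2:.2f}, p={p_acc:.3f}")
print(f"ergonomics VAS: rank sum p={p_vas:.3f}, "
      f"medians {np.median(vas_cross):.1f} vs {np.median(vas_colour):.1f}")
```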
Abstract:
The research reported in this series of articles aimed (1) to automate the search of questioned ink specimens in ink reference collections and (2) to evaluate the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way; the latter requirement stems from the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, high-performance thin layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is based entirely on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
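The abstract refers to a probabilistic model for assigning evidential value. The paper's actual model is not reproduced here; as a purely illustrative sketch, the following Python code evaluates a score-based likelihood ratio from hypothetical similarity scores between ink profiles (all names and numbers are invented).

```python
# Hypothetical score-based likelihood-ratio (LR) sketch for ink comparisons.
import numpy as np
from scipy import stats

def correlation_score(profile_a, profile_b):
    """Similarity between two ink profiles (e.g. normalised densitograms)."""
    return float(np.corrcoef(profile_a, profile_b)[0, 1])

# Invented background data: similarity scores for known same-source and
# different-source ink pairs from a reference collection.
same_source_scores = np.random.default_rng(0).normal(0.95, 0.02, 500)
diff_source_scores = np.random.default_rng(1).normal(0.60, 0.15, 500)

# Fit simple score distributions and evaluate the LR for a new comparison.
f_same = stats.gaussian_kde(same_source_scores)
f_diff = stats.gaussian_kde(diff_source_scores)

questioned = np.random.default_rng(2).normal(0, 1, 50)
reference  = questioned + np.random.default_rng(3).normal(0, 0.1, 50)
s = correlation_score(questioned, reference)
lr = f_same(s)[0] / f_diff(s)[0]
print(f"score={s:.3f}  LR={lr:.1f}  (LR > 1 supports the same-source hypothesis)")
```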
Abstract:
The evaluation’s overarching question was “Did the activities undertaken through the state’s LSTA plan achieve results related to priorities identified in the Act?” The evaluation was conducted and is organized according to the six LSTA priorities. The research design employed two major methodologies: 1. Data sources from Iowa Library Services / State Library of Iowa, as well as U.S. and state sources, were identified for quantitative analysis. These sources, which primarily reflect outputs for various projects, included: statistics from the Public Library Annual Survey; statistics collected internally by Iowa Library Services, such as the number of libraries subscribing to sponsored databases, the number of database searches, attendance at continuing education events, and the number of interlibrary loan transactions; evaluation surveys from library training sessions, professional development workshops and other programs supported by LSTA funds; internal databases maintained by Iowa Library Services; impact results from post-training evaluations conducted by Iowa Library Services; 2010 Iowa census data from the U.S. Census Bureau; and LSTA State Program Reports for the grant period. 2. Following the quantitative analysis, the evaluator gathered qualitative data through interviews with key employees, a telephone focus group with district library consultants, and two surveys: the LSTA Evaluation Survey (Public Libraries) and the LSTA Evaluation Survey (Academic Libraries). Both surveys provided sound samples, with 43 representatives of Iowa’s 77 academic libraries and 371 representatives of Iowa’s 544 public libraries participating. Respondents represented libraries of all sizes and geographical areas. Both surveys included multiple choice and rating scale items as well as open-ended questions, from which results were coded to identify trends, issues and recommendations.
Abstract:
This report presents the results of a literature review conducted to evaluate differences in seat belt use by race. The review covered overall seat belt use, racial differences in seat belt use, overall child restraint use, racial differences in child restraint use, and information about seat belt and child restraint use specific to Iowa. A number of national and regional studies were found and are presented. Mixed results were found as to whether racial differences exist in either seat belt use or child restraint use. However, in the course of the literature review, several items of interest to safety in Iowa emerged, although little data specific to Iowa was encountered. First, national seat belt use appears to be lower among African-Americans than among Caucasians or Hispanics. Second, national crash rates among Hispanics appear to be higher than those for Caucasians, particularly when population and lower vehicle miles traveled (VMT) are considered. One issue that should be kept in mind throughout this literature review is that the Hispanic population may be larger than reported due to large numbers of undocumented persons who do not appear in population estimates, driver’s license records, or other databases.
Abstract:
Despite the development of novel typing methods based on whole genome sequencing, most laboratories still rely on classical molecular methods for outbreak investigation or surveillance. Reference methods for Clostridium difficile include ribotyping and pulsed-field gel electrophoresis, which are band-comparison methods that are often difficult to establish and require reference strain collections. Here, we present the double locus sequence typing (DLST) scheme as a tool to analyse C. difficile isolates. Using a collection of clinical C. difficile isolates recovered during a 1-year period, we evaluated the performance of DLST and compared the results to multilocus sequence typing (MLST), a sequence-based method that has been used to study the structure of bacterial populations and highlight major clones. DLST had a higher discriminatory power than MLST (Simpson's index of diversity of 0.979 versus 0.965) and successfully typed all isolates of the study (100% typeability). Previous studies showed that the discriminatory power of ribotyping was comparable to that of MLST; thus, DLST might be more discriminatory than ribotyping. DLST is easy to establish and provides several advantages, including no need for DNA extraction [the polymerase chain reaction (PCR) is performed directly on colonies], no specific instrumentation, low cost and an unambiguous definition of types. Moreover, the implementation of a DLST typing scheme on an Internet database, as previously done for Staphylococcus aureus and Pseudomonas aeruginosa ( http://www.dlst.org ), will allow users to obtain the DLST type easily by submitting sequencing files directly and will avoid problems associated with multiple databases.
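The discriminatory power quoted above is Simpson's index of diversity. A minimal Python sketch of the calculation follows, using hypothetical type counts rather than the study's data.

```python
# Simpson's index of diversity (discriminatory power), as used above to
# compare DLST and MLST; the type counts below are hypothetical.
def simpson_diversity(type_counts):
    """D = 1 - sum(n_j*(n_j-1)) / (N*(N-1)) over all types j."""
    n = sum(type_counts)
    return 1 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Example: 60 isolates; a scheme that splits large groups into more types
# yields a higher index, i.e. better discrimination.
mlst_counts = [20, 15, 10, 8, 4, 3]             # fewer, larger clusters
dlst_counts = [12, 10, 8, 8, 6, 5, 4, 3, 2, 2]  # same isolates, more types
print(round(simpson_diversity(mlst_counts), 3))
print(round(simpson_diversity(dlst_counts), 3))
```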