756 results for Language Analysis


Relevance: 30.00%

Abstract:

Context. Despite the rapid growth of disease management programs, there are still questions about their efficacy and effectiveness for improving patient outcomes and their ability to reduce costs associated with chronic disease.

Objective. To determine the effectiveness of disease management programs in improving the results of HbA1c tests, lipid profiles and systolic blood pressure (SBP) readings among diabetics. These three quantitative measures are widely accepted methods of determining the quality of a patient's diabetes management and the potential for future complications.

Data Sources. MEDLINE and CINAHL were searched from 1950 to June 2008 using MeSH terms designed to capture all relevant studies. Scopus pearling and hand searching were also done. Only English-language articles were selected.

Study Selection. Titles and abstracts of the 2347 articles were screened against predetermined inclusion and exclusion criteria, yielding 217 articles for full screening. After full-article screening, 29 studies were selected for inclusion in the review.

Data Extraction. From the selected studies, data extraction included sample size, mean change over baseline, and standard deviation for each control and experimental arm.

Results. The pooled results show a mean HbA1c reduction of 0.64% (95% CI, -0.83 to -0.44), a mean SBP reduction of 7.39 mmHg (95% CI, -11.58 to -3.2), a mean total cholesterol reduction of 5.74 mg/dL (95% CI, -10.01 to -1.43), and a mean LDL cholesterol reduction of 3.74 mg/dL (95% CI, -8.34 to 0.87). Results for HbA1c, SBP and total cholesterol were statistically significant, while the results for LDL cholesterol were not.

Conclusions. The findings suggest that disease management programs utilizing five hallmarks of care can be effective at improving intermediate outcomes among diabetics. However, given the significant heterogeneity present, there may be fundamental differences with respect to study-specific interventions and populations that render them inappropriate for meta-analysis.
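
Pooled estimates of this kind come from inverse-variance meta-analysis. The following is a minimal sketch, assuming a fixed-effect model and hypothetical per-study values (the review itself reports significant heterogeneity, which would normally argue for a random-effects model instead):

```python
import math

# Hypothetical per-study data: (mean change vs. control, standard error).
studies = [(-0.7, 0.15), (-0.5, 0.20), (-0.8, 0.25)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))

# 95% confidence interval under a normal approximation.
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled mean difference: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```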

Relevance: 30.00%

Abstract:

As schools are pressured to perform on academics and standardized examinations, they are reluctant to dedicate increased time to physical activity. After-school exercise and health programs may provide an opportunity to engage in more physical activity without taking time away from coursework during the day. The current study is a secondary analysis of data from a randomized trial of a 10-week after-school program (six schools, n = 903) that implemented an exercise component based on the CATCH physical activity component and health modules based on the culturally tailored Bienestar health education program. Outcome variables included BMI, aerobic capacity, health knowledge and healthy food intentions, assessed through path analysis techniques. Both the baseline model (χ2 (df = 8) = 16.90, p = .031; RMSEA = .035, 90% CI .010–.058; NNFI = 0.983; CFI = 0.995) and the model incorporating intervention participation (χ2 (df = 10) = 11.59, p = .314; RMSEA = .013, 90% CI .010–.039; NNFI = 0.996; CFI = 0.999) proved to be a good fit to the data. Experimental group participation was not predictive of changes in health knowledge, intentions to eat healthy foods or Body Mass Index, but it was associated with increased aerobic capacity, β = .067, p < .05. School characteristics, including SES and language proficiency, proved to be significantly associated with changes in knowledge and physical indicators. Further study of the effects of school-level variables on intervention outcomes is recommended so that tailored interventions can be developed aimed at the specific characteristics of each participating school.
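
Among the fit indices reported, RMSEA can be recomputed directly from the model chi-square, its degrees of freedom, and the sample size. A small sketch using the standard formula and the baseline-model values from the abstract:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation from a model chi-square."""
    return math.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

# Baseline model values reported in the abstract (n = 903 students).
print(round(rmsea(16.90, 8, 903), 3))  # 0.035, matching the reported RMSEA
```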

Relevance: 30.00%

Abstract:

In December 1980, following increasing congressional and constituent interest in problems associated with hazardous waste, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was passed. During its development, the legislative initiative was seriously compromised, which resulted in a less exhaustive approach than was originally sought. Still, CERCLA (Superfund), which established, among other things, authority to clean up abandoned waste dumps and to respond to emergencies caused by releases of hazardous substances, was welcomed by many as an important initial law critical to the cleanup of the nation's hazardous waste. Expectations raised by passage of this bill were tragically unmet. By the end of four years, only six sites had been declared by the EPA as cleaned. Seemingly, even those determinations were liberal; of the six sites, two were subsequently identified as requiring further cleanup.

This analysis is focused upon the implementation failure of the Superfund. In light of that focus, the discussion develops linkages between flaws in the legislative language and the foreclosure of chances for implementation success. Specification of such linkages is achieved through examination of the legislative initiative, identification of its flaws and characterization of attendant deficits in implementation ability. Subsequent analysis addresses how such legislative frailties might have been avoided and the attendant regulatory weaknesses which have contributed to implementation failure. Each of these analyses is accomplished through application of an expanded approach to the backward-mapping analytic technique as presented by Elmore. Results and recommendations follow.

Consideration is devoted to a variety of regulatory issues as well as to those pertinent to legislative and implementation analysis. Problems in assessing legal liability associated with hazardous waste management are presented, as is a detailed review of the legislative development of Superfund and its initial implementation by Gorsuch's EPA.

Relevance: 30.00%

Abstract:

Background. Among Hispanics, the HPV vaccine has the potential to eliminate disparities in cervical cancer incidence and mortality, but only if optimal rates of vaccination are achieved. Media can be an important information source for increasing HPV knowledge and awareness of the vaccine. Very little is known about how media use among Hispanics affects their HPV knowledge and vaccine awareness. Even less is known about what differences exist in media use and information processing between English- and Spanish-speaking Hispanics.

Aims. To examine the relationships between three health communication variables (media exposure, HPV-specific information scanning and seeking) and three HPV outcomes (knowledge, vaccine awareness and initiation) among English- and Spanish-speaking Hispanics.

Methods. Cross-sectional data from a survey administered to Hispanic mothers in Dallas, Texas, were used for univariate and multivariate logistic regression analyses. The sample used for analysis included 288 mothers of females aged 8-22 recruited from clinics and community events. The dependent variables of interest were HPV knowledge, HPV vaccine awareness and initiation. The independent variables were media exposure and HPV-specific information scanning and seeking. Language was tested as an effect modifier of the relationship between the health communication variables and the HPV outcomes.

Results. English-speaking mothers reported more media exposure and more HPV-specific information scanning and seeking than Spanish speakers. Scanning for HPV information was associated with more HPV knowledge (OR = 4.26, 95% CI 2.41-7.51), vaccine awareness (OR = 10.01, 95% CI 5.43-18.47) and vaccine initiation (OR = 2.54, 95% CI 1.09-5.91). Seeking HPV-specific information was associated with more knowledge (OR = 2.27, 95% CI 1.23-4.16), awareness (OR = 6.60, 95% CI 2.74-15.91) and initiation (OR = 4.93, 95% CI 2.64-9.20). Language moderated the effect of information scanning and seeking on vaccine awareness.

Discussion. Differences in information scanning and seeking behaviors among Hispanic subgroups have the potential to lead to disparities in vaccine awareness.

Conclusion. Findings from this study underscore health communication differences among Hispanics and emphasize the need to target Spanish-language as well as English-language media aimed at Hispanics to improve knowledge and awareness.
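
Effect modification of the kind tested here is typically modelled with an interaction term in a logistic regression. A minimal sketch on simulated data; the variable names, coefficients, and data are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 288  # sample size from the abstract; the data below are simulated

# Hypothetical variables: scanning (0/1 scanned for HPV information),
# spanish (0/1 survey taken in Spanish), aware (0/1 vaccine awareness).
df = pd.DataFrame({
    "scanning": rng.integers(0, 2, n),
    "spanish": rng.integers(0, 2, n),
})
logit = -1 + 2.0 * df.scanning - 0.8 * df.scanning * df.spanish
df["aware"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The scanning:spanish interaction term tests effect modification by language.
model = smf.logit("aware ~ scanning * spanish", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```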

Relevance: 30.00%

Abstract:

Hepatitis B virus (HBV) is a significant cause of liver disease and related complications worldwide. Both injecting and non-injecting drug users are at increased risk of contracting HBV infection. Scientific evidence suggests that drug users have a subnormal response to HBV vaccination and that their seroprotection rates are lower than those in the general population, potentially due to vaccine factors, host factors, or both. The purpose of this systematic review is to examine the rates of seroprotection following HBV vaccination in drug-using populations and to conduct a meta-analysis to identify the factors associated with varying seroprotection rates. Seroprotection is defined as developing an anti-HBs antibody level of ≥ 10 mIU/ml after receiving the HBV vaccine. Original research articles were searched using online databases and the reference lists of shortlisted articles. HBV vaccine intervention studies reporting seroprotection rates in drug users and published in English during or after 1989 were eligible. Out of 235 citations reviewed, 11 studies were included in this review. The reported seroprotection rates ranged from 54.5 to 97.1%. Combination vaccine (HAV and HBV) (risk ratio 12.91, 95% CI 2.98-55.86, p = 0.003), measurement of anti-HBs with microparticle immunoassay (risk ratio 3.46, 95% CI 1.11-10.81, p = 0.035) and anti-HBs antibody measurement at 2 months after the last HBV vaccine dose (RR 4.11, 95% CI 1.55-10.89, p = 0.009) were significantly associated with higher seroprotection rates. Although statistically nonsignificant, the variables mean age > 30 years, higher prevalence of anti-HBc antibody and anti-HIV antibody in the sample population, and current drug use (not in drug rehabilitation treatment) were strongly associated with decreased seroprotection rates. The proportion of injecting drug users, vaccine dose and accelerated vaccine schedule were not predictors of heterogeneity across studies. The studies examined in this review were significantly heterogeneous (Q = 180.850, p < 0.001), and the factors identified should be considered when comparing immune response across studies. The combination vaccine showed promising results; however, its effectiveness compared to the standard HBV vaccine needs to be examined systematically. Immune response in drug users can possibly be improved by the use of bivalent vaccines, booster doses, and improved vaccine completion rates through integrated public programs and incentives.
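
The heterogeneity statistic Q reported above is Cochran's Q. A minimal sketch of how it is computed from per-study effect estimates; the study values here are hypothetical:

```python
# Hypothetical per-study log risk ratios and standard errors.
log_rr = [0.10, 0.45, -0.05, 0.30]
se = [0.12, 0.20, 0.15, 0.25]

w = [1.0 / s ** 2 for s in se]
pooled = sum(wi * t for wi, t in zip(w, log_rr)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
# Under homogeneity, Q ~ chi-square with k - 1 degrees of freedom.
q = sum(wi * (t - pooled) ** 2 for wi, t in zip(w, log_rr))
i2 = max(0.0, (q - (len(log_rr) - 1)) / q) * 100  # I^2 heterogeneity, in %
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```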

Relevance: 30.00%

Abstract:

Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document the delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
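
The abstract does not spell out the transformation itself, so the following is only an illustrative sketch of the general idea: mapping syntax-poor chief complaints to formal concepts via lexicon lookup rather than parsing. The mini-lexicon and concept names are hypothetical; a real system would draw on a terminology such as UMLS:

```python
# Hypothetical mini-lexicon mapping surface variants to formal concepts.
LEXICON = {
    "sob": "DYSPNEA", "short of breath": "DYSPNEA",
    "cp": "CHEST_PAIN", "chest pain": "CHEST_PAIN",
    "n/v": "NAUSEA_VOMITING", "fever": "FEVER",
}

def normalize(chief_complaint: str) -> list[str]:
    """Map unconstrained triage text to formal concepts, ignoring syntax."""
    text = chief_complaint.lower()
    found = {concept for phrase, concept in LEXICON.items() if phrase in text}
    return sorted(found)

print(normalize("c/o CP and SOB x2 days"))  # ['CHEST_PAIN', 'DYSPNEA']
```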

Relevance: 30.00%

Abstract:

This paper describes a preprocessing module for improving the performance of a Spanish into Spanish Sign Language (Lengua de Signos Española: LSE) translation system when dealing with sparse training data. This preprocessing module replaces Spanish words with associated tags. The list of Spanish words (vocabulary) and associated tags used by this module is computed automatically by selecting, for each Spanish word, the sign with the highest probability of being its translation. This automatic tag extraction has been compared to a manual strategy, achieving almost the same improvement. In this analysis, several alternatives for dealing with non-relevant words have been studied; non-relevant words are Spanish words not assigned to any sign. The preprocessing module has been incorporated into two well-known statistical translation architectures: a phrase-based system and a Statistical Finite State Transducer (SFST). The system has been developed for a specific application domain: the renewal of Identity Documents and Driver's Licenses. In order to evaluate the system, a parallel corpus made up of 4080 Spanish sentences and their LSE translations has been used. The evaluation results revealed a significant performance improvement when this preprocessing module was included. In the phrase-based system, the proposed module increased BLEU (Bilingual Evaluation Understudy) from 73.8% to 81.0% and the human evaluation score from 0.64 to 0.83. In the case of the SFST, BLEU increased from 70.6% to 78.4% and the human evaluation score from 0.65 to 0.82.
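
A minimal sketch of such a preprocessing step, under the assumption (consistent with the description above) that each word is tagged with the sign it is most frequently aligned to in training data; the alignment pairs and sign names are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical word-aligned training pairs (Spanish word, LSE sign);
# None marks a non-relevant word, i.e. one not assigned to any sign.
alignments = [
    ("carnet", "DOCUMENTO"), ("carné", "DOCUMENTO"),
    ("renovar", "RENOVAR"), ("el", None), ("quiero", "QUERER"),
]

# For each Spanish word, keep the sign it is most often aligned to.
counts = defaultdict(Counter)
for word, sign in alignments:
    counts[word][sign] += 1
tag_of = {w: c.most_common(1)[0][0] for w, c in counts.items()}

def preprocess(sentence: str, drop_nonrelevant: bool = True) -> list[str]:
    """Replace Spanish words with their most probable sign tags."""
    tags = (tag_of.get(w) for w in sentence.lower().split())
    return [t for t in tags if t is not None or not drop_nonrelevant]

print(preprocess("quiero renovar el carnet"))  # ['QUERER', 'RENOVAR', 'DOCUMENTO']
```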

Relevance: 30.00%

Abstract:

Globalization has intensified competition, as evidenced by the growing number of international classification systems (rankings) and the attention paid to them. Doctoral education has an international character in itself; it should promote opportunities for graduate students to participate in these international studies. Quality and competitiveness are two of the most important issues for universities. To promote graduates' interest in continuing their education beyond the graduate level, it would be necessary to improve the published information on doctoral programs, increasing their visibility and providing high-quality, easily accessible and comparable information that includes all the relevant aspects of these programs. The authors analysed the website contents of doctoral programs, observed both a lack of quality and very poor information about the contents, and concluded that none of them could serve as a model for creating new websites. Recommendations on web format and contents were made by a discussion group. They recommended an attractive design and a page with easy access to contents, easy to find on the net, and with the information in more than one language. It should include complete program and academic staff information. The results of the studies should also be included and be easily accessible, with quantitative data such as the number of students who completed the program, scholarships, publications, research projects, average duration of the studies, etc. This will facilitate the choice of a program.

Relevance: 30.00%

Abstract:

We present an evaluation of a spoken language dialogue system with a module for the management of user-related information, stored as user preferences and privileges. The flexibility of our dialogue management approach, based on Bayesian Networks (BN), together with a contextual information module, which performs different strategies for handling such information, allows us to include user information as a new level in the Context Manager hierarchy. We propose a set of objective and subjective metrics to measure the relevance of the different contextual information sources. The analysis of our evaluation scenarios shows that the relevance of the short-term information (i.e. the system status) remains fairly stable throughout the dialogue, whereas the dialogue history and the user profile (i.e. the middle-term and the long-term information, respectively) play a complementary role, their usefulness evolving as the dialogue progresses.
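
Purely as an illustration of the qualitative finding (not the paper's actual metrics), here is a toy weighting of the three contextual sources in which the short-term source stays stable while the middle- and long-term sources gain weight as turns accrue; all numbers are invented:

```python
def context_weights(turn: int, horizon: int = 10) -> dict[str, float]:
    """Toy relevance weights for the three contextual information sources."""
    progress = min(turn / horizon, 1.0)
    raw = {
        "system_status": 0.5,                        # short-term: stable
        "dialogue_history": 0.25 + 0.25 * progress,  # middle-term: grows
        "user_profile": 0.25 + 0.5 * progress,       # long-term: grows most
    }
    total = sum(raw.values())
    return {source: weight / total for source, weight in raw.items()}

for turn in (1, 5, 10):
    print(turn, context_weights(turn))
```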

Relevance: 30.00%

Abstract:

The modelling of critical infrastructures (CIs) is an important issue that needs to be properly addressed, for several reasons. It is a basic support for making decisions about operation and risk reduction. It might help in understanding high-level states at the system-of-systems layer, which are not readily evident to the organisations that manage the lower-level technical systems. Moreover, it is also indispensable for setting a common reference between operators and authorities, for agreeing on the incident scenarios that might affect those infrastructures. So far, critical infrastructures have been modelled ad hoc, on the basis of knowledge and practice derived from less complex systems. As there is no theoretical framework, most of these efforts proceed without clear guides and goals, using informally defined schemas based mostly on boxes and arrows. Different CIs (electricity grid, telecommunications networks, emergency support, etc.) have been modelled using particular schemas that were not directly translatable from one CI to another. If there is a desire to build a science of CIs, it is because there are some observable commonalities that different CIs share. Up until now, however, those commonalities have not been adequately compiled or categorized, so building models of CIs rooted in such commonalities was not possible. This report explores the issue of which elements underlie every CI and how those elements can be used to develop a modelling language that will enable CI modelling and, subsequently, analysis of CI interactions, with a special focus on resilience.
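
As a sketch of what such shared elements might look like in code (an assumption for illustration, not taken from the report), typed nodes and typed dependencies are a plausible common core across CIs:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    sector: str             # e.g. "electricity", "telecom", "emergency"
    state: str = "nominal"  # nominal / degraded / failed

@dataclass
class Dependency:
    source: str
    target: str
    kind: str  # e.g. "physical", "cyber", "logical", "geographic"

@dataclass
class CIModel:
    nodes: dict[str, Node] = field(default_factory=dict)
    deps: list[Dependency] = field(default_factory=list)

    def dependents_of(self, name: str) -> list[str]:
        """Nodes directly affected if the named node fails."""
        return [d.target for d in self.deps if d.source == name]

m = CIModel()
m.nodes["substation"] = Node("substation", "electricity")
m.nodes["telecom_hub"] = Node("telecom_hub", "telecom")
m.deps.append(Dependency("substation", "telecom_hub", "physical"))
print(m.dependents_of("substation"))  # ['telecom_hub']
```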

Relevance: 30.00%

Abstract:

Systems of Systems (SoS) present challenging features, and existing tools often prove inadequate for their analysis, especially for heterogeneous networked infrastructures. Most accident scenarios in networked systems cannot be addressed by a simplistic black-or-white (i.e. functioning or failed) approach. Slow deviations from nominal operating conditions may cause degraded behaviours that suddenly end up in unexpected malfunctioning, with large portions of the network affected. In this paper, we present a language for modelling networked SoS. The language makes it possible to represent interdependencies of various natures, e.g. technical, organizational and human. The representation of interdependencies is based on control relationships that exchange physical quantities and related information. The language also makes it possible to identify accident scenarios by representing the propagation of failure events throughout the network. The results can be used for assessing the effectiveness of those mechanisms and measures that contribute to the overall resilience, in both qualitative and quantitative terms. The presented modelling methodology is general enough to be applied in combination with already existing system analysis techniques, such as risk assessment, dependability and performance evaluation.
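
To make the propagation idea concrete, here is a deliberately simplified sketch: binary failure propagation over a hypothetical dependency graph. Note that the paper argues for richer degraded states than this functioning/failed toy captures; all names are invented:

```python
from collections import deque

# Hypothetical dependency graph: depends_on[v] lists the nodes v needs,
# so a failure of any of them can propagate to v.
depends_on = {
    "pumping_station": ["power_grid"],
    "telecom": ["power_grid"],
    "scada": ["telecom"],
    "power_grid": [],
}

def affected_by(failed: str) -> set[str]:
    """Breadth-first propagation of a failure event through the network."""
    hit, queue = {failed}, deque([failed])
    while queue:
        node = queue.popleft()
        for v, preds in depends_on.items():
            if node in preds and v not in hit:
                hit.add(v)
                queue.append(v)
    return hit - {failed}

print(sorted(affected_by("power_grid")))  # ['pumping_station', 'scada', 'telecom']
```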

Relevance: 30.00%

Abstract:

Static analyses of object-oriented programs usually rely on intermediate representations that respect the original semantics while having a more uniform and basic syntax. Most of the work involving object-oriented languages and abstract interpretation usually omits the description of that language or just refers to the Control Flow Graph (CFG) it represents. However, this lack of formalization results, on one hand, in an absence of assurances regarding the correctness of the transformation and, on the other, typically couples the analysis strongly to the source language. In this work we present a framework for the analysis of object-oriented languages in which, in a first phase, we transform the input program into a representation based on Horn clauses. This allows, on one hand, proving the transformation correct subject to a simple condition and, on the other, applying an existing analyzer for (constraint) logic programming to automatically derive a safe approximation of the semantics of the original program. The approach is flexible in the sense that the first phase decouples the analyzer from most language-dependent features, and correct because the set of Horn clauses returned by the transformation phase safely approximates the standard semantics of the input program. The resulting analysis is also reasonably scalable due to the use of mature, modular (C)LP-based analyzers. The overall approach allows us to report results for medium-sized programs.
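
An illustrative sketch of the idea (not the paper's actual transformation rules): a tiny method rendered as Horn clauses that a (C)LP analyzer could consume. The encoding below is invented for illustration:

```python
# Source method being encoded (one clause per control-flow branch):
#   int abs(int x) { if (x < 0) return -x; else return x; }

clauses = [
    # abs(X, R) holds when R is the method's result for input X.
    "abs(X, R) :- X < 0, R is -X.",
    "abs(X, R) :- X >= 0, R is X.",
]

def emit(clauses: list[str]) -> str:
    """Render the Horn-clause program that the analyzer would consume."""
    return "\n".join(clauses)

print(emit(clauses))
```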

Relevance: 30.00%

Abstract:

We present a static analysis that infers both upper and lower bounds on the usage that a logic program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) resource usage functions for all predicates in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed some preliminary experiments with some concrete resources such as execution steps, bytes sent or received by an application, number of files left open, number of accesses to a database, number of calls to a procedure, number of asserts/retracts, etc. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.
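
A small sketch of the notion of a user-defined resource and of the per-predicate bound functions such an analysis would derive; the resource name, costs and predicates are hypothetical:

```python
# Resource "bytes_sent": a basic cost is attached to elementary operations.
basic_cost = {"send": 8, "unify": 0}  # e.g. each send/1 costs 8 bytes

def send_list_cost(n: int) -> tuple[int, int]:
    """Bounds on bytes sent by a predicate that sends every element of a
    list of length n exactly once; lower and upper bounds coincide."""
    bound = n * basic_cost["send"]
    return bound, bound

def filter_send_cost(n: int) -> tuple[int, int]:
    """Bounds for a predicate that sends only elements passing a test:
    best case nothing is sent, worst case everything is."""
    return 0, n * basic_cost["send"]

print(send_list_cost(100))   # (800, 800)
print(filter_send_cost(100)) # (0, 800)
```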

Relevance: 30.00%

Abstract:

Abstract interpretation-based data-flow analysis of logic programs is, at this point, relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Existing proposals generally restrict in one way or another the classes of programs which can be analyzed. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially the recent ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs which use the full power of the language.
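
For readers unfamiliar with the kind of information such data-flow analyses track, here is a toy groundness domain, invented for illustration (it is not the paper's domain):

```python
# Toy abstract domain for groundness of Prolog terms.
# Values: "ground" (definitely ground), "any" (unknown); "any" is the top.

def lub(a: str, b: str) -> str:
    """Least upper bound: join of information coming from two clauses."""
    return a if a == b else "any"

def abstract_append(xs: str, ys: str) -> str:
    """Abstract success pattern of append/3: if both input lists are
    ground, the result list is ground as well."""
    return "ground" if xs == ys == "ground" else "any"

print(abstract_append("ground", "ground"))  # ground
print(lub("ground", "any"))                 # any
```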

Relevance: 30.00%

Abstract:

While logic programming languages offer a great deal of scope for parallelism, there is usually some overhead associated with the execution of goals in parallel because of the work involved in task creation and scheduling. In practice, therefore, the "granularity" of a goal, i.e. an estimate of the work available under it, should be taken into account when deciding whether or not to execute a goal concurrently as a separate task. This paper describes a method for estimating the granularity of a goal at compile time. The runtime overhead associated with our approach is usually quite small, and the performance improvements resulting from the incorporation of grain-size control can be quite good. This is shown by means of experimental results.
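
A minimal sketch of the run-or-spawn decision that grain-size control enables; the cost function, threshold and use of threads are hypothetical stand-ins for the paper's compile-time estimates:

```python
import threading

OVERHEAD = 1000  # assumed cost of creating and scheduling a task

def cost_estimate(n: int) -> int:
    """Compile-time grain-size estimate for a goal over input of size n."""
    return 50 * n  # e.g. an inferred linear cost function

def run(goal, n: int):
    """Spawn the goal as a separate task only if its estimated work
    exceeds the task-creation overhead; otherwise run it sequentially."""
    if cost_estimate(n) > OVERHEAD:
        t = threading.Thread(target=goal, args=(n,))
        t.start()
        return t
    goal(n)
    return None

run(lambda n: print(f"goal({n}) done"), 5)    # sequential (cost 250)
run(lambda n: print(f"goal({n}) done"), 100)  # parallel task (cost 5000)
```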