967 results for explicit categorization
Abstract:
Background: Colonoscopy is usually proposed for the evaluation of lower gastrointestinal blood loss (hematochezia) or iron deficiency anemia (IDA). Clinical practice guidelines support this approach, but formal evidence is lacking. Real clinical scenarios made available on the web would greatly help clinicians decide whether colonoscopy is appropriate for a given patient. Method: A multidisciplinary multinational expert panel (EPAGE II) developed appropriateness criteria based on the best published evidence (systematic reviews, clinical trials, guidelines) and experts' judgement. Using the explicit RAND Appropriateness Method (3 rounds of experts' votes and a panel meeting), 102 clinical scenarios were judged inappropriate, uncertain, appropriate, or necessary. Results: In IDA, colonoscopy was appropriate in patients >50 years and necessary in the presence of lower abdominal symptoms. In both men and women aged <50 years, colonoscopy was appropriate if prior sigmoidoscopy and/or gastroscopy did not explain the IDA, and necessary if lower gastrointestinal symptoms were present. In women <50 years with a potential gynecological cause, additional lower gastrointestinal symptoms rendered colonoscopy appropriate. In patients >50 years with hematochezia, colonoscopy was always appropriate and mostly necessary, except if a prior colonoscopy was normal within the previous 5 years. Under age 50 years, the presence of any risk factor for colorectal cancer (CRC) and no previous normal colonoscopy (within the last 5 years) made this procedure appropriate and necessary. Conclusion: Colonoscopy is appropriate and even necessary for many indications related to iron deficiency anemia or hematochezia, in particular in patients aged >50 years. The main factors influencing appropriateness are age, results of prior investigations (sigmoidoscopy, gastroscopy, previous colonoscopy), CRC risk and sex. The EPAGE II appropriateness criteria are available at www.epage.ch.
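As a reading aid, the hematochezia rules quoted above can be paraphrased as a small decision function. This is a hypothetical sketch, not the published criteria: the function name and its boolean inputs are our simplifications of the 102 scenarios, whose authoritative versions are on www.epage.ch.

```python
# Hypothetical paraphrase of the hematochezia rules in the abstract above.
# The real EPAGE II criteria cover 102 scenarios; inputs here are simplified.

def hematochezia_category(age, normal_colonoscopy_within_5y, crc_risk_factor):
    """Return an appropriateness category for colonoscopy in hematochezia."""
    if age > 50:
        # ">50 years ... always appropriate and mostly necessary, except if a
        # prior colonoscopy was normal within the previous 5 years"
        return "uncertain" if normal_colonoscopy_within_5y else "necessary"
    # "<50 years: any CRC risk factor and no previous normal colonoscopy
    # (within the last 5 years) made this procedure appropriate and necessary"
    if crc_risk_factor and not normal_colonoscopy_within_5y:
        return "necessary"
    return "uncertain"

print(hematochezia_category(62, False, False))  # -> necessary
```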
Abstract:
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to maintain the electroneutrality of the system is required. Here, two approaches are used, with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups.
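For readers unfamiliar with semi-grand canonical titration moves, the following minimal sketch (reduced units, our notation; not the authors' code) shows the standard constant-pH Metropolis acceptance rule for a deprotonation attempt. The electroneutrality procedures discussed above enter through delta_E, which must include the energy of the counterion inserted (or coion removed) together with the deprotonation.

```python
import math
import random

def accept_deprotonation(delta_E, pH, pKa):
    """Metropolis test for a titration move AH -> A(-) + H(+).

    delta_E: total change in electrostatic energy in units of kT, including
    the counterion insertion (or coion removal) that keeps the simulation
    box electroneutral after the surface charge changes.
    """
    # Standard constant-pH acceptance argument; sign conventions vary
    # between references.
    arg = -delta_E + math.log(10.0) * (pH - pKa)
    if arg >= 0.0:
        return True
    return random.random() < math.exp(arg)
```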
Abstract:
The 2009 International Society of Urological Pathology Consensus Conference in Boston made recommendations regarding the standardization of pathology reporting of radical prostatectomy specimens. Issues relating to the infiltration of tumor into the seminal vesicles and regional lymph nodes were coordinated by working group 4. There was a consensus that complete blocking of the seminal vesicles was not necessary, although sampling of the junction of the seminal vesicles and prostate was mandatory. There was consensus that sampling of the vas deferens margins was not obligatory. There was also consensus that only muscular wall invasion of the extraprostatic seminal vesicle should be regarded as seminal vesicle invasion. Categorization into types of seminal vesicle spread was agreed by consensus to be unnecessary. For examination of lymph nodes, there was consensus that special techniques such as frozen sectioning were of use only in high-risk cases. There was no consensus on the optimal sampling method for pelvic lymph node dissection specimens, although there was consensus that all lymph nodes should be completely blocked as a minimum. There was also a consensus that a count of the number of lymph nodes harvested should be attempted. In view of recent evidence, there was consensus that the diameter of the largest lymph node metastasis should be measured. These consensus decisions will hopefully clarify the difficult areas of pathological assessment in radical prostatectomy evaluation and improve the concordance of research series to allow more accurate assessment of patient prognosis.
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
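For concreteness, here is a minimal sketch of the two net-benefit quantities distinguished above, using the standard decision-curve formulas (TP/n − FP/n · pt/(1−pt) for the treated, and its mirror image for the untreated). This is our illustration, not the paper's code; the paper's exact definition of the overall net benefit and its case-control correction should be taken from the article itself.

```python
import numpy as np

def net_benefit_treated(y, p, pt):
    """Standard net benefit of treating all subjects with risk p >= pt."""
    y, treat = np.asarray(y), np.asarray(p) >= pt
    n = y.size
    tp = np.sum(treat & (y == 1)) / n   # true positives per subject
    fp = np.sum(treat & (y == 0)) / n   # false positives per subject
    return tp - fp * pt / (1.0 - pt)

def net_benefit_untreated(y, p, pt):
    """Mirror-image net benefit for the subjects left untreated."""
    y, spare = np.asarray(y), np.asarray(p) < pt
    n = y.size
    tn = np.sum(spare & (y == 0)) / n   # true negatives per subject
    fn = np.sum(spare & (y == 1)) / n   # false negatives per subject
    return tn - fn * (1.0 - pt) / pt

# One simple way to combine the two (the paper's weighting may differ):
# overall = net_benefit_treated(y, p, pt) + net_benefit_untreated(y, p, pt)
```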
Abstract:
We investigate the selective pressures on a social trait when evolution occurs in a population of constant size. We show that any social trait that is spiteful simultaneously qualifies as altruistic. In other words, any trait that reduces the fitness of less related individuals necessarily increases that of related ones. Our analysis demonstrates that the distinction between "Hamiltonian spite" and "Wilsonian spite" is not justified on the basis of fitness effects. We illustrate this general result with an explicit model for the evolution of a social act that reduces the recipient's survival ("harming trait"). This model shows that the evolution of harming is favoured if local demes are of small size and migration is low (philopatry). Further, deme size and migration rate determine whether harming evolves as a selfish strategy by increasing the fitness of the actor, or as a spiteful/altruistic strategy through its positive effect on the fitness of close kin.
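The zero-sum bookkeeping behind this result can be written compactly. The notation below is ours, a sketch of the argument rather than the paper's model: in a population of constant size N, the fitness changes caused by a social act must cancel, so any loss imposed on less related individuals is exactly offset by a gain to the rest, who are on average more related to the actor.

```latex
% Sketch of the constant-population-size argument (our notation).
\begin{equation}
  \sum_{i=1}^{N} \Delta w_i = 0
  \quad\Longrightarrow\quad
  \sum_{i\,:\,r_i < \bar{r}} \Delta w_i
  \;=\;
  -\sum_{i\,:\,r_i \ge \bar{r}} \Delta w_i ,
\end{equation}
% A trait with a negative left-hand side (spite towards the less related)
% therefore has a positive right-hand side (altruism towards the more
% related). Selection weights these effects by relatedness, favouring the
% trait when \(\sum_i r_i \, \Delta w_i > 0\).
```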
Abstract:
Aim To assess the geographical transferability of niche-based species distribution models fitted with two modelling techniques. Location Two distinct geographical study areas in Switzerland and Austria, in the subalpine and alpine belts. Methods Generalized linear and generalized additive models (GLM and GAM) with a binomial probability distribution and a logit link were fitted for 54 plant species, based on topoclimatic predictor variables. These models were then evaluated quantitatively and used for spatially explicit predictions within (internal evaluation and prediction) and between (external evaluation and prediction) the two regions. Comparisons of evaluations and spatial predictions between regions and models were conducted to test whether species and methods meet the criteria of full transferability. By full transferability, we mean that: (1) the internal evaluation of models fitted in regions A and B must be similar; (2) a model fitted in region A must retain at least a comparable external evaluation when projected into region B, and vice versa; and (3) internal and external spatial predictions have to match within both regions. Results The measures of model fit are, on average, 24% higher for GAMs than for GLMs in both regions. However, the differences between internal and external evaluations (AUC coefficient) are also higher for GAMs than for GLMs (a difference of 30% for models fitted in Switzerland and 54% for models fitted in Austria). Transferability, as measured with the AUC evaluation, fails for 68% of the species in Switzerland and 55% in Austria for GLMs (respectively for 67% and 53% of the species for GAMs). For both GAMs and GLMs, the agreement between internal and external predictions is rather weak on average (Kulczynski's coefficient in the range 0.3-0.4), but varies widely among individual species. The dominant pattern is an asymmetrical transferability between the two study regions (a mean decrease of 20% in the AUC coefficient when the models are transferred from Switzerland and 13% when they are transferred from Austria). Main conclusions The large inter-specific variability observed among the 54 study species underlines the need to consider more than a few species to properly test the transferability of species distribution models. The pronounced asymmetry in transferability between the two study regions may be due to peculiarities of these regions, such as differences in the ranges of environmental predictors or the varied impact of land-use history, or to species-specific reasons such as differential phenotypic plasticity, the existence of ecotypes or a varied dependence on biotic interactions that are not properly incorporated into niche-based models. The lower variation between internal and external evaluation of GLMs compared to GAMs further suggests that overfitting may reduce transferability. Overall, limited geographical transferability calls for caution when projecting niche-based models to assess the fate of species in future environments.
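As an illustration of the internal/external evaluation scheme (not the authors' code), the sketch below fits a binomial model with a logit link in one region and reports the AUC both on that region and on the second, independent region; a large drop between the two is the transferability failure discussed above. The logistic regression stands in for the paper's GLM, and the variable names are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def internal_external_auc(X_ch, y_ch, X_at, y_at):
    """Fit in region A (e.g. Switzerland), evaluate in A and in region B."""
    model = LogisticRegression(max_iter=1000).fit(X_ch, y_ch)
    auc_internal = roc_auc_score(y_ch, model.predict_proba(X_ch)[:, 1])
    auc_external = roc_auc_score(y_at, model.predict_proba(X_at)[:, 1])
    return auc_internal, auc_external
```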
Abstract:
A prominent categorization of Indian classical music is the Hindustani and Carnatic traditions, the two styles having evolved under distinctly different historical and cultural influences. Both styles are grounded in the melodic and rhythmic framework of raga and tala. The styles differ along dimensions such as instrumentation, aesthetics and voice production. In particular, Carnatic music is perceived as being more ornamented. The hypothesis that style distinctions are embedded in the melodic contour is validated via subjective classification tests. Melodic features representing the distinctive characteristics are extracted from the audio. Previous work based on the extent of stable pitch regions is supported by measurements of musicians' annotations of stable notes. Further, a new feature is introduced that captures the presence of specific pitch modulations characteristic of ornamentation in Indian classical music. The combined features show high classification accuracy on a database of vocal music of prominent artistes. The misclassifications are seen to match actual listener confusions.
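As a hedged illustration of the "stable pitch region" idea (our simplification, not the authors' feature extractor), the sketch below marks frames whose pitch stays within a small band around the local median for a minimum duration and returns the stable fraction of the contour; the thresholds are placeholders.

```python
import numpy as np

def stable_pitch_fraction(pitch_cents, frame_s=0.01,
                          max_dev_cents=25.0, min_dur_s=0.1):
    """Fraction of frames lying in 'stable note' regions of a pitch contour.

    A frame counts as stable if pitch stays within +/- max_dev_cents of the
    local median for at least min_dur_s. Thresholds are illustrative only.
    """
    pitch = np.asarray(pitch_cents, dtype=float)
    win = max(1, int(round(min_dur_s / frame_s)))
    stable = np.zeros(pitch.size, dtype=bool)
    for start in range(0, pitch.size - win + 1):
        seg = pitch[start:start + win]
        if np.all(np.abs(seg - np.median(seg)) <= max_dev_cents):
            stable[start:start + win] = True
    return stable.mean() if pitch.size else 0.0
```

Under the hypothesis above, a more ornamented Carnatic performance would be expected to yield a lower stable fraction than a Hindustani one.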
Abstract:
The cytotoxic T-cell and natural killer (NK)-cell lymphomas and related disorders are important but relatively rare lymphoid neoplasms that frequently pose a challenge for practicing pathologists. This selective review, based on a meeting of the International Lymphoma Study Group, briefly reviews T-cell and NK-cell development and addresses questions related to the importance of precise cell lineage (αβ-type T cell, γδ T cell, or NK cell), the implications of Epstein-Barr virus infection, the significance of anatomic location including nodal disease, and the question of further categorization of enteropathy-associated T-cell lymphomas. Finally, developments subsequent to the 2008 World Health Organization Classification, including the recognition of indolent NK-cell and T-cell disorders of the gastrointestinal tract, are presented.
Abstract:
Introduction: Biological therapy has dramatically changed the management of Crohn's disease (CD). New data have confirmed the benefit and relative long-term safety of anti-TNF alpha inhibition as part of a regularly scheduled administration programme. The EPACT appropriateness criteria for maintenance treatment after medically-induced remission (MIR) or surgically-induced remission (SIR) of CD thus required updating. Methods: A multidisciplinary international expert panel (EPACT II, Geneva, Switzerland) discussed and anonymously rated detailed, explicit clinical indications based on evidence in the literature and personal expertise. Median ratings (on a 9-point scale) were stratified into three assessment categories: appropriate (7-9), uncertain (4-6 and/or disagreement) and inappropriate (1-3). Experts ranked appropriate medication according to their own clinical practice, without any consideration of cost. Results: Three hundred and ninety-two specific indications for maintenance treatment of CD were rated (200 for MIR and 192 for SIR). Azathioprine, methotrexate and/or anti-TNF alpha antibodies were considered appropriate in 42 indications, corresponding to 68% of all appropriate interventions (97% of MIR and 39% of SIR). The remaining appropriate interventions consisted of mesalazine and a "wait-and-see" strategy. Factors that influenced the panel's voting were patient characteristics and the outcome of previous treatment. The results favour use of anti-TNF alpha agents after failure of any immunosuppressive therapy, while earlier primary use remains controversial. Conclusion: Detailed explicit appropriateness criteria (EPACT) have been updated for maintenance treatment of CD. New expert recommendations for use of the classic immunosuppressors as well as anti-TNF alpha agents are now freely available online (www.epact.ch). The validity of these criteria should now be tested by prospective evaluation.
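The stratification rule quoted above is simple enough to state as code. This is an illustrative paraphrase only: the disagreement test used by RAND-style panels has a specific definition based on the spread of the votes, which the placeholder below does not reproduce.

```python
import statistics

def epact_category(ratings):
    """Map a panel's 9-point ratings onto the three assessment categories."""
    med = statistics.median(ratings)
    # Placeholder disagreement rule; RAND panels define disagreement from
    # the distribution of votes across the 1-3 / 7-9 extremes.
    disagreement = min(ratings) <= 3 and max(ratings) >= 7
    if disagreement:
        return "uncertain"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"  # median 4-6

print(epact_category([8, 7, 9, 8, 7, 8, 9, 8, 7]))  # -> appropriate
```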
Abstract:
BACKGROUND: In Switzerland, 30% of HIV-infected individuals are diagnosed late. To optimize HIV testing, the Swiss Federal Office of Public Health (FOPH) updated its 'Provider Induced Counseling and Testing' (PICT) recommendations in 2010. These permit doctors to test patients if HIV infection is suspected, without explicit consent or pre-test counseling; patients should nonetheless be informed that testing will be performed. We examined awareness of these updated recommendations among emergency department (ED) doctors. METHODS: We conducted a questionnaire-based survey among 167 ED doctors at five teaching hospitals in French-speaking Switzerland between 1st May and 31st July 2011. For 25 clinical scenarios, participants had to state whether HIV testing was indicated or whether patient consent or pre-test counseling was required. We asked how many HIV tests participants had requested in the previous month, and whether they were aware of the FOPH testing recommendations. RESULTS: 144/167 doctors (88%) returned the questionnaire. Median postgraduate experience was 6.5 years (interquartile range [IQR] 3; 12). The mean percentage of correct answers was 59 ± 11%, with senior doctors scoring higher (P=0.001). The lowest-scoring questions pertained to acute HIV infection and to scenarios where patient consent was not required. The median number of test requests was 1 (IQR 0-2, range 0-10). Only 26/144 (18%) of participants were aware of the updated FOPH recommendations. Those aware had higher scores (P=0.001) but did not perform more HIV tests. CONCLUSIONS: Swiss ED doctors are not aware of the national HIV testing recommendations and rarely perform HIV tests. Improved dissemination of, and adherence to, the recommendations are required if ED doctors are to contribute to earlier HIV diagnoses.
Abstract:
INTRODUCTION: urinary incontinence (UI) is a phenomenon with high prevalence in hospitalized elderly patients, affecting up to 70% of patients requiring long-term care. However, despite the discomfort it causes and its association with functional decline, it seems to be given insufficient attention by nurses in geriatric care. OBJECTIVES: to assess the prevalence of urinary incontinence in geriatric patients at admission and the level of nurse involvement, as characterized by the explicit documentation of a UI diagnosis in the patient's record, the prescription of a nursing intervention, or nursing actions related to UI. METHODS: cross-sectional retrospective chart review. One hundred cases were randomly selected from those patients 65 years or older admitted to the geriatric ward of a university hospital. The variables examined included: total and continence scores on the Measure of Functional Independence (MIF), socio-demographic variables, presence of a nursing diagnosis in the medical record, and prescription or documentation of a nursing intervention related to UI. RESULTS: the prevalence of urinary incontinence was 72% and UI was positively correlated with a low MIF score, age and the status of awaiting placement. Of the examined cases, a nursing diagnosis of UI was documented in only 1.4% of cases, nursing interventions were prescribed in 54% of cases, and at least one nursing intervention was performed in 72% of cases. The vast majority of the interventions were palliative. DISCUSSION: the results on the prevalence of UI are similar to those reported in several other studies. This is also the case in relation to nursing interventions. In this study, people with UI were given the same care regardless of their MIF score, age or gender. One limitation of this study is that it is retrospective and therefore dependent on the quality of the nursing documentation. CONCLUSIONS: this study is novel because it examines UI in relation to nursing interventions. It demonstrates that despite a high prevalence of UI, the general level of concern among nurses remains relatively low. Individualized care is desirable and clinical innovations must be developed for primary and secondary prevention of UI during hospitalization.
Abstract:
Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.
Abstract:
The purpose of this article is to address a currently much debated issue: the effects of age on second language learning. To do so, we contrast data collected by our research team from over one thousand seven hundred young and adult learners with four popular beliefs or generalizations which, while deeply rooted in this society, are not always corroborated by our data. Two of these generalizations about Second Language Acquisition (languages spoken in the social context) seem to be widely accepted: a) older children, adolescents and adults are quicker and more efficient in the first stages of learning than are younger learners; b) in a natural context, children with an early start are more likely to attain higher levels of proficiency. However, in the context of Foreign Language Acquisition, the context in which we collect our data, this second generalization is difficult to verify due to the low number of instructional hours (a maximum of some 800 hours) and the lower levels of language exposure time provided. The design of our research project has allowed us to study differences observed with respect to the age of onset (ranging from 2 to 18+), but in this article we focus on students who began English instruction at the age of 8 (LOGSE Educational System) and those who began at the age of 11 (EGB). We have collected data from both groups after a period of 200 (Time 1) and 416 instructional hours (Time 2), and we are currently collecting data after a period of 726 instructional hours (Time 3). We have designed and administered a variety of tests: tests of English production and reception, both oral and written, within both academic and communicative oriented approaches, and tests of the learners' L1 (Spanish and Catalan), as well as a questionnaire eliciting personal and sociolinguistic information. The questions we address and the relevant empirical evidence are as follows: 1. "For young children, learning languages is a game. They enjoy it more than adults." Our data demonstrate that the situation is not quite so. Firstly, at the levels of both Primary and Secondary education, students have a positive attitude towards learning English (ranging from 70.5% in 11-year-olds to 89% in 14-year-olds). Secondly, there is a difference between the two groups with respect to the factors they cite as responsible for their motivation to learn English: the younger students cite intrinsic factors, such as the games they play, the methodology used and the teacher, whereas the older students cite extrinsic factors, such as the role of their knowledge of English in the achievement of their future professional goals. 2. "Young children have more resources to learn languages." Here our data suggest just the opposite. The ability to employ learning strategies (actions or steps used) increases with age. Older learners' strategies are more varied and cognitively more complex. In contrast, younger learners depend more on their interlocutor and on external resources, and therefore have a lower level of autonomy in their learning. 3. "Young children don't talk much but understand a lot." This third generalization does seem to be confirmed, at least to a certain extent, by our data on differences due to the age factor in productive use of the target language. As seen above, the comparatively slower progress of the younger learners is confirmed. Our analysis of interpersonal receptive abilities also demonstrates the advantage of the older learners.
Nevertheless, with respect to passive receptive activities (for example, simple recognition of words or sentences), no great differences are observed. Statistical analyses suggest that in this test, in contrast to the others analyzed, the dominance of the subjects' L1s (reflecting a cognitive capacity that grows with age) has no significant influence on the learning process. 4. "The sooner they begin, the better their results will be in written language." This is not completely confirmed in our research either. First of all, we observe that certain compensatory strategies disappear only with age, not with the number of instructional hours. Secondly, given an identical number of instructional hours, the older subjects obtain better results. With respect to our analysis of data from subjects of the same age (12 years old) but with a different number of instructional hours (200 and 416 respectively, as they began at the ages of 11 and 8), we observe that those who began earlier excel only in the area of lexical fluency. In conclusion, the faster progress of the older learners appears to be due to their higher level of cognitive development, a factor which allows them to benefit more from formal or explicit instruction in the school context. Younger learners, in turn, do not receive the quantity and quality of linguistic exposure typical of a natural acquisition context, in which they would be able to make use of implicit learning abilities. It seems clear, then, that the initiative in this country to begin foreign language instruction earlier will have positive effects only if it is combined either with higher levels of exposure time to the foreign language or, alternatively, with its use as the language of instruction in other areas of the curriculum.
Abstract:
Viruses are known to tolerate wide ranges of pH and salt conditions and to withstand internal pressures as high as 100 atmospheres. In this paper we investigate the mechanical properties of viral capsids, calling explicit attention to the inhomogeneity of the shells that is inherent to their discrete and polyhedral nature. We calculate the distribution of stress in these capsids and analyze their response to isotropic internal pressure (arising, for instance, from genome confinement and/or osmotic activity). We compare our results with appropriate generalizations of classical (i.e., continuum) elasticity theory. We also examine competing mechanisms for viral shell failure, e.g., in-plane crack formation vs radial bursting. The biological consequences of the special stabilities and stress distributions of viral capsids are also discussed.
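The continuum baseline invoked above can be made concrete with the classical thin-shell estimate; the notation and the illustrative numbers below are ours, not the paper's. For a homogeneous spherical shell of radius R and thickness h under internal pressure p, classical elasticity gives a uniform in-plane (Laplace) stress:

```latex
% Classical thin-shell baseline (our notation, illustrative numbers).
\begin{equation}
  \sigma \;=\; \frac{pR}{2h} .
\end{equation}
% E.g. p \approx 100\,\mathrm{atm} \approx 10^{7}\,\mathrm{Pa},
% R \approx 20\,\mathrm{nm}, h \approx 2\,\mathrm{nm}
% gives \sigma \approx 5 \times 10^{7}\,\mathrm{Pa}.
```

The discrete, polyhedral shells studied in the paper deviate from this uniform value, which is precisely the inhomogeneity that the stress-distribution calculation quantifies.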