764 results for "Consensus algorithm"
Abstract:
To complement the existing treatment guidelines for all tumour types, ESMO organises consensus conferences to focus on specific issues in each type of tumour. The 2nd ESMO Consensus Conference on Lung Cancer was held on 11-12 May 2013 in Lugano. A total of 35 experts met to address several questions on non-small-cell lung cancer (NSCLC) in each of four areas: pathology and molecular biomarkers, first-line/second and further lines of treatment in advanced disease, early-stage disease, and locally advanced disease. For each question, recommendations were made, including reference to the grade of recommendation and level of evidence. This consensus paper focuses on first-line/second and further lines of treatment in advanced disease.
Abstract:
Introduction: New evidence from randomized controlled trials and etiology-of-fever studies, the availability of reliable RDTs for malaria, and novel technologies call for a revision of the IMCI strategy. We developed a new algorithm based on (i) a systematic review of published studies assessing the safety and appropriateness of RDT and antibiotic prescription, (ii) results from a clinical and microbiological investigation of febrile children aged <5 years, and (iii) international IMCI expert opinion. The aim of this study was to assess the safety of the new algorithm among patients in urban and rural areas of Tanzania. Materials and Methods: The design was a controlled noninferiority study. Enrolled children aged 2-59 months with any illness were managed either by a study clinician using the new Almanach algorithm (two intervention health facilities) or by clinicians using standard practice, including RDT (two control health facilities). At day 7 and day 14, all patients were reassessed. Patients who were ill in between or not cured at day 14 were followed until recovery or death. The primary outcome was the rate of complications; the secondary outcome, the rate of antibiotic prescriptions. Results: 1062 children were recruited. The main diagnoses were URTI (26%), pneumonia (19%) and gastroenteritis (9.4%). 98% (531/541) were cured at day 14 in the Almanach arm and 99.6% (519/521) in controls. The rate of secondary hospitalization was 0.2% in each arm. One death occurred in the controls. None of the complications was due to withdrawal of antibiotics or antimalarials at day 0. The rate of antibiotic use was 19% in the Almanach arm and 84% in controls. Conclusion: Evidence suggests that the new algorithm, primarily aimed at the rational use of drugs, is as safe as standard practice and leads to a drastic reduction in antibiotic use. The Almanach is currently being tested for clinician adherence to the proposed procedures when used on paper or on a mobile phone.
Abstract:
While a popular vote supported a new article on complementary and alternative medicines (CAM) in the Swiss Constitution, this assessment in 14 wards of the University Hospital of Lausanne, Switzerland, attempted to answer the question: how can CAM use be better taken into account and patients informed with more rigour and respect for their choices? When confronted with a review of the literature (> 2000 publications in evidence-based complementary medicine since 1998), respondents declared their ignorance of the clinical data currently available on CAM. All were in favour of more teaching and information on the subject, plus an official statement from the hospital management ensuring the production and diffusion of rigorous and clinically significant information on CAM.
Abstract:
This paper presents a Genetic Algorithm (GA) for the problem of sequencing in a mixed-model non-permutation flowshop. Resequencing is permitted where stations have access to intermittent or centralized resequencing buffers. Access to a buffer is restricted by the number of available buffer places and by the physical size of the products.
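As a hedged illustration of the kind of permutation GA the abstract describes, the sketch below evolves a job sequence with order crossover and swap mutation. The toy cost function (counting model changeovers between adjacent jobs) and the job data are assumptions for the example, not the paper's mixed-model objective or buffer constraints.

```python
import random

# Toy data: model type of each of 8 jobs (assumed for illustration).
MODELS = [0, 0, 1, 2, 1, 0, 2, 2]

def cost(seq):
    # Count model changeovers between adjacent positions in the sequence.
    return sum(1 for a, b in zip(seq, seq[1:]) if MODELS[a] != MODELS[b])

def crossover(p1, p2):
    # Order crossover (OX): keep a slice of p1, fill the rest in p2's order.
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = p1[i:j]
    rest = [g for g in p2 if g not in hole]
    return rest[:i] + hole + rest[i:]

def mutate(seq):
    # Swap mutation: stands in for one resequencing move via a buffer.
    i, j = random.sample(range(len(seq)), 2)
    seq = list(seq)
    seq[i], seq[j] = seq[j], seq[i]
    return seq

def ga(pop_size=30, generations=100, seed=1):
    random.seed(seed)
    jobs = list(range(len(MODELS)))
    pop = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                 # elitist selection
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```

With these settings the GA groups jobs of the same model into blocks, minimizing changeovers; a real mixed-model objective would also penalize infeasible resequencing moves.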
Abstract:
The decision-making process regarding drug dose, used routinely in everyday medical practice, is critical to patients' health and recovery. It is a challenging process, especially for a drug with a narrow therapeutic range, in which a medical doctor decides the quantity (dose amount) and frequency (dose interval) on the basis of a set of available patient features and the doctor's clinical experience (a priori adaptation). Computer support in drug dose administration makes the prescription procedure faster, more accurate, more objective, and less expensive, with a tendency to reduce the number of invasive procedures. This paper presents an advanced integrated Drug Administration Decision Support System (DADSS) to help clinicians/patients with dose computation. Based on a support vector machine (SVM) algorithm enhanced with the random sample consensus (RANSAC) technique, the system predicts drug concentration values and computes the ideal dose amount and dose interval for a new patient. With an extension that combines the SVM method with an explicit analytical model, the advanced integrated DADSS can compute drug concentration-time curves for a patient under different conditions. A feedback loop updates the curve with each newly measured concentration value to make it more personalized (a posteriori adaptation).
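The random sample consensus idea the abstract mentions, fitting a model to repeated random subsamples and keeping the largest consensus set of inliers, can be sketched as follows. For brevity a least-squares line stands in for the paper's SVM regressor, and the dose/concentration data are synthetic; none of this reproduces the DADSS implementation.

```python
import random

# Synthetic data: concentration ~ 2*dose + 1 with small noise, plus two
# gross outliers (all values assumed for illustration only).
random.seed(0)
data = [(d, 2.0 * d + 1.0 + random.gauss(0, 0.1)) for d in range(20)]
data += [(5, 40.0), (12, -30.0)]  # outliers

def fit_line(points):
    # Ordinary least squares for y = a*x + b.
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def ransac(points, n_iter=100, n_sample=3, tol=0.5):
    # Fit to random subsamples; keep the largest consensus (inlier) set.
    best_inliers = []
    for _ in range(n_iter):
        a, b = fit_line(random.sample(points, n_sample))
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the consensus set so outliers no longer bias the model.
    return fit_line(best_inliers), best_inliers

(a, b), inliers = ransac(data)
print(round(a, 1), round(b, 1), len(inliers))
```

The refit on the consensus set recovers the underlying slope and intercept despite the two outliers, which is the property that makes RANSAC attractive for noisy clinical measurements.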
Abstract:
In October 2011, the Task Force Therapeutic Drug Monitoring of the Association for Neuropsychopharmacology and Pharmacopsychiatry (AGNP) published an update (Pharmacopsychiatry 2011, 44: 195-235) of the first version of the consensus paper on therapeutic drug monitoring (TDM) published in 2004. This article summarizes the essential statements to make them accessible to a wider readership in German-speaking countries.
Abstract:
Numerous clinical practice guidelines (CPGs) have been published in response to the development of evidence-based medicine and as a solution to the difficulty of sorting and synthesizing the abundant medical literature. To choose among the profusion of new CPGs, it is essential to assess their quality. Recently, the first standardized instrument for evaluating the quality of CPGs, called "AGREE" (Appraisal of Guidelines for Research and Evaluation), was validated. Using the AGREE instrument, we compared the six main CPGs on the treatment of schizophrenia published over the past decade: (1) the recommendations of the Agence nationale pour le développement de l'évaluation médicale (ANDEM); (2) the American Psychiatric Association (APA) practice guideline for the treatment of patients with schizophrenia; (3) the quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia; (4) the Schizophrenia Patient Outcomes Research Team (PORT) treatment recommendations; (5) the Texas Medication Algorithm Project (T-MAP); (6) the Expert Consensus Guideline for the treatment of schizophrenia. Our results were then compared with those of a similar study published in 2005 by Gæbel et al., which assessed 24 CPGs on the treatment of schizophrenia, also using the AGREE instrument and two raters [Br J Psychiatry 187 (2005) 248-255]. Overall, the scores of the two studies are fairly close and their global assessments converge: each of the six CPGs can be improved, and each shows different strengths and weaknesses.
The rigour of development of the six CPGs is on the whole mediocre, the views of potential users are poorly taken into account, and an effort on the presentation of the recommendations would facilitate their clinical use. The applicability of the recommendations is also given little consideration by the authors. Overall, two CPGs stand out and can be strongly recommended according to the AGREE criteria: the APA quick reference guide and the T-MAP.
Abstract:
AIM: The study aimed to analyse the currently available national and international guidelines for areas of consensus and contrasting recommendations in the treatment of diverticulitis, and thereby to design questions for future research. METHOD: MEDLINE, EMBASE and PubMed were systematically searched for guidelines on diverticular disease and diverticulitis. Inclusion was confined to papers in English published within the past 10 years. The included topics were classified as consensus or controversy between guidelines, and the highest level of evidence was scored as sufficient (Oxford Centre for Evidence-Based Medicine Level of Evidence of 3a or higher) or insufficient. RESULTS: Six guidelines were included and all topics with recommendations were compared. Overall, consensus was reached in 13 topics and 10 topics were regarded as controversial. In five topics, consensus was reached without sufficient evidence, and in three topics there was no evidence and no consensus. Clinical staging, the need for intraluminal imaging, dietary restriction, duration of antibiotic treatment, the protocol for abscess treatment, the need for elective surgery in subgroups of patients, the need for surgery after abscess treatment and the level of the proximal resection margin all lack consensus or evidence. CONCLUSION: Evidence on the diagnosis and treatment of diverticular disease and diverticulitis ranged from nonexistent to strong, regardless of consensus. The most relevant research questions were identified and proposed as topics for future research.
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of oneself and of others as members of social categories, and through the attribution of the prototypical characteristics of those categories to individuals. It is thus a theory of the individual that is meant to explain collective phenomena. Situations in which a large number of individuals interact non-trivially typically generate complex collective behaviours that are difficult to predict from individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of collective behaviour as a function of individual specifications. In this thesis, we present a formal model of a part of self-categorization theory called the metacontrast principle. Given the distribution of a set of individuals along one or more comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and can replicate experimental data on various group phenomena, such as polarization.
Moreover, it systematically describes the predictions of the theory from which it is derived, notably in previously unstudied situations. At the collective level, several dynamics can be observed, including convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, apart from the convergence speed, which increases as the mean distances on the network decrease, the types of convergence depend little on the chosen network. We further note that individuals located at the border of groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. The model can also be used as an automatic classification algorithm: it identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the typical characteristics of the groups and are not necessarily central. Finally, if the pixels of an image are treated as individuals in a three-dimensional colour space, the model provides a filter that can attenuate noise, assist object detection, and simulate perceptual biases such as chromatic induction.
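The metacontrast idea can be sketched with a toy calculation: for each candidate split of individuals on one comparison dimension, compute the ratio of mean inter-group distance to mean intra-group distance and keep the split that maximizes it. The data and the restriction to two contiguous groups are assumptions for the example, not the thesis' full model.

```python
from itertools import combinations

# Positions of six individuals on one comparison dimension (assumed data).
xs = sorted([0.1, 0.3, 0.35, 0.9, 1.0, 1.1])

def mean(v):
    return sum(v) / len(v)

def metacontrast_ratio(group, others):
    # Mean distance to outsiders divided by mean distance within the group.
    inter = mean([abs(a - b) for a in group for b in others])
    intra = mean([abs(a - b) for a, b in combinations(group, 2)])
    return inter / intra

# Evaluate every contiguous two-way split (each side needs >= 2 members)
# and keep the split with the highest total metacontrast.
best = max(range(2, len(xs) - 1),
           key=lambda k: metacontrast_ratio(xs[:k], xs[k:]) +
                         metacontrast_ratio(xs[k:], xs[:k]))
print(xs[:best], xs[best:])
```

The split falls between the two visually obvious clusters; the full model generalizes this to several dimensions, more than two categories, and the derivation of prototypes.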
Abstract:
An adaptation of Kumar's algorithm for solving systems of equations with Toeplitz matrices over the reals to finite fields, in O(n log n) time.
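Kumar's O(n log n) solver itself is beyond a short example, but the two ingredients it rests on can be sketched: a Toeplitz matrix is fully determined by its first column and first row, and linear algebra over a finite field GF(p) replaces division by multiplication with Fermat inverses. The cubic Gaussian elimination below is only a reference check under these assumptions, not the fast algorithm, and the field and matrix entries are chosen for illustration.

```python
P = 97  # a small prime, so GF(97); illustrative, not tied to the thesis

def toeplitz(col, row):
    # A Toeplitz matrix is constant along each diagonal, so it is fully
    # determined by its first column and first row (2n - 1 entries).
    n = len(col)
    return [[col[i - j] % P if i >= j else row[j - i] % P for j in range(n)]
            for i in range(n)]

def matvec(A, x):
    return [sum(a * v for a, v in zip(r, x)) % P for r in A]

def solve_mod(A, b):
    # Plain O(n^3) Gauss-Jordan elimination over GF(P), used here only to
    # verify a solution; a Kumar-style solver would exploit the Toeplitz
    # structure to reach O(n log n).
    n = len(A)
    M = [r[:] + [v] for r, v in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c])
        M[c], M[p] = M[p], M[c]
        inv = pow(M[c][c], P - 2, P)  # Fermat inverse in GF(P)
        M[c] = [m * inv % P for m in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(m - f * mc) % P for m, mc in zip(M[r], M[c])]
    return [r[-1] for r in M]

A = toeplitz([5, 2, 7, 1], [5, 3, 4, 6])
x = solve_mod(A, [1, 2, 3, 4])
print(matvec(A, x))  # recovers the right-hand side [1, 2, 3, 4]
```

Because Toeplitz matrix-vector products are convolutions, they can be computed with FFT-style transforms over the field, which is what makes subquadratic solvers possible.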
Abstract:
The main motivation of this work was to implement the Rijndael-AES algorithm in a SageMath worksheet, taking advantage of the built-in tools and functionality of this freely distributed and actively developed mathematical software package.
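One building block of Rijndael-AES, the SubBytes S-box (multiplicative inversion in GF(2^8) followed by an affine transformation), can be sketched in plain Python; this is an illustrative reimplementation of a standard component, not the Sage worksheet from the work.

```python
def gmul(a, b):
    # Multiplication in GF(2^8) with the AES modulus x^8 + x^4 + x^3 + x + 1.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def ginv(a):
    # Brute-force multiplicative inverse; AES defines inv(0) = 0.
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gmul(a, x) == 1)

def rotl8(x, n):
    # Rotate an 8-bit value left by n positions.
    return ((x << n) | (x >> (8 - n))) & 0xFF

def sbox(a):
    # SubBytes: field inversion followed by the AES affine transformation.
    b = ginv(a)
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63

print(hex(sbox(0x00)), hex(sbox(0x53)))  # 0x63 0xed
```

The two printed values match the published AES S-box table; a Sage implementation would instead build GF(2^8) directly with the finite-field machinery the abstract alludes to.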
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm that is most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its applications. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the centres, widths, and weights of the basis functions as possible variables, and with the control parameters either kept fixed or adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based approaches to training radial basis function networks were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and that the best setting is problem-dependent. The fuzzy adaptive differential evolution algorithm relieves the user of parameter setting and performs better than the variants using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
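A minimal sketch of the classic DE/rand/1/bin scheme such a thesis builds on, with the control parameters F and CR kept fixed (the fuzzy-adaptive adjustment is not reproduced here); the sphere test function and all settings are assumptions for illustration.

```python
import random

def sphere(x):
    # Standard test function: minimum 0 at the origin.
    return sum(v * v for v in x)

def de(f, dim=5, np_=20, F=0.5, CR=0.9, gens=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            # DE/rand/1: three distinct donors, none equal to the target.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if rng.random() < CR or j == jrand else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = de(sphere)
print(sphere(best))
```

Changing F and CR noticeably alters convergence on even this simple function, which is the sensitivity the abstract reports and the motivation for adapting the parameters with a fuzzy controller.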