894 results for "False concepts"
Abstract:
Recent studies have demonstrated that the improved prognosis derived from resection of gliomas largely depends on the extent and quality of the resection, making maximum but safe resection the ultimate goal. Simultaneously, technical innovations and refined neurosurgical methods have rapidly improved efficacy and safety. Because gliomas derive from intrinsic brain cells, they often cannot be visually distinguished from the surrounding brain tissue during surgery. Various technologies have therefore recently been introduced to help surgeons appreciate the full extent of their solid compartment. However, radical resection of infiltrative glioma puts neurological function at risk, with potential detrimental consequences for patients' survival and quality of life. The allocation of various neurological functions within the brain varies between patients and may undergo additional changes in the presence of a tumour (brain plasticity), making intra-operative localisation of eloquent areas mandatory for the preservation of essential brain functions. Combining methods that visually distinguish tumour tissue with methods that detect tissues responsible for critical functions now enables resection of tumours in brain regions that were previously considered off-limits; this benefits patients by enabling a more radical resection while simultaneously lowering the risk of neurological deficits. Here we review recent and expected developments in microsurgery for glioma and their respective benefits.
Abstract:
Chronic infection and inflammation are defining characteristics of cystic fibrosis (CF) airway disease. Conditions within the airways of patients living with CF are conducive to colonisation by a variety of opportunistic bacterial, viral and fungal pathogens. Improved molecular identification of microorganisms has begun to emphasise the polymicrobial nature of infections in the CF airway microenvironment. Changes to CF airway physiology through loss of cystic fibrosis transmembrane conductance regulator functionality result in a wide range of immune dysfunctions, which permit pathogen colonisation and persistence. This review will summarise the current understanding of how CF pathogens infect, interact with and evade the CF host.
Abstract:
Complexity has long been recognized and is increasingly becoming mainstream in geomorphology. However, the relative novelty of the various concepts and techniques associated with it means that ambiguity continues to surround complexity. In this commentary, we present and discuss a variety of recent contributions that have the potential to help clarify issues and advance the use of complexity in geomorphology.
Abstract:
The new computing paradigm known as cognitive computing attempts to imitate the human capabilities of learning, problem solving, and considering things in context. To do so, an application (a cognitive system) must learn from its environment (e.g., by interacting with various interfaces). These interfaces can run the gamut from sensors to humans to databases. Accessing data through such interfaces allows the system to conduct cognitive tasks that can support humans in decision-making or problem-solving processes. Cognitive systems can be integrated into various domains (e.g., medicine or insurance). For example, a cognitive system in a city can collect data, learn from various data sources, and then attempt to connect these sources to provide real-time optimizations of subsystems within the city (e.g., the transportation system). In this study, we provide a methodology for integrating a cognitive system that allows data to be verbalized, making the causalities and hypotheses generated by the cognitive system more understandable to humans. We abstract a city subsystem—passenger flow for a taxi company—by applying fuzzy cognitive maps (FCMs). FCMs are a mathematical tool for modeling complex systems as directed graphs, with concepts (e.g., policies, events, and/or domains) as nodes and causalities as edges. As a verbalization technique we introduce the restriction-centered theory of reasoning (RCT). RCT addresses the imprecision inherent in language by introducing restrictions. Using this underlying combinatorial design, our approach can handle large data sets from complex systems and make the output understandable to humans.
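To make the FCM formalism concrete, here is a minimal sketch of an FCM fixed-point iteration. The three concepts, the weight matrix, and the sigmoid update rule are illustrative assumptions following common FCM conventions, not the study's actual taxi passenger-flow model.

```python
# Minimal fuzzy-cognitive-map sketch (hypothetical concepts and weights;
# the paper's actual taxi passenger-flow model is not reproduced here).
import numpy as np

def fcm_step(activations: np.ndarray, weights: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """One FCM update: each concept aggregates its weighted causal inputs,
    then is squashed to [0, 1] with a sigmoid (a common choice)."""
    raw = activations + activations @ weights   # self-memory plus causal influences
    return 1.0 / (1.0 + np.exp(-lam * raw))

# Hypothetical city subsystem: taxi demand, traffic congestion, waiting time.
concepts = ["taxi_demand", "congestion", "waiting_time"]
W = np.array([
    [0.0,  0.6,  0.4],   # more demand -> more congestion, longer waits
    [0.0,  0.0,  0.7],   # more congestion -> longer waits
    [-0.5, 0.0,  0.0],   # long waits -> suppressed demand
])

A = np.array([0.8, 0.2, 0.1])   # initial fuzzy activations in [0, 1]
for _ in range(50):             # iterate until a (near) fixed point
    A_next = fcm_step(A, W)
    if np.max(np.abs(A_next - A)) < 1e-6:
        break
    A = A_next

print(dict(zip(concepts, A.round(3))))
```

The converged activations are the kind of output that a verbalization layer such as RCT would then translate into restricted natural-language statements.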
Abstract:
Pemphigus vulgaris (PV) and pemphigus foliaceus (PF) are two severe autoimmune bullous diseases of the mucosae and/or skin associated with autoantibodies directed against desmoglein (Dsg) 3 and/or Dsg1. These two desmosomal cadherins, typifying stratified epithelia, are components of cell adhesion complexes called desmosomes and also represent extra-desmosomal adhesion receptors. We herein review the advances in our understanding of the immune response underlying pemphigus, including human leucocyte antigen (HLA) class II-associated genetic susceptibility, characteristics of pathogenic anti-Dsg antibodies, antigenic mapping studies, as well as findings about Dsg-specific B and T cells. The pathogenicity of anti-Dsg autoantibodies has been convincingly demonstrated. Disease activity and clinical phenotype correlate with anti-Dsg antibody titers and profile, and passive transfer of anti-Dsg IgG from pemphigus patients results in pemphigus-like lesions in neonatal and adult mice. Finally, adoptive transfer of splenocytes from Dsg3-knockout mice immunized with murine Dsg3 into immunodeficient mice phenotypically recapitulates PV. Although the exact pathogenic mechanisms leading to blister formation have not been fully elucidated, intracellular signaling following antibody binding has been found to be necessary for inducing cell-cell dissociation, at least in PV. These new insights not only highlight the key role of Dsgs in the maintenance of tissue homeostasis but are also expected to progressively change pemphigus management, paving the way for novel targeted immunologic and pharmacologic therapies.
Abstract:
Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, especially when medical errors surface. This is particularly true in the emergency department setting, where patients face delayed or erroneous initial diagnostic or therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise diseases, to simplify complex clinical algorithms, and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false positive or false negative results generating unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond the observational reporting and meta-analysis of diagnostic accuracies for biomarkers; instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and the allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department. The following six biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in infections of the respiratory tract and sepsis. For these markers we provide an overview of pathophysiology, the historical evolution of evidence, and strengths and limitations for rational implementation into clinical algorithms. We critically discuss results from key intervention trials that led to their use in clinical routine, as well as potential future indications. The rationale for using all these biomarkers is to tackle, first, diagnostic ambiguity and the resulting defensive medicine; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to a more targeted management of medical patients from emergency admission to discharge, including biomarkers, will translate into better resource use, shorter length of hospital stay, reduced overall costs, and improved patient satisfaction and outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help readers to improve their diagnostic skills and become more parsimonious requesters of laboratory tests.
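As an illustration of how a biomarker cut-off range can be embedded in a clinical algorithm, the sketch below encodes procalcitonin thresholds of the kind used in published antibiotic-stewardship trials for respiratory tract infections. The exact cut-offs (0.1, 0.25, 0.5 µg/L) are illustrative of commonly cited values, not a prescriptive rule taken from this review.

```python
# Illustrative sketch of a procalcitonin (PCT)-guided antibiotic-stewardship
# rule. The thresholds below mirror cut-off ranges commonly cited in
# stewardship trials for respiratory tract infections; they are examples,
# not a substitute for the clinical algorithms discussed in the review.
def pct_recommendation(pct_ug_per_l: float) -> str:
    if pct_ug_per_l < 0.1:
        return "antibiotics strongly discouraged"
    if pct_ug_per_l < 0.25:
        return "antibiotics discouraged"
    if pct_ug_per_l < 0.5:
        return "antibiotics encouraged"
    return "antibiotics strongly encouraged"

for value in (0.05, 0.2, 0.3, 1.2):
    print(f"PCT {value} ug/L -> {pct_recommendation(value)}")
```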
Abstract:
The clinical and demographic characteristics of patients undergoing TAVI pose unique challenges for developing and implementing optimal antithrombotic therapy. Ischaemic and bleeding events in the periprocedural period and in the months after TAVI remain a relevant concern that optimised antithrombotic therapy must address. Moreover, the antiplatelet and anticoagulant pharmacopeia has evolved significantly in recent years, with new drugs and multiple possible combinations. Dual antiplatelet therapy (DAPT) is currently recommended after TAVI, with oral anticoagulation (OAC) restricted to specific indications. However, atrial fibrillation (which is often clinically silent and unrecognised) is common after the procedure, and embolic material is often thrombin-rich. Recent evidence has therefore questioned this approach, suggesting that DAPT may be futile compared with aspirin alone and that OAC could be a relevant alternative. Future randomised and appropriately powered trials comparing different regimens of antithrombotic therapy, including new antiplatelet and anticoagulant agents, are warranted to increase the available evidence on this topic and create appropriate recommendations for this frail population. Meanwhile, it remains rational to adhere to current guidelines, with routine DAPT and recourse to OAC when specifically indicated, whilst always tailoring therapy on the basis of individual bleeding and thromboembolic risk.
Abstract:
A vast amount of temporal information is provided on the Web. Even though many facts expressed in documents are time-related, the temporal properties of Web presentations have not received much attention. In database research, temporal databases have become a mainstream topic in recent years. In Web documents, temporal data may exist as metadata in the header and as user-directed data in the body of a document. Whereas temporal data can easily be identified in the semi-structured metadata, it is more difficult to determine temporal data and its role in the body. We propose procedures for maintaining the temporal integrity of Web pages and outline different approaches to applying bitemporal data concepts to Web documents. In particular, we consider desirable functionalities of Web repositories and other Web-related tools that may support Webmasters in managing the temporal data of their Web documents. Some properties of a prototype environment are described.
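A minimal sketch of how a bitemporal fact from a Web document might be represented, assuming the two standard time dimensions from temporal-database research: valid time (when the fact holds in the modelled world) and transaction time (when the repository recorded it). The class and field names are illustrative, not taken from the prototype described above.

```python
# Sketch of a bitemporal record for a Web-document fact. Valid time says
# when the statement is true in the real world; transaction time says when
# the Web repository stored (and later superseded) it.
from dataclasses import dataclass
from datetime import date

FOREVER = date.max  # open-ended upper bound for "still current"

@dataclass
class BitemporalFact:
    content: str
    valid_from: date      # valid-time interval
    valid_to: date
    tx_from: date         # transaction-time interval
    tx_to: date = FOREVER

    def current(self, as_of: date, known_at: date) -> bool:
        """Was this fact valid at `as_of`, according to what the
        repository knew at `known_at`?"""
        return (self.valid_from <= as_of < self.valid_to
                and self.tx_from <= known_at < self.tx_to)

# Example: a price published on a Web page on 2 Jan, valid for H1 2024.
fact = BitemporalFact("price = 10 EUR", date(2024, 1, 1), date(2024, 7, 1),
                      tx_from=date(2024, 1, 2))
print(fact.current(as_of=date(2024, 3, 1), known_at=date(2024, 2, 1)))  # True
```

Separating the two time axes is what lets a repository answer both "what did the page say?" and "what did we believe the page said at the time?", which is the core of the integrity-maintenance procedures outlined above.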
Abstract:
BACKGROUND Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM Our aim was to challenge the validity of these software algorithms. METHODS We compared multiple breath washout (MBW) results of the current software algorithms (automatic mode) with refined algorithms (manual mode) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between the two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS In infants with CF, the refined manual mode revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 in the automatic mode (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant cause of falsely normal LCI values in infants with CF in the automatic mode was recognition of the end-of-test too early during the washout. CONCLUSION We recommend the manual mode for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes.
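The following schematic sketch shows how a premature end-of-test decision deflates LCI (cumulative expired volume at end-of-test divided by FRC). It uses an end-tidal tracer-concentration threshold of 1/40 of the starting concentration, a common MBW convention, rather than the molar-mass criterion of the specific device discussed above; the breath data are invented, and the "consecutive breaths" parameter merely stands in for the contrast between the automatic and the refined manual analysis.

```python
# Schematic LCI calculation: LCI = cumulative expired volume (CEV) at
# end-of-test / FRC. End-of-test here: end-tidal tracer concentration
# stays below 1/40 of the starting concentration for `consecutive` breaths.
def lci(et_conc, tidal_volumes, frc, c_start, consecutive=3):
    threshold = c_start / 40.0
    cev = 0.0   # cumulative expired volume (L)
    run = 0     # breaths in a row below threshold
    for conc, vt in zip(et_conc, tidal_volumes):
        cev += vt
        run = run + 1 if conc < threshold else 0
        if run == consecutive:
            return cev / frc
    return None  # washout never completed

# Made-up washout: the signal dips below threshold once (artefact),
# rises again, then clears for good.
c0 = 4.0                          # starting tracer concentration (%)
concs = [3.0, 2.0, 1.0, 0.09, 0.5, 0.4, 0.3, 0.2, 0.09, 0.08, 0.07]
vols = [0.05] * len(concs)        # tidal volumes (L)
frc = 0.06                        # functional residual capacity (L)

print(lci(concs, vols, frc, c0, consecutive=1))  # naive rule stops at the dip: ~3.3
print(lci(concs, vols, frc, c0, consecutive=3))  # refined rule: ~9.2, clearly elevated
```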
Abstract:
Question/Introduction: Examinations are an essential component of medical education. They provide valuable information about students' developmental progress and both accompany and modulate learning [1], [2]. Written examinations are currently dominated by multiple-choice questions, which are used in various types. Most commonly, Type A questions are used, in which exactly one answer is correct. Multiple true-false (MTF) questions, in contrast, allow several correct answers: for each answer option, candidates must decide whether it is true or false. Because they permit multiple answers, MTF questions appear better able to reflect certain clinical situations. MTF questions also appear superior to Type A questions with respect to reliability and information gain per unit of testing time [3]. Nevertheless, MTF questions have rarely been used so far, and there is little literature on this question format. This study investigates the extent to which the use of MTF questions can increase the utility of written examinations according to van der Vleuten (reliability, validity, cost, effect on the learning process, and acceptance by examinees) [4]. To increase test reliability and reduce the cost of examinations, we aim to determine the optimal scoring system for MTF questions. Methods: We analyse data from summative examinations at the Medical Faculty of the University of Bern. Our data comprise examinations from the first to the sixth year of study as well as one specialist board examination. All examinations contain both MTF and Type A questions. For these examinations we compare quarter-, half-, and full-credit scoring for MTF questions. With quarter-credit scoring, candidates receive a quarter point for each correct sub-answer. With half-credit scoring, half a point is awarded if more than half of the answer options are answered correctly, and a full point if all answers are correct. With full-credit scoring, candidates receive a point only if the entire question is answered correctly. These scoring schemes are compared with respect to item characteristics such as discrimination and difficulty, and test characteristics such as reliability. The results are also compared with those for Type A questions. Results: Preliminary results suggest that half-credit scoring is optimal. Half-credit scoring leads to medium item difficulties and, as a result, to high discrimination indices. This contributes to high test reliability. Discussion/Conclusion: Combined with an optimal scoring system, MTF questions appear to lead to higher test reliability than Type A questions. Depending on the content to be tested, MTF questions could be a valuable complement to Type A questions. A suitable combination of MTF and Type A questions could improve the utility of written examinations.
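The three scoring schemes compared in the study can be expressed compactly in code. Here is a minimal sketch for a single MTF item; the answer key and candidate response are hypothetical.

```python
# Quarter-, half-, and full-credit scoring for one MTF item, as described
# in the abstract above. `key` and `response` are lists of booleans, one
# per true/false sub-answer (Kprim items typically have four).
def score_mtf(key, response, scheme):
    n_correct = sum(k == r for k, r in zip(key, response))
    n = len(key)
    if scheme == "quarter":      # one quarter point per correct sub-answer (1/n in general)
        return n_correct / n
    if scheme == "half":         # half a point if more than half correct, full if all
        if n_correct == n:
            return 1.0
        return 0.5 if n_correct > n / 2 else 0.0
    if scheme == "full":         # credit only for a fully correct item
        return 1.0 if n_correct == n else 0.0
    raise ValueError(f"unknown scheme: {scheme}")

key      = [True, False, True, True]   # hypothetical answer key
response = [True, False, False, True]  # candidate got 3 of 4 sub-answers right

for scheme in ("quarter", "half", "full"):
    print(scheme, score_mtf(key, response, scheme))
# quarter 0.75, half 0.5, full 0.0
```

The example makes the study's intuition visible: full-credit scoring discards partial knowledge entirely, while half-credit scoring rewards it in coarser steps than quarter-credit scoring, which is the lever behind the differing item difficulties and discrimination indices.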
Abstract:
Background: Multiple true-false items (MTF items) might offer some advantages over one-best-answer questions (Type A) because they allow more than one correct answer and may better represent clinical decisions. However, MTF items are seldom used in medical education assessment. Summary of Work: In this literature review, existing findings on MTF items and on Type A questions were compared along the Ottawa Criteria for Good Assessment, i.e. (1) reproducibility, (2) feasibility, (3) validity, (4) acceptance, (5) educational effect, (6) catalytic effects, and (7) equivalence. We conducted a literature search in ERIC and Google Scholar including papers from 1935 to 2014, using the search terms "multiple true-false", "true-false", "true/false", and "Kprim" combined with "exam", "test", and "assessment". Summary of Results: We included 29 of 33 studies, four of which were carried out in the medical field. Compared to Type A, MTF items are associated with (1) higher reproducibility, (2) lower feasibility, (3) similar validity, (4) higher acceptance, and (5) a higher educational effect; no studies were found on (6) catalytic effects or (7) equivalence. Discussion and Conclusions: While studies show overall good characteristics of MTF items according to the Ottawa criteria, this question type seems to be rather seldom used. One reason might be the reported lower feasibility. Overall, the literature base is still weak; furthermore, only 14% of the literature is from the medical domain. Further studies to better understand the characteristics of MTF items in the medical domain are warranted. Take-home messages: Overall, the literature base is weak, and further studies are therefore needed. Existing studies show that MTF items offer higher reliability, acceptance, and educational effect, but are more difficult to produce.
Abstract:
The diagnosis of neuroendocrine tumors is based on their histopathologic appearance and immunohistochemical profile. With the WHO 2010 classification, formal staging and grading were introduced for gastro-entero-pancreatic NET; the nomenclature for lung neuroendocrine tumors, however, still relies on the carcinoid term. In this review we also focus on neuroendocrine carcinoma of unknown primary, tissue biomarkers, and current controversies in the histopathology of NEN.