253 results for Naive Bayes Classifier


Relevance:

10.00%

Publisher:

Abstract:

Adenovirus is a nonenveloped dsDNA virus that activates intracellular innate immune pathways. In vivo, adenovirus-immunized mice displayed an enhanced innate immune response and diminished virus-mediated gene delivery following challenge with the adenovirus vector AdLacZ, suggesting that antiviral Abs modulate viral interactions with innate immune cells. Under naive serum conditions in vitro, adenovirus binding and internalization in macrophages and the subsequent activation of innate immune mechanisms were inefficient. In contrast to the neutralizing effect observed in nonhematopoietic cells, adenovirus infection in the presence of antiviral Abs significantly increased FcR-dependent viral internalization in macrophages. In direct correlation with the increased viral internalization, antiviral Abs amplified the innate immune response to adenovirus as determined by the expression of NF-kappaB-dependent genes, type I IFNs, and caspase-dependent IL-1beta maturation. Immune serum amplified TLR9-independent type I IFN expression and enhanced NLRP3-dependent IL-1beta maturation in response to adenovirus, confirming that antiviral Abs specifically amplify intracellular innate pathways. In the presence of Abs, confocal microscopy demonstrated increased targeting of adenovirus to LAMP1-positive phagolysosomes in macrophages but not epithelial cells. These data show that antiviral Abs subvert natural viral tropism and target the adenovirus to phagolysosomes and the intracellular innate immune system in macrophages. Furthermore, these results illustrate a cross-talk in which the adaptive immune system positively regulates the innate immune system and the antiviral state.


Wolves in Italy declined sharply in the past and have been confined south of the Alps since the turn of the last century; by the 1970s they were reduced to approximately 100 individuals surviving in two fragmented subpopulations in the central-southern Apennines. The Italian wolves are presently expanding in the Apennines, and started to recolonize the western Alps in Italy, France and Switzerland about 16 years ago. In this study, we used a population genetic approach to elucidate some aspects of the wolf recolonization process. DNA extracted from 3068 tissue and scat samples collected in the Apennines (the source populations) and in the Alps (the colony) was genotyped at 12 microsatellite loci, aiming to assess (i) the strength of the bottleneck and founder effects during the onset of colonization; (ii) the rates of gene flow between source and colony; and (iii) the minimum number of colonizers needed to explain the genetic variability observed in the colony. We identified a total of 435 distinct wolf genotypes, which showed that wolves in the Alps: (i) have significantly lower genetic diversity (heterozygosity, allelic richness, number of private alleles) than wolves in the Apennines; (ii) are genetically distinct according to pairwise F(ST) values, a population assignment test and Bayesian clustering; (iii) are not in genetic equilibrium (significant bottleneck test). Spatial autocorrelations are significant among samples separated by up to c. 230 km, roughly corresponding to the apparent gap in permanent wolf presence between the Alps and the northern Apennines. The estimated number of first-generation migrants indicates that migration has been unidirectional and male-biased, from the Apennines to the Alps, and that wolves in southern Italy did not contribute to the Alpine population.
These results suggest that: (i) the Alps were colonized by a few long-range migrating wolves originating in the north Apennine subpopulation; (ii) during the colonization process there has been a moderate bottleneck; and (iii) gene flow between sources and colonies was moderate (corresponding to 1.25-2.50 wolves per generation), despite high potential for dispersal. Bottleneck simulations showed that a total of c. 8-16 effective founders are needed to explain the genetic diversity observed in the Alps. Levels of genetic diversity in the expanding Alpine wolf population, and the permanence of genetic structuring, will depend on the future rates of gene flow among distinct wolf subpopulation fragments.
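The diversity and differentiation statistics reported above (expected heterozygosity, pairwise F(ST)) follow from allele frequencies alone. As a minimal, hedged illustration with invented allele frequencies at a single microsatellite locus (the study used 12 loci and estimator corrections not shown here):

```python
# Hedged sketch, not the study's code: Hardy-Weinberg expected heterozygosity
# and Wright's F_ST for two populations, with invented allele frequencies.

def expected_heterozygosity(freqs):
    """H_e = 1 - sum(p_i^2) under Hardy-Weinberg equilibrium."""
    return 1.0 - sum(p * p for p in freqs)

def fst_two_pops(p1, p2):
    """Wright's F_ST = (H_T - H_S) / H_T for two equally weighted subpopulations."""
    h_s = (expected_heterozygosity(p1) + expected_heterozygosity(p2)) / 2
    pooled = [(a + b) / 2 for a, b in zip(p1, p2)]
    h_t = expected_heterozygosity(pooled)
    return (h_t - h_s) / h_t

apennines = [0.5, 0.3, 0.2]  # invented frequencies of 3 alleles (source)
alps = [0.8, 0.1, 0.1]       # invented, lower diversity in the colony

print(round(expected_heterozygosity(apennines), 3))  # 0.62
print(round(expected_heterozygosity(alps), 3))       # 0.34
print(round(fst_two_pops(apennines, alps), 3))       # 0.068
```

With real microsatellite data one would use sample-size-corrected estimators (e.g. Weir and Cockerham's θ) and average across loci, as population genetics packages do.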


Signal transducer and activator of transcription (STAT)-3 inhibitors play an important role in regulating immune responses. Galiellalactone (GL) is a fungal secondary metabolite known to interfere with the binding of phosphorylated signal transducer and activator of transcription (pSTAT)-3 as well as of pSTAT-6 dimers to their target DNA in vitro. Intranasal delivery of 50 μg GL into the lung of naive Balb/c mice induced FoxP3 expression locally and IL-10 production and IL-12p40 mRNA expression in the airways in vivo. In a murine model of allergic asthma, GL significantly suppressed the cardinal features of asthma, such as airway hyperresponsiveness, eosinophilia and mucus production, after sensitization and subsequent challenge with ovalbumin (OVA). These changes resulted in induction of IL-12p70 and IL-10 production by lung CD11c(+) dendritic cells (DCs), accompanied by an increase of IL-3 receptor α chain and indoleamine-2,3-dioxygenase expression in these cells. Furthermore, GL inhibited IL-4 production in T-bet-deficient CD4(+) T cells and down-regulated the suppressor of cytokine signaling-3 (SOCS-3), also in the absence of STAT-3 in T cells, in the lung in a murine model of asthma. In addition, we found reduced amounts of pSTAT-5 in the lung of GL-treated mice, which correlated with decreased release of IL-2 by lung OVA-specific CD4(+) T cells after treatment with GL in vitro, also in the absence of T-bet. Thus, GL treatment in vivo and in vitro emerges as a novel therapeutic approach for allergic asthma, modulating lung DC phenotype and function and resulting in a protective response via CD4(+)FoxP3(+) regulatory T cells locally.


Cancer is one of the world's leading causes of death, with a rising trend in incidence. These epidemiologic observations underline the need for novel treatment strategies. In this regard, a promising approach takes advantage of the adaptive effector mechanisms of the immune system, using T lymphocytes to specifically target and destroy tumour cells. However, whereas current approaches mainly depend on short-lived, terminally differentiated effector T cells, increasing evidence suggests that long-lasting and maximally efficient immune responses are mediated by less differentiated memory T cells. These memory T cells should display characteristics of stem cells, such as longevity, self-renewal capacity and the ability to continuously give rise to further differentiated effectors. These stem cell-like memory T (TSCM) cells are thought to be of key therapeutic value as they might not only attack differentiated tumour cells, but also eradicate the root cause of cancer, the cancer stem cells themselves. Thus, efforts are made to characterize TSCM cells and to identify the signalling pathways which mediate their induction. Recently, a human TSCM cell subset was described, and activation of the Wnt-β-catenin signalling pathway by the drug TWS119 during naive CD8+ T (TN) cell priming was suggested to mediate their induction. However, a precise deciphering of the signalling pathways leading to TSCM cell induction and an in-depth characterization of in vitro induced and in vivo occurring TSCM cells remain to be performed. Here, evidence is presented that the induction of human and mouse CD8+ and CD4+ TSCM cells may be triggered by inhibition of mechanistic/mammalian target of rapamycin (mTOR) complex 1 with simultaneously active mTOR complex 2. This molecular mechanism arrests a fraction of activated TN cells in a stem cell-like differentiation state independently of the Wnt-β-catenin signalling pathway.
Of note, TWS119 was found to also inhibit mTORC1, thereby mediating the induction of TSCM cells. Suggesting an immunostimulatory effect, the acquired data broaden the therapeutic range of mTORC1 inhibitors like rapamycin, which are, at present, used exclusively for their immunosuppressive function. Furthermore, by performing broad metabolic analyses, a well-orchestrated interplay between intracellular signalling pathways and the T cells' metabolic programmes could be identified as an important regulator of the T cells' differentiation fate. Moreover, in vitro induced CD4+ TSCM cells possess superior functional capacities and share fate-determining key factors with their naturally occurring counterparts, as assessed by a first-time full transcriptome analysis of in vivo occurring CD4+ TN cells, TSCM cells and central memory (TCM) cells and of in vitro induced CD4+ TSCM cells. Of interest, a group of 56 genes with a unique expression profile in TSCM cells could be identified. Thus, a pharmacological mechanism has been found that confers stemness to activated TN cells, which might be highly relevant for the design of novel T cell-based cancer immunotherapies.


Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; this risk assessment therefore lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used, and where, in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entrance pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protection means in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland. It was designed based on the results of the pilot study.
The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several preliminary studies were conducted on the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy when measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies. Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 SUVA clients.
It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval: 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. To measure airborne concentrations of sub-micrometre-sized particles, a few well-known methods exist. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bi-modal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were found between diffusion size classifiers and CPC/SMPS.
- The agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better but, depending on the concentration, size or type of the powders measured, the differences could still amount to an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes handling the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them.
The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods of measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.
Zusammenfassung (translated from German): The occupational health risk of nanoparticles is the probability that a worker suffers a possible adverse health effect when exposed to the substance; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires, on the one hand, information on the release of such materials into the environment and, on the other, information on workers' exposure. Much of this information is not yet collected systematically and is therefore missing from risk analyses. The aim of this thesis was to lay the groundwork for a quantitative estimate of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure.
The study set out to investigate to what extent nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether measurement technology is already adequate for the necessary workplace exposure measurements. It focused on exposure to airborne particles, because inhalation is regarded as the main entry route for particles into the body. The thesis builds on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving a specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was carried out as a preliminary study for a national, representative survey of Swiss industry. It sought information on the occurrence of nanoparticles and the protective measures applied. The study consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies, which were selected on the basis of publicly available information suggesting that they handle nanoparticles. The second part of the thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. It built on the information from the pilot study and was carried out with a representative selection of companies insured by the Swiss Accident Insurance Fund (SUVA), thereby covering the majority of Swiss companies in the industrial sector. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were carried out to probe the limits of commonly used nanoparticle measurement devices when they have to measure larger quantities of nanoparticle agglomerates.
This focus was chosen for two reasons: because several discussions with users and a producer of the measurement devices pointed to a weak spot there, raising doubts about the accuracy of the instruments, and because the two survey studies had shown that such nanoparticle agglomerates occur frequently. First, a preliminary study addressed the accuracy of the Scanning Mobility Particle Sizer (SMPS). In the presence of nanoparticle agglomerates, this device displayed an implausible bimodal particle size distribution. A series of short experiments followed, focusing on other measurement devices and their problems when measuring nanoparticle agglomerates: the condensation particle counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, some first feasibility tests on the efficiency of filter-based measurement of airborne carbon nanotubes (CNT) were carried out. The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and documented the state of knowledge of the interviewed health and safety specialists. The contacted companies declared the following types of nanoparticles as maximum quantities (> 1'000 kg per year per company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1'626 companies, all clients of the Swiss Accident Insurance Fund (SUVA), were contacted by mail. The survey results allowed an estimate of the number of companies and workers using nanoparticles in Switzerland.
Extrapolation to the Swiss industrial sector gave the following picture: in 586 companies (95% confidence interval: 145 to 1'027 companies), 1'309 workers are potentially exposed to nanoparticles (95% CI: 1'073 to 1'545). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). Some well-established technologies exist for measuring airborne concentrations of sub-micrometre particles. However, it was doubtful to what extent these technologies can also be used to measure engineered nanoparticles. For this reason, the preparatory studies for the workplace assessments focused on measuring powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of measurement error at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (and the SMPS, respectively).
- The differences between CPC/SMPS and the PAS were smaller but, depending on the size or type of the powder measured, could still amount to a good order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process. Such interactions make it difficult to account correctly for the background particle concentration in the measurement data.
- Electric motors produce large quantities of nanoparticles and can thus confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey tempered this finding somewhat by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry), few or no applications were found, suggesting that the introduction of this new technology is only at the beginning of its development. Even though the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowledge of exposure, and of how to measure it correctly, is very important, especially because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that there is still ground to make up: before larger measurement studies can be carried out, the devices and methods must be validated for use with nanoparticle agglomerates.
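The extrapolation arithmetic in the summary above can be reproduced directly: dividing the estimated counts of exposed workers and companies by their reported shares of the Swiss production sector recovers the implied sector totals. A small back-of-the-envelope sketch (the implied totals are back-calculations, not figures reported by the thesis):

```python
# Back-of-the-envelope check of the survey extrapolation. The inputs are the
# point estimates quoted in the summary above; the implied sector totals are
# back-calculations, not figures from the thesis itself.
exposed_workers, worker_share = 1309, 0.0008    # 1'309 workers = 0.08% of workforce
exposed_companies, company_share = 586, 0.006   # 586 companies = 0.6% of companies

implied_workforce = exposed_workers / worker_share
implied_companies = exposed_companies / company_share
print(f"implied production-sector workforce: ~{implied_workforce:,.0f}")  # ~1,636,250
print(f"implied number of companies: ~{implied_companies:,.0f}")          # ~97,667
```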


BACKGROUND: Poor tolerance and adverse drug reactions are the main reasons for discontinuation of antiretroviral therapy (ART). Identifying predictors of ART discontinuation is a priority in HIV care. METHODS: We performed a genetic association study in an observational cohort to evaluate the association of pharmacogenetic markers with time to treatment discontinuation during the first year of ART. The analysis included 577 treatment-naive individuals initiating tenofovir (n = 500) or abacavir (n = 77), with efavirenz (n = 272), lopinavir/ritonavir (n = 184), or atazanavir/ritonavir (n = 121). Genotyping included 23 genetic markers in 15 genes associated with toxicity or pharmacokinetics of the study medication. Rates of ART discontinuation between groups with and without genetic risk markers were assessed by survival analysis using Cox regression models. RESULTS: During the first year of ART, 190 individuals (33%) stopped 1 or more drugs. For efavirenz and atazanavir, individuals with genetic risk markers experienced higher discontinuation rates than individuals without (71.15% vs 28.10%, and 62.5% vs 14.6%, respectively). The efavirenz discontinuation hazard ratio (HR) was 3.14 (95% confidence interval [CI]: 1.35-7.33; P = .008). The atazanavir discontinuation HR was 9.13 (95% CI: 3.38-24.69; P < .0001). CONCLUSIONS: Several pharmacogenetic markers identify individuals at risk for early treatment discontinuation. These markers should be considered for validation in the clinical setting.
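The hazard ratios above come from time-to-event Cox models. As a simpler, hedged illustration of how a between-group effect and its confidence interval are computed, the sketch below uses a crude risk ratio with a Wald interval on the log scale. The counts are invented, not the study's data, and a crude risk ratio ignores censoring and timing, unlike the hazard ratios reported above:

```python
# Hedged sketch with invented counts: crude risk ratio of treatment
# discontinuation (carriers vs non-carriers of a genetic risk marker)
# with a 95% Wald CI computed on the log scale.
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio of events a/n1 vs c/n2 with a Wald CI on the log scale."""
    rr = (a / n1) / (c / n2)
    # Standard error of ln(RR) for two independent binomial proportions
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Invented example: 35/50 carriers vs 30/100 non-carriers stopped treatment
rr, lo, hi = risk_ratio_ci(a=35, n1=50, c=30, n2=100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 2.33 (95% CI 1.64-3.31)
```

A real analysis of time to discontinuation would fit a Cox proportional-hazards model (e.g. with a survival-analysis library) so that individuals censored before one year contribute correctly.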


Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms duration sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether/when the brain discriminates between sound categories even when not transpiring behaviorally. We applied a single-trial classifier that identified voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level. Subjects exhibited differential activity before the sound ended (starting at 112 ms) and in a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpreted the first as being related to an implicit tuning for sound representations and the second as being linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency and that explicit sound categorization requires a subsequent processing stage.
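A single-trial topographic classifier of this kind can be caricatured as template matching: average the training trials of each category into a voltage-topography template, then assign each test trial to the template with which it correlates best. The sketch below runs on simulated data; the electrode count mirrors the 160-channel setup, but the class patterns, noise level and trial counts are invented, and the paper's actual classifier is more sophisticated:

```python
# Hedged sketch on simulated "topographies" (not real EEG): correlation-based
# nearest-template decoding of a semantic category from single trials.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_train, n_test = 160, 40, 20

# Two invented class topographies; trials are the pattern plus Gaussian noise
living = rng.normal(size=n_electrodes)
manmade = rng.normal(size=n_electrodes)

def simulate(pattern, n):
    return pattern + rng.normal(scale=2.0, size=(n, n_electrodes))

X_train = np.vstack([simulate(living, n_train), simulate(manmade, n_train)])
y_train = np.array([0] * n_train + [1] * n_train)
X_test = np.vstack([simulate(living, n_test), simulate(manmade, n_test)])
y_test = np.array([0] * n_test + [1] * n_test)

# Class-average templates, then correlation-based nearest-template decoding
templates = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def decode(trial):
    corrs = [np.corrcoef(trial, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

accuracy = np.mean([decode(x) == y for x, y in zip(X_test, y_test)])
print(f"decoding accuracy: {accuracy:.2f}")
```

In practice such decoding is evaluated with proper cross-validation and permutation statistics to establish above-chance performance per latency window.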


Background and Aims: The three anti-TNF agents infliximab (IFX), adalimumab (ADA) and certolizumab pegol (CZP) have demonstrated similar efficacy in induction and maintenance of response and remission in Crohn's disease (CD) treatment. Given the comparability of these drugs, patients' preferences may influence the choice of the product. However, data on patients' preferences for choosing anti-TNF agents are lacking. We therefore aimed to assess the CD patient's appraisal in selecting the drug of his choice and to identify factors guiding this decision. Methods: A prospective survey among anti-TNF-naive CD patients was performed. Patients were provided a description of the three anti-TNF agents focusing on indication, application mode (s.c. vs. i.v.), application time intervals, setting of application (hospital vs. private practice vs. patient's home), average time to apply the medication per month, typical side effects, and the scientific evidence of efficacy and safety available for every drug. Patients answered a questionnaire consisting of 17 questions covering demographic, disease-specific, and medication data. Results: One hundred patients (47 f/53 m, mean age 45±16 years) completed the questionnaire. Disease duration was <1 year in 7%, 1-5 years in 31%, and >5 years in 62% of patients. Disease location was ileal in 33%, colonic in 40%, and ileocolonic in 27%. Disease phenotype was inflammatory in 68%, stenosing in 29%, and internally fistulizing in 3% of patients. Additionally, 20% had perianal fistulizing disease. Patients had already been treated with the following drugs: mesalamines 61%, budesonide 44%, prednisone 97%, thiopurines 78%, methotrexate 16%. In total, 30% had already heard about IFX, 20% about ADA, and 11% about CZP. Thirty-six percent voted for treatment with ADA, 28% for CZP, and 25% for IFX, whereas 11% were undecided.
The following factors influenced the patient's decision for choosing a specific anti-TNF drug (several answers possible): side effects 76%, physician's recommendation 66%, application mode 54%, efficacy experience 52%, time to spend for therapy 27%, patients' recommendations 21%, interactions with other medications 12%. The single most important factor for choosing a specific anti-TNF was (one answer): side-effect profile 35%, physician's recommendation 22%, efficacy experience 21%, application mode 13%, patients' recommendations 5%, time spent for therapy 3%, interaction with other medications 1%. Conclusions: The majority of patients preferred anti-TNF syringes to infusions. The safety profile of the drugs and the physician's recommendation are major factors influencing the patient's choice of a specific anti-TNF drug. Patients' concerns about safety and lifestyle habits should be taken into account when prescribing specific anti-TNF formulations.


BACKGROUND: Gefitinib is active in patients with pretreated non-small-cell lung cancer (NSCLC). We evaluated the activity and toxicity of gefitinib first-line treatment in advanced NSCLC followed by chemotherapy at disease progression. PATIENTS AND METHODS: In all, 63 patients with chemotherapy-naive stage IIIB/IV NSCLC received gefitinib 250 mg/day. At disease progression, gefitinib was replaced by cisplatin 80 mg/m(2) on day 1 and gemcitabine 1250 mg/m(2) on days 1, 8 for up to six 3-week cycles. The primary end point was the disease stabilization rate (DSR) after 12 weeks of gefitinib. RESULTS: After 12 weeks of gefitinib, the DSR was 24% and the response rate (RR) was 8%. Median time to progression (TtP) was 2.5 months and median overall survival (OS) 11.5 months. Never smokers (n = 9) had a DSR of 56% and a median OS of 20.2 months; patients with epidermal growth factor receptor (EGFR) mutation (n = 4) had a DSR of 75% and the median OS was not reached after a follow-up of 21.6 months. In all, 41 patients received chemotherapy with an overall RR of 34%, DSR of 71% and median TtP of 6.7 months. CONCLUSIONS: First-line gefitinib monotherapy led to a DSR of 24% at 12 weeks in an unselected patient population. Never smokers and patients with EGFR mutations tend to have a better outcome; hence, further trials in selected patients are warranted.


Résumé (translated from French): Following recent technological advances, digital image archives have grown qualitatively and quantitatively at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about how to process the masses of acquired data. This question is at the heart of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to stay close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two sciences in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance while adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier.
The lack of labeled information, and the uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges. Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and treatment. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and the simplicity of the approaches proposed, to avoid overly complex models that would not be used by users.
The major challenge of the Thesis is to remain close to concrete remote sensing problems, without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features has been proposed to solve the problem of high dimensionality and collinearity of the image features. This model provides automatically an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled. information were the common root of the second and third models proposed: when confronted to such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase robustness and quality of the description of data. Both solutions have been explored resulting into two methodological contributions, based respectively on active learning and semisupervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating outputs similarity into a model, opens new challenges and opportunities for remote sensing image processing.
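As a minimal illustration of the kernel machinery these models build on (not the thesis's adaptive feature-ranking classifier itself), the following NumPy sketch trains an RBF-kernel ridge classifier on toy two-band "pixels". All names and parameter values here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_kernel_ridge(X, y, gamma=0.5, lam=1e-2):
    """Solve (K + lam*I) alpha = y for labels y in {-1, +1}."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(alpha, X_train, X_new, gamma=0.5):
    """The sign of the kernel expansion gives the predicted class."""
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha)

# Toy "image pixels": two spectral bands, two well-separated classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.4, (40, 2)), rng.normal(1.0, 0.4, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
alpha = fit_kernel_ridge(X, y)
accuracy = np.mean(predict(alpha, X, X) == y)
```

The thesis's first model additionally learns which bands matter while fitting the classifier; in this sketch all bands are weighted equally inside the kernel.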

Relevância:

10.00%

Publicador:

Resumo:

Context: Foreign body aspiration (FbA) is a serious problem in children. Accurate clinical and radiographic diagnosis is important because a missed or delayed diagnosis can result in respiratory difficulties ranging from life-threatening airway obstruction to chronic wheezing or recurrent pneumonia. Bronchoscopy also has risks, and an accurate clinical and radiographic diagnosis can support the decision to perform bronchoscopy. Objective: To review the diagnostic accuracy of clinical presentation (CP) and the pulmonary radiograph (PR) for the diagnosis of FbA; no previous review exists. Methods: A Medline search was conducted for articles containing data on CP and PR signs of FbA. Likelihood ratios (LR) and pre- and post-test probabilities were calculated with Bayes' theorem for all CP and PR signs. Inclusion criteria: articles containing prospective data on CP and PR in FbA. Exclusion criteria: retrospective studies, and articles with data insufficient to calculate LRs. Results: Five prospective studies were included, totalling 585 patients. The prevalence of FbA is 63% in children suspected of FbA. If the CP is normal, the probability of FbA is 25%; if the PR is normal, the probability is 14%. If the CP is pathological, the probability of FbA is 69-76% in the presence of cough (LR = 1.32), dyspnea (LR = 1.84) or localized crackles (LR = 1.5). The probability is 81-88% if cyanosis (LR = 4.8), decreased breath sounds (LR = 4.3), asymmetric auscultation (LR = 2.9) or localized wheezing (LR = 2.5) is present. When the CP is abnormal and the PR shows mediastinal shift (LR = 100), pneumomediastinum (LR = 100), a radio-opaque foreign body (LR = 100), lobar distention (LR = 4), atelectasis (LR = 2.5) or an abnormal inspiratory/expiratory film (LR = 7), the probability of FbA is 96-100%. If the CP is normal and the PR abnormal, the probability is 40-100%; if the CP is abnormal and the PR normal, the probability is 55-75%.
Conclusions: This review of prospective studies demonstrates the importance of CP and PR, and an algorithm can be proposed. When the CP is abnormal, with or without a pathological PR, the probability of FbA is high and bronchoscopy is indicated. When CP and PR are normal, the probability of FbA is low and bronchoscopy is not immediately necessary; observation should be proposed. This approach should be validated in a prospective study.
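The probability updates above follow directly from Bayes' theorem applied to odds: post-test odds equal pre-test odds multiplied by the likelihood ratio. A short sketch (the function name is illustrative) reproduces, for example, the decreased-breath-sounds figure from the 63% pre-test prevalence:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem on odds: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# 63% pre-test prevalence of FbA, decreased breath sounds (LR = 4.3):
p = post_test_probability(0.63, 4.3)
print(f"{p:.0%}")  # prints 88%, the upper end of the 81-88% range reported
```

The same function with LR = 2.5 (localized wheezing) gives 81%, the lower end of that range, which is how the probability intervals in the Results were obtained.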

Relevância:

10.00%

Publicador:

Resumo:

SUMMARY: A top scoring pair (TSP) classifier consists of a pair of variables whose relative ordering can be used to accurately predict the class label of a sample. This classification rule has the advantage of being easily interpretable and more robust against technical variations in the data, such as those due to different microarray platforms. Here we describe a parallel implementation of this classifier which significantly reduces the training time, and a number of extensions, including a multi-class approach, which has the potential of improving the classification performance. AVAILABILITY AND IMPLEMENTATION: Full C++ source code and the R package Rgtsp are freely available from http://lausanne.isb-sib.ch/~vpopovic/research/. The implementation relies on existing OpenMP libraries.
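The TSP rule itself can be stated in a few lines: pick the feature pair (i, j) whose ordering X_i < X_j best separates the two classes, then classify a new sample by that ordering alone. The following NumPy sketch of scoring and applying a pair is illustrative only (it is not the Rgtsp implementation):

```python
import numpy as np

def tsp_score(X, y, i, j):
    """TSP score of pair (i, j): |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)|."""
    p0 = np.mean(X[y == 0, i] < X[y == 0, j])
    p1 = np.mean(X[y == 1, i] < X[y == 1, j])
    return abs(p0 - p1)

def tsp_predict(x, i, j):
    """Classify a sample from the relative ordering of the two features alone."""
    return int(x[i] < x[j])

# Toy expression data: feature 1 exceeds feature 0 in class 1, and vice versa.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2.0, 1.0], 0.3, (50, 2)),   # class 0
               rng.normal([1.0, 2.0], 0.3, (50, 2))])  # class 1
y = np.r_[np.zeros(50, dtype=int), np.ones(50, dtype=int)]
score = tsp_score(X, y, 0, 1)  # close to 1.0 for a well-separating pair
```

Because the decision depends only on the ordering of two values within each sample, it is invariant to any monotone per-sample transformation, which is the source of the robustness to platform effects mentioned above.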

Relevância:

10.00%

Publicador:

Resumo:

The TCR repertoire of CD8+ T cells specific for Moloney murine leukemia virus (M-MuLV)-associated Ags has been investigated in vitro and in vivo. Analysis of a large panel of established CD8+ CTL clones specific for M-MuLV indicated an overwhelming bias for V beta4 in BALB/c mice and for V beta5.2 in C57BL/6 mice. These V beta biases were already detectable in mixed lymphocyte:tumor cell cultures established from virus-immune spleen cells. Furthermore, direct ex vivo analysis of PBL from BALB/c or C57BL/6 mice immunized with syngeneic M-MuLV-infected tumor cells revealed a dramatic increase in CD8+ cells expressing V beta4 or V beta5.2, respectively. M-MuLV-specific CD8+ cells with an activated (CD62L-) phenotype persisted in blood of immunized mice for at least 2 mo, and exhibited decreased TCR and CD8 levels compared with their naive counterparts. In C57BL/6 mice, most M-MuLV-specific CD8+ CTL clones and immune PBL coexpressed V alpha3.2 in association with V beta5.2. Moreover, these V beta5.2+ V alpha3.2+ cells were shown to recognize the recently described H-2Db-restricted epitope (CCLCLTVFL) encoded in the leader sequence of the M-MuLV gag polyprotein. Collectively, our data demonstrate a highly restricted TCR repertoire in the CD8+ T cell response to M-MuLV-associated Ags in vivo, and suggest the potential utility of flow-microfluorometric analysis of V beta and V alpha expression in the diagnosis and monitoring of viral infections.

Relevância:

10.00%

Publicador:

Resumo:

Abstract: This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and is applied to forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as on very large problems.
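The large-scale clustering method described above trains a functional model by stochastic gradient descent. As a minimal, hedged illustration of the same online idea (online k-means, the simplest stochastic clustering scheme, not the thesis's neural-network model), each sample nudges its nearest centroid toward itself with a decaying step, so the full dataset never has to fit in memory at once:

```python
import numpy as np

def online_kmeans(X, k, epochs=5, seed=0):
    """Online (stochastic) k-means: one centroid update per sample."""
    rng = np.random.default_rng(seed)
    # Deterministic farthest-point initialisation of the k centroids.
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])
    centers = np.array(centers, dtype=float)
    counts = np.ones(k)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(((centers - x) ** 2).sum(axis=1))  # nearest centroid
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]       # decaying step size
    return centers

# Two well-separated toy clusters around -2 and +2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.3, (100, 2)), rng.normal(2.0, 0.3, (100, 2))])
centers = online_kmeans(X, k=2)
```

Replacing the centroid table with a parametric model (e.g., a neural network) updated by the same per-sample gradient steps yields the kind of scalable, out-of-sample-capable clustering the abstract describes.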