812 results for: Boolean-like laws. Fuzzy implications. Fuzzy rule-based systems. Fuzzy set theories
Abstract:
Recommender systems as we know them emerged at the end of the twentieth century and have since spread into numerous fields, among which we will examine software engineering, medicine, corporate network management and, as the focal topic of this thesis, e-Learning. After a brief overview of the current state of the art of recommender systems, moving quickly through pure methods and the hybrid methods obtained by combining them, we will examine several practical applications to give the reader a sense of how varied the application domains of this software can be. We will discuss in detail how various recommendation techniques work in the e-Learning domain, analysing the issues that distinguish this field from all the others. In particular, we will devote an entire section to the psychology of the learner, and to how understanding a student's cognitive profile helps in suggesting the right resource to learn in the most appropriate way. Finally, privacy deserves attention: as we will see in the first chapter, recommender systems make extensive use of sensitive user data in order to provide the most accurate suggestions possible. But how can we protect users against intrusions, and thus against privacy violations? The goal of this thesis is therefore to present the current state of recommender systems, in e-Learning and beyond, so as to provide a clear, simple yet complete reference for anyone wishing to approach the extraordinary and fascinating world of online recommendation.
Abstract:
In Western industrialized countries, breast cancer is the most common malignant tumour in women. It accounts for about 21 % of all cancers in women worldwide. By now, one in nine women is at risk of developing breast cancer during her lifetime. The age-standardized mortality rate is currently just under 27 %.

Breast cancer has a relatively low growth rate. A diagnostic procedure with which all breast carcinomas under 10 mm in diameter were detected and removed would virtually eliminate death from breast cancer, since the 20-year survival rate for initial carcinomas of 5 to 10 mm is very high, at over 95 %.

Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a diameter of 3 mm. The diagnostic methodology, however, is complex and error-prone, and requires a long training period and thus considerable experience on the part of the radiologist.

Computer-aided diagnosis software can improve the quality of such a complex diagnosis, or at least speed up the process. The goal of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system. To my knowledge, no such complete software exists to date.

The software executes a chain of image-processing steps modelled on the radiologist's workflow and produces an independent diagnosis for each detected lesion: first, as a preprocessing step, a 3D image registration eliminates motion artefacts in order to improve image quality for the subsequent processing steps. Every contrast-enhancing object is then detected by a rule-based segmentation with adaptive thresholds. By computing kinetic and morphological features, the contrast-agent uptake as well as the shape, margin, and texture properties of each object are described. Finally, based on the resulting feature vector, two trained neural networks classify each object as an additional finding or as a benign or malignant lesion.

The performance of the software was tested on image data from 101 female patients containing 141 histologically confirmed lesions. The prediction of the dignity of these lesions yielded a sensitivity of 88 % at a specificity of 72 %. These values are similar to the predictions of expert radiologists reported in the literature. The predictions contained on average 2.5 additional malignant findings per patient, which turned out to be misclassified artefacts.
Abstract:
This work introduces the basic concepts of Natural Language Processing, focusing on Information Extraction: its application areas, its main tasks, and how it differs from Information Retrieval. It then analyses the Named Entity Recognition process, concentrating on the main issues in text annotation and on the methods for evaluating the quality of entity extraction. Finally, it provides an overview of GATE/ANNIE, an open-source language-processing software platform, describing its architecture and main components, with an in-depth look at the tools GATE offers for a rule-based approach to Named Entity Recognition.
Abstract:
Visual imagery – similar to visual perception – activates feature-specific and category-specific visual areas. This is frequently observed in experiments where the instruction is to imagine stimuli that have been shown immediately before the imagery task. Hence, feature-specific activation could be related to the short-term memory retrieval of previously presented sensory information. Here, we investigated mental imagery of stimuli that subjects had not seen before, eliminating the effects of short-term memory. We recorded brain activation using fMRI while subjects performed a behaviourally controlled guided imagery task in predefined retinotopic coordinates to optimize sensitivity in early visual areas. Whole brain analyses revealed activation in a parieto-frontal network and lateral–occipital cortex. Region of interest (ROI) based analyses showed activation in left hMT/V5+. Granger causality mapping taking left hMT/V5+ as source revealed an imagery-specific directed influence from the left inferior parietal lobule (IPL). Interestingly, we observed a negative BOLD response in V1–3 during imagery, modulated by the retinotopic location of the imagined motion trace. Our results indicate that rule-based motion imagery can activate higher-order visual areas involved in motion perception, with a role for top-down directed influences originating in IPL. Lower-order visual areas (V1, V2 and V3) were down-regulated during this type of imagery, possibly reflecting inhibition to avoid visual input from interfering with the imagery construction. This suggests that the activation in early visual areas observed in previous studies might be related to short- or long-term memory retrieval of specific sensory experiences.
Abstract:
This paper presents a kernel-density-correlation-based nonrigid point set matching method and shows its application in statistical-model-based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented by kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field that moves the floating point set onto the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy constrain the displacement field for robust point set matching. Incorporating this nonrigid point set matching method into a statistical-model-based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment, conducted on datasets of two patients and six cadavers, demonstrates a mean reconstruction error of 1.9 mm.
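The correlation of two isotropic-Gaussian kernel density estimates has a closed form: a sum of Gaussians of the pairwise point distances with doubled variance. The following is only a sketch of that correlation measure on a rigid-translation toy problem; the paper's displacement-field optimization and its deformation and smoothness regularizers are omitted, and the point sets and bandwidth are invented for illustration.

```python
import math

def kde_correlation(ref, flo, sigma=1.0):
    """Correlation of two 2D Gaussian kernel density estimates.

    The integral of the product of two Gaussian KDEs reduces to a sum of
    Gaussians of the pairwise distances with variance 2*sigma^2.
    """
    norm = 1.0 / (4.0 * math.pi * sigma ** 2 * len(ref) * len(flo))
    total = 0.0
    for (px, py) in ref:
        for (qx, qy) in flo:
            d2 = (px - qx) ** 2 + (py - qy) ** 2
            total += math.exp(-d2 / (4.0 * sigma ** 2))
    return norm * total

# Toy example: the correlation peaks at the shift that moves the
# floating set onto the reference set.
ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
flo = [(2.0, 0.0), (3.0, 0.0), (2.0, 1.0)]  # ref translated by (2, 0)

best = max(
    (kde_correlation(ref, [(x - t, y) for (x, y) in flo]), t)
    for t in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
)
print(best[1])  # the shift that best aligns the two sets
```

In the actual method a full nonrigid displacement field, rather than a single translation parameter, would be optimized against this measure.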
Abstract:
Written text is an important component in the process of knowledge acquisition and communication. Poorly written text fails to deliver clear ideas to the reader no matter how revolutionary and ground-breaking these ideas are. Providing text with good writing style is essential to transfer ideas smoothly. While we have sophisticated tools to check for stylistic problems in program code, we do not apply the same techniques for written text. In this paper we present TextLint, a rule-based tool to check for common style errors in natural language. TextLint provides a structural model of written text and an extensible rule-based checking mechanism.
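A rule-based style checker of the kind TextLint describes can be sketched in a few lines: each rule inspects the text and reports violations. The rules below are invented for illustration and are not TextLint's actual rule set or API (TextLint works on a structural model of the text, not raw regular expressions).

```python
import re

# Hypothetical style rules: (rule name, pattern that flags a violation).
RULES = [
    ("passive voice", re.compile(r"\b(is|are|was|were|been|being)\s+\w+ed\b")),
    ("doubled word", re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)),
    ("long sentence", re.compile(r"[^.!?]{200,}[.!?]")),
]

def lint(text):
    """Return (rule name, offending span) pairs for each style violation."""
    findings = []
    for name, pattern in RULES:
        for m in pattern.finditer(text):
            findings.append((name, m.group(0)))
    return findings

report = lint("The the experiment was conducted carefully.")
for name, span in report:
    print(f"{name}: {span!r}")
```

An extensible checker would let users register new rules in the same shape, which is the spirit of TextLint's rule-based mechanism.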
Abstract:
Mesenchymal stromal cells (MSCs), which reside within various tissues, are utilized in the engineering of cartilage tissue. Dexamethasone (DEX)--a synthetic glucocorticoid--is almost invariably applied to potentiate the growth-factor-induced chondrogenesis of MSCs in vitro, albeit that this effect has been experimentally demonstrated only for transforming-growth-factor-beta (TGF-β)-stimulated bone-marrow-derived MSCs. Clinically, systemic glucocorticoid therapy is associated with untoward side effects (e.g., bone loss and increased susceptibility to infection). Hence, the use of these agents should be avoided or limited. We hypothesize that the influence of DEX on the chondrogenesis of MSCs depends upon their tissue origin and microenvironment [absence or presence of an extracellular matrix (ECM)], as well as upon the nature of the growth factor. We investigated its effects upon the TGF-β1- and bone-morphogenetic-protein 2 (BMP-2)-induced chondrogenesis of MSCs as a function of tissue source (bone marrow vs. synovium) and microenvironment [cell aggregates (no ECM) vs. explants (presence of a natural ECM)]. In aggregates of bone-marrow-derived MSCs, DEX enhanced TGF-β1-induced chondrogenesis by an up-regulation of cartilaginous genes, but had little influence on the BMP-2-induced response. In aggregates of synovial MSCs, DEX exerted no remarkable effect on either TGF-β1- or BMP-2-induced chondrogenesis. In synovial explants, DEX inhibited BMP-2-induced chondrogenesis almost completely, but had little impact on the TGF-β1-induced response. Our data reveal that steroids are not indispensable for the chondrogenesis of MSCs in vitro. Their influence is context dependent (tissue source of the MSCs, their microenvironment and the nature of the growth-factor). This finding has important implications for MSC based approaches to cartilage repair.
Abstract:
BACKGROUND: Patients with chemotherapy-related neutropenia and fever are usually hospitalized and treated with empirical intravenous broad-spectrum antibiotic regimens. Early diagnosis of sepsis in children with febrile neutropenia remains difficult due to non-specific clinical and laboratory signs of infection. We aimed to analyze whether IL-6 and IL-8 could define a group of patients at low risk of septicemia. METHODS: A prospective study was performed to assess the potential value of IL-6, IL-8 and C-reactive protein serum levels to predict severe bacterial infection or bacteremia in febrile neutropenic children with cancer during chemotherapy. Statistical tests used: Friedman test, Wilcoxon test, Kruskal-Wallis H test, Mann-Whitney U test and receiver operating characteristic (ROC) analysis. RESULTS: The analysis of cytokine levels measured at the onset of fever indicated that IL-6 and IL-8 are useful to define a possible group of patients at low risk of sepsis. In predicting bacteremia or severe bacterial infection, IL-6 was the best predictor, with the optimum IL-6 cut-off level of 42 pg/ml showing a high sensitivity (90%) and specificity (85%). CONCLUSION: These findings may have clinical implications for risk-based antimicrobial treatment strategies.
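The reported cut-off turns directly into a simple stratification rule. The sketch below only illustrates thresholding at the published optimum of 42 pg/ml; whether the cut-off value itself falls in the high-risk group is an assumption here, and a real protocol would combine this marker with clinical judgment.

```python
# Optimum IL-6 cut-off reported in the study (pg/ml).
IL6_CUTOFF_PG_ML = 42.0

def risk_group(il6_pg_ml):
    """Flag a febrile neutropenic episode as low or high risk of sepsis.

    Treating values at the cut-off as high risk is an illustrative choice.
    """
    return "high" if il6_pg_ml >= IL6_CUTOFF_PG_ML else "low"

print(risk_group(12.5))   # low
print(risk_group(130.0))  # high
```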
Abstract:
Second Life (SL) is an ideal platform for language learning. It is a so-called Multi-User Virtual Environment, in which users can have a variety of learning experiences in life-like settings. Numerous attempts have been made to use SL as a platform for language teaching, and its potential as a means to promote conversational interaction has been reported. However, the research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Conversely, not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on recent theories of second language acquisition, especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents act as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL, together with the implementation method used to construct the AITS, exploiting the controllability of SL. It also presents the results of an evaluation experiment conducted on the constructed AITS.
Abstract:
Ahead of the World Cup in Brazil the crucial question for the Swiss national coach is the nomination of the starting eleven central back pair. A fuzzy set Qualitative Comparative Analysis assesses the defensive performances of different Swiss central back pairs during the World Cup campaign (2011 – 2014). This analysis advises Ottmar Hitzfeld to nominate Steve von Bergen and Johan Djourou as the starting eleven central back pair. The alternative with a substantially weaker empirical validity would be Johan Djourou together with Phillippe Senderos. Furthermore, this paper aims to be a step forward in mainstream football analytics. It analyses the undervalued and understudied defense (Anderson and Sally 2012, Statsbomb 2013) by explaining collective defensive performances instead of assessments of individual player or team performances. However, a qualitatively (better defensive metrics) and quantitatively (more games) improved and extended data set would allow for a more sophisticated analysis of collective defensive performances.
Abstract:
Rhythm is a central characteristic of music and speech, the most important domains of human communication using acoustic signals. Here, we investigated how rhythmical patterns in music are processed in the human brain, and, in addition, evaluated the impact of musical training on rhythm processing. Using fMRI, we found that deviations from a rule-based regular rhythmic structure activated the left planum temporale together with Broca's area and its right-hemispheric homolog across subjects, that is, a network also crucially involved in the processing of harmonic structure in music and the syntactic analysis of language. Comparing the BOLD responses to rhythmic variations between professional jazz drummers and musical laypersons, we found that only highly trained rhythmic experts show additional activity in left-hemispheric supramarginal gyrus, a higher-order region involved in processing of linguistic syntax. This suggests an additional functional recruitment of brain areas usually dedicated to complex linguistic syntax processing for the analysis of rhythmical patterns only in professional jazz drummers, who are especially trained to use rhythmical cues for communication.
Abstract:
Microsoft Project is one of the most widely used software packages for project management. For the scheduling of resource-constrained projects, the package applies a priority-based procedure using a specific schedule-generation scheme. This procedure performs relatively poorly when compared against other software packages or state-of-the-art methods for resource-constrained project scheduling. In Microsoft Project 2010, it is possible to work with schedules that are infeasible with respect to the precedence or the resource constraints. We propose a novel schedule-generation scheme that makes use of this possibility. Under this scheme, the project tasks are scheduled sequentially while taking into account all temporal and resource constraints that a user can define within Microsoft Project. The scheme can be implemented as a priority-rule-based heuristic procedure. Our computational results for two real-world construction projects indicate that this procedure outperforms the built-in procedure of Microsoft Project.
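The general shape of a priority-rule-based serial schedule-generation scheme can be sketched for the simplest case of one renewable resource: tasks are taken in priority order and each is placed at its earliest precedence- and resource-feasible start. The task data, priority list and capacity below are invented; the proposed scheme additionally handles the full range of temporal and resource constraints definable in Microsoft Project.

```python
CAPACITY = 4  # units of the single renewable resource available per period

# task name -> (duration, resource demand, predecessors)
TASKS = {
    "A": (3, 2, []),
    "B": (2, 3, ["A"]),
    "C": (4, 2, ["A"]),
    "D": (2, 2, ["B", "C"]),
}
PRIORITY = ["A", "B", "C", "D"]  # order produced by some priority rule

def serial_sgs(tasks, priority, capacity):
    """Serial schedule-generation scheme: earliest feasible start per task."""
    start, usage = {}, {}  # usage[t] = capacity consumed in period [t, t+1)
    for name in priority:
        dur, dem, preds = tasks[name]
        # earliest precedence-feasible start
        t = max((start[p] + tasks[p][0] for p in preds), default=0)
        # delay until the resource profile accommodates the task
        while any(usage.get(t + k, 0) + dem > capacity for k in range(dur)):
            t += 1
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
        start[name] = t
    return start

schedule = serial_sgs(TASKS, PRIORITY, CAPACITY)
print(schedule)
makespan = max(t + TASKS[n][0] for n, t in schedule.items())
print(makespan)
```

Here task C must wait for B's resource usage to clear even though its predecessor A is long finished, which is exactly the interplay of precedence and resource constraints the scheme resolves.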
Abstract:
Actors with joint beliefs in a decision-making process form coalitions in order to translate their goals into policy. Yet, coalitions are not formed in an institutional void, but rather institutions confer opportunities and constraints to actors. This paper studies the institutional conditions under which either coalition structures with a dominant coalition or with competing coalitions emerge. It takes into account three conditions, i.e. the degree of federalism of a project, its degree of Europeanisation and the openness of the pre-parliamentary phase of the decision-making process. The cross-sectoral comparison includes the 11 most important decision-making processes in Switzerland between 2001 and 2006 with a fuzzy-set Qualitative Comparative Analysis. Results suggest that Europeanisation or an open pre-parliamentary phase lead to a dominant coalition, whereas only a specific combination of all three conditions is able to explain a structure with competing coalitions.
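The fuzzy-set QCA machinery behind such an analysis rests on simple set algebra. A minimal sketch of the standard consistency measure for "condition X is sufficient for outcome Y" follows; the membership scores are invented, and a full fsQCA additionally involves calibration, truth-table construction and minimization.

```python
def consistency(x, y):
    """Consistency of sufficiency: sum of min(x_i, y_i) over sum of x_i.

    x and y are fuzzy-set membership scores in [0, 1], one pair per case.
    """
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Membership of four hypothetical cases in condition X and outcome Y.
x = [0.9, 0.7, 0.8, 0.2]
y = [1.0, 0.6, 0.9, 0.4]
print(round(consistency(x, y), 3))
```

Values close to 1 indicate that cases' membership in the condition is consistently matched or exceeded by their membership in the outcome.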
Abstract:
The organisation and strategic communication of election campaigns have changed over the last decades in most Western European countries, including Switzerland. Communication research has coined the term "professionalisation" for this development and has compiled the characteristics of a "professionalised" campaign, such as commissioning external experts or addressing voters directly ("narrowcasting"). What lies behind this professionalisation, however, and how the phenomenon can be grounded theoretically rather than merely described in practice, has hardly been discussed so far. This is where the present dissertation comes in. Based on an analysis of 23 election campaigns from the cantons of Aargau, Appenzell Ausserrhoden, Bern, Neuchâtel and Zürich using fuzzy-set Qualitative Comparative Analysis (fsQCA), it concludes that, against the theoretical background of sociological neo-institutionalism, the professionalisation of election campaigns can be defined as the adaptation of campaigns to changing conditions, expectations and demands among the campaign's most important stakeholder groups or "environments" (voters, members, media, other parties). It follows that there is no single form of professionalisation; rather, each campaign is adapted to those environments where this adaptation appears most urgent to the campaign managers. Professionalisation should therefore be measured with four separate instruments, or professionalisation indices, one per environment. If professionalisation is measured, as has been customary, with a single instrument, the resulting value gives only an imprecise picture of the degree of professionalisation of the campaign and obscures which environment the professionalisation is an adaptation to.

Once it has been determined how professionalised a campaign is with respect to each of the four most relevant environments, the reasons behind each professionalisation can also be analysed more reliably. The empirical analysis of the cantonal election campaigns confirmed that different reasons do indeed lie behind professionalisation with respect to each of the four environments. Campaigns are adapted ("professionalised") with respect to addressing voters when they take place in urban contexts. Professionalising the campaign with respect to members is particularly important when competition between the parties is strong or when addressing the electorate as a whole appears unprofitable for a party. Professionalisation with respect to the media occurs when the campaign has to reach a large, regionally dispersed, or urban electorate. No meaningful conclusion can be drawn about professionalisation with respect to other parties, since only a few of the cantonal parties studied professionalised their campaigns in this respect at all, by observing their competitors' campaigns and adapting their own where necessary.
Abstract:
Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status, and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires due to self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to successfully recognize and quantify ADL without relying on self-reporting. New classifiers for such sensor data are on the rise. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers, namely the rule-based forward chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against the rules. The CAR classifier works within a framework that automatically rates routine activities to detect regularly repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated on the collected data sets for classification and recognition of the eight specific ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 while missing 109, and the CAR classifier determined 1,305 while missing 68. The RBI and CAR classifiers recognized activities with an average sensitivity of 91.27% and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly, showing that the classifier plays an important role in ADL recognition. Both the RBI and CAR classifiers performed better than the existing state of the art (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely to be better suited than the RBI for distinguishing and recognizing complex ADL.
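The forward-chaining idea behind an RBI-style classifier, matching facts derived from sensor events against rules and asserting new facts until a fixed point, can be sketched minimally. The sensor facts and rules below are invented for illustration and are not the study's actual rule base.

```python
# Each rule: (set of required facts, fact to assert when they all hold).
RULES = [
    ({"kettle_on", "kitchen_motion"}, "preparing_drink"),
    ({"preparing_drink", "fridge_open"}, "preparing_meal"),
    ({"bathroom_motion", "water_running"}, "grooming"),
]

def forward_chain(facts, rules):
    """Fire rules on known facts until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"kettle_on", "kitchen_motion", "fridge_open"}, RULES)
print("preparing_meal" in derived)  # derived by chaining through preparing_drink
```

Note how the second rule fires only because the first one asserted `preparing_drink`, which is the chaining behavior that distinguishes this engine from a flat rule lookup.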