958 results for sentiment d'auto-efficacité


Relevance: 20.00%

Publisher:

Abstract:

3. "Plan d'ensemble d'une enquête sur les attitudes générales de la population allemande à l'égard de la France et leurs conséquences en ce qui concerne l'orientation des émissions en langue allemande de la radiodiffusion française" [Overall plan for a survey of the German population's general attitudes towards France and their consequences for the orientation of the German-language broadcasts of French radio], 18.05.1953. Typescript, 7 sheets; 4. "Note" on the method, research direction, and scope of the study's results, 18.05.1953; Typescript, 7 sheets; 5. "Note" on the history and activities of the Institute for Social Research, 18.05.1953; Typescript, 5 sheets; 6. Memorandum of the Institute on the procedure and results of the study, 1954 [?]; Typescript, 2 sheets; 7.-17. Décamps, Jacques: memoranda; 7. Memorandum, 12.09.1953; Typescript, 1 sheet; 8. "Memorandum re: meeting in Bad Godesberg concerning the French study, 4 September 1953", 10.09.1953. Typescript, 1 sheet; 9. "Memorandum re: plan of the 'Centre d'Etudes Sociologiques, Paris' to found a Franco-German working group for carrying out community studies", 15.06.1953. Typescript, 1 sheet; 10. "Memorandum on the visit of M. Jean L. Pelosse, Centre d'Etudes Sociologiques, Paris", 12.06.1953. Typescript, 3 sheets; 11. "Report on the 'Journées d'Etudes Européennes sur la Population', Paris, 21, 22 and 23 May 1953", 01.06.1953; 12. "Report on the state of negotiations with the French Foreign Ministry and French radio. Meetings in Paris on 27 and 28 May 1953", 01.06.1953. Typescript, 2 sheets; 13. Notes for Max Horkheimer on the handover of memoranda, project descriptions, and draft letters, May 1953; Typescript, 1 sheet; 14. "Report on the 'Institut National d'Etudes Démographiques'", 07.05.1953. Typescript, 4 sheets; 15. "Memorandum re: the group discussion method", 04.05.1953. Typescript, 1 sheet; 16. "Meeting at the 'Institut Français d'Opinion Publique, Paris' and at the High Authority, Luxembourg", 30.04.1953; 17. "Meeting at the Foreign Ministry and at French radio", 29.04.1953. Typescript, 6 sheets; 18. Horkheimer, Max: 1 letter to the French ambassador to the Federal Republic of Germany, no place, no date; Typescript, 1 sheet; 19. Radiodiffusion-Télévision Française, the Director: 1 copy of a letter to Jacques Décamps, Paris, 09.03.1954; 1 sheet; 20. Plessner, Helmuth: 1 letter to the French Foreign Minister, no place, 18.05.1953; 1 sheet; 21. Plessner, Helmuth: 1 letter to Radiodiffusion Française, no place, 18.05.1953; 1 sheet; 22. Plessner, Helmuth: 1 letter to the Ministerialrat of the "Agences et Radio" section of the French Foreign Ministry, no place, 18.05.1953; 1 sheet; "The Effectiveness of Candid versus Evasive German-Language Broadcasts of the Voice of America. Final Report", 1953. Typescript, bound, 432 sheets;

Relevance: 20.00%

Publisher:

Abstract:

In this paper we describe the specification of a model for the semantically interoperable representation of language resources for sentiment analysis. The model integrates "lemon", an RDF-based model for the specification of ontology-lexica (Buitelaar et al., 2009), which is increasingly used for the representation of language resources as Linked Data, with Marl, an RDF-based model for the representation of sentiment annotations (Westerski et al., 2011; Sánchez-Rada et al., 2013).
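The lemon-Marl integration described above can be pictured as RDF triples that attach a Marl sentiment annotation to a lemon lexical entry. The following sketch is illustrative only, not the paper's actual model: it uses plain Python tuples in place of an RDF store, and the `ex:` identifiers are invented.

```python
# Illustrative sketch: link a lemon lexical entry to a Marl sentiment
# annotation as (subject, predicate, object) triples. The lemon and Marl
# namespace URIs are the published ones; the ex: identifiers are made up.
LEMON = "http://lemon-model.net/lemon#"
MARL = "http://www.gsi.dit.upm.es/ontologies/marl/ns#"

def sentiment_entry(lemma, polarity_value):
    """Build the triples describing one lexicon entry and its polarity."""
    entry = f"ex:{lemma}"
    sense = f"ex:{lemma}_sense"
    return [
        (entry, f"{LEMON}canonicalForm", lemma),
        (entry, f"{LEMON}sense", sense),
        (sense, f"{MARL}polarityValue", polarity_value),
        (sense, f"{MARL}hasPolarity",
         f"{MARL}Positive" if polarity_value > 0 else f"{MARL}Negative"),
    ]

triples = sentiment_entry("excellent", 0.9)
```

In a real deployment these tuples would be serialised with an RDF library; the point here is only the shape of the lemon-to-Marl linkage.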

Relevance: 20.00%

Publisher:

Abstract:

This paper describes our participation in the SemEval-2014 sentiment analysis task, in both contextual and message polarity classification. Our aim was to compare two different techniques for sentiment analysis: first, a machine learning classifier built specifically for the task using the provided training corpus; second, a lexicon-based approach using natural language processing techniques, developed for a generic sentiment analysis task with no adaptation to the provided training corpus. The results, though far from the best runs, show that the generic model is more robust, as it achieves a more balanced evaluation for message polarity across the different test sets.

Relevance: 20.00%

Publisher:

Abstract:

This paper presents an approach to creating what we call a Unified Sentiment Lexicon (USL). The approach aims at aligning, unifying, and expanding the sentiment lexicons available on the web in order to increase the robustness of their coverage. One problem in automatically unifying the scores of different sentiment lexicons is that many lexical entries have a positive, negative, or neutral classification {P, Z, N} that depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified polarity strength of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1: 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated. A second problem is the high processing time required to compute all the lexical entries in the unification task, so the UnifiedMetrics procedure is implemented for both CPU and GPU: the USL approach assigns a subset of lexical entries to each of the 1,344 GPU cores and uses parallel processing to unify 155,802 lexical entries. The resulting USL contains 95,430 lexical entries, of which 35,201 are positive, 22,029 negative, and 38,200 neutral. The runtime for the 95,430 entries was 10 minutes, a threefold reduction in the computation time of UnifiedMetrics.

Relevance: 20.00%

Publisher:

Abstract:

This approach aims at aligning, unifying, and expanding the sentiment lexicons available on the web in order to increase the robustness of their coverage. A sentiment lexicon is a critical and essential resource for tagging subjective corpora on the web or elsewhere. In many situations the multilingual property of a sentiment lexicon is important, because a writer may alternate between two languages in the same text, message, or post. Our USL approach computes the unified polarity strength of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1: 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated. The UnifiedMetrics procedure implements this computation on both CPU and GPU.

Relevance: 20.00%

Publisher:

Abstract:

Sentiment analysis has recently gained popularity in the financial domain thanks to its capability to predict the stock market based on the wisdom of the crowds. Nevertheless, current sentiment indicators are still silos that cannot be combined to gain better insight into the mood of different communities. In this article we propose a Linked Data approach to modelling sentiment and emotions about financial entities. It aims at integrating sentiment information from different communities or providers, and complements existing initiatives such as FIBO. The approach has been validated through the semantic annotation of tweets about several stocks on the Spanish stock market, including their sentiment information.
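The kind of annotation the article describes can be sketched as JSON-LD using the Marl vocabulary. This is a hedged illustration, not the paper's actual schema: the `ex:` identifiers and the choice of stock are invented, and only the Marl namespace URI is the published one.

```python
# Illustrative JSON-LD: a tweet's sentiment about a stock, expressed
# with the Marl vocabulary. Identifiers under ex: are made up.
import json

annotation = {
    "@context": {"marl": "http://www.gsi.dit.upm.es/ontologies/marl/ns#"},
    "@id": "ex:tweet-42-opinion",
    "@type": "marl:Opinion",
    "marl:describesObject": "ex:stock/TEF",  # e.g. a Madrid-listed stock
    "marl:polarityValue": 0.65,
    "marl:hasPolarity": "marl:Positive",
}
doc = json.dumps(annotation, indent=2)
```

Because the payload is plain JSON-LD, annotations from different providers that share the Marl context can be merged into one graph, which is the integration point the article argues for.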

Relevance: 20.00%

Publisher:

Abstract:

We present a methodology for legacy language resource adaptation that generates domain-specific sentiment lexicons organised around domain entities, which are described with lexical information and with sentiment words in the context of these entities. We explain the steps of the methodology and give a working example of our initial results. The resulting lexicons are modelled as Linked Data resources using established formats for Linguistic Linked Data (lemon, NIF) and for linked sentiment expressions (Marl), thereby contributing and linking to existing Language Resources in the Linguistic Linked Open Data cloud.

Relevance: 20.00%

Publisher:

Abstract:

Sentiment and Emotion Analysis strongly depend on quality language resources, especially sentiment dictionaries. These resources are usually scattered, heterogeneous, and limited to specific domains of application by simple algorithms. The EUROSENTIMENT project addresses these issues by 1) developing a common language resource representation model for sentiment analysis, and APIs for sentiment analysis services, based on established Linked Data formats (lemon, Marl, NIF and ONYX), and 2) creating a Language Resource Pool (LRP) that makes existing scattered language resources and services for sentiment analysis available to the community in an interoperable way. In this paper we describe the language resources and services available in the LRP and some sample applications that can be developed on top of the EUROSENTIMENT LRP.

Relevance: 20.00%

Publisher:

Abstract:

In this paper we present a dataset composed of domain-specific sentiment lexicons in six languages for two domains. We used existing collections of reviews from Trip Advisor, Amazon, the Stanford Network Analysis Project, and the OpinRank Review Dataset. We use an RDF model based on the lemon and Marl formats to represent the lexicons. We describe the methodology applied to generate the domain-specific lexicons and provide access information for our datasets.

Relevance: 20.00%

Publisher:

Abstract:

This thesis is the result of a project whose objective has been to develop and deploy a dashboard for sentiment analysis of football on Twitter, based on web components and D3.js. To this end, a visualisation server has been developed to present the data obtained from Twitter and analysed with Senpy. This server has been built with Polymer web components and D3.js. Data mining has been done with a pipeline between Twitter, Senpy, and ElasticSearch. Luigi has been used in this process because it helps build complex pipelines of batch jobs, so it has analysed all tweets and stored them in ElasticSearch. D3.js has then been used to create interactive widgets that make the data easily accessible; these widgets allow users to interact with them and filter the data most interesting to them. Polymer web components have been used to build the dashboard according to Google's Material Design and to show dynamic data in the widgets. As a result, the project allows an extensive analysis of the social network, pointing out the influence of players and teams and the emotions and sentiments that emerge over a period of time.
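The Twitter-to-Senpy-to-ElasticSearch pipeline above is orchestrated with Luigi in the project; the following dependency-free stand-in mimics that staged data flow so it is easy to follow. All stage names, payloads, and the toy polarity rule are invented; the real system calls the Senpy service and bulk-indexes into ElasticSearch.

```python
# Dependency-free stand-in for the thesis pipeline: fetch -> analyse ->
# index, with each stage consuming the previous stage's output, as a
# Luigi task graph would. Data and the polarity rule are toy examples.
def fetch_tweets():
    """Stand-in for the Twitter collection stage."""
    return [{"id": 1, "text": "great match!"}, {"id": 2, "text": "awful game"}]

def analyse(tweets):
    """Stand-in for the call to the Senpy sentiment service."""
    def polarity(text):
        return "positive" if "great" in text else "negative"
    return [dict(t, sentiment=polarity(t["text"])) for t in tweets]

def index(analysed, store):
    """Stand-in for bulk-indexing documents into ElasticSearch."""
    for doc in analysed:
        store[doc["id"]] = doc
    return store

store = index(analyse(fetch_tweets()), {})
```

In Luigi each stage would be a `Task` whose `requires()` points at the previous one, so failed batches can be re-run without recomputing earlier stages.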

Relevance: 20.00%

Publisher:

Abstract:

Preliminary research demonstrated the relevance of the EmotiBlog annotated corpus as a Machine Learning resource for detecting subjective data. In this paper we compare EmotiBlog with the JRC Quotes corpus in order to check the robustness of its annotation. We concentrate on its coarse-grained labels and carry out extensive Machine Learning experimentation, also including lexical resources. The results are similar to those obtained with the JRC Quotes corpus, demonstrating EmotiBlog's validity as a resource for the SA task.

Relevance: 20.00%

Publisher:

Abstract:

EmotiBlog is a corpus labelled with the homonymous annotation schema, designed for detecting subjectivity in the new textual genres. Preliminary research demonstrated its relevance as a Machine Learning resource for detecting opinionated data. In this paper we compare EmotiBlog with the JRC corpus in order to check the robustness of EmotiBlog's annotation. For this research we concentrate on its coarse-grained labels. We carry out extensive ML experimentation, also including lexical resources. The results are similar to those obtained with the JRC corpus, demonstrating EmotiBlog's validity as a resource for the SA task.
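A coarse-grained experiment of the kind described above can be sketched as a toy bag-of-words classifier trained on labelled sentences. The four-sentence corpus below is invented; EmotiBlog itself is annotated with a much richer schema, and the papers use full ML pipelines rather than this simple word-count scoring.

```python
# Toy coarse-grained subjective/objective classifier: count which label's
# training vocabulary overlaps most with the input. Corpus is invented.
from collections import Counter

train = [("what a wonderful honest speech", "subjective"),
         ("the report was published on monday", "objective"),
         ("i loved the passionate wonderful tone", "subjective"),
         ("the meeting starts at nine", "objective")]

counts = {"subjective": Counter(), "objective": Counter()}
for text, lab in train:
    counts[lab].update(text.split())

def classify(text):
    scores = {lab: sum(c[w] for w in text.split())
              for lab, c in counts.items()}
    return max(scores, key=scores.get)

label = classify("a wonderful honest meeting")
```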

Relevance: 20.00%

Publisher:

Abstract:

Paper presented at the IV Jornadas TIMM, Torres (Jaén), 7-8 April 2011.

Relevance: 20.00%

Publisher:

Abstract:

In the chemical textile domain, experts have to analyse chemical components and substances that might be harmful for use in clothing and textiles. Part of this analysis is performed by searching for opinions and reports that people have expressed concerning these products on the Social Web. However, this type of information is not as frequent on the Internet for this domain as for others, so its detection and classification is difficult and time-consuming. Consequently, problems associated with the use of chemical substances in textiles may not be detected early enough and could lead to health problems, such as allergies or burns. In this paper, we propose a framework able to detect, retrieve, and classify subjective sentences related to the chemical textile domain, which could be integrated into a wider health surveillance system. We also describe the creation of several datasets with opinions from this domain, the experiments performed using machine learning techniques and different lexical resources such as WordNet, and the evaluation focusing on sentiment classification and complaint detection (i.e., negativity). Despite the challenges involved in this domain, our approach obtains promising results, with an F-score of 65% for polarity classification and 82% for complaint detection.
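The evaluation figures above are F-scores, the harmonic mean of precision and recall. The sketch below shows the metric on an invented confusion outcome for the complaint-detection setting; the counts are illustrative, not the paper's data.

```python
# F-score from confusion counts: tp = true positives, fp = false
# positives, fn = false negatives. The example counts are invented.
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 82 true complaints found, 18 false alarms, 18 complaints missed:
# precision = recall = 0.82, so F = 0.82, matching the 82% figure's scale.
f = f_score(tp=82, fp=18, fn=18)
```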