981 results for Semantic Text Analysis
Abstract:
This dissertation explores the complex process of organizational change, applying a behavioral lens to understand change in processes, products, and search behaviors. Chapter 1 examines new practice adoption, exploring factors that predict the extent to which routines are adopted “as designed” within the organization. Using medical record data obtained from the hospital’s Electronic Health Record (EHR) system, I develop a novel measure of the “gap” between the routine “as designed” and the routine “as realized.” I link this to a survey administered to the hospital’s professional staff following the adoption of a new EHR system and find that beliefs about the expected impact of the change shape the fidelity of the adopted practice to its design. This relationship is more pronounced in care units with experienced professionals and less pronounced when the care unit includes departmental leadership. This research offers new insights into the determinants of routine change in organizations, in particular suggesting that the beliefs held by rank-and-file members of an organization are critical in new routine adoption. Chapter 2 explores changes to products, specifically examining culling behaviors in the mobile device industry. Using a panel of quarterly mobile device sales in Germany from 2004 to 2009, this chapter suggests that the organization’s response to performance feedback is conditional on the degree to which decisions are centralized. While much of the research on product exit has pointed to economic drivers or prior experience, the central finding of this chapter, that performance below aspirations decreases the rate of phase-out, suggests that firms seek local solutions when doing poorly, which is consistent with behavioral explanations of organizational action. Chapter 3 uses a novel text analysis approach to examine how the allocation of attention within organizational subunits shapes adaptation in the form of search behaviors at Motorola from 1974 to 1997.
It develops a theory linking organizational attention to search, and the results suggest that both attentional specialization and attentional coupling trade off search scope against search depth. Specifically, specialized unit attention to a narrower set of problems increases search scope but reduces search depth; increased attentional coupling likewise increases search scope at the cost of depth. This novel approach and these findings help clarify extant research on the behavioral outcomes of attention allocation, which has offered mixed results.
Abstract:
Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic e-mail exchanges. Thanks to its diagnostic value, the proposed framework can support firms’ decision-making and guide the development of compliance monitoring systems.
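The micro-level (word-use) cues can be illustrated with a minimal sketch: count cue words per 100 words of a message. The two cue lexicons below are hypothetical stand-ins, not the validated dictionaries this kind of research relies on.

```python
import re

# Hypothetical cue lexicons; deception research typically uses
# validated dictionaries (e.g. LIWC-style categories) instead.
SELF_REFERENCE = {"i", "me", "my", "mine", "myself"}
FLATTERY = {"great", "excellent", "wonderful", "impressive", "brilliant"}

def cue_rates(message: str) -> dict:
    """Return per-100-word rates of two micro-level cues."""
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return {"self_reference": 0.0, "flattery": 0.0}
    n = len(words)
    return {
        "self_reference": 100 * sum(w in SELF_REFERENCE for w in words) / n,
        "flattery": 100 * sum(w in FLATTERY for w in words) / n,
    }

rates = cue_rates("Your excellent work is impressive; the program looks great.")
```

A message dominated by flattery with little self-reference would score high on the pattern the abstract associates with deceivers.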
Abstract:
The current study builds on a previous study, which examined the degree to which the lexical properties of students’ essays could predict their vocabulary scores. We expand on this previous research by incorporating new natural language processing (NLP) indices related to both the surface and discourse levels of students’ essays. Additionally, we investigate the degree to which these NLP indices can be used to account for variance in students’ reading comprehension skills. We calculated linguistic essay features using our framework, ReaderBench, an automated text analysis tool that calculates indices related to linguistic and rhetorical features of text. University students (n = 108) produced timed (25-minute) argumentative essays, which were then analyzed by ReaderBench. Additionally, they completed the Gates-MacGinitie Vocabulary and Reading Comprehension tests. The results of this study indicated that two indices were able to account for 32.4% of the variance in vocabulary scores and 31.6% of the variance in reading comprehension scores. Follow-up analyses revealed that these models improved further when only essays containing multiple paragraphs were considered (R² = .61 and .49, respectively). Overall, the results of the current study suggest that natural language processing techniques can help inform models of individual differences among student writers.
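As a minimal sketch of what surface-level essay indices look like (generic measures only; ReaderBench's actual index set is far richer and includes discourse-level features):

```python
import re

def surface_indices(essay: str) -> dict:
    """Compute three generic surface-level indices of an essay.
    Illustrative stand-ins, not ReaderBench's actual indices."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay.lower())
    return {
        "n_words": len(words),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

indices = surface_indices("Writing matters. Good writing takes practice and time.")
```

Indices like these would then enter a regression model predicting vocabulary or reading comprehension scores.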
Abstract:
When something unfamiliar emerges, or when something familiar does something unexpected, people need to make sense of what is emerging or going on in order to act. Social representations theory suggests how individuals and society make sense of the unfamiliar and hence how the resultant social representations (SRs) cognitively, emotionally, and actively orient people and enable communication. SRs are social constructions that emerge through individual and collective engagement with media and through everyday conversations among people. Recent developments in text analysis techniques, and in particular topic modeling, provide a potentially powerful analytical method for examining the structure and content of SRs using large samples of narrative or text. In this paper I describe the methods and results of applying topic modeling to 660 micronarratives collected from Australian academics/researchers, government employees, and members of the public in 2010-2011. The narrative fragments focused on adaptation to climate change (CC) and hence provide an example of Australian society making sense of an emerging and conflict-ridden phenomenon. The results of the topic modeling reflect elements of SRs of adaptation to CC that are consistent with findings in the literature, as well as being reasonably robust predictors of classes of action in response to CC. Bayesian Network (BN) modeling was used to identify relationships among the topics (SR elements), and in particular relationships among topics, sentiment, and action. Finally, the resulting model and topic modeling results are used to highlight differences in the salience of SR elements among social groups. Linking topic modeling and BN modeling offers a new and encouraging approach to analysis for ongoing research on SRs.
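To make the topic-modeling step concrete, here is a toy collapsed-Gibbs LDA over a handful of short token lists. This is a sketch of the general technique only; the paper's corpus, software, and model settings are not reproduced here.

```python
import random
from collections import defaultdict

def toy_lda(docs, k=2, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for a toy LDA; docs are lists of tokens."""
    rng = random.Random(seed)
    vocab = {w for d in docs for w in d}
    V = len(vocab)
    z = [[rng.randrange(k) for _ in d] for d in docs]   # token-topic assignments
    ndk = [[0] * k for _ in docs]                       # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]          # topic-word counts
    nk = [0] * k                                        # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(k)]
                t = rng.choices(range(k), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    # Top words per topic summarize what each topic is "about".
    return [sorted(nkw[j], key=nkw[j].get, reverse=True)[:3] for j in range(k)]

docs = [["flood", "risk", "coast"], ["flood", "coast", "sea"],
        ["policy", "vote", "tax"], ["vote", "tax", "policy"]]
top_words = toy_lda(docs)
```

In the study, each narrative fragment's topic mixture (the analogue of `ndk` here) is what feeds the downstream Bayesian Network analysis.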
Abstract:
In this study, 110 Swedish upper secondary students use a historical database designed for research. We analyze how they perceive the use of this digital tool in teaching and whether they are able to apply historical thinking and historical empathy in their historical writing and presentations. Using a case-study methodology that includes questionnaires, observations, interviews, and text analysis, we find this to be a complex task for students. Our results highlight technological problems and problems in contextualizing historical evidence. However, students show interest in using primary sources and an ability to use historical thinking and historical empathy, especially older students in more advanced courses when they have time to reflect on the historical material.
Abstract:
Global projections for climate change impacts produce a startling picture of the future for low-lying coastal communities. The United States’ Chesapeake Bay region, and especially its marginalized and rural communities, will be severely impacted by sea level rise and other changes over the next one hundred years. The concept of resilience has been theorized as a measure of social-ecological system health and as a unifying framework under which people can work together towards climate change adaptation. But it has also been critiqued for not adequately taking into account local perspectives and experiences, bringing into question the value of this concept as a tool for local communities. We must be sure that the concerns, weaknesses, and strengths of particular local communities are part of the climate change adaptation, decision-making, and planning processes in which communities participate. An example of this type of planning process is the Deal Island Marsh and Community Project (DIMCP), a grant-funded initiative to build resilience within marsh ecosystems and communities of the Deal Island Peninsula area of Maryland (USA) to environmental and social impacts from climate change. I argue that well-developed understandings of the vulnerabilities and resiliencies identified by local residents and others are needed to accomplish this type of work. This dissertation explores vulnerability and resilience to climate change using an engaged and ethnographic anthropological perspective. Utilizing participant observation, semi-structured and structured interviews, text analysis, and cultural domain analysis, I produce an in-depth perspective on what vulnerability and resilience mean to the DIMCP stakeholder network. Findings highlight significant vulnerabilities and resiliencies inherent in the local area and how these interface with additional vulnerabilities and resiliencies seen from a nonlocal perspective.
I conclude that vulnerability and resilience are highly dynamic and context-specific for the local community. Vulnerabilities relate to climate change and other social and environmental changes. Resilience is a long-standing way of life, not a new concept related specifically to climate change. This ethnographic insight into vulnerability and resilience provides a basis for stronger engagement in collaboration and planning for the future.
Abstract:
This thesis compares the translations of Burda Style magazine from the original German into English, French, Finnish, and Hungarian, using translation scholar Christiane Nord's translation-oriented model of text analysis. I examine in particular the magazine's sewing instruction section, which forms a self-contained unit within the magazine. The aim of the study is to determine how the information in the sewing instructions has been preserved in translation and how it may have been adapted with the new recipient in mind. Burda Style is a German magazine with a long history. It is currently published in 99 countries and has been translated into 17 languages. Christiane Nord's translation-oriented model of analysis examines the extratextual and intratextual factors of a text. The model is flexible, and its points of analysis can be applied as needed. The extratextual factors are: sender, sender's intention, recipient, medium, place, time, motive, and function. The intratextual factors are: subject matter, content, presuppositions, composition, non-verbal elements, lexis, sentence structure, and suprasegmental features. Nord presents these factors very clearly in Text Analysis in Translation: Theory, Methodology, and Didactic Application of a Model for Translation-Oriented Text Analysis (1991), which is the most important work for this thesis. I also approach my material through Katharina Reiss and Hans J. Vermeer's functional model of translation analysis, according to which the primary task of translation is to enable the text to function in the new situation. The skopos of the text, i.e. its function, is the primary determinant of every translation choice. I use Reiss and Vermeer's Mitä kääntäminen on: teoriaa ja käytäntöä (1986). In the empirical part of the thesis, I analyse the sewing instructions in terms of Nord's extratextual and intratextual factors. Some factors are more relevant than others, so I examine certain factors in greater depth.
The study found that the magazines had been adapted to some extent with the new recipient in mind. Deletions and additions, among other changes, had been made to the instructions. The conventions of the text type had been well observed in the different translations, and specialist vocabulary was used to an equal extent. The Hungarian translation had undergone the greatest change in the translation process: many subsections had been removed relative to the original.
Abstract:
This thesis takes two perspectives on political institutions. On the one hand, it examines the long-run effects of institutions on cultural values. On the other, I study the strategic communication of politicians, pivotal actors inside those institutions, and its determinants. The first chapter provides evidence for the legacy of feudalism, a set of labor coercion and migration restrictions, on interpersonal distrust. I combine administrative data on the feudal system in the Prussian Empire (1816-1849) with geo-localized survey data from the German Socio-Economic Panel (1980-2020). Using OLS and mover specifications, I show that areas with strong historical exposure to feudalism have lower levels of interpersonal trust today. The second chapter builds a novel dataset that includes the Twitter handles of 18,000+ politicians and 61+ million tweets from 2008-2021 across all levels of government. I find substantial partisan differences in Twitter adoption, Twitter activity, and audience engagement. Using established tools for measuring ideological polarization, I provide evidence that online polarization follows trends similar to offline polarization, at comparable magnitude, and reaches unprecedented heights in 2018 and 2021. I develop a new tool to demonstrate a marked increase in affective polarization. The third chapter tests whether politicians disseminate distortive messages when exposed to bad news. Specifically, I study the diffusion of misleading communication from pro-gun politicians in the aftermath of mass shootings. I exploit the random timing of mass shootings and analyze half a million tweets between 2010 and 2020 in an event-study design. I develop and apply state-of-the-art text analysis tools to show that pro-gun politicians seek to decrease the salience of the mass shooting through distraction and try to alter voters’ belief formation by misrepresenting the causes of the mass shootings.
Abstract:
In the last decade, new kinds of European populist parties and movements characterized by a left-wing, right-wing, or “eclectic” attitude have succeeded in entering governments, where they could exert a direct populist influence on their coalition partners or, conversely, become victims themselves of the influence of the institutional background. This scenario led this research to formulate two questions: (i) “To what extent did populist parties succeed in influencing their government coalition partners, leading them to adopt populist rhetoric and change their policy positions?” and (ii) “Have populist parties been able to retain their populist ‘outside mainstream politics’ identity, or have they been assimilated to mainstream parties?”. As a case study, this project chose the Italian Five Star Movement. Since 2018 this eclectic populist actor has experienced three different governments, first with the radical right-wing populist League (2018-2019) and then with the mainstream center-left Democratic Party (2019-2021). In addition, the Five Star Movement is currently a coalition partner in the ongoing Draghi government. Theoretically based on the ideological definition of populism (Mudde, 2004), on a new “revised” model of the inclusionary-exclusionary framework for classifying populist parties, and on a novel definition of “populist influence”, this research made use of both quantitative (bidimensional and text analysis) and qualitative methods (semi-structured interviews) and mainly focuses on the years 2017-2020. The importance of this study is threefold. First, it contributes to the study of populist influence in government in relation to the ideological attachment of the political actors involved. Second, it contributes to understanding whether populists in power necessarily need to tone down their anti-system character in order to survive.
Third, this study introduces conceptual and methodological novelties within the study of populism and populist influence in government.
Abstract:
In order to explore the impact of a degraded semantic system on the structure of language production, we analysed transcripts from autobiographical memory interviews to identify naturally-occurring speech errors by eight patients with semantic dementia (SD) and eight age-matched normal speakers. Relative to controls, patients were significantly more likely to (a) substitute and omit open class words, (b) substitute (but not omit) closed class words, (c) substitute incorrect complex morphological forms and (d) produce semantically and/or syntactically anomalous sentences. Phonological errors were scarce in both groups. The study confirms previous evidence of SD patients’ problems with open class content words which are replaced by higher frequency, less specific terms. It presents the first evidence that SD patients have problems with closed class items and make syntactic as well as semantic speech errors, although these grammatical abnormalities are mostly subtle rather than gross. The results can be explained by the semantic deficit which disrupts the representation of a pre-verbal message, lexical retrieval and the early stages of grammatical encoding.
Abstract:
In this paper we present a novel approach to detecting meetings between people. The proposed approach works by translating people’s behaviour from trajectory information into semantic terms. Given a semantic model of meeting behaviour, event detection is then performed in the semantic domain. The model is learnt using a soft-computing clustering algorithm that combines trajectory information and motion semantic terms, and a stable representation can be obtained from a series of examples. Results obtained on a series of videos with different types of meeting situations show that the proposed approach can learn a generic model that can be effectively applied to the recognition of meeting behaviour.
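The trajectory-to-semantics idea can be sketched as follows: map the evolving distance between two tracked people onto a small set of motion terms. The term vocabulary and threshold are illustrative assumptions, and the sketch replaces the paper's soft-computing clustering with a hard rule for readability.

```python
import math

def motion_terms(traj_a, traj_b, eps=0.2):
    """Translate a pair of trajectories into semantic terms, one per step.
    traj_a and traj_b are lists of (x, y) positions sampled at the same times.
    The vocabulary below is a hypothetical stand-in for learnt motion terms."""
    dists = [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
    terms = []
    for d0, d1 in zip(dists, dists[1:]):
        if d1 < d0 - eps:
            terms.append("approaching")
        elif d1 > d0 + eps:
            terms.append("separating")
        else:
            terms.append("keeping_distance")
    return terms

# One person walks toward another, then stops next to them.
a = [(0, 0), (1, 0), (2, 0), (2, 0)]
b = [(4, 0)] * 4
terms = motion_terms(a, b)
```

A meeting model could then be defined over such term sequences, e.g. "approaching" followed by sustained "keeping_distance" at close range.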
Abstract:
In this paper we propose an innovative approach to behaviour recognition in a multi-camera environment, based on translating video activity into semantics. First, we fuse tracks from individual cameras through clustering, employing soft-computing techniques. Then, we introduce a higher-level module able to translate the fused tracks into semantic information. With this approach, we address the challenge set in PETS 2014 on recognising behaviours of interest around a parked vehicle, namely the abnormal behaviour of someone walking around the vehicle.
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision-making and to document the delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g. triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text.
This document introduces our method for transforming unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization, and reuse, and that is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, our method, and the results of an evaluation on chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
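A minimal sketch of the kind of grammar-agnostic mapping described here: pattern-match surface forms in a chief complaint against a concept lexicon, ignoring syntax entirely. The lexicon and concept codes are hypothetical, not the authors' actual system, and real systems must also handle negation ("denies chest pain"), which this sketch does not.

```python
import re

# Hypothetical mini-lexicon from triage-note surface forms to concept codes.
LEXICON = {
    r"\bsob\b|short(ness)? of breath": "C_DYSPNEA",
    r"\bchest pain\b|\bcp\b": "C_CHEST_PAIN",
    r"\bn/?v\b|nausea|vomit\w*": "C_NAUSEA_VOMITING",
}

def to_concepts(note: str) -> set:
    """Map a free-text chief complaint to concept codes, ignoring grammar.
    No negation handling: 'denies nausea' would still yield C_NAUSEA_VOMITING."""
    note = note.lower()
    return {code for pattern, code in LEXICON.items() if re.search(pattern, note)}

concepts = to_concepts("c/o chest pain and SOB x2 days")
```

Because matching is purely lexical, abbreviations and ungrammatical fragments ("SOB x2 days") map to the same codes as full sentences would.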
Abstract:
The Semantic Web aims to allow machines to make inferences using the explicit conceptualisations contained in ontologies. By pointing to ontologies, Semantic Web-based applications are able to inter-operate and share common information easily. Nevertheless, multilingual semantic applications are still rare, because most online ontologies are monolingual in English. To address this issue, techniques for ontology localisation and translation are needed. However, traditional machine translation is difficult to apply to ontologies, because ontology labels tend to be quite short and linguistically different from the free-text paradigm. In this paper, we propose an approach to enhance the machine translation of ontologies by exploiting the well-structured concept descriptions contained in the ontology. In particular, our approach leverages the semantics contained in the ontology by using Cross-Lingual Explicit Semantic Analysis (CLESA) for context-based disambiguation in phrase-based Statistical Machine Translation (SMT). The presented work is novel in that, to the best of our knowledge, CLESA has not previously been applied in SMT.
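The disambiguation step can be sketched as choosing the candidate translation whose concept vector is closest (by cosine similarity) to the concept vector of the label's ontological context. The vectors below are made-up stand-ins; CLESA derives such vectors from a cross-lingual comparable corpus such as Wikipedia.

```python
import math

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity of two sparse concept vectors (concept -> weight)."""
    num = sum(u[k] * v[k] for k in u.keys() & v.keys())
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

# Hypothetical ESA-style vectors for an ambiguous label in a finance
# ontology: the context vector comes from neighbouring concept
# descriptions, the candidates from each possible target-language sense.
context = {"finance": 0.9, "money": 0.7}
candidates = {
    "bank_financial": {"finance": 0.8, "money": 0.6, "loan": 0.5},
    "bank_river": {"river": 0.9, "water": 0.8},
}
best = max(candidates, key=lambda c: cosine(context, candidates[c]))
```

The ontology's concept descriptions thus supply the context that a short, isolated label lacks.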
Abstract:
One of the main challenges in text summarization is the detection of redundant information. This paper presents a detailed analysis of three methods for achieving this goal. The proposed methods rely on different levels of language analysis: lexical, syntactic, and semantic. Moreover, they are also analyzed for detecting relevance in texts. The results show that semantic-based methods are able to detect up to 90% of the redundancy, compared with only 19% for lexical-based ones. This is also reflected in the quality of the generated summaries: better summaries are obtained when employing syntactic- or semantic-based approaches to remove redundancy.
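The gap between lexical and semantic redundancy detection can be seen in a minimal sketch: plain word overlap misses a paraphrase that a synonym-normalized comparison catches. The mini-thesaurus below is a hypothetical stand-in for WordNet-style resources, and real semantic methods go well beyond synonym substitution.

```python
SYNONYMS = {"car": "auto", "automobile": "auto"}  # hypothetical mini-thesaurus

def overlap(s1: str, s2: str, normalize: bool = False) -> float:
    """Jaccard overlap of word sets; optionally synonym-normalized."""
    def words(s):
        ws = s.lower().split()
        return {SYNONYMS.get(w, w) for w in ws} if normalize else set(ws)
    a, b = words(s1), words(s2)
    return len(a & b) / len(a | b)

lexical = overlap("the car stopped", "the automobile stopped")
semantic = overlap("the car stopped", "the automobile stopped", normalize=True)
```

Here the lexical score is 0.5 while the normalized score is 1.0, so a threshold-based redundancy filter would discard the paraphrase only at the "semantic" level.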