13 results for Correcteurs. Mentions of correcteur

in the Queensland University of Technology - ePrints Archive


Relevance: 10.00%

Abstract:

RÉSUMÉ (translated from French). Communication disorders are generally accommodated in information retrieval systems, such as those found on the Web, through interfaces using modalities that do not involve reading and writing. Few applications exist to help users who struggle with the textual modality. We propose taking phonological awareness into account to assist users who have difficulty writing queries (dysorthography) or reading documents (dyslexia). First, a system for rewriting and interpreting the queries a user types is proposed: drawing on the causes of dysorthography and on the examples at our disposal, it emerged that a system combining an editorial approach (in the style of a spell-checker) with an oral approach (an automatic transcription system) was the more appropriate. Second, a machine learning method uses specific criteria, such as grapho-phonemic cohesion, to estimate the readability of a sentence, and then of a text. ABSTRACT. Most applications intend to help disabled users in the information retrieval process by proposing non-textual modalities. This paper introduces specific parameters linked to phonological awareness in the textual modality. These enhance the ability of systems to deal with orthographic issues and to adapt results to the reader, for example when the reader is dyslexic. We propose a phonology-based, sentence-level rewriting system that combines spelling correction, speech synthesis and automatic speech recognition; it has been evaluated on a corpus of questions collected from dyslexic children. We also propose a sentence readability measure that involves phonetic parameters such as grapho-phonemic cohesion; it has been learned on a corpus of reading times of sentences read by dyslexic children.
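The grapho-phonemic cohesion criterion lends itself to a small illustration. The scoring below is a minimal sketch under assumed definitions (phoneme-to-letter ratio as cohesion, sentence score as the mean over words); the paper's actual features and learning method are not reproduced here.

```python
def grapho_phonemic_cohesion(word, phoneme_count):
    """Toy proxy: ratio of phonemes to letters (1.0 = one letter per phoneme).
    Lower cohesion (more letters than phonemes) is assumed harder for a
    dyslexic reader, since spelling and sound diverge."""
    return phoneme_count / len(word)

def sentence_readability(words_with_phonemes):
    """Average cohesion across the words of a sentence; higher = assumed easier."""
    scores = [grapho_phonemic_cohesion(w, p) for w, p in words_with_phonemes]
    return sum(scores) / len(scores)

# French "beau" has 4 letters but a single phoneme /o/: low cohesion.
sentence = [("ami", 3), ("beau", 1)]
print(round(sentence_readability(sentence), 3))  # 0.625
```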

Relevance: 10.00%

Abstract:

This article explores an important temporal aspect of the design of strategic alliances by focusing on the issue of time bounds specification. Time bounds specification refers to a choice made by prospective alliance partners at the time of alliance formation to either pre-specify the duration of an alliance to a specific time window, or to keep the alliance open-ended (Reuer & Ariño, 2007). For instance, Das (2006) mentions the example of the alliance between Telemundo Network and Mexican Argos Comunicacion (MAC). Announced in October 2000, this alliance entailed a joint production of 1200 hours of comedy, news, drama, reality and novella programs (Das, 2006). Conditioned on the projected date of completing the 1200 hours of programs, Telemundo Network and MAC pre-specified the time bounds of the alliance ex ante. Such time-bound alliances are said to be particularly prevalent in project-based industries, like movie production, construction, telecommunications and pharmaceuticals (Schwab & Miner, 2008). In many other instances, however, firms may choose to keep their alliances open-ended, not specifying a time bound at the time of alliance formation. The choice between designing open-ended alliances that are “built to last”, versus time-bound alliances that are “meant to end”, is important. Seminal works like Axelrod (1984), Heide & Miner (1992), and Parkhe (1993) demonstrated that the choice to place temporal bounds on a collaborative venture has important implications. More specifically, collaborations that have explicit, short-term time bounds (i.e., what is termed a shorter “shadow of the future”) are more likely to experience opportunism (Axelrod, 1984), are more likely to focus on the immediate present (Bakker, Boros, Kenis & Oerlemans, 2012), and are less likely to develop trust (Parkhe, 1993) than alliances for which time bounds are kept indeterminate.
These factors, in turn, have been shown to have important implications for the performance of alliances (e.g. Kale, Singh & Perlmutter, 2000). Thus, there seems to be a strong incentive for organizations to form open-ended strategic alliances. And yet Reuer & Ariño (2007), one of the few empirical studies that detail the prevalence of time-bound and open-ended strategic alliances, found that about half (47%) of the alliances in their sample were time-bound, while the other half were open-ended. What conditions, then, determine this choice?

Relevance: 10.00%

Abstract:

This chapter attends to the legal and political geographies of one of Earth's most important, valuable, and pressured spaces: the geostationary orbit. Since the first NASA satellite entered it in 1964, this small, defined band of Outer Space, 35,786 km from the Earth's surface and only 30 km wide, has become a highly charged legal and geopolitical environment, yet it remains a space which is curiously unheard of outside of specialist circles. For the thousands of satellites which now underpin the Earth's communication, media, and data industries and flows, the geostationary orbit is the prime position in Space. The geostationary orbit only has the physical capacity to hold approximately 1500 satellites; in 1997 there were approximately 1000. It is no overstatement to assert that media, communication, and data industries would not be what they are today were it not for the geostationary orbit. This chapter provides a critical legal geography of the geostationary orbit, charting the topography of the debates and struggles to define and manage this highly important space. Drawing on key legal documents such as the Outer Space Treaty and the Moon Treaty, the chapter addresses fundamental questions about the legal geography of the orbit, questions which are of growing importance as the orbit’s available satellite spaces diminish and the orbit comes under increasing pressure. Who owns the geostationary orbit? Who, and whose rules, govern what may or may not (literally) take place within it? Who decides which satellites can occupy the orbit? Is the geostationary orbit the sovereign property of the equatorial states that lie beneath it, as these states argued in the 1970s? Or is it part of the res communis, the common property of humanity, which currently legally characterises Outer Space?
As challenges to the existing legal spatiality of the orbit emerge from launch states, companies, and potential launch states, it is particularly critical that the current spatiality of the orbit is understood and considered. One of the busiest areas of Outer Space’s spatiality is international territorial law. Mentions of Space law tend to evoke incredulity and ‘little green men’ jokes, but as Space becomes busier and busier, international Space law is growing in complexity and importance. The chapter draws on two key fields of research: cultural geography, and critical legal geography. The chapter is framed by the cultural geographical concept of ‘spatiality’, a term which signals the multiple and dynamic nature of geographical space. As spatial theorists such as Henri Lefebvre assert, a space is never simply physical; rather, any space is always a jostling composite of material, imagined, and practiced geographies (Lefebvre 1991). The ways in which a culture perceives, represents, and legislates that space are as constitutive of its identity--its spatiality--as the physical topography of the ground itself. The second field in which this chapter is situated—critical legal geography—derives from cultural geography’s focus on the cultural construction of spatiality. In his Law, Space and the Geographies of Power (1994), Nicholas Blomley asserts that analyses of territorial law largely neglect the spatial dimension of their investigations; rather than seeing the law as a force that produces specific kinds of spaces, they tend to position space as a neutral, universally-legible entity which is neatly governed by the equally neutral 'external variable' of territorial law (28). 'In the hegemonic conception of the law,' Pue similarly argues, 'the entire world is transmuted into one vast isotropic surface' (1990: 568) on which law simply acts.
But as the emerging field of critical legal geography demonstrates, law is not a neutral organiser of space, but is instead a powerful cultural technology of spatial production. Or as Delaney states, legal debates are “episodes in the social production of space” (2001, p. 494). International territorial law, in other words, makes space, and does not simply govern it. Drawing on these tenets of the field of critical legal geography, as well as on the Lefebvrian concept of multipartite spatiality, this chapter does two things. First, it extends the field of critical legal geography into Space, a domain with which the field has yet to substantially engage. Second, it demonstrates that the legal spatiality of the geostationary orbit is both complex and contested, and argues that it is crucial that we understand this dynamic legal space on which the Earth’s communications systems rely.

Relevance: 10.00%

Abstract:

The article discusses recent developments in Freedom of Information (FOI) in Queensland. It mentions the recent calls for a new FOI model, pointing to a radical departure from the old FOI template and the emergence of a significantly different FOI regime. Two of these reforms are the Right to Information Bill 2009 (RTI) and the Information Privacy Bill 2009 (IP). It also mentions the new FOI public interest test under the RTI Act.

Relevance: 10.00%

Abstract:

This paper outlines how commercial sponsorship can be conceptualized using an item and relational information framework, and supports this with empirical data. The model presented allows for predictions about consumer memory for sponsorship information, and hence has both theoretical and practical value. Data are reported which show that sponsors considered congruent with an event benefit by providing consumers with sponsor-specific item information, while sponsors considered incongruent benefit by providing sponsor-event relational information. Overall the provision of sponsor-event relational information is shown to result in superior memory to the provision of sponsor-specific item information, which is superior to basic sponsor mentions.

Relevance: 10.00%

Abstract:

The Australian e-Health Research Centre (AEHRC) recently participated in the ShARe/CLEF eHealth Evaluation Lab Task 1. The goal of this task is to identify mentions of disorders in free-text electronic health records and map those disorders to SNOMED CT concepts in the UMLS Metathesaurus. This paper details our participation in this ShARe/CLEF task. Our approaches use the clinical natural language processing tool Metamap and Conditional Random Fields (CRFs) to identify mentions of disorders and then map them to SNOMED CT concepts. Empirical results obtained on the 2013 ShARe/CLEF task highlight that our instance of Metamap (after filtering irrelevant semantic types), although achieving a high level of precision, is only able to identify a small proportion of disorders (about 21% to 28%) in free-text health records. On the other hand, the addition of the CRF models allows for a much higher recall (57% to 79%) of disorders from free-text, without appreciable detriment in precision. When evaluating the accuracy of the mapping of disorders to SNOMED CT concepts in the UMLS, we observe that the mapping obtained by our filtered instance of Metamap delivers state-of-the-art effectiveness if only the spans identified by our system are considered ('relaxed' accuracy).
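The precision/recall trade-off described above (high-precision Metamap spans, with recall boosted by CRF spans) can be sketched with a toy span-level evaluation; all spans and figures below are hypothetical, not the task's actual data.

```python
def evaluate_spans(predicted, gold):
    """Precision/recall over exact (start, end) disorder spans."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical character spans: Metamap alone finds few disorders,
# while the CRF models contribute additional (mostly correct) spans.
gold = {(0, 9), (15, 24), (30, 41), (50, 58)}
metamap = {(0, 9)}                                    # high precision, low recall
combined = metamap | {(15, 24), (30, 41), (60, 70)}   # CRF adds recall, one error

p1, r1 = evaluate_spans(metamap, gold)
p2, r2 = evaluate_spans(combined, gold)
print(p1, r1)  # 1.0 0.25
print(p2, r2)  # 0.75 0.75
```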

Relevance: 10.00%

Abstract:

Raven and Song Scope are two automated sound analysis tools based on machine learning techniques for environmental monitoring. Although much research has been conducted with them, little if any work examines their performance or compares them. This paper investigates the comparison from six aspects: theory, software interface, ease of use, detection targets, detection accuracy, and potential application. Through this exploration one critical gap is identified: there is no approach that detects both syllables and call structures, since Raven only aims to detect syllables while Song Scope targets call structures. Therefore, a Timed Probabilistic Automata (TPA) system is proposed which first separates syllables and then clusters them into complex call structures.
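The proposed two-stage idea, separating syllables first and then clustering them into larger call structures, can be sketched as follows; the energy-threshold segmentation and gap-based grouping are illustrative stand-ins, not the actual TPA formulation.

```python
def detect_syllables(energy, threshold=0.5):
    """Stage 1: contiguous frames above an energy threshold form one syllable."""
    syllables, start = [], None
    for i, e in enumerate(energy):
        if e >= threshold and start is None:
            start = i
        elif e < threshold and start is not None:
            syllables.append((start, i))
            start = None
    if start is not None:
        syllables.append((start, len(energy)))
    return syllables

def group_into_calls(syllables, max_gap=3):
    """Stage 2: syllables separated by short gaps are clustered into one call."""
    calls = []
    for syl in syllables:
        if calls and syl[0] - calls[-1][-1][1] <= max_gap:
            calls[-1].append(syl)   # close to previous syllable: same call
        else:
            calls.append([syl])     # long silence: start a new call structure
    return calls

energy = [0.1, 0.9, 0.8, 0.1, 0.7, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9]
syllables = detect_syllables(energy)
print(syllables)                          # [(1, 3), (4, 5), (9, 11)]
print(len(group_into_calls(syllables)))   # 2
```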

Relevance: 10.00%

Abstract:

Which statistic would you use if you were writing the newspaper headline for the following media release: “Tassie’s death rate of deaths arising from transport-related injuries was 13 per 100,000 people, or 50% higher than the national average”? (Martain, 2007). The rate “13 per 100,000” sounds very small, whereas “50% higher” sounds quite large. Most people are aware of the tendency to choose between reporting data as actual numbers or as percents in order to gain attention. Looking at examples like this one can help students develop a critical quantitative literacy viewpoint when dealing with “authentic contexts” (Australian Curriculum, Assessment and Reporting Authority [ACARA], 2013a, pp. 37, 67). The importance of the distinction between reporting information in raw numbers or percents is not explicitly mentioned in the Australian Curriculum: Mathematics (ACARA, 2013b, p. 42). Although the document specifically mentions making “connections between equivalent fractions, decimals and percentages” [ACMNA131] in Year 6, there is no mention of the fundamental relationship between percent and the raw numbers represented in a part-whole fashion. Such understanding, however, is fundamental to the problem solving that is the focus of the curriculum in Years 6 to 9. The purpose of this article is to raise awareness of the opportunities to distinguish between the use of raw numbers and percents when comparisons are being made in contexts other than the media. It begins with the authors’ experiences in the classroom, which motivated a search in the literature, followed by a suggestion for a follow-up activity.
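The part-whole relationship between the two statistics in the quoted headline can be made explicit with a short calculation; the population figure used to convert rates back to raw numbers is an illustrative assumption, not Tasmania's actual population.

```python
tas_rate = 13.0            # deaths per 100,000 people (from the quoted release)
national = tas_rate / 1.5  # "50% higher" means tas_rate = national * 1.5
print(round(national, 2))  # 8.67 deaths per 100,000 nationally

# The same gap expressed in raw numbers, for an assumed population of 500,000:
population = 500_000
tas_deaths = tas_rate * population / 100_000
nat_deaths = national * population / 100_000
print(round(tas_deaths - nat_deaths, 1))  # 21.7 extra deaths
```

The "50% higher" figure sounds dramatic, yet in raw numbers the same comparison amounts to roughly twenty extra deaths in a population of half a million, which is exactly the kind of contrast the article uses to build quantitative literacy.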

Relevance: 10.00%

Abstract:

In this paper, we provide an account-centric analysis of the tweeting activity of, and public response to, Pope Benedict XVI via the @pontifex Twitter account(s). We focus our investigation on the particular phase around Pope Benedict XVI’s resignation to generate insights into the use of Twitter in response to a celebrity crisis event. Through a combined qualitative and quantitative methodological approach we generate an overview of the follower-base and tweeting activity of the @pontifex account. We identify a very one-directional communication pattern (many @mentions by followers yet zero @replies from the papal account itself), which prompts us to enquire further into what the public resonance of the @pontifex account is. We also examine reactions to the resurrection of the papal Twitter account by Pope Benedict XVI’s successor. In this way, we provide a comprehensive analysis of the public response to the immediate events around the crisis event of Pope Benedict XVI’s resignation and its aftermath via the network of users involved in the @pontifex account.
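The one-directional pattern reported above (many @mentions received, zero @replies sent) can be illustrated with a minimal counting sketch; the sample tweets and the reply heuristic (a tweet beginning with a username) are assumptions for illustration.

```python
def classify_engagement(tweets, account="@pontifex"):
    """Count @mentions directed at an account versus @replies it sends.
    A tweet is a mention of the account if another author includes its name;
    a tweet the account itself starts with '@' is treated as an @reply."""
    mentions_received = sum(1 for author, text in tweets
                            if author != account and account in text)
    replies_sent = sum(1 for author, text in tweets
                       if author == account and text.startswith("@"))
    return mentions_received, replies_sent

tweets = [
    ("@alice", "Praying with @pontifex today"),
    ("@bob", "@pontifex thank you for your service"),
    ("@pontifex", "We must trust in the Lord"),  # broadcast, not a reply
]
print(classify_engagement(tweets))  # (2, 0)
```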

Relevance: 10.00%

Abstract:

Concept mapping involves determining relevant concepts from a free-text input, where concepts are defined in an external reference ontology. This is an important process that underpins many applications for clinical information reporting, derivation of phenotypic descriptions, and a number of state-of-the-art medical information retrieval methods. Concept mapping can be cast into an information retrieval (IR) problem: free-text mentions are treated as queries and concepts from a reference ontology as the documents to be indexed and retrieved. This paper presents an empirical investigation applying general-purpose IR techniques for concept mapping in the medical domain. A dataset used for evaluating medical information extraction is adapted to measure the effectiveness of the considered IR approaches. Standard IR approaches used here are contrasted with the effectiveness of two established benchmark methods specifically developed for medical concept mapping. The empirical findings show that the IR approaches are comparable with one benchmark method but well below the best benchmark.
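Casting concept mapping as retrieval, with mentions as queries and concept descriptions as documents, can be sketched with a simple token-overlap ranker; the Jaccard scoring and the mini-ontology below are illustrative assumptions, far simpler than the IR models evaluated in the paper, and the IDs are not real SNOMED CT codes.

```python
def rank_concepts(mention, concepts):
    """Treat the free-text mention as a query and each ontology concept's
    description as a document; rank concepts by Jaccard token overlap."""
    query = set(mention.lower().split())
    scored = []
    for concept_id, description in concepts.items():
        doc = set(description.lower().split())
        score = len(query & doc) / len(query | doc)  # Jaccard similarity
        scored.append((score, concept_id))
    return [cid for score, cid in sorted(scored, reverse=True) if score > 0]

# Hypothetical mini-ontology of disorder concepts.
concepts = {
    "C001": "myocardial infarction",
    "C002": "cerebral infarction",
    "C003": "fracture of femur",
}
print(rank_concepts("acute myocardial infarction", concepts))  # ['C001', 'C002']
```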

Relevance: 10.00%

Abstract:

The article presents the author's response to the article "On Being Agnostic" by James Marshall. She clarifies that her article "Normalizing Foucault? A Rhizomatic Approach to Plateaus in Anglophone Educational Research" focuses on the implications of French philosopher Michel Foucault's ideas for anglophone literature. She mentions propensities discussed in her article, including the tendencies to scientize and to template theoretical frameworks.

Relevance: 10.00%

Abstract:

The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. 
Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election, one which will provide detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
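The tweeting-style breakdown discussed above (original messages, @replies, retweets) can be sketched as a simple classifier over raw tweet text; the prefix heuristics and sample tweets are illustrative assumptions, not the study's actual analytics pipeline.

```python
def tweet_style(text):
    """Classify a tweet into one of the three styles discussed."""
    if text.startswith("RT @"):
        return "retweet"
    if text.startswith("@"):
        return "reply"
    return "original"

def style_mixture(tweets):
    """Proportion of each style in an account's stream."""
    counts = {"original": 0, "reply": 0, "retweet": 0}
    for t in tweets:
        counts[tweet_style(t)] += 1
    total = len(tweets)
    return {k: v / total for k, v in counts.items()}

# Hypothetical sample of campaign-account tweets.
sample = [
    "Four more years of progress.",
    "RT @FLOTUS: So proud tonight.",
    "@voter Thanks for your support!",
    "Watch tonight's debate live:",
]
print(style_mixture(sample))  # {'original': 0.5, 'reply': 0.25, 'retweet': 0.25}
```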