14 results for Contest

in CentAUR: Central Archive, University of Reading - UK


Relevance:

10.00%

Publisher:

Abstract:

The Turing Test, originally configured for a human to distinguish between an unseen man and an unseen woman through a text-based conversational measure of gender, is the ultimate test for thinking. So Alan Turing conceived it when he replaced the woman with a machine. His assertion was that once a machine deceived a human judge into believing that it was the human, that machine should be attributed intelligence. But is the Turing Test nothing more than a mindless game? We present results from recent Loebner Prizes, a platform for the Turing Test, and find that machines in the contest appeared conversationally worse rather than better from 2004 to 2006, showing a downward trend in the highest scores awarded to them by human judges. Thus the machines are not thinking in the way an intelligent human entity would.

Relevance:

10.00%

Publisher:

Abstract:

This paper uses a Foucauldian governmentality framework to analyse and interrogate the discourses and strategies adopted by the state and sections of the business community in their attempts to shape and influence emerging agendas of governance in post-devolution Scotland. Much of the work on governmentality has examined the ways in which governments have developed particular techniques, rationales and mechanisms to enable the functioning of governance programmes. This paper expands upon such analyses by also looking at the ways in which particular interests may use similar procedures, discourses and practices to promote their own agendas and develop new forms of resistance, contestation and challenge to emerging policy frameworks. Using the example of business interest mobilization in post-devolution Scotland, it is argued that governments may seek to mobilize defined forms of expertise and knowledge, linking them to wider political debates. This, however, creates new opportunities for interests to shape and contest the discourses and practices of government. The governmentalization of politics can, therefore, be seen as more of a dialectical process of definition and contestation than is often apparent in existing Foucault-inspired writing.

Relevance:

10.00%

Publisher:

Abstract:

This paper critically examines the challenges with, and impacts of, adopting the models in place for fair trade agriculture in the artisanal gold mining sector. Over the past two years, an NGO-led 'fair trade gold' movement has surfaced, its crystallization fuelled by a burgeoning body of evidence that points to impoverished artisanal miners in developing countries receiving low payments for their gold, as well as working in hazardous and unsanitary conditions. Proponents of fair trade gold contend that increased interaction between artisanal miners and Western jewellers could facilitate the former receiving fairer prices for gold, accessing support services and, ultimately, improving their quality of life. In the case of sub-Saharan Africa, however, the gold being mined on an artisanal scale does not supply Western retailers as perhaps believed; it is rather an important source of foreign exchange, which host governments employ buyers to collect for their coffers. It is maintained here that if the underlying purpose of fair trade is to improve the livelihoods and well-being of subsistence producers in developing countries, then the models that have proved so successful in alleviating the hardships of agro-producers of 'tropical' commodities such as coffee, tea, bananas and cocoa should be adapted to artisanal gold mining in sub-Saharan Africa. Campaigns promoting 'fair trade gold' in the region should view host governments, and not Western retailers, as the 'end consumer', and focus on improving governance at the grassroots, organizing informal operators into working cooperatives, and addressing complications with purchasing arrangements - all of which would go a long way toward improving the livelihoods of subsistence artisanal miners. A case study of Noyem, Ghana, the location of a sprawling illegal gold mining community, is presented, which magnifies these challenges further and provides perspective on how they can be overcome. (c) 2007 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

This article critically examines the challenges that come with implementing the Extractive Industries Transparency Initiative (EITI), a policy mechanism marketed by donors and Western governments as a key to facilitating economic improvement in resource-rich developing countries, in sub-Saharan Africa. The forces behind the EITI contend that impoverished institutions, the embezzlement of petroleum and/or mineral revenues, and a lack of transparency are the chief reasons why resource-rich sub-Saharan Africa is underperforming economically, and that implementation of the EITI, with its foundation of good governance, will help address these problems. The position here, however, is that the task is by no means straightforward: the EITI is not necessarily a blueprint for facilitating good governance in the region's resource-rich countries. It is concluded that the EITI is a policy mechanism that could prove to be effective with significant institutional change in host African countries but, on its own, it is incapable of reducing corruption and mobilizing citizens to hold government officials accountable for hoarding profits from extractive industry operations.

Relevance:

10.00%

Publisher:

Abstract:

Chatterbox Challenge is an annual web-based contest for artificial conversational entities (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing's influential disquisition 'Computing Machinery and Intelligence'. Loosely based on Turing's viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine's capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into the emotion content of the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century since Weizenbaum's natural language understanding experiment, little further than Eliza in terms of expressing emotion in dialogue. This may be a failure on the part of the academic AI community, which has ignored the Turing test as an engineering challenge.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel-paired, three-participant game presented in his 1950 paper, and the "jury-service" one-to-one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence, hosted at the University of Reading, UK, in October 2008. This involved jury-service tests in the preliminary phase and parallel-paired tests in the final phase. Design/methodology/approach – Almost 100 test results from the final have been evaluated, and this paper reports some intriguing nuances which arose as a result of the unique contest. Findings – In the 2008 competition, Turing's 30 per cent pass rate was not achieved by any machine in the parallel-paired tests, but Turing's modified prediction, "at least in a hundred years time", is remembered. Originality/value – The paper presents actual responses from "modern Elizas" to human interrogators during contest dialogues that show considerable improvement in artificial conversational entities (ACE). Unlike their ancestor – Weizenbaum's natural language understanding system – ACE are now able to recall, share information and disclose personal interests.

Relevance:

10.00%

Publisher:

Abstract:

The academic discipline of television studies has been constituted by the claim that television is worth studying because it is popular. Yet this claim has also entailed a need to defend the subject against the triviality that is associated with the television medium because of its very popularity. This article analyses the many attempts in the later twentieth and twenty-first centuries to constitute critical discourses about television as a popular medium. It focuses on how the theoretical currents of Television Studies emerged and changed in the UK, where a disciplinary identity for the subject was founded by borrowing from related disciplines, yet argued for the specificity of the medium as an object of criticism. Eschewing technological determinism, moral pathologization and sterile debates about television's supposed effects, UK writers such as Raymond Williams addressed television as an aspect of culture. Television theory in Britain has been part of, and also separate from, the disciplinary fields of media theory, literary theory and film theory. It has focused its attention on institutions, audio-visual texts, genres, authors and viewers according to the ways that research problems and theoretical inadequacies have emerged over time. But a consistent feature has been the problem of moving from a descriptive discourse to an analytical and evaluative one, and from studies of specific texts, moments and locations of television to larger theories. By discussing some historically significant critical work about television, the article considers how academic work has constructed relationships between the different kinds of objects of study. The article argues that a fundamental tension between descriptive and politically activist discourses has confused academic writing about ›the popular‹. 
Television study in Britain arose not to supply graduate professionals to the television industry, nor to perfect the instrumental techniques of allied sectors such as advertising and marketing, but to analyse and critique the medium's aesthetic forms and to evaluate its role in culture. Since television cannot be made by ›the people‹, the empowerment that discourses of television theory and analysis aimed for was focused on disseminating the tools for critique. Recent developments in factual entertainment television (in Britain and elsewhere) have greatly increased the visibility of ›the people‹ in programmes, notably in docusoaps, game shows and other participative formats. This has led to renewed debates about whether such ›popular‹ programmes appropriately represent ›the people‹ and how factual entertainment that is often despised relates to genres hitherto considered to be of high quality, such as scripted drama and socially-engaged documentary television. A further aspect of this problem of evaluation is how television globalisation has been addressed, and the example that the issue has crystallised around most is the reality TV contest Big Brother. Television theory has been largely based on studying the texts, institutions and audiences of television in the Anglophone world, and thus in specific geographical contexts. The transnational contexts of popular television have been addressed as spaces of contestation, for example between Americanisation and national or regional identities. Commentators have been ambivalent about whether the discipline's role is to celebrate or critique television, and whether to do so within a national, regional or global context. 
In the discourses of the television industry, ›popular television‹ is a quantitative and comparative measure, and because of the overlap between the programming with the largest audiences and the scheduling of established programme types at the times of day when the largest audiences are available, it has a strong relationship with genre. The measurement of audiences and the design of schedules are carried out in predominantly national contexts, but the article refers to programmes like Big Brother that have been broadcast transnationally, and programmes that have been extensively exported, to consider in what ways they too might be called popular. Strands of work in television studies have at different times attempted to diagnose what is at stake in the most popular programme types, such as reality TV, situation comedy and drama series. This has centred on questions of how aesthetic quality might be discriminated in television programmes, and how quality relates to popularity. The interaction of the designations ›popular‹ and ›quality‹ is exemplified in the ways that critical discourse has addressed US drama series that have been widely exported around the world, and the article shows how the two critical terms are both distinct and interrelated. In this context and in the article as a whole, the aim is not to arrive at a definitive meaning for ›the popular‹ inasmuch as it designates programmes or indeed the medium of television itself. Instead the aim is to show how, in historically and geographically contingent ways, these terms and ideas have been dynamically adopted and contested in order to address a multiple and changing object of analysis.

Relevance:

10.00%

Publisher:

Abstract:

Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
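The reported deception rate is a simple ratio, which can be checked quickly. Note that the count of 5 successful deceptions below is an inference: the abstract reports only the rate (8.33%) and the number of tests (60).

```python
# Quick arithmetic check of the reported deception rate. The abstract gives
# 8.33% over 60 simultaneous comparison tests; a count of 5 successful
# deceptions is an assumption inferred from those two figures.
deceptions = 5   # assumed count, not stated in the abstract
tests = 60       # simultaneous human-machine comparison tests (from the abstract)
rate = 100 * deceptions / tests
print(f"{rate:.2f}%")  # prints 8.33%
```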

Relevance:

10.00%

Publisher:

Abstract:

The South African government has endeavoured to strengthen property rights in communal areas and develop civil society institutions for community-led development and natural resource management. However, the effectiveness of this remains unclear as the emergence and operation of civil society institutions in these areas is potentially constrained by the persistence of traditional authorities. Focusing on the former Transkei region of Eastern Cape Province, three case study communities are used to examine the extent to which local institutions overlap in issues of land access and control. Within these communities, traditional leaders (chiefs and headmen) continue to exercise complete and sole authority over land allocation and use this to entrench their own positions. However, in the absence of effective state support, traditional authorities have only limited power over how land is used and in enforcing land rights, particularly over communal resources such as rangeland. This diminishes their local legitimacy and encourages some groups to contest their authority by cutting fences, ignoring collective grazing decisions and refusing to pay ‘fees’ levied on them. They are encouraged in such activities by the presence of democratically elected local civil society institutions such as ward councillors and farmers’ organisations, which have broad appeal and are increasingly responsible for much of the agrarian development that takes place, despite having no direct mandate over land. Where it occurs at all, interaction between these different institutions is generally restricted to approval being required from traditional leaders for land allocated to development projects. On this basis it is argued that a more radical approach to land reform in communal areas is required, which transfers all powers over land to elected and accountable local institutions and integrates land allocation, land management and agrarian development more effectively.

Relevance:

10.00%

Publisher:

Abstract:

Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state-of-the-art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge, as more knowledge about data uncertainties is required in order to determine robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" toward the development of useful constraints on model behaviour.

Relevance:

10.00%

Publisher:

Abstract:

Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
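The "emergent constraint" strategy mentioned above can be sketched numerically: regress a projected quantity against a present-day observable across an ensemble of models, then use the observed value of that observable to narrow the projection. All numbers below are invented for illustration; they are not taken from any model ensemble.

```python
import numpy as np

# Toy emergent-constraint sketch (all values invented for illustration).
# Each of five hypothetical models pairs a present-day observable x with a
# projected land carbon change y (Pg C).
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0])            # observable, one value per model
y = np.array([200.0, 320.0, 410.0, 500.0, 620.0])  # projection, one per model

# Across-ensemble linear relationship between observable and projection.
slope, intercept = np.polyfit(x, y, 1)

# A (hypothetical) observation of x then constrains the projection.
x_obs = 1.8
y_constrained = slope * x_obs + intercept
print(y_constrained)  # ~369.2 Pg C, versus a raw ensemble spread of 200-620
```

The point of the technique is the one the abstract argues for: replacing an unweighted average across models with an estimate anchored in observations.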

Relevance:

10.00%

Publisher:

Abstract:

In this EUDO CITIZENSHIP Forum Debate, several authors consider the interrelations between eligibility criteria for participation in an independence referendum (which may result in the creation of a new independent state) and the determination of the putative citizenship ab initio (on day one) of such a state. The kick-off contribution argues that the franchise for an independence referendum and the initial determination of the citizenry should resemble one another, critically appraising the incongruence between the franchise for the 18 September 2014 Scottish independence referendum and the blueprint for Scottish citizenship ab initio put forward by the Scottish Government in its 'Scotland's Future' White Paper. Contributors to this debate come from divergent disciplines (law, political science, sociology, philosophy). They reflect on and contest the above claims, both generally and in relation to regional settings including (in addition to Scotland) Catalonia/Spain, Flanders/Belgium, Quebec/Canada, post-Yugoslavia and Puerto Rico/USA.

Relevance:

10.00%

Publisher:

Abstract:

This article proposes an auction model in which two firms compete to obtain the license for a public project, and an auctioneer, acting as a public official representing the political power, decides the winner of the contest. The firms face a social dilemma: the higher the bribe offered, the greater the willingness of a purely money-maximizing public official to award the license. However, offering a bribe imposes a cost on all players, since the model includes an endogenous externality that depends on the bribe. All players' payoffs decrease with the bribe (and increase with higher quality). We find that the presence of bribe aversion in either the official's or the firms' utility function shifts the equilibrium towards more pro-social behaviour. When the quality and bribe-bid strategy space is discrete, multiple equilibria emerge, including more pro-social bids than would be predicted under a continuous strategy space.
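The discrete-strategy case can be illustrated with a small enumeration. The payoff functions below are invented for the sketch (the abstract does not specify functional forms): the higher bribe wins the license, ties split the prize, and the winning bribe imposes an externality on both firms.

```python
from itertools import product

BRIBES = [0, 1, 2]   # hypothetical discrete bribe-bid strategy space
PRIZE = 10           # hypothetical value of the license

def payoffs(b1, b2):
    """Payoff of each firm: expected prize minus own bribe, minus an
    endogenous externality that rises with the winning bribe."""
    externality = max(b1, b2)
    win1 = PRIZE if b1 > b2 else PRIZE / 2 if b1 == b2 else 0  # ties split
    win2 = PRIZE - win1
    return win1 - b1 - externality, win2 - b2 - externality

# Pure-strategy Nash equilibria: no firm gains by deviating unilaterally.
equilibria = [
    (b1, b2)
    for b1, b2 in product(BRIBES, repeat=2)
    if all(payoffs(d, b2)[0] <= payoffs(b1, b2)[0] for d in BRIBES)
    and all(payoffs(b1, d)[1] <= payoffs(b1, b2)[1] for d in BRIBES)
]
print(equilibria)  # [(2, 2)]: maximal bribes, though (0, 0) pays both firms more
```

Under this particular parameterisation the firms escalate to the maximal bribe even though both would earn more at zero bribes, which is the social dilemma the model describes; richer payoff forms (e.g. with bribe aversion) can support the additional, more pro-social equilibria the abstract reports.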

Relevance:

10.00%

Publisher:

Abstract:

Strategies to Reduce Emissions from Deforestation and Degradation (REDD) are being pursued in numerous developing countries. Proponents contend that REDD mechanisms could deliver sustainable development by contributing to both environmental protection and economic development, particularly in poor forest communities. However, among the challenges to REDD, and to natural resource management more generally, is the need to develop a comprehensive understanding of cross-sectoral linkages and to address how they affect the pursuit of sustainable development. Drawing on an exploratory case study of Ghana, this paper aims to outline the linkages between the forestry and minerals sectors. It is argued that contemporary debates give incommensurate attention to the reclamation of large-scale mine sites located in forest reserves, and neglect to consider the more nuanced links which characterise the forestry-mining nexus in Ghana. A review of key stakeholders further elucidates the complex networks which characterise these linkages and highlights the important role of traditional authorities in governing across sectors. If the multiple roles of local resource users and traditional authorities continue to be neglected in policy mechanisms, schemes such as REDD will continue to fall short of achieving sustainable development.