Abstract:
D. Manuel Lora Tamayo, Minister of National Education, answered a round of questions at a press conference broadcast in a Televisión Española programme on the night of 17 September 1964. In his statements, the Minister stressed the effort the Government was making to achieve compulsory schooling up to the age of fourteen and a free Bachillerato Elemental; the possibility of fostering private education, since the State had no realistic capacity to cover all educational needs; the establishment of televised Bachillerato courses alongside the radio-based ones that already existed, and support for transport for pupils who had to travel to neighbouring towns to attend the Instituto; the doubling of the budget allocation for the teachers needed; the provision of books attached to schools with funds from the Patronato de Igualdad de Oportunidades; and the creation of new Institutos, growth in private education, and the Facultad de Letras de Sevilla.
Abstract:
Account of the study trip made by a group of Chilean students who toured several Spanish provinces. Most of the students were studying Medicine, and one of their accompanying professors, Dr. Hermosilla, gave an interview to a periodical publication about the trip. He explained that the group intended to travel around Spain for a month and a half, and commented on the Chileans' impressions of the country, on how universities are organised in Chile and the differences with Spain, on Spanish influences in Chilean studies, and so on. To thank the Chilean students for their visit to Spain, the Minister of National Education, Mr. Ibáñez Martín, delivered a speech live at the microphones of Radio Nacional in a special session for broadcast in Latin America.
Abstract:
Transcript of the interview given by the Minister of National Education, D. José Ibáñez Martín, on the occasion of the anniversary of the national uprising of 18 July. The Minister addressed what 18 July had meant for Spanish culture; the principles on which education policy had been based during the preceding years of the regime; the needs of Spanish education at that time; and whether there was a disproportion between the budgetary means and the outstanding needs of Spanish education policy.
Abstract:
The study set out to examine the situation of school libraries in the state schools of Salamanca; to identify the services these libraries were providing to the educational community; to document their actual situation in terms of infrastructure, organisation and staffing; and, from the analysis of the data collected, to propose alternatives for strengthening their role. The research plan originally covered all the state schools in the city (31 centres), but two schools were excluded because their particular characteristics made completing the questionnaires difficult, and another school did not answer the questionnaire, leaving a field of study of 28 centres. The field study was designed in three phases: design and preparation of the questionnaire and interview templates, together with the arrangements and correspondence needed to work in the schools; conducting the interviews and collecting the information; and systematising the data collected, analysing the situation and drawing conclusions. The data-collection instruments were the interview and the questionnaire. Questionnaires were sent to the centres (gathering information on infrastructure, organisation and staffing) and an interview was then arranged (with the head teacher or the person in charge of the library) to clarify doubts and to complete the questionnaire. The data were analysed quantitatively: the analysis is based on counting the responses and presenting them as frequencies and percentages, after which the conclusions were drawn. The situation of school libraries in the state schools of Salamanca is rather deficient. They are regarded as a means of transmitting literary culture rather than as a resource for building learning, and they occupy an insignificant place in everyday school work. There are shortcomings in infrastructure, organisation and staffing. Systematising the information has shown that use of the school library in pupils' learning is very scarce or practically nil, and that the problems affecting its functioning lie in the fact that teachers have not been given adequate training, nor is time set aside in their timetables to organise and run the library.
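The count-based analysis described in this abstract (tallying questionnaire responses and reporting them as frequencies and percentages) can be illustrated with a minimal sketch; the questionnaire item and the answers below are hypothetical, not data from the study.

```python
from collections import Counter

# Hypothetical responses from the participating schools to a single
# questionnaire item; the categories and counts are invented, purely
# to illustrate the frequency-and-percentage tabulation.
responses = ["yes", "no", "yes", "shared room", "no", "yes", "no", "no"]

counts = Counter(responses)            # frequency of each answer category
total = sum(counts.values())

for answer, freq in counts.most_common():
    pct = 100 * freq / total           # percentage over all answers received
    print(f"{answer:>12}: {freq:2d} ({pct:.1f}%)")
```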
Abstract:
Abstract based on that of the publication.
Abstract:
The Turing Test, originally configured for a human to distinguish between an unseen man and an unseen woman through a text-based conversational measure of gender, is the ultimate test for thinking. So Alan Turing conceived it when he replaced the woman with a machine, asserting that once a machine deceived a human judge into believing it was the human, that machine should be attributed with intelligence. But is the Turing Test nothing more than a mindless game? We present results from recent Loebner Prizes, a platform for the Turing Test, and find that machines in the contest appear conversationally worse rather than better from 2004 to 2006, showing a downward trend in the highest scores awarded to them by human judges. Thus the machines are not thinking in the same way as an intelligent human entity would.
Abstract:
This paper uses a Foucauldian governmentality framework to analyse and interrogate the discourses and strategies adopted by the state and sections of the business community in their attempts to shape and influence emerging agendas of governance in post-devolution Scotland. Much of the work on governmentality has examined the ways in which governments have developed particular techniques, rationales and mechanisms to enable the functioning of governance programmes. This paper expands upon such analyses by also looking at the ways in which particular interests may use similar procedures, discourses and practices to promote their own agendas and develop new forms of resistance, contestation and challenge to emerging policy frameworks. Using the example of business interest mobilization in post-devolution Scotland, it is argued that governments may seek to mobilize defined forms of expertise and knowledge, linking them to wider political debates. This, however, creates new opportunities for interests to shape and contest the discourses and practices of government. The governmentalization of politics can, therefore, be seen as more of a dialectical process of definition and contestation than is often apparent in existing Foucault-inspired writing.
Abstract:
This paper critically examines the challenges with, and impacts of, adopting the models in place for fair trade agriculture in the artisanal gold mining sector. Over the past two years, an NGO-led 'fair trade gold' movement has surfaced, its crystallization fuelled by a burgeoning body of evidence that points to impoverished artisanal miners in developing countries receiving low payments for their gold, as well as working in hazardous and unsanitary conditions. Proponents of fair trade gold contend that increased interaction between artisanal miners and Western jewellers could facilitate the former receiving fairer prices for gold, accessing support services, and ultimately, improving their quality of life. In the case of sub-Saharan Africa, however, the gold being mined on an artisanal scale does not supply Western retailers as perhaps believed; it is rather an important source of foreign exchange, which host governments employ buyers to collect for their coffers. It is maintained here that if the underlying purpose of fair trade is to improve the livelihoods and well-being of subsistence producers in developing countries, then the models that have proved so successful in alleviating the hardships of agro-producers of 'tropical' commodities such as coffee, tea, bananas and cocoa, should be adapted to artisanal gold mining in sub-Saharan Africa. Campaigns promoting 'fair trade gold' in the region should view host governments, and not Western retailers, as the 'end consumer', and focus on improving governance at the grassroots, organizing informal operators into working cooperatives, and addressing complications with purchasing arrangements - all of which would go a long way toward improving the livelihoods of subsistence artisanal miners. A case study of Noyem, Ghana, the location of a sprawling illegal gold mining community, is presented, which magnifies these challenges further and provides perspective on how they can be overcome. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This article critically examines the challenges that come with implementing the Extractive Industries Transparency Initiative (EITI), a policy mechanism marketed by donors and Western governments as a key to facilitating economic improvement in resource-rich developing countries, in sub-Saharan Africa. The forces behind the EITI contend that impoverished institutions, the embezzlement of petroleum and/or mineral revenues, and a lack of transparency are the chief reasons why resource-rich sub-Saharan Africa is underperforming economically, and that implementation of the EITI, with its foundation of good governance, will help address these problems. The position here, however, is that the task is by no means straightforward: that the EITI is not necessarily a blueprint for facilitating good governance in the region's resource-rich countries. It is concluded that the EITI is a policy mechanism that could prove to be effective with significant institutional change in host African countries but, on its own, it is incapable of reducing corruption and mobilizing citizens to hold government officials accountable for hoarding profits from extractive industry operations.
Abstract:
Chatterbox Challenge is an annual web-based contest for artificial conversational systems (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing's influential disquisition 'Computing Machinery and Intelligence'. Loosely based on Turing's viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine's capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into emotion content in the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century since Weizenbaum's natural language understanding experiment, little further than Eliza in terms of expressing emotion in dialogue. This may be a failure on the part of the academic AI community for ignoring the Turing test as an engineering challenge.
Abstract:
Purpose – The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel-paired, three-participant game presented in his 1950 paper, and the “jury-service” one-to-one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence hosted at the University of Reading, UK, in October 2008. This involved jury-service tests in the preliminary phase and parallel-paired tests in the final phase. Design/methodology/approach – Almost 100 test results from the final have been evaluated and this paper reports some intriguing nuances which arose as a result of the unique contest. Findings – In the 2008 competition, Turing's 30 per cent pass rate is not achieved by any machine in the parallel-paired tests, but Turing's modified prediction, “at least in a hundred years time”, is remembered. Originality/value – The paper presents actual responses from “modern Elizas” to human interrogators during contest dialogues that show considerable improvement in artificial conversational entities (ACE). Unlike their ancestor – Weizenbaum's natural language understanding system – ACE are now able to recall, share information and disclose personal interests.
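As a rough illustration of the 30 per cent criterion cited in this abstract, the sketch below checks per-machine tallies against Turing's pass rate; the machine names and counts are invented, not results from the 2008 contest.

```python
# Hypothetical tallies: (times the interrogator judged the machine to be
# the human, total parallel-paired tests). These numbers are invented.
results = {"machine_A": (2, 24), "machine_B": (8, 24)}

TURING_PASS_RATE = 0.30  # Turing's 30 per cent criterion

for name, (deceived, tests) in results.items():
    rate = deceived / tests
    verdict = "passes" if rate >= TURING_PASS_RATE else "does not pass"
    print(f"{name}: {rate:.0%} -> {verdict}")
```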
Abstract:
The academic discipline of television studies has been constituted by the claim that television is worth studying because it is popular. Yet this claim has also entailed a need to defend the subject against the triviality that is associated with the television medium because of its very popularity. This article analyses the many attempts in the later twentieth and twenty-first centuries to constitute critical discourses about television as a popular medium. It focuses on how the theoretical currents of Television Studies emerged and changed in the UK, where a disciplinary identity for the subject was founded by borrowing from related disciplines, yet argued for the specificity of the medium as an object of criticism. Eschewing technological determinism, moral pathologization and sterile debates about television's supposed effects, UK writers such as Raymond Williams addressed television as an aspect of culture. Television theory in Britain has been part of, and also separate from, the disciplinary fields of media theory, literary theory and film theory. It has focused its attention on institutions, audio-visual texts, genres, authors and viewers according to the ways that research problems and theoretical inadequacies have emerged over time. But a consistent feature has been the problem of moving from a descriptive discourse to an analytical and evaluative one, and from studies of specific texts, moments and locations of television to larger theories. By discussing some historically significant critical work about television, the article considers how academic work has constructed relationships between the different kinds of objects of study. The article argues that a fundamental tension between descriptive and politically activist discourses has confused academic writing about ›the popular‹. Television study in Britain arose not to supply graduate professionals to the television industry, nor to perfect the instrumental techniques of allied sectors such as advertising and marketing, but to analyse and critique the medium's aesthetic forms and to evaluate its role in culture. Since television cannot be made by ›the people‹, the empowerment that discourses of television theory and analysis aimed for was focused on disseminating the tools for critique. Recent developments in factual entertainment television (in Britain and elsewhere) have greatly increased the visibility of ›the people‹ in programmes, notably in docusoaps, game shows and other participative formats. This has led to renewed debates about whether such ›popular‹ programmes appropriately represent ›the people‹ and how factual entertainment that is often despised relates to genres hitherto considered to be of high quality, such as scripted drama and socially-engaged documentary television. A further aspect of this problem of evaluation is how television globalisation has been addressed, and the example that the issue has crystallised around most is the reality TV contest Big Brother. Television theory has been largely based on studying the texts, institutions and audiences of television in the Anglophone world, and thus in specific geographical contexts. The transnational contexts of popular television have been addressed as spaces of contestation, for example between Americanisation and national or regional identities. Commentators have been ambivalent about whether the discipline's role is to celebrate or critique television, and whether to do so within a national, regional or global context. 
In the discourses of the television industry, ›popular television‹ is a quantitative and comparative measure, and because of the overlap between the programming with the largest audiences and the scheduling of established programme types at the times of day when the largest audiences are available, it has a strong relationship with genre. The measurement of audiences and the design of schedules are carried out in predominantly national contexts, but the article refers to programmes like Big Brother that have been broadcast transnationally, and programmes that have been extensively exported, to consider in what ways they too might be called popular. Strands of work in television studies have at different times attempted to diagnose what is at stake in the most popular programme types, such as reality TV, situation comedy and drama series. This has centred on questions of how aesthetic quality might be discriminated in television programmes, and how quality relates to popularity. The interaction of the designations ›popular‹ and ›quality‹ is exemplified in the ways that critical discourse has addressed US drama series that have been widely exported around the world, and the article shows how the two critical terms are both distinct and interrelated. In this context and in the article as a whole, the aim is not to arrive at a definitive meaning for ›the popular‹ inasmuch as it designates programmes or indeed the medium of television itself. Instead the aim is to show how, in historically and geographically contingent ways, these terms and ideas have been dynamically adopted and contested in order to address a multiple and changing object of analysis.
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 Reviewers succumbed to hidden interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
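Read literally, the reported 8.33% deception rate over 60 simultaneous comparison tests corresponds to 5 successful deceptions (5 / 60 ≈ 0.0833). A one-line check, assuming the rate is simply deceptions divided by tests:

```python
tests = 60
deceptions = 5                       # 8.33% of 60, under the simple reading above
print(f"{deceptions / tests:.2%}")   # -> 8.33%
```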
Abstract:
The South African government has endeavoured to strengthen property rights in communal areas and develop civil society institutions for community-led development and natural resource management. However, the effectiveness of this remains unclear as the emergence and operation of civil society institutions in these areas is potentially constrained by the persistence of traditional authorities. Focusing on the former Transkei region of Eastern Cape Province, three case study communities are used to examine the extent to which local institutions overlap in issues of land access and control. Within these communities, traditional leaders (chiefs and headmen) continue to exercise complete and sole authority over land allocation and use this to entrench their own positions. However, in the absence of effective state support, traditional authorities have only limited power over how land is used and in enforcing land rights, particularly over communal resources such as rangeland. This diminishes their local legitimacy and encourages some groups to contest their authority by cutting fences, ignoring collective grazing decisions and refusing to pay ‘fees’ levied on them. They are encouraged in such activities by the presence of democratically elected local civil society institutions such as ward councillors and farmers’ organisations, which have broad appeal and are increasingly responsible for much of the agrarian development that takes place, despite having no direct mandate over land. Where it occurs at all, interaction between these different institutions is generally restricted to approval being required from traditional leaders for land allocated to development projects. On this basis it is argued that a more radical approach to land reform in communal areas is required, which transfers all powers over land to elected and accountable local institutions and integrates land allocation, land management and agrarian development more effectively.
Abstract:
Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state-of-the-art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge, as more knowledge about data uncertainties is required in order to determine robust evaluation methodologies that move the field of ESM evaluation from "beauty contest" toward the development of useful constraints on model behaviour.
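As a minimal sketch of the metric-based evaluation discussed in this abstract, the snippet below computes an RMSE and mean bias between simulated and observed land carbon stores; the arrays are placeholders, and a real evaluation would use benchmark datasets and account for observational uncertainty.

```python
import numpy as np

# Placeholder values (Pg C) for simulated and observed land carbon stores
# in a few regions; not taken from any model or dataset.
simulated = np.array([520.0, 610.0, 480.0, 570.0])
observed = np.array([500.0, 590.0, 510.0, 555.0])

rmse = np.sqrt(np.mean((simulated - observed) ** 2))  # root-mean-square error
bias = np.mean(simulated - observed)                  # mean model-minus-obs bias
print(f"RMSE: {rmse:.1f} Pg C, mean bias: {bias:+.1f} Pg C")
```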