973 results for web-enabled collective intelligence
Abstract:
Copyright © Cambridge University Press 2016. In her recent book, Democratic Reason, Hélène Landemore argues that, when evaluated epistemically, “a democratic decision procedure is likely to be a better decision procedure than any non-democratic decision procedures, such as a council of experts or a benevolent dictator” (p. 3). Landemore's argument rests heavily on studies of collective intelligence done by Lu Hong and Scott Page. These studies purport to show that cognitive diversity – differences in how people solve problems – is actually more important to overall group performance than average individual ability – how smart the individual members are. Landemore's argument aims to extrapolate from these results to the conclusion that democracy is epistemically better than any non-democratic rival. I argue here that Hong and Page's results actually undermine, rather than support, this conclusion. More specifically, I argue that the results do not show that democracy is better than any non-democratic alternative, and that in fact, they suggest the opposite – that at least some non-democratic alternatives are likely to epistemically outperform democracy.
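The Hong–Page results at issue come from an agent-based model in which problem-solvers with different heuristics search a rugged landscape. The sketch below is a simplified, illustrative reimplementation of that setup, not Hong and Page's published code: the landscape size, step-size heuristics, and group sizes are our own choices. Agents take turns hill-climbing on a ring of random values, and a group of the individually best agents can be compared with a randomly drawn (cognitively diverse) group.

```python
import random
from itertools import permutations

def climb(pos, heuristic, landscape):
    """Hill-climb on a ring: try step sizes in order, move on any improvement."""
    n = len(landscape)
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % n
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
                break
    return pos

def group_climb(pos, group, landscape):
    """Agents take turns from the shared best point until no one can improve it."""
    improved = True
    while improved:
        improved = False
        for h in group:
            new = climb(pos, h, landscape)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return pos

def expected_value(group, landscape, starts):
    """Average quality of the point the group ends on, over all starting points."""
    return sum(landscape[group_climb(s, group, landscape)] for s in starts) / len(starts)

random.seed(0)
n = 200
landscape = [random.random() for _ in range(n)]
starts = range(n)

# Heuristics: ordered triples of distinct step sizes (an illustrative parameter choice).
heuristics = list(permutations(range(1, 9), 3))

# Rank agents by how well each does alone, averaged over all starting points.
ranked = sorted(heuristics, key=lambda h: expected_value([h], landscape, starts), reverse=True)

best_group = ranked[:10]                       # the ten best individual solvers
diverse_group = random.sample(heuristics, 10)  # ten randomly drawn (diverse) solvers

print("best-agent group:", round(expected_value(best_group, landscape, starts), 3))
print("random group:   ", round(expected_value(diverse_group, landscape, starts), 3))
```

Which group wins depends on the landscape and parameters; the debated claim is about when the diverse group tends to come out ahead.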
Abstract:
This paper introduces a novel, in-depth approach to analyzing the differences in writing style between two famous Romanian orators, based on automated textual complexity indices for the Romanian language. The authors considered are: (a) Mihai Eminescu, Romania’s national poet and a remarkable journalist of his time, and (b) Ion C. Brătianu, one of the most important Romanian politicians of the middle of the 19th century. The two orators share a journalistic interest: the desire to spread the word about political issues in Romania via the printing press, the most important public voice of the time. In addition, both authors exhibit distinctive writing styles, and our aim is to explore these differences through our ReaderBench framework, which computes a wide range of lexical and semantic textual complexity indices for Romanian and other languages. The corpus contains two collections of speeches, one per orator, covering the period 1857–1880. The results of this study highlight the lexical and cohesion-based textual complexity indices that best reflect the differences in writing style, notably measures relying on Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) semantic models.
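Cohesion indices of this kind can be illustrated with a toy computation. ReaderBench's cohesion measures rely on LSA/LDA semantic spaces; the sketch below substitutes a plain bag-of-words cosine between adjacent sentences (our simplification, not the framework's actual pipeline) to show what an adjacency-cohesion index measures. The two sample "speeches" are made up.

```python
import math
import re
from collections import Counter

def tf_vector(sentence):
    """Term-frequency vector of a sentence (lowercased word counts)."""
    return Counter(re.findall(r"\w+", sentence.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def adjacent_cohesion(text):
    """Average similarity between consecutive sentences: one simple cohesion index."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    vecs = [tf_vector(s) for s in sentences]
    if len(vecs) < 2:
        return 0.0
    sims = [cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]
    return sum(sims) / len(sims)

speech_a = "The nation must decide. The nation will decide its path. A path of freedom awaits."
speech_b = "Politics is hard. My cat sleeps all day. Printing presses are loud."
print(adjacent_cohesion(speech_a) > adjacent_cohesion(speech_b))  # overlap between sentences raises the index
```

A semantic model (LSA/LDA) replaces the raw word-overlap cosine with similarity in a latent space, so that related words count as cohesive even without literal repetition.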
Abstract:
Supported by digital media, distance learning has increased the possibilities of interaction, making the process of pedagogical mediation more flexible in time and space. From this perspective, access to professional education has been democratized: technical knowledge is customized in a Learning Management System (LMS) and mediated at a distance. Structured as a sequence of articles, this dissertation addresses the process of pedagogical mediation performed by on-line tutor teachers at Rede e-Tec Brasil of the Instituto Federal de Educação, Ciência e Tecnologia Sul-rio-grandense (IFSul), Campus Visconde da Graça (CAVG). In this model of education, on-line tutor teachers are hired to work for two years with all the curricular courses of a technical program. If, on the one hand, this allows them to know the reality of their students well, on the other hand it demands of them a pedagogical effort of appropriating and mediating the specific contents of the various courses that make up the curriculum of each program. The research sought to understand how on-line tutor teachers appropriate the specific knowledge of technical programs in order to mediate it pedagogically with their students. The explanatory hypothesis of this study is that it is by working and sharing experience with the teacher/researcher that on-line tutor teachers become able to appropriate curricular knowledge and then mediate it pedagogically with their students. To ground these propositions theoretically in lived experience, a network of conversation was established with the authors Humberto Maturana, Pierre Lévy, Lee Shulman, and Maurice Tardif, through the concepts of culture in networks of conversation, collective intelligence, pedagogical content knowledge, and professional teacher training. As a methodological procedure, the Collective Subject Discourse (CSD) technique, by Lefèvre and Lefèvre, provided a qualitative strategy for analyzing the recurrences found in the discourse of the on-line tutor teachers. The study shows that a recursive network of conversation between the teacher/researcher and the on-line tutor teacher enables the appropriation of the specific and technical knowledge required for the process of pedagogical mediation with students. This shared experience, on the way to constituting a collective intelligence, encourages collaborative work in the tutoring environment, helping to professionalize the process of pedagogical mediation in distance professional education at IFSul CAVG.
Abstract:
Digital social networks currently form a collective intelligence that can act as a genuine virtual editor for the media, helping to define the social present and the newsworthiness of events based on the interest they attract. The goal of this work is to understand how the newsworthiness of events is determined through the use of social networks in Ecuador and to disseminate useful tools for this purpose. To this end, a documentary study was carried out, along with a survey of 94 Ecuadorian journalists about the use of social networks in their work, especially the option of using them to define what is news and what is bigger news. Social networks and related tools can help determine whether an event is worth publishing as news. This is something that Ecuadorian media have already begun to put into practice.
Abstract:
Social networks rely on concepts such as collaboration, cooperation, replication, flow, speed, interaction, and engagement, and aim at the continuous sharing and resharing of information in support of permanent social interaction. Facebook, the largest social network in the world, reached 1.09 billion daily active users in May 2016, drawing 161.7 million hours of user attention to the website every day; these users share 4.75 billion units of content daily. The research presented in this dissertation investigates the management of knowledge and collective intelligence through the introduction of mechanisms that enable users to manage and organize the information flowing through the feeds of the Facebook groups in which they participate, turning Facebook into a device for collective knowledge and information management that goes far beyond mere interaction and communication among people. The Design Science Research methodology is adopted to instill the "genes" of collective intelligence, as presented in the literature, into the computational artifact being developed, so that intelligence can be managed and used to create even more knowledge and intelligence for and by the group. The main theoretical contribution of this dissertation is to discuss knowledge management and collective intelligence in a complementary and integrated manner, showing how efforts to obtain one also help leverage the other.
Abstract:
In the last few years, mobile wireless technology has gone through a revolutionary change. Web-enabled devices have evolved into essential tools for communication, information, and entertainment. The fifth generation (5G) of mobile communication networks is envisioned as a key enabler of the upcoming wireless revolution. Millimeter-wave (mmWave) spectrum and the evolution of Cloud Radio Access Networks (C-RANs) are two of the main technological innovations of 5G wireless systems and beyond. Given the current spectrum shortage, mmWaves have been proposed for next-generation systems, providing larger bandwidths and higher data rates. Consequently, new radio channel models are being developed. Deterministic ray-based models such as Ray Tracing (RT) have recently become more attractive thanks to their frequency agility and reliable predictions. A modern RT software tool has been calibrated and used to analyze the mmWave channel. Knowledge of the electromagnetic properties of materials is essential for this purpose; hence, an item-level electromagnetic characterization of common construction materials was carried out to obtain their complex relative permittivity. A complete tuning of the RT tool has been performed against indoor and outdoor measurement campaigns at 27 and 38 GHz, setting the basis for the future development of advanced beamforming techniques that rely on deterministic propagation models such as RT. C-RAN is a novel mobile network architecture that can address a number of challenges network operators face in meeting continuously growing customer demands. C-RANs have already been adopted in advanced 4G deployments; however, some issues remain, especially considering the bandwidth requirements set by the forthcoming 5G systems. Open RAN specifications have been proposed to overcome the new 5G challenges placed on C-RAN architectures, including synchronization aspects. This work describes an FPGA implementation of the Synchronization Plane for an O-RAN-compliant radio system.
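The role of complex relative permittivity in a ray tracer can be illustrated with a small computation: the TE-polarization Fresnel reflection coefficient at an air–material interface, which an RT tool evaluates at every wall interaction. The permittivity below is an illustrative concrete-like value, not one of the measured results from the cited campaigns.

```python
import cmath
import math

def fresnel_te(eps_r, theta_deg):
    """TE (perpendicular-polarization) Fresnel reflection coefficient at a flat
    air-material interface; eps_r is the material's complex relative
    permittivity, theta_deg the incidence angle from the surface normal."""
    theta = math.radians(theta_deg)
    root = cmath.sqrt(eps_r - math.sin(theta) ** 2)
    return (math.cos(theta) - root) / (math.cos(theta) + root)

# Hypothetical permittivity for a concrete-like material in the mmWave band
# (illustrative value only; the loss term varies with frequency).
eps_concrete = 5.31 - 0.48j

for angle in (0, 30, 60):
    gamma = fresnel_te(eps_concrete, angle)
    print(f"theta={angle:2d} deg  |Gamma|={abs(gamma):.3f}")
```

For TE polarization the reflection magnitude grows with the incidence angle, approaching total reflection at grazing incidence, which is why grazing wall bounces dominate many mmWave indoor paths.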
Abstract:
Nowadays, activities such as subscribing to an insurance policy or opening a bank account can be carried out by navigating a web page or a downloadable application. Since the user is often “hidden” behind a monitor or a smartphone, a solution is needed to guarantee their identity. Companies often require the submission of a “proof of identity”, which usually consists of a picture of the user's identity document together with a picture or a brief video of the user. This work describes a system whose purpose is to automate these kinds of verifications.
Abstract:
Abstract taken from the publication
Abstract:
Master’s Degree Dissertation
Abstract:
This master's thesis was written with the aim of exploring an inequality: an inequality in the practices surrounding the capture and exploitation of user data in the sphere of Web technologies and services, more specifically in the sphere of GIS (Geographic Information Systems). In 2014, many companies exploit their users' data to improve their services or to generate advertising revenue. On the public and governmental side, this shift has not taken place; as a result, federal and municipal governments lack the data that would allow them to improve public infrastructure and services. Cities around the world are trying to improve their services and become "smart", but lack the resources and know-how to ensure a transition that respects the privacy and wishes of their residents. How can a city create geo-referenced datasets without infringing on residents' rights? To address these questions, we conducted a comparative study of the use of OpenStreetMap (OSM) and Google Maps (GM). Through a series of interviews with GM and OSM users, we were able to understand the meanings and use values of these two platforms. An analysis mobilizing the concepts of appropriation, collective action, and various critical perspectives allowed us to examine our interview data to understand the stakes and problems behind the use of geolocation technologies, as well as those tied to user contributions to these GIS. Following this analysis, our understanding of the contribution to and use of these services was recontextualized to explore the potential means cities have of using geolocation technologies to improve their public infrastructure while respecting their citizens.
Abstract:
At a time when the limits of the traditional model of representative democracy appear ever more evident, recent literature has defended a noticeably different model: that of an epistemic democracy which, through the mechanism of inclusive deliberation, draws on a form of collective intelligence disseminated across the agents of a community. Proponents of this approach often invoke an argument found in Aristotle's Politics, III, 11. The aim of this study is to examine the legitimacy of this filiation by comparing the modern arguments for deliberative epistemic democracy with the Aristotelian text. This work qualifies the reach of appeals to Aristotle in epistemic justifications of democracy: on the one hand, the mechanism Aristotle has in mind in the text of the Politics cannot be reduced to any form of inclusive deliberation; on the other hand, the epistemic argument is restricted by his highlighting of the intrinsic limits of the democratic regime. Rather than the source of a modern position in favor of democracy, Aristotle offers an occasion to think about the concrete danger that threatens any deliberative model: namely, the confiscation of public debate by a minority of demagogues, which prevents the community from drawing on the cognitive diversity it contains.
Abstract:
This chapter presents fuzzy cognitive maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. The corresponding Web KnowARR framework incorporates findings from fuzzy logic. A first emphasis is on the Web KnowARR framework itself; a stakeholder management use case then illustrates the framework's usefulness as a second focal point. This form of management helps projects gain acceptance and assertiveness by actively involving stakeholder claims on company decisions in the management process. Stakeholder maps visually (re-)present these claims, drawing on both non-public content and content available to the public (mostly on the Web). The Semantic Web offers opportunities not only to present public content descriptively but also to show its relationships. The proposed framework can serve as the basis for the public content of stakeholder maps.
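FCM reasoning itself is simple to state: concepts hold activation levels, signed weighted edges encode influences, and the map is iterated until the activations settle. Below is a minimal sketch of one common variant (sigmoid squashing, no self-feedback); the three-concept stakeholder map is a made-up example, not one from the chapter.

```python
import math

def sigmoid(x):
    """Squashing function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(state, weights):
    """One FCM update: each concept aggregates weighted influences and squashes.
    weights[j][i] is the signed influence of concept j on concept i."""
    n = len(state)
    return [sigmoid(sum(weights[j][i] * state[j] for j in range(n))) for i in range(n)]

def fcm_run(state, weights, steps=100, tol=1e-6):
    """Iterate until the activation vector reaches a fixed point (or step limit)."""
    for _ in range(steps):
        new = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new
        state = new
    return state

# Hypothetical stakeholder map: media coverage drives public pressure,
# pressure drives the company's response, and the response dampens pressure.
concepts = ["media coverage", "public pressure", "company response"]
W = [[0.0, 0.8, 0.0],
     [0.0, 0.0, 0.7],
     [0.0, -0.4, 0.0]]

final = fcm_run([1.0, 0.0, 0.0], W)
for c, v in zip(concepts, final):
    print(f"{c}: {v:.3f}")
```

The "fuzzy" aspect lies in the graded activations and signed partial influences; other variants add self-memory terms or different squashing functions.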
Abstract:
The new stage in the evolution of the Web (the Live Web or Evented Web) puts many social data streams at the service of users, who no longer browse static web pages but interact with applications that offer them contextual, personalized experiences. Since each user is a potential source of events, a typical user is easily overwhelmed. To deal with this flood of data, multiple automation tools have emerged, ranging from simple inbox managers, social media managers, and notification aggregators to complex CRMs or smart-home hubs. Their downside is that they cannot be tailored to the needs of every single user. As a natural response to this limitation, Task Automation Services appeared on the Internet around 2012. They can be seen as a new, user-centric model of mash-up technology for combining social streams, services, and connected devices: end-users are empowered to interconnect those streams however they want, designing the automations that fit their needs. The approach has been widely adopted by users, and the number of platforms offering such services is growing fast. Being a novel field, this thesis aims to shed light on it by presenting the main characteristics of Task Automation Services, describing their components, and identifying the fundamental dimensions that define and classify them. The thesis coins the term Task Automation Service (TAS), providing a formal definition of these services and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, the thesis proposes a common model of TAS, formalized as the EWE (Evented WEb) ontology. This model makes it possible to compare channels and automations from different TASs, which has a high impact on interoperability and on the portability of user automations between platforms; given its semantic nature, it also allows automations to reason over external sources such as Linked Open Data. Based on this model, a dataset of channels and automations was built by harvesting data from existing TASs. As a further step towards a common model, an algorithm was developed to learn ontologies automatically from this dataset, aiding the discovery of new channels and reducing the maintenance cost of the model, which is updated semi-automatically. In conclusion, the main contributions of this thesis are: i) surveying the state of the art in task automation and coining the term Task Automation Service; ii) developing a semantic common model (ontology) for describing TAS components and automations; iii) populating a dataset of channels and automations, used to develop an ontology-learning algorithm; and iv) designing an agent architecture, aware of the user's context, for assisting users in creating automations.
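The rule/channel structure that the EWE ontology formalizes can be sketched in plain code. The class and property names below are illustrative stand-ins, not the ontology's actual terms: a rule links an event offered by one channel to an action offered by another.

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    """A service integrated in a TAS, exposing events it emits and actions it performs."""
    name: str
    events: set = field(default_factory=set)
    actions: set = field(default_factory=set)

@dataclass
class Rule:
    """An end-user automation: when `event` occurs on one channel,
    trigger `action` on another."""
    event_channel: Channel
    event: str
    action_channel: Channel
    action: str

    def fire(self, incoming_event):
        """Return the action to execute if the incoming event matches, else None."""
        if incoming_event == self.event:
            return f"{self.action_channel.name}.{self.action}"
        return None

# Hypothetical channels and rule, in the "if this then that" style of TAS platforms.
weather = Channel("Weather", events={"rain_forecast"})
phone = Channel("Phone", actions={"send_notification"})
rule = Rule(weather, "rain_forecast", phone, "send_notification")

print(rule.fire("rain_forecast"))  # -> Phone.send_notification
```

Expressing the same structure as RDF triples against a shared ontology is what makes rules comparable and portable across platforms, which is the point of the EWE model.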
Abstract:
UncertWeb is a European research project, running from 2010 to 2013, that will realize the uncertainty-enabled model web. The underlying assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of their data in a machine-readable form. Models taking these data as input should understand this information and propagate errors through model computations, and should quantify and communicate the errors or uncertainties generated by the models' own approximations. The project will develop the technology to realize this vision and provide demonstration case studies.
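The error-propagation idea can be illustrated with a minimal Monte Carlo sketch (the runoff model and all numbers are hypothetical, not from the project): sample the uncertain input, push every sample through the model, and report the mean and spread of the outputs.

```python
import random
import statistics

def propagate(model, mean, sd, n=10000):
    """Monte Carlo error propagation: draw samples of the uncertain input,
    run each through the model, and summarize the output distribution."""
    outputs = [model(random.gauss(mean, sd)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical downstream model: runoff as a nonlinear function of rainfall.
runoff = lambda rain: 0.6 * rain ** 1.2

random.seed(42)
m, s = propagate(runoff, mean=50.0, sd=5.0)  # rainfall 50 ± 5 (arbitrary units)
print(f"runoff = {m:.1f} +/- {s:.1f}")
```

A machine-readable uncertainty description (here, just a mean and standard deviation) is exactly what lets a downstream model apply such propagation automatically instead of treating its input as exact.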