848 results for Digital information environments
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia.
Abstract:
We present partial results of a study intended to promote a better understanding of the strategies that school-age young people (12-18 years) consider relevant for evaluating the information sources available on the Internet. To this end, a survey was distributed to a sample of 195 students from one lower secondary (3rd cycle) school and one upper secondary school in a municipality of the Porto district. We present and discuss the results concerning these students' perceptions of the criteria to apply when evaluating the credibility of information sources available on the Internet. We describe the practices the young people report regarding the use of authorship, originality, structure, currency, and comparison criteria to assess the credibility of information sources. In addition, these results are compared and discussed against the perceptions the same respondents show of the elements that make up each of these criteria. The analysis of the data is framed and supported by a literature review on the concept of credibility as applied to information sources available on the Internet. We also address some topics related to the inclusion of strategies for evaluating the credibility of digital information in the Big6 model, one of the best-known information literacy skills development models used in Portuguese school libraries.
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering.
Abstract:
We examine whether earnings manipulation around seasoned equity offerings (SEOs) is associated with an increased likelihood of a stock price crash post-issue, and test whether the enactment of securities regulations attenuates the relation between SEOs and crash risk. Empirical evidence documents that the managerial tendency to conceal bad news increases the likelihood of a stock price crash (Jin and Myers, 2006; Hutton, Marcus, and Tehranian, 2009). We test this hypothesis using a sample of firms from 29 EU countries that enacted the Market Abuse Directive (MAD). Consistent with our hypothesis, we find that equity issuers that engage in earnings management experience a significant increase in crash risk post-SEO relative to control groups of non-issuers; this effect is stronger for equity issuers with poor information environments. In addition, our findings show a significant decline in post-issue crash risk after the enactment of MAD, a decline that is stronger for firms that actively manage earnings and more pronounced in countries with high ex-ante institutional quality and enforcement. These results suggest that the implementation of MAD helps to mitigate managers' ability to manipulate earnings around SEOs.
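The abstract does not define its crash-risk measure; a proxy widely used in this literature is NCSKEW, the negative coefficient of skewness of firm-specific weekly returns. The sketch below computes it and is only an illustration of the construct, not necessarily the paper's exact specification.

```python
# Hedged sketch: NCSKEW, a standard crash-risk proxy; this may differ
# from the paper's exact construction.
import numpy as np

def ncskew(w):
    """Negative coefficient of skewness of firm-specific weekly returns w."""
    w = np.asarray(w, dtype=float) - np.mean(w)  # demean the return series
    n = len(w)
    num = n * (n - 1) ** 1.5 * np.sum(w ** 3)
    den = (n - 1) * (n - 2) * np.sum(w ** 2) ** 1.5
    return -num / den  # higher NCSKEW = more crash-prone return distribution

# Toy usage: a return series with one large negative "crash" week.
returns = np.array([0.01, 0.02, -0.01, 0.00, 0.015, -0.12, 0.01, 0.005])
print(ncskew(returns))  # positive -> left-skewed, i.e. elevated crash risk
```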
Abstract:
There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics, even though these are not necessarily deterministic and stationary. In the present study we proceed in this direction by addressing an important problem our modern society is facing: the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e., alternative renditions of a previously recorded musical piece. For this purpose we propose a recurrence quantification analysis measure that allows tracking potentially curved and disrupted traces in cross recurrence plots. We apply this measure to cross recurrence plots constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with higher accuracy than previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. As one concrete example, we study coupled Rössler dynamics with stochastically modulated mean frequencies.
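As a hedged illustration of the core construction (with toy signals and parameters standing in for the paper's audio descriptors and its specific quantification measure), a binary cross recurrence plot can be built from two time-delay-embedded series as follows:

```python
# Minimal sketch of a cross recurrence plot (CRP) between two descriptor
# time series, assuming time-delay embedding and a fixed distance threshold.
# All names and parameter values are illustrative, not those of the paper.
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D series into dim-dimensional state space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_recurrence_plot(x, y, dim=3, tau=2, eps=0.5):
    """Binary CRP: 1 where embedded states of x and y are closer than eps."""
    X, Y = embed(x, dim, tau), embed(y, dim, tau)
    # Pairwise Euclidean distances between all pairs of embedded states.
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (d < eps).astype(np.uint8)

# Toy usage: two noisy renditions of the same underlying "melody".
t = np.linspace(0, 8 * np.pi, 400)
song_a = np.sin(t) + 0.05 * np.random.randn(t.size)
song_b = np.sin(1.02 * t + 0.3) + 0.05 * np.random.randn(t.size)  # detuned cover
crp = cross_recurrence_plot(song_a, song_b)
# Extended diagonal (possibly curved) traces in `crp` indicate shared structure;
# the paper's contribution is a measure that quantifies such traces.
```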
Abstract:
Advanced kernel methods for remote sensing image classification (Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009). Recent technological developments have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images have become available to users. However, even as these advances open more and more possibilities for the use of digital imagery, they also raise new problems of storage and processing. The latter is considered in this thesis: the processing of images of very high spatial and/or spectral resolution is addressed with data-driven approaches relying on kernel methods. In particular, the problem of image classification, i.e., the categorization of an image's pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent, is studied through the models presented.
The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, so as to avoid overly complex models that practitioners would not use. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing methodological interest from the machine learning viewpoint: in this sense, the work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed were developed with this synergy in mind. Four models are proposed. First, an adaptive model that learns the relevant image features addresses the problem of the high dimensionality and collinearity of image features: by ranking the variables (the spectral bands) while the base model is optimized, only the features relevant to the problem are used by the classifier, which automatically yields an accurate classifier together with a ranking of feature relevance. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine, or use unlabeled data to increase the robustness and quality of the data description. Both solutions are explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, so far never considered in remote sensing, is addressed in the last model, which, by integrating output similarities into the model, opens new challenges and opportunities for remote sensing image processing.
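As a hedged illustration of the active learning idea behind the second model, the sketch below runs pool-based uncertainty sampling with an SVM; scikit-learn, the margin-based query rule, and all parameter values are assumptions for illustration, not the thesis's exact algorithm.

```python
# Pool-based active learning sketch: the classifier repeatedly asks the
# "oracle" (here, y_pool) to label the samples it is least certain about.
import numpy as np
from sklearn.svm import SVC

def active_learning_loop(X_pool, y_pool, n_init=20, n_rounds=10, batch=5):
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    clf = SVC(kernel="rbf", probability=True)
    for _ in range(n_rounds):
        clf.fit(X_pool[labeled], y_pool[labeled])
        proba = clf.predict_proba(X_pool[unlabeled])
        # Margin between the two most probable classes; small = uncertain.
        top2 = np.sort(proba, axis=1)
        margin = top2[:, -1] - top2[:, -2]
        query = np.argsort(margin)[:batch]
        for q in sorted(query, reverse=True):     # pop high indices first
            labeled.append(unlabeled.pop(q))      # oracle labels the sample
    return clf

# Toy usage: two Gaussian "classes" standing in for pixel feature vectors.
X = np.vstack([np.random.randn(200, 4) + 2, np.random.randn(200, 4) - 2])
y = np.array([0] * 200 + [1] * 200)
model = active_learning_loop(X, y)
```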
Abstract:
This work investigates novel alternative means of interaction in a virtual environment (VE). We analyze whether humans can remap established body functions to learn to interact with digital information in an environment that is cross-sensory by nature and uses vocal utterances in order to influence (abstract) virtual objects. We thus establish a correlation among learning, control of the interface, and the perceived sense of presence in the VE. The application enables intuitive interaction by mapping actions (the prosodic aspects of the human voice) to a certain response (i.e., visualization). A series of single-user and multiuser studies shows that users can gain control of the intuitive interface and learn to adapt to new and previously unseen tasks in VEs. Despite the abstract nature of the presented environment, presence scores were generally very high.
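The action-to-response mapping can be pictured with the following hedged sketch; treating pitch and loudness as the prosodic features and mapping them to object scale and hue are illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch of mapping vocal prosody to visual parameters of a
# hypothetical virtual object. Feature choices and mappings are illustrative.
import numpy as np

def prosody_features(frame, sr=16000):
    """Estimate loudness (RMS) and pitch (autocorrelation peak) of one frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = sr // 400, sr // 80          # search 80-400 Hz
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return rms, sr / lag

def map_to_object(rms, pitch):
    """Map prosody to hypothetical virtual-object parameters."""
    scale = 1.0 + 5.0 * rms                  # louder voice -> larger object
    hue = np.clip((pitch - 80) / 320, 0, 1)  # higher pitch -> warmer hue
    return {"scale": scale, "hue": float(hue)}

# Toy usage on a synthetic 150 Hz vowel-like frame:
sr = 16000
t = np.arange(1024) / sr
frame = 0.3 * np.sin(2 * np.pi * 150 * t)
print(map_to_object(*prosody_features(frame, sr)))
```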
Abstract:
This thesis is a qualitative study of consumer resistance to mobile commerce services. The study focuses on Western cultures, where the diffusion of this innovative service category is supported by many earlier innovations, such as the mobile phone, the Internet, and digital banking services. The study presents innovation resistance factors as a person's natural reaction to inventions that disrupt their established way of life, specifically in Western cultures where consumers have traditionally been very receptive to technology. At the same time, the fragmentation of social groups into ever smaller subgroups can be observed in the research area, which may slow down social learning. The study addresses a genuine research gap. The topic is both timely and relevant, responding to the current utopian debate on the development of the digital information society and its significance for modern humanity. The exploratory theoretical framework is built from selected theories of new product and service development, services marketing, and social learning, together with innovation and communication theories. The empirical part consists of the views of international market research institutes and interviewed experts on the development of the field. The study shows that consumers are not ready to adopt the mobile commerce services enabled by emerging technologies until those services meet consumers' basic needs and the structural resistance factors (low usability, low added value, perceived risks, tradition-based resistance, and the poor image of the service category) have been removed. The study proposes that companies operating in mobile commerce should cooperate with one another and with consumers to create mobile commerce services that are perceived as safe and that meet consumers' needs and wishes. The study further suggests using observational methods alongside surveys, so that technologies can be harnessed to match consumers' needs and consumption habits.
Abstract:
The purpose of this study is to determine the opportunities for quality development at Lappeenranta University of Technology (LTKK). The research problem is approached by reviewing the literature on the general prerequisites of quality development and on how operating in the publicly funded sector affects them. Quality development and quality management methods originating in industry are not, on their own, suited to the knowledge-intensive academic world. Knowledge management brings a new dimension to quality development in universities. Organizations must be seen as multidimensional knowledge environments with mechanical, organic, and dynamic features. Each of these knowledge environments has its own principles according to which its activities are most effectively managed, and its own criteria by which quality is determined. The study shows that LTKK's management has a positive attitude toward quality development and that LTKK has many areas whose quality can be improved. Although LTKK values innovativeness, the quality criterion of the dynamic environment, the improvement proposals almost entirely supported the quality of the organic environment, whose criterion is controlled development. The greatest challenges in quality development are perhaps recognizing the goals of the dynamic environment, changing staff attitudes, and increasing a sense of community.
Abstract:
Digital forensic analysis techniques, usually applied in criminal investigation, can also be used in libraries to access digital information stored on obsolete media or in obsolete formats. This article analyses several examples of forensic analysis departments created by libraries and describes the minimal hardware and software with which a forensic analysis unit could be set up in any library. To this end, two possible equipment configurations are presented, and recommendations are given on how to organize the workflow for recovering information from old hard drives and floppy disks.
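As a hedged illustration of one step in such a workflow, the sketch below makes a bit-level image of an attached legacy drive and records a verification hash; the device path and file names are hypothetical, and a hardware write blocker between the drive and the host is assumed.

```python
# Hypothetical sketch of the imaging step in a library forensics workflow,
# assuming a Linux host where the write-blocked device appears as /dev/sdb.
import hashlib

def image_device(device="/dev/sdb", out="legacy_drive.img", chunk=1 << 20):
    """Bit-level copy of a (write-blocked) device, hashing as we read."""
    sha = hashlib.sha256()
    with open(device, "rb") as src, open(out, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            sha.update(buf)
            dst.write(buf)
    return sha.hexdigest()  # store alongside the image for later verification

# print(image_device())  # requires root and an attached device; illustrative
```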
Abstract:
This article describes work done on the enterprise architecture of the National Digital Library. The National Digital Library is an initiative of the Finnish Ministry of Education and Culture. Its purpose is to promote the availability of the digital information resources of archives, libraries, and museums, and to develop the long-term preservation of digital cultural heritage materials. Enterprise architectures are a tool for strategic management and planning. An enterprise architecture also functions as an aid at a more practical level: it shows, for example, what kinds of changes and improvements may be made in one system without overlap or conflict with other systems.
Abstract:
This meta-analytic study sought to determine whether cross-national curricula are aligned with burgeoning digital learning environments, in order to help policy makers develop curricula that incorporate 21st-century skills instruction. The study juxtaposed cross-national curricula in Ontario (Canada), Australia, and Finland against Jenkins's (2009) framework of 11 crucial 21st-century skills: play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, and negotiation. Results from qualitative data collection and analysis revealed that Finland implements all of Jenkins's 21st-century skills. Recommendations are made for implementing sound 21st-century skills instruction in other jurisdictions.
Abstract:
The goal of this work was to develop a query processing system using software agents. The Open Agent Architecture framework is used for system development. The system supports queries in both Hindi and Malayalam, two prominent regional languages of India. Natural language processing techniques are used to extract the meaning of the plain-text query, and information from the database is returned to the user in his or her native language. The system architecture is designed in a structured way so that it can be adapted to other regional languages of India. The system can be used effectively in application areas such as e-governance, agriculture, rural health, education, national resource planning, disaster management, and information kiosks, where people from all walks of life are involved.
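To make the pipeline concrete, here is a deliberately tiny, hypothetical sketch of its stages (native-language meaning extraction, database lookup, answer construction); the two-word lexicon, schema, and rule below stand in for the system's actual Open Agent Architecture agents and NLP components.

```python
# Toy sketch of a multilingual query pipeline; all names are illustrative.
import sqlite3

KEYWORDS = {  # toy bilingual lexicon mapping surface words to schema terms
    "कीमत": "price", "വില": "price",   # Hindi / Malayalam for "price"
    "चावल": "rice",  "അരി": "rice",    # Hindi / Malayalam for "rice"
}

def parse_query(text):
    """Map known native-language words to schema terms (meaning extraction)."""
    return [KEYWORDS[w] for w in text.split() if w in KEYWORDS]

def answer(conn, text):
    terms = parse_query(text)
    if "price" in terms and "rice" in terms:
        row = conn.execute(
            "SELECT price FROM crops WHERE name = ?", ("rice",)
        ).fetchone()
        return row[0] if row else None
    return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crops (name TEXT, price REAL)")
conn.execute("INSERT INTO crops VALUES ('rice', 38.5)")
print(answer(conn, "चावल कीमत"))  # Hindi query for the price of rice -> 38.5
```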
Abstract:
Detection of objects in video is a highly demanding area of research. Background subtraction algorithms can yield good results for foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground region of interest (ROI) from the background. Codebooks store compressed information, demanding less memory and enabling high-speed processing. The hybrid method, which combines block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing of block-based background subtraction and the high precision of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors, such as the 2D-DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity, and F-measure values for the hybrid system of 88.74%, 91.09%, 81.66%, and 89.90%, respectively, demonstrating the efficiency of the novel system.
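The block-then-pixel refinement can be illustrated with the hedged sketch below; a single static background frame and plain difference tests stand in for the paper's block and pixel codebooks, and the block size and thresholds are illustrative.

```python
# Minimal sketch of the two-stage block -> pixel idea. The paper's actual
# codebook models (multiple codewords per block/pixel, 2D-DCT descriptors)
# are richer than the simple difference tests used here.
import numpy as np

def block_stage(frame, background, block=16, thresh=12.0):
    """Coarse mask: flag blocks whose mean differs from the background's."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            f = frame[y:y+block, x:x+block].mean()
            b = background[y:y+block, x:x+block].mean()
            if abs(f - b) > thresh:
                mask[y:y+block, x:x+block] = True
    return mask

def pixel_stage(frame, background, coarse, thresh=25.0):
    """Refine: keep only pixels inside coarse blocks that also differ."""
    fine = np.abs(frame.astype(float) - background.astype(float)) > thresh
    return coarse & fine

# Toy usage: a bright square "object" on a flat background.
bg = np.full((64, 64), 100.0)
fr = bg.copy()
fr[20:40, 20:40] = 180.0
mask = pixel_stage(fr, bg, block_stage(fr, bg))
print(mask.sum())  # 400 foreground pixels, the 20x20 object
```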