970 results for Information sciences


Relevance:

60.00%

Publisher:

Abstract:

Memory and knowledge institutions (for example, libraries, archives and museums) face significant challenges in their responsibility to ensure the long-term preservation of documentary heritage in the digital age. These include the overabundance of digital information, the theoretically unlimited capacity for production available to individuals and social groups alike, and the limits on the storage and dissemination capacities for digital information available to the institutions entrusted with documentary heritage. Moreover, it has become apparent that the approaches and methods used to identify, manage, preserve and disseminate the documentary heritage of Canadian society in an analogue environment were not transferable to a digital environment. We suggest that the social theory of knowledge can serve as a basis for reflection on the development of a public policy intended to frame the identification, selection, management and preservation of a society's documentary heritage in the digital age. We define the problem and then propose answers through three scientific articles. The results indicate that existing professional knowledge and practices persist and limit the formulation and application of new theoretical frameworks, administrative policies and techniques associated with the identification and selection of documentary heritage. This research proposes a conceptual framework for developing public policies on Canada's documentary heritage.

Relevance:

60.00%

Publisher:

Abstract:

Scholars around the world are focusing on the study of the smart city phenomenon. Spanish scholarly output on this topic has grown exponentially in recent years. The new smart cities are grounded in new visions of urban development that integrate multiple technological solutions tied to the world of information and communication, all of them current and at the service of the city's needs. The Spanish-language literature on this topic comes from fields as diverse as Architecture, Engineering, Political Science and Law, and Business Studies. The purpose of smart cities is to improve the lives of their citizens through the implementation of information and communication technologies that meet the needs of their inhabitants, so researchers in the field of Communication and Information Sciences have much to contribute. This paper analyses a total of 120 texts and concludes that the smart city phenomenon will be one of the central axes of multidisciplinary research in our country in the coming years.

Relevance:

60.00%

Publisher:

Abstract:

Innovation is a fundamental part of social work. In recent years there has been a shift in the innovation paradigm, making it easier to accept this relationship. National and supranational policies aimed at promoting innovation appear to be specifically guided by this idea. To be able to affirm this hypothesis, it is necessary to review the perception that social workers have of their duties. It is also useful to examine particular cases that show how such social innovation arises.

Relevance:

60.00%

Publisher:

Abstract:

This article presents a study analysing the difficulties teachers face in planning, coordinating and assessing key competencies in a sample of 23 schools. The topic has far-reaching implications, since poor educational practice around key competencies can infringe one of students' fundamental rights: to be assessed objectively (LODE: Art. 6b and RD 732/1995: Art. 13.1) and to be able to pass the assessment tests considered necessary for obtaining the minimum academic qualification awarded by the Spanish state. The research follows a twofold methodological approach. First, it is a descriptive study in which we present the fundamental characteristics of key competencies and the basic regulations governing their development and assessment. Second, we apply a two-pronged qualitative analysis procedure, using the Atlas-Ti program and the reticular-categorial approach of social network analysis with UCINET and the yEd Graph Editor viewer, to examine the main difficulties and obstacles detected. The results show serious difficulties in all three dimensions analysed, "planning", "coordination" and "assessment" of key competencies, especially regarding the need for teacher training, the assessment of the competencies, the methodology for their development, and the internal coordination processes needed to achieve them in schools.
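
The abstract names the tools (Atlas-Ti for qualitative coding, UCINET and the yEd Graph Editor for the reticular-categorial network analysis) but does not describe the procedure. The following is a minimal, hypothetical sketch of one way a category co-occurrence network could be built and exported to GraphML for viewing in yEd; the category names and input data are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: build a co-occurrence network of difficulty
# categories (e.g. codes assigned in Atlas-Ti) and export it as GraphML,
# which can be opened in yEd Graph Editor or imported into other SNA tools.
from itertools import combinations
import networkx as nx

# Illustrative data: each list holds the difficulty codes found in one school.
coded_schools = [
    ["teacher_training", "assessment", "internal_coordination"],
    ["assessment", "methodology"],
    ["teacher_training", "assessment"],
]

G = nx.Graph()
for codes in coded_schools:
    for a, b in combinations(sorted(set(codes)), 2):
        # Edge weight counts how often two categories co-occur in a school.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

nx.write_graphml(G, "key_competency_difficulties.graphml")
print(dict(nx.degree(G, weight="weight")))
```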

Relevance:

60.00%

Publisher:

Abstract:

We present the results of a study on the university-level training that information professionals receive in project management, based on an analysis of Information and Documentation degree programmes worldwide. To this end, sources and directories on international education in Library and Information Science were consulted, and a database was built containing 106 records of project management courses included in Information and Documentation programmes. The parameters of analysis were geographical location, higher education institutions, academic degrees and the project management courses themselves. Among the most notable conclusions are the compulsory status of these courses, their face-to-face delivery, and the particular prominence of public universities in the United States, Germany and France.

Relevance:

60.00%

Publisher:

Abstract:

Knowledge organization in the context of the Information Sciences has as its essence information and knowledge that has been duly documented or recorded. Knowledge organization as a process involves describing both the physical form and the contents of information objects, and the product of this descriptive process is the representation of the attributes of an object or set of objects. These representations are built with languages designed specifically for the purposes of organization in information systems: languages that describe the document (the physical carrier of the object) and languages that describe the information (the contents). Starting from this premise, the general objective of this research is to analyse institutional information and knowledge management systems, mainly those that propose using the professor's Curriculum Vitae as the sole source of information, measurement and representation of an organization's information and knowledge. The main results include the importance of using the personal curriculum vitae as a reliable, standardized source of information; a synthesis of the main curricular systems that exist at the international and regional levels; a diagram of the data model for the case study; and, finally, a proposal to use ontologies as the main tool for the semantic organization of information in an information and knowledge management system.
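
The abstract proposes ontologies as the main tool for semantic organization but does not specify a vocabulary. Below is a minimal, hypothetical sketch, using Python and rdflib, of how one CV entry might be expressed as RDF triples and queried; the namespace, class and property names (cv:Professor, cv:authored, etc.) are illustrative assumptions, not the ontology of the case study.

```python
# Hypothetical sketch: one CV entry as RDF triples, queried with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

CV = Namespace("http://example.org/cv#")  # illustrative namespace, not the study's

g = Graph()
g.bind("cv", CV)

professor = CV["professor_001"]
article = CV["publication_042"]

g.add((professor, RDF.type, CV.Professor))
g.add((article, RDF.type, CV.JournalArticle))
g.add((professor, CV.authored, article))
g.add((article, CV.title, Literal("An illustrative article title")))
g.add((article, CV.year, Literal(2015)))

# Query: which publication titles are attached to any professor's curriculum?
results = g.query(
    "SELECT ?title WHERE { ?p cv:authored ?pub . ?pub cv:title ?title }",
    initNs={"cv": CV},
)
for row in results:
    print(row.title)
```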

Relevance:

60.00%

Publisher:

Abstract:

Major food adulteration and contamination events occur with alarming regularity and are known to be episodic, the question being not if but when another large-scale food safety/integrity incident will occur. Indeed, the challenges of maintaining food security are now internationally recognised. The ever-increasing scale and complexity of food supply networks can make them significantly more vulnerable to fraud and contamination, and potentially dysfunctional. This makes it that much more challenging to decide which analytical methods are most suitable for collecting and analysing (bio)chemical data within complex food supply chains, at targeted points of vulnerability. It is evident that those working within and associated with the food industry are seeking rapid, user-friendly methods to detect food fraud and contamination, and rapid/high-throughput screening methods for the analysis of food in general. In addition to being robust and reproducible, these methods should be portable, ideally handheld and/or remote sensor devices, that can be taken to, or positioned on-line or at-line at, points of vulnerability along complex food supply networks, and they should require a minimum amount of background training to acquire information-rich data rapidly (ergo point-and-shoot). Here we briefly discuss a range of spectrometry- and spectroscopy-based approaches, many of which are commercially available, as well as other methods currently under development. We offer a future perspective on how this range of detection methods in the growing sensor portfolio, together with developments in the computational and information sciences such as predictive computing and the Internet of Things, will form systems- and technology-based approaches that significantly reduce the areas of vulnerability to food crime within food supply chains. Food fraud is a problem of systems, and it therefore requires systems-level solutions and thinking.

Relevance:

60.00%

Publisher:

Abstract:

Twitter is the biggest social network in the world, and every day millions of tweets are posted and discussed, expressing various views and opinions. A large variety of research activities have been conducted to study how these opinions can be clustered and analyzed, so that underlying tendencies can be uncovered. Owing to the inherent weaknesses of tweets - very short texts and very informal styles of writing - it is rather hard for tweet data analysis to deliver results with good performance and accuracy. In this paper, we attack the problem from another angle, using a two-layer structure to analyze the Twitter data: LDA combined with topic map modelling. The experimental results demonstrate that this approach represents progress in Twitter data analysis. However, more experiments with this method are needed to confirm that accurate analytic results can be maintained.
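
The abstract does not give implementation details of the two-layer structure. As a rough illustration of the first (LDA) layer only, the sketch below fits a small topic model to a few invented tweet-like texts with scikit-learn; the example texts and parameter values are assumptions, and the topic-map layer is not shown.

```python
# Minimal sketch of an LDA layer over short, tweet-like texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "traffic downtown is terrible again this morning",
    "loving the new transit app, so easy to use",
    "another delay on the metro line, commute ruined",
    "great concert in the park tonight",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # per-tweet topic proportions

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_idx}: {top_terms}")
```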

Relevance:

60.00%

Publisher:

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, not only by considering technical or legal constraints, but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es

Relevance:

60.00%

Publisher:

Abstract:

Recommendation systems aim to help users make decisions more efficiently. The most widely used method in recommendation systems is collaborative filtering, in which a critical step is to analyze a user's preferences and recommend products or services based on similarity analysis with other users' ratings. However, collaborative filtering is less usable when recommendation faces the "cold start" problem, i.e. when few comments have been given to products or services. To tackle this problem, we propose an improved method that combines collaborative filtering and data classification. We use hotel recommendation data to test the proposed method. The accuracy of the recommendation is determined by the rankings. Evaluations of the accuracy of the Top-3 and Top-10 recommendation lists are conducted using the 10-fold cross-validation method and ROC curves. The results show that, under the cold-start condition, the Top-3 hotel recommendation list produced by the combined method outperforms the Top-10 list in most cases.
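
The paper's combined method is not detailed in the abstract. The sketch below illustrates only the basic collaborative-filtering step, predicting a missing hotel rating from similar users via cosine similarity over co-rated items; the rating matrix is invented for illustration and the classification component is omitted.

```python
# Minimal sketch of user-based collaborative filtering with cosine similarity.
import numpy as np

# Rows = users, columns = hotels; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)  # compare only hotels rated by both users
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user_idx, hotel_idx):
    sims, weighted = [], []
    for other in range(ratings.shape[0]):
        if other == user_idx or ratings[other, hotel_idx] == 0:
            continue
        s = cosine_sim(ratings[user_idx], ratings[other])
        sims.append(s)
        weighted.append(s * ratings[other, hotel_idx])
    return sum(weighted) / sum(sims) if sims else 0.0

print(predict(0, 2))  # predicted rating of hotel 2 for user 0
```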

Relevance:

60.00%

Publisher:

Abstract:

Abstract: In the mid-1990s, when I worked for a telecommunications giant, I struggled to gain access to basic geodemographic data. It cost hundreds of thousands of dollars at the time simply to purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. Control of this data lay either with a government's census and statistical bureau or with a handful of forward-thinking corporations. Twenty years on, we find ourselves inundated with data (location and other) that we are challenged to amalgamate, much of it still "dirty" in nature. Open data initiatives such as ODI give us great hope for how we might be able to share information together and capitalize not only on crowdsourcing behavior but also on the implications for positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen science participatory projects across the globe. In early 2015, I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder the effect that social machines (in effect, autonomous data collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which included the potential for misinformation, misinterpretation, and information manipulation when context was entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to ponder the risks associated with following a purely technocratic approach, where machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance? Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean – International in the Faculty of Engineering and Information Sciences. Katina is the editor-in-chief of IEEE Technology and Society Magazine and a senior editor of IEEE Consumer Electronics Magazine. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was its Vice-Chair. Michael researches the socio-ethical implications of emerging technologies, with an emphasis on an all-hazards approach to national security. She has written and edited six books and guest edited numerous special issues of journals on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation and surveillance/uberveillance for Proceedings of the IEEE, Computer and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.