1000 results for "Informação e tecnologia da computação"
Abstract:
Introduction: In the digital environment, metadata influence both data access and information retrieval, and are used as search elements to facilitate locating resources on the Web. Objective: From this perspective, the aim is to present the BEAM methodology, developed at the Biblioteca de Estudos e Aplicação de Metadados of the research group "Novas Tecnologias em Informação" at Universidade Estadual Paulista and used to define metadata for describing information resources. Methodology: The research is exploratory and bibliographic, and the methodology was developed on the basis of Chuttur's (2011) theoretical method, the DataONE (2012) data life cycle, the PDCA cycle, and the 5W1H tool. Results: The seven steps of the methodology are presented, together with the guidelines needed for their implementation. Conclusions: We conclude by noting that the BEAM methodology can be adopted by libraries in the construction of catalogs aimed at meeting the needs of users.
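A methodology for defining descriptive metadata ultimately produces records checked against an element set. As a minimal sketch (not the BEAM specification itself), the record below uses Dublin Core-style element names, and the set of mandatory elements is a hypothetical application profile invented for illustration:

```python
# Hypothetical application profile: which elements are mandatory is an
# assumption for illustration, not part of the BEAM methodology.
REQUIRED_ELEMENTS = {"title", "creator", "date", "identifier"}

def missing_elements(record: dict) -> set:
    """Return the required elements that are absent or empty in a record."""
    return {e for e in REQUIRED_ELEMENTS if not record.get(e)}

record = {
    "title": "Metadados: uma introducao",          # illustrative resource
    "creator": "Fulano de Tal",                    # hypothetical author
    "date": "2015",
    "identifier": "http://example.org/resource/1",
    "subject": ["metadata", "cataloging"],
}

print(missing_elements(record))  # set() -> record satisfies the profile
```

A catalog built this way can reject or flag incomplete records before they reach users' search interfaces.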
Abstract:
This paper presents a laboratory under development within History of Culture, a subject taught in the Library Science and Archival Science programs at UNESP (Marília). The project is grounded in the research line Information and Technology: second-year undergraduate students participate in improving 27 entries of the Portuguese-language Wikipedia. The aim is to build competence in scientific reading and writing in digital media, in identifying and retrieving information, and in interpreting and understanding formal and content aspects and their reorganization. It includes activities of searching, selecting, remixing, and republishing texts, images, audio, and video across converging hypertext information sources, supported by tutors with strategic skills in digital environments. To this end, the project joined the Wikimedia Foundation's international Campus Ambassadors program. It also seeks to foster sharing and collaboration behaviors, with the purpose of creating the habits necessary for informational empowerment in Brazil. The methodology is to draw on the work of individuals already trained in wiki culture and to create, within the subject, information-sharing programs with a more specialized bias, lending greater credibility to the digital environment. The syntax of the environment supports the learning of the complementary skills of reading and writing and offers itself as an open repository from which information can be reused. It is thus an empowerment strategy in the pursuit of autonomy and self-reliance, considering intersemiotic knowledge in the editing, visualization, and understanding of information on the social web. A second stage is proposed: research verifying the environment's credibility after the entry-improvement work has been consolidated and disseminated.
Abstract:
The research has as its central theme the study of the bibliographic record conversion process. The object of study is the conversion of analog bibliographic records to the MARC21 Bibliographic Format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The objective is to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, drawing on the linguistic studies of Saussure and Hjelmslev on the manifestations of human language, to support the development of a computational interpreter for converting bibliographic records to the MARC21 Bibliographic Format in a way that confirms both the semantic value of the represented informational resource and the reliability of the representation. Given these objectives, the methodological trajectory of the research follows a qualitative approach, of an exploratory, descriptive, and experimental nature, with recourse to the literature. Contributions on the theoretical plane can be envisaged in the development of questions inherent to the syntactic and semantic aspects of bibliographic records, involving interdisciplinarity among Information Science, Computer Science, and Linguistics. Contributions to the practical field are identified by the fact that the study covers the development of Scan for MARC, a computational interpreter that can be adopted by any institution wishing to convert bibliographic record databases to the MARC21 Bibliographic Format from description and visualization schemes of bibliographic records (AACR2r and ISBD), an aspect of the research considered innovative.
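The core of such a converter is a mapping from parsed descriptive elements onto MARC21 tags and subfields. The sketch below is illustrative only (the field names of the input record are assumptions, not the Scan for MARC schema), using three well-known MARC21 Bibliographic tags:

```python
def to_marc21(desc: dict) -> dict:
    """Map parsed descriptive elements onto MARC21 tag/subfield pairs."""
    marc = {}
    if "main_author" in desc:
        marc["100"] = {"a": desc["main_author"]}       # main entry, personal name
    if "title" in desc:
        marc["245"] = {"a": desc["title"]}             # title statement
        if "statement_of_responsibility" in desc:
            marc["245"]["c"] = desc["statement_of_responsibility"]
    if "publisher" in desc:
        marc["260"] = {"b": desc["publisher"],         # publication info
                       "c": desc.get("pub_date", "")}
    return marc

record = {
    "main_author": "Saussure, Ferdinand de",
    "title": "Cours de linguistique generale",
    "publisher": "Payot",
    "pub_date": "1916",
}
marc = to_marc21(record)
print(marc["245"]["a"])  # Cours de linguistique generale
```

A real interpreter must also verify the semantics of each mapped value (the thesis's central concern), not merely relocate strings between schemes.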
Abstract:
Information retrieval has been much discussed within Information Science lately. The search for quality information compatible with users' needs has become the object of constant research. Using the Internet as a source for disseminating knowledge has suggested new models of information storage, such as digital repositories, which have been used in academic research as the main form of self-archiving and disseminating information, but with an information structure that calls for better descriptions of resources and hence better retrieval. The objective is thus to improve the information retrieval process by presenting a proposal for a structural model in the context of the Semantic Web, addressing the use of Web 2.0 and Web 3.0 in digital repositories and enabling semantic information retrieval through the construction of a data layer called Iterative Representation. The present study is descriptive and analytical, based on document analysis, and is divided into two parts: the first, characterized by direct non-participatory observation of tools that implement digital repositories, as well as of repositories already instantiated; and the second, which proposes an innovative model for repositories using knowledge representation structures and user participation in building a domain vocabulary. The proposed model, Iterative Representation, allows digital repositories to be tailored using folksonomy together with the controlled vocabulary of the field, in order to generate an iterative data layer that allows information feedback and semantic information retrieval through the structural model designed for repositories. The model resulted in the formulation of the thesis that, through Iterative Representation, it is possible to establish a process of semantic information retrieval in digital repositories.
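One idea behind such a data layer is reconciling free user tags (folksonomy) with a controlled vocabulary, while keeping unmatched tags as candidates to feed back into the vocabulary. The vocabulary, synonym table, and tags below are invented for illustration; they are not taken from the Iterative Representation model itself:

```python
# Hypothetical controlled vocabulary and synonym mapping for illustration.
CONTROLLED = {"semantic web": "Semantic Web", "repositorio": "Digital repositories"}
SYNONYMS = {"web semantica": "semantic web", "dspace": "repositorio"}

def reconcile(tags):
    """Split free tags into controlled descriptors and feedback candidates."""
    descriptors, candidates = [], []
    for tag in tags:
        key = SYNONYMS.get(tag.lower(), tag.lower())
        if key in CONTROLLED:
            descriptors.append(CONTROLLED[key])   # normalized descriptor
        else:
            candidates.append(tag)                # candidate for the vocabulary
    return descriptors, candidates

descriptors, candidates = reconcile(["web semantica", "dspace", "linked data"])
print(descriptors)  # ['Semantic Web', 'Digital repositories']
print(candidates)   # ['linked data']
```

The feedback loop, reviewing candidates and promoting them into the controlled vocabulary, is what makes the layer iterative rather than a one-way normalization.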
Abstract:
Information and Communication Technologies (ICT) made it possible to adapt bibliographic catalogs to the digital environment, giving them more speed, flexibility, and efficiency in information retrieval. FRBR, a conceptual model of the bibliographic universe based on entity-relationship modeling, brought to Librarianship the possibility of building catalogs that operate more efficiently. The FRBR model was the first initiative concerned with how to carry out the conceptual modeling of bibliographic catalogs, so that effort would no longer be spent on individual developments of distinct and inconsistent models. However, even many years after its publication, there have been few real implementation initiatives. The aim of this study is to present the model, based on its main features and structure, and to bring to the discussion some considerations and inconsistencies that, according to the literature, may be the cause of its failure so far. It is based on the national and international literature on conceptual modeling and on the FRBR model.
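The entity-relationship structure FRBR proposes for its Group 1 entities (Work, Expression, Manifestation, Item) can be sketched as nested data classes. The attributes and sample values below are a simplified illustration, not the full attribute set the model defines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:                  # a single exemplar of a manifestation
    barcode: str

@dataclass
class Manifestation:         # the physical embodiment of an expression
    identifier: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:            # a realization of a work (e.g. a translation)
    language: str
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:                  # the distinct intellectual or artistic creation
    title: str
    expressions: List[Expression] = field(default_factory=list)

# Illustrative instance: one work, two expressions, three held items.
work = Work("Dom Casmurro", [
    Expression("por", [Manifestation("isbn-pt", [Item("b1"), Item("b2")])]),
    Expression("eng", [Manifestation("isbn-en", [Item("b3")])]),
])
print(sum(len(m.items) for e in work.expressions for m in e.manifestations))  # 3
```

A catalog modeled this way can collocate all translations and editions of a work under a single node, which is precisely the retrieval gain the model promises.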
Abstract:
The cataloging process is responsible for building systems consisting of sets of interconnected elements and combined forms of representation, creating tools that facilitate the flow of information in various informational environments. It presents structures that offer favorable conditions for access to the formal codes of symbolic representation and to the channels of information transfer, competently performing the decoding and encoding of the codes and rules used to represent knowledge and to describe information, documents, and resources. The objective of this paper is to present the challenge of transforming operational data into consistent information, the role of forms of representation, and the mental constructions involved in defining the memory markers of catalog users. As results, it shows the memory markers indicated by three categories of users for describing a book-like resource, and points to the need for collaborative and cooperative work in cataloging and for catalog modeling focused on the user.
Abstract:
This study explores, in three steps, how the three main library classification systems (the Library of Congress Classification, the Dewey Decimal Classification, and the Universal Decimal Classification) cover human knowledge. First, we mapped the knowledge covered by the three systems, using the "10 Pillars of Knowledge: Map of Human Knowledge", which comprises ten pillars, as an evaluative model. We mapped all the subject-based classes and subclasses in the first two levels of the three hierarchical structures. Then, we zoomed into each of the ten pillars and analyzed how the three systems cover the ten knowledge domains. Finally, we focused on the three library systems themselves. Based on the way each of them covers the ten knowledge domains, it is evident that they fail to adequately and systematically present contemporary human knowledge. They are unsystematic and biased, and, at the top two levels of their hierarchical structures, they are incomplete.
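The mapping step amounts to assigning each top-level class to a pillar and then checking which pillars receive no class at all. The toy version below uses a subset of four pillar names and a class-to-pillar assignment that is invented purely for illustration; it does not reproduce the study's actual mapping:

```python
# Subset of pillars used for illustration (the model has ten).
PILLARS = {"Body and Mind", "Society", "Matter and Energy", "Supernatural"}

# Hypothetical assignment of a scheme's main classes to pillars.
CLASS_TO_PILLAR = {
    "100 Philosophy": "Body and Mind",
    "300 Social sciences": "Society",
    "500 Science": "Matter and Energy",
}

def uncovered_pillars():
    """Pillars to which no top-level class is mapped: coverage gaps."""
    return PILLARS - set(CLASS_TO_PILLAR.values())

print(sorted(uncovered_pillars()))  # ['Supernatural']
```

Run over all classes of all three schemes, the same difference operation exposes the systematic gaps the study reports.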
Abstract:
The incipient but rapidly expanding adoption of Information and Communication Technologies (ICT) in Africa is now having varied impacts on these societies. One of them concerns how users identify themselves with these tools: we find individuals who identify as bloggers, Twitter users, or cyberactivists. This contribution analyzes the case of Senegal, where a successful experience (at least in its repercussion) of using social networks and Web 2.0 tools for social and political engagement during the 2012 presidential elections is tied to the emergence of an identity: the cyberactivist. The Senegalese case shows how this identity has a personal, assertive dimension as well as collective aspects of belonging to a community. Both reveal personal traits that contrast with previous beliefs, essentially because they fuse and confuse the virtual and the real. Owing to the dynamics of expanding technology, this identity is youthful and urban, but not only that. The situation creates new dynamics, at least within the affected group. For this reason, besides tracing the emergence and evolution of this phenomenon, the paper raises questions about the social and political involvement of groups traditionally rendered "invisible". Beyond the new social behavior, there are changes in the rules of the game that open the way to a new social revolution.
Abstract:
The collection of prices of basic food-basket goods is very important for the population: from the collection and processing of these data, the Cost of Living Index (CLI), among other indices, is calculated, helping consumers to shop more rationally and with a clearer view of the impact of each product on their household budget, covering not only food but also cleaning and personal hygiene products. The price-collection survey for the basic goods basket is currently conducted weekly in Botucatu, SP, using a paper spreadsheet. The aim of this work was to develop software that uses mobile devices in the data collection and storage phase of the basic goods survey in Botucatu, SP, eliminating the need for notes on paper spreadsheets, increasing efficiency, and accelerating data processing. The work drew on mobile technology and development tools: the .NET Compact Framework platform and the Visual Basic .NET programming language were used in the handheld phase, making it possible to develop the system with object-oriented programming techniques and with greater speed and reliability in writing the code. An HP Pavilion dv3 personal computer and an E-TEN Glofiish X500+ handheld computer were used. With the software developed and the collection, storage, and report-processing phases in place, the in loco paper spreadsheets were eliminated, and it was possible to verify that the whole process became faster, more consistent, safer, and more efficient, with the data more readily available.
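The collection-and-report step the handheld software automates can be sketched as follows: price observations replace the paper spreadsheet rows, and a report aggregates the cheapest offer per product. Product names, store names, and prices below are invented for illustration (the original system was written in Visual Basic .NET; this sketch uses Python):

```python
# Each observation: (product, store, price), as a field agent would record it.
observations = [
    ("rice 5kg", "Store A", 18.90),
    ("rice 5kg", "Store B", 17.50),
    ("soap", "Store A", 2.30),
]

def cheapest_per_product(rows):
    """Report the store offering the lowest price for each product."""
    best = {}
    for product, store, price in rows:
        if product not in best or price < best[product][1]:
            best[product] = (store, price)
    return best

report = cheapest_per_product(observations)
print(report["rice 5kg"])  # ('Store B', 17.5)
```

Capturing the observations digitally at the point of collection is what removes the transcription step and its errors from the weekly survey.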
Abstract:
Viewing and interacting with 3D models has been possible for a long time; however, vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Handheld mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras, and fast processors, are especially attractive for advancing the state of the art in virtual reality systems, and their ubiquity and fast Internet connections open a path to distributed and collaborative development. However, this path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is rendering these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualizing, navigating, and reviewing large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to let users navigate through large-scale models. IT professionals, architects, civil engineers, and oil industry experts took part in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about them. The lessons learned are valuable and are presented in this document. A quantitative study of the different navigation modes was subsequently carried out to analyze which mode is best in a given situation.
Abstract:
One of the aging processes of Madeira Wine is "estufagem", carried out by circulating water heated to a set temperature through a coil system inside each heating tank (estufa). To make the estufagem process efficient and preserve the quality of the Madeira Wine, monitoring, logging, and controlling the temperature is of the utmost importance; currently, this entire process is as a rule performed manually, both in the system of ten stainless-steel tanks in the oenology laboratory of the Universidade da Madeira (UMa) and in the systems of the Madeira Wine cooperatives. Some systems on the market address this problem with greater or lesser limitations, but none of them implements an "intelligent" control system capable of automatically adapting to different predefined temperature periods and keeping the tanks heated to those temperatures within a margin of error below ±0.5 °C; moreover, their cost is high, which hinders their adoption in this sector. The system implemented in this thesis consists of two applications: a web application and a Windows Forms application. The web application was developed in C# using the ASP.NET Web Pages framework and implements all the logic needed for graphical monitoring and system management, namely defining the setpoint for each tank. The Windows Forms application, also developed in C# because of the need to interface with the library supplied by CAREL for connecting to the IR32 temperature controllers, performs automatic temperature logging and control according to the setpoint defined for each tank through the web application.
Temperature control is performed with neural networks, namely a Direct Inverse Controller (DIC), which obtained, among the various controllers tested, the best Mean Squared Error (MSE) and the best correlation coefficient (R). Using the implemented system, the physical error limitation of ±1 °C around the setpoint was eliminated; in the best case, a margin of error of ±0.1 °C relative to the setpoint was achieved, thus reducing the temperature variation margin by up to 1.8 °C and, consequently, the associated error.
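Direct inverse control fits a model of the plant's inverse (from the current state and the desired next state to the control input) and then uses that inverse to drive the system to a setpoint. The sketch below is a minimal illustration on a toy linear heating plant, not the thesis's neural-network controller; the plant gain and the samples are invented:

```python
def plant(temp, u, k=0.8):
    """Toy heating dynamics: next temperature given control effort u."""
    return temp + k * u

def fit_inverse_gain(samples):
    """Least-squares estimate of the inverse gain 1/k
    from (temp, u, next_temp) samples."""
    num = sum(u * (t_next - t) for t, u, t_next in samples)
    den = sum((t_next - t) ** 2 for t, u, t_next in samples)
    return num / den  # control effort per unit of desired temperature change

# "Training data": observed responses of the plant to known control inputs.
samples = [(t, u, plant(t, u)) for t, u in [(20, 1.0), (22, 0.5), (25, -0.5)]]
inv_gain = fit_inverse_gain(samples)

temp, setpoint = 20.0, 45.0
for _ in range(5):
    u = inv_gain * (setpoint - temp)  # inverse model proposes the control
    temp = plant(temp, u)
print(round(temp, 1))  # 45.0
```

In the thesis the inverse model is a neural network rather than a single gain, which lets it track nonlinear tank dynamics; the control law, feeding the desired temperature through the learned inverse, is the same.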