891 results for pacs: information storage and retrieval
Abstract:
Layered double hydroxides (LDHs) have been widely studied for their wealth of fascinating features and applications. Potentiostatic electrodeposition of LDHs has been extensively applied in the literature as a fast, direct alternative to classical chemical routes. However, it usually does not allow fine control of the M(II)/M(III) ratio in the synthesized material, and it is not suitable for the intercalation of large anions. In this work, a novel protocol based on potentiodynamic synthesis is therefore proposed to overcome these constraints. LDHs of controlled composition were prepared using trivalent-to-bivalent cation molar ratios in the electrolytic solution ranging from 1:1 to 1:4. Moreover, we were able to produce electrochemically, for the first time, LDHs intercalated with carbon nanomaterials. A one-step procedure was developed that simultaneously achieves the Ni/Al-LDH synthesis, the reduction of graphene oxide (GO), and its intercalation inside the structure. The synthesized materials were applied in several fields of interest. First, LDHs with a 3:1 ratio displayed good performance as catalysts for the electro-oxidation of 5-(hydroxymethyl)furfural, suggesting further investigation for applications in industrial catalysis. The same materials, with different metal ratios, were tested as catalysts for the oxygen evolution reaction, yielding results comparable to LDHs synthesized by the classical co-precipitation method and better activity than LDHs obtained by the potentiostatic approach. The composite material based on LDH and reduced graphene oxide was employed to fabricate the cathode of a hybrid supercapacitor coupled with an activated carbon anode.
We can thus conclude that, to date, the potentiodynamic method offers the greatest potential for the rapid synthesis of reproducible films of Co- and Ni-based LDHs with controlled composition.
Abstract:
In this thesis, the optimal operation of a neighborhood of smart households, in terms of minimizing the total energy cost, is analyzed. Each household may comprise several assets such as electric vehicles, controllable appliances, energy storage, and distributed generation. Bi-directional power flow is considered for each household. Apart from the distributed generation unit, technological options such as vehicle-to-home and vehicle-to-grid are available to cover self-consumption needs and to export excess energy to other households, respectively.
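As a toy illustration of the cost-minimization idea in the abstract above (not taken from the thesis itself — the hourly prices, the appliance load profile, and the brute-force search are all illustrative assumptions), a single shiftable appliance can be scheduled into the cheapest contiguous slot of a time-varying tariff:

```python
# Minimal sketch: schedule one shiftable appliance to minimize energy
# cost under time-varying prices, by exhaustive search over start hours.
# Prices and the load profile are illustrative assumptions.

def best_start(prices, load_profile):
    """Return (start_hour, cost) minimizing the total cost of running
    load_profile (kWh per hour) within the price horizon ($/kWh)."""
    horizon, run = len(prices), len(load_profile)
    best = None
    for start in range(horizon - run + 1):
        cost = sum(p * l for p, l in
                   zip(prices[start:start + run], load_profile))
        if best is None or cost < best[1]:
            best = (start, cost)
    return best

if __name__ == "__main__":
    prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]  # $/kWh for each hour
    dishwasher = [1.0, 0.5]                        # kWh drawn per run hour
    print(best_start(prices, dishwasher))          # cheapest start hour
```

A full model as described in the abstract would add coupling between households, storage dynamics, and vehicle-to-home/vehicle-to-grid flows, typically as a mixed-integer linear program.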
Abstract:
While multimedia data, image data in particular, is an integral part of most websites and web documents, our quest for information is still largely restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we face a number of challenging problems. First, there is the ambiguous and highly subjective nature of defining image semantics and similarity. Second, multimedia data may come from highly diversified sources, as a result of automatic image capturing and generation processes. Finally, multimedia information exists in decentralised sources over the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval topics, addressing some aspects of these challenges. These papers have been selected from the conference proceedings (Kluwer Academic Publishers, ISBN 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29–31 May 2002.
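To make the CBIR notion mentioned above concrete, here is a minimal sketch of its simplest ingredient — comparing images by colour histograms. This is not from any of the five papers; the images are stubbed as lists of grey values, and a real system would use an image library and richer features:

```python
# Illustrative CBIR building block: quantized grey-level histograms
# compared by histogram intersection. Pixel data here is a stand-in
# for decoded image content.
from collections import Counter

def histogram(pixels, bins=4):
    """Quantize 0-255 grey values into `bins` buckets, normalized to sum 1."""
    counts = Counter(min(p * bins // 256, bins - 1) for p in pixels)
    total = len(pixels)
    return [counts.get(b, 0) / total for b in range(bins)]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical distributions, 0.0 disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

dark = histogram([10, 20, 30, 40])        # all pixels fall in the lowest bin
light = histogram([200, 210, 220, 230])   # all pixels fall in the highest bin
print(intersection(dark, dark))   # 1.0
print(intersection(dark, light))  # 0.0
```

The subjectivity problem the editors describe is visible even here: two images can share a histogram while depicting entirely different scenes.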
Abstract:
Along with the development and growing availability of the Internet, the way information is provided and obtained has changed markedly. The former separation between publisher and consumer is dissolved by the collaborative applications of the so-called Web 2.0, where every participant can provide and consume information alike. In addition, entries by other participants can be extended, commented on, or discussed. With the Social Web, finally, the social relationships and interactions of the participants move to the foreground. Thanks to mobile devices, messages can be sent and read at any time and in almost any place, new acquaintances can be made, and one's current status can be shared with a virtual circle of friends. With every activity within such an application, a participant relates to data objects and/or other participants. This can happen explicitly, for example when an article is written and sent to friends by e-mail. Relationships between data objects and users also arise implicitly, for example when the profile page of another participant is visited or when different participants rate an article similarly. In this thesis, a formal approach for analyzing and exploiting relationship structures is developed, building on such explicit and implicit data traces. The first part of this thesis is devoted to the analysis of relationships between users in Social Web applications, applying methods of social network analysis. Within a typical social web application, users have various ways to interact, and relationship structures between users are derived from each interaction pattern. The advantage of implicit user interactions is that they occur frequently and accrue, as it were, as a by-product of the system's operation.
However, it can be assumed that an explicitly stated friendship relation carries more weight than corresponding implicit interactions. Accordingly, a first focus of this thesis is the comparison of different relationship structures within a social web application. The second part of this thesis is devoted to the analysis of one of the most widespread profile attributes of users in social web applications: the first name. Here the methods and analyses presented in the first part are applied, i.e. relationship networks for names are extracted from data of social web applications and examined with methods of social network analysis. Using external descriptions of first names, semantic similarities between names are determined and compared with the corresponding structural similarities in the various relationship networks. In a practical application, finding similar names corresponds to the search of expectant parents for a suitable first name. The results of the analysis of name relationships form the basis for the implementation of the name search engine Nameling, which was developed as part of this thesis. More than 35,000 users accessed Nameling within the first six months after its launch. The resulting usage data, in turn, shed light on the users' individual first-name preferences. In this thesis, these usage data are presented and used to compute and evaluate personalized name recommendations. Finally, approaches to diversifying personalized name recommendations are presented, combining static name relationship networks with the individual usage data.
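The structural name similarity the thesis computes can be illustrated with a toy sketch (assumed co-occurrence data — this is not Nameling's actual pipeline): each name is represented by the vector of names it co-occurs with, and candidates are ranked by cosine similarity of those vectors:

```python
# Hedged sketch: ranking first names by cosine similarity of their
# co-occurrence vectors. The counts below are invented toy data.
from math import sqrt

cooc = {  # name -> {co-occurring name: count}
    "Anna": {"Lena": 5, "Paul": 1},
    "Lena": {"Anna": 5, "Paul": 2},
    "Paul": {"Anna": 1, "Lena": 2},
}

def cosine(u, v):
    """Cosine similarity of two sparse count vectors given as dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def similar_names(name):
    """All other names, ranked by structural similarity to `name`."""
    return sorted(((cosine(cooc[name], vec), other)
                   for other, vec in cooc.items() if other != name),
                  reverse=True)

print(similar_names("Anna"))
```

In the thesis these structural rankings are then compared against semantic similarities derived from external name descriptions.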
Abstract:
VITULLO, Nadia Aurora Vanti. Avaliação do banco de dissertações e teses da Associação Brasileira de Antropologia: uma análise cienciométrica. 2001. 143 f. Dissertação (Mestrado) - Curso de Mestrado em Biblioteconomia e Ciência da Informação, Pontifícia Universidade Católica de Campinas, Campinas, 2001.
Abstract:
Nowadays, many electronic devices support digital video; examples include cell phones, digital cameras, video cameras, and digital televisions. However, raw video represents a huge amount of data, millions of bits, when stored as captured. Keeping it in this primary form would require an enormous amount of disk space and enormous bandwidth for transmission. Video compression is therefore essential to make information storage and transmission feasible. Motion estimation is a technique used in the video encoder that exploits the temporal redundancy present in video sequences to reduce the amount of data needed to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution videos according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was developed to provide a high degree of data reuse; the data reuse scheme adopted reduces the bandwidth required to execute motion estimation. Motion estimation is responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the final encoder's performance. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
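The algorithmic core of the motion estimation described above can be sketched in software (this is only an illustration of full-search block matching with a sum-of-absolute-differences cost, not the thesis's hardware architecture; the frame contents, block size, and search radius are assumptions):

```python
# Illustrative full-search motion estimation. Frames are 2D lists of
# luma samples; real encoders work on much larger frames and blocks.

def sad(ref, cur, rx, ry, cx, cy, n):
    """Sum of absolute differences between the n x n block of `cur` at
    (cx, cy) and the n x n block of `ref` at (rx, ry)."""
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(n) for i in range(n))

def full_search(ref, cur, cx, cy, n, radius):
    """Return the motion vector (dx, dy) minimizing SAD within
    +/- radius samples of the collocated block."""
    h, w = len(ref), len(ref[0])
    best = (0, 0, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx <= w - n and 0 <= ry <= h - n:
                cost = sad(ref, cur, rx, ry, cx, cy, n)
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best[:2]

# A bright 2x2 patch moves from (1, 1) in `ref` to (2, 2) in `cur`,
# so the vector pointing back to the reference is (-1, -1).
ref = [[0] * 6 for _ in range(6)]
ref[1][1] = ref[1][2] = ref[2][1] = ref[2][2] = 255
cur = [[0] * 6 for _ in range(6)]
cur[2][2] = cur[2][3] = cur[3][2] = cur[3][3] = 255
print(full_search(ref, cur, 2, 2, 2, 2))  # (-1, -1)
```

The nested loops make clear why data reuse matters in hardware: neighbouring search positions read overlapping reference samples, so a good reuse scheme avoids fetching the same pixels from memory repeatedly.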
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Ciências Cartográficas - FCT
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper analyzes the process of knowledge construction in the face of the massive emergence of information and communication technologies. In this context, through a qualitative approach, we first examine the potential for the production, dissemination, access, and use of information from a technocratic perspective, and then the negative impacts stemming from computer mediation, mainly linked to the complexity of policies for democratizing access to information and to a loss of critical stance in science. The review focuses on the current model of information search and knowledge construction through the Web, outlining aspects inherent to the process in general. Likewise, we envision different contributions of information science to the systematization of human knowledge, guided by the social actions of professionals working in various fields, with emphasis on information representation and retrieval. It is hoped that this work can contribute to current reflection on the production, dissemination, and use of information, as well as on digital inclusion policies.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
At large research universities, a common approach for teaching hundreds of undergraduate students at one time is the traditional large lecture-based course. Trends indicate that over the next decade there will be an increase in the number of large campus courses offered, as well as larger enrollments in currently offered courses. As universities investigate alternative means to accommodate more students and their learning needs, Web-based instruction provides an attractive delivery mode for teaching large on-campus courses. This article explores a theoretical approach to how Web-based instruction can be designed and developed to provide quality education for traditional, on-campus undergraduate students. The academic debate over the merit of Web-based instruction for such students has not been resolved. This study identifies and discusses instructional design theory for adapting a large lecture-based course to the Web.