969 results for Strongly Semantic Information


Relevance: 20.00%

Publisher:

Abstract:

WDM multilayered SiC/Si devices based on a-Si:H and a-SiC:H filter design are approached from a reconfigurable point of view. Results show that, under appropriate optical bias, the devices act as reconfigurable active filters that enable optical switching and the development of optoelectronic logic functions. Under front violet irradiation, the magnitudes of the red and green channels are amplified while those of the blue and violet channels are reduced. Violet back irradiation cuts the red channel, slightly influences the magnitudes of the green and blue ones, and strongly amplifies the violet channel. This nonlinearity provides the possibility of selectively removing useless wavelengths. Particular attention is given to the amplification coefficient weights, which allow the wavelength background effects to be taken into account when a band needs to be filtered from a wider range of mixed signals, or when optical active filter gates are used to select and filter input signals to specific output ports in WDM communication systems. A truth table of an encoder that performs the 8-to-1 multiplexer (MUX) function is presented.
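As an illustration of the 8-to-1 MUX behaviour referred to above, the following minimal Python sketch selects one of eight channel readings using three select bits; the channel values and the select-bit encoding are assumptions made for the example, not the paper's device model.

```python
# Minimal sketch of 8-to-1 multiplexer selection logic; channel values and
# select-bit encoding are illustrative assumptions, not the device model.
def mux8to1(inputs, s2, s1, s0):
    """Select one of eight channel readings using three select bits."""
    assert len(inputs) == 8
    index = (s2 << 2) | (s1 << 1) | s0   # the three select bits form a 3-bit address
    return inputs[index]

# Example: eight normalized channel magnitudes, select address 0b101 -> input 5
channels = [0.1, 0.4, 0.0, 0.9, 0.2, 0.7, 0.3, 0.5]
print(mux8to1(channels, 1, 0, 1))  # prints 0.7
```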

Relevance: 20.00%

Publisher:

Abstract:

Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage well, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems opened the possibility of developing so-called Wyner-Ziv video codecs, which follow a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to be coded, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade in developing increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, namely guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation method provides the better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also H.264/AVC zero-motion standard solutions for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
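To make the idea of side information concrete, the hedged sketch below estimates the frame to be decoded from two already-decoded key frames. Real Wyner-Ziv codecs use motion-compensated interpolation or extrapolation; the plain frame average here is only a simplified stand-in, and the frame sizes are illustrative.

```python
# Toy sketch of decoder-side side-information creation by temporal interpolation.
# A simple average of the surrounding key frames stands in for the
# motion-compensated interpolation used in actual Wyner-Ziv codecs.
import numpy as np

def side_information(prev_key: np.ndarray, next_key: np.ndarray) -> np.ndarray:
    """Estimate the intermediate (Wyner-Ziv) frame from surrounding key frames."""
    return (prev_key.astype(np.float32) + next_key.astype(np.float32)) / 2.0

# Example with random 8-bit luminance frames (QCIF-sized, for illustration)
prev_f = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
next_f = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
si = side_information(prev_f, next_f)
print(si.shape, si.dtype)
```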

Relevance: 20.00%

Publisher:

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine its quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
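The sketch below illustrates one way multiple side-information hypotheses could be combined using virtual-channel statistics: each hypothesis is weighted by the inverse of its estimated noise variance. This weighting rule and the variable names are assumptions made for the example, not the authors' exact fusion algorithm.

```python
# Hedged sketch of fusing several side-information hypotheses. Weights are the
# inverse of each hypothesis's estimated virtual-channel noise variance, an
# illustrative choice: lower noise -> higher weight.
import numpy as np

def fuse_hypotheses(hypotheses, noise_vars):
    """Weighted per-pixel fusion of side-information hypotheses."""
    weights = 1.0 / (np.asarray(noise_vars, dtype=np.float64) + 1e-9)
    weights /= weights.sum()
    stacked = np.stack([h.astype(np.float64) for h in hypotheses], axis=0)
    return np.tensordot(weights, stacked, axes=1)   # weighted sum over hypotheses

# Example: two hypotheses, the noisier one contributes less to the fused frame
h1 = np.random.rand(72, 88) * 255
h2 = np.random.rand(72, 88) * 255
fused = fuse_hypotheses([h1, h2], noise_vars=[10.0, 40.0])
print(fused.shape)
```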

Relevance: 20.00%

Publisher:

Abstract:

The Bologna Process aimed to build a European Higher Education Area with the objective of promoting student mobility. The adoption of the Bologna Declaration directives requires a decentralized approach that accelerates student mobility and relies on frequently updated legislation. This paper proposes a personal system for managing a student's academic information. The system is supported by a flexible model that integrates, for instance, knowledge about the courses the student has attended or about a course to which the student wishes to apply. Essentially, this model holds (i) a Student's Academic Record with skills acquired in academic course units, professional experience or training, and (ii) an Individual Studies Plan, which places the student in a particular (iii) Course Plan setting the curricular structure to which the student wishes to apply.
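A minimal sketch of the three-part model described above is given below, using Python dataclasses. The field names and the missing-units helper are assumptions made for illustration; the paper does not prescribe this schema.

```python
# Illustrative data model: (i) Student's Academic Record, (ii) Individual
# Studies Plan, (iii) Course Plan. Field names are assumed for the example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AcademicRecord:                 # (i) skills acquired in course units, work or training
    student_id: str
    completed_units: List[str] = field(default_factory=list)
    professional_experience: List[str] = field(default_factory=list)

@dataclass
class CoursePlan:                     # (iii) curricular structure of the target course
    course_name: str
    required_units: List[str] = field(default_factory=list)

@dataclass
class IndividualStudiesPlan:          # (ii) places the student's record within a course plan
    record: AcademicRecord
    target: CoursePlan

    def missing_units(self) -> List[str]:
        """Course units still required before applying to the target course."""
        return [u for u in self.target.required_units
                if u not in self.record.completed_units]
```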

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a contrastive approach to three different ways of building concepts, after showing the similar syntactic possibilities that coexist in terms. From the semantic point of view, however, we can see that each language family distributes meaning differently. The most important point we try to show is that the differences found in the psychological process of communicating concepts should guide the translator and the terminologist in target text production and in the terminology planning process. Differences between languages in the information transmission process are due to the different roles played by the different types of knowledge. We distinguish here analytic-descriptive knowledge and analogical knowledge, among others. We also state that none of them is the best for determining the correctness of a term; rather, there have to be adequacy criteria in the selection process. The success of this concept building, or term building, is important when looking at the linguistic map of the information society.

Relevance: 20.00%

Publisher:

Abstract:

Associating translation and poetry often means confronting academic and scientific prejudices deeply rooted in Western culture. While translation is seen as indispensable to the exchange of information between different linguistic codes, and even as an enabler of scientific and technological advances arising from contact with other, economically more developed realities, the truth is that its role as a cultural "bridge" is far from universally accepted when the "literary treasures" of a national culture are at stake. This controversial character led me to consider analysing, from an eminently practical point of view, four dissimilar translations, from equally distinct periods, of William Blake's poem The Tyger. Having four translations by four different translators made it possible to compile a broader and more diversified corpus on which to base real, duly contextualized conclusions about the problems of translating poetry.

Relevance: 20.00%

Publisher:

Abstract:

Project work submitted for the degree of Master in Informatics and Computer Engineering.

Relevance: 20.00%

Publisher:

Abstract:

Introduction: Nowadays, the concept of ontology (an explicit specification of a conceptualization [Gruber, 1993]) is a key concept in knowledge-based systems in general and in the Semantic Web in particular. However, software agents do not always agree on the same conceptualization, which justifies the existence of several ontologies, even when they address the same domain of discourse. To solve or minimize the interoperability problem between these agents, ontology mapping has proved to be a good solution. Ontology mapping is the process in which semantic relations between entities of the source and target ontologies are specified at the conceptual level; these relations can in turn be used to transform instances based on the source ontology into instances based on the target ontology.

Motivation: In a dynamic environment such as the Semantic Web, agents change not only their data but also their structure and semantics (ontologies). This process, called ontology evolution, can be defined as the timely adaptation of an ontology to changes arising in the domain or in the objectives of the ontology itself, together with the consistent management of those changes [Stojanovic, 2004], and it may leave the mapping document inconsistent. In heterogeneous environments where interoperability between systems depends on the mapping document, the document must reflect the changes made to the ontologies. There are two solutions: (i) generate a new mapping document (a demanding process in terms of time and computational resources) or (ii) adapt the mapping document, correcting invalid semantic relations and creating new relations where necessary (a process that is less demanding in terms of time and computational resources, but highly dependent on information about the changes made). The main objective of this work is the analysis, specification and development of the mapping-document evolution process, so that it reflects the changes made during the ontology evolution process.

Context: This work was developed in the context of the MAFRA Toolkit. The MAFRA (MApping FRAmework) Toolkit is an application developed at GECAD that allows the declarative specification of semantic relations between entities of a source ontology and a target ontology, using the following main components: Concept Bridge, which represents a semantic relation between a source concept and a target concept; Property Bridge, which represents a semantic relation between one or more source properties and one or more target properties; and Service, which is applied to the Semantic Bridges (Property and Concept Bridges) to define how source instances are to be transformed into target instances. These concepts are specified in the SBO (Semantic Bridge Ontology) [Silva, 2004]. In the context of this work, a mapping document is an instantiation of the SBO, containing semantic relations between entities of the source ontology and the target ontology.

Mapping evolution process: The mapping evolution process is the process in which the entities of the mapping document are adapted to reflect changes in the mapped ontologies, preserving as far as possible the semantics of the specified semantic relations. If the source and/or target ontologies change, some semantic relations may become invalid, or new relations may be needed, so this process comprises two sub-processes: (i) correction of semantic relations and (ii) processing of new ontology entities. Processing new ontology entities requires the discovery and calculation of similarities between entities and the specification of relations according to the SBO ontology/language. These phases ("similarity measure" and "semantic bridging") are implemented in the MAFRA Toolkit, and the (semi-)automatic ontology mapping process is described in [Silva, 2004]. Correcting invalid SBO entities requires a good knowledge of the SBO ontology/language, its entities and relations, and all of its constraints, i.e. its structure and semantics. This procedure consists of (i) identifying the invalid SBO entities, (ii) determining the cause of their invalidity and (iii) correcting them as well as possible. In this phase, information coming from the ontology evolution process was used to improve the quality of the whole process.

Conclusions: Besides the mapping evolution process that was developed, one of the most important outcomes of this work was the acquisition of a deeper knowledge of ontologies, the ontology evolution process, mapping, and related topics, broadening horizons and raising awareness of the complexity of the problem at hand, which allows new challenges to be anticipated and envisaged for the future.
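The sketch below illustrates, in the spirit of the correction sub-process described above, how bridges whose source or target entities no longer exist after ontology evolution could be flagged as invalid. The class and field names are assumptions made for the example; they are not the SBO or MAFRA Toolkit API.

```python
# Illustrative correction pass over a mapping document: bridges referring to
# entities removed (or renamed) during ontology evolution are flagged invalid.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class ConceptBridge:                      # hypothetical stand-in for an SBO Concept Bridge
    source_concept: str
    target_concept: str

def invalid_bridges(bridges: List[ConceptBridge],
                    source_entities: Set[str],
                    target_entities: Set[str]) -> List[ConceptBridge]:
    """Return bridges broken by entity changes in either ontology."""
    return [b for b in bridges
            if b.source_concept not in source_entities
            or b.target_concept not in target_entities]

bridges = [ConceptBridge("src:Person", "dst:Human"),
           ConceptBridge("src:Car", "dst:Automobile")]
# "src:Car" was removed from the source ontology, so its bridge is reported
print(invalid_bridges(bridges, {"src:Person"}, {"dst:Human", "dst:Automobile"}))
```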

Relevance: 20.00%

Publisher:

Abstract:

Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, clustering analysis is used to generate visualization maps, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps; therefore, the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
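As a rough illustration of the two region-comparison measures mentioned above, the sketch below computes a Gutenberg-Richter b-value (using the standard maximum-likelihood estimator) and a mutual information score between two regions' per-window event counts. The time-window count, the completeness magnitude Mc, and the use of scikit-learn are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: G-R b-value per region and mutual information as a similarity
# measure between two regions' event-count histories.
import numpy as np
from sklearn.metrics import mutual_info_score

def gr_b_value(magnitudes: np.ndarray, mc: float = 4.0) -> float:
    """Maximum-likelihood b-value for events above the completeness magnitude Mc."""
    m = magnitudes[magnitudes >= mc]
    return np.log10(np.e) / (m.mean() - mc)

def region_similarity(times_a: np.ndarray, times_b: np.ndarray,
                      n_windows: int = 600) -> float:
    """Mutual information between two regions' event counts in common time windows."""
    t0 = min(times_a.min(), times_b.min())
    t1 = max(times_a.max(), times_b.max())
    edges = np.linspace(t0, t1, n_windows + 1)
    counts_a = np.histogram(times_a, edges)[0]
    counts_b = np.histogram(times_b, edges)[0]
    return mutual_info_score(counts_a, counts_b)
```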

Relevance: 20.00%

Publisher:

Abstract:

OBJECTIVE: To describe the patterns of deliveries in a birth cohort and to compare vaginal and cesarean section deliveries. METHODS: All children born to mothers from the urban area of Pelotas, Brazil, in 2004 were recruited for a birth cohort study. Mothers were contacted and interviewed during their hospital stay, when extensive information on the gestation, the birth and the newborn, along with maternal health history and family characteristics, was collected. Maternal characteristics and childbirth care financing (private versus public healthcare, SUS, patients) were the main factors investigated, along with a description of the distribution of C-sections according to day of the week and delivery time. The methods used were standard descriptive techniques, chi-square (χ²) tests for comparing proportions, and Poisson regression to explore the independent effect of C-section predictors. RESULTS: The overall C-section rate was 45%: 36% among SUS patients and 81% among private patients; 35% of C-sections were reported as elective. C-sections were more frequent on Tuesdays and Wednesdays, falling by about a third on Sundays, while normal deliveries were distributed uniformly over the week. Delivery time for C-sections differed markedly between public and private patients. Maternal schooling was positively associated with C-section among SUS patients, but not among private patients. CONCLUSIONS: C-sections were almost universal among the wealthier mothers and strongly related to maternal education among SUS patients. The patterns we describe are compatible with the idea that C-sections are largely done to suit the doctor's schedule. Drastic action is called for to change the current situation.
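For readers unfamiliar with the analysis approach mentioned in the methods, the sketch below shows Poisson regression with robust standard errors used to estimate prevalence ratios for C-section predictors. The variable names and the toy data frame are assumptions made for the example, not the cohort's actual variables.

```python
# Hedged sketch: Poisson regression (robust errors) for prevalence ratios of
# C-section predictors, on synthetic data with assumed variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "csection": rng.integers(0, 2, 500),          # 1 = cesarean delivery
    "schooling_years": rng.integers(0, 16, 500),  # maternal schooling
    "private_care": rng.integers(0, 2, 500),      # 1 = private patient
})

model = smf.glm("csection ~ schooling_years + private_care",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC0")                # robust (sandwich) standard errors
print(np.exp(result.params))                      # exponentiated coefficients = prevalence ratios
```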

Relevance: 20.00%

Publisher:

Abstract:

Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread in education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time, not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about, and present new ways of exploiting, an individual's learning. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities, ranging from the use of social software through widgets, computer gaming, and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.

Relevance: 20.00%

Publisher:

Abstract:

The discussion and analysis of the diverse outreach activities in this article provide guidance and suggestions for academic librarians who are interested in outreach and community engagement of any scale and nature. Cases are drawn from a wide spectrum and are particularly strong in the setting of large academic libraries, special collections and programming for multicultural populations.

The aim of this study is to present the results of research carried out regarding the needs, demand and consumption of European Union information by users of European Documentation Centres (EDC). A quantitative methodology was chosen, based on a questionnaire with 24 items. This questionnaire was distributed within the EDC of Salamanca, Spain, and the EDC of Porto, Portugal, during specific time intervals between 2010 and 2011. We examined the level of EU information that EDC users possess, and identified the factors that facilitate or hinder access to EU information, the topics most in demand, and the types of documents consulted. An analysis was made of the use that consumers of European information make of databases and of their behaviour during consultation. Although the sample used was not very significant owing to its small size, it is a faithful reflection of the scarce visits made to EDCs. This study can be of use to managers of EDCs, providing them with better knowledge of the information needs and demands of their users; ultimately this should lead to improvements in the services offered. The study lies within a field of research scarcely addressed in the specialized scholarly literature: European Union information.

Relevance: 20.00%

Publisher:

Abstract:

This paper suggests that the thought of the North American critical theorist James W. Carey provides a relevant perspective on communication and technology. Having as its background American social pragmatism and the progressive thinkers of the beginning of the 20th century (such as Dewey, Mead, Cooley, and Park), Carey built a perspective that brought together the political economy of Harold A. Innis and the social criticism of David Riesman and Charles W. Mills, and incorporated Marxist topics such as commodification and sociocultural domination. The main goal of this paper is to explore the connection established by Carey between modern technological communication and what he called the "transmissive model", a model which not only reduces the symbolic process of communication to instrumentalization and to information delivery, but also converges politically with capitalism and with power, control and expansionist goals. Conceiving communication as a process that creates symbolic and cultural systems, in which and through which social life takes place, Carey gives equal emphasis to the incorporation processes of communication. If symbolic forms and culture are ways of conditioning action, they are also influenced by technological and economic materializations of symbolic systems, and by other conditioning structures. In Carey's view, communication is never a disembodied force; rather, it is a set of practices in which conceptions, techniques and social relations co-exist. These practices configure reality or, alternatively, can refute, transform and celebrate it. Showing a sensibility attuned to the historical understanding of communication, media and information technologies, one of the issues Carey explored most was the history of the telegraph as a harbinger of the Internet, of its problems and contradictions. For Carey, the Internet was the contemporary heir of the communications revolution triggered by the prototype of transmission technologies, namely the telegraph in the 19th century. In the telegraph, Carey saw the prototype of many subsequent commercial empires based on science and technology, a pioneering model for complex business management, an example of conflicts of interest over the control of patents, an inducer of changes both in language and in structures of knowledge, and a promoter of a futurist and utopian view of information technologies. After a brief approach to Carey's communication theory, this paper focuses on his seminal essay "Technology and Ideology: The Case of the Telegraph", bearing in mind the prospect of the communication revolution introduced by the Internet. We maintain that this essay is of seminal relevance for critically studying the information society. Our reading of it highlights the reach, as well as the problems, of an approach which conceives the innovation of the telegraph as a metaphor for all innovations, announcing the modern stage of history and determining to this day the major lines of development of modern communication systems.