82 results for Knowledge representation
Abstract:
Dissertation submitted for the degree of Master in Electrical and Computer Engineering
Staging the Scientist: The Representation of Science and its Processes in American and British Drama
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in English and North American Studies
Abstract:
This dissertation presents an approach to three-dimensional obstacle detection for all-terrain robots. Given the huge amount of acquired information, the adversities such environments pose to an autonomous system, and the swiftness consequently required of each navigation decision, it is imperative that the 3-D perception system map obstacles and passageways as quickly and in as much detail as possible. This document presents a hybrid approach that brings the best of several methods together, combining the lightness of coarser analyses with the detail provided by more thorough ones. The former is realized by a terrain-slope mapping system built upon a low-resolution volumetric representation of the surrounding occupancy. For the latter's detailed evaluation, two novel metrics were conceived to discriminate the small depth discrepancies found between the distance measurements of a range scanner's beams. The hybrid solution resulting from the conjunction of these two representations provides a reliable answer to traversability mapping and a robust discrimination of penetrable vegetation from real obstructions. Two distinct robotic platforms made it possible to test the hybrid approach in very different applications: a boat, under a European project (ECHORD Riverwatch), and a terrestrial four-wheeled robot for a national project (the Introsys Robot).
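The coarse stage described in this abstract (slope mapping over a low-resolution representation of the surrounding terrain) can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual implementation; the height grid, cell size and slope threshold are assumptions made for the example.

```python
import numpy as np

def traversability_from_heights(height_grid, cell_size, max_slope_deg=25.0):
    """Coarse traversability map: mark a cell as an obstacle when the local
    terrain slope, estimated from a low-resolution height grid, exceeds a
    threshold. Finite differences between neighbouring cells approximate
    the terrain gradient."""
    dz_dy, dz_dx = np.gradient(height_grid, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg <= max_slope_deg  # True = traversable

# Usage: a toy 4x4 height grid (metres) with a steep step on the right side.
heights = np.array([
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
ok = traversability_from_heights(heights, cell_size=0.5)
```

Flat cells on the left remain traversable, while the 1 m step over a 0.5 m cell produces a slope well above the threshold and is flagged as an obstacle.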
Abstract:
This research aims to provide a better understanding of how firms stimulate knowledge sharing through the use of collaboration tools, in particular Emergent Social Software Platforms (ESSPs). It focuses on the distinctive applications of ESSPs and on the initiatives that help maximize their advantages. In the first part of the research, I itemized all types of existing collaboration tools and classified them into categories according to their capabilities, their objectives, and their capacity for promoting knowledge sharing. In the second part, based on an exploratory case study at Cisco Systems, I identified the main applications of an existing enterprise social software platform named Webex Social. By combining qualitative and quantitative approaches, and by combining data collected from survey results with the analysis of the company's documents, I expect to maximize the outcome of this investigation and reduce the risk of bias. Although effects cannot be universalized on the basis of a single case study, some utilization patterns emerged from the data collected, and potential trends in managing knowledge were observed. The results of the research also made it possible to identify most of the constraints experienced by the users of the firm's social software platform. Ultimately, this research should provide a primary framework for firms planning to create or implement a social software platform, and for firms seeking to increase adoption levels and promote the overall participation of users. It highlights the common traps that developers should avoid when designing a social software platform, and the capabilities such a platform should inherently carry to support an effective knowledge management strategy.
Abstract:
The term res publica (literally "thing of the people") was coined by the Romans to translate the Greek word politeia, which, as we know, referred to a political community organised in accordance with certain principles, amongst which the notion of the "good life" (as against exclusively private interests) was paramount. This ideal also came to be known as political virtue. To achieve it, it was necessary to combine the best of each "constitutional" type and avoid their worst aspects (tyranny, oligarchy and ochlocracy). Hence, the term acquired from the Greeks a sense of being a "mixed" and "balanced" system. Anyone who was entitled to citizenship could participate in the governance of the "public thing". This implied the institutionalization of open debate and confrontation between interested parties as a way of achieving the consensus necessary to ensure that man the political animal, who fought with words and reason, prevailed over his "natural" counterpart. These premises lie at the heart of the project which is now being presented under the title of Res Publica: Citizenship and Political Representation in Portugal, 1820-1926. The fact that it is integrated into the centenary commemorations of the establishment of the Republic in Portugal is significant, as it was the idea of revolution, with its promise of rupture and change, that inspired it. However, it has also sought to explore events that could be considered the precursors of democratization in the history of Portugal, namely the vintista, setembrista and patuleia revolutions. It is true that the republican regime was opposed to the monarchic. However, although the thesis that monarchy would inevitably lead to tyranny had held sway for centuries, it had also been long believed that the monarchic system could be as "politically virtuous" as a republic (in the strict sense of the word) provided that power was not concentrated in the hands of a single individual.
Moreover, various historical experiments had shown that republics could also degenerate into Caesarism and different kinds of despotism. Thus, when absolutism began to be overturned in continental Europe in the name of the natural rights of man and the new social pact theories, initiating the difficult process of (written) constitutionalization, the monarchic principle began to be qualified as a “monarchy hedged by republican institutions”, a situation in which not even the king was exempt from isonomy. This context justifies the time frame chosen here, as it captures the various changes and continuities that run through it. Having rejected the imperative mandate and the reinstatement of the model of corporative representation (which did not mean that, in new contexts, this might not be revived, or that the second chamber established by the Constitutional Charter of 1826 might not be given another lease of life), a new power base was convened: national sovereignty, a precept that would be shared by the monarchic constitutions of 1822 and 1838, and by the republican one of 1911. This followed the French example (manifested in the monarchic constitution of 1791 and in the Spanish constitution of 1812), as not even republicans entertained a tradition of republicanism based upon popular sovereignty. This enables us to better understand the rejection of direct democracy and universal suffrage, and also the long incapacitation (concerning voting and standing for office) of the vast body of “passive” citizens, justified by “enlightened”, property- and gender-based criteria. Although the republicans had promised in the propaganda phase to alter this situation, they ultimately failed to do so. Indeed, throughout the whole period under analysis, the realisation of the potential of national sovereignty was mediated above all by the individual citizen through his choice of representatives. 
However, this representation was indirect and took place at national level, in the hope that action would be motivated not by particular local interests but by the common good, as dictated by reason. This was considered the only way for the law to be virtuous, a requirement that was also manifested in the separation and balance of powers. As sovereignty was postulated as single and indivisible, so would be the nation that gave it soul and the State that embodied it. Although these characteristics were common to foreign paradigms of reference, in Portugal, the constitutionalization process also sought to nationalise the idea of Empire. Indeed, this had been the overriding purpose of the 1822 Constitution, and it persisted, even after the loss of Brazil, until decolonization. Then, the dream of a single nation stretching from the Minho to Timor finally came to an end.
Abstract:
Epistemology in the philosophy of mind is a difficult endeavor. Those who believe that our phenomenal life is different from other domains suggest that self-knowledge about phenomenal properties is certain and therefore privileged. Usually, this so-called privileged access is explained by the idea that we have direct access to our phenomenal life. This means that, in contrast to perceptual knowledge, self-knowledge is non-inferential. It is widely believed that this kind of directness involves two different senses: an epistemic sense and a metaphysical sense. Proponents of this view often claim that this is due to the fact that we are acquainted with our current experiences. The acquaintance thesis, therefore, is the backbone of the justification of privileged access. Unfortunately, the whole approach has a profound flaw. For the thesis to work, acquaintance has to be a genuine explanation. Since it is usually assumed that any knowledge relation between judgments and the corresponding objects is merely causal and contingent (e.g. in perception), the proponent of the privileged access view needs to show that acquaintance can do the job. In this thesis, however, I claim that the latter cannot be done. Based on considerations introduced by Levine, I conclude that this approach involves either the introduction of ontologically independent properties or a rather obscure knowledge relation. A proper explanation, however, cannot employ either of the two options. The acquaintance thesis is, therefore, bound to fail. Since the privileged access intuition seems to be vital to epistemology within the philosophy of mind, I will explore alternative justifications. After discussing a number of options, I will focus on the so-called revelation thesis. This approach states that by simply having an experience with phenomenal properties, one is in a position to know the essence of those phenomenal properties.
I will argue that, after finding a solution for the controversial essence claim, this thesis is a successful replacement explanation which maintains all the virtues of the acquaintance account without necessarily introducing ontologically independent properties or an obscure knowledge relation. The overall solution consists in qualifying the essence claim in the relevant sense, leaving us with an appropriate ontology for phenomenal properties. On the one hand, this avoids employing mysterious independent properties, since this ontological view is physicalist in nature. On the other hand, this approach has the right kind of structure to explain privileged self-knowledge of our phenomenal life. My final conclusion consists in the claim that the privileged access intuition is in fact veridical. It cannot, however, be justified by the popular acquaintance approach, but rather, is explainable by the controversial revelation thesis.
Abstract:
Hybrid knowledge bases combine ontologies with non-monotonic rules, joining the best of open-world ontologies and closed-world rules. Ontologies provide a good mechanism for sharing knowledge on the Web in a form that both humans and machines can understand; rules, on the other hand, can be used, e.g., to encode legal regulations or to map between sources of information. Given the dynamics of today's Web, it is important for hybrid knowledge bases to capture these dynamics and adapt accordingly. To achieve that, it is necessary to create mechanisms capable of monitoring the information flow on the Web. To date, there are no mechanisms that allow hybrid knowledge bases to monitor events and perform modifications autonomously. The goal of this thesis is therefore to create a system that combines hybrid knowledge bases with reactive rules, aiming to monitor events and perform actions over a knowledge base. To achieve this goal, a reactive system for the Semantic Web is developed in a logic-programming-based approach, accompanied by a language for heterogeneous rule base evolution based on the RIF Production Rule Dialect, a standard for exchanging rules over the Web.
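The combination of reactive rules with a knowledge base described in this abstract can be illustrated with a minimal event-condition-action (ECA) sketch, in the spirit of production-rule dialects such as RIF-PRD. All class, rule and fact names below are hypothetical; the thesis's actual system is logic-programming based, so this sketch only conveys the control flow.

```python
# Minimal ECA reactive layer over a set of facts (illustrative, not the
# thesis's API): each rule pairs an event pattern with a condition on the
# current facts and an action that may assert or retract facts.

class ReactiveKB:
    def __init__(self):
        self.facts = set()
        self.rules = []  # list of (event_pred, condition, action)

    def on(self, event_pred, condition, action):
        self.rules.append((event_pred, condition, action))

    def notify(self, event):
        """Dispatch an incoming event: run the action of every rule whose
        event pattern matches and whose condition holds on the facts."""
        for event_pred, condition, action in self.rules:
            if event_pred(event) and condition(self.facts):
                action(self.facts, event)

kb = ReactiveKB()
kb.facts.add(("tracked", "item42"))
kb.facts.add(("price", "item42", 10))
# Rule: when a source reports a price change for a tracked item,
# replace the stored price fact (knowledge base evolution).
kb.on(
    event_pred=lambda e: e[0] == "price_change",
    condition=lambda facts: ("tracked", "item42") in facts,
    action=lambda facts, e: (
        facts.discard(("price", "item42", 10)),
        facts.add(("price", e[1], e[2])),
    ),
)
kb.notify(("price_change", "item42", 12))
```

After the event is dispatched, the knowledge base holds the updated price fact and the old one has been retracted.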
Abstract:
Breast cancer is the most frequently diagnosed cancer in women. Scientific knowledge and technology have created many different strategies to treat this pathology. Radiotherapy (RT) is part of the current standard guidelines for most breast cancer treatments. However, radiation is a double-edged sword: although it may heal cancer, it may also induce secondary cancer. The contralateral breast (CLB) is an organ susceptible to absorbing doses during the treatment of the other breast, and is therefore at significant risk of developing a secondary tumor. New radiation techniques, with more complex delivery strategies and promising results, are being implemented and used in radiotherapy departments. However, some questions have to be properly addressed, such as: Is it safe to move to complex techniques to achieve better conformation of the target volumes in breast radiotherapy? What happens to the target volumes and surrounding healthy tissues? How accurate is dose delivery? What are the shortcomings and limitations of currently used treatment planning systems (TPS)?
The answers to these questions largely rely on Monte Carlo (MC) simulations using state-of-the-art computer programs to accurately model the different components of the equipment (target, filters, collimators, etc.) and obtain an adequate description of the radiation fields used, as well as a detailed geometric representation and the material composition of the organs and tissues involved. This work aims at investigating the impact of treating left breast cancer using different radiation therapy techniques (f-IMRT, i.e. forwardly planned intensity-modulated RT; inversely planned IMRT with 2 beams, IMRT2, and with 5 beams, IMRT5; and dynamic conformal arc RT, DCART) and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the BrainLAB iPlan TPS were used: Pencil Beam Convolution (PBC) and the commercial Monte Carlo (iMC). Furthermore, an accurate MC model of the linear accelerator used (a Trilogy, Varian Medical Systems) was built with the EGSnrc MC code to accurately determine the doses that reach the CLB. For this purpose it was necessary to model the new High Definition multileaf collimator, which had never before been simulated; the model developed has since been included in the EGSnrc MC package of the National Research Council Canada (NRC). The linac model was benchmarked against water measurements and later validated against the TPS calculations. The dose distributions in the planning target volume (PTV) and the doses to the organs at risk (OAR) were compared by analyzing dose-volume histograms; further statistical analysis was performed using IBM SPSS v20.
For PBC, all the techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, in the OAR, and in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue (right breast, right lung, heart and even the left lung) than the tangential techniques (f-IMRT and IMRT2). However, IMRT5 plans improved the dose distribution in the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART did not present advantages over any of the other techniques investigated. Differences were also found between the calculation algorithms: in general, PBC estimated higher doses for the PTV, ipsilateral lung and heart than the MC algorithms predicted. The MC algorithms presented similar results among themselves (within 2% differences). The PBC algorithm was considered inaccurate in determining the dose in heterogeneous media and in build-up regions; therefore, a major effort is being made at the clinic to acquire data to move from PBC to another calculation algorithm. Despite better PTV homogeneity and conformity, there is an increased risk of CLB cancer development when using non-tangential techniques. The overall results of the studies performed confirm the outstanding predictive power and accuracy in the assessment and calculation of dose distributions in organs and tissues made possible by the use of MC simulation techniques in RT.
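The comparison of techniques in this abstract relies on dose-volume histograms (DVHs). As a hedged illustration of what a cumulative DVH computes (not the iPlan or SPSS workflow used in the thesis; the dose samples and bin edges are invented for the example), a minimal sketch over a voxel dose sample:

```python
import numpy as np

def cumulative_dvh(dose_voxels, bin_edges):
    """Cumulative dose-volume histogram: for each dose level D, the
    fraction of the structure's volume receiving at least D Gy
    (assumes equal-sized voxels)."""
    dose = np.asarray(dose_voxels, dtype=float).ravel()
    return np.array([(dose >= d).mean() for d in bin_edges])

# Usage: toy dose samples (Gy) within a structure.
doses = [5, 10, 18, 22, 30, 45, 50, 50]
dvh = cumulative_dvh(doses, bin_edges=[0, 20, 40])
```

By construction the curve starts at 1.0 (the whole volume receives at least 0 Gy) and decreases monotonically; values such as V20 (fraction of the volume receiving at least 20 Gy) are read directly off the curve.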
Abstract:
This research seeks to design and implement a WebGIS application allowing high school students to work with information related to the disciplinary competencies of the competency-based teaching model in Mexico. This paradigm assumes that knowledge is acquired through the application of new technologies and linked to students' everyday life situations. The WebGIS provides access to maps of natural hazards in Mexico, e.g. volcanism, seismic activity, or hurricanes; the prototype's user interface was designed with special emphasis on the needs of high school students.
Abstract:
Instituto Politécnico de Lisboa (IPL) and Instituto Superior de Engenharia de Lisboa (ISEL); this work was partially supported by grant SPRH/PROTEC/67580/2010.
Abstract:
11th International Colloquium on Ancient Mosaics, October 16th-20th, 2009, Bursa, Turkey. Mosaics of Turkey and Parallel Developments in the Rest of the Ancient and Medieval World: Questions of Iconography, Style and Technique from the Beginnings of Mosaic until the Late Byzantine Era
Abstract:
This thesis focuses on the representation of Popular Music in museums by mapping, analyzing, and characterizing its practices in Portugal at the beginning of the 21st century. Now that museums' ability to shape public discourse is acknowledged, the examination of popular music discourses in museums is of the utmost importance for Ethnomusicology and Popular Music Studies, as well as for Museum Studies. The concept of 'heritage' is at the heart of this process. The study was designed with the aim of moving the exhibiting of popular music in museums forward through a qualitative inquiry of case studies. Data collection involved surveying pop-rock music exhibitions as a qualitative sampling of popular music exhibitions in Portugal from 2007 to 2013. Two of these exhibitions were selected as case studies: No Tempo do Gira-Discos: Um Percurso pela Produção Fonográfica Portuguesa, at the Museu da Música in Lisbon in 2007 (and at the Faculdade de Letras in 2009), and A Magia do Vinil, a Música que Mudou a Sociedade, at the Oficina da Cultura in Almada in 2008 (and at several other venues from 2009 to 2013). Two specific domains were observed: popular music exhibitions as instances of museum practice, and museum professionals. The first domain encompasses the types of objects selected for exhibition; the interactive museum practices fostered by the exhibitions; and the concepts and narratives used to address popular music discursively, as well as the interpretative practices they allow. The second domain focuses on the museum professionals and curators of popular music exhibitions as members of a group, namely their goals, motivations and perspectives. The theoretical frameworks adopted were drawn from the fields of ethnomusicology, popular music studies, and museum studies. The written materials of the exhibitions were subjected to discourse analysis. Semi-structured interviews with curators and museum professionals were also conducted and analysed.
From the museum studies perspective, the study suggests that the practice adopted by popular music museums largely matches that of conventional museums. From the ethnomusicological and popular music studies standpoint, the two case studies reveal two distinct conceptual worlds. The first exhibition, curated by an academic and an independent researcher, points to a mental configuration in which popular music is explained through a framework of genres supported by different musical practices; moreover, it is industry actors such as decision makers and gatekeepers that govern popular music, which implies that the visitors' romantic conception of the musician is to some extent dismantled. The second exhibition, curated by a record collector and specialist, is based on a more conventional kind of everyday historical speech that encodes a dichotomy between "good" and "bad" music. Data generated by a survey show that only one curator, in fact that of my first case study, has an academic background; the backgrounds of all the others are in some way similar to that of the curator of the second case study. Therefore, I conclude that the second case study best conveys the current practice of exhibiting Popular Music in Portugal.
Abstract:
This proposal aims to explore the use of available technologies for the video representation of sets and performers, in order to support composition processes and artistic rehearsals, focusing on representing the performer's body and movements and their relation to objects in the three-dimensional space of the performance. The project's main goal is to design and develop a system that can spatially represent performers and their movements, through capture and reconstruction processes using a camera device, and that can augment the three-dimensional space where the performance occurs by allowing interaction with virtual objects and by adding a video component, either for documentary purposes or for live performance effects (for example, applying video mapping techniques to captured video or projection during a performance).
Abstract:
Nowadays, the consumption of goods and services on the Internet is steadily increasing. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors, where customized products and services prevail. To survive and compete in today's markets, they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with large enterprises, these partnerships aim at sharing resources, knowledge and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To meet this demand, the development of a centralized information system is proposed, allowing enterprises to select and create dynamic manufacturing networks capable of monitoring the whole manufacturing process, including the assembly, packaging and distribution phases. Even networking partners from the same area hold multiple, heterogeneous representations of the same knowledge, each denoting its own view of the domain. Thus, conceptually, semantically and, consequently, lexically diverse knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. What is required is a framework, supported by a tool, that flexibly enables the identification, classification and resolution of such semantic heterogeneities. This tool will support the network in establishing semantic mappings, facilitating the integration of the various enterprises' information systems.
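The identification of semantic heterogeneities described in this abstract typically begins with lexical matching between the partners' vocabularies. The sketch below is an illustrative first pass only (the term lists and threshold are invented); a real alignment tool would also exploit structural and semantic evidence, not just string similarity.

```python
import difflib

def match_terms(terms_a, terms_b, threshold=0.8):
    """Naive candidate generation for semantic correspondences between two
    partners' vocabularies: pair terms whose normalized string similarity
    exceeds a threshold. Flags lexical candidates for later validation."""
    matches = []
    for a in terms_a:
        for b in terms_b:
            score = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
    return matches

# Usage: two hypothetical enterprise vocabularies for the same domain.
pairs = match_terms(["Order", "DeliveryDate", "Client"],
                    ["PurchaseOrder", "Customer", "delivery_date"])
```

With these toy vocabularies only the near-identical pair ("DeliveryDate", "delivery_date") passes the threshold; pairs such as ("Client", "Customer") are exactly the semantic (non-lexical) heterogeneities that require richer evidence to resolve.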
Abstract:
The lives of humans and most living beings depend on sensation and perception for the best assessment of the surrounding world. Sensory organs acquire a variety of stimuli that are interpreted and integrated in our brain, for immediate use or stored in memory for later recall. Among other aspects of reasoning, a person has to decide what to do with the available information. Emotions are classifiers of collected information, assigning a personal meaning to objects, events and individuals, and forming part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision-making and memory, by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new ways of acquiring and integrating information. But before data can be assessed for its usefulness, systems must capture it and ensure that it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information from the environment and from our body, using cloud-based services and Internet connections. The limitations of systems in handling sensory data, compared with our own sensory capabilities, constitute one identified problem. Another is the lack of interoperability between humans and devices, as devices do not properly understand humans' emotional states and needs. Addressing those problems is the motivation for the present research work. The mission hereby assumed is to incorporate sensory and physiological data into a Framework that manages collected data in a way modeled on human cognitive functions, supported by a new data model.
By learning from selected human functional and behavioural models and by reasoning over collected data, the Framework aims at evaluating a person's emotional state, empowering human-centric applications, along with the capability of storing episodic information about a person's life, with physiological indicators of emotional states, to be used by new-generation applications.