6 results for "Processamento do pedido de recuperação"
in the Repositório Institucional da Universidade de Aveiro - Portugal
Abstract:
The valorisation of different industrial wastes from the metal surface treatment/coating sector can be achieved by using them as raw materials in the formulation of ceramic pigments synthesized by the conventional ceramic method. This work assessed the feasibility of incorporating sludges generated by (i) anodizing processes, rich in aluminium; (ii) the nickel- and chromium-plating of taps, used as a source of nickel and chromium; and (iii) the chemical pickling of steel in a wire-drawing plant, rich in iron. These sludges were used alone, or together with commercial raw materials, to obtain ceramic pigments that stably colour various ceramic and glassy matrices. This solution also ensures the inertization of potentially hazardous species present in the wastes, yielding products that are harmless to public health. Each waste was characterized in terms of chemical and mineralogical composition, thermal behaviour, degree of toxicity, particle size distribution, moisture content, etc. The consistency of the sludge characteristics was verified by analysing batches collected at different times. The wastes consist essentially of metal hydroxides and were used after drying and deagglomeration; the aluminium anodizing sludge, however, underwent an additional heat treatment at 1400 °C. The pigment synthesis method comprised the following steps: (i) dosing; (ii) homogenization; (iii) calcination; (iv) washing and milling. The pigments were characterized by evaluating colour through diffuse reflectance spectroscopy and the CIELAB method, and by determining the relevant physico-chemical characteristics. Their performance was then tested in distinct ceramic products (bodies and glazes), assessing chromatic performance and stability. In a first phase, distinct pigment types were developed and characterized: (i) corundum-based pigments; (ii) Victoria green based on uvarovite; (iii) chromium-doped cassiterite violet; (iv) malayaite carmine; (v) blacks and browns based on the spinel structure. The studies of the malayaite carmine pigment and of the spinel-based black pigment were then deepened. The malayaite carmine pigment, CaSnSiO5:Cr2O3, is formulated with the sludge generated by the nickel/chromium-plating process. The influence of the sludge content on the synthesis temperature and on the chromatic quality was evaluated, in comparison with a pigment formulated from pure reagents. The black pigment, with a nickel-chromium-iron spinel structure, was formulated exclusively from the sludges generated by the chromium/nickel-plating and steel chemical pickling processes. Its chromatic characteristics and the degree of inertization of the toxic elements present were evaluated as a function of stoichiometry and heat treatment. A new system based on the hibonite structure (CaAl12O19) was also studied, which yields blue pigments and uses the chromium/nickel-plating sludge. The chromophore species (Ni2+ or Co2+) assume tetrahedral coordination when they substitute the Al3+ ions occupying the M5 sites of the hibonite lattice. The simultaneous formation of anorthite allows the synthesis temperature to be reduced. Beyond the innovative character of this solid-solution pigment, its chromatic quality and stability are attractive. Moreover, the cobalt or nickel contents are low compared with those used in commercial blue pigment formulations, which translates into significant economic and environmental advantages.
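For reference, the CIELAB evaluation mentioned above reduces each measured colour to three coordinates, L* (lightness), a* (green-red) and b* (blue-yellow), and the difference between a pigment batch and a reference is conventionally summarized by the Euclidean distance dE*ab. A minimal sketch of that comparison follows; the coordinate values are hypothetical, not measurements from the thesis:

    import math

    def delta_e_ab(lab1, lab2):
        """CIE76 colour difference: Euclidean distance in CIELAB space."""
        return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

    # Hypothetical coordinates: a reference carmine pigment vs. a batch
    # formulated with plating sludge (illustrative numbers only).
    reference = (45.2, 38.5, 12.1)    # (L*, a*, b*)
    sludge_batch = (44.7, 37.9, 12.8)

    print(f"dE*ab = {delta_e_ab(reference, sludge_batch):.2f}")
    # A dE*ab below ~1 is generally imperceptible; values around 2-3 are
    # often taken as the limit of acceptable batch-to-batch variation.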
Abstract:
The electronic storage of medical patient data is becoming a daily experience in most practices and hospitals worldwide. However, much of the available data is free-form text, a convenient way of expressing concepts and events, but especially challenging when one wants to perform automatic search, summarization or statistical analysis. Information Extraction can relieve some of these problems by offering a semantically informed interpretation and abstraction of the texts. MedInX, the Medical Information eXtraction system presented in this document, is the first information extraction system developed to process textual clinical discharge records written in Portuguese. The main goal of the system is to improve access to the information locked up in unstructured text and, consequently, the efficiency of the health care process, by allowing faster and more reliable access to quality health information, for both patients and health professionals. MedInX components are based on Natural Language Processing principles and provide several mechanisms to read, process and utilize external resources, such as terminologies and ontologies, when automatically mapping free-text reports onto a structured representation. The flexible and scalable architecture of the system also allowed its application to the task of Named Entity Recognition in a shared evaluation contest focused on Portuguese general-domain free-form texts. The evaluation of the system on a set of authentic hospital discharge letters indicates that it achieves a 95% F-measure on the task of entity recognition and 95% precision on the task of relation extraction. Example applications demonstrating the use of MedInX capabilities in real hospital settings are also presented in this document. These applications were designed to answer common clinical problems related to the automatic coding of diagnoses and other health-related conditions described in the documents, according to the international classification systems ICD-9-CM and ICF. The automatic review of the content and completeness of the documents is an example of another developed application, called the MedInX Clinical Audit system.
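The F-measure and precision figures quoted above combine, in the standard way, counts of true positives, false positives and false negatives against a gold annotation. A minimal sketch of that computation follows; the entity spans are invented for illustration and this is not MedInX code:

    def prf(gold, predicted):
        """Precision, recall and F1 over sets of (start, end, type) spans."""
        tp = len(gold & predicted)
        fp = len(predicted - gold)
        fn = len(gold - predicted)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Invented spans from a toy discharge letter: (start, end, entity type).
    gold = {(0, 9, "Condition"), (15, 27, "Medication"), (30, 41, "Procedure")}
    predicted = {(0, 9, "Condition"), (15, 27, "Medication"), (44, 50, "Condition")}

    p, r, f1 = prf(gold, predicted)
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")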
Abstract:
For e-government to truly exist, it is crucial to provide public information and documentation and to make citizens' access to it simple. A portion of these documents, not necessarily small, is unstructured natural-language text, and consequently lies beyond what current search systems can generally cope with and handle effectively. The thesis of this work is that access to these contents can be improved by systems that process natural language and create structured information, particularly when supported by semantics. To put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model integrating the creation of structured information and its provision to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and natural-language access; (3) assessment of the usability and acceptability of querying information as made possible by the prototype, and in consequence by the conceptual model, by users in a realistic scenario, including a comparison with existing forms of access. In addition to this evaluation, at another level related to technology assessment rather than to the model, the performance of the subsystem responsible for information extraction was also evaluated. The evaluation results show that the proposed model was perceived as more effective and useful than the alternatives. Together with the prototype's performance in extracting information from documents, comparable to the state of the art, the results demonstrate the feasibility and the advantages, with current technology, of using natural language processing and the integration of semantic information to improve access to unstructured natural-language contents. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are also better suited to e-government. Transparency in governance, active citizenship and greater agility in interacting with the public administration, among others, all require that citizens and businesses have quick and easy access to official information, even when it was originally created in natural language.
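As a rough illustration of the ontology-based extraction idea in phase (2), the sketch below matches text against labels drawn from a small ontology fragment and emits structured (surface form, class) pairs. The ontology terms, classes and document text are all invented; the prototype's actual pipeline is certainly more sophisticated than this:

    import re

    # Invented fragment of a public-administration ontology: label -> class.
    ONTOLOGY = {
        "building permit": "AdministrativeProcedure",
        "municipal council": "PublicBody",
        "urban rehabilitation": "PolicyArea",
    }

    def extract(text):
        """Return (surface form, ontology class) pairs found in the text."""
        hits = []
        for label, cls in ONTOLOGY.items():
            for m in re.finditer(re.escape(label), text, re.IGNORECASE):
                hits.append((m.group(0), cls))
        return hits

    doc = ("The Municipal Council approved the building permit under the "
           "urban rehabilitation programme.")
    print(extract(doc))
    # [('building permit', 'AdministrativeProcedure'),
    #  ('Municipal Council', 'PublicBody'), ('urban rehabilitation', ...)]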
Abstract:
The expectations citizens place on Information Technologies (IT) are increasing as IT has become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical or business. The limitations of traditional network designs in providing innovative and enhanced services and applications have therefore motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in the Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, while each service has its own requirements (e.g., Quality of Service, QoS). The need for more evolved forms of communication has thus driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms, Class-based bandwidth Over-Reservation (COR) and Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) needs, so that the QoS reservation signalling rate is reduced. COR and ECOR allow over-reservation parameters for CoSs to be defined dynamically, based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR by further optimizing the minimization of control overhead. We also propose a centralized control mechanism called the Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains an up-to-date view of the underlying network topology and the related link resource statistics in real time in order to control the overall network. It is important to mention that, in this Thesis, we use multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction of QoS control signalling and the related overhead without QoS violations or waste of resources. In addition, we provide a general-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource over-provisioning control. In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., tree and bandwidth usage information), such that each CDP is able to maintain good knowledge of the network topology and the related link resource statistics in real time. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of the Virtual Over-Provisioned Resource (VOPR), which is a share of the over-reservations of each interface allocated to each tree that uses the interface. Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs, as long as the VOPR of the tree is not exhausted. Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called the ACOR Protocol (ACOR-P) to support the centralized and decentralized designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees originating at the same CDP, allowing more session requests to be processed without synchronization than in ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information, preventing unnecessary synchronization during congestion periods, when VOPRs would be exhausted by every session request. The performance evaluation, through analytical and simulation results, proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of this Thesis presents the Survivable ACOR (SACOR) proposal, which supports stable operation of the QoS and network control mechanisms in the event of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterized by fast convergence times and differentiated traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for key network control sub-systems (e.g., QoS and resource control, traffic engineering, multicasting, etc.), and thus allow overall network control performance to be optimized.
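The core economy of over-reservation, as described above, is that a reservation signal is only needed when a class's bandwidth cushion is exhausted. A minimal sketch of that admission logic for one Class of Service on one interface follows; the names and the fixed over-reservation factor are illustrative (COR/ECOR derive such parameters dynamically from interface resource conditions, and a real implementation would also check link capacity):

    class CoSReservation:
        """Per-interface bandwidth bookkeeping for one Class of Service."""

        def __init__(self, over_factor=1.5):
            self.reserved = 0.0   # bandwidth reserved via signalling (Mb/s)
            self.used = 0.0       # bandwidth committed to admitted sessions
            self.over_factor = over_factor
            self.signals = 0      # reservation messages sent

        def admit(self, demand):
            """Admit a session; signal only when the cushion is exhausted."""
            if self.used + demand > self.reserved:
                # Over-reserve: request more than immediately needed so that
                # upcoming sessions can be admitted without signalling.
                self.reserved = (self.used + demand) * self.over_factor
                self.signals += 1
            self.used += demand

    cos = CoSReservation()
    for mb in (2.0, 1.0, 1.5, 0.5, 2.5):
        cos.admit(mb)
    print(f"sessions=5 signals={cos.signals} reserved={cos.reserved:.1f} Mb/s")
    # Prints 3 signals for 5 sessions; without over-reservation, every
    # session would have triggered a reservation message.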
Abstract:
The rapid evolution and proliferation of a world-wide computerized network, the Internet, has resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that also holds in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the text mining task that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from the scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition, a crucial initial task, with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize the extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since the previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget. We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributes to a more accurate updating of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
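As a toy illustration of the concept recognition and normalization task that systems such as Neji streamline (mapping surface forms in text to normalized knowledge-base identifiers), consider the sketch below. This is not Neji's implementation, and the lexicon entries and identifiers are invented:

    # Invented lexicon mapping surface forms to normalized concept IDs.
    LEXICON = {
        "brca1": ("GENE", "UNIPROT:P38398"),
        "breast cancer": ("DISORDER", "UMLS:C0006142"),
        "tamoxifen": ("CHEMICAL", "CHEBI:41774"),
    }

    def recognize(text):
        """Greedy longest-match concept recognition over a lowercased text."""
        tokens = text.lower().split()
        results, i = [], 0
        while i < len(tokens):
            for j in range(len(tokens), i, -1):   # try longest span first
                span = " ".join(tokens[i:j]).strip(".,;")
                if span in LEXICON:
                    results.append((span,) + LEXICON[span])
                    i = j
                    break
            else:
                i += 1
        return results

    sentence = "BRCA1 mutations increase breast cancer risk; tamoxifen helps."
    print(recognize(sentence))
    # [('brca1', 'GENE', 'UNIPROT:P38398'),
    #  ('breast cancer', 'DISORDER', 'UMLS:C0006142'),
    #  ('tamoxifen', 'CHEMICAL', 'CHEBI:41774')]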
Abstract:
The work presented here aims to contribute to the valorisation of rubber from end-of-life tyres, grounded in principles of environmental sustainability. The approach adopted to achieve this goal consists of incorporating tyre rubber into thermoplastic elastomer (TPE) formulations suitable for the injection moulding process. Studies are carried out on the morphology and the mechanical, thermal and rheological properties of polymer blends based on ground tyre rubber (GTR). The lack of adhesion between the GTR and the polymer matrix degrades the mechanical properties of the resulting materials. The strategy explored uses an elastomer to promote the encapsulation of the GTR and thereby obtain blends with the mechanical properties characteristic of a TPE. Ternary blends (TPEGTR) composed of high-melt-flow polypropylene (PP), GTR and virgin elastomer are analysed. The effect of different elastomers in the blends is examined: an ethylene-propylene-diene elastomer (EPDM) and a new ethylene-propylene elastomer (EPR) obtained by metallocene catalysis. The morphological study of the blends shows interaction between the materials, from which the viability of the adopted strategy for promoting GTR adhesion can be inferred. Elastomer incorporation increases the impact strength and elongation at break of the blends, which is attributed mainly to the encapsulation of the GTR and to the increased toughness of the thermoplastic matrix. To assess the influence of the crystalline structure of the TPEGTR blends on their mechanical behaviour, the crystallization process is analysed under isothermal and non-isothermal conditions, evaluating the effect of the materials constituting the elastomeric phase on the crystallization kinetics. For each of the developed blends, the Avrami model is used to evaluate the effect of temperature on the nucleation mechanism, on the morphology of the crystalline structures and on the crystallization rate. Capillary rheometry is used to study the rheological behaviour of the TPEGTR blends under steady-state conditions. The Cross-WLF model is used to describe the rheological behaviour of all the materials, yielding results similar to those obtained experimentally. PP, EPR and EPDM all exhibit shear-thinning behaviour, with the EPR behaving similarly to the PP and the EPDM showing more pronounced shear thinning. All the analysed blends are likewise shear-thinning, and the presence of GTR increases the viscosity. The parameters obtained from the Cross-WLF model are used to simulate the injection stage with commercial software. The results are validated experimentally through injection moulding, showing that the model is well suited to these blends. The work developed on TPEGTR blends thus contributes to the valorisation of rubber from end-of-life tyres, grounded in principles of environmental sustainability.
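For reference, the two models named above have compact closed forms: the Avrami equation gives the relative crystallinity X(t) = 1 - exp(-k t^n), where the exponent n reflects the nucleation mechanism and growth geometry, and the Cross-WLF model gives the shear-rate- and temperature-dependent viscosity used by injection-moulding simulators. A minimal numerical sketch follows; all parameter values are illustrative, not the fitted values from the thesis:

    import math

    def avrami(t, k, n):
        """Relative crystallinity X(t) = 1 - exp(-k * t**n)."""
        return 1.0 - math.exp(-k * t**n)

    def cross_wlf(shear_rate, T, n, tau_star, D1, A1, A2, T_star):
        """Cross-WLF viscosity eta(shear_rate, T) in Pa.s (T in kelvin)."""
        # Zero-shear viscosity from the WLF temperature dependence.
        eta0 = D1 * math.exp(-A1 * (T - T_star) / (A2 + (T - T_star)))
        # Cross model: shear thinning above the transition stress tau_star.
        return eta0 / (1.0 + (eta0 * shear_rate / tau_star) ** (1.0 - n))

    # Illustrative parameters, loosely in the range reported for PP grades.
    print(f"X(60 s) = {avrami(60.0, k=1e-4, n=2.5):.2f}")
    eta = cross_wlf(shear_rate=1000.0, T=503.15, n=0.30, tau_star=3.0e4,
                    D1=1.0e14, A1=28.3, A2=51.6, T_star=263.15)
    print(f"eta(1000 1/s, 230 C) = {eta:.1f} Pa.s")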