864 results for Component-based systems
Abstract:
MSc Dissertation in Computer Engineering
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Computer Engineering.
Abstract:
Cellulosic lyotropic liquid crystals have long been regarded as potential precursors for fibers competitive with spider silk or Kevlar, yet the processing of high-modulus materials from cellulose-based precursors has been hampered by their complex rheological behavior. In this work, using the Rheo-NMR technique, which combines deuterium NMR with rheology, we investigate the high shear rate regimes that may be of interest for the industrial processing of these materials. Whereas the low shear rate regimes have already been investigated with this technique in different works [1-4], the high shear rate range still lacks a detailed study. This work focuses on the orientational order in the system, both under shear and during the relaxation process that follows shear cessation, through the analysis of deuterium spectra of the deuterated solvent (water). At the shear rates analyzed, the cholesteric order is suppressed and a flow-aligned nematic is observed which, at the higher shear rates, after a certain time develops periodic perturbations that transiently annihilate the order in the system. During relaxation the flow-aligned nematic starts losing order due to the onset of cholesteric helices, leading to a period of very low order in which cholesteric helices with different orientations form from the aligned nematic, followed in the final stage by an increase in order at long relaxation times, corresponding to the development of aligned cholesteric domains. This study sheds light on the complex rheological behavior of chiral nematic cellulose-based systems and opens ways to improve their processing. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
Dissertation presented to obtain the degree of Doctor in Electrical Engineering, speciality of Digital Systems, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
In a highly competitive market, companies know that having quality products or providing good services is not enough to keep customers "faithful". Nowadays, quality of products/services, location and price are fundamental aspects that customers expect from every purchase, so customers look for other ways to distinguish between companies. This can happen either in a strictly materialistic way or through intangible factors, such as having their opinion valued or being part of a select group of "premium" customers. Therefore, companies must find ways to value and reward their customers in order to keep them "faithful" to their products or services. Loyalty systems are one means to achieve this goal; however, due to their nature and the way they are implemented, companies often end up with low acceptance, without achieving the intended objectives. In an era of technological revolution, where global average adoption of smartphones and tablets is 74% and 40% respectively [Our Mobile Planet, 2014], the opportunity to reinvent loyalty systems reappears. This thesis presents a new tool that relies on the latest technologies and aims to fulfill this market opportunity. The main idea is to take traditional loyalty concepts, such as stamp or points cards, and transform them into digital cards to be used in digital wallets, introducing an innovative technology component based on Apple's Passbook technology. The main goal is to create a platform for managing the cards' life cycle, allowing anyone to create, edit, distribute and analyze the data, and also to create a new communication channel with customers, improving the customer-supplier relationship and enhancing mobile marketing.
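As an illustration of the kind of digital card such a platform would distribute, the sketch below builds a minimal store-card payload following the structure of Apple's pass.json format used by Passbook; the identifiers, organization name and points value are hypothetical placeholders, not values from the thesis.

```python
import json

# Minimal sketch of a loyalty store card in Apple's pass.json structure.
# passTypeIdentifier, teamIdentifier, serialNumber and the points value are
# illustrative placeholders, not real credentials from the platform.
loyalty_pass = {
    "formatVersion": 1,
    "passTypeIdentifier": "pass.com.example.loyalty",  # hypothetical
    "teamIdentifier": "ABCDE12345",                     # hypothetical
    "serialNumber": "card-0001",
    "organizationName": "Example Coffee",
    "description": "Example loyalty card",
    "storeCard": {
        "primaryFields": [
            {"key": "points", "label": "POINTS", "value": 12}
        ]
    },
    "barcode": {
        "format": "PKBarcodeFormatQR",
        "message": "card-0001",
        "messageEncoding": "iso-8859-1",
    },
}

print(json.dumps(loyalty_pass, indent=2))
```

A real pass bundle would also be signed and pushed to the user's digital wallet; the platform described above would additionally track each card's life cycle (creation, updates, redemption) on the server side.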
Abstract:
This document describes a fault-tolerance model for distributed real-time systems. The purpose of proposing this model is to present a reliable, flexible solution that can be adapted to the needs of distributed real-time systems. Fault tolerance is an extremely important aspect of building real-time systems and its application brings numerous benefits. A design oriented towards fault tolerance contributes to better system performance by improving key aspects such as safety, reliability and availability. The work developed focuses on the prevention, detection and tolerance of logical (software) and physical (hardware) faults, and rests on a mostly time-triggered architecture combined with redundancy techniques. The model is concerned with efficiency and execution costs. To that end, traditional fault-tolerance techniques such as redundancy and migration are also used so as not to harm the service's execution time, that is, reducing the recovery time of the replicas when faults occur. This work proposes low-complexity run-time heuristics to determine where to replicate the components that make up the real-time software and to negotiate them in a bidding-based coordination mechanism. It adapts and extends algorithms that provide solutions even if interrupted; these algorithms are described in related research work and are used to form coalitions among cooperating nodes. The proposed model masks faults through active replication techniques, both virtual and physical, with concurrent execution blocks. It tries to improve or maintain the quality produced, practically without introducing significant information overhead into the system. The model ensures that the chosen machines, to which the agents will migrate, iteratively improve the levels of quality of service provided to the components, according to the availability of each machine. If the new quality configuration pays off for the overall quality of the service, an effort is made to accept new components at the expense of the quality of those already hosted locally. The nodes cooperating in the coalition maximize the number of parallel executions among the parallel components that compose the service, in order to reduce execution delays. The development of this thesis led to the proposed model and the results presented, and was supported by surveys of research and development work, literature and mathematical preliminaries. The work is also grounded in a list of bibliographic references.
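A minimal sketch of the kind of low-complexity bidding heuristic described above, assuming each coalition node bids for a component with the quality of service it can currently offer and the replica is placed on the highest bidder; the node names, the CPU-headroom metric and the bid function are illustrative assumptions, not the thesis's actual heuristics.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    available_cpu: float  # fraction of CPU currently free (hypothetical QoS metric)
    replicas: list

    def bid(self, component_load: float) -> float:
        # A node bids the QoS level it could still offer after hosting the component:
        # here, simply its remaining CPU headroom.
        return self.available_cpu - component_load

def place_replica(component_load: float, nodes: list) -> Node:
    """Pick the coalition node whose bid (offered QoS) is highest and host the replica there."""
    winner = max(nodes, key=lambda n: n.bid(component_load))
    winner.available_cpu -= component_load
    winner.replicas.append(component_load)
    return winner

# Usage example with three hypothetical coalition nodes.
nodes = [Node("n1", 0.6, []), Node("n2", 0.9, []), Node("n3", 0.4, [])]
winner = place_replica(0.3, nodes)
print(f"replica placed on {winner.name}")
```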
Abstract:
Dissertation to obtain the Master's degree in Electrical Engineering and Computer Science
Abstract:
Dissertation to obtain the degree of Doctor in Environmental Engineering.
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management.
Abstract:
Report for the scientific sojourn carried out at the Model-based Systems and Qualitative Reasoning Group (Technical University of Munich), from September until December 2005. Constructed wetlands (CWs), or modified natural wetlands, are used all over the world as wastewater treatment systems for small communities because they can provide high treatment efficiency with low energy consumption and low construction, operation and maintenance costs. Their treatment process is very complex because it includes physical, chemical and biological mechanisms such as microorganism oxidation, microorganism reduction, filtration, sedimentation and chemical precipitation. Moreover, these processes can be influenced by different factors. In order to guarantee the performance of CWs, an operation and maintenance program must be defined for each Wastewater Treatment Plant (WWTP). The main objective of this project is to provide computer support for defining the most appropriate operation and maintenance protocols to guarantee the correct performance of CWs. To reach this objective, the definition of models representing the knowledge about CWs has been proposed: the components involved in the sanitation process, the relations among these units, and the processes that remove pollutants. Horizontal Subsurface Flow CWs are chosen as a case study and the filtration process is selected as the first process to be modelled. However, the goal is to represent the process knowledge in such a way that it can be reused for other types of WWTP.
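A minimal sketch of how such reusable process knowledge might be represented as components linked to removal processes; the class names, the pollutant list and the influencing factors are illustrative assumptions, not the group's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class RemovalProcess:
    name: str                  # e.g. "filtration", "sedimentation"
    pollutants: list           # pollutants the process acts on
    influencing_factors: list  # factors that can degrade the process

@dataclass
class Component:
    name: str                  # e.g. "granular medium", "inlet distributor"
    processes: list = field(default_factory=list)

# Hypothetical fragment of a Horizontal Subsurface Flow CW model,
# focused on the filtration process chosen as the case study above.
filtration = RemovalProcess(
    name="filtration",
    pollutants=["total suspended solids"],
    influencing_factors=["accumulated solids", "hydraulic loading rate"],
)
granular_medium = Component("granular medium", [filtration])
print(granular_medium)
```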
Abstract:
The study investigates the possibility of incorporating fracture intensity and block geometry as spatially continuous parameters in GIS-based systems. For this purpose, deterministic methods have been implemented to estimate block size (Bloc3D) and joint frequency (COLTOP). In addition to measuring the block size, the Bloc3D method provides a 3D representation of the shape of individual blocks. These two methods were applied using field measurements (joint set orientation and spacing) performed over a large field area in the Swiss Alps. This area is characterized by a complex geology, a number of different rock masses and varying degrees of metamorphism. The spatial variability of the parameters was evaluated with regard to lithology and major faults. A model incorporating these measurements and observations into a GIS system to assess the risk associated with rock falls is proposed. The analysis concludes with a discussion of the feasibility of such an application in regularly and irregularly jointed rock masses, with persistent and impersistent discontinuities.
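For context, a common spacing-based estimate of block volume divides the product of the three joint set spacings by the sines of the angles between the sets; the sketch below, with hypothetical spacings and angles, illustrates that estimate. It is not necessarily the algorithm implemented in Bloc3D, whose 3D reconstruction of block shape is more involved.

```python
import math

def block_volume(s1, s2, s3, g1, g2, g3):
    """Spacing-based block volume estimate:
    Vb = (s1 * s2 * s3) / (sin g1 * sin g2 * sin g3),
    with joint set spacings in metres and inter-set angles in degrees."""
    sines = (math.sin(math.radians(g1))
             * math.sin(math.radians(g2))
             * math.sin(math.radians(g3)))
    return (s1 * s2 * s3) / sines

# Hypothetical joint set spacings (m) and angles between joint sets (degrees).
print(f"Vb = {block_volume(0.5, 0.8, 1.2, 90, 75, 60):.2f} m^3")
```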
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used; they aren't oriented towards helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or a county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and utilize a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) roads are but one sub-system of a much larger 'Road Based Transportation System'; 2) the size and activity level of the overall system are determined by market forces; 3) the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation; and 4) the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed. This involved creating a physical model to represent the size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, and then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used at full-system, partial-system, single-segment, project, and general design guide levels of analysis. The method appeared to be capable of remedying many of the defects of existing work methods and of answering society's transportation questions from a new perspective.
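A toy sketch of the "Total Cost of Transportation" idea described above: sum every cost of the road-based system (road work, vehicle operation, travel time, crashes, and so on) for each alternative and prefer the one with the lowest total. The cost categories and numbers below are purely illustrative, not taken from the Iowa database or spreadsheet model.

```python
# Hypothetical annual costs (millions of dollars) for two road-improvement
# alternatives; categories are illustrative, not the Iowa model's actual ones.
alternatives = {
    "keep existing gravel road": {
        "road maintenance": 0.4,
        "vehicle operation": 2.1,
        "travel time": 1.3,
        "crashes": 0.5,
    },
    "pave the road": {
        "road maintenance": 0.9,  # includes an annualized construction cost
        "vehicle operation": 1.6,
        "travel time": 1.0,
        "crashes": 0.4,
    },
}

totals = {name: sum(costs.values()) for name, costs in alternatives.items()}
for name, total in totals.items():
    print(f"{name}: {total:.1f} M$/year")
print("lowest total cost of transportation:", min(totals, key=totals.get))
```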
Abstract:
We describe a series of experiments in which we start with English-to-French and English-to-Japanese versions of an Open Source rule-based speech translation system for a medical domain, and bootstrap corresponding statistical systems. Comparative evaluation reveals that the rule-based systems are still significantly better than the statistical ones, despite the fact that considerable effort has been invested in tuning both the recognition and translation components; also, a hybrid system only marginally improved recall at the cost of a loss in precision. The result suggests that rule-based architectures may still be preferable to statistical ones for safety-critical speech translation tasks.
Abstract:
Background: Experimental evidence demonstrates that vegetable-derived extracts inhibit cholesterol absorption in the gastrointestinal tract. To further explore the mechanisms behind this, we modeled duodenal contents with several vegetable extracts. Results: Employing a widely used cholesterol quantification method based on a cholesterol oxidase-peroxidase coupled reaction, we analyzed the effects on cholesterol partition. The interferences evidenced were analyzed by studying specific and unspecific inhibitors of the cholesterol oxidase-peroxidase coupled reaction. Cholesterol was also quantified by LC/MS. We found a significant interference of diverse (cocoa- and tea-derived) extracts with this method. The interference was strongly dependent on the model matrix: in phosphate-buffered saline, the development of unspecific fluorescence could be inhibited by catalase (but not by heat denaturation), suggesting vegetable-extract-derived H2O2 production, whereas in bile-containing model systems this interference also comprised cholesterol oxidase inhibition. Several strategies, such as cholesterol standard addition and the use of suitable blanks containing vegetable extracts, were tested. When those failed, the use of a mass spectrometry-based chromatographic assay allowed quantification of cholesterol in models of duodenal contents in the presence of vegetable extracts. Conclusions: We propose that the use of cholesterol oxidase- and/or peroxidase-based systems for cholesterol analyses in foodstuffs should be carefully monitored, as important interferences in all components of the enzymatic chain were evident. The use of adequate controls, standard addition and, finally, chromatographic analyses solves these issues.
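As a reminder of how the standard-addition strategy mentioned above works in general, the sketch below computes an unknown concentration from the signal of the sample alone and of the sample spiked with a known amount of standard, assuming a linear response and negligible dilution; the readings and spike concentration are illustrative, not data from the study.

```python
def standard_addition(signal_sample, signal_spiked, added_conc):
    """Single-point standard addition assuming a linear response and
    negligible dilution: Cx = C_added * S_sample / (S_spiked - S_sample)."""
    return added_conc * signal_sample / (signal_spiked - signal_sample)

# Illustrative fluorescence readings and spike concentration (mg/dL).
print(f"cholesterol ~ {standard_addition(120.0, 300.0, 50.0):.1f} mg/dL")
```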
Abstract:
In the future, mobile devices such as mobile phones and handheld computers will be able to establish network connections using different access methods in different situations. These access methods have differing communication characteristics, for example in terms of latency, bandwidth and error rate. Wireless access methods are also characterized by strong variation of the connection's properties depending on the environment. To achieve the best performance and usability, a mobile device must be able to adapt to the communication method in use and to changes in the communication environment. An essential part of data communication are protocol stacks, which enable communication between systems and provide network services to the user applications on the terminal device. For protocol stacks to adapt to the characteristics of a given communication environment, the behavior of the protocol stack must be modifiable at run time. Traditionally, however, protocol stacks have been built to be immutable, so that adaptation on this scale is very difficult, if not impossible, to implement. This master's thesis addresses the construction of adaptive protocol stacks using a component-based software framework that allows protocol stacks to be modified at run time. By implementing an example system and measuring its performance in a varying communication environment, we show that adaptive protocol stacks are feasible to build and offer significant advantages, especially in future mobile devices.
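A minimal sketch of the component-based idea described in this thesis, assuming a stack assembled from pluggable layer components that can be swapped at run time when link characteristics change; the component names and the compression-on-low-bandwidth trigger are illustrative assumptions, not the thesis's actual framework.

```python
import zlib

class Layer:
    """A pluggable protocol-stack component."""
    def send(self, payload: bytes) -> bytes:
        raise NotImplementedError

class PlainTransport(Layer):
    def send(self, payload: bytes) -> bytes:
        return payload

class CompressingTransport(Layer):
    """Hypothetical layer swapped in when the link bandwidth drops."""
    def send(self, payload: bytes) -> bytes:
        return zlib.compress(payload)

class ProtocolStack:
    def __init__(self, layers):
        self.layers = layers

    def replace_layer(self, index, new_layer):
        # Run-time reconfiguration: swap one component without rebuilding the stack.
        self.layers[index] = new_layer

    def send(self, payload: bytes) -> bytes:
        for layer in self.layers:
            payload = layer.send(payload)
        return payload

stack = ProtocolStack([PlainTransport()])
print(len(stack.send(b"x" * 1000)))             # 1000 bytes on a fast link
stack.replace_layer(0, CompressingTransport())  # bandwidth dropped: adapt at run time
print(len(stack.send(b"x" * 1000)))             # far fewer bytes after compression
```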