928 results for Topologies on an arbitrary set
Abstract:
Nowadays, with an increasingly globalized economy and a highly competitive market, manufacturing companies strive more and more to adjust to customer requirements. For this reason, controlling the production flow becomes essential both for solving problems and for the continuous improvement of the process itself. Lean Manufacturing is a set of activities aimed at increasing responsiveness to change and minimizing production waste, constituting a genuinely innovative management undertaking. TPM (Total Productive Maintenance) is a continuous-improvement tool increasingly used by companies to improve the efficiency of their equipment and to meet waste-reduction targets, including the restoration and maintenance of standard operating conditions. The present work concerns the implementation of TPM on a piece of equipment (a reciprocating power hacksaw) installed in the Mechanical Workshops Laboratory of the Instituto Superior de Engenharia do Porto. In practical terms, the first phase of this work consisted of implementing the 5S tool at the workstation of the equipment under study. During this implementation it was possible to detect several anomalies in the equipment, which were then analysed to find their root causes. TPM was subsequently implemented in order to create better access conditions and to simplify inspection, lubrication and cleaning activities. In addition, improvement opportunities were implemented or proposed for some components, so as to reduce operating and setup times, contributing to an increase in the equipment's efficiency.
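TPM programmes typically track equipment efficiency through OEE (Overall Equipment Effectiveness), the product of availability, performance and quality. As a purely illustrative aid (the abstract reports no figures), a minimal sketch of the metric with hypothetical values for the hacksaw:

```python
# Illustrative sketch of OEE, the standard TPM equipment-efficiency metric.
# All figures below are hypothetical; the thesis reports no such values.

def oee(planned_time_min: float, downtime_min: float,
        ideal_cycle_min: float, total_count: int, good_count: int) -> float:
    """OEE = availability * performance * quality."""
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    performance = (ideal_cycle_min * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Hypothetical shift: 480 min planned, 60 min of stops, 2 min ideal cut
# cycle, 180 cuts, of which 171 within specification.
print(f"OEE = {oee(480, 60, 2, 180, 171):.1%}")  # about 71%
```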
Abstract:
Dissertation presented to obtain the degree of Doctor of Mathematics, in the specialty of Differential Equations, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
XML Schema is one of the most used specifications for defining types of XML documents. It provides an extensive set of primitive data types, ways to extend and reuse definitions, and an XML syntax that simplifies automatic manipulation. However, many features that make XML Schema Definitions (XSD) so interesting also make them rather cumbersome to read. Several tools to visualize and browse schema definitions have been proposed to cope with this issue. The novel approach proposed in this paper is to base XSD visualization and navigation on the XML document itself, using solely the web browser, without requiring a pre-processing step or an intermediate representation. We present the design and implementation of a web-based XML Schema browser called schem@Doc that operates over the XSD file itself. With this approach, XSD visualization is synchronized with the source file and always reflects its current state. The tool fits well in the schema development process and is easy to integrate into web repositories containing large numbers of XSD files.
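Since an XSD is itself an XML document, its declarations can be traversed directly with a generic XML parser, with no intermediate representation, which is the premise behind schem@Doc. A minimal sketch of that premise (in Python for brevity; the actual tool runs client-side in the browser, and the file name is hypothetical):

```python
# Minimal sketch: an XSD is itself XML, so its top-level declarations can be
# listed with any generic XML parser and no intermediate representation.
# (schem@Doc does this in the browser; "library.xsd" is a hypothetical file.)
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

root = ET.parse("library.xsd").getroot()
for child in root:
    tag = child.tag.removeprefix(XS)        # e.g. element, complexType
    name = child.get("name", "(anonymous)")
    print(f"{tag}: {name}")
```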
Abstract:
Standards for learning objects focus primarily on content presentation. They have already been extended to support automatic evaluation, but this is limited to exercises with a predefined set of answers. The existing standards lack the metadata required by specialized evaluators to handle types of exercises with an indefinite set of solutions. To address this issue, we extended existing learning object standards to the particular requirements of a specialized domain. We present a definition of programming problems as learning objects that is compatible both with Learning Management Systems and with systems performing automatic evaluation of programs. The proposed definition includes metadata that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements of the evaluation engine; and the roles of the different assets (test cases, program solutions, etc.). We also present the EduJudge project and its main services as a case study on the use of the proposed definition of programming problems as learning objects.
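To make the gap concrete, a hedged sketch of the three kinds of metadata listed above as a plain data structure; the field and role names are illustrative, not the element names of the proposed schema:

```python
# Hedged sketch of the three kinds of metadata the paper says existing
# standards cannot express. Field and role names are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class AssetRole(Enum):
    TEST_CASE = "test-case"
    SOLUTION = "program-solution"
    STATEMENT = "problem-statement"

@dataclass
class ProgrammingProblemLO:
    title: str
    evaluation_type: str            # e.g. "black-box test execution"
    engine_requirements: list[str]  # e.g. compilers the engine needs
    assets: dict[str, AssetRole] = field(default_factory=dict)

lo = ProgrammingProblemLO(
    title="Sum of two integers",
    evaluation_type="black-box test execution",
    engine_requirements=["gcc >= 9"],
    assets={"tests/t01.txt": AssetRole.TEST_CASE,
            "solutions/sum.c": AssetRole.SOLUTION},
)
print(lo.evaluation_type, list(lo.assets))
```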
Abstract:
The amount and variety of multimedia content currently available pose a challenge to users, since the space for searching and choosing sources and content exceeds users' time and processing capacity. This problem of selecting information from large, heterogeneous data sets according to the user's profile is complex and requires specific tools. Recommender Systems arise in this context: they are able to suggest items that match the user's tastes, interests or needs, i.e., their profile, using artificial intelligence methodologies. The main goal of this thesis is to demonstrate that it is possible to recommend multimedia content in useful time from the user's personal and social profile, relying exclusively on public, heterogeneous data sources. To this end, a content-based multimedia Recommender System was designed and developed, i.e., one based on item features, on the user's history and personal preferences, and on the user's social interactions. The recommended multimedia content, i.e., the items suggested to the user, comes from the British broadcaster, the British Broadcasting Corporation (BBC), and is classified according to the categories of BBC programmes. The user profile is built taking into account history, context, personal preferences and social activities. YouTube is the source of personal history used, making it possible to simulate the main source of this kind of data, the Set-Top Box (STB). The user's history consists of the set of YouTube videos and BBC programmes watched by the user. YouTube video content is classified according to YouTube's own video categories, which are then mapped onto the BBC programme categories. The social information, which comes from the social networks Facebook and Twitter, is collected through the Beancounter platform. The user's social activities are filtered to extract films and series, which are in turn semantically enriched using open linked-data repositories. In this case, films and series are classified by IMDb genres and subsequently mapped onto the BBC programme categories. Finally, the user's context information and explicit preferences, expressed through ratings of the recommended items, are also taken into account. The system developed makes recommendations in real time based on Facebook and Twitter activities, on the history of YouTube videos and BBC programmes watched, and on explicit preferences. Tests were carried out with five users, and the system's average response time to create the initial set of recommendations was 30 s. Personalized recommendations are generated and updated upon the user's explicit request.
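The core mechanism described, mapping items from heterogeneous sources onto the common BBC programme categories and matching them against a profile built from the viewing history, can be sketched as a category histogram ranked by cosine similarity. All category names, mappings and data below are invented for illustration:

```python
# Sketch of the core idea: map items from heterogeneous sources onto common
# BBC programme categories, build the user profile as a category histogram
# from the viewing history, and rank candidates by cosine similarity.
# All categories, mappings and titles below are illustrative.
import math
from collections import Counter

YT_TO_BBC = {"Comedy": "Comedy", "Science & Technology": "Factual"}  # toy map

def profile(history_categories: list[str]) -> Counter:
    return Counter(history_categories)

def cosine(p: Counter, q: Counter) -> float:
    dot = sum(p[k] * q[k] for k in p)
    norm = (math.sqrt(sum(v * v for v in p.values())) *
            math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

user = profile([YT_TO_BBC["Comedy"], "Factual", "Comedy", "Drama"])
candidates = {"Panorama": Counter({"Factual": 1}),
              "Mock the Week": Counter({"Comedy": 1})}
ranked = sorted(candidates, key=lambda c: cosine(user, candidates[c]),
                reverse=True)
print(ranked)  # programmes closest to the user's category profile first
```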
Abstract:
With the ageing of the population, concerns about ensuring its well-being grow, creating the need to develop tools that allow this segment of the population to be monitored continuously. The use of smartphones by the elderly can be crucial to their well-being and autonomy, contributing to the collection of important information, since smartphones are often equipped with sensors that can give the caregiver valuable indications about the patient's current state. These sensors can provide data on the patient's physical activity, as well as detect falls or compute position, with the help of the accelerometer, the gyroscope and the magnetic field sensor. However, such functionalities necessarily require a minimum sensor sampling frequency, so that the algorithms determining these parameters can do so as accurately as possible. Since patients do not always carry their smartphone while at home, creating AAL (Ambient Assisted Living) environments using external devices that can be worn by patients may also be a suitable solution. These devices usually contain the same sensors as smartphones and communicate with them through wireless technologies such as Bluetooth Low Energy. In this work, the possibility of changing the sensor sampling frequency on different operating systems was assessed, with modifications being made to the default installations of some open operating systems. In order to enable an AAL solution based on an external device, services and profiles were implemented on such a device, the SensorTag.
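Why a minimum sampling frequency matters can be illustrated with a simple magnitude-threshold fall detector over accelerometer samples; the 50 Hz rate and 2.5 g threshold below are illustrative assumptions, not values from the thesis:

```python
# Hedged sketch of why the sensor sampling rate matters: a simple
# magnitude-threshold fall detector over accelerometer samples. The 50 Hz
# rate and the 2.5 g threshold are illustrative, not the thesis's values.
import math

SAMPLE_RATE_HZ = 50        # rate assumed adequate for catching impact spikes
IMPACT_THRESHOLD_G = 2.5   # total acceleration indicating a hard impact

def detect_fall(samples: list[tuple[float, float, float]]) -> bool:
    """samples: (x, y, z) accelerations in g, taken at SAMPLE_RATE_HZ."""
    for x, y, z in samples:
        if math.sqrt(x * x + y * y + z * z) > IMPACT_THRESHOLD_G:
            return True    # an impact spike was observed
    return False

# A short burst: rest (about 1 g) followed by an impact spike.
burst = [(0.0, 0.0, 1.0)] * 10 + [(1.8, 0.5, 2.1)]
print(detect_fall(burst))  # True
```

At too low a sampling rate the brief impact spike can fall between samples, which is why the thesis investigates how to raise the sensor frequency on each operating system.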
Abstract:
This study builds on a previous experimental work in which embedded cylindrical heaters were applied to a pultrusion machine die and the resulting energetic performance was compared with that achieved with the former heating system based on planar resistances. The previous work led to the conclusion that the use of embedded resistances significantly enhances the energetic performance of the pultrusion process, yielding a 57% decrease in energy consumption. However, that study was developed on the basis of an existing pultrusion die, which allowed only a single relative position for the heaters. In the present work, new relative positions for the heaters were investigated in order to optimise the heat distribution and the energy consumption. Finite Element Analysis was applied as an efficient tool to identify the best relative position of the heaters within the die, taking into account the usual process parameters and the control system already tested in the previous study. The analysis was first developed for eight cylindrical heaters arranged in four different location plans. In a second phase, in order to refine the results, a new approach was adopted using sixteen heaters with the same total power. The final results show that the correct positioning of the heaters can contribute a reduction of about 10% in energy consumption, decreasing production costs and leading to a better eco-efficiency of the pultrusion process.
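A much-simplified illustration of the positioning study: steady one-dimensional conduction with a localized heat source, solved by finite differences, comparing two heater positions by the resulting temperature spread. Geometry, power and boundary values are invented; the thesis used full FEA of the actual die:

```python
# Much-simplified sketch of the positioning study: steady 1-D conduction
# with a localized heat source, solved by finite differences, with heater
# positions compared by temperature uniformity. Geometry, power and
# boundary values are invented; the thesis used full FEA of the real die.
import numpy as np

def steady_profile(n: int, heater_idx: int, q: float = 50.0) -> np.ndarray:
    # Solve T[i-1] - 2*T[i] + T[i+1] = -q at the heater node,
    # with fixed 20 C boundaries elsewhere.
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 20.0
    b[heater_idx] = -q
    return np.linalg.solve(A, b)

for pos in (10, 25):                   # two candidate heater positions
    T = steady_profile(50, pos)
    print(f"heater at node {pos}: spread = {T.max() - T.min():.1f} C")
```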
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in columns with random packing, using a counter-current flow between the phases. This work proposes a new column design methodology, valid for any type of packing and contaminant, which avoids the need for an arbitrarily chosen diameter. It also avoids the usual graphical Eckert correlations for pressure drop: the hydraulic features are chosen beforehand as a design criterion. The design procedure was translated into a convenient algorithm in the C++ language. A column was built in order to test the design and the theoretical steady-state and dynamic behaviour. The experiments were conducted using a solution of chloroform in distilled water. The results allowed a correction of the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control experimentally. To best describe the column behaviour in steady-state and dynamic conditions, an original mathematical model was developed. It consists of a system of two nonlinear partial differential equations (distributed parameters). When the flows are steady the system becomes linear, although no evident analytical solution exists. In steady state the resulting ODE can be solved by analytical methods; in the dynamic case, discretization of the PDEs by finite differences overcomes this difficulty. A numerical algorithm was used to estimate the contaminant concentrations in both phases along the column. The large number of resulting algebraic equations and the impossibility of setting up a recursive procedure did not allow the construction of a generalized program, but an iterative procedure developed in a spreadsheet allowed the simulation. The solution is stable only for similar discretization steps; if different values for the time/space discretization parameters are used, the solution easily becomes unstable. The dynamic behaviour of the system was simulated for the usual liquid-phase perturbations: step, impulse, rectangular pulse and sinusoidal. The final results show no strange or unpredictable behaviours.
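The discretization idea can be sketched for one transported phase with interphase mass transfer, using an explicit upwind scheme; the thesis model couples two such equations (liquid and gas), and all coefficients below are illustrative:

```python
# Hedged sketch of the discretization: explicit upwind finite differences
# for one transported phase with interphase mass transfer,
#   dc/dt = -u * dc/dz - k * (c - c_eq).
# The thesis couples two such equations (liquid and gas); the coefficients
# here are illustrative. As the abstract notes, the explicit scheme is only
# stable for matched steps (CFL condition: u*dt/dz <= 1).
import numpy as np

u, k, c_eq = 0.05, 0.02, 0.0    # velocity m/s, transfer coeff 1/s, equilib.
dz, dt, nz, nt = 0.05, 0.5, 40, 600
assert u * dt / dz <= 1.0       # CFL stability condition

c = np.zeros(nz)
for _ in range(nt):
    c[0] = 1.0                  # step perturbation at the liquid inlet
    c[1:] = (c[1:] - u * dt / dz * (c[1:] - c[:-1])
             - k * dt * (c[1:] - c_eq))
print(c[-1])                    # outlet response approaching steady state
```

Violating the CFL condition (e.g. a larger dt for the same dz) reproduces exactly the instability the abstract reports for mismatched time/space discretization values.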
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Artistic Education, specialization in Theatre in Education
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering
Abstract:
Sandpit exploitation near Lisbon allowed the collection of many Miocene non-marine fossils. These sands are part of the mostly marine Miocene series of the Lower Tagus basin. This particularly favourable situation led several researchers to address marine-continental correlations. The difficulties often concern methodological aspects, and some poorly grounded interpretations have exerted a lasting influence; a critical approach is presented here. Analysis requires data. Methods based upon models often lead to the temptation of fitting data in order to confirm a priori conclusions, or of mixing data as if of equal statistical value when they do not have the same weight at all. Uncritical repetition of erroneous interpretations for many years "upgraded" them into absolute truth. Another point is endemism versus europeism. Miocene mammals from Lisbon compared well with corresponding contemporaneous French taxa, while this was apparently not true for Spanish ones. Too much emphasis had been placed on the endemic character of Spanish, or even regional, mammalian faunas. Nationalist bias and sensationalism also weigh, albeit negatively. Meanwhile, nearly all the more evident examples, such as the rhinoceros Hispanotherium, have been discredited as Iberian endemisms. Taxa may appear endemic simply because they have not yet been found elsewhere. At least for medium- to large-sized mammals, with their huge geographic distributions, faunal differences depend much more on ecology, climate and environmental conditions. Emphasis on differences may also come from researchers who are often in a precarious situation and badly need to achieve short-term, preferably sensational, results. Overvalued differences may mask real similarities. Unethical and unscientific behaviour is further encouraged by "nomina nuda" tricks, which may simply be a way to circumvent or cheat the Priority Rule. On the other hand, access to communication networks may present as sensational novelties items that are not new at all, misleading the audience. A new class of "science people" has arisen, created by the media and not by the value of their real achievements. Sedimentation processes and discontinuities, often regarded as absolutely precise dating tools, are also discussed, as are some geochemical and paleomagnetic interpretations. A very good chronological frame has been obtained for the basin under study on the basis of an impressive set of data, providing a rather detailed and accurate framework for Miocene marine-continental correlations.
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, until now no unified schedulability theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). The new theory is based on exact schedulability tests, thereby also overcoming many sources of pessimism in the existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled in the new analysis all the overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of the new theory are evaluated by an extensive set of experiments.
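The interplay between schedulability testing and task assignment follows a generic pattern that can be sketched as first-fit assignment driven by a test; the simple utilization bound below is a stand-in for the article's exact (and far less pessimistic) tests for S-EKG and NPS-F:

```python
# Hedged sketch of the generic pattern the article refines: task assignment
# driven by a schedulability test. The utilization bound here is a simple
# stand-in; the article uses exact tests for S-EKG/NPS-F.
def fits(bin_util: float, task_util: float, bound: float = 1.0) -> bool:
    return bin_util + task_util <= bound   # stand-in schedulability test

def first_fit(task_utils: list[float], n_bins: int) -> list[list[float]]:
    bins: list[list[float]] = [[] for _ in range(n_bins)]
    for u in sorted(task_utils, reverse=True):
        for b in bins:
            if fits(sum(b), u):
                b.append(u)
                break
        else:
            raise ValueError(f"task with utilization {u} does not fit")
    return bins

print(first_fit([0.6, 0.5, 0.4, 0.3, 0.2], n_bins=2))
# -> [[0.6, 0.4], [0.5, 0.3, 0.2]]
```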
Abstract:
Energy consumption is one of the major issues for modern embedded systems. Early power-saving approaches mainly focused on dynamic power dissipation, while neglecting static (leakage) energy consumption. However, technology improvements have led to static power dissipation increasingly dominating. Addressing this issue, hardware vendors have equipped modern processors with several sleep states. We propose a set of leakage-aware energy management approaches that reduce the energy consumption of embedded real-time systems while respecting their real-time constraints. Our algorithms are based on the race-to-halt strategy, which tends to run the system at top speed in order to create long idle intervals, which are then used to deploy a sleep state. The effectiveness of our algorithms is illustrated with an extensive set of simulations that show up to an 8% reduction in energy consumption over existing work at high utilization. The complexity of our algorithms is lower than that of state-of-the-art algorithms. We also eliminate assumptions made in the related work that restrict the practical applicability of the respective algorithms. Moreover, a novel study of the relation between the use of sleep intervals and the number of pre-emptions is presented, based on a large set of simulation results, in which our algorithms reduce the observed number of pre-emptions in all cases. Our results show that sleep states in general can save up to 30% of the overall number of pre-emptions when compared to the sleep-agnostic earliest-deadline-first algorithm.
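The race-to-halt decision itself is simple: once an idle interval has been created, enter the deepest sleep state whose break-even time the interval exceeds. A hedged sketch with invented sleep-state parameters (not those of any specific processor or of the paper's model):

```python
# Hedged sketch of the race-to-halt decision: after running at top speed to
# create an idle interval, enter the deepest sleep state whose break-even
# time the interval exceeds. All parameters below are invented.
from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    power_mw: float        # power drawn while in this state
    break_even_ms: float   # minimum idle length that pays off the
                           # enter/exit energy overhead

STATES = [SleepState("deep", 0.5, 12.0),   # ordered deepest first
          SleepState("light", 5.0, 2.0)]

def pick_state(idle_ms: float) -> str:
    for s in STATES:
        if idle_ms >= s.break_even_ms:
            return s.name
    return "busy-idle"     # interval too short for any sleep state

print(pick_state(15.0), pick_state(3.0), pick_state(0.5))
# -> deep light busy-idle
```

Longer idle intervals also mean fewer wake-ups mid-job, which is the intuition behind the reported reduction in pre-emptions.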
Abstract:
Hand-off (or hand-over), the process by which mobile nodes select the best access point available for transferring data, has been well studied in wireless networks. The performance of a hand-off process depends on the specific characteristics of the wireless links. In the case of low-power wireless networks, hand-off decisions must be taken carefully, considering the unique properties of inexpensive low-power radios. This paper addresses the design, implementation and evaluation of smart-HOP, a hand-off mechanism tailored for low-power wireless networks. This work has three main contributions. First, it formulates the hard hand-off process for low-power networks (such as typical wireless sensor networks, WSNs) with a probabilistic model, to investigate the impact of the most relevant channel parameters through an analytical approach. Second, it confirms the probabilistic model through simulation and further elaborates on the impact of several hand-off parameters. Third, it fine-tunes the most relevant hand-off parameters via an extended set of experiments in a realistic experimental scenario. The evaluation shows that smart-HOP performs well in the transitional region, achieving a relative delivery ratio above 98 percent and hand-off delays in the order of a few tens of milliseconds.
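A hard hand-off rule of the kind smart-HOP tunes can be sketched as a double threshold with a stability window (hysteresis avoids ping-pong between access points); the threshold and window values below are invented, not the paper's tuned parameters:

```python
# Hedged sketch of a hard hand-off rule of the kind smart-HOP tunes: switch
# only when the serving link falls below a low threshold and a candidate
# stays above a higher one for a stability window. The hysteresis margin
# avoids ping-pong. All values are invented, not the paper's parameters.
LOW_RSSI_DBM = -90    # serving link considered degraded below this
HIGH_RSSI_DBM = -85   # candidate must exceed this (hysteresis margin)
WINDOW = 3            # consecutive samples the candidate must sustain

def should_handoff(serving: list[int], candidate: list[int]) -> bool:
    recent_s, recent_c = serving[-WINDOW:], candidate[-WINDOW:]
    return (all(r < LOW_RSSI_DBM for r in recent_s) and
            all(r > HIGH_RSSI_DBM for r in recent_c))

serving = [-80, -88, -92, -93, -95]
candidate = [-89, -86, -84, -83, -82]
print(should_handoff(serving, candidate))  # True: degraded + stable target
```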
Abstract:
Arguably, the most difficult task in text classification is choosing an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering, by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, while being much simpler, in the sense that they do not require any text pre-processing or feature engineering.
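A classic universal dissimilarity measure of this kind is the Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) for a compressor C; a sketch combining it with a 1-nearest-neighbour rule, using gzip as a stand-in compressor (the paper's exact measures and classifiers may differ):

```python
# Sketch of the dissimilarity-based idea using the Normalized Compression
# Distance, a classic universal (feature-free) measure, with gzip as a
# stand-in compressor and a 1-nearest-neighbour rule. The paper's exact
# measures and classifiers may differ; the training texts are toy data.
import gzip

def C(s: str) -> int:
    return len(gzip.compress(s.encode()))

def ncd(x: str, y: str) -> float:
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

train = [("i loved this film, wonderful acting", "pos"),
         ("a delightful and moving story", "pos"),
         ("dull, predictable and a waste of time", "neg"),
         ("terrible script and awful pacing", "neg")]

def classify(text: str) -> str:
    return min(train, key=lambda t: ncd(text, t[0]))[1]

print(classify("wonderful, moving film"))  # label of the nearest text
```

Note there is no tokenization, stemming or feature design anywhere: the compressor alone supplies the notion of similarity, which is precisely the simplicity claim made above.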