60 results for reject handling

at Instituto Politécnico do Porto, Portugal


Relevance:

20.00%

Publisher:

Abstract:

Nowadays, smartphones and other mobile devices are being endowed with ever greater computational power and are able to run a wide range of applications, from simple note-taking programs to sophisticated navigation software. However, even with the evolution of their hardware, current mobile devices still do not match the capabilities of desktop or laptop computers. One possible solution to this problem is to distribute the application, executing parts of it on the local device and the rest on other devices connected to the network. Additionally, some types of applications, such as multimedia applications, electronic games or immersive-environment applications, have Quality of Service requirements, particularly real-time ones. This thesis proposes a remote code execution system for distributed systems with real-time constraints. The proposed architecture suits systems that need to execute the same set of functions periodically and in parallel with real-time guarantees, even when the execution times of those functions are unknown. The proposed platform was developed for mobile systems running the Android operating system.

Relevance:

20.00%

Publisher:

Abstract:

Mobile applications are becoming increasingly complex and are making heavier demands on local system resources. Moreover, mobile systems are nowadays more open, allowing users to add more and more applications, including third-party ones. From this perspective, users will increasingly want to run on their devices applications whose demands exceed the locally available resources. It is therefore important to provide frameworks that allow applications to benefit from resources available on other nodes, migrating some or all of their services to those nodes according to the user's needs. These requirements are even more stringent when users want to run Quality of Service (QoS) aware applications, such as voice or video. The resources required to guarantee the QoS levels demanded by an application can vary with time, so applications should be able to reconfigure themselves. This paper proposes a QoS-aware, service-based framework able to support distributed, migration-capable, QoS-enabled applications on top of the Android operating system.

Relevance:

20.00%

Publisher:

Abstract:

There is an increasing demand for highly dynamic real-time systems in which several independently developed applications with different timing requirements can coexist. This paper proposes a protocol to integrate shared resources and precedence constraints among tasks in such systems, assuming that no precise information on critical sections and computation times is available. The concept of bandwidth inheritance is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among the tasks that need it, minimising the cost of blocking.
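
A minimal conceptual sketch of the budget accounting behind bandwidth inheritance and capacity stealing may help picture the idea; the `Server` class, task names and budget values below are illustrative assumptions, not the paper's actual protocol.

```python
# Conceptual sketch of bandwidth inheritance with capacity stealing.
# All names and budget values are illustrative, not taken from the paper.

class Server:
    """A reservation server with a replenishable execution budget (capacity)."""
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget          # remaining capacity in this period
        self.residual_donors = []     # servers whose spare capacity we may use

    def consume(self, amount):
        """Consume own budget first, then draw on capacity donated by blocked servers."""
        used_own = min(self.budget, amount)
        self.budget -= used_own
        remaining = amount - used_own
        for donor in self.residual_donors:
            if remaining <= 0:
                break
            taken = min(donor.budget, remaining)
            donor.budget -= taken
            remaining -= taken
        return remaining == 0         # True if the demand was fully served

# Bandwidth inheritance: while a lock holder blocks a higher-priority server,
# the blocked server temporarily donates its capacity to the holder.
high = Server("high", budget=2)
low = Server("low", budget=1)
low.residual_donors.append(high)      # 'low' holds the lock 'high' is waiting on
print(low.consume(3))                 # low's own 1 unit + 2 inherited units -> True
```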

Relevance:

20.00%

Publisher:

Abstract:

Due to the growing complexity and adaptability requirements of real-time embedded systems, which often exhibit unrestricted inter-dependencies among supported services and user-imposed quality constraints, it is increasingly difficult to optimise the service level of a dynamic task set within a useful and bounded time. This is even harder when one intends to benefit from the full potential of an open, distributed, cooperating environment, where service characteristics are not known beforehand. This paper proposes an iterative refinement approach for a service's QoS configuration that takes into account the services' inter-dependencies and quality constraints, trading off the quality of the achieved solution against the cost of computing it. Extensive simulations demonstrate that the proposed anytime algorithm is able to quickly find a good initial solution and effectively optimises the rate at which the quality of the current solution improves as the algorithm is given more time to run. The added benefits of the proposed approach clearly surpass its reduced overhead.
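
The anytime behaviour described above can be pictured with a minimal sketch: build a feasible configuration quickly, then keep refining it while the time budget lasts, always keeping the best solution found so far. The quality function, service set and time budget below are illustrative assumptions, not the paper's model.

```python
import random, time

def anytime_qos_configuration(services, quality, time_budget_s=0.05):
    """Return the best QoS configuration found within the time budget.

    services: dict mapping service name -> list of admissible quality levels.
    quality:  function scoring a complete configuration (higher is better).
    """
    # Quick initial solution: lowest admissible level for every service.
    best = {s: levels[0] for s, levels in services.items()}
    best_score = quality(best)

    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        # Local refinement step: change the level of one randomly chosen service.
        candidate = dict(best)
        s = random.choice(list(services))
        candidate[s] = random.choice(services[s])
        score = quality(candidate)
        if score > best_score:            # keep only improvements
            best, best_score = candidate, score
    return best, best_score               # a valid answer whenever it is interrupted

# Illustrative use: three services, quality = sum of levels minus a coupling penalty.
svcs = {"video": [1, 2, 3], "audio": [1, 2], "haptics": [1, 2, 3, 4]}
q = lambda c: sum(c.values()) - 2 * abs(c["video"] - c["audio"])
print(anytime_qos_configuration(svcs, q))
```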

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Cytotoxic drugs are defined by their genotoxicity, mutagenicity, carcinogenicity, teratogenicity, reproductive toxicity and organ toxicity at low doses. There is therefore great concern regarding the handling of this type of drug, due to the occupational risks that may arise from the exposure to which the pharmacy professionals involved are subject. Objectives: To analyse the reality of hospital pharmacies regarding compliance with the standards and procedures recommended by current guidelines for the safe handling of cytotoxic drugs, and to identify existing gaps, leading to the promotion of practices focused on minimising the risk of exposure/contamination of professionals and the environment. Material and Methods: A systematic literature search on the subject was carried out, using a questionnaire survey as the data-collection instrument, in which pharmacy TDTs (diagnostic and therapeutic technicians) were asked about the procedures in place at the hospital where they work. Results: Regarding compliance with the standards for the reception, storage and transport of cytotoxic drugs, all hospitals are above average. Even so, it is in the transport phase that compliance is lowest. The main gaps detected were the non-use of personal protective equipment (PPE) during the reception and storage phases; the reception of cytotoxic drugs together with other drugs; the lack of a ventilation system in the storage area; and the absence of sliding doors and/or closed drawers in the carts used to transport cytotoxic drugs. Conclusions: The results of this study reveal some heterogeneity of procedures across Portuguese hospitals, suggesting the need for intervention and for reformulation of the safety and risk-management programmes developed for the handling of cytotoxics.

Relevance:

10.00%

Publisher:

Abstract:

Many of the most common human reasoning capabilities, such as temporal and non-monotonic reasoning, have not yet been fully captured in deployed systems, even though some theoretical breakthroughs have already been accomplished. This is mainly due to the inherent computational complexity of the theoretical approaches. In the particular area of fault diagnosis in power systems, however, several systems that attempted to solve the problem have been deployed, using methodologies such as production-rule-based expert systems, neural networks, recognition of chronicles, fuzzy expert systems, etc. SPARSE (from the Portuguese acronym for expert system for incident analysis and restoration support) was one of those systems and, in the course of its development, came the need to cope with incomplete and/or incorrect information, in addition to the traditional problems of power-system fault diagnosis based on SCADA (supervisory control and data acquisition) information retrieval, namely real-time operation, huge amounts of information, etc. This paper presents an architecture for a decision support system that addresses these problems, using a symbiosis of the event calculus and default-reasoning rule-based system paradigms, ensuring soft real-time operation while handling incomplete, incorrect or domain-incoherent information. A prototype implementation of this system is already at work in the control centre of the Portuguese Transmission Network.
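
To give a flavour of default reasoning over incomplete SCADA information, here is a tiny, hedged sketch: a default conclusion (a suspected line fault after a breaker trip) is drawn unless contradicting evidence is present. The predicates and line names are invented for illustration and are not SPARSE's actual rule base.

```python
# Tiny default-reasoning flavour: conclude a fault unless contradicted (illustration only).
facts = {("breaker_trip", "L12"), ("breaker_trip", "L7"), ("line_ok", "L7")}

def suspected_faults(facts):
    """Default rule: a tripped breaker suggests a line fault unless SCADA reports the line OK."""
    tripped = {line for (pred, line) in facts if pred == "breaker_trip"}
    healthy = {line for (pred, line) in facts if pred == "line_ok"}
    return tripped - healthy          # defaults that survive the (possibly incomplete) evidence

print(suspected_faults(facts))        # {'L12'}: the default for L7 is overridden
```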

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for being solved with a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22,298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
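
A much-simplified sketch of this kind of mixed-integer linear formulation is shown below using the open-source PuLP package; the node set, candidate ratings, saving coefficients and costs are invented for illustration and do not reproduce the paper's data or its exact model.

```python
# Simplified capacitor placement MILP (illustrative data, not the paper's model).
import pulp

nodes = ["n1", "n2", "n3"]
ratings_kvar = [150, 300, 600]
# Assumed annual savings (loss reduction + deferred investment) per node/rating, in $.
savings = {("n1", 150): 900, ("n1", 300): 1500, ("n1", 600): 1800,
           ("n2", 150): 700, ("n2", 300): 1300, ("n2", 600): 1600,
           ("n3", 150): 400, ("n3", 300): 800,  ("n3", 600): 1000}
cost = {150: 500, 300: 900, 600: 1500}        # assumed installation cost per bank
kvar_budget = 900                             # assumed total compensation limit

prob = pulp.LpProblem("capacitor_placement", pulp.LpMaximize)
x = pulp.LpVariable.dicts("install", list(savings), cat="Binary")

# Objective: maximize net savings (savings minus installation cost).
prob += pulp.lpSum(x[k] * (savings[k] - cost[k[1]]) for k in savings)
# At most one capacitor bank per node.
for n in nodes:
    prob += pulp.lpSum(x[(n, r)] for r in ratings_kvar) <= 1
# Total installed reactive power limited by the budget.
prob += pulp.lpSum(x[(n, r)] * r for (n, r) in savings) <= kvar_budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [k for k in savings if x[k].value() > 0.5]
print("Selected banks:", chosen, "net savings:", pulp.value(prob.objective))
```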

Relevance:

10.00%

Publisher:

Abstract:

Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centres, exchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric-system planning engineers in selecting distributed generation locations, taking into account the hourly load changes or the daily load cycle. The hourly load centres, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best-fitting probability distribution. This distribution is then used to determine the maximum-likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and cost of land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. The methodology's algorithm has been programmed in MATLAB. Results for the application of the methodology to a realistic case are presented and discussed, and demonstrate its ability to efficiently handle the determination of the best location of the distributed generation and the corresponding distribution networks.
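
A minimal sketch of the core geometric step for the Gaussian case may clarify the method: compute weighted hourly load centres, fit a bivariate Gaussian to them, and trace the maximum-likelihood (confidence) perimeter. The original work was programmed in MATLAB and also considers Freund's exponential and Weibull distributions; the coordinates, weights and confidence level below are illustrative assumptions.

```python
# Weighted load centres -> bivariate Gaussian fit -> likelihood perimeter (sketch).
import numpy as np
from scipy.stats import chi2

# Assumed hourly load-centre coordinates (km) and their load weights (MW).
centres = np.array([[2.1, 3.4], [2.4, 3.1], [1.9, 3.6], [2.6, 2.9], [2.2, 3.3]])
weights = np.array([12.0, 18.0, 9.0, 22.0, 15.0])

# Weighted mean and covariance of the load centres.
mean = np.average(centres, axis=0, weights=weights)
cov = np.cov(centres, rowvar=False, aweights=weights)

# Points on the 90% likelihood ellipse: mean + scaled covariance "square root" * unit circle.
conf = 0.90
radius = np.sqrt(chi2.ppf(conf, df=2))
vals, vecs = np.linalg.eigh(cov)
theta = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.stack([np.cos(theta), np.sin(theta)])
ellipse = mean[:, None] + radius * vecs @ np.diag(np.sqrt(vals)) @ circle

print("centre of the preferred DG area:", mean)
print("first perimeter point (km):", ellipse[:, 0])
```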

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the optimal involvement of a power producer in derivatives electricity markets to hedge against pool price volatility. To this end, a swarm-intelligence meta-heuristic optimization technique is proposed as a long-term risk-management tool. This tool investigates the long-term risk-hedging opportunities available to electric power producers through contracts with physical (spot and forward contracts) and financial (options contracts) settlement. The producer's risk preference is formulated as a utility function (U) expressing the trade-off between the expectation and the variance of the return. The expectation and variance of the return are based on a forecasted scenario interval determined by a long-term price-range forecasting model. This model also uses particle swarm optimization (PSO) to find the parameters that yield better forecasting results. Since price estimation depends on load forecasting, this work also presents a regressive long-term load forecasting model that uses PSO to find its best parameters, as is done for price estimation. The performance of the PSO technique has been evaluated by comparison with a Genetic Algorithm (GA) based approach. A case study is presented and the results are discussed taking into account real price and load historical data from the mainland Spanish electricity market, demonstrating the effectiveness of the methodology in handling this type of problem. Finally, conclusions are duly drawn.
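
One common way to express the expectation–variance trade-off mentioned above is the mean–variance utility U = E[R] − β·Var[R]; the sketch below illustrates that idea only, and is not necessarily the exact utility used in the paper. The scenario returns and risk-aversion weight are invented.

```python
# Mean-variance utility over forecasted return scenarios (illustrative values).
import numpy as np

def utility(scenario_returns, beta):
    """U = E[R] - beta * Var[R]: a higher beta models a more risk-averse producer."""
    r = np.asarray(scenario_returns, dtype=float)
    return r.mean() - beta * r.var()

# Assumed returns (M EUR) of two hedging portfolios over price scenarios.
spot_heavy    = [4.0, 10.0, -1.0, 14.0, 3.0]   # mostly pool exposure: volatile
forward_heavy = [5.0, 6.0,   4.5,  6.5, 5.5]   # mostly forward contracts: stable

for beta in (0.0, 0.2, 1.0):
    print(beta, utility(spot_heavy, beta), utility(forward_heavy, beta))
# As beta grows, the stable forward-heavy portfolio becomes the preferred one.
```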

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes an architecture for developing systems that interact with Ambient Intelligence (AmI) environments. The architecture follows from a methodology for the inclusion of Artificial Intelligence in AmI environments (ISyRAmI - Intelligent Systems Research for Ambient Intelligence). The ISyRAmI architecture considers several modules. The first is related to the acquisition of data, information and even knowledge. This data/information/knowledge concerns the AmI environment and can be acquired in different ways (from raw sensors, from the web, from experts). The second module is related to the storage, conversion and handling of the data/information/knowledge, in which incorrectness, incompleteness and uncertainty are understood to be present. The third module is related to the intelligent operation on the data/information/knowledge of the AmI environment, including knowledge discovery systems, expert systems, planning, multi-agent systems, simulation, optimization, etc. The last module is related to actuation in the AmI environment, by means of automation, robots, intelligent agents and users.
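
The four-module pipeline described above can be pictured with a small skeleton; the class and method names are illustrative assumptions, not the ISyRAmI API.

```python
# Illustrative skeleton of the four ISyRAmI-style modules (all names are assumptions).

class Acquisition:
    """Module 1: acquire data/information/knowledge (sensors, web, experts)."""
    def read(self):
        return {"room_temp_c": 17.5, "occupancy": True}   # stand-in sensor reading

class KnowledgeStore:
    """Module 2: store, convert and handle possibly incomplete/uncertain data."""
    def __init__(self):
        self.facts = {}
    def update(self, observation):
        # A real store would also track uncertainty and resolve inconsistencies.
        self.facts.update(observation)

class Reasoner:
    """Module 3: intelligent operation (expert rules, planning, optimization...)."""
    def decide(self, facts):
        if facts.get("occupancy") and facts.get("room_temp_c", 21) < 19:
            return "turn_heating_on"
        return "no_action"

class Actuation:
    """Module 4: act on the environment via automation, robots or agents."""
    def execute(self, action):
        print("actuator command:", action)

# Wire the pipeline together.
store = KnowledgeStore()
store.update(Acquisition().read())
Actuation().execute(Reasoner().decide(store.facts))
```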

Relevance:

10.00%

Publisher:

Abstract:

Bacterial food poisoning is an ever-present threat that can be prevented with proper care and handling of food products. A disposable electrochemical immunosensor for the simultaneous measurement of common food-pathogenic bacteria, namely Escherichia coli O157:H7 (E. coli), campylobacter and salmonella, was developed. The immunosensor was fabricated by immobilizing a mixture of anti-E. coli, anti-campylobacter and anti-salmonella antibodies at a ratio of 1:1:1 on the surface of a multiwall carbon nanotube-polyallylamine modified screen-printed electrode (MWCNT-PAH/SPE). Bacteria in suspension became attached to the immobilized antibodies when the immunosensor was incubated in liquid samples. The sandwich immunoassay was performed with three antibodies conjugated with specific nanocrystals (anti-E. coli-CdS, anti-campylobacter-PbS and anti-salmonella-CuS), whose releasable metal ions were used for the electrochemical measurements. Square wave anodic stripping voltammetry (SWASV) was employed to measure the metal ions released from the bound antibody-nanocrystal conjugates. The calibration curves for the three selected bacteria were obtained in the range of 1 × 10³ – 5 × 10⁵ cells mL⁻¹, with limits of detection (LOD) of 400 cells mL⁻¹ for salmonella, 400 cells mL⁻¹ for campylobacter and 800 cells mL⁻¹ for E. coli. The precision and sensitivity of this method show the feasibility of multiplexed determination of bacteria in milk samples.

Relevance:

10.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

10.00%

Publisher:

Abstract:

The demand for swimming pools for sports, recreational and/or therapeutic activities has been gradually increasing over time. However, swimming pools carry several hazards associated with their use. Regarding chemical hazards, the use of disinfectants based on chlorine, bromine or derived compounds inactivates pathogenic microorganisms on the one hand, but on the other gives rise to by-products when it reacts with organic compounds present in the water. Trihalomethanes are an example of the by-products that can be formed; the main compounds include chloroform (TCM), bromodichloromethane (BDCM), chlorodibromomethane (CDBM) and bromoform (TBM). The aim of this work was to develop an analytical methodology for the determination of trihalomethanes in swimming pool water and air and to apply it to a set of samples. The compounds were analysed by headspace solid-phase microextraction (HS-SPME) followed by quantification by gas chromatography with electron capture detection (GC-ECD). The extraction conditions for the compounds under study in water samples were optimised through two experimental designs. The optimum conditions were an extraction temperature of 45 ºC, an extraction time of 25 min and a desorption time of 5 min. Pool water samples provided by the Centro de Estudos de Águas were analysed, and the application of the HS-SPME technique and the matrix effect were evaluated. The way solutions containing the compounds under study are handled influences the results, because these compounds are quite volatile. It was also concluded that a matrix effect exists, so the concentration of the samples should be determined by the standard-addition method. The characterisation of indoor pool water provided the trihalomethane (THM) concentrations. TCM concentrations between 4.5 and 406.5 μg/L were obtained, with only 4 of the 27 samples analysed exceeding the limit value imposed by Decreto-Lei nº 306/2007 (100 μg/L) for drinking water, which is normally used as an indicative value for swimming pool water quality. Regarding the concentration obtained in the air of an indoor pool, an average of 224 μg/m³ of TCM was detected, far below the 10,000 μg/m³ imposed by Decreto-Lei nº 24/2012 as the limit value for occupational exposure to chemical agents.
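
Since matrix effects forced quantification by the standard-addition method, a small sketch of that calculation may help: the sample concentration is the magnitude of the x-intercept of the signal-versus-added-concentration line. The GC-ECD responses and spike levels below are invented for illustration.

```python
# Standard-addition quantification sketch (illustrative signals, not real data).
import numpy as np

added_ug_per_L = np.array([0.0, 10.0, 20.0, 40.0])   # TCM standard spiked into the sample
signal = np.array([2.10, 3.05, 4.02, 5.95])          # assumed GC-ECD peak areas

slope, intercept = np.polyfit(added_ug_per_L, signal, 1)
sample_conc = intercept / slope                      # |x-intercept| of the fitted line
print(f"TCM in sample: {sample_conc:.1f} ug/L")
```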

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work was the optimisation of an industrial climate-control system consisting of four adiabatic air-handling units, which show limitations in cooling capacity, control and efficiency. Initially, a literature review and the collection of information on the textile industry and on the evaporative cooling process were required. In a later phase, the various data essential to understanding the building/HVAC-system pairing were collected and analysed in order to identify possible optimisation options. The information and data-gathering phase also included indoor air quality (IAQ) analyses. The optimisations selected as feasible were studied and analysed with the dynamic energy simulation software DesignBuilder, and the results obtained were duly processed and adjusted so as to allow a friendly and easily interpreted assessment of their advantages and disadvantages; they were also the subject of an economic feasibility study. The proposed optimisation reflects a substantial improvement of indoor conditions in terms of temperature and relative humidity, while still yielding a reduction in energy consumption of around 23% (490,337 kWh), i.e. an annual saving of €42,169 in operating costs, with a payback period of 1 year and 11 months.
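
The payback figure quoted above follows from simple-payback arithmetic (investment divided by annual savings, ignoring discounting); the investment value below is merely the one implied by the stated savings and payback, shown as an illustration rather than a figure from the thesis.

```python
# Simple-payback arithmetic behind the figures quoted above (no discounting).
annual_savings_eur = 42_169          # stated annual operating-cost saving
payback_years = 1 + 11 / 12          # stated payback: 1 year and 11 months

implied_investment = annual_savings_eur * payback_years
print(f"implied investment: ~{implied_investment:,.0f} EUR")   # roughly 80,800 EUR
print(f"check: payback = {implied_investment / annual_savings_eur:.2f} years")
```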

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resource management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart-grid context. The modifications applied to the PSO aim to improve its adequacy for the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000 vehicles). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about one percent of difference in the objective function cost value.
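
A minimal PSO sketch with a shrinking velocity limit illustrates the kind of velocity-limit adjustment the ASMPSO performs; the test objective, swarm size and decay rule are illustrative assumptions, not the paper's scheduling model or its exact mechanism.

```python
# Minimal PSO with an adaptive (shrinking) velocity limit -- an illustration only.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                       # stand-in objective (sphere), not the scheduling model
    return np.sum(x**2, axis=1)

dim, swarm, iters = 5, 30, 200
lo_b, hi_b = -10.0, 10.0
pos = rng.uniform(lo_b, hi_b, (swarm, dim))
vel = np.zeros((swarm, dim))
pbest, pbest_cost = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_cost)]

v_max = 0.5 * (hi_b - lo_b)        # initial velocity limit
for it in range(iters):
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    vel = np.clip(vel, -v_max, v_max)          # velocity-limit handling
    pos = np.clip(pos + vel, lo_b, hi_b)       # simple bound handling

    c = cost(pos)
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)]

    v_max *= 0.99                  # adjust the velocity limit as the search proceeds

print("best cost found:", pbest_cost.min())
```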