946 results for incremental computation
Abstract:
In recent years, mobile devices have evolved rapidly. This evolution has been most intense in processing power, in camera resolution and quality, and in the bandwidth of new-generation mobile networks. Another important aspect is price: advanced mobile devices are increasingly available at affordable prices, which eases their adoption by users. These factors contribute to a growing number of users carrying "pocket computers", making it increasingly possible to build more complex tools that take advantage of these devices' characteristics. Many applications exploit these characteristics to ease users' work. Some of them can extract information from the physical world and perform some kind of processing, for example a QR code reader or an OCR (Optical Character Recognizer). Taking advantage of the potential of current mobile devices, this work describes the study, implementation and evaluation of an augmented reality application for acquiring and managing paper receipts automatically and intelligently. The application uses the device's camera to acquire images of the receipts so that they can be processed using image processing techniques. Given a processed image of the receipt, optical character recognition is performed to extract information, and a classification technique is used to assign a class to the document. To improve the classifier's performance, an incremental learning strategy is used. After correct classification, the receipt can be viewed with additional information (augmented reality). The proposed work also includes the evaluation of the interface and of the developed algorithms.
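The incremental learning strategy mentioned for the receipt classifier can be sketched as a classifier whose sufficient statistics are updated one document at a time, with no retraining from scratch. Below is a minimal sketch using an incremental multinomial naive Bayes; the receipt classes, tokens and smoothing constant are illustrative assumptions, not the thesis' actual classifier:

```python
import math
from collections import defaultdict

class IncrementalNaiveBayes:
    """Multinomial naive Bayes whose counters are updated per document."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing constant
        self.class_docs = defaultdict(int)      # documents seen per class
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.total_words = defaultdict(int)
        self.vocab = set()

    def learn(self, tokens, label):
        """Incremental update: only counters change, nothing is retrained."""
        self.class_docs[label] += 1
        for t in tokens:
            self.word_counts[label][t] += 1
            self.total_words[label] += 1
            self.vocab.add(t)

    def classify(self, tokens):
        n_docs = sum(self.class_docs.values())
        best, best_lp = None, float("-inf")
        for c in self.class_docs:
            lp = math.log(self.class_docs[c] / n_docs)   # log prior
            denom = self.total_words[c] + self.alpha * len(self.vocab)
            for t in tokens:
                lp += math.log((self.word_counts[c][t] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

clf = IncrementalNaiveBayes()
clf.learn(["supermarket", "bread", "milk"], "grocery")
clf.learn(["diesel", "litres", "pump"], "fuel")
print(clf.classify(["milk", "bread"]))   # → grocery
```

Each new corrected receipt can be fed back through `learn`, which is what makes the scheme incremental: the cost of an update is proportional to the document, not to the training history.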
Abstract:
In a liberalized electricity market, the Transmission System Operator (TSO) plays a crucial role in power system operation. Among many other tasks, the TSO detects congestion situations and allocates the payments for electricity transmission. This paper presents a software tool for congestion management and transmission price determination in electricity markets. The congestion management is based on a reformulated Optimal Power Flow (OPF) whose main goal is to obtain a feasible re-dispatch while minimizing the changes to the dispatch proposed by the market operator. The transmission price computation considers the physical impact caused by the market agents on the transmission network. The final tariff includes the existing system costs as well as the costs due to the initial congestion situation and to losses. The paper includes a case study for the IEEE 30-bus power system.
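The pricing principle stated above, charging each agent according to the physical impact it causes on the network, is commonly expressed through sensitivity factors. A minimal sketch of such an allocation using power transfer distribution factors (PTDFs) follows; the factor values, agent names and cost figure are illustrative assumptions, not data from the paper's IEEE 30-bus case study:

```python
# Allocate one line's cost to agents in proportion to the physical
# impact (PTDF * injection) each agent causes on that line.
def allocate_line_cost(line_cost, ptdf, injections):
    """ptdf[a]: sensitivity of the line flow to agent a's injection (MW/MW);
    injections[a]: agent a's net injection in MW (negative for loads)."""
    impact = {a: abs(ptdf[a] * injections[a]) for a in injections}
    total = sum(impact.values())
    return {a: line_cost * impact[a] / total for a in impact}

# Illustrative numbers: two generators and one load on a single line.
ptdf = {"G1": 0.60, "G2": 0.25, "L1": -0.40}
injections = {"G1": 100.0, "G2": 80.0, "L1": -120.0}
charges = allocate_line_cost(1280.0, ptdf, injections)
```

With these numbers the impacts are 60, 20 and 48 MW, so the R$ (or EUR) 1280 line cost splits 600 / 200 / 480, and the full tariff would sum such terms over every line plus the congestion and loss components mentioned in the abstract.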
Abstract:
Master's degree in Radiotherapy.
Abstract:
Power system organization has gone through huge changes in recent years. The significant increase in distributed generation (DG) and operation within liberalized markets are two relevant driving forces behind these changes. More recently, the smart grid (SG) concept has gained importance and is seen as a paradigm able to support future power system requirements. This paper proposes a computational architecture to support day-ahead Virtual Power Player (VPP) bid formation in the smart grid context. This architecture includes a forecasting module, a resource optimization and Locational Marginal Price (LMP) computation module, and a bid formation module. Due to the characteristics of the problems involved, the implementation of this architecture requires the use of Artificial Intelligence (AI) techniques. Artificial Neural Networks (ANNs) are used for resource and load forecasting, and Evolutionary Particle Swarm Optimization (EPSO) is used for energy resource scheduling. The paper presents a case study considering a 33-bus distribution network with 67 distributed generators, 32 loads and 9 storage units.
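EPSO augments classical particle swarm optimization with evolutionary self-adaptation of its weights; the underlying swarm mechanics can be sketched with a plain PSO minimizing a toy two-generator dispatch cost. The cost coefficients, bounds and swarm parameters below are illustrative assumptions, and this is classical PSO rather than full EPSO:

```python
import random

random.seed(42)

def dispatch_cost(p1, load=100.0):
    """Quadratic fuel costs of two generators jointly serving the load;
    p2 is fixed by the power balance p1 + p2 = load."""
    p2 = load - p1
    return 0.02 * p1**2 + 18 * p1 + 0.04 * p2**2 + 15 * p2

def pso(f, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [lo, hi] with a basic 1-D particle swarm."""
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pb = xs[:]                      # personal best positions
    gb = min(xs, key=f)             # global best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = w * vs[i] + c1 * r1 * (pb[i] - xs[i]) + c2 * r2 * (gb - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clip to bounds
            if f(xs[i]) < f(pb[i]):
                pb[i] = xs[i]
        gb = min(pb, key=f)
    return gb

best_p1 = pso(dispatch_cost, 0.0, 100.0)
```

Equal marginal costs put the analytical optimum at p1 = 125/3 ≈ 41.67 MW, which the swarm recovers; EPSO would additionally mutate and select the weights w, c1, c2 across generations.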
Abstract:
Power system planning, control and operation require an adequate use of existing resources so as to increase system efficiency. The use of optimal solutions in power systems allows huge savings, stressing the need for adequate optimization and control methods. These must be able to solve the envisaged optimization problems in time scales compatible with operational requirements. Power systems are complex, uncertain and changing environments, which makes the use of traditional optimization methodologies impracticable in most real situations. Computational intelligence methods present good characteristics for addressing this kind of problem and have already proved efficient for very diverse power system optimization problems. Evolutionary computation, fuzzy systems, swarm intelligence, artificial immune systems, neural networks, and hybrid approaches are presently seen as the most adequate methodologies for addressing several planning, control and operation problems in power systems. Future power systems, with intensive use of distributed generation and electricity market liberalization, increase power system complexity and bring huge challenges to the forefront of the power industry. Decentralized intelligence and decision making require more effective optimization and control techniques so that the involved players can make the most adequate use of existing resources in the new context. This chapter presents the application of computational intelligence methods to several problems of future power systems. Four different applications are presented to illustrate the promise and potential of computational intelligence.
Abstract:
This paper studies an Optimal Intelligent Supervisory Control System (OISCS) model for the design of control systems that can work in the presence of cyber-physical elements with privacy protection. Such an architecture may provide new ways of integrating control into systems where large amounts of fast computation are not easily available, whether due to limitations on power, physical size or choice of computing elements.
Abstract:
This paper is a contribution to the assessment and comparison of magnet properties based on magnetic field characteristics, particularly concerning the uniformity of the magnetic induction in the air gaps. To this end, a solver was developed and implemented to determine the magnetic field of a magnetic core to be used in Fast Field Cycling (FFC) Nuclear Magnetic Resonance (NMR) relaxometry. The electromagnetic field computation is based on a 2D finite-element method (FEM) using both the scalar and the vector potential formulations. Results for the magnetic field lines and the magnetic induction vector in the air gap are presented. The target magnetic induction is 0.2 T, a typical requirement of the FFC NMR technique, which can be achieved with a magnetic core based on permanent magnets or coils. In addition, this application requires high magnetic induction uniformity. To achieve this goal, a solution including superconducting pieces is analyzed. Results are compared with those of a different FEM program.
Abstract:
Master's degree in Electrical and Computer Engineering.
Abstract:
A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency of the proposed structure, which presents a considerably higher throughput per unit of area than other similar recently published designs targeting the H.264/AVC standard. The results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x over pure software implementations of the transform algorithms, therefore allowing all the above-mentioned transforms to be computed in real time for Ultra High Definition Video (UHDV) sequences (4,320 x 7,680 @ 30 fps).
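Of the transforms such an accelerator computes, the 4 x 4 forward integer core transform of H.264/AVC is the central one; in software it is the two-sided matrix product Y = Cf · X · Cf^T with the standard integer kernel. A minimal reference sketch follows (the sample block is arbitrary), of the kind usable as a golden model when validating a hardware implementation:

```python
# H.264/AVC 4x4 forward integer core transform: Y = Cf · X · Cf^T,
# where Cf is the standard integer approximation of the 4-point DCT.
CF = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(a, b):
    """4x4 integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_transform(block):
    ct = [list(row) for row in zip(*CF)]      # Cf transposed
    return matmul(matmul(CF, block), ct)

# Arbitrary 4x4 residual block used as a test vector.
x = [[5, 11, 8, 10],
     [9, 8, 4, 12],
     [1, 10, 11, 4],
     [19, 6, 15, 7]]
y = forward_transform(x)
```

Because the first row of Cf is all ones, the DC coefficient y[0][0] equals the sum of all sixteen samples, a handy sanity check; in the full codec this transform is followed by the quantization stage, which absorbs the kernel's scaling factors.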
Abstract:
Dynamical systems modeling tumor growth have been investigated to determine the dynamics between tumor and healthy cells. Recent theoretical investigations indicate that these interactions may lead to different dynamical outcomes, in particular to homoclinic chaos. In the present study, we analyze both topological and dynamical properties of a recently characterized chaotic attractor governing the dynamics of tumor cells interacting with healthy tissue cells and effector cells of the immune system. Using the theory of symbolic dynamics, we first characterize the topological entropy and the parameter-space ordering of kneading sequences from one-dimensional iterated maps identified in the dynamics, focusing on the effects of inactivation interactions between effector and tumor cells. These analyses are complemented with the computation of the spectrum of Lyapunov exponents, the fractal dimension and the predictability of the chaotic attractors. Our results show that the rate at which tumor cells inactivate effector cells has an important effect on the dynamics of the system. Increasing effector cell inactivation gives rise to an inverse Feigenbaum (i.e. period-halving bifurcation) scenario, which results in the stabilization of the dynamics and in increased predictability. Our analyses also reveal that, at low inactivation rates of effector cells, tumor cells undergo strong chaotic fluctuations, with the dynamics being highly unpredictable. Our findings are discussed in the context of the potential viability of tumor cells.
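The largest Lyapunov exponent referred to above quantifies this predictability: a positive value signals chaos, while a period-halving (inverse Feigenbaum) route drives it negative. A minimal sketch of the standard orbit-averaging estimate, using the logistic map as a stand-in for the one-dimensional iterated maps identified in the paper (the map and parameter values are illustrative, not the tumor model itself):

```python
import math

def lyapunov_logistic(r, n=100_000, burn=1_000, x0=0.3):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

lam = lyapunov_logistic(4.0)   # fully chaotic regime; exact value is ln 2
```

At r = 4 the estimate converges to ln 2 ≈ 0.693 (chaotic, unpredictable), while in a periodic window such as r = 3.2 the exponent is negative (stable, predictable) — the same sign change the abstract associates with increasing effector cell inactivation.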
Abstract:
Master's degree in Electrical and Computer Engineering.
Abstract:
Master's degree in Electrical and Computer Engineering. Specialization area: Automation and Systems.
Abstract:
OBJECTIVE: To test the use of cost-effectiveness analysis as a decision tool in meal production, so as to incorporate the recommendations of the World Health Organization's Global Strategy. METHODS: Five alternative breakfast menu options were analyzed prior to implementation of the meal in the food and nutrition unit of a university in the state of São Paulo, in 2006. The cost of each option was based on market prices of the direct cost components. Health benefits were calculated based on an adaptation of the Meal Quality Index (IQR). The cost-effectiveness ratio of the menus was calculated by dividing benefits by costs, and the incremental cost-effectiveness ratio by the cost differential per unit of additional benefit. The choice considered health benefit units relative to direct production cost, as well as the incremental effectiveness per unit of cost differential. RESULTS: The analysis yielded the simplest option with the addition of a piece of fruit (IQR=64 and cost=R$1.58) as the best alternative. The alternatives including a portion of fruit (IQR1=64 / IQR3=58 / IQR5=72) were more effective than the others (IQR2=48 / IQR4=58). CONCLUSIONS: Computing the cost-effectiveness ratio made it possible to identify the best breakfast option based on cost-effectiveness analysis and the Meal Quality Index. These instruments combine ease of application and objectivity of assessment, a fundamental basis for bringing public or private institutions under the guidelines of the Global Strategy.
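The two ratios described in the methods — benefits divided by costs, and the cost differential per unit of additional benefit — can be written down directly. The option-1 figures (IQR=64, R$1.58) come from the abstract; the comparator cost used in the incremental ratio is an illustrative assumption, since the abstract reports no per-option cost besides option 1:

```python
def ce_ratio(benefit, cost):
    """Cost-effectiveness ratio: benefit units per monetary unit."""
    return benefit / cost

def icer(cost_new, benefit_new, cost_old, benefit_old):
    """Incremental cost-effectiveness ratio: extra cost per extra benefit unit."""
    return (cost_new - cost_old) / (benefit_new - benefit_old)

ce1 = ce_ratio(64, 1.58)               # option 1: IQR=64 at R$1.58
icer1 = icer(1.58, 64, 1.20, 48)       # vs. an assumed R$1.20 comparator (IQR2=48)
```

A higher `ce_ratio` favors an option outright, while `icer` answers the marginal question: how many extra R$ each additional IQR point of the richer menu costs.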
Abstract:
Master's dissertation submitted for the degree of Master in Mechanical Engineering, Maintenance and Production branch.
Abstract:
Master's degree in Informatics Engineering. Specialization area: Knowledge and Decision Technologies.