930 results for Multi rate processing
Abstract:
Frame rate upconversion (FRUC) is an important post-processing technique to enhance the visual quality of low frame rate video. A major recent advance in this area is FRUC based on trilateral filtering, whose novelty mainly derives from the combination of an edge-based motion estimation block matching criterion with the trilateral filter. However, there is still room for improvement, notably towards reducing the size of the uncovered regions in the initial estimated frame, i.e. the estimated frame before trilateral filtering. In this context, an improved motion estimation block matching criterion is proposed in which a combined luminance and edge error metric is weighted according to the motion vector components, notably to regularise the motion field. Experimental results confirm that significant improvements are achieved for the final interpolated frames, with PSNR gains of up to 2.73 dB, on average, over recent alternative solutions, for video content with varied motion characteristics.
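A minimal sketch of a block matching criterion of the kind described above, assuming a SAD-based luminance term, an edge-map SAD term, and a multiplicative weight that grows with the motion vector components so as to regularise the motion field; the weights lambda_edge and lambda_mv, the block size and the search range are illustrative choices, not the values used in the paper.

    import numpy as np

    def block_cost(prev, curr, edges_prev, edges_curr, x, y, dx, dy, B=8,
                   lambda_edge=0.5, lambda_mv=2.0):
        """Combined luminance + edge SAD, weighted by the motion vector components."""
        H, W = prev.shape
        if not (0 <= x + dx <= W - B and 0 <= y + dy <= H - B):
            return np.inf                          # candidate block falls outside the frame
        ref = curr[y:y+B, x:x+B].astype(int)
        cand = prev[y+dy:y+dy+B, x+dx:x+dx+B].astype(int)
        sad_lum = np.abs(ref - cand).sum()
        sad_edge = np.abs(edges_curr[y:y+B, x:x+B].astype(int) -
                          edges_prev[y+dy:y+dy+B, x+dx:x+dx+B].astype(int)).sum()
        # weight the combined error by the vector components to penalise large,
        # irregular vectors (motion field regularisation)
        return (sad_lum + lambda_edge * sad_edge) * (1 + lambda_mv * (abs(dx) + abs(dy)) / B)

    def best_vector(prev, curr, edges_prev, edges_curr, x, y, search=8, B=8):
        """Exhaustive search for the vector minimising the weighted matching cost."""
        candidates = [(dx, dy) for dx in range(-search, search + 1)
                               for dy in range(-search, search + 1)]
        return min(candidates, key=lambda v: block_cost(prev, curr, edges_prev,
                                                        edges_curr, x, y, v[0], v[1], B))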
Abstract:
Master's degree in Informatics Engineering
Abstract:
Red, green and blue optical signals were directed to an a-SiC:H multilayered device, each one with a specific transmission rate. The combined optical signal was analyzed by reading out, under different applied voltages, the generated photocurrent. Results show that when a time-dependent chromatic wavelength combination with different transmission rates irradiates the multilayered structure, the device operates as a tunable wavelength filter and can be used in wavelength division multiplexing systems for short range communications. An application to fluorescent protein detection is presented. (C) 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Abstract:
This work presents an overview of the economic and industrial reality of the ornamental granite processing sector in Portugal and analyses the sawing process with multi-blade gang saws and steel shot, since this is the block-sectioning method most used by the large companies of the sector. Given the economic importance of this production operation in the industry in question, the goals of this project were defined as: the statistical analysis of production costs; the definition of calculation formulas that allow the average sawing cost to be predicted; and the study of economically viable and environmentally sustainable solutions for the problem of the sludge resulting from purging the gang saws. To carry out this project, data were collected by implementing control and recording routines, using standardised production charts that are easy for the machine operators to fill in. This data collection made it possible to isolate, quantify and formulate the factors affecting the profitability of the sawing process, by selecting, within the study sample obtained, a set of sawing runs with similar characteristics and with values close to the statistical mean. From the data of these sawing runs, polynomial trend curves were generated and used to analyse the variations caused in the average sawing cost by variations in the factor under study. The formulation of the profitability factors and the statistical data obtained then allowed the development of formulas for calculating the average sawing cost, which establish the production cost differentiated by slab thickness, with or without the incorporation of the profitability factors. As a result of the project, a set of conclusions useful to the industrial sector in question was obtained, highlighting the importance, for processing costs, of the occupancy of the gang saws and the efficient use of a confined space, of the resistance offered to sawing by the granites, and of the height difference between the blocks of a same load.
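As a rough illustration of the polynomial trend-curve analysis mentioned above, the sketch below fits a trend curve relating one profitability factor to the average sawing cost; the records (gang-saw occupancy versus cost per square metre) and the polynomial degree are hypothetical, not data from the study.

    import numpy as np

    # hypothetical records: gang-saw occupancy (%) vs. average sawing cost (EUR/m2)
    occupancy = np.array([55, 62, 70, 78, 85, 92])
    avg_cost = np.array([14.8, 13.6, 12.5, 11.9, 11.4, 11.2])

    # fit a 2nd-degree polynomial trend curve to the sawing records
    trend = np.poly1d(np.polyfit(occupancy, avg_cost, deg=2))

    # predicted average sawing cost for a planned occupancy level (illustrative only)
    print(trend(80.0))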
Abstract:
A new high performance architecture for the computation of all the DCT operations adopted in the H.264/AVC and HEVC standards is proposed in this paper. In contrast to other dedicated transform cores, the presented multi-standard transform architecture is based on a completely configurable, scalable and unified structure that is able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16 and 32×32 integer transforms adopted in HEVC. Experimental results obtained using a Xilinx Virtex-7 FPGA demonstrate the superior performance and hardware efficiency of the proposed structure, which outperforms its most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows the real-time computation of all the transforms mentioned above for resolutions as high as 8k Ultra High Definition Television (UHDTV) (7680×4320 @ 30fps).
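For reference, the core of one of the transforms handled by such architectures, the H.264/AVC 4×4 forward integer DCT, can be written in a few lines; this is only a software sketch of the transform itself (without the quantisation/scaling stage), not the hardware structure proposed in the paper.

    import numpy as np

    # H.264/AVC 4x4 forward integer core transform matrix
    C = np.array([[1,  1,  1,  1],
                  [2,  1, -1, -2],
                  [1, -1, -1,  1],
                  [1, -2,  2, -1]])

    def forward_4x4(block):
        """Apply the 4x4 integer DCT to one residual block: Y = C * X * C^T."""
        return C @ block @ C.T

    residual = np.arange(16).reshape(4, 4)   # toy residual block
    coeffs = forward_4x4(residual)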
Abstract:
Solvent extraction is considered a multi-criteria optimization problem, since several chemical species with similar extraction kinetic properties are frequently present in the aqueous phase and selective extraction is not practicable. This optimization, applied to mixer–settler units, considers the best parameters and operating conditions, as well as the best structure or process flow-sheet. Global process optimization is performed for a specific flow-sheet and a comparison of Pareto curves for different flow-sheets is made. The positive weight sum approach, linked to the sequential quadratic programming method, is used to obtain the Pareto set. In all investigated structures, recovery increases with hold-up, residence time and agitation speed, while purity shows the opposite behaviour. For the same treatment capacity, counter-current arrangements are shown to promote recovery without significant impairment of purity. Recycling the aqueous phase is shown to be irrelevant, but organic recycling with as many stages as economically feasible clearly improves the design criteria and reduces the most efficient organic flow-rate.
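A minimal sketch of the positive weight sum approach coupled with an SQP solver, as mentioned above, assuming two placeholder objectives standing in for recovery and purity as functions of two scaled operating variables; the objective models, bounds and starting point are illustrative assumptions, not the process model of the paper.

    import numpy as np
    from scipy.optimize import minimize

    # placeholder objectives: recovery to maximise (negated) and an impurity proxy to minimise
    def neg_recovery(x):
        return -(1.0 - np.exp(-x[0] * x[1]))

    def impurity(x):
        return 0.10 * x[0] + 0.05 * x[1]

    bounds = [(0.1, 5.0), (0.1, 5.0)]        # e.g. scaled residence time and agitation speed

    pareto = []
    for w in np.linspace(0.05, 0.95, 10):    # positive weight sum over the two criteria
        obj = lambda x, w=w: w * neg_recovery(x) + (1.0 - w) * impurity(x)
        res = minimize(obj, x0=[1.0, 1.0], bounds=bounds, method="SLSQP")  # SQP solver
        pareto.append((-neg_recovery(res.x), 1.0 - impurity(res.x)))       # (recovery, purity proxy)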
Abstract:
The main purpose of this work is to present and interpret the change of structure and physical properties of tantalum oxynitride (TaNxOy) thin films, produced by dc reactive magnetron sputtering, by varying the processing parameters. A set of TaNxOy films was prepared by varying the reactive gas flow rates, using a N2/O2 gas mixture with a concentration ratio of 17:3. The different films obtained by this process exhibited significant differences. The obtained composition and the interpretation of X-ray diffraction results show that, depending on the partial pressure of the reactive gases, the films are: essentially dark grey metallic, when the atomic ratio (N + O)/Ta < 0.1, evidencing a tetragonal β-Ta structure; grey-brownish, when 0.1 < (N + O)/Ta < 1, exhibiting a face-centred cubic (fcc) TaN-like structure; and transparent oxide-type, when (N + O)/Ta > 1, evidencing the existence of Ta2O5, but with an amorphous structure. These transparent films exhibit refractive indexes in the visible region always higher than 2.0. The wear resistance of the films is relatively good; the best behaviour was obtained for the films with (N + O)/Ta ≈ 0.5 and (N + O)/Ta ≈ 1.3.
Abstract:
Traditional vertically integrated power utilities around the world have evolved from monopoly structures to open markets that promote competition among suppliers and provide consumers with a choice of services. Market forces drive the price of electricity and reduce the net cost through increased competition. Electricity can be traded either in organized markets or through forward bilateral contracts. This article focuses on bilateral contracts and describes some important features of an agent-based system for bilateral trading in competitive markets. Special attention is devoted to the negotiation process, demand response in bilateral contracting, and risk management. The article also presents a case study on forward bilateral contracting: a retailer agent and a customer agent negotiate a 24h-rate tariff. © 2014 IEEE.
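A minimal sketch of a bilateral negotiation of the kind described, in which a retailer agent and a customer agent exchange counter-offers on a flat 24h-rate tariff; the linear concession strategy, price limits and opening offers are assumptions made for illustration, not the negotiation model of the article.

    def negotiate(retailer_limit=0.10, customer_limit=0.16, rounds=10, hours=24):
        """Alternating-offers negotiation with linear concessions (prices in EUR/kWh)."""
        retailer_ask = 0.20                  # retailer's opening price
        customer_bid = 0.08                  # customer's opening price
        for _ in range(rounds):
            if retailer_ask <= customer_bid:             # offers have crossed: agreement
                price = (retailer_ask + customer_bid) / 2
                return [price] * hours                   # flat 24h-rate tariff
            # each agent concedes linearly towards its reservation price
            retailer_ask = max(retailer_limit, retailer_ask - 0.01)
            customer_bid = min(customer_limit, customer_bid + 0.01)
        return None                                      # no agreement reached

    tariff = negotiate()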
Abstract:
Componentised systems, in particular those with fault confinement through address spaces, are currently emerging as a hot topic in embedded systems research. This paper extends the unified rate-based scheduling framework RBED in several dimensions to fit the requirements of such systems: we remove the requirement that the deadline of a task be equal to its period, and we introduce inter-process communication to reflect the need of components to communicate. Additionally, we also discuss server tasks, budget replenishment and the low-level details needed to deal with the physical reality of systems. While a number of these issues have been studied in previous work in isolation, we focus on the problems discovered and lessons learned when integrating the solutions. We report on our experiences implementing the proposed mechanisms in the commercial-grade OKL4 microkernel, as well as an application with soft real-time and best-effort tasks on top of it.
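A minimal sketch of the rate-based, EDF-style dispatching idea underlying such a framework, where a task's deadline need not equal its period and each task has an execution budget; the fields and the replenishment rule are simplified assumptions, not the actual RBED mechanisms or their OKL4 implementation.

    class Task:
        def __init__(self, name, budget, period, deadline):
            # deadline may differ from period, as required by the extension above
            self.name, self.budget, self.period, self.deadline = name, budget, period, deadline
            self.remaining, self.abs_deadline = budget, deadline

        def replenish(self, now):
            """Refill the budget and set the next absolute deadline."""
            self.remaining = self.budget
            self.abs_deadline = now + self.deadline

    def pick_next(ready):
        """Earliest-deadline-first among tasks that still have budget left."""
        runnable = [t for t in ready if t.remaining > 0]
        return min(runnable, key=lambda t: t.abs_deadline, default=None)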
Abstract:
In the last twenty years genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. Such problems become considerably harder when multiple design criteria must be considered simultaneously within the optimization procedure. This is known as multi-criteria (or multiobjective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems and since then other researchers have been developing new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998).

In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely the joint traveling distance, joint velocity, end effector / Cartesian distance, end effector / Cartesian velocity and energy involved. These criteria are used to minimize the joint and end effector traveled distance, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace.

The chapter is organized as follows. Section 2 describes the trajectory planning problem and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.
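A minimal sketch of the Pareto-dominance test and non-dominated filtering at the heart of such multi-objective GAs, assuming all objectives are to be minimised; the toy objective values below stand in for criteria such as joint travelled distance and energy, and are not results from the chapter.

    def dominates(a, b):
        """a dominates b if it is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        """Return the non-dominated solutions, i.e. the current Pareto front."""
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    # toy example with two objectives (e.g. joint travelled distance, energy)
    solutions = [(3.0, 10.0), (2.5, 12.0), (4.0, 9.0), (3.5, 11.0)]
    print(pareto_front(solutions))    # (3.5, 11.0) is dominated by (3.0, 10.0)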
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA
Abstract:
Reliable flow simulation software is indispensable for determining an optimal injection strategy in Liquid Composite Molding processes. Several methodologies can be implemented into standard software in order to reduce CPU time; post-processing techniques might be one of them. Post-processing a finite element solution is a well-known procedure, which consists in a recalculation of the originally obtained quantities such that the rate of convergence increases without the need for expensive remeshing techniques. Post-processing is especially effective in problems where better accuracy is required for derivatives of nodal variables in regions where a Dirichlet essential boundary condition is imposed strongly. In previous works, the influence of the smoothness of the non-homogeneous Dirichlet condition imposed on a smooth front was examined. However, due to discretization, a rather non-smooth boundary is usually obtained at each time step of the infiltration process, and direct application of post-processing techniques then does not improve the final results as expected. The new contribution of this paper lies in the improvement of the standard methodology. The improved results clearly show that the recalculated flow front is closer to the “exact” one, is smoother than the previous one, and reduces local disturbances of the “exact” solution.
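A minimal sketch of the basic idea behind such post-processing, recovering smoothed nodal values of a derived quantity by averaging the discontinuous element-wise gradients of a 1D piecewise-linear finite element solution; this is a generic recovery-by-averaging scheme, not the specific procedure developed in the paper.

    import numpy as np

    def recover_nodal_gradient(x, u):
        """Average element-wise gradients of a piecewise-linear FE solution to obtain
        smoothed nodal gradients (simple recovery-by-averaging post-processing)."""
        elem_grad = np.diff(u) / np.diff(x)                     # constant gradient per element
        nodal = np.empty_like(u)
        nodal[0], nodal[-1] = elem_grad[0], elem_grad[-1]
        nodal[1:-1] = 0.5 * (elem_grad[:-1] + elem_grad[1:])    # average of neighbouring elements
        return nodal

    x = np.linspace(0.0, 1.0, 6)
    u = x**2                                   # nodal values of a smooth field
    print(recover_nodal_gradient(x, u))        # interior values match the exact gradient 2x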
Abstract:
Post-processing a finite element solution is a well-known technique, which consists in a recalculation of the originally obtained quantities such that the rate of convergence increases without the need for expensive remeshing techniques. Post-processing is especially effective in problems where better accuracy is required for derivatives of nodal variables in regions where a Dirichlet essential boundary condition is imposed strongly. Consequently, such an approach can be exceptionally effective in the modelling of resin infiltration under the quasi steady-state assumption, with remeshing techniques and explicit time integration, because only the free-front normal velocities are necessary to advance the resin front to the next position. The new contribution is the post-processing analysis and implementation of the free-boundary velocities in mesolevel infiltration analysis. Such an implementation ensures better accuracy even on coarser meshes, which consequently reduces the computational time, also by allowing larger time steps.
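Under the quasi steady-state assumption mentioned above, the recovered free-front normal velocities can be used to advance the resin front explicitly; the sketch below performs one explicit Euler update of hypothetical front nodes and is not the paper's mesolevel implementation.

    import numpy as np

    def advance_front(front_xy, normals, v_n, dt):
        """Move each free-front node along its outward normal by the recovered
        (post-processed) normal velocity over one explicit time step."""
        return front_xy + dt * v_n[:, None] * normals

    # hypothetical front nodes, unit normals and recovered normal velocities
    front_xy = np.array([[0.10, 0.00], [0.10, 0.05], [0.10, 0.10]])
    normals = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
    v_n = np.array([0.020, 0.025, 0.020])      # recovered normal velocities (m/s)
    new_front = advance_front(front_xy, normals, v_n, dt=0.5)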
Abstract:
Master's degree in Mechanical Engineering – Specialization in Industrial Management