937 results for Lagrange interpolation


Relevance: 10.00%

Abstract:

The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding-efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
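The syndrome-based Slepian-Wolf setup the paper builds on can be illustrated in miniature. In the sketch below, the parity-check matrix, source word and SI hypothesis are all invented for illustration, and a brute-force search stands in for belief propagation; the core idea survives: the encoder sends only the syndrome, and the decoder recovers the source as the syndrome-consistent word closest to its side information.

```python
import numpy as np
from itertools import product

# Toy parity-check matrix; real LDPC codes are far larger and sparse.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def syndrome(H, x):
    """Syndrome s = H x (mod 2); this is all the encoder transmits."""
    return H.dot(x) % 2

source = np.array([1, 0, 1, 1, 0, 1])     # bits known only at the encoder
side_info = np.array([1, 0, 0, 1, 0, 1])  # decoder's SI hypothesis (one bit flipped)

s = syndrome(H, source)

# Brute-force stand-in for iterative message-passing decoding: pick the
# syndrome-consistent word closest (in Hamming distance) to the side information.
best = min((np.array(c) for c in product([0, 1], repeat=H.shape[1])
            if np.array_equal(syndrome(H, np.array(c)), s)),
           key=lambda c: int(np.sum(c != side_info)))
```

With multiple SI hypotheses, as in the paper, the decoder would run such a decoding once per hypothesis and fuse the results, which is what the proposed joint algorithm does iteratively.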

Relevance: 10.00%

Abstract:

In real optimization problems, the analytical expression of the objective function is usually unknown, as are its derivatives, or they are too complex to work with. In these cases it becomes essential to use optimization methods where the calculation of the derivatives, or the verification of their existence, is not necessary: direct search (derivative-free) methods are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, choosing the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem: a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
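The biobjective acceptance rule at the heart of a filter method can be sketched in a few lines. This is not the authors' Java implementation, only a generic Python illustration: a trial point (f, h) is accepted if no filter entry is at least as good in both the objective f and the aggregated constraint violation h.

```python
def violation(g_values):
    """Aggregate constraint violation h(x) = sum of positive parts of g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def dominated(point, filter_set):
    """(f, h) is dominated if some filter entry is no worse in both objectives."""
    f, h = point
    return any(ff <= f and hh <= h for ff, hh in filter_set)

def filter_accept(point, filter_set):
    """Accept a trial point iff no filter entry dominates it; update the filter
    by removing entries the new point dominates and adding the new point."""
    if dominated(point, filter_set):
        return False
    filter_set[:] = [(ff, hh) for ff, hh in filter_set
                     if not (point[0] <= ff and point[1] <= hh)]
    filter_set.append(point)
    return True
```

In a simplex filter method, reflected or contracted simplex vertices would be screened through `filter_accept` instead of a penalty-function comparison, which is why no penalty constants are needed.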

Relevance: 10.00%

Abstract:

The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is usually not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods where the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers.

Relevance: 10.00%

Abstract:

Dissertation for obtaining the degree of Master in Civil Engineering, specialization in Hydraulics.

Relevance: 10.00%

Abstract:

The management of electric power systems plays a fundamental role at several levels. First of all, good operation (quality and continuity of service) and security of operation can only be achieved with good planning. Another very important point is the economic aspect: electric power systems carry significant weight in national economies, since energy is the engine of development. Nowadays, the emergence of large new economic powers has shaken the energy markets, driving the prices of energy products to historic highs. The first chapter of this work is an introduction that presents Optimal Dispatch, places it in the context of the overall management of electric power systems, and reviews how these systems have evolved in recent years. The Optimal Dispatch problem and all the constraints and variables inherent to its solution are explored in depth in chapter 2, first neglecting the transmission losses of the lines and then taking them into account. The method of Lagrange multipliers applied to this problem is also presented. Chapter 3 reviews the evolution of the methods used to solve Optimal Dispatch, distinguishing between the classical methods and the more recent heuristic methods. The evolution observed over the years in these methods, as well as the use of computational tools, is due to the growing complexity of electric power systems and to the need for fast and accurate results. Because power plants run not only on purchased fuels but also on natural resources that have no acquisition cost yet are not constantly available, the combination of the various types of generation must be managed judiciously.
Since in our country the main alternative to thermal plants is hydroelectric generation, chapter 4 presents the hydro-thermal coordination problem. Chapter 5 describes the computational tool developed to solve optimal dispatch with and without losses, a program written in MATLAB. The work ends with a chapter of conclusions.
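The Lagrange-multiplier treatment of lossless optimal dispatch reduces to the equal-incremental-cost condition b_i + 2 c_i P_i = lambda for quadratic unit costs C_i(P) = a_i + b_i P + c_i P^2. A minimal sketch (unit cost coefficients and demand are hypothetical, not from the dissertation, and generator limits and losses are ignored):

```python
def economic_dispatch(units, demand):
    """Lossless optimal dispatch via Lagrange multipliers.
    units: list of (b_i, c_i) for cost C_i(P) = a_i + b_i P + c_i P^2.
    Stationarity gives b_i + 2 c_i P_i = lambda for every unit, and the
    power-balance constraint sum(P_i) = demand fixes lambda in closed form."""
    inv = sum(1.0 / (2.0 * c) for _, c in units)
    lam = (demand + sum(b / (2.0 * c) for b, c in units)) / inv
    return lam, [(lam - b) / (2.0 * c) for b, c in units]
```

With transmission losses included, lambda can no longer be isolated in closed form and the iterative schemes discussed in chapters 2 and 3 take over.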

Relevance: 10.00%

Abstract:

Finding the structure of a confined liquid crystal is a difficult task since both the density and order parameter profiles are nonuniform. Starting from a microscopic model and density-functional theory, one has to either (i) solve a nonlinear, integral Euler-Lagrange equation, or (ii) perform a direct multidimensional free energy minimization. The traditional implementations of both approaches are computationally expensive and plagued with convergence problems. Here, as an alternative, we introduce an unsupervised variant of the multilayer perceptron (MLP) artificial neural network for minimizing the free energy of a fluid of hard nonspherical particles confined between planar substrates of variable penetrability. We then test our algorithm by comparing its results for the structure (density-orientation profiles) and equilibrium free energy with those obtained by standard iterative solution of the Euler-Lagrange equations and with Monte Carlo simulation results. Very good agreement is found and the MLP method proves competitively fast, flexible, and refinable. Furthermore, it can be readily generalized to the richer experimental patterned-substrate geometries that are now experimentally realizable but very problematic to conventional theoretical treatments.
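For contrast with the MLP approach, direct free-energy minimization can be shown on a toy functional whose minimizer is known in closed form: a discretized ideal-gas free energy in an external potential. The grid, potential and step size below are invented, and plain gradient descent replaces both the neural network and the paper's hard-particle functional; only the overall structure of the computation is illustrated.

```python
import math

dx = 0.1
V = [0.5 * (i * dx - 0.5) ** 2 for i in range(11)]  # hypothetical external potential
rho = [1.0] * len(V)                                # initial density profile

# F[rho] = sum_i (rho_i ln rho_i - rho_i + V_i rho_i) dx, so the Euler-Lagrange
# condition is ln rho_i + V_i = 0, i.e. rho_i = exp(-V_i). Descend the gradient
# dF/drho_i = (ln rho_i + V_i) dx until that profile is reached.
for _ in range(2000):
    rho = [max(r - 0.5 * (math.log(r) + v) * dx, 1e-9) for r, v in zip(rho, V)]
```

The hard-particle case adds nonlocal excess terms and orientation variables, which is exactly where the iterative schemes struggle and the unsupervised MLP of the paper is designed to help.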

Relevance: 10.00%

Abstract:

Purpose - To compare image quality and effective dose when applying the 10 kVp rule in PA chest X-ray, using manual-mode acquisition versus AEC mode. Method - 68 images (with and without lesions) were acquired from an anthropomorphic chest phantom using a Wolverson Arcoma X-ray unit. These images were compared against a reference image using the two-alternative forced-choice (2AFC) method. The effective dose (E) was calculated with PCXMC software from the exposure parameters and the DAP. The exposure index (lgM, provided by Agfa systems) was recorded. Results - Exposure time decreases more when applying the 10 kVp rule in manual mode (50%–28%) than in automatic mode (36%–23%). Statistically significant differences in E between several ionization-chamber combinations in AEC mode were found (p = 0.002); E is lower when only the right AEC ionization chamber is used. Regarding image quality, there are no statistically significant differences (p = 0.348) between the different ionization-chamber combinations in AEC mode for images without lesions. lgM values were higher in AEC mode than in manual mode, and lgM values obtained in AEC mode increased as the kVp value went up. The image quality scores showed no statistically significant differences (p = 0.343) for images with lesions when comparing manual with AEC mode. Conclusion - In general, E is lower when manual mode is used. Using the right AEC ionization chamber, under the lung, yields the lowest E compared with the other ionization chambers. The use of the 10 kVp rule did not affect the visibility of the lesions or the image quality.
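The 10 kVp rule tested above is a rule of thumb: raising the tube voltage by 10 kVp while halving the mAs keeps the detector exposure roughly constant at reduced dose. A minimal sketch (the starting exposure factors are hypothetical, not the study's):

```python
def apply_10kvp_rule(kvp, mas, steps=1):
    """Apply the 10 kVp rule of thumb: +10 kVp per step, mAs halved per step.
    The resulting exposure is only approximately constant; the exact change
    depends on generator, filtration and detector response."""
    return kvp + 10 * steps, mas / (2 ** steps)
```

Whether this approximation preserves image quality and lowers effective dose in practice is precisely what the phantom study measures.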

Relevance: 10.00%

Abstract:

Sensor/actuator networks promise to extend automated monitoring and control into industrial processes. Avionics is one of the prominent areas that can gain greatly from dense sensor/actuator deployments: an aircraft with a smart sensing skin would deliver affordability and environmental friendliness by reducing fuel consumption. Achieving these properties is possible by providing an approximate representation of the air flow across the body of the aircraft and suppressing the detected aerodynamic drag. To the best of our knowledge, obtaining an accurate representation of the physical entity remains one of the most significant challenges in dense sensor/actuator networks. This paper offers an efficient way to acquire sensor readings from a very large sensor/actuator network located in a small area (a dense network). It presents LIA, a Linear Interpolation Algorithm that provides two important contributions. First, it demonstrates the effectiveness of employing a transformation matrix to mimic the environmental behavior. Second, it offers a smart solution for updating the previously defined matrix through a procedure called the learning phase. Simulation results reveal that the average relative error of the LIA algorithm can be reduced by as much as 60% by exploiting the transformation matrix.
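The abstract does not give the form of LIA's transformation matrix or its learning phase, but the general mechanism of mapping a few readings to a whole field through a fitted linear model can be sketched as follows; the sensor positions, readings and planar field model are all illustrative assumptions, not the paper's.

```python
import numpy as np

# Hypothetical sensor positions in a small dense area and their readings,
# generated here from the plane f(x, y) = 1 + 2x + y for checkability.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
readings = np.array([1.0, 3.0, 2.0, 4.0])

A = np.c_[np.ones(len(positions)), positions]     # design matrix [1, x, y]
w, *_ = np.linalg.lstsq(A, readings, rcond=None)  # least-squares plane fit

def interpolate(x, y):
    """Evaluate the fitted linear model anywhere in the dense area."""
    return w[0] + w[1] * x + w[2] * y
```

A learning phase in the spirit of the paper would re-estimate the matrix (here, `w`) as new readings arrive, so the model tracks the environmental behavior over time.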

Relevance: 10.00%

Abstract:

OBJECTIVE: To evaluate the most productive types of properties and containers for Aedes aegypti and the spatial distribution of entomological indices. METHODS: Between December 2006 and February 2007, the vector's immature forms were collected to obtain entomological indices in 9,875 properties in the Jaguare neighborhood of São José do Rio Preto, SP, Southeastern Brazil. In March and April 2007, a questionnaire about the conditions and characteristics of the properties was administered. Logistic regression was used to identify variables associated with the presence of pupae at the properties. Indices calculated per block were combined with a geo-referenced map, and thematic maps of these indices were obtained using statistical interpolation. RESULTS: The properties inspected had the following Ae. aegypti indices: Breteau Index = 18.9, 3.7 larvae and 0.42 pupae per property, 5.2 containers harboring Ae. aegypti per hectare, 100.0 larvae and 11.6 pupae per hectare, and 1.3 larvae and 0.15 pupae per inhabitant. The presence of yards, gardens and animals was associated with the presence of pupae. CONCLUSIONS: Specific types of properties and containers that simultaneously had low frequencies among those positive for the vector and high participation in the productivity of larvae and pupae were not identified. The use of indices including larval and pupal counts does not provide further information beyond that obtained from the traditional Stegomyia indices in locations with characteristics similar to those of São José do Rio Preto. The indices calculated per area were found to be more accurate for the spatial assessment of infestation. The Ae. aegypti infestation levels exhibited extensive spatial variation, indicating that the assessment of infestation in micro-areas is needed.
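The traditional Stegomyia indices and the per-area variants discussed above are simple ratios of survey counts. A sketch with invented counts (not the survey's data):

```python
def infestation_indices(properties_inspected, positive_containers,
                        larvae, pupae, area_ha):
    """Classic and per-area Aedes infestation indices from raw survey counts.
    The Breteau Index is positive containers per 100 properties inspected."""
    return {
        "breteau": 100.0 * positive_containers / properties_inspected,
        "larvae_per_property": larvae / properties_inspected,
        "pupae_per_property": pupae / properties_inspected,
        "positive_containers_per_ha": positive_containers / area_ha,
        "larvae_per_ha": larvae / area_ha,
    }
```

Computed per block and interpolated over a geo-referenced map, such indices yield the thematic infestation maps the study evaluates.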

Relevance: 10.00%

Abstract:

Consider the problem of designing an algorithm for acquiring sensor readings. Consider specifically the problem of obtaining an approximate representation of sensor readings where (i) sensor readings originate from different sensor nodes, (ii) the number of sensor nodes is very large, (iii) all sensor nodes are deployed in a small area (dense network) and (iv) all sensor nodes communicate over a communication medium where at most one node can transmit at a time (a single broadcast domain). We present an efficient algorithm for this problem, and our novel algorithm has two desired properties: (i) it obtains an interpolation based on all sensor readings and (ii) it is scalable, that is, its time-complexity is independent of the number of sensor nodes. Achieving these two properties is possible thanks to the close interlinking of the information processing algorithm, the communication system and a model of the physical world.

Relevance: 10.00%

Abstract:

This paper presents a spatial econometric analysis of the number of road accidents with victims in the smallest administrative divisions of Lisbon, taking as a baseline a log-Poisson model of environmental factors. Spatial correlation is investigated both in the raw data and in the residuals of the baseline model, without and with spatially autocorrelated and spatially lagged terms. In all cases, no spatial autocorrelation was detected.
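A standard statistic for the spatial autocorrelation the paper reports absent is Moran's I. The sketch below is generic, with an illustrative chain-adjacency weights matrix rather than the paper's actual Lisbon weights; values near +1 indicate clustering, near -1 dispersion, and near 0 spatial randomness.

```python
import numpy as np

def morans_i(values, W):
    """Moran's I for values on n areal units, with W an n x n spatial weights
    matrix (W[i, j] > 0 for neighbouring units, zero diagonal)."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Illustrative chain adjacency for four areal units in a row.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0

dispersed = morans_i([1.0, -1.0, 1.0, -1.0], W)  # perfectly alternating pattern
clustered = morans_i([1.0, 1.0, -1.0, -1.0], W)  # two homogeneous halves
```

In practice the statistic is compared against its permutation or normal-approximation null distribution before declaring autocorrelation absent, as the paper does for both data and model residuals.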

Relevance: 10.00%

Abstract:

Discrete-time control systems require sample-and-hold circuits to perform the conversion from digital to analog. Fractional-order holds (FROHs) are an interpolation between the classical zero- and first-order holds and can be tuned to produce better system performance. However, the model of the FROH is somewhat opaque, and the design of the system becomes unnecessarily complicated. This paper addresses the modelling of FROHs using the concepts of fractional calculus (FC). For this purpose, two simple fractional-order approximations are proposed whose parameters are estimated by a genetic algorithm. The results are simple to interpret, demonstrating that FC is a useful tool for the analysis of these devices.
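The interpolation between the classical holds is governed by a single gain beta: beta = 0 reproduces the zero-order hold, beta = 1 a first-order (predictive) hold. A minimal sketch of the reconstruction rule u(t) = u[k] + beta (u[k] - u[k-1]) (t - kT) / T (the sample values are illustrative; the paper's fractional-calculus approximations of this device are not reproduced here):

```python
def froh(samples, beta, T, t):
    """Fractional-order hold reconstruction at continuous time t.
    beta = 0 gives a zero-order hold; beta = 1 a predictive first-order hold;
    intermediate beta blends the two, which is what makes the FROH tunable."""
    k = int(t // T)
    u_k = samples[k]
    u_prev = samples[k - 1] if k > 0 else u_k
    return u_k + beta * (u_k - u_prev) * (t - k * T) / T
```

Tuning beta trades the staircase response of the ZOH against the slope extrapolation of the FOH, and it is this extra degree of freedom that can improve closed-loop performance.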

Relevance: 10.00%

Abstract:

Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs over a shared band-limited digital communication network. The use of a shared network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network for use in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between accuracy in the measurement of the input signals and the delay to actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms which we show performs better because it takes into account the changes of the input signal over time within the process of obtaining the approximate interpolation.

Relevance: 10.00%

Abstract:

We focus on large-scale, dense, deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems: for example, MIN can be computed efficiently, and an interpolation function which approximates the sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to enable highly scalable aggregate computations in wireless systems, but no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with characteristics that make dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) on top of this MAC implementation and (iv) performing experiments to prove that such highly scalable aggregate computations in wireless networks are possible.
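The dominance-based MIN computation can be sketched in a few lines. Every node transmits its reading MSB-first; a dominant 0 on the channel overrides recessive 1s, and a node that sends a recessive bit while the channel carries a dominant one withdraws. The simulation below is a simplified model (not the paper's hardware platform), but it shows the key scalability property: the computation finishes in exactly `nbits` bit rounds regardless of how many nodes contend.

```python
def dominance_min(values, nbits):
    """Compute MIN over a shared dominance-based channel (CAN-style arbitration).
    Each round, the channel carries 0 if any contender sends a dominant 0;
    contenders whose bit disagrees with the channel withdraw. After nbits
    rounds the bits observed on the channel spell out the minimum value."""
    contenders = list(values)
    result = 0
    for bit in reversed(range(nbits)):
        bus = 0 if any(not (v >> bit) & 1 for v in contenders) else 1
        result = (result << 1) | bus
        contenders = [v for v in contenders if (v >> bit) & 1 == bus]
    return result
```

MAX follows by arbitrating on the bitwise complement, and the interpolation algorithms in the paper use the same mechanism to let the most informative readings win the channel first.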

Relevance: 10.00%

Abstract:

The availability of small, inexpensive sensor elements enables the deployment of large wired or wireless sensor networks for feeding control systems. Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loop. This paper presents a solution to this problem: representing the sensor measurements with an approximate representation, namely an interpolation of the sensor measurements as a function of space coordinates. A priority-based medium access control (MAC) protocol is used to select the sensor messages with high information content, so that the information from a large number of sensor measurements is conveyed within a few messages. This approach greatly reduces the time for obtaining a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.