894 results for Many-to-many-assignment problem


Relevance: 100.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance: 100.00%

Publisher:

Abstract:

This doctoral thesis proposes a mathematical formulation for simulating routing and wavelength assignment in optical networks, without including constraints that are not inherent to the basic problem, and with the goal of being applicable to any type of optical network with static traffic demand. Establishing a route and then selecting a wavelength is one of the key points for the proper operation of an optical network, since it influences how network resources are managed. Thus, the routing and wavelength assignment process in optical networks, known as RWA (Routing and Wavelength Assignment), requires solutions aimed at its optimization. However, despite the numerous studies aiming to optimize the RWA process, there is, a priori, no solution that could lead to a standardization of that process. Considering that standardization is desirable for consolidating the use of any technology, the thesis described in this work is a Generic Objective Function (FOG, from the Portuguese) addressing the routing and wavelength assignment process, with the aim of establishing a basis from which one or more standards for optical networks could be developed. The FOG was tested, via simulation, both in the Wavelength Assignment process alone and in the RWA process as a whole. In both cases, the tests considered opaque networks and yielded surprising results, given the simplicity of the solution to a non-trivial problem.
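The FOG itself is specific to the thesis, but the baseline it competes with can be illustrated. Below is a minimal sketch of the classic first-fit wavelength-assignment heuristic (a common WA baseline, not the thesis's FOG; function and variable names are illustrative), assuming pre-computed routes and the wavelength-continuity constraint:

```python
def first_fit_wa(routes, num_wavelengths):
    """First-fit wavelength assignment: give each lightpath the
    lowest-index wavelength that is free on every link of its route.
    routes: list of routes, each a list of link identifiers.
    Returns the assigned wavelength index per route (None if blocked)."""
    used = {}  # link id -> set of occupied wavelength indices
    assignment = []
    for route in routes:
        chosen = None
        for w in range(num_wavelengths):
            # wavelength-continuity: w must be free on the whole route
            if all(w not in used.get(link, set()) for link in route):
                chosen = w
                break
        if chosen is not None:
            for link in route:
                used.setdefault(link, set()).add(chosen)
        assignment.append(chosen)
    return assignment
```

With two wavelengths, two lightpaths sharing a link are forced onto different wavelengths; with one, the second request is blocked.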

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

Purpose - The purpose of this paper is twofold: to analyze the computational complexity of the cogeneration design problem, and to present an expert system to solve the proposed problem, comparing such an approach with the traditional search methods available.

Design/methodology/approach - The complexity of the cogeneration problem is analyzed through a transformation from the well-known knapsack problem. Both problems are formulated as decision problems, and it is proven that the cogeneration problem is NP-complete. Thus, several search approaches, such as population heuristics and dynamic programming, could be used to solve the problem. Alternatively, a knowledge-based approach is proposed by presenting an expert system and its knowledge representation scheme.

Findings - The expert system is executed considering two case studies. In the first, a cogeneration plant should meet power, steam, chilled-water and hot-water demands; the expert system presented two different solutions based on high-complexity thermodynamic cycles. In the second case study the plant should meet only power and steam demands; the system presents three different solutions, one of which had never been considered before by our consultant expert.

Originality/value - The expert system approach is not a "blind" method, i.e. it generates solutions based on actual engineering knowledge instead of the search strategies of traditional methods. This means the system is able to explain its choices, making the design rationale for each solution available, which is the main advantage of the expert system approach over traditional search methods. On the other hand, the expert system quite likely does not provide an actual optimal solution; all it can provide is one or more acceptable solutions.
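For reference, the knapsack problem underlying the NP-completeness transformation admits the classic pseudo-polynomial dynamic program mentioned alongside the other search approaches (a textbook sketch, independent of the paper's specific reduction):

```python
def knapsack_max_value(values, weights, capacity):
    """Classic O(n * capacity) dynamic program for the 0/1 knapsack:
    dp[c] holds the best value achievable with total weight <= c.
    Iterating capacities downward ensures each item is used at most once."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
    return dp[capacity]
```

The decision version asked in NP-completeness proofs ("is there a selection of value at least k?") is answered by comparing the returned optimum against k.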

Relevance: 100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

In the instrumental records of daily precipitation, we often encounter one or more periods in which values below some threshold were not registered. Such periods, besides lacking small values, also have a large number of dry days. Their cumulative distribution function is shifted to the right relative to that of other portions of the record with more reliable observations. Such problems are examined in this work, based mostly on the two-sample Kolmogorov–Smirnov (KS) test, in which the portion of the series with the larger number of dry days is compared with the portion with the smaller number. Another relatively common problem in daily rainfall data is the prevalence of integers, either throughout the period of record or in some part of it, likely resulting from truncation during data compilation prior to archiving or from coarse rounding of daily readings by observers. This problem is identified by simple calculation of the proportion of integers in the series, taking the expected proportion as 10%. The above two procedures were applied to the daily rainfall data sets from the European Climate Assessment (ECA), Southeast Asian Climate Assessment (SACA), and Brazilian Water Resources Agency (BRA). Taking a KS statistic D > 0.15 and a corresponding p-value < 0.001 as the condition to classify a given series as suspicious, the proportions of the ECA, SACA, and BRA series falling into this category are, respectively, 34.5%, 54.3%, and 62.5%. With respect to the coarse-rounding problem, the proportions of series exceeding twice the 10% reference level are 3%, 60%, and 43% for the ECA, SACA, and BRA data sets, respectively. A simple way to visualize the two problems addressed here is to plot the time series of daily rainfall over a limited range, for instance, 0–10 mm day⁻¹.
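The two screening procedures can be sketched directly (function names are illustrative and the `scipy` call is an assumption about tooling; the thresholds are the ones stated in the abstract):

```python
import numpy as np
from scipy.stats import ks_2samp

def flag_suspicious(part_a, part_b, d_thresh=0.15, p_thresh=0.001):
    """Compare two portions of a daily rainfall series with the
    two-sample KS test; flag the series as suspicious when
    D > 0.15 and p < 0.001 (the study's criteria)."""
    stat, p = ks_2samp(part_a, part_b)
    return bool(stat > d_thresh and p < p_thresh)

def integer_fraction(series):
    """Proportion of whole-number values in the series; roughly 10%
    is expected for 0.1 mm resolution data, and exceeding twice that
    reference level suggests coarse rounding or truncation."""
    x = np.asarray(series, dtype=float)
    return float(np.mean(x == np.round(x)))
```

Applying `flag_suspicious` to the wet-day values of the two portions (split by their dry-day counts) reproduces the classification step; `integer_fraction` reproduces the rounding check.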

Relevance: 100.00%

Publisher:

Abstract:

This paper proposes a technique for solving the multiobjective environmental/economic dispatch problem using the weighted-sum and ε-constraint strategies, which transform the problem into a set of single-objective problems. In the first strategy, the objective function is a weighted sum of the environmental and economic objective functions. The second strategy treats one of the objective functions, in this case the environmental function, as a problem constraint, bounded above by a constant. A specific predictor-corrector primal-dual interior point method using the modified log barrier is proposed for solving the set of single-objective problems generated by these strategies. The purpose of the modified-barrier approach is to solve the problem with a relaxation of its original feasible region, enabling the method to be initialized with infeasible points. The tests involving the proposed solution technique indicate i) the efficiency of the proposed method with respect to initialization with infeasible points, and ii) its ability to find a set of efficient solutions for the multiobjective environmental/economic dispatch problem.
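A toy illustration of the weighted-sum strategy (the candidate set and the two objective functions below are made up for demonstration; the paper solves continuous dispatch problems with an interior-point method, not by enumeration):

```python
def weighted_sum_front(candidates, cost, emission, weights):
    """Weighted-sum scalarization: for each weight w, minimize
    w * cost + (1 - w) * emission over a finite candidate set and
    collect the distinct minimizers, tracing efficient points."""
    front = []
    for w in weights:
        best = min(candidates,
                   key=lambda x: w * cost(x) + (1 - w) * emission(x))
        if best not in front:
            front.append(best)
    return front
```

Sweeping w from 1 to 0 moves the selected dispatch from the cheapest point toward the cleanest one, which is exactly the trade-off set the multiobjective formulation targets.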

Relevance: 100.00%

Publisher:

Abstract:

In this action research study of my classroom of 7th grade mathematics, I investigated whether the use of decoding would increase the students' ability to problem solve. I discovered that knowing how to decode a word problem is only one facet of being a successful problem solver. I also discovered that confidence, effective instruction, and practice have an impact on improving problem solving skills. As a result of this research, I plan to revise my problem solving guide so that it can be used by any classroom teacher. I also plan to keep adding to my list of math problem solving clue words and to share it with others. My hope is that I will be able to explain my project to math teachers in my district to make them aware of the importance of knowing the steps to solve a word problem.

Relevance: 100.00%

Publisher:

Abstract:

In this action research study of my classroom of 8th grade mathematics students, I investigated whether learning different problem solving strategies helped students successfully solve problems. I also investigated whether students' knowledge of the topics involved in story problems had an impact on their success rates. I discovered that students were more successful after learning different problem solving strategies and when given problems with which they had experience. I also discovered that students put forth greater effort when they approached the story problem like a game, instead of as just another math problem they had to solve. An unexpected result was that the students' degree of effort had a major impact on their success rate. As a result of this research, I plan to continue to focus on problem solving strategies in my classes. I also plan to improve my methods for getting students' full effort in class.

Relevance: 100.00%

Publisher:

Abstract:

In this action research study of my sixth grade mathematics class, I investigated the influence a change in my questioning tactics would have on students’ ability to determine answer reasonability to mathematics problems. During the course of my research, students were asked to explain their problem solving and solutions. Students, amongst themselves, discussed solutions given by their peers and the reasonability of those solutions. They also completed daily questionnaires that inquired about my questioning practices, and 10 students were randomly chosen to be interviewed regarding their problem solving strategies. I discovered that by placing more emphasis on the process rather than the product, students became used to questioning problem solving strategies and explaining their reasoning. I plan to maintain this practice in the future while incorporating more visual and textual explanations to support verbal explanations.

Relevance: 100.00%

Publisher:

Abstract:

Planck scale physics may influence the evolution of cosmological fluctuations in the early stages of cosmological evolution. Because of the quasiexponential redshifting, which occurs during an inflationary period, the physical wavelengths of comoving scales that correspond to the present large-scale structure of the Universe were smaller than the Planck length in the early stages of the inflationary period. This trans-Planckian effect was studied before using toy models. The Horava-Lifshitz (HL) theory offers the chance to study this problem in a candidate UV complete theory of gravity. In this paper we study the evolution of cosmological perturbations according to HL gravity assuming that matter gives rise to an inflationary background. As is usually done in inflationary cosmology, we assume that the fluctuations originate in their minimum energy state. In the trans-Planckian region the fluctuations obey a nonlinear dispersion relation of Corley-Jacobson type. In the "healthy extension" of HL gravity there is an extra degree of freedom which plays an important role in the UV region but decouples in the IR, and which influences the cosmological perturbations. We find that in spite of these important changes compared to the usual description, the overall scale invariance of the power spectrum of cosmological perturbations is recovered. However, we obtain oscillations in the spectrum as a function of wave number with a relative amplitude of order unity and with an effective frequency which scales nonlinearly with wave number. Taking the usual inflationary parameters we find that the frequency of the oscillations is so large as to render the effect difficult to observe.
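For reference, a Corley–Jacobson-type modification of the dispersion relation has the generic schematic form (the exact coefficients in the HL setting depend on the parameters of the theory; this is the standard template, not the paper's specific relation):

```latex
\omega_{\mathrm{phys}}^{2}
  \;=\;
k_{\mathrm{phys}}^{2}
\left[\, 1 + b_{m} \left( \frac{k_{\mathrm{phys}}}{k_{C}} \right)^{2m} \right],
\qquad m \ge 1,
```

where $k_C$ is the characteristic ultraviolet scale. For the anisotropic $z=3$ scaling of Horava-Lifshitz gravity, the dominant trans-Planckian term corresponds to $m=2$, i.e. $\omega^2 \sim k^6/k_C^4$ deep in the UV, while the relativistic $\omega^2 = k^2$ behavior is recovered in the IR.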

Relevance: 100.00%

Publisher:

Abstract:

In this paper we study a variational problem derived from a computer vision application: video camera calibration with a smoothing constraint. By video camera calibration we mean estimating the location, orientation and lens zoom setting of the camera for each video frame, taking into account visible image features. To simplify the problem we assume that the camera is mounted on a tripod; in that case, for each frame captured at time t, the calibration is given by 3 parameters: (1) P(t) (PAN), the rotation about the tripod's vertical axis; (2) T(t) (TILT), the rotation about the tripod's horizontal axis; and (3) Z(t) (CAMERA ZOOM), the camera lens zoom setting. The calibration function t -> u(t) = (P(t), T(t), Z(t)) is obtained as a minimum of an energy function I[u]. In this paper we study the existence of minima of this energy function as well as the solutions of the associated Euler-Lagrange equations.
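A discrete sketch of such an energy and its minimization (the quadratic data term, the stand-in per-frame observations, and the gradient-descent scheme are all illustrative assumptions; the paper works with the continuous functional I[u] and its Euler-Lagrange equations):

```python
import numpy as np

def smooth_calibration(observed, lam, iters=2000, step=0.05):
    """Minimize a discretized energy of the form
        I[u] = sum_t |u_t - observed_t|^2 + lam * sum_t |u_{t+1} - u_t|^2
    by gradient descent. observed: (T, 3) array of noisy per-frame
    (pan, tilt, zoom) estimates, standing in for the feature-based
    data term; the second sum is the smoothing constraint."""
    u = observed.astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * (u - observed)          # data-fidelity gradient
        diff = u[1:] - u[:-1]                # forward differences
        grad[1:] += 2.0 * lam * diff         # smoothing gradient
        grad[:-1] -= 2.0 * lam * diff
        u = u - step * grad
    return u
```

An isolated spike in the observations is damped toward its neighbors while nearby frames are pulled up slightly, which is the qualitative effect of the smoothing term.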

Relevance: 100.00%

Publisher:

Abstract:

This thesis investigates Decomposition and Reformulation for solving Integer Linear Programming problems. The method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, is applicable as a black-box solution algorithm, and works as a complement and alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we choose to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be expressed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops.
The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated into a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a significant speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
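The master/pricing loop can be illustrated end-to-end on the classic cutting-stock problem (this is the textbook Gilmore-Gomory application of DWD, not the thesis's automatic decomposition framework; `scipy` is assumed for the master LP):

```python
import numpy as np
from scipy.optimize import linprog

def price_pattern(lengths, duals, W):
    """Pricing subproblem: integer knapsack maximizing the dual value
    of a cutting pattern subject to the roll length W. Returns the
    best dual value and the corresponding pattern (piece counts)."""
    best = [0.0] * (W + 1)
    choice = [None] * (W + 1)
    for c in range(1, W + 1):
        best[c], choice[c] = best[c - 1], choice[c - 1]
        for i, li in enumerate(lengths):
            if li <= c and best[c - li] + duals[i] > best[c]:
                best[c] = best[c - li] + duals[i]
                choice[c] = (i, c - li)
    pattern = [0] * len(lengths)
    c = W
    while choice[c] is not None:
        i, c = choice[c]
        pattern[i] += 1
    return best[W], pattern

def cutting_stock_lp(lengths, demands, W, max_iters=100):
    """Column generation for the LP relaxation: start from trivial
    single-piece-type patterns, then alternate master LP and pricing
    until no pattern has negative reduced cost."""
    n = len(lengths)
    columns = [[(W // lengths[i]) if j == i else 0 for j in range(n)]
               for i in range(n)]
    for _ in range(max_iters):
        A = np.array(columns, dtype=float).T
        # master: min sum(x) s.t. A x >= demands, x >= 0
        res = linprog(np.ones(len(columns)), A_ub=-A,
                      b_ub=-np.array(demands, dtype=float), method="highs")
        duals = -res.ineqlin.marginals
        value, pattern = price_pattern(lengths, duals, W)
        if value <= 1.0 + 1e-9:   # reduced cost 1 - value >= 0: LP optimal
            return res.fun, columns
        columns.append(pattern)
    return res.fun, columns
```

The stopping test is exactly the pricing check described above: a new column enters only while its reduced cost is negative; embedding this loop in branch-and-bound yields branch-and-price.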

Relevance: 100.00%

Publisher:

Abstract:

In this thesis we have developed solutions to common issues regarding widefield microscopes, facing the problem of the intensity inhomogeneity of an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular we proposed, for both light and fluorescence microscopy, non-parametric multi-image methods in which the vignetting function is estimated directly from the sample, without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working online. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to stitch the images together accurately. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcomes of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we proved that the approach we developed performs better in both synthetic and real cases.
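The multi-image vignetting idea can be sketched with a per-pixel median across frames (a simplified stand-in for the thesis's non-parametric estimators; it assumes the sample content varies across frames while the vignetting field stays fixed):

```python
import numpy as np

def estimate_vignetting(stack):
    """stack: (N, H, W) array of raw frames of comparable content.
    The per-pixel median across frames suppresses frame-specific
    features, leaving the shared vignetting field; normalize by the
    mean so the estimate is a unitless gain map."""
    flat = np.median(stack, axis=0)
    return flat / flat.mean()

def correct(image, vignetting, eps=1e-8):
    """Flat-field correction: divide out the estimated gain map."""
    return image / (vignetting + eps)
```

Dividing each raw frame by the estimated gain map removes the shading toward the image corners, which is the flat-field correction step the thesis builds on.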