891 results for Linear and multilinear programming
Abstract:
This paper obtains a new, accurate model for sensitivity in power systems and uses it in conjunction with linear programming to solve load-shedding problems with a minimum loss of load. For cases where the error in the sensitivity model increases, other linear and quadratic programming models have been developed, taking the currents at load buses, rather than the load powers, as variables. A weighted error criterion has been used to take the priority schedule into account; it can be either a linear or a quadratic function of the errors, and, depending upon the chosen function, the appropriate programming technique is employed.
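To make the kind of formulation described above concrete, here is a minimal, hedged sketch of a priority-weighted load-shedding linear program (the network size, sensitivity matrix, weights and limits are invented purely for illustration and are not taken from the paper):

```python
# Minimal load-shedding LP sketch (illustrative data only).
# Decision variables: curtailment d_i at each load bus.
# Objective: minimize the priority-weighted shed load, sum_i w_i * d_i.
# Constraints: the linearized (sensitivity-based) relief S @ d must cover the
# line overloads, and no bus can shed more than its connected load.
import numpy as np
from scipy.optimize import linprog

w = np.array([1.0, 2.0, 5.0])            # priority weights (higher = more important load)
P = np.array([80.0, 50.0, 120.0])        # connected load at each bus (MW)
S = np.array([[0.4, 0.1, 0.3],           # sensitivity of each overloaded line's flow
              [0.2, 0.5, 0.1]])          # to curtailment at each load bus
overload = np.array([20.0, 15.0])        # overload (MW) to be relieved on each line

# linprog minimizes c @ d subject to A_ub @ d <= b_ub, so S @ d >= overload
# is written as -S @ d <= -overload.
res = linprog(c=w, A_ub=-S, b_ub=-overload, bounds=[(0.0, p) for p in P])
print("shed per bus (MW):", res.x, " total weighted shed:", res.fun)
```

Replacing the linear objective with a weighted sum of squared errors turns the same formulation into the quadratic-programming variant mentioned in the abstract.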
Abstract:
Diffuse optical tomography (DOT) is one of the ways to probe highly scattering media such as tissue using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and photon transport through the tissue is modelled using diffusion theory. The inverse problem is often solved through iterative methods based on nonlinear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and after minimization it reduces to the linear system Ax = b. The spatial distribution of the optical parameters can be obtained by solving this equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there will be an error or correction term for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem by a linear combination of the system matrix and the error in the solution. By propagating the error information e (obtained from the previous iteration) to the minimization function f(x), we can rewrite the minimization function as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. The self-guided, spatially weighted prior e^T A e (where e is the error in estimating x) along the principal nodes facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularized algorithms.
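As a rough, hedged sketch of the linear-algebra step referred to above (the matrix and right-hand side are random stand-ins, not a DOT forward model), the minimization of such a quadratic cost reduces to a linear system that is solved at each iteration of the reconstruction:

```python
# Sketch of the per-iteration linear solve (illustrative stand-in data).
# Minimizing a quadratic cost f(x) = x^T A x - b^T x + c with symmetric
# positive-definite A leads to a linear system A x = b (constant factors
# absorbed into A and b), solved here with conjugate gradients as one would
# inside an iterative DOT reconstruction loop.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive-definite stand-in
b = rng.standard_normal(n)

x, info = cg(A, b)                   # iterative solve of A x = b (info == 0 on convergence)
print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))
```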
Abstract:
The trapezoidal rule, which is a special case of the Newmark family of algorithms, is one of the most widely used methods for transient hyperbolic problems. In this work, we show that this rule conserves linear and angular momenta and energy in the case of undamped linear elastodynamics problems, and an "energy-like measure" in the case of undamped acoustic problems. These conservation properties thus provide a rational basis for using this algorithm. In linear elastodynamics problems, variants of the trapezoidal rule that incorporate "high-frequency" dissipation are often used, since the higher frequencies, which are not approximated properly by the standard displacement-based approach, often result in unphysical behavior. Instead of modifying the trapezoidal algorithm, we propose using a hybrid finite element framework for constructing the stiffness matrix. Hybrid finite elements, which are based on a two-field variational formulation involving displacements and stresses, are known to approximate the eigenvalues much more accurately than the standard displacement-based approach, thereby either bypassing or reducing the need for high-frequency dissipation. We show this by means of several examples, where we compare the numerical solutions obtained using the displacement-based and hybrid approaches against analytical solutions.
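For concreteness, a minimal sketch of the trapezoidal rule (the Newmark scheme with beta = 1/4, gamma = 1/2) applied to an undamped linear system M u'' + K u = 0 is given below; the two-degree-of-freedom mass and stiffness matrices are invented for the example, and the printed total energy should remain constant up to round-off, in line with the conservation property discussed above:

```python
# Trapezoidal rule (Newmark beta = 1/4, gamma = 1/2) for M u'' + K u = 0.
# Illustrative 2-DOF system; the energy 0.5 v^T M v + 0.5 u^T K u is conserved.
import numpy as np

M = np.diag([2.0, 1.0])
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])
dt, nsteps = 0.01, 1000
beta, gamma = 0.25, 0.5

u = np.array([1.0, 0.0])                    # initial displacement
v = np.zeros(2)                             # initial velocity
a = np.linalg.solve(M, -K @ u)              # initial acceleration from M a = -K u

Keff = M + beta * dt**2 * K                 # effective matrix (constant for linear problems)
for _ in range(nsteps):
    u_pred = u + dt * v + (0.5 - beta) * dt**2 * a      # displacement predictor
    v_pred = v + (1.0 - gamma) * dt * a                 # velocity predictor
    a = np.linalg.solve(Keff, -K @ u_pred)              # new acceleration
    u = u_pred + beta * dt**2 * a                       # corrected displacement
    v = v_pred + gamma * dt * a                         # corrected velocity

energy = 0.5 * v @ M @ v + 0.5 * u @ K @ u
print("total energy after integration:", energy)        # ~3.0, same as at the start
```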
Abstract:
The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.
If dim M = dim H - 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.
The projective bound Q, defined to be the supremum of the operator norm of P_M over all subspaces, lies in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection onto (kernel P_M)^⊥ with respect to the dual norm. The projective bounds of a norm and its dual are equal.
The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to ensure that (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken with respect to the dual norms.
In all results, the real and complex cases are handled in a completely parallel fashion.
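For readability, the two quantitative statements above can be restated in standard notation (this is only a re-typesetting of the quoted results, not additional material):

```latex
% Restatement of the quoted results; P_M, Q, F^+ are as defined in the abstract.
\[
  Q = \sup_{M \subseteq H} \lVert P_M \rVert, \qquad 1 \le Q < 2,
\]
\[
  \min_{\mathrm{rank}\, G \,<\, \mathrm{rank}\, F} \lVert F - G \rVert
    = \frac{c}{\lVert F^{+} \rVert},
  \qquad c = 1 \ \text{if the range of } F \text{ fills its space},
  \quad 1 \le c < Q \ \text{otherwise}.
\]
```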
Abstract:
In this paper we present a methodology, and its implementation, for the design and verification of the programming circuit used in a family of application-specific FPGAs that share a common architecture. Each member of the family differs either in the types of functional blocks it contains or in the number of blocks of each type. A parametrized design methodology is presented here to achieve this goal. Even though our focus is on the programming circuitry that provides the interface between the FPGA core circuit and the external programming hardware, the parametrized design method can be generalized to the design of the entire chip for all members of the FPGA family. The method covers the generation of the design RTL files and the support files for synthesis, place-and-route layout and simulation. The proposed method has proven to work smoothly within the complete chip design methodology. We describe the application of this method to the design of the programming circuit in detail, including the design flow from behavioral-level design to final layout, as well as verification. Different package options and different programming modes are included in the description of the design. The circuit implementation is based on SMIC 0.13-micron CMOS technology.
Abstract:
The technical challenges in the design and programming of signal processors for multimedia communication are discussed. The development of terminal equipment to meet this demand presents a significant technical challenge, considering that it is highly desirable for the equipment to be cost effective, power efficient, versatile, and extensible for future upgrades. The main challenges in the design and programming of signal processors for multimedia communication are general-purpose signal processor design, application-specific signal processor design, operating systems and programming support, and application programming. The FFT size is programmable so that it can be used for various OFDM-based communication systems, such as digital audio broadcasting (DAB), digital video broadcasting-terrestrial (DVB-T) and digital video broadcasting-handheld (DVB-H). The clustered architecture design and distributed ping-pong register files in the PAC DSP raise new challenges for code generation.
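As a loose illustration of the programmable FFT size mentioned above (the mode names and lengths below are commonly cited nominal values and are used here only as example parameters; the subcarrier mapping is deliberately naive), an OFDM modulator parametrized by FFT length might look like this:

```python
# Sketch of an OFDM modulator whose FFT size is a run-time parameter, so one
# routine can serve different broadcast modes (nominal example sizes:
# DAB mode I ~ 2048, DVB-T 2K/8K, DVB-H 4K).
import numpy as np

FFT_SIZES = {"DAB": 2048, "DVB-T_2K": 2048, "DVB-H_4K": 4096, "DVB-T_8K": 8192}

def ofdm_modulate(symbols: np.ndarray, nfft: int, cp_len: int) -> np.ndarray:
    """Map one block of frequency-domain symbols to a time-domain OFDM symbol."""
    assert len(symbols) <= nfft
    freq = np.zeros(nfft, dtype=complex)
    freq[:len(symbols)] = symbols                  # naive subcarrier mapping for the sketch
    time = np.fft.ifft(freq) * np.sqrt(nfft)       # programmable-size inverse FFT
    return np.concatenate([time[-cp_len:], time])  # prepend the cyclic prefix

rng = np.random.default_rng(0)
qpsk = (1 - 2 * rng.integers(0, 2, 1512)) + 1j * (1 - 2 * rng.integers(0, 2, 1512))
nfft = FFT_SIZES["DVB-T_2K"]
tx = ofdm_modulate(qpsk, nfft=nfft, cp_len=nfft // 8)
print("OFDM symbol length including cyclic prefix:", tx.shape[0])
```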
Abstract:
In this paper, we propose a novel linear transmit precoding strategy for multiple-input, multiple-output (MIMO) systems employing improper signal constellations. In particular, improved zero-forcing (ZF) and minimum mean square error (MMSE) precoders are derived based on modified cost functions, and are shown to achieve superior performance without loss of spectral efficiency compared to the conventional linear and nonlinear precoders. The superiority of the proposed precoders over the conventional solutions is verified by both simulation and analytical results. The novel approach to precoding design is also applied to the case of an imperfect channel estimate with a known error covariance, as well as to the multi-user scenario where precoding based on the nullspace of the channel transmission matrix is employed to decouple the multi-user channels. In both cases, the improved precoding schemes yield significant performance gains compared to their conventional counterparts.
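For context, the conventional baselines mentioned above can be sketched as follows (this is a hedged illustration of standard ZF and regularized/MMSE linear precoding for a known channel, not the paper's improved precoders for improper constellations; the channel, SNR and symbols are invented):

```python
# Conventional linear precoders for a MIMO channel y = H x + n (sketch only).
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
snr_lin = 10.0                                   # assumed SNR (linear scale)

# Zero-forcing precoder: right pseudo-inverse of H.
F_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# MMSE-style (regularized ZF) precoder: regularization set by the noise level.
F_mmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + (nr / snr_lin) * np.eye(nr))

s = np.sign(rng.standard_normal(nr))             # BPSK symbols (an improper constellation)
x = F_zf @ s                                     # precoded transmit vector
print("ZF removes inter-stream interference:", np.allclose(H @ x, s))
```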
Abstract:
Many-electron systems confined to a quasi-one-dimensional geometry by a cylindrical distribution of positive charge have been investigated by density functional computations in the unrestricted local spin density approximation. Our investigations have focused on the low-density regime, in which electrons are localized. The results reveal a wide variety of different charge and spin configurations, including linear and zig-zag chains, single- and double-strand helices, and twisted chains of dimers. The spin-spin coupling turns from weakly antiferromagnetic at relatively high density to weakly ferromagnetic at the lowest densities considered in our computations. The stability of linear chains of localized charge has been investigated by analyzing the radial dependence of the self-consistent potential and by computing the dispersion relation of low-energy harmonic excitations.
Abstract:
This paper investigates the computation of lower/upper expectations that must cohere with a collection of probabilistic assessments and a collection of judgements of epistemic independence. New algorithms, based on multilinear programming, are presented, both for independence among events and among random variables. Separation properties of graphical models are also investigated.
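As a simplified, hedged sketch of the underlying computation (with probability-interval assessments only; it is the independence judgements that make the full problem multilinear rather than linear, and they are omitted here; all numbers are invented), coherent lower and upper expectations over a credal set can be obtained by linear programming:

```python
# Lower/upper expectation of a gamble f over three outcomes, subject to
# probability assessments (illustrative numbers). Without independence
# judgements the feasible probability set is a polytope, so the bounds are
# solutions of linear programs; independence constraints would make the
# programs multilinear.
import numpy as np
from scipy.optimize import linprog

f = np.array([1.0, 0.0, -2.0])                   # gamble: payoff for each outcome
A_ub = np.array([[-1.0, -1.0, 0.0],              # P(outcome 1 or 2) >= 0.5
                 [0.0, 0.0, 1.0]])               # P(outcome 3)       <= 0.4
b_ub = np.array([-0.5, 0.4])
A_eq = np.ones((1, 3)); b_eq = np.array([1.0])   # probabilities sum to one
bounds = [(0.0, 1.0)] * 3

lower = linprog(f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
upper = linprog(-f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("lower expectation:", lower.fun, " upper expectation:", -upper.fun)
```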
Abstract:
This paper investigates a representation language with flexibility inspired by probabilistic logic and compactness inspired by relational Bayesian networks. The goal is to handle propositional and first-order constructs together with precise, imprecise, indeterminate and qualitative probabilistic assessments. The paper shows how this can be achieved through the theory of credal networks. New exact and approximate inference algorithms based on multilinear programming and iterated/loopy propagation of interval probabilities are presented; their superior performance, compared to existing algorithms, is shown empirically.
Abstract:
This paper explores semi-qualitative probabilistic networks (SQPNs) that combine numeric and qualitative information. We first show that exact inference with SQPNs is NP^PP-complete. We then show that existing qualitative relations in SQPNs (plus probabilistic logic and imprecise assessments) can be dealt with effectively through multilinear programming. We then discuss learning: we consider a maximum likelihood method that generates point estimates given an SQPN and empirical data, and we describe a Bayesian-minded method that employs the Imprecise Dirichlet Model to generate set-valued estimates.
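As a small, hedged illustration of the Imprecise Dirichlet Model step mentioned last (the counts and the choice s = 2 are invented for the example), the set-valued estimate of a categorical probability is an interval per category:

```python
# Imprecise Dirichlet Model (IDM) set-valued estimates for a categorical
# variable: for category i with count n_i out of N observations and prior
# strength s, the estimated probability lies in
#   [ n_i / (N + s),  (n_i + s) / (N + s) ].
# Counts and s below are made up for illustration.
import numpy as np

counts = np.array([12, 3, 5])        # observed counts per category
s = 2.0                              # IDM hyperparameter (prior strength)
N = counts.sum()

lower = counts / (N + s)
upper = (counts + s) / (N + s)
for i, (lo, hi) in enumerate(zip(lower, upper)):
    print(f"P(category {i}) in [{lo:.3f}, {hi:.3f}]")
```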
Abstract:
In order to minimize the risk of failures or major renewals of hull structures during the ship's expected life span, it is imperative that precautions be taken to ensure an adequate margin of safety against any one, or any combination, of the failure modes, including excessive yielding, buckling, brittle fracture, fatigue and corrosion. The most efficient system for combating underwater corrosion is cathodic protection. The basic principle of this method is that the ship's structure is made cathodic, i.e. the anodic (corrosion) reactions are suppressed by the application of an opposing current, and the ship is thereby protected. This paper deals with the state of the art in cathodic protection and its programming in ship structures.
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem becomes very challenging, because the quadratic form is completely dense and the memory needed to store the problem grows with the square of the number of data points. Training problems arising in some real applications with large data sets are therefore impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
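As a hedged illustration of the optimization problem described above (a tiny dense instance solved directly with a general-purpose solver, not the paper's decomposition algorithm; the toy data are invented), the SVM dual with linear and box constraints can be written and solved as:

```python
# Dual SVM training problem on a toy two-class dataset (illustrative only;
# the decomposition method is for problems far too large to solve in one piece).
#   minimize    0.5 * a^T Q a - sum_i a_i
#   subject to  0 <= a_i <= C (box constraints) and sum_i a_i y_i = 0 (linear constraint),
# where Q_ij = y_i y_j <x_i, x_j>.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (20, 2)), rng.normal(1.5, 1.0, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
C = 1.0
Q = (y[:, None] * y[None, :]) * (X @ X.T)        # dense quadratic form of the dual

def neg_dual(a):                                 # negative dual objective (to minimize)
    return 0.5 * a @ Q @ a - a.sum()

res = minimize(neg_dual, x0=np.zeros(len(y)), method="SLSQP",
               bounds=[(0.0, C)] * len(y),
               constraints={"type": "eq", "fun": lambda a: a @ y})
alpha = res.x
w = (alpha * y) @ X                              # primal weight vector (linear kernel)
print("support vectors:", int((alpha > 1e-6).sum()), " w:", w)
```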
Abstract:
We examine the long-run relationship between the parallel and the official exchange rate in Colombia over two regimes: a crawling-peg period and a more flexible crawling-band one. The short-run adjustment process of the parallel rate is examined in both a linear and a nonlinear context. We find that the change from the crawling-peg to the crawling-band regime did not affect the long-run relationship between the official and parallel exchange rates, but altered the short-run dynamics. Non-linear adjustment seems appropriate for the first period, mainly due to strict foreign controls that cause distortions in the transition back to equilibrium once a disequilibrium occurs.
Abstract:
Does the 2009 Stockholm Programme matter? This paper addresses the controversies experienced at EU institutional levels as to ‘who’ should have ownership of the contours of the EU’s policy and legislative multiannual programming in the Area of Freedom, Security and Justice (AFSJ) in a post-Lisbon Treaty landscape. It examines the struggles around the third multiannual programme on the AFSJ, i.e. the Stockholm Programme, and the dilemmas affecting its implementation. The latest affair to emerge relates to the European Commission’s failure to fulfil its commitment to provide a mid-term evaluation of the Stockholm Programme’s implementation by mid-2012, as requested by both the Council and the European Parliament. This paper shifts the focus to a broader perspective and raises the following questions: Is the Stockholm Programme actually relevant? What do the discussions behind its implementation tell us about the new institutional dynamics affecting European integration on the AFSJ? Does the EU actually need a new (post-Stockholm) multiannual programme for the period 2015–20? And lastly, what role should the EP play in legislative and policy programming in order to further strengthen the democratic accountability and legitimacy of the EU’s AFSJ?