947 results for linear programming applications
Abstract:
This article considers a complex of questions connected with the analysis, estimation and structural-parametric optimization of dynamic systems. The connection of such problems with the control of beams of trajectories is emphasized. Special attention is given to a review and analysis of the scientific research carried out, with emphasis on its constructiveness and applied orientation. The efficiency of the developed algorithms and software is demonstrated on problems of modeling and optimization of output beam characteristics in linear resonance accelerators.
Abstract:
A concept of an educational game for learning programming languages is presented. The idea of learning programming languages and improving programming skills by programming the behavior of game characters is described. Rules for describing learning courses for use in games are suggested. The concept is implemented in a game for learning the C# programming language. A common game architecture is modified for use in the educational game. The game engine is built on top of the graphical engine Ogre3D and extended with game logic. The game has been developed as an industry-level commercial product and is planned for sale to educational institutions.
Abstract:
The modern grid system, or smart grid, is likely to be populated with multiple distributed energy sources, e.g. wind power, PV power and Plug-in Electric Vehicles (PEV). It will also include a variety of linear and nonlinear loads. The intermittent nature of renewable energies such as PV and wind turbines, together with the increased penetration of Electric Vehicles (EV), makes the stable operation of the utility grid system challenging. In order to ensure stable operation of the utility grid system and to support smart grid functionalities such as fault ride-through, frequency response, reactive power support, and mitigation of power quality issues, an energy storage system (ESS) could play an important role. A fast-acting bidirectional energy storage system which can rapidly provide and absorb power and/or VARs for a sufficient time is a potentially valuable tool to support this functionality. Battery energy storage systems (BESS) are one of a range of suitable energy storage systems because they can provide and absorb power for a sufficient time and respond reasonably fast. Conventional BESS already on the grid system are made up primarily of new batteries. The cost of these batteries can be high, which makes most BESS an expensive solution. In order to assist the move towards a low carbon economy and to reduce battery cost, this work researches the opportunities for the re-use, on the electricity grid system, of batteries after their primary use in low and ultra-low carbon vehicles (EV/HEV). This research aims to develop a new generation of second life battery energy storage systems (SLBESS) which could interface to the low/medium voltage network to provide the necessary grid support in a reliable and cost-effective manner. The reliability/performance of these batteries is not clear, but is almost certainly worse than that of a new battery. Manufacturers indicate that a mixture of gradual degradation and sudden failure are both possible, and that failure mechanisms are likely to be related to how hard the batteries were driven inside the vehicle. Figures from a number of sources, including the DECC (Department of Energy and Climate Change) and the Arup and Cenex reports, indicate anything from 70,000 to 2.6 million electric and hybrid vehicles on the road by 2020. Once the vehicle battery has degraded to around 70-80% of its capacity it is considered to be at the end of its first life application. This leaves capacity available for a second life at a much cheaper cost than a new BESS. Assuming a battery capability of around 5-18 kWh (MHEV 5 kWh - BEV 18 kWh battery) and an approximate 10-year life span, this equates to a projection of battery storage capability available for second life of >1 GWh by 2025. Moreover, each vehicle manufacturer has different specifications for battery chemistry, number and arrangement of battery cells, capacity, voltage, size, etc. To enable research and investment in this area and to maximize the remaining life of these batteries, one of the design challenges is to combine these hybrid batteries into a grid-tie converter in which their different performance characteristics and parameter variation can be catered for, and in which a hot-swapping mechanism is available so that, as a battery ends its second life, it can be replaced without affecting the overall system operation.
This integration of either single types of batteries with vastly different performance capability or a hybrid battery system into a grid-tie energy storage system differs from existing work on battery energy storage systems (BESS), which deals with a single type of battery with common characteristics. This thesis addresses and solves the power electronic design challenges in integrating second life hybrid batteries into a grid-tie energy storage unit for the first time. This study details a suitable multi-modular power electronic converter and its various switching strategies, which can integrate widely different batteries to a grid-tie inverter irrespective of their characteristics, voltage levels and reliability. The proposed converter provides high efficiency and enhanced control flexibility, and has the capability to operate in different operational modes from input to output. Designing an appropriate control system for this kind of hybrid battery storage system is also important because of the variation in battery types, differences in characteristics and different levels of degradation. This thesis proposes a generalised distributed power sharing strategy based on a weighting function that aims to use a set of hybrid batteries optimally, according to their relative characteristics, while providing the necessary grid support by distributing the power between the batteries. The strategy is adaptive in nature and varies as the individual battery characteristics change in real time, for example as a result of degradation. A suitable bidirectional distributed control strategy, or module-independent control technique, has been developed corresponding to each mode of operation of the proposed modular converter. Stability is an important consideration in the control of all power converters, and as such this thesis investigates the control stability of the multi-modular converter in detail. Many controllers use PI/PID-based techniques with fixed control parameters; however, this was not found to be suitable from a stability point of view. Issues of control stability with this controller type under one of the operating modes have led to the development of an alternative adaptive, nonlinear Lyapunov-based control for the modular power converter. Finally, a detailed simulation and experimental validation of the proposed power converter operation, power sharing strategy, proposed control structures and control stability issues has been undertaken using a grid-connected, laboratory-based multi-modular hybrid battery energy storage system prototype. The experimental validation has demonstrated the feasibility of this new energy storage system operation for use in future grid applications.
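As an illustration of the kind of weighting-function-based power sharing described above, the sketch below (a hypothetical Python illustration, not the thesis's actual strategy; weighting each module by remaining capacity and state-of-charge headroom is an assumption made here) distributes a grid power request among hybrid battery modules in proportion to adaptive weights:

    # Hypothetical sketch: share a grid power request among second-life battery
    # modules in proportion to a weighting function. Weighting by remaining
    # capacity and state of charge is an illustrative assumption, not the
    # strategy proposed in the thesis.
    def share_power(p_request_kw, modules):
        # modules: list of dicts with 'capacity_kwh' and 'soc' (0..1)
        discharging = p_request_kw > 0
        weights = [m['capacity_kwh'] * (m['soc'] if discharging else 1.0 - m['soc'])
                   for m in modules]
        total = sum(weights)
        if total == 0:
            return [0.0] * len(modules)
        return [p_request_kw * w / total for w in weights]

    # Example: a 10 kW discharge request shared between two degraded modules.
    modules = [{'capacity_kwh': 12.0, 'soc': 0.8}, {'capacity_kwh': 5.0, 'soc': 0.4}]
    print(share_power(10.0, modules))

As the estimated capacity and state of charge of each module change in real time, the weights, and hence the power split, adapt accordingly.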
Abstract:
The task of constructing smooth and stable decision rules in logical recognition models is considered. Logical regularities of classes are defined as conjunctions of one-place predicates that determine the membership of feature values in intervals of the real axis. The conjunctions are true on special non-extendable subsets of reference objects of some class and are optimal. The standard approach to constructing linear decision rules for given sets of logical regularities consists in the realization of voting schemes. The weighting coefficients of the voting procedures are either chosen heuristically or obtained as solutions of a complex optimization task. Modifications of linear decision rules are proposed that are based on the search for maximal estimates of standard objects for their classes and use approximations of logical regularities by smooth sigmoid functions.
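By way of illustration of the final idea, the sketch below (a hypothetical Python illustration; the steepness parameter and function names are assumptions, not the authors' formulation) replaces the interval-membership predicates of a logical regularity by smooth sigmoid factors, so that the resulting voting rule becomes differentiable:

    import math

    # A regularity "a <= x_j <= b" is approximated by a product of two sigmoids,
    # which is close to 1 inside the interval and close to 0 outside it.
    def sigmoid(t, k=10.0):          # k controls how sharply the step is approximated
        return 1.0 / (1.0 + math.exp(-k * t))

    def smooth_interval(x, a, b, k=10.0):
        return sigmoid(x - a, k) * sigmoid(b - x, k)

    def smooth_regularity(x_vec, intervals, k=10.0):
        # conjunction of one-place predicates -> product of smooth memberships
        val = 1.0
        for j, (a, b) in intervals.items():
            val *= smooth_interval(x_vec[j], a, b, k)
        return val

    # Example: two-feature regularity 0 <= x0 <= 1 and 2 <= x1 <= 3
    print(smooth_regularity([0.5, 2.5], {0: (0.0, 1.0), 1: (2.0, 3.0)}))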
Abstract:
Boyko Bl. Banchev - A rationale for, and a description of, a programming language in a compositional style intended for experimental and educational purposes are presented. By "compositional" we mean a functional style of programming in which a computation is a hierarchy of compositions and applications of functions. One of the language's data types is that of geometric figures, which can be obtained through simple relating rules and thus also form hierarchical compositions. The language is strongly influenced by GeomLab, but differs from it significantly in a number of respects. The article covers the main features of the language; its detailed description and its figure-construction capabilities will be presented in an accompanying publication.
Abstract:
ACM Computing Classification System (1998): E.4.
Abstract:
Solar energy is the most abundant, widely distributed and clean renewable energy resource. Since the insolation intensity is only in the range of 0.5-1.0 kW/m², solar concentrators are required to attain temperatures appropriate for medium- and high-temperature applications. The concentrated energy is transferred through an absorber to a thermal fluid such as air, water or other fluids for various uses. This paper describes the design and development of a 'Linear Fresnel Mirror Solar Concentrator' (LFMSC) that uses long thin strips of mirrors to focus sunlight onto a fixed receiver located at a common focal line. Our LFMSC system comprises a reflector (concentrator), a receiver (target) and an innovative solar tracking mechanism. The reflectors are mirror strips mounted on tubes which are fixed to a base frame. The tubes can be rotated to align the strips so that they focus solar radiation on the receiver (target). The latter comprises a coated tube carrying water, covered by a glass plate, and is mounted at an elevation of a few meters above the horizontal, parallel to the plane of the mirrors. The reflector is oriented along the north-south axis. The most difficult task is tracking, which is achieved by single-axis tracking using a four-bar link mechanism; tracking has thus been made simple and easy to operate. The LFMSC setup is used for generating steam for a variety of applications. © 2013 The Authors. Published by Elsevier Ltd.
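To illustrate the tracking geometry, by the law of reflection the normal of each mirror strip must bisect the directions from the strip toward the sun and toward the receiver; the sketch below (a hypothetical 2-D Python illustration; the offset, receiver height and names are assumptions, not the paper's design data) computes the required strip tilt:

    import math

    # 2-D transverse-plane sketch: the mirror normal bisects the sun direction
    # and the strip-to-receiver direction; the mirror tilt is the normal angle
    # minus 90 degrees. Purely illustrative values.
    def strip_tilt_deg(sun_elevation_deg, strip_offset_m, receiver_height_m):
        to_receiver_deg = math.degrees(math.atan2(receiver_height_m, -strip_offset_m))
        normal_deg = 0.5 * (sun_elevation_deg + to_receiver_deg)
        return normal_deg - 90.0

    # Example: sun at 60 deg elevation, strip 1.5 m from the receiver line,
    # receiver 3 m above the plane of the mirrors.
    print(strip_tilt_deg(60.0, 1.5, 3.0))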
Abstract:
2002 Mathematics Subject Classification: 35J15, 35J25, 35B05, 35B50
Abstract:
MSC 2010: 05C50, 15A03, 15A06, 65K05, 90C08, 90C35
Abstract:
In this paper we develop a set of novel Markov Chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
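As a toy illustration of the mixture proposal, the sketch below (a hypothetical Python illustration targeting a 1-D double-well density; the fixed Gaussian independence proposal stands in for the paper's variational diffusion-bridge proposal) combines an independence sampler and a random-walk sampler inside a single Metropolis-Hastings chain:

    import math, random

    def log_target(x):                         # 1-D double-well: exp(-(x^2 - 1)^2)
        return -(x * x - 1.0) ** 2

    def log_gauss(x, mu, sigma):               # log-density of the independence proposal
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

    def sample(n_steps=5000, mix=0.5, rw_step=0.3, ind_sigma=1.5, seed=0):
        random.seed(seed)
        x, chain = 0.0, []
        for _ in range(n_steps):
            if random.random() < mix:          # independence move
                prop = random.gauss(0.0, ind_sigma)
                log_alpha = (log_target(prop) - log_target(x)
                             + log_gauss(x, 0.0, ind_sigma) - log_gauss(prop, 0.0, ind_sigma))
            else:                              # random-walk move (symmetric proposal)
                prop = x + random.gauss(0.0, rw_step)
                log_alpha = log_target(prop) - log_target(x)
            if log_alpha >= 0 or random.random() < math.exp(log_alpha):
                x = prop
            chain.append(x)
        return chain

    chain = sample()
    print(sum(chain) / len(chain))             # roughly 0 for this symmetric target

A better-matched independence proposal, such as the variational approximation described above, raises the acceptance rate of the large independence moves and hence improves mixing.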
Abstract:
Two-stage data envelopment analysis (DEA) efficiency models identify the efficient frontier of a two-stage production process. In some two-stage processes, the inputs to the first stage are also used by the second stage; these are known as shared inputs. This paper proposes a new relational linear DEA model for measuring the efficiency scores of two-stage processes with shared inputs under the constant returns-to-scale assumption. Two case studies, one from the banking industry and one from university operations, illustrate potential applications of the proposed approach.
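For context, the sketch below shows the standard single-stage, input-oriented CRS (CCR) DEA linear program solved with scipy.optimize.linprog; the two-stage relational model with shared inputs proposed in the paper adds structure that is not reproduced here, and the data are made-up illustrative values:

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 4.0, 3.0],      # inputs: rows = inputs, columns = DMUs
                  [3.0, 1.0, 2.0]])
    Y = np.array([[1.0, 2.0, 1.5]])     # outputs: rows = outputs, columns = DMUs

    def ccr_efficiency(X, Y, dmu):
        m, n = X.shape
        s = Y.shape[0]
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimise theta
        # inputs:  sum_j lambda_j * x_ij - theta * x_i,dmu <= 0
        A_in = np.hstack([-X[:, [dmu]], X])
        # outputs: -sum_j lambda_j * y_rj <= -y_r,dmu
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[:, dmu]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub)       # variables default to >= 0
        return res.fun                               # efficiency score theta*

    print([round(ccr_efficiency(X, Y, j), 3) for j in range(X.shape[1])])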
Abstract:
We describe a parallel multi-threaded approach to high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been carried out using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.
Abstract:
The objectives of this research are to analyze and develop a modified Principal Component Analysis (PCA) and to develop a two-dimensional PCA with applications in image processing. PCA is a classical multivariate technique whose mathematical treatment is based purely on the eigensystem of positive-definite symmetric matrices. Its main function is to statistically transform a set of correlated variables into a new set of uncorrelated variables over $\mathbb{R}^n$ while retaining most of the variation present in the original variables. The variances of the Principal Components (PCs) obtained from the modified PCA form a correlation matrix of the original variables. The decomposition of this correlation matrix into a diagonal matrix produces an orthonormal basis that can be used to linearly transform the given PCs; it is this linear transformation that reproduces the original variables. The two-dimensional PCA can be devised as two successive applications of one-dimensional PCA. It can be shown that, for an $m \times n$ matrix, the PCs obtained from the two-dimensional PCA are the singular values of that matrix. In this research, several applications for image analysis based on PCA are developed, namely edge detection, feature extraction, and multi-resolution PCA decomposition and reconstruction.
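For reference, the classical one-dimensional PCA step that the abstract builds on, eigendecomposition of the covariance matrix followed by a linear transformation to uncorrelated components, can be sketched as follows (a generic Python illustration, not the modified or two-dimensional PCA developed in the dissertation):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))          # 200 observations of 5 variables
    X[:, 1] += 0.8 * X[:, 0]               # introduce some correlation

    Xc = X - X.mean(axis=0)                # centre the data
    C = np.cov(Xc, rowvar=False)           # 5x5 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # eigensystem of a symmetric matrix
    order = np.argsort(eigvals)[::-1]      # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    scores = Xc @ eigvecs                  # principal components (uncorrelated)
    X_back = scores @ eigvecs.T + X.mean(axis=0)   # the inverse linear transformation

    print(eigvals.round(3))                # variances of the principal components
    print(np.allclose(X, X_back))          # the orthonormal basis reproduces the data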
Abstract:
The problems of plasticity and non-linear fracture mechanics are generally recognized as among the most difficult problems of solid mechanics. The present dissertation is devoted to some problems at the intersection of plasticity and non-linear fracture mechanics. The crack tip is responsible for crack growth and is therefore the focus of fracture science. The problem of the crack has been studied by an army of outstanding scholars and engineers in this century, but has not yet been solved for many important practical situations. The aim of this investigation is to provide an analytical solution to the problem of plasticity at the crack tip for elastic-perfectly plastic materials and to apply the solution to a classical problem of the mechanics of composite materials. In this work, the stresses inside the plastic region near the crack tip in a composite made of two different elastic-perfectly plastic materials are studied. The problems of an interface crack and of a crack impinging on an interface at a right angle and at arbitrary angles are examined. The constituent materials are assumed to obey the Huber-Mises yield criterion. The theory of slip lines for plane strain is utilized. For the particular homogeneous case these problems have two solutions: the continuous solution found earlier by Prandtl and modified by Hill and Sokolovsky, and the discontinuous solution found later by Cherepanov. The same types of solutions were discovered in the inhomogeneous problems of the present study. Some reasons to prefer the discontinuous solution are provided. The method is also applied to the analysis of a contact problem and a push-in/pull-out problem to determine the critical load for plasticity in these classical problems of the mechanics of composite materials. The results of this dissertation, published in three journal articles (two of which are under revision), will also be presented in an Invited Lecture at the 7th International Conference on Plasticity (Cancun, Mexico, January 1999).
Abstract:
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow. We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started out with a small, simple imperative language and added only features that we deemed most important for practical use. One language feature we addressed is arrays. Due to the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays. Type inference is necessary because a user is usually only concerned with the security types of the input/output variables of a program and would like all types for auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system and therefore facilitates informative error reporting that is generated in a cascading fashion. Algorithm B and error reporting have been implemented and tested. Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are incrementally refined based on feedback from the type checking/inference. Core computations interact with code modules from the involved parties only through well-defined interfaces. All code modules are digitally signed to ensure their authenticity and integrity.
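To illustrate the basic static check behind such a type system, an assignment is accepted only if the security level of the right-hand side (the join of its variables' levels) flows to the level of the target. The sketch below (a hypothetical Python illustration using a two-point lattice; it is not the dissertation's type system or algorithm B) makes this concrete:

    # Toy two-level lattice LOW < HIGH: an explicit flow x := e is permitted
    # only when join(levels of variables in e) <= level(x). Illustrative only.
    LOW, HIGH = 0, 1
    levels = {'password': HIGH, 'pin': HIGH, 'banner': LOW, 'log_entry': LOW}

    def expr_level(variables):
        return max((levels[v] for v in variables), default=LOW)

    def check_assignment(target, rhs_variables):
        ok = expr_level(rhs_variables) <= levels[target]
        verdict = 'accepted' if ok else 'rejected: illegal information flow'
        print(f"{target} := f({', '.join(rhs_variables)})  ->  {verdict}")
        return ok

    check_assignment('log_entry', ['banner'])        # LOW  -> LOW : accepted
    check_assignment('log_entry', ['password'])      # HIGH -> LOW : rejected
    check_assignment('pin', ['banner', 'password'])  # HIGH -> HIGH: accepted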