41 results for vector addition systems
Abstract:
This paper presents the first multi-vector energy analysis for the interconnected energy systems of Great Britain (GB) and Ireland. Both systems share a high penetration of wind power but significantly different security-of-supply outlooks. Ireland is heavily dependent on gas imports from GB, which makes the interconnected aspect of the methodology significant in addition to the gas and power interactions analysed. A fully realistic unit commitment and economic dispatch model coupled to an energy flow model of the gas supply network is developed. Extreme weather events driving increased domestic gas demand and low wind power output were used to increase gas supply network stress. Decreased wind profiles had a larger impact on system security than high domestic gas demand: the GB energy system was resilient during high-demand periods, but gas network stress limited the ramping capability of localised generating units. Additionally, gas system entry node congestion in the Irish system was shown to deliver a 40% increase in short-run costs for generators. Gas storage was shown to reduce the impact of high-demand-driven congestion, delivering a reduction in total generation costs of 14% in the period studied and reducing electricity imports from GB, contributing significantly to security of supply.
Abstract:
Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed or when deployed on systems with limited computational resources, due to the large number of support vectors involved in the model. To overcome this problem, we have devised a primal SVM method with the following properties: (1) it solves for the SVM representation without the need to invoke the representer theorem; (2) forward and backward selections are combined to approach the final globally optimal solution; and (3) a criterion is introduced for identifying support vectors, leading to a much reduced support vector set. In addition to introducing this method, the paper analyzes the complexity of the algorithm and presents test results on three public benchmark problems and a human activity recognition application. These applications demonstrate the effectiveness and efficiency of the proposed algorithm.
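A minimal sketch (plain Python, handcrafted numbers — none of it from the paper) of why a reduced support vector set matters: evaluating a kernel SVM decision function costs one kernel evaluation per support vector, so classification time scales linearly with the size of the set.

```python
import math

# Hypothetical two-support-vector model; values are illustrative only.
def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def decision(x, support_vectors, alphas, labels, bias):
    """f(x) = sum_i alpha_i * y_i * K(s_i, x) + b.
    One kernel evaluation per support vector: the cost the paper reduces."""
    return sum(a * y * rbf_kernel(s, x)
               for s, a, y in zip(support_vectors, alphas, labels)) + bias

svs    = [(0.0, 0.0), (2.0, 2.0)]
alphas = [1.0, 1.0]
labels = [-1, +1]
bias   = 0.0

print(decision((2.0, 2.0), svs, alphas, labels, bias) > 0)  # near the +1 SV
```

The sign of `decision(x, …)` gives the predicted class; shrinking `svs` (as the paper's selection criterion does) directly shrinks both the model's memory footprint and its per-query cost.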
Abstract:
Two distinct systems for the rhodium-catalyzed enantioselective desymmetrization of meso-cyclic anhydrides have been developed. Each system has been optimized and is compatible with the use of in situ prepared organozinc reagents. Rhodium/PHOX species efficiently catalyze the addition of alkyl nucleophiles to glutaric anhydrides, while a rhodium/phosphoramidite system is effective in the enantioselective arylation of succinic and glutaric anhydrides.
Abstract:
Polyol sugars, displaying a plurality of hydroxyl groups, were shown to modulate tetrahydroxyborate (borate) cross-linking in lidocaine hydrochloride-containing poly(vinyl alcohol) semi-solid hydrogels. Without polyol, demixing of borate cross-linked PVA hydrogels into two distinct phases was noticeable upon lidocaine hydrochloride addition, preventing further use as a topical system. D-Mannitol incorporation was found to be particularly suitable in circumventing network constriction induced by ionic and pH effects upon adding the hydrochloride salt of lidocaine. A test formulation (4% w/v lidocaine HCl, 2% w/v D-mannitol, 10% w/v PVA and 2.5% w/v THB) was shown to constitute an effective delivery system, which was characterised by an initial burst release and a drug release mechanism dependent on temperature, changing from a diffusion-controlled system to one with the properties of a reservoir system. The novel flow properties and innocuous adhesion of PVA-tetrahydroxyborate hydrogels support their application for drug delivery to exposed epithelial surfaces, such as lacerated wounds. Furthermore, addition of a polyol, such as mannitol, allows incorporation of soluble salt forms of active therapeutic agents by modulation of cross-linking density. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The monitoring of multivariate systems that exhibit non-Gaussian behavior is addressed. Existing work advocates the use of independent component analysis (ICA) to extract the underlying non-Gaussian data structure. Since some of the source signals may be Gaussian, the use of principal component analysis (PCA) is proposed to capture the Gaussian and non-Gaussian source signals. A subsequent application of ICA then allows the extraction of non-Gaussian components from the retained principal components (PCs). A further contribution is the utilization of a support vector data description to determine a confidence limit for the non-Gaussian components. Finally, a statistical test is developed for determining how many non-Gaussian components are encapsulated within the retained PCs, and associated monitoring statistics are defined. The utility of the proposed scheme is demonstrated by a simulation example and the analysis of recorded data from an industrial melter.
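The first stage of a scheme like this — extracting the dominant principal component before ICA is applied to the retained scores — can be sketched in plain Python for 2-D data, where the leading eigenvector of the sample covariance matrix has a closed form. Data and function names here are illustrative, not from the paper.

```python
import math

def first_pc(data):
    """Leading eigenvector of the 2x2 sample covariance matrix of 2-D data."""
    xs = [p[0] for p in data]
    ys = [p[1] for p in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n = len(data) - 1
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Closed-form leading eigenvalue of [[sxx, sxy], [sxy, syy]].
    lam = 0.5 * (sxx + syy + math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2))
    v = (lam - syy, sxy)                      # unnormalised eigenvector
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

# Illustrative data spread along the diagonal: the first PC is near 45 deg.
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9)]
pc = first_pc(data)
```

Projecting each sample onto `pc` gives the retained PC scores to which ICA (and then the data-description confidence limit) would be applied in the full scheme.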
Abstract:
Nonlinear principal component analysis (PCA) based on neural networks has drawn significant attention as a monitoring tool for complex nonlinear processes, but there remains a difficulty with determining the optimal network topology. This paper exploits the advantages of the Fast Recursive Algorithm, where the number of nodes, the location of centres, and the weights between the hidden layer and the output layer can be identified simultaneously for the radial basis function (RBF) networks. The topology problem for the nonlinear PCA based on neural networks can thus be solved. Another problem with nonlinear PCA is that the derived nonlinear scores may not be statistically independent or follow a simple parametric distribution. This hinders its applications in process monitoring since the simplicity of applying predetermined probability distribution functions is lost. This paper proposes the use of a support vector data description and shows that transforming the nonlinear principal components into a feature space allows a simple statistical inference. Results from both simulated and industrial data confirm the efficacy of the proposed method for solving nonlinear principal component problems, compared with linear PCA and kernel PCA.
Abstract:
The purpose of this study is to survey the use of networks and network-based methods in systems biology. The study starts with an introduction to graph theory and basic measures that allow quantifying the structural properties of networks. The authors then present important network classes and gene networks, as well as methods for their analysis. In the last part of the study, the authors review approaches that aim at analysing the functional organisation of gene networks and the use of networks in medicine. In addition, the authors advocate networks as a systematic approach to general problems in systems biology, because networks are capable of assuming multiple roles that are beneficial in connecting experimental data with a functional interpretation in biological terms.
Abstract:
In today’s atmosphere of constrained defense spending and reduced research budgets, determining how to allocate resources for research and design has become a critical and challenging task. In the area of aircraft design there are many promising technologies to be explored, yet limited funds with which to explore them. In addition, issues concerning uncertainty in technology readiness, as well as the quantification of the impact of a technology (or combinations of technologies), are of key importance during the design process. This paper presents a methodology that details a comprehensive and structured process in which to quantitatively explore the effects of technology for a given baseline aircraft. This process, called Technology Impact Forecasting (TIF), involves the creation of an assessment environment for use in conjunction with defined technology scenarios, and will have a significant impact on resource allocation strategies for defense acquisition. The advantages and limitations of the method are discussed. In addition, an example TIF application, that of an Uninhabited Combat Aerial Vehicle, is presented and serves to illustrate the applicability of this methodology to a military system.
Abstract:
Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
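For context, the general family this paper's low-complexity scheduler belongs to is classic list scheduling: walk the tasks in priority order and greedily place each on the processor that frees up first. The sketch below (plain Python; task durations and the two-processor setup are invented for illustration, not taken from the paper) shows the core loop.

```python
import heapq

def list_schedule(durations, n_procs):
    """Greedy list scheduling: assign each task (in given priority order)
    to the processor that becomes free earliest.
    Returns the makespan and per-task (processor, start_time) placements."""
    heap = [(0.0, p) for p in range(n_procs)]   # (free_at, processor id)
    heapq.heapify(heap)
    placement = []
    for d in durations:
        free_at, proc = heapq.heappop(heap)      # earliest-available processor
        placement.append((proc, free_at))
        heapq.heappush(heap, (free_at + d, proc))
    return max(t for t, _ in heap), placement

# Four illustrative tasks on two processors.
makespan, plan = list_schedule([3, 2, 2, 1], n_procs=2)
```

Each pop/push is O(log P), so scheduling T tasks costs O(T log P) — the kind of low complexity needed when schedules must be regenerated for many mapping candidates.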
Abstract:
In this paper, the authors present an approach to configuring a Wafer-Scale Integration chip. The approach, called 'WINNER', requires neither bus channels nor an external controller for configuring the working processors. In addition, the technique is applicable to high-availability systems constructed using conventional methods. The technique can also be extended to arrays of arbitrary size and with any degree of fault tolerance, simply by using an appropriate number of cells.
Abstract:
A bit-level systolic array for computing matrix-vector products is described. The operation is carried out on bit-parallel input data words and the basic circuit takes the form of a 1-bit slice. Several bit-slice components must be connected together to form the final result, and the authors outline two different ways in which this can be done. The basic array also has considerable potential as a stand-alone device, and its use in computing the Walsh-Hadamard transform and discrete Fourier transform operations is briefly discussed.
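The bit-slice idea can be illustrated in software: compute the matrix-vector product one bit position of the input words at a time, with each "slice" contributing its partial products scaled by the bit's weight. This is a behavioural sketch of the arithmetic only (unsigned words, invented example values), not the paper's circuit.

```python
def matvec_bitsliced(A, x, word_bits=8):
    """Matrix-vector product accumulated slice-by-slice over bit positions,
    mimicking how 1-bit slices combine to process full bit-parallel words."""
    y = [0] * len(A)
    for k in range(word_bits):                 # one pass per bit slice
        weight = 1 << k
        for i, row in enumerate(A):
            # Partial products contributed by bit k of each unsigned x[j].
            y[i] += weight * sum(a * ((xj >> k) & 1)
                                 for a, xj in zip(row, x))
    return y

A = [[1, 2], [3, 4]]
x = [5, 6]
print(matvec_bitsliced(A, x))  # → [17, 39], matching a direct product
```

Summing the weighted slice results reproduces the ordinary product, which is exactly why connecting several 1-bit slices yields the full-precision result in hardware.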
Abstract:
A new method is proposed which reduces the size of the memory needed to implement multirate vector quantizers. Investigations have shown that the performance of the coders implemented using this approach is comparable to that obtained from standard systems. The proposed method can therefore be used to reduce the hardware required to implement real-time speech coders.
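The memory question arises because a vector quantizer stores a codebook and transmits only codeword indices. A minimal encoder sketch (plain Python; the codebook and input frames are invented for illustration, and real multirate coders share or structure codebooks far more cleverly) makes the mechanism concrete:

```python
def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codeword
    (squared Euclidean distance), so only indices need transmitting."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(range(len(codebook)), key=lambda i: sqdist(v, codebook[i]))
            for v in vectors]

# Illustrative 2-D codebook and input frames.
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
frames   = [(0.2, -0.1), (4.8, 5.1), (0.9, 1.2)]
print(vq_encode(frames, codebook))  # → [0, 2, 1]
```

The decoder simply looks indices back up in the same codebook, so codebook size dominates the memory cost — the quantity the proposed method reduces.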
Abstract:
The real-time implementation of an efficient signal compression technique, Vector Quantization (VQ), is of great importance to many digital signal coding applications. In this paper, we describe a new family of bit-level systolic VLSI architectures which offer an attractive solution to this problem. These architectures are based on a bit-serial, word-parallel approach, and high performance and efficiency can be achieved for VQ applications across a wide range of bandwidths. Compared with their bit-parallel counterparts, these bit-serial circuits provide better alternatives for VQ implementations in terms of performance and cost. © 1995 Kluwer Academic Publishers.