891 results for Linear and multilinear programming


Relevance: 100.00%

Abstract:

Wydział Matematyki i Informatyki (Faculty of Mathematics and Computer Science)

Relevance: 100.00%

Abstract:

The science of network service composition has clearly emerged as one of the grand themes driving many of the research questions in the networking field today [NeXtworking 2003]. This driving force stems from the rise of sophisticated applications and new networking paradigms. By "service composition" we mean that the performance and correctness properties local to the various constituent components of a service can be readily composed into global (end-to-end) properties without re-analyzing any of the constituent components in isolation, or as part of the whole composite service. The set of laws governing such composition is what will constitute this new science of composition. The combined heterogeneity and dynamic, open nature of network systems makes composition quite challenging, and programming network services has thus been largely inaccessible to the average user. We identify and outline a research agenda in which we aim to develop a specification language expressive enough to describe the different components of a network service, including type hierarchies, inspired by the type systems of general-purpose programming languages, that enable the safe composition of software components. We envision this new science of composition being built upon several theories (e.g., control theory, game theory, network calculus, percolation theory, economics, queuing theory). In essence, different theories may provide different languages in which certain properties of system components can be expressed and composed into larger systems. We then seek to lift these lower-level specifications to a higher level by abstracting away details that are irrelevant for safe composition at that level, thus making the theories scalable and useful to the average user. In this paper we focus on services built upon an overlay management architecture, and we use control theory and QoS theory as example theories from which we lift up compositional specifications.
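As a toy illustration of the kind of safe composition such a type system would enforce, the sketch below checks that one component's guarantees cover the next component's requirements before allowing them to be chained. All class names and property labels here are invented for the example; they are not the proposed specification language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceType:
    requires: frozenset  # property types the component needs from upstream
    ensures: frozenset   # property types it guarantees downstream

def compose(a: ServiceType, b: ServiceType) -> ServiceType:
    """Compose a -> b, checking that a's guarantees satisfy b's requirements."""
    missing = b.requires - a.ensures
    if missing:
        raise TypeError(f"unsafe composition, unmet requirements: {missing}")
    return ServiceType(requires=a.requires, ensures=b.ensures)

# Example: an overlay link guaranteeing bounded delay feeding a QoS monitor.
link = ServiceType(requires=frozenset(), ensures=frozenset({"bounded_delay"}))
monitor = ServiceType(requires=frozenset({"bounded_delay"}),
                      ensures=frozenset({"qos_report"}))
print(compose(link, monitor))  # end-to-end type follows without re-analysis
```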

Relevance: 100.00%

Abstract:

A computer model has been developed to optimize the performance of a 50 kWp photovoltaic system which supplies electrical energy to a dairy farm at Fota Island in Cork Harbour. Optimization of the system involves maximising the efficiency and increasing the performance and reliability of each hardware unit. The model accepts horizontal insolation, ambient temperature, wind speed, wind direction and load demand as inputs. An optimization program uses the computer model to simulate the optimum operating conditions. From this analysis, criteria are established which are used to improve the operation of the photovoltaic system. This thesis describes the model concepts, the model implementation and the model verification procedures used during development. It also describes the techniques used during system optimization. The software, written in FORTRAN, is structured in modular units to provide logical and efficient programming. These modular units may also be used in the modelling and optimization of other photovoltaic systems.
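A minimal sketch of the kind of calculation such a model performs for one time step, using generic textbook coefficients (a NOCT cell-temperature estimate and a -0.4%/°C power derating) rather than the actual Fota Island model's parameters:

```python
def cell_temperature(insolation_w_m2, ambient_c, wind_m_s, noct_c=45.0):
    # NOCT model: cell heats above ambient in proportion to irradiance;
    # the wind divisor is a crude empirical cooling term (an assumption here).
    return ambient_c + (noct_c - 20.0) * (insolation_w_m2 / 800.0) / (1.0 + 0.05 * wind_m_s)

def array_power_kw(insolation_w_m2, ambient_c, wind_m_s,
                   rated_kwp=50.0, temp_coeff_per_c=-0.004):
    # Rated output scaled by irradiance, derated for cell temperature above 25 C.
    t_cell = cell_temperature(insolation_w_m2, ambient_c, wind_m_s)
    return rated_kwp * (insolation_w_m2 / 1000.0) * (1.0 + temp_coeff_per_c * (t_cell - 25.0))

print(f"{array_power_kw(850.0, 14.0, 5.0):.1f} kW")  # a bright, breezy afternoon
```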

Relevance: 100.00%

Abstract:

Avalanche photodiodes (APDs) are used in a wide range of low-light sensing applications such as DNA sequencing, quantum key distribution, LIDAR and medical imaging. To operate APDs, control circuits are required to achieve the desired performance characteristics. This thesis presents the development of three control circuits: a bias circuit, an active quench and reset circuit, and a gain control circuit, all of which are used for the control and performance enhancement of APDs. The bias circuit biases planar APDs for operation in both linear and Geiger modes. It is based on a dual charge-pump configuration and operates from a 5 V supply. It is capable of providing milliamp load currents for shallow-junction planar APDs that operate at up to 40 V. With novel voltage regulators, the bias voltage provided by the circuit can be accurately controlled and easily adjusted by the end user. The circuit is highly integrable and provides an attractive solution for applications requiring a compact integrated APD device. The active quench and reset circuit is designed for APDs that operate in Geiger mode and are required for photon counting. The circuit enables linear changes in the hold-off time of the Geiger-mode APD (GM-APD) from several nanoseconds to microseconds with a stable setting step of 6.5 ns. This facilitates setting the optimal 'afterpulse-free' hold-off time for any GM-APD via user-controlled digital inputs. In addition, this circuit does not require an additional monostable or pulse generator to reset the detector, thus simplifying the design. Compared to existing solutions, it provides more accurate and simpler control of the hold-off time while maintaining a comparable maximum count rate of 35.2 Mcounts/s. The third circuit is a gain control circuit based on the idea of using two matched APDs to set and stabilize the gain. It can provide the high bias voltage needed to operate the planar APD, precisely set the APD's gain (with errors of less than 3%) and compensate for temperature changes to maintain a more stable gain. The circuit operates without the need for external temperature sensing and control electronics, thus lowering system cost and complexity, and provides a simpler and more compact solution compared to previous designs. The three circuits were developed independently of each other and improve different performance characteristics of the APD. Further research on combining the three circuits would produce a more compact APD-based solution for a wide range of applications.
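As a hedged illustration of the user-controlled digital hold-off setting, this sketch maps a requested hold-off time onto the reported 6.5 ns step; the register width and rounding policy are assumptions made for the example, not details of the actual circuit:

```python
STEP_NS = 6.5  # the stable setting step reported for the quench circuit

def holdoff_code(target_ns: float, n_bits: int = 10) -> int:
    """Nearest digital code for a requested hold-off, clamped to the register range.
    The 10-bit width is a hypothetical choice for illustration."""
    code = round(target_ns / STEP_NS)
    return max(0, min(code, 2**n_bits - 1))

for t in (50.0, 200.0, 1000.0):
    c = holdoff_code(t)
    print(f"requested {t:7.1f} ns -> code {c:4d} -> actual {c * STEP_NS:.1f} ns")
```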

Relevance: 100.00%

Abstract:

Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in spectral-domain optical coherence tomography images using graph theory and dynamic programming. Results show that this method segments eight retinal layer boundaries in normal adult eyes, agreeing with an expert grader more closely than a second expert grader does.
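A minimal sketch of the underlying graph/dynamic-programming idea, not the paper's exact algorithm (which adds graph construction details and per-boundary weighting): treat each pixel as a node weighted by its vertical intensity gradient and trace the minimum-cost left-to-right path, which follows one layer boundary.

```python
import numpy as np

def boundary_path(image: np.ndarray, max_jump: int = 1) -> np.ndarray:
    grad = np.abs(np.diff(image.astype(float), axis=0))   # vertical gradient
    cost = grad.max() - grad                              # strong edges are cheap
    rows, cols = cost.shape
    acc = cost.copy()
    for c in range(1, cols):                              # dynamic-programming sweep
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            acc[r, c] += acc[lo:hi, c - 1].min()
    path = np.empty(cols, dtype=int)
    path[-1] = int(acc[:, -1].argmin())                   # backtrack cheapest path
    for c in range(cols - 2, -1, -1):
        r = path[c + 1]
        lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
        path[c] = lo + int(acc[lo:hi, c].argmin())
    return path

# Toy B-scan: a dark band over a bright band; the path traces the interface.
print(boundary_path(np.vstack([np.zeros((5, 8)), np.ones((5, 8))])))
```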

Relevance: 100.00%

Abstract:

Pianists of the twenty-first century have a wealth of repertoire at their fingertips. They busily study music from the different periods -- Baroque, Classical, Romantic, and some of the twentieth century -- trying to understand the culture and performance practice of the time and the stylistic traits of each composer so they can communicate their music effectively. Unfortunately, this leaves little time to notice the composers who are writing music today. Whether this neglect proceeds from lack of time or lack of curiosity, I feel we should be connected to music written in our own lifetime, when we already understand the culture and have knowledge of the different styles that preceded us. Therefore, in an attempt to promote today's composers, I have selected piano music written during my lifetime, to show that contemporary music is effective and worthwhile and deserves as much attention as the music that preceded it. This dissertation showcases piano music composed from 1978 to 2005. A point of departure in selecting the pieces for this recording project was to represent the major genres of the piano repertoire in order to show a variety of styles, moods, lengths, and difficulties. From these recordings, therefore, a complete contemporary recital can be successfully programmed from the selected works, with enough variety to meet the demands of pianists of different skill levels and recital programming needs. Since we live in an increasingly global society, music from all parts of the world is included to offer a fair representation of music being composed everywhere. Half of the music in this project comes from the United States; the other half comes from Australia, Japan, Russia, and Argentina. The composers represented in these recordings are Lowell Liebermann, Richard Danielpour, Frederic Rzewski, Judith Lang Zaimont, Samuel Adler, Carl Vine, Nikolai Kapustin, Akira Miyoshi and Osvaldo Golijov. With the exception of one piano concerto, all the works are for solo piano. This recording project dissertation consists of two 60-minute CDs of selected repertoire, accompanied by a substantial document of in-depth program notes. The recordings are documented on compact discs housed within the University of Maryland Library System.

Relevance: 100.00%

Abstract:

The PHYSICA software was developed to enable multiphysics modelling, allowing for interaction between Computational Fluid Dynamics (CFD), Computational Solid Mechanics (CSM) and Computational Aeroacoustics (CAA). PHYSICA uses the finite volume method with 3-D unstructured meshes to enable the modelling of complex geometries. Many engineering applications involve significant computational time, which needs to be reduced by means of a faster solution method or parallel and high-performance algorithms. It is well known that multigrid methods serve as a fast iterative scheme for linear and nonlinear diffusion problems. This paper attempts to address two major issues of this iterative solver: the parallelisation of multigrid methods and their application to time-dependent multiscale problems.
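To illustrate why multigrid serves as a fast iterative scheme, here is a hedged two-grid sketch for the 1-D Poisson problem -u'' = f, a stand-in for the diffusion problems mentioned rather than PHYSICA's unstructured 3-D solver: relaxation damps high-frequency error, and a coarse-grid correction removes the smooth remainder.

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Weighted-Jacobi relaxation for -u'' = f; damps high-frequency error."""
    for _ in range(sweeps):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def coarse_solve(rc, h):
    """Direct solve of the coarse-grid error equation (small tridiagonal system)."""
    m = len(rc) - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (h * h)
    e = np.zeros_like(rc)
    e[1:-1] = np.linalg.solve(A, rc[1:-1])
    return e

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
    ec = coarse_solve(r[::2].copy(), 2 * h)                        # restrict + solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongate
    return smooth(u + e, f, h)

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)          # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(8):
    u = two_grid_cycle(u, f, h)
print(f"max error vs exact: {np.abs(u - np.sin(np.pi * x)).max():.2e}")
```

A full multigrid solver recurses on the coarse grid instead of solving directly, and it is this grid hierarchy that both the parallelisation and the time-dependent extension must respect.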

Relevance: 100.00%

Abstract:

An industrial electrolysis cell used to produce primary aluminium is sensitive to waves at the interface of the liquid aluminium and the electrolyte. The interface waves are similar to those in stratified sea layers [1], but the penetrating electric current and the associated magnetic field are intricately involved in the oscillation process, and the observed wave frequencies are shifted from the purely hydrodynamic ones [2]. The interface stability problem is of great practical importance because electrolytic aluminium production is a major consumer of electrical energy and is related to the rate of environmental pollution. The stability analysis was started in [3], and a short summary of the main developments is given in [2]. Important aspects of multiple-mode interaction were introduced in [4], and a widely used linear friction law was first applied in [5]. In [6] a systematic perturbation expansion is developed for the fluid dynamics and electric current problems, permitting reduction of the three-dimensional problem to a two-dimensional one. The procedure is more generally known as the "shallow water approximation", which can be extended to the case of weakly nonlinear and dispersive waves. The Boussinesq formulation permits the problem to be generalised to non-unidirectionally propagating waves, accounting for the side walls and for a two-fluid-layer interface [1]. Attempts to extend electrolytic cell wave modelling to the weakly nonlinear case began in [7], where the basic equations are derived, including the nonlinearity and linear dispersion terms. An alternative approach to the nonlinear numerical simulation of wave evolution in an electrolysis cell is attempted in [8 and references therein]; yet, omitting the dispersion terms and without a proper account of the dissipation, that model can predict only unstable wave growth. The present paper generalises the previous nonlinear wave equations [7] by accounting for the turbulent horizontal circulation flows in the two fluid layers. The inclusion of the turbulence model is essential in order to explain the small-amplitude self-sustained oscillations of the liquid metal surface observed in real cells, known as "MHD noise". The fluid dynamic model is coupled to an extended electromagnetic simulation including not only the fluid layers but the whole bus bar circuit and the ferromagnetic effects [9].
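For context, the purely hydrodynamic baseline from which the observed frequencies are shifted is the standard two-layer interfacial gravity wave dispersion relation; this is a textbook result quoted here as background, not an equation taken from the paper:

```latex
\[
  \omega^{2} \;=\; \frac{g\,k\,(\rho_{2}-\rho_{1})}
                        {\rho_{1}\coth(k h_{1}) + \rho_{2}\coth(k h_{2})},
\]
% \rho_1, h_1: density and depth of the upper (electrolyte) layer
% \rho_2, h_2: density and depth of the lower (liquid aluminium) layer
% k: wavenumber, \omega: wave frequency
```

The MHD coupling through the cell current and magnetic field perturbs these roots, which is precisely the frequency shift the stability analyses quantify.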

Relevance: 100.00%

Abstract:

Financial modelling in the area of option pricing involves understanding the correlation between asset price movements and buy/sell decisions in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluations of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many financial activities apart from the buying and selling of shares. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option; both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse, and the scalability of the algorithm is examined. Numerical tests are used to demonstrate the effectiveness of the algorithm for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. This chapter also examines the various computational issues of the Laplace transformation method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
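As a concrete sketch of the "Laplace transform plus numerical inverse" strategy, the Gaver-Stehfest inversion below is a common choice for such algorithms, though the chapter's specific inversion scheme may differ. Note that each evaluation of the transform F(s) is independent, which is what makes the approach naturally distributable.

```python
import math

def stehfest_coefficients(n: int):
    """Gaver-Stehfest weights V_k for an even number of terms n."""
    assert n % 2 == 0
    half = n // 2
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j**half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (k + half) * s)
    return v

def invert(F, t: float, n: int = 12) -> float:
    """Approximate f(t) from its Laplace transform F(s), sampled on the real axis."""
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(vk * F(k * ln2_t)
                       for k, vk in enumerate(stehfest_coefficients(n), start=1))

# Sanity check on a known transform pair: L{exp(-t)} = 1/(s+1).
print(invert(lambda s: 1.0 / (s + 1.0), t=1.0))   # ~ exp(-1) = 0.3679
```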

Relevance: 100.00%

Abstract:

The role of the ocean in the cycling of oxygenated volatile organic compounds (OVOCs) remains largely unresolved due to a paucity of datasets. We describe the development of a membrane inlet-proton transfer reaction/mass spectrometry (MI-PTR/MS) method as an efficient means of analysing methanol, acetaldehyde and acetone in seawater. Validation of the technique with water standards shows that the optimised responses are linear and reproducible. Limits of detection are 27 nM for methanol, 0.7 nM for acetaldehyde and 0.3 nM for acetone. Acetone and acetaldehyde concentrations generated by MI-PTR/MS are compared to those from a second, independent method based on purge and trap-gas chromatography/flame ionisation detection (P&T-GC/FID) and show excellent agreement. Chromatographic separation of the isomeric species acetone and propanal permits correction of the mass 59 signal generated by the PTR/MS and overcomes a known uncertainty in reporting acetone concentrations via mass spectrometry. A third, bioassay technique using radiolabelled acetone further supports the results generated by this method. We present the development and optimisation of the MI-PTR/MS technique as a reliable and convenient tool for analysing seawater samples for these trace gases. We compare this method with other analytical techniques and discuss its potential use in improving the current understanding of the cycling of oceanic OVOCs.
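A small sketch of the routine arithmetic behind such validation figures, with invented numbers rather than the paper's data: fit a linear calibration to water standards, then estimate the limit of detection as three times the blank standard deviation divided by the calibration slope.

```python
import numpy as np

conc_nM = np.array([0.0, 50.0, 100.0, 200.0, 400.0])      # standard concentrations
signal = np.array([12.0, 61.0, 110.0, 212.0, 408.0])      # instrument response (made up)
blank_signals = np.array([11.6, 12.3, 12.1, 11.9, 12.4])  # replicate blanks (made up)

slope, intercept = np.polyfit(conc_nM, signal, 1)          # linear calibration fit
lod_nM = 3.0 * blank_signals.std(ddof=1) / slope           # 3-sigma detection limit

r = np.corrcoef(conc_nM, signal)[0, 1]
print(f"slope={slope:.3f}, intercept={intercept:.2f}, "
      f"r^2={r**2:.4f}, LOD={lod_nM:.2f} nM")
```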

Relevance: 100.00%

Abstract:

Scepticism over stated preference surveys conducted online revolves around concerns over "professional respondents" who might rush through the questionnaire without sufficiently considering the information provided. To gain insight into the validity of this concern and to test the effect of response time on choice randomness, this study makes use of a recently conducted choice experiment survey on the ecological and amenity effects of an offshore windfarm in the UK. The positive relationship between self-rated and inferred attribute attendance and response time is taken as evidence of a link between response time and cognitive effort. Subsequently, the generalised multinomial logit model is employed to test the effect of response time on scale, which indicates the weight of the deterministic relative to the error component in the random utility model. Results show that longer response time increases scale, i.e. decreases choice randomness. This positive scale effect of response time is further found to be non-linear, wearing off at a point beyond which extreme response time decreases scale. While response time does not systematically affect welfare estimates, higher response time increases the precision of those estimates. These effects persist when self-reported choice certainty is controlled for. Implications of the results for online stated preference surveys and further research are discussed.
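The scale mechanism being tested can be sketched as follows: in a scaled logit, deterministic utilities are multiplied by a scale factor before the softmax, so a larger scale yields less random choices. The functional form and coefficient below are invented for illustration, not the paper's estimates.

```python
import numpy as np

def choice_probabilities(V: np.ndarray, response_time_s: float,
                         gamma: float = 0.3) -> np.ndarray:
    """Scaled-MNL probabilities: P_j = exp(s*V_j) / sum_k exp(s*V_k)."""
    scale = np.exp(gamma * np.log(response_time_s))   # i.e. scale = RT**gamma
    z = scale * V
    e = np.exp(z - z.max())                           # numerically stable softmax
    return e / e.sum()

V = np.array([0.5, 0.0, -0.5])                        # deterministic utilities
for rt in (5.0, 30.0, 120.0):                         # longer RT -> sharper choices
    print(rt, choice_probabilities(V, rt).round(3))
```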

Relevance: 100.00%

Abstract:

The characterization of thermocouple sensors for temperature measurement in variable-flow environments is a challenging problem. In this paper, novel difference-equation-based algorithms are presented that allow in situ characterization of temperature measurement probes consisting of two thermocouple sensors with differing time constants. Linear and non-linear least squares formulations of the characterization problem are introduced and compared in terms of their computational complexity, robustness to noise and statistical properties. With the aid of this analysis, least squares optimization procedures that yield unbiased estimates are identified. The main contribution of the paper is the development of a linear two-parameter generalized total least squares formulation of the sensor characterization problem. Monte Carlo simulation results are used to support the analysis.
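As a hedged sketch of the total least squares idea behind the paper's main contribution (plain TLS via the SVD here, rather than the paper's generalized formulation): unlike ordinary least squares, TLS admits noise in both the regressor matrix and the observations, which is what makes unbiased estimates possible when all measured signals are noisy.

```python
import numpy as np

def tls_fit(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve A x ~= b in the total least squares sense via the SVD of [A | b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                        # right singular vector of smallest sigma
    return -v[:n] / v[n]

# Toy two-parameter demonstration with noise on both sides of the equation.
rng = np.random.default_rng(0)
A_true = rng.normal(size=(500, 2))
x_true = np.array([2.0, -1.0])
b_true = A_true @ x_true
print(tls_fit(A_true + 0.05 * rng.normal(size=A_true.shape),
              b_true + 0.05 * rng.normal(size=b_true.shape)))  # ~ [2, -1]
```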

Relevance: 100.00%

Abstract:

This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA) prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
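A deliberately minimal sketch of the two-stage idea (not the T2T architecture itself): compress with linear PCA first, then train a small bottleneck autoassociative network on the scores, and monitor fresh samples via their reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))
X[:, 4:] = np.tanh(X[:, :4]) + 0.05 * rng.normal(size=(400, 4))  # redundant vars

# Stage 1: linear PCA leaves fewer, better-conditioned inputs.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:5].T                                   # scores of 5 linear PCs

# Stage 2: one-hidden-layer autoassociative net with a 2-unit bottleneck.
W1 = 0.1 * rng.normal(size=(5, 2))
W2 = 0.1 * rng.normal(size=(2, 5))
lr = 0.05
for _ in range(3000):                               # plain batch gradient descent
    H = np.tanh(T @ W1)                             # bottleneck activations
    E = H @ W2 - T                                  # reconstruction error
    gW2 = H.T @ E / len(T)
    gW1 = T.T @ ((E @ W2.T) * (1.0 - H**2)) / len(T)
    W2 -= lr * gW2
    W1 -= lr * gW1

spe = ((np.tanh(T @ W1) @ W2 - T) ** 2).sum(axis=1)  # per-sample monitoring statistic
print(f"mean squared reconstruction error: {spe.mean():.3f}")
```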

Relevance: 100.00%

Abstract:

In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information about the mixtures. In particular, a group of three-layered perceptrons and a linear network are used as the unmixing system to separate sources in the postnonlinear mixtures, and another group of three-layered perceptrons is used as the auxiliary network. The learning algorithm for the unmixing system is then obtained by maximizing the output entropy of the auxiliary network. The proposed method is applied to postnonlinear blind source separation of both simulated signals and real speech signals, and the experimental results demonstrate its effectiveness and efficiency in comparison with existing methods.
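The postnonlinear model itself is easy to state: sources are mixed linearly and each channel is then distorted by an invertible nonlinearity, x_i = f_i((As)_i); the unmixing system mirrors this with a per-channel nonlinearity followed by a linear stage. The sketch below shows the mixture model with the true inverses substituted for the learned networks; the MISEP entropy-maximisation training is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.uniform(-1.0, 1.0, size=(2, 5000))        # independent sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                        # linear mixing matrix
x = np.tanh(A @ s)                                # per-channel postnonlinear distortion

# Ideal unmixing: invert the channel nonlinearity, then the linear mixing.
# In the actual algorithm, MLPs learn the role played here by arctanh and inv(A).
y = np.linalg.inv(A) @ np.arctanh(np.clip(x, -0.999, 0.999))
print(np.corrcoef(s[0], y[0])[0, 1], np.corrcoef(s[1], y[1])[0, 1])  # ~ 1.0
```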

Relevance: 100.00%

Abstract:

This paper describes the application of regularisation to the training of feedforward neural networks as a means of improving the quality of the solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency of the hybrid training algorithm and of many linear optimisation strategies to generate large-magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and reasoned analytically. While such weight solutions do not generally result in poor fits, it is argued that they could be subject to numerical instability and are therefore undesirable. Using an illustrative example, it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means for controlling the magnitude of solutions.
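The magnitude-control point can be reproduced with a small ridge-regression sketch on an ill-conditioned RBF design matrix; this illustrates the principle, not the authors' hybrid algorithm. A vanishingly small penalty stands in for the unregularised solve: the fit is good either way, but the unregularised weights are enormous.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)

centres = np.linspace(0.0, 1.0, 25)
width = 0.15                                        # broad, overlapping Gaussians
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width**2))

def fit(lmbda):
    """Minimise ||Phi w - y||^2 + lmbda ||w||^2 (ridge / weight-decay penalty)."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lmbda * np.eye(n), Phi.T @ y)

for lmbda in (1e-12, 1e-6, 1e-2):
    w = fit(lmbda)
    rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
    print(f"lambda={lmbda:g}: rmse={rmse:.4f}, max|w|={np.abs(w).max():.2e}")
```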