28 results for Synchronization algorithms
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least-squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
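The least-squares problem mentioned above reduces, at each interior point iteration, to a symmetric positive definite system of normal equations. A minimal sketch of the preconditioned conjugate gradient method with a simple diagonal (Jacobi) preconditioner is given below; the matrix and right-hand side are illustrative stand-ins, not data from the thesis.

```python
import numpy as np

def jacobi_pcg(M, b, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner.

    Solves M x = b for symmetric positive definite M, e.g. the normal
    equations A @ diag(theta) @ A.T arising in an interior point iteration.
    """
    x = np.zeros_like(b)
    r = b - M @ x                      # initial residual
    d_inv = 1.0 / np.diag(M)           # Jacobi preconditioner: inverse diagonal
    z = d_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) < tol:
            break
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative normal-equations system A diag(theta) A^T dy = rhs (random data).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))
theta = rng.uniform(0.1, 10.0, size=60)      # positive scaling from the barrier step
M = A @ np.diag(theta) @ A.T
rhs = rng.standard_normal(20)
dy = jacobi_pcg(M, rhs)
```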
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
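By way of illustration, a sender-initiated threshold policy is one of the simplest adaptive algorithms such a simulator can host: a processor whose queue exceeds a high-water mark migrates a process to an underloaded neighbour. The sketch below is a hypothetical example with made-up thresholds and topology, not the thesis's actual implementation.

```python
import random

def threshold_balance(loads, neighbours, high=5, low=2):
    """One round of a sender-initiated threshold policy.

    loads      -- list of queue lengths, one per processor
    neighbours -- adjacency list describing the (user-specified) topology
    A processor whose load exceeds `high` migrates one process to its
    least-loaded neighbour, provided that neighbour is below `low`.
    """
    for p, load in enumerate(loads):
        if load > high:
            target = min(neighbours[p], key=lambda q: loads[q])
            if loads[target] < low:
                loads[p] -= 1          # migrate one process to the target
                loads[target] += 1
    return loads

# Hypothetical 4-processor ring topology with random initial loads.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
loads = [random.randint(0, 8) for _ in range(4)]
print(threshold_balance(loads, ring))
```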
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced seem to suggest that simple dynamic policies are scalable but lack the load stability of more complex global average algorithms.
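For comparison, a global average policy of the kind referred to above classifies processors against the mean load, and a regionalised variant applies the same rule within a neighbourhood. A minimal sketch, with illustrative loads and tolerance, might be:

```python
def classify_by_average(loads, region, tolerance=1):
    """Classify processors in a region against the regional average load.

    Returns (senders, receivers): processors more than `tolerance` above the
    average are candidate senders; those more than `tolerance` below are
    candidate receivers. With region = all processors this is a global
    average scheme; restricting the region gives a regionalised variant.
    """
    avg = sum(loads[p] for p in region) / len(region)
    senders = [p for p in region if loads[p] > avg + tolerance]
    receivers = [p for p in region if loads[p] < avg - tolerance]
    return senders, receivers

# Example: one global region over six processors.
loads = [9, 1, 4, 7, 0, 3]
print(classify_by_average(loads, region=range(len(loads))))
```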
Abstract:
Many of the applications of geometric modelling are concerned with the computation of well-defined properties of the model. The applications which have received less attention are those which address questions to which there is no unique answer. This thesis describes such an application: the automatic production of a dimensioned engineering drawing. One distinctive feature of this operation is the requirement for sophisticated decision-making algorithms at each stage in the processing of the geometric model. Hence, the thesis is focussed upon the design, development and implementation of such algorithms. Various techniques for geometric modelling are briefly examined and then details are given of the modelling package that was developed for this project. The principles of orthographic projection and dimensioning are treated and some published work on the theory of dimensioning is examined. A new theoretical approach to dimensioning is presented and discussed. The existing body of knowledge on decision-making is sampled and the author then shows how methods which were originally developed for management decisions may be adapted to serve the purposes of this project. The remainder of the thesis is devoted to reports on the development of decision-making algorithms for orthographic view selection, sectioning and crosshatching, the preparation of orthographic views with essential hidden detail, and two approaches to the actual insertion of dimension lines and text. The thesis concludes that the theories of decision-making can be applied to work of this kind. It may be possible to generate computer solutions that are closer to the optimum than some man-made dimensioning schemes. Further work on important details is required before a commercially acceptable package could be produced.
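As a hypothetical illustration of how a management-style multi-criteria decision rule could be adapted to orthographic view selection, each candidate view might be given a weighted additive score over criteria such as visible features and hidden detail; the criteria, weights and scores below are invented for the example, not taken from the thesis.

```python
def score_views(views, weights):
    """Rank candidate orthographic views by a weighted additive score.

    `views` maps a view name to its criterion scores; `weights` gives the
    relative importance of each criterion (hidden detail is penalised with a
    negative weight). Both are hypothetical stand-ins for the kind of
    decision rule the thesis adapts from management decision-making.
    """
    return sorted(
        views.items(),
        key=lambda item: sum(weights[c] * v for c, v in item[1].items()),
        reverse=True,
    )

views = {
    "front": {"visible_features": 9, "hidden_detail": 2, "dimensioning_ease": 8},
    "side":  {"visible_features": 6, "hidden_detail": 5, "dimensioning_ease": 6},
    "plan":  {"visible_features": 7, "hidden_detail": 3, "dimensioning_ease": 7},
}
weights = {"visible_features": 0.5, "hidden_detail": -0.2, "dimensioning_ease": 0.3}
print(score_views(views, weights))
```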
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm for segmenting non-textured regions; and the Granlund method for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on the issues related to the engineering of concurrent image processing applications.
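The Row-Column method computes a two-dimensional Fourier transform as one-dimensional transforms along every row followed by one-dimensional transforms along every column, which is what makes it straightforward to distribute over an array of processors. A minimal sequential sketch in Python (not the Occam/Transputer implementation) is:

```python
import numpy as np

def row_column_fft2(image):
    """2-D FFT via the Row-Column method: 1-D FFTs on rows, then on columns.

    In a parallel version each processor of the array would own a block of
    rows, transform them, take part in a transpose (or column exchange), and
    then transform its block of columns.
    """
    rows_done = np.fft.fft(image, axis=1)   # 1-D FFT of every row
    return np.fft.fft(rows_done, axis=0)    # 1-D FFT of every column

img = np.random.rand(8, 8)
assert np.allclose(row_column_fft2(img), np.fft.fft2(img))
```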
Abstract:
This work sets out to evaluate the potential benefits and pit-falls in using a priori information to help solve the Magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information that is used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow for the incorporation of a priori information in an insightful and straightforward manner. In chapter three it is described how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described and the accuracy and robustness of a surface matching algorithm are evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. It is shown in chapter five that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used. However, the effect of errors in the constraints can reduce this advantage. Finally, it is demonstrated in chapter six that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole head neuromagnetometer.
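FOCUSS itself is a recursively re-weighted minimum-norm estimator, which is why a priori information can be folded into its weights so directly. A minimal sketch, with a random matrix standing in for the MEG lead-field, is:

```python
import numpy as np

def focuss(A, b, n_iter=10, eps=1e-12):
    """Basic FOCUSS: recursively re-weighted minimum-norm estimation.

    A -- lead-field (sensors x sources), b -- measured field.
    Starting from the minimum-norm solution, each iteration re-weights the
    problem by the previous estimate, progressively focusing energy onto a
    few sources; a priori constraints can be folded into the weights.
    """
    x = np.linalg.pinv(A) @ b              # minimum-norm initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)       # weights from the current estimate
        x = W @ np.linalg.pinv(A @ W) @ b  # re-weighted minimum-norm step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 200))                 # stand-in lead-field
x_true = np.zeros(200); x_true[[20, 150]] = [1.0, -0.5]
b = A @ x_true
print(np.argsort(np.abs(focuss(A, b)))[-2:])       # two strongest recovered sources
```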
Abstract:
This thesis focused on applying theoretical models of synchronization to cortical dynamics as measured by magnetoencephalography (MEG). Dynamical systems theory was used both in identifying relevant variables for brain coordination and in devising methods for their quantification. We presented a method for studying interactions of linear and chaotic neuronal sources using MEG beamforming techniques. We showed that such sources can be accurately reconstructed in terms of their location, temporal dynamics and possible interactions. Synchronization in low-dimensional nonlinear systems was studied to explore specific correlates of functional integration and segregation. In the case of interacting dissimilar systems, relevant coordination phenomena involved generalized and phase synchronization, which were often intermittent. Spatially-extended systems were then studied. For locally-coupled dissimilar systems, as in the case of cortical columns, clustering behaviour occurred. Synchronized clusters emerged at different frequencies and their boundaries were marked by oscillation death. The macroscopic mean field revealed sharp spectral peaks at the frequencies of the clusters and broader spectral drops at their boundaries. These results question existing models of Event Related Synchronization and Desynchronization. We re-examined the concept of the steady-state evoked response following an AM stimulus. We showed that very little variability in the AM-following response could be accounted for by system noise. We presented a methodology for detecting local and global nonlinear interactions from MEG data in order to account for residual variability. We found cross-hemispheric nonlinear interactions of ongoing cortical rhythms concurrent with the stimulus and interactions of these rhythms with the AM-following responses. Finally, we hypothesized that holistic spatial stimuli would be accompanied by the emergence of clusters in primary visual cortex resulting in frequency-specific MEG oscillations. Indeed, we found different frequency distributions in induced gamma oscillations for different spatial stimuli, which was suggestive of temporal coding of these spatial stimuli. Further, we addressed the bursting character of these oscillations, which was suggestive of intermittent nonlinear dynamics. However, we did not observe the characteristic -3/2 power-law scaling in the distribution of interburst intervals. Further, this distribution was only seldom significantly different from the one obtained with surrogate data, in which nonlinear structure was destroyed. In conclusion, the work presented in this thesis suggests that advances in dynamical systems theory, in conjunction with developments in magnetoencephalography, may facilitate a mapping between levels of description in the brain. This may potentially represent a major advancement in neuroscience.
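Phase synchronization between two narrow-band signals is commonly quantified with the phase-locking value computed from the analytic signal; the sketch below shows this standard measure (not necessarily the exact estimator used in the thesis).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two narrow-band signals.

    Extracts instantaneous phases from the analytic (Hilbert) signals and
    measures the consistency of their phase difference: 1 = perfect phase
    locking, 0 = no consistent phase relation.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

t = np.linspace(0, 1, 1000)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.7)       # same frequency, fixed phase lag
print(phase_locking_value(a, b))           # close to 1
```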
Abstract:
With the ability to collect and store increasingly large datasets on modern computers comes the need to be able to process the data in a way that can be useful to a Geostatistician or application scientist. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively, for likelihood-based Geostatistics. Various methods have been proposed and are extensively used in an attempt to overcome these complexity issues. This thesis introduces a number of principled techniques for treating large datasets with an emphasis on three main areas: reduced complexity covariance matrices, sparsity in the covariance matrix and parallel algorithms for distributed computation. These techniques are presented individually, but it is also shown how they can be combined to produce techniques for further improving computational efficiency.
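One standard route to a sparse covariance matrix of the kind discussed above is covariance tapering: the covariance is multiplied element-wise by a compactly supported correlation function, so distant pairs become exactly zero and sparse storage and factorisation can be used. The sketch below illustrates the idea with an exponential covariance and a spherical taper; it is one common option, not necessarily the construction developed in the thesis.

```python
import numpy as np
from scipy import sparse

def tapered_covariance(coords, variance=1.0, lengthscale=1.0, taper_range=3.0):
    """Exponential covariance multiplied by a compactly supported spherical taper.

    Pairs of points further apart than `taper_range` get exactly zero
    covariance, so the matrix can be stored and factorised in sparse form,
    cutting into the quadratic memory and cubic time cost of dense likelihoods.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = variance * np.exp(-d / lengthscale)           # dense exponential covariance
    h = np.clip(d / taper_range, 0.0, 1.0)
    taper = 1.0 - 1.5 * h + 0.5 * h**3                  # spherical taper, zero at h >= 1
    return sparse.csr_matrix(cov * taper)

coords = np.random.rand(500, 2) * 10
K = tapered_covariance(coords)
print(f"{K.nnz / 500**2:.1%} of entries are non-zero")
```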
Abstract:
We propose a new simple method to achieve precise symbol synchronization using one start-of-frame (SOF) symbol in optical fast orthogonal frequency-division multiplexing (FOFDM) with subchannel spacing equal to half of the symbol rate per sub-carrier. The proposed method first identifies the SOF symbol, then exploits the evenly symmetric property of the discrete cosine transform in FOFDM, which is also valid in the presence of chromatic dispersion, to achieve precise symbol synchronization. We demonstrate its use in a 16.88-Gb/s phase-shift-keying-based FOFDM system over a 124-km field-installed single-mode fiber link and show that this technique achieves automatic precise symbol synchronization at an optical signal-to-noise ratio as low as 3 dB and after transmission.
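As a toy illustration of how even symmetry can be turned into a timing metric, a receiver can slide a window over the samples and correlate the first half of the window with the time-reversed second half; an evenly symmetric symbol then produces a peak at the correct offset. The window length and signals below are placeholders, not the experimental parameters of the paper.

```python
import numpy as np

def symmetry_metric(rx, window):
    """Timing metric that scores even symmetry inside a sliding window.

    For each candidate offset the window is split in two and the first half
    is correlated with the time-reversed second half; an evenly symmetric
    symbol produces a peak at the correct timing offset.
    """
    half = window // 2
    scores = []
    for k in range(len(rx) - window):
        a = rx[k:k + half]
        b = rx[k + half:k + window][::-1]
        scores.append(np.abs(np.vdot(a, b)) /
                      (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return int(np.argmax(scores)), scores

# Toy received trace: noise, then an evenly symmetric "symbol" starting at 200.
rng = np.random.default_rng(2)
half_symbol = rng.standard_normal(64)
symbol = np.concatenate([half_symbol, half_symbol[::-1]])
rx = np.concatenate([0.1 * rng.standard_normal(200), symbol,
                     0.1 * rng.standard_normal(200)])
offset, _ = symmetry_metric(rx, window=128)
print(offset)   # expected near 200
```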
Abstract:
Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future generation wireless communications. Call admission control is an effective mechanism to guarantee resilient, efficient, and quality-of-service (QoS) services in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For either class of calls, the traffic process is characterized as a batch arrival process, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls are different. Following this, we perform a tele-traffic queueing analysis for OFDM-based wireless multiservice networks. Formulae for the key performance metrics, call blocking probability and bandwidth utilization, are developed. Numerical investigations are presented to demonstrate the interaction between key parameters and performance metrics. The performance tradeoff among different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology, as well as the results, provides an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
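As a point of reference for the blocking-probability formulae, the classical single-class Erlang-B recursion shows how blocking follows from offered load and capacity; the paper's model generalises this with batch (multi-subcarrier) arrivals and two traffic classes, so the sketch below is only a simplified baseline.

```python
def erlang_b(offered_load, channels):
    """Erlang-B blocking probability via the standard recursion.

    This is the classical single-class, single-channel-per-call baseline;
    batch arrivals and multiple service classes require the fuller
    tele-traffic analysis, but the recursion illustrates how blocking
    probability follows from offered load (in Erlangs) and capacity.
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# E.g. 64 subcarriers offered 50 Erlangs of traffic.
print(f"blocking probability: {erlang_b(50.0, 64):.4f}")
```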
Abstract:
Wireless network optimisations are typically designed and evaluated independently of one another, under the assumption that they can be applied jointly or separately. In this paper, we analyse several rate algorithms in wireless networks. Since wireless networks follow different IEEE standards, each with its own peculiar features, data rate is one of the important parameters on which their performance depends. The optimisation of such a network depends on the behaviour of a particular rate algorithm in a given network scenario. We consider several first- and second-generation rate algorithms, each of which selects an appropriate data rate that an available wireless network can use for transmission in order to achieve good performance. We design and analyse a wireless network and report results for several rate algorithms, such as ONOE and AARF.
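AARF, one of the second-generation algorithms mentioned above, probes a higher rate after a run of consecutive successes and, if the probe fails immediately, falls back and doubles the success threshold so that unsuccessful probes become rarer. The sketch below is a simplified rendering of that idea; the threshold values are commonly cited defaults, used here illustratively.

```python
class AARF:
    """Simplified Adaptive Auto Rate Fallback (AARF) rate controller."""

    def __init__(self, rates, min_threshold=10, max_threshold=50):
        self.rates = rates                 # e.g. [1, 2, 5.5, 11] Mb/s for 802.11b
        self.index = 0
        self.threshold = min_threshold     # successes needed before probing upward
        self.min_threshold = min_threshold
        self.max_threshold = max_threshold
        self.successes = 0
        self.failures = 0
        self.just_increased = False        # True right after probing a higher rate

    def current_rate(self):
        return self.rates[self.index]

    def report(self, success):
        """Update the controller with the outcome of one transmission."""
        if success:
            self.successes += 1
            self.failures = 0
            self.just_increased = False
            if self.successes >= self.threshold and self.index + 1 < len(self.rates):
                self.index += 1            # probe the next higher rate
                self.successes = 0
                self.just_increased = True
        else:
            self.failures += 1
            self.successes = 0
            if self.just_increased:
                # The probe failed: fall back and make the next probe harder.
                self.index -= 1
                self.threshold = min(self.threshold * 2, self.max_threshold)
            elif self.failures >= 2 and self.index > 0:
                self.index -= 1
                self.threshold = self.min_threshold
            self.just_increased = False

ctrl = AARF([1, 2, 5.5, 11])
for ok in [True] * 10 + [False]:            # ten successes, then a failed probe
    ctrl.report(ok)
print(ctrl.current_rate(), ctrl.threshold)  # back at 1 Mb/s with threshold 20
```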
Abstract:
Although event-related potentials (ERPs) are widely used to study sensory, perceptual and cognitive processes, it remains unknown whether they are phase-locked signals superimposed upon the ongoing electroencephalogram (EEG) or result from phase-alignment of the EEG. Previous attempts to discriminate between these hypotheses have been unsuccessful but here a new test is presented based on the prediction that ERPs generated by phase-alignment will be associated with event-related changes in frequency whereas evoked ERPs will not. Using empirical mode decomposition (EMD), which allows measurement of narrow-band changes in the EEG without predefining frequency bands, evidence was found for transient frequency slowing in recognition memory ERPs but not in simulated data derived from the evoked model. Furthermore, the timing of phase-alignment was frequency dependent with the earliest alignment occurring at high frequencies. Based on these findings, the Firefly model was developed, which proposes that both evoked and induced power changes derive from frequency-dependent phase-alignment of the ongoing EEG. Simulated data derived from the Firefly model provided a close match with empirical data and the model was able to account for i) the shape and timing of ERPs at different scalp sites, ii) the event-related desynchronization in alpha and synchronization in theta, and iii) changes in the power density spectrum from the pre-stimulus baseline to the post-stimulus period. The Firefly model, therefore, provides not only a unifying account of event-related changes in the EEG but also a possible mechanism for cross-frequency information processing.
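Measuring narrow-band, event-related changes in frequency of the kind described above rests on extracting instantaneous frequency from each narrow-band component (such as an EMD intrinsic mode function). A minimal sketch using the Hilbert transform, with a synthetic signal that slows from 10 Hz to 8 Hz standing in for a real IMF, is:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous frequency of a narrow-band component (e.g. one IMF).

    The analytic signal gives an instantaneous phase; its unwrapped
    derivative, scaled by the sampling rate, is the instantaneous frequency,
    so transient slowing shows up as a dip in the returned trace.
    """
    phase = np.unwrap(np.angle(hilbert(imf)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 250.0
t = np.arange(0, 2, 1 / fs)
# Synthetic narrow-band signal that slows from 10 Hz to 8 Hz halfway through.
freq = np.where(t < 1.0, 10.0, 8.0)
imf = np.sin(2 * np.pi * np.cumsum(freq) / fs)
f_inst = instantaneous_frequency(imf, fs)
print(round(f_inst[100], 1), round(f_inst[400], 1))   # approximately 10.0 and 8.0
```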
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
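The insect-inspired side of such algorithms is usually built on the fixed response-threshold rule from the social-insect literature, in which an agent takes up a task with probability s^2 / (s^2 + theta^2) for stimulus s and threshold theta. A minimal sketch with illustrative agents and tasks:

```python
import random

def engage_probability(stimulus, threshold):
    """Fixed response-threshold rule from the social-insect literature.

    The probability of taking up a task rises with its stimulus s and falls
    with the agent's threshold theta: P = s^2 / (s^2 + theta^2).
    """
    return stimulus**2 / (stimulus**2 + threshold**2)

def allocate(agents, stimuli):
    """One allocation round: each idle agent takes up at most one task."""
    assignment = {}
    for agent, thresholds in agents.items():
        for task, s in stimuli.items():
            if random.random() < engage_probability(s, thresholds[task]):
                assignment[agent] = task
                break
    return assignment

# Two specialised agents (low threshold = eager for that task) and two tasks.
agents = {"a1": {"t1": 1.0, "t2": 8.0}, "a2": {"t1": 8.0, "t2": 1.0}}
stimuli = {"t1": 5.0, "t2": 5.0}
print(allocate(agents, stimuli))
```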