13 results for basis of the solution space of a homogeneous sparse linear system
in Aston University Research Archive
Abstract:
In this work the solution of a class of capital investment problems is considered within the framework of mathematical programming. Upon the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, this approach leading to zero-one non-linear programming problems, for which specifically designed solution procedures supported by a general branch and bound development are presented. In the context of both this development and the relevant mathematical properties of the previously mentioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the sequencing of the search for optimal solutions, is presented as an alternative with reduced storage requirements.
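The thesis's solution procedures are not reproduced in the abstract, but the general shape of a branch-and-bound scheme for a zero-one capital-rationing problem can be sketched. The following is a minimal illustration, not the thesis's method: it maximises total net present value under a single capital budget, using a fractional-relaxation bound to prune branches. All names and data here are hypothetical.

```python
def branch_and_bound(npv, cost, budget):
    """Maximise total NPV of selected projects subject to a capital budget.

    Depth-first branch and bound over 0-1 project decisions; the bound at
    each node is the fractional (LP-relaxation) value of the remaining
    projects within the remaining budget.
    """
    n = len(npv)
    # Visit projects in order of NPV per unit of capital (tightest bound first).
    order = sorted(range(n), key=lambda i: npv[i] / cost[i], reverse=True)

    best_value = 0.0
    best_set = []

    def bound(k, value, remaining):
        # Fractional relaxation of projects order[k:] in the remaining budget.
        for i in order[k:]:
            if cost[i] <= remaining:
                remaining -= cost[i]
                value += npv[i]
            else:
                return value + npv[i] * remaining / cost[i]
        return value

    def explore(k, value, remaining, chosen):
        nonlocal best_value, best_set
        if value > best_value:
            best_value, best_set = value, chosen[:]
        if k == len(order) or bound(k, value, remaining) <= best_value:
            return  # exhausted, or the relaxation cannot beat the incumbent
        i = order[k]
        if cost[i] <= remaining:                         # branch x_i = 1
            chosen.append(i)
            explore(k + 1, value + npv[i], remaining - cost[i], chosen)
            chosen.pop()
        explore(k + 1, value, remaining, chosen)         # branch x_i = 0

    explore(0, 0.0, budget, [])
    return best_value, sorted(best_set)
```

Extensions of this skeleton, such as a cost of capital that grows with total investment or logical dependence constraints between projects, would enter through the bound and the feasibility test at each branch.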
Abstract:
Computer simulated trajectories of bulk water molecules form complex spatiotemporal structures at the picosecond time scale. This intrinsic complexity, which underlies the formation of molecular structures at longer time scales, has been quantified using a measure of statistical complexity. The method estimates the information contained in the molecular trajectory by detecting and quantifying temporal patterns present in the simulated data (velocity time series). Two types of temporal patterns are found. The first, defined by the short-time correlations corresponding to the velocity autocorrelation decay times (≈0.1 ps), remains asymptotically stable for time intervals longer than several tens of nanoseconds. The second is caused by previously unknown longer-time correlations (found at longer than the nanosecond time scales) leading to a value of statistical complexity that slowly increases with time. A direct measure, based on the notion of statistical complexity, is introduced that describes how the trajectory explores the phase space and is independent of the particular molecular signal used as the observed time series. © 2008 The American Physical Society.
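The paper's own complexity estimator is not given in the abstract; as a rough illustration of quantifying temporal patterns in a velocity time series, one simple related device is the Shannon entropy of ordinal (permutation) patterns. This is a stand-in under that assumption, not the paper's measure:

```python
import math

def pattern_entropy(series, order=3):
    """Shannon entropy (bits) of ordinal patterns of a given order.

    Each window of `order` consecutive values is mapped to the permutation
    that sorts it; the entropy of the resulting pattern distribution is a
    simple pattern-based complexity estimate for the time series.
    """
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Permutation of indices that sorts the window, e.g. (0, 2, 1).
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

A perfectly monotone signal yields zero entropy (one pattern), while a signal with varied short-time structure spreads probability over many patterns and scores higher.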
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular we consider the solution of steady state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations; an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
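The double Fourier series at the heart of Roth-type methods can be illustrated on the simplest case. Assuming a Poisson problem −∇²u = f on a rectangle with homogeneous Dirichlet boundary conditions (a simplification of the slot problems in the thesis), expanding f in a double sine series lets the Laplacian be inverted term by term:

```python
import math

def poisson_sine_series(f_coeff, a, b, x, y, modes=20):
    """Evaluate u(x, y) solving -laplacian(u) = f on [0,a] x [0,b] with u = 0
    on the boundary, where f has the double sine expansion
        f(x, y) = sum_{m,n} f_mn sin(m*pi*x/a) sin(n*pi*y/b).

    Each coefficient of u is f_mn / ((m*pi/a)**2 + (n*pi/b)**2): the
    Laplacian acts diagonally on the sine basis, so it is inverted
    term by term, as in double Fourier series (Roth-type) solutions.
    """
    u = 0.0
    for m in range(1, modes + 1):
        for n in range(1, modes + 1):
            fmn = f_coeff(m, n)
            if fmn:
                lam = (m * math.pi / a) ** 2 + (n * math.pi / b) ** 2
                u += (fmn / lam
                      * math.sin(m * math.pi * x / a)
                      * math.sin(n * math.pi * y / b))
    return u
```

The slow convergence criticised in the thesis shows up here when f_mn decays slowly with m and n; many terms are then needed, which is what the Chebyshev acceleration is meant to address.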
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least-squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented taking sparsity into account. The related performance of these algorithms on randomly generated separable and non-separable problems is also reported.
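The preconditioned conjugate gradient routine the abstract refers to can be sketched in its simplest form. This is a generic illustration, not the thesis's implementation: it solves a symmetric positive definite system (the shape of the normal-equations systems arising in each interior-point iteration) with a diagonal Jacobi preconditioner; practical codes use stronger, sparsity-exploiting preconditioners.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for an SPD system A x = b,
    using the diagonal (Jacobi) preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    M_inv = 1.0 / np.diag(A)          # applying M^{-1} is elementwise here
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r                 # precondition the new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update the search direction
        rz = rz_new
    return x
```

The choice of preconditioner is exactly the lever the abstract describes: a better approximation to A⁻¹ lowers the effective condition number and the iteration count, at the price of costlier setup and application.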
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Static mechanical properties of 2124 Al/SiCp MMC have been measured as a function of solution temperature and time. An optimum solution treatment has been established which produces significant improvements in static mechanical properties and fatigue crack growth resistance over conventional solution treatments. Increasing the solution treatment parameters up to the optimum values improves the mechanical properties because of intermetallic dissolution, improved solute and GPB zone strengthening and increased matrix dislocation density. Increasing the solution treatment parameters beyond the optimum values results in a rapid reduction in mechanical properties due to the formation of gas porosity and surface blisters. The optimum solution treatment improves tensile properties in the transverse orientation to a greater extent than in the longitudinal orientation and this results in reduced anisotropy. © 1996 Elsevier Science Limited.
Abstract:
Background: A natural glycoprotein usually exists as a spectrum of glycosylated forms, where each protein molecule may be associated with an array of oligosaccharide structures. The overall range of glycoforms can have a variety of different biophysical and biochemical properties, although details of structure–function relationships are poorly understood because of the microheterogeneity of biological samples. Hence, there is clearly a need for synthetic methods that give access to natural and unnatural homogeneously glycosylated proteins. The synthesis of novel glycoproteins through the selective reaction of glycosyl iodoacetamides with the thiol groups of cysteine residues, placed by site-directed mutagenesis at desired glycosylation sites, has been developed. This provides a general method for the synthesis of homogeneously glycosylated proteins that carry saccharide side chains at natural or unnatural glycosylation sites. Here, we have shown that the approach can be applied to the glycoprotein hormone erythropoietin, an important therapeutic glycoprotein with three sites of N-glycosylation that are essential for in vivo biological activity. Results: Wild-type recombinant erythropoietin and three mutants in which glycosylation site asparagine residues had been changed to cysteines (His10-WThEPO, His10-Asn24Cys, His10-Asn38Cys, His10-Asn83CyshEPO) were overexpressed and purified in yields of 13 mg l−1 from Escherichia coli. Chemical glycosylation with glycosyl-β-N-iodoacetamides could be monitored by electrospray MS. Both in the wild-type and in the mutant proteins, the potential side reactions of the other four cysteine residues (all involved in disulfide bonds) were not observed. The yield of glycosylation was generally about 50% and purification of glycosylated protein from non-glycosylated protein was readily carried out using lectin affinity chromatography.
Dynamic light scattering analysis of the purified glycoproteins suggested that the glycoforms produced were monomeric and folded identically to the wild-type protein. Conclusions: Erythropoietin expressed in E. coli bearing specific Asn→Cys mutations at natural glycosylation sites can be glycosylated using β-N-glycosyl iodoacetamides even in the presence of two disulfide bonds. The findings provide the basis for further elaboration of the glycan structures and development of this general methodology for the synthesis of semi-synthetic glycoproteins.
Abstract:
The research described in this thesis investigates three issues related to the use of expert systems for decision making in organizations. These are: the effectiveness of ESs when used in different roles, to replace a human decision maker or to advise a human decision maker; the users' behaviour and opinions towards using an expert advisory system; and the possibility of organization-wide deployment of expert systems and the role of an ES at different organizational levels. The research was based on the development of expert systems within a business game environment, a simulation of a manufacturing company. This was chosen to give more control over the `experiments' than would be possible in a real organization. An expert system (EXGAME) was developed, based on a structure derived from Anthony's three levels of decision making, to manage the simulated company in the business game itself with little user intervention. On the basis of EXGAME, an expert advisory system (ADGAME) was built to help game players to make better decisions in managing the game company. EXGAME and ADGAME are thus two expert systems in the same domain performing different roles; it was found that ADGAME had, in places, to be different from EXGAME, not simply an extension of it. EXGAME was tested several times against human rivals and was evaluated by measuring its performance. ADGAME was also tested by different users and was assessed by measuring the users' performance and analysing their opinions towards it as a helpful decision making aid. The results showed that an expert system was able to replace a human at the operational level, but had difficulty at the strategic level. It also showed the success of the organization-wide deployment of expert systems in this simulated company.
Abstract:
The sectoral and occupational structure of Britain and West Germany has increasingly changed over the last fifty years from a manual manufacturing base to a non-manual service sector base. There has been a trend towards more managerial and less menial type occupations. Britain employs a higher proportion of its population in the service sector than in manufacturing compared to West Germany, except in retailing, where West Germany employs twice as many people as Britain. This is a stable sector of the economy in terms of employment, but the requirements of the workforce have changed in line with changes in the industry in both countries. School leavers in the two countries, faced with the same options (FE, training schemes or employment), have opted for the various options in different proportions: young Germans are staying longer in education before embarking on training, and young Britons are now less likely to go straight into employment than ten years ago. Training is becoming more accepted as the normal route into employment, with government policy leading the way but public opinion still slow to respond. This study investigates how vocational training has adapted to the changing requirements of industry, often determined by technological advancements. In some areas, e.g. manufacturing industry, the changes have been radical; in others, such as retailing, they have not, but skill requirements, not necessarily influenced by technology, have changed. Social-communicative skills, frequently not even considered skills and therefore not included in training, are coming to the forefront. Vocational training has adapted differently in the two countries: in West Germany on the basis of an established, over-defined system, and in Britain on the basis of an out-dated, ill-defined and almost non-existent system. In retailing, German school leavers opt for two or three year apprenticeships, whereas British school leavers are offered employment with or without formalised training.
The publicly held view of the occupation of sales assistant is one of low-level skill, low intellectual demands and a job anyone can do. The traditional skills - product knowledge, selling and social-communicative skills - have steadily been eroded. In the last five years retailers have recognised that a return to customer service, utilising the traditional skills, was going to be needed of their staff to remain competitive. This requires training. The German retail training system responded by adapting its training regulations in a long consultative process, whereas the British experimented with YTS, a formalised nationwide training scheme being a new departure. The thesis evaluates the changes in these regulations. The case studies in four retail outlets demonstrate that it is indeed product knowledge, selling and social-communicative skills which are fundamental to being a successful and content sales assistant in either country. When these skills are recognised and taught well and systematically, the foundations for career development in retailing are laid in a labour market which is continually looking for better qualified workers. Training, when planned and conducted professionally, is appreciated by staff and customers and of benefit to the company. In Britain, not enough systematic retail training to recognisable standards is carried out, whereas in West Germany a training structure on which to build is in place and is better prepared to show innovative potential. In Britain the reputation of the individual company plays a greater role, which does not ensure a national provision of good training in retailing.
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. This data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Other experimentation is conducted into the identification of novel input samples, the classification of samples from different biotopes and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation, generated without reference to the class labels, of the Upper Trent data. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistical methods. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
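The neural network classifiers themselves are not specified in the abstract. As a deliberately simple stand-in for the idea of classifying samples into a biological class, the following trains a multinomial logistic (softmax) classifier by gradient descent; the feature and label names are hypothetical, with rows of X playing the role of taxon observations and y the quality classes.

```python
import numpy as np

def train_softmax_classifier(X, y, n_classes, lr=0.5, epochs=500, seed=0):
    """Multinomial logistic (softmax) classifier trained by gradient
    descent on the cross-entropy loss. Returns weights W and biases b."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                          # one-hot targets
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)             # class probabilities
        grad = (P - Y) / len(X)                       # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(W, b, X):
    """Assign each row of X to its highest-scoring class."""
    return np.argmax(X @ W + b, axis=1)
```

A multi-layer network, as used in the thesis, replaces the single linear map with stacked nonlinear layers, but the training loop and classification rule keep this shape.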