890 results for combinatorial optimisation
Abstract:
This thesis describes current and past n-in-one methods and presents three early experimental studies, using mass spectrometry and the triple quadrupole instrument, on the application of n-in-one in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis, allowing the most promising compounds to be rapidly identified and then analysed. Nowadays the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods, such as Caco-2 cells and immobilized artificial membrane chromatography, for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds forming a homologous series were characterised, and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents and of alkyl chain lengths from methyl to pentyl on collision-induced fragmentation was studied, and interesting structure-fragmentation relationships were detected. Compounds with two nitro groups fragmented more readily than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also used in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. The results correlated well when detectors and buffers were changed and when n-in-one analysis was included in the method. Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. This method was applied to Caco-2 samples, in which the absorption of ten chemically and physiologically different compounds was screened using both single-compound and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis. A different mass spectrometric scanning mode of the triple quadrupole instrument was used in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and its proper use, is especially important.
Abstract:
In this paper, the design basis of the conventional Khadi and Village Industries Commission biogas plants has been elucidated. It has been shown that minimising the cost of the gas holder alone leads to the narrow, deep digesters of conventional plants. If, instead, the total capital cost of the gas holder plus digester is minimised, the optimisation leads to wide, shallow digesters, which are less expensive. To test this alternative, two prototype plants have been designed, constructed and operated. These plants are not only 25–40% cheaper, but their performance is actually slightly better than that of the conventional plants.
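The cost argument can be made concrete with a minimal sketch, assuming a cylindrical digester of fixed volume whose masonry cost scales with wall-plus-base area and a floating gas holder whose cost scales with its plan area; the volume and cost coefficients below are hypothetical placeholders, not figures from the paper.

```python
# Minimal sketch of the cost argument, assuming a cylindrical digester of
# fixed volume V with a floating gas holder whose cost scales with its
# plan area. All numbers are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

V = 10.0       # required digester volume, m^3 (assumed)
c_gas = 50.0   # gas-holder cost per m^2 of drum area (assumed)
c_dig = 10.0   # digester masonry cost per m^2 (assumed)

def depth(D):
    """Depth needed to hold volume V at diameter D."""
    return 4.0 * V / (np.pi * D**2)

def gas_holder_cost(D):
    return c_gas * np.pi * D**2 / 4.0            # drum cost ~ plan area

def total_cost(D):
    wall_and_base = np.pi * D * depth(D) + np.pi * D**2 / 4.0
    return gas_holder_cost(D) + c_dig * wall_and_base

res_g = minimize_scalar(gas_holder_cost, bounds=(1.0, 6.0), method="bounded")
res_t = minimize_scalar(total_cost, bounds=(1.0, 6.0), method="bounded")

for label, r in (("gas holder only", res_g), ("total capital cost", res_t)):
    print(f"{label}: D = {r.x:.2f} m, depth = {depth(r.x):.2f} m")
```

Under these assumptions, minimising the gas-holder cost alone drives the diameter to its lower bound, giving a narrow, deep digester, while minimising the total cost settles on a wider, shallower geometry.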
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, that is, geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
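To make the first perspective concrete, here is a minimal centralised sketch of a max-min linear program: maximise the worst-off objective min_k (Cx)_k subject to shared capacity constraints Ax <= 1 and x >= 0. The matrices form a toy instance invented for illustration, and an off-the-shelf LP solver is used; the thesis itself concerns local distributed algorithms for such programs.

```python
# Minimal centralised sketch of a max-min linear program:
#   maximise  min_k (C x)_k   subject to  A x <= 1,  x >= 0,
# solved by introducing t and maximising t with C x >= t.
# A and C form a toy instance, not data from the thesis.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 0.0],    # shared energy budgets over the flows
              [0.0, 1.0, 1.0]])
C = np.array([[1.0, 0.0, 1.0],    # each agent's utility from each flow
              [0.0, 1.0, 0.0]])

n = A.shape[1]
c = np.zeros(n + 1)
c[-1] = -1.0                       # maximise t == minimise -t
A_ub = np.block([[A, np.zeros((A.shape[0], 1))],    # A x <= 1
                 [-C, np.ones((C.shape[0], 1))]])   # t - (C x)_k <= 0
b_ub = np.concatenate([np.ones(A.shape[0]), np.zeros(C.shape[0])])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
print("optimal min objective:", round(res.x[-1], 4), "flows:", res.x[:-1])
```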
Abstract:
A simplicial complex is said to satisfy complementarity if exactly one of each complementary pair of nonempty vertex-sets constitutes a face of the complex.
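A small sketch makes the condition concrete, assuming the complex is given as an explicit list of faces over a finite vertex set (toy data, not from the source): for every complementary pair {A, V \ A} of nonempty subsets, exactly one of the two may be a face.

```python
# Hypothetical helper: checks the complementarity condition for a complex
# given as an explicit collection of faces over vertex set V.
from itertools import combinations

def satisfies_complementarity(V, faces):
    faces = {frozenset(f) for f in faces}
    V = frozenset(V)
    seen = set()
    for r in range(1, len(V)):                 # nonempty proper subsets
        for A in combinations(sorted(V), r):
            A = frozenset(A)
            pair = frozenset((A, V - A))
            if pair in seen:
                continue
            seen.add(pair)
            # exactly one of A and V \ A may be a face
            if (A in faces) == ((V - A) in faces):
                return False
    return True

# Over V = {1, 2, 3}, the complex whose faces are the three singletons
# satisfies complementarity: each singleton is a face, no 2-set is.
print(satisfies_complementarity({1, 2, 3}, [{1}, {2}, {3}]))   # True
```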
Abstract:
The aim of logic synthesis is to produce circuits that realise the given Boolean function while meeting timing constraints and requiring the minimum silicon area. Logic synthesis involves two steps, namely logic decomposition and technology mapping. Existing methods treat the two as separate operations. The traditional approach is to minimize the number of literals without considering the target technology during the decomposition phase; the decomposed expressions are then mapped onto the target technology to optimize the area, and timing optimization is carried out subsequently. A new approach which treats logic decomposition and technology mapping as a single operation is presented. The logic decomposition is based on the parameters of the target technology, and area and timing optimization is carried out during the logic decomposition phase itself. Results using MCNC circuits are presented to show that this method produces circuits which are 38% faster while requiring a 14% increase in area.
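The flavour of technology-driven decomposition can be illustrated in a toy setting: decompose a k-input AND into a tree of 2- and 3-input cells, letting the area and delay parameters of a hypothetical cell library, rather than literal count alone, select the tree shape. This is only a sketch of the idea, not the algorithm of the paper.

```python
# Toy illustration: technology-aware decomposition of a k-input AND into
# a tree of 2- and 3-input AND cells. The library's area/delay numbers
# (invented) drive the choice of tree, not literal count alone.
from functools import lru_cache

LIB = {2: {"area": 2.0, "delay": 1.0},   # hypothetical 2-input AND cell
       3: {"area": 3.5, "delay": 1.4}}   # hypothetical 3-input AND cell

@lru_cache(maxsize=None)
def best(k):
    """(delay, area) of the fastest AND tree over k inputs; ties by area."""
    if k == 1:
        return (0.0, 0.0)                # an input wire costs nothing
    cands = []
    for f, cell in LIB.items():
        if k < f:
            continue
        # split the k inputs into f nearly equal groups (balanced tree)
        sizes = [k // f + (1 if i < k % f else 0) for i in range(f)]
        sub = [best(s) for s in sizes]
        delay = cell["delay"] + max(d for d, _ in sub)
        area = cell["area"] + sum(a for _, a in sub)
        cands.append((delay, area))
    return min(cands)

d, a = best(8)
print(f"8-input AND: delay = {d}, area = {a}")
```

With the invented library the dynamic program prefers 3-input cells for speed; changing the cell parameters changes the chosen decomposition, which is precisely the coupling between decomposition and mapping that the paper exploits.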
Abstract:
An optimisation of the design of unbalanced magnetron (UBM) sputtering cathodes is reported. For the study, a planar circular cathode backed by a double-coil electromagnet (compatible with a 100 mm diameter target) was developed. The variation of the structure and strength of the magnetic field in front of the target was investigated for different current combinations in the electromagnetic coils, and its effect on the sputtering process was analysed. The observations on the magnetic field geometry revealed some interesting features, such as the balancing point of the fields along the axis (the null point) and the ring of zero axial field over the target surface (the Bz = 0 ring). The positions of both could be controlled by adjusting the ratio of the electric currents in the coils. The magnetic field null point could be used as a reference for the region of homogeneous film growth, while the Bz = 0 ring was the location where the glow discharge concentrated (and where the maximum target erosion occurred). The diameter of the ring determined the area covered by the discharge and thus the sputtering efficiency. The optimum substrate position can be fixed according to the position of the null point, and the sputtering can be optimised by adjusting the diameter of the Bz = 0 ring. The results of this study should be helpful in designing an ideal UBM using permanent magnets as well as electromagnets.
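As a rough illustration of how the null point responds to the coil-current ratio, the sketch below models each coil by the standard on-axis field formula for a circular current loop and locates the axial zero of the combined field; the radii, turn counts and current ratios are invented values, not the geometry of the cathode studied here.

```python
# Rough sketch: on-axis field of a double-coil (unbalanced) magnet pair,
# modelled with the standard axial formula for a circular current loop.
# Radii, ampere-turns and current ratios are invented illustrative values.
import numpy as np
from scipy.optimize import brentq

MU0 = 4e-7 * np.pi

def loop_Bz(z, R, NI):
    """Axial field at height z above a loop of radius R with NI ampere-turns."""
    return MU0 * NI * R**2 / (2.0 * (R**2 + z**2) ** 1.5)

def Bz(z, ratio):
    # inner coil minus a counter-wound outer coil carrying `ratio` times
    # the inner current
    return loop_Bz(z, 0.02, 200.0) - loop_Bz(z, 0.06, 200.0 * ratio)

# The axial null point moves as the coil-current ratio is adjusted:
for ratio in (1.5, 2.0, 2.5):
    z_null = brentq(lambda z: Bz(z, ratio), 1e-3, 0.5)
    print(f"I_outer/I_inner = {ratio}: null point at z = {z_null * 100:.1f} mm")
```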
Abstract:
Combinatorial exchanges are double-sided marketplaces with multiple sellers and multiple buyers trading with the help of combinatorial bids. The allocation problem and other associated problems in such exchanges are known to be among the hardest to solve among all economic mechanisms. In this paper, we develop computationally efficient iterative auction mechanisms for solving combinatorial exchanges. Our mechanisms satisfy the individual rationality (IR) and budget nonnegativity (BN) properties. We also show that our method is bounded and convergent. Our numerical experiments show that our algorithm produces good-quality solutions and is computationally efficient.
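A minimal brute-force sketch of the underlying allocation (winner determination) problem, on a toy instance invented here: choose a subset of buy and sell bids on bundles that maximises surplus while the net supply of every item stays nonnegative (free disposal).

```python
# Brute-force sketch of the allocation (winner determination) problem in
# a tiny combinatorial exchange: maximise surplus = (sum of accepted buy
# prices) - (sum of accepted sell prices) subject to nonnegative net
# supply of every item (free disposal). Toy bids invented for illustration.
from itertools import chain, combinations

items = ("A", "B")
bids = [("buy",  {"A": 1, "B": 1}, 12.0),
        ("buy",  {"A": 1},          5.0),
        ("sell", {"A": 1},          4.0),
        ("sell", {"A": 1, "B": 1},  6.0)]

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

best_surplus, best_set = 0.0, ()
for chosen in powerset(range(len(bids))):
    supply = dict.fromkeys(items, 0)
    surplus = 0.0
    for j in chosen:
        kind, bundle, price = bids[j]
        sign = 1 if kind == "sell" else -1       # sells provide, buys consume
        for item, q in bundle.items():
            supply[item] += sign * q
        surplus += price if kind == "buy" else -price
    if all(v >= 0 for v in supply.values()) and surplus > best_surplus:
        best_surplus, best_set = surplus, chosen

print("max surplus:", best_surplus, "accepted bid indices:", list(best_set))
```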
Abstract:
Combinatorial exchanges are double-sided marketplaces with multiple sellers and multiple buyers trading with the help of combinatorial bids. The allocation problem and other associated problems in such exchanges are known to be among the hardest to solve among all economic mechanisms. It has been shown that the problems of surplus maximization and volume maximization in combinatorial exchanges are inapproximable even with free disposal. In this paper, the surplus maximization problem is formulated as an integer linear program, and we propose a Lagrangian relaxation based heuristic to find a near-optimal solution. We develop computationally efficient tâtonnement mechanisms for clearing combinatorial exchanges, in which the Lagrangian multipliers can be interpreted as the prices of the items set by the exchange in each iteration. Our mechanisms satisfy the individual rationality and budget nonnegativity properties. Computational experiments on representative data sets show that the proposed heuristic produces feasible solutions with negligible optimality gaps.
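The price-update idea can be sketched as follows, on toy bids invented here: relax the per-item balance constraints with multipliers interpreted as item prices, let each bid be accepted exactly when it is profitable at the current prices, and adjust the prices by a subgradient step on the excess demand. Step sizes and data are illustrative; this is not the paper's exact mechanism.

```python
# Sketch of a Lagrangian price-update (tatonnement) loop: prices are the
# multipliers of the per-item balance constraints, each bid is accepted
# exactly when profitable at current prices, and prices follow a
# subgradient step on excess demand. Toy bids and step sizes, invented.
items = ("A", "B")
bids = [("buy",  {"A": 1, "B": 1}, 12.0),
        ("buy",  {"A": 1},          5.0),
        ("sell", {"A": 1},          4.0),
        ("sell", {"A": 1, "B": 1},  6.0)]

prices = {i: 0.0 for i in items}
for t in range(1, 101):
    step = 1.0 / t                              # diminishing step size
    excess = {i: 0.0 for i in items}            # demand minus supply
    for kind, bundle, price in bids:
        cost = sum(prices[i] * q for i, q in bundle.items())
        if kind == "buy" and price >= cost:     # profitable to buy
            for i, q in bundle.items():
                excess[i] += q
        if kind == "sell" and price <= cost:    # profitable to sell
            for i, q in bundle.items():
                excess[i] -= q
    for i in items:                             # subgradient step on prices
        prices[i] = max(0.0, prices[i] + step * excess[i])

print({i: round(p, 2) for i, p in prices.items()})
```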
Abstract:
We prove a lower bound of Ω((1/ε)(m + log(d − a))), where a = ⌈log_m(1/(4ε))⌉, on the size of hitting sets for combinatorial rectangles of volume at least ε in [m]^d, for ε ∈ [m^(−(d−2)), 2/9] and d > 2.
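To fix the definitions: a combinatorial rectangle in [m]^d is a product S_1 × ... × S_d of nonempty subsets of {0, ..., m−1}, its volume is (|S_1| · ... · |S_d|)/m^d, and a hitting set must intersect every rectangle of volume at least ε. The brute-force check below uses tiny toy parameters, well outside the regime of the bound.

```python
# Definition check on a tiny instance: does the candidate set H hit every
# combinatorial rectangle of volume >= eps in [m]^d? Parameters are toy
# values, far from the regime covered by the lower bound.
from itertools import combinations, product

m, d, eps = 3, 2, 1.0 / 3.0

def nonempty_subsets(xs):
    return [set(c) for r in range(1, len(xs) + 1)
                   for c in combinations(xs, r)]

def hits_all(H):
    for rect in product(nonempty_subsets(range(m)), repeat=d):
        volume = 1.0
        for S in rect:
            volume *= len(S) / m
        if volume >= eps and not any(all(p[i] in rect[i] for i in range(d))
                                     for p in H):
            return False        # an eps-large rectangle missed by H
    return True

H = {(0, 0), (1, 1), (2, 2)}    # the "diagonal" candidate
print(hits_all(H))              # True: H hits every rectangle of volume >= 1/3
```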
Abstract:
Composite coatings containing quasicrystalline (QC) phases of Al-Cu-Fe alloys were prepared by laser cladding using a mixture of the elemental powders. Two substrates, namely pure aluminum and an Al-Si alloy, were used. The clad layers were remelted at different scanning velocities to alter the growth conditions of the different phases, and the process parameters were optimized to produce quasicrystalline phases. The evolution of the microstructure in the coating layer was characterized by detailed microstructural investigation. The results indicate the presence of quasicrystals in the coating on the aluminum substrate; however, only an approximant phase could be observed in the coating on the Al-Si alloy substrate. It is shown that there is significant transport of Si atoms from the substrate to the clad layer during the cladding and remelting process. The hardness profiles of the coatings on the aluminum substrate indicate a very high hardness, whereas the coating on the Al-Si alloy is ductile and soft. The fracture toughness of the hard coating on aluminum was obtained by the nano-indentation technique; the K_IC value was found to be 1.33 MPa m^(1/2), which is typical of brittle materials.
Abstract:
Web services are now a key ingredient of the software services offered by software enterprises. Many standardized web services are now available as commodity offerings from web service providers. An important problem for a web service requester is the web service composition problem, which involves selecting the right mix of web service offerings to execute an end-to-end business process. Web service offerings are now available in bundled form as composite web services and, more recently, volume discounts are also on offer, based on the number of executions of web services requested. In this paper, we develop efficient algorithms for the web service composition problem in the presence of composite web service offerings and volume discounts. We model this problem as a combinatorial auction with volume discounts. We first develop efficient polynomial-time algorithms for the case where the end-to-end service involves a linear workflow of web services, and then for the case where it involves a tree workflow.
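For intuition, here is a sketch of the linear-workflow case under a simplifying assumption made for illustration: composite offerings cover contiguous runs of the n workflow steps, so choosing a minimum-cost mix of offerings reduces to a shortest-path-style dynamic program over step boundaries. The offer data is invented.

```python
# Sketch of the linear-workflow case, assuming composite offerings cover
# contiguous runs of the n workflow steps: selecting a minimum-cost mix
# of offerings becomes a shortest-path / dynamic-programming computation
# over step boundaries. Offer data is invented for illustration.
import math

n = 4                                  # workflow steps 1..n
# (first step covered, last step covered, price) -- contiguous bundles
offers = [(1, 1, 3.0), (2, 2, 4.0), (3, 3, 2.0), (4, 4, 5.0),
          (1, 2, 6.0), (2, 4, 8.0), (3, 4, 6.0), (1, 4, 15.0)]

# cost[i] = cheapest way to cover steps 1..i
cost = [0.0] + [math.inf] * n
for i in range(1, n + 1):
    for first, last, price in offers:
        if last == i and cost[first - 1] + price < cost[i]:
            cost[i] = cost[first - 1] + price

print("minimum cost to execute all steps:", cost[n])   # 11.0 here
```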
Abstract:
An elementary combinatorial Tanner graph construction for a family of near-regular low-density parity-check (LDPC) codes achieving high girth is presented. These codes are near-regular in the sense that the degree of a left/right vertex is allowed to differ by at most one from the average. The construction yields, with quadratic time complexity, an asymptotic family of codes with provable lower bounds on the rate and the girth for a given choice of block length and average degree. The construction gives flexibility in the choice of design parameters of the code, such as rate, girth and average degree. Performance simulations of an iterative decoding algorithm on the AWGN channel demonstrate that codes designed using this method perform better than regular PEG codes and MacKay codes of similar length for all values of signal-to-noise ratio.
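A validation step for any such construction is computing the girth of the Tanner graph. The sketch below builds the bipartite Tanner graph of a parity-check matrix H and finds the shortest cycle by breadth-first search from every vertex; H is a toy matrix, not a code produced by the construction.

```python
# Sketch: girth of the Tanner graph of a parity-check matrix H, found by
# BFS from every vertex. Checks are nodes 0..rows-1, variables are nodes
# rows..rows+cols-1. H is a toy matrix for illustration.
from collections import deque

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

rows, cols = len(H), len(H[0])
adj = {v: set() for v in range(rows + cols)}
for r in range(rows):
    for c in range(cols):
        if H[r][c]:
            adj[r].add(rows + c)
            adj[rows + c].add(r)

def girth():
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:     # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

print("girth:", girth())                 # bipartite, so always even; 6 here
```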
Abstract:
We look at graphical descriptions of block codes known as trellises, which illustrate connections between algebra and graph theory and can be used to develop powerful decoding algorithms. Trellis sizes for linear block codes are known to grow exponentially with the code parameters. Of considerable interest to coding theorists, therefore, are more compact descriptions called tail-biting trellises, which in some cases can be much smaller than any conventional trellis for the same code. We derive some interesting properties of tail-biting trellises and present a new decoding algorithm.
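The growth of conventional trellises can be made concrete with a standard computation: the minimal trellis of a binary linear [n, k] code has 2^(s_i) states at time i, where s_i equals k minus the dimensions of the subcodes supported entirely before and entirely after that point. The sketch below evaluates this profile for the [7, 4] Hamming code; tail-biting trellises are precisely an attempt to beat such profiles.

```python
# Sketch: state-complexity profile of the minimal conventional trellis of
# a binary linear [n, k] code. At boundary i the trellis has 2^{s_i}
# states, where s_i = k - dim(subcode supported on coords < i)
#                       - dim(subcode supported on coords >= i).
# Shown for a standard generator matrix of the [7, 4] Hamming code.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)
k, n = G.shape

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        piv = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

def subcode_dim(lo, hi):
    """Dimension of the subcode of codewords vanishing outside [lo, hi)."""
    outside = [j for j in range(n) if not lo <= j < hi]
    if not outside:
        return k
    return k - gf2_rank(G[:, outside])

profile = [k - subcode_dim(0, i) - subcode_dim(i, n) for i in range(n + 1)]
print("state-complexity profile s_0..s_n:", profile)   # peaks mid-block
```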