898 results for Combinatorial Algorithms
Abstract:
Many important natural products contain the furan-2(5H)-one structure. This molecule lends itself to manipulation using combinatorial techniques because it offers more than one site for the attachment of different substituents. By developing different reaction schemes at the three sites available for attachment on the furan-2(5H)-one scaffold, combinatorial chemistry techniques can be employed to assemble libraries of novel furan-2(5H)-ones. These libraries can then be entered into various biological screening programmes. This approach will enable a vast diversity of compounds to be examined, in the hope of finding new biologically active lead structures. The work in this thesis has investigated the potential of combinatorial chemistry in the quest for new biologically active lead structures based on the furan-2(5H)-one structure. Different reactions were investigated with respect to their suitability for inclusion in a library. Once sets of reactions at the various sites had been established, the viability of these reactions in the assembly of combinatorial libraries was investigated. Purification methods were developed, and the purified products were entered into suitable biological screening tests. Results from some of these tests were optimised using structure-activity relationships, and the resulting products re-screened. The screening tests performed were for anticancer and antimicrobial activity, cholecystokinin (CCK-B) antagonism and anti-inflammatory activity (in the quest for novel cyclo-oxygenase (COX-2) selective non-steroidal anti-inflammatory drugs). It has been shown that many reactions undergone by the furan-2(5H)-one structure are suitable for the assembly of a combinatorial library. The assembly of different libraries has been investigated, with initial screening results included. From this work, further investigation into combinatorial library assembly and structure-activity relationships of screened reaction products can be undertaken.
Combinatorial approach to multi-substituted 1,4-Benzodiazepines as novel non-peptide CCK-antagonists
Abstract:
For the drug discovery process, a library of 168 multisubstituted 1,4-benzodiazepines was prepared by a 5-step solid-phase combinatorial approach. Substituents were varied at the 3-, 5-, 7- and 8-positions of the benzodiazepine scaffold. The combinatorial library was evaluated in a CCK radiolabelled binding assay, and CCKA (alimentary)- and CCKB (brain)-selective lead structures were discovered. The template of CCKA-selective 1,4-benzodiazepin-2-ones bearing the tryptophan moiety was chemically modified by selective alkylation and acylation reactions. These studies provided a series of analogues of the natural product Asperlicin. The fully optimised Asperlicin-related compound possessed CCKA activity similar to that of the naturally occurring compound. 3-Alkylated 1,4-benzodiazepines with selectivity towards the CCKB receptor subtype were optimised on A) the lipophilic side chain and B) the 2-aminophenyl ketone moiety, together with some stereochemical changes. A C3 unit in the 3-position of 1,4-benzodiazepines gave CCKB activity within the nanomolar range. Further SAR optimisation at the N1-position by selective alkylation resulted in improved CCKB binding with potentially decreased activity at the GABAA/benzodiazepine receptor complex. In vivo studies revealed two N1-alkylated compounds containing unsaturated alkyl groups with anxiolytic properties. Alternative chemical approaches have been developed, including a route that is suitable for scale-up of the desired target molecule in order to provide sufficient quantities for further in vivo evaluation.
Abstract:
The Scintillation Proximity Assay (SPA) is a method that is frequently used to detect and quantify the strength of intermolecular interactions between a biological receptor and a ligand molecule in aqueous media. This thesis describes the synthesis of scintillant-tagged compounds for application in a novel cell-based SPA. A series of 4-functionalised-2,5-diphenyloxazole molecules was synthesised. These 4-functionalised-2,5-diphenyloxazoles were evaluated by Sense Proteomic Ltd. Accordingly, the molecules were evaluated for their ability to scintillate in the presence of ionising radiation. In addition, the molecules were incorporated into liposomal preparations, which were subsequently evaluated for their ability to scintillate in the presence of ionising radiation. The optimal liposomal preparation was introduced into the membrane of HeLa cells, which were used successfully in a cell-based SPA to detect and quantify the uptake of [14C]methionine. This thesis also describes the synthesis and subsequent polymerisation of novel poly(oxyethylene glycol)-based monomers to form a series of new polymer supports. These poly(oxyethylene glycol)-polymer (POP) supports were evaluated for their ability to swell and take up mass in a variety of solvents, demonstrating that POP supports exhibit enhanced solvent compatibilities over several commercial resins. The utility of POP supports in solid-phase synthesis was also demonstrated successfully. The incorporation of (4'-vinyl)-4-benzyl-2,5-diphenyloxazole in varying mole percentages into the monomer composition resulted in the production of chemically functionalised scintillant-containing poly(oxyethylene glycol) polymer (POP-Sc) supports. These materials are compatible with both aqueous and organic solvents and scintillate efficiently in the presence of ionising radiation. The utility of POP-Sc supports in solid-phase synthesis, and a subsequent in-situ SPA to detect and quantify in real time the kinetic progress of a solid-phase reaction, was exemplified successfully. In addition, POP-Sc supports were used successfully both in the solid-phase combinatorial synthesis of a peptide nucleic acid (PNA) library and in the subsequent screening of this library for the ability to hybridise with DNA labelled with a suitable radio-isotope. These data were used to identify the dependence of the extent of hybridisation upon the number and position of complementary codon pairs. Finally, a further SPA was used to demonstrate the excellent compatibility of POP-Sc supports for the detection and quantification of enzyme assays conducted within the matrix of the POP-Sc support.
Abstract:
Many of the applications of geometric modelling are concerned with the computation of well-defined properties of the model. The applications which have received less attention are those which address questions to which there is no unique answer. This thesis describes such an application: the automatic production of a dimensioned engineering drawing. One distinctive feature of this operation is the requirement for sophisticated decision-making algorithms at each stage in the processing of the geometric model. Hence, the thesis is focussed upon the design, development and implementation of such algorithms. Various techniques for geometric modelling are briefly examined, and details are then given of the modelling package that was developed for this project. The principles of orthographic projection and dimensioning are treated and some published work on the theory of dimensioning is examined. A new theoretical approach to dimensioning is presented and discussed. The existing body of knowledge on decision-making is sampled, and the author then shows how methods which were originally developed for management decisions may be adapted to serve the purposes of this project. The remainder of the thesis is devoted to reports on the development of decision-making algorithms for orthographic view selection, sectioning and crosshatching, the preparation of orthographic views with essential hidden detail, and two approaches to the actual insertion of dimension lines and text. The thesis concludes that the theories of decision-making can be applied to work of this kind. It may be possible to generate computer solutions that are closer to the optimum than some man-made dimensioning schemes. Further work on important details is required before a commercially acceptable package could be produced.
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of the hardware architectures, and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
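As an aside for the reader (written for this listing, not taken from the thesis), the equivalence between spatial-domain and frequency-domain convolution that motivates the two FFT-based implementations can be checked in a few lines; the image size and kernel below are arbitrary illustrative choices.

```python
# Minimal sketch: direct (spatial-domain) circular convolution versus convolution
# via the two-dimensional Fourier transform. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))
kernel = np.zeros_like(image)
kernel[:3, :3] = 1.0 / 9.0          # 3x3 mean filter, zero-padded to the image size

# Direct route: accumulate shifted copies weighted by the kernel taps.
direct = np.zeros_like(image)
for dy in range(3):
    for dx in range(3):
        direct += kernel[dy, dx] * np.roll(image, shift=(dy, dx), axis=(0, 1))

# Frequency-domain route: multiply the 2-D Fourier transforms and invert.
frequency = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

print("max abs difference:", np.max(np.abs(direct - frequency)))  # ~1e-15
```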
Abstract:
This work sets out to evaluate the potential benefits and pitfalls in using a priori information to help solve the Magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information that is used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow for the incorporation of a priori information in an insightful and straightforward manner. Chapter three describes how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described, and the accuracy and robustness of a surface matching algorithm is evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. It is shown in chapter five that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used. However, errors in the constraints can reduce this advantage. Finally, it is demonstrated in chapter six that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole-head neuromagnetometer.
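For readers unfamiliar with FOCUSS, the sketch below (written for this listing, not taken from the thesis, and using a generic random matrix rather than an MEG lead-field) illustrates its core recursion: a recursively re-weighted minimum-norm estimate that concentrates the solution onto a few active sources.

```python
# Minimal FOCUSS sketch on an underdetermined linear system A x = b.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 50))       # stand-in for a lead-field (10 sensors, 50 sources)
x_true = np.zeros(50)
x_true[[7, 23]] = [1.0, -0.5]           # two active "sources"
b = A @ x_true                          # noise-free measurements, for illustration

x = np.linalg.pinv(A) @ b               # start from the minimum-norm solution
for _ in range(20):
    W = np.diag(x)                      # re-weight by the current estimate
    x = W @ np.linalg.pinv(A @ W) @ b   # weighted minimum-norm step

# Indices of surviving sources; FOCUSS converges to a sparse solution,
# ideally recovering [7, 23] in this noise-free toy example.
print(np.nonzero(np.abs(x) > 1e-6)[0])
```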
Abstract:
With the ability to collect and store increasingly large datasets on modern computers comes the need to process the data in a way that is useful to a geostatistician or application scientist. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively, for likelihood-based geostatistics. Various methods have been proposed, and are extensively used, in an attempt to overcome these complexity issues. This thesis introduces a number of principled techniques for treating large datasets, with an emphasis on three main areas: reduced-complexity covariance matrices, sparsity in the covariance matrix, and parallel algorithms for distributed computation. These techniques are presented individually, but it is also shown how they can be combined to produce techniques for further improving computational efficiency.
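To make the quoted scaling concrete, the sketch below (illustrative only; the covariance model and parameters are assumptions, not taken from the thesis) evaluates a Gaussian log-likelihood with a dense covariance matrix: forming the matrix needs quadratic memory and the Cholesky factorisation needs cubic time in the number of observations.

```python
# Minimal sketch of why likelihood-based geostatistics scales badly with n.
import numpy as np

def gaussian_loglik(y, X, lengthscale=0.5, variance=1.0, nugget=1e-6):
    """Zero-mean Gaussian log-likelihood with a squared-exponential covariance."""
    d = np.abs(X[:, None] - X[None, :])               # pairwise distances: O(n^2) memory
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2) + nugget * np.eye(len(X))
    L = np.linalg.cholesky(K)                          # O(n^3) time: the bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 10, size=500))              # 500 observation locations
y = np.sin(X) + 0.1 * rng.standard_normal(500)
print(gaussian_loglik(y, X))
```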
Abstract:
We consider a variation of the prototype combinatorial optimization problem known as graph colouring. Our optimization goal is to colour the vertices of a graph with a fixed number of colours, so as to maximize the number of different colours present in the set of nearest neighbours of each given vertex. This problem, which we pictorially call palette-colouring, has recently been addressed as a basic example of a problem arising in the context of distributed data storage. Even though it has not been proved to be NP-complete, random search algorithms find the problem hard to solve. Heuristics based on a naive belief propagation algorithm are observed to work quite well under certain conditions. In this paper, we build upon that result, working out the correct belief propagation algorithm, which needs to take into account the many-body nature of the constraints present in this problem. This method improves on the naive belief propagation approach at the cost of increased computational effort. We also investigate the emergence of a satisfiable-to-unsatisfiable 'phase transition' as a function of the vertex mean degree, for different ensembles of sparse random graphs in the large size ('thermodynamic') limit.
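To fix ideas, the following sketch (not the paper's belief propagation method) states the palette-colouring objective on a small random graph and applies a naive single-flip local search as a baseline; the graph size, number of colours and iteration budget are arbitrary choices.

```python
# Minimal sketch of the palette-colouring objective and a naive local search.
import random

random.seed(3)
n, q, mean_degree = 60, 4, 5.0
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < mean_degree / (n - 1)]        # sparse Erdos-Renyi graph
neigh = {v: set() for v in range(n)}
for i, j in edges:
    neigh[i].add(j)
    neigh[j].add(i)

def score(colour):
    """Sum over vertices of the number of distinct colours among the neighbours."""
    return sum(len({colour[u] for u in neigh[v]}) for v in range(n))

colour = [random.randrange(q) for _ in range(n)]
for _ in range(20000):                                       # naive single-flip local search
    v, c = random.randrange(n), random.randrange(q)
    old, best = colour[v], score(colour)
    colour[v] = c
    if score(colour) < best:
        colour[v] = old                                      # keep only non-worsening moves

print("average distinct neighbour colours:", score(colour) / n)
```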
Abstract:
Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future-generation wireless communications. Call admission control is an effective mechanism for guaranteeing resilient, efficient services with quality-of-service (QoS) support in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For either class of calls, the traffic process is characterized as a batch arrival process, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls are different. We then perform a tele-traffic queueing analysis for OFDM-based wireless multiservice networks, and develop formulae for the key performance metrics: call blocking probability and bandwidth utilization. Numerical investigations are presented to demonstrate the interaction between key parameters and performance metrics, and the performance trade-off among the different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology and results provide an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
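As an illustration only (the paper's analytical formulae are not reproduced here; the capacity, rates and batch-size PMFs below are assumed values), call blocking under a simple complete-sharing admission rule for two call classes with batch subcarrier requests can be estimated by a small discrete-event simulation:

```python
# Minimal sketch: batch-arrival call admission over a pool of OFDM subcarriers.
import heapq
import random

random.seed(4)
C = 48                                 # total subcarriers (assumed)
classes = {                            # arrival rate, service rate, batch-size PMF (assumed)
    "narrow": dict(lam=0.8, mu=1.0, pmf=[(1, 0.7), (2, 0.3)]),
    "wide":   dict(lam=0.3, mu=0.5, pmf=[(4, 0.5), (6, 0.3), (8, 0.2)]),
}

def draw_batch(pmf):
    r, acc = random.random(), 0.0
    for size, p in pmf:
        acc += p
        if r < acc:
            return size
    return size

events, seq = [], 0                    # heap of (time, tie-breaker, kind, payload)
for name, c in classes.items():
    heapq.heappush(events, (random.expovariate(c["lam"]), seq, "arrival", name)); seq += 1

used, t_end = 0, 50_000.0
arrivals = {n: 0 for n in classes}
blocked = {n: 0 for n in classes}

while events:
    t, _, kind, info = heapq.heappop(events)
    if t > t_end:
        break
    if kind == "departure":
        used -= info                   # release this call's subcarriers
        continue
    c = classes[info]
    arrivals[info] += 1
    need = draw_batch(c["pmf"])
    if used + need <= C:               # complete-sharing rule: admit if capacity allows
        used += need
        heapq.heappush(events, (t + random.expovariate(c["mu"]), seq, "departure", need)); seq += 1
    else:
        blocked[info] += 1
    heapq.heappush(events, (t + random.expovariate(c["lam"]), seq, "arrival", info)); seq += 1

for name in classes:
    print(name, "blocking probability ~", blocked[name] / arrivals[name])
```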
Abstract:
Wireless network optimisations are typically designed and evaluated independently of one another, under the assumption that they can be applied jointly or independently. In this paper, we analyse several rate algorithms in wireless networks. Wireless networks follow different IEEE standards with distinct features, and the data rate is one of the important parameters on which network performance depends. The optimisation of such a network depends on the behaviour of the particular rate algorithm in a given network scenario. We consider several first- and second-generation rate algorithms, whose common aim is to select an appropriate data rate that any available wireless network can use for transmission in order to achieve good performance. We have designed and analysed a wireless network and present results obtained for several rate algorithms, including ONOE and AARF.
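For context, the sketch below outlines the widely described AARF (Adaptive Auto Rate Fallback) rule named above; the rate set, thresholds and toy success model are illustrative assumptions, not the paper's simulation setup.

```python
# Minimal sketch of AARF-style rate adaptation.
import random

RATES = [6, 9, 12, 18, 24, 36, 48, 54]      # Mbps, an IEEE 802.11a-style rate set

class AARF:
    def __init__(self):
        self.idx = 0                         # current rate index
        self.successes = 0
        self.failures = 0
        self.threshold = 10                  # consecutive successes needed to move up
        self.just_increased = False

    def rate(self):
        return RATES[self.idx]

    def report(self, success):
        if success:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.threshold and self.idx < len(RATES) - 1:
                self.idx += 1                # probe the next higher rate
                self.successes = 0
                self.just_increased = True
            else:
                self.just_increased = False
        else:
            self.failures += 1
            self.successes = 0
            if self.just_increased:
                self.threshold = min(self.threshold * 2, 50)   # the "adaptive" part of AARF
                self.idx -= 1
            elif self.failures >= 2 and self.idx > 0:
                self.threshold = 10
                self.idx -= 1
            self.just_increased = False

# Toy usage: transmissions succeed less often at higher rates.
random.seed(5)
aarf = AARF()
for _ in range(2000):
    p_success = 1.0 - 0.1 * aarf.idx
    aarf.report(random.random() < p_success)
print("settled rate:", aarf.rate(), "Mbps")
```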
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite-size effects, the influence of population size on performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
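As an illustrative aside (this is a generic response-threshold rule in the spirit of insect-inspired task allocation, not the paper's hybrid algorithm or its optimised parameters), the probability that an agent engages a task can be made to grow with the task's stimulus and shrink with the agent's own threshold:

```python
# Minimal sketch of threshold-based distributed task allocation.
import random

random.seed(6)
n_agents, n_tasks, steps = 50, 4, 200
thresholds = [[random.uniform(1, 10) for _ in range(n_tasks)] for _ in range(n_agents)]
stimulus = [5.0] * n_tasks
performed = [0] * n_tasks

for _ in range(steps):
    for a in range(n_agents):
        j = random.randrange(n_tasks)                        # each agent samples one task per step
        s, theta = stimulus[j], thresholds[a][j]
        if random.random() < s * s / (s * s + theta * theta):
            performed[j] += 1
            stimulus[j] = max(0.0, stimulus[j] - 0.1)        # performing work lowers demand
    stimulus = [s + 0.5 for s in stimulus]                   # demand builds up each step

print("tasks performed per type:", performed)
```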
Abstract:
Objective: Recently, much research has focused on using nature-inspired algorithms to perform complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm; it is based on swarm intelligence and is derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of natural ant behaviors such as cooperation and adaptation to allow for a flexible, robust search for good candidate solutions. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithms such as K-means or agglomerative hierarchical clustering. For associative classification, while a few of the well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
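For orientation, the sketch below shows the generic pheromone selection-and-reinforcement cycle on which ant-based algorithms rest; it is not the authors' Ant-C or Ant-ARM code, and all parameters are illustrative.

```python
# Minimal sketch of the ACO cycle: pheromone-proportional choice, reinforcement
# of good choices, and evaporation, shown on two candidate "paths".
import random

random.seed(7)
lengths = [1.0, 2.0]                  # a short and a long option
pheromone = [1.0, 1.0]
evaporation, n_ants, n_iters = 0.1, 20, 50

for _ in range(n_iters):
    deposits = [0.0, 0.0]
    for _ in range(n_ants):
        total = sum(pheromone)
        choice = 0 if random.random() < pheromone[0] / total else 1   # pheromone-proportional choice
        deposits[choice] += 1.0 / lengths[choice]                     # shorter option earns more pheromone
    pheromone = [(1 - evaporation) * p + d for p, d in zip(pheromone, deposits)]

print("final pheromone (short, long):", [round(p, 1) for p in pheromone])
```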
Abstract:
There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
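A minimal sketch of the pipeline described above, feature selection followed by a random-forest classifier scored by cross-validation, is given below; it uses synthetic stand-in data rather than the 263-sample voice database, and the particular selector and hyperparameters are assumptions rather than the paper's choices.

```python
# Minimal sketch: select a parsimonious feature subset and classify with a random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for 263 voice samples with 132 dysphonia-style measures.
X, y = make_classification(n_samples=263, n_features=132, n_informative=10,
                           random_state=0)

pipeline = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),                 # keep a parsimonious 10-feature subset
    RandomForestClassifier(n_estimators=500, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=10)
print("mean cross-validated accuracy:", scores.mean().round(3))
```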