12 results for Images - Computational methods
in Aston University Research Archive
Abstract:
Traditional Chinese Medicine (TCM) has been actively researched through various approaches, including computational techniques. A review of the basic elements of TCM is provided to illuminate the various challenges and progress in its study using computational methods. Information on various TCM formulations, in particular resources on databases of TCM formulations and their integration with Western medicine, is analyzed in several facets, such as TCM classifications, types of databases, and mining tools. Aspects of computational TCM diagnosis, namely inspection, auscultation, pulse analysis as well as TCM expert systems, are reviewed in terms of their benefits and drawbacks. Various approaches to exploring relationships among TCM components and finding genes/proteins related to TCM symptom complexes are also studied. This survey provides a summary of the advances in computational approaches for TCM and will be useful for future knowledge discovery in this area. © 2007 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Crotonaldehyde (2-butenal) adsorption over gold sub-nanometer particles, and the influence of co-adsorbed oxygen, has been systematically investigated by computational methods. Using density functional theory, the adsorption energetics of crotonaldehyde on bare and oxidised gold clusters (Au, d = 0.8 nm) were determined as a function of oxygen coverage and coordination geometry. At low oxygen coverage, sites are available for which crotonaldehyde adsorption is enhanced relative to bare Au clusters by 10 kJ mol⁻¹. At higher oxygen coverage, crotonaldehyde is forced to adsorb in close proximity to oxygen, weakening adsorption by up to 60 kJ mol⁻¹ relative to bare Au. Bonding geometries, density of states plots and Bader analysis are used to elucidate crotonaldehyde bonding to gold nanoparticles in terms of partial electron transfer from Au to crotonaldehyde, and to show that donation to gold from crotonaldehyde also becomes significant following metal oxidation. At high oxygen coverage we find that all molecular adsorption sites have a neighbouring, destabilising, oxygen adatom, so that despite enhanced donation, crotonaldehyde adsorption is always weakened by steric interactions. For a larger cluster (Au, d = 1.1 nm) crotonaldehyde adsorption is destabilized in this way even at low oxygen coverage. These findings provide a quantitative framework to underpin the experimentally observed influence of oxygen on the selective oxidation of crotyl alcohol to crotonaldehyde over gold and gold-palladium alloys. © 2014 the Partner Organisations.
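As a minimal illustration of how adsorption energies of this kind are obtained from DFT total energies (the function and all numerical values below are hypothetical, not results from the paper), in Python:

# E_ads = E(cluster + adsorbate) - E(cluster) - E(adsorbate); negative values mean exothermic binding.
def adsorption_energy(e_complex_eV, e_cluster_eV, e_molecule_eV):
    """Adsorption energy in kJ/mol from DFT total energies given in eV."""
    EV_TO_KJ_PER_MOL = 96.485  # 1 eV per particle corresponds to 96.485 kJ per mole
    return (e_complex_eV - e_cluster_eV - e_molecule_eV) * EV_TO_KJ_PER_MOL

# Illustrative totals for the same adsorption site without and with a co-adsorbed O adatom:
bare = adsorption_energy(-1234.75, -1200.10, -33.90)
oxidised = adsorption_energy(-1310.05, -1275.50, -33.90)
print(f"bare: {bare:.0f} kJ/mol, oxidised: {oxidised:.0f} kJ/mol, shift: {oxidised - bare:.0f} kJ/mol")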
Abstract:
In dimensional metrology, often the largest source of uncertainty of measurement is thermal variation. Dimensional measurements are currently scaled linearly, using ambient temperature measurements and coefficients of thermal expansion, to the ideal metrology condition of 20 °C. This scaling is particularly difficult to implement with confidence in large volumes, as the temperature is unlikely to be uniform, resulting in thermal gradients. A number of well-established computational methods are used in the design phase of product development for the prediction of thermal and gravitational effects, and these could be used to a greater extent in metrology. This paper outlines the theory of how physical measurements of dimension and temperature can be combined more comprehensively throughout the product lifecycle, from design through to the manufacturing phase. The Hybrid Metrology concept is also introduced: an approach to metrology that promises to improve product and equipment integrity in future manufacturing environments. The Hybrid Metrology System combines various state-of-the-art physical dimensional and temperature measurement techniques with established computational methods to better predict thermal and gravitational effects.
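The linear scaling referred to above amounts to dividing a measured length by (1 + α·ΔT), where α is the coefficient of thermal expansion and ΔT the deviation from 20 °C. A minimal Python sketch (the function name and example values are illustrative, not from the paper):

def scale_to_20C(length_measured_mm, temperature_C, alpha_per_C):
    """Rescale a dimensional measurement taken at temperature_C to the 20 °C reference."""
    return length_measured_mm / (1.0 + alpha_per_C * (temperature_C - 20.0))

# Example: a nominally 500 mm steel feature measured at 23 °C (alpha for steel ~ 11.5e-6 per °C).
print(scale_to_20C(500.017, 23.0, 11.5e-6))  # ~499.9998 mm after correction

In a large measurement volume the temperature field is not uniform, so a single α·ΔT correction of this kind becomes unreliable; that is precisely the limitation motivating the computational thermal models discussed above.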
Abstract:
Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Determining the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming, so various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations on them. A system has been produced which allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD. The system has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
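The abstract does not give the contents of the generated configuration files, so the following is only a hypothetical skeleton of the kind of NAMD input a molecular dynamics run on such a complex requires; the keyword set (structure/coordinates/parameters/temperature/run) is standard NAMD syntax, but the file names and run settings are placeholders, not the values MHCsim actually emits:

def write_namd_config(path, psf="complex.psf", pdb="complex.pdb",
                      params="par_all27_prot_lipid.prm", steps=50000):
    # Minimal NAMD keyword set: topology/coordinates, force field, temperature, run length.
    lines = [
        f"structure        {psf}",
        f"coordinates      {pdb}",
        "paraTypeCharmm   on",
        f"parameters       {params}",
        "temperature      310",
        "timestep         2.0",
        "cutoff           12.0",
        "outputName       mhc_peptide_md",
        f"run              {steps}",
    ]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

write_namd_config("mhc_peptide.conf")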
Abstract:
The papers resulting from the recent Biochemical Society Focused Meeting 'G-Protein-Coupled Receptors: from Structural Insights to Functional Mechanisms' held in Prato in September 2012 are introduced in the present overview. A number of future goals for GPCR (G-protein-coupled receptor) research are considered, including the need to develop biophysical and computational methods to explore the full range of GPCR conformations and their dynamics, the need to develop methods to take this into account for drug discovery and the importance of relating observations on isolated receptors or receptors expressed in model systems to receptor function in vivo. © 2013 Biochemical Society.
Abstract:
We consider a Cauchy problem for the Laplace equation in a bounded region containing a cut, where the region is formed by removing a sufficiently smooth arc (the cut) from a bounded simply connected domain D. The aim is to reconstruct the solution on the cut from the values of the solution and its normal derivative on the boundary of the domain D. We propose an alternating iterative method which involves solving direct mixed problems for the Laplace operator in the same region. These mixed problems have either a Dirichlet or a Neumann boundary condition imposed on the cut and are solved by a potential approach. Each of these mixed problems is reduced to a system of integral equations of the first kind with logarithmic and hypersingular kernels and at most a square root singularity in the densities at the endpoints of the cut. The full discretization of the direct problems is realized by a trigonometric quadrature method which has super-algebraic convergence. The numerical examples presented illustrate the feasibility of the proposed method.
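In symbols (the notation below is illustrative, not taken from the paper), with D the domain, Γ the cut and ν the outward normal, the Cauchy problem is to find the harmonic function u from the over-specified data on the outer boundary,

\[
\Delta u = 0 \quad \text{in } D \setminus \Gamma, \qquad u = f, \quad \partial_{\nu} u = g \quad \text{on } \partial D,
\]

and to recover u on Γ. Each sweep of the alternating method solves a well-posed mixed problem of the type

\[
\Delta u_{k} = 0 \quad \text{in } D \setminus \Gamma, \qquad \partial_{\nu} u_{k} = g \quad \text{on } \partial D, \qquad u_{k} = \varphi_{k-1} \quad \text{on } \Gamma,
\]

alternating with the corresponding problem that imposes the Dirichlet datum f on ∂D and a Neumann condition on the cut, the trace or normal derivative of u_k on Γ supplying the data for the next iteration.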
Abstract:
The twin arginine translocation (TAT) system ferries folded proteins across the bacterial membrane. Proteins are directed into this system by the TAT signal peptide present at the amino terminus of the precursor protein, which contains the twin arginine residues that give the system its name. There are currently only two computational methods for the prediction of TAT-translocated proteins from sequence. Both methods have limitations that make the creation of a new algorithm for TAT-translocated protein prediction desirable. We have developed TATPred, a new sequence-model method, based on a Naïve-Bayesian network, for the prediction of TAT signal peptides. In this approach, a comprehensive range of models was tested to identify the most reliable and robust predictor. The best model comprised 12 residues: the three residues prior to the twin arginines and the seven residues that follow them. We found a prediction sensitivity of 0.979 and a specificity of 0.942.
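A per-position naive Bayes scorer over such a 12-residue window (the three residues before the twin arginines, the arginine pair itself and the seven residues after) can be sketched in Python as follows; this illustrates the general technique only, not the published TATPred model, and the training windows are placeholders:

from collections import Counter
from math import log

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def position_log_probs(windows, alpha=1.0):
    """Laplace-smoothed log P(residue | position) for each position of the window."""
    tables = []
    for i in range(len(windows[0])):
        counts = Counter(w[i] for w in windows)
        total = sum(counts.values()) + alpha * len(AMINO_ACIDS)
        tables.append({aa: log((counts[aa] + alpha) / total) for aa in AMINO_ACIDS})
    return tables

def log_odds(window, pos_tables, neg_tables):
    """Positive score: the window looks more like a TAT signal peptide than a non-TAT one."""
    return sum(pos_tables[i][aa] - neg_tables[i][aa] for i, aa in enumerate(window))

# Placeholder 12-mer training windows centred on candidate twin-arginine motifs.
tat_windows = ["SRRQFLKGAGAL", "TRRGFLGAAAVA"]
non_tat_windows = ["ALRRASLGAAAV", "KTRRAFAGLLLA"]
pos, neg = position_log_probs(tat_windows), position_log_probs(non_tat_windows)
print(log_odds("SRRNFLKAGAIA", pos, neg))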
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimise this data processing and rendering time. These processing techniques include standard processing methods, which comprise a set of algorithms to process the raw data (interference) obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time. Processing throughput of this system is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
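The "standard processing" chain mentioned above (raw interference spectra to A-scans) typically amounts to background subtraction, spectral windowing, an inverse FFT and log scaling. The NumPy sketch below illustrates that generic pipeline on the CPU with synthetic data; it is not the thesis code, and a GPU version would replace NumPy with a GPU array library such as CuPy:

import numpy as np

def spectra_to_ascans(raw_spectra):
    """raw_spectra: (num_a_scans, num_spectral_samples) array of detector counts."""
    background = raw_spectra.mean(axis=0)              # estimate the DC/reference spectrum
    fringes = raw_spectra - background                  # leave only the interference fringes
    window = np.hanning(raw_spectra.shape[1])           # suppress spectral leakage
    depth_profiles = np.fft.ifft(fringes * window, axis=1)
    half = raw_spectra.shape[1] // 2                     # keep the positive-depth half
    return 20.0 * np.log10(np.abs(depth_profiles[:, :half]) + 1e-12)

# Synthetic B-scan of 512 spectra with 2048 spectral samples each.
raw = np.random.default_rng(0).poisson(1000.0, size=(512, 2048)).astype(float)
print(spectra_to_ascans(raw).shape)  # (512, 1024)

(In a real spectrometer-based system a resampling step from wavelength to linear wavenumber would normally precede the FFT; it is omitted here for brevity.)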
Abstract:
A survey of crystal structures containing hydantoin, dihydrouracil and uracil derivatives in the Cambridge Structural Database revealed four main types of hydrogen bond motifs when derivatives with extra substituents able to interfere with the main motif are excluded. All these molecules contain two hydrogen bond donors and two hydrogen bond acceptors in the sequence NH, C=O, NH, C=O within a 5-membered ring (hydantoin) or a 6-membered ring (dihydrouracil and uracil). In all cases, both ring NH groups act as donors in the main hydrogen bond motif, but there is an excess of hydrogen bond acceptors (two C=O groups, each able to accept twice), and so two possibilities are found: (i) each carbonyl O atom may accept one hydrogen bond, or (ii) one carbonyl O atom may accept two hydrogen bonds while the other does not participate in the hydrogen bonding. We observed different preferences in the type and symmetry of the motifs adopted by the different derivatives, and good agreement is found between the motifs observed experimentally and those predicted using computational methods. We identified chirality, substituent size and the possibility of C-H⋯O interactions as important molecular factors influencing which motif is observed. © 2012 The Royal Society of Chemistry and the Centre National de la Recherche Scientifique.
Abstract:
Two-stage data envelopment analysis (DEA) efficiency models identify the efficient frontier of a two-stage production process. In some two-stage processes, the inputs to the first stage are also used by the second stage; these are known as shared inputs. This paper proposes a new relational linear DEA model for measuring the efficiency score of two-stage processes with shared inputs under the constant returns-to-scale assumption. Two case studies, of the banking industry and of university operations, illustrate the potential applications of the proposed approach.
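The relational two-stage formulation itself is not spelled out in the abstract, so the sketch below shows only the standard single-stage CCR (multiplier-form) DEA efficiency score solved as a linear programme with SciPy; the paper's model extends this kind of LP by linking the weights of the two stages and splitting the shared inputs between them. Data and names are illustrative:

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, eps=1e-6):
    """X: (m, n) inputs, Y: (s, n) outputs for n DMUs; returns the CCR efficiency of DMU o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-Y[:, o], np.zeros(m)])            # maximise u . y_o
    A_ub = np.hstack([Y.T, -X.T])                            # u . y_j - v . x_j <= 0 for every DMU j
    A_eq = np.concatenate([np.zeros(s), X[:, o]])[None, :]   # v . x_o = 1 (normalisation)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m), method="highs")
    return -res.fun

# Toy data: 2 inputs, 1 output, 4 DMUs (illustrative numbers only).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[10.0, 12.0, 9.0, 15.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])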
Abstract:
Background: DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in the field of genome annotation. Several computational methods have been proposed in the literature for DNA-binding protein identification; however, most of them do not provide a valuable knowledge base for our understanding of DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, the PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by the distance transformation scheme. Lastly, the resulting uniform numeric representations are fed into an SVM classifier for prediction, so that whether or not a sequence binds DNA can be determined. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the present model achieved an ACC of 79.96%, an MCC of 0.622 and an AUC of 86.50%. This performance is considerably better than that of most existing state-of-the-art predictive methods. When tested on a recently constructed independent dataset, PDB186, SVM-PSSM-DT also achieved the best performance, with an ACC of 80.00%, an MCC of 0.647 and an AUC of 87.40%, and outperformed some existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and that SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins. A user-friendly web server for SVM-PSSM-DT was constructed and is freely accessible at http://bioinformatics.hitsz.edu.cn/PSSM-DT/.
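The distance transformation itself is not described in enough detail in the abstract to reproduce, so the sketch below covers only the downstream step: fixed-length feature vectors (standing in for transformed PSSMs) fed to an SVM and scored with the same metrics, ACC, MCC and AUC, under jackknife (leave-one-out) validation using scikit-learn. Features and labels here are synthetic placeholders:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, matthews_corrcoef, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))       # placeholder feature vectors (e.g. transformed PSSMs)
y = rng.integers(0, 2, size=60)     # 1 = DNA-binding, 0 = non-binding (synthetic labels)

clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
preds = (scores >= 0.5).astype(int)

print("ACC", accuracy_score(y, preds))
print("MCC", matthews_corrcoef(y, preds))
print("AUC", roc_auc_score(y, scores))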
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for either signal-dependent noise (AAS, BM3Dc, HHM, TLS) or independent additive noise (AV, BM3D, K-SVD) was presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective for denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
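None of the compared algorithms is reproduced here; the Python sketch below only illustrates the evaluation loop described above: simulate signal-dependent Poisson quantum noise on a test image, apply a simple smoothing filter of the kind used in real-time processing, and score the restoration with PSNR and SSIM using scikit-image. The photon level and filter settings are arbitrary placeholders:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())              # stand-in for a fluoroscopic frame
photons = 50.0                                   # lower photon count -> stronger quantum noise
noisy = np.random.default_rng(0).poisson(clean * photons) / photons
denoised = gaussian_filter(noisy, sigma=1.0)     # simple smoothing, akin to an average filter

for name, img in [("noisy", noisy), ("denoised", denoised)]:
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    ssim = structural_similarity(clean, img, data_range=1.0)
    print(f"{name}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")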