940 results for Computational complexity


Relevance: 20.00%

Publisher:

Abstract:

There is intense activity in the area of theoretical chemistry of gold. It is now possible to predict new molecular species and, more recently, solids by combining relativistic methodology with isoelectronic thinking. In this thesis we predict a series of solid sheet-type crystals for the Group-11 cyanides, MCN (M=Cu, Ag, Au), and the Group-2 and Group-12 carbides, MC_2 (M=Be-Ba, Zn-Hg). The idea of sheets is then extended to nanostrips, which can be bent into nanorings. The bending energies and deformation frequencies can be systematized by treating these molecules as elastic bodies. In these species, Au atoms act as an 'intermolecular glue'. Further suggested molecular species are the new uncongested aurocarbons and the neutral Au_nHg_m clusters. Many of the suggested species are expected to be stabilized by aurophilic interactions. We also estimate the MP2 basis-set limit of the aurophilicity for the model compounds [ClAuPH_3]_2 and [P(AuPH_3)_4]^+. Besides investigating the size of the applied basis set, our research confirms that the 19-VE TZVP+2f level, used a decade ago, already produced 74% of the present aurophilic attraction energy for the [ClAuPH_3]_2 dimer. Likewise, we verify the preferred C_4v structure of the [P(AuPH_3)_4]^+ cation at the MP2 level. We also perform the first calculations on model aurophilic systems using the SCS-MP2 method and compare the results to high-accuracy CCSD(T) ones. The recently obtained high-resolution microwave spectra of the MCN molecules (M=Cu, Ag, Au) provide an excellent testing ground for quantum chemistry. MP2 or CCSD(T) calculations, correlating all 19 valence electrons of Au and including BSSE and SO corrections, are able to give bond lengths to 0.6 pm or better. Our calculated vibrational frequencies are expected to be better than the currently available experimental estimates. Qualitative evidence for multiple Au-C bonding in triatomic AuCN is also found.
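
BSSE corrections of the kind mentioned above are commonly obtained with the Boys-Bernardi counterpoise scheme; whether that exact scheme was used here is an assumption of this sketch. For a dimer AB, both monomers are evaluated in the full dimer basis (ghost functions on the partner site):

\[
E^{\mathrm{CP}}_{\mathrm{int}} = E_{AB}[\chi_A \cup \chi_B] - E_{A}[\chi_A \cup \chi_B] - E_{B}[\chi_A \cup \chi_B],
\]

so that the spurious stabilization each monomer gains from borrowing the partner's basis functions cancels out of the interaction energy.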

Relevance: 20.00%

Publisher:

Abstract:

The chemical and physical properties of bimetallic clusters have attracted considerable attention due to the potential technological applications of mixed-metal systems. Clusters are of fundamental interest because they form the link between atomic, surface, and bulk properties, and can hence reveal more about the metal-metal bond in small systems. The studies in this thesis focus on two kinds of bimetallic clusters: clusters containing unusually shaped all-metal four-membered rings, and a series of sodium auride clusters. As described in most general organic chemistry textbooks, a group of compounds is classified as aromatic because of, among other things, their remarkable stability and their particular geometric and energetic properties. The notion of aromaticity is essentially qualitative. More recently, connections have been made between aromaticity and energetic and magnetic properties, and discussions of the aromatic nature of molecular rings are no longer limited to organic compounds obeying Hückel's rule. In our research, we mainly applied the GIMIC method to several bimetallic clusters at the CCSD level and compared the results with those obtained using chemical-shift-based methods. Magnetically induced ring currents can be computed readily with the GIMIC method, thereby clarifying the aromatic nature of each system. We also performed extensive quantum chemical calculations to explore the character of the anionic sodium auride clusters and the corresponding neutral clusters; molecules containing gold are particularly interesting to investigate because of gold's distinctive physical and chemical properties. Like small gold clusters, the sodium auride clusters tend to form planar structures. Upon addition of a negative charge, the gold atom in the anionic clusters prefers to carry the charge and orients itself away from the other gold atoms. As a result, the energetically lowest isomer of an anionic cluster differs from that of the corresponding neutral cluster. Most importantly, we presented a comprehensive ab initio strategy for computationally reproducing the experimental photoelectron spectra.
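
One routine step in comparing such calculations with measured photoelectron spectra is to broaden the computed stick spectrum of vertical detachment energies into a smooth curve. Below is a minimal sketch of that post-processing step in Python; the function name, the placeholder energies, and the Gaussian line shape and width are illustrative assumptions, not values or choices taken from the thesis.

```python
import numpy as np

def broadened_pes(vdes, intensities=None, fwhm=0.15, e_grid=None):
    """Convolve stick vertical detachment energies (in eV) with
    Gaussians to mimic an experimental photoelectron spectrum."""
    vdes = np.asarray(vdes, dtype=float)
    if intensities is None:
        intensities = np.ones_like(vdes)
    if e_grid is None:
        e_grid = np.linspace(vdes.min() - 1.0, vdes.max() + 1.0, 2000)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    spectrum = np.zeros_like(e_grid)
    for e0, h in zip(vdes, intensities):
        spectrum += h * np.exp(-((e_grid - e0) ** 2) / (2.0 * sigma**2))
    return e_grid, spectrum

# Hypothetical VDEs (eV) for a small anionic cluster:
energies, spectrum = broadened_pes([3.2, 3.9, 4.4])
```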

Relevance: 20.00%

Publisher:

Abstract:

Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that directly use the likelihood function are infeasible. In these situations, the benefits of likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a good alternative when the model of interest is complex. One such method is the synthetic likelihood (SL), which assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison with a competitor known as approximate Bayesian computation (ABC), together with its sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this paper.
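
To make the SL construction concrete, here is a minimal sketch assuming a user-supplied simulate_summary(theta, rng) function and a log_prior; it plugs the standard SL estimate into a Metropolis-Hastings loop in the pseudo-marginal style discussed above. It is an illustration of the idea, not the paper's implementation, and it uses the plain normal density rather than the unbiased estimator the paper proposes.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate_summary, n_sim=200, rng=None):
    """Standard SL estimate: fit a multivariate normal to n_sim
    simulated summary statistics, then evaluate s_obs under it."""
    rng = np.random.default_rng(rng)
    S = np.array([simulate_summary(theta, rng) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=cov)

def bsl_mcmc(s_obs, simulate_summary, log_prior, theta0, prop_sd,
             n_iter=5000, n_sim=200, seed=None):
    """Metropolis-Hastings with the SL standing in for the
    intractable likelihood (pseudo-marginal flavour)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    ll = synthetic_loglik(theta, s_obs, simulate_summary, n_sim, rng)
    chain = []
    for _ in range(n_iter):
        prop = theta + prop_sd * rng.standard_normal(theta.shape)
        ll_prop = synthetic_loglik(prop, s_obs, simulate_summary, n_sim, rng)
        if np.log(rng.random()) < ll_prop + log_prior(prop) - ll - log_prior(theta):
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```

The number of simulations per iteration, n_sim, is one of the tuning parameters whose sensitivity the paper examines: too few simulations make the normal fit noisy, while many simulations drive up the cost per MCMC step.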

Relevance: 20.00%

Publisher:

Abstract:

For the consumer, flavor is arguably the most important aspect of a good coffee. Coffee flavor is extremely complex and arises from numerous chemical, biological, and physical influences of cultivar, coffee cherry maturity, geographical growing location, production, processing, roasting, and cup preparation. Not surprisingly, there is a large volume of published research detailing the volatile and non-volatile compounds in coffee that are likely to play a role in coffee flavor. Further, much has been published on the sensory properties of coffee. Nevertheless, the link between flavor components and the sensory properties expressed in the complex matrix of coffee is yet to be fully understood. This paper provides an overview of the chemical components that are thought to be involved in the flavor and sensory quality of Arabica coffee.

Relevance: 20.00%

Publisher:

Abstract:

This paper presents an effective feature representation method in the context of activity recognition. Efficient and effective feature representation plays a crucial role not only in activity recognition but also in a wide range of applications such as motion analysis, tracking, and 3D scene understanding. In the context of activity recognition, local features are increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational requirements, their performance is still limited for real-world applications due to a lack of contextual information and models not being tailored to specific activities. We propose a new activity representation framework to address the shortcomings of the popular but simple bag-of-words approach. In our framework, a multiple-instance SVM (mi-SVM) is first used to identify positive features for each action category, and the k-means algorithm is used to generate a codebook. Locality-constrained linear coding is then used to encode the features against the generated codebook, followed by spatio-temporal pyramid pooling to convey the spatio-temporal statistics. Finally, an SVM is used to classify the videos. Experiments carried out on two popular datasets of varying complexity demonstrate significant performance improvement over the baseline bag-of-features method.
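
For readers unfamiliar with codebook pipelines, the following is a schematic sketch of the codebook-then-classify stages in Python with scikit-learn. It substitutes plain hard vector quantization and a global histogram for the locality-constrained linear coding and spatio-temporal pyramid pooling used in the paper, and the codebook size and data layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_codebook(local_features, k=1000, seed=0):
    """Cluster local descriptors (n x d array) into a k-word codebook."""
    return KMeans(n_clusters=k, random_state=seed).fit(local_features)

def encode_video(features, codebook):
    """Normalized histogram of codeword assignments for one video's
    descriptors. (Hard vector quantization here; the paper uses
    locality-constrained linear coding and pyramid pooling instead.)"""
    words = codebook.predict(features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage with hypothetical data:
# train_feats: list of (n_i x d) descriptor arrays, train_labels: class ids
# codebook = build_codebook(np.vstack(train_feats))
# X = np.array([encode_video(f, codebook) for f in train_feats])
# clf = LinearSVC().fit(X, train_labels)
```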

Relevance: 20.00%

Publisher:

Abstract:

Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals, and in a number of cases they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variation between individuals. They are also typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNVs. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. The simulator also models multiple crossovers, using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidates for previously unknown inversions and deletions, and correctly detecting known rearrangements of both kinds.
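
As background for the backward-in-time idea, here is a minimal sketch of the basic Kingman coalescent for a sample without recombination; the thesis's simulator additionally handles recombination, the Counting model of chiasma interference, and inversion polymorphism, none of which appear in this toy version.

```python
import numpy as np

def kingman_coalescent_times(n, seed=None):
    """Waiting times between coalescence events for a sample of n
    lineages, in standard coalescent time units: while k lineages
    remain, the next pair merges after an Exp(k*(k-1)/2) wait."""
    rng = np.random.default_rng(seed)
    waits = []
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2.0
        waits.append(rng.exponential(1.0 / rate))
    return np.array(waits)

# Total tree height; its expectation is 2 * (1 - 1/n):
height = kingman_coalescent_times(10).sum()
```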

Relevance: 20.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. These problems belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem: HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; it can therefore be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains when evaluating the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related, or merely due to chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability that two sequences drawn from the null model have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
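
For orientation, the classical estimate that such significance frameworks refine is the ungapped Karlin-Altschul formula, under which the best local alignment score S of two random sequences of lengths m and n follows an extreme value (Gumbel) law:

\[
p \;=\; \Pr(\text{score} \ge S) \;=\; 1 - \exp\!\left(-K m n\, e^{-\lambda S}\right),
\]

where K and \lambda are constants determined by the scoring scheme and the null model's letter frequencies. For gapped alignments, K and \lambda have traditionally been obtained by exactly the kind of sampling and curve fitting that the framework presented here avoids.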

Relevance: 20.00%

Publisher:

Abstract:

This thesis presents methods for locating and analyzing cis-regulatory DNA elements involved with the regulation of gene expression in multicellular organisms. The regulation of gene expression is carried out by the combined effort of several transcription factor proteins collectively binding the DNA at the cis-regulatory elements. Only sparse knowledge of the 'genetic code' of these elements exists today. An automatic tool for the discovery of putative cis-regulatory elements could help their experimental analysis, which would in turn yield a more detailed view of cis-regulatory element structure and function. We have developed a computational model for the evolutionary conservation of cis-regulatory elements. The elements are modeled as evolutionarily conserved clusters of sequence-specific transcription factor binding sites. We give an efficient dynamic programming algorithm that locates the putative cis-regulatory elements and scores them according to the conservation model. A notable proportion of the high-scoring DNA sequences show transcriptional enhancer activity in transgenic mouse embryos. The conservation model includes four parameters whose optimal values are estimated with simulated annealing. With good parameter values, the model discriminates well between DNA sequences with evolutionarily conserved cis-regulatory elements and DNA sequences that have evolved neutrally. On further inquiry, the set of highest-scoring putative cis-regulatory elements was found to be sensitive to small variations in the parameter values. The statistical significance of the putative cis-regulatory elements is estimated with the Two Component Extreme Value Distribution; its p-values grade the conservation of the cis-regulatory elements above the neutral expectation, and the parameters of the distribution are estimated by simulating neutral DNA evolution. The conservation of transcription factor binding sites can be used in upstream analysis of regulatory interactions. This approach may provide mechanistic insight into transcription-level data from, e.g., microarray experiments. Here we give a method to predict shared transcriptional regulators for a set of co-expressed genes. The EEL (Enhancer Element Locator) software implements the method for locating putative cis-regulatory elements and facilitates both interactive use and distributed batch processing. We have used it to analyze the non-coding regions around all human genes with respect to the orthologous regions in various other species, including mouse. The data from these genome-wide analyses are stored in a relational database, which is used in the publicly available web services for upstream analysis and visualization of the putative cis-regulatory elements in the human genome.
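
Since the four model parameters are fitted by simulated annealing, a generic sketch of that optimizer may help; the objective and proposal functions below are placeholders, and the schedule constants are illustrative choices, not the settings used in the thesis.

```python
import numpy as np

def simulated_annealing(objective, x0, step, n_iter=20000,
                        t0=1.0, cooling=0.9995, seed=None):
    """Maximize objective(x) with a geometric cooling schedule.
    Worse moves are accepted with probability exp(delta / T), which
    lets the search escape local optima early on."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    t = t0
    for _ in range(n_iter):
        cand = step(x, rng)
        fc = objective(cand)
        if fc >= fx or rng.random() < np.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling
    return best_x, best_f

# Example move proposal for a four-parameter model (placeholder scale):
# step = lambda x, rng: x + 0.05 * rng.standard_normal(4)
```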

Relevance: 20.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces; speed depends on the hardware and the computational effort required to use them. When attempts to increase accuracy lead to increases in complexity and effort, it becomes necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost; it becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is twofold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the separation of the essential from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
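
As one elementary instance of delegation with confidence modeling, consider a two-node 'tree' in which a cheap classifier answers only when confident and otherwise hands the input to an expensive one; the confidence threshold is then the knob controlling the accuracy-versus-effort trade-off on a per-input basis. The sketch below assumes scikit-learn-style classifiers and illustrates the general idea, not the framework developed in the thesis.

```python
import numpy as np

def cascade_predict(x, cheap, expensive, threshold=0.9):
    """Answer with the fast classifier when its top class probability
    clears the threshold; otherwise delegate to the slow classifier.
    Both are assumed to follow the scikit-learn predict/predict_proba
    API; raising the threshold buys accuracy with more effort."""
    x = np.asarray(x).reshape(1, -1)
    proba = cheap.predict_proba(x)[0]
    if proba.max() >= threshold:
        return int(np.argmax(proba))
    return int(expensive.predict(x)[0])
```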