921 results for Opportunity discovery and exploitation


Relevance:

100.00%

Publisher:

Abstract:

Genetic Algorithms are efficient and robust search methods that are being employed in a plethora of applications with extremely large search spaces. The directed search mechanism employed in Genetic Algorithms performs a simultaneous and balanced exploration of new regions in the search space and exploitation of already discovered regions. This paper introduces the notion of fitness moments for analyzing the working of Genetic Algorithms (GAs). We show that the fitness moments in any generation may be predicted from those of the initial population. Since knowledge of the fitness moments allows us to estimate the fitness distribution of strings, this approach provides a method of characterizing the dynamics of GAs. In particular, the average fitness and fitness variance of the population in any generation may be predicted. We introduce the technique of fitness-based disruption of solutions for improving the performance of GAs. Using fitness moments, we demonstrate the advantages of using fitness-based disruption. We also present experimental results comparing the performance of a standard GA and GAs (CDGA and AGA) that incorporate the principle of fitness-based disruption. The experimental evidence clearly demonstrates the power of fitness-based disruption.
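For illustration, a minimal sketch of the moment bookkeeping described here, assuming a plain bit-string GA with a onemax fitness and fitness-proportional selection; the population setup and fitness function are invented for the example and are not the CDGA/AGA variants studied in the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def onemax(pop):
        """Fitness = number of ones in each bit string (illustrative only)."""
        return pop.sum(axis=1).astype(float)

    # Random initial population of 200 strings of length 50.
    pop = rng.integers(0, 2, size=(200, 50))
    f = onemax(pop)

    # First two fitness moments of the current generation.
    m1 = f.mean()          # average fitness
    m2 = (f ** 2).mean()   # second moment; variance = m2 - m1**2

    # Under fitness-proportional selection, the expected average fitness of the
    # selected parents is predictable from these moments alone: E[f_sel] = m2/m1.
    predicted_mean_after_selection = m2 / m1

    # Empirical check: sample parents with probability proportional to fitness.
    parents = rng.choice(len(pop), size=len(pop), p=f / f.sum())
    print(m1, f.var(), predicted_mean_after_selection, f[parents].mean())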

Relevance:

100.00%

Publisher:

Abstract:

A decapeptide Boc-L-Ala-(DeltaPhe)(4)-L-Ala-(DeltaPhe)(3)-Gly-OMe (Peptide I) was synthesized to study the preferred screw sense of consecutive alpha,beta-dehydrophenylalanine (DeltaPhe) residues. Crystallographic and CD studies suggest that, despite the presence of two L-Ala residues in the sequence, the decapeptide does not have a preferred screw sense. The peptide crystallizes with two conformers per asymmetric unit, one a slightly distorted right-handed 3(10)-helix (X) and the other a left-handed 3(10)-helix (Y), with X and Y antiparallel to each other. An unanticipated and interesting observation is that, in the solid state, the two shape-complementary molecules self-assemble and interact through an extensive network of C-H...O hydrogen bonds and pi-pi interactions directed laterally to the helix axis with remarkable regularity. Here, we present an atomic-resolution picture of the weak-interaction-mediated mutual recognition of two secondary structural elements, its possible implication for understanding the specific folding of the hydrophobic core of globular proteins, and its exploitation in future work on de novo design.

Relevance:

100.00%

Publisher:

Abstract:

Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has some important theoretical properties connecting frequent episode discovery with HMM learning. This paper presents some new algorithms for frequent episode discovery under this non-overlapped occurrences-based frequency definition. The algorithms presented here are better (by a factor of N, where N denotes the size of episodes being discovered) in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
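As a toy illustration of the frequency definition (not the paper's multi-episode, level-wise algorithms), a greedy left-to-right scan suffices to count non-overlapped occurrences of a single serial episode; the episode and event sequence below are made up:

    def count_non_overlapped(sequence, episode):
        """Count non-overlapped occurrences of a serial episode.

        A serial episode such as ('A', 'B', 'C') occurs when its event types
        appear in order; occurrences are non-overlapped if no event of one
        occurrence lies between events of another.  A single pass that tracks
        how much of the episode has been seen, and resets on completion,
        counts such occurrences.
        """
        count, pos = 0, 0
        for event in sequence:
            if event == episode[pos]:
                pos += 1
                if pos == len(episode):   # one full occurrence completed
                    count += 1
                    pos = 0
        return count

    # Example: two non-overlapped occurrences of A -> B -> C.
    print(count_non_overlapped("AXBBCAYBZC", ("A", "B", "C")))  # -> 2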

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets when compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.
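A rough sketch of the construction under stated assumptions: synthetic two-class data, scikit-learn's Birch for clustering, a spherical per-cluster standard deviation, a Chebyshev-Cantelli factor kappa = sqrt(eta/(1-eta)), and cvxpy to solve the resulting cone program. The constraint form follows the usual cluster-based large-margin treatments and may differ in detail from the paper's exact formulation:

    import numpy as np
    import cvxpy as cp
    from sklearn.cluster import Birch

    rng = np.random.default_rng(0)
    # Synthetic two-class data (illustrative only).
    X_pos = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))
    X_neg = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))

    def cluster_moments(X, n_clusters=5):
        """Per-cluster mean and a spherical std estimated from BIRCH clusters."""
        labels = Birch(n_clusters=n_clusters).fit_predict(X)
        stats = []
        for c in np.unique(labels):
            pts = X[labels == c]
            stats.append((pts.mean(axis=0), pts.std() + 1e-6))
        return stats

    eta = 0.9                               # required per-cluster probability
    kappa = np.sqrt(eta / (1.0 - eta))      # Chebyshev-Cantelli factor

    w = cp.Variable(2)
    b = cp.Variable()
    constraints = []
    for y, X in ((+1, X_pos), (-1, X_neg)):
        for mu, sigma in cluster_moments(X):
            # One second-order cone constraint per cluster (not per point):
            # the cluster is classified correctly with probability >= eta.
            constraints.append(y * (mu @ w + b) - 1 >= kappa * sigma * cp.norm(w, 2))

    prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)
    prob.solve()
    print(w.value, b.value)

Because there is one constraint per cluster, the size of the program grows with the number of clusters rather than with the number of training points, which is the scaling argument made above.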

Relevance:

100.00%

Publisher:

Abstract:

Users can rarely reveal their information need in full detail to a search engine within one or two words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking has attracted much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes so that the effective conductance from the center to the rest of the graph has maximum entropy. We give a cogent semantic justification for turning PageRank thus on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms such as subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
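One simplified way to read the center-selection criterion, sketched with numpy on a tiny unweighted graph; the example graph, the Laplacian-pseudoinverse route to effective resistances, and the single-center choice are illustrative assumptions rather than the paper's learned-resistance model:

    import numpy as np

    # Small undirected example graph as an adjacency matrix (illustrative).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    Lp = np.linalg.pinv(L)                  # pseudoinverse yields effective resistances

    def conductance_entropy(center):
        """Entropy of normalized effective conductances from `center` to all other nodes."""
        others = [j for j in range(len(A)) if j != center]
        # Effective resistance between i and j: Lp[i,i] + Lp[j,j] - 2*Lp[i,j].
        r = np.array([Lp[center, center] + Lp[j, j] - 2 * Lp[center, j] for j in others])
        g = 1.0 / r                         # effective conductances
        p = g / g.sum()
        return -(p * np.log(p)).sum()

    # Pick the node whose conductance profile has maximum entropy,
    # i.e. the node that "reaches" the rest of the graph most evenly.
    best = max(range(len(A)), key=conductance_entropy)
    print(best, conductance_entropy(best))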

Relevance:

100.00%

Publisher:

Abstract:

Frequency hopping communications, as used in the military, present significant opportunities for spectrum reuse via cognitive radio technology. We propose a MAC that incorporates hop-instant identification and supports network discovery and formation, QoS scheduling, and secondary communications. The spectrum sensing algorithm is optimized to deal with the problem of spectral leakage. The algorithms are implemented on an SDR-platform-based test bed, and measurement results are presented.
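As a generic illustration of leakage-aware energy detection (not the paper's sensing algorithm), tapering the snapshot with a Hann window before the FFT suppresses the sidelobes that would otherwise smear a strong hop into neighbouring channels; the sample rate, channelization and threshold below are made-up numbers:

    import numpy as np

    fs = 1.0e6                  # sample rate (assumed)
    n = 4096                    # FFT size
    rng = np.random.default_rng(1)

    # Simulated snapshot: noise plus one strong hop at 200 kHz (illustrative).
    t = np.arange(n) / fs
    x = rng.normal(scale=0.1, size=n) + np.cos(2 * np.pi * 200e3 * t)

    # A Hann window reduces spectral leakage from the strong narrowband hop.
    window = np.hanning(n)
    power = np.abs(np.fft.rfft(x * window)) ** 2

    # Group FFT bins into coarse channels and flag those above a noise-based threshold.
    n_channels = 16
    channel_power = power[: (len(power) // n_channels) * n_channels]
    channel_power = channel_power.reshape(n_channels, -1).sum(axis=1)
    threshold = 5.0 * np.median(channel_power)
    occupied = np.nonzero(channel_power > threshold)[0]
    print("occupied channels:", occupied)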

Relevance:

100.00%

Publisher:

Abstract:

Knowledge of protein-ligand interactions is essential to understanding several biological processes and important for applications ranging from understanding protein function to drug discovery and protein engineering. Here, we describe an algorithm for the comparison of three-dimensional ligand-binding sites in protein structures. A previously described algorithm, PocketMatch (version 1.0), is optimised, expanded, and MPI-enabled for parallel execution. PocketMatch (version 2.0) rapidly quantifies binding-site similarity based on structural descriptors such as residue nature and interatomic distances. Atomic-scale alignments may also be obtained from the amino acid residue pairings generated. It allows an end-user to compute database-wide, all-to-all comparisons in a matter of hours. The use of our algorithm on a sample dataset, a performance analysis, and annotated source code are also included.
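An intentionally simplified illustration of distance-based binding-site comparison in the spirit of the description above (sorted interatomic distance lists matched within a tolerance); the coordinates and scoring are invented and are not PocketMatch's actual descriptors or scoring scheme:

    import numpy as np

    def distance_list(coords):
        """Sorted list of all pairwise distances between site atoms."""
        coords = np.asarray(coords, dtype=float)
        diffs = coords[:, None, :] - coords[None, :, :]
        d = np.sqrt((diffs ** 2).sum(-1))
        iu = np.triu_indices(len(coords), k=1)
        return np.sort(d[iu])

    def site_similarity(site_a, site_b, tol=0.5):
        """Fraction of distances matched one-to-one within `tol` angstrom."""
        da, db = distance_list(site_a), distance_list(site_b)
        i = j = matched = 0
        while i < len(da) and j < len(db):
            if abs(da[i] - db[j]) <= tol:
                matched += 1
                i += 1
                j += 1
            elif da[i] < db[j]:
                i += 1
            else:
                j += 1
        return matched / max(len(da), len(db))

    # Toy binding sites: a small pocket and a slightly perturbed copy of it.
    pocket = [(0, 0, 0), (3.8, 0, 0), (1.9, 3.3, 0), (1.9, 1.1, 3.1)]
    perturbed = [(0.1, 0, 0), (3.9, 0.2, 0), (1.8, 3.2, 0.1), (2.0, 1.0, 3.0)]
    print(site_similarity(pocket, perturbed))   # close to 1.0 for similar sites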

Relevance:

100.00%

Publisher:

Abstract:

Telomerase is an attractive drug target for developing new-generation drugs against cancer. A telomere caps the chromosomal termini and protects them from double-stranded DNA degradation. A short telomere promotes genomic instability, such as end-to-end fusion, and regulates the over-expression of the telomere-repairing enzyme, telomerase. Telomerase maintains telomere length, which may lead to genetically abnormal situations and ultimately to cancer. Thus, the design and synthesis of efficient telomerase inhibitors is a viable strategy toward anticancer drug development. Accordingly, small-molecule-induced stabilization of the G-quadruplex structure formed by human telomeric DNA is an area of active contemporary research. Several such compounds efficiently stabilize the G-quadruplex forms of nucleic acids, which often leads to telomerase inhibition. This Feature article presents the discovery and development of telomere structure, function and evolution in telomere-targeted anticancer drug design, incorporates the recent advances in this area, and discusses the advantages and disadvantages of the methods and prospects for the future.

Relevance:

100.00%

Publisher:

Abstract:

Drug repurposing to explore target space has been gaining pace over the past decade with the upsurge in the use of systematic approaches for computational drug discovery. Such a cost- and time-saving approach gains immense importance for pathogens of special interest, such as Mycobacterium tuberculosis H37Rv. We report a comprehensive approach to repurposing drugs, based on the exploration of evolutionary relationships inferred from comparative sequence and structural analyses between the targets of FDA-approved drugs and the proteins of M. tuberculosis. This approach has facilitated the identification of several polypharmacological drugs that could potentially target unexploited M. tuberculosis proteins. A total of 130 FDA-approved drugs, originally intended against other diseases, could be repurposed against 78 potential targets in M. tuberculosis. Additionally, we have made an attempt to augment the chemical space by recognizing compounds structurally similar to FDA-approved drugs. For three attractive cases, we have investigated the probable binding modes of the drugs in their corresponding M. tuberculosis targets by means of structural modelling. Such prospective targets and small molecules could be prioritized for experimental endeavours and could significantly influence drug-discovery and drug-development programmes for tuberculosis.
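A bare-bones sketch of the drug-to-target mapping step, using a crude 3-mer Jaccard similarity as a stand-in for proper sequence and structural comparison; the drug names, sequences and threshold are placeholders invented for illustration:

    def kmer_set(seq, k=3):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def similarity(seq_a, seq_b, k=3):
        """Jaccard similarity of k-mer sets -- a toy proxy for sequence comparison."""
        a, b = kmer_set(seq_a, k), kmer_set(seq_b, k)
        return len(a & b) / len(a | b)

    # Placeholder data: FDA drugs with their known target sequences, and a few
    # M. tuberculosis proteins (names and sequences are not real).
    drug_targets = {
        "drug_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
        "drug_B": "MSLNFLDFEQPIAELEAKIDSLTAVSRQDEKLD",
    }
    mtb_proteins = {
        "Rv_example_1": "MKTAYIAKQRQISFVKSHFSRQLDERLGLIEVQ",
        "Rv_example_2": "MTEQQWNFAGIEAAASAIQGNVTSIHSLLDEGK",
    }

    threshold = 0.5
    for drug, target_seq in drug_targets.items():
        for protein, seq in mtb_proteins.items():
            s = similarity(target_seq, seq)
            if s >= threshold:
                print(f"{drug} -> candidate repurposing target {protein} (score {s:.2f})")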

Relevance:

100.00%

Publisher:

Abstract:

This article explores aspects of everyday life in the Jewish colony of Elephantine (5th century BCE) on the southeastern frontier of Egypt. Focusing on the history of its discovery and publication, it considers the origins of the colony and the status of its members, as well as the religious and legal practices of the Jewish colonists as reflected in their documents.

Relevance:

100.00%

Publisher:

Abstract:

One of the main concerns in the market microstructure literature has been the estimation of the unobservable components of the bid-ask spread from the data series provided by financial markets, with the adverse-selection component attracting perhaps the greatest interest because of the implications of its existence. This has driven the development of numerous empirical models that, drawing on the statistical properties of price series, provide such estimates. The greater availability of market data has enabled the development in recent years of models based on more sophisticated statistical techniques, such as the generalized method of moments or the VAR methodology, whose starting point is the dynamics of price formation and, specifically, how the private information carried by trades is incorporated into newly quoted prices. The aim of this paper is to review this latter group of studies, that is, models for estimating the spread components based on price-formation dynamics which, in addition to allowing the adverse-selection component to be estimated from time series, provide a fundamental tool for analysing how information is incorporated into quoted prices in different markets.
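To make the idea concrete, a stylized trade-indicator decomposition (in the spirit of models such as Glosten-Harris) fitted by ordinary least squares on simulated trades; the parameter values and noise level are invented, and real applications in this literature use GMM or VAR estimation on actual quote and trade series:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 50_000

    # Trade direction indicator: +1 buyer-initiated, -1 seller-initiated.
    q = rng.choice([-1.0, 1.0], size=T)

    # Simulated price changes: dP_t = c*(q_t - q_{t-1}) + z*q_t + noise, where c
    # is the transitory (order-processing) component and z the adverse-selection
    # component (true values assumed only for the simulation).
    c_true, z_true = 0.03, 0.02
    dq = np.diff(q, prepend=q[0])
    dp = c_true * dq + z_true * q + rng.normal(scale=0.01, size=T)

    # OLS estimate of (c, z) from observed price changes and trade signs.
    X = np.column_stack([dq, q])
    coef, *_ = np.linalg.lstsq(X, dp, rcond=None)
    print("estimated transitory component:", coef[0])
    print("estimated adverse-selection component:", coef[1])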

Relevance:

100.00%

Publisher:

Abstract:

The mortality of the four major cichlid fishes of Umuoseriche Lake is the subject of this paper. Mortality, as estimated by five techniques, varies amongst the cichlid fishes, viz. Tilapia cabrae, Tilapia mariae, Tilapia zilli and Chromidotilapia guntheri. The highest mortality rate was recorded for T. mariae, where the total mortality (Z) was 2.06 and natural mortality (M) was 1.8949. This species was also the most highly exploited species of fish, with an exploitation ratio of 0.566 (56.6%) and an exploitation rate of 0.494. The least exploited cichlid fish was C. guntheri, where an exploitation ratio of 0.432 (43.2%) and an exploitation rate of 0.2225 were recorded. In C. guntheri, total mortality was 0.726 and natural mortality was 0.4131. In T. zilli, total mortality was 1.0547, while the exploitation ratio was 0.3674 (36.74%) and the exploitation rate was 0.2394. In T. cabrae, total mortality was 1.8662; the exploitation ratio was 0.4786, with an exploitation rate of 0.4045. (7-page document)
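Assuming the standard fisheries definitions, F = Z - M, exploitation ratio E = F/Z and exploitation rate U = F(1 - exp(-Z))/Z, the figures reported for C. guntheri can be reproduced as a quick sanity check:

    import math

    # Total and natural mortality reported for Chromidotilapia guntheri.
    Z = 0.726
    M = 0.4131

    F = Z - M                              # fishing mortality
    E = F / Z                              # exploitation ratio
    U = F * (1 - math.exp(-Z)) / Z         # exploitation rate

    print(f"F = {F:.4f}, E = {E:.3f}, U = {U:.4f}")
    # F = 0.3129, E = 0.431, U = 0.2225 -- consistent, to rounding, with the
    # reported exploitation ratio of 0.432 and exploitation rate of 0.2225.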

Relevance:

100.00%

Publisher:

Abstract:

Proceedings of the XII Scientific Meeting of the Fundación Española de Historia Moderna, held at the Universidad de León, 19-21 June 2012.

Relevance:

100.00%

Publisher:

Abstract:

This study was undertaken by UKOLN on behalf of the Joint Information Systems Committee (JISC) in the period April to September 2008. Application profiles are metadata schemata which consist of data elements drawn from one or more namespaces, optimized for a particular local application. They offer a way for particular communities to base the interoperability specifications they create and use for their digital material on established open standards. This offers the potential for digital materials to be accessed, used and curated effectively both within and beyond the communities in which they were created. The JISC recognized the need to undertake a scoping study to investigate metadata application profile requirements for scientific data in relation to digital repositories, specifically concerning descriptive metadata to support resource discovery and other functions such as preservation. This followed on from the development of the Scholarly Works Application Profile (SWAP) undertaken within the JISC Digital Repositories Programme and led by Andy Powell (Eduserv Foundation) and Julie Allinson (RRT UKOLN) on behalf of the JISC.

Aims and Objectives:
1. To assess whether a single metadata AP for research data, or a small number thereof, would improve resource discovery or discovery-to-delivery in any useful or significant way.
2. If so, then to:
   a. assess whether the development of such AP(s) is practical and, if so, how much effort it would take;
   b. scope a community uptake strategy that is likely to be successful, identifying the main barriers and key stakeholders.
3. Otherwise, to investigate how best to improve cross-discipline, cross-community discovery-to-delivery for research data, and make recommendations to the JISC and others as appropriate.

Approach: The Study used a broad conception of what constitutes scientific data, namely data gathered, collated, structured and analysed using a recognizably scientific method, with a bias towards quantitative methods. The approach taken was to map out the landscape of existing data centres, repositories and associated projects, and to conduct a survey of the discovery-to-delivery metadata they use or have defined, alongside any insights they have gained from working with this metadata. This was followed up by a series of unstructured interviews discussing use cases for a Scientific Data Application Profile and how widely a single profile might be applied. On the latter point, matters of granularity, the experimental/measurement contrast, the quantitative/qualitative contrast, the raw/derived data contrast, and the homogeneous/heterogeneous data collection contrast were discussed.

The Study report was loosely structured according to the Singapore Framework for Dublin Core Application Profiles, and in turn considered: the possible use cases for a Scientific Data Application Profile; existing domain models that could either be used or adapted for use within such a profile; and a comparison of existing metadata profiles and standards to identify candidate elements for inclusion in the description set profile for scientific data. The report also considered how the application profile might be implemented, its relationship to other application profiles, the alternatives to constructing a Scientific Data Application Profile, the development effort required, and what could be done to encourage uptake in the community. The conclusions of the Study were validated through a reference group of stakeholders.
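As a concrete (and entirely hypothetical) illustration of a profile-conformant description, the record below mixes elements from the Dublin Core terms namespace with one locally defined element, expressed as a simple Python dictionary; the property choices, values and the "ex:" namespace are invented and do not represent the profile scoped by the Study:

    # A hypothetical description conforming to a scientific-data application
    # profile: established namespaces (here Dublin Core terms, prefix "dcterms")
    # are combined with a locally defined element (prefix "ex") optimized for
    # the community's own needs.
    record = {
        "dcterms:title": "Sea surface temperature measurements, North Sea transect",
        "dcterms:creator": "Example Research Group",
        "dcterms:created": "2008-06-15",
        "dcterms:type": "Dataset",
        "dcterms:format": "text/csv",
        "dcterms:license": "http://example.org/licences/open-data",
        # Locally defined element, not drawn from an established namespace:
        "ex:instrument": "Shipborne CTD profiler",
    }

    for prop, value in record.items():
        print(f"{prop}: {value}")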

Relevance:

100.00%

Publisher:

Abstract:

Since 1999, the ICES Working Group on the Assessment of Demersal Stocks in the North Sea and Skagerrak has assessed the saithe stock in the North Sea, Skagerrak and west of Scotland as a single stock unit. The sampling, evaluation and role of biological data from the German saithe fishery in the assessment are described. The German data showed trends similar to those observed in the French and Norwegian series. Based on these estimates, the spawning stock recovered to more than 200 000 t owing to reductions in quotas and exploitation rates. The production of the stock thus increased, in combination with good recruitment, and positive trends in spawning stock size and landings were projected for 2002. The biological data derived from the German saithe fishery dominated the assessment of stock size, structure and exploitation. This encourages a continuation of the described analyses, based on sampling onboard fishing vessels and at fish markets by the Institute for Sea Fisheries. The successful collaboration with the saithe fishing industry is judged to be an important contribution to the sustainable management of fish stocks.