973 results for: Alkene, carbon maximum number


Relevance:

100.00%

Publisher:

Abstract:

Motivation: We study a stochastic method for approximating the set of local minima in partial RNA folding landscapes associated with a bounded-distance neighbourhood of folding conformations. The conformations are limited to RNA secondary structures without pseudoknots. The method aims at exploring partial energy landscapes pL induced by folding simulations and their underlying neighbourhood relations. It combines an approximation of the number of local optima devised by Garnier and Kallel (2002) with a run-time estimation for identifying sets of local optima established by Reeves and Eremeev (2004).

Results: The method is tested on nine sequences of length between 50 nt and 400 nt, which allows us to compare the results with data generated by RNAsubopt and subsequent barrier tree calculations. On the nine sequences, the method captures on average 92% of local minima with settings designed for a target of 95%. The run-time of the heuristic can be estimated by O(n²·D·ν·ln ν), where n is the sequence length, ν is the number of local minima in the partial landscape pL under consideration and D is the maximum number of steepest descent steps in attraction basins associated with pL.
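The counting step behind such approximations can be illustrated with a toy estimator: assuming (unrealistically) equally sized attraction basins, the expected number of distinct minima seen after M random-start descents determines ν, and that relation can be inverted numerically. This is a simplified stand-in inspired by the Garnier-Kallel idea, not the authors' actual method; all names below are illustrative.

```python
# Toy estimator of the number nu of local optima from repeated steepest
# descents, under the strong simplification that all attraction basins
# have equal size: after M random-start descents the expected number of
# distinct minima observed is
#     E[k] = nu * (1 - (1 - 1/nu)**M),
# which we invert numerically (bisection) for an observed k.
def estimate_num_optima(k, M, hi=10**6):
    def expected_distinct(nu):
        return nu * (1.0 - (1.0 - 1.0 / nu) ** M)
    lo_b, hi_b = float(k), float(hi)   # E(k) < k, E(hi) ~ M > k
    for _ in range(200):               # bisection on a monotone function
        mid = 0.5 * (lo_b + hi_b)
        if expected_distinct(mid) < k:
            lo_b = mid
        else:
            hi_b = mid
    return 0.5 * (lo_b + hi_b)

# Sanity check: with nu = 50 basins and M = 100 descents, inverting the
# (noise-free) expected distinct count should recover roughly nu = 50.
k = 50 * (1 - (1 - 1 / 50) ** 100)
print(round(estimate_num_optima(k, M=100)))
```

In practice k is a noisy observation, so the recovered ν is itself an estimate, which is why the paper pairs it with a run-time estimation for deciding when to stop sampling.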


In distributed networks, it is often useful for the nodes to be aware of dense subgraphs; for example, a dense subgraph could reveal dense substructures in otherwise sparse graphs (e.g. the World Wide Web or social networks), which might correspond to community clusters or dense regions worth maintaining good communication infrastructure for. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. We consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. Communication is by broadcast only and follows the CONGEST model. Our algorithms are executed continuously on the network, and at any time (after some initialization) each node is aware of whether or not it is part of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for the densest and at-least-k-densest subgraph problems on static distributed graphs. © 2012 Springer-Verlag.


In distributed networks, some groups of nodes may have more inter-connections, perhaps due to larger bandwidth availability or communication requirements. In many scenarios, it may be useful for the nodes to know whether they form part of a dense subgraph; e.g., such a dense subgraph could form a high-bandwidth backbone for the network. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs (subgraphs that the member nodes are aware of). The only knowledge that the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. We consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. Communication is by broadcast only and follows the CONGEST model in the sense that only messages of O(log n) size are permitted, where n is the number of nodes in the network. Our algorithms are executed continuously on the network, and at any time (after some initialization) each node is aware of whether or not it is part of a particular dense subgraph. We give algorithms that approximate both the densest subgraph, i.e., the subgraph of highest density in the network, and the at-least-k-densest subgraph (for a given parameter k), i.e., the densest subgraph of size at least k. We give a (2 + ε)-approximation algorithm for the densest subgraph problem. The at-least-k-densest subgraph problem is known to be NP-hard in the general centralized setting, where the best known algorithm gives a 2-approximation; we present an algorithm that maintains a (3 + ε)-approximation in our distributed, dynamic setting. Our algorithms run in O(D log n) time. © 2012 Authors.
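For intuition, the classical centralized counterpart of the (2 + ε) result is greedy peeling (due to Charikar): repeatedly remove a minimum-degree vertex and keep the densest intermediate subgraph seen. A minimal sketch of that centralized idea, not the distributed, dynamic protocol of the paper:

```python
# Greedy peeling 2-approximation for the densest subgraph (density =
# edges / vertices): repeatedly delete a minimum-degree vertex and
# remember the best density among all intermediate subgraphs.
def densest_subgraph_peel(edges, n):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = len(edges)
    alive = set(range(n))
    best_density, best_set = m / n, set(alive)
    while len(alive) > 1:
        v = min(alive, key=lambda x: len(adj[x]))  # min-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        m -= len(adj[v])                            # drop v's edges
        adj[v].clear()
        alive.discard(v)
        d = m / len(alive)
        if d > best_density:
            best_density, best_set = d, set(alive)
    return best_density, best_set

# K4 plus one pendant vertex: the densest subgraph is the K4 (6/4 = 1.5).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(densest_subgraph_peel(edges, 5))  # (1.5, {0, 1, 2, 3})
```

The distributed algorithms in the paper emulate this peeling in parallel rounds, removing all low-degree vertices at once, which is where the extra ε in the approximation factor comes from.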


A novel series of polymerisable squaramides has been synthesised in high yields using simple chemical reactions and evaluated for the binding of anionic species. These vinyl monomers can be used as functional building blocks in co-polymerisations with a plethora of co-monomers or cross-linkers, thanks to their compatibility with free-radical polymerisation reactions. Aromatic-substituted squaramides were found to be the strongest receptors, while binding of certain anions was accompanied by a strong colour change, attributed to de-protonation of the squaramide. The best-performing squaramide monomers were incorporated into molecularly imprinted polymers (MIPs) targeting a model anion, and their capacities and selectivities were evaluated by rebinding experiments. Polymers prepared using the new squaramide monomers were compared to urea-based co-polymers and were found to contain up to 80% of the theoretical maximum number of binding sites, an exceptionally high value compared to previous reports. Strong polymer colour changes were observed upon rebinding of certain anions, equivalent to those witnessed in solution, paving the way for the application of such materials in anion-sensing devices.



Graphical abstract: Polymerisable squaramide receptors for anion binding and sensing


In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of the observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be performed efficiently in an Empirical Bayes fashion by maximizing the marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
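The core trick can be illustrated on a toy instance: once an eigendecomposition K = Q diag(λ) Q^T is available, both the quadratic form and the log-determinant in the Gaussian log marginal likelihood reduce to O(N) sums over eigenvalues. A minimal pure-Python sketch on a 2x2 kernel built from a known eigendecomposition (the noise-variance-only case; the paper's identities cover more hyperparameters and the derivatives too):

```python
import math

# Build K = Q diag(lam) Q^T from a known orthogonal Q, then evaluate the
# GP log marginal likelihood for noise variance s2 two ways:
#  (a) directly, via the 2x2 inverse and determinant of K + s2*I;
#  (b) via the eigendecomposition identities, O(N) per evaluation:
#      y^T (K + s2 I)^-1 y = sum_i a_i**2 / (lam_i + s2),  a = Q^T y
#      log det (K + s2 I)  = sum_i log(lam_i + s2)
theta = 0.3
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
lam = [3.0, 0.5]          # eigenvalues of K
y = [1.0, -2.0]
s2 = 0.1                  # noise hyperparameter sigma^2

K = [[sum(Q[i][k] * lam[k] * Q[j][k] for k in range(2)) for j in range(2)]
     for i in range(2)]

def loglik_direct(K, y, s2):
    a, b, c = K[0][0] + s2, K[1][1] + s2, K[0][1]
    det = a * b - c * c
    v0 = (b * y[0] - c * y[1]) / det      # (K + s2 I)^-1 y, 2x2 case
    v1 = (-c * y[0] + a * y[1]) / det
    quad = y[0] * v0 + y[1] * v1
    return -0.5 * quad - 0.5 * math.log(det) - math.log(2 * math.pi)

def loglik_eigen(Q, lam, y, s2):
    alpha = [Q[0][i] * y[0] + Q[1][i] * y[1] for i in range(2)]  # Q^T y
    quad = sum(alpha[i] ** 2 / (lam[i] + s2) for i in range(2))
    logdet = sum(math.log(l + s2) for l in lam)
    return -0.5 * quad - 0.5 * logdet - math.log(2 * math.pi)

print(abs(loglik_direct(K, y, s2) - loglik_eigen(Q, lam, y, s2)) < 1e-9)
```

The point is that the eigendecomposition is paid for once, after which every candidate s2 tried by the global optimizer costs only an O(N) pass over (λ_i, α_i), instead of a fresh O(N^3) factorization.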


When an agent wants to fulfil its desires about the world, it usually has multiple plans to choose from, and these plans have different pre-conditions and different side effects beyond achieving its goals. Therefore, for further reasoning and interaction with the world, a plan selection strategy (usually based on plan cost estimation) is mandatory for an autonomous agent. This demand becomes even more critical when uncertainty in the observation of the world is taken into account, since in this case we consider not only the costs of different plans, but also their chances of success, estimated according to the agent's beliefs. In addition, when multiple goals are considered together, different plans achieving the goals can conflict in their pre-conditions (contexts) or in the resources they require. Hence a plan selection strategy should be able to choose a subset of plans that fulfils the maximum number of goals while maintaining context consistency and resource-tolerance among the chosen plans. To address these two issues, in this paper we first propose several principles that a plan selection strategy should satisfy, and then present selection strategies that stem from these principles, depending on whether plan cost is taken into account. We also show that our selection strategy can partially recover intention revision.
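The subset-selection idea can be sketched by brute force on a tiny instance: pick the context-consistent, within-budget subset of plans covering the most goals. The plan format, conflict model and resource model below are illustrative assumptions, not the paper's formalism:

```python
from itertools import combinations

# A plan is a dict with a goal set, a context (variable -> required
# truth value) and a resource cost. Two plans conflict if they require
# opposite values for some context variable.
def consistent(subset):
    assigned = {}
    for p in subset:
        for var, val in p["context"].items():
            if assigned.get(var, val) != val:
                return False
            assigned[var] = val
    return True

def select_plans(plans, budget):
    """Exhaustively pick the consistent, affordable subset of plans
    fulfilling the maximum number of distinct goals."""
    best_goals, best_subset = set(), ()
    for r in range(len(plans) + 1):
        for subset in combinations(plans, r):
            if not consistent(subset):
                continue
            if sum(p["cost"] for p in subset) > budget:
                continue
            goals = set().union(*(p["goals"] for p in subset)) if subset else set()
            if len(goals) > len(best_goals):
                best_goals, best_subset = goals, subset
    return best_subset, best_goals

# Hypothetical instance: p1 and p2 need opposite contexts, so the best
# compatible choice is p1 + p3, covering goals g1, g2 and g3.
plans = [
    {"name": "p1", "goals": {"g1"}, "context": {"door_open": True}, "cost": 2},
    {"name": "p2", "goals": {"g2"}, "context": {"door_open": False}, "cost": 2},
    {"name": "p3", "goals": {"g2", "g3"}, "context": {"door_open": True}, "cost": 3},
]
chosen, goals = select_plans(plans, budget=5)
print(sorted(goals))  # ['g1', 'g2', 'g3']
```

A real agent cannot afford this exponential enumeration, which is precisely why the paper develops principled selection strategies instead; the sketch only fixes what "maximum number of goals under context consistency and resource-tolerance" means.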


In the context of bipartite bosonic systems, two notions of classicality of correlations can be defined: P-classicality, based on the properties of the Glauber-Sudarshan P-function, and C-classicality, based on the entropic quantum discord. It has been shown that these two notions are maximally inequivalent in a static (metric) sense, as they coincide only on a set of states of zero measure. We extend and quantitatively reinforce this inequivalence by addressing the dynamical relation between these types of non-classicality in a paradigmatic quantum-optical setting: the linear mixing at a beam splitter of a single-mode Gaussian state with a thermal reference state. Specifically, we show that almost all P-classical input states generate outputs that are not C-classical. Indeed, in the case of zero thermal reference photons, the more P-classical the resources at the input, the less C-classicality at the output. In addition, we show that the P-classicality at the input, as quantified by the non-classical depth, does instead determine quantitatively the potential for generating output entanglement. This endows the non-classical depth with a new operational interpretation: it gives the maximum number of thermal reference photons that can be mixed at a beam splitter without destroying the output entanglement.


Visibility problems have many applications to real-world situations. Among the best known, and most exhaustively studied, are those involving the concepts of guarding and hiding in geometric structures (guarding and hiding problems). In this work, visibility problems are studied in geometric structures known as polygons, since these can appropriately represent many real objects and are easy to handle computationally. The aim of guarding problems is to determine the minimum number of positions at which to place devices in a given polygon so that these devices can "see" the whole polygon. Conversely, the aim of hiding problems is to determine the maximum number of positions in a given polygon such that no two positions can "see" each other. Unfortunately, most visibility problems on polygons are NP-hard, which gives rise to two lines of research: the development of algorithms that produce approximate solutions, and the determination of exact solutions for special classes of polygons. Following these two lines of research, the work is divided into two parts. In the first part, approximation algorithms, based essentially on metaheuristics and hybrid metaheuristics, are proposed to solve several visibility problems on both arbitrary and orthogonal polygons. The problems studied are the following: the "Maximum Hidden Vertex Set problem", the "Minimum Vertex Guard Set problem", the "Minimum Vertex Floodlight Set problem" and the "Minimum Vertex k-Modem Set problem". Methods are also developed for determining the approximation ratio of the proposed algorithms. For each problem, the presented algorithms are implemented and a statistical study is carried out to establish which algorithm obtains the best solutions in reasonable time. This study leads to the conclusion that hybrid metaheuristics are, in general, the best strategies for solving the visibility problems studied. In the second part of this dissertation, the "Minimum Vertex Guard Set", "Maximum Hidden Set" and "Maximum Hidden Vertex Set" problems are addressed, and some classes of polygons are identified and studied for which exact solutions and/or combinatorial bounds are determined.


Master's dissertation, Marine Biology, Specialization in Fisheries and Aquaculture, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2009.


Time-sensitive Wireless Sensor Network (WSN) applications require finite delay bounds in critical situations. This paper provides a methodology for the modeling and worst-case dimensioning of cluster-tree WSNs. We provide a fine-grained model of the worst-case cluster-tree topology characterized by its depth, the maximum number of child routers and the maximum number of child nodes for each parent router. Using Network Calculus, we derive "plug-and-play" expressions for the end-to-end delay bounds, buffering and bandwidth requirements as a function of the WSN cluster-tree characteristics and traffic specifications. The cluster-tree topology has been adopted by many cluster-based solutions for WSNs, and we demonstrate how to apply our general results to dimensioning IEEE 802.15.4/ZigBee cluster-tree WSNs. We believe that this paper establishes the fundamental performance limits of cluster-tree wireless sensor networks by providing a simple and effective methodology for the design of such WSNs.
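As a flavor of the Network Calculus machinery (the generic single-server bound, not the paper's cluster-tree-specific expressions): a flow with token-bucket arrival curve alpha(t) = b + r*t served by a rate-latency service curve beta(t) = R*max(t - T, 0) has worst-case delay at most T + b/R whenever r <= R.

```python
# Classic Network Calculus delay bound for a token-bucket-constrained
# flow (burst b, sustained rate r) crossing a rate-latency server
# (service rate R, latency T). Valid only in the stable case r <= R;
# the maximum horizontal deviation between the curves is then T + b/R.
def delay_bound(b, r, R, T):
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

# Hypothetical numbers: burst of 2 units, rate 1 unit/s, served at
# 4 units/s after a 0.5 s latency.
print(delay_bound(b=2.0, r=1.0, R=4.0, T=0.5))  # 1.0
```

In a cluster-tree WSN each parent router acts as such a server for its children, and the paper's "plug-and-play" expressions come from composing these per-hop bounds along the tree, with the depth and the maximum numbers of child routers/nodes fixing the worst-case b, r, R and T at each level.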


The problem addressed here originates in the flat-glass cutting and wood-panel sawing industries, where smaller items are cut from larger items according to predefined cutting patterns. In this type of industry, the smaller pieces cut from the patterns are piled around the machine in stacks according to the size of the pieces, and are moved to the warehouse only when all items of the same size have been cut. If the cutting machine can process only one pattern at a time, and the workspace is limited, it is desirable to set the sequence in which the cutting patterns are processed so as to minimize the maximum number of open stacks around the machine. This problem is known in the literature as the minimization of open stacks problem (MOSP). To find the best sequence of cutting patterns, we propose an integer programming model, based on interval graphs, that searches for an appropriate edge completion of the given graph of the problem while defining a suitable coloring of its vertices.
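Given a fixed pattern sequence, the objective value itself is easy to evaluate: the stack for a piece type is open from the first to the last pattern that produces it, so the cost of a sequence is the peak number of overlapping such intervals. A minimal sketch of this evaluation (the instance data is hypothetical; the paper's contribution is the integer programming model that optimizes over sequences, not this check):

```python
# Evaluate a cutting-pattern sequence for the MOSP: each pattern is the
# set of piece types it produces; the stack for a piece type opens at
# the first pattern producing it and closes after the last one.
def max_open_stacks(sequence):
    first, last = {}, {}
    for t, pattern in enumerate(sequence):
        for piece in pattern:
            first.setdefault(piece, t)
            last[piece] = t
    peak = 0
    for t in range(len(sequence)):
        open_now = sum(1 for p in first if first[p] <= t <= last[p])
        peak = max(peak, open_now)
    return peak

# Hypothetical instance: four patterns over piece types a..d.
seq = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"a", "d"}]
print(max_open_stacks(seq))  # 3 (at time 1: stacks a, b, c are open)
```

These open intervals are exactly why interval graphs appear in the model: a sequence of the patterns induces an interval for each piece type, and minimizing the peak overlap corresponds to finding an edge completion of the problem graph into an interval graph with small clique number.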


This paper explores situations where tenants in public houses, in a specific neighborhood, are given the legislated right to buy the houses they live in, or can choose to remain in their houses and pay the regulated rent. This type of legislation has been passed in many European countries in the last 30-35 years (the U.K. Housing Act 1980 is a leading example). The main objective of this type of legislation is to transfer the ownership of the houses from the public authority to the tenants; to achieve this goal, the selling prices of the public houses are typically heavily subsidized. The legislating body then faces a trade-off between achieving the goals of the legislation and allocating the houses efficiently. This paper investigates this specific trade-off and identifies an allocation rule that is individually rational, equilibrium selecting, and group non-manipulable on a restricted preference domain that contains "almost all" preference profiles. On this restricted domain, the identified rule is the equilibrium-selecting rule that transfers the maximum number of ownerships from the public authority to the tenants. This rule is preferred to the current U.K. system by both the existing tenants and the public authority. Finally, a dynamic process for finding the outcome of the identified rule, in a finite number of steps, is provided.


In this paper, a new methodology is proposed for the prediction of scoliosis curve types from non-invasive acquisitions of the back surface of the trunk. One hundred and fifty-nine scoliosis patients had their back surface acquired in 3D using an optical digitizer. Each surface is then characterized by 45 local measurements of back surface rotation. Using a semi-supervised algorithm, the classifier is trained with only 32 labeled and 58 unlabeled data points. Tested on 69 new samples, the classifier correctly classified 87.0% of the data. After reducing the number of labeled training samples to 12, the behavior of the resulting classifier tends to be similar to the reference case in which the classifier is trained with the maximum number of available labeled data. Moreover, the addition of unlabeled data guided the classifier towards more generalizable boundaries between the classes. These results provide a proof of feasibility for using a semi-supervised learning algorithm to train a classifier for the prediction of scoliosis curve type when only a few training data are labeled. This constitutes a promising clinical finding, since it will allow the diagnosis and follow-up of scoliotic deformities without exposing the patient to X-ray radiation.


This thesis investigates the elastic properties and phase transitions of selected mixed sulphate crystals, Lithium Hydrazinium Sulphate [LiN2H5SO4] (LHS), Lithium Ammonium Sulphate [LiNH4SO4] (LAS) and Lithium Potassium Sulphate [LiKSO4] (LPS), using ultrasonic techniques. The pulse-echo overlap technique has been used to measure ultrasonic velocity and its temperature dependence along different directions, with waves of longitudinal and transverse polarization. Two major numerical techniques, and the corresponding computer programs developed as part of the present work, are presented in this thesis. All 9 elastic constants of LHS are determined accurately from ultrasonic measurements, and the constants are refined by applying a misorientation correction. Ultrasonic measurements are performed on LAS to determine its elastic constants and to study its low-temperature phase transitions; temperature-variation studies of the elastic constants of LAS are performed for 6 different modes of propagation, on both heating and cooling, at low temperatures. All 5 independent elastic constants of LPS are determined using ultrasonic measurements, and it is concluded that the LPS crystal does not undergo a phase transition in the temperature range studied. A comparison of the three crystals shows that LPS has the maximum number of phase transitions and LHS the fewest; it is interesting to note that LPS has the simplest formula unit of the three. There is considerable scope for future work on these crystals and on others belonging to the sulphate family.


This thesis is entitled "Contribution of size fractions of planktonic algae to primary organic productivity in the coastal waters of Cochin, south-west coast of India". In marine ecosystems, planktonic algae are the most important primary producers, and considerable attention is given to them on account of their supreme status in the marine food chain. The study of primary production in the Indian Ocean began with the Dana (1928-30), John Murray (1933-34), Discovery (1934) and Albatross (1947-48) expeditions, which tried to evaluate productivity from nutrients and the standing crop of phytoplankton. The bioproductivity of the marine environment depends on various primary producers, ranging in size from picoplankton to larger macro-phytoplankton. The quantity and quality of the various size fractions of planktonic algae at any locality depend mainly on the hydrographic conditions of the area. In the coastal waters of Cochin, south-west coast of India, the planktonic algal community is composed mainly of the diatoms, the dinoflagellates, the blue-green algae and the silicoflagellates, the former two contributing the major flora and found distributed across all size fractions. The maximum number of diatom species at station 1 and station 2 was found in the pre-monsoon season. The size groups of planktonic algae greater than 53 µm are dominated by filamentous, chain-forming and colonial diatoms. In the coastal waters of Cochin, planktonic algae less than 53 µm in size contribute significantly to primary productivity and to the biodiversity of the microflora, indicating the presence of rich fishery resources on the south-west coast of India. The study of the different size fractions of planktonic algae and their relative contribution to primary organic production is a useful tool for estimating the quantity and quality of fisheries. A deeper investigation of the occurrence of these microalgae and proper identification of their species would be of immense help in assessing the specificity and magnitude of fishery resources.