927 results for Elementary shortest path with resource constraints
Abstract:
The lectures on botany were first published in 1829.
Abstract:
Cyclic peptides containing oxazole and thiazole heterocycles have been examined for their capacity to be used as scaffolds in larger, more complex, protein-like structures. Both the macrocyclic scaffolds and the supramolecular structures derived therefrom have been visualised by molecular modelling techniques. These molecules are too symmetrical to examine structurally by NMR spectroscopy. The cyclic hexapeptide ([Aaa-Thz](3), [Aaa-Oxz](3)) and cyclic octapeptide ([Aaa-Thz](4), [Aaa-Oxz](4)) analogues are composed of dipeptide surrogates (Aaa: amino acid, Thz: thiazole, Oxz: oxazole) derived from intramolecular condensation of cysteine or serine/threonine side chains in dipeptides like Aaa-Cys, Aaa-Ser and Aaa-Thr. The five-membered heterocyclic rings, like thiazole, oxazole and reduced analogues like thiazoline, thiazolidine and oxazoline have profound influences on the structures and bioactivities of cyclic peptides derived therefrom. This work suggests that such constrained cyclic peptides can be used as scaffolds to create a range of novel protein-like supramolecular structures (e.g. cylinders, troughs, cones, multi-loop structures, helix bundles) that are comparable in size, shape and composition to bioactive surfaces of proteins. They may therefore represent interesting starting points for the design of novel artificial proteins and artificial enzymes. (C) 2002 Elsevier Science Inc. All rights reserved.
Abstract:
In this paper, the network problem of determining all-pairs shortest paths is examined. A distributed algorithm that runs in O(n) time on a network of n nodes is presented. The number of messages used by the algorithm is O(e + n log n), where e is the number of communication links of the network. We prove that this algorithm is time optimal.
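The following is only an illustrative sketch, not the paper's algorithm: a synchronous, round-based simulation in which every node of an unweighted network repeatedly merges its neighbours' distance tables, showing why O(n) rounds suffice on an n-node network (whose diameter is at most n − 1); the paper's contribution lies in achieving this with only O(e + n log n) messages.

```python
def all_pairs_distances(adjacency):
    """adjacency: dict mapping each node to an iterable of its neighbours."""
    nodes = list(adjacency)
    dist = {u: {u: 0} for u in nodes}          # dist[u][v]: distance u currently knows to v
    for _ in range(len(nodes) - 1):            # at most n - 1 synchronous rounds
        snapshot = {u: dict(dist[u]) for u in nodes}
        changed = False
        for u in nodes:
            for v in adjacency[u]:             # u reads its neighbours' tables from last round
                for target, d in snapshot[v].items():
                    if target not in dist[u] or d + 1 < dist[u][target]:
                        dist[u][target] = d + 1
                        changed = True
        if not changed:                        # all tables stable: distances are final
            break
    return dist

# Example: the 4-node path network a-b-c-d
print(all_pairs_distances({'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}))
```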
Abstract:
In machine learning, the Gaussian process latent variable model (GP-LVM) has been extensively applied in the field of unsupervised dimensionality reduction. When some supervised information, e.g., pairwise constraints or labels of the data, is available, the traditional GP-LVM cannot directly utilize such supervised information to improve the performance of dimensionality reduction. In this case, it is necessary to modify the traditional GP-LVM to make it capable of handling supervised or semi-supervised learning tasks. For this purpose, we propose a new semi-supervised GP-LVM framework under pairwise constraints. By transferring the pairwise constraints in the observed space to the latent space, constrained prior information on the latent variables can be obtained. Under this constrained prior, the latent variables are optimized by the maximum a posteriori (MAP) algorithm. The effectiveness of the proposed algorithm is demonstrated with experiments on a variety of data sets. © 2010 Elsevier B.V.
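As a minimal sketch of the idea (assumed, not the authors' code), the GP-LVM can be fitted by MAP with the pairwise constraints transferred to the latent space as a prior term: must-link pairs are pulled together and cannot-link pairs are pushed apart. The kernel, penalty weight and margin below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    sq = cdist(X, X, 'sqeuclidean')
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def neg_map_objective(x_flat, Y, q, must_link, cannot_link,
                      noise=1e-2, lam=10.0, margin=1.0):
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    K = rbf_kernel(X) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, Y)
    # Negative GP-LVM log-likelihood: (D/2) log|K| + (1/2) tr(K^{-1} Y Y^T)
    nll = D * np.log(np.diag(L)).sum() + 0.5 * np.sum(Y * alpha)
    # Constrained prior on the latent variables, built from the pairwise constraints
    penalty = 0.0
    for i, j in must_link:                      # pull constrained pairs together
        penalty += np.sum((X[i] - X[j])**2)
    for i, j in cannot_link:                    # keep these pairs at least `margin` apart
        penalty += max(0.0, margin - np.linalg.norm(X[i] - X[j]))**2
    return nll + lam * penalty

def fit(Y, q=2, must_link=(), cannot_link=()):
    """Y: (N x D) observed data; q: latent dimension. Returns the MAP latent positions."""
    N = Y.shape[0]
    x0 = 0.1 * np.random.default_rng(0).standard_normal(N * q)
    res = minimize(neg_map_objective, x0,
                   args=(Y, q, list(must_link), list(cannot_link)),
                   method='L-BFGS-B')
    return res.x.reshape(N, q)
```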
Abstract:
Writing is an academic skill critical to students in today's schools, as it serves as a predominant means for demonstrating knowledge during the school years (Graham, 2008). However, for many students with Specific Learning Disabilities (SLD), learning to write is a challenging, complex process (Lane, Graham, Harris, & Weisenbach, 2006). Students with SLD have substantial writing challenges related to the nature of their disability (Mayes & Calhoun, 2005). This study investigated the effects of computer graphic organizer software on the narrative writing compositions of four fourth- and fifth-grade, elementary-level boys with SLD. A multiple baseline design across subjects was used to explore the effects of the computer graphic organizer software on four dependent variables: total number of words, total planning time, number of common story elements, and overall organization. Prior to baseline, participants were taught the fundamentals of narrative writing. Throughout baseline and intervention, participants were read a narrative writing prompt and were allowed up to 10 minutes to plan their writing, followed by 15 minutes for writing and 5 minutes of editing. During baseline, all planning was done using paper and pencil. During intervention, planning was done on the computer using a graphic organizer developed from the software program Kidspiration 3.0 (2011). All compositions were written, and editing was done, using paper and pencil during both baseline and intervention. The results of this study indicated that, to varying degrees, computer graphic organizers had a positive effect on the narrative writing abilities of elementary-aged students with SLD. Participants wrote more words (from 54.74 to 96.60 more), planned for longer periods of time (from 4.50 to 9.50 more minutes), and included more story elements in their compositions (from 2.00 to 5.10 more out of a possible 6). There were nominal to no improvements in overall organization across the 4 participants. The results suggest that teachers of students with SLD should consider using computer graphic organizers in their narrative writing instruction, perhaps in conjunction with remedial writing strategies. Future investigations can include other types of writing genres, other stages of writing, participants with varied demographics, and the use of these tools combined with remedial writing instruction.
Abstract:
We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming the zero-forcing method is used to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which helps reduce the complexity significantly compared to the conventional method. Simulation results are provided to demonstrate the fast convergence and effectiveness of the proposed algorithm.
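As an aside, the zero-forcing step mentioned above can be sketched as follows (this is not the proposed barrier interior-point algorithm): the precoder is taken as the right pseudo-inverse of the aggregate secondary-user channel, which nulls the multiuser interference, and is then scaled so that no transmit antenna exceeds its individual power limit. The channel dimensions and power budgets below are assumptions.

```python
import numpy as np

def zero_forcing_precoder(H, per_antenna_power):
    """H: (K x M) aggregate channel of K single-stream SUs, M BS antennas.
    per_antenna_power: length-M vector of per-antenna power limits."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # H @ W is diagonal: no multiuser interference
    antenna_power = np.sum(np.abs(W)**2, axis=1)     # power radiated by each transmit antenna
    scale = np.sqrt(np.min(per_antenna_power / antenna_power))
    return scale * W                                 # common scaling keeps every antenna within budget

rng = np.random.default_rng(1)
H = (rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6))) / np.sqrt(2)
W = zero_forcing_precoder(H, per_antenna_power=np.ones(6))
print(np.round(np.abs(H @ W), 6))                    # off-diagonal entries ~ 0: interference removed
```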
Abstract:
Production companies use raw materials to compose end-products. They often make different products with the same raw materials. In this research, the focus lies on producing two end-products consisting (partly) of the same raw materials as cheaply as possible. Each of the products has its own demand and quality requirements, expressed as quadratic constraints. The minimization of the costs, given the quadratic constraints, is a global optimization problem, which can be difficult because of possible local optima. Therefore, the multimodal character of the (bi-)blend problem is investigated. Standard optimization packages (solvers) in Matlab and GAMS were tested on their ability to solve the problem. In total, 20 test cases were generated and taken from the literature to test the solvers' effectiveness and efficiency in solving the problem. The research also gives insight into adjusting the quadratic constraints of the problem in order to arrive at a robust formulation of the bi-blend problem.
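A minimal sketch of this kind of (bi-)blend formulation, with made-up costs, demands and quality coefficients, is given below: total cost is minimised subject to demand constraints and quadratic quality constraints. Because such problems can be multimodal, a local solver such as SLSQP may return a local optimum, which is precisely why the research compares different solvers.

```python
import numpy as np
from scipy.optimize import minimize

cost = np.array([2.0, 3.5, 1.2, 4.0])          # illustrative unit cost of each raw material
demand = np.array([10.0, 8.0])                  # illustrative required amount of each end-product

def total_cost(x):
    x = x.reshape(2, 4)                         # x[p, r]: amount of material r in product p
    return float(np.sum(x * cost))

def constraints():
    cons = []
    # Same illustrative quadratic quality matrix for both products
    Q = np.array([[0.2, 0.05, 0.0, 0.0],
                  [0.05, 0.3, 0.0, 0.0],
                  [0.0, 0.0, 0.1, 0.02],
                  [0.0, 0.0, 0.02, 0.4]])
    for p in range(2):
        # Demand: each product must contain exactly its required amount
        cons.append({'type': 'eq',
                     'fun': lambda x, p=p: x.reshape(2, 4)[p].sum() - demand[p]})
        # Quadratic quality requirement: x^T Q x must not exceed 8.0 (illustrative bound)
        cons.append({'type': 'ineq',
                     'fun': lambda x, p=p, Q=Q: 8.0 - x.reshape(2, 4)[p] @ Q @ x.reshape(2, 4)[p]})
    return cons

x0 = np.full(8, 2.0)                             # one local solver run from a single start point
res = minimize(total_cost, x0, method='SLSQP',
               bounds=[(0.0, None)] * 8, constraints=constraints())
print(res.x.reshape(2, 4), res.fun)
```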
Abstract:
This thesis addresses the problem of determining the K best solutions for two optimization problems, the 0-1 Knapsack Problem and the Shortest Path Problem. Such solutions can be used within column generation methods for solving real-world problems, for example Bin Packing Problems and vehicle and crew scheduling problems. New dynamic programming algorithms, developed within a research programme, were implemented in order to evaluate their performance experimentally. Initially, for both problems, an algorithm was described that determines the K best solutions for every possible subproblem: starting from a knapsack with zero capacity, in the case of the 0-1 Knapsack Problem, and from the path from the source vertex to itself for the Shortest Path Problem, the algorithm determines the best solutions of progressively larger subproblems, using the solutions built for the preceding states, until the best solutions of the overall problem are obtained. Subsequently, an algorithm based on a backward recursion approach was defined; in this case a recursive function is used which, called starting from the state corresponding to the overall problem, is invoked only on the strictly necessary intermediate states, and no superfluous solutions are computed for any of them.
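For the knapsack case, the forward recursion described above can be sketched as follows (an assumed illustration, not the thesis code): each state keeps the K best total values reachable with the items considered so far, and states are extended from smaller to progressively larger subproblems.

```python
def k_best_knapsack(values, weights, capacity, K):
    # best[c] = list of up to K best total values achievable with capacity c (descending)
    best = [[0] for _ in range(capacity + 1)]
    for v, w in zip(values, weights):
        new_best = [list(b) for b in best]
        for c in range(capacity, w - 1, -1):
            # Either skip the item, or add it to one of the K best states of capacity c - w
            candidates = new_best[c] + [x + v for x in best[c - w]]
            new_best[c] = sorted(candidates, reverse=True)[:K]
        best = new_best
    return best[capacity]            # K best objective values of the overall problem

print(k_best_knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5, K=3))  # [220, 180, 160]
```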
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through the neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and of distances among neurons, both associated with the positions that the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets, as well as real-world data sets. The results obtained were compared with those from a number of well-known methods in the literature.
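One way to realise the kind of neighbour-strength analysis described above (an illustrative assumption, not the proposed method itself) is to weight each edge of the trained SOM grid by the distance between the codebook vectors of neighbouring neurons and run Dijkstra's algorithm over the resulting graph, so that paths crossing well-separated clusters accumulate a large cost.

```python
import heapq
import numpy as np

def grid_neighbours(rows, cols):
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):        # right and down neighbours on the grid
                if r + dr < rows and c + dc < cols:
                    yield (r, c), (r + dr, c + dc)

def neuron_path_cost(codebook, rows, cols, start, goal):
    """codebook: (rows*cols x dim) trained SOM weights in row-major grid order."""
    W = codebook.reshape(rows, cols, -1)
    graph = {}
    for a, b in grid_neighbours(rows, cols):
        cost = float(np.linalg.norm(W[a] - W[b]))  # large cost across cluster borders
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:                                    # plain Dijkstra over the neuron grid
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')
```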
Abstract:
The rigid nature of wavelength-division multiplexing (WDM) networks leads to inefficient exploitation of spectral capacity. Flexible networks are therefore a possible step forward for optical technology, since they allow better use of the available spectral resources. In order to assess the applicability of flexible networks, this work proposes a performance evaluation strategy based on simulations and on comparisons between the results obtained. To this end, several discrete-time simulations were implemented in two simulators developed in Matlab in order to analyse different spectrum allocation policies (First-Fit, Smallest-Fit, Exact-Fit and Random-Fit) under three non-hybrid lightpath routing algorithms: routing based on external fragmentation (FA), shortest-path routing with maximum spectral reuse efficiency (SPSR), and load-balanced routing (BLSA). Two network topologies were used: a small 6-node subset of Cost239 and a random 7-node topology. Assuming that physical-layer effects were not configured as constraints, the techniques studied were compared with the aim of indicating, based on the specifics of the proposed scenarios, the most suitable spectrum allocation method, in terms of blocking frequency, among the four spectrum allocation policies considered.
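As an illustration (an assumption, not the simulators' code), the First-Fit policy analysed above can be sketched as follows: given the occupancy of the spectrum slots on every link of a candidate lightpath, the policy assigns the lowest-indexed block of contiguous slots that is free on all of those links, and the request is blocked if no such block exists.

```python
def first_fit(link_occupancy, path_links, demand_slots):
    """link_occupancy: dict link -> list of booleans (True = slot in use).
    Returns the starting slot index of the assigned block, or None (request blocked)."""
    total_slots = len(next(iter(link_occupancy.values())))
    for start in range(total_slots - demand_slots + 1):
        block = range(start, start + demand_slots)
        if all(not link_occupancy[l][s] for l in path_links for s in block):
            for l in path_links:                 # reserve the block on every link of the path
                for s in block:
                    link_occupancy[l][s] = True
            return start
    return None                                  # no contiguous block free: blocking event

# Usage sketch: 8 slots per link, a 2-hop path requesting 3 contiguous slots
occupancy = {'A-B': [True, True, False, False, False, False, False, False],
             'B-C': [False, False, False, True, False, False, False, False]}
print(first_fit(occupancy, ['A-B', 'B-C'], 3))   # -> 4 (slots 4, 5, 6 are free on both links)
```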
Abstract:
Monitoring the ratio Z of two variables by means of a control chart is a topic that has recently been explored in the literature. To investigate the subject further, this study evaluates the efficiency and feasibility of applying this tool to support decision making in the capacity management of back-office labour (hereafter referred to as Backoffice) in a company of the banking sector. Traditionally, control charts have been used to monitor manufacturing production processes, but they have recently been adopted to monitor some services. Although it still follows many concepts pioneered in manufacturing, activity in the service sector has its own particularities, such as the impossibility of building up inventory. The need to match resources to demand therefore becomes essential, and the management of controls, and their timeliness, is fundamental so that the company can react quickly to demand variation and adjust its capacity. In a scenario of resource constraints, planning is crucial to avoid waste and ensure efficiency. The objective of this study is to present the control chart as a tool for monitoring the ratio of two random variables, demand and labour, in the service Backoffice of a bank. In this work, the traditional Shewhart control chart and the Shewhart control chart with supplementary run rules are analysed, and the results obtained confirm that control charts can be used to manage and adjust the workforce to meet demand. Monitoring the ratio (demand/labour) will help the manager to allocate the team (workforce) appropriately according to demand and production capacity. As a contribution, the study evaluates the behaviour of the ratio Z = X/Y in a situation of high variability of the variable X and low variability of the variable Y.
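As a minimal sketch with illustrative data (not the study's data or code), a Shewhart-style individuals chart for the ratio Z = X/Y can be set up as follows: Phase I samples of demand X and workforce Y define the centre line and 3-sigma control limits, and later ratios are flagged when they fall outside those limits.

```python
import numpy as np

def ratio_control_limits(x_phase1, y_phase1):
    """Centre line and 3-sigma limits estimated from Phase I ratios Z = X/Y."""
    z = np.asarray(x_phase1, float) / np.asarray(y_phase1, float)
    centre = z.mean()
    sigma = z.std(ddof=1)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(x, y, limits):
    lcl, _, ucl = limits
    z = np.asarray(x, float) / np.asarray(y, float)
    return (z < lcl) | (z > ucl)                 # True where the monitored ratio signals

rng = np.random.default_rng(7)
demand = rng.normal(1000, 80, 30)                # high-variability numerator X (illustrative)
staff = rng.normal(50, 1.5, 30)                  # low-variability denominator Y (illustrative)
limits = ratio_control_limits(demand, staff)
print(limits)
print(out_of_control([1250, 990], [48, 51], limits))
```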