872 results for direct search optimization algorithm


Relevance: 100.00%

Abstract:

The aim of this study was to compare the race characteristics of the start and turn segments of national and regional level swimmers. In the study, 100 and 200-m events were analysed during the finals session of the Open Comunidad de Madrid (Spain) tournament. The “individualized-distance” method with a two-dimensional direct linear transformation (2D-DLT) algorithm was used to perform the race analyses. National level swimmers obtained faster velocities in all race segments and stroke comparisons, although significant inter-level differences in start velocity were only obtained in half (8 out of 16) of the analysed events. Higher level swimmers also travelled longer start and turn distances, but only in the race segments where the gain in speed was high. This was observed in the turn segments, in the backstroke and butterfly strokes and during the 200-m breaststroke event, but not in any of the freestyle events. Time improvements due to the appropriate extension of the underwater subsections appear to be critical for the final race result and should be carefully evaluated with the “individualized-distance” method.
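
As an illustration of the measurement technique named above, the sketch below shows how a two-dimensional direct linear transformation can map digitised image coordinates onto pool-plane coordinates from a few control points; it is a minimal, generic 2D-DLT least-squares fit, not the authors' software, and the control-point values are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of a two-dimensional
# direct linear transformation (2D-DLT): eight parameters of a plane-to-plane
# projective mapping are estimated by least squares from control points with
# known pool-plane coordinates, and then used to convert digitised image
# coordinates of a swimmer into pool coordinates.
import numpy as np

def dlt2d_calibrate(image_pts, plane_pts):
    """Estimate the 8 DLT parameters mapping image (u, v) -> plane (X, Y).

    image_pts, plane_pts: (N, 2) arrays of matched control points, N >= 4.
    """
    A, b = [], []
    for (u, v), (X, Y) in zip(image_pts, plane_pts):
        # X = (L1*u + L2*v + L3) / (L7*u + L8*v + 1)
        # Y = (L4*u + L5*v + L6) / (L7*u + L8*v + 1)
        A.append([u, v, 1, 0, 0, 0, -X * u, -X * v]); b.append(X)
        A.append([0, 0, 0, u, v, 1, -Y * u, -Y * v]); b.append(Y)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt2d_map(L, u, v):
    """Convert one digitised image point into pool-plane coordinates."""
    den = L[6] * u + L[7] * v + 1.0
    return (L[0] * u + L[1] * v + L[2]) / den, (L[3] * u + L[4] * v + L[5]) / den

# Hypothetical control points (image pixels vs. metres on the pool plane).
img = np.array([[102, 88], [860, 95], [873, 540], [95, 530]], float)
pool = np.array([[0, 0], [25, 0], [25, 10], [0, 10]], float)
L = dlt2d_calibrate(img, pool)
print(dlt2d_map(L, 480, 300))   # approximate pool-plane position of a digitised point
```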

Relevance: 100.00%

Abstract:

In this paper the daily temporal and spatial behavior of electric vehicles (EVs) is modelled using an activity-based (ActBM) microsimulation model for the Flanders region (Belgium). Assuming that all EVs are fully charged at the beginning of the day, this mobility model is used to determine the percentage of Flemish vehicles that cannot cover their programmed daily trips and need to be recharged during the day. Assuming a variable electricity price, an optimization algorithm then determines when and where EVs can be recharged at minimum cost for their owners. This optimization takes into account the individual mobility constraints of each vehicle, as a vehicle can only be charged while it is stopped and its owner is performing an activity. From this information, the aggregated electricity demand for Flanders is obtained, identifying the most overloaded areas at the critical hours. Finally, the activities EV owners are engaged in during their recharging periods are also analysed, and from this analysis different actions for deploying public charging points in different areas and for different activities are proposed.
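
The cost-minimising recharging idea can be illustrated with a very small sketch: given hourly electricity prices and the hours in which a vehicle is parked at an activity, the energy still required for the day's trips is allocated to the cheapest feasible hours. This is only a simplified, single-vehicle illustration of the principle; the charger power, prices and parking pattern below are assumptions, not values from the study.

```python
# A minimal sketch of the cost-minimisation idea described above (not the
# authors' model): a vehicle's required energy is allocated to the cheapest
# hours in which it is parked at an activity, subject to the charger power limit.
def schedule_charging(required_kwh, parked_hours, price_per_kwh, charger_kw=3.7):
    """Greedy minimum-cost schedule for one vehicle.

    required_kwh : energy still needed to complete the day's trips
    parked_hours : iterable of hour indices (0-23) when the car is stopped
    price_per_kwh: list of 24 hourly prices
    Returns {hour: kWh charged in that hour}.
    """
    plan, remaining = {}, required_kwh
    for hour in sorted(parked_hours, key=lambda h: price_per_kwh[h]):
        if remaining <= 0:
            break
        energy = min(charger_kw, remaining)   # one-hour slot at full charger power
        plan[hour] = energy
        remaining -= energy
    if remaining > 0:
        raise ValueError("parked time is insufficient to cover the daily trips")
    return plan

# Hypothetical day: 9 kWh needed, car parked at work 9-17 h and at home 20-23 h.
prices = [0.10] * 7 + [0.18] * 10 + [0.25] * 4 + [0.12] * 3
print(schedule_charging(9.0, list(range(9, 17)) + [20, 21, 22, 23], prices))
```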

Relevance: 100.00%

Abstract:

Modern digital electronics present a challenge to designers of power systems. The increasing performance of microprocessors, FPGAs (Field Programmable Gate Arrays) and ASICs (Application-Specific Integrated Circuits) requires power supplies that comply with very demanding static and dynamic requirements. Specifically, these power supplies are low-voltage/high-current DC-DC converters that need to be designed to exhibit low output-voltage ripple and low output-voltage deviation under high slew-rate load transients. Additionally, depending on the application, other requirements need to be met, such as providing the load with "Dynamic Voltage Scaling" (DVS), where the converter needs to change the output voltage as fast as possible without overshoot, or "Adaptive Voltage Positioning" (AVP), where the output voltage is slightly reduced as the output power increases. From the point of view of industry, the figures of merit of these converters are cost, efficiency and size/weight. Ideally, industry needs a converter that is cheaper, more efficient, smaller and that still meets the dynamic requirements of the application. In this context, several approaches to improving the figures of merit of these power supplies are followed in industry and academia, such as improving the topology of the converter, improving the semiconductor technology and improving the control. Indeed, the control is a fundamental part of these applications, since a very fast control makes it easier for the topology to comply with the strict dynamic requirements and, consequently, gives the designer a larger margin of freedom to improve the cost, efficiency and/or size of the power supply. This thesis investigates how to design and implement very fast controls for the Buck converter. It proves that sensing the output voltage is all that is needed to achieve an almost time-optimal response, and it proposes a unified design guideline for controls that only sense the output voltage. Then, in order to assure robustness in very fast controls, a very accurate modelling and stability analysis of DC-DC converters is proposed that takes into account sensing networks and critical parasitic elements. Using this modelling approach, an optimization algorithm that takes into account component tolerances and distorted measurements is also proposed. With this algorithm, state-of-the-art very fast analog controls are compared, and their ability to achieve a fast dynamic response is ranked according to the output capacitor used. Additionally, a technique to improve the dynamic response of the controllers is proposed. All the proposals are corroborated by extensive simulations and experimental prototypes. Overall, this thesis serves as a methodology for engineers to design and implement fast and robust controls for Buck-type converters.
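
The link between the output capacitor and the achievable dynamic response can be illustrated with a standard, idealised estimate: if the control is time-optimal, the duty cycle saturates during a load step and the inductor current ramps at its maximum slew rate while the output capacitor supplies (or absorbs) the current difference. The sketch below computes this textbook bound; it is not the thesis' modelling or optimization algorithm, it ignores ESR and delays, and the converter values are assumptions.

```python
# A hedged, textbook-style estimate (not the thesis' algorithm) of the minimum
# output-voltage deviation a Buck converter can achieve under a load step when
# the control is time-optimal: the duty cycle saturates immediately, the
# inductor current ramps at its maximum slew rate, and the output capacitor
# supplies the difference. ESR and delays are ignored in this idealisation.
def min_deviation_loading(delta_i, L, C, vin, vout):
    """Voltage dip for a load step-up of delta_i amps (duty saturated at 1)."""
    t_rise = delta_i * L / (vin - vout)          # time for i_L to reach the new load
    charge = 0.5 * delta_i * t_rise              # charge drawn from the capacitor
    return charge / C, t_rise

def min_deviation_unloading(delta_i, L, C, vout):
    """Voltage overshoot for a load step-down of delta_i amps (duty saturated at 0)."""
    t_fall = delta_i * L / vout
    charge = 0.5 * delta_i * t_fall
    return charge / C, t_fall

# Hypothetical point-of-load converter: 12 V -> 1.2 V, L = 1 uH, C = 200 uF, 10 A step.
dip, t1 = min_deviation_loading(10.0, 1e-6, 200e-6, 12.0, 1.2)
peak, t2 = min_deviation_unloading(10.0, 1e-6, 200e-6, 1.2)
print(f"loading dip    ~ {dip*1e3:.1f} mV over {t1*1e6:.2f} us")
print(f"unloading peak ~ {peak*1e3:.1f} mV over {t2*1e6:.2f} us")
```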

Relevance: 100.00%

Abstract:

Fuel consumption is a feature of cars that is continuously being improved, owing to fuel prices and an increasing environmental awareness. This doctoral dissertation describes an optimization algorithm that decreases fuel consumption by taking into account the technical specifications of the vehicle, the terrain profile of the road and the traffic conditions of the trip. The algorithm calculates the optimal speed profile that completes a trip within a specified travel time; the calculation considers the road slope and the expected traffic conditions during the trip. The optimization algorithm is also able to react to changing traffic conditions and retunes the optimal speed profile so that the destination is reached within the specified arrival time. The optimization is applied to a conventional vehicle and to a series hybrid electric vehicle (SHEV). The consumption data used by the optimization algorithm are obtained from quasi-static simulation models of the vehicles. The algorithm is based on Dynamic Programming and divides the fuel consumption optimization problem into two parts. The first part reduces fuel consumption according to foreseeable traffic conditions: it calculates an average speed profile that avoids, when possible, the traffic jams on the road, since time lost in a traffic jam must be made up by a later increase in average speed, which would raise fuel consumption. The second part calculates the optimal speed profile according to the road slope and the remaining travel time. Because fuel consumption increases as the time available to finish the trip decreases, this optimization weights the relative influence of fuel consumption and travel time with two penalty factors. Although the achievable savings depend on the penalty factors and on the road slope, the optimal speed profiles reduce consumption with respect to a constant-speed profile with the same travel time: simulations indicate fuel savings of up to 8.9% for the conventional car and electric-energy savings of up to 2.8% for the series hybrid vehicle. The two parts of the optimization are combined during the real-time execution of the algorithm: the average speed profile obtained from the traffic-based optimization provides the values of the two penalty factors used by the slope-based optimization. Although the savings of the joint optimization depend on the traffic conditions, the road slope, the remaining travel time and the characteristics of the vehicle, simulations show that it can reduce the energy consumption of both vehicle types by more than 6% with respect to optimizations that cannot avoid traffic jams on the road.
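
A minimal sketch of the Dynamic Programming idea is given below: the route is divided into segments of known slope, the speed on each segment is chosen from a discrete set, and the cheapest combination of speeds that still meets the travel-time budget is kept. The fuel model and every parameter value are illustrative assumptions, not the quasi-static models used in the thesis.

```python
# A minimal Dynamic Programming sketch of the speed-profile optimization idea
# (not the thesis' implementation). The state is (segment index, travel-time
# slots used so far); the decision is the speed on the next segment; the cost
# is an illustrative longitudinal-dynamics fuel estimate.
import math

MASS, G, CD_A, RHO, CRR, EFF, IDLE = 1500.0, 9.81, 0.7, 1.2, 0.01, 0.35, 0.5  # assumed values

def segment_fuel(v, slope, length):
    """Illustrative fuel use (arbitrary units) and time to cover `length` m at `v` m/s."""
    power = (MASS * G * (math.sin(slope) + CRR) + 0.5 * RHO * CD_A * v ** 2) * v
    time = length / v
    return (max(power, 0.0) / EFF + IDLE) * time * 1e-6, time

def optimal_profile(segments, time_budget, speeds=(14, 19, 25, 33), dt=5.0):
    """segments: list of (length_m, slope_rad). Returns (fuel, chosen speeds)."""
    horizon = int(time_budget / dt)
    states = {0: (0.0, [])}                      # time slots used so far -> (min fuel, plan)
    for length, slope in segments:
        nxt = {}
        for used, (fuel_so_far, plan) in states.items():
            for v in speeds:
                fuel, time = segment_fuel(v, slope, length)
                used2 = used + int(math.ceil(time / dt))
                if used2 > horizon:
                    continue                     # this choice would miss the arrival time
                cand = (fuel_so_far + fuel, plan + [v])
                if used2 not in nxt or cand[0] < nxt[used2][0]:
                    nxt[used2] = cand
        states = nxt
    if not states:
        raise ValueError("the time budget cannot be met with the available speeds")
    return min(states.values(), key=lambda c: c[0])

# Hypothetical 6 km route (flat, 3% climb, flat, 2% descent) and a 420 s budget.
route = [(2000, 0.0), (1500, 0.03), (1500, 0.0), (1000, -0.02)]
print(optimal_profile(route, 420.0))
```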

Relevance: 100.00%

Abstract:

Due to the growing size of the data handled by many current information systems, many of the algorithms that traverse these structures lose performance when performing searches on them. Because these data are in many cases represented by node-vertex structures (graphs), the Graph500 challenge was created in 2009. Previously, other challenges such as Top500 measured performance based on the computing capacity of systems by means of LINPACK tests. In the case of Graph500, the measurement is performed by running a breadth-first search (BFS) algorithm on graphs. The BFS algorithm is one of the building blocks of many other graph algorithms, such as single-source shortest paths (SSSP), shortest path or betweenness centrality, so an improvement in BFS helps improve the algorithms that rely on it. Problem analysis: the BFS algorithm used on high-performance computing (HPC) systems is usually a distributed version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each processor then computes its part and distributes its results to the other systems. Because the gap between the processing speed of each node and the data-transfer speed of the interconnection network is very large (with the interconnection network at a disadvantage), many approaches have been taken to reduce the performance lost in transfers. Regarding the initial partitioning of the graph, the traditional approach (known as 1D-partitioned graph) assigns to each node a fixed set of vertices that it will process. To decrease data traffic, another partitioning (2D) was proposed in which the distribution is based on the edges of the graph rather than its vertices; this partitioning reduces network traffic from a proportion of O(NxM) to O(log(N)). Although there have been other approaches to reducing transfers, such as an initial reordering of the vertices to add locality within nodes, or dynamic partitionings, the approach proposed in this work consists of applying recent compression techniques from large-scale data systems, such as high-volume databases or internet search engines, to compress the data transferred between nodes. ABSTRACT: The breadth-first search (BFS) algorithm is the foundation and building block of many higher graph-based operations such as spanning trees, shortest paths and betweenness centrality. The importance of this algorithm increases every day because it is a key requirement for many data structures that are becoming popular nowadays and that turn out to be internally graph structures. When the BFS algorithm is parallelised and the data are distributed over several processors, research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. In this work an alternative compression mechanism is presented. It differs from existing methods in that it is aware of characteristics of the data that may benefit the compression. Apart from this, we also perform a further test to see how this algorithm (in a distributed scenario) benefits from traditional instruction-based optimizations. Last, we review current supercomputing techniques and the related work being done in the area.
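
For reference, the level-synchronous (frontier-based) form of BFS that Graph500 measures, and that the distributed versions discussed above parallelise, looks roughly as follows; this is a minimal single-process sketch, with comments indicating where a 1D- or 2D-partitioned implementation would exchange data.

```python
# A minimal sketch of level-synchronous BFS: the frontier of the current level
# is expanded to discover the next level. In a distributed (1D or 2D
# partitioned) implementation, each rank owns part of the graph and the ranks
# exchange the newly discovered vertices between levels.
def bfs_levels(adjacency, source):
    """adjacency: dict vertex -> iterable of neighbours. Returns vertex -> level."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:               # in a distributed run, each rank owns a slice
            for v in adjacency[u]:       # of the vertices (1D) or of the edges (2D)
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier         # ranks would exchange discovered vertices here
    return level

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_levels(graph, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```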

Relevance: 100.00%

Abstract:

In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error), while the number of nodes and dof of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh that minimises the selected objective function, the steepest descent gradient technique has been applied as the optimization algorithm. However, this efficient technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonably regular initial meshes normally used in practice, and this conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e. the curves tangent at each point to the principal direction lines of the elastic problem to be solved, and the nodes should be regularly spaced in order to build regular elements. This means ii-meshes are usually obtained by iteration: the elastic analysis is first carried out with the initial FE mesh; using the results of this analysis, the net of isostatic lines can be drawn and a first trial ii-mesh can be built. This first ii-mesh can be improved, if necessary, by analysing the problem again and generating, after the FE analysis, a new and improved ii-mesh. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
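
A minimal sketch of the steepest-descent node-repositioning loop is shown below. The gradient of the objective with respect to the movable node coordinates is estimated by finite differences, which also illustrates why the technique demands large computational power (one objective evaluation, i.e. one FE solve, per coordinate per iteration). The quadratic objective used here is only a stand-in so that the sketch runs without an FE solver; it is not the paper's formulation.

```python
# A minimal sketch of mesh improvement by steepest descent (not the paper's
# code): interior node coordinates are moved along the negative gradient of an
# objective function (e.g. total potential energy returned by an FE solve),
# with the gradient estimated by finite differences.
import numpy as np

def improve_mesh(nodes, objective, step=0.05, iters=50, eps=1e-6):
    """nodes: (n, 2) array of movable node coordinates. Returns an improved copy."""
    x = nodes.astype(float).copy()
    for _ in range(iters):
        base = objective(x)
        grad = np.zeros_like(x)
        for idx in np.ndindex(*x.shape):          # finite-difference gradient
            x[idx] += eps
            grad[idx] = (objective(x) - base) / eps
            x[idx] -= eps
        norm = np.linalg.norm(grad)
        if norm < 1e-10:
            break
        x -= step * grad / norm                   # fixed-length steepest-descent step
    return x

# Stand-in objective: nodes are attracted towards regularly spaced target points.
targets = np.array([[0.25, 0.5], [0.5, 0.5], [0.75, 0.5]])
toy_objective = lambda x: float(np.sum((x - targets) ** 2))
start = np.array([[0.1, 0.9], [0.6, 0.2], [0.9, 0.8]])
print(improve_mesh(start, toy_objective))
```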

Relevance: 100.00%

Abstract:

In this paper we present different error measurements with the aim of evaluating the quality of the approximations generated by the GNG3D method for mesh simplification. The first phase of this method consists of running the GNG3D algorithm, described in the paper, whose primary goal is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase, the information provided by the optimization algorithm is used to reconstruct the faces, thus obtaining the optimized mesh. The implementation of three error functions, named Eavg, Emax and Esur, allows us to control the error of the simplified model, as shown in the examples studied.
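
The exact definitions of Eavg, Emax and Esur are given in the paper; as a hedged illustration of this kind of error measurement, the sketch below takes the mean and maximum distance from each original vertex to its nearest vertex in the simplified model.

```python
# A hedged illustration of the kind of error measurements discussed above. As
# an assumption for this sketch (not the paper's definitions), Eavg and Emax
# are taken as the mean and maximum distance from each original vertex to its
# nearest vertex in the simplified model.
import numpy as np

def simplification_errors(original_vertices, simplified_vertices):
    """Both arguments are (n, 3) arrays; returns (Eavg, Emax) approximations."""
    orig = np.asarray(original_vertices, float)
    simp = np.asarray(simplified_vertices, float)
    # pairwise distances from every original vertex to every simplified vertex
    d = np.linalg.norm(orig[:, None, :] - simp[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(nearest.mean()), float(nearest.max())

original = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 0.1]])
simplified = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
print(simplification_errors(original, simplified))   # small Eavg, Emax ~ 0.71
```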

Relevance: 100.00%

Abstract:

Doctoral thesis, Biomedical Engineering and Biophysics, Universidade de Lisboa, Faculdade de Ciências, 2016.

Relevance: 100.00%

Abstract:

The reduction of fossil fuel consumption and the development of energy-saving technologies are of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research in many industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of such technologies is currently impractical, both because of technical constraints and because of the size of the share of energy production, currently supplied by fossil sources, that the alternative technologies would have to cover. The optimization of energy production and management, on the other hand, combined with the development of technologies for reducing energy consumption, represents an adequate solution to the problem that can also be integrated within shorter time horizons. The objective of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, to be used to achieve a reduction in fuel consumption and an optimization of energy efficiency. The developed methodology relies on a numerical modelling approach that exploits the predictive capabilities deriving from a mathematical representation of the processes to develop optimization strategies under realistic operating conditions. In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the analysed plants, in order to obtain the best performance during actual operation. In this thesis the energy optimization problem is addressed for three different technological applications. In the first, a multi-source plant supplying the energy demand of a commercial building is considered. Since this system uses several technologies to produce the thermal and electric energy required by the users, the correct load-sharing strategy that guarantees the maximum energy efficiency of the plant must be identified. Based on a simplified model of the plant, the problem is solved by applying a deterministic Dynamic Programming algorithm, and the results are compared with those obtained with a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. In the second application, the design of a hybrid solution for energy recovery from a hydraulic excavator is investigated. Since several technological layouts can be conceived to implement this solution, and the introduction of additional components requires proper sizing, a methodology is needed to evaluate the maximum performance obtainable from each of these alternative solutions. The comparison between the different layouts is therefore carried out on the basis of the energy performance of the machine during a standardized digging cycle, estimated with the aid of a detailed model of the plant. Since the addition of energy-recovery devices introduces additional degrees of freedom into the system, it is also necessary to determine their optimal control strategy in order to evaluate the maximum performance obtainable from each layout. This problem is again solved with a Dynamic Programming algorithm that exploits a simplified model of the system, developed for this purpose. Once the optimal performance of each design solution has been determined, a fair comparison between the alternatives can be made. In the third and last application, an organic Rankine cycle (ORC) plant for waste heat recovery from the exhaust gases of passenger cars is analysed. Although ORC plants are potentially able to produce significant increases in the fuel savings of a vehicle, their correct operation requires the development of complex control strategies able to cope with the variability of the heat source of the process; moreover, while maximizing the fuel savings, the system must be kept within safe operating conditions. To address the problem, a robust and effective model of the plant was built, based on the Moving Boundary methodology, to simulate the phase-change dynamics of the organic fluid and estimate the performance of the plant. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed. The results obtained with this controller were compared with those obtainable with a classic proportional-integral (PI) controller, again showing the advantages, from an energy standpoint, of adopting an optimal control strategy.
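
A minimal sketch of the Particle Swarm Optimization loop of the kind used inside the model predictive controller is given below; here it merely minimises a generic test function standing in for the predicted-cost evaluation, and the swarm parameters are assumptions.

```python
# A minimal Particle Swarm Optimization sketch: particles track their personal
# best and the swarm's global best and update velocities accordingly. The test
# function and all PSO parameters are assumptions for illustration only.
import random

def pso_minimise(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Toy objective standing in for the predicted-cost evaluation of the MPC.
rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(pso_minimise(rosenbrock, [(-2, 2), (-2, 2)]))
```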

Relevance: 100.00%

Abstract:

We report the formation and structural properties of co-crystals containing gemfibrozil and hydroxy derivatives of t-butylamine H2NC(CH3)3-n(CH2OH)n, with n=0, 1, 2 and 3. In each case, a 1:1 co-crystal is formed, with transfer of a proton from the carboxylic acid group of gemfibrozil to the amino group of the t-butylamine derivative. All of the co-crystal materials prepared are polycrystalline powders, and do not contain single crystals of suitable size and/or quality for single crystal X-ray diffraction studies. Structure determination of these materials has been carried out directly from powder X-ray diffraction data, using the direct-space Genetic Algorithm technique for structure solution followed by Rietveld refinement. The structural chemistry of this series of co-crystal materials reveals well-defined structural trends within the first three members of the family (n=0, 1, 2), but significantly contrasting structural properties for the member with n=3. © 2007 Elsevier Inc. All rights reserved.
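
As a hedged, generic illustration of the direct-space Genetic Algorithm idea: candidate structures are encoded as parameter vectors (positions, orientations and torsion angles in the real method) and evolved by selection, crossover and mutation against a fitness that, in the real method, measures the agreement between calculated and measured powder diffraction patterns. The sketch below uses a placeholder fitness so that it runs; it is not the software used in the study.

```python
# A hedged, generic sketch of the genetic-algorithm loop behind direct-space
# structure solution: candidate "structures" are encoded as parameter vectors
# (here just numbers in [0, 1]) and evolved by selection, one-point crossover
# and mutation. The toy fitness is only a placeholder for the powder-pattern
# agreement factor used in the real method.
import random

def genetic_minimise(fitness, n_params, pop=30, gens=60, mut=0.1):
    population = [[random.random() for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness)
        parents = scored[:pop // 2]                      # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                    # mutate one gene
                child[random.randrange(n_params)] = random.random()
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, fitness(best)

# Placeholder fitness: distance from an arbitrary target vector.
target = [0.2, 0.5, 0.8]
toy_fitness = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
print(genetic_minimise(toy_fitness, 3))
```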

Relevance: 100.00%

Abstract:

Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation, in which agents must collect and process mail batches without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete; we also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
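
As a hedged illustration of decentralised specialisation in task allocation, the sketch below implements the classic response-threshold rule (an agent accepts a task type with probability s²/(s² + θ²), and performing a task lowers its threshold for that type). This is a standard model from the swarm-intelligence literature used here for illustration, not the thesis' specific rule set, and all parameter values are assumptions.

```python
# A hedged sketch of the classic response-threshold specialisation rule (not
# the thesis' rules): each agent accepts a task type with probability
# s^2 / (s^2 + theta^2); performing a task lowers the agent's threshold for it
# (specialisation) and raises the thresholds for the other task types.
import random

def simulate(n_agents=10, n_tasks=2, steps=200, learn=0.1):
    thetas = [[random.uniform(1, 10) for _ in range(n_tasks)] for _ in range(n_agents)]
    stimulus = [5.0] * n_tasks
    done = [0] * n_tasks
    for _ in range(steps):
        for a in range(n_agents):
            for t in range(n_tasks):
                s, th = stimulus[t], thetas[a][t]
                if random.random() < s * s / (s * s + th * th):
                    done[t] += 1
                    stimulus[t] = max(stimulus[t] - 0.1, 0.0)   # demand is served
                    thetas[a][t] = max(thetas[a][t] - learn, 0.1)
                    for other in range(n_tasks):                # de-specialise elsewhere
                        if other != t:
                            thetas[a][other] = min(thetas[a][other] + learn, 10.0)
                    break                                        # one task per agent per step
        stimulus = [min(s + 0.05 * n_agents, 20.0) for s in stimulus]  # new demand arrives
    return done

print(simulate())   # number of tasks of each type performed over the run
```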

Relevance: 100.00%

Abstract:

MOTIVATION: There is much interest in reducing the complexity inherent in the representation of the 20 standard amino acids within bioinformatics algorithms by developing a so-called reduced alphabet. Although there is no universally applicable residue grouping, there are numerous physicochemical criteria upon which groupings can be based. Local descriptors are a form of alignment-free analysis whose efficiency depends on the correct selection of amino acid groupings. RESULTS: Within the context of G-protein coupled receptor (GPCR) classification, an optimization algorithm was developed that is able to identify the most efficient grouping when used to generate local descriptors. The algorithm was inspired by the relatively new computational intelligence paradigm of artificial immune systems. A number of amino acid groupings produced by this algorithm were evaluated with respect to their ability to generate local descriptors capable of supporting an accurate classification algorithm for GPCRs.
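
A small illustration of what applying a reduced alphabet looks like in practice is given below: residues are mapped to groups and a simple composition descriptor is computed. The four-group mapping shown is a common hydrophobicity/charge-style example chosen purely for illustration; it is not the optimized grouping identified by the paper's artificial-immune-system algorithm.

```python
# A hedged illustration of applying a reduced amino acid alphabet before
# computing a simple composition descriptor. The grouping below is an example
# chosen for illustration; it is NOT the optimized grouping found by the
# paper's algorithm.
EXAMPLE_GROUPS = {
    "hydrophobic": "AVLIMFWC",
    "polar": "STYNQGP",
    "positive": "KRH",
    "negative": "DE",
}
LETTER_TO_GROUP = {aa: g for g, members in EXAMPLE_GROUPS.items() for aa in members}

def reduce_sequence(seq):
    """Rewrite a protein sequence in the reduced four-letter alphabet."""
    return [LETTER_TO_GROUP[aa] for aa in seq.upper() if aa in LETTER_TO_GROUP]

def composition_descriptor(seq):
    """Fraction of residues falling into each reduced group (a whole-sequence
    descriptor; real local descriptors are also computed over windows)."""
    reduced = reduce_sequence(seq)
    return {g: reduced.count(g) / len(reduced) for g in EXAMPLE_GROUPS}

print(composition_descriptor("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```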

Relevance: 100.00%

Abstract:

Integrated supplier selection and order allocation is an important decision for both designing and operating supply chains. This decision is often influenced by the concerned stakeholders, suppliers, plant operators and customers in different tiers. As firms continue to seek competitive advantage through supply chain design and operations, they aim to create optimized supply chains. This calls for, on the one hand, the consideration of multiple conflicting criteria and, on the other, the consideration of uncertainties in demand and supply. Although there are studies on supplier selection that use advanced mathematical models for a stochastic approach, multiple criteria decision making techniques or multiple stakeholder requirements separately, to the authors' knowledge there is no work that integrates these three aspects in a common framework. This paper proposes an integrated method for dealing with such problems using a combined Analytic Hierarchy Process-Quality Function Deployment (AHP-QFD) and chance-constrained optimization algorithm approach that selects appropriate suppliers and allocates orders optimally between them. The effectiveness of the proposed decision support system has been demonstrated through application and validation in the bioenergy industry.
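
The AHP step of such an AHP-QFD approach can be sketched briefly: criteria weights are derived from a pairwise comparison matrix via its principal eigenvector and checked with a consistency ratio. The comparison values below are hypothetical, and the sketch covers only this sub-step, not the full chance-constrained order-allocation model.

```python
# A minimal sketch of the AHP weighting step in an AHP-QFD approach (not the
# paper's full decision support system): criteria weights are the principal
# eigenvector of a reciprocal pairwise comparison matrix, and Saaty's
# consistency ratio is checked. The comparison values are hypothetical.
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # standard Saaty RI values

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal comparison matrix."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                    # principal eigenvector -> weights
    ci = (eigvals[k].real - n) / (n - 1)            # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

# Hypothetical comparison of three supplier-selection criteria:
# cost vs. quality vs. delivery reliability.
matrix = [[1, 3, 5],
          [1 / 3, 1, 2],
          [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(matrix)
print(weights, cr)   # weights ~ [0.65, 0.23, 0.12]; CR well below the 0.1 threshold
```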

Relevance: 100.00%

Abstract:

Comprehensive coverage is crucial for communication, supply and transportation networks, yet it is limited by the requirement of extensive infrastructure and heavy energy consumption. Here, we draw an analogy between spins in an antiferromagnet and outlets in supply networks, and apply techniques from the study of disordered systems to elucidate the effects of balancing coverage and supply costs on network behavior. A readily applicable coverage optimization algorithm is derived. Simulation results show that magnetized and antiferromagnetic domains emerge and coexist to balance the need for coverage and energy saving. The scaling of parameters with system size agrees with the continuum approximation in two dimensions and the tree approximation in random graphs. Owing to frustration caused by the competition between coverage and supply cost, a transition between easy and hard computation regimes is observed. We further suggest a local expansion approach that greatly simplifies the message updates, which sheds light on simplifications in other problems. © 2014 American Physical Society.
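
As a much simpler, hedged illustration of the coverage-versus-supply-cost trade-off (not the message-passing algorithm derived in the paper), the sketch below adds outlets to a small network greedily as long as the coverage gained outweighs the cost of an extra outlet.

```python
# A hedged, greedy illustration of balancing coverage against supply cost (not
# the paper's message-passing method): outlets are added one at a time while
# the value of the newly covered nodes exceeds the cost of an extra outlet.
def greedy_coverage(adjacency, outlet_cost, uncovered_penalty=1.0):
    """adjacency: dict node -> set of neighbours. An outlet covers itself and
    its neighbours. Returns (chosen outlets, covered nodes)."""
    nodes = set(adjacency)
    covered, outlets = set(), set()

    def gain(candidate):
        newly = ({candidate} | adjacency[candidate]) - covered
        return uncovered_penalty * len(newly) - outlet_cost

    while True:
        best = max(nodes - outlets, key=gain, default=None)
        if best is None or gain(best) <= 0:
            break
        outlets.add(best)
        covered |= {best} | adjacency[best]
    return outlets, covered

# Hypothetical small supply network (a ring of six sites with one chord).
net = {0: {1, 3, 5}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2, 4}, 4: {3, 5}, 5: {0, 4}}
print(greedy_coverage(net, outlet_cost=2.0))
```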

Relevance: 100.00%

Abstract:

Science education has been undergoing redefinitions, contestations and new contributions from research on science teaching. One such contribution is the idea of scientific and technological literacy, which allows citizens not only to know science but also to understand aspects of the construction and motivation of scientific and technological research. In line with this idea are the Science-Technology-Society (STS) studies, which since the 1970s have contributed to science teaching and learning based on an understanding of its relationships with society in the Western countries of the North. In Brazil, this approach began to gain prominence in the 1990s, when the first essays on the theme were published. Currently, there is a clear influence of this approach on the national curriculum guidelines, especially for the area of Natural Sciences, and also on the textbooks chosen by the High School National Program (Programa Nacional do Ensino Médio). However, there seems to be a gap in the discussion of this approach within the specific curricular components seen at university. Thus, this study adopts the STS approach for the preparation of complementary educational material on acid-base concepts studied in the General Chemistry course of the Natural Sciences degree program. To this end, a bibliographical survey was carried out to establish the state of the art of these concepts in the science-teaching literature. It was divided into two stages: a systematic study (with sixteen journals chosen according to Qualis-Capes) and an unsystematic study based on direct searches in databases and in the references of the papers found in the systematic study. The content of these studies was analysed, and the categories chosen a priori were the level of education, the acid-base theory adopted, and the strategy/theoretical frame of reference adopted. A second stage aimed at identifying the attitudes and beliefs on STS (Science-Technology-Society) and CSE (Chemistry-Society-Environment) of students in teacher and technologist training courses at three different institutions: UTFPR, UFRN and IFRN. In this study, two questionnaires were used, composed of a Likert scale, a semantic differential scale and open questions. The reliability of the quantitative data was estimated with Cronbach's alpha, and the data were treated with classical statistics, using the mean as the measure of central tendency and the mean deviation as the measure of dispersion. The qualitative data were treated by content analysis, with categories drawn from the reading of the answers. In the third stage, the presence of STS and CSE content was analysed in the acid-base chapters of nine General Chemistry textbooks frequently used in degree programs at public institutions in the state of Rio Grande do Norte. The results showed that there are few proposals for teaching acids and bases, that they are generally aimed at high school or at instrumentation-for-teaching courses, and that none targets General Chemistry. The students' attitudes and beliefs reveal a positivist point of view, based on the notion of the neutrality of Science and Technology and on a salvationist view of their mediation. The textbook analysis showed that only a small amount of STS and CSE content is found in the studied chapters, and that it is generally presented in a disjointed way with respect to the rest of the main text. Finally, as a suggestion to address the absence of STS proposals in General Chemistry textbooks, as well as the students' positivist attitudes, educational material was developed for use in the General Chemistry course at university. The material is structured to introduce a historical view of how the concepts were developed and to present the use of materials, the industrial and technological processes, and the social and environmental consequences of these activities.