270 results for PSO-teorin


Relevance:

10.00%

Publisher:

Abstract:

Organic solar cells have made great progress in efficiency and stability over the past decade. Unintentional doping is a common phenomenon in organic semiconductors. Impurities and defects in the semiconductor material can give rise to doping, and organic semiconductors can also become doped over time when they come into contact with oxygen or moisture. As other properties are optimized and efficiencies rise, doping becomes an increasingly important loss process to consider in organic solar cells. The aim of this work has been to better understand the influence of doping on organic diodes and solar cells. The idea was to use two different doping molecules to achieve intentional doping and thereby investigate, in a controlled manner, how doping affects the properties of organic semiconductors in diodes and solar cells. To investigate the doping, a measurement technique based on charge extraction by a linearly increasing voltage, known as CELIV, has been used. The theory of the measurement technique has been developed further to analyze arbitrary doping profiles and to take the doping profile into account when determining the mobility. Through the experiments performed, the method has been confirmed to work well for investigating doping in organic thin-film diodes. The two doping molecules used have been tested successfully for doping of the polymers P3HT and PBTTT. In the course of the work, an important source of unintentional doping could be identified. Molybdenum trioxide is one of the most widely used materials as a selective hole-transport layer at the anode. This work shows that molecules from a thin layer of molybdenum trioxide diffuse into the semiconductor layer and cause doping. The doping in P3HT:PCBM caused by a molybdenum trioxide layer is so high that even thin solar cells, around 100 nm, will be negatively affected by increased recombination. The doping caused by diffusion of molybdenum trioxide demonstrates the need to find new, alternative hole-transport materials. One alternative could be to use heavily doped organic semiconductors, for which the intentional doping tested here may be relevant. The experimental results included in this thesis confirm that the CELIV method is well suited for measuring doping concentrations and doping profiles in organic diodes while simultaneously providing information about charge transport in the semiconductor layer.
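
For orientation, the standard CELIV analysis (prior to the extensions developed in the thesis) extracts the mobility from the time of the extraction-current maximum. A minimal sketch, assuming the commonly used low-conductivity relation with the empirical peak-height correction:

```python
# Illustrative CELIV analysis sketch (assumed standard low-conductivity
# approximation, not the extended theory developed in the thesis).
def celiv_mobility(d, ramp_rate, t_max, delta_j=None, j0=None):
    """Estimate charge-carrier mobility from a CELIV transient.

    d         -- film thickness (m)
    ramp_rate -- slope A of the linearly increasing voltage (V/s)
    t_max     -- time of the extraction-current maximum (s)
    delta_j   -- height of the extraction peak above the capacitive
                 step (A/m^2), optional correction
    j0        -- capacitive displacement-current step (A/m^2)
    """
    correction = 1.0
    if delta_j is not None and j0 is not None:
        # empirical correction for redistribution of the electric field
        correction = 1.0 + 0.36 * delta_j / j0
    return 2.0 * d**2 / (3.0 * ramp_rate * t_max**2 * correction)

# Example: 100 nm film, 1e5 V/s ramp, extraction peak at 2 microseconds
print(celiv_mobility(100e-9, 1e5, 2e-6))
```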

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to interpret the way in which the novel Calypso metaphorizes the encounter, and failure to meet, between the Costa Rican highland (meseteña) culture and the culture of the Costa Rican Caribbean. Drawing on the historical characteristics of that encounter, Tatiana Lobo places the two cultural groups face to face and recreates the basic traits of each. From there, she develops an appraisal of both cultures that highlights their profound differences in worldview, using a highly productive metaphor that allows her, on the one hand, to recreate the white population's inability to understand the black population and, on the other, to parody the highland Costa Rican culture negatively. The device she uses is the caly(i)pso, as an expression of Afro-Caribbean culture, but also as an element evoking classical Greek culture.

Relevance:

10.00%

Publisher:

Abstract:

In this research work, a new routing protocol for Opportunistic Networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system based on a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and learning adaptation. PSONET uses Particle Swarm Optimization to drive the network traffic through a good subset of forwarder nodes. PSONET analyzes the network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better message-routing decisions. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates the everyday life of people in their work, leisure, and rest activities; a community-based mobility model, which simulates groups of people in their communities who eventually contact other people, who may or may not be part of their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, subject to the restriction map, move to this destination along the shortest path. The simulation results, obtained with the ONE simulator, show that in the community-based and random mobility scenarios the PSONET protocol achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
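
The abstract does not give PSONET's particle encoding. As a hedged illustration, a particle could be a weight vector over per-node features (for example connection density and past delivery success, both hypothetical here), tuned by a standard PSO loop such as the following sketch:

```python
# Minimal continuous PSO sketch. The encoding (a weight vector over
# per-node forwarding features) is hypothetical; in a simulator such as
# The ONE, fitness(x) would score the delivery rate achieved when
# messages are forwarded according to the weights x.
import random

def pso(fitness, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy usage: minimize the squared norm of a 3-feature weight vector
print(pso(lambda x: sum(v * v for v in x), dim=3))
```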

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research is to study the sedimentation mechanism, by means of mathematical modeling, in access channels affected by tidal currents. The most important factor for understanding the sedimentation process in any aquatic environment is the flow pattern of that environment. The flow pattern is affected by the geometry and shape of the environment as well as by the type of forcing present in the area. The area under study in this thesis comprises Bushehr Gulf and its access channels (inner and outer). The study employs hydrodynamic modeling with unstructured, non-overlapping triangular grids, using the finite volume method, at two scales: a large scale (200 m to 7.5 km) and a small scale (50 m to 7.5 km), over two time spans of 15 days and 3.5 days, to obtain the flow patterns. The 2D governing equations used in the model are the depth-averaged shallow water equations. Turbulence modeling is required to calculate the eddy viscosity coefficient, which is done with the Smagorinsky model using a coefficient of 0.3. In addition to the flow modeling at the two scales, the 3.5-day tidal current modeling data were used to study the sediment equilibrium in the area and the channels. The model is able to cover the areas undergoing settlement and erosion and to identify the effects of tidal currents on these processes. The data required for the above models, such as current and sediment data, were obtained through measurements in Bushehr Gulf and the access channels, carried out in the PSO's (Port and Shipping Organization) project titled "The Sedimentation Modeling in Bushehr Port" in 1379. Hydrographic data were obtained from Admiralty charts (2003) and the Cartography Organization (1378, 1379). The results of the modeling include cross-shore currents on the northern and northwestern coasts of Bushehr Gulf during the neap tide, and the same kind of currents on the northern and northeastern coasts of the Gulf during the spring tide. These currents wash fine particles (silt, clay, and mud) from the coastal bed, which consists mainly of mud and clay with some silt, and carry them away. In this regard, the role of the sediments of the islands in this area, including islands formed from deposited dredged material, should not be ignored. The 3.5-day modeling shows that cross-channel currents create settlement areas in the inner and outer channels over a tidal period. During the neap tide, the current enters the channels from the upstream bend of the two channels and the outer channel, then crosses the outer channel obliquely in some places. Oblique, or even nearly perpendicular, currents coming down the upper slope of the inner channel between buoys No. 15 and No. 18 interact with the channel-parallel currents and create secondary oblique currents, which exit as down-slope currents in the channel; these cause deposition of sediments as well as settling of the suspended sediments they carry. In addition, in the outer channel, the speed of the channel-parallel currents increases in the bend of the channel, which is naturally deeper; this leads to erosion and suspension of sediments in that area. The suspended sediments carried by this current, which is parallel to the channel axis, slow down when they pass through the shallower part of the channel between buoys No. 7 and 8 and buoys No. 5 and 6.
There, the suspended sediment settles, making these places even shallower. Furthermore, oblique upstream flow leads to settlement of sediments on the up-slope side and further reduces the depth at these locations. On the down-slope side of the channel, by contrast, the current and sediment modeling indicates that the current speed increases, suspending and carrying away particles from the down-slope channel bed; the sediments then settle over a wide area downstream of both channels. At the end of the neap tide, this process, together with circulations in the area, produces eddies that cause sedimentation there. During the spring tide, some of these active sedimentation areas feed both channels in the reverse process. The processes described above, and the locations of sedimentation and erosion in the inner and outer channels, are validated by the sediment equilibrium modeling. The model can estimate the suspended load, bed load, and boundary-layer thickness at every point of both channels and across the modeled area.
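
For reference, the governing equations named in the abstract take the following common form; the exact source terms used in the thesis may differ. Here h is the water depth, u and v the depth-averaged velocities, eta the free-surface elevation, nu_t the eddy viscosity, tau_s and tau_b the surface and bed stresses, Delta the grid scale, and c_s = 0.3 as stated in the abstract:

```latex
% Depth-averaged shallow water equations (one common conservative form)
\begin{align}
  \frac{\partial h}{\partial t}
    + \frac{\partial (hu)}{\partial x}
    + \frac{\partial (hv)}{\partial y} &= 0, \\
  \frac{\partial (hu)}{\partial t}
    + \frac{\partial (hu^2)}{\partial x}
    + \frac{\partial (huv)}{\partial y}
    &= -gh\frac{\partial \eta}{\partial x}
       + \nabla\cdot\left(h\,\nu_t\,\nabla u\right)
       + \frac{\tau_{sx} - \tau_{bx}}{\rho}, \\
  \frac{\partial (hv)}{\partial t}
    + \frac{\partial (huv)}{\partial x}
    + \frac{\partial (hv^2)}{\partial y}
    &= -gh\frac{\partial \eta}{\partial y}
       + \nabla\cdot\left(h\,\nu_t\,\nabla v\right)
       + \frac{\tau_{sy} - \tau_{by}}{\rho}.
\end{align}
% Smagorinsky eddy viscosity with c_s = 0.3 (as stated in the abstract)
\begin{equation}
  \nu_t = (c_s\,\Delta)^2
          \sqrt{2\left(\frac{\partial u}{\partial x}\right)^2
                + \left(\frac{\partial u}{\partial y}
                      + \frac{\partial v}{\partial x}\right)^2
                + 2\left(\frac{\partial v}{\partial y}\right)^2}
\end{equation}
```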

Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation — Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2015.

Relevance:

10.00%

Publisher:

Abstract:

A randomized controlled clinical trial was carried out to compare the variations in heart rate, blood pressure, pulse oximetry, halothane consumption, and complications in two groups of children weighing up to 20 kg who received general anesthesia with either endotracheal intubation or a laryngeal mask airway. Two equal groups of 50 children undergoing surgical procedures under general inhalational anesthesia with halothane were randomly assigned at Hospital VCM between January 2000 and June 2001. Endotracheal intubation was used in the TET group and only a laryngeal mask airway in the LMA group. Hemodynamics, pulse oximetry, and halothane consumption were measured. Results: the groups were comparable in the demographic variables except for sex, a variable with no impact on the hypothesis test. The differences in heart rate and blood pressure between the groups were not statistically significant, but pulse oximetry was, with higher values in the LMA group (p<0.05). Halothane consumption was similar. Respiratory depression was more frequent in the TET group (18 vs. 8). Glottic spasm occurred in both groups (4), and there was one case (2) of vomiting in the TET group. None of these differences were statistically significant. Conclusions: airway management with the LMA is an alternative of comparable utility to endotracheal intubation when its precise indications are observed. The LMA is a suitable, non-invasive, versatile device of great help in patients with and without difficult airways, and it should be included in the anesthesiologist's arsenal.

Relevance:

10.00%

Publisher:

Abstract:

This article addresses the identification of fuzzy parameters using particle swarm optimization (PSO) techniques and its application to the control of an active suspension system. In particular, a zero-order Takagi-Sugeno controller with a standard fuzzy partition of its antecedents is adopted. Unlike previous work, where learning was limited to the scaling parameters of the controller, the proposed method allows the fuzzy sets of the antecedents to be optimized. The proposed methodology has been tested successfully on a physical quarter-vehicle system.
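
As a hedged sketch of what "optimizing the antecedent fuzzy sets" means in a zero-order Takagi-Sugeno controller: with triangular membership functions, the triangle vertices are the antecedent parameters the PSO would tune. The rule structure below is hypothetical, not the article's actual controller:

```python
# Zero-order Takagi-Sugeno inference sketch with triangular membership
# functions. The triangle vertices (the antecedent parameters) are what
# a PSO would tune; the rule base here is illustrative only.
def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ts_zero_order(x, antecedents, singletons):
    """Weighted average of rule singletons (zero-order TS output).

    antecedents -- list of (a, b, c) triangle vertices, one per rule
    singletons  -- list of constant rule consequents u_i
    """
    w = [tri(x, a, b, c) for (a, b, c) in antecedents]
    total = sum(w)
    if total == 0.0:
        return 0.0  # no rule fires; a real controller would avoid this
    return sum(wi * ui for wi, ui in zip(w, singletons)) / total

# Example: three rules over a normalized suspension-error input
antecedents = [(-1.0, -1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]
singletons = [-1.0, 0.0, 1.0]
print(ts_zero_order(0.25, antecedents, singletons))
```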

Relevance:

10.00%

Publisher:

Abstract:

Maximum Power Point Tracking (MPPT) is an important concern in Photovoltaic (PV) systems. As PV systems have a high cost of energy, it is essential that they are operated to extract the maximum possible power at all times. However, under non-uniform environmental conditions, which frequently arise outdoors, many MPPT techniques fail to track the global peak power. This review paper discusses conventional MPPT techniques designed to operate under uniform environmental conditions and highlights why these techniques fail under non-uniform conditions. Following this, techniques designed specifically to operate under non-uniform environmental conditions are analysed and compared. Simulation results comparing the performance of the common Perturb and Observe (P&O), Particle Swarm Optimisation (PSO), and Simulated Annealing (SA) MPPT approaches under non-uniform environmental conditions are also presented. The research presented in this review indicates that there is no single technique which achieves reliable global MPPT with low cost and complexity while being easily adapted to different PV systems.
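
As a point of reference for the conventional techniques discussed above, a minimal Perturb and Observe loop looks like the following sketch; measure_pv() and set_voltage() are hypothetical stand-ins for the converter interface. Because the loop climbs the nearest power peak, it can settle on a local maximum under partial shading, which is precisely the failure mode motivating the global MPPT techniques reviewed here:

```python
# Sketch of the classic Perturb and Observe (P&O) hill-climbing MPPT
# loop. The hardware interfaces are hypothetical stand-ins.
def perturb_and_observe(measure_pv, set_voltage, v_start, step=0.1, iters=1000):
    v = v_start
    direction = 1.0
    set_voltage(v)
    i_meas, v_meas = measure_pv()
    p_prev = v_meas * i_meas
    for _ in range(iters):
        v += direction * step          # perturb the operating voltage
        set_voltage(v)
        i_meas, v_meas = measure_pv()
        p = v_meas * i_meas
        if p < p_prev:                 # power dropped: reverse direction
            direction = -direction
        p_prev = p                     # observe and repeat
    return v

# Toy usage with a simulated single-peak PV curve P(v) = v * i(v)
state = {"v": 0.0}
def set_voltage(v): state["v"] = v
def measure_pv():
    v = state["v"]
    i = max(0.0, 5.0 - 0.25 * v * v)   # toy current-voltage curve
    return i, v
print(perturb_and_observe(measure_pv, set_voltage, v_start=1.0))
```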

Relevance:

10.00%

Publisher:

Abstract:

Developing an effective memetic algorithm that integrates the Particle Swarm Optimization (PSO) algorithm and a local search method is a difficult task. The challenging issues include when the local search method should be called, the frequency of calling the local search method, as well as which particle should undergo the local search operations. Motivated by this challenge, we introduce a new Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model. Each particle is subject to five operations under the control of the Reinforcement Learning (RL) algorithm, i.e. exploration, convergence, high-jump, low-jump, and fine-tuning. These operations are executed by the particle according to the action generated by the RL algorithm. The proposed RLMPSO model is evaluated using four uni-modal and multi-modal benchmark problems, six composite benchmark problems, five shifted and rotated benchmark problems, as well as two benchmark application problems. The experimental results show that RLMPSO is useful, and it outperforms a number of state-of-the-art PSO-based algorithms.
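
The five per-particle operations come from the abstract; the state and reward design of the RL layer do not, so the following is a hedged sketch of how a tabular Q-learning agent could choose among them:

```python
# Sketch of the RL layer in an RLMPSO-style model: a tabular Q-learning
# agent picks one of the five per-particle operations named in the
# abstract. The state and reward definitions here are hypothetical.
import random

ACTIONS = ["exploration", "convergence", "high-jump", "low-jump", "fine-tuning"]

class OperationSelector:
    def __init__(self, n_states, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * len(ACTIONS) for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:          # explore
            return random.randrange(len(ACTIONS))
        row = self.q[state]                          # exploit
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Usage sketch: the reward could be the particle's fitness improvement
# after the chosen operation, and the state could encode stagnation.
selector = OperationSelector(n_states=4)
a = selector.choose(state=0)
print(ACTIONS[a])
selector.update(state=0, action=a, reward=1.0, next_state=1)
```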

Relevance:

10.00%

Publisher:

Abstract:

Partial shading is an unavoidable condition which significantly reduces the efficiency and stability of a photovoltaic (PV) system. When partial shading occurs, the system exhibits multiple-peak output power characteristics. In order to track the global maximum power point (GMPP) within an appropriate period, a reliable technique is required. Conventional techniques such as hill climbing and perturb and observe (P&O) are inadequate for tracking the GMPP under this condition, resulting in a dramatic reduction in the efficiency of the PV system. Artificial intelligence methods have recently been proposed; however, they have a higher computational cost, slower processing time, and increased oscillations, which result in further instability at the output of the PV system. This paper proposes a fast and efficient technique based on Radial Movement Optimization (RMO) for detecting the GMPP under partial shading conditions. The paper begins with a brief description of the behavior of PV systems under partial shading conditions, followed by the introduction of the new RMO-based technique for GMPP tracking. Finally, results are presented to demonstrate the performance of the proposed technique under different partial shading conditions. The results are compared with those of the PSO method, one of the most widely used methods in the literature. Four factors, namely convergence speed, efficiency (power loss reduction), stability (oscillation reduction), and computational cost, are considered in the comparison with the PSO technique.
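
The abstract does not detail the RMO update rules. The following heavily hedged sketch reflects only the general radial-sampling idea described in the RMO literature, with illustrative constants; the paper's actual MPPT formulation may differ:

```python
# Schematic sketch of the Radial Movement Optimization idea: candidate
# solutions are scattered radially around a moving centre, the sampling
# radius shrinks over the generations, and the centre is pulled toward
# the generation best and the global best. Constants are illustrative.
import random

def rmo(fitness, lo, hi, dim, n_points=30, iters=100, c1=0.6, c2=0.7):
    centre = [random.uniform(lo, hi) for _ in range(dim)]
    gbest, gbest_f = centre[:], fitness(centre)
    for k in range(iters):
        radius = (hi - lo) * (1.0 - k / iters)       # shrinking radius
        rbest, rbest_f = None, float("inf")
        for _ in range(n_points):
            x = [min(hi, max(lo, centre[d] + random.uniform(-1, 1) * radius))
                 for d in range(dim)]
            f = fitness(x)
            if f < rbest_f:
                rbest, rbest_f = x, f                # best of this generation
        if rbest_f < gbest_f:
            gbest, gbest_f = rbest[:], rbest_f
        # move the centre toward the radial best and the global best
        centre = [centre[d] + c1 * (gbest[d] - centre[d])
                            + c2 * (rbest[d] - centre[d])
                  for d in range(dim)]
    return gbest, gbest_f

# Toy usage (a multi-peak PV power surface would replace this function)
print(rmo(lambda x: sum(v * v for v in x), -5.0, 5.0, dim=2))
```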

Relevance:

10.00%

Publisher:

Abstract:

Cardiac autonomic neuropathy (CAN) poses an important clinical problem, which often remains undetected due to the difficulty of conducting the current tests and their lack of sensitivity. CAN has been associated with an increased risk of unexpected death in cardiac patients with diabetes mellitus. Heart rate variability (HRV) attributes have been actively investigated, since they are important for diagnostics in diabetes, Parkinson's disease, and cardiac and renal disease. Given the adverse effects of CAN, it is important to obtain a robust and highly accurate diagnostic tool for the identification of early CAN, when treatment has the best outcome. Using HRV attributes to enhance the effectiveness of diagnosing CAN progression may provide such a tool. In the present paper we propose a new machine learning algorithm, Multi-Layer Attribute Selection and Classification (MLASC), for the diagnosis of CAN progression based on HRV attributes. It incorporates our new automated attribute selection procedure, the Double Wrapper Subset Evaluator with Particle Swarm Optimization (DWSE-PSO). We present the results of experiments comparing MLASC with simpler versions and counterpart methods. The experiments used our large and well-known diabetes complications database. The results demonstrate that MLASC significantly outperformed the other, simpler techniques.
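
DWSE-PSO itself is not specified in the abstract. As a hedged sketch of the general wrapper idea behind PSO-driven attribute selection, a binary PSO can search over attribute subsets with cross-validated classifier accuracy as the fitness; evaluate() below is a hypothetical stand-in for that accuracy:

```python
# Generic wrapper-style attribute selection with binary PSO, in the
# spirit of (but not identical to) the DWSE-PSO procedure named above.
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso_select(evaluate, n_features, n_particles=20, iters=50,
                      w=0.7, c1=1.5, c2=1.5):
    X = [[random.randint(0, 1) for _ in range(n_features)]
         for _ in range(n_particles)]
    V = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [evaluate(x) for x in X]
    g = pbest_f.index(max(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # binary PSO: the sigmoid of the velocity gives the
                # probability that the bit (attribute) is switched on
                X[i][d] = 1 if random.random() < sigmoid(V[i][d]) else 0
            f = evaluate(X[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f  # attribute mask and its score

# Toy usage: prefer small subsets that include attribute 0
print(binary_pso_select(lambda m: m[0] - 0.1 * sum(m), n_features=8))
```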

Relevance:

10.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are widely used for various civilian and military applications, and have thus attracted significant interest in recent years. This work investigates the important problem of optimal deployment of WSNs in terms of coverage and energy consumption. Five deployment algorithms are developed for maximal sensing range and minimal energy consumption in order to provide optimal sensing coverage and maximum lifetime. All the developed algorithms also include self-healing capabilities in order to restore the operation of WSNs after a number of nodes have become inoperative. Two centralized optimization algorithms are developed, one based on Genetic Algorithms (GAs) and one based on Particle Swarm Optimization (PSO). Both optimization algorithms use powerful central nodes to calculate and obtain the globally optimal outcomes. The GA is used to determine the optimal tradeoff between network coverage and the overall distance travelled by fixed-range sensors. The PSO algorithm is used to ensure 100% network coverage and minimize the energy consumed by mobile and range-adjustable sensors. Depending on the scenario, the developed optimization algorithms provide 30% to 90% energy savings, thereby extending the lifetime of the sensors by a factor of 1.4 to 10. Three distributed optimization algorithms are also developed to relocate the sensors and optimize the coverage of networks with more stringent design and cost constraints. Each algorithm is executed cooperatively by all sensors to achieve better coverage. Two of our algorithms use the relative positions between sensors to optimize coverage and energy savings; they provide 20% to 25% more energy savings than existing solutions. Our third algorithm is developed for networks without self-localization capabilities and supports the optimal deployment of such networks without requiring expensive geolocation hardware or energy-consuming localization algorithms. This is important for indoor monitoring applications, since current localization algorithms cannot provide good enough accuracy for sensor relocation in indoor environments. Moreover, no sensor redeployment algorithm that can operate without a self-localization system had been developed before our work.
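
As an illustration of the kind of objective such a PSO deployment algorithm maximizes (the work's exact formulation is not given in the abstract), the following sketch scores the fraction of a gridded field covered by at least one sensing disc; the flat position vector would be the PSO particle, and the field size and sensing radius are illustrative:

```python
# Hypothetical coverage objective for PSO-based sensor deployment:
# fraction of grid points inside the sensing disc of at least one sensor.
def grid_coverage(positions, radius, width, height, step=1.0):
    """positions -- flat list [x1, y1, x2, y2, ...] of sensor centres."""
    sensors = [(positions[i], positions[i + 1])
               for i in range(0, len(positions), 2)]
    r2 = radius * radius
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
                covered += 1
            x += step
        y += step
    return covered / total

# Three sensors with a 15 m sensing range on a 50 m x 50 m field
print(grid_coverage([10, 10, 25, 35, 40, 15], radius=15, width=50, height=50))
```

A continuous PSO, such as the sketch given after the PSONET abstract above, could then maximize this function over the sensor positions.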

Relevance:

10.00%

Publisher:

Abstract:

Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of this study, a proper model including all the details of the components was required, so advances in EMC modeling were reviewed and the analytical and numerical models classified. The selected approach was finite element (FE) modeling, coupled with the distributed network method, to generate models of the converter's components and obtain the frequency-domain behavioral model of the converter. The method is able to reveal the behavior of parasitic elements and higher resonances, which have a critical impact on EMI studies. For the EMC and signature studies of machine drives, equivalent source modeling was investigated. Considering the details of the multi-machine environment, including actual models, some innovations in equivalent source modeling were introduced that decrease the simulation time dramatically. Several models were designed in this study, of which the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used as the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model. All tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. The identification was implemented using an ANN for seventy different faulty cases, and the simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of component, as well as the faulty component, by comparing the amplitudes of their stray field harmonics. Identification using the stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
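
The abstract names a GA-based PSO as the optimization process without detailing it. One common way to hybridize the two, sketched here purely under that assumption, is a standard PSO update followed by GA-style crossover and mutation applied to the weaker half of the swarm each generation:

```python
# Hypothetical GA-PSO hybrid sketch: PSO velocity/position update, then
# one-point crossover and uniform mutation on the weaker half of the
# swarm. The thesis's exact GA-based PSO scheme is not specified.
import random

def ga_pso(fitness, dim, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5,
           mut_rate=0.1):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):                  # PSO update
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
        order = sorted(range(n), key=lambda i: fitness(X[i]))
        for i in order[n // 2:]:            # GA step on the weaker half
            p1 = X[order[0]]
            p2 = X[random.choice(order[:n // 2])]
            cut = random.randrange(1, dim) if dim > 1 else 0
            X[i] = p1[:cut] + p2[cut:]      # one-point crossover
            for d in range(dim):
                if random.random() < mut_rate:
                    X[i][d] = random.uniform(lo, hi)  # uniform mutation
        for i in range(n):                  # refresh personal/global bests
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy usage on a quadratic bowl
print(ga_pso(lambda x: sum(v * v for v in x), dim=4, lo=-5.0, hi=5.0))
```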

Relevance:

10.00%

Publisher:

Abstract:

As digital media have evolved in recent years, the buying journey has changed: customers today let suppliers into the dialogue at a much later stage. Marketing Automation addresses this problem and has emerged as a bridge between the sales and marketing processes. The system makes it possible to develop leads (potential customers) efficiently and automatically. The purpose of this study is to examine how Marketing Automation affects the sales and marketing processes. What conditions are required for an implementation? Does profitability increase? In the theoretical framework we therefore describe Marketing Automation and examine, among other things, whether an introduction of Marketing Automation leads to the sales and marketing organizations being merged into one organizational unit. The study also maps and describes the modern buying journey and what is sometimes called the revenue organization. We have found that Marketing Automation is relatively unexplored in a Swedish context. To determine whether the theory, which is largely based on international literature and international surveys, can be transferred to a Swedish context, we chose to conduct a qualitative study in the form of a case study of vendors of products and services in the field, as well as companies, with the common denominator that they represent a knowledge-intensive offering and have implemented Marketing Automation solutions. In our analysis, vendors and customers agree on the prerequisites for an implementation, but we can also see how the results diverge and point to implications, not least regarding goal fulfillment and collaboration between the sales and marketing organizations. Our conclusions show, among other things, that Marketing Automation can lead to the fulfillment of soft values in the company but has not been shown to lead automatically to increased profitability. We see profitability and Return on Investment (ROI) as an area that should be explored further.

Relevance:

10.00%

Publisher:

Abstract:

A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects which differentiates a cloud workflow system from its other counterparts is the market-oriented business model. This is a significant innovation which brings many challenges to conventional workflow scheduling strategies. To investigate this issue, this paper proposes a market-oriented hierarchical scheduling strategy for cloud workflow systems. Specifically, the service-level scheduling deals with the Task-to-Service assignment, where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; the task-level scheduling deals with the optimisation of the Task-to-VM (virtual machine) assignment in local cloud data centres, where the overall running cost of cloud workflow systems is minimised given the satisfaction of QoS constraints for individual tasks. Based on our hierarchical scheduling strategy, a package-based random scheduling algorithm is presented as the candidate service-level scheduling algorithm, and three representative metaheuristic-based scheduling algorithms, namely genetic algorithm (GA), ant colony optimisation (ACO), and particle swarm optimisation (PSO), are adapted, implemented and analysed as the candidate task-level scheduling algorithms. The hierarchical scheduling strategy is being implemented in our SwinDeW-C cloud workflow system and is demonstrating satisfactory performance. Meanwhile, the experimental results show that the overall performance of the ACO-based scheduling algorithm is better than the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.
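
As an illustration of what the candidate task-level algorithms optimize, the sketch below evaluates the running cost of one Task-to-VM assignment with a deadline penalty; the names, rates, and penalty scheme are hypothetical, and any of the GA, ACO, or PSO candidates could minimize such a function:

```python
# Hypothetical fitness for the task-level (Task-to-VM) scheduling step:
# total pay-per-use cost of an assignment, heavily penalized whenever a
# task's QoS deadline is violated.
def assignment_cost(assignment, task_sizes, vm_speed, vm_rate, deadlines,
                    penalty=1000.0):
    """assignment -- assignment[i] is the VM index running task i."""
    vm_finish = [0.0] * len(vm_speed)        # per-VM running clock
    cost = 0.0
    for task, vm in enumerate(assignment):
        runtime = task_sizes[task] / vm_speed[vm]
        vm_finish[vm] += runtime             # tasks on a VM run in sequence
        cost += runtime * vm_rate[vm]        # pay-per-use cost
        if vm_finish[vm] > deadlines[task]:  # QoS constraint violated
            cost += penalty
    return cost

# Two VMs (fast/expensive, slow/cheap), three tasks
print(assignment_cost(assignment=[0, 1, 0],
                      task_sizes=[100.0, 60.0, 80.0],
                      vm_speed=[10.0, 5.0], vm_rate=[0.5, 0.2],
                      deadlines=[15.0, 20.0, 25.0]))
```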