192 results for PSO
Abstract:
Water-alternating-gas (WAG) is an enhanced oil recovery method combining the improved macroscopic sweep of water flooding with the improved microscopic displacement of gas injection. The optimal design of the WAG parameters is usually based on numerical reservoir simulation via trial and error, limited by the reservoir engineer's availability. Employing optimisation techniques can guide the simulation runs and reduce the number of function evaluations. In this study, robust evolutionary algorithms are utilised to optimise hydrocarbon WAG performance in the E-segment of the Norne field. The first objective function is the net present value (NPV), and two global semi-random search strategies, a genetic algorithm (GA) and particle swarm optimisation (PSO), are tested on case studies with different numbers of controlling variables, sampled from the set of water and gas injection rates, bottom-hole pressures of the oil production wells, cycle ratio, cycle time, composition of the injected hydrocarbon gas (miscible/immiscible WAG) and total WAG period. In progressive experiments, the number of decision variables is increased, raising the problem complexity while potentially improving the efficacy of the WAG process. The second objective function is the incremental recovery factor (IRF) within a fixed total WAG simulation time, optimised using the same algorithms. The results from the two optimisation techniques are analysed, and their performance, convergence speed and the quality of the optimal solutions found in multiple trials are compared for each experiment. The distinctions between the optimal WAG parameters resulting from NPV and oil recovery optimisation are also examined. This is the first known work optimising over this complete set of WAG variables, and the first use of PSO to optimise a WAG project at field scale. Compared to the reference cases, the best overall objective values found by GA and PSO were 13.8% and 14.2% higher, respectively, when NPV was optimised over all the above variables, and 14.2% and 16.2% higher, respectively, when IRF was optimised.
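For illustration, a minimal PSO sketch for this kind of simulation-driven search (the variable count, bounds, coefficients, and the npv_from_simulation stub are assumptions for the example, not the paper's actual simulator coupling):

```python
import numpy as np

def npv_from_simulation(x):
    # Hypothetical stand-in: one reservoir-simulation run returning the
    # NPV of the candidate WAG design x (here a smooth toy surrogate).
    return -np.sum((x - 0.5) ** 2)

rng = np.random.default_rng(0)
n_particles, n_vars, n_iters = 20, 7, 50   # e.g. rates, BHPs, cycle ratio/time, gas composition, WAG period
lo, hi = np.zeros(n_vars), np.ones(n_vars) # normalised variable bounds

x = rng.uniform(lo, hi, (n_particles, n_vars))  # particle positions
v = np.zeros_like(x)                            # particle velocities
pbest = x.copy()
pbest_f = np.array([npv_from_simulation(p) for p in x])
gbest = pbest[np.argmax(pbest_f)]

w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_vars))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([npv_from_simulation(p) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmax(pbest_f)]
```

Each call to npv_from_simulation stands in for one full reservoir simulation run, which is why guiding the search to reduce the number of function evaluations matters.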
Abstract:
Over the last thirty years, there has been an increased demand for better management of public sector organisations (PSOs). This requires that they are answerable not only for the inputs that they are given but also for what they achieve with these inputs (Hood 1991; Hood 1995). It is suggested that this will improve the management of the organisation through better planning and control, and the achievement of greater accountability (Smith 1995). However, such a rational approach, with clear goals and the means to measure achievement, can cause difficulties for many PSOs. These difficulties include the distinctive nature of the public sector, owing to the political environment within which the public sector manager operates (Stewart and Walsh 1992), and the fact that PSOs have many stakeholders, each of whom has specific objectives based on their own perspective (Boyle 1995). This can result in goal ambiguity, which means that there is leeway in interpreting the results of the PSO. The National Asset Management Agency (NAMA) was set up to bring stability to the financial system by buying loans from the banks (in most cases, non-performing loans). The intention was to cleanse the banks of these loans so that they could return to their normal business of taking deposits and making loans. However, the legislation also gave NAMA a wide range of other responsibilities, including responsibility for facilitating credit in the economy and protecting the interests of taxpayers. More recently, NAMA has been given responsibility for building social housing. This wide range of activities is a clear example of a PSO being given multiple goals which may conflict, and is therefore likely to lead to goal ambiguity. This makes it very difficult to evaluate NAMA's performance, as it is attempting to meet numerous goals at the same time, and also highlights the complexity of policy making in the public sector. The purpose of this paper is to examine how NAMA dealt with goal ambiguity, through a thematic analysis of its annual reports over the last five years. The paper will contribute to the ongoing debate about the evaluation of PSOs and the complex environment within which they operate, which makes evaluation difficult as they are answerable to multiple stakeholders who have different objectives and different criteria for measuring success.
Abstract:
The purpose of this study is to examine the strategic choices of mainstream parties in response to the competition for voters posed by a niche party and its most important issue, in this case radical right-wing populists and the migration issue. The study takes a comparative approach, examining the reactions of the mainstream parties, the Social Democrats and the Moderates, to the niche parties New Democracy (1991-1994) and the Sweden Democrats (2010-2015). Using Meguid's PSO theory and a qualitative analysis of the parties' rhetoric and policy proposals in parliamentary debates, government bills and reservations in committee reports, the study aims to describe the mainstream parties' positions on the issue and whether and how they change position and strategy. The results show that both mainstream parties overall apply an adversarial strategy, aiming to maintain distance from the niche party and its position, but over time, and owing to changes in the political environment, shifts in position and strategy take place and the mainstream parties apply a slightly more accommodative strategy.
Abstract:
The aim of this work is to interpret the way in which the novel Calypso turns into metaphor the encounter and missed encounter between the culture of the Costa Rican Central Valley and the culture of the Costa Rican Caribbean. Drawing on the historical characteristics of that encounter, Tatiana Lobo brings the two cultural groups face to face and recreates the basic traits of each. From there, she develops an appraisal of both cultures that highlights their profound differences in worldview, using a highly productive metaphor that allows her, on the one hand, to recreate the white man's inability to understand the black man and, on the other, to parody negatively the Central Valley culture of Costa Rica. The device she uses is the caly(i)pso, as an expression characteristic of Afro-Caribbean culture but also as an element evoking classical Greek culture.
Abstract:
In this research work, a new routing protocol for Opportunistic Networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it is built around a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and learning adaptation. PSONET uses PSO to drive network traffic through a good subset of message forwarders. The protocol analyzes network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better routing decisions. PSONET is compared with the Epidemic and PROPHET protocols in three mobility scenarios: an activity-based mobility model, which simulates people's everyday life across work, leisure and rest; a community-based mobility model, which simulates groups of people in their communities who occasionally contact people outside their community to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, subject to the restriction map, move to it along the shortest path. The simulation results, obtained with the ONE simulator, show that under the community-based and random mobility models, PSONET achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
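As a toy illustration of the sparse/dense connectivity test described above, a node might classify itself from its recent contact history; the sliding window and threshold below are assumptions for the example, not values from the protocol:

```python
from collections import deque
import time

class ContactHistory:
    def __init__(self, window_s=600.0, dense_threshold=5):
        self.window_s = window_s              # assumed sliding window (s)
        self.dense_threshold = dense_threshold  # assumed density cutoff
        self.contacts = deque()               # (timestamp, peer_id) pairs

    def record_contact(self, peer_id):
        self.contacts.append((time.time(), peer_id))

    def is_dense(self):
        # Drop contacts older than the window, then compare the number of
        # distinct recent peers against the density threshold.
        cutoff = time.time() - self.window_s
        while self.contacts and self.contacts[0][0] < cutoff:
            self.contacts.popleft()
        return len({p for _, p in self.contacts}) >= self.dense_threshold
```

A node judged dense would then be a better candidate forwarder for the PSO-driven selection than one with sparse connections.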
Abstract:
The purpose of this research is to study the sedimentation mechanism, by means of mathematical modelling, in access channels affected by tidal currents. The most important factor in understanding the sedimentation process in any body of water is its flow pattern, which is in turn shaped by the geometry of the environment and by the processes acting in the area. The area under study in this thesis is Bushehr Gulf and its access channels (inner and outer). The study uses hydrodynamic modelling on unstructured, non-overlapping triangular grids with the finite volume method, at two grid scales, large (200 m to 7.5 km) and small (50 m to 7.5 km), and over two durations (15 days and 3.5 days), to obtain the flow patterns. The 2D governing equations of the model are the depth-averaged shallow water equations. Turbulence is represented through the eddy viscosity coefficient, calculated with the Smagorinsky model using a coefficient of 0.3. In addition to the flow modelling at the two scales, the 3.5-day tidal current results are used to study the sediment equilibrium in the area and the channels. The model can map the areas undergoing settling and erosion and identify the effects of the tidal currents on these processes. The required input data, such as current and sediment measurements, were obtained from surveys in Bushehr Gulf and the access channels carried out under the PSO (Port and Shipping Organization) project "The Sedimentation Modeling in Bushehr Port" in 1379. Hydrographic data were obtained from Admiralty charts (2003) and the Cartography Organization (1378, 1379). The modelling results include cross-shore currents on the northern and north-western coasts of Bushehr Gulf during neap tide, and the same currents on the northern and north-eastern coasts during spring tide. These currents wash fine particles (silt, clay and mud) from the coastal bed, which consists mainly of mud and clay with some silt, and carry them away. In this regard, the role of sediments around the islands of the area, including the islands built from deposited dredged material, should not be ignored. The 3.5-day modelling shows that cross-channel currents create settling zones in the inner and outer channels over the tidal period. In neap tide the current enters from the upstream bend of the two channels and the outer channel, then crosses the outer channel obliquely in places. Oblique, or even nearly perpendicular, currents from the up-slope side of the inner channel between buoys No. 15 and No. 18 interact with the channel-parallel currents and generate secondary oblique currents, which leave the channel as down-slope flows, causing deposition as well as settling of the suspended sediments they carry. In the outer channel, the speed of the channel-parallel currents increases at the bend, which is naturally deeper, leading to erosion and suspension of sediments there. The suspended sediment carried by this current, parallel to the channel axis, slows when passing through the shallower stretch of the channel between buoys No. 7 and 8 and No. 5 and 6; the suspended sediment therefore settles, making these places even shallower. Furthermore, the oblique upstream flow deposits sediment on the up-slope side, adding to the reduction in depth at these locations. In contrast, on the down-slope side of the channel, the sediment and current modelling indicates that the current speed increases, suspending and carrying away the bed particles; sediments then settle over a wide area downstream of both channels. At the end of neap tide, this process, together with circulations in the area, produces eddies that cause further sedimentation. During spring tide, part of this active sedimentation zone re-enters both channels in a reverse process. The processes described, and the locations of sedimentation and erosion in the inner and outer channels, are validated by the sediment equilibrium modelling. The model can also estimate the suspended load, bed load and boundary layer thickness at each point of both channels and across the modelled area.
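For reference, a standard form of the depth-averaged shallow water equations with Smagorinsky closure, matching the models named above (the notation here is generic; the thesis's exact formulation may differ):

```latex
% Continuity and x-momentum (y-momentum is analogous), depth-averaged:
\begin{align}
  \frac{\partial h}{\partial t}
    + \frac{\partial (hu)}{\partial x}
    + \frac{\partial (hv)}{\partial y} &= 0 \\
  \frac{\partial (hu)}{\partial t}
    + \frac{\partial (hu^2)}{\partial x}
    + \frac{\partial (huv)}{\partial y}
  &= -gh\,\frac{\partial \eta}{\partial x}
    - \frac{\tau_{bx}}{\rho}
    + \frac{\partial}{\partial x}\!\left(\nu_t h\,\frac{\partial u}{\partial x}\right)
    + \frac{\partial}{\partial y}\!\left(\nu_t h\,\frac{\partial u}{\partial y}\right)
\end{align}
% Smagorinsky eddy viscosity with the cited coefficient C_s = 0.3:
\begin{equation}
  \nu_t = (C_s \Delta)^2 \sqrt{2\,S_{ij}S_{ij}}, \qquad
  S_{ij} = \tfrac{1}{2}\left(\partial_j u_i + \partial_i u_j\right)
\end{equation}
```

Here h is the water depth, (u, v) the depth-averaged velocities, η the free-surface elevation, τ_bx the bed shear stress, and Δ the local grid scale.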
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2015.
Abstract:
With the aim of comparing variations in heart rate, blood pressure, pulse oximetry, halothane consumption and complications in two groups of children weighing up to 20 kg who received general anaesthesia with either endotracheal intubation or a laryngeal mask, a randomized controlled clinical trial was carried out. Two equal groups of 50 children undergoing surgical procedures under inhalational general anaesthesia with halothane were randomly assigned at Hospital VCM from January 2000 to June 2001. Endotracheal intubation was used in the TET group and only a laryngeal mask in the LMA group. Haemodynamics, pulse oximetry and halothane consumption were measured. Results: the groups were comparable in the demographic variables except for sex, a variable with no bearing on the hypothesis test. The differences in heart rate and blood pressure between the groups were not statistically significant, but pulse oximetry was, with higher values in the LMA group (p<0.05). Halothane consumption was similar. Respiratory depression was more frequent in the TET group (18 vs. 8). Glottic spasm occurred in both groups (4), and there was one case (2) of vomiting in the TET group. None of these differences were statistically significant. Conclusions: airway management with the LMA is an alternative of comparable utility to endotracheal intubation when the precise indications for its use are followed. The LMA is a suitable, non-invasive, versatile device, of great help in patients with and without difficult airway ventilation, and should be included in the anaesthesiologist's armamentarium.
Abstract:
This article addresses the identification of fuzzy parameters by means of particle swarm optimization (PSO) techniques and its application to the control of an active suspension system. In particular, a zero-order Takagi-Sugeno controller with a standard fuzzy partition of its antecedents is adopted. Unlike previous works, where learning was limited to the scaling parameters of the controller, the proposed method allows the antecedent fuzzy sets themselves to be optimized. The proposed methodology has been successfully tested on a physical quarter-car system.
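A minimal sketch of a zero-order Takagi-Sugeno controller of the kind described, written so that the antecedent membership-function centres are explicit parameters a PSO tuner could move (the single input, the triangular partition, and all numeric values are assumptions for the example):

```python
import numpy as np

def tri(x, left, centre, right):
    # Triangular membership degree of x on [left, right] peaking at centre.
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (centre - left) if x < centre else (right - x) / (right - centre)

def ts_zero_order(x, centres, singletons):
    # centres: sorted MF centres (the parameters a PSO tuner would adjust);
    # singletons: constant rule consequents of the zero-order TS controller.
    c = np.concatenate(([2 * centres[0] - centres[1]], centres,
                        [2 * centres[-1] - centres[-2]]))  # extend partition edges
    w = np.array([tri(x, c[i], c[i + 1], c[i + 2]) for i in range(len(centres))])
    return float(w @ singletons / w.sum()) if w.sum() > 0 else 0.0

# Example: 5 membership functions over a normalised suspension-error input.
u = ts_zero_order(0.3, np.linspace(-1, 1, 5), np.array([-1.0, -0.5, 0.0, 0.5, 1.0]))
```

Optimizing the entries of `centres` (rather than only input/output scaling gains) is the kind of antecedent tuning the article attributes to PSO.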
Abstract:
Wireless Sensor Networks (WSNs) are widely used in civilian and military applications and have therefore attracted significant interest in recent years. This work investigates the important problem of optimal WSN deployment in terms of coverage and energy consumption. Five deployment algorithms are developed for maximal sensing range and minimal energy consumption, in order to provide optimal sensing coverage and maximum lifetime. All of the algorithms include self-healing capabilities to restore the operation of a WSN after a number of nodes have become inoperative. Two centralized optimization algorithms are developed, one based on Genetic Algorithms (GAs) and one based on Particle Swarm Optimization (PSO). Both use powerful central nodes to compute globally optimal outcomes. The GA determines the optimal tradeoff between network coverage and the overall distance travelled by fixed-range sensors. The PSO algorithm ensures 100% network coverage while minimizing the energy consumed by mobile, range-adjustable sensors. The developed optimization algorithms provide energy savings of 30% to 90% in different scenarios, extending sensor network lifetime by a factor of 1.4 to 10. Three distributed optimization algorithms are also developed to relocate sensors and optimize the coverage of networks with more stringent design and cost constraints. Each algorithm is executed cooperatively by all sensors to achieve better coverage. Two of the algorithms use the relative positions between sensors to optimize coverage and energy savings, providing 20% to 25% more energy savings than existing solutions. The third algorithm is designed for networks without self-localization capabilities and supports their optimal deployment without requiring expensive geolocation hardware or energy-consuming localization algorithms. This matters for indoor monitoring, where current localization algorithms cannot provide sufficient accuracy for sensor relocation. To our knowledge, no sensor redeployment algorithm that can operate without a self-localization system had been developed before this work.
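As a rough sketch of the kind of fitness a centralized GA or PSO deployment optimizer might score, combining covered area with a movement-energy penalty (the grid resolution, sensing radius, area size, and weighting below are assumptions, not values from this work):

```python
import numpy as np

def deployment_fitness(positions, initial, radius=10.0, area=100.0,
                       grid=50, move_weight=0.01):
    # positions, initial: (n_sensors, 2) arrays of proposed and current
    # sensor coordinates inside an area x area square region.
    xs = np.linspace(0.0, area, grid)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)        # sample points
    d = np.linalg.norm(pts[:, None, :] - positions[None], axis=2)
    coverage = np.mean(d.min(axis=1) <= radius)             # covered fraction
    travel = np.linalg.norm(positions - initial, axis=1).sum()  # energy proxy
    return coverage - move_weight * travel

# Example: score a small random relocation of 8 sensors.
rng = np.random.default_rng(0)
start = rng.uniform(0, 100, (8, 2))
score = deployment_fitness(start + rng.normal(0, 1, start.shape), start)
```

A GA would evolve candidate `positions` arrays against this score, while a PSO variant could instead constrain coverage to 100% and minimize the travel term alone.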
Abstract:
Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect of the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model including all details of the component was required, so advances in EMC modeling were reviewed and classified into analytical and numerical approaches. The selected approach was finite element (FE) modeling coupled with the distributed network method, used to model the converter's components and obtain the frequency-domain behavioral model of the converter. The method can reveal the behavior of parasitic elements and higher resonances, which have critical impacts in studying EMI problems. For the EMC and signature studies of machine drives, equivalent source modeling was investigated. Taking into account the details of the multi-machine environment, including actual component models, innovations in equivalent source modeling were made that decrease the simulation time dramatically. Several models were designed in this study; the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used as the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model, and all tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. Identification was performed with an artificial neural network (ANN) over seventy different faulty cases, and the simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of component, as well as the faulty component, by comparing the amplitudes of their stray-field harmonics. Identification using stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
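A minimal sketch of the ANN identification step as described: a small classifier mapping stray-field harmonic amplitudes to one of seventy fault cases (the feature count, layer size, and toy data are assumptions, not the thesis's actual network or measurements):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_harmonics, n_cases = 32, 70   # assumed spectrum length; 70 cases per the abstract

# Toy spectra standing in for simulated faulty-case harmonic amplitudes,
# ten examples per fault case.
X = rng.random((10 * n_cases, n_harmonics))
y = np.repeat(np.arange(n_cases), 10)

# One hidden layer is an assumption; the thesis does not specify the topology.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
predicted_case = clf.predict(X[:1])   # classify one harmonic signature
```

In the actual workflow, each feature vector would come from the amplitudes of the measured or 3D-FE-simulated stray-field harmonics rather than random data.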