917 results for Non-linear Loads


Relevance:

90.00%

Publisher:

Abstract:

The domain of thermal therapy applications can be improved by the development of accurate non-invasive time-spatial temperature models. These models should represent the non-linear thermal behaviour of tissue and be capable of tracking temperature at both the time instant and the spatial position. If such estimators existed, efficient controllers for the therapeutic instrumentation could be developed, and the desired safety and effectiveness reached.
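
As a loose illustration of what such an estimator could look like, the sketch below fits a radial-basis-function model T(t, x) to a synthetic non-linear temperature field; the field, centres and widths are invented and are not from the paper.

```python
# Minimal sketch (not the paper's estimator): an RBF model T_hat(t, x)
# approximating a non-linear time-spatial temperature field. All data and
# hyper-parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 10.0, 400)            # time instants [s]
x = rng.uniform(0.0, 0.05, 400)            # depth in tissue [m]
T = 37.0 + 5.0 * np.exp(-((x - 0.02) / 0.01) ** 2) * (1 - np.exp(-t / 3.0))

# Gaussian RBF features on a grid of (time, space) centres
tc, xc = np.meshgrid(np.linspace(0, 10, 8), np.linspace(0, 0.05, 8))
centres = np.column_stack([tc.ravel() / 10.0, xc.ravel() / 0.05])
pts = np.column_stack([t / 10.0, x / 0.05])          # normalised inputs
d2 = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-d2 / (2 * 0.15 ** 2))

w, *_ = np.linalg.lstsq(Phi, T - 37.0, rcond=None)   # fit around 37 °C baseline
print("training RMSE:", np.sqrt(np.mean((Phi @ w + 37.0 - T) ** 2)))
```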

Relevance:

90.00%

Publisher:

Abstract:

The introduction of new distributed energy resources, based on naturally intermittent power sources, into power systems imposes the development of new, adequate operation management and control methods. This paper proposes a short-term Energy Resource Management (ERM) methodology performed in two phases. The first addresses the hour-ahead ERM scheduling and the second deals with the five-minute-ahead ERM scheduling. Both phases consider the day-ahead resource scheduling solution. The ERM scheduling is formulated as an optimization problem that aims to minimize the operation costs from the point of view of a virtual power player that manages the network and the existing resources. The optimization problem is solved by a deterministic mixed-integer non-linear programming approach and by a heuristic approach based on genetic algorithms. A case study considering a distribution network with 33 buses, 66 distributed generators, 32 loads with demand response contracts, and 7 storage units has been implemented in a PSCAD-based simulator developed in the scope of the presented work, in order to validate the proposed short-term ERM methodology considering the dynamic power system behavior.
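
A minimal sketch of the genetic-algorithm side of such an ERM problem, reduced to a single period: minimise linear generation costs under a demand-balance penalty. All network data, cost coefficients and GA settings are invented for illustration and do not reproduce the paper's case study.

```python
# Toy GA for one-period resource scheduling: cost + balance penalty.
import numpy as np

rng = np.random.default_rng(1)
n_gen, demand = 10, 40.0
p_max = rng.uniform(2.0, 8.0, n_gen)        # unit capacities [MW]
cost = rng.uniform(30.0, 80.0, n_gen)       # linear costs [EUR/MWh]

def fitness(p):
    return cost @ p + 1e3 * abs(p.sum() - demand)   # cost + balance penalty

pop = rng.uniform(0, 1, (60, n_gen)) * p_max        # initial population
for _ in range(200):
    f = np.apply_along_axis(fitness, 1, pop)
    parents = pop[np.argsort(f)[:30]]               # truncation selection
    cut = rng.integers(1, n_gen, 30)
    kids = np.where(np.arange(n_gen) < cut[:, None],
                    parents, parents[::-1])         # one-point crossover
    kids += rng.normal(0, 0.2, kids.shape)          # Gaussian mutation
    pop = np.clip(np.vstack([parents, kids]), 0, p_max)

best = pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]
print("dispatch:", best.round(2), "cost:", round(cost @ best, 2))
```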

Relevance:

90.00%

Publisher:

Abstract:

Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase of new resources, such as storage systems and demand response programs, results in additional computational effort for optimization problems. On the other hand, since natural resources such as wind and sun can only be precisely forecasted with short anticipation, short-term scheduling is especially relevant, requiring very good performance on large-dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large-scale problems. This type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology called Signaled Particle Swarm Optimization (SiPSO) to address the energy resources management problem in the scope of smart grids, with intensive use of DER. The proposed methodology's performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained by other methodologies, namely MINLP, Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO's performance is superior to that of the other tested PSO variants, demonstrating its adequacy for solving large-dimension problems which require a decision in a short period of time.
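
For orientation, a minimal standard-PSO sketch on a resource-dispatch fitness of the same flavour; the signalling mechanism that distinguishes SiPSO is not reproduced here, and all problem data are invented.

```python
# Standard PSO on a toy dispatch problem (cost + demand-balance penalty).
import numpy as np

rng = np.random.default_rng(2)
n_res, demand = 20, 55.0
p_max = rng.uniform(1.0, 6.0, n_res)
cost = rng.uniform(25.0, 90.0, n_res)
fit = lambda p: cost @ p + 1e3 * abs(p.sum() - demand)

x = rng.uniform(0, 1, (40, n_res)) * p_max          # particle positions
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.apply_along_axis(fit, 1, x)
g = pbest[pbest_f.argmin()]                         # global best

for _ in range(300):
    r1, r2 = rng.uniform(size=(2, *x.shape))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0, p_max)
    f = np.apply_along_axis(fit, 1, x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()]

print("best cost:", round(fit(g), 2))
```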

Relevance:

90.00%

Publisher:

Abstract:

This paper addresses the problem of energy resource scheduling. An aggregator will manage all distributed resources connected to its distribution network, including distributed generation based on renewable energy resources, demand response, storage systems, and electric gridable vehicles. The use of gridable vehicles will have a significant impact on power systems management, especially in distribution networks. Therefore, the inclusion of vehicles in the optimal scheduling problem will be very important in future network management. The proposed particle swarm optimization approach is compared with a reference methodology based on mixed-integer non-linear programming, implemented in GAMS, to evaluate the effectiveness of the proposed methodology. The paper includes a case study that considers a 32-bus distribution network with 66 distributed generators, 32 loads, and 50 electric vehicles.
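
A sketch of how a particle could encode both generator set-points and EV charging power in one vector, with a penalty for unmet EV energy needs; this encoding and all figures are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical particle layout for joint DG + EV scheduling over a horizon.
import numpy as np

n_gen, n_ev, periods = 4, 3, 6
# particle layout: [gen power per period | EV charging power per period]
dim = (n_gen + n_ev) * periods

def decode(particle):
    z = particle.reshape(n_gen + n_ev, periods)
    return z[:n_gen], z[n_gen:]                  # (gen, ev) schedules

def ev_penalty(particle, energy_req=np.array([8.0, 6.0, 10.0])):  # kWh per EV
    _, ev = decode(particle)
    delivered = ev.sum(axis=1)                   # energy per EV over horizon
    return 1e3 * np.maximum(energy_req - delivered, 0.0).sum()

p = np.random.default_rng(3).uniform(0, 3, dim)
print("EV energy shortfall penalty:", round(ev_penalty(p), 1))
```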

Relevance:

90.00%

Publisher:

Abstract:

In recent years the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has significantly increased. Power systems aim at lowering operational costs, which requires adequate energy resource management. In this context, load consumption management plays an important role, and optimization strategies are necessary to adjust the consumption to the supply profile. These optimization strategies can be integrated in demand response programs. Controlling the energy consumption of an intelligent house has the objective of optimizing the load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer's strategy, taking into account the renewable-based micro generation, energy price, supplier solicitations, and the consumer's preferences. The proposed approach is compared with a mixed-integer non-linear approach.
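
As a simpler stand-in for the paper's GA, the sketch below curtails the lowest-priority loads first until consumption drops to the limit; load names, powers and priorities are invented, and the actual method also weighs price, micro generation and consumer preferences.

```python
# Greedy priority-based load curtailment against a consumption limit.
loads = [  # (name, power_kW, priority: higher = keep longer)
    ("fridge", 0.15, 10), ("heating", 2.0, 7), ("washer", 1.2, 4),
    ("dryer", 1.5, 3), ("ev_charger", 3.0, 5), ("lights", 0.3, 9),
]
limit_kw = 4.0
total = sum(p for _, p, _ in loads)

running, used = [], 0.0
for name, power, _ in sorted(loads, key=lambda l: l[2], reverse=True):
    if used + power <= limit_kw:       # keep load if it fits under the limit
        running.append(name)
        used += power

print(f"before: {total:.2f} kW, after: {used:.2f} kW, on: {running}")
```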

Relevance:

90.00%

Publisher:

Abstract:

The large increase of Distributed Generation (DG) in Power Systems (PS), and especially in distribution networks, makes the management of distributed generation resources an increasingly important issue. Beyond DG, other resources such as storage systems and demand response must be managed in order to obtain a more efficient and “green” operation of PS. More players that operate these kinds of resources, such as aggregators or Virtual Power Players (VPP), will be appearing. This paper proposes a new methodology to solve the distribution network short-term scheduling problem in the Smart Grid context. This methodology is based on a Genetic Algorithm (GA) approach for energy resource scheduling optimization and on PSCAD software to obtain realistic power system simulation results. The paper includes a case study with 99 distributed generators, 208 loads and 27 storage units. The GA results for the determination of the economic dispatch, considering the generation forecast, storage management and load curtailment in each one-hour period, are compared with those obtained with a Mixed-Integer Non-Linear Programming (MINLP) approach.
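
A worked merit-order dispatch for a single one-hour period gives a cheap sanity reference for the kind of economic dispatch the GA and MINLP approaches solve; capacities, prices and the 12 MWh demand are illustrative only.

```python
# Merit-order dispatch: fill demand from the cheapest unit upward.
units = [  # (name, capacity_MW, marginal_cost_EUR_per_MWh)
    ("wind", 4.0, 0.0), ("pv", 2.0, 0.0), ("chp", 5.0, 45.0),
    ("diesel", 6.0, 90.0),
]
demand = 12.0

dispatch, remaining, total_cost = {}, demand, 0.0
for name, cap, mc in sorted(units, key=lambda u: u[2]):  # cheapest first
    take = min(cap, remaining)
    dispatch[name], remaining = take, remaining - take
    total_cost += take * mc
    if remaining <= 0:
        break

print(dispatch, f"cost = {total_cost:.2f} EUR")   # -> 315.00 EUR here
```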

Relevance:

90.00%

Publisher:

Abstract:

The objective of this contribution is to extend the models of cellular/composite material design to non-linear material behaviour and to apply them to the design of materials for passive vibration control. As a first step, a computational tool allowing the determination of optimised one-dimensional isolator behaviour was developed. This model can serve as a representation of idealised macroscopic behaviour. Optimal isolator behaviour for a given set of loads is obtained by a generic probabilistic meta-algorithm, simulated annealing. The cost functional involves minimization of the maximum response amplitude in a set of predefined time intervals and maximization of the total energy absorbed in the first loop. The dependence of the global optimum on several combinations of the leading parameters of the simulated annealing procedure, such as the neighbourhood definition and the annealing schedule, is also studied and analyzed. The obtained results facilitate the design of elastomeric cellular materials with improved behaviour in terms of dynamic stiffness for passive vibration control.
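
A minimal simulated-annealing sketch in the spirit of the described procedure: tune the stiffness and damping of a one-dimensional isolator to reduce the peak response of a single-degree-of-freedom system under a step load. The model, cost and annealing schedule are simplified stand-ins, not the paper's formulation.

```python
# Simulated annealing over (k, c) to minimise peak step response.
import numpy as np

def peak_response(k, c, m=1.0, dt=1e-3, steps=3000, f=1.0):
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(steps):                       # explicit time stepping
        a = (f - c * v - k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

rng = np.random.default_rng(4)
state, cost = np.array([50.0, 1.0]), peak_response(50.0, 1.0)
temp = 1.0
for _ in range(400):
    cand = np.clip(state + rng.normal(0, [5.0, 0.2]),
                   [1.0, 0.05], [500.0, 10.0])   # random neighbour
    c_new = peak_response(*cand)
    if c_new < cost or rng.uniform() < np.exp((cost - c_new) / temp):
        state, cost = cand, c_new                # Metropolis acceptance rule
    temp *= 0.99                                 # geometric cooling schedule
print("k, c =", state.round(2), "peak =", round(cost, 4))
```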

Relevance:

90.00%

Publisher:

Abstract:

Linear alkylbenzenes (LAB), formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels at which they are present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates) and was then followed by the analysis of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of the commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu. The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular weight ions were confirmed by a 'soft ionization' technique, chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percentage of the total sample weight as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, with the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full-scan and single-ion-monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-quantitatively. The GC/MS method developed during the course of this study allowed the identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study. However, further investigation of these compounds will be the subject of a future study.
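
For concreteness, a worked example of the average-molecular-weight calculation mentioned above, as a weighted mean over the homologue distribution measured by GC; the distribution below is invented, though commercial LABs typically span roughly C10-C13 alkyl chains.

```python
# Weighted-mean molecular weight from a GC area-percent distribution.
distribution = {  # alkyl chain length -> (MW of linear alkylbenzene, area %)
    10: (218.4, 12.0),   # phenyldecane    C16H26
    11: (232.4, 30.0),   # phenylundecane  C17H28
    12: (246.4, 33.0),   # phenyldodecane  C18H30
    13: (260.5, 25.0),   # phenyltridecane C19H32
}
total_pct = sum(pct for _, pct in distribution.values())
avg_mw = sum(mw * pct for mw, pct in distribution.values()) / total_pct
print(f"average molecular weight ~ {avg_mw:.1f} amu")   # ~242.4 amu here
```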

Relevance:

90.00%

Publisher:

Abstract:

We studied exciton coherence in poly[N-9’-heptadecanyl-2,7-carbazole-alt-5,5-(4,7-di-2-thienyl-2’,1’,3’-benzothiadiazole)] (PCDTBT). Using a spatial light modulator, we shaped ultrashort laser pulses to probe the coherences of the system. We focused on the coherent properties of the excitonic states, namely the singlet and the charge-transfer state. We observed that, 35 fs after excitation, the singlet and the charge-transfer state are still coherent. This coherence is measured by the visibility, which is approximately 10% and 30%, respectively. Furthermore, we showed that the mechanisms that generate photocurrent in such photovoltaic devices are already no longer coherent after 35 fs: these measurements reveal a visibility below 3%, which is beneath the precision of our instruments. We therefore conclude that charge-transfer states are not the precursor states of photocurrent generation, because they behave very differently in the coherence measurements.
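
For reference, the textbook fringe-visibility definition (not necessarily the authors' exact estimator), evaluated with the values reported above:

```latex
% Standard interference visibility; reported values substituted.
V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},
\qquad
V_{\text{singlet}} \approx 0.10, \quad
V_{\text{CT}} \approx 0.30, \quad
V_{\text{photocurrent}} < 0.03 .
```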

Relevance:

90.00%

Publisher:

Abstract:

The objective of this thesis is to present multivariate time series models involving random vectors in which each component is non-negative. We consider the vMEM models (vector multiplicative error models with non-negative errors) presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models generalise to the multivariate case the MEM models introduced by Engle (2002), and find applications notably in financial time series. vMEM models allow the modelling of time series involving asset volumes, durations and conditional variances, to name only these applications. Joint modelling is also possible, allowing the study of the dynamics between the time series forming the system under study. In order to model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach is to consider random vectors whose error-term distribution is such that each component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the positive support is rather difficult, at least for the applications cited above. As noted by Cipollini, Engle and Gallo (2006), a possible candidate is a multivariate gamma distribution, which, however, imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, one approach is to use copula theory: marginal distributions with non-negative supports are specified, and a copula function accounts for the dependence between the components. One possible estimation technique is maximum likelihood. An alternative is the generalized method of moments (GMM). The latter has the advantage of being semi-parametric, in the sense that, unlike the approach imposing a multivariate law, it is not necessary to specify a multivariate distribution for the error term. In general, the estimation of vMEM models is complicated: the existing algorithms must handle the large number of parameters and the elaborate nature of the likelihood function. For GMM estimation, the system to be solved also requires solvers for non-linear systems. In this thesis, considerable effort was devoted to developing computer code (in the R language) to estimate the various parameters of the model. In the first chapter, we define stationary processes, autoregressive processes, autoregressive conditionally heteroscedastic (ARCH) processes and generalized ARCH (GARCH) processes. We also present the ACD duration models and the MEM models. In the second chapter, we present the copula theory needed for our work in the context of vMEM models, and we discuss possible estimation methods. In the third chapter, we discuss the simulation results for several estimation methods. In the last chapter, applications to financial series are presented. The R code is provided in an appendix. A conclusion completes the thesis.
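
A simplified univariate MEM sketch (the vMEM being its vector generalisation), simulating x_t = mu_t * eps_t with a unit-mean gamma error; the parameter values and the gamma choice are illustrative, not those of the thesis.

```python
# Univariate MEM: mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1},
# x_t = mu_t * eps_t with eps_t >= 0 and E[eps_t] = 1.
import numpy as np

rng = np.random.default_rng(5)
omega, alpha, beta, n = 0.1, 0.2, 0.7, 1000
k = 4.0                                  # gamma shape; scale 1/k gives mean 1
x = np.empty(n)
mu = omega / (1 - alpha - beta)          # unconditional mean as start value
for t in range(n):
    eps = rng.gamma(k, 1.0 / k)          # non-negative, unit-mean error
    x[t] = mu * eps
    mu = omega + alpha * x[t] + beta * mu

print("sample mean:", x.mean().round(3),
      "theoretical mean:", round(omega / (1 - alpha - beta), 3))
```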

Relevance:

90.00%

Publisher:

Abstract:

The material behaviour of plain and steel-fibre-reinforced reinforced concrete under biaxial compression-tension loading was investigated experimentally and theoretically. The experimental investigations were based on numerous tests carried out in the past on fibre-free reinforced concrete panels to determine the material behaviour of cracked reinforced concrete in the plane stress state. These investigations showed that transverse tensile loading reduces the biaxial compressive strength. Taking these findings into account, reinforced concrete panels made of steel-fibre-reinforced concrete were produced in order to improve the material properties of the concrete. The material models known from the literature for concrete and reinforced concrete, in the uncracked and cracked states, were explained and critically examined with respect to the material properties of concrete and reinforced concrete determined in the past under proportional and non-proportional external loading. Steel fibres were added to the fresh concrete; this reduced the loss of strength and material stiffness caused by cracking, which leads to damage of the composite material concrete. It was observed that the compressive strength reduction factor, and in particular the strain associated with the maximum attainable cylinder compressive strength, is better limited by the addition of steel fibres. The experimental investigations were carried out on six fibre-free and seven steel-fibre-reinforced reinforced concrete panels under compression-tension loading, to determine the behaviour of cracked fibre-free and steel-fibre-reinforced reinforced concrete. The material properties of concrete, steel-fibre-reinforced concrete and reinforced concrete in the cracked state, determined from our own tests, were presented and discussed. During the cracking of the quasi-brittle material concrete and of steel-fibre-reinforced concrete, a decrease of the elastic modulus was observed in addition to plastic flow. The reduction of the attainable strength and of the associated strain cannot be captured by the classical flow theory of plasticity without a modification of the hardening law. Constitutive relations based on elasto-plastic material models were proposed for fibre-free and steel-fibre-reinforced concrete. In addition, this work formulated a constitutive relation, based on the elasto-plastic material model, for concrete and steel-fibre-reinforced concrete in the cracked state. The formulated material models were used, by means of the modularly structured non-linear finite element program DIANA, for numerical investigations of selected experimentally tested surface structures, such as panel-type, slab-type and shell structures made of fibre-free and steel-fibre-reinforced concrete. Through a modified effective stress-strain relation for the hardening model, the developed elasto-plastic model made it possible to capture not only plastic flow but also the damage of the elastic moduli caused by micro-cracks and macro-cracks in the principal tension-principal compression region. The numerical investigations of the load-deformation behaviour of panel-type, slab-type and shell structures made of fibre-free and steel-fibre-reinforced reinforced concrete showed good agreement with the experimental results.
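
A highly simplified one-dimensional flavour of the described constitutive behaviour: a return-mapping stress update with linear hardening plus a scalar damage variable that degrades the elastic modulus. All material constants and the damage growth law are invented; the thesis model is biaxial and far richer.

```python
# 1-D elasto-plastic stress update with isotropic hardening and damage.
E0, H, sig_y = 30e3, 2e3, 30.0           # MPa: modulus, hardening, yield stress

def stress_update(eps, eps_p, damage):
    """One strain-driven step: elastic predictor, plastic corrector, damage."""
    E = (1.0 - damage) * E0              # damage-degraded elastic modulus
    sig_trial = E * (eps - eps_p)
    f = abs(sig_trial) - (sig_y + H * abs(eps_p))     # yield function
    if f <= 0.0:
        return sig_trial, eps_p, damage  # purely elastic step
    dgamma = f / (E + H)                 # plastic multiplier (1-D closed form)
    eps_p += dgamma * (1.0 if sig_trial > 0 else -1.0)
    damage = min(0.5, damage + 0.02 * dgamma / 1e-3)  # crude damage growth law
    return (1.0 - damage) * E0 * (eps - eps_p), eps_p, damage

eps_p, dmg = 0.0, 0.0
for eps in (0.0005, 0.0015, 0.0030, 0.0045):          # monotonic loading path
    sig, eps_p, dmg = stress_update(eps, eps_p, dmg)
    print(f"eps={eps:.4f}  sigma={sig:7.2f} MPa  eps_p={eps_p:.5f}  D={dmg:.3f}")
```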

Relevance:

90.00%

Publisher:

Abstract:

Associative memory networks, such as Radial Basis Function, neurofuzzy and fuzzy logic networks, used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc., increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, to overcome the COD and to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal Delaunay input-space partition. A benchmark non-linear time series is used to demonstrate the new approach.
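
A sketch of the core idea (not the paper's full mixture-of-experts training): partition the input space with a Delaunay triangulation of a few vertices and fit an independent local linear model over the samples falling in each simplex. Data and vertex placement are illustrative.

```python
# Piecewise local linear modelling on a Delaunay partition of the input space.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (500, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.01 * rng.normal(size=500)  # toy surface

verts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
tri = Delaunay(verts)
region = tri.find_simplex(X)                  # simplex index per sample

models = {}
for s in np.unique(region):
    mask = region == s
    A = np.column_stack([X[mask], np.ones(mask.sum())])   # [x1, x2, 1]
    models[s], *_ = np.linalg.lstsq(A, y[mask], rcond=None)

def predict(x):
    s = int(tri.find_simplex(x[None, :])[0])  # locate containing simplex
    return models[s] @ np.append(x, 1.0)

x0 = np.array([0.3, 0.7])
print("prediction:", predict(x0), "truth:", np.sin(3 * 0.3) * 0.7)
```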

Relevance:

90.00%

Publisher:

Abstract:

This paper shows that a wavelet network and a linear term can be advantageously combined for the purpose of non-linear system identification. The theoretical foundation of this approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such non-linear regression structures, termed linear-wavelet models, is described. For illustration, simulation data are used to identify a model for a two-link robotic manipulator. The results show that the introduction of wavelets does improve the prediction ability of a linear model.
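
A sketch of such a linear-wavelet regression structure: a linear term plus radial (Mexican-hat) wavelets fitted jointly by least squares; the one-dimensional toy target stands in for the manipulator data used in the paper.

```python
# Linear term + radial wavelet features, fitted jointly by least squares.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-3, 3, 300)
y = 0.5 * x + np.sin(2 * x) * np.exp(-x**2 / 4) + 0.02 * rng.normal(size=300)

def mexican_hat(r):
    return (1 - r**2) * np.exp(-r**2 / 2)     # radial wavelet

centres = np.linspace(-3, 3, 9)
scale = 0.8
Phi = mexican_hat((x[:, None] - centres[None, :]) / scale)
design = np.column_stack([x, np.ones_like(x), Phi])   # linear term + wavelets

w, *_ = np.linalg.lstsq(design, y, rcond=None)
rmse = np.sqrt(np.mean((design @ w - y) ** 2))
print(f"RMSE = {rmse:.4f}, fitted linear slope = {w[0]:.3f}")
```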