941 results for non-linear loads


Relevance:

90.00%

Abstract:

In the pursuit of producing high-quality, low-cost composite aircraft structures, out-of-autoclave manufacturing processes for textile reinforcements are being simulated with increasing accuracy. This paper focuses on the continuum-based finite element modelling of textile composites as they deform during the draping process. A non-orthogonal constitutive model tracks yarn orientations within a material subroutine developed for Abaqus/Explicit, resulting in the realistic determination of fabric shearing and material draw-in. Supplementary material characterisation was performed experimentally in order to define the tensile and non-linear shear behaviour accurately. The validity of the finite element model has been studied through comparison with similar research in the field and with the experimental lay-up of carbon fibre textile reinforcement over a tool with double-curvature geometry, showing good agreement.

Relevance:

90.00%

Abstract:

The accurate definition of the extreme wave loads which act on offshore structures represents a significant challenge for design engineers, and even with decades of empirical data on which to base designs there are still failures attributed to wave loading. The environmental conditions which cause these loads are infrequent and highly non-linear, which means that they are neither well understood nor simple to describe. If the structure is large enough to affect the incident wave significantly, further non-linear effects can influence the loading. Moreover, if the structure is floating and excited by the wave field, then its responses, which are also likely to be highly non-linear, must be included in the analysis. This makes the loading on such a structure difficult to determine, and design codes will often suggest employing various tools, including small-scale experiments, numerical and analytical methods, as well as empirical data if available.
Wave Energy Converters (WECs) are a new class of offshore structure which pose new design challenges, lacking the design codes and empirical data found in other industries. These machines are located in highly exposed and energetic sites, are designed to be excited by the waves and will be expected to withstand extreme conditions over their 25-year design life. One such WEC, called Oyster, is being developed by Aquamarine Power Ltd. Oyster is a buoyant flap, hinged close to the seabed in water depths of 10 to 15 m, which pierces the water surface. The flap is driven back and forth by the action of the waves and this mechanical energy is then converted to electricity.
It has been identified in previous experiments that Oyster is not only subject to wave impacts but that it also occasionally slams into the water surface with high angular velocity. This slamming has been identified as an extreme load case, and work is ongoing to describe it in terms of the pressure exerted on the outer skin and the transfer of this short-duration impulsive load through various parts of the structure.
This paper describes a series of 1:40-scale experiments undertaken to investigate the pressure on the face of the flap during the slamming event. A vertical array of pressure sensors is used to measure the pressure exerted on the flap. Characteristics of the slam pressure, such as the rise time, magnitude, spatial distribution and temporal evolution, are revealed. Similarities are drawn between this slamming phenomenon and classical water-entry problems, such as ship hull slamming. With this similitude identified, common analytical tools are used to predict the slam pressure, which is compared with that measured in the experiment.
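The classical water-entry analogy invoked above can be sketched with a hedged, textbook-style estimate (not the paper's analytical tool): a Wagner-type peak slam pressure p ≈ ½ρCpV² with Cp ≈ (π/(2 tan β))² for a small deadrise angle β. The sea-water density and the coefficient are standard assumed values.

```python
import math

RHO_SEA = 1025.0  # kg/m^3, assumed sea-water density

def slam_pressure(v, deadrise_deg):
    """Wagner-type estimate of peak water-entry slam pressure (Pa) for
    normal entry velocity v (m/s) and deadrise angle beta (degrees):
        p_peak ~ 0.5 * rho * Cp * v**2,  Cp = (pi / (2 * tan(beta)))**2
    Valid only for small deadrise angles (near-flat impact)."""
    beta = math.radians(deadrise_deg)
    cp = (math.pi / (2.0 * math.tan(beta))) ** 2
    return 0.5 * RHO_SEA * cp * v * v

# A near-flat impact at 2 m/s with a 10 degree deadrise gives ~1.6 bar.
p_peak = slam_pressure(2.0, 10.0)
```

For the flap, v would be the local normal velocity of the face (angular velocity times radius from the hinge), so the estimate varies along the vertical sensor array.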

Relevance:

90.00%

Abstract:

Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If tailpipe point emissions could be managed centrally without reducing commercial and personal user functionality, then one of the most attractive routes to a significant reduction of transport-sector emissions would be the mass deployment of electric vehicles. Though electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and supportive government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries can also act as energy storage points on the distribution system. This double charge-and-storage impact of many small, individually uncontrollable kW-scale loads (consumers will want maximum flexibility) on a distribution system that was not originally designed for such operation has the potential to be detrimental to grid balancing. Intelligent scheduling methods, if established correctly, could smoothly integrate electric vehicles onto the grid: they can help avoid cycling of large combustion plant, reduce the use of expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and avoid overloading the distribution system and degrading power quality. In this paper, state-of-the-art scheduling methods for integrating plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. Various existing approaches covering analytical scheduling, conventional optimisation methods (e.g. linear and non-linear mixed-integer programming and dynamic programming), game theory, and meta-heuristic algorithms including genetic algorithms and particle swarm optimisation are comprehensively surveyed, offering a systematic reference for grid scheduling with intelligent electric vehicle integration.

Relevance:

90.00%

Abstract:

Non-enzymatic glycation and oxidative stress are two important processes, as they play a major role in the complications of several pathophysiological conditions. The association between non-enzymatic glycation and protein oxidation is now recognised as one of the main drivers of the accumulation of non-functional proteins, which in turn continuously sensitises the cell to increased oxidative stress. Although considerable information is available on both processes and their structural and functional consequences, questions remain about what happens at the molecular level. To contribute to a better understanding of the relationship between non-enzymatic glycation and oxidation, model proteins (albumin, insulin and histones H2B and H1) were subjected to in vitro non-enzymatic glycation and oxidation systems under controlled conditions over a defined period of time. Glycation and oxidation sites were identified using a proteomics approach in which, after enzymatic digestion, samples were analysed by liquid chromatography coupled to tandem mass spectrometry (MALDI-TOF/TOF). This approach yielded high protein sequence coverage, allowing the preferential glycation and oxidation sites of the different proteins studied to be identified. As expected, lysine residues were preferentially glycated. Regarding oxidation, in addition to modifications involving hydroxylations and oxygen additions, deamidations, carbamylations and specific oxidative conversions of several amino acids were identified. Overall, the residues most affected by oxidation were cysteine, methionine, tryptophan, tyrosine, proline, lysine and phenylalanine.
Over the period studied, the results indicated that oxidation began in exposed regions of the protein and/or in the vicinity of cysteine and methionine residues, rather than behaving randomly, proceeding in a non-linear fashion that depends in turn on the conformational stability of the protein. The time-course study also showed that pre-glycated proteins were oxidised faster and more extensively, suggesting that the structural changes induced by glycation promote a pro-oxidative state. In the pre-glycated and oxidised proteins, a larger number of oxidative modifications, and of modified residues in the vicinity of glycated residues, was identified. This approach makes an important contribution to the investigation of the molecular-level consequences of 'glyco-oxidative' damage to proteins by combining mass spectrometry and bioinformatics.

Relevance:

90.00%

Abstract:

Analysis of earthquake effects shows that earthquake engineering research must pay special attention to assessing the vulnerability of existing buildings, which often lack adequate seismic resistance, as is the case of reinforced concrete (RC) buildings in many cities of southern European countries, including Portugal. Since columns are structural elements that are fundamental to the seismic resistance of buildings, special attention must be given to their response under cyclic loading. Moreover, an earthquake is a type of action whose effects on buildings require two horizontal components to be considered, which places more severe demands on columns than unidirectional loading. This thesis therefore focuses on evaluating the structural response of reinforced concrete columns subjected to biaxial horizontal cyclic loading, along three main lines. First, a testing campaign was carried out to study the uniaxial and biaxial cyclic behaviour of reinforced concrete columns under constant axial load. Four series of rectangular reinforced concrete columns (24 in total) with different geometries and amounts of longitudinal reinforcement were built, and the columns were tested under different loading histories. The experimental results are analysed and discussed with particular attention to damage evolution, stiffness and strength degradation with increasing deformation demand, dissipated energy and equivalent viscous damping; finally, a damage index for biaxially loaded columns is proposed. Next, different non-linear modelling strategies were applied to represent the biaxial behaviour of the tested columns, considering non-linearity either distributed along the elements or lumped at their ends.
The results obtained with the various modelling strategies adequately represented the response in terms of the force-displacement envelope curves, but some difficulties were found in representing strength degradation and the evolution of dissipated energy. Finally, a global model is proposed to represent the non-linear flexural behaviour of reinforced concrete elements subjected to biaxial cyclic loading. This model is based on a known uniaxial model, combined with an interaction function developed from the Bouc-Wen model. The interaction function was calibrated using optimisation techniques and the results of a series of numerical analyses with a refined model. The ability of the simplified model to reproduce the experimental results of biaxial column tests is also demonstrated.
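The Bouc-Wen model underlying the interaction function can be sketched in a few lines; this is the generic, uncalibrated textbook form in Python, not the thesis's calibrated interaction function.

```python
import numpy as np

def bouc_wen(disp, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Explicit integration of the standard Bouc-Wen hysteretic variable z
    along a displacement history `disp`:
        dz = A*du - beta*|du|*|z|**(n-1)*z - gamma*du*|z|**n
    z plays the role of a normalised restoring force."""
    z, out = 0.0, [0.0]
    for u_prev, u in zip(disp[:-1], disp[1:]):
        du = u - u_prev
        z += A * du - beta * abs(du) * abs(z) ** (n - 1) * z - gamma * du * abs(z) ** n
        out.append(z)
    return np.array(out)

# Monotonic loading drives z towards its asymptote (A/(beta+gamma))**(1/n) = 1;
# reversing the displacement traces the other branch of the hysteresis loop.
u = np.linspace(0.0, 8.0, 4001)
loop = bouc_wen(np.concatenate([u, u[::-1]]))
```

In a biaxial extension such as the one described, two such variables (one per direction) would be coupled through the interaction function, which is what the thesis calibrates by optimisation.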

Relevance:

90.00%

Abstract:

The domain of thermal therapy applications can be improved by the development of accurate, non-invasive, time-spatial temperature models. These models should represent the non-linear thermal behaviour of tissue and be capable of tracking temperature at each time instant and spatial position. If such estimators existed, efficient controllers for the therapeutic instrumentation could be developed and the desired safety and effectiveness reached.

Relevance:

90.00%

Abstract:

The introduction into power systems of new distributed energy resources, based on naturally intermittent power sources, imposes the development of new, adequate operation management and control methods. This paper proposes a short-term Energy Resource Management (ERM) methodology performed in two phases: the first addresses hour-ahead ERM scheduling and the second deals with five-minute-ahead ERM scheduling. Both phases consider the day-ahead resource scheduling solution. The ERM scheduling is formulated as an optimization problem that aims to minimize operation costs from the point of view of a virtual power player that manages the network and the existing resources. The optimization problem is solved by a deterministic mixed-integer non-linear programming approach and by a heuristic approach based on genetic algorithms. A case study considering a distribution network with 33 buses, 66 distributed generators, 32 loads with demand response contracts and 7 storage units has been implemented in a PSCAD-based simulator developed within the presented work, in order to validate the proposed short-term ERM methodology considering the dynamic power system behavior.

Relevance:

90.00%

Abstract:

Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase of new resources, such as storage systems and demand response programs, results in additional computational effort in optimization problems. On the other hand, since natural resources such as wind and sun can only be precisely forecast with little anticipation, short-term scheduling is especially relevant, requiring very good performance on large-dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large-scale problems; this type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology called Signaled Particle Swarm Optimization (SiPSO) to address the energy resources management problem in the scope of smart grids, with intensive use of DER. The proposed methodology's performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained with other methodologies, namely MINLP, Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO's performance is superior to that of the other tested PSO variants, demonstrating its adequacy for solving large-dimension problems which require a decision in a short period of time.
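The original PSO baseline that SiPSO and the other variants build on can be sketched as follows. The three-generator toy dispatch problem, its cost coefficients and the balance-penalty weight are illustrative assumptions, not the paper's 99-generator case study.

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimisation: velocity update with inertia w,
    cognitive pull c1 towards each particle's best, social pull c2 towards
    the swarm best; positions are clamped to the box `bounds`."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    Pf = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = cost(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
            if f < Gf:
                G, Gf = X[i][:], f
    return G, Gf

# Toy dispatch: three generators with quadratic costs must meet 10 MW demand;
# the quadratic penalty enforces the power balance.
a = [0.10, 0.15, 0.20]
def dispatch_cost(p):
    return sum(ai * pi * pi for ai, pi in zip(a, p)) + 1e3 * (sum(p) - 10.0) ** 2

best, f = pso(dispatch_cost, dim=3, bounds=(0.0, 10.0))
```

A real DER scheduling problem adds discrete on/off decisions and network constraints, which is where the signalling mechanism and the other PSO variants compared in the paper come in.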

Relevance:

90.00%

Abstract:

This paper addresses the problem of energy resource scheduling. An aggregator will manage all distributed resources connected to its distribution network, including distributed generation based on renewable energy resources, demand response, storage systems, and electrical gridable vehicles. The use of gridable vehicles will have a significant impact on power systems management, especially in distribution networks, so the inclusion of vehicles in the optimal scheduling problem will be very important in future network management. The proposed particle swarm optimization approach is compared with a reference methodology based on mixed-integer non-linear programming, implemented in GAMS, to evaluate the effectiveness of the proposed methodology. The paper includes a case study that considers a 32-bus distribution network with 66 distributed generators, 32 loads and 50 electric vehicles.

Relevance:

90.00%

Abstract:

In recent years the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has increased significantly. Power systems aim at lowering operational costs, which requires adequate energy resource management. In this context, load consumption management plays an important role, making it necessary to use optimization strategies to adjust consumption to the supply profile. These optimization strategies can be integrated in demand response programs, and controlling the energy consumption of an intelligent house serves to optimize load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer strategy, taking into account renewable-based microgeneration, energy price, supplier solicitations, and consumers' preferences. The proposed approach is compared with a mixed-integer non-linear approach.
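A genetic algorithm of this kind can be sketched as follows. The load powers, priorities and consumption limit are made-up illustrative values, not the authors' SCADA data: each bit of the genome keeps or curtails one load, and a penalty term enforces the consumption limit.

```python
import random

def ga_curtail(powers, priority, limit, pop=40, gens=120, seed=7):
    """Hypothetical GA sketch: choose which loads stay ON (bit = 1) so that
    total power <= limit while maximising the summed priority of kept loads.
    Elitist selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    n = len(powers)

    def fitness(bits):
        p = sum(pw for b, pw in zip(bits, powers) if b)
        score = sum(pr for b, pr in zip(bits, priority) if b)
        return score - 1e3 * max(0.0, p - limit)  # penalise limit violation

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]                   # keep the better half
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)

# Five loads (kW) with consumer-assigned priorities and a 4 kW limit:
powers = [2.0, 1.5, 1.0, 0.5, 3.0]
priority = [5, 4, 3, 2, 1]
plan = ga_curtail(powers, priority, limit=4.0)
```

The priority vector stands in for the consumer preferences, price signals and supplier solicitations mentioned in the abstract; a richer fitness function would fold those in directly.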

Relevance:

90.00%

Abstract:

The large increase of Distributed Generation (DG) in Power Systems (PS), and especially in distribution networks, makes the management of distributed generation resources an increasingly important issue. Beyond DG, other resources such as storage systems and demand response must be managed in order to obtain a more efficient and “green” operation of PS. More players that operate these kinds of resources, such as aggregators or Virtual Power Players (VPP), will appear. This paper proposes a new methodology to solve the distribution network short-term scheduling problem in the Smart Grid context. This methodology is based on a Genetic Algorithm (GA) approach for energy resource scheduling optimization and on PSCAD software to obtain realistic power system simulation results. The paper includes a case study with 99 distributed generators, 208 loads and 27 storage units. The GA results for the determination of the economic dispatch, considering the generation forecast, storage management and load curtailment in each one-hour period, are compared with those obtained with a Mixed-Integer Non-Linear Programming (MINLP) approach.

Relevance:

90.00%

Abstract:

The objective of this contribution is to extend models of cellular/composite material design to non-linear material behaviour and to apply them to the design of materials for passive vibration control. As a first step, a computational tool allowing the determination of optimised one-dimensional isolator behaviour was developed. This model can serve as a representation of idealised macroscopic behaviour. Optimal isolator behaviour for a given set of loads is obtained by a generic probabilistic meta-algorithm, simulated annealing. The cost functional involves minimisation of the maximum response amplitude in a set of predefined time intervals and maximisation of the total energy absorbed in the first loop. The dependence of the global optimum on several combinations of the leading parameters of the simulated annealing procedure, such as the neighbourhood definition and the annealing schedule, is also studied and analysed. The results obtained facilitate the design of elastomeric cellular materials with improved behaviour in terms of dynamic stiffness for passive vibration control.
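The two leading parameters studied, the neighbourhood definition and the annealing schedule, appear explicitly in a generic simulated-annealing loop like the one below (a minimal sketch with a quadratic test cost standing in for the isolator cost functional, not the authors' tool).

```python
import math
import random

def anneal(cost, x0, step, T0=1.0, cooling=0.995, iters=4000, seed=3):
    """Generic simulated annealing: uniform random neighbourhood of radius
    `step`, Metropolis acceptance exp(-dE/T) for worse moves, geometric
    annealing schedule T <- cooling * T."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = x[:], fx
    T = T0
    for _ in range(iters):
        y = [xi + rng.uniform(-step, step) for xi in x]  # neighbourhood move
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
        T *= cooling                                     # annealing schedule
    return best, fbest

# Quadratic stand-in cost with minimum at (2, -1):
best, f = anneal(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                 [0.0, 0.0], step=0.5)
```

Changing `step` changes the neighbourhood definition and changing `cooling`/`T0` changes the schedule, which is exactly the sensitivity study the abstract describes.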

Relevance:

90.00%

Abstract:

Linear alkylbenzenes (LABs), formed by the AlCl3- or HF-catalysed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine their actual levels in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analysed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates), followed by the analysis of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu.
The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular ions were confirmed by a soft ionization technique, chemical ionization. The level of impurities present in the analysed commercial linear alkylbenzenes was expressed as a percentage of the total sample weight, as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analysed using a GC/MS system operating under full-scan and single-ion-monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyse all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-quantitatively. The GC/MS method developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study. However, further investigation of these compounds will be the subject of a future study.
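The GC-based average molecular weight mentioned above is a weighted mean over the homologue/isomer area distribution. A small sketch with a hypothetical distribution (the nominal masses and area percentages below are illustrative, not the thesis data):

```python
def average_mw(distribution):
    """Weighted-average molecular weight from a GC percent-area
    distribution given as {molecular_weight: percent_of_total_area}."""
    total = sum(distribution.values())
    return sum(mw * pct for mw, pct in distribution.items()) / total

# Hypothetical C10-C13 LAB homologue distribution (nominal masses, area %):
lab = {218: 12.0, 232: 30.0, 246: 35.0, 260: 23.0}
avg = average_mw(lab)  # 241.66
```

The GC and GC/MS averages differ when the two detectors weight the homologues differently, which is consistent with the small systematic offsets reported against the Certificates of Analysis.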

Relevance:

90.00%

Abstract:

We studied excitonic coherence in poly[N-9'-heptadecanyl-2,7-carbazole-alt-5,5-(4,7-di-2-thienyl-2',1',3'-benzothiadiazole)] (PCDTBT). Using a spatial light modulator, we shaped ultrashort laser pulses to probe the coherences of the system. We focused on the coherent properties of the excitonic states, namely the singlet and the charge-transfer state. We observed that 35 fs after excitation, the singlet and the charge-transfer state are still coherent; this coherence is measured through the visibility, which is roughly 10% and 30% respectively. Furthermore, we showed that the mechanisms generating photocurrent in such photovoltaic devices are no longer coherent after 35 fs: these measurements reveal a visibility below 3%, which is beneath the precision of our instruments. We therefore conclude that charge-transfer states are not the precursor states for photocurrent generation, since they behave very differently in the coherence measurements.

Relevance:

90.00%

Abstract:

The objective of this thesis is to present multivariate time-series models involving random vectors whose every component is non-negative. We consider the vMEM models (vector multiplicative error models with non-negative errors) presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models generalise to the multivariate case the MEM models introduced by Engle (2002), and find applications notably in financial time series. vMEM models can describe time series of asset volumes, durations and conditional variances, to cite only these applications. They also allow joint modelling and the study of the dynamics between the time series forming the system under study. To model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach uses random vectors whose error distribution is such that every component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the positive support is rather difficult, at least for the applications cited above. As noted by Cipollini, Engle and Gallo (2006), one possible candidate is a multivariate gamma distribution, which however imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, another approach is to use copula theory: marginal distributions with non-negative supports are specified, and a copula function accounts for the dependence between the components.
One possible estimation technique is maximum likelihood. An alternative is the generalised method of moments (GMM). The latter method has the advantage of being semi-parametric in the sense that, unlike the approach imposing a multivariate law, no multivariate distribution needs to be specified for the error term. In general, the estimation of vMEM models is complicated: existing algorithms must handle the large number of parameters and the elaborate nature of the likelihood function, and GMM estimation additionally requires solvers for non-linear systems. In this thesis, considerable effort was devoted to writing computer code (in the R language) to estimate the various model parameters. In the first chapter, we define stationary processes, autoregressive processes, autoregressive conditionally heteroskedastic (ARCH) processes and generalised ARCH (GARCH) processes; we also present ACD duration models and MEM models. In the second chapter, we present the copula theory needed for our work, within the framework of vector multiplicative error models with non-negative errors (vMEM), and discuss possible estimation methods. In the third chapter, we discuss simulation results for several estimation methods. In the final chapter, applications to financial series are presented. The R code is provided in an appendix. A conclusion completes the thesis.
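The vMEM recursion itself is compact enough to sketch in a few lines. In this illustrative simulation (Python here, whereas the thesis works in R), the dependence between the unit-mean gamma errors is induced by a shared gamma shock, a simple stand-in for an estimated copula; all parameter values are assumptions.

```python
import numpy as np

def simulate_vmem(T, omega, A, B, k=4.0, k_shared=2.0, seed=0):
    """Simulate a bivariate vMEM:  x_t = mu_t * eps_t (componentwise),
    with  mu_t = omega + A @ x_{t-1} + B @ mu_{t-1}.
    eps_t has unit-mean gamma margins; dependence between the two
    components comes from a shared gamma shock (a simple stand-in for
    the copula construction used in the thesis)."""
    rng = np.random.default_rng(seed)
    omega, A, B = (np.asarray(m, dtype=float) for m in (omega, A, B))
    mu = np.linalg.solve(np.eye(2) - A - B, omega)  # start at unconditional mean
    x = mu.copy()
    out = np.empty((T, 2))
    for t in range(T):
        mu = omega + A @ x + B @ mu
        # eps = (shared + idiosyncratic) / k  has mean (k_shared + (k - k_shared))/k = 1
        eps = (rng.gamma(k_shared, 1.0) + rng.gamma(k - k_shared, 1.0, size=2)) / k
        x = mu * eps
        out[t] = x
    return out

# Diagonal A and B with spectral radius 0.8 < 1 keep the process stationary;
# the unconditional mean is (I - A - B)^{-1} omega = (0.5, 1.0).
x = simulate_vmem(20000, [0.1, 0.2], 0.2 * np.eye(2), 0.6 * np.eye(2))
```

A GMM or maximum-likelihood estimator would then be benchmarked on paths like these, which is essentially the design of the simulation study in the third chapter.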