Abstract:
Wind energy is the energy source that contributes most to the renewable energy mix of European countries. While there are good wind resources throughout Europe, the intermittency of the wind represents a major problem for the deployment of wind energy into the electricity networks. To ensure grid security, a Transmission System Operator today needs, for each kilowatt of wind energy, either an equal amount of spinning reserve or a forecasting system that can predict the amount of energy that will be produced from wind over a period of 1 to 48 hours. In the range from 5 m/s to 15 m/s, a wind turbine's production increases with the cube of the wind speed. For this reason, a Transmission System Operator requires an accuracy of 1 m/s for wind speed forecasts in this range. Forecasting wind energy with a numerical weather prediction model in this context forms the background of this work. The author's goal was to present a pragmatic solution to this specific problem in the "real world". This work therefore has to be seen in a technical context and neither provides nor intends to provide a general overview of the benefits and drawbacks of wind energy as a renewable energy source. In the first part of this work, the accuracy requirements of the energy sector for wind speed predictions from numerical weather prediction models are described and analysed. A unique set of numerical experiments was carried out in collaboration with the Danish Meteorological Institute to investigate the forecast quality of an operational numerical weather prediction model for this purpose. The results of this investigation revealed that the accuracy requirements for wind speed and wind power forecasts from today's numerical weather prediction models can only be met at certain times. This means that the uncertainty of the forecast quality becomes a parameter as important as the wind speed and wind power themselves.
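The cubic dependence described above can be made concrete with a short sketch (an illustration of the stated scaling, not code from the thesis): under P ∝ v³, a 1 m/s forecast error at 10 m/s already shifts the predicted power by roughly a third, which is why the 1 m/s accuracy requirement is so demanding.

```python
# Illustration of the cubic power law in the 5-15 m/s range; not thesis code.
def power_ratio(v_forecast, v_actual):
    """Ratio of forecast power to actual power under P ~ v**3."""
    return (v_forecast / v_actual) ** 3

# A +1 m/s forecast error at 10 m/s overestimates power by ~33%:
relative_error = power_ratio(11.0, 10.0) - 1.0
print(f"relative power error: {relative_error:.1%}")
```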
Quantifying the uncertainty of a forecast valid for tomorrow requires an ensemble of forecasts. In the second part of this work, such an ensemble was designed and verified for its ability to quantify the forecast error. This was accomplished by correlating the measured error with the forecast uncertainty of area-integrated wind speed and wind power in Denmark and Ireland. A correlation of 93% was achieved in these areas. This method cannot by itself meet the accuracy requirements of the energy sector; by knowing the uncertainty of the forecasts, however, the focus can be put on the accuracy requirements at times when it is possible to accurately predict the weather. This result therefore presents a major step forward in making wind energy a compatible energy source in the future.
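The spread-skill verification described above can be sketched as a simple correlation between forecast uncertainty and measured error. The function and the sample values below are illustrative stand-ins, not the thesis's verification pipeline or its data.

```python
# Spread-skill sketch: correlate ensemble spread with the measured error.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical area-integrated values: ensemble spread vs. |forecast error|.
spread = [0.3, 0.8, 1.5, 0.5, 2.1]   # m/s, illustrative only
error  = [0.4, 0.9, 1.3, 0.6, 2.4]   # m/s, illustrative only
print(f"spread-skill correlation: {pearson(spread, error):.2f}")
```

A high correlation means the ensemble spread is a usable predictor of how wrong tomorrow's forecast is likely to be.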
Abstract:
The European Union has set out an ambitious 20% target for renewable energy use by 2020, which is expected to be met mainly by wind energy. Looking towards 2050, reductions in greenhouse gas emissions of 80-95% are sought. Given the difficulty of securing such reductions in the transport and agriculture sectors, this target may only be achievable if the power sector is carbon neutral well in advance of 2050. This has prompted the vast expansion of offshore renewables: wind, wave and tidal energy. Offshore wind has undergone rapid development in recent years; however, it faces significant challenges up to 2020 in ensuring commercial viability without the need for government subsidies. Wave energy is still in the very early stages of development, so there has as yet been no commercial roll-out. As both technologies face similar challenges in becoming a viable alternative to fossil-fuel power generation, capitalising on their synergies is a potentially significant cost-saving initiative. The advent of hybrid solutions in a variety of configurations is the subject of this thesis. A single wind-wave energy platform embodies all the attributes of a hybrid system, including shared space, transmission infrastructure, O&M activities and a platform/foundation. This configuration is the subject of this thesis, and it is found that an OWC array platform with multi-megawatt wind turbines is a technically feasible, and potentially an economically feasible, solution in the long term. Methods of design and analysis adopted in this thesis include numerical and physical modelling of power performance, structural analysis, fabrication cost modelling, simplified project economic modelling and time-domain reliability modelling of a 210 MW hybrid farm.
The application of these design and analysis methods has resulted in a hybrid solution capable of producing energy at a cost of between €0.22/kWh and €0.31/kWh, depending on the source of funding for the project. Further optimisation through detailed design is expected to lower this further. This thesis develops new, and extends existing, methods of design and analysis of wind and wave energy devices. This streamlines the process of early-stage development, while adhering to the widely adopted Concept Development Protocol, to deliver a technically and economically feasible combined wind-wave energy hybrid solution.
Abstract:
In this work we show how automatic relative debugging can be used to find differences in computation between a correct serial program and an OpenMP parallel version of that program that does not yield correct results. Backtracking and re-execution are used to determine the first OpenMP parallel region that produces a difference in computation that may lead to the incorrect value the user has indicated. Our approach also lends itself to finding differences between parallel computations, where executing with M threads produces expected results but an N-thread execution does not (M, N > 1, M ≠ N). OpenMP programs created using a parallelization tool are addressed by utilizing static analysis and directive information from the tool. Hand-parallelized programs, where OpenMP directives are inserted by the user, are addressed by performing data dependence and directive analysis.
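The core idea of relative debugging described above can be caricatured in a few lines: given matching checkpoints from a reference (serial) run and a suspect (parallel) run, report the first region whose value diverges. All names and the tolerance below are hypothetical; the actual tool operates on OpenMP parallel regions via re-execution, not hand-made checkpoint lists.

```python
# Toy relative-debugging comparison; hypothetical data, not the paper's tool.
def first_divergence(reference, suspect, tol=1e-9):
    """reference/suspect: lists of (region_name, value) checkpoints.
    Returns the name of the first region whose values differ, else None."""
    for (name_r, val_r), (_, val_s) in zip(reference, suspect):
        if abs(val_r - val_s) > tol:
            return name_r
    return None

serial   = [("region1", 1.0), ("region2", 4.0), ("region3", 9.0)]
parallel = [("region1", 1.0), ("region2", 4.5), ("region3", 8.0)]
print(first_divergence(serial, parallel))  # region2 is the first to diverge
```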
Abstract:
In this paper, we first demonstrate that the classical Purcell vector method, when combined with row pivoting, yields a consistently small growth factor in comparison to the well-known Gauss elimination method, the Gauss–Jordan method and the Gauss–Huard method with partial pivoting. We then present six parallel algorithms of the Purcell method that may be used for the direct solution of linear systems. The algorithms differ in their ways of pivoting and load balancing. We recommend algorithms V and VI for their reliability, and algorithms III and IV for good load balance if local pivoting is acceptable. Some numerical results are presented.
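For readers unfamiliar with the growth factor used above as the comparison metric: it measures how much matrix entries grow during elimination, g = max over steps of max|a_ij| divided by the initial max|a_ij|. The sketch below computes it for plain Gaussian elimination with partial pivoting (the baseline method, not Purcell's vector method itself).

```python
# Growth factor of Gaussian elimination with partial pivoting (sketch).
def growth_factor(A):
    n = len(A)
    A = [row[:] for row in A]                 # work on a copy
    amax0 = max(abs(x) for row in A for x in row)
    g = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k to the pivot.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
        amax = max(abs(x) for row in A for x in row)
        g = max(g, amax / amax0)
    return g

# Wilkinson's classic worst case reaches 2**(n-1), here 8 for n = 4:
W = [[1, 0, 0, 1], [-1, 1, 0, 1], [-1, -1, 1, 1], [-1, -1, -1, 1]]
print(growth_factor(W))
```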
Abstract:
An energy storage system (ESS) installed in a power system can effectively damp power system oscillations by controlling the exchange of either active or reactive power between the ESS and the power system. This paper investigates the robustness of ESS-based damping control to variations in power system operating conditions. It proposes a new analytical method based on the well-known equal-area criterion and small-signal stability analysis. Using the proposed method, the paper concludes that damping control implemented by the ESS through controlling its active power exchange with the power system is robust to changes in power system operating conditions. If, however, the ESS damping control is realized by controlling its reactive power exchange with the power system, the effectiveness of the damping control varies with the power system operating condition. An example power system equipped with a battery ESS (BESS) is presented, and simulation results confirm the analytical conclusions about the robustness of ESS damping control. A laboratory experiment on a physical power system equipped with a 35 kJ/7 kW SMES (Superconducting Magnetic Energy Storage) unit was carried out to evaluate the theoretical study. The results demonstrate that the effectiveness of SMES damping control realized through regulating active power is robust to changes in the load conditions of the physical power system.
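The mechanism behind the active-power result can be illustrated with a toy single-machine swing equation (a deliberately simplified sketch with made-up parameters, not the paper's model or its equal-area analysis): ESS active power injected in proportion to the speed deviation enters the swing equation as an explicit damping term.

```python
# Toy swing equation  M*dw/dt = Pm - Pmax*sin(delta) - K_ess*w
# where the K_ess*w term models ESS active power proportional to speed
# deviation. All parameter values are illustrative, not from the paper.
import math

def simulate(K_ess, steps=20000, dt=1e-3, M=5.0, Pm=0.8, Pmax=1.6):
    delta = math.asin(Pm / Pmax) + 0.3     # disturbed rotor angle (rad)
    w = 0.0                                # speed deviation
    peak = 0.0
    for i in range(steps):
        dw = (Pm - Pmax * math.sin(delta) - K_ess * w) / M
        delta += w * dt
        w += dw * dt
        if i > steps // 2:                 # track swings in the second half
            peak = max(peak, abs(w))
    return peak

# With ESS damping (K_ess > 0) the late-time speed swings shrink markedly:
print(simulate(K_ess=0.0), simulate(K_ess=2.0))
```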
Abstract:
The five room-temperature ionic liquids 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([CnMIM][N(Tf)2], n = 2, 4, 8, 10) and n-hexyltriethylammonium bis(trifluoromethylsulfonyl)imide ([N6222][N(Tf)2]) were investigated as solvents in which to study the electrochemical oxidation of N,N,N',N'-tetramethyl-para-phenylenediamine (TMPD) and N,N,N',N'-tetrabutyl-para-phenylenediamine (TBPD), using 20 μl micro-samples under vacuum conditions. The effect of dissolved atmospheric gases on the accessible electrochemical window was probed and determined to be less significant than seen previously for ionic liquids containing alternative anions. Chronoamperometric transients recorded at a microdisk electrode were analysed via non-linear curve fitting to yield values for the diffusion coefficients of the electroactive species without requiring knowledge of their initial concentration. Comparison of experimental and simulated cyclic voltammetry was then employed to corroborate these results and allow the diffusion coefficients of the electrogenerated species to be estimated. The diffusion coefficients obtained for the neutral compounds in the five ionic liquids via this analysis were, in units of 10^-11 m^2 s^-1, 2.62, 1.87, 1.12, 1.13 and 0.70 for TMPD, and 1.23, 0.80, 0.40, 0.52 and 0.24 for TBPD (listed in the same order of ionic liquids as above). The most significant consequence of changing the cationic component of the ionic liquid was found to be its effect on the solvent viscosity; the diffusion coefficient of each species was approximately inversely proportional to viscosity across the series of ionic liquids, in accordance with Walden's rule. (C) 2003 Elsevier B.V. All rights reserved.
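The Walden-rule observation in the final sentence can be checked numerically: if D is inversely proportional to viscosity η, the product D·η should be roughly constant across the series. The D values below are the TMPD values quoted in the abstract; the viscosities are hypothetical placeholders, since the paper's values are not reproduced here.

```python
# Walden-rule consistency check: D * eta should be roughly constant.
D_TMPD = [2.62, 1.87, 1.12, 1.13, 0.70]    # 1e-11 m^2/s, from the abstract
eta    = [34.0, 52.0, 93.0, 95.0, 154.0]   # mPa s, ILLUSTRATIVE values only

products = [d * e for d, e in zip(D_TMPD, eta)]
mean = sum(products) / len(products)
rel_spread = (max(products) - min(products)) / mean
print(products)
print(f"relative spread of D*eta: {rel_spread:.2f}")
```

A small relative spread of D·η across the five liquids is what "approximately inversely proportional to viscosity" means in practice.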
Abstract:
Okadaic acid (OA) and the structurally related toxins dinophysistoxin-1 (DTX-1) and DTX-2 are lipophilic marine biotoxins. The current reference method for the analysis of these toxins is the mouse bioassay (MBA). This method is under increasing criticism, both from an ethical point of view and because of its limited sensitivity and specificity. Alternative replacement methods must be rapid, robust, cost-effective, specific and sensitive. Although published immuno-based detection techniques have good sensitivities, their use is restricted by their inability to: (i) detect all of the OA toxins that contribute to contamination; and (ii) factor in the relative toxicities of each contaminant. Monoclonal antibodies (MAbs) were produced against OA, and an automated biosensor screening assay was developed and compared with ELISA techniques. The screening assay was designed to increase the probability of identifying a MAb capable of detecting all OA toxins. The result was the generation of a unique MAb which not only cross-reacted with both DTX-1 and DTX-2 but had a cross-reactivity profile in buffer that exactly reflected the intrinsic toxic potency of the OA group of toxins. Preliminary matrix studies reflected these results. This antibody is an excellent candidate for the development of a range of functional immunochemical-based detection assays for this group of toxins.
Abstract:
The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity, which can then be electrophoresed. Damaged DNA can enter the agarose and migrate, while undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory ‘tail’ DNA relative to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Comets selected by current acquisition methods are expected to be chosen both objectively and at random. In this paper we examine the ‘randomness’ of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. To achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it provides an impartial and reproducible method of comet analysis that can be used either manually or in automated systems. By making use of an unbiased sampling frame and microscope verniers, we are able to increase the precision of estimates of DNA damage. Results from a multiple-user pooled-variation experiment showed that the SRS technique attained lower variability than the traditional approach. The analysis of a single-user repetition experiment showed greater individual variances without being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide, as well as better user reproducibility.
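Systematic random sampling as used above can be sketched in a few lines: choose a random start within the first sampling interval, then select items at a fixed interval across the frame. This is a generic SRS illustration, not the paper's acquisition software.

```python
# Generic systematic random sampling (SRS) sketch; not the paper's software.
import random

def systematic_sample(n_items, n_samples, rng=random):
    """Return sorted indices of a systematic random sample.
    A random start in the first interval fixes every later selection."""
    interval = n_items / n_samples
    start = rng.uniform(0, interval)
    return [int(start + i * interval) for i in range(n_samples)]

# e.g. 10 comets picked systematically from a frame of 100 candidates:
print(systematic_sample(100, 10))
```

Because only the start is random, any observer using the same frame and interval reproduces the same selection rule, which is the source of the impartiality claimed above.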
Abstract:
This article reports on research carried out on 200 child welfare files from the largest welfare authority in Northern Ireland, covering the period 1950-1968. The literature review provides a commentary on some of the major debates surrounding child welfare and protection social work from the perspective of its historical development. The report of the research which follows offers an insight into one core, and less well-known, period of child welfare history in Northern Ireland between the two Children and Young Persons Acts (1950 & 1968). Using a method of discourse analysis influenced by Michel Foucault, a detailed description of the nature of practice is offered. This paper is offered as a work in progress, with further work planned for the dissemination of more detailed analysis of the method and outcomes. The research asks a few core questions based on problems identified, from the standpoint of the present, in our current understandings of child welfare and protection histories. While recognising the limitations of this study and the need for broader analysis of the wider context surrounding child welfare practice at the time, it is argued that some salient conclusions can be drawn about continuity and discontinuity in practice which are of interest to practitioners and students of child welfare social work.
Abstract:
We present here evidence for the observation of magnetohydrodynamic (MHD) sausage modes in magnetic pores in the solar photosphere. Further evidence for the omnipresent nature of acoustic global modes is also found. The empirical mode decomposition method of wave analysis is used to identify the oscillations, detected through a 4170 Å "blue continuum" filter observed with the Rapid Oscillations in the Solar Atmosphere (ROSA) instrument. Out-of-phase periodic behavior in pore size and intensity is used as an indicator of the presence of magnetoacoustic sausage oscillations. Multiple signatures of the magnetoacoustic sausage mode are found in a number of pores, with periods ranging from as short as 30 s up to 450 s. A number of the magnetoacoustic sausage-mode oscillations have periods of 3 and 5 minutes, similar to the acoustic global modes of the solar interior. It is proposed that these global oscillations could be the driver of the sausage-type magnetoacoustic MHD wave modes in pores.
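The out-of-phase criterion mentioned above has a simple quantitative form: if pore area and intensity oscillate roughly 180° apart, their zero-lag correlation is strongly negative. The sketch below uses synthetic signals, not ROSA data, and the cadence and period values are illustrative.

```python
# Anti-phase signature sketch with synthetic signals; not ROSA observations.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

t = [i * 0.5 for i in range(240)]                  # 120 s at 0.5 s cadence
period = 30.0                                      # shortest period reported
area      = [math.sin(2 * math.pi * s / period) for s in t]
intensity = [-a for a in area]                     # 180 degrees out of phase
print(pearson(area, intensity))                    # close to -1
```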
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer to discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead, and we perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the belief that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for such compilers. In cases where traditional compilation techniques do find parallelism, our approach allows higher degrees of parallelism to be discovered, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
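Coarse-grain pipeline parallelism of the kind discussed above can be sketched with threads and queues: each stage of an outer loop runs in its own thread, and whole data items flow between stages. This is a generic illustration of the pattern, not the paper's tool or the code transformations it proposes.

```python
# Generic coarse-grain pipeline: one thread per stage, queues between stages.
import threading
import queue

def pipeline(items, stages):
    """Run each stage function in its own thread; items flow via queues."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def run(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:          # poison pill: shut this stage down
                q_out.put(None)
                return
            q_out.put(stage(item))

    threads = [threading.Thread(target=run, args=(s, qs[i], qs[i + 1]))
               for i, s in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:                # feed the first stage
        qs[0].put(item)
    qs[0].put(None)
    results = []
    while (r := qs[-1].get()) is not None:
        results.append(r)
    for t in threads:
        t.join()
    return results

# Two stages overlap in time while order is preserved by the FIFO queues:
print(pipeline(range(5), [lambda x: x + 1, lambda x: x * 2]))
```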