38 results for Helicity method, subtraction method, numerical methods, random polarizations

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

Reinforced concrete (RC) jacketing is a common method for retrofitting existing columns with poor structural performance. It can be applied in two different ways: if the continuity of the jacket is ensured, the axial load of the column can be transferred to the jacket, which will be directly loaded; conversely, if no continuity is provided, the jacket induces only a confinement action. In both cases the evaluation of strength and ductility is rather complex, owing to the different physical phenomena involved, such as confinement, core-jacket composite action, preload and buckling of the longitudinal bars.
Although different theoretical studies have been carried out to calculate the confinement effects, a practical approach for evaluating the flexural capacity and ductility is still missing. The calculation of these quantities therefore often relies on commercial computer programs based on numerical methods such as the fiber method or the finite element method.
This paper presents a simplified approach to calculate the flexural strength and ductility of square RC jacketed sections subjected to axial load and bending moment. In particular, the proposed approach is based on the calibration of the stress-block parameters, including the confinement effect. Equilibrium equations are derived, and buckling of the longitudinal bars is modeled with a suitable stress-strain law. Moment-curvature curves are then obtained with simple calculations. Finally, comparisons are made with numerical analyses carried out with the code OpenSees and with experimental data available in the literature, showing good agreement.
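As an illustration of the kind of sectional calculation the paper simplifies, the sketch below finds the neutral-axis depth of a rectangular RC section from axial equilibrium by bisection and then evaluates the bending moment from an equivalent rectangular stress block. This uses generic textbook stress-block parameters, not the paper's calibrated confined-concrete values, and omits the bar-buckling stress-strain law; the function name and all parameter values are illustrative.

```python
import numpy as np

def stress_block_capacity(b, h, d, d_p, As, As_p, fc, fy, Es, N,
                          alpha=0.85, beta=0.80, eps_cu=0.0035):
    """Moment capacity of a rectangular RC section under axial load N,
    using an equivalent rectangular stress block. alpha/beta are generic
    stress-block parameters, not the paper's calibrated confined values."""
    def steel_forces(c):
        # plane-sections assumption: steel strains from neutral-axis depth c
        eps_s  = eps_cu * (d - c) / c        # tension steel (+ in tension)
        eps_sp = eps_cu * (c - d_p) / c      # compression steel
        fs  = np.clip(Es * eps_s,  -fy, fy)
        fsp = np.clip(Es * eps_sp, -fy, fy)
        return As * fs, As_p * fsp

    def axial_residual(c):
        Cc = alpha * fc * b * min(beta * c, h)   # concrete resultant
        Ts, Cs = steel_forces(c)
        return Cc + Cs - Ts - N                  # zero at equilibrium

    lo, hi = 1e-6, 10 * h                        # bisection on c
    for _ in range(80):
        c = 0.5 * (lo + hi)
        hi, lo = (c, lo) if axial_residual(c) > 0 else (hi, c)

    a = min(beta * c, h)
    Ts, Cs = steel_forces(c)
    Cc = alpha * fc * b * a
    # moment about the section mid-height
    M = Cc * (h / 2 - a / 2) + Cs * (h / 2 - d_p) + Ts * (d - h / 2)
    return c, M

# illustrative 400 mm square section (forces in N, stresses in MPa)
c, M = stress_block_capacity(b=400, h=400, d=360, d_p=40, As=1200,
                             As_p=1200, fc=30, fy=450, Es=200e3, N=500e3)
```

The ultimate curvature then follows as eps_cu / c, so sweeping the axial load yields points toward the moment-curvature diagrams the paper derives.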

Relevance:

100.00%

Publisher:

Abstract:

The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity, which can then be electrophoresed. Damaged DNA can enter the agarose and migrate, while undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory ‘tail’ DNA relative to the total DNA in the cell. The basis of these arbitrary values is established in the comet acquisition phase, using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Comets acquired in this phase are expected to be selected both objectively and at random. In this paper we examine the ‘randomness’ of the acquisition phase and propose an alternative method that offers both objective and unbiased comet selection. To achieve this, we adopt a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it provides an impartial and reproducible method of comet analysis that can be applied both manually and in automated systems. By making use of an unbiased sampling frame and the microscope verniers, we are able to increase the precision of estimates of DNA damage. Results from a multiple-user pooled-variation experiment showed that the SRS technique attained lower variability than the traditional approach. Analysis of a single-user repetition experiment showed greater individual variances without being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide, as well as better user reproducibility.
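A minimal sketch of the systematic random sampling idea, assuming the slide is scanned as a fixed sequence of microscope fields addressed by the vernier scales: a single random offset fixes the entire sample, after which every field at a fixed step is scored, so the scorer has no freedom to cherry-pick comets. Field counts and sizes here are illustrative.

```python
import random

def systematic_random_sample(n_fields, n_samples):
    """Systematic random sampling (SRS): one uniform random start within
    the first sampling interval, then a fixed step. Only the offset is
    random, which makes the selection both unbiased and reproducible."""
    step = n_fields / n_samples          # sampling interval
    start = random.uniform(0, step)      # the only random choice made
    return [int(start + i * step) for i in range(n_samples)]

# e.g. score 50 fields from a slide traversed as 1000 vernier positions
fields_to_score = systematic_random_sample(1000, 50)
```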

Relevance:

100.00%

Publisher:

Abstract:

The R-matrix method, when applied to the study of intermediate energy electron scattering by the hydrogen atom, gives rise to a large number of two-electron integrals between numerical basis functions. Each integral is evaluated independently of the others, making this a prime candidate for a parallel implementation. In this paper, we present a parallel implementation of this routine which uses a Graphics Processing Unit (GPU) as a co-processor, giving a speedup of approximately 20 times compared with a sequential version. We briefly consider the properties of this calculation which make a GPU implementation appropriate, with a view to identifying other calculations which might similarly benefit.
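The property that makes the calculation GPU-friendly is that every integral is independent of all the others. A rough CPU-side sketch of that structure, using simple overlap-type radial integrals as a stand-in for the actual two-electron R-matrix integrals (which involve double radial integrations) and NumPy broadcasting in place of the paper's CUDA kernels; the function name, grids and basis functions are all illustrative.

```python
import numpy as np

def pairwise_radial_integrals(F, weights):
    """All integrals I[i, j] = integral of f_i(r) * f_j(r) dr, by
    quadrature on a shared radial grid. Every entry is independent of
    the rest, so the batch maps one-to-one onto GPU threads; here
    broadcasting stands in for the kernel launch."""
    return (F * weights) @ F.T           # (n_basis, n_basis) in one pass

# illustrative setup: 200 numerical basis functions on a radial grid
r = np.linspace(1e-4, 50.0, 4000)
w = np.full_like(r, r[1] - r[0])         # uniform-grid quadrature weights
F = np.array([np.exp(-(k + 1) * r / 40.0) * r for k in range(200)])
I = pairwise_radial_integrals(F, w)
```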

Relevance:

100.00%

Publisher:

Abstract:

Large-scale commercial exploitation of wave energy is certain to require the deployment of wave energy converters (WECs) in arrays, creating ‘WEC farms’. An understanding of the hydrodynamic interactions in such arrays is essential for determining optimum layouts of WECs, as well as for calculating the area of ocean that the farms will require. It is equally important to consider the potential impact of wave farms on the local and distal wave climates and coastal processes; a poor understanding of the resulting environmental impact may hamper progress, as it would make planning consents more difficult to obtain. It is therefore clear that an understanding of the interactions between WECs within a farm is vital for the continued development of the wave energy industry.
To support WEC farm design, a range of different numerical models has been developed, with both wave phase-resolving and wave phase-averaging models now available. Phase-resolving methods are primarily based on potential flow models and include semi-analytical techniques, boundary element methods and methods involving the mild-slope equations. Phase-averaging methods are all based around spectral wave models, with supra-grid and sub-grid wave farm models available as alternative implementations.
The aims, underlying principles, strengths, weaknesses and obtained results of the main numerical methods currently used for modelling wave energy converter arrays are described in this paper, using a common framework. This allows a qualitative comparative analysis of the different methods to be performed at the end of the paper, including consideration of the conditions under which the models may be applied, the output of the models and the relationship between array size and computational effort. Guidance is also presented for developers on the most suitable numerical method for given aspects of WEC farm design: certain models are more suitable for studying near-field effects, for instance, whilst others are preferable for investigating far-field effects of the WEC farms. Furthermore, the analysis presented in this paper identifies areas in which the numerical modelling of WEC arrays is relatively weak, and thus highlights those in which future developments are required.

Relevance:

100.00%

Publisher:

Abstract:

It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters of each model term (e.g. the width of a Gaussian function) need to be pre-determined, either from expert experience or through an exhaustive search. An alternative is to optimise them with a gradient-based technique (e.g. Newton’s method). Unfortunately, all of these methods remain computationally expensive. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine, achieved by effectively integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalisation capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
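A minimal sketch of the two ingredients being combined, assuming a single-hidden-layer ELM with sigmoid nodes: the hidden layer is random and fixed, the output weights come from a regularised least-squares solve, and the leave-one-out error is obtained in closed form via the PRESS statistic rather than by refitting n times. The stepwise term-selection stage of the cited two-stage procedure is omitted, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit_with_loo(X, y, n_hidden=40):
    """Extreme learning machine: random input weights, analytic output
    weights, plus closed-form leave-one-out (PRESS) error."""
    n, d = X.shape
    W = rng.normal(size=(d, n_hidden))       # random input weights (fixed)
    b = rng.normal(size=n_hidden)            # random biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer outputs

    lam = 1e-6                               # small ridge for stability
    A = H.T @ H + lam * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ y)       # least-squares output weights

    # PRESS: e_loo_i = e_i / (1 - h_ii), h_ii = diagonal of the hat matrix
    resid = y - H @ beta
    h_diag = np.einsum('ij,ji->i', H, np.linalg.solve(A, H.T))
    loo_mse = np.mean((resid / (1.0 - h_diag)) ** 2)
    return beta, loo_mse

# compactness via LOO: keep the model size that minimises the LOO error
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
best_size = min(range(5, 60, 5), key=lambda m: elm_fit_with_loo(X, y, m)[1])
```

The closed-form PRESS evaluation is what keeps the selection loop cheap: no model is ever refitted on a held-out fold.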

Relevance:

100.00%

Publisher:

Abstract:

This paper presents experimental and numerical studies into the hydrodynamic loading of a bottom-hinged large buoyant flap held rigidly upright in waves. Possible applications and limitations of physical experiments, a linear potential analytical method, a linear potential numerical method, a weakly non-linear tool and RANS CFD simulations are discussed. The different domains of applicability of these research techniques are highlighted, considering the validity of the underlying assumptions, the complexity of application, and feasibility in terms of resources such as the time and computing power needed to obtain results. Conclusions are drawn regarding the future extension of the numerical methods to the case of a moving flap.

Relevance:

100.00%

Publisher:

Abstract:

Research aims: 
To describe service provision for the transition from children’s to adult services for young people with life-limiting conditions in Northern Ireland, and to identify organisational factors that promote or inhibit effective transition. 
Study population: 
Health, social, educational and charitable organisations providing transition services to young people with life-limiting conditions in Northern Ireland. 
Study design and methods: 
A questionnaire has been developed by the research team, drawing on examples from the literature and the advice of an expert advisory group. The questionnaire was piloted with clinicians, academics and researchers in June 2013. It focuses on components of practice that may promote continuity in the transition from child to adult care for young people with a life-limiting condition. The survey will be distributed throughout Northern Ireland to an estimated 75 organisations, following the Dillman total design survey method. Numerical data will be analysed using PASW statistical software to generate descriptive statistics, along with a thematic analysis of data generated by open-ended questions.
Results and interpretations: 
The survey will provide a description of services, transition policies, approaches to managing transition, categories of service users, the ages at which transition starts and completes, experiences with minority ethnic groups, the input of service users to the process, organisational factors promoting or hindering effective transition, links between services, and service providers’ recommendations for improvements in services. The outcomes will be an overview of the transition services currently provided in Northern Ireland, identifying models of good practice and the key factors influencing the quality, safety and continuity of care. Survey results are due early in 2014.

Relevance:

100.00%

Publisher:

Abstract:

Algorithms for handling concept drift are important for various applications, including video analysis and smart grids. In this paper we present a decision tree ensemble classification method for concept drift based on the Random Forest algorithm. A weighted majority voting ensemble aggregation rule is employed, based on the ideas of the Accuracy Weighted Ensemble (AWE) method. In our case the base learner weight is computed for each evaluated sample using the base learner’s accuracy and the intrinsic proximity measure of the Random Forest. Our algorithm exploits both temporal weighting of samples and ensemble pruning as a forgetting strategy. We present the results of an empirical comparison of our method with the original Random Forest with incorporated replace-the-loser forgetting, and with other state-of-the-art concept-drift classifiers such as AWE2.
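A minimal sketch of the weighted-majority-vote and pruning mechanics, assuming scikit-learn-style base learners with a `predict` method. The per-sample weighting via the Random Forest proximity measure used in the paper is omitted, leaving only the AWE-style accuracy weighting; the class and method names are illustrative.

```python
import numpy as np

class WeightedDriftEnsemble:
    """Weighted-majority-vote ensemble in the spirit of AWE: base-learner
    weights track accuracy on the most recent data, and the weakest
    member is pruned when capacity is exceeded (forgetting)."""

    def __init__(self, max_members=10):
        self.members, self.weights = [], []
        self.max_members = max_members

    def update(self, new_clf, X_recent, y_recent):
        # re-weight every member (including the newcomer) by recent accuracy
        self.members.append(new_clf)
        self.weights = [np.mean(m.predict(X_recent) == y_recent)
                        for m in self.members]
        # ensemble pruning as a forgetting strategy
        if len(self.members) > self.max_members:
            worst = int(np.argmin(self.weights))
            del self.members[worst], self.weights[worst]

    def predict(self, X):
        votes = {}                            # per-sample weighted tallies
        for m, w in zip(self.members, self.weights):
            for i, label in enumerate(m.predict(X)):
                votes.setdefault(i, {}).setdefault(label, 0.0)
                votes[i][label] += w          # weighted majority vote
        return np.array([max(v, key=v.get) for _, v in sorted(votes.items())])
```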

Relevance:

100.00%

Publisher:

Abstract:

The structure of a turbulent non-premixed flame of a biogas fuel in a hot and diluted coflow, mimicking moderate or intense low-oxygen dilution (MILD) combustion, is studied numerically. The biogas fuel is obtained by diluting Dutch natural gas (DNG) with CO2. The results of biogas combustion are compared with those of DNG combustion in the Delft Jet-in-Hot-Coflow (DJHC) burner. New experimental measurements of lift-off height and of velocity and temperature statistics have been made to provide a database for evaluating the capability of numerical methods in predicting the flame structure. Compared to the lift-off height of the DNG flame, addition of 30% carbon dioxide to the fuel increases the lift-off height by less than 15%. Numerical simulations are conducted by solving the RANS equations, using the Reynolds stress model (RSM) as turbulence model in combination with the Eddy Dissipation Concept (EDC) and transported probability density function (PDF) approaches as turbulence-chemistry interaction models. The DRM19 reduced mechanism is used as the chemical kinetics with the EDC model. A tabulated chemistry model based on the Flamelet Generated Manifold (FGM) is adopted in the PDF method; the table describes a non-adiabatic three-stream mixing problem between fuel, coflow and ambient air, based on igniting counterflow diffusion flamelets. The results show that the EDC/DRM19 and PDF/FGM models predict the experimentally observed decreasing trend of lift-off height with increasing coflow temperature. Although more detailed chemistry is used with EDC, the temperature fluctuations at the coflow inlet (approximately 100 K) cannot be included, resulting in a significant overprediction of the flame temperature. Only the PDF modelling results with temperature fluctuations predict the correct mean temperature profiles of the biogas case and compare well with the experimental temperature distributions.

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we have previously proposed. This algorithm converges to feature basis vectors that are localised and oriented in both the spatial and frequency domains. Here we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, the noise is reduced successfully by applying a NIG-based maximum a posteriori (MAP) estimator to an image corrupted by additive Gaussian noise. This shrinkage technique, referred to here as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalised signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. We also compare its effectiveness with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter; the simulation results show that our method outperforms all three of these denoising approaches.
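A minimal sketch of the shrinkage step alone, assuming the non-negative sparse coefficients have already been computed, and using a generic soft-threshold rule as a stand-in for the NIG-based MAP estimator (the actual rule is a nonlinearity derived from the fitted NIG density parameters; the function name and threshold constant are illustrative).

```python
import numpy as np

def shrink_coefficients(coeffs, sigma_noise, k=3.0):
    """Generic sparse-coding shrinkage: attenuate coefficients toward
    zero in proportion to the noise level. This soft threshold is a
    stand-in for the NIG MAP shrinkage rule, which has the same shape
    but is adapted to the fitted density."""
    t = k * sigma_noise                        # threshold from noise level
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# pipeline shape: transform -> shrink -> inverse transform, e.g.
# denoised = basis @ shrink_coefficients(basis.T @ noisy_patch, sigma)
```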

Relevance:

100.00%

Publisher:

Abstract:

Numerical methods have enabled the simulation of complex problems in offshore and marine engineering. A significant challenge in these simulations is the creation of a realistic wave field: a good numerical wave tank requires the creation and absorption of waves at various locations, and several numerical wavemakers with these capabilities have been presented in the past. This paper reviews four different wavemaker methods and discusses their limitations, computational efficiency, requirements on the mesh and preprocessing, and complexity of implementation.
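As one concrete example of a widely used wavemaker approach, the relaxation-zone method (the abstract does not name the four methods reviewed, so this is illustrative), the sketch below blends the computed free surface toward a target analytical wave inside a generation zone. The exponential blending profile is a common choice in the literature; the target wave, wavenumber and zone length are all placeholder values.

```python
import numpy as np

def relax_towards_target(eta_computed, x, t, zone_end,
                         amp=1.0, k=0.1, omega=0.8):
    """Relaxation-zone wave generation: for x < zone_end the computed
    free surface is blended toward a target (here linear Airy) wave.
    The weight is 1 at the inlet (pure target) and 0 at the zone end
    (pure computed solution), giving smooth generation and absorption."""
    eta_target = amp * np.cos(k * x - omega * t)          # target wave
    chi = np.clip(x / zone_end, 0.0, 1.0)                 # 0 inlet, 1 end
    w = 1.0 - (np.exp(chi ** 3.5) - 1.0) / (np.e - 1.0)   # blending weight
    return w * eta_target + (1.0 - w) * eta_computed
```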

Relevance:

100.00%

Publisher:

Abstract:

This paper is concerned with the finite element simulation of debonding failures in FRP-strengthened concrete beams. A key challenge for such simulations is that common solution techniques such as the Newton-Raphson method and the arc-length method often fail to converge. This paper examines the effectiveness of using a dynamic analysis approach in such FE simulations, in which debonding failure is treated as a dynamic problem and solved using an appropriate time integration method. Numerical results are presented to show that an appropriate dynamic approach effectively overcomes the convergence problem and provides accurate predictions of test results.
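A minimal sketch of the dynamic-approach idea on a single degree of freedom, assuming a softening internal-force law of the kind that makes a static Newton-Raphson solve fail at the limit point: an implicit Newmark (average-acceleration) scheme with an inner Newton loop marches through the post-peak range, because the inertia term keeps the effective tangent positive. This is a generic illustration, not the paper's FE model; all parameter values are illustrative.

```python
import numpy as np

def newmark_step(u, v, a, dt, m, c, f_int, df_int, p_next,
                 beta=0.25, gamma=0.5, tol=1e-8):
    """One implicit Newmark step with an inner Newton loop.
    f_int/df_int: internal force and its tangent (may soften)."""
    u_new = u                                            # predictor
    for _ in range(50):
        a_new = (u_new - u - dt * v) / (beta * dt**2) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        r = m * a_new + c * v_new + f_int(u_new) - p_next   # residual
        if abs(r) < tol:
            break
        # effective tangent: the inertia term regularises a negative df_int
        k_eff = m / (beta * dt**2) + c * gamma / (beta * dt) + df_int(u_new)
        u_new -= r / k_eff
    return u_new, v_new, a_new

# softening law: static tangent goes negative past the peak at u = 0.5
f_int  = lambda u: 10.0 * u * np.exp(-2.0 * u)
df_int = lambda u: 10.0 * np.exp(-2.0 * u) * (1.0 - 2.0 * u)

u = v = a = 0.0
dt = 0.01
for n in range(1, 1001):
    p = min(0.02 * n, 5.0)            # slowly ramped load past the peak
    u, v, a = newmark_step(u, v, a, dt, 1.0, 0.5, f_int, df_int, p)
    # past the peak the response accelerates dynamically, as in a
    # debonding failure, yet every step's Newton loop still converges
```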

Relevance:

100.00%

Publisher:

Abstract:

With tweet volumes reaching 500 million a day, sampling is inevitable for any application using Twitter data. Recognising this, data providers such as Twitter, Gnip and Boardreader license sampled data streams priced in accordance with the sample size. Big Data applications working with sampled data are interested in a sample large enough to be representative of the universal dataset. Previous work on the representativeness issue has considered ensuring that the global occurrence rates of key terms can be reliably estimated from the sample, and present techniques allow sample size estimation under probabilistic bounds on occurrence rates for the case of uniform random sampling. In this paper, we consider the problem of further improving sample size estimates by leveraging stratification in Twitter data. We analyse our estimates through an extensive study using simulations and real-world data, establishing the superiority of our method over uniform random sampling. Our work provides the technical know-how for data providers to expand their portfolios to include stratified sampled datasets, while applications benefit by being able to monitor more topics and events at the same data and computing cost.
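A minimal sketch of why stratification helps, assuming per-stratum occurrence rates are known or estimable (the stratum definitions, rates and error bound below are illustrative): for a fixed confidence bound on an overall proportion, proportional-allocation stratified sampling needs at most as many tweets as uniform random sampling, because only the within-stratum variance remains in the estimator.

```python
import numpy as np

def needed_sample_uniform(p, eps, z=1.96):
    """Uniform random sampling: n for a (z, eps) bound on a proportion."""
    return z**2 * p * (1 - p) / eps**2

def needed_sample_stratified(p_strata, w_strata, eps, z=1.96):
    """Proportional-allocation stratified sampling: only the
    within-stratum variance contributes, so n is never larger than
    the uniform-sampling estimate for the same bound."""
    p_strata, w_strata = map(np.asarray, (p_strata, w_strata))
    var_within = np.sum(w_strata * p_strata * (1 - p_strata))
    return z**2 * var_within / eps**2

# a term occurring at 2% overall but concentrated in one stratum
w = np.array([0.1, 0.9])                      # stratum weights
p = np.array([0.17, 1 / 300])                 # per-stratum rates (2% overall)
n_uniform    = needed_sample_uniform(w @ p, 0.005)
n_stratified = needed_sample_stratified(p, w, 0.005)   # always <= n_uniform
```

The gap between the two sizes grows with the between-stratum variance, which is exactly the component that stratification removes from the estimator.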