953 results for DFT calculation


Relevance: 10.00%

Abstract:

To investigate whether venous occlusion plethysmography (VOP) may be used to measure the high rates of arterial inflow associated with exercise, venous occlusions were performed at rest and following dynamic handgrip exercise at 15, 30, 45, and 60 % of maximum voluntary contraction (MVC) in seven healthy males. The effect of including more than one cardiac cycle in the calculation of blood flow was assessed by comparing the cumulative blood flow over one, two, three, or four cardiac cycles. Including more than one cardiac cycle at 30 and 60 % MVC, and more than two cardiac cycles at 15 and 45 % MVC, resulted in a lower blood flow compared with using only the first cardiac cycle (P < 0.05). The small time interval over which arterial inflow was measured (~1 s) did not affect the reproducibility of the technique. Reproducibility (coefficient of variation for arterial inflow over three trials) tended to be poorer at the higher workloads, although this difference was not significant (12.7 ± 6.6 %, 16.2 ± 7.3 %, and 22.9 ± 9.9 % for the 15, 30, and 45 % MVC workloads; P = 0.102). There was also a tendency for greater reproducibility with the inclusion of more cardiac cycles at the highest workload, but this did not reach significance (P = 0.070). In conclusion, when calculated over the first cardiac cycle only during venous occlusion, high rates of forearm blood flow (FBF) can be measured using VOP without a significant decrease in the reproducibility of the measurement.
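
As a rough illustration of the flow calculation described in this abstract, the sketch below (Python with NumPy) takes arterial inflow as the slope of the volume trace over the first one or more cardiac cycles after cuff inflation, and reproducibility as the coefficient of variation across repeat trials. The function names, the cycle segmentation and the units are assumptions made for illustration, not the authors' code.

    import numpy as np

    def arterial_inflow(time_s, volume_pct, cycle_bounds, n_cycles=1):
        """Arterial inflow (ml per 100 ml tissue per min) from a venous
        occlusion trace, taken as the slope over the first n cardiac cycles.

        time_s       : sample times in seconds
        volume_pct   : limb volume change in ml per 100 ml tissue
        cycle_bounds : sample indices marking the start of each cardiac
                       cycle after cuff inflation (hypothetical segmentation;
                       needs at least n_cycles + 1 entries)
        """
        start, end = cycle_bounds[0], cycle_bounds[n_cycles]
        slope, _ = np.polyfit(time_s[start:end], volume_pct[start:end], 1)
        return slope * 60.0  # per-second slope -> per-minute inflow

    def coefficient_of_variation(flows):
        """Reproducibility across repeat trials, as quoted in the abstract (%)."""
        flows = np.asarray(flows, dtype=float)
        return 100.0 * flows.std(ddof=1) / flows.mean()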

Relevance: 10.00%

Abstract:

The arsenite mineral finnemanite Pb5(As3+O3)3Cl has been studied by Raman spectroscopy. The most intense Raman band, at 871 cm-1, is assigned to the ν1 (AsO3)3- symmetric stretching vibration. Three Raman bands at 898, 908 and 947 cm-1 are assigned to the ν3 (AsO3)3- antisymmetric stretching vibration. The observation of multiple antisymmetric stretching vibrations suggests that the (AsO3)3- units are not equivalent in the molecular structure of finnemanite. Two Raman bands at 383 and 399 cm-1 are assigned to the ν2 (AsO3)3- bending modes. DFT calculations enabled the positions of the (AsO3)3- symmetric stretching mode (839 cm-1), the antisymmetric stretching mode (813 cm-1) and the deformation mode (449 cm-1) to be calculated. Raman bands are also observed at 115, 145, 162, 176, 192, 216 and 234 cm-1. The two most intense of these bands, at 176 and 192 cm-1, are assigned to PbCl stretching vibrations and result from transverse/longitudinal splitting. The bands at 145 and 162 cm-1 may be assigned to Cl-Pb-Cl bending modes.

Relevance: 10.00%

Abstract:

The selected arsenite minerals leiteite, reinerite and cafarsite have been studied by Raman spectroscopy. DFT calculations enabled the positions of the AsO22- symmetric stretching mode (839 cm-1), the antisymmetric stretching mode (813 cm-1) and the deformation mode (449 cm-1) to be calculated. The Raman spectrum of leiteite shows bands at 804 and 763 cm-1, assigned to the As2O42- symmetric and antisymmetric stretching modes. The most intense Raman band of leiteite, at 457 cm-1, is assigned to the ν2 As2O42- bending mode. The Raman spectrum of leiteite is compared with those of the arsenite minerals reinerite and cafarsite.

Relevance: 10.00%

Abstract:

The mixed anion mineral dixenite has been studied by Raman spectroscopy, complemented by infrared spectroscopy. The Raman spectrum of dixenite shows bands at 839 and 813 cm-1, assigned to the (AsO3)3- symmetric and antisymmetric stretching modes. The most intense Raman band of dixenite, at 526 cm-1, is assigned to the ν2 AsO33- bending mode. DFT calculations enabled the positions of the AsO22- symmetric stretching mode (839 cm-1), the antisymmetric stretching mode (813 cm-1) and the deformation mode (449 cm-1) to be calculated. Raman bands at 1026 and 1057 cm-1 are assigned to SiO42- symmetric stretching vibrations, and bands at 1349 and 1386 cm-1 to SiO42- antisymmetric stretching vibrations. Both the Raman and infrared spectra indicate the presence of water in the structure of dixenite. This brings into question the commonly accepted formula of dixenite, CuMn2+14Fe3+(AsO3)5(SiO4)2(AsO4)(OH)6; the formula may be better written as CuMn2+14Fe3+(AsO3)5(SiO4)2(AsO4)(OH)6·xH2O.

Relevance: 10.00%

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and each of the five chapters therefore consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally the thermal vibrations of nanoparticles on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations; these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple time scale approach to calculating the long-term effects of the standing acoustic field on the particles that are interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields.
Finally, in this thesis we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there has been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
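
As a rough sketch of the final step described above, the following Python snippet solves a one-dimensional diffusion equation with a spatially varying effective diffusion constant using an explicit finite volume update. It is a minimal illustration under assumed inputs (a pre-computed D(x), a uniform grid, no-flux boundaries); the thesis' own routine and its Fokker-Planck-derived diffusion constant are not reproduced here.

    import numpy as np

    def diffuse_fv(p0, D, dx, dt, steps):
        """Explicit finite-volume solver for dp/dt = d/dx( D(x) dp/dx )
        with no-flux boundaries.

        p0    : initial particle probability density per cell
        D     : effective diffusion constant per cell (e.g. derived from the
                lattice and the temperature field)
        dx,dt : grid spacing and time step; stability needs dt <= dx**2/(2*max(D))
        """
        p = np.array(p0, dtype=float)
        D = np.asarray(D, dtype=float)
        D_face = 0.5 * (D[:-1] + D[1:])            # diffusivity at cell faces
        for _ in range(steps):
            F = -D_face * np.diff(p) / dx          # Fickian flux at interior faces
            p[1:-1] += dt / dx * (F[:-1] - F[1:])  # interior cells
            p[0]    += dt / dx * (-F[0])           # no-flux left boundary
            p[-1]   += dt / dx * (F[-1])           # no-flux right boundary
        return p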

Relevance: 10.00%

Abstract:

This paper examines the varying approaches and methodologies adopted when the calculation of holding costs is undertaken, with a focus on greenfield development. Whilst there may be some consistency in embracing first principles of holding cost theory, a review of the literature reveals a considerable lack of uniformity in this regard. There is even less clarity in quantitative determination, especially in Australia, where only limited empirical analysis has been undertaken. Despite a growing quantum of research into various elements connected with housing affordability, the matter of holding costs has not been well addressed, notwithstanding its place in the highly prioritised Australian Government housing research agenda. The end result has been a modicum of qualitative commentary relating to holding costs, with few attempts at finer-tuned analysis that exposes a quantified level of holding cost calculated with underlying rigour. Holding costs can take many forms, but they inevitably involve the computation of the "carrying costs" of an initial outlay that has yet to fully realise its ultimate yield. Although sometimes considered a "hidden" cost, it is submitted that holding costs prospectively represent a major determinant of value. If this is the case, then, considered in the context of housing affordability, their impact is potentially pervasive.
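
As a minimal illustration of the "carrying cost" idea discussed above, the sketch below compounds an initial outlay at an assumed opportunity-cost rate over the holding period. The figures and the single-rate treatment are illustrative assumptions, not the paper's methodology.

    def carrying_cost(outlay, annual_rate, holding_years):
        """Carrying (holding) cost of an initial outlay that has yet to realise
        its yield: the interest/opportunity cost accrued while the capital is
        tied up, here as simple annual compounding of a single rate."""
        return outlay * ((1.0 + annual_rate) ** holding_years - 1.0)

    # e.g. land acquired for $400,000 and held for 3 years at a 7% opportunity cost
    print(round(carrying_cost(400_000, 0.07, 3)))   # about 90,017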

Relevance: 10.00%

Abstract:

It is widely held that strong relationships exist between housing, economic status, and well-being. This is exemplified by widespread housing stock surpluses in many countries, which threaten to destabilise numerous aspects of individual and community life. However, the position of housing demand and supply is not consistent across countries. The Australian position provides a distinct contrast, whereby seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of immigration, and further buoyed by sustained historically low interest rates, increasing income levels, and increased government assistance for first home buyers, this strong housing demand ensures that elements related to housing affordability continue to gain prominence. A significant, but less visible, factor impacting housing affordability - particularly for new housing development - relates to holding costs. These costs are in many ways "hidden" and cannot always be easily identified. Although holding cost is only one contributor, the nature and extent of its impact requires elucidation. In its simplest form, it commences with a calculation of the interest or opportunity cost of land holding. However, there is significantly more complexity for major new developments, particularly greenfield property development. Preliminary analysis conducted by the author suggests that even small shifts in the primary factors impacting holding costs can appreciably affect housing affordability, and notably to a greater extent than commonly held. Their importance and perceived high-level impact can be gauged from the unprecedented level of attention policy makers have given them over recent years. This may be evidenced by the embedding of specific strategies to address burgeoning holding costs (and particularly those cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require investigation. Firstly, the computation and methodology behind the calculation of holding costs varies widely; in fact, it is not only variable but in some instances completely ignored. Secondly, some ambiguity exists in terms of the inclusion of various elements of holding costs, thereby affecting the assessment of their relative contribution. This may in part be explained by their nature: such costs are not always immediately apparent. Some forms of holding costs are not as visible as the more tangible cost items associated with greenfield development such as regulatory fees, government taxes, acquisition costs, selling fees and commissions. Holding costs are also more difficult to evaluate since, for the most part, they must be assessed over time in an ever-changing environment, based on their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. By extending research in the general area of housing affordability, this thesis seeks to provide a more detailed investigation of the elements related to holding costs and, in so doing, determine the size of their impact specifically on the end user. This involves the development of soundly based economic and econometric models which seek to clarify the component impacts of holding costs.
Ultimately, there are significant policy implications in relation to the framework used in Australian jurisdictions to promote, retain, or otherwise maximise the opportunities for affordable housing.

Relevance: 10.00%

Abstract:

This paper describes experiments conducted in order to simultaneously tune 15 joints of a humanoid robot. Two Genetic Algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned to minimise tracking error while at the same time achieving smooth joint motion. Joint smoothness is crucial for accurate online ZMP (zero moment point) estimation, a prerequisite for a closed-loop dynamically stable humanoid walking gait. Results in both simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA-based methods.
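
A minimal sketch of the kind of objective such a tuning procedure might minimise is given below (Python with NumPy): a weighted sum of joint tracking error and a smoothness penalty on the joint trajectory. The penalty form and weighting are assumptions for illustration; the paper's actual fitness function is not reproduced.

    import numpy as np

    def joint_cost(reference, actual, smoothness_weight=0.5):
        """Illustrative per-joint cost: mean squared tracking error plus a
        penalty on the mean squared second difference of the measured joint
        trajectory (a simple proxy for smoothness). Lower is better; a GA
        would search gain settings that minimise the sum over all joints."""
        reference = np.asarray(reference, dtype=float)
        actual = np.asarray(actual, dtype=float)
        tracking_error = np.mean((reference - actual) ** 2)
        roughness = np.mean(np.diff(actual, n=2) ** 2)
        return tracking_error + smoothness_weight * roughness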

Relevance: 10.00%

Abstract:

An algorithm to improve the accuracy and stability of rigid-body contact force calculation is presented. The algorithm uses a combination of analytic solutions and numerical methods to solve a spring-damper differential equation typical of a contact model. The solution method employs the recently proposed patch method, which especially suits the spring-damper differential equations. The resulting semi-analytic solution reduces the stiffness of the differential equations, while performing faster than conventional alternatives.
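
For context, a generic spring-damper (penalty) contact force of the kind the abstract refers to can be sketched as below. The gains and the clamping to non-negative force are illustrative assumptions; this is the general model class, not the paper's patch-method solver.

    def contact_force(penetration, penetration_rate, k=1.0e5, c=1.0e3):
        """Kelvin-Voigt style contact force: a stiff spring on the penetration
        depth plus a damper on the penetration rate. k and c are placeholder
        gains; real values depend on material and integration step."""
        if penetration <= 0.0:          # bodies not in contact
            return 0.0
        force = k * penetration + c * penetration_rate
        return max(force, 0.0)          # contact can only push, never pull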

Relevance: 10.00%

Abstract:

This paper presents a vision-based method of vehicle localisation that has been developed and tested on a large forklift-type robotic vehicle operating in a mainly outdoor industrial setting. The localiser uses a sparse 3D edge map of the environment and a particle filter to estimate the pose of the vehicle. The vehicle operates in dynamic and non-uniform outdoor lighting conditions, an issue that is addressed by using knowledge of the scene to intelligently adjust the camera exposure and hence improve the quality of the information in the image. Results from the industrial vehicle are shown and compared to those of a laser-based localiser which acts as ground truth. An improved likelihood metric, using per-edge calculation, is presented and has been shown to be 40% more accurate in estimating rotation. Visual localisation results from the vehicle driving an arbitrary 1.5 km path during a bright sunny period show an average position error of 0.44 m and rotation error of 0.62 deg.
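
A minimal sketch of how a per-edge likelihood could enter a particle filter weight update is given below (Python with NumPy). The per-edge scoring function, the Gaussian error model and all names are assumptions for illustration, not the metric used in the paper.

    import numpy as np

    def particle_weights(particles, map_edges, edge_score, sigma=2.0):
        """For each candidate pose, every visible map edge is scored
        individually (edge_score returns a pixel reprojection error for that
        edge under the pose); the particle's weight is the product of
        per-edge Gaussian likelihoods rather than a single pooled error."""
        weights = np.empty(len(particles))
        for i, pose in enumerate(particles):
            errors = np.array([edge_score(pose, edge) for edge in map_edges])
            weights[i] = np.prod(np.exp(-0.5 * (errors / sigma) ** 2))
        total = weights.sum()
        if total == 0.0:                 # degenerate case: fall back to uniform
            return np.full(len(particles), 1.0 / len(particles))
        return weights / total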

Relevance: 10.00%

Abstract:

The over-representation of novice drivers in crashes is alarming. Research indicates that one in five drivers crashes within their first year of driving. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers. Currently, there is a need to develop a comprehensive driver evaluation system that benefits from the advances in Driver Assistance Systems. Since driving depends on fuzzy inputs from the driver (i.e. approximate calculation of the distance to other vehicles, approximate estimation of other vehicles' speeds), it is necessary that the evaluation system is based on criteria and rules that handle the uncertain and fuzzy characteristics of the drive. This paper presents a system that evaluates the data stream acquired from multiple in-vehicle sensors (acquired from the Driver, Vehicle and Environment - DVE) using fuzzy rules and classifies the driving manoeuvres (i.e. overtake, lane change and turn) as low risk or high risk. The fuzzy rules use parameters such as following distance, frequency of mirror checks, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvre to assess risk. The fuzzy rules to estimate risk were designed after analysing the selected driving manoeuvres performed by driver trainers. This paper focuses mainly on the difference in gaze pattern between experienced and novice drivers during the selected manoeuvres. Using this system, trainers of novice drivers would be able to empirically evaluate and give feedback to novice drivers regarding their driving behaviour.
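
As a toy illustration of fuzzy risk classification using two of the parameters named above (following distance and mirror-check frequency), a sketch might look like the following. The membership functions and rules are invented for illustration and are not the trainer-derived rules of the paper.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside (a, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def manoeuvre_risk(following_distance_m, mirror_checks_per_min):
        """Toy fuzzy assessment of a single manoeuvre (illustrative breakpoints)."""
        short_gap   = tri(following_distance_m, 0, 5, 15)
        long_gap    = tri(following_distance_m, 10, 30, 60)
        few_checks  = tri(mirror_checks_per_min, 0, 1, 4)
        many_checks = tri(mirror_checks_per_min, 2, 6, 12)

        # Rules: short following gap OR few mirror checks -> high risk;
        #        long gap AND frequent mirror checks       -> low risk.
        high = max(short_gap, few_checks)
        low = min(long_gap, many_checks)
        return "high risk" if high > low else "low risk"

    print(manoeuvre_risk(following_distance_m=6, mirror_checks_per_min=1))  # high risk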

Relevance: 10.00%

Abstract:

Productivity is basic statistical information for many international comparisons and country performance assessments. This study estimates the construction labour productivity of 79 selected economies. The real (purchasing power parity converted) and nominal construction expenditure from the report of the 2005 International Comparison Programme published by the World Bank, together with construction employment from the database of labour statistics (LABORSTA) operated by the Bureau of Statistics of the International Labour Organization, were used in the estimation. The inferential statistics indicate that a descending order of nominal construction labour productivity from high-income to low-income economies is not established. The average construction labour productivity of low-income economies is higher than that of middle-income economies when the productivity calculation uses purchasing power parity converted data. Malaysia ranked in 50th and 63rd position among the 79 selected economies on the real and nominal measurements respectively.
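
The underlying calculation is simply construction expenditure divided by construction employment, computed once with PPP-converted (real) expenditure and once with nominal expenditure. A minimal sketch with made-up figures:

    def labour_productivity(construction_expenditure, construction_employment):
        """Construction labour productivity as expenditure per person employed.
        Using PPP-converted expenditure gives the 'real' figure and
        exchange-rate expenditure the 'nominal' figure, with the data sources
        described in the abstract. The values below are illustrative only."""
        return construction_expenditure / construction_employment

    real_ppp = labour_productivity(25_000_000_000, 900_000)   # hypothetical economy
    print(f"{real_ppp:,.0f} USD (PPP) per worker")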

Relevance: 10.00%

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and abstraction. Feature selection methods of both wrapper and filter types are applied to the derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, the MR is 12.34. Feature selection results in a reduction in MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) giving an MR of 9.8 and the raw time-series summary data (dataset A) 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to time-series-variable-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared with the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological variable values being outside the accepted normal range, is associated with some improvement in model performance.
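
For reference, the two headline evaluation measures quoted above can be computed from a 2x2 confusion matrix as sketched below (Python). The example counts are invented and only illustrate why Kappa is less sensitive than MR to an unbalanced class distribution.

    def binary_metrics(tp, fp, fn, tn):
        """Misclassification rate (MR) and Cohen's Kappa from a 2x2 confusion
        matrix. Kappa corrects the raw accuracy for the agreement expected by
        chance from the class marginals, so it is less flattered by a model
        that simply predicts the majority class."""
        n = tp + fp + fn + tn
        accuracy = (tp + tn) / n
        mr = 1.0 - accuracy
        # chance agreement from the marginal (actual vs predicted) distributions
        p_yes = ((tp + fn) / n) * ((tp + fp) / n)
        p_no = ((fp + tn) / n) * ((fn + tn) / n)
        p_chance = p_yes + p_no
        kappa = (accuracy - p_chance) / (1.0 - p_chance)
        return mr, kappa

    # e.g. a heavily unbalanced test fold (illustrative numbers only)
    print(binary_metrics(tp=30, fp=40, fn=60, tn=870))   # MR 0.10, Kappa about 0.32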

Relevance: 10.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20 bits, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification in which compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
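
As a minimal illustration of the basic VQ step underlying the work described above, the sketch below performs a full-search nearest-codeword assignment under squared Euclidean distance. The codebook, data and distance measure are placeholders; the product-code structure, fast-search methods and lossless index coding discussed in the thesis are not reproduced here.

    import numpy as np

    def quantize(vectors, codebook):
        """Full-search vector quantization: each input vector (e.g. a frame of
        spectral parameters) is replaced by the index of the nearest codebook
        entry under squared Euclidean distance. A product-code VQ would apply
        this independently to sub-vectors, with one small codebook per part."""
        vectors = np.atleast_2d(vectors)
        codebook = np.asarray(codebook, dtype=float)
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)             # one codeword index per frame

    # toy example: three 2-dimensional "frames", 4-entry codebook (2 bits per frame)
    idx = quantize([[0.1, 0.2], [0.9, 0.8], [0.5, 0.4]],
                   [[0, 0], [1, 1], [0.5, 0.5], [1, 0]])
    print(idx)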

Relevance: 10.00%

Abstract:

LiteSteel Beam (LSB) is a new cold-formed steel beam produced by OneSteel Australian Tube Mills. The new beam is effectively a channel section with two rectangular hollow flanges and a slender web, and is manufactured using a combined cold-forming and electric resistance welding process. OneSteel Australian Tube Mills is promoting the use of LSBs as flexural members in a range of applications, such as floor bearers. When LSBs are used as back-to-back built-up sections, they are likely to have improved moment capacity, extending their applications further. However, the structural behaviour of built-up beams is not well understood. Many steel design codes include guidelines for connecting two channels to form a built-up I-section, including the required longitudinal spacing of connections, but these rules were found to be inadequate in some applications. Currently the safe spans of built-up beams are determined based on twice the moment capacity of a single section. Research has shown that these guidelines are conservative. Therefore large-scale lateral buckling tests and advanced numerical analyses were undertaken to investigate the flexural behaviour of back-to-back LSBs connected by fasteners (bolts) at various longitudinal spacings under uniform moment conditions. In this research an experimental investigation was first undertaken to study the flexural behaviour of back-to-back LSBs, including their buckling characteristics. This experimental study included tensile coupon tests, initial geometric imperfection measurements and lateral buckling tests. The initial geometric imperfection measurements taken on several back-to-back LSB specimens showed that the back-to-back bolting process is not likely to alter the imperfections, and that the measured imperfections are well below the fabrication tolerance limits. Twelve large-scale lateral buckling tests were conducted to investigate the behaviour of back-to-back built-up LSBs with various longitudinal fastener spacings under uniform moment conditions. The tests also included two single LSB specimens. Test results showed that the back-to-back LSBs gave higher moment capacities in comparison with single LSBs, and that the fastener spacing influenced the ultimate moment capacities: as the fastener spacing was reduced, the ultimate moment capacities of back-to-back LSBs increased. Finite element models of back-to-back LSBs with varying fastener spacings were then developed to conduct a detailed parametric study of the flexural behaviour of back-to-back built-up LSBs. Two finite element models were developed, namely experimental and ideal finite element models. The models included the complex contact behaviour between LSB web elements and the intermittently fastened bolted connections along the web elements. They were validated by comparing their results with experimental results and with numerical results obtained from an established buckling analysis program called THIN-WALL. These comparisons showed that the developed models could accurately predict both the elastic lateral distortional buckling moments and the non-linear ultimate moment capacities of back-to-back LSBs. Therefore the ideal finite element models, incorporating ideal simply supported boundary conditions and uniform moment conditions, were used in a detailed parametric study of the flexural behaviour of back-to-back LSB members. In the detailed parametric study, both elastic buckling and non-linear analyses of back-to-back LSBs were conducted for 13 LSB sections with varying spans and fastener spacings.
Finite element analysis results confirmed that the current design rules in AS/NZS 4600 (SA, 2005) are very conservative, while the new design rules developed by Anapayan and Mahendran (2009a) for single LSB members were also found to be conservative. Thus new member capacity design rules were developed for back-to-back LSB members as a function of non-dimensional member slenderness. New empirical equations were also developed to aid in the calculation of the elastic lateral distortional buckling moments of intermittently fastened back-to-back LSBs. Design guidelines were developed for the maximum fastener spacing of back-to-back LSBs in order to optimise the use of fasteners. A closer fastener spacing of span/6 was recommended for intermediate spans and some long spans where the influence of fastener spacing was found to be high. In the last phase of this research, a detailed investigation was conducted into the potential use of different types of connections and stiffeners in improving the flexural strength of back-to-back LSB members. It was found that using transverse web stiffeners was the most cost-effective and simple strengthening method. It is recommended that web stiffeners be used at the supports and at the third points within the span, with a thickness in the range of 3 to 5 mm depending on the size of the LSB section. The use of web stiffeners eliminated most of the lateral distortional buckling effects and hence improved the ultimate moment capacities. A suitable design equation was developed to calculate the elastic lateral buckling moments of back-to-back LSBs with the above recommended web stiffener configuration, while the same design rules developed for unstiffened back-to-back LSBs were recommended for calculating the ultimate moment capacities.
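
For illustration, non-dimensional member slenderness is conventionally formed from the first-yield moment and the elastic buckling moment as sketched below; the numbers are invented, and the thesis' actual capacity equations expressed in terms of this slenderness are not reproduced here.

    import math

    def member_slenderness(M_y, M_od):
        """Non-dimensional member slenderness, lambda_d = sqrt(M_y / M_od),
        where M_y is the first-yield moment and M_od the elastic lateral
        distortional buckling moment of the (built-up) section. The design
        rules described above give the member moment capacity as a function
        of this ratio; small values mean yielding governs, large values mean
        elastic buckling governs."""
        return math.sqrt(M_y / M_od)

    # illustrative values only (kNm): a stocky beam and a slender beam
    print(member_slenderness(M_y=60.0, M_od=240.0))   # 0.5  -> capacity close to M_y
    print(member_slenderness(M_y=60.0, M_od=30.0))    # ~1.41 -> buckling governs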