177 results for experimental modal analysis
Abstract:
Condition monitoring of diesel engines can prevent unpredicted engine failures and their associated consequences. This paper presents an experimental study of the signal characteristics of a 4-cylinder diesel engine under various loading conditions. Acoustic emission, vibration and in-cylinder pressure signals were employed to study the effectiveness of these techniques for condition monitoring and for identifying symptoms of incipient failures. An event-driven synchronous averaging technique was employed to average the quasi-periodic diesel engine signal in the time domain, to eliminate or minimize the effect of engine speed and amplitude variations on the analysis of condition monitoring signals. It was shown that acoustic emission (AE) is a better technique than vibration analysis for condition monitoring of diesel engines due to its ability to produce high-quality signals (i.e., excellent signal-to-noise ratio) in a noisy diesel engine environment. It was found that the peak amplitude of AE RMS signals corresponding to impact-like combustion-related events generally decreases as the loading increases, due to a more stable mechanical process in the engine. A small shift in the exhaust valve closing time was observed as the engine load increased, which indicates a prolonged combustion process in the cylinder (to produce more power). On the contrary, peak amplitudes of the AE RMS attributed to fuel injection increase as the loading increases. This can be explained by the increased fuel friction caused by the higher volume flow rate during injection. Multiple AE pulses during the combustion process were identified in the study, which were generated by the piston rocking motion and the interaction between the piston and the cylinder wall. The piston rocking motion is caused by the non-uniform pressure distribution acting on the piston head as a result of the non-linear combustion process of the engine.
The rocking motion ceased when the pressure in the cylinder chamber stabilized.
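The event-driven synchronous averaging described above can be illustrated with a minimal sketch: event triggers (e.g., from a crank-angle or TDC sensor) mark the start of each engine cycle, and the cycles are averaged sample-by-sample so that cycle-asynchronous noise cancels. This is an illustrative simplification and not the authors' implementation; in practice each cycle would also be resampled to a common length to compensate for speed variation.

```python
def synchronous_average(signal, cycle_starts, cycle_length):
    """Event-driven synchronous average of a quasi-periodic signal.

    signal       : list of samples
    cycle_starts : sample indices where each engine cycle begins (from a trigger)
    cycle_length : number of samples to keep per cycle
    Returns one averaged cycle of length cycle_length.
    """
    cycles = []
    for start in cycle_starts:
        segment = signal[start:start + cycle_length]
        if len(segment) == cycle_length:  # discard incomplete trailing cycles
            cycles.append(segment)
    n = len(cycles)
    # average sample-by-sample across cycles
    return [sum(c[i] for c in cycles) / n for i in range(cycle_length)]
```

Averaging over many cycles suppresses noise that is not locked to the engine cycle, which is what makes the combustion- and injection-related AE events stand out.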
Abstract:
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness and line-likeness. In this dissertation a class of textures known as particulate textures is defined, which are predominantly coarse or blob-like. The set of features that characterise particulate textures is different from those that characterise classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to the more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for the computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis.
Results from the application of the developed algorithm to road surface macro-texture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination (R²) exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
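One of the baselines compared against above, the autocorrelation function, can itself yield a coarseness estimate: on a 1-D luminance profile, the lag of the first local minimum of the autocorrelation roughly tracks the dominant blob width. The sketch below is a hedged illustration of that baseline idea, not the dissertation's granulometric edge-detection algorithm.

```python
def autocorrelation(profile):
    """Normalised (biased) autocorrelation of a 1-D intensity profile."""
    n = len(profile)
    mean = sum(profile) / n
    x = [v - mean for v in profile]
    var = sum(v * v for v in x)
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) / var
            for lag in range(n // 2)]

def coarseness_estimate(profile):
    """Lag of the first local minimum of the autocorrelation,
    a crude proxy for the dominant particle (blob) size."""
    r = autocorrelation(profile)
    for lag in range(1, len(r) - 1):
        if r[lag] < r[lag - 1] and r[lag] <= r[lag + 1]:
            return lag
    return None
```

For a profile of alternating bright and dark bands of width 4, the first autocorrelation minimum falls at lag 4, matching the blob width.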
Abstract:
Acoustic sensors play an important role in augmenting the traditional biodiversity monitoring activities carried out by ecologists and conservation biologists. With this capability, however, comes the burden of analysing large volumes of complex acoustic data. Given the complexity of acoustic sensor data, fully automated analysis for a wide range of species is still a significant challenge. This research investigates the use of citizen scientists to analyse large volumes of environmental acoustic data in order to identify bird species. Specifically, it investigates ways in which the efficiency of a user can be improved through the use of species identification tools and the use of reputation models to predict the accuracy of users with unidentified skill levels. Initial experimental results are reported.
Abstract:
Trees, shrubs and other vegetation are of continued importance to the environment and our daily life. They provide shade around our roads and houses, offer a habitat for birds and wildlife, and absorb air pollutants. However, vegetation touching power lines is a risk to public safety and the environment, and one of the main causes of power supply problems. Vegetation management, which includes tree trimming and vegetation control, is a significant cost component of the maintenance of electrical infrastructure. For example, Ergon Energy, the Australian energy distributor with the largest geographic footprint, currently spends over $80 million a year inspecting and managing vegetation that encroaches on power line assets. Currently, most vegetation management programs for distribution systems are calendar-based ground patrols. However, calendar-based inspection by linesmen is labour-intensive, time-consuming and expensive. It also results in some zones being trimmed more frequently than needed and others not cut often enough. Moreover, it is seldom practicable to measure all the plants around power line corridors by field methods. Remote sensing data captured from airborne sensors have great potential in assisting vegetation management in power line corridors. This thesis presents a comprehensive study on using spiking neural networks in a specific image analysis application: power line corridor monitoring. Theoretically, the thesis focuses on a biologically inspired spiking cortical model: the pulse coupled neural network (PCNN). The original PCNN model was simplified in order to better analyse the pulse dynamics and control the performance. Some new and effective algorithms were developed based on the proposed spiking cortical model for object detection, image segmentation and invariant feature extraction. The developed algorithms were evaluated in a number of experiments using real image data collected from our flight trials.
The experimental results demonstrated the effectiveness and advantages of spiking neural networks in image processing tasks. Operationally, the knowledge gained from this research project offers a good reference to our industry partner (i.e., Ergon Energy) and other energy utilities that want to improve their vegetation management activities. The novel approaches described in this thesis show the potential of using cutting-edge sensor technologies and intelligent computing techniques to improve power line corridor monitoring. The lessons learnt from this project are also expected to increase the confidence of energy companies to move from traditional vegetation management strategies to a more automated, accurate and cost-effective solution using aerial remote sensing techniques.
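The pulse dynamics at the core of a PCNN can be sketched in a generic, minimal form: each neuron has a feeding input (the pixel intensity), a linking input from neighbouring firings, and a dynamic threshold that jumps after a firing and then decays. The parameter values and the particular simplification below are illustrative assumptions, not the spiking cortical model developed in the thesis.

```python
def simplified_pcnn(image, steps=10, beta=0.3, f=0.9, g=0.8, h=20.0):
    """Generic simplified PCNN-style iteration on a 2-D intensity grid.

    image : 2-D list of intensities in [0, 1]
    beta  : linking strength, f : feeding decay,
    g     : threshold decay,  h : threshold jump after a firing
    Returns a map counting how often each neuron fired.
    """
    rows, cols = len(image), len(image[0])
    U = [[0.0] * cols for _ in range(rows)]   # internal activity
    E = [[1.0] * cols for _ in range(rows)]   # dynamic threshold
    Y = [[0] * cols for _ in range(rows)]     # last-step firings
    fires = [[0] * cols for _ in range(rows)]
    for _ in range(steps):
        newY = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                # linking input: sum of last-step firings of 4-neighbours
                L = sum(Y[x][y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < rows and 0 <= y < cols)
                U[i][j] = f * U[i][j] + image[i][j] * (1 + beta * L)
                newY[i][j] = 1 if U[i][j] > E[i][j] else 0
        for i in range(rows):
            for j in range(cols):
                # threshold decays, and jumps by h whenever the neuron fires
                E[i][j] = g * E[i][j] + h * newY[i][j]
                fires[i][j] += newY[i][j]
        Y = newY
    return fires
```

Brighter regions fire earlier and more often than darker ones, and the linking term tends to synchronise the firings of similar neighbouring pixels, which is the property exploited for segmentation.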
Abstract:
An analytical solution is presented in this paper for the vibration response of a ribbed plate clamped on all its boundary edges by employing a travelling wave solution. A clamped ribbed plate test rig is also assembled in this study for the experimental investigation of the ribbed plate response and to provide verification results for the analytical solution. The dynamic characteristics and mode shapes of the ribbed plate are measured and compared to those obtained from the analytical solution and from finite element analysis (FEA). Good general agreement is found between the results. Discrepancies between the computational and experimental results at low and high frequencies are also discussed. Explanations are offered in the study to disclose the mechanism causing the discrepancies. The dependence of the dynamic response of the ribbed plate on the distance between the excitation force and the rib is also investigated experimentally. It confirms the findings disclosed in a previous analytical study [T. R. Lin and J. Pan, A closed form solution for the dynamic response of finite ribbed plates, Journal of the Acoustical Society of America 119 (2006) 917-925] that the vibration response of a clamped ribbed plate due to a point force excitation is controlled by the plate stiffness when the source is more than a quarter of the plate bending wavelength away from the rib and from the plate boundary. The response is largely affected by the rib stiffness when the source location is less than a quarter bending wavelength away from the rib.
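The quarter-bending-wavelength criterion above is easy to evaluate numerically. For a thin isotropic plate the bending wavenumber is k_b = (ω²ρh/D)^¼ with bending stiffness D = Eh³/[12(1 − ν²)], so the wavelength is λ = 2π/k_b. The sketch below computes this; the material values in the usage example are generic steel properties, assumed for illustration.

```python
import math

def plate_bending_wavelength(freq_hz, thickness, youngs_modulus, density, poisson=0.3):
    """Bending wavelength of a thin isotropic plate.

    lambda = 2*pi / k_b,  k_b = (omega^2 * rho * h / D)^(1/4),
    D = E * h^3 / (12 * (1 - nu^2))   (SI units throughout)
    """
    omega = 2 * math.pi * freq_hz
    D = youngs_modulus * thickness ** 3 / (12 * (1 - poisson ** 2))
    k_b = (omega ** 2 * density * thickness / D) ** 0.25
    return 2 * math.pi / k_b

# illustrative values: 3 mm steel plate (E ~ 210 GPa, rho ~ 7850 kg/m^3) at 500 Hz
lam = plate_bending_wavelength(500.0, 0.003, 2.1e11, 7850.0)
```

For these assumed values the bending wavelength is roughly 0.24 m, so the rib-controlled region extends about 6 cm either side of the rib.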
Abstract:
Continuous monitoring of diesel engine performance is critical for early detection of fault developments in the engine before they materialize and become a functional failure. Instantaneous crank angular speed (IAS) analysis is one of a few non-intrusive condition monitoring techniques that can be utilized for such tasks. In this experimental study, IAS analysis was employed to estimate the loading condition of a 4-stroke 4-cylinder diesel engine under laboratory conditions. It was shown that IAS analysis can provide useful information about engine speed variation caused by the changing piston momentum and crankshaft acceleration during the engine combustion process. It was also found that the major order component of the IAS spectrum, directly associated with the engine firing frequency (at twice the mean shaft revolution speed), can be utilized to estimate the engine loading condition regardless of whether the engine is operating at normal running conditions or in a simulated faulty injector case. The amplitude of this order component follows a clear exponential curve as the loading condition changes. A mathematical relationship was established for the estimation of the engine power output based on the amplitude of the major order component of the measured IAS spectrum.
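The amplitude of a given shaft-order component can be extracted from an angle-domain IAS record with a single-bin Fourier projection. The sketch below assumes the IAS signal has been resampled to a fixed number of samples per revolution over an integer number of revolutions; it illustrates order extraction in general, not the authors' processing chain.

```python
import math

def order_amplitude(ias_samples, samples_per_rev, order):
    """Amplitude of one shaft-order component of an IAS record.

    ias_samples     : IAS values sampled at uniform shaft-angle increments
    samples_per_rev : samples per shaft revolution
    order           : cycles per revolution (order 2 = firing frequency
                      of a 4-stroke 4-cylinder engine)
    """
    n = len(ias_samples)
    mean = sum(ias_samples) / n
    # project the mean-removed record onto the order-k sinusoid
    re = sum((s - mean) * math.cos(2 * math.pi * order * i / samples_per_rev)
             for i, s in enumerate(ias_samples))
    im = sum((s - mean) * math.sin(2 * math.pi * order * i / samples_per_rev)
             for i, s in enumerate(ias_samples))
    return 2 * math.hypot(re, im) / n
```

Tracking this single amplitude against load would then allow fitting the exponential load relationship reported in the abstract.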
Abstract:
To sustain the ongoing rapid growth of video information, there is an emerging demand for sophisticated content-based video indexing systems. However, current video indexing solutions are still immature and lack any standard. This doctoral research presents an integrated multi-modal approach for sports video indexing and retrieval. By combining specific features extractable from multiple audio-visual modalities, generic structure and specific events can be detected and classified. During browsing and retrieval, users will benefit from the integration of high-level semantics and descriptive mid-level features such as whistles and close-up views of player(s).
Abstract:
This article presents a two-stage analytical framework that integrates ecological crop (animal) growth and economic frontier production models to analyse the productive efficiency of crop (animal) production systems. The ecological crop (animal) growth model estimates "potential" output levels given the genetic characteristics of crops (animals) and the physical conditions of locations where the crops (animals) are grown (reared). The economic frontier production model estimates "best practice" production levels, taking into account economic, institutional and social factors that cause farm and spatial heterogeneity. In the first stage, both ecological crop growth and economic frontier production models are estimated to calculate three measures of productive efficiency: (1) technical efficiency, as the ratio of actual to "best practice" output levels; (2) agronomic efficiency, as the ratio of actual to "potential" output levels; and (3) agro-economic efficiency, as the ratio of "best practice" to "potential" output levels. Also in the first stage, the economic frontier production model identifies factors that determine technical efficiency. In the second stage, agro-economic efficiency is analysed econometrically in relation to economic, institutional and social factors that cause farm and spatial heterogeneity. The proposed framework has several important advantages in comparison with existing proposals. Firstly, it allows the systematic incorporation of all physical, economic, institutional and social factors that cause farm and spatial heterogeneity in analysing the productive performance of crop and animal production systems. Secondly, the location-specific physical factors are not modelled symmetrically as other economic inputs of production. Thirdly, climate change and technological advancements in crop and animal sciences can be modelled in a "forward-looking" manner. 
Fourthly, knowledge in agronomy and data from experimental studies can be utilised for socio-economic policy analysis. The proposed framework can be easily applied in empirical studies due to the current availability of ecological crop (animal) growth models, farm or secondary data, and econometric software packages. The article highlights several directions of empirical studies that researchers may pursue in the future.
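The three efficiency measures defined in the first stage reduce to simple quotients of the three output levels, and technical efficiency multiplied by agro-economic efficiency recovers agronomic efficiency. A minimal sketch (the variable names are my own, not the article's notation):

```python
def efficiency_measures(actual, best_practice, potential):
    """The three ratios from the two-stage framework:
    technical     = actual / best_practice
    agronomic     = actual / potential
    agro_economic = best_practice / potential
    """
    return {
        "technical": actual / best_practice,
        "agronomic": actual / potential,
        "agro_economic": best_practice / potential,
    }
```

For example, a farm producing 4 t/ha against a best-practice frontier of 5 t/ha and an ecological potential of 8 t/ha is 80% technically efficient but only 50% agronomically efficient.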
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. Those real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distances between nodes are larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary across different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and weight the edges between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those for binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
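The horizontal visibility graph used above has a compact definition: each time point is a node, and two points i < j are linked when every intermediate value lies strictly below both series[i] and series[j]. A minimal sketch of that construction (my own illustrative code, not the thesis implementation):

```python
def horizontal_visibility_graph(series):
    """Edge set of the HVG of a time series.

    (i, j) is an edge iff series[k] < min(series[i], series[j])
    for every k strictly between i and j.
    """
    n = len(series)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))          # adjacent points always see each other
        top = series[i + 1]            # running max of intermediate values
        for j in range(i + 2, n):
            if top < min(series[i], series[j]):
                edges.add((i, j))
            top = max(top, series[j])
            if top >= series[i]:       # nothing beyond j can see i any more
                break
    return edges
```

The degree distribution of the resulting graph is then what distinguishes the exponential-tailed HVG networks from the scale-free VG networks discussed above.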
Abstract:
Purpose: Colorectal cancer patients diagnosed with stage I or II disease are not routinely offered adjuvant chemotherapy following resection of the primary tumor. However, up to 10% of stage I and 30% of stage II patients relapse within 5 years of surgery from recurrent or metastatic disease. The aim of this study was to determine if tumor-associated markers could detect disseminated malignant cells and so identify a subgroup of patients with early-stage colorectal cancer that were at risk of relapse. Experimental Design: We recruited consecutive patients undergoing curative resection for early-stage colorectal cancer. Immunobead reverse transcription-PCR of five tumor-associated markers (carcinoembryonic antigen, laminin γ2, ephrin B4, matrilysin, and cytokeratin 20) was used to detect the presence of colon tumor cells in peripheral blood and within the peritoneal cavity of colon cancer patients perioperatively. Clinicopathologic variables were tested for their effect on survival outcomes in univariate analyses using the Kaplan-Meier method. A multivariate Cox proportional hazards regression analysis was done to determine whether detection of tumor cells was an independent prognostic marker for disease relapse. Results: Overall, 41 of 125 (32.8%) early-stage patients were positive for disseminated tumor cells. Patients who were marker positive for disseminated cells in post-resection lavage samples showed a significantly poorer prognosis (hazard ratio, 6.2; 95% confidence interval, 1.9-19.6; P = 0.002), and this was independent of other risk factors. Conclusion: The markers used in this study identified a subgroup of early-stage patients at increased risk of relapse post-resection for primary colorectal cancer. This method may be considered as a new diagnostic tool to improve the staging and management of colorectal cancer. © 2006 American Association for Cancer Research.
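The Kaplan-Meier method used for the univariate survival analyses has a simple product-limit form: at each observed relapse time t, the survival estimate is multiplied by (1 − d_t/n_t), where d_t is the number of events at t and n_t the number of patients still at risk. A generic sketch with made-up toy data (not the study's data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : follow-up times
    events : 1 = relapse observed at that time, 0 = censored
    Returns [(t, S(t))] at each time where at least one event occurred.
    """
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk = n
    survival = 1.0
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = removed = 0
        while i < n and times[order[i]] == t:   # group ties at the same time
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= removed                      # censored subjects leave the risk set
    return curve
```

Comparing such curves between marker-positive and marker-negative groups (e.g., with a log-rank test) is the standard prelude to the multivariate Cox regression reported above.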
Abstract:
Facial expression is one of the main issues in face recognition in uncontrolled environments. In this paper, we apply the probabilistic linear discriminant analysis (PLDA) method to recognize faces across expressions. Several PLDA approaches are tested and cross-evaluated on the Cohn-Kanade and JAFFE databases. With fewer samples per gallery subject, high recognition rates comparable to previous work have been achieved, indicating the robustness of the approaches. Among the approaches, the mixture of PLDAs has demonstrated better performance. The experimental results also indicate that facial regions around the cheeks, eyes, and eyebrows are more discriminative than regions around the mouth, jaw, chin, and nose.
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, Global Navigation Satellite System (GNSS) based vehicle positioning systems have to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or higher with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or other slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to the users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involves evaluation of the performances of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluates the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performances of the RTK and PPP techniques were evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments.
RTK solutions achieved an RMS precision of 0.09 to 0.2 m in static tests and 0.2 to 0.3 m in kinematic tests, while PPP reported 0.5 to 1.5 m in static tests and 1 to 1.8 m in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level-accuracy vehicle positioning. The professional grade (dual-frequency) and mass-market grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis has shown that mass-market grade receivers provide good solution continuity, although the overall positioning accuracy is worse than that of professional grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results have shown that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format, compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission are 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remains at the same level. Additionally, due to the nature of UDP transmission, it was also found that 0.17% of UDP packets were lost during the kinematic tests, but this loss does not lead to a significant reduction in the quality of the positioning results.
The experimental results from the static and kinematic field tests have also shown that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential parameter. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 m. The results showed that the position accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
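The accuracy bands quoted above (road-level 5-10 m, lane-level 0.5-1 m, in-lane-level 0.1-0.3 m) lend themselves to a simple classifier over the horizontal RMS error of a solution stream. The thresholds come from the abstract; the function names, and the choice to assign the gaps between bands to the coarser class, are my own assumptions.

```python
import math

def horizontal_rms(east_errors, north_errors):
    """Horizontal RMS error (m) of positioning solutions against a reference
    trajectory, from per-epoch east/north error components."""
    total = sum(e * e + n * n for e, n in zip(east_errors, north_errors))
    return math.sqrt(total / len(east_errors))

def accuracy_class(rms_error_m):
    """Classify an RMS error into the bands quoted in the abstract.
    Gaps between bands are (by assumption) assigned to the coarser class."""
    if rms_error_m <= 0.3:
        return "in-lane-level"
    if rms_error_m <= 1.0:
        return "lane-level"
    return "road-level"
```

Such a classification makes it easy to state, per test run, whether a configuration (receiver grade, correction interval, protocol) still meets the target ITS application band.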
Numerical and experimental studies of cold-formed steel floor systems under standard fire conditions
Abstract:
Light gauge cold-formed steel frame (LSF) structures are increasingly used in industrial, commercial and residential buildings because of their non-combustibility, dimensional stability, and ease of installation. A floor-ceiling system is an example of their applications. LSF floor-ceiling systems must be designed to serve as fire compartment boundaries and provide adequate fire resistance. Fire-rated floor-ceiling assemblies formed with new materials and construction methodologies have been increasingly used in buildings. However, limited research has been undertaken in the past and hence a thorough understanding of their fire resistance behaviour is not available. Recently a new composite panel, in which an external insulation layer is used between two plasterboards, has been developed at QUT to provide a higher fire rating to LSF floors under standard fire conditions. But its increased fire rating could not be determined using the currently available design methods. Research on LSF floor systems under fire conditions is relatively recent and the behaviour of floor joists and other components in the systems is not fully understood. The present design methods thus require the use of expensive fire protection materials to protect them from excessive heat increase during a fire. This leads to uneconomical and conservative designs. Fire rating of these floor systems is provided simply by adding more plasterboard sheets to the steel joists, and such an approach is totally inefficient. Hence a detailed fire research study was undertaken into the structural and thermal performance of LSF floor systems, including those protected by the new composite panel system, using full scale fire tests and extensive numerical studies. The experimental study included both the conventional and the new steel floor-ceiling systems under structural and fire loads, using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005).
Fire tests included the behavioural and deflection characteristics of LSF floor joists until failure as well as related time-temperature measurements across the section and along the length of all the specimens. Full scale fire tests have shown that the structural and thermal performance of the externally insulated LSF floor system was superior to that of traditional LSF floors with or without cavity insulation. Therefore this research recommends the use of the new composite panel system for cold-formed LSF floor-ceiling systems. The numerical analyses of LSF floor joists were undertaken using the finite element program ABAQUS, based on the measured time-temperature profiles obtained from fire tests, under both steady state and transient state conditions. Mechanical properties at elevated temperatures were considered based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). Finite element models were calibrated using the full scale test results and used to provide a detailed understanding of the structural fire behaviour of the LSF floor-ceiling systems. The models also confirmed the superior performance of the new composite panel system. The validated model was then used in a detailed parametric study. Fire tests and the numerical studies showed that plasterboards provided sufficient lateral restraint to LSF floor joists until their failure. Hence only the section moment capacity of LSF floor joists subjected to local buckling effects was considered in this research. To predict the section moment capacity at elevated temperatures, the effective section modulus of joists at ambient temperature is generally considered adequate. However, this research has shown that this leads to considerable overestimation of the local buckling capacity of joists subjected to non-uniform temperature distributions under fire conditions.
Therefore new simplified fire design rules were proposed for LSF floor joists to determine the section moment capacity at elevated temperatures based on AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The accuracy of the proposed fire design rules was verified with finite element analysis results. A spreadsheet-based design tool was also developed based on these design rules to predict the failure load ratio versus time, and moment capacity versus time and temperature, for various LSF floor configurations. Idealised time-temperature profiles of LSF floor joists were developed based on fire test measurements. They were used in the detailed parametric study to fully understand the structural and fire behaviour of LSF floor panels. Simple design rules were also proposed to predict both the critical average joist temperatures and the failure times (fire rating) of LSF floor systems with various floor configurations and structural parameters under any given load ratio. Findings from this research have led to a comprehensive understanding of the structural and fire behaviour of LSF floor systems, including those protected by the new composite panel, and to simple design methods. These design rules were proposed within the guidelines of the Australian/New Zealand, American and European cold-formed steel structures standard codes of practice. They may also lead to further improvements in fire resistance through suitable modifications to the current composite panel system.
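At its simplest, the section moment capacity calculation that underlies such design rules is the product of a temperature-dependent yield stress reduction factor, the ambient yield stress, and the effective section modulus. The sketch below is a generic illustration with assumed numbers, not the design rules proposed in the thesis, which additionally account for non-uniform temperature distributions and code-specific effective-width provisions.

```python
def moment_capacity_at_temp(z_eff_m3, fy_ambient_mpa, ky_theta):
    """Section moment capacity M = k_y(T) * f_y * Z_eff, returned in kNm.

    z_eff_m3       : effective section modulus (m^3)
    fy_ambient_mpa : ambient-temperature yield stress (MPa)
    ky_theta       : yield stress reduction factor at temperature T
                     (from the relevant standard's elevated-temperature curves)
    """
    # MPa * m^3 = MN*m = 1000 kNm
    return ky_theta * fy_ambient_mpa * 1e3 * z_eff_m3

def has_failed(applied_moment_knm, z_eff_m3, fy_ambient_mpa, ky_theta):
    """Failure criterion: degraded capacity falls below the applied moment."""
    return moment_capacity_at_temp(z_eff_m3, fy_ambient_mpa, ky_theta) < applied_moment_knm
```

Stepping ky_theta along an idealised joist time-temperature profile, and recording the time at which has_failed first returns True, is the kind of calculation the spreadsheet design tool described above automates.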