944 results for Multi-soft sets
Abstract:
This paper studies receiver autonomous integrity monitoring (RAIM) algorithms and the performance benefits of RTK solutions with multiple constellations. The proposed method is known as multi-constellation RAIM (McRAIM). The McRAIM algorithms take advantage of the ambiguity-invariant character to assist fast identification of multiple satellite faults in the context of multiple constellations, and then detect faulty satellites in the follow-up ambiguity search and position estimation processes. The concept of a Virtual Galileo Constellation (VGC) is used to generate useful dual-constellation data sets for performance analysis. Experimental results from a 24-h data set demonstrate that, with the GPS and VGC constellations, McRAIM can significantly enhance the detection and exclusion probabilities for two simultaneously faulty satellites in RTK solutions.
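The abstract does not give the McRAIM test statistics themselves, but the classical snapshot-RAIM building block that multi-constellation schemes extend is the least-squares residual chi-square test. A minimal sketch, assuming a linearised geometry matrix H and a pseudorange residual vector y; all names and defaults are illustrative, not the paper's algorithm:

```python
# Minimal sketch of classical residual-based RAIM fault detection
# (least-squares residual chi-square test) -- an assumed baseline,
# not the McRAIM algorithm from the paper.
import numpy as np
from scipy.stats import chi2

def raim_detect(H, y, sigma=1.0, p_fa=1e-3):
    """H: n x 4 linearised geometry matrix; y: n pseudorange residuals."""
    n, m = H.shape
    # Least-squares navigation solution and measurement residuals
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    r = y - H @ x_hat
    # Normalised SSE follows a chi-square law with n - m degrees of
    # freedom under the fault-free hypothesis
    test_stat = float(r @ r) / sigma**2
    threshold = chi2.ppf(1.0 - p_fa, df=n - m)
    return test_stat > threshold, test_stat, threshold
```

With more satellites in view from two constellations, n − m grows, which is what makes detecting (and excluding) multiple simultaneous faults statistically easier.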
Abstract:
Accessibility to housing for low- to moderate-income groups in Australia has been in severe decline since 2001. On the supply side, the public sector has been reducing its commitment to the direct provision of public housing. Despite high demand for affordable housing, there has been limited supply generated by non-government housing providers. One possible solution to promote an increase in affordable housing supply, as with other infrastructure, is through the development of multi-stakeholder partnerships and private financing. This research aims to identify current issues underlying decision-making criteria for building multi-stakeholder partnerships to deliver affordable housing projects. It also investigates strategies for minimising risk and ensuring the financial outcomes of these partnership arrangements. A mix of qualitative in-depth interviews and quantitative surveys was used as the main method to explore stakeholder experiences of their involvement in partnership arrangements in the affordable housing sector in Queensland. Two sets of interviews were conducted following an exploratory pilot study: one set in 2003-2004 and the other in 2007-2008. There were nineteen respondents representing government, private and not-for-profit organisations in the first-stage interviews and surveys. The second-stage interviews focused on twenty-two housing providers in South East Queensland. Initial analyses were conducted using thematic and statistical methods. This study extends the use of existing decision-making tools and combines them with a Soft Systems framework to analyse the ideal-state questionnaires using qualitative thematic analysis. Soft Systems Methodology (SSM) has been used to analyse this unstructured, complex problem by using systems thinking to develop a conceptual model and applying it to real-world situations. This research found that diversity in stakeholder capability and levels of risk acceptance allows partnerships to develop the best synergies and a degree of collaboration that achieves the required financial return within acceptable risk parameters. However, some of the negativity attached to future commitment to such partnerships was found to stem from the anticipation of a worse outcome than that expected from independent action. Many interviewees agreed that housing providers' fear of financial risk and community rejection has been central to dampening their enthusiasm for entering such investment projects. The creation of a mixed-use development structure mitigates risk and underpins returns, as the commercial income subsidises the affordable housing development and normalises the concentration of marginalised low-income people, who live in a prime location with an award-winning design. In addition, tenant support schemes and rent-to-buy incentive programs encourage tenants to secure their tenancies and significantly reduce the risk of rent arrears and property damage. There is also a breakthrough investment vehicle offered by the social developer, which sells a non-physical, financial product to individual and institutional investors to further mitigate financial risk. Finally, this study recommends modification of the current value-for-money framework in favour of broader partnership arrangements that are more closely aligned with risk-minimisation strategies.
Abstract:
Multi-level concrete buildings require substantial temporary formwork structures to support the slabs during construction. The primary function of this formwork is to safely disperse the applied loads so that the slab being constructed, or the portion of the permanent structure already constructed, is not overloaded. Multi-level formwork is a procedure in which a limited number of formwork and shoring sets are cycled up the building as construction progresses. In this process, each new slab is supported by a number of lower-level slabs. The new slab load is, essentially, distributed to these supporting slabs in direct proportion to their relative stiffness. When a slab is post-tensioned using draped tendons, slab lift occurs as a portion of the slab self-weight is balanced. The formwork and shores supporting that slab are unloaded by an amount equivalent to the load balanced by the post-tensioning. This produces a load distribution inherently different from that of a conventionally reinforced slab. Through theoretical modelling and extensive on-site shore load measurement, this research examines the effects of post-tensioning on multi-level formwork load distribution. The research demonstrates that the load distribution process for post-tensioned slabs allows for improvements to current construction practice. These enhancements include a shortening of the construction period; an improvement in the safety of multi-level formwork operations; and a reduction in the quantity of formwork materials required for a project. These enhancements are achieved through the general improvement in safety offered by post-tensioning during the various formwork operations. The research demonstrates that there is generally a significant improvement in the factors of safety over those for conventionally reinforced slabs. This improvement in the factor of safety occurs at all stages of the multi-level formwork operation. The general improvement in the factors of safety with post-tensioned slabs allows for a shortening of the slab construction cycle time. Further, the low level of load redistribution that occurs during the stripping operations makes post-tensioned slabs ideally suited to reshoring procedures. Provided the overall number of interconnected levels remains unaltered, it is possible to increase the number of reshored levels while reducing the number of undisturbed shoring levels without altering the factors of safety, thereby reducing the overall quantity of formwork and shoring materials.
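As a rough illustration of the load-distribution mechanism described above, the sketch below shares a new slab's load among the interconnected supporting slabs in proportion to their relative stiffness, with post-tensioning modelled simply as removing the balanced fraction of the self-weight from the shores. The stiffness values and balanced fraction are illustrative assumptions, not figures from the thesis:

```python
# Minimal sketch of stiffness-proportional load sharing between
# interconnected slabs (a simplified assumption in the spirit of
# classical multi-level formwork analysis); all numbers illustrative.
def distribute_slab_load(new_load, slab_stiffnesses, balanced_fraction=0.0):
    """Share the load of a newly cast slab among supporting slabs.

    new_load          -- self-weight of the new slab (kN)
    slab_stiffnesses  -- relative stiffness of each supporting slab
    balanced_fraction -- fraction of self-weight balanced by draped
                         post-tensioning tendons (0 for an RC slab)
    """
    # Post-tensioning lifts the slab, unloading the shores by the
    # balanced portion of the self-weight.
    load_to_shores = new_load * (1.0 - balanced_fraction)
    total_k = sum(slab_stiffnesses)
    return [load_to_shores * k / total_k for k in slab_stiffnesses]

# Example: 100 kN slab on three supporting levels of equal stiffness,
# with 70% of the self-weight balanced by post-tensioning.
print(distribute_slab_load(100.0, [1.0, 1.0, 1.0], balanced_fraction=0.7))
```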
Abstract:
Many infrastructure and essential service systems, such as electricity and telecommunications in Europe and North America, used to be operated as monopolies, if not state-owned enterprises. However, they have now been disintegrated into groups of smaller companies managed by different stakeholders. Railways are no exception. Since the early 1980s, there have been reforms in the shape of restructuring of the national railways in different parts of the world. Continuous refinements are still being made to allow better utilisation of railway resources and quality of service. There has been growing interest within the industry in understanding the impacts of these reforms on operational efficiency and constraints. A number of post-evaluations have been conducted by analysing the performance of the stakeholders in terms of their profits (Crompton and Jupe 2003), quality of train service (Shaw 2001) and engineering operations (Watson 2001). Results from these studies are valuable for future improvement of the system, followed by a new cycle of post-evaluations. However, direct implementation of these changes is often costly and the consequences take a long period of time (e.g. years) to surface. With the advance of fast computing technologies, computer simulation is a cost-effective means of evaluating a hypothetical change in a system prior to actual implementation. For example, simulation suites have been developed to study a variety of traffic control strategies according to sophisticated models of train dynamics, traction and power systems (Goodman, Siu and Ho 1998, Ho and Yeung 2001). Unfortunately, under the restructured railway environment, it is by no means easy to model the complex behaviour of the stakeholders and the interactions between them. The multi-agent system (MAS) is a recently developed modelling technique which may be useful in assisting the railway industry to conduct simulations of the restructured railway system. In a MAS, a real-world entity is modelled as a software agent that is autonomous, reactive to changes, and able to initiate proactive actions and social communicative acts. It has been applied in the areas of supply-chain management processes (García-Flores, Wang and Goltz 2000, Jennings et al. 2000a, b) and e-commerce activities (Au, Ngai and Parameswaran 2003, Liu and You 2003), in which the objectives and behaviour of the buyers and sellers are captured by software agents. It is therefore beneficial to investigate the suitability and feasibility of applying agent modelling to railways and the extent to which it might help in developing better resource management strategies. This paper sets out to examine the benefits of using a MAS to model the resource management process in railways. Section 2 first describes the business environment after the railway reforms. Then the problems emerging from the restructuring process are identified in Section 3. Section 4 describes the realisation of a MAS for railway resource management under the restructured scheme and the feasibility studies expected from the model.
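As an illustration of the agent abstraction described above, the sketch below models an infrastructure provider and train operators negotiating a track slot. The roles, names and single-round bidding rule are assumptions for illustration only, not the paper's design:

```python
# Minimal, illustrative sketch of agent-based modelling of railway
# resource allocation; the auction rule and values are assumptions.
class OperatorAgent:
    def __init__(self, name, valuation):
        self.name = name
        self.valuation = valuation  # value placed on a track slot

    def bid(self, slot):
        # Reactive behaviour: bid modestly above reserve, capped by
        # the operator's own valuation of the slot
        return min(self.valuation, slot["reserve"] + 10)

class InfrastructureAgent:
    def __init__(self, slots):
        self.slots = slots

    def allocate(self, operators):
        # Social/communicative act: collect bids, award each slot to
        # the highest bidder
        allocation = {}
        for slot in self.slots:
            bids = {op.name: op.bid(slot) for op in operators}
            allocation[slot["id"]] = max(bids, key=bids.get)
        return allocation

ops = [OperatorAgent("FreightCo", 120), OperatorAgent("PassengerCo", 150)]
infra = InfrastructureAgent([{"id": "slot-0800", "reserve": 100}])
print(infra.allocate(ops))
```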
Abstract:
Over the past ten years, minimally invasive plate osteosynthesis (MIPO) for the fixation of long bone fractures has become a clinically accepted method with good outcomes compared to the conventional open surgical approach (open reduction internal fixation, ORIF). However, while MIPO offers some advantages over ORIF, it also has some significant drawbacks, such as a more demanding surgical technique and increased radiation exposure. No clinical or experimental study to date has shown a difference between the healing outcomes of fractures treated with the two surgical approaches. Therefore, a novel, standardised severe trauma model in sheep was developed and validated in this project to examine the effect of the two surgical approaches on soft tissue and fracture healing. Twenty-four sheep were subjected to severe soft tissue damage and a complex distal femur fracture. The fractures were initially stabilised with an external fixator. After five days of soft tissue recovery, internal fixation with a plate was applied, randomised to either MIPO or ORIF. Within the first fourteen days, the soft tissue damage was monitored locally with a compartment pressure sensor and systemically by blood tests. Fracture healing progress was assessed fortnightly by X-rays. The sheep were sacrificed in two groups after four and eight weeks, and CT scans and mechanical testing were performed. Soft tissue monitoring showed significantly higher postoperative creatine kinase and lactate dehydrogenase values in the ORIF group compared to MIPO. After four weeks, torsional stiffness was significantly higher in the MIPO group (p=0.018) compared to the ORIF group. Torsional strength also showed increased values for the MIPO technique (p=0.11). The measured total mineralised callus volumes were slightly higher in the ORIF group. However, a newly developed morphological callus bridging score showed significantly higher values for the MIPO technique (p=0.007), with a high correlation to the mechanical properties (R2=0.79). After eight weeks, the same trends continued, but without statistical significance. In summary, this clinically relevant study, using the newly developed severe trauma model in sheep, clearly demonstrates that the minimally invasive technique minimises additional soft tissue damage and improves fracture healing in the early stage compared to the open surgical approach.
Abstract:
The mineral schlossmacherite, (H3O,Ca)Al3(AsO4,PO4,SO4)2(OH)6, a multi-cation, multi-anion mineral of the beudantite mineral subgroup, has been characterised by Raman spectroscopy. The mineral and related minerals function as heavy metal collectors and are often amorphous or poorly crystalline, such that XRD identification is difficult. The Raman spectra are dominated by an intense band at 864 cm-1, assigned to the symmetric stretching mode of the AsO43- anion. Raman bands at 809 and 819 cm-1 are assigned to the antisymmetric stretching mode of AsO43-. The sulphate anion is characterised by bands at 1000 cm-1 (ν1), and at 1031, 1082 and 1139 cm-1 (ν3). Two sets of bands are observed in the OH stretching region: the first between 2800 and 3000 cm-1, with bands at 2850, 2868 and 2918 cm-1, and the second between 3300 and 3600 cm-1, with bands at 3363, 3382, 3410, 3449 and 3537 cm-1. These bands enabled the calculation of hydrogen bond distances, which span a wide range.
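The abstract does not state which wavenumber-to-distance correlation was used to derive the hydrogen bond distances; a widely used empirical relation for O-H···O hydrogen bonds (e.g. Libowitzky, 1999) has the form below, with d in angstroms and ν in cm-1, and is solved for d at each observed OH stretching band:

```latex
% Assumed empirical correlation (Libowitzky 1999 form) between the
% O-H stretching wavenumber and the O...O hydrogen bond length;
% not stated in the abstract itself.
\nu_{\mathrm{OH}} \approx 3592
  - 304 \times 10^{9} \exp\!\left(-\frac{d(\mathrm{O\cdots O})}{0.1321}\right)
```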
Abstract:
Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive enough to describe the kinematics of the foot–shoe complex and lower leg during walking gait. To achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot–shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was shown to be good to excellent (ICC = 0.75–0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC = 0.68–0.99) than the inexperienced rater (ICC = 0.38–0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90 = 2.17–9.36°; tarsometatarsal joint, MDD90 = 1.03–9.29°; and metatarsophalangeal joint, MDD90 = 1.75–9.12°. The thresholds proposed are specific to the description of shod motion and can be used in future research designed to compare different footwear.
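The abstract does not spell out how the MDD90 thresholds were derived, but a standard formulation (assumed here, not confirmed by the abstract) computes them from the intraclass correlation coefficient via the standard error of measurement:

```latex
% Standard formulation of the minimal detectable difference at the
% 90% confidence level, via the standard error of measurement (SEM):
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1 - \mathrm{ICC}}, \qquad
\mathrm{MDD}_{90} = 1.645 \times \sqrt{2} \times \mathrm{SEM}
```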
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient relies on 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-below age group, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts arising from random movements of the subject's limbs. One artefact observed is the step artefact, believed to result from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multi-level thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multi-level threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared to 0.18 mm for CT-based models; the difference was not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, yielding errors of 0.32 ± 0.02 mm compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multi-level thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
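As a rough sketch of the two segmentation approaches compared in the study, the snippet below applies intensity thresholding and Canny edge detection to a single 2D slice using scikit-image; Otsu's method stands in for the thesis's threshold-selection method, and all parameters and data are illustrative:

```python
# Minimal sketch of intensity thresholding vs Canny edge detection
# on a 2D image slice; parameters are illustrative, not the thesis's.
import numpy as np
from skimage import filters, feature

def segment_slice(img, method="threshold"):
    if method == "threshold":
        # Intensity thresholding: Otsu's method as a stand-in for
        # the thesis's threshold-selection procedure
        t = filters.threshold_otsu(img)
        return img > t
    elif method == "canny":
        # Canny edge detection delineates outer/inner bone contours,
        # which are then stacked across slices into a 3D model
        return feature.canny(img, sigma=2.0)
    raise ValueError(f"unknown method: {method}")

# Example on synthetic data (a real pipeline would loop over slices)
slice_2d = np.random.rand(256, 256)
mask = segment_slice(slice_2d, "threshold")
edges = segment_slice(slice_2d, "canny")
```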
Abstract:
Person re-identification involves recognising individuals in different locations across a network of cameras and is a challenging task due to a large number of varying factors such as pose (of both subject and camera) and ambient lighting conditions. Existing databases do not adequately capture these variations, making evaluation of proposed techniques difficult. In this paper, we present a new, challenging multi-camera surveillance database designed for the task of person re-identification. This database consists of 150 unscripted sequences of subjects travelling through a building environment via up to eight camera views, appearing from various angles and under varying illumination conditions. A flexible XML-based evaluation protocol is provided to allow a highly configurable evaluation setup, enabling a variety of scenarios relating to pose and lighting conditions to be evaluated. A baseline person re-identification system consisting of colour, height and texture models is demonstrated on this database.
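As an illustration of the kind of appearance model used in such baselines, the sketch below builds a simple colour-histogram descriptor and compares a probe against a gallery entry. The bin count and the Bhattacharyya similarity are assumptions for illustration, not the paper's exact baseline:

```python
# Minimal sketch of a colour-histogram appearance model for person
# re-identification; parameters and metric are illustrative.
import numpy as np

def colour_model(image_rgb, bins=16):
    """Per-channel colour histogram, concatenated and L1-normalised."""
    hist = np.concatenate([
        np.histogram(image_rgb[..., c], bins=bins, range=(0, 255))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def match_score(gallery_hist, probe_hist):
    # Bhattacharyya coefficient: higher means more similar appearance
    return float(np.sum(np.sqrt(gallery_hist * probe_hist)))

# Example with synthetic person crops
probe = colour_model(np.random.randint(0, 256, (128, 48, 3)))
gallery = colour_model(np.random.randint(0, 256, (128, 48, 3)))
print(match_score(gallery, probe))
```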
'Going live': establishing the creative attributes of the live multi-camera television professional
Abstract:
In my capacity as a television professional and teacher specialising in multi-camera live television production for over 40 years, I was drawn to the conclusion that opaque or inadequately formed understandings of how creativity applies to the field of live television have impeded the development of pedagogies suitable for the teaching of live television in universities. In pursuing this hypothesis, the thesis shows that television degrees were born out of film studies degrees, where intellectual creativity was aligned with single-camera production and the 'creative roles' of producers, directors and scriptwriters. At the same time, multi-camera live television production was subsumed under the 'mass communication' banner, leading to an understanding that roles other than producer and director are simply technical, and bereft of creative intent or acumen. The thesis goes on to show that this attitude to other television production personnel, for example the vision mixer, videotape operator and camera operator, relegates their roles to that of 'button pusher'. This has resulted in university teaching models with inappropriate resources and unsuitable teaching practices. As a result, the industry is struggling to find people with the skills to meet the demands of the multi-camera live television sector. In specific terms, the central hypothesis is pursued through the following sequenced approach. Firstly, the thesis outlines the problems and traces the origins of the misconception that intellectual creativity does not exist in live multi-camera television. Secondly, this more adequately conceptualised account of the origins of the misconceptions about live television and creativity is anchored to the field of examination by presenting the foundations of the roles involved in making live television programs using multi-camera production techniques. Thirdly, this more nuanced rendition of the field sets the stage for a thorough analysis of education and training in the industry, and of teaching models at Australian universities. The findings clearly establish that the pedagogical models are aimed at single-camera production, a position that de-emphasises the creative aspects of multi-camera live television production. Informed by an examination of theories of learning, qualitative interviews, professional reflective practice and observations, the analysis of four multi-camera live production crew roles (camera operator, vision mixer, EVS/videotape operator and director's assistant) demonstrates the existence of intellectual creativity during live production. Finally, supported by the theories of learning and by the development and explication of a successful teaching model, a new approach to teaching students how to work in live television is proposed and substantiated.
Abstract:
Traffic congestion has a significant impact on the economy and the environment. Encouraging the use of multi-modal transport (public transport, bicycle, park'n'ride, etc.) has been identified by traffic operators as a good strategy for tackling congestion and its detrimental environmental impacts. A multi-modal and multi-objective trip planner provides users with various multi-modal options optimised for the objectives they prefer (cheapest, fastest, safest, etc.) and has the potential to reduce congestion on both a temporal and a spatial scale. The computation of multi-modal, multi-objective trips is a complicated mathematical problem, as it must integrate and utilise a diverse range of large data sets, including both road network information and public transport schedules, while optimising a number of competing objectives: fully optimising one objective, such as travel time, can adversely affect others, such as cost. The relationship between these objectives can also be quite subjective, as their priorities vary from user to user. This paper first outlines the various data requirements and formats needed for the multi-modal, multi-objective trip planner to operate, including static information about the physical infrastructure within Brisbane as well as real-time and historical data used to predict traffic flow on the road network and the status of public transport. It then presents the graph data structures representing the road and public transport networks within Brisbane that the trip planner uses to calculate optimal routes. This provides grounds for an investigation into the various shortest-path algorithms researched over the last few decades, and a foundation for the construction of the multi-modal, multi-objective trip planner through the development of innovative new algorithms that can operate on the large, diverse data sets and the competing objectives.
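As a sketch of the algorithmic core such a planner needs, the snippet below runs a label-setting search that keeps the Pareto frontier of (time, cost) labels at each node, so that no single objective is fully optimised at the expense of the other. The tiny network is illustrative, not Brisbane data:

```python
# Minimal sketch of bi-objective (time, cost) shortest-path search
# keeping Pareto-optimal labels at each node; network illustrative.
import heapq

def pareto_shortest_paths(graph, source, target):
    """graph[u] -> list of (v, time, cost) edges."""
    frontier = {n: [] for n in graph}   # accepted labels per node
    heap = [(0, 0, source)]             # labels ordered by (time, cost)
    results = []
    while heap:
        t, c, u = heapq.heappop(heap)
        # Discard a label dominated by one already accepted at u
        if any(pt <= t and pc <= c for pt, pc in frontier[u]):
            continue
        frontier[u].append((t, c))
        if u == target:
            results.append((t, c))
            continue
        for v, dt, dc in graph[u]:
            heapq.heappush(heap, (t + dt, c + dc, v))
    return results  # non-dominated (time, cost) trade-offs

net = {
    "A": [("B", 10, 1), ("C", 4, 5)],
    "B": [("D", 2, 1)],
    "C": [("D", 3, 2)],
    "D": [],
}
print(pareto_shortest_paths(net, "A", "D"))  # [(7, 7), (12, 2)]
```

Each result is one option to present to the user: the fast-but-expensive route or the slow-but-cheap one, with anything dominated on both objectives pruned away.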
Abstract:
Particulate matter research is essential because of the well-known significant adverse effects of aerosol particles on human health and the environment. In particular, identification of the origin or sources of particulate matter emissions is of paramount importance in assisting efforts to control and reduce air pollution in the atmosphere. This thesis aims to: identify the sources of particulate matter; compare pollution conditions at urban, rural and roadside receptor sites; combine information about the sources with meteorological conditions at the sites to locate the emission sources; compare sources based on particle size or mass; and, ultimately, provide the basis for control and reduction of particulate matter concentrations in the atmosphere. To achieve these objectives, data was obtained from assorted local and international receptor sites over long sampling periods. The samples were analysed using Ion Beam Analysis and Scanning Mobility Particle Sizer methods to measure particle mass with chemical composition and particle size distribution, respectively. Advanced data analysis techniques were employed to derive information from the large, complex data sets. Multi-Criteria Decision Making (MCDM), a ranking method, drew on data variability to examine the overall trends and provided a rank ordering of the sites and years in which sampling was conducted. Coupled with the receptor model Positive Matrix Factorisation (PMF), the pollution emission sources were identified and meaningful information pertinent to the prioritisation of control and reduction strategies was obtained. This thesis is presented in the thesis-by-publication format. It includes four refereed papers which together demonstrate a novel combination of data analysis techniques that enabled particulate matter sources to be identified and sampling sites and years to be ranked. The strength of this source identification process was corroborated when the analysis procedure was expanded to encompass multiple receptor sites. Initially applied to identify the contributing sources at roadside and suburban sites in Brisbane, the technique was subsequently applied to three receptor sites (roadside, urban and rural) located in Hong Kong. The comparable results from these international and national sites over several sampling periods indicated similarities in source contributions between receptor site types, irrespective of global location, and suggested the value of applying these methods to air pollution investigations worldwide. Furthermore, an investigation into particle size distribution data was conducted to deduce the sources of aerosol emissions based on particle size and elemental composition. Considering the adverse effects on human health caused by small particles, knowledge of the particle size distribution and elemental composition provides a different perspective on the pollution problem. This thesis clearly illustrates that the application of an innovative combination of advanced data interpretation methods to identify particulate matter sources and rank sampling sites and years provides the basis for the prioritisation of future air pollution control measures. Moreover, this study contributes significantly to knowledge of the chemical composition of airborne particulate matter in Brisbane, Australia, and of the identity and plausible locations of the contributing sources. Such novel source apportionment and ranking procedures are ultimately applicable to environmental investigations worldwide.
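As a rough illustration of the receptor-modelling step, the sketch below factorises a samples-by-species concentration matrix into non-negative source contributions and source profiles. scikit-learn's NMF is used here as a simple stand-in for PMF, which additionally weights the fit by measurement uncertainties; the data are synthetic:

```python
# Minimal sketch of receptor modelling in the spirit of PMF:
# X (samples x species) ~ G (contributions) @ F (source profiles).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((200, 15))         # 200 samples x 15 chemical species

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)        # source contributions per sample
F = model.components_             # source profiles (species signatures)

# Each factor profile would then be matched against known source
# signatures (e.g. sea salt, vehicle exhaust, soil) for identification.
print(G.shape, F.shape)           # (200, 4) (4, 15)
```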
Abstract:
Bactrocera dorsalis sensu stricto, B. papayae, B. philippinensis and B. carambolae are serious pest fruit fly species of the B. dorsalis complex that predominantly occur in South-East Asia and the Pacific. Identifying molecular diagnostics has proven problematic for these four taxa, a situation that confounds biosecurity and quarantine efforts and which may be the result of at least some of these taxa representing the same biological species. We therefore conducted a phylogenetic study of these four species (and closely related outgroup taxa) based on individuals collected from a wide geographic range, sequencing six loci (cox1, nad4-3′, CAD, period, ITS1, ITS2) for approximately 20 individuals from each of 16 sample sites. Data were analysed within maximum likelihood and Bayesian phylogenetic frameworks for individual loci and concatenated data sets, to which we applied multiple monophyly and species delimitation tests. Species monophyly was measured by clade support (posterior probability or bootstrap resampling for the Bayesian and likelihood analyses, respectively), Rosenberg's reciprocal monophyly measure P(AB), Rodrigo's P(RD), and the genealogical sorting index, gsi. We specifically tested whether there was phylogenetic support for the four 'ingroup' pest species using a data set of multiple individuals sampled from a number of populations. Based on our combined data set, Bactrocera carambolae emerges as a distinct monophyletic clade, whereas B. dorsalis s.s., B. papayae and B. philippinensis are unresolved. These data add to the growing body of evidence that B. dorsalis s.s., B. papayae and B. philippinensis are the same biological species, which has consequences for quarantine, trade and pest management.
Abstract:
An Application-Specific Instruction-set Processor (ASIP) is a specialised processor tailored to run a particular application or applications efficiently. However, when there are multiple candidate applications in the application domain, it is difficult and time-consuming to find the optimal set of applications to implement. Existing ASIP design approaches perform this selection manually, based on a designer's knowledge. We help cut down the number of candidate applications by devising a classification method that clusters similar applications based on the special-purpose operations they share. This provides a significant reduction in the comparison overhead while resulting in customised ASIP instruction sets that can benefit a whole family of related applications. Our method gives users the ability to quantify the degree of similarity between the sets of shared operations in order to control the size of the clusters. A case study involving twelve algorithms confirms that our approach can successfully cluster similar algorithms together based on the similarity of their component operations.
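A minimal sketch of the clustering idea: applications are grouped when the Jaccard similarity of their special-purpose operation sets exceeds a user-chosen threshold, which directly controls cluster size. The operation sets and threshold below are illustrative, not from the case study:

```python
# Minimal sketch of similarity-based clustering of candidate
# applications by shared special-purpose operations; data illustrative.
def jaccard(a, b):
    """Jaccard similarity of two operation sets."""
    return len(a & b) / len(a | b)

def cluster_apps(app_ops, threshold=0.5):
    """app_ops: {app_name: set of special-purpose operations}."""
    clusters = []
    for app, ops in app_ops.items():
        # Join the first cluster whose members are all similar enough;
        # raising the threshold yields smaller, tighter clusters
        for cluster in clusters:
            if all(jaccard(ops, app_ops[m]) >= threshold for m in cluster):
                cluster.append(app)
                break
        else:
            clusters.append([app])
    return clusters

apps = {
    "fft":     {"mac", "butterfly", "bitrev"},
    "fir":     {"mac", "shift"},
    "viterbi": {"add-compare-select", "shift"},
}
print(cluster_apps(apps, threshold=0.3))
```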
Abstract:
Lean body mass (LBM) and muscle mass remain difficult to quantify in large epidemiological studies due to the non-availability of inexpensive methods. We therefore developed anthropometric prediction equations to estimate LBM and appendicular lean soft tissue (ALST) using dual-energy X-ray absorptiometry (DXA) as the reference method. Healthy volunteers (n = 2220; 36% female; age 18-79 y) representing a wide range of body mass index (14-44 kg/m2) participated in this study. Their LBM, including ALST, was assessed by DXA along with anthropometric measurements. The sample was divided into prediction (60%) and validation (40%) sets. In the prediction set, a number of prediction models were constructed using the DXA-measured LBM and ALST estimates as dependent variables and combinations of anthropometric indices as independent variables. These equations were cross-validated in the validation set. Simple equations using age, height and weight explained >90% of the variation in LBM and ALST in both men and women. Additional variables (hip and limb circumferences and the sum of skinfold thicknesses) increased the explained variation by 5-8% in the fully adjusted models predicting LBM and ALST. More complex equations using all of the above anthropometric variables could predict the DXA-measured LBM and ALST accurately, as indicated by a low standard error of the estimate (LBM: 1.47 kg and 1.63 kg for men and women, respectively), as well as good agreement by Bland-Altman analyses. These equations could be a valuable tool in large epidemiological studies assessing these body compartments in Indians and other population groups with similar body composition.
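As a sketch of the study's modelling workflow, the snippet below fits a simple prediction equation on a 60/40 prediction/validation split and reports the standard error of the estimate on the validation set. The synthetic data and coefficients are illustrative only, not the published equations:

```python
# Minimal sketch of fitting an anthropometric prediction equation for
# LBM on a 60/40 prediction/validation split; data synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(18, 79, n),      # age (y)
    rng.uniform(140, 190, n),    # height (cm)
    rng.uniform(40, 110, n),     # weight (kg)
])
# Synthetic "DXA-measured" LBM with noise (illustrative relationship)
lbm = 0.3 * X[:, 2] + 0.2 * (X[:, 1] - 100) + rng.normal(0, 1.5, n)

# 60% prediction set, 40% validation set, as in the study design
X_pred, X_val, y_pred_set, y_val = train_test_split(
    X, lbm, train_size=0.6, random_state=0)

eq = LinearRegression().fit(X_pred, y_pred_set)
resid = y_val - eq.predict(X_val)
see = np.sqrt(np.mean(resid**2))   # standard error of the estimate
print(f"R^2 = {eq.score(X_val, y_val):.2f}, SEE = {see:.2f} kg")
```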