127 results for Multi-criterion Decision Analysis (MCDA)
Abstract:
This paper presents Multi-Step A* (MSA*), a search algorithm based on A* for multi-objective 4D vehicle motion planning (three spatial and one time dimension). The research is principally motivated by the need for offline and online motion planning for autonomous Unmanned Aerial Vehicles (UAVs). For UAVs operating in large, dynamic and uncertain 4D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles and the rules of the air. It is shown that MSA* finds a cost optimal solution using variable length, angle and velocity trajectory segments. These segments are approximated with a grid based cell sequence that provides an inherent tolerance to uncertainty. Computational efficiency is achieved by using variable successor operators to create a multi-resolution, memory efficient lattice sampling structure. Simulation studies on the UAV flight planning problem show that MSA* meets the time constraints of online replanning and finds paths of equivalent cost but in a quarter of the time (on average) of vector neighbourhood based A*.
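The variable successor operators and multi-resolution 4D lattice are specific to MSA*; as a rough illustration of the underlying idea only, the sketch below runs A* on a plain 2D grid (not the 4D lattice of the paper) with successor operators of variable step length along eight track angles, so long straight segments need few expansions. Costs, grid size and obstacles are hypothetical.

```python
import heapq, math

# A* with variable-length successor operators (a loose sketch of the idea behind
# MSA*, not the published algorithm): each expansion may step 1, 2 or 4 cells
# along any of 8 track angles, so long straight segments need few expansions.
STEPS = [1, 2, 4]
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def a_star(start, goal, blocked, size):
    def h(n):                                    # admissible Euclidean heuristic
        return math.hypot(goal[0] - n[0], goal[1] - n[1])

    def clear(a, b):                             # crude segment check: sample cells along the track
        steps = max(abs(b[0] - a[0]), abs(b[1] - a[1]))
        return all((round(a[0] + (b[0] - a[0]) * t / steps),
                    round(a[1] + (b[1] - a[1]) * t / steps)) not in blocked
                   for t in range(steps + 1))

    g, parent = {start: 0.0}, {start: None}
    frontier = [(h(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dx, dy in DIRS:
            for s in STEPS:                      # variable successor operators
                nxt = (node[0] + dx * s, node[1] + dy * s)
                if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                    continue
                if not clear(node, nxt):
                    continue
                cost = g[node] + math.hypot(dx * s, dy * s)
                if cost < g.get(nxt, float("inf")):
                    g[nxt], parent[nxt] = cost, node
                    heapq.heappush(frontier, (cost + h(nxt), nxt))
    return None

print(a_star((0, 0), (9, 9), {(4, y) for y in range(8)}, 10))
```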
Abstract:
Human hair fibres are ubiquitous in nature and are frequently found at crime scenes, often as a result of exchange between the perpetrator, victim and/or the surroundings according to Locard's Principle. Hair fibre evidence can therefore provide important information for crime investigation. For human hair evidence, current forensic methods of analysis rely on comparisons of either hair morphology by microscopic examination or nuclear and mitochondrial DNA analyses. Unfortunately, in some instances the use of microscopy and DNA analyses is difficult and often not feasible. This dissertation is arguably the first comprehensive investigation aimed at comparing, classifying and identifying single human scalp hair fibres with the aid of FTIR-ATR spectroscopy in a forensic context. Spectra were collected from the hair of 66 subjects of Asian, Caucasian and African (i.e. African-type) origin. The fibres ranged from untreated to variously mildly and heavily cosmetically treated hairs. The collected spectra reflected the physical and chemical nature of the hair near the surface, particularly the cuticle layer. In total, 550 spectra were acquired and processed to construct a relatively large database. To assist with the interpretation of the complex spectra from various types of human hair, Derivative Spectroscopy and Chemometric methods such as Principal Component Analysis (PCA), Fuzzy Clustering (FC) and the Multi-Criteria Decision Making (MCDM) methods Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Geometrical Analysis for Interactive Aid (GAIA) were utilised. FTIR-ATR spectroscopy had two important advantages over previous methods: (i) sample throughput and spectral collection were significantly improved (no physical flattening or microscope manipulations), and (ii) given the recent advances in FTIR-ATR instrument portability, there is real potential to transfer this work's findings seamlessly to in-field applications. The "raw" spectra, spectral subtractions and second derivative spectra were compared to demonstrate the subtle differences between human hairs. SEM images were used as corroborative evidence of the surface topography of the hair, indicating that the condition of the cuticle surface could be of three types: untreated, mildly treated and chemically treated. Extensive studies of the spectral band regions responsible for matching and discrimination of various types of hair samples suggested that the 1690-1500 cm-1 IR spectral region was to be preferred over the commonly used 1750-800 cm-1 region. The principal reason was the presence of the highly variable spectral profiles of cystine oxidation products (1200-1000 cm-1), which contributed significantly to spectral scatter and hence to poor hair sample matching. In the preferred 1690-1500 cm-1 region, conformational changes in the keratin protein, attributed to α-helical to β-sheet transitions in the Amide I and Amide II vibrations, played a significant role in matching and discrimination of the spectra and hence of the hair fibre samples. For gender comparison, the Amide II band is significant for differentiation: male hair spectra exhibited a more intense β-sheet vibration in the Amide II band at approximately 1511 cm-1, whilst female hair spectra displayed a more intense α-helical vibration at 1520-1515 cm-1.
In terms of chemical composition, female hair spectra exhibited greater intensity of bands attributed to the amino acid tryptophan (1554 cm-1) and to aspartic and glutamic acid (1577 cm-1). It was also observed that, for the separation of samples based on racial differences, untreated Caucasian hair was discriminated from Asian hair as a result of higher levels of the amino acid cystine and of cysteic acid. However, when mildly or chemically treated, Asian and Caucasian hair fibres are similar, whereas African-type hair fibres are different. In terms of the investigation's novel contribution to the field of forensic science, it has allowed the development of a novel, multifaceted, methodical protocol where previously none had existed. The protocol is a systematic method for rapidly investigating unknown or questioned single human hair FTIR-ATR spectra from different genders and racial origins, including fibres with different cosmetic treatments. Unknown or questioned spectra are first separated on the basis of chemical treatment (i.e. untreated, mildly treated or chemically treated), then gender, and finally racial origin (i.e. Asian, Caucasian or African-type). The methodology has the potential to complement the current forensic methods for the analysis of fibre evidence (i.e. microscopy and DNA), providing information at the morphological, genetic and structural levels.
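The thesis relies on derivative spectroscopy and PCA in the preferred 1690-1500 cm-1 window; as a minimal sketch of that kind of preprocessing and ordination (not the thesis protocol itself, and with placeholder spectra rather than the real 550-spectrum database), the following Python fragment takes second derivatives with a Savitzky-Golay filter, applies SNV scaling and projects the spectra with PCA for display.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

# Illustrative preprocessing/ordination pipeline for ATR spectra (a sketch only):
# second-derivative spectra in the 1690-1500 cm-1 window, SNV scaling, then PCA
# scores for display of treatment/gender/racial-origin groupings.
wavenumbers = np.linspace(1800, 900, 901)               # hypothetical axis, cm-1
spectra = np.random.default_rng(0).random((550, 901))   # placeholder for 550 hair spectra

window = (wavenumbers <= 1690) & (wavenumbers >= 1500)  # preferred Amide I/II region
d2 = savgol_filter(spectra[:, window], window_length=15, polyorder=3, deriv=2, axis=1)
snv = (d2 - d2.mean(axis=1, keepdims=True)) / d2.std(axis=1, keepdims=True)

scores = PCA(n_components=3).fit_transform(snv)
print(scores.shape)   # (550, 3) -> plot PC1 vs PC2, coloured by treatment/gender/race
```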
Abstract:
Green energy is a key factor in green building: it drives down electricity bills and generates electricity with zero carbon emissions. Climate change and environmental policies are encouraging people to use renewable energy for green buildings instead of coal-fired (conventional) energy, which is not environmentally friendly. Solar energy is thus a clean energy that reduces environmental impact and lowers electricity costs. A solar installation collects energy from the sun via a solar array and stores it in batteries, which then supply the electricity needs of the whole house with zero carbon emissions. However, there are many solar-array suppliers in the market, so this paper applies the superiority and inferiority multi-criteria ranking (SIR) method, using 13 criteria to establish the I-flow and S-flow matrices, to evaluate four alternative solar energy systems and determine which alternative is best for providing power to a sustainable building. SIR is a well-known, structured multi-criteria decision support tool that is increasingly used in construction and building. The outcome of this paper gives users a clear indication when selecting solar energy systems.
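A minimal sketch of the SIR idea follows, with invented scores, weights and only three criteria instead of the paper's 13: for each criterion, an alternative's superiority entry counts how many alternatives it beats and its inferiority entry counts how many beat it (a simple step preference function), and the weighted flows are then used to rank the four alternatives. The actual preference functions and weights of the study are not reproduced here.

```python
import numpy as np

# Sketch of the SIR (superiority-inferiority ranking) idea with made-up data:
# 4 candidate solar-array alternatives scored on a few benefit criteria.
scores = np.array([[0.70, 0.55, 0.80],     # alternative A1
                   [0.60, 0.75, 0.65],     # alternative A2
                   [0.85, 0.40, 0.70],     # alternative A3
                   [0.50, 0.65, 0.60]])    # alternative A4
weights = np.array([0.4, 0.3, 0.3])        # criterion weights (illustrative)

n_alt, n_crit = scores.shape
S = np.zeros((n_alt, n_crit))              # superiority matrix
I = np.zeros((n_alt, n_crit))              # inferiority matrix
for j in range(n_crit):
    for i in range(n_alt):
        diff = scores[i, j] - scores[:, j]
        S[i, j] = np.sum(diff > 0)         # how many alternatives i beats on criterion j
        I[i, j] = np.sum(diff < 0)         # how many alternatives beat i on criterion j

s_flow = S @ weights                       # aggregate superiority (S-) flow
i_flow = I @ weights                       # aggregate inferiority (I-) flow
ranking = np.argsort(i_flow - s_flow)      # lower net inferiority ranks first
print("S-flows:", s_flow, "I-flows:", i_flow, "ranking:", ranking + 1)
```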
Abstract:
Photochemistry has made significant contributions to our understanding of many important natural processes as well as to scientific discoveries in the man-made world. The measurements from such studies are often complex and may require advanced data interpretation with the use of multivariate or chemometrics methods. In general, such methods have been applied successfully for data display, classification, multivariate curve resolution and prediction in analytical chemistry, environmental chemistry, engineering, medical research and industry. In photochemistry, by comparison, applications of such multivariate approaches have been less frequent, although a variety of methods have been used, especially in spectroscopic photochemical applications. The methods include Principal Component Analysis (PCA; data display), Partial Least Squares (PLS; prediction), Artificial Neural Networks (ANN; prediction) and several models for multivariate curve resolution related to Parallel Factor Analysis (PARAFAC; decomposition of complex responses). Applications of such methods are discussed in this overview; typical examples include photodegradation of herbicides, prediction of antibiotics in human fluids (fluorescence spectroscopy), non-destructive in- and on-line monitoring (near infrared spectroscopy) and fast-time resolution of spectroscopic signals from photochemical reactions. It is also clear from the literature that the scope of spectroscopic photochemistry has been enhanced by the application of chemometrics. To highlight and encourage further applications of chemometrics in photochemistry, several additional chemometrics approaches are discussed using data collected by the authors. The use of a PCA biplot is illustrated with an analysis of a matrix containing data on the performance of photocatalysts developed for water splitting and hydrogen production. In addition, the application of Multi-Criteria Decision Making (MCDM) ranking methods and Fuzzy Clustering is demonstrated with an analysis of a water quality data matrix. Other examples include the application of simultaneous kinetic spectroscopic methods for the prediction of pesticides, and the use of a response fingerprinting approach for the classification of medicinal preparations. In general, the overview endeavours to emphasise the advantages of chemometric interpretation of multivariate photochemical data, and an Appendix of references and summaries of the common and less usual chemometrics methods noted in this work is provided.
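A PCA biplot of the kind mentioned above overlays object scores and variable loadings in the same plane; the sketch below shows a minimal version in Python with an invented samples-by-variables matrix and hypothetical variable names, not the photocatalyst performance data analysed in the overview.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Minimal PCA biplot sketch (illustrative data only): rows are samples,
# columns are performance variables.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))                        # placeholder data matrix
labels = ["H2 rate", "band gap", "surface area", "stability"]   # hypothetical variables

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)
loadings = pca.components_.T                        # variable loadings on PC1/PC2

plt.scatter(scores[:, 0], scores[:, 1], s=15)       # object (sample) scores
for (lx, ly), name in zip(loadings, labels):        # variable vectors
    plt.arrow(0, 0, lx * 3, ly * 3, head_width=0.05)
    plt.annotate(name, (lx * 3, ly * 3))
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.title("PCA biplot (sketch)")
plt.show()
```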
Abstract:
Linear (or continuous) assets are engineering infrastructure that usually spans long distances and can be divided into different segments, all of which perform the same function but may be subject to different loads and environmental factors. Typical linear assets include railway lines, roads, pipelines and cables. How and when to renew such assets are critical decisions for asset owners, as renewals normally involve significant capital investment. By investigating the characteristics of linear asset renewal decisions and identifying the critical requirements associated with them, we present a multi-criteria decision support method to help optimise renewal decisions. A case study concerning renewal of an economiser's tubing system in a coal-fired power station is used to demonstrate the application of this method. Although the paper concerns a particular linear asset decision type, the approach has broad applicability for linear asset management.
Abstract:
House dust is a heterogeneous matrix which contains a number of biological materials and particulate matter gathered from several sources. It is the accumulation of a number of semi-volatile and non-volatile contaminants, which are trapped and preserved, so house dust can be viewed as an archive of both indoor and outdoor air pollution. There is evidence that, on average, people tend to stay indoors most of the time, and this increases exposure to house dust. The aims of this investigation were to: (i) assess the levels of Polycyclic Aromatic Hydrocarbons (PAHs), elements and pesticides in the indoor environment of the Brisbane area; (ii) identify and characterise the possible sources of elemental constituents (inorganic elements), PAHs and pesticides by means of Positive Matrix Factorisation (PMF); and (iii) establish the correlations between the levels of indoor air pollutants (PAHs, elements and pesticides) and the external and internal characteristics or attributes of the buildings and indoor activities by means of multivariate data analysis techniques. The dust samples were collected during 2005-2007 from homes located in different suburbs of Brisbane, Ipswich and Toowoomba, in South East Queensland, Australia. A vacuum cleaner fitted with a paper bag was used as the sampler for collecting the house dust, and a survey questionnaire completed by the residents recorded information about the indoor and outdoor characteristics of their residences. House dust samples were analysed for three different classes of pollutants: pesticides, elements and PAHs. The analyses were carried out on samples of particle size less than 250 µm. The chemical analyses for both pesticides and PAHs were performed using Gas Chromatography-Mass Spectrometry (GC-MS), while elemental analysis was carried out using Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). The data were subjected to multivariate data analysis techniques, in particular the multi-criteria decision-making procedure Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE), coupled with Geometrical Analysis for Interactive Aid (GAIA), in order to rank the samples and to display the data. This study showed that, compared with results from previous work carried out in Australia and overseas, the concentrations of pollutants in house dust in Brisbane and the surrounding areas were comparatively very high. The results also showed significant correlations between some of the physical parameters (type of building material, floor level, distance from industrial areas and major roads, and smoking) and the concentrations of pollutants. The type of building material and the age of the house were found to be two of the primary factors affecting the concentrations of pesticides and elements in house dust. The concentrations of these two types of pollutant appear to be higher in old (timber) houses than in brick ones. In contrast, the concentrations of PAHs were higher in brick houses than in timber ones. Other factors, such as floor level and distance from the main street and industrial areas, also affected the concentrations of pollutants in the house dust samples. To apportion the sources and to understand the mechanisms of pollutant accumulation, a Positive Matrix Factorisation (PMF) receptor model was applied.
The results showed that there were significant correlations between the concentrations of contaminants in house dust and the physical characteristics of the houses, such as the age and type of the house, the distance from main roads and industrial areas, and smoking. Sources of pollutants were identified. For PAHs, the sources were cooking activities, vehicle emissions, smoking, oil fumes, natural gas combustion and traces of diesel exhaust emissions; for pesticides, the sources were the application of pesticides for controlling termites in buildings and fences, treating indoor furniture, and garden use for controlling pests attacking horticultural and ornamental plants; for elements, the sources were soil, cooking, smoking, paints, pesticides, combustion of motor fuels, residual fuel oil, motor vehicle emissions, wearing down of brake linings and industrial activities.
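Receptor modelling of this kind factorises a samples-by-species concentration matrix into factor contributions and factor profiles. PMF proper minimises uncertainty-weighted residuals under non-negativity constraints (e.g. as implemented in EPA PMF); as a rough, unweighted illustration of the idea only, the sketch below uses scikit-learn's NMF on an invented matrix.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rough illustration of receptor modelling by factorisation (PMF proper weights
# residuals by measurement uncertainties; this unweighted NMF is only a sketch).
# X is a samples-by-species concentration matrix of dust measurements.
rng = np.random.default_rng(2)
X = np.abs(rng.normal(loc=1.0, scale=0.5, size=(60, 12)))   # placeholder concentrations

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)        # factor contributions per sample
F = model.components_             # factor profiles per species (candidate "sources")

# Each row of F is inspected (e.g. which PAHs/elements load highly) and matched
# against known signatures such as vehicle exhaust, smoking or cooking.
print(G.shape, F.shape)
```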
Abstract:
Reliability analysis is crucial to reducing the unexpected downtime, severe failures and ever-tightening maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest as hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the effects of covariates on hazard take an assumed form. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations arising from the two assumptions of statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data or covariates is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on the reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainties into reliability analysis, the developed NNHMs are extended to include handling incomplete covariates as an integral part. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical incomplete covariates handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus not suitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted, and it demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
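The general idea of a neural-network hazard model is to learn the mapping from operating age and condition covariates to hazard without assuming a baseline distribution or a fixed covariate-effect form. The toy sketch below illustrates only that idea on synthetic data with a generic MLP regressor; the published NNHM formulation, training targets and incomplete-covariate handling differ in detail.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy sketch of a neural-network hazard model: map asset age plus condition
# covariates (e.g. vibration, temperature) to a hazard value. Data are synthetic.
rng = np.random.default_rng(3)
age = rng.uniform(0, 10, 500)
vib = rng.normal(1.0, 0.2, 500) * (1 + 0.05 * age)     # covariate drifts with degradation
temp = rng.normal(60, 5, 500)
hazard = 0.02 * np.exp(0.25 * age) * (vib ** 2)        # synthetic "true" hazard

X = np.column_stack([age, vib, temp])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
model.fit(X, hazard)

print(model.predict([[8.0, 1.5, 62.0]]))               # hazard estimate for a new reading
```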
Abstract:
Jakarta, Indonesia’s chronic housing shortage poses multiple challenges for contemporary policy-makers. While it may be in the city’s interest to increase the availability of housing, there is limited land to do so. Market pressures, in tandem with government’s desire for housing availability, demand consideration of even marginal lands, such as those within floodplains, for development. Increasingly, planning for a flood resilient Jakarta is complicated by a number of factors, including: the city is highly urbanized and land use data is limited; flood management is technically complex, creating potential barriers to engagement for both decision-makers and the public; inherent uncertainty exists throughout modelling efforts, central to management; and risk and liability for infrastructure investments is unclear. These obstacles require localized watershed-level participatory planning to address risks of flooding where possible and reduce the likelihood that informal settlements occur in areas of extreme risk. This paper presents a preliminary scoping study for determination of an effective participatory planning method to encourage more resilient development. First, the scoping study provides background relevant to the challenges faced in planning for contemporary Jakarta. Second, the study examines the current use of decision-support tools, such as Geographic Information Systems (GIS), in planning for Jakarta. Existing capacity in the use of GIS allows for consideration of the use of an emerging method of community consultation - Multi-Criteria Decision-Making (MCDM) support systems infused with geospatial information - to aid in engagement with the public and improve decision-making outcomes. While these methods have been used in Australia to promote stakeholder engagement in urban intensification, the planned research will be an early introduction of the method to Indonesia. As a consequence of this intervention, it is expected that planning activities will result in a more resilient city, capable of engaging with disaster risk management in a more effective manner.
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies, arising from the unsatisfactory selection of first flush indicators and from how first flush is defined. The research discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected to explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded from stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were also used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision-making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load washed off at every 10% interval of runoff volume from the beginning of the event (interval first flush indicators, LV) and the actual pollutant load washed off during each 10% increment in runoff volume (section first flush indicators, P). First flush behaviour showed significant variation with pollutant type. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences from TSS and TP first flush, while particulate TN showed similar behaviour. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short-term pollutant" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids being exposed as the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall earlier in the event. Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (relative to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume.
This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of the runoff volume. First flush behaviour for TN differs from that of TSS and TP. Apart from rainfall characteristics, the composition and the availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensities produce a low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, whereby there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to that for TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush or a low magnitude first flush, or may not occur at all. Investigation of the influence of catchment characteristics on first flush found that the key factors influencing the phenomenon are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, a short travel time to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems due to their different wash-off behaviours. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and final parts of a rainfall event. This is related to the composition of TP available for wash-off.
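The interval and section first flush indicators described above can be computed from paired runoff and concentration records for an event; the sketch below shows one plausible calculation on invented data (cumulative load fraction interpolated at every 10% of runoff volume, and its per-slice differences), which is an illustration of the indicator concept rather than the thesis' exact procedure.

```python
import numpy as np

# Sketch of the interval first flush indicators: cumulative fraction of the event
# pollutant load washed off at every 10% increment of runoff volume.
# Data below are synthetic; real events use monitored runoff and TSS/TP/TN loads.
runoff = np.array([2., 5., 9., 7., 4., 3., 2., 1., 1., 1.])            # volume per time step
conc = np.array([180., 150., 90., 60., 45., 40., 35., 30., 30., 30.])  # mg/L, decays over event

load = runoff * conc
cum_v = np.cumsum(runoff) / runoff.sum()          # cumulative runoff volume fraction
cum_l = np.cumsum(load) / load.sum()              # cumulative load fraction

# Interval indicators: cumulative load fraction at 10%, 20%, ..., 100% of volume
LV = np.interp(np.arange(0.1, 1.01, 0.1), cum_v, cum_l)
P = np.diff(np.concatenate(([0.0], LV)))          # section indicators: load per 10% slice
print(np.round(LV, 2))
print(np.round(P, 2))
# First flush is indicated when the early LV values sit well above the 1:1 line,
# e.g. the LV at 10% of runoff volume noticeably exceeding 0.1.
```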
Abstract:
The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this project involved an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry. Working with data provided by Hoodlum Entertainment and Foxtel Marketing, the outcome of the study was a prototype for a custom data visualisation tool that allowed access, manipulation and presentation of user engagement data, both historic and predictive. The prototyped interfaces demonstrate how the visualisation tool would collect and organise data specific to multiplatform projects by aggregating data across a number of platform reporting tools. Such a tool is designed to encompass not only platforms developed by the transmedia producer but also sites developed by fans. This visualisation tool accounted for multiplatform experience projects whose top level comprises people, platforms and content. People include characters, actors, audience, distributors and creators. Platforms include television, Facebook and other relevant social networks, literature, cinema and other media that might be included in the multiplatform experience. Content refers to discrete media texts employed within the platform, such as a tweet, a YouTube video, a Facebook post, an email, a television episode, etc. Core content is produced by the experience's creators to advance the narrative, while complementary content generated by audience members offers further contributions to the experience. Equally important is the timing with which the components of the experience are introduced and how they interact with and impact upon each other. By being able to combine, filter and sort these elements in multiple ways, we can better understand the value of certain components of a project. The tool also offers insights into the relationship between the timing of the release of components and the user activity associated with them, which further highlights the efficacy (or, indeed, failure) of assets as catalysts for engagement. In collaboration with Hoodlum we have developed a number of design scenarios experimenting with the ways in which data can be visualised and manipulated to tell a more refined story about the value of user engagement with certain project components and activities. This experimentation will serve as the basis for future research.
Abstract:
This thesis focused on the development of improved capacity analysis and capacity planning techniques for railways. A number of innovations were made and were tested in a case study of a real national railway. These techniques can reduce the time required to perform the decision-making activities that planners and managers need to carry out. As all railways need to be expanded to meet increasing demands, the presumption that analytical capacity models can be used to identify how best to improve an existing network at least cost was fully investigated. Track duplication was the mechanism used to expand a network's capacity, and two variant capacity expansion models were formulated. Another outcome of this thesis is the development and validation of bi-objective models for capacity analysis. These models regulate the competition for track access and perform a trade-off analysis. An opportunity to develop more general multi-objective approaches was identified.
Abstract:
This paper presents a multi-criteria based approach for nondestructive diagnostic structural integrity assessment of a decommissioned flatbed rail wagon (FBRW) used for road bridge superstructure rehabilitation and replacement applications. First, full-scale vibration and static test data sets are employed in a finite element (FE) model of the FBRW to obtain the best ‘initial’ estimate of the model parameters. Second, the ‘final’ model parameters are predicted using sensitivity-based perturbation analysis without significant difficulty. The updated FBRW model is then validated using independent sets of full-scale laboratory static test data. Finally, the updated and validated FE model of the FBRW is used for structural integrity assessment of a single-lane FBRW bridge subjected to the Australian bridge design traffic load.
Abstract:
Partial evaluation of infrastructure investments has resulted in expensive mistakes, unsatisfactory outcomes and increased uncertainty for too many stakeholders, communities and economies in both developing and developed nations. "Complex Stakeholder Perception Mapping" (CSPM) is a novel approach that can address these limitations by inclusively framing, capturing and mapping the spectrum of insights and perceptions using extended Geographic Information Systems (GIS). Maps generated in CSPM present flexibly combined, complex perceptions of stakeholders on multiple aspects of development. CSPM extends the application of GIS software to non-spatial mapping, extends Multi-Criteria Analysis with a multidimensional evaluation platform, and augments decision science capabilities in addressing complexity. Application of CSPM can improve local and regional economic gains from infrastructure projects and aid any multi-objective, multi-stakeholder decision situation.
Abstract:
Meta-analyses estimate a statistical effect size for a test or an analysis by combining results from multiple studies without necessarily having access to each individual study's raw data. Multi-site meta-analysis is crucial for imaging genetics, as single sites rarely have a sample size large enough to pick up effects of single genetic variants associated with brain measures. However, if raw data can be shared, combining data in a "mega-analysis" is thought to improve power and precision in estimating global effects. As part of an ENIGMA-DTI investigation, we use fractional anisotropy (FA) maps from 5 studies (total N=2,203 subjects, aged 9-85) to estimate heritability. We combine the studies through meta- and mega-analyses as well as a mixture of the two - combining some cohorts with mega-analysis and meta-analyzing the results with those of the remaining sites. A combination of mega- and meta-approaches may boost power compared to meta-analysis alone.
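The simplest form of the meta-analytic combination alluded to here is an inverse-variance (fixed-effect) weighting of site-level estimates; the sketch below applies it to made-up per-cohort heritability values and standard errors, purely to illustrate how site results are pooled without sharing raw data. The ENIGMA-DTI pipeline itself is more involved.

```python
import numpy as np

# Illustrative inverse-variance (fixed-effect) meta-analysis of site-level
# heritability estimates. Numbers are invented.
h2 = np.array([0.55, 0.62, 0.48, 0.70, 0.58])      # per-cohort FA heritability estimates
se = np.array([0.08, 0.10, 0.12, 0.09, 0.07])      # per-cohort standard errors

w = 1.0 / se**2                                     # inverse-variance weights
h2_meta = np.sum(w * h2) / np.sum(w)                # combined estimate
se_meta = np.sqrt(1.0 / np.sum(w))                  # standard error of the combination

print(f"meta-analytic h2 = {h2_meta:.3f} +/- {se_meta:.3f}")
# A "mega-analysis" would instead pool the subject-level FA and kinship/genotype
# data across cohorts and fit a single variance-component model.
```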
Abstract:
Some polycyclic aromatic hydrocarbons (PAHs) are ubiquitous in air and have been implicated as carcinogenic materials. Therefore, the literature is replete with studies focused on their occurrence and profiles in indoor and outdoor air samples. However, because the relative potencies of individual PAHs vary widely, the health risks associated with the presence of PAHs in a particular environment cannot be extrapolated directly from the concentrations of individual PAHs in that environment. In addition, data on the potency of PAH mixtures are currently limited. In this paper, we have utilized multi-criteria decision making methods (MCDMs) to simultaneously correlate PAH-related health risk in some microenvironments with the concentration levels, ethoxyresorufin-O-deethylase (EROD) activity induction equivalency factors and toxic equivalency factors (TEFs) of the PAHs found in those microenvironments. The results showed that the relative risk associated with PAHs in different air samples depends on the index used. Nevertheless, this approach offers a promising tool that could help identify microenvironments of concern and assist in the prioritisation of control strategies.
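One ingredient of such an assessment is a toxic-equivalence-weighted concentration (a BaP-equivalent), obtained by summing each PAH's concentration multiplied by its TEF. The sketch below shows that arithmetic with invented concentrations and illustrative TEF values (standing in for the factors used in the paper); the MCDM step would then rank microenvironments across several such indices simultaneously.

```python
# Sketch of a toxic-equivalence-weighted concentration (BaP-eq) for one
# microenvironment. Concentrations are invented; TEFs shown are illustrative
# values in the commonly cited range, not necessarily those used in the paper.
tef = {"benzo[a]pyrene": 1.0, "dibenz[a,h]anthracene": 1.0,
       "benz[a]anthracene": 0.1, "benzo[b]fluoranthene": 0.1,
       "chrysene": 0.01, "fluoranthene": 0.001, "pyrene": 0.001}

conc_ng_m3 = {"benzo[a]pyrene": 0.25, "dibenz[a,h]anthracene": 0.05,
              "benz[a]anthracene": 0.40, "benzo[b]fluoranthene": 0.55,
              "chrysene": 0.80, "fluoranthene": 2.10, "pyrene": 1.90}

bap_eq = sum(conc_ng_m3[p] * tef[p] for p in tef)   # ng/m3 BaP-equivalent
print(f"BaP-eq = {bap_eq:.3f} ng/m3")
# Ranking methods such as PROMETHEE can then compare microenvironments using
# concentration-, EROD- and TEF-based indices as separate criteria.
```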