36 results for Air Quality Modelling
Abstract:
Most available studies on lead smelter emissions deal with the environmental impact of outdoor particles; only a few focus on air quality at workplaces. The objective of this study is to physically and chemically characterize the Pb-rich particles emitted at different workplaces in a lead recycling plant. A multi-scale characterization was conducted, from bulk analysis down to individual particles, to assess the particles' properties in relation to Pb speciation and availability. Process PM from various origins were sampled and compared: Furnace and Refining PM, present at the smelter and refinery workplaces respectively, and Emissions PM, present in channeled emissions. These particles first differed in morphology and size distribution, with finer particles found in emissions. Differences observed in chemical composition could be explained by the industrial processes. All PM contained the same major phases (Pb, PbS, PbO, PbSO4 and PbO·PbSO4) but differed in the nature and amount of minor phases. Owing to the high Pb content of the PM, Pb concentrations in the CaCl2 extract reached relatively high values (40 mg/L). However, the ratios (soluble/total) of CaCl2-exchangeable Pb were relatively low (< 0.02%) in comparison with Cd (up to 18%). These results highlight the value of assessing the soluble fractions of all metals (minor and major) and of considering both total metal concentrations and soluble/total ratios in risk evaluations. In most cases metal extractability increased with decreasing particle size; in particular, lead exchangeability was highest for channeled emissions. This type of study could help in selecting targeted health protection procedures and in guiding further toxicological investigations. In the present context, particular attention should be given to Emissions and Furnace PM. Moreover, exposure to metals other than Pb should be considered. [Authors]
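As a minimal illustration of the soluble/total ratio discussed above, the sketch below computes the exchangeable fraction of a metal from the concentration measured in the extract, an assumed liquid-to-solid ratio and an assumed total content; all numbers are hypothetical and are not taken from the study.

```python
def exchangeable_fraction(c_extract_mg_per_l: float,
                          liquid_to_solid_l_per_kg: float,
                          total_mg_per_kg: float) -> float:
    """Soluble/total ratio (%) of a metal after a single-step extraction such as CaCl2."""
    extracted_mg_per_kg = c_extract_mg_per_l * liquid_to_solid_l_per_kg
    return 100.0 * extracted_mg_per_kg / total_mg_per_kg

# Purely illustrative inputs: 40 mg/L Pb in the extract, an assumed liquid-to-solid
# ratio of 2 L/kg and an assumed total Pb content of 60% (600,000 mg/kg).
print(f"{exchangeable_fraction(40.0, 2.0, 600_000.0):.3f} %")
```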
Abstract:
Road transport emissions are a major contributor to ambient particulate matter concentrations and have been associated with adverse health effects. Therefore, these emissions are targeted through increasingly stringent European emission standards. These policies succeed in reducing exhaust emissions, but do not address "nonexhaust" emissions from brake wear, tire wear, road wear, and the resuspension of road dust. Is this a problem? To what extent do nonexhaust emissions contribute to ambient concentrations of PM10 or PM2.5? In the near future, wear emissions may dominate the remaining traffic-related PM10 emissions in Europe, mostly due to the steep decrease in exhaust PM emissions. This underlines the need to determine the contribution of wear emissions to existing ambient PM concentrations, and the need to assess the health risks related to wear particles, which have not yet received much attention. During a workshop in 2011, available knowledge was reported and evaluated so as to draw conclusions on the relevance of traffic-related wear emissions for air quality policy development. On the basis of the available evidence, which is briefly presented in this paper, it was concluded that nonexhaust emissions, and in particular the resuspension of road dust, are major contributors to exceedances of the PM10 air quality standards at street locations in various European cities. Furthermore, wear-related PM emissions that contain high concentrations of metals may, despite their limited contribution to the mass of nonexhaust emissions, pose significant health risks for the population, especially for people living near heavily trafficked locations. To quantify these health risks, targeted research is required on wear emissions, their dispersion in urban areas, population exposure, and the resulting health effects. Such information will be crucial for environmental policymakers as an input for discussions on the need to develop control strategies.
Abstract:
Exposure to fine airborne particulate matter (PM2.5) is associated with cardiovascular events and mortality in older and cardiac patients. Potential physiologic effects of in-vehicle, roadside, and ambient PM2.5 were investigated in young, healthy, nonsmoking, male North Carolina Highway Patrol troopers. Nine troopers (age 23 to 30) were monitored on 4 successive days while working a 3 P.M. to midnight shift. Each patrol car was equipped with air-quality monitors. Blood was drawn 14 hours after each shift, and ambulatory monitors recorded the electrocardiogram throughout the shift and until the next morning. Data were analyzed using mixed models. In-vehicle PM2.5 (average of 24 µg/m3) was associated with decreased lymphocytes (-11% per 10 µg/m3) and increased red blood cell indices (1% mean corpuscular volume), neutrophils (6%), C-reactive protein (32%), von Willebrand factor (12%), next-morning heart beat cycle length (6%), next-morning heart rate variability parameters, and ectopic beats throughout the recording (20%). Controlling for potential confounders had little impact on the effect estimates. The associations of these health endpoints with ambient and roadside PM2.5 were smaller and less significant. The observations in these healthy young men suggest that in-vehicle exposure to PM2.5 may cause pathophysiologic changes that involve inflammation, coagulation, and cardiac rhythm.
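A hedged sketch of the kind of repeated-measures mixed model described above (a random intercept per trooper and in-vehicle PM2.5 as a fixed effect), using statsmodels; the data frame, variable names and values are invented for illustration and do not reproduce the study's data or results.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trooper-shift, with in-vehicle PM2.5
# (ug/m3) and one health endpoint (here a lymphocyte count) measured after the shift.
df = pd.DataFrame({
    "trooper": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "pm25":    [18, 24, 31, 22, 15, 22, 28, 19, 20, 26, 33, 24],
    "lympho":  [2.1, 2.0, 1.8, 2.0, 2.4, 2.2, 2.0, 2.3, 2.3, 2.1, 1.9, 2.2],
})

# A random intercept per subject accounts for repeated measures on the same trooper;
# the pm25 coefficient estimates the change in the endpoint per 1 ug/m3 of PM2.5.
model = smf.mixedlm("lympho ~ pm25", data=df, groups=df["trooper"])
result = model.fit()
print(result.summary())
```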
Abstract:
The major task of policy makers and practitioners when confronted with a resource management problem is to decide which of the available options to adopt. However, this process is unlikely to be successful and cost effective without access to an independently verified and comprehensive list of the available options. There is currently burgeoning interest in ecosystem services and in quantitative assessments of their importance and value. Recognition of the value of ecosystem services to human well-being represents an increasingly important argument for protecting and restoring the natural environment, alongside the moral and ethical justifications for conservation. As well as understanding the benefits of ecosystem services, it is also important to synthesize the practical interventions that are capable of maintaining and/or enhancing these services. Apart from pest regulation, pollination, and global climate regulation, this type of exercise has attracted relatively little attention. Through a systematic consultation exercise, we identify a candidate list of 296 possible interventions across the main regulating services of air quality regulation, climate regulation, water flow regulation, erosion regulation, water purification and waste treatment, disease regulation, pest regulation, pollination and natural hazard regulation. The range of interventions differs greatly between habitats and services depending upon the ease of manipulation and the level of research intensity. Some interventions have the potential to deliver benefits across a range of regulating services, especially those that reduce soil loss and maintain forest cover. Synthesis and applications: solution scanning is important for questioning existing knowledge and identifying the range of options available to researchers and practitioners, as well as serving as the necessary basis for assessing cost effectiveness and guiding implementation strategies. We recommend that it become a routine part of decision making in all environmental policy areas.
Abstract:
Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated on the basis of expert judgement. There is therefore a major need for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools for predicting exposure levels. In a first step, a survey was conducted among occupational hygienists in Switzerland to define their expectations of modelling tools (types of results, models and potential observable parameters). It was found that exposure models are rarely used in practice in Switzerland and that exposures are mainly estimated from the expert's past experience. Moreover, chemical emissions and their dispersion near the source were considered key parameters. Experimental and modelling studies were then performed in specific cases to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to carbon monoxide in different situations, and the results were compared with exposure levels reported in the literature for similar situations. Further, exposure to waterproofing sprays was assessed as part of an epidemiological study of a Swiss cohort; in this case, laboratory investigations were undertaken to characterize the waterproofing overspray emission rate, and a classical two-zone model was then used to assess aerosol dispersion in the near and far field during spraying. Additional experiments were carried out to better understand the emission and dispersion processes of tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements with direct-reading instruments at several points of an exposure chamber. It was found that, from a statistical point of view, the compartmental theory makes sense, although the attribution to a given compartment cannot be made from simple geometric considerations alone. In a further step, the experimental data were complemented by observations made in about 100 different workplaces, including exposure measurements and observations of predefined determinants. The data obtained were used to improve an existing two-compartment exposure model: a tool was developed to include specific determinants in the choice of compartment, thus largely improving the reliability of the predictions. All these investigations helped to improve our understanding of modelling tools and to identify their limitations. The integration of more accessible determinants, in line with experts' needs, should encourage the use of such tools in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but may also improve the conditions in which expert judgements are made and, therefore, the protection of workers' health.
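For readers unfamiliar with the classical two-zone (near-field/far-field) model mentioned above, the sketch below integrates its standard mass-balance equations with a simple forward Euler scheme; all parameter values are illustrative assumptions, not the values used in this research.

```python
import numpy as np

def two_zone(G, beta, Q, V_N, V_F, t_end=60.0, dt=0.01):
    """Near-field / far-field concentrations (mg/m3) over time (minutes).

    G    : emission rate in the near field (mg/min)
    beta : air exchange rate between near and far field (m3/min)
    Q    : general ventilation flow through the far field (m3/min)
    V_N, V_F : near- and far-field volumes (m3)
    """
    n = int(t_end / dt)
    C_N, C_F = np.zeros(n), np.zeros(n)
    for i in range(1, n):
        dCN = (G + beta * C_F[i - 1] - beta * C_N[i - 1]) / V_N
        dCF = (beta * C_N[i - 1] - beta * C_F[i - 1] - Q * C_F[i - 1]) / V_F
        C_N[i] = C_N[i - 1] + dCN * dt
        C_F[i] = C_F[i - 1] + dCF * dt
    return C_N, C_F

# Illustrative values only: a 10 mg/min source, 5 m3/min inter-zone flow,
# 20 m3/min ventilation and a 1 m3 near field inside a 50 m3 room.
C_N, C_F = two_zone(G=10.0, beta=5.0, Q=20.0, V_N=1.0, V_F=49.0)
print(f"after 60 min: near field {C_N[-1]:.2f} mg/m3, far field {C_F[-1]:.2f} mg/m3")
```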
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
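A minimal sketch of strategy (ii), 'predict first, assemble later': fit one simple distribution model per species, stack the predicted probabilities, then classify sites into community types. It uses scikit-learn on synthetic data and is only meant to make the workflow concrete, not to represent any particular study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic survey: 200 sites, 2 environmental predictors, presence/absence of 30 species.
env = rng.normal(size=(200, 2))
species = (rng.random((200, 30)) < 0.3).astype(int)

# 'Predict first': one logistic model per species gives a stack of predicted probabilities.
stack = np.column_stack([
    LogisticRegression().fit(env, species[:, j]).predict_proba(env)[:, 1]
    for j in range(species.shape[1])
])

# 'Assemble later': cluster sites into community types from the stacked predictions.
community_type = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(stack)
print(np.bincount(community_type))
```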
Abstract:
Introduction: Isocyanates are sensitizing chemicals used in various industries, such as polyurethane foam production and paint applications. Acting as haptens recognized by T-lymphocytes, they can cause allergic asthma and, rarely, hypersensitivity pneumonitis (HP). We present a case report of acute HP due to hexamethylene diisocyanate (HDI) in a paint quality controller, a profession not generally considered at high risk for work-related isocyanate exposure. Case report: A 30-year-old otherwise healthy female light smoker working as a paint quality controller developed shortness of breath, malaise, sweating and chills at the workplace six hours after handling an HDI-based hardener. Upon admission to the emergency department, symptoms had progressed to severe respiratory failure. High-resolution computed tomography (HRCT) showed bilateral ground-glass attenuation without pleural effusion. Rapid clinical and radiological improvement occurred under oxygen supplied by face mask and systemic steroid therapy. Occupational medicine investigations revealed regular handling of HDI using latex gloves without respiratory protection. Assessment at the workplace showed insufficient air renewal (1.5 air changes per hour), inadequate local exhaust ventilation and HDI exposure at levels of 1-4.25 ppb (Swiss occupational exposure limit: 5 ppb). Biological monitoring after an identical work procedure performed by a co-worker confirmed HDI uptake (5.1 micrograms hexamethylene diamine/g creatinine). Return to work was advised against because of the life-threatening event. Discussion: The diagnosis of occupational HP is strongly supported by classical imaging findings and typical symptoms occurring within the accepted latency interval, together with rapid clinical improvement. Although neither broncho-alveolar lavage nor specific IgG testing (under way) was performed during the acute episode, blood tests ruled out evidence of infection or autoimmune disease. Other causes of HP seem unlikely, as the patient has had no recurrence of symptoms since stopping work. The workplace evaluation provided significant information on HDI exposure and allowed substantial recommendations to reduce isocyanate exposure for the 20 still healthy laboratory co-workers. Although the routes of entry (airways or skin) and the precise mechanism of toxicity remain unclear, the present case clearly shows that isocyanates may trigger acute HP in susceptible workers in a profession not generally considered at high risk.
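As a rough illustration of how a workplace assessment can relate a source and the air renewal rate to an airborne concentration, the sketch below uses a simple well-mixed room model; the source strength, room volume and molar mass are assumptions for illustration, not measurements from this case.

```python
def steady_state_ppb(emission_mg_per_h, room_m3, ach, molar_mass_g_per_mol):
    """Well-mixed room at steady state: C = G / (ACH * V), converted from mg/m3 to ppb
    at 25 degrees C and 1 atm (molar volume 24.45 L/mol)."""
    c_mg_m3 = emission_mg_per_h / (ach * room_m3)
    return c_mg_m3 * 24.45 / molar_mass_g_per_mol * 1000.0

# Illustrative only: a hypothetical 2 mg/h HDI source in a 60 m3 laboratory with the
# 1.5 air changes per hour reported above (HDI molar mass about 168 g/mol).
print(round(steady_state_ppb(2.0, 60.0, 1.5, 168.2), 2), "ppb")
```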
Abstract:
Specific properties emerge from the structure of large networks, such as that of worldwide air traffic, including a highly hierarchical node structure and multi-level small-world sub-groups that strongly influence future dynamics. We have developed clustering methods to understand the form of these structures, to identify structural properties, and to evaluate the effects of these properties. Graph clustering methods are often constructed from different components: a metric, a clustering index, and a modularity measure to assess the quality of a clustering method. To understand the impact of each of these components on the clustering method, we explore and compare different combinations. These different combinations are used to compare multilevel clustering methods to delineate the effects of geographical distance, hubs, network densities, and bridges on worldwide air passenger traffic. The ultimate goal of this methodological research is to demonstrate evidence of combined effects in the development of an air traffic network. In fact, the network can be divided into different levels of "cohesion", which can be qualified and measured by comparative studies (Newman, 2002; Guimera et al., 2005; Sales-Pardo et al., 2007).
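A small sketch of modularity-based graph clustering of the kind discussed here, using networkx on a toy flight network; the airports, edge weights and the choice of a greedy modularity algorithm are illustrative assumptions rather than the specific methods developed in this work.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy undirected passenger network: edge weights stand for traffic between airports.
G = nx.Graph()
G.add_weighted_edges_from([
    ("CDG", "JFK", 90), ("CDG", "FRA", 60), ("FRA", "JFK", 50),
    ("JFK", "LAX", 80), ("LAX", "SFO", 70), ("SFO", "JFK", 40),
    ("FRA", "PEK", 30), ("PEK", "PVG", 85), ("PVG", "HND", 75), ("HND", "PEK", 55),
])

# One possible combination of components: a weighted greedy clustering,
# assessed by the weighted modularity of the resulting partition.
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
print("modularity:", modularity(G, communities, weight="weight"))
```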
Abstract:
An objective analysis of image quality parameters was performed for a computed radiography (CR) system using both standard single-side and prototype dual-side read plates. The pre-sampling modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) of the systems were determined at three beam qualities representative of pediatric chest radiography, at a detector entrance air kerma of 5 µGy. The NPS and DQE measurements were performed under clinically relevant x-ray spectra for pediatric radiology, including scattered radiation. Compared to the standard single-side read system, the MTF of the dual-side read system is reduced, but this is offset by a significant decrease in image noise, resulting in a marked increase in DQE (+40%) in the low spatial frequency range. Thus, for the same image quality, the new technology allows the CR system to be used at a reduced dose level.
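For context, the three measured quantities are related by the standard expression DQE(f) = MTF(f)^2 / (q * NNPS(f)), where q is the incident photon fluence; the sketch below evaluates this relation for an invented MTF curve, an invented flat NNPS and an assumed fluence, so the numbers are illustrative only.

```python
import numpy as np

def dqe(mtf, nnps, fluence_per_mm2):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the photon fluence (photons/mm^2)."""
    return mtf**2 / (fluence_per_mm2 * nnps)

# Illustrative curves: an exponential-like pre-sampling MTF, an approximately white
# NNPS (mm^2) and an assumed fluence of 1.5e5 photons/mm^2 at the detector.
f = np.linspace(0.05, 5.0, 100)        # spatial frequency (mm^-1)
mtf = np.exp(-f / 2.5)
nnps = np.full_like(f, 1.0e-5)
print(dqe(mtf, nnps, fluence_per_mm2=1.5e5)[:3])
```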
Abstract:
In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer; the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study: four computed radiography (CR) systems and a group of seven digital radiography (DR) detectors, comprising three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). The approximate quantum-noise-limited exposure range was examined using a separation of noise sources based upon standard deviation. The noise separation showed that electronic noise was the dominant noise source at low detector air kerma for three systems; the remaining systems showed quantum-noise-limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group than for the CR systems; the MTF at 5 mm⁻¹ varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm⁻¹ than the powder CR phosphors. DQE at 5 mm⁻¹ ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm⁻¹ for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and the needle CR system than for the powder CR phosphor systems. The technical evaluation showed that the digital mammography systems were well set up and exhibited typical performance for the detector technology employed in the respective systems.
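A hedged sketch of the noise-source separation mentioned above: the relative variance of the signal is modelled as the sum of electronic (proportional to 1/K^2), quantum (proportional to 1/K) and structural (constant) terms and fitted by linear least squares; the kerma values and coefficients are synthetic, chosen only to mimic a detector that becomes quantum-noise limited above roughly 10 µGy.

```python
import numpy as np

# Relative variance model: (sigma / S)^2 = a/K^2 + b/K + c
# a -> electronic noise, b -> quantum noise, c -> structural noise.
K = np.array([1.25, 2.5, 5.0, 12.5, 25.0, 50.0, 100.0, 200.0, 380.0])  # air kerma (uGy), assumed
rel_var = 1.0e-2 / K**2 + 2.0e-3 / K + 1.0e-6                          # synthetic "measurements"

# Linear least squares in the basis (1/K^2, 1/K, 1); the dominant term at each
# kerma indicates which noise source limits the detector there.
A = np.column_stack([1 / K**2, 1 / K, np.ones_like(K)])
a, b, c = np.linalg.lstsq(A, rel_var, rcond=None)[0]
quantum_limited = (b / K) > np.maximum(a / K**2, c)
print(dict(zip(K.tolist(), quantum_limited.tolist())))
```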
Abstract:
In recent research, both soil (root-zone) and air temperature have been used as predictors of the treeline position worldwide. In this study, we intended to (a) test the proposed temperature limitation at the treeline, and (b) investigate effects of season length for both heat sum and mean temperature variables in the Swiss Alps. As soil temperature data are available for a limited number of sites only, we developed an air-to-soil transfer model (ASTRAMO) that predicts daily mean root-zone temperatures (10 cm below the surface) at the treeline exclusively from daily mean air temperatures. The model, calibrated with air and root-zone temperature measurements at nine treeline sites in the Swiss Alps, incorporates time lags to account for the damping effect between air and soil temperatures as well as the temporal autocorrelation typical of such chronological data sets. Based on the measured and modeled root-zone temperatures, we analyzed the suitability of the thermal treeline indicators seasonal mean temperature and degree-days to describe the Alpine treeline position. The root-zone indicators were then compared to the respective indicators based on measured air temperatures, with all indicators calculated for two different indicator period lengths. For both temperature types (root-zone and air) and both indicator periods, seasonal mean temperature was the indicator with the lowest variation across all treeline sites. The resulting indicator values were 7.0 °C ± 0.4 SD (short indicator period) and 7.1 °C ± 0.5 SD (long indicator period) for root-zone temperature, and 8.0 °C ± 0.6 SD and 8.8 °C ± 0.8 SD, respectively, for air temperature. In general, a higher variation was found for all air-based treeline indicators than for the root-zone temperature indicators. Despite this, we showed that treeline indicators calculated from both air and root-zone temperatures can be used to describe the Alpine treeline position.
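As an illustration of the general idea behind an air-to-soil transfer model (not the calibrated ASTRAMO model itself), the sketch below regresses daily mean root-zone temperature on lagged daily mean air temperatures; the series are synthetic and the five-day lag structure is an assumption.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic daily series: root-zone temperature follows a damped, delayed version
# of air temperature (illustrative only).
days = pd.date_range("2020-06-01", periods=120, freq="D")
air = 8 + 4 * np.sin(np.linspace(0, 6, 120)) + rng.normal(0, 1.5, 120)
soil = 0.5 * pd.Series(air).rolling(5, min_periods=1).mean().to_numpy() + 3 + rng.normal(0, 0.3, 120)

df = pd.DataFrame({"air": air, "soil": soil}, index=days)
for lag in range(1, 6):                       # distributed lags of 1 to 5 days
    df[f"air_lag{lag}"] = df["air"].shift(lag)
df = df.dropna()

# Ordinary least squares on the lagged air temperatures.
X = np.column_stack([np.ones(len(df))] + [df[f"air_lag{k}"] for k in range(1, 6)])
coef, *_ = np.linalg.lstsq(X, df["soil"].to_numpy(), rcond=None)
rmse = np.sqrt(np.mean((X @ coef - df["soil"].to_numpy()) ** 2))
print("RMSE of the lag model (degrees C):", round(rmse, 2))
```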
Abstract:
Asbestos is an industrial term for a group of fibrous silicate minerals belonging to the amphibole or serpentine groups. Six minerals are defined as asbestos: chrysotile (white asbestos), amosite (grunerite, brown asbestos), crocidolite (riebeckite, blue asbestos), anthophyllite, tremolite and actinolite, but only in their fibrous form. In 1973, the IARC (International Agency for Research on Cancer) classified the asbestos minerals as carcinogenic substances (IARC, 1973). The Swiss threshold limit value (VME) is 0.01 fibre/ml (SUVA, 2007). Asbestos has been prohibited in Switzerland since 1990, but this does not mean the asbestos problem is over. Up to 20'000 tonnes/year of asbestos were imported between the end of WWII and 1990, and all this asbestos is still present in buildings built or renovated during that period. During renovation work, asbestos fibres can be released into the air, and these emissions have to be quantified accurately. Defining the exact risk to workers or to the population is difficult, as many factors must be considered. The methods for detecting asbestos in air or in materials are still being discussed today. Even though the EPA 600 method (EPA, 1993) has proved itself for the analysis of bulk materials, the method for air analysis is more problematic. In Switzerland, the recommended method is VDI 3492, which uses scanning electron microscopy (SEM), but we have encountered many identification problems with this method. For instance, overloaded or long-term exposed filters cannot be analysed. This is why the Institute for Work and Health (IST) has adapted the ISO 10312 method: ambient air - determination of asbestos fibres - direct-transfer transmission electron microscopy (TEM) method (ISO, 1995). Quality controls have already been carried out at a French institute (INRS), validating our practical experience. The direct transfer from MEC filters onto TEM supports (grids) is a delicate part of the sample preparation and requires many laboratory trials; IST achieved reliable grid preparations after about two years of development. In addition to sample preparation, micro-analysis (EDX), micro-diffraction and morphological analysis (figure 1a-c) must also be mastered. These are the three elements that establish the different features required for asbestos identification, and SEM cannot combine these three analyses. TEM is also able to distinguish artificial from natural fibres with very similar chemical compositions, as well as to differentiate the types of asbestos. Finally, the experiments conducted by IST show that TEM is the best method to quantify and identify asbestos in air.
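A minimal sketch of the count-back calculation used to turn a direct-transfer TEM fibre count into an airborne concentration, in the spirit of ISO 10312; the effective filter area, grid-opening area and sampled air volume below are assumed example values.

```python
def fibres_per_litre(fibres_counted, grid_openings_counted, opening_area_mm2,
                     effective_filter_area_mm2, air_volume_litres):
    """Airborne fibre concentration: scale the fibres found on the analysed grid
    openings up to the whole filter, then divide by the sampled air volume."""
    analysed_area_mm2 = grid_openings_counted * opening_area_mm2
    return fibres_counted * effective_filter_area_mm2 / (analysed_area_mm2 * air_volume_litres)

# Assumed example: 4 fibres on 10 grid openings of 0.01 mm2 each, a 385 mm2
# effective filter area and 1000 L of sampled air (about 0.015 fibres/ml).
print(f"{fibres_per_litre(4, 10, 0.01, 385.0, 1000.0):.2f} fibres/L")
```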
Abstract:
The proportion of the population living in or around cities is larger than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land consumption and noise, as well as health problems, are the result of this still ongoing process. Urban planners have to find solutions to these complex problems and, at the same time, ensure the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help to communicate the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic generated by commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance-circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision-making processes of urban planners.
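A short sketch of the classical gravity model referred to at the end of the abstract, with invented populations and distances; it only shows the generic form T_ij = k * P_i^alpha * P_j^alpha / d_ij^beta and is not the thesis's calibrated model.

```python
import numpy as np

def gravity_flows(population, distance_km, k=1.0, alpha=1.0, beta=2.0):
    """Classical gravity model: T_ij = k * P_i**alpha * P_j**alpha / d_ij**beta."""
    P = np.asarray(population, dtype=float)
    D = np.asarray(distance_km, dtype=float)
    T = k * np.outer(P**alpha, P**alpha) / D**beta
    np.fill_diagonal(T, 0.0)                     # ignore self-flows
    return T

# Three hypothetical municipalities around an agglomeration (populations and
# distances in km are assumed for illustration).
pop = [120_000, 35_000, 8_000]
dist = [[1, 12, 25],
        [12, 1, 18],
        [25, 18, 1]]
print(gravity_flows(pop, dist).round(0))
```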