521 results for Smaller Kidneys


Relevance: 10.00%

Publisher:

Abstract:

One of the research focuses in the integer least squares problem is the decorrelation technique, which reduces the number of integer parameter search candidates and improves the efficiency of the integer parameter search method. Decorrelation remains a challenging issue in determining carrier phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been demonstrated, there is still potential for further improvement. The condition number is usually used as the criterion to measure the performance of decorrelation techniques. Additionally, the number of grid points in the search space can be used directly as a performance measure, since it denotes the size of the search space; however, a smaller initial volume of the search ellipsoid does not always correspond to a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves the decorrelation performance over the three techniques above. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of the decorrelated ambiguities was calculated for all methods to investigate ambiguity validation performance. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ the isotropic probabilistic model with a predetermined eigenvalue and without any geometry or weighting-system constraints. The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint respectively. The real-data scenarios involve both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method significantly improves the reduction of the condition number, by 78.65% and 97.78% for the single-constellation and dual-constellation cases respectively. It also improves the number of search candidate points by 98.92% and 100% in the single-constellation and dual-constellation cases.
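To make the condition-number criterion above concrete, the minimal Python sketch below applies a hand-picked unimodular (integer, determinant 1) transformation Z to a hypothetical 2x2 ambiguity covariance matrix and compares the condition number before and after. The matrix, the transformation and the numbers are illustrative assumptions, not values from the study; real decorrelation methods (integer Gaussian, LLL, IICD, MIICD) construct Z automatically.

```python
import numpy as np

def condition_number(Q):
    # ratio of largest to smallest eigenvalue of a symmetric positive-definite matrix
    eig = np.linalg.eigvalsh(Q)
    return eig[-1] / eig[0]

# hypothetical, highly correlated 2x2 ambiguity covariance matrix (illustrative values only)
Q = np.array([[53.4, 38.4],
              [38.4, 28.0]])

# hand-picked unimodular transformation (det = +1); decorrelation algorithms build Z automatically
Z = np.array([[1, -1],
              [-1, 2]])

Q_dec = Z @ Q @ Z.T                               # decorrelated covariance matrix

print("condition number before:", condition_number(Q))      # roughly 320 for these values
print("condition number after :", condition_number(Q_dec))  # roughly 11
```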

Relevance: 10.00%

Publisher:

Abstract:

Anthropometric assessment is a simple, safe, and cost-efficient method to examine the health status of individuals. The Japanese obesity classification based on the sum of two skinfolds (Σ2SF) was proposed nearly 40 years ago; its applicability to Japanese people living today is therefore unknown. The current study aimed to determine Σ2SF cut-off values that correspond to percent body fat (%BF) and BMI values using two datasets from young Japanese adults (233 males and 139 females). Using regression analysis, Σ2SF and height-corrected Σ2SF (HtΣ2SF) values corresponding to %BF of 20, 25, and 30% for males and 30, 35, and 40% for females were determined. In addition, cut-off values of both Σ2SF and HtΣ2SF corresponding to BMI values of 23 kg/m², 25 kg/m² and 30 kg/m² were determined. In comparison with the original Σ2SF values, the proposed values are smaller by up to about 10 mm. The proposed values improve sensitivity from about 25% to above 90% for identifying individuals with ≥20% body fat in males and ≥30% body fat in females, with high specificity of about 95% in both genders. The results indicate that the original Σ2SF cut-off values for screening obese individuals cannot be applied to young Japanese adults living today and require modification. Application of the proposed values may assist screening in the clinical setting.
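The cut-off derivation described above (regression of %BF on the skinfold sum, inversion at a %BF criterion, then a sensitivity/specificity check) can be sketched as follows. The data are synthetic and the regression relationship is an assumption for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic illustration only: simulated sum-of-two-skinfolds (mm) and %BF for young adult males
s2sf = rng.uniform(10, 60, 200)                       # sum of two skinfolds, mm
pbf = 5.0 + 0.45 * s2sf + rng.normal(0, 2.5, 200)     # assumed linear relation with noise

# least-squares regression of %BF on the skinfold sum, then inversion at the 20 %BF criterion
slope, intercept = np.polyfit(s2sf, pbf, 1)
cutoff = (20.0 - intercept) / slope
print(f"Σ2SF cut-off corresponding to 20 %BF: {cutoff:.1f} mm")

# sensitivity/specificity of the derived cut-off for identifying individuals with >= 20 %BF
obese = pbf >= 20.0
flagged = s2sf >= cutoff
sensitivity = (flagged & obese).sum() / obese.sum()
specificity = (~flagged & ~obese).sum() / (~obese).sum()
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```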

Relevance: 10.00%

Publisher:

Abstract:

This series comprises three artefacts, described below: Evangeline: Classic Gothic Lolita [3-piece garment]; Evangeline 2: Classic Gothic Lolita Pullip Doll Costume [2-piece garment]; Evangeline 3: Classic Gothic Lolita Mini Pullip Doll Costume [3-piece garment]. The series was part of an exhibition curated by Kathryn Hardy Bernal entitled "Loli-Pop: A downtown Auckland view on Japanese street fashion", which explored the connections between gothic lolita fashion and popular culture. The work reflects on the theme of collection in Hardy Bernal's research on the connection between Japanese classic gothic lolita and the doll culture surrounding the movement. The pieces are interconnected and communicate these aspects through a doll-like dress worn by a model [Evangeline 1], who carries a doll wearing the same dress [Evangeline 2], which in turn carries a smaller doll again wearing the same dress [Evangeline 3]. The artefacts appeared as a central piece in the exhibition, which was held at the War Memorial Museum in Auckland, New Zealand (15 September - 25 November 2007).

Relevance: 10.00%

Publisher:

Abstract:

The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system that can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance: as long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for patients with partial or complete loss of accommodation (presbyopia), since it helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z_4^0 and Z_6^0).
The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system, through which the retinal image size can be kept constant, have been widely adopted; in such measurements the subject's tested eye is normally cyclopleged. Objective methods that do not require cycloplegia are also used, in which the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality, calculated from the IQMs, to a certain level of its possible maximum value.
In this study, the effect of different amounts of HOAs on the DOF was first evaluated theoretically by modelling and comparing the DOF of subjects from four clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z_4^0) was also induced in the wavefronts of the myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOLs) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration provided a moderate increase in DOF while slightly degrading image quality at the same time. The predicted DOF was also affected by the IQM and the threshold level adopted.
We then investigated the influence of the chosen IQM threshold level on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, was used as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. A significant correlation was found between each subject's estimated threshold level and HOA RMS (Pearson's r = 0.88, p < 0.001). This linear correlation can be used to estimate the threshold level for each individual subject, leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations.
A subsequent study investigated the DOF of keratoconic subjects. Significantly increased levels of HOAs, including spherical aberration, coma and trefoil, are observed in keratoconic eyes, so this population provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, which was compared with that of a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes; however, there was not a strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was the only one found to significantly increase the DOF in the keratoconic subjects studied.
The first three studies provided a comprehensive understanding of DOF and its association with the HOAs of the human eye. An adaptive optics (AO) system was then designed and constructed, capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z_4^0 and Z_6^0. Systematic introduction of targeted amounts of both Z_4^0 and Z_6^0 significantly improved the DOF of healthy subjects, and wavefront combinations of Z_4^0 and Z_6^0 with opposite signs expanded the DOF further than Z_4^0 or Z_6^0 alone. The optimal wavefront combinations for expanding the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF. In the experiment, the optimal combinations of Z_4^0 and Z_6^0 provided a better balance of DOF expansion against relatively small decreases in visual acuity, and therefore offer a more efficient method of expanding the DOF than Z_4^0 or Z_6^0 alone.
This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations: more aberrated eyes generally have a larger DOF. The association between DOF and the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberration. Among the Zernike HOA terms, the spherical aberrations (Z_4^0 and Z_6^0) were found to improve the DOF, and certain combinations of Z_4^0 and Z_6^0 provide a more effective method of expanding the DOF than Z_4^0 or Z_6^0 alone. This could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
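As a rough illustration of the IQM-based through-focus approach described above, the sketch below estimates DOF as the range of defocus over which an image quality metric stays above a chosen fraction of its through-focus maximum. The bell-shaped stand-in curve and the 0.8 threshold are assumptions for illustration only; the study computed the VSOTF from measured wavefront aberrations and calibrated the threshold per subject.

```python
import numpy as np

def depth_of_focus(defocus, iqm, threshold=0.8):
    """Range of defocus (dioptres) over which the image quality metric stays
    above a given fraction of its through-focus maximum (assumes one peak)."""
    ok = iqm >= threshold * iqm.max()
    return defocus[ok].max() - defocus[ok].min()

# hypothetical through-focus curve; a real analysis would compute the VSOTF
# from the measured wavefront aberrations at each defocus level
defocus = np.linspace(-2.0, 2.0, 401)
vsotf = np.exp(-(defocus / 0.6) ** 2)        # stand-in bell-shaped metric

print(f"estimated DOF: {depth_of_focus(defocus, vsotf):.2f} D")
```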

Relevance: 10.00%

Publisher:

Abstract:

The electron collection efficiency in dye-sensitized solar cells (DSCs) is usually related to the electron diffusion length, L = (Dτ)^(1/2), where D is the diffusion coefficient of mobile electrons and τ is their lifetime, which is determined by electron transfer to the redox electrolyte. Analysis of incident photon-to-current efficiency (IPCE) spectra for front and rear illumination consistently gives smaller values of L than those derived from small amplitude methods. We show that the IPCE analysis is incorrect if recombination is not first-order in free electron concentration, and we demonstrate that the intensity dependence of the apparent L derived by first-order analysis of IPCE measurements and the voltage dependence of L derived from perturbation experiments can be fitted using the same reaction order, γ ≈ 0.8. The new analysis presented in this letter resolves the controversy over why L values derived from small amplitude methods are larger than those obtained from IPCE data.
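For scale, a minimal sketch of the diffusion length formula with assumed (not measured) values of D and τ:

```python
import numpy as np

# illustrative values only: diffusion coefficient (cm^2/s) and electron lifetime (s)
D, tau = 1e-4, 0.05
L = np.sqrt(D * tau)            # electron diffusion length, L = (D*tau)^(1/2)
print(f"L = {L * 1e4:.0f} um")  # about 22 um for these assumed values
```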

Relevance: 10.00%

Publisher:

Abstract:

Full military intervention cannot be justified on the grounds that this is a 'just war'. We are then left with the options of intervening militarily on a smaller scale or not intervening militarily at all.

Relevance: 10.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be calibrated using data acquired at these locations, and their output needed to be validated with data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism underlying the macroscopic models currently used. The models also needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible, in this single study, to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models for a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, as some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream (the kerb-lane traffic) exercises only limited priority over the minor stream (the on-ramp traffic), and theory was established to account for this behaviour. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate which excludes lane changers. Cowan's M3 model was calibrated for both streams; on-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
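A minimal sketch of the gap-acceptance machinery described above: it samples major-stream headways from Cowan's M3 model and counts on-ramp vehicles absorbed under simple absolute-priority gap acceptance with a critical gap and follow-on time. All numerical values are assumptions for illustration, and the limited-priority behaviour and delay models developed in the thesis are not represented.

```python
import numpy as np

rng = np.random.default_rng(1)

def cowan_m3_headways(n, flow, alpha, delta=1.0):
    """Sample n major-stream headways (s) from Cowan's M3 model: a proportion
    alpha of vehicles are free with shifted-exponential headways; the rest are
    bunched at the minimum headway delta."""
    lam = alpha * flow / (1.0 - delta * flow)   # decay rate of the free headways
    free = rng.random(n) < alpha
    h = np.full(n, delta)
    h[free] += rng.exponential(1.0 / lam, free.sum())
    return h

def merges_per_hour(headways, t_c, t_f):
    """Count on-ramp vehicles absorbed under absolute-priority gap acceptance:
    a gap h admits 1 + floor((h - t_c) / t_f) vehicles if h >= t_c."""
    accepted = np.where(headways >= t_c,
                        1 + np.floor((headways - t_c) / t_f), 0)
    return accepted.sum() / (headways.sum() / 3600.0)

# assumed values for illustration only
q = 1200 / 3600.0                                   # kerb-lane flow, veh/s
h = cowan_m3_headways(200_000, flow=q, alpha=0.6, delta=1.0)
print(f"merge capacity ~ {merges_per_hour(h, t_c=2.0, t_f=1.1):.0f} veh/h")
```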

Relevance: 10.00%

Publisher:

Abstract:

BACKGROUND: Trochlear dysplasia is suspected to have a genetic basis and causes recurrent patellar instability due to insufficient anatomical geometry. Numerous studies of trochlear morphology and its optimal surgical treatment have been carried out, but no attention has been paid to the corresponding patellar morphology.
PURPOSE: The aim of this study was to evaluate patellar morphology in normal and trochlear dysplastic knees.
STUDY DESIGN: Biometric analysis.
METHODS: Twenty-two patellae with underlying trochlear dysplasia (study group, SG) were compared with 22 matched knees with normal trochlear shape (control group, CG) on transverse and sagittal MRI slices. We compared transverse diameter, cartilaginous thickness, Wiberg index and angle, length and radius of the lateral and medial facets, patellar shape and angle, retropatellar length, and type of trochlear dysplasia. For statistical analysis we used the Wilcoxon signed ranks test.
RESULTS: The transverse and sagittal diameters, the mean length of the medial patellar facet, and the mean cartilaginous and subchondral Wiberg index showed statistical differences between the two groups.
CONCLUSIONS: Although insufficient trochlear depth and a decreased lateral trochlear slope are responsible for patellofemoral instability, the patella also shows morphological changes in trochlear dysplastic knees: its overall size and its medial facet are smaller. Although the femoral sulcus angle is larger, the Wiberg angle and index are equal to those of the control group. This may indicate that the patellar morphology is a result not of missing medial patellofemoral pressure in trochlear dysplastic knees, but of decreased medial patellofemoral traction. This seems to be caused by hypotrophic medial patellofemoral restraints in combination with an increased lateral patellar tilt, both resulting in decreased tension on the medial patellar facet. Whether there is a genetic component to the patellar morphology remains open.

Relevance: 10.00%

Publisher:

Abstract:

Bone development is influenced by the local mechanical environment. Experimental evidence suggests that altered loading can change cell proliferation and differentiation in chondro- and osteogenesis during endochondral ossification. This study investigated the effects of three-point bending of murine fetal metatarsal bone anlagen in vitro on cartilage differentiation, matrix mineralization and bone collar formation. This is of special interest because endochondral ossification is also an important process in bone healing and regeneration. Metatarsal preparations of 15 mouse fetuses at stage 17.5 dpc were dissected en bloc and cultured for 7 days. After 3 days in culture to allow adherence, they were stimulated for 4 days, twice daily for 20 min, by a controlled bending of approximately 1000-1500 microstrain at 1 Hz. The paraffin-embedded bone sections were analyzed using histological and histomorphometrical techniques. The stimulated group showed an elongated periosteal bone collar, while the total bone length was not different from controls. The region of interest (ROI), comprising the two hypertrophic zones and the intermediate calcifying diaphyseal zone, was larger in the stimulated group. The mineralized fraction of the ROI was smaller in the stimulated group, while the absolute amount of mineralized area was not different. These results demonstrate that a new device developed to apply three-point bending to a mouse metatarsal bone culture model caused an elongation of the periosteal bone collar, but did not lead to a modification in cartilage differentiation or matrix mineralization. The results corroborate the influence of biophysical stimulation during endochondral bone development in vitro. Further experiments with an altered loading regime may lead to more pronounced effects on the process of endochondral ossification and may provide further insights into the underlying mechanisms of mechanoregulation, which also play a role in bone regeneration.

Relevance: 10.00%

Publisher:

Abstract:

Fracture healing is influenced by fixation stability, and experimental evidence suggests that the initial mechanical conditions may determine the healing outcome. We hypothesised that mechanical conditions influence not only the healing outcome but also the early phase of fracture healing. Additionally, it was hypothesised that decreased fixation stability, characterised by an increased shear interfragmentary movement, results in a delay in healing. Sixty-four sheep underwent a mid-shaft tibial osteotomy which was treated with either a rigid or a semi-rigid external fixator. Animals were sacrificed at 2, 3, 6 and 9 weeks postoperatively and the fracture callus was analysed using radiological, biomechanical and histological techniques. The tibiae treated with semi-rigid fixation showed inferior callus stiffness and quality after 6 weeks. At 9 weeks, the calluses were no longer distinguishable in their mechanical competence; however, the calluses produced under rigid fixation were smaller and consisted of a reduced fibrous tissue component. These results demonstrate that callus formation over the course of healing differed both morphologically and in the rate of development. In this study, we provide evidence that the course of healing is influenced by the initial fixation stability. The semi-rigid fixator did not result in delayed healing, but a less optimal healing path was taken. An upper limit of stability required for successful healing remains unknown; however, a limit beyond which healing is less optimal has been determined.

Relevance: 10.00%

Publisher:

Abstract:

The Saffman-Taylor finger problem is to predict the shape and, in particular, the width of a finger of fluid travelling in a Hele-Shaw cell filled with a different, more viscous fluid. In experiments the width is dependent on the speed of propagation of the finger, tending to half the total cell width as the speed increases. To predict this result mathematically, nonlinear effects on the fluid interface must be considered; usually surface tension is included for this purpose. This makes the mathematical problem sufficiently difficult that asymptotic or numerical methods must be used. In this paper we adapt numerical methods used to solve the Saffman-Taylor finger problem with surface tension to instead include the effect of kinetic undercooling, a regularisation effect important in Stefan melting-freezing problems, for which Hele-Shaw flow serves as a leading-order approximation when the specific heat of a substance is much smaller than its latent heat. We find the existence of a solution branch where the finger width tends to zero as the propagation speed increases, disagreeing with some aspects of the asymptotic analysis of the same problem. We also find a second solution branch, supporting the idea of a countably infinite number of branches as with the surface tension problem.

Relevance: 10.00%

Publisher:

Abstract:

Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas; as a result, very little is known to date about this link in a rural context. In addition, although research has identified transport disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often a poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time, yet the static nature of previous analyses using the activity space concept has lacked the ability to identify transport disadvantage in time. Activity space is a dynamic concept and therefore has great potential for capturing temporal variations in behaviour and access to opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and the whole week using weekly activity-travel diary data from three case study areas located in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between socio-spatial groups. The findings show that, despite having smaller activity spaces, individuals were not disadvantaged because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas, although temporal disaggregation of the data reveals that this is true only on weekends, due to a lack of public transport services. In addition, despite activity spaces being of similar size, the fullness of the activity spaces of low-income individuals was found to be significantly lower than that of their high-income counterparts. Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
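The activity-space size measure is not specified in the abstract; one common operationalisation, shown as an assumption-laden sketch below, is the area of the convex hull of a person's visited activity locations.

```python
import numpy as np
from scipy.spatial import ConvexHull

# hypothetical weekly activity locations for one person, already projected to a
# metric grid (easting/northing in km); real studies derive these from
# activity-travel diaries, and may use ellipses or network buffers instead
points = np.array([[0.0, 0.0], [2.1, 0.4], [1.8, 3.2], [-0.5, 1.1], [4.0, 2.5]])

hull = ConvexHull(points)
# for 2-D input, ConvexHull.volume is the enclosed area (and .area the perimeter)
print(f"activity space size (convex hull area): {hull.volume:.1f} km^2")
```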

Relevance: 10.00%

Publisher:

Abstract:

The application of variable structure control (VSC) to power system stabilization is studied in this paper; it is the application aspects and constraints of VSC that are of particular interest. A variable structure control methodology is proposed for power system stabilization and implemented using thyristor-controlled series compensators. A three-machine power system is stabilized using a switching-line control for large disturbances, which becomes a sliding control as the disturbance becomes smaller. The results demonstrate the effectiveness of the proposed methodology as a useful tool for suppressing oscillations in power systems.
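A minimal sketch of the switching-line/sliding idea on a generic double-integrator plant (not the paper's three-machine power system or TCSC implementation): the control switches sign across the line s = 0, and once the state reaches that line it slides along it towards the origin. The gains and dynamics are illustrative assumptions only.

```python
import numpy as np

def simulate(x0, v0, k=5.0, c=1.0, dt=1e-3, steps=10_000):
    """Switching-line control u = -k*sign(s) on the plant x'' = u."""
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        s = c * x + v            # switching line s = c*x + x'
        u = -k * np.sign(s)      # control switches across the line; the motion
                                 # "slides" along s = 0 once it is reached
        v += u * dt
        x += v * dt
        traj.append((x, v, s))
    return np.array(traj)

traj = simulate(x0=1.0, v0=0.0)
print("final state:", traj[-1, :2])   # approaches the origin along s = 0
```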

Relevance: 10.00%

Publisher:

Abstract:

The Reporting and Reception of Indigenous Issues in the Australian Media was a three-year project financed by the Australian government through its Australian Research Council Large Grants Scheme and run by Professor John Hartley (of Murdoch and then Edith Cowan University, Western Australia). The purpose of the research was to map the ways in which indigeneity was constructed and circulated in Australia's mediasphere. The analysis of the 'reporting' element of the project was relatively straightforward: a mixture of content analysis of a large number of items in the media, and detailed textual analysis of a smaller number of key texts. The discoveries were interesting: when analysis approaches the media as a whole, rather than focussing exclusively on news or serious drama genres, the representation of indigeneity is not nearly as homogeneous as has previously been assumed. And if researchers do not explicitly set out to uncover racism in every text, it is by no means guaranteed they will find it [1]. The question of how to approach the 'reception' of these issues - and particularly reception by indigenous Australians - proved to be a far more challenging one. In attempting to research this area, Hartley and I (working as a research assistant on the project) often found ourselves hampered by the axioms that underlie much media research. Traditionally, the 'reception' of media by indigenous people in Australia has been researched in ethnographic ways. This research repeatedly discovers that indigenous people in Australia are powerless in the face of new forms of media. Indigenous populations are represented as victims of aggressive and powerful intrusions: 'What happens when a remote community is suddenly inundated by broadcast TV?'; 'Overnight they will go from having no radio and television to being bombarded by three TV channels'; 'The influence of film in an isolated, traditionally oriented Aboriginal community' [2]. This language of 'influence', 'bombarded', and 'inundated' presents metaphors not just of war but of a war being lost. It tells of an unequal struggle, of a more powerful force impinging upon a weaker one. What else could be the relationship of an Aboriginal audience to something which is 'bombarding' them? Or by which they are 'inundated'? This attitude might best be summed up by the title of an article by Elihu Katz: 'Can authentic cultures survive new media?' [3]. In such writing, there is little sense that what is being addressed might be seen as a series of discursive encounters, negotiations and acts of meaning-making in which indigenous people - communities and audiences - might be productive. Certainly, the points of concern in this type of writing are important. The question of what happens when a new communication medium is summarily introduced to a culture is certainly an important one. But the language used to describe this interaction is a misleading one. And it is noticeable that such writing is fascinated with the relationship of only traditionally-oriented Aboriginal communities to the media of mass communication.

Relevance: 10.00%

Publisher:

Abstract:

Atmospheric deposition is one of the most important pollutant pathways for urban stormwater pollution. Atmospheric deposition can take the form of dry or wet deposition, which have distinct characteristics in terms of pollutant types, pollutant sources and influential parameters. This paper discusses the outcomes of a comprehensive study undertaken to identify the characteristics of wet and dry deposition of pollutants. Sample collection was undertaken at eight study sites with distinct characteristics: four were roadside sites with varying traffic characteristics, whilst the other four had different land use characteristics. Dry deposition samples were collected for different numbers of antecedent dry days and wet deposition samples were collected immediately after rainfall events. Dry deposition was found to increase with the number of antecedent dry days and consisted of relatively coarse particles (greater than 1 µm) compared with wet deposition. Wet deposition showed a strong affinity with rainfall depth, but was not related to the antecedent dry period. It was also found that smaller particles (less than 1 µm) travel much longer distances from the source and deposit mainly with the wet deposition.