Abstract:
Ions play an important role in climate and in particle formation in the atmosphere. Small ions rapidly attach to particles in the air and, as studies have shown, are therefore suppressed in polluted environments. Urban environments, in particular, are dominated by motor vehicle emissions, and because motor vehicles are a source of both particles and small ions, the relationship between these two parameters in such environments is not well understood. To gain a better understanding of this relationship, an intensive campaign was undertaken in which particles and small ions of both signs were monitored over two-week periods at each of three sites, A, B and C, affected to varying degrees by vehicle emissions. Site A was close to a major road and recorded the highest particle number and lowest small ion concentrations. Precursors from motor vehicle emissions gave rise to clear particle formation events on five days, and on each of these days the events were accompanied by a suppression of small ions. Site B, located within the urban airshed but not adjacent to motor traffic, showed particle enhancement but no formation events. Site C was a clean site away from urban sources and recorded the lowest particle number and highest small ion concentration. The positive small ion concentration was 10% to 40% higher than the corresponding negative value at all sites. These results confirm previous findings of a clear inverse relationship between small ions and particles in urban environments dominated by motor vehicle emissions.
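The inverse relationship summarised above is consistent with the standard small-ion balance relation of atmospheric electricity. As an illustration only (this equation is not quoted in the abstract), the number concentration $n$ of small ions of one sign is commonly modelled as

$$\frac{dn}{dt} = q - \alpha n^{2} - \beta n N,$$

where $q$ is the ion production rate, $\alpha$ the ion-ion recombination coefficient, $\beta$ the ion-aerosol attachment coefficient and $N$ the particle number concentration; at steady state, a larger $N$ therefore forces a smaller $n$.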
Abstract:
We have applied a new method of calibrating portal images of IMRT beams and used it to measure patient set-up accuracy and delivery errors, such as leaf errors and segment intensity errors, during treatment. A calibration technique was used to remove the intensity modulations from the images, leaving equivalent open-field images that show patient anatomy and can be used to verify the patient position. The images of the treatment beam can also be used to verify delivery of the beam in terms of multileaf collimator leaf position and dosimetric errors. A series of controlled experiments delivering an anterior IMRT beam to the head and neck of a humanoid phantom was undertaken. A 2 mm translation in the position of the phantom could be detected. With intentional introduction of delivery errors into the beam, the method allowed us to detect leaf positioning errors of 2 mm and variations in monitor units of 1%. The method was then applied to a patient who received IMRT treatment to the larynx and cervical nodes. The anterior IMRT beam was imaged during four fractions, and the images were calibrated and examined for the characteristic signs of patient position error and delivery error demonstrated in the control experiments. No significant errors were seen. Imaging the IMRT beam and calibrating the images to remove the intensity modulations can thus be a useful tool for verifying both the patient position and the delivery of the beam.
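The core calibration step described above, removing the planned intensity modulation so that only anatomy remains, can be sketched as a simple pixel-wise division. The snippet below is a minimal illustration under the assumption that the planned modulation (transmission) map has already been projected onto the imager plane; the function and array names are hypothetical and not taken from the paper.

```python
import numpy as np

def equivalent_open_field(portal_image, modulation_map, eps=1e-6):
    """Approximate an equivalent open-field image by dividing the measured
    IMRT portal image by the planned intensity-modulation map (projected to
    the imager plane), so that anatomy rather than the delivered fluence
    pattern dominates the result. Purely illustrative."""
    modulation = np.clip(modulation_map, eps, None)  # guard against division by zero outside the field
    return portal_image / modulation
```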
Abstract:
This paper describes a risk model for estimating the likelihood of collisions at low-exposure railway level crossings, demonstrating the effect that differences in safety integrity can have on the likelihood of a collision. The model facilitates the comparison of safety benefits between level crossings with passive controls (stop or give-way signs) and level crossings that have been hypothetically upgraded with conventional or low-cost warning devices. The scenario presented illustrates how treatment of a cross-section of level crossings with low-cost devices can provide a greater safety benefit than treatment with conventional warning devices for the same budget.
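To make the comparison concrete, the sketch below is a deliberately simplified exposure-based collision model; it is not the model described in the paper, and every parameter name is hypothetical. It only illustrates how a warning device's effectiveness, which reflects its safety integrity, scales the expected collision frequency for a given level of exposure.

```python
def expected_collisions_per_year(trains_per_day, vehicles_per_day,
                                 p_collision_per_conflict, control_effect):
    """Very simplified exposure-based collision model (illustrative only, not the
    model from the paper).

    trains_per_day * vehicles_per_day approximates daily train/vehicle conflict
    opportunities ('traffic moment'); p_collision_per_conflict is a baseline
    probability that one conflict ends in a collision at a passively controlled
    crossing; control_effect (< 1 for active warning devices, smaller still for
    higher safety integrity) scales that probability.
    """
    daily_conflicts = trains_per_day * vehicles_per_day
    return 365 * daily_conflicts * p_collision_per_conflict * control_effect

# Hypothetical comparison: passive signs vs an active warning device at a quiet crossing
print(expected_collisions_per_year(4, 60, 1e-6, 1.0))   # passive control baseline
print(expected_collisions_per_year(4, 60, 1e-6, 0.3))   # same crossing with a warning device
```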
Abstract:
To investigate the migraine locus around the C19p13 region through analysis of the NOTCH3 gene (C19p13.2-p13.1), previously shown to be involved in CADASIL, and the TNFSF7 gene (C19p13), homologous to the ligands TNF-alpha and TNF-beta, which have previously been associated with migraine. The NOTCH3 gene was analysed by sequencing all exons harbouring known CADASIL mutations in a typical (non-familial hemiplegic) migraine family (MF1) previously shown to be linked to C19p13. The TNFSF7 gene was investigated through SNP association analysis using a matched case-control migraine population. NOTCH3 sequencing results for affected members of MF1 were negative for all known sequence variants giving rise to CADASIL mutations. TNFSF7 chi-square results showed non-significant P values across all populations tested against controls, except for the MO subgroup, which displayed a possible association with the TNFSF7 SNP (genotype and allele analyses: P = 0.036 and P = 0.017, respectively). Our results suggest that common migraine is not caused by any known CADASIL mutation in the NOTCH3 gene. However, the TNFSF7 gene displayed signs of involvement in an MO-affected population, indicating that further independent studies of this marker are warranted.
Abstract:
BACKGROUND: US Centers for Disease Control guidelines recommend replacement of peripheral intravenous (IV) catheters no more frequently than every 72 to 96 hours. Routine replacement is thought to reduce the risk of phlebitis and bloodstream infection. Catheter insertion is an unpleasant experience for patients and replacement may be unnecessary if the catheter remains functional and there are no signs of inflammation. Costs associated with routine replacement may be considerable. This is an update of a review first published in 2010. OBJECTIVES: To assess the effects of removing peripheral IV catheters when clinically indicated compared with removing and re-siting the catheter routinely. SEARCH METHODS: For this update the Cochrane Peripheral Vascular Diseases (PVD) Group Trials Search Co-ordinator searched the PVD Specialised Register (December 2012) and CENTRAL (2012, Issue 11). We also searched MEDLINE (last searched October 2012) and clinical trials registries. SELECTION CRITERIA: Randomised controlled trials that compared routine removal of peripheral IV catheters with removal only when clinically indicated in hospitalised or community dwelling patients receiving continuous or intermittent infusions. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed trial quality and extracted data. MAIN RESULTS: Seven trials with a total of 4895 patients were included in the review. Catheter-related bloodstream infection (CRBSI) was assessed in five trials (4806 patients). There was no significant between group difference in the CRBSI rate (clinically-indicated 1/2365; routine change 2/2441). The risk ratio (RR) was 0.61 but the confidence interval (CI) was wide, creating uncertainty around the estimate (95% CI 0.08 to 4.68; P = 0.64). No difference in phlebitis rates was found whether catheters were changed according to clinical indications or routinely (clinically-indicated 186/2365; 3-day change 166/2441; RR 1.14, 95% CI 0.93 to 1.39). This result was unaffected by whether infusion through the catheter was continuous or intermittent. We also analysed the data by number of device days and again no differences between groups were observed (RR 1.03, 95% CI 0.84 to 1.27; P = 0.75). One trial assessed all-cause bloodstream infection. There was no difference in this outcome between the two groups (clinically-indicated 4/1593 (0.02%); routine change 9/1690 (0.05%); P = 0.21). Cannulation costs were lower by approximately AUD 7.00 in the clinically-indicated group (mean difference (MD) -6.96, 95% CI -9.05 to -4.86; P ≤ 0.00001). AUTHORS' CONCLUSIONS: The review found no evidence to support changing catheters every 72 to 96 hours. Consequently, healthcare organisations may consider changing to a policy whereby catheters are changed only if clinically indicated. This would provide significant cost savings and would spare patients the unnecessary pain of routine re-sites in the absence of clinical indications. To minimise peripheral catheter-related complications, the insertion site should be inspected at each shift change and the catheter removed if signs of inflammation, infiltration, or blockage are present.
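For readers who want to check the numbers, the phlebitis comparison above can be reproduced approximately from the raw counts. The sketch below computes a crude risk ratio and 95% CI from a single 2×2 table; the pooled estimates reported in the review come from a meta-analysis across trials, so they differ slightly from this crude calculation.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Crude risk ratio and 95% CI from two event proportions,
    using the usual log-normal approximation."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Phlebitis: clinically indicated 186/2365 vs routine (3-day) change 166/2441
print(risk_ratio_ci(186, 2365, 166, 2441))  # crude RR ~1.16 (0.95-1.41); pooled RR in the review: 1.14 (0.93-1.39)
```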
Abstract:
Background A reliable standardized diagnosis of pneumonia in children has long been difficult to achieve. Clinical and radiological criteria have been developed by the World Health Organization (WHO); however, their generalizability to different populations is uncertain. We evaluated WHO-defined, chest radiograph (CXR)-confirmed alveolar pneumonia in the clinical context in Central Australian Aboriginal children, a high-risk population, hospitalized with acute lower respiratory illness (ALRI). Methods CXRs in children (aged 1-60 months) hospitalized and treated with intravenous antibiotics for ALRI and enrolled in a randomized controlled trial (RCT) of Vitamin A/Zinc supplementation were matched with data collected during a population-based study of WHO-defined primary endpoint pneumonia (WHO-EPC). These CXRs were reread by a pediatric pulmonologist (PP) and classified as pneumonia-PP when alveolar changes were present. Sensitivities, specificities, and positive and negative predictive values (PPV, NPV) for clinical presentations were compared between WHO-EPC and pneumonia-PP. Results Of the 147 episodes of hospitalized ALRI, WHO-EPC was diagnosed in 40 (27.2%), significantly less often than pneumonia-PP (difference 20.4%, 95% CI 9.6-31.2, P < 0.001). Clinical signs on admission were poor predictors for both pneumonia-PP and WHO-EPC; the sensitivities of clinical signs ranged from a high of 45% for tachypnea to 5% for fever + tachypnea + chest-indrawing, with corresponding PPVs of 40% and 20%, respectively. Higher PPVs were observed against the pediatric pulmonologist's diagnosis than against WHO-EPC. Conclusions WHO-EPC underestimates alveolar consolidation in a clinical context. Its use in clinical practice, or in research designed to inform clinical management in this population, should be avoided. Pediatr Pulmonol. 2012; 47:386-392. (C) 2011 Wiley Periodicals, Inc.
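The accuracy measures reported above are standard 2×2-table computations. The sketch below shows how these indices are derived from counts of a clinical sign against a reference diagnosis; the counts in the usage example are hypothetical, not the study data.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV of a clinical sign against a
    reference diagnosis (e.g. radiologically confirmed pneumonia)."""
    sensitivity = tp / (tp + fn)   # proportion of true cases the sign detects
    specificity = tn / (tn + fp)   # proportion of non-cases the sign correctly excludes
    ppv = tp / (tp + fp)           # probability of disease when the sign is present
    npv = tn / (tn + fn)           # probability of no disease when the sign is absent
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one sign: 27 true positives, 40 false positives,
# 33 false negatives, 47 true negatives
print(diagnostic_indices(27, 40, 33, 47))
```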
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation and video recordings used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or ratings for scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo on-road evaluation.
Abstract:
Background Trials of new technologies to remotely monitor for signs and symptoms of worsening heart failure are continually emerging. The extent to which technological differences impact the effectiveness of non-invasive remote monitoring for heart failure management is unknown. Objective To examine the effect of the specific technology used for non-invasive remote monitoring of people with heart failure on all-cause mortality and heart failure-related hospitalisations. Methods A sub-analysis of a large systematic review and meta-analysis was conducted. Studies were stratified according to the specific type of technology used and separate meta-analyses were performed. Four different types of non-invasive remote monitoring technologies were identified: structured telephone calls, videophone, interactive voice response devices and telemonitoring. Results Only structured telephone calls and telemonitoring were effective in reducing the risk of all-cause mortality (RR 0.87; 95% CI=0.75-1.01; p=0.06 and RR 0.62; 95% CI=0.50-0.77; p<0.0001, respectively) and heart failure-related hospitalisations (RR 0.77; 95% CI=0.68-0.87; p<0.001 and RR 0.75; 95% CI=0.63-0.91; p=0.003, respectively). More research data are required for videophone and interactive voice response technologies. Conclusions This sub-analysis identified that only two of the four specific technologies used for non-invasive remote monitoring in heart failure improved outcomes. When results of studies that involved these disparate technologies were combined in previous meta-analyses, significant improvements in outcomes were identified. As such, this study has highlighted implications for future meta-analyses of randomised controlled trials evaluating the effectiveness of remote monitoring in heart failure.
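When results from trials using the same technology are pooled, as in the stratified meta-analyses above, one common approach is inverse-variance weighting of the log risk ratios. The sketch below shows a fixed-effect version of that calculation purely as an illustration; it is not necessarily the exact method used in the sub-analysis, and the input values in the example are hypothetical.

```python
import math

def pooled_risk_ratio(study_rrs_and_cis, z=1.96):
    """Fixed-effect inverse-variance pooling of study risk ratios given their
    95% CIs (one common approach; illustrative only)."""
    weights, weighted_logs = [], []
    for rr, lo, hi in study_rrs_and_cis:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # back-calculate SE of log RR from the CI
        w = 1 / se ** 2
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - z * se_pooled),
            math.exp(log_pooled + z * se_pooled))

# Hypothetical studies: (RR, lower 95% CI, upper 95% CI)
print(pooled_risk_ratio([(0.70, 0.50, 0.98), (0.60, 0.40, 0.90), (0.85, 0.55, 1.31)]))
```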
Abstract:
Background Extracorporeal membrane oxygenation (ECMO) is used for severe lung and/or heart failure in intensive care units (ICU). The Prince Charles Hospital (TPCH) has one of the largest ECMO units in Australia. Its use rapidly increased during the H1N1 (“swine flu”) pandemic, and an increase in pedal complications resulted. The relationship between ECMO and pedal complications has been described, particularly in children, though no strong data exist. This paper presents a case series of foot complications in patients who received ECMO treatment. Methods We present nine cases of severe foot complications in patients receiving ECMO treatment at TPCH in 2009–2012. Results Ages ranged from 16 to 58 years and three cases were male. Six cases had an unremarkable medical history prior to H1N1 or H1N2 infection, one had cardiomyopathy, one had received a lung transplant, and one had multi-organ failure post-sepsis. Commonly prescribed medications included vasopressors, antibiotics, and sedatives. All cases showed signs of markedly impaired peripheral perfusion while on ECMO, and seven developed increasing areas of foot necrosis. Outcomes included two bilateral below-knee amputations, two multiple digital amputations, one case of reflex sympathetic dystrophy syndrome, three pressure injuries, and three deaths. Conclusion Necrosis of the feet appears to occur more readily in younger people requiring ECMO treatment than in others in ICU. The authors are conducting further studies to investigate associations between particular infections, medical history, medications, or machine techniques and severe foot complications. Some of these early results will also be presented at this conference.
Abstract:
In contemporary Western societies, the years between childhood and young adulthood are commonly understood to be (trans)formative in the reflexive project of sexual self-making (Russell et al. 2012). As sexual subjects in the making, youthful bodies, desires and sexual activities are often perceived as both volatile and vulnerable, thus subjected to instruction and discipline, protection and surveillance. Accordingly, young people’s sexual proximities are closely monitored by social institutions and ‘(hetero)normalising regimes’ (Warner 1999) for any signs that may compromise the end goal of development—a ‘normal’ reproductive heterosexual monogamous adult...
Abstract:
Burn-wound healing is a dynamic, interactive process involving a number of cellular and molecular events and is characterized by inflammation, granulation tissue formation, re-epithelialization, and tissue remodeling (Greenhalgh, 2002; Linares, 2002). Unlike incisional-wound healing, it also requires extensive re-epithelialization due to a predominant horizontal loss of tissue and often heals with abnormal scarring when burns involve deep dermis. The early mammalian fetus has the remarkable ability to regenerate normal epidermis and dermis and to heal dermal incisional wounds with no signs of scarring. Extensive research has indicated that scarless healing appears to be intrinsic to fetal skin (McCallion and Ferguson, 1996; Ferguson and O’Kane, 2004). Previously, we reported a fetal burn model, in which 80-day-old ovine fetuses (gestation = 145–153 days) healed deep dermal partial thickness burns without scars, whereas postnatal lambs healed equal depth burns with significant scarring (Cuttle et al., 2005; Fraser et al., 2005). This burn model provided early evidence that fetal skin has the capacity to repair and restore dermal horizontal loss, not just vertical injuries.
Abstract:
While vital staining remains a cornerstone in the diagnosis of ocular disease and contact lens complications, many misconceptions persist among eye-care practitioners regarding the properties of commonly used dyes and what is and is not corneal staining after instillation of sodium fluorescein. The proper use and diagnostic utility of rose Bengal and lissamine green B, the other two ophthalmic dyes commonly used for assessing ocular complications, have similarly remained unclear. Because vital stains have limitations for definitive diagnosis, concomitant signs and symptoms, together with a complete patient history, are also required. Over the past decade, there have been many reports of a type of corneal staining, often referred to as solution-induced corneal staining (SICS), that is observed with the use of multipurpose solutions in combination with soft lenses, more specifically silicone hydrogel lenses. Some authors believe that SICS is a sign of lens/solution incompatibility; however, new research shows that SICS may be neither a measure of lens/solution biocompatibility nor ‘true’ corneal staining as observed in pathological situations. A large component of SICS may be a benign phenomenon known as preservative-associated transient hyperfluorescence (PATH). There is a lack of correlated signs and/or symptoms with SICS/PATH. Several properties of SICS/PATH, such as appearance and duration, differentiate it from pathological corneal staining. This paper reviews the properties of vital stains, their use and limitations in assessment of the ocular surface, the aetiology of corneal staining, the characteristics of SICS/PATH that differentiate it from pathological corneal staining, and what the SICS/PATH phenomenon means for contact lens-wearing patients.
Abstract:
- Considers broad-scale assessment approaches and how they impact on educational opportunity and outcomes.
- Brings together internationally recognised scholars providing new insights into assessment for learning improvement and accountability.
- Presents different theoretical and methodological perspectives influential in the field of assessment, learning and social change.
- Contributes to the theorising of assessment in contexts characterised by heightened accountability requirements and constant change.
This book brings together internationally recognised scholars with an interest in how to use the power of assessment to improve student learning and to engage with accountability priorities at both national and global levels. It includes distinguished writers who have worked together for some two decades to shift the assessment paradigm from a dominant focus on assessment as measurement towards assessment as central to efforts to improve learning. These writers have worked with the teaching profession and, in so doing, have researched and generated key insights into different ways of understanding assessment and its relationship to learning. The volume contributes to the theorising of assessment in contexts characterised by heightened accountability requirements and constant change. The book’s structure and content reflect already significant and growing international interest in assessment as contextualised practice, as well as theories of learning and teaching that underpin and drive particular assessment approaches. Learning theories and practices, assessment literacies, teachers’ responsibilities in assessment, the role of leadership, and assessment futures are the organisers within the book’s structure and content. The contributors to this book have in common the view that quality assessment and quality learning and teaching are integrally related. Another shared view is that the alignment of assessment with curriculum, teaching and learning is the linchpin of efforts to improve both learning opportunities and outcomes for all. Essentially, the book presents new perspectives on the enabling power of assessment. In so doing, the writers recognise that validity and reliability, the traditional canons of assessment, remain foundational and therefore necessary. However, they are not of themselves sufficient for quality education. The book argues that assessment needs to be radically reconsidered in the context of unprecedented societal change. Communities are increasingly segregating by wealth, with clear signs of social, political, economic and environmental instability. These changes raise important issues relating to ethics and equity, taken to be core dimensions in enabling the power of assessment to contribute to quality learning for all. This book offers readers new knowledge about how assessment can be used to re/engage learners across all phases of education.
Abstract:
Debilitating infectious diseases caused by Chlamydia are major contributors to the decline of Australia's iconic native marsupial species, the koala (Phascolarctos cinereus). An understanding of koala chlamydial disease pathogenesis and the development of effective strategies to control infections continue to be hindered by an almost complete lack of species-specific immunological reagents. The cell-mediated immune response has been shown to play an influential role in the response to chlamydial infection in other hosts. The objective of this study, therefore, was to provide preliminary data on the role of two key cytokines, pro-inflammatory tumour necrosis factor alpha (TNFα) and anti-inflammatory interleukin 10 (IL10), in the koala Chlamydia pecorum response. Utilising sequence homology with cytokine sequences obtained from several recently sequenced marsupial genomes, this report describes the first mRNA sequences of any koala cytokine and the development of koala-specific TNFα and IL10 real-time PCR assays to measure the expression of these genes in koala samples. In preliminary studies comparing wild koalas with overt chlamydial disease, with previous evidence of C. pecorum infection, or with no signs of C. pecorum infection, we observed strong but variable expression of TNFα and IL10 in wild koalas with current signs of chlamydiosis. The description of these assays and the preliminary data on the cell-mediated immune response of koalas to chlamydial infection pave the way for future studies characterising the koala immune response to a range of its pathogens, while providing reagents to assist with measuring the efficacy of ongoing attempts to develop a koala chlamydial vaccine.
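The abstract does not state how expression was quantified from the real-time PCR assays; a common approach for such comparisons is the 2^-ΔΔCt method, sketched below purely as an illustration (the gene and sample roles named in the comments are assumptions, not details from the study).

```python
def relative_expression(ct_target_sample, ct_reference_sample,
                        ct_target_calibrator, ct_reference_calibrator):
    """Relative expression by the 2^-ddCt method (a common approach for
    real-time PCR data; illustrative only, not necessarily the model used
    in the study above).

    'target' could be a cytokine such as TNFa or IL10, 'reference' a
    housekeeping gene, and 'calibrator' e.g. an uninfected control sample."""
    d_ct_sample = ct_target_sample - ct_reference_sample
    d_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: diseased koala vs uninfected calibrator
print(relative_expression(24.0, 18.0, 27.5, 18.5))  # fold change relative to the calibrator
```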
Abstract:
Basal cell carcinoma (BCC) is a skin cancer of particular importance to the Australian community. Its rate of occurrence is highest in Queensland, where 1% to 2% of people are newly affected annually. This is an order of magnitude higher than corresponding incidence estimates in European and North American populations. Individuals with a sun-sensitive complexion are particularly susceptible because sun exposure is the single most important causative agent, as shown by the anatomic distribution of BCC, which is in general consistent with the levels of sun exposure across body sites. A distinguishing feature of BCC is the occurrence of multiple primary tumours within individuals, synchronously or over time, and their diagnosis and treatment costs contribute substantially to the major public health burden caused by BCC. A primary knowledge gap about BCC pathogenesis, however, was an understanding of the true frequency of multiple BCC occurrences and their body distribution, and why a proportion of people develop more than one BCC in their lifetime. This research project sought to address this gap under an overarching research aim to better understand the detailed epidemiology of BCC, with the ultimate goal of reducing the burden of this skin cancer through prevention. The particular aim was to document prospectively the rate of BCC occurrence and its associations with constitutional and environmental (solar) factors, all the while paying special attention to persons affected by more than one BCC. The study built on previous findings and recent developments in the field but set out to confirm and extend these and to propose more adequate theories about the complex epidemiology of this cancer. Addressing these goals required a new approach to researching basal cell carcinoma, due to the need to account for the phenomenon of multiple incident BCCs per person. This was enabled by a 20-year community-based study of skin cancer in Australians that provided the methodological foundation for this thesis. Study participants were originally randomly selected in 1986 from the electoral register of all adult residents of the subtropical township of Nambour in Queensland, Australia. On various occasions during the study, participants were fully examined by dermatologists who documented cumulative photodamage as well as skin cancers. Participants completed standard questionnaires about skin cancer-related factors, and consented to have any diagnosed skin cancers notified to the investigators by regional pathology laboratories in Queensland. These methods allowed 100% ascertainment of histologically confirmed BCCs in this study population. A total of 1339 participants had complete follow-up to the end of 2007. Statistical analyses in this thesis were carried out using SAS and SUDAAN statistical software packages. Modelling methods, including multivariate logistic regressions, allowed for repeated measures in terms of multiple BCCs per person. This innovative approach gave new findings on two levels, presented in five chapters as scientific papers:
1. Incidence of basal cell carcinoma multiplicity and detailed anatomic distribution: longitudinal study of an Australian population. The incidence of people affected multiple times by BCC was 705 per 100,000 person-years, compared with an incidence rate of 935 per 100,000 person-years for people singly affected. Among multiply and singly affected persons alike, site-specific BCC incidence rates were by far the highest on facial subsites, followed by the upper limbs, trunk, and then lower limbs.
2. Melanocytic nevi and basal cell carcinoma: is there an association? BCC risk was significantly increased in those with forearm nevi (odds ratio (OR) 1.43, 95% confidence interval (CI) 1.09-1.89) compared with people without forearm nevi, especially among those who spent their time mainly outdoors (OR 1.6, 95% CI 1.1-2.3) compared with those who spent their time mainly indoors. Nevi on the back were not associated with BCC.
3. Clinical signs of photodamage are associated with basal cell carcinoma multiplicity and site: a 16-year longitudinal study. Over a 16-year follow-up period, 58% of people affected by BCC developed more than one BCC. Among these people, 60% developed BCCs across different anatomic sites. Participants with high numbers of solar keratoses, compared with people without solar keratoses, were most likely to experience the highest BCC counts overall (OR 3.3, 95% CI 1.4-13.5). Occurrences of BCC on the trunk (OR 3.3, 95% CI 1.4-7.6) and on the limbs (OR 3.7, 95% CI 2.0-7.0) were strongly associated with high numbers of solar keratoses on these sites.
4. Occurrence and determinants of basal cell carcinoma by histological subtype in an Australian community. Among 1202 BCCs, 77% had a single growth pattern and 23% were of mixed histological composition. Among all BCCs, the nodular followed by the superficial growth patterns were the commonest. Risk of nodular and superficial BCCs on the head was raised if 5 or more solar keratoses were present on the face (OR 1.8, 95% CI 1.2-2.7 and OR 4.5, 95% CI 2.1-9.7, respectively), and similarly on the trunk in the presence of multiple solar keratoses on the trunk (OR 4.2, 95% CI 1.5-11.9 and OR 2.2, 95% CI 1.1-4.4, respectively).
5. Basal cell carcinoma and measures of cumulative sun exposure: an Australian longitudinal community-based study. Dermal elastosis was more likely to be seen adjacent to head and neck BCCs than trunk BCCs (p=0.01). Severity of dermal elastosis increased on each site with increasing clinical signs of cutaneous sun damage on that site. BCCs that occurred without perilesional elastosis per se were always found in an anatomic region with signs of photodamage.
This thesis has thus identified the magnitude of the burden of multiple BCCs. It does not support the view that people affected by more than one BCC represent a distinct group who are prone to BCCs on certain body sites. The results also demonstrate that BCCs, regardless of site, histology or order of occurrence, are strongly associated with cumulative sun exposure causing photodamage to the skin, and hence challenge the view that BCCs occurring on body sites with typically low opportunities for sun exposure, or of the superficial growth pattern, differ in their association with the sun from those on typically sun-exposed sites, or nodular BCCs, respectively. Through dissemination in the scientific and medical literature, and to the community at large, these findings can ultimately assist in the primary and secondary prevention of BCC, perhaps especially in high-risk populations.
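For readers unfamiliar with the incidence figures quoted in finding 1, the sketch below shows the standard person-years calculation; the counts in the usage example are hypothetical and are not the thesis data.

```python
def incidence_per_100k(new_cases, person_years):
    """Incidence rate expressed per 100,000 person-years (standard definition)."""
    return new_cases / person_years * 100_000

# Hypothetical example: 150 people first affected over 21,000 person-years of follow-up
print(incidence_per_100k(150, 21_000))  # ~714 per 100,000 person-years
```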