461 results for Accelerated failure time model
Abstract:
This study examined the effect that temporal order within the entrepreneurial discovery-exploitation process has on the outcomes of venture creation. Consistent with sequential theories of discovery-exploitation, the general flow of venture creation was found to be directed from discovery toward exploitation in a random sample of nascent ventures. However, venture creation attempts that specifically follow this sequence derive poor outcomes. Moreover, simultaneous discovery-exploitation was the most prevalent temporal order observed, and venture attempts that proceed in this manner are more likely to become operational. These findings suggest that venture creation is a multi-scale phenomenon that is at once directional in time and simultaneously driven by symbiotically coupled discovery and exploitation.
Abstract:
Part-time employment presents a conundrum in that it facilitates work-life priorities, while also, compared to equivalent full-time roles, attracting penalties such as diminished career prospects and lower commensurate remuneration. Recently, some promising theoretical developments in the job/work design literature suggest that consideration of work design may redress some of the penalties associated with part-time work. Adopting the framework of the Elaborated Model of Work Design by Parker and colleagues (2001), we examined this possibility through interviews with part-time professional service employees and their supervisors. The findings revealed that in organizations characterised by cultural norms of extended working hours and a singular-focused commitment to work, part-time roles were often inadequately re-designed when adapted from full-time arrangements. The findings also demonstrated that certain work design characteristics (e.g. predictability of work-flow, interdependencies with co-workers) render some roles more suitable for part-time arrangements than others. The research provides insights into gaps between policy objectives and outcomes associated with part-time work, challenges assumptions about the limitations of part-time roles, and suggests re-design strategies for more effective part-time arrangements.
Abstract:
Background: Trauma resulting from traffic crashes poses a significant problem in highly motorised countries. Over a million people worldwide are killed annually and 50 million are critically injured as a result of traffic collisions. In Australia, road crashes cost an average of $17 billion annually in personal loss of income and quality of life, organisational losses in productivity and workplace quality, and health care costs. Driver aggression has been identified as a key factor contributing to crashes, and many motorists report experiencing mild forms of aggression (e.g., rude gestures, horn honking). However, despite this concern, driver aggression has received relatively little attention in empirical research, and existing research has been hampered by a number of methodological and conceptual shortcomings. Specifically, there has been substantial disagreement regarding what constitutes aggressive driving, and a failure to examine both the situational factors and the emotional and cognitive processes underlying driver aggression. To enhance current understanding of aggressive driving, a model of driver aggression that highlights the cognitive and emotional processes at play in aggressive driving incidents is proposed. Aims: The research aims to improve current understanding of the complex nature of driver aggression by testing and refining a model of aggressive driving that incorporates the person-related and situational factors and the cognitive and emotional appraisal processes fundamental to driver aggression. In doing so, the research will assist in providing a clear definition of what constitutes aggressive driving, identifying the on-road incidents that trigger driver aggression, and identifying the emotional and cognitive appraisal processes that underlie driver aggression. Methods: The research involves three studies.
Firstly, to contextualise the model and explore the cognitive and emotional aspects of driver aggression, a diary-based study using self-reports of aggressive driving events will be conducted with a general population of drivers. These data will be supplemented by in-depth follow-up interviews with a sub-sample of participants. Secondly, to test the generalisability of the model, a large sample of drivers will be asked to respond to video-based scenarios depicting driving contexts derived from incidents identified in Study 1 as inciting aggression. Finally, to further operationalise and test the model, an advanced driving simulator will be used with a sample of drivers. These drivers will be exposed to various driving scenarios that would be expected to trigger negative emotional responses. Results: Work on the project has commenced and progress on the first study will be reported.
Abstract:
Opinion Mining is becoming increasingly important, especially for analysing and forecasting customer behavior for business purposes. The right decision in producing new products or services, based on data about customers’ characteristics, means profit for the organization or company. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers’ characteristics and their comments about products (or services). The key step to achieve this objective is to transfer comments (opinions) to a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers’ orientation for all possible product attributes. A use case study is also presented in this paper to show the advantages of using OLAP and data cubes to analyze customers’ opinions.
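The fact-table idea described above can be sketched in a few lines. Everything below — the dimension names, the example comments and the ±1 polarity scores — is invented for illustration; the paper's actual schema and sentiment-scoring method are not specified here.

```python
from collections import defaultdict

# Hypothetical fact rows for an opinion-mining cube. Each row:
# (customer_segment, product, attribute, quarter, city, polarity)
# where polarity is +1 for a positive comment and -1 for a negative one.
facts = [
    ("young",  "phone",  "battery", "2023Q1", "Hanoi", +1),
    ("young",  "phone",  "battery", "2023Q1", "Hanoi", -1),
    ("senior", "phone",  "battery", "2023Q2", "Hue",   +1),
    ("young",  "phone",  "screen",  "2023Q1", "Hanoi", +1),
    ("senior", "laptop", "battery", "2023Q2", "Hue",   -1),
]

def roll_up(facts, dims):
    """Mean polarity ('customer orientation') aggregated over the chosen
    dimensions -- mimicking an OLAP roll-up on the fact table."""
    idx = {"segment": 0, "product": 1, "attribute": 2, "quarter": 3, "city": 4}
    sums, counts = defaultdict(float), defaultdict(int)
    for row in facts:
        key = tuple(row[idx[d]] for d in dims)
        sums[key] += row[5]
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Orientation per (product, attribute) pair, over all customers and times.
orientation = roll_up(facts, ["product", "attribute"])
print(orientation[("phone", "battery")])
```

Slicing by other dimensions (e.g. `["segment", "product"]`) gives the per-segment view of the same cube without changing the fact table.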
Abstract:
In order to make good decisions about the design of information systems, an essential skill is to understand process models of the business domain the system is intended to support. Yet, little knowledge to date has been established about the factors that affect how model users comprehend the content of process models. In this study, we use theories of semiotics and cognitive load to theorize how model and personal factors influence how model viewers comprehend the syntactical information of process models. We then report on a four-part series of experiments, in which we examined these factors. Our results show that additional semantical information impedes syntax comprehension, and that theoretical knowledge eases syntax comprehension. Modeling experience further contributes positively to comprehension efficiency, measured as the ratio of correct answers to the time taken to provide answers. We discuss implications for practice and research.
Abstract:
Here we present a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination but the methodology is applicable if one has a different design objective such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model and the algorithm relies on a convenient estimator of the evidence of each model which is essentially a function of importance sampling weights. Other methods for this task such as quadrature, often used in design, suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem specific tuning. We demonstrate the methodology on three applications, including discriminating between models for decline in motor neuron numbers in patients suffering from neurological diseases such as Motor Neuron disease.
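As a toy illustration of how importance-sampling weights yield an evidence estimate for discrete data, the sketch below runs a bootstrap-style SMC for a single Bernoulli model with a uniform prior. This is a stand-in under stated assumptions, not the paper's algorithm: the paper runs one such filter per candidate model, and a full implementation would add a move step to rejuvenate the static parameter, omitted here for brevity.

```python
import math
import random

random.seed(1)

def smc_log_evidence(data, n_particles=20000):
    """Estimate log p(y_1:n) for a Bernoulli model with a Beta(1,1) prior.
    The evidence is accumulated from the mean incremental importance
    weights -- the 'function of importance sampling weights' in the text."""
    thetas = [random.random() for _ in range(n_particles)]  # prior draws
    log_evidence = 0.0
    for y in data:
        # Incremental weight = likelihood of the new observation.
        weights = [t if y == 1 else 1.0 - t for t in thetas]
        log_evidence += math.log(sum(weights) / n_particles)
        # Multinomial resampling keeps the particle set well conditioned.
        thetas = random.choices(thetas, weights=weights, k=n_particles)
    return log_evidence

data = [1, 0, 1, 1, 0, 1]  # 4 successes in 6 trials
est = smc_log_evidence(data)
# Analytic evidence under Beta(1,1): B(s+1, n-s+1) = s!(n-s)!/(n+1)!
exact = math.log(math.factorial(4) * math.factorial(2) / math.factorial(7))
print(est, exact)
```

Running one such filter per model and normalising the evidence estimates gives the posterior model probabilities that feed the discrimination utility.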
Abstract:
Prefabricated construction is regarded by many as an effective and efficient approach to improving construction processes and productivity, ensuring construction quality and reducing time and cost in the construction industry. However, many problems occur with this approach in practice, including higher risk levels and cost or time overruns. In order to solve such problems, it is proposed that the IKEA model of the manufacturing industry and VP technology are introduced into a prefabricated construction process. The concept of the IKEA model is identified in detail and VP technology is briefly introduced. In conjunction with VP technology, the applications of the IKEA model are presented in detail, i.e. design optimization, production optimization and installation optimization. Furthermore, through a case study of a prefabricated hotel project in Hong Kong, it is shown that the VP-based IKEA model can improve the efficiency and safety of prefabricated construction as well as reducing cost and time.
Abstract:
Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport are a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of these models. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to the review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link these to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and literature findings, guidance is provided for the development of future airport terminal models.
Juggling competing public values: resolving conflicting agendas in social procurement in Queensland
Abstract:
Organisations within the not-for-profit sector provide services to individuals and groups that government and for-profit organisations cannot or will not consider. This response by the not-for-profit sector to market failure and government failure is a well-understood contribution to society. Over time, this response has resulted in the development of a vibrant and rich agglomeration of services and programs that operate under a myriad of philosophical stances, service orientations, client groupings and operational capacities. In Australia, these organisations and services provide social support and service assistance to many people in the community, often targeting their assistance to clients facing the most difficult and complex of problems. Initially, in undertaking this role, the not-for-profit sector received limited sponsorship from government, relying primarily on public donations to fund the delivery of services (Lyons 2001). Over time, governments assumed greater responsibility in the form of service grants to particular groups: ‘the worthy poor’. More recently, government has engaged in widespread procurement of services from the not-for-profit sector, specifying the nature of the outcomes to be achieved and, to a degree, the way in which the services will be provided. A consequence of this growing shift to a more marketised model of service contracting, often offered up under the label of enhanced collaborative practice, has been increased competitiveness between agencies that had previously worked well together (Keast and Brown, 2006). One of the challenges that emerges from the procurement of services by government from third-sector organisations is that public values such as effectiveness, efficiency, transparency and professionalism can be neglected (Jørgensen and Bozeman 2002), although this is not always the case (Brown, Furneaux and Gudmundsson 2012).
While some approaches to the examination of social procurement - the intentional purchasing of social outcomes (Furneaux and Barraket 2011) - assume that public values are lost in social procurement arrangements (Bozeman 2002; Jørgensen and Bozeman 2002), alternative approaches suggest this is not inevitable. Instead, social procurement is seen to involve a set of tensions (Brown, Potoski and Slyke 2006) or a set of trade-offs (Charles et al. 2007) which must be managed, and through such management public values can potentially be safeguarded (Bruin and Dicke 2006). The potential trade-offs of public values in social procurement are an area in need of further research, and one which carries both theoretical and practical significance. Additionally, the juxtaposition of policies – horizontal integration and vertical efficiency – results in a complex, crowded and contested policy and practice environment (Keast et al., 2007), with the potential for a set of unintended consequences arising from these arrangements. Further, the involvement of for-profit, non-profit and hybrid organisations such as social enterprises adds further complexity in the number of different organisational forms engaged in service delivery on behalf of government. To address this issue, this paper uses information gleaned from a state-wide survey of not-for-profit organisations in Queensland, Australia, which included within its focus organisational size, operational scope, funding arrangements and governance/management approaches. Supplementing this information is qualitative data derived from 17 focus groups and 120 interviews conducted over ten years of study of this sector. The findings contribute to a greater understanding of the practice and theory of the future provision of social services.
Abstract:
In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, are increasingly used, but without a full understanding of their fire performance. Traditionally the fire resistance rating of these load-bearing LSF wall systems is based on approximate prescriptive methods developed based on limited fire tests. Very often they are limited to standard wall configurations used by the industry. Increased fire rating is provided simply by adding more plasterboards to these walls. This is not an acceptable situation as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken using full scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, where the insulation was used externally between the plasterboards on both sides of the steel wall frame instead of locating it in the cavity. Three full scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time temperature curve in AS 1530.4 (SA, 2005). Fire tests included the measurements of load-deformation characteristics of LSF walls until failure as well as associated time-temperature measurements across the thickness and along the length of all the specimens. Tests of LSF walls under axial compression load have shown the improvement to their fire performance and fire resistance rating when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using a finite element program ABAQUS. 
The finite element analyses were conducted under both steady state and transient state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with experimental test results from this study and Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems to increase the understanding of their behaviour under nonuniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only the flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed using AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spread sheet based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations including insulations. Idealised time-temperature profiles were developed based on the measured temperature values of the studs. 
This was used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures for different composite panels, varying in steel thickness, steel grade and screw spacing for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode Part 1.3 (ECS, 2006). This demonstrated the significant improvements to the design method when compared to the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load bearing LSF wall systems under standard fire conditions. Finite element models were developed to predict the failure times of LSF walls accurately. Idealized hot flange temperature profiles were developed for non-insulated, cavity and externally insulated load bearing wall systems. Suitable fire design rules and spread sheet based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF loadbearing walls under standard fire conditions.
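To give a flavour of what such a spreadsheet-based design tool computes, the sketch below finds a critical (failure) temperature for a given load ratio by interpolating a yield-strength reduction-factor table. The factor values are invented placeholders, not the Dolamune Kankanamge and Mahendran (2011) equations used in the study, and member capacity is crudely taken as proportional to the reduction factor, ignoring buckling, thermal bowing and neutral-axis shift.

```python
# Placeholder reduction factors: (hot-flange temperature degC, f_y,T / f_y,20).
# These numbers are illustrative only, NOT the fitted equations from the study.
YIELD_REDUCTION = [
    (20, 1.00), (200, 0.95), (400, 0.65), (500, 0.45), (600, 0.25), (700, 0.10),
]

def reduction_factor(temp_c):
    """Linearly interpolate the yield-strength reduction factor."""
    pts = YIELD_REDUCTION
    if temp_c <= pts[0][0]:
        return pts[0][1]
    for (t0, k0), (t1, k1) in zip(pts, pts[1:]):
        if temp_c <= t1:
            return k0 + (k1 - k0) * (temp_c - t0) / (t1 - t0)
    return pts[-1][1]

def critical_temperature(load_ratio, step=1.0):
    """Lowest temperature at which capacity (taken here as simply
    proportional to the reduction factor) drops below the applied load."""
    t = 20.0
    while reduction_factor(t) >= load_ratio and t < 700:
        t += step
    return t

print(critical_temperature(0.4))  # e.g. a stud loaded to 40% of capacity
```

Pairing such a critical-temperature lookup with an idealised hot-flange time-temperature profile is what converts a load ratio into a failure time for a given wall configuration.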
Abstract:
Driver response (reaction) time (tr) of the second queuing vehicle is generally longer than that of other vehicles at signalized intersections. Though this phenomenon was revealed in 1972, this factor is still ignored in conventional departure models. This paper highlights the need for quantitative measurement and analysis of queuing vehicle performance in the spontaneous discharge pattern, because it can improve microsimulation. Video recordings from major cities in Australia, plus twenty-two sets of vehicle trajectories extracted from the Next Generation Simulation (NGSIM) Peachtree Street Dataset, have been analyzed to better understand queuing vehicle performance in the discharge process. Findings from this research will help account for driver response time and can also be used for the calibration of microscopic traffic simulation models.
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. 
Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
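A "vertically translated exponential decay" rate curve can be written as rate(t) = k + a·e^(−bt), where k is the long-term substitution rate and k + a the short-term mutation rate. The sketch below, with invented parameter values rather than the fitted ones from the study, shows why extrapolating the long-term rate overestimates recent divergence times, and how inverting the integrated rate curve corrects the estimate.

```python
import math

# Illustrative parameters (%/Myr): long-term rate K, short-term excess A,
# decay constant B. These are NOT the values fitted in the study.
K, A, B = 0.5, 9.5, 3.0

def rate(t_myr):
    """Time-dependent rate: k + a * exp(-b * t)."""
    return K + A * math.exp(-B * t_myr)

def distance(t_myr, steps=10000):
    """Genetic distance accumulated up to time t: the integral of the
    rate curve, here by the trapezoidal rule."""
    h = t_myr / steps
    s = 0.5 * (rate(0.0) + rate(t_myr))
    for i in range(1, steps):
        s += rate(i * h)
    return s * h

def corrected_age(dist, lo=0.0, hi=10.0):
    """Invert distance(t) = dist by bisection to recover the true age."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if distance(mid) < dist:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = distance(0.1)   # distance for a true divergence 0.1 Myr (100 ka) ago
naive = d / K       # age inferred from the long-term rate alone
print(naive, corrected_age(d))
```

With these parameters the naive estimate is over an order of magnitude too old, while inverting the rate curve recovers the true 0.1 Myr age — the direction and rough scale of bias the abstract reports for recent events.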
Abstract:
ABSTRACT Objectives: To investigate the effect of hot and cold temperatures on ambulance attendances. Design: An ecological time series study. Setting and participants: The study was conducted in Brisbane, Australia. We collected information on 783 935 daily ambulance attendances, along with data of associated meteorological variables and air pollutants, for the period of 2000–2007. Outcome measures: The total number of ambulance attendances was examined, along with those related to cardiovascular, respiratory and other non-traumatic conditions. Generalised additive models were used to assess the relationship between daily mean temperature and the number of ambulance attendances. Results: There were statistically significant relationships between mean temperature and ambulance attendances for all categories. Acute heat effects were found with a 1.17% (95% CI: 0.86%, 1.48%) increase in total attendances for 1 °C increase above threshold (0–1 days lag). Cold effects were delayed and longer lasting with a 1.30% (0.87%, 1.73%) increase in total attendances for a 1 °C decrease below the threshold (2–15 days lag). Harvesting was observed following initial acute periods of heat effects, but not for cold effects. Conclusions: This study shows that both hot and cold temperatures led to increases in ambulance attendances for different medical conditions. Our findings support the notion that ambulance attendance records are a valid and timely source of data for use in the development of local weather/health early warning systems.
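The reported effect sizes translate directly into expected counts under the log-linear form typical of such models. In the sketch below only the 1.17%/°C heat effect and 1.30%/°C cold effect come from the abstract; the baseline daily count and the temperature threshold are illustrative assumptions, not figures from the study.

```python
import math

BETA_HEAT = math.log(1.0117)   # per degC above threshold (lags 0-1 days)
BETA_COLD = math.log(1.0130)   # per degC below threshold (lags 2-15 days)
THRESHOLD = 24.0               # degC -- illustrative, not from the study
BASELINE = 270.0               # expected daily attendances at the threshold

def expected_attendances(mean_temp_c):
    """Expected daily attendances under a log-linear temperature effect:
    counts scale by exp(beta * degrees beyond the threshold)."""
    if mean_temp_c >= THRESHOLD:
        return BASELINE * math.exp(BETA_HEAT * (mean_temp_c - THRESHOLD))
    return BASELINE * math.exp(BETA_COLD * (THRESHOLD - mean_temp_c))

print(expected_attendances(30.0))  # a hot 30 degC day
print(expected_attendances(14.0))  # a cold 14 degC day
```

A real early-warning system would additionally distribute the cold effect over its 2-15 day lag window rather than applying it to a single day, as this sketch does.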
Abstract:
The human Ureaplasma species are the most frequently isolated bacteria from the upper genital tract of pregnant women and can cause clinically asymptomatic, intra-uterine infections, which are difficult to treat with antimicrobials. Ureaplasma infection of the upper genital tract during pregnancy has been associated with numerous adverse outcomes including preterm birth, chorioamnionitis and neonatal respiratory diseases. The mechanisms by which ureaplasmas are able to chronically colonise the amniotic fluid and avoid eradication by (i) the host immune response and (ii) maternally-administered antimicrobials, remain virtually unexplored. To address this gap within the literature, this study investigated potential mechanisms by which ureaplasmas are able to cause chronic, intra-amniotic infections in an established ovine model. In this PhD program of research the effectiveness of standard, maternal erythromycin for the treatment of chronic, intra-amniotic ureaplasma infections was evaluated. At 55 days of gestation pregnant ewes received an intra-amniotic injection of either: a clinical Ureaplasma parvum serovar 3 isolate that was sensitive to macrolide antibiotics (n = 16); or 10B medium (n = 16). At 100 days of gestation, ewes were then randomised to receive either maternal erythromycin treatment (30 mg/kg/day for four days) or no treatment. Ureaplasmas were isolated from amniotic fluid, chorioamnion, umbilical cord and fetal lung specimens, which were collected at the time of preterm delivery of the fetus (125 days of gestation). Surprisingly, the numbers of ureaplasmas colonising the amniotic fluid and fetal tissues were not different between experimentally-infected animals that received erythromycin treatment or infected animals that did not receive treatment (p > 0.05), nor were there any differences in fetal inflammation and histological chorioamnionitis between these groups (p > 0.05). 
These data demonstrate the inability of maternal erythromycin to eradicate intra-uterine ureaplasma infections. Erythromycin was detected in the amniotic fluid of animals that received antimicrobial treatment (but not in those that did not receive treatment) by liquid chromatography-mass spectrometry; however, the concentrations were below therapeutic levels (<10 – 76 ng/mL). These findings indicate that the ineffectiveness of standard, maternal erythromycin treatment of intra-amniotic ureaplasma infections may be due to the poor placental transfer of this drug. Subsequently, the phenotypic and genotypic characteristics of ureaplasmas isolated from the amniotic fluid and chorioamnion of pregnant sheep after chronic, intra-amniotic infection and low-level exposure to erythromycin were investigated. At 55 days of gestation twelve pregnant ewes received an intra-amniotic injection of a clinical U. parvum serovar 3 isolate, which was sensitive to macrolide antibiotics. At 100 days of gestation, ewes received standard maternal erythromycin treatment (30 mg/kg/day for four days, n = 6) or saline (n = 6). Preterm fetuses were surgically delivered at 125 days of gestation and ureaplasmas were cultured from the amniotic fluid and the chorioamnion. The minimum inhibitory concentrations (MICs) of erythromycin, azithromycin and roxithromycin were determined for cultured ureaplasma isolates, and antimicrobial susceptibilities were different between ureaplasmas isolated from the amniotic fluid (MIC range = 0.08 – 1.0 mg/L) and chorioamnion (MIC range = 0.06 – 5.33 mg/L). However, the increased resistance to macrolide antibiotics observed in chorioamnion ureaplasma isolates occurred independently of exposure to erythromycin in vivo. 
Remarkably, domain V of the 23S ribosomal RNA gene (which is the target site of macrolide antimicrobials) of chorioamnion ureaplasmas demonstrated significant variability (125 polymorphisms out of 422 sequenced nucleotides, 29.6%) when compared to the amniotic fluid ureaplasma isolates and the inoculum strain. This sequence variability did not occur as a consequence of exposure to erythromycin, as the nucleotide substitutions were identical between chorioamnion ureaplasmas isolated from different animals, including those that did not receive erythromycin treatment. We propose that these mosaic-like 23S ribosomal RNA gene sequences may represent gene fragments transferred via horizontal gene transfer. The significant differences observed in (i) susceptibility to macrolide antimicrobials and (ii) 23S ribosomal RNA sequences of ureaplasmas isolated from the amniotic fluid and chorioamnion suggests that the anatomical site from which they were isolated may exert selective pressures that alter the socio-microbiological structure of the bacterial population, by selecting for genetic changes and altered antimicrobial susceptibility profiles. The final experiment for this PhD examined antigenic size variation of the multiple banded antigen (MBA, a surface-exposed lipoprotein and predicted ureaplasmal virulence factor) in chronic, intra-amniotic ureaplasma infections. Previously defined ‘virulent-derived’ and ‘avirulent-derived’ clonal U. parvum serovar 6 isolates (each expressing a single MBA protein) were injected into the amniotic fluid of pregnant ewes (n = 20) at 55 days of gestation, and amniotic fluid was collected by amniocentesis every two weeks until the time of near-term delivery of the fetus (at 140 days of gestation). Both the avirulent and virulent clonal ureaplasma strains generated MBA size variants (ranging in size from 32 – 170 kDa) within the amniotic fluid of pregnant ewes. 
The mean number of MBA size variants produced within the amniotic fluid was not different between the virulent (mean = 4.2 MBA variants) and avirulent (mean = 4.6 MBA variants) ureaplasma strains (p = 0.87). Intra-amniotic infection with the virulent strain was significantly associated with the presence of meconium-stained amniotic fluid (p = 0.01), which is an indicator of fetal distress in utero. However, the severity of histological chorioamnionitis was not different between the avirulent and virulent groups. We demonstrated that ureaplasmas were able to persist within the amniotic fluid of pregnant sheep for 85 days, despite the host mounting an innate and adaptive immune response. Pro-inflammatory cytokines (interleukin (IL)-1β, IL-6 and IL-8) were elevated within the chorioamnion tissue of pregnant sheep from both the avirulent and virulent treatment groups, and this was significantly associated with the production of anti-ureaplasma IgG antibodies within maternal sera (p < 0.05). These findings suggested that the inability of the host immune response to eradicate ureaplasmas from the amniotic cavity may be due to continual size variation of MBA surface-exposed epitopes. Taken together, these data confirm that ureaplasmas are able to cause long-term in utero infections in a sheep model, despite standard antimicrobial treatment and the development of a host immune response.
The overall findings of this PhD project suggest that ureaplasmas are able to cause chronic, intra-amniotic infections due to (i) the limited placental transfer of erythromycin, which prevents the accumulation of therapeutic concentrations within the amniotic fluid; (ii) the ability of ureaplasmas to undergo rapid selection and genetic variation in vivo, resulting in ureaplasma isolates with variable MICs to macrolide antimicrobials colonising the amniotic fluid and chorioamnion; and (iii) antigenic size variation of the MBA, which may prevent eradication of ureaplasmas by the host immune response and account for differences in neonatal outcomes. The outcomes of this program of study have improved our understanding of the biology and pathogenesis of this highly adapted microorganism.