437 results for Basic Reproduction Number


Relevance: 20.00%

Abstract:

The importance of the environment to the fulfilment of human rights is widely accepted at international law. What is less well-accepted is the proposition that we, as humans, possess rights to the environment beyond what is necessary to support our basic human needs. The suggestion that a human right to a healthy environment may be emerging at international law raises a number of theoretical and practical challenges for human rights law, with such challenges coming from both within and outside the human rights discourse. It is argued that human rights law can make a positive contribution to environmental protection, but the precise nature of the connection between the environment and human rights warrants more critical analysis. This short paper considers the different ways that the environment is conceptualised in international human rights law and analyses the proposition that a right to a healthy environment is emerging. It identifies some of the challenges which would need to be overcome before such a right could be recognised, including those which draw on the disciplines of deep ecology and earth jurisprudence.

Relevance: 20.00%

Abstract:

Quantifying spatial and/or temporal trends in environmental modelling data requires that measurements be taken at multiple sites. The number of sites and the duration of measurement at each site must be balanced against equipment costs and the availability of trained staff. The split panel design combines short measurement campaigns at multiple locations with continuous monitoring at reference sites [2]. Here we present a spatio-temporal model of ultrafine particle number concentration (PNC) recorded according to a split panel design. The model describes the temporal trends and background levels at each site. The data were measured as part of the “Ultrafine Particles from Transport Emissions and Child Health” (UPTECH) project, which aims to link air quality measurements, child health outcomes and a questionnaire on each child’s history and demographics. The UPTECH project involves measuring aerosol and particle counts and local meteorology at each of 25 primary schools for two weeks and at three long-term monitoring stations, as well as health outcomes for a cohort of students at each school [3].
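
As a hedged illustration of how such split panel data might be modelled, the sketch below fits a shared diurnal trend with site-specific background levels as random intercepts, using Python's statsmodels. The column names, input file and log transformation of PNC are assumptions for illustration, not details taken from the UPTECH analysis.

```python
# Minimal sketch of a split-panel spatio-temporal model: a shared diurnal
# trend plus site-specific background levels (random intercepts).
# Column names and the input file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per (site, hour): columns 'pnc', 'site', 'hour_of_day'.
df = pd.read_csv("pnc_panel.csv")          # hypothetical input file
df["log_pnc"] = np.log(df["pnc"])          # PNC is typically right-skewed

# Harmonic terms capture the diurnal cycle shared across sites;
# the random intercept absorbs each site's background level.
df["sin_t"] = np.sin(2 * np.pi * df["hour_of_day"] / 24)
df["cos_t"] = np.cos(2 * np.pi * df["hour_of_day"] / 24)

model = smf.mixedlm("log_pnc ~ sin_t + cos_t", df, groups=df["site"])
result = model.fit()
print(result.summary())
```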

Relevance: 20.00%

Abstract:

Most studies of in vitro fertilisation (IVF) outcomes use cycle-based data and fail to account for women who undergo repeated IVF cycles. The objective of this study was to examine the associations of the number of eggs collected (EC) and the percentage fertilised normally with women’s self-reported medical, personal and social histories. This study involved a cross-sectional survey of infertile women (aged 27-46 years) recruited from four privately-owned fertility clinics located in major cities of Australia. Regression modelling was used to estimate the mean EC and the mean percentage of eggs fertilised normally, adjusted for age at EC. Appropriate statistical methods were used to take account of repeated IVF cycles by the same women. Among the 121 participants who returned the survey and completed 286 IVF cycles, the mean age at EC was 35.2 years (SD 4.5). Women’s age at EC was strongly associated with the number of EC: <30.0 years, 11.7 EC; 30.0-<35.0 years, 10.6 EC; 35.0-<40.0 years, 7.3 EC; 40.0+ years, 8.1 EC; p<.0001. Prolonged use of oral contraceptives was associated with lower numbers of EC: never used, 14.6 EC; 0-2 years, 11.7 EC; 3-5 years, 8.5 EC; 6+ years, 8.2 EC; p=.04. Polycystic ovary syndrome (PCOS) was associated with more EC: PCOS, 11.5 EC; no PCOS, 8.3 EC; p=.01. Occupational exposures may be detrimental to normal fertilisation (percentage fertilised normally: professional roles, 58.8%; trade and service roles, 51.8%; manual and other roles, 63.3%; p=.02). In conclusion, women’s age remains the characteristic most strongly associated with EC, but not with the percentage of eggs fertilised normally.
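
The abstract does not name the method used to account for repeated cycles; a generalized estimating equation (GEE) clustered on woman ID is one standard choice. The sketch below shows that approach in Python with hypothetical column names; it is an illustrative stand-in, not the authors' exact analysis.

```python
# Sketch of one standard way to handle repeated IVF cycles per woman:
# a GEE with an exchangeable working correlation, clustering on woman ID.
# Column names are hypothetical; the abstract does not specify the method.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

cycles = pd.read_csv("ivf_cycles.csv")   # hypothetical: one row per IVF cycle

model = smf.gee(
    "eggs_collected ~ age_at_ec + oc_years + pcos",  # adjusted for age etc.
    groups="woman_id",                   # repeated cycles by the same woman
    data=cycles,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Poisson(),        # egg counts are non-negative integers
)
print(model.fit().summary())
```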

Relevance: 20.00%

Abstract:

Identifying, modelling and documenting business processes usually requires the collaboration of many stakeholders, who may be spread across companies in inter-organizational settings. While modern process modelling technologies are beginning to provide a number of features to support remote collaboration, they lack support for the visual cues used in co-located collaboration. In this paper, we examine the importance of visual cues for collaboration tasks in collaborative process modelling. Based on this analysis, we present a prototype 3D virtual world process modelling tool that supports a number of visual cues to facilitate remote collaborative process model creation and validation. We then report on a preliminary analysis of the technology and conclude by describing the future direction of our research with regard to the theoretical contributions expected from the evaluation of the tool.

Relevance: 20.00%

Abstract:

ILLITERACY is now increasingly recognised as a serious social problem. UNESCO defines literacy as follows: "A person is literate when he has acquired the essential knowledge and skills which enable him to engage in all those activities in which literacy is required for effective functioning in his group and community." This is, in fact, seeing the problem in terms of functional literacy. As the demands of an increasingly industrial society grow, more and more people who are functionally illiterate are appearing. Many do not have the functional skills required to enable them to apply for a job. This inability to obtain work is common among clients of the probation service. Literacy has become so important in our society that being unable to read and write causes great feelings of isolation, of being different and inferior, which often lead the illiterate to join a group where this deficiency is unknown and where he can gain some status. This is often a delinquent group.

Relevance: 20.00%

Abstract:

There is significant toxicological evidence of the effects of ultrafine particles (<100 nm) on human health (WHO 2005). Studies show that the number concentration of particles has been associated with adverse human health effects (Englert 2004). This work is part of a major study called ‘Ultrafine Particles from Traffic Emissions and Children’s Health’ (UPTECH), which seeks to determine the effect of exposure to traffic-related ultrafine particles on children’s health in schools (http://www.ilaqh.qut.edu.au/Misc/UPTECH%20Home.htm). The main aims of this analysis are to quantify the spatial variation of particle number concentration (PNC) in a microscale environment and to identify the main affecting parameters and their contribution levels.
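
As a simple, hedged illustration of quantifying spatial variation, the sketch below computes the coefficient of variation of site-mean PNC across sampling sites at a school. The input file and column names are hypothetical, and this generic statistic is not necessarily the measure used in the UPTECH analysis.

```python
# One simple way to quantify spatial variation in PNC across sampling
# sites: the coefficient of variation of site means. Generic illustration
# only; not the specific method of the UPTECH analysis.
import pandas as pd

obs = pd.read_csv("school_pnc.csv")       # hypothetical: site, timestamp, pnc
site_means = obs.groupby("site")["pnc"].mean()
cv = site_means.std() / site_means.mean()
print(f"Spatial coefficient of variation: {cv:.2f}")
```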

Relevance: 20.00%

Abstract:

Traffic safety studies demand more than current micro-simulation models can provide, as these models presume that all drivers of motor vehicles exhibit safe behaviours. Several car-following models are used in various micro-simulation models. This research compares the capabilities of mainstream car-following models to emulate precise driver behaviour parameters such as headways and Time to Collision (TTC). The comparison first illustrates which model is more robust in reproducing these metrics. Second, a series of sensitivity tests further explores the behaviour of each model. Based on the outcomes of these two steps, a modified structure and parameter adjustment is proposed for each car-following model to simulate more realistic vehicle movements, particularly headways and TTC below a certain critical threshold. NGSIM vehicle trajectory data are used to evaluate the modified models’ performance in assessing critical safety events within traffic flow. The simulation test outcomes indicate that the proposed modified models reproduce the frequency of critical TTC events better than the generic models, while the improvement in headways is not significant. The outcome of this paper facilitates traffic safety assessment using microscopic simulation.
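
For readers unfamiliar with the two safety metrics, the sketch below computes time headway and Time to Collision for a leader-follower pair from NGSIM-style trajectory columns. The column names and the 1.5 s critical TTC threshold are illustrative assumptions, not values taken from the paper.

```python
# Sketch of computing time headway and Time to Collision (TTC) for a
# follower-leader pair from trajectory data (NGSIM-style columns).
import pandas as pd

# Hypothetical columns: t, x_lead, v_lead, x_foll, v_foll, len_lead.
traj = pd.read_csv("trajectories.csv")

gap = traj["x_lead"] - traj["x_foll"] - traj["len_lead"]   # bumper-to-bumper gap
closing_speed = traj["v_foll"] - traj["v_lead"]

traj["headway"] = gap / traj["v_foll"]                     # time headway (s)
# TTC is only defined while the follower is closing in on the leader.
traj["ttc"] = gap.where(closing_speed > 0) / closing_speed

critical = traj[traj["ttc"] < 1.5]        # candidate critical safety events
print(f"{len(critical)} critical TTC events out of {len(traj)} records")
```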

Relevance: 20.00%

Abstract:

The number of internet users in Australia has been steadily increasing, with over 10.9 million people currently subscribed to an internet provider (ABS, 2011). Over the past year, the most avid users of the internet were 15-24 year olds, with approximately 95% accessing the internet on a regular basis (ABS, Social Trends, 2011). While the internet, in particular Web 2.0, has been described as fundamental to higher education students, social and leisure internet tools are also increasingly being used by these students to generate and maintain their social and professional networks and interactions (Duffy & Bruns, 2006). Rapid technological advancements have enabled greater and faster access to information for learning and education (Hemmi et al., 2009; Glassman & Kang, 2011). We therefore sought to integrate interactive, online social media into the assessment profile of a Public Health undergraduate cohort at the Queensland University of Technology (QUT). The aim of this exercise was to engage undergraduate students in developing and showcasing their research on a range of complex, contemporary health issues within the online forum of Wikispaces, for review and critique by their peers. We applied Bandura’s Social Learning Theory (SLT) to analyse the interactive processes through which students developed deeper and more sustained learning, and through which their overall academic writing standards were enriched. This paper outlines the assessment task and the students’ feedback on their learning outcomes in relation to the Attentional, Retentional, Motor Reproduction, and Motivational Processes outlined by Bandura in SLT. We conceptualise the findings in a theoretical model and discuss the implications of this approach within the broader tertiary environment.

Relevance: 20.00%

Abstract:

The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment, represented by parameters such as the probability of detection and penalties in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model-building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion affects the agent’s ability to smooth consumption by creating two states of nature, in which the agent is either ‘caught’ or ‘not caught’, it is possible that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model-building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and in parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
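
A minimal sketch of the 'evade or not' choice can make the consumption-smoothing intuition concrete: with risk-averse utility, the sure payoff of compliance can beat the expected payoff of optimal under-reporting. All functional forms and parameter values below are illustrative assumptions, not the thesis's calibration.

```python
# Minimal sketch of the 'evade or not' choice: an agent compares the sure
# utility of full compliance with the expected utility of optimal
# under-reporting, given detection probability p and a penalty applied to
# the evaded tax. All parameter values are purely illustrative.
import numpy as np

def crra(c, gamma=2.0):
    """Constant relative risk aversion utility."""
    return np.log(c) if gamma == 1.0 else c ** (1 - gamma) / (1 - gamma)

def evade_or_not(y=100.0, tau=0.3, p=0.05, penalty=2.0, gamma=2.0):
    # Utility under certainty: report all income.
    u_comply = crra(y * (1 - tau), gamma)

    # Expected utility for each candidate fraction of income reported.
    report = np.linspace(0.0, 1.0, 1001)
    c_not_caught = y - tau * report * y                        # evasion succeeds
    c_caught = y - tau * y - penalty * tau * (1 - report) * y  # fine on evaded tax
    eu = (1 - p) * crra(c_not_caught, gamma) + p * crra(c_caught, gamma)

    best = eu.argmax()
    return ("evade", report[best]) if eu[best] > u_comply else ("comply", 1.0)

print(evade_or_not())   # chosen strategy and optimal reported fraction
```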

Relevance: 20.00%

Abstract:

Motor unit number estimation (MUNE) is a method which aims to provide a quantitative indicator of the progression of diseases that lead to loss of motor units, such as motor neurone disease. However, the development of a reliable, repeatable and fast real-time MUNE method has hitherto proved elusive. Ridall et al. (2007) implement a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to produce a posterior distribution for the number of motor units, using a Bayesian hierarchical model that takes into account biological information about motor unit activation. However, we find that the approach can be unreliable for some datasets, since it can suffer from poor cross-dimensional mixing. Here we focus on improved inference by marginalising over latent variables to create the likelihood. In particular, we explore how this can improve the RJMCMC mixing and investigate alternative approaches that utilise the likelihood (e.g. DIC (Spiegelhalter et al., 2002)). For this model, the marginalisation requires, for larger numbers of motor units, an intractable summation over all combinations of a set of latent binary variables whose joint sample space increases exponentially with the number of motor units. We provide a tractable and accurate approximation for this quantity and also investigate simulation approaches incorporated into RJMCMC using the results of Andrieu and Roberts (2009).
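
To see why the exact marginalisation blows up, the sketch below sums a toy likelihood over all 2^n configurations of n binary firing indicators. The Gaussian observation model and parameter values are illustrative assumptions, not the model of Ridall et al. (2007).

```python
# Sketch of the exact marginalisation described above: summing a joint
# density over all 2**n configurations of n binary firing indicators.
# The toy likelihood is illustrative; it only shows why the exact sum
# becomes intractable as the number of motor units n grows.
import itertools
import numpy as np
from scipy.stats import norm

def marginal_likelihood(y, fire_prob, unit_amps, sigma=0.1):
    """P(y) = sum over all firing patterns z of P(y | z) P(z)."""
    n = len(unit_amps)
    total = 0.0
    for z in itertools.product([0, 1], repeat=n):   # 2**n terms
        z = np.array(z)
        p_z = np.prod(np.where(z, fire_prob, 1 - fire_prob))
        mean = z @ unit_amps                        # summed unit amplitudes
        total += p_z * norm.pdf(y, loc=mean, scale=sigma)
    return total

# Feasible for n = 10 (1024 terms), hopeless for n = 50 (~1e15 terms).
print(marginal_likelihood(1.2, fire_prob=0.6, unit_amps=np.full(10, 0.25)))
```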

Relevance: 20.00%

Abstract:

Macrophage inhibitory cytokine-1 (MIC-1/GDF15), a divergent member of the TGF-β superfamily, is over-expressed by many common cancers, including those of the prostate (PCa), and its expression is linked to cancer outcome. We have evaluated the effect of MIC-1/GDF15 overexpression on PCa development and spread in the TRAMP transgenic model of spontaneous prostate cancer. TRAMP mice were crossed with MIC-1/GDF15 overexpressing mice (MIC-1fms) to produce syngeneic TRAMPfmsmic-1 mice. Survival rate, prostate tumor size, histopathological grades and extent of distant organ metastases were compared. Metastasis of TC1-T5, an androgen-independent TRAMP cell line that lacks MIC-1/GDF15 expression, was compared by injecting the cells intravenously into MIC-1fms and syngeneic C57BL/6 mice. Whilst TRAMPfmsmic-1 mice survived on average 7.4 weeks longer, had significantly smaller genitourinary (GU) tumors and lower PCa histopathological grades than TRAMP mice, more of these mice developed distant organ metastases. Additionally, a higher number of TC1-T5 lung tumor colonies were observed in MIC-1fms mice than in syngeneic WT C57BL/6 mice. Our studies strongly suggest that MIC-1/GDF15 has complex actions on tumor behavior: it limits local tumor growth but may, with advancing disease, promote metastases. As MIC-1/GDF15 is induced by all cancer treatments, and metastasis is the major cause of cancer treatment failure and cancer deaths, these results, if applicable to humans, may have a direct impact on patient care.

Relevance: 20.00%

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and corresponding promoter strength. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in ‘moderately’ conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in the false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled ‘regulatory trees’, inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to ‘hardware’, the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to ‘software’. In this context, we explored the ‘pan-regulatory network’ for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the ‘core regulatory set’, and interactions found only in a subset of the genomes explored, the ‘sub-regulatory set’. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity; and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
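
As a hedged illustration of the spectrum kernel mentioned above, the sketch below maps DNA sequences to k-mer count vectors and forms the Gram matrix as their dot products, which could then feed an SVM with a precomputed kernel. The sequences and the choice k=3 are illustrative.

```python
# Minimal sketch of the k-spectrum kernel used for TFBS classification:
# each sequence is mapped to counts of its k-mers, and the kernel is the
# dot product of those count vectors. The Gram matrix could be passed to
# an SVM (e.g. scikit-learn's SVC with kernel='precomputed').
from collections import Counter
from itertools import product
import numpy as np

def spectrum_features(seq, k=3):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    return np.array([counts[m] for m in kmers], dtype=float)

def spectrum_kernel(seqs_a, seqs_b, k=3):
    fa = np.array([spectrum_features(s, k) for s in seqs_a])
    fb = np.array([spectrum_features(s, k) for s in seqs_b])
    return fa @ fb.T

train = ["ACGTGCA", "TTGACAT", "ACGTGCT", "GGGCCCA"]   # toy candidate sites
print(spectrum_kernel(train, train))                   # Gram matrix
```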

Relevance: 20.00%

Abstract:

Background: In the Australian higher education sector, higher productivity from allied health clinical education placements is currently a contested issue. This paper reports the results of a study that investigated output changes associated with occupational therapy and nutrition/dietetics clinical education placements in Queensland, Australia. Supervisors’ and students’ time use during placements, and how this changes for supervisors compared to when students are not present in the workplace, is also presented. Methodology/Principal Findings: A cohort design was used with students from four Queensland universities and their supervisors employed by Queensland Health. There was an increasing trend in the number of occasions of service delivered when students were present, and a statistically significant increase in the daily mean length of occasions of service delivered during the placement compared to pre-placement levels. For project-based placements that were not directly involved in patient care, supervisors’ project activity time decreased during placements, with students spending considerably more time on project activities. Conclusions/Significance: A novel method for estimating productivity and time use changes during clinical education programs for allied health disciplines has been applied. During clinical education placements there was a net increase in outputs, suggesting supervisors engage in longer consultations with patients for the purpose of training students while maintaining patient numbers; other activities are reduced. These data are shown here for the first time and form a good basis for future assessments of the economic impact of student placements for allied health disciplines.

Relevance: 20.00%

Abstract:

A coupled SPH-DEM two-dimensional (2-D) micro-scale single-cell model is developed to predict basic cell-level shrinkage effects of apple parenchyma cells during air drying. In this newly developed drying model, Smoothed Particle Hydrodynamics (SPH) is used to model the low-Reynolds-number fluid motion of the cell protoplasm, and a Discrete Element Method (DEM) is employed to simulate the polymer-like cell wall. Simulation results agree reasonably with published experimental drying results on cellular shrinkage properties such as cell area, diameter and perimeter. These preliminary results indicate that the model is effective for the modelling and simulation of apple parenchyma cells during air drying.
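
As a toy illustration of the two ingredients being coupled, the sketch below implements a standard 2-D cubic-spline SPH kernel and a linear spring force between adjacent wall particles. The constants are illustrative, not the paper's calibrated values, and a full SPH-DEM coupling involves much more (pressure, viscosity, contact damping).

```python
# Toy sketch of the two ingredients the model couples: an SPH smoothing
# kernel for the intracellular fluid and a DEM spring force for
# neighbouring cell-wall particles. All constants are illustrative.
import numpy as np

def cubic_spline_w(r, h):
    """Standard 2-D cubic-spline SPH kernel (Monaghan form)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)   # 2-D normalisation constant
    if q < 1.0:
        return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q) ** 3
    return 0.0

def wall_spring_force(x_i, x_j, rest_len, k_s=50.0):
    """Linear DEM spring between adjacent wall particles."""
    d = x_j - x_i
    dist = np.linalg.norm(d)
    return k_s * (dist - rest_len) * d / dist   # restores the rest length

print(cubic_spline_w(0.5, h=1.0))
print(wall_spring_force(np.zeros(2), np.array([1.2, 0.0]), rest_len=1.0))
```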

Relevance: 20.00%

Abstract:

Deciding the appropriate population size and number of islands for distributed island-model genetic algorithms is often critical to the algorithm’s success. This paper outlines a method that automatically searches for good combinations of island population sizes and the number of islands. The method is based on a race between competing parameter sets and collaborative seeding of new parameter sets. It is applicable to any problem, and makes distributed genetic algorithms easier to use by reducing the number of user-set parameters. The experimental results show that the proposed method robustly and reliably finds population and island settings comparable to those found with traditional trial-and-error approaches.
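
A minimal sketch of the racing-plus-seeding idea follows, under the assumption of a hypothetical run_ga_burst(pop_size, n_islands) helper that returns a cost for a short GA run: candidates race, losers are dropped, and survivors seed perturbed newcomers. This is a generic rendering of the approach, not the paper's exact procedure.

```python
# Sketch of racing between competing (population size, island count)
# settings with collaborative seeding of new candidates from survivors.
# run_ga_burst is a hypothetical helper (lower return value = better).
import random

def race(candidates, run_ga_burst, rounds=5, survivors=2):
    """candidates: list of (pop_size, n_islands) tuples."""
    for _ in range(rounds):
        scored = sorted(candidates, key=lambda c: run_ga_burst(*c))[:survivors]
        # Seed new candidates by perturbing the survivors.
        candidates = scored + [
            (max(10, p + random.randint(-20, 20)),
             max(1, n + random.randint(-1, 1)))
            for p, n in scored
        ]
    return candidates[0]   # best evaluated candidate from the final round

# Example with a dummy objective favouring ~8 islands of ~100 individuals.
best = race([(50, 2), (100, 8), (200, 4), (400, 1)],
            run_ga_burst=lambda p, n: abs(p - 100) + 10 * abs(n - 8))
print(best)
```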