925 results for Impact analysis
Abstract:
This is a methodological paper describing when and how manifest items dropped from a latent construct measurement model (e.g., factor analysis) can be retained for additional analysis. Presented are protocols for assessing items for retention in the measurement model, for evaluating dropped items as potential variables separate from the latent construct, and for post hoc analyses that can be conducted using all retained (manifest or latent) variables. The protocols are then applied to data relating to the impact of the NAPLAN test. The variables examined are teachers' achievement goal orientations and teachers' perceptions of the impact of the test on curriculum and pedagogy. It is suggested that five attributes be considered before retaining dropped manifest items for additional analyses. (1) Items can be retained when employed in service of an established or hypothesized theoretical model. (2) Items should only be retained if sufficient variance is present in the data set. (3) Items can be retained when they provide a rational segregation of the data set into subsamples (e.g., a consensus measure). (4) The value of retaining items can be assessed using latent class analysis or latent mean analysis. (5) Items should be retained only when post hoc analyses with these items produce significant and substantive results. These suggested exploratory strategies are presented so that other researchers using survey instruments might explore their data in similar and more innovative ways. Finally, suggestions for future use are provided.
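As an illustration of attributes (2) and (3) above, the following is a minimal Python sketch using entirely hypothetical Likert-scale data and an illustrative variance cut-off, not anything prescribed by the paper:

```python
# Minimal sketch (hypothetical data and thresholds): screening a dropped
# survey item for attribute (2), sufficient variance, and attribute (3),
# a rational segregation of the sample into subgroups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical 5-point Likert responses to an item dropped from the factor model.
dropped_item = pd.Series(rng.integers(1, 6, size=300), name="item_17")

# Attribute (2): retain only if the item shows enough spread.
variance = dropped_item.var(ddof=1)
sufficient_variance = variance > 0.5          # illustrative cut-off, not from the paper

# Attribute (3): use the item to segregate the sample into subgroups
# (e.g., low vs. high endorsement) for later post hoc comparisons.
subgroup = pd.cut(dropped_item, bins=[0, 2, 3, 5], labels=["low", "neutral", "high"])

print(f"variance = {variance:.2f}, retain for further analysis: {sufficient_variance}")
print(subgroup.value_counts())
```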
Abstract:
This paper reports preliminary findings of a survey of in-service teachers in WA and SA conducted in 2012. Participants completed an online survey open to all teachers in WA and SA. The survey ran for three months from April to June 2012. One section of the survey asked teachers to report their perceptions of the impact that NAPLAN has had on the curriculum and pedagogy of their classroom and school. Two principal research questions were addressed in this preliminary analysis. First, what are teachers' perceptions of the effects of NAPLAN on curriculum and pedagogy? Second, are there any interaction effects between gender, socioeconomic status, location and school system on teachers' perceptions? Statistical analyses employed one- and two-way MANOVA to assess main effects and interaction effects on teachers' global perceptions. These were followed by a series of exploratory one- and two-way ANOVAs of specific survey items to suggest potential sources for differences among teachers from different socioeconomic regions, states and systems. Teachers report that they are either choosing or being instructed to teach to the test, that this results in less time being spent on other curriculum areas, and that these effects contribute negatively to the engagement of students. This largely agrees with a body of international research suggesting that high-stakes literacy and numeracy tests often result in unintended consequences such as a narrow curriculum focus (Au, 2007), a return to teacher-centred instruction (Barrett, 2009) and a decrease in motivation (Ryan & Weinstein, 2009). Preliminary results from early survey respondents suggest there is a relationship between participant responses to the effect of NAPLAN on curriculum and pedagogy and the State in which the teacher taught, their perceptions of the socioeconomic status of the school, and the school system in which they were employed (State, Catholic, and Independent).
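The exploratory item-level follow-up described above could look roughly like the following sketch, which assumes a hypothetical data file (naplan_survey.csv) and hypothetical column names rather than the authors' actual variables:

```python
# Minimal sketch (hypothetical variable names): a two-way ANOVA of one survey
# item by State and school system, the kind of exploratory follow-up the
# abstract describes after the MANOVA.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# df is assumed to hold one row per teacher with columns:
#   teach_to_test : 1-5 Likert response to a NAPLAN impact item
#   state         : "WA" or "SA"
#   system        : "State", "Catholic" or "Independent"
df = pd.read_csv("naplan_survey.csv")          # hypothetical file name

model = ols("teach_to_test ~ C(state) * C(system)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)                             # main effects and the interaction term
```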
Abstract:
Scenario planning is a method widely used by strategic planners to address uncertainty about the future. However, current methods either fail to address the future behaviour and impact of stakeholders or they treat the role of stakeholders informally. We present a practical decision-analysis-based methodology for analysing stakeholder objectives and likely behaviour within contested unfolding futures. We address issues of power, interest, and commitment to achieve desired outcomes across a broad stakeholder constituency. Drawing on frameworks for corporate social responsibility (CSR), we provide an illustrative example of our approach to analyse a complex contested issue that crosses geographic, organisational and cultural boundaries. Whilst strategies can be developed by individual organisations that consider the interests of others – for example in consideration of an organisation's CSR agenda – we show that our augmentation of scenario method provides a further, nuanced, analysis of the power and objectives of all concerned stakeholders across a variety of unfolding futures. The resulting modelling framework is intended to yield insights and hence more informed decision making by individual stakeholders or regulators.
Abstract:
Background: Providing motivationally supportive physical education experiences for learners is crucial, since empirical evidence in sport and physical education research has associated intrinsic motivation with positive educational outcomes. Self-determination theory (SDT) provides a valuable framework for examining motivationally supportive physical education experiences through satisfaction of three basic psychological needs: autonomy, competence and relatedness. However, the capacity of the prescriptive teaching philosophy of the dominant traditional physical education teaching approach to effectively satisfy the psychological needs of students to engage in physical education has been questioned. The constraints-led approach (CLA) has been proposed as a viable alternative teaching approach that can effectively support students' self-motivated engagement in physical education. Purpose: We sought to investigate whether adopting the learning design and delivery of the CLA, guided by key pedagogical principles of nonlinear pedagogy (NLP), would address the basic psychological needs of learners, resulting in higher self-reported levels of intrinsic motivation. The claim was investigated using action research. The teacher/researcher delivered two lessons aimed at developing hurdling skills: one taught using the CLA and the other using the traditional approach. Participants and Setting: The main participant for this study was the primary researcher and lead author, who is a PETE educator with extensive physical education teaching experience. A sample of 54 pre-service PETE students undertaking a compulsory second-year practical unit at an Australian university was recruited for the study, consisting of an equal number of volunteers from each of two practical classes. A repeated-measures experimental design was adopted, with both practical class groups experiencing both teaching approaches in a counterbalanced order. Data collection and analysis: Immediately after participation in each lesson, participants completed a questionnaire consisting of 22 items chosen from validated motivation measures of basic psychological needs and indices of intrinsic motivation, enjoyment and effort. All questionnaire responses were indicated on a 7-point Likert scale. A two-tailed, paired-samples t-test was used to compare the groups' motivation subscale mean scores for each teaching approach. The size of the effect for each group was calculated using Cohen's d. To determine whether any significant differences between the subscale mean scores of the two groups were due to an order effect, a two-tailed, independent-samples t-test was used. Findings: Participants reported substantially higher levels of self-determination and intrinsic motivation during the CLA hurdles lesson than during the traditional hurdles lesson. Both groups reported significantly higher motivation subscale mean scores for competence, relatedness, autonomy, enjoyment and effort after experiencing the CLA than after experiencing the traditional approach. This significant difference was evident regardless of the order in which each teaching approach was experienced. Conclusion: The theoretically based pedagogical principles of NLP that inform the learning design and delivery of the CLA may provide teachers and coaches with tools to develop more functional pedagogical climates, which result in students exhibiting more intrinsically motivated behaviours during learning.
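A minimal sketch of the paired-samples t-test and Cohen's d comparison described above, using hypothetical subscale scores (not the study's data) and one common paired-data convention for d:

```python
# Minimal sketch (hypothetical arrays): the paired-samples t-test and Cohen's d
# used to compare motivation subscale scores after the CLA and traditional lessons.
import numpy as np
from scipy import stats

# Hypothetical per-participant subscale means (e.g., intrinsic motivation, 1-7 scale).
cla = np.array([5.8, 6.1, 5.5, 6.4, 5.9, 6.0, 5.7, 6.2])
traditional = np.array([4.9, 5.2, 4.7, 5.6, 5.0, 5.1, 4.8, 5.3])

t_stat, p_value = stats.ttest_rel(cla, traditional)   # two-tailed paired t-test

# Cohen's d for paired data: mean difference over the SD of the differences
# (one common convention; the study's exact formula may differ).
diff = cla - traditional
d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {d:.2f}")
```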
Abstract:
This study investigates the impact of floods on property values using the hedonic property price approach and other relevant econometric techniques. The main objectives of this research are to investigate (1) the impact of the release of flood-risk information and of actual floods on property values, (2) the temporal behaviour of negative impacts, (3) property submarket behaviour, (4) the behaviour of flood-affected vs flood non-affected areas, and (5) property market efficiency. The thesis expands on the existing literature on natural disasters by applying a range of econometric techniques. Findings of this research are useful for policy decision-making aimed at minimizing the negative impacts of natural hazards on property markets. The thesis findings also provide a better framework for decision-making in the property insurance market. The methodological improvements made in the thesis will be invaluable for analysing the impacts of natural hazards elsewhere.
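A hedonic flood-impact regression of the kind described above might be sketched as follows; the file name, covariates and difference-in-differences structure are illustrative assumptions, not the thesis's actual specification:

```python
# Minimal sketch (hypothetical variable names): a hedonic price regression with a
# flood indicator, the basic form underlying the approach described in the thesis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per sale with columns such as:
#   price, land_area, bedrooms, dist_cbd_km, flood_affected (0/1), post_flood (0/1)
df = pd.read_csv("property_sales.csv")         # hypothetical file name
df["log_price"] = np.log(df["price"])

# Difference-in-differences style hedonic model: the interaction term captures the
# change in prices of flood-affected properties after the flood / information release.
model = smf.ols(
    "log_price ~ land_area + bedrooms + dist_cbd_km + flood_affected * post_flood",
    data=df,
).fit(cov_type="HC1")                          # heteroskedasticity-robust errors
print(model.summary())
```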
Abstract:
This research investigated the use of DNA fingerprinting to characterise the bacterium Streptococcus pneumoniae, or pneumococcus, and hence gain insight into the development of new vaccines or antibiotics. Different bacterial DNA fingerprinting methods were studied, and a novel method was developed and validated which characterises the different cell coatings that pneumococci produce. This method was used to study the epidemiology of pneumococci in Queensland before and after the introduction of the current pneumococcal vaccine. The study demonstrated that pneumococcal disease is highly prevalent in children under four years, that the bacterium can 'switch' its cell coating to evade the vaccine, and that some DNA fingerprinting methods are more discriminatory than others. This has an impact on understanding which strains are more prone to cause invasive disease. The research findings have been published in high-impact, internationally refereed journals.
Abstract:
Objectives: To investigate whether a sudden temperature change between neighboring days has a significant impact on mortality. Methods: A Poisson generalized linear regression model combined with a distributed lag non-linear model was used to estimate the association of temperature change between neighboring days with mortality in a subtropical Chinese city during 2008–2012. Temperature change was calculated as the current day's temperature minus the previous day's temperature. Results: A significant effect of temperature change between neighboring days on mortality was observed. Temperature increase was significantly associated with elevated mortality from non-accidental and cardiovascular diseases, while temperature decrease had a protective effect on non-accidental and cardiovascular mortality. Males and people aged 65 years or older appeared to be more vulnerable to the impact of temperature change. Conclusions: Temperature increase between neighboring days has a significant adverse impact on mortality. Further health mitigation strategies in response to climate change should take into account temperature variation between neighboring days.
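A minimal sketch of the core model, computing temperature change between neighboring days and fitting a Poisson regression of daily deaths on it; the file and column names are hypothetical, and the distributed lag non-linear basis used in the paper is omitted for brevity:

```python
# Minimal sketch (hypothetical data file): Poisson regression of daily mortality on
# temperature change between neighboring days, without the DLNM lag structure.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Daily series assumed to contain columns: date, deaths, mean_temp
df = pd.read_csv("daily_mortality.csv", parse_dates=["date"]).sort_values("date")

# Temperature change = current day's temperature minus the previous day's.
df["temp_change"] = df["mean_temp"].diff()
df["dow"] = df["date"].dt.dayofweek            # simple confounder control
df = df.dropna()

model = smf.glm(
    "deaths ~ temp_change + mean_temp + C(dow)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())                          # exp(coef) gives the relative risk
```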
Abstract:
Schizophrenia is an idiopathic mental disorder with a heritable component and a substantial public health impact. We conducted a multi-stage genome-wide association study (GWAS) for schizophrenia beginning with a Swedish national sample (5,001 cases and 6,243 controls) followed by meta-analysis with previous schizophrenia GWAS (8,832 cases and 12,067 controls) and finally by replication of SNPs in 168 genomic regions in independent samples (7,413 cases, 19,762 controls and 581 parent-offspring trios). We identified 22 loci associated at genome-wide significance; 13 of these are new, and 1 was previously implicated in bipolar disorder. Examination of candidate genes at these loci suggests the involvement of neuronal calcium signaling. We estimate that 8,300 independent, mostly common SNPs (95% credible interval of 6,300-10,200 SNPs) contribute to risk for schizophrenia and that these collectively account for at least 32% of the variance in liability. Common genetic variation has an important role in the etiology of schizophrenia, and larger studies will allow more detailed understanding of this disorder.
Abstract:
Objectives: Obesity is a disease in which excess body fat adversely affects health. It is therefore prudent to base the diagnosis of obesity on a measure of percentage body fat. Body composition of a group of Australian children of Sri Lankan origin was studied to evaluate the applicability of some bedside techniques for measuring percentage body fat. Methods: Height (H) and weight (W) were measured and BMI (W/H²) calculated. Bioelectrical impedance analysis (BIA) was performed using the tetrapolar technique with an 800 μA current at 50 Hz frequency. Total body water, determined by deuterium dilution, was used as the reference method; fat-free mass, and hence fat mass (FM), were derived using age- and gender-specific constants. Percentage FM was estimated using four predictive equations based on BIA and anthropometric measurements. Results: Twenty-seven boys and 15 girls were studied, with mean ages of 9.1 years and 9.6 years, respectively. Girls had a significantly higher FM than boys. The mean percentage FM of boys (22.9 ± 8.7%) was above the limit for obesity, and for girls (29.0 ± 6.0%) it was just below the cut-off. BMI was comparatively low. All but the BIA equation in boys underestimated the percentage FM. The impedance index and weight showed a strong association with total body water (r² = 0.96, P < 0.001). Except for BIA in boys, all other techniques underdiagnosed obesity. Conclusions: Sri Lankan Australian children appear to have a high percentage of body fat with a low BMI, and some of the available indirect techniques are not helpful in the assessment of body composition. Ethnic and/or population-specific predictive equations therefore need to be developed for the assessment of body composition using indirect methods such as BIA or anthropometry, especially in a multicultural society.
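The underlying calculations can be sketched as follows, using hypothetical values and an assumed hydration constant rather than the study's age- and gender-specific constants:

```python
# Minimal sketch (hypothetical values): BMI = weight / height^2 and fat mass derived
# from total body water via a hydration constant. The constant below is illustrative.
weight_kg = 30.0          # hypothetical child
height_m = 1.32
tbw_kg = 16.5             # total body water from deuterium dilution

bmi = weight_kg / height_m ** 2

hydration_fraction = 0.75                 # assumed constant, not the study's value
fat_free_mass = tbw_kg / hydration_fraction
fat_mass = weight_kg - fat_free_mass
percent_fat = 100 * fat_mass / weight_kg

print(f"BMI = {bmi:.1f} kg/m^2, fat mass = {fat_mass:.1f} kg, %FM = {percent_fat:.1f}%")
```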
Abstract:
Background: Genome-wide association studies have identified multiple genetic variants associated with prostate cancer risk which explain a substantial proportion of the familial relative risk. These variants can be used to stratify individuals by their risk of prostate cancer. Methods: We genotyped 25 prostate cancer susceptibility loci in 40,414 individuals and derived a polygenic risk score (PRS). We estimated empirical odds ratios (OR) for prostate cancer associated with different risk strata defined by the PRS and derived age-specific absolute risks of developing prostate cancer by PRS stratum and family history. Results: The prostate cancer risk for men in the top 1% of the PRS distribution was 30.6-fold (95% CI, 16.4-57.3) that of men in the bottom 1%, and 4.2-fold (95% CI, 3.2-5.5) that of men at the median risk. The absolute risk of prostate cancer by age 85 years was 65.8% for a man with family history in the top 1% of the PRS distribution, compared with 3.7% for a man in the bottom 1%. The PRS was only weakly correlated with serum PSA level (correlation = 0.09). Conclusions: Risk profiling can identify men at substantially increased or reduced risk of prostate cancer. The effect size, measured by OR per unit PRS, was higher in men at younger ages and in men with a family history of prostate cancer. Incorporating additional newly identified loci into the PRS should improve the predictive value of risk profiles. Impact: We demonstrate that risk profiling based on SNPs can identify men at substantially increased or reduced risk, which could have useful implications for targeted prevention and screening programs.
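A polygenic risk score of the kind described above is, at its simplest, a weighted sum of risk-allele counts; the following sketch uses simulated genotypes and illustrative weights, not the study's loci or effect sizes:

```python
# Minimal sketch (simulated data): a polygenic risk score as the weighted sum of
# risk-allele counts across susceptibility loci, stratified by percentile.
import numpy as np

rng = np.random.default_rng(1)
n_men, n_loci = 1000, 25
genotypes = rng.integers(0, 3, size=(n_men, n_loci))     # 0/1/2 risk alleles per locus
log_odds_ratios = rng.normal(0.10, 0.05, size=n_loci)    # per-allele weights (illustrative)

prs = genotypes @ log_odds_ratios                         # one score per man

# Stratify: compare the top 1% of the distribution with the rest.
top_1pct = prs >= np.percentile(prs, 99)
print(f"mean PRS top 1%: {prs[top_1pct].mean():.2f}, "
      f"mean PRS others: {prs[~top_1pct].mean():.2f}")
```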
Abstract:
Background: Tuberculosis still remains one of the largest killer infectious diseases, warranting the identification of newer targets and drugs. Identification and validation of appropriate targets for designing drugs are critical steps in drug discovery, and are at present major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of the targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data that are now available, computational approaches can be powerful means of obtaining short-lists of possible targets for further experimental validation. Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for survival of M. tuberculosis are first identified, followed by comparative genomics with the host, finally incorporating a novel structural analysis of the binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed. Conclusion: The pipeline developed provides a rational schema for drug target identification that is likely to have a high rate of success, which is expected to save enormous amounts of money, resources and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of the integrated approach used in our study, highlighting the importance of systems-level analyses in particular. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
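Flux balance analysis, one component of the pipeline described above, can be illustrated as a small linear programme on a toy three-reaction network; this is a generic sketch, not the targetTB implementation:

```python
# Minimal sketch (toy network): flux balance analysis as a linear programme,
# maximise a "biomass" flux subject to steady-state mass balance S @ v = 0 and
# capacity bounds on each reaction.
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 imports metabolite A, R2 converts A -> B, R3 drains B (biomass proxy).
S = np.array([[1, -1,  0],    # metabolite A
              [0,  1, -1]])   # metabolite B
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake limited to 10 units

c = np.array([0, 0, -1])      # linprog minimises, so maximise v3 via -v3
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)             # expected ~[10, 10, 10]
```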
Abstract:
Background: Calcification is commonly believed to be associated with cardiovascular disease burden. But whether or not calcifications have a negative effect on plaque vulnerability is still under debate. Methods and Results: Fatigue rupture analysis and the fatigue life were used to evaluate the rupture risk. An idealized baseline model containing no calcification was first built. Based on the baseline model, we investigated the influence of calcification on rupture path and fatigue life by adding a circular calcification and changing its location within the fibrous cap area. Results show that 84.0% of calcified cases increase the fatigue life, by up to 11.4%. For rupture paths 10D away from the calcification, the life change is negligible. Calcifications close to the lumen increase fatigue life more than those close to the lipid pool. Also, calcifications in the middle area of the fibrous cap increase fatigue life more than those in the shoulder area. Conclusion: Calcifications may play a positive role in plaque stability. The influence of a calcification exists only in a local area. Calcifications close to the lumen may be influenced more than those close to the lipid pool, and calcifications in the middle area of the fibrous cap are seemingly influenced more than those in the shoulder area.
Abstract:
Background: Increased biomechanical stresses within the abdominal aortic aneurysm (AAA) wall contribute to its rupture. Calcification and intraluminal thrombus are commonly found in AAAs, but the relationship between calcification/intraluminal thrombus and AAA wall stress is not completely described. Methods: Patient-specific three-dimensional AAA geometries were reconstructed from computed tomographic images of 20 patients. Structural analysis was performed to calculate the wall stresses of the 20 AAA models and of their altered models in which calcification or intraluminal thrombus was not considered. A nonlinear large-strain finite element method was used to compute the wall stress distribution. The relationships between wall stresses and volumes of calcification and intraluminal thrombus were sought. Results: Maximum stress was not correlated with the percentage of calcification, and was negatively correlated with the percentage of intraluminal thrombus (r = -0.56; P = .011). Exclusion of calcification from the analysis led to a significant decrease in maximum stress, by a median of 14% (range, 2%-27%; P < .01). When intraluminal thrombus was eliminated, maximum stress increased significantly, by a median of 24% (range, 5%-43%; P < .01). Conclusion: The presence of calcification increases AAA peak wall stress, suggesting that calcification decreases the biomechanical stability of AAA. In contrast, intraluminal thrombus reduces the maximum stress in AAA. Calcification and intraluminal thrombus should both be considered in the evaluation of wall stress for risk assessment of AAA rupture.
Abstract:
Rupture of vulnerable atheromatous plaque in the carotid and coronary arteries often leads to stroke and heart attack, respectively. The mechanism of blood flow and plaque rupture in stenotic arteries is still not fully understood. A three-dimensional rigid-wall model was solved under steady-state and unsteady conditions, assuming a time-varying inlet velocity profile, to investigate the relative importance of axial forces and pressure drops in arteries with asymmetric stenosis. Flow-structure interactions were investigated for the same geometry and the results were compared with those obtained with the corresponding 2D cross-section structural models. The Navier-Stokes equations were used as the governing equations for the fluid. The tube wall was assumed to be hyperelastic, homogeneous, isotropic and incompressible. The analysis showed that the three-dimensional behavior of velocity, pressure and wall shear stress is in general very different from that predicted by cross-section models. The pressure drop across the stenosis was found to be much higher than the shear stress. Therefore, pressure, rather than shear stress, may be the more important mechanical trigger for plaque rupture, although shear stress is closely related to plaque formation and progression.
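For reference, the governing equations mentioned above are, in their standard incompressible Newtonian form (the exact constitutive assumptions of the study may differ):

```latex
% Incompressible Navier-Stokes equations for a Newtonian fluid of density \rho
% and dynamic viscosity \mu, with velocity \mathbf{u} and pressure p:
\begin{align}
  \rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
    &= -\nabla p + \mu \nabla^{2}\mathbf{u}, \\
  \nabla\cdot\mathbf{u} &= 0.
\end{align}
```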
Abstract:
The Macroscopic Fundamental Diagram (MFD) relates space-mean density and flow. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and network-wide traffic state estimation utilising the MFD concept have been reported. Most previous works have utilised data from fixed sensors, such as inductive loops, to estimate the MFD, which can cause biased estimation in urban networks due to queue spillovers at intersections. To overcome this limitation, recent literature reports the use of trajectory data obtained from probe vehicles. However, these studies have been conducted using simulated datasets; few works have discussed the limitations of real datasets and their impact on variable estimation. This study compares two methods for estimating traffic state variables of signalised arterial sections: a method based on cumulative vehicle counts (CUPRITE), and one based on vehicle trajectories from taxi Global Positioning System (GPS) logs. The comparisons reveal some characteristics of the taxi trajectory data available in Brisbane, Australia. The current trajectory data have limitations in quantity (i.e., the penetration rate), due to which the traffic state variables tend to be underestimated. Nevertheless, the trajectory-based method successfully captures the features of traffic states, which suggests that trajectories from taxis can be a good estimator of network-wide traffic states.
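One common way to estimate section-level traffic state variables from probe trajectories is via Edie's generalised definitions; the sketch below uses hypothetical probe data and an assumed penetration rate, and is not the CUPRITE method or the study's exact procedure:

```python
# Minimal sketch (hypothetical probe data): Edie's generalised definitions applied to
# probe-vehicle trajectories on a section of length L over a period T. Scaling by the
# (unknown) penetration rate is the step the abstract identifies as the limitation.
L = 500.0          # section length [m]
T = 300.0          # observation period [s]

# (distance travelled in the section [m], time spent in the section [s]) per probe
probe_segments = [(480.0, 60.0), (500.0, 55.0), (350.0, 90.0)]

total_distance = sum(d for d, _ in probe_segments)
total_time = sum(t for _, t in probe_segments)

penetration_rate = 0.05                      # assumed share of probes in the traffic stream
flow = total_distance / (L * T) / penetration_rate       # [veh/s]
density = total_time / (L * T) / penetration_rate        # [veh/m]
space_mean_speed = total_distance / total_time            # [m/s]

print(f"flow = {flow*3600:.0f} veh/h, density = {density*1000:.1f} veh/km, "
      f"speed = {space_mean_speed*3.6:.1f} km/h")
```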