311 results for STATISTICAL METHODOLOGY
Abstract:
This report presents the final deliverable from the project titled ‘Conceptual and statistical framework for a water quality component of an integrated report card’, funded by the Marine and Tropical Sciences Research Facility (MTSRF; Project 3.7.7). The key management driver of this, and of a number of other MTSRF projects concerned with indicator development, is the requirement for state and federal government authorities and other stakeholders to provide robust assessments of the present ‘state’ or ‘health’ of regional ecosystems in the Great Barrier Reef (GBR) catchments and adjacent marine waters. An integrated report card format that encompasses both biophysical and socioeconomic factors is an appropriate framework through which to deliver these assessments and meet a variety of reporting requirements. It is now well recognised that a ‘report card’ format for environmental reporting is very effective for community and stakeholder communication and engagement, and can be a key driver in galvanising community and political commitment and action. Although a report card needs to be understandable by all levels of the community, it also needs to be underpinned by sound, quality-assured science. In this regard, this project set out to develop approaches to address the statistical issues that arise from the amalgamation or integration of sets of discrete indicators into a final score or assessment of the state of the system. In brief, the two main issues are (1) selecting, measuring and interpreting specific indicators that vary both in space and time, and (2) integrating a range of indicators in such a way as to provide a succinct but robust overview of the state of the system. Although there is considerable research on, and knowledge of, the use of indicators to inform the management of ecological, social and economic systems, methods for how best to integrate multiple disparate indicators remain poorly developed.
Therefore, the objectives of this project were to (i) focus on statistical approaches aimed at ensuring that estimates of individual indicators are as robust as possible, and (ii) present methods that can be used to report on the overall state of the system by integrating estimates of individual indicators. It was agreed at the outset that this project would focus on developing methods for a water quality report card. This was driven largely by the requirements of the Reef Water Quality Protection Plan (RWQPP) and led to strong partner engagement with the Reef Water Quality Partnership.
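The amalgamation problem described above can be illustrated with a minimal sketch: combining rescaled indicator scores into a single weighted score and letter grade. The indicator names, weights and grade cut-offs below are hypothetical, not those of the MTSRF framework.

```python
# Illustrative sketch: amalgamating discrete water quality indicator
# scores (each rescaled to 0-1) into one report card grade via a
# weighted average. All names, weights and cut-offs are hypothetical.

def report_card_score(scores, weights):
    """Weighted average of indicator scores; weights need not sum to 1."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

def grade(score):
    """Map a 0-1 score onto a simple A-E letter grade."""
    for cutoff, letter in [(0.8, "A"), (0.6, "B"), (0.4, "C"), (0.2, "D")]:
        if score >= cutoff:
            return letter
    return "E"

scores = {"chlorophyll_a": 0.7, "turbidity": 0.5, "nutrients": 0.9}
weights = {"chlorophyll_a": 1.0, "turbidity": 1.0, "nutrients": 2.0}
overall = report_card_score(scores, weights)
print(round(overall, 2), grade(overall))  # 0.75 B
```

A weighted average is only one of the aggregation rules the report discusses; alternatives (e.g. worst-indicator or median rules) change how robust the final grade is to a single poor indicator.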
Abstract:
Purpose – The purpose of this paper is to examine empirically an industry development paradox, using embryonic literature in the area of strategic supply chain management together with innovation management literature. This study seeks to understand how forming strategic supply chain relationships and developing strategic supply chain capability influence the beneficial supply chain outcomes expected from utilizing industry-led innovation, in the form of internet-based electronic business solutions, in the Australian beef industry. Findings should add valuable insights for both academics and practitioners in the fields of supply chain innovation management and strategic supply chain management, and expand on the current literature. Design/methodology/approach – This is a quantitative study comparing innovative and non-innovative supply chain operatives in the Australian beef industry, through factor analysis and structural equation modeling using PASW Statistics V18 and AMOS V18 to analyze survey data from 412 respondents from the Australian beef supply chain. Findings – Key findings are that both innovative and non-innovative supply chain operators regard supply chain synchronization as only a minor indicator of strategic supply chain capability, contrary to the literature, and they also indicate that strategic supply chain capability has only a minor influence in achieving beneficial outcomes from utilizing industry-led innovation. These results suggest a lack of coordination between supply chain operatives in the industry. They also suggest a lack of understanding of the benefits of developing a strategic supply chain management competence, particularly in relation to innovation agendas, and provide valuable insight into why an industry paradox exists in terms of the level of investment in industry-led innovation vs the level of corresponding benefit achieved.
Research limitations/implications – Results are not generalizable beyond the single agribusiness industry studied and the single research method employed. However, this provides opportunity for further agribusiness studies in this area, and also for studies using alternative methods, such as qualitative, in-depth analysis of these factors and their relationships, which may confirm these results or produce different ones. Further, this study empirically extends existing theoretical contributions and insights into the roles of strategic supply chain management and innovation management in improving supply chain, and ultimately industry, performance, while providing practical insights to supply chain practitioners in this and other similar agribusiness industries. Practical implications – These findings confirm results from a 2007 study (Ketchen et al., 2007) which suggests that supply chain practice and teaching need to take a strategic direction in the twenty-first century. To date, competence in supply chain management has been built up from functional and process orientations rather than from a strategic perspective. This study confirms that there is a need for more generalists who can integrate across various disciplines, particularly those who can understand and implement strategic supply chain management. Social implications – Possible social implications accrue through the development of responsible government policy in terms of industry supply chains. Strategic supply chain management and supply chain innovation management have impacts on the social fabric of nations through the sustainability of their industries, especially agribusiness industries, which deal with food safety and security. If supply chains are now the competitive weapon of nations, then funding innovation and managing supply chain competitiveness in global markets requires a strategic approach from everyone, not just the industry participants.
Originality/value – This is original empirical research, seeking to add value to the embryonic but important developing literature concerned with adopting a strategic approach to supply chain management. It also seeks to add to the existing literature in the area of innovation management, particularly through a greater understanding of the implications of nations developing industry-wide, industry-led innovation agendas, and their ramifications for industry supply chains.
Abstract:
While social media research has provided detailed cumulative analyses of selected social media platforms and content, especially Twitter, newer platforms, apps, and visual content have so far been less extensively studied. This paper proposes a methodology for studying Instagram activity, building on established methods for Twitter research by initially examining hashtags, a structural feature common to both platforms. In doing so, we outline methodological challenges to studying Instagram, especially in comparison to Twitter. Finally, we address critical questions around ethics and privacy for social media users and researchers alike, setting out key considerations for future social media research.
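The hashtag-based starting point described above can be sketched as a first data-processing step: extracting and tallying hashtags from post captions. The captions below are invented for illustration; a real study would draw on systematically collected post data.

```python
import re
from collections import Counter

# Sketch of the first step in a hashtag-based Instagram/Twitter study:
# extract and tally hashtags from post captions. Captions are invented.

HASHTAG_RE = re.compile(r"#(\w+)")

def hashtag_counts(captions):
    """Case-insensitive tally of hashtags across a list of captions."""
    tags = Counter()
    for caption in captions:
        tags.update(t.lower() for t in HASHTAG_RE.findall(caption))
    return tags

captions = [
    "Sunset over the reef #GreatBarrierReef #sunset",
    "Field work today #greatbarrierreef #science",
]
counts = hashtag_counts(captions)
print(counts.most_common(1))  # [('greatbarrierreef', 2)]
```

Lower-casing before counting mirrors common practice in Twitter hashtag studies, where `#GBR` and `#gbr` are treated as the same tag.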
Abstract:
There is a wide range of potential study designs for intervention studies to decrease nosocomial infections in hospitals. The analysis is complex due to competing events, clustering, multiple timescales and time-dependent period and intervention variables. This review considers the popular pre-post quasi-experimental design and compares it with randomized designs. Randomization can be done in several ways: randomization of the cluster [intensive care unit (ICU) or hospital] in a parallel design; randomization of the sequence in a cross-over design; and randomization of the time of intervention in a stepped-wedge design. We introduce each design in the context of nosocomial infections and discuss the designs with respect to the following key points: bias, control for nonintervention factors, and generalizability. Statistical issues are discussed. A pre-post-intervention design is often the only choice that will be informative for a retrospective analysis of an outbreak setting. It can be seen as a pilot study with further, more rigorous designs needed to establish causality. To yield internally valid results, randomization is needed. Generally, the first choice in terms of the internal validity should be a parallel cluster randomized trial. However, generalizability might be stronger in a stepped-wedge design because a wider range of ICU clinicians may be convinced to participate, especially if there are pilot studies with promising results. For analysis, the use of extended competing risk models is recommended.
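The stepped-wedge design mentioned above randomizes the time at which each cluster crosses over to the intervention, so every cluster is eventually treated. A minimal sketch, assuming one cluster crosses over per period after a shared baseline period (cluster names and period count are hypothetical):

```python
import random

# Sketch of a stepped-wedge randomization: shuffle the clusters
# (e.g. ICUs), then assign each one a crossover step. Exposure is a
# 0/1 indicator per period. Assumes len(clusters) == n_periods - 1.

def stepped_wedge_schedule(clusters, n_periods, seed=None):
    """Return {cluster: list of 0/1 exposure indicators per period}."""
    order = list(clusters)
    random.Random(seed).shuffle(order)
    schedule = {}
    for step, cluster in enumerate(order, start=1):
        # 'step' periods of control, then intervention until the end.
        schedule[cluster] = [0] * step + [1] * (n_periods - step)
    return schedule

sched = stepped_wedge_schedule(["ICU-A", "ICU-B", "ICU-C"], n_periods=4, seed=1)
for cluster, exposure in sorted(sched.items()):
    print(cluster, exposure)
```

The schedule makes the design's appeal visible: every cluster contributes both control and intervention periods, and calendar time is shared across clusters, which helps separate the intervention effect from secular trends.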
Abstract:
Meta-analysis is a method to obtain a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations to stabilise the variance. We note the special situation of multiple category prevalence, and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and the example of multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
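The double arcsine approach can be sketched as follows: each study's prevalence is Freeman-Tukey transformed, pooled by inverse-variance weighting, and back-transformed. This is a minimal fixed-effect illustration with an approximate back-transform and invented study counts; it is not the MetaXL implementation.

```python
import math

# Minimal fixed-effect sketch of a prevalence meta-analysis using the
# Freeman-Tukey double arcsine transformation to stabilise the
# variance. Study counts are invented for illustration.

def double_arcsine(cases, n):
    """Freeman-Tukey transform; its variance is approx. 1/(n + 0.5)."""
    return (math.asin(math.sqrt(cases / (n + 1)))
            + math.asin(math.sqrt((cases + 1) / (n + 1))))

def pool_prevalence(studies):
    """Inverse-variance pooling on the transformed scale, followed by
    an approximate back-transform (for large n, t ~ 2*asin(sqrt(p)))."""
    weights = [n + 0.5 for _, n in studies]
    ts = [double_arcsine(c, n) for c, n in studies]
    t_pooled = sum(w * t for w, t in zip(weights, ts)) / sum(weights)
    return math.sin(t_pooled / 2) ** 2

studies = [(12, 240), (30, 510), (8, 130)]  # (cases, sample size)
print(round(pool_prevalence(studies), 4))
```

The transform's appeal is visible in the variance term: 1/(n + 0.5) depends only on sample size, not on the prevalence itself, avoiding the instability of the raw-proportion variance p(1-p)/n near 0 or 1.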
Abstract:
Background Cardiovascular disease and mental health both hold enormous public health importance, both ranking highly in the results of the recent Global Burden of Disease Study 2010 (GBD 2010). For the first time, the GBD 2010 has systematically and quantitatively assessed major depression as an independent risk factor for the development of ischemic heart disease (IHD) using comparative risk assessment methodology. Methods A pooled relative risk (RR) was calculated from studies identified through a systematic review with strict inclusion criteria designed to provide evidence of independent risk factor status. Accepted case definitions of depression include diagnosis by a clinician or by non-clinician raters adhering to Diagnostic and Statistical Manual of Mental Disorders (DSM) or International Classification of Diseases (ICD) classifications. We therefore refer to the exposure in this paper as major depression as opposed to the DSM-IV category of major depressive disorder (MDD). The population attributable fraction (PAF) was calculated using the pooled RR estimate. Attributable burden was calculated by multiplying the PAF by the underlying burden of IHD estimated as part of GBD 2010. Results The pooled relative risk of developing IHD in those with major depression was 1.56 (95% CI 1.30 to 1.87). Globally in 2010, almost 4 million IHD disability-adjusted life years (DALYs) could be attributed to major depression: 3.5 million years of life lost and 250,000 years lived with disability. These findings highlight a previously underestimated mortality component of the burden of major depression. As a proportion of overall IHD burden, 2.95% (95% CI 1.48 to 4.46%) of IHD DALYs were estimated to be attributable to MDD in 2010. Eastern Europe and North Africa/Middle East demonstrate the highest proportions, with the high-income Asia Pacific region representing the lowest.
Conclusions The present work comprises the most robust systematic review of its kind to date. The key finding that major depression may be responsible for approximately 3% of global IHD DALYs warrants assessment for depression in patients at high risk of developing IHD or at risk of a repeat IHD event.
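The comparative risk assessment arithmetic described above (a PAF from a pooled RR, multiplied by the underlying burden) can be sketched as follows. The pooled RR of 1.56 is taken from the abstract; the exposure prevalence and total IHD DALY figures below are hypothetical placeholders, not the GBD 2010 inputs.

```python
# Sketch of the attributable-burden arithmetic: Levin's population
# attributable fraction (PAF) from a pooled RR, times total burden.
# Only the RR (1.56) comes from the abstract; the prevalence and
# DALY totals are hypothetical, not the GBD 2010 values.

def levin_paf(prevalence, rr):
    """Population attributable fraction: P(RR-1) / (P(RR-1) + 1)."""
    excess = prevalence * (rr - 1)
    return excess / (excess + 1)

rr = 1.56                 # pooled relative risk from the review
prevalence = 0.047        # assumed exposure prevalence (hypothetical)
ihd_dalys = 129_800_000   # assumed total IHD DALYs (hypothetical)

paf = levin_paf(prevalence, rr)
attributable = paf * ihd_dalys
print(f"PAF = {paf:.3%}, attributable DALYs = {attributable:,.0f}")
```

Levin's formula assumes the RR is unconfounded, which is why the review's strict inclusion criteria (evidence of independent risk factor status) matter for the burden estimate.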
Abstract:
Background Summarizing the epidemiology of major depressive disorder (MDD) at a global level is complicated by significant heterogeneity in the data. The aim of this study is to present a global summary of the prevalence and incidence of MDD, accounting for sources of bias, and dealing with heterogeneity. Findings are informing MDD burden quantification in the Global Burden of Disease (GBD) 2010 Study. Method A systematic review of prevalence and incidence of MDD was undertaken. Electronic databases Medline, PsycINFO and EMBASE were searched. Community-representative studies adhering to suitable diagnostic nomenclature were included. A meta-regression was conducted to explore sources of heterogeneity in prevalence and guide the stratification of data in a meta-analysis. Results The literature search identified 116 prevalence and four incidence studies. Prevalence period, sex, year of study, depression subtype, survey instrument, age and region were significant determinants of prevalence, explaining 57.7% of the variability between studies. The global point prevalence of MDD, adjusting for methodological differences, was 4.7% (4.4–5.0%). The pooled annual incidence was 3.0% (2.4–3.8%), clearly at odds with the pooled prevalence estimates and the previously reported average duration of 30 weeks for an episode of MDD. Conclusions Our findings provide a comprehensive and up-to-date profile of the prevalence of MDD globally. Region and study methodology influenced the prevalence of MDD. This needs to be considered in the GBD 2010 study and in investigations into the ecological determinants of MDD. Good-quality estimates from low-/middle-income countries were sparse. More accurate data on incidence are also required.
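Pooling estimates in the presence of the between-study heterogeneity described above is commonly handled with a random-effects model. Below is a minimal DerSimonian-Laird sketch; this names a standard estimator, not necessarily the one the authors used, and the effect sizes and variances are invented.

```python
# DerSimonian-Laird random-effects pooling: estimate the
# between-study variance tau^2 from Cochran's Q, then re-weight.
# Effect sizes and variances below are invented for illustration.

def dersimonian_laird(effects, variances):
    """Return (pooled effect, tau^2) under a random-effects model."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncate at zero
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

effects = [0.042, 0.051, 0.047, 0.060]       # e.g. study prevalences
variances = [0.00004, 0.00002, 0.00003, 0.00005]
pooled, tau2 = dersimonian_laird(effects, variances)
print(round(pooled, 4), round(tau2, 6))
```

When tau^2 is large relative to the within-study variances, the random-effects weights flatten out, which is one reason the abstract stratifies data by the covariates identified in the meta-regression rather than pooling everything at once.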
Abstract:
The Galilee and Eromanga basins are sub-basins of the Great Artesian Basin (GAB). In this study, a multivariate statistical approach (hierarchical cluster analysis, principal component analysis and factor analysis) is carried out to identify hydrochemical patterns and assess the processes that control hydrochemical evolution within key aquifers of the GAB in these basins. The results of the hydrochemical assessment are integrated into a 3D geological model (previously developed) to support the analysis of spatial patterns of hydrochemistry, and to identify the hydrochemical and hydrological processes that control hydrochemical variability. In this area of the GAB, the hydrochemical evolution of groundwater is dominated by evapotranspiration near the recharge area, resulting in a dominance of the Na–Cl water types. This is shown conceptually using two selected cross-sections, which represent discrete groundwater flow paths from the recharge areas to the deeper parts of the basins. With increasing distance from the recharge area, a shift towards a dominance of carbonate (e.g. Na–HCO3 water type) has been observed. The assessment of hydrochemical changes along groundwater flow paths highlights how aquifers are separated in some areas, and how mixing between groundwater from different aquifers occurs elsewhere, controlled by geological structures, including between GAB aquifers and coal-bearing strata of the Galilee Basin. The results of this study suggest that distinct hydrochemical differences can be observed within the previously defined Early Cretaceous–Jurassic aquifer sequence of the GAB. A revision of the two previously recognised hydrochemical sequences is proposed, resulting in three hydrochemical sequences based on systematic differences in hydrochemistry, salinity and dominant hydrochemical processes.
The integrated approach presented in this study, which combines complementary multivariate statistical techniques with a detailed assessment of the geological framework of these sedimentary basins, can be adopted in other complex multi-aquifer systems to assess hydrochemical evolution and its geological controls.
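The multivariate workflow described above (standardizing hydrochemical variables, then clustering samples) can be sketched in miniature. A toy single-linkage hierarchical clustering is implemented here for self-containment; the ion concentrations are invented, and a real study would use measured major-ion data and established statistical software.

```python
import math

# Toy sketch of the multivariate workflow: z-score the hydrochemical
# variables, then agglomerate samples by single-linkage clustering.
# Sample values (notionally mg/L) are invented.

def standardize(rows):
    """Z-score each column (variable) across samples."""
    cols = list(zip(*rows))
    out_cols = []
    for col in cols:
        mean = sum(col) / len(col)
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col))
        out_cols.append([(x - mean) / sd if sd else 0.0 for x in col])
    return [list(r) for r in zip(*out_cols)]

def single_linkage(rows, n_clusters):
    """Agglomerate until n_clusters remain; returns sets of indices."""
    def dist(a, b):
        return math.dist(rows[a], rows[b])
    clusters = [{i} for i in range(len(rows))]
    while len(clusters) > n_clusters:
        # Merge the pair of clusters with the smallest minimum distance.
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(a, b)
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]),
        )
        clusters[i] |= clusters.pop(j)
    return clusters

# Columns: Na, Cl, HCO3 (hypothetical samples along a flow path).
samples = [[310, 420, 95], [290, 400, 110], [120, 60, 380], [105, 55, 400]]
clusters = single_linkage(standardize(samples), 2)
print(clusters)
```

Standardizing first matters because major-ion concentrations span very different ranges; without it, the highest-magnitude ion would dominate the distance metric and hence the cluster structure.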
Abstract:
This paper addresses research from a three-year longitudinal study that engaged children in data modeling experiences from the beginning school year through to third year (6-8 years). A data modeling approach to statistical development differs in several ways from what is typically done in early classroom experiences with data. In particular, data modeling immerses children in problems that evolve from their own questions and reasoning, with core statistical foundations established early. These foundations include a focus on posing and refining statistical questions within and across contexts, structuring and representing data, making informal inferences, and developing conceptual, representational, and metarepresentational competence. Examples are presented of how young learners developed and sustained informal inferential reasoning and metarepresentational competence across the study to become “sophisticated statisticians”.
Abstract:
Many areas of biochemistry and molecular biology, both fundamental and application-oriented, require an accurate construction, representation and understanding of the protein molecular surface and its interaction with other, usually small, molecules. There are, however, many situations in which the protein molecular surface comes into physical contact with larger objects, either biological, such as membranes, or artificial, such as nanoparticles. This contribution presents a methodology for describing and quantifying the molecular properties of proteins through geometrical and physico-chemical mapping of their molecular surfaces, with several analytical relationships proposed for molecular surface properties. The relevance of the molecular surface-derived properties has been demonstrated through the calculation of the statistical strength of the prediction of protein adsorption. It is expected that the extension of this methodology to other phenomena involving proteins near solid surfaces, in particular protein interaction with nanoparticles, will result in important benefits in the understanding and design of protein-specific solid surfaces. © 2013 Nicolau et al.