905 results for "Challenge posed by omics data to compositional analysis – paucity of independent samples (n)"


Relevance:

100.00%

Publisher:

Abstract:

Aims: To identify risk factors for major adverse events (AEs) and to develop a nomogram to predict the probability of such AEs in individual patients undergoing surgery for apparent early-stage endometrial cancer. Methods: We used data from 753 patients who were randomized to either total laparoscopic hysterectomy or total abdominal hysterectomy in the LACE trial. Serious adverse events that prolonged hospital stay, or postoperative adverse events of grade 3 or higher (Common Terminology Criteria for Adverse Events, CTCAE V3), were considered major AEs. We analyzed pre-surgical characteristics associated with the risk of developing major AEs by multivariable logistic regression, and identified a parsimonious model by backward stepwise selection. The six most significant or clinically important variables were included in a nomogram to predict the risk of major AEs within 6 weeks of surgery, and the nomogram was internally validated. Results: Overall, 132 (17.5%) patients had at least one major AE. An open surgical approach (laparotomy), a higher Charlson co-morbidity score, moderately differentiated tumours on curettings, a higher baseline ECOG score, a higher body mass index and low haemoglobin levels were associated with major AEs and were used in the nomogram. The bootstrap-corrected concordance index of the nomogram was 0.63, and it showed good calibration. Conclusions: Six pre-surgical factors independently predicted the risk of major AEs. This research could form the basis for risk-reduction strategies to minimize AEs among patients undergoing surgery for apparent early-stage endometrial cancer.
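The model described is a multivariable logistic regression whose six predictors feed a nomogram. The sketch below illustrates the general form only: the coefficients and intercept are invented for illustration (the LACE-trial estimates are not reported in the abstract), so the numbers carry no clinical meaning.

```python
import math

# Hypothetical coefficients for the six predictors named in the abstract;
# the fitted LACE-trial values are not given in the text.
COEFS = {
    "laparotomy": 0.9,       # open surgical approach (1 = yes)
    "charlson": 0.25,        # Charlson co-morbidity score (points)
    "moderate_grade": 0.5,   # moderately differentiated tumour (1 = yes)
    "ecog": 0.4,             # baseline ECOG score (points)
    "bmi": 0.03,             # body mass index (kg/m^2)
    "low_haemoglobin": 0.6,  # haemoglobin below normal range (1 = yes)
}
INTERCEPT = -4.0  # hypothetical

def major_ae_probability(patient):
    """Predicted probability of a major AE within 6 weeks (logistic model)."""
    logit = INTERCEPT + sum(COEFS[k] * patient[k] for k in COEFS)
    return 1.0 / (1.0 + math.exp(-logit))

low_risk = {"laparotomy": 0, "charlson": 0, "moderate_grade": 0,
            "ecog": 0, "bmi": 24, "low_haemoglobin": 0}
high_risk = {"laparotomy": 1, "charlson": 3, "moderate_grade": 1,
             "ecog": 2, "bmi": 35, "low_haemoglobin": 1}
p_lo = major_ae_probability(low_risk)
p_hi = major_ae_probability(high_risk)
```

A nomogram is simply a graphical re-expression of such a linear predictor: each variable's contribution is drawn as a points scale, and the total maps to a probability.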


Environmental and sustainability issues pose challenges for society. Although education is seen as a contributor to addressing sustainability, teacher education has been slow to prepare future teachers to teach sustainability. Recent Australian curriculum documents nominate sustainability as one of three cross-curriculum priorities. In one Australian university course, an Ecological Footprint Calculator tool has been employed to challenge preservice early childhood teachers to consider the sustainability of their lifestyles as a means of engaging them in learning and teaching for sustainability. Students enrolled in an integrated arts and humanities subject voluntarily engaged with the online calculator and shared their findings on an electronic discussion forum. These postings then became the basis of qualitative analysis and discussion. Data categories included reactions and reflections on reasons for the 'heaviness' of their footprints, student reactions leading to actions to reduce their footprints, reflections on the implications of the footprint results for future teaching, reactions that considered the need for societal change, and reflections on the integration of sustainability with the visual arts. The power of the tool's application to stimulate interest in sustainability, and in education for sustainability more broadly in teacher education, is explored.


Background: National physical activity (PA) data suggest that there is a considerable difference in the physical activity levels of US and Australian adults. Although different surveys (Active Australia [AA] and the Behavioral Risk Factor Surveillance System [BRFSS]) are used, the questions are similar; different protocols, however, are used to estimate "activity" from the data collected. The primary aim of this study was to assess whether the two approaches to the management of PA data could explain some of the difference in prevalence estimates derived from the two national surveys. Methods: Secondary data analysis of the most recent AA survey (N = 2987). Results: 15% of the sample was defined as "active" using the Australian criteria but as "inactive" using the BRFSS protocol, even though weekly energy expenditure was commensurate with meeting current guidelines. Younger respondents (age < 45 y) were more likely to be "misclassified" by the BRFSS criteria. Conclusions: The prevalence of activity in Australia and the US appears to be more similar than previously thought.
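The misclassification described can be illustrated with a toy version of the two scoring rules. The thresholds below (150 minutes per week, plus a five-session frequency requirement for the second rule) are assumptions for illustration only; the actual AA and BRFSS scoring algorithms are more involved.

```python
# Hedged simplification of two "active" definitions:
# - volume-only rule: total weekly minutes of activity >= 150
# - volume-plus-frequency rule: also requires >= 5 sessions per week
# Thresholds are illustrative assumptions, not the real survey algorithms.

def active_volume_only(session_minutes):
    """AA-style rule (assumed): weekly volume alone decides."""
    return sum(session_minutes) >= 150

def active_volume_and_frequency(session_minutes):
    """BRFSS-style rule (assumed): volume plus a session-count requirement."""
    return sum(session_minutes) >= 150 and len(session_minutes) >= 5

# A respondent with three long walks meets the volume criterion (180 min)
# but fails the frequency criterion, and so is "misclassified" as inactive.
respondent = [60, 60, 60]
by_volume = active_volume_only(respondent)            # True
by_volume_and_freq = active_volume_and_frequency(respondent)  # False
```

This is exactly the pattern the abstract reports: commensurate energy expenditure, different classification.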


A community nurse is required to have excellent interpersonal, teaching, collaborative and clinical skills in order to develop effective individualised client care contracts. Using a descriptive qualitative design, data were collected from two focus groups of fourteen community nurses to explore the issues surrounding negotiating and contracting client care contracts from the perspective of community nurses. Thematic analysis revealed three themes: 'assessment of needs', 'education towards enablement', and 'negotiation'. 'Assessment of needs' identified that community nurses assess both the client's requirements for health care and the ability of the nurse to provide that care. 'Education towards enablement' described education of the client as a common strategy used by community nurses to establish realistic goals of health care as part of developing an ongoing care plan. The final theme, 'negotiation', involved an informed agreement between the client and the community nurse, which forms the origin of the care contract that will direct the partnership between the two. Of importance for community nurses is that the development of successful person-centred care contracts requires skilful negotiation that strikes a balance between the needs of the client and the ability of the nurse to meet those needs.


This paper applies a particle-based approach to simulate the micro-level cellular structural changes of plant cells during drying. The objective of the investigation was to relate micro-level structural properties such as cell area, diameter and perimeter to the change in the moisture content of the cell. The model assumes a simplified cell consisting of two basic components: the cell wall and the cell fluid. The cell fluid is assumed to be a Newtonian fluid with a higher viscosity than water, and the cell wall is assumed to be a visco-elastic solid boundary located around the cell fluid. The cell fluid is modelled with the Smoothed Particle Hydrodynamics (SPH) technique, and the cell wall with a Discrete Element Method (DEM). The developed model is two-dimensional but accounts for three-dimensional physical properties of real plant cells. Drying is simulated as a reduction of fluid mass, and the model is used to predict the above-mentioned structural properties as a function of cell fluid mass. Model predictions are found to be in fairly good agreement with experimental data in the literature, and the particle-based approach is demonstrated to be suitable for numerical studies of drying-related structural deformations. A sensitivity analysis is also included to demonstrate the influence of key model parameters on model predictions.
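The SPH half of such a coupled model represents the fluid as particles whose density is obtained by kernel-weighted summation over neighbours. The sketch below shows only that density summation, using the standard 2D cubic-spline kernel; it is a generic SPH fragment with illustrative values, not the paper's coupled SPH-DEM cell model.

```python
import math

def cubic_spline_w(r, h):
    """Standard 2D cubic-spline SPH smoothing kernel with support radius 2h."""
    sigma = 10.0 / (7.0 * math.pi * h * h)  # 2D normalisation constant
    q = r / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density(particles, masses, h):
    """SPH density summation: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    rho = []
    for (xi, yi) in particles:
        s = 0.0
        for (xj, yj), mj in zip(particles, masses):
            s += mj * cubic_spline_w(math.hypot(xi - xj, yi - yj), h)
        rho.append(s)
    return rho

# Four illustrative fluid particles on the corners of a small square.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
rho = density(pts, [1.0] * 4, h=0.15)
```

Simulating drying as fluid mass reduction then amounts to shrinking the particle masses m_j over time and letting the wall (DEM) boundary respond to the resulting pressure change.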


BACKGROUND: Given the expanding scope of extracorporeal membrane oxygenation (ECMO) and its variable impact on drug pharmacokinetics as observed in neonatal studies, it is imperative that the effects of the device on the drugs commonly prescribed in the intensive care unit (ICU) are further investigated. Currently, there are no data to confirm the appropriateness of standard drug dosing in adult patients on ECMO. Ineffective drug regimens in these critically ill patients can seriously worsen patient outcomes. This study was designed to describe the pharmacokinetics of commonly used antibiotic, analgesic and sedative drugs in adult patients receiving ECMO. METHODS: This is a multi-centre, open-label, descriptive pharmacokinetic (PK) study. Eligible patients will be adults treated with ECMO for severe cardiac and/or respiratory failure at five intensive care units in Australia and New Zealand. Patients will receive the study drugs as part of their routine management. Blood samples will be taken from indwelling catheters to investigate plasma concentrations of several antimicrobial agents (ceftriaxone, meropenem, vancomycin, ciprofloxacin, gentamicin, piperacillin-tazobactam, ticarcillin-clavulanate, linezolid, fluconazole, voriconazole, caspofungin, oseltamivir) and sedatives and analgesics (midazolam, morphine, fentanyl, propofol, dexmedetomidine, thiopentone). The PK of each drug will be characterised to determine its variability in these patients and to develop dosing guidelines for prescription during ECMO. DISCUSSION: The evidence-based dosing algorithms generated from this analysis can be evaluated in later clinical studies. This knowledge is vitally important for optimising pharmacotherapy in these most severely ill patients, to maximise the opportunity for therapeutic success and minimise the risk of therapeutic failure.
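As a point of reference for the quantities such a PK study characterises, a minimal one-compartment intravenous-bolus model relates dose, volume of distribution (Vd) and clearance (CL) to plasma concentration; ECMO circuits can alter both Vd (e.g. via circuit priming and drug sequestration) and CL. All parameter values below are hypothetical illustrations, not ECMO measurements.

```python
import math

def concentration(dose_mg, vd_L, cl_L_per_h, t_h):
    """One-compartment IV bolus: C(t) = (Dose / Vd) * exp(-k * t), k = CL / Vd."""
    k = cl_L_per_h / vd_L
    return (dose_mg / vd_L) * math.exp(-k * t_h)

# Hypothetical parameters: 1000 mg dose, 20 L volume of distribution,
# 5 L/h clearance (so k = 0.25 /h).
c0 = concentration(1000, 20, 5, 0)   # initial concentration, 50 mg/L
c4 = concentration(1000, 20, 5, 4)   # concentration after 4 h
```

Fitting models of this kind to the sampled plasma concentrations is how per-drug variability in Vd and CL, and hence dosing guidance during ECMO, would be derived.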


The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting survival time and the probability of failure at future times is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small populations of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspension data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the theory underlying the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise three types of asset health information (failure event data (i.e. 
observed and/or suspended), condition data, and operating environment data) in a single model to achieve more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach to these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. 
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM takes two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) for the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. 
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
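The core idea can be sketched directly: a Weibull baseline hazard that is updated by a condition indicator, multiplied by an exponential operating-environment covariate term, with reliability obtained by numerically integrating the cumulative hazard. The functional forms and every parameter value below are illustrative assumptions, not the thesis's fitted model, and the condition indicator is held constant here although EHM treats it as time-varying.

```python
import math

def hazard(t, condition, env, beta=1.5, eta=100.0, a=0.02, g=0.3):
    """EHM-style sketch (assumed forms): a Weibull baseline, reformed by a
    condition indicator, times a multiplicative operating-environment term.
    beta/eta are Weibull shape/scale; a and g are illustrative coefficients."""
    baseline = (beta / eta) * ((t / eta) ** (beta - 1)) * math.exp(a * condition)
    return baseline * math.exp(g * env)

def reliability(t, condition, env, steps=1000):
    """R(t) = exp(-integral_0^t h(u) du), midpoint rule."""
    du = t / steps
    cum = sum(hazard((i + 0.5) * du, condition, env) for i in range(steps)) * du
    return math.exp(-cum)

# A degraded asset in a harsh environment should be less reliable than a
# healthy asset in a benign one, at the same age.
r_benign = reliability(50, condition=0.0, env=0.0)
r_harsh = reliability(50, condition=10.0, env=1.0)
```

Setting g = 0 recovers a (condition-updated) baseline-only model, and replacing the Weibull form with an empirical estimate corresponds loosely to the non-parametric variant described above.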


The status of entertainment, as both a dimension of human culture and a booming global industry, is increasing. Given more recent consumer-centric definitions of entertainment, the entertainment consumer has grown in prominence and is now coming under closer scrutiny. However, viewing entertainment consumers as always behaving towards entertainment as they do towards other products may be selling them short. For a start, entertainment consumers can exhibit a strong loyalty towards their favourite entertainment products that is the envy of the marketing world. Academic researchers and marketers who are keen to investigate entertainment consumers would benefit from a theoretical base from which to commence. This essay therefore takes a consumer-oriented focus in defining entertainment and conceptualises a model of entertainment consumption. In approaching the study of entertainment, one axiomatic question remains: how should we define it? Richard Dyer notes that, considering that the category of entertainment can include – by its own definition in the song 'That's Entertainment!' – everything from Hamlet and Oedipus Rex to 'the clown with his pants falling down' and 'the lights on the lady in tights', it does not make much sense to try to define entertainment as being marked by particular textual features (as is done, for example, by Avrich, 2002). Dyer's position is rather that 'entertainment is not so much a category of things as an attitude towards things' (Dyer, 1973: 9). He traces the modern conception of entertainment back to the writings of Molière, who defended the purpose of his plays against attacks from the church that they were not sufficiently edifying by insisting that, as entertainments, he had no interest in edifying audiences – his 'real purpose …was to provide people pleasure – and the definition of that was to be decided by "the people"' (Dyer, 1973: 9). 
In my own discipline of Marketing this approach has been embraced – Kaser and Oelkers, for example, define entertainment as ‘whatever people are willing to spend their money and spare time viewing’ (2008, 18). That is the approach taken in this paper, where I see entertainment as ‘consumer-driven culture’ (McKee and Collis, 2009) – a definition that is closely aligned with the marketing concept. Within a marketing framework I explore what the consumption of entertainment can tell us about the relationships between consumers and culture more generally. For entertainment offers an intriguing case study, and is often consumed in ways that challenge many of our assumptions about marketing and consumer behaviour.


Motivated by growing consideration of the scale, severity and risks associated with human exposure to indoor particulate matter, this work reviewed the existing literature to: (i) identify state-of-the-art experimental techniques used for personal exposure assessment; (ii) compare exposure levels reported for domestic and school settings in different countries (excluding exposure to environmental tobacco smoke and to particulate matter from biomass cooking in developing countries); (iii) assess the contribution of outdoor background versus indoor sources to personal exposure; and (iv) examine the scientific understanding of the risks posed by personal exposure to indoor aerosols. The limited studies assessing integrated daily residential exposure to just one particle size fraction, ultrafine particles, show that the contribution of indoor sources ranged from 19% to 76%. This indicates a strong dependence on resident activities, source events and site specificity, and highlights the importance of indoor sources for total personal exposure. Further, it was estimated that 10-30% of the total burden of disease from particulate matter exposure was due to indoor-generated particles, signifying that indoor environments are likely to be a dominant environmental factor affecting human health. However, due to the challenges associated with conducting epidemiological assessments, the role of indoor-generated particles has not been fully acknowledged, and improved exposure and risk assessment methods are still needed, together with a serious focus on exposure control.
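Integrated daily personal exposure is conventionally computed as a time-weighted sum of concentrations over the microenvironments a person occupies, E = sum(c_i * t_i), from which an indoor-source share can also be derived. The concentrations and durations below are invented for illustration, not values from the reviewed studies.

```python
# Hedged sketch of time-weighted personal exposure across microenvironments.
def daily_exposure(microenvironments):
    """microenvironments: list of (concentration, hours) pairs;
    returns the time-integrated exposure (concentration x hours)."""
    return sum(c * t for c, t in microenvironments)

# Illustrative day (concentrations in arbitrary units):
home_cooking = (50.0, 1.0)     # indoor source event (e.g. cooking, 1 h)
home_background = (8.0, 14.0)  # rest of time at home
outdoors = (12.0, 2.0)
workplace = (10.0, 7.0)

total = daily_exposure([home_cooking, home_background, outdoors, workplace])
indoor_share = daily_exposure([home_cooking, home_background]) / total
```

The indoor-source contribution reported in the review (19% to 76%) reflects exactly how sensitive this share is to source events like the cooking episode above.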


This thesis makes several contributions towards improved methods for encoding structure in computational models of word meaning. New methods are proposed that address the requirement of easily encoding linguistic structural features within a computational representation while retaining the ability to scale to large volumes of textual data. The methods are implemented and evaluated on a range of tasks to demonstrate their effectiveness.
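One generic way to encode a linguistic structural feature (word order) in a scalable distributional representation is to tag each co-occurrence count with the context word's relative position. The sketch below illustrates that general idea only; it is not the thesis's specific method.

```python
from collections import Counter

def positional_vectors(tokens, window=2):
    """Build sparse word vectors whose dimensions are (context word, offset)
    pairs, so 'cat' followed by 'sat' is distinct from 'cat' preceded by it."""
    vectors = {}
    for i, w in enumerate(tokens):
        vec = vectors.setdefault(w, Counter())
        for offset in range(-window, window + 1):
            j = i + offset
            if offset != 0 and 0 <= j < len(tokens):
                vec[(tokens[j], offset)] += 1
    return vectors

toks = "the cat sat on the mat".split()
vecs = positional_vectors(toks)
```

Because the representation stays a sparse count structure, it scales to large corpora in the same way plain co-occurrence counting does, while preserving order information that a bag-of-words model discards.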


This study is about young adolescents' engagement in learning science. The middle years of schooling are critical in the development of students' interest and engagement with learning. Successful school experiences enhance dispositions towards a career related to those experiences; poor experiences lead to negative attitudes and the rejection of certain career pathways. At a time when students are becoming more aware, more independent and more focused on peer relationships and social status, the high school environment in some circumstances offers a content-centred curriculum that is less personally relevant to their lives than the social melee surrounding them. Science education can further exacerbate the situation by presenting abstract concepts that have limited contextual relevance and a seemingly difficult vocabulary that further alienates adolescents from the curriculum. In an attempt to reverse a perceived growing disinterest in science among students (Goodrum, Druhan & Abbs, 2011), a study was initiated based on a student-centred unit designed to enhance and sustain adolescent engagement in science. The premise of the study was that adolescent students are more responsive to learning if they are given an appropriate learning environment that helps connect their learning with life beyond the school. The purpose of this study was to examine the experiences of young adolescents with the aim of transforming school learning in science into meaningful experiences connected with their lives. Two areas were specifically canvassed and subsumed within the study to strengthen its design base. One, the middle schooling ideology, offered specific pedagogical approaches and a philosophical framework that could provide opportunities for reform. The other, the construct of scientific literacy (OECD, 2007) as defined by Holbrook and Rannikmae (2009), appeared to provide a sense of purpose for students to aim toward and value in becoming active citizens. 
The study reported here is a self-reflection of a teacher/researcher exploring practice and challenging existing approaches to the teaching of science in the middle years of schooling. The case study approach (Yin, 2003) was adopted to guide the design of the study. Over a 6-month period, the researcher, an experienced secondary-science teacher, designed, implemented and documented a range of student-centred pedagogical practices with a Year-7 secondary science class. Data for this case study included video recordings, journals, interviews and surveys of students. Both quantitative and qualitative data sources were employed in a partially mixed methods research approach (Leech & Onwuegbuzie, 2009) dominated by qualitative data with the concurrent collection of quantitative data to corroborate interpretations as a means of analysing and developing a model of the dynamic learning environment. The findings from the case study identified five propositions that became the basis for a model of a student-centred learning environment that was able to sustain student participation and thus engagement in science. The study suggested that adolescent student engagement can be promoted and sustained by providing a classroom climate that encourages and strengthens social interaction. Engagement in science can be enhanced by presenting developmentally appropriate challenges that require rigorous exploration of contextually relevant learning environments; supporting students to develop connections with a curriculum that aligns with their own experiences. By setting an environment empathetic to adolescent needs and understandings, students were able to actively explore phenomena collaboratively through developmentally appropriate experiences. A significant outcome of this study was the transformative experiences of an insider, the teacher as researcher, whose reflections provide an authentic model for reforming pedagogy. 
The model and theory presented became an adjunct to my repertoire for science teaching in the middle years of schooling. The study was rewarding in that it helped address a void in my understanding of middle years of schooling by prompting me to re-think the notion of adolescence in the context of the science classroom. This study is timely given the report "The Status and Quality of Year 11 and 12 Science in Australian Schools" (Goodrum, Druhan & Abbs, 2011) and national curricular changes that are being proposed for science (ACARA, 2009).


Lyngbya majuscula is a cyanobacterium (blue-green alga) occurring naturally in tropical and subtropical coastal areas worldwide. Deception Bay, in northern Moreton Bay, Queensland, has a history of Lyngbya blooms and forms a case study for this investigation. The South East Queensland (SEQ) Healthy Waterways Partnership, a collaboration between government, industry, research and the community, was formed to address issues affecting the health of the river catchments and waterways of South East Queensland. The Partnership coordinated the Lyngbya Research and Management Program (2005-2007), which culminated in a Coastal Algal Blooms (CAB) Action Plan for harmful and nuisance algal blooms such as Lyngbya majuscula. This first phase of the project was predominantly scientific in nature and also facilitated the collection of additional data to better understand Lyngbya blooms. The second phase, the SEQ Healthy Waterways Strategy 2007-2012, is now underway to implement the CAB Action Plan and as such is more management focussed. As part of the first phase, a Science model for the initiation of a Lyngbya bloom was built using Bayesian Networks (BNs). The structure of the Science Bayesian Network was built by the Lyngbya Science Working Group (LSWG), which was drawn from diverse disciplines. The BN was then quantified with annual data and expert knowledge. Scenario testing confirmed the expected temporal nature of bloom initiation, and it was recommended that the next version of the BN be extended to take this into account. Elicitation for this BN thus occurred at three levels: design, quantification and verification. The first level involved construction of the conceptual model itself, definition of the nodes within the model, and identification of sources of information to quantify the nodes. The second level included elicitation of expert opinion and representation of this information in a form suitable for inclusion in the BN. 
The third and final level concerned the specification of the scenarios used to verify the model. The second phase of the project provides the opportunity to update the network with the detailed data newly collected during the previous phase. Specifically, the temporal nature of Lyngbya blooms is of interest: management efforts need to be directed to the periods most vulnerable to bloom initiation in the Bay. To model the temporal aspects of Lyngbya we are using Object Oriented Bayesian Networks (OOBNs) to create 'time slices' for each of the periods of interest during the summer. OOBNs provide a framework that simplifies knowledge representation and facilitates the reuse of nodes and network fragments. An OOBN is more hierarchical than a traditional BN, with any sub-network able to contain other sub-networks. Connectivity between OOBNs is an important feature and allows information to flow between the time slices. This study demonstrates a more sophisticated use of expert information within Bayesian networks, combining expert knowledge with data (categorised using expert-defined thresholds) within an expert-defined model structure. Based on the results of the verification process, the experts are able to target areas requiring greater precision and those exhibiting temporal behaviour. The time slices incorporate the data for each time period for each of the temporal nodes (instead of using the annual data from the previous static Science BN) and include lag effects that allow the effect of one time slice to flow to the next. We demonstrate a concurrent steady increase in the probability of initiation of a Lyngbya bloom and conclude that the inclusion of temporal aspects in the BN model is consistent with the perceptions of Lyngbya behaviour held by the stakeholders. 
This extended model provides a more accurate representation of the increased risk of algal blooms in the summer months and shows that the opinions elicited to inform a static BN can readily be extended to a dynamic OOBN, providing more comprehensive information for decision makers.
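The time-sliced design can be caricatured with a single conditional probability table and a one-slice lag: each slice's bloom probability is obtained by marginalising over the previous slice's bloom state. All probabilities below are invented for illustration and are not the elicited CPTs from the Lyngbya model.

```python
# Toy time-sliced (OOBN-style) bloom model with a one-slice lag effect.
# CPT: P(bloom this slice | favourable conditions, bloom in previous slice).
# All numbers are invented for illustration.
P_BLOOM_GIVEN = {
    (False, False): 0.02,
    (False, True): 0.10,
    (True, False): 0.15,
    (True, True): 0.45,
}

def bloom_probabilities(conditions_by_slice, p_prior=0.02):
    """Propagate P(bloom) across time slices, marginalising over the
    previous slice's bloom state (the lag effect between slices)."""
    p_prev = p_prior
    probs = []
    for favourable in conditions_by_slice:
        p = (P_BLOOM_GIVEN[(favourable, True)] * p_prev
             + P_BLOOM_GIVEN[(favourable, False)] * (1.0 - p_prev))
        probs.append(p)
        p_prev = p
    return probs

# Conditions become favourable as the summer progresses.
probs = bloom_probabilities([False, True, True, True])
```

With these illustrative numbers the bloom probability rises slice by slice, mirroring the "concurrent steady increase" the dynamic model demonstrated.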


This paper discusses methodological developments in phenomenography that make it well suited to the study of teaching and learning to use information in educational environments. Phenomenography is typically used to analyse interview data to determine the different ways of experiencing a phenomenon. There is an established tradition of phenomenographic research in the study of information literacy (e.g. Bruce, 1997; 2008; Lupton, 2008; Webber, Boon, & Johnston, 2005). Drawing on the large body of evidence compiled over two decades of research, phenomenographers developed variation theory, which explains what a learner can feasibly learn from a classroom lesson based on how the phenomenon being studied is presented (Marton, Runesson, & Tsui, 2004). Variation theory's ability to establish the critical conditions necessary for learning to occur has resulted in the use of phenomenographic methods to study classroom interactions, collecting and analysing naturalistic data through observation as well as interviews concerning teachers' intentions and students' different experiences of classroom lessons. In describing the methodological developments of phenomenography in relation to understanding the classroom experience, this paper discusses the potential benefits and challenges of utilising such methods to research the experiences of teaching and learning to use information in discipline-focused classrooms. The application of phenomenographic methodology for this purpose is exemplified with an ongoing study that explores how students learned to use information in an undergraduate language and gender course (Maybee, Bruce, Lupton, & Rebmann, in press). 
This paper suggests that, by providing a nuanced understanding of what students are intended to learn about using information, and relating that to what transpires in the classroom and how students experience these lessons, phenomenography and variation theory offer a viable framework for further understanding and improving how students are taught and how they learn to use information.