331 results for Normally Complemented Subgroups
Abstract:
Optimal Asset Maintenance decisions are imperative for efficient asset management. Decision Support Systems are often used to help asset managers make maintenance decisions, but high-quality decision support must be based on sound decision-making principles. For long-lived assets, a successful Asset Maintenance decision-making process must effectively handle multiple time scales. For example, high-level strategic plans are normally made for periods of years, while daily operational decisions may need to be made within a space of mere minutes. When making strategic decisions, one usually has the luxury of time to explore alternatives, whereas routine operational decisions must often be made with no time for contemplation. In this paper, we present an innovative, flexible decision-making process model which distinguishes meta-level decision making, i.e., deciding how to make decisions, from the information gathering and analysis steps required to make the decisions themselves. The new model can accommodate various decision types. Three industrial case studies are given to demonstrate its applicability.
Abstract:
Background Dieting has historically been the main behavioural treatment paradigm for overweight/obesity, although a non-dieting paradigm has more recently emerged based on criticisms of the original dieting approach. There is a dearth of research contrasting why these approaches are adopted. To address this, we conducted a qualitative investigation into the determinants of dieting and non-dieting approaches based on the perspectives and experiences of overweight/obese Australian adults. Methods Grounded theory was used inductively to generate a model of themes contrasting the determinants of dieting and non-dieting approaches based on the perspectives of 21 overweight/obese adults. Data were collected using semi-structured interviews to elicit in-depth individual experiences and perspectives. Results Several categories emerged which distinguished between the adoption of a dieting or non-dieting approach. These categories included the focus of each approach (weight/image or lifestyle/health behaviours); internal or external attributions about dieting failure; attitudes towards established diets; and personal autonomy. Personal autonomy was also influenced by another category: the perceived knowledge and self-efficacy about each approach, with adults more likely to choose an approach they knew more about and were confident in implementing. The time perspective of change (short or long-term) and the perceived identity of the person (fat/dieter or healthy person) also emerged as determinants of dieting or non-dieting approaches respectively. Conclusions The model of determinants elicited from this study assists in understanding why dieting and non-dieting approaches are adopted, from the perspectives and experiences of overweight/obese adults. Understanding this decision-making process can assist clinicians and public health researchers to design and tailor dieting and non-dieting interventions to population subgroups that have preferences and characteristics suitable for each approach.
Abstract:
There have been many improvements in Australian engineering education since the 1990s. However, given the recent drive for assuring the achievement of identified academic standards, more progress needs to be made, particularly in the area of evidence-based assessment. This paper reports on initiatives gathered from the literature and engineering academics in the USA, through an Australian National Teaching Fellowship program. The program aims to establish a process to help academics design and implement evidence-based assessments that meet the needs of not only students and the staff who teach them, but also industry and accreditation bodies. The paper also examines the kinds and levels of support necessary for engineering academics, especially early-career ones, to help meet the expectations of the current drive for assured quality and standards of both research and teaching. Academics are experiencing competing demands on their time and energy, with very high expectations in research performance and increased teaching responsibilities, although many are researchers who have not had much pedagogic training. Based on the literature and investigation of relevant initiatives in the USA, we conducted interviews with several identified experts and change agents who have wrought effective academic cultural change within their institutions and beyond. These interviews reveal that assuring the standards and quality of student learning outcomes through evidence-based assessments cannot be appropriately addressed without also addressing the issue of pedagogic training for academic staff. To be sustainable, such training needs to be complemented by a culture of ongoing mentoring support from senior academics, formalised through the university administration, so that mentors are afforded resources, time, and appropriate recognition.
Abstract:
The majority of distribution utilities do not have accurate information on the constituents of their loads. This information is very useful in managing and planning the network adequately and economically. Customer loads are normally categorized in three main sectors: 1) residential; 2) industrial; and 3) commercial. In this paper, penalized least-squares regression and Euclidean distance methods are developed for this application to identify and quantify the makeup of a feeder load with unknown sectors/subsectors. This process is done on a monthly basis to account for seasonal and other load changes. The error between the actual and estimated load profiles is used as a benchmark of accuracy. This approach has been shown to be accurate in identifying customer types in unknown load profiles, and cross-validation is used to verify the results and initial assumptions.
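To make the estimation step concrete, the following is a minimal sketch of the penalized least-squares idea: a feeder load is decomposed into sector shares by a ridge-regularized fit against known sector load shapes. The 24-hour profiles, penalty weight, and noise level below are hypothetical placeholders, not the paper's data or exact formulation.

```python
# Minimal sketch of penalized least-squares load decomposition.
# Sector profiles and the feeder load are hypothetical placeholders.
import numpy as np

hours = np.arange(24)

# Hypothetical normalized 24-hour load shapes for the three sectors.
residential = 0.5 + 0.5 * np.exp(-((hours - 19) ** 2) / 18.0)   # evening peak
industrial  = np.where((hours >= 7) & (hours <= 17), 1.0, 0.4)  # flat daytime block
commercial  = 0.3 + 0.7 * np.exp(-((hours - 13) ** 2) / 25.0)   # midday peak

A = np.column_stack([residential, industrial, commercial])  # design matrix (24 x 3)

# Hypothetical measured feeder load: an unknown mix of the sectors plus noise.
rng = np.random.default_rng(0)
true_mix = np.array([0.6, 0.25, 0.15])
y = A @ true_mix + rng.normal(0.0, 0.01, size=24)

# Penalized (ridge) least squares: minimize ||A x - y||^2 + lam * ||x||^2.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
x_hat = np.clip(x_hat, 0.0, None)   # sector shares cannot be negative
x_hat /= x_hat.sum()                # normalize to fractions of the feeder load

rmse = np.sqrt(np.mean((A @ x_hat - y) ** 2))  # actual vs estimated profile error
print("estimated sector mix:", np.round(x_hat, 3))
print("profile RMSE:", round(rmse, 4))
```

In practice, the sector shapes would come from measured profiles of known customers, the fit would be repeated monthly, and the residual error (the RMSE above) would serve as the benchmark of accuracy described in the abstract.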
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise all three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) within a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, the effects of condition indicators always emerge, because these indicators are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
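To ground the family of models the thesis extends, the sketch below computes a Weibull-baseline proportional-hazards hazard, and an EHM-style variant in which the condition indicator enters the baseline while operating environment indicators scale it, with reliability obtained by numerically integrating the hazard. All functional forms, coefficients, and indicator values are illustrative assumptions, not the thesis's fitted model.

```python
# Generic sketch of covariate-based hazard computation. The classic PHM
# treats all covariates identically; the EHM-style variant lets the
# condition indicator reshape the baseline hazard over time while the
# environment indicators scale it. All values here are illustrative.
import numpy as np

def weibull_baseline_hazard(t, beta=2.0, eta=3000.0):
    """Weibull baseline hazard h0(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def phm_hazard(t, covariates, coeffs):
    """Classic PHM: h(t|z) = h0(t) * exp(coeffs . z); covariates homogeneous."""
    return weibull_baseline_hazard(t) * np.exp(np.dot(coeffs, covariates))

def condition_at(t):
    """Hypothetical degradation trend: a normalized condition indicator
    (e.g. vibration level) that grows as the asset ages."""
    return 0.4 + 0.4 * t / 2000.0

def ehm_like_hazard(t, environment, cond_coeff, env_coeffs):
    """EHM-style split: the baseline hazard is a function of both time and
    the condition indicator (a response variable); operating environment
    indicators (explanatory variables) accelerate or decelerate it."""
    baseline = weibull_baseline_hazard(t) * np.exp(cond_coeff * condition_at(t))
    return baseline * np.exp(np.dot(env_coeffs, environment))

def reliability(hazard_fn, t_grid, **kwargs):
    """R(t) = exp(-cumulative hazard), by trapezoidal integration of h(t)."""
    h = np.array([hazard_fn(t, **kwargs) for t in t_grid])
    cum_h = np.concatenate([[0.0], np.cumsum(np.diff(t_grid) * (h[:-1] + h[1:]) / 2)])
    return np.exp(-cum_h)

t_grid = np.linspace(1.0, 2000.0, 200)
R_phm = reliability(phm_hazard, t_grid,
                    covariates=np.array([0.8, 0.5]),  # both indicators lumped together
                    coeffs=np.array([1.2, 0.7]))
R_ehm = reliability(ehm_like_hazard, t_grid,
                    environment=np.array([0.5]),      # e.g. normalized load
                    cond_coeff=1.2, env_coeffs=np.array([0.7]))
print("R(2000 h), PHM-style:", round(float(R_phm[-1]), 3))
print("R(2000 h), EHM-style:", round(float(R_ehm[-1]), 3))
```

With a constant condition indicator the two forms coincide; the split matters precisely because condition measurements arrive over time and reshape the baseline hazard, which is the behaviour sketched in condition_at.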
Abstract:
Game jams provide design researchers with an extraordinary opportunity to watch creative teams in action, and recent years have seen a number of projects which seek to illuminate the design process as seen in these events. For example, Gaydos, Harris and Martinez (2011) discuss the opportunity of the jam to expose students to principles of design process and design spaces. Rouse muses on the game jam ‘as radical practice’ and a ‘corrective to game creation as it is normally practiced’. His observations about his own experience in a jam emphasise the same artistic endeavour forefronted earlier, where the experience is about creation that is divorced from the instrumental motivations of commercial game design (Rouse 2011) and where the focus is on process over product. Other participants remark on the social milieu of the event as a critical factor and the collaborative opportunity as a rich site to engage participants in design processes (Shin et al. 2012). Shin et al. are particularly interested in the notion of the site of the process and the ramifications of participants being in the same location. They applaud the more localized event where there is an emphasis on local participation and collaboration. For other commentators, it is specifically the social experience in the place of the jam that is the most important aspect (see Keogh 2011): not the material site but rather the physical embodied experience of ‘being there’ and being part of the event. Participants talk about game jams they have attended in a manner similar to the observations made by Dourish, where the experience is layered on top of the physical space of the event (Dourish 2006). It is as if the event has taken on qualities of place, where we find echoes of Tuan’s description of a particular site having an aura of history that makes it a very different place, redolent and evocative (Tuan 1977). Re-presenting the experience in place has become the goal of the data visualisation project that is now the focus of our own curated 48hr game jam. Taking our cue from the work of Tim Ingold on embodied practice, we have established the 48hr game making challenge as a site for data visualisation research in place.
Abstract:
‘Forced marriages’ involve a woman or girl being abducted and declared the ‘wife’ of her captor without her consent or her family’s consent. The practice generally occurs during wartime and the ‘wife’ is normally subjected to rape, forced impregnation and sexual slavery. Moreover, she is coerced into an intimate relationship with a man who is often the perpetrator of crimes against her and her community. While forced marriages have recently been recognised as a crime against humanity, this Article contends that this does not constitute full recognition of the destructive nature of forced marriages. Instead, this Article mirrors and extends the Akayesu decision that rape can be used as a tool of genocide and maintains that forced marriages can also be a form of genocide.
Abstract:
Encompasses the whole BPM lifecycle, including process identification, modelling, analysis, redesign, automation and monitoring. Class-tested textbook complemented with additional teaching material on the accompanying website. Covers relevant conceptual background, industrial standards, and actionable skills. Business Process Management (BPM) is the art and science of how work should be performed in an organization in order to ensure consistent outputs and to take advantage of improvement opportunities, e.g. reducing costs, execution times or error rates. Importantly, BPM is not about improving the way individual activities are performed, but rather about managing entire chains of events, activities and decisions that ultimately produce added value for an organization and its customers. This textbook encompasses the entire BPM lifecycle, from process identification to process monitoring, covering along the way process modelling, analysis, redesign and automation. Concepts, methods and tools from business management, computer science and industrial engineering are blended into one comprehensive and inter-disciplinary approach. The presentation is illustrated using the BPMN industry standard defined by the Object Management Group and widely endorsed by practitioners and vendors worldwide. In addition to explaining the relevant conceptual background, the book provides dozens of examples, more than 100 hands-on exercises – many with solutions – as well as numerous suggestions for further reading. The textbook is the result of many years of combined teaching experience of the authors, both at the undergraduate and graduate levels as well as in the context of professional training. Students and professionals from both business management and computer science will benefit from the step-by-step style of the textbook and its focus on fundamental concepts and proven methods. Lecturers will appreciate the class-tested format and the additional teaching material available on the accompanying website fundamentals-of-bpm.org.
Abstract:
The management and improvement of business processes are core topics of the information systems discipline. The persistent demand in corporations across all industry sectors for increased operational efficiency and innovation, an emerging set of established and evaluated methods, tools, and techniques, as well as the quickly growing body of academic and professional knowledge, are indicative of the standing that Business Process Management (BPM) enjoys today. During the last decades, intensive research has been conducted on the design, implementation, execution, and monitoring of business processes. Comparatively little attention, however, has been paid to questions related to organizational issues such as the adoption, usage, implications, and overall success of BPM approaches, technologies, and initiatives. This research gap motivated us to edit a corresponding special focus issue for the journal BISE/WIRTSCHAFTSINFORMATIK. We are happy to present a selection of three research papers and a state-of-the-art paper in the scientific section of the issue at hand. As these papers differ in the topics they investigate, the research methods they apply, and the theoretical foundations they build on, the diversity within the BPM field becomes evident. The academic papers are complemented by an interview with Phil Gilbert, IBM’s Vice President for Business Process and Decision Management, who reflects on the relationship between business processes and the data flowing through them, the need to establish a process context for decision making, and the calibration of BPM efforts toward executives who see processes as a means to an end, rather than a first-order concept in its own right.
Abstract:
Purpose: To determine whether neuroretinal function differs in healthy persons with and without common risk gene variants for age-related macular degeneration (AMD) and no ophthalmoscopic signs of AMD, and to compare those findings with persons with manifest early AMD. Methods and Participants: Neuroretinal function was assessed with the multifocal electroretinogram (mfERG) (VERIS, Redwood City, CA) in 32 participants (22 healthy persons with no clinical signs of AMD and 10 early AMD patients). The 22 healthy participants with no AMD were genotyped for risk variants of CFH (rs380390) and/or ARMS2 (rs10490920). We used a slow flash mfERG paradigm (3 inserted frames) and a 103-hexagon stimulus array. Recordings were made with DTL electrodes; fixation and eye movements were monitored online. Trough N1 to peak P1 (N1P1) response densities and P1 implicit times (ITs) were analysed in 5 concentric rings. Results: N1P1 response densities (mean ± SD) for concentric rings 1-3 were on average significantly higher in at-risk genotypes (ring 1: 17.97 nV/deg2 ± 1.9, ring 2: 11.7 nV/deg2 ± 1.3, ring 3: 8.7 nV/deg2 ± 0.7) compared to those without risk (ring 1: 13.7 nV/deg2 ± 1.9, ring 2: 9.2 nV/deg2 ± 0.8, ring 3: 7.3 nV/deg2 ± 1.1) and compared to persons with early AMD (ring 1: 15.3 nV/deg2 ± 4.8, ring 2: 9.1 nV/deg2 ± 2.3, ring 3: 7.3 nV/deg2 ± 1.3) (p<0.05). The group implicit times (P1-ITs) for ring 1 were on average delayed in the early AMD patients (36.4 ms ± 1.0) compared to healthy participants with (35.1 ms ± 1.1) or without risk genotypes (34.8 ms ± 1.3), although these differences were not significant. Conclusion: Neuroretinal function in persons with normal fundi can be differentiated into subgroups based on their genetics. Increased neuroretinal activity in persons who carry AMD risk genotypes may be due to genetically determined subclinical inflammatory and/or histological changes in the retina. Assessment of neuroretinal function in healthy persons genetically susceptible to AMD may be a useful early biomarker before there is clinical manifestation of AMD.
Abstract:
Research in construction innovation highlights the construction industry as having many barriers and much resistance to innovation, and suggests that it needs champions. A hierarchical structural model is presented to assess the impact of the role of the project manager (PM) on the levels of innovation and project performance. The model adopts the structural equation modelling technique and uses survey data collected from PMs and project team members working for general contractors in Singapore. The model fits the observed data well, accounting for 24%, 37% and 49% of the variance in championing behaviour, the level of innovation, and project performance, respectively. The results of this study show the importance of the championing role of PMs in construction innovation. However, in order to increase their effectiveness, such a role should be complemented by their competency and professionalism, tactical use of influence tactics, and decision authority. Moreover, senior management should provide adequate resources and sustained support to innovation and create a conducive environment or organizational culture that nurtures and facilitates the PM’s role in the construction project as a champion of innovation.
Abstract:
QUT Bachelor of Radiation Therapy students progress from first visiting a radiation therapy department to graduation and progression into the NPDP over a span of three years. Although there are clear guidelines as to expected competency level post-NPDP, there is still a variety of perceived levels prior to this. Feedback from both staff and students suggests that different centres, and different staff within centres, have differing opinions of these levels. Indeed, many staff members object to the use of the word “competency” for a pre-NPDP undergraduate, preferring the term “achievement”. While it is acknowledged that students progress at different rates, it is vitally important for equity that staff expectations of students at different academic levels are identical. Provision of guidelines for different stages of progression is essential for equitable assessment, and most assessments, including the NRTAT, are complemented by statements to enable level to be determined. For the University-specific competency assessments, some level of consensus between clinical staff is required, especially where students are placed at a large number of different placement sites. Aims The main aim of this initial study is to gauge staff opinions of levels of student progression in order to judge cross-centre consistency. A secondary objective is to evaluate the degree of correlation between staff seniority and perception of student levels. Informal feedback suggests that staff at or just past NPDP level have a different perception of student competency expectations than more senior staff. If these perceptions change with seniority, it will make agreement on guideline statements more challenging. Study Methods A standard evaluation questionnaire was provided to RT staff participating in ongoing updates to clinical assessment. As part of curriculum development, staff were asked to provide anonymous and optional answers to further questions in order to audit current practice. This involved assigning a level of student progression to different statements relating to tasks or competencies. After data collation, scores were assigned to each level and totals were used to rank statements according to perceived student level. Descriptive statistical analysis was used to identify which statements were easier to assign to a student level and which were more ambiguous. Further sub-analysis was performed for each category of staff seniority to judge differences in perception. The strength of correlation between seniority and expectation was calculated to confirm or contradict the informal feedback. Results By collating different staff perceptions of competencies for different student levels, commonly agreed statements can be used to define achievement level. This presentation outlines the results of the audit, including statements that most staff perceived as relevant to a specific student group and statements that staff found harder to attribute. The strength of correlation between staff perception and seniority will be outlined where statistically significant.
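A hypothetical sketch of the collation described in the Study Methods: each respondent assigns a progression level to each statement, totals rank the statements, response spread flags ambiguous statements, and a rank correlation relates seniority to expectations. The data, level scale, and correlation choice (Spearman) are assumptions for illustration only.

```python
# Hypothetical sketch of the audit collation: staff assign a student level
# to each statement; totals rank the statements, and spread flags ambiguity.
import numpy as np
from scipy import stats

# Rows: staff respondents; columns: assessment statements.
# Values: perceived student level (1 = first year ... 4 = post-NPDP).
responses = np.array([
    [1, 2, 3, 4, 2],
    [1, 3, 3, 4, 2],
    [2, 2, 4, 4, 3],
    [1, 2, 3, 3, 2],
])
seniority = np.array([1, 2, 3, 4])  # hypothetical seniority rank per respondent

totals = responses.sum(axis=0)
ranking = np.argsort(totals)        # statements ordered by perceived level
spread = responses.std(axis=0)      # high spread = harder to attribute

# Correlation between seniority and the mean level a respondent assigns.
mean_level_per_staff = responses.mean(axis=1)
rho, p = stats.spearmanr(seniority, mean_level_per_staff)
print("statement ranking (lowest to highest level):", ranking)
print("per-statement ambiguity (SD):", np.round(spread, 2))
print(f"seniority vs expectation: rho={rho:.2f}, p={p:.3f}")
```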
Abstract:
Under seismic loads, neither the response of the pile nor the response of the ground is independent of the other, contrary to what is normally assumed. In the seismic design of buildings, the dynamic response of a structure is determined by assuming a fixed base on the sub-grade and neglecting the physical interaction between the foundation and the soil profile in which it is embedded. However, the seismic response of pile foundations in vibration-sensitive soil profiles is significantly affected by the behaviour of the supporting soil. This research uses validated Finite Element techniques to simulate the seismic behaviour of pile foundations embedded in multilayered vibration-sensitive soils.
Abstract:
Circulating 25-hydroxyvitamin D (25(OH)D), a marker for vitamin D status, is associated with bone health and possibly cancers and other diseases; yet, the determinants of 25(OH)D status, particularly ultraviolet radiation (UVR) exposure, are poorly understood. Determinants of 25(OH)D were analyzed in a subcohort of 1,500 participants of the US Radiologic Technologists (USRT) Study that included whites (n = 842), blacks (n = 646), and people of other races/ethnicities (n = 12). Participants were recruited monthly (2008-2009) across age, sex, race, and ambient UVR level groups. Questionnaires addressing UVR and other exposures were generally completed within 9 days of blood collection. The relation between potential determinants and 25(OH)D levels was examined through regression analysis in a random two-thirds sample and validated in the remaining one third. In the regression model for the full study population, age, race, body mass index, some seasons, hours outdoors being physically active, and vitamin D supplement use were associated with 25(OH)D levels. In whites, generally, the same factors were explanatory. In blacks, only age and vitamin D supplement use predicted 25(OH)D concentrations. In the full population, the determinants accounted for 25% of circulating 25(OH)D variability, with similar correlations for subgroups. Despite detailed data on UVR and other factors near the time of blood collection, the ability to explain 25(OH)D was modest.
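As an illustration of the analysis strategy (a regression fitted in a random two-thirds sample and validated in the remaining third), here is a minimal sketch with synthetic placeholder data; the predictors, units, and coefficients are invented and are not the USRT data.

```python
# Minimal sketch: fit a linear regression of 25(OH)D on candidate
# determinants in a random two-thirds sample and check explained
# variability in the held-out third. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 1500
X = np.column_stack([
    rng.uniform(25, 80, n),   # age
    rng.uniform(18, 40, n),   # body mass index
    rng.integers(0, 2, n),    # vitamin D supplement use (0/1)
    rng.uniform(0, 20, n),    # hours outdoors physically active
])
beta_true = np.array([-0.1, -0.5, 12.0, 0.6])
y = 60 + X @ beta_true + rng.normal(0, 10, n)   # 25(OH)D, nmol/L (synthetic)

# Random two-thirds / one-third split.
idx = rng.permutation(n)
train, valid = idx[: 2 * n // 3], idx[2 * n // 3:]

# Ordinary least squares on the training sample (intercept included).
Xt = np.column_stack([np.ones(len(train)), X[train]])
coef, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)

# Validate: proportion of 25(OH)D variability explained in the held-out third.
Xv = np.column_stack([np.ones(len(valid)), X[valid]])
resid = y[valid] - Xv @ coef
r2 = 1 - resid.var() / y[valid].var()
print(f"held-out R^2: {r2:.2f}")
```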
Abstract:
This thesis examines the social practice of homework. It explores how homework is shaped by the discourses, policies and guidelines in circulation in a society at any given time, with particular reference to one school district in the province of Newfoundland and Labrador, Canada. This study investigates how contemporary homework reconstitutes the home as a pedagogical site where the power of the institution of schooling circulates regularly from school to home. It examines how the educational system shapes the organization of family life and how family experiences with homework may be different in different sites depending on the accessibility of various forms of cultural capital. This study employs a qualitative approach, incorporating multiple case studies, and is complemented by insights from institutional ethnography and critical discourse analysis. It draws on the theoretical concepts of Foucault, including power and power relations, governmentality and surveillance, as well as Bourdieu’s concepts of economic, social and cultural capital for analysis. It employs concepts from Bourdieu’s work as they have been expanded on by researchers including Reay (1998), Lareau (2000), and Griffith and Smith (2005). The studies of these researchers allowed for an examination of homework as it related to families and mothers’ work. Smith’s (1987, 1999) concepts of ruling relations, mothers’ unpaid labour, and the engine of inequality were also employed in the analysis. Family interviews with ten volunteer families, teacher focus group sessions with 15 teachers from six schools, homework artefacts, school newsletters, homework brochures, and publicly available assessment and evaluation policy documents from one school district were analyzed. From this analysis, key themes emerged and the findings are documented throughout five data analysis chapters. This study shows a change in education in response to a system shaped by standards, accountability and testing. It documents an increased transference of educational responsibility from one educational stakeholder to another. This transference of responsibility shifts downward until it eventually reaches the family in the form of homework and educational activities. Texts in the form of brochures and newsletters, sent home from school, make available to parents specific subject positions that act as instruments of normalization. These subject positions promote a particular ‘ideal’ family that has access to certain types of cultural capital needed to meet the school’s expectations. However, the study shows that these resources are not equally available to all, and some families struggle to obtain what is necessary to complete educational activities in the home. The increase in transference of educational work from the school to the home results in greater work for parents, particularly mothers. As well, consideration is given to mothers’ role in homework and how, in turn, classroom instructional practices are sometimes dependent on the work completed at home, with differential effects for children. This study confirms previous findings that it is mothers who assume the greatest role in the educational trajectory of their children. An important finding in this research is that it is not only middle-class mothers who dedicate extensive time working hard to ensure their children’s educational success; working-class mothers also make substantial contributions of time and resources to their children’s education.
The assignments and educational activities distributed as homework require parents to have knowledge of technical school pedagogy in order to help their children. Much of the homework sent home from schools is in the area of literacy, particularly reading, but requires parents to do more than read with children. A key finding is that the practices of parents are changing and being reconfigured by the expectations of schools in regard to reading. Parents are now required to monitor and supervise children’s reading, as well as help children complete reading logs, written reading responses, and follow-up questions. The reality of family life as discussed by the participants in this study does not match the ‘ideal’ portrayed in the educational documents. Homework sessions often create frustrations and tensions between parents and children. Some of the greatest struggles for families were created by mathematics homework, homework for those enrolled in the French Immersion program, and the work required to complete Literature, Heritage and Science Fair projects. Even when institutionalized and objectified capital was readily available, many families still encountered struggles when trying to carry out the assigned educational tasks. This thesis argues that homework and education-related activities play out differently in different homes. Awareness of this may assist educators to better understand and appreciate the vast differences between families and the ways in which each family can contribute to their children’s educational trajectory.