911 results for Source to sink study
Abstract:
Background: Although the potential to reduce hospitalisation and mortality in chronic heart failure (CHF) is well reported, the feasibility of receiving healthcare by structured telephone support or telemonitoring is not. Aims: To determine the adherence, adaptation and acceptability to a national nurse-coordinated telephone-monitoring CHF management strategy: the Chronic Heart Failure Assistance by Telephone Study (CHAT). Methods: Triangulation of descriptive statistics, feedback surveys and qualitative analysis of clinical notes. The cohort comprised the standard care plus intervention (SC + I) participants who completed the first year of the study. Results: 30 GPs (70% rural) randomised to SC + I recruited 79 eligible participants, of whom 60 (76%) completed the full 12-month follow-up period. During this time 3619 calls were made into the CHAT system (mean 45.81, SD 79.26; range 0-369). Overall, adherence to the study protocol was 65.8% (95% CI 0.54-0.75; p = 0.001); however, among the 60 participants who completed the 12-month follow-up period, adherence was significantly higher at 92.3% (95% CI 0.82-0.97, p ≤ 0.001). Only 3% of this elderly group (mean age 74.7 ± 9.3 years) were unable to learn or competently use the technology. Participants rated CHAT with a total acceptability rate of 76.45%. Conclusion: This study shows that elderly CHF patients can adapt quickly, find telephone-monitoring an acceptable part of their healthcare routine, and are able to maintain good adherence for at least 12 months.
Abstract:
Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) from an atom (the ionisation process) and the synthesising of molecules to form ions or other molecules are among those reactions that necessitate large amounts of instantaneous power. In addition to such decomposition processes, there have recently been requests for pulsed power in other areas such as the combination of molecules (e.g. fusion, material joining), radiation sources (e.g. electron beams, lasers and radar), explosions (e.g. concrete recycling), and wastewater, exhaust gas and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids) and, in some cases, to form the plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in areas that need a flexible pulse modulator to vary pulse features, in order to investigate the influence of these variations on the application. In addition, there is the need to prevent the waste of the considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators which utilise different accumulation techniques, including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as the spark gap and the hydrogen thyratron) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably low rise times. However, they also suffer from serious drawbacks such as their low efficiency, reliability and repetition rate, and their short life span. They are also bulky, heavy and expensive. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, reasonably priced and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which, in some cases, cannot satisfy the application's requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high-voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators have been considered as the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives:
• To innovate and develop novel solid-state based topologies for pulsed power generation.
• To improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structure and control algorithms.
The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. As a result of this study, it appears that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies have been undertaken. A variable current source was considered to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. Thereby, based on the evidence acquired from this examination, transient effects on pulse rise time are ruled out. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so that it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only gives the supplier the feasibility of stopping the load supply process at any stage (and consequently saving energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. The determination of components and the energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled simply, and simulation studies have been carried out with the simplified models. Experimental assessments were also executed on the implemented hardware, and these assessments verified the initial analysis. The details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated-gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted by diode rectifiers to adjust MGs for proper voltage sharing.
However, despite utilising solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Considering a number of charging techniques, the resonant phenomenon is adopted in one proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment at which the current conducted through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high-voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to that capacitor. The isolation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of required fast switching devices in both designs is reduced to half of the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved; this decrease leads to a reduction in conduction losses. Associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analysis and approaches for these topologies.
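Two textbook circuit relations give quantitative context for the observations above; they are general background with generic symbols, not formulas quoted from the thesis. First, for the Chapter 6 study of a current source pumping a constant current I into an ideal capacitive load of capacitance C,
\[ i_C = C\,\frac{dv}{dt} \;\Rightarrow\; \frac{dv}{dt} = \frac{I}{C}, \qquad t_r \approx \frac{C\,V_p}{I}, \]
where t_r is the time needed to reach the pulse voltage V_p; this is consistent with the report that dissimilar pumped current levels produce dissimilar dv/dt at the output. Second, for the resonant charging scheme, lossless LC charging of an initially discharged capacitor from a DC source V_in gives
\[ v_C(t) = V_{in}\bigl(1 - \cos\omega_0 t\bigr), \qquad i_L(t) = \frac{V_{in}}{Z_0}\,\sin\omega_0 t, \qquad \omega_0 = \frac{1}{\sqrt{LC}}, \; Z_0 = \sqrt{L/C}, \]
so at \(\omega_0 t = \pi\) the capacitor voltage peaks at \(2V_{in}\) while the inductor (and hence switch) current passes through zero; triggering at that instant is what allows charging to twice the input voltage with near-zero-current switching and therefore significantly reduced switching losses.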
Abstract:
A hospital consists of a number of wards, units and departments that provide a variety of medical services and interact on a day-to-day basis. Nearly every department within a hospital schedules patients for the operating theatre (OT) and most wards receive patients from the OT following post-operative recovery. Because of the interrelationships between units, disruptions and cancellations within the OT can have a flow-on effect to the rest of the hospital. This often results in dissatisfied patients, nurses and doctors, escalating waiting lists, inefficient resource usage and undesirable waiting times. The objective of this study is to use Operational Research methodologies to enhance the performance of the operating theatre by improving elective patient planning using robust scheduling and improving the overall responsiveness to emergency patients by solving the disruption management and rescheduling problem. OT scheduling considers two types of patients: elective and emergency. Elective patients are selected from a waiting list and scheduled in advance based on resource availability and a set of objectives. This type of scheduling is referred to as ‘offline scheduling’. Disruptions to this schedule can occur for various reasons including variations in length of treatment, equipment restrictions or breakdown, unforeseen delays and the arrival of emergency patients, which may compete for resources. Emergency patients consist of acute patients requiring surgical intervention or in-patients whose conditions have deteriorated. These may or may not be urgent and are triaged accordingly. Most hospitals reserve theatres for emergency cases, but when these or other resources are unavailable, disruptions to the elective schedule result, such as delays in surgery start time, elective surgery cancellations or transfers to another institution. Scheduling of emergency patients and the handling of schedule disruptions is an ‘online’ process typically handled by OT staff. This means that decisions are made ‘on the spot’ in a ‘real-time’ environment. There are three key stages to this study: (1) Analyse the performance of the operating theatre department using simulation. Simulation is used as a decision support tool and involves changing system parameters and elective scheduling policies and observing the effect on the system’s performance measures; (2) Improve the viability of elective schedules by making offline schedules more robust to differences between expected treatment times and actual treatment times, using robust scheduling techniques. This will improve access to care and the responsiveness to emergency patients; (3) Address the disruption management and rescheduling problem (which incorporates emergency arrivals) using innovative robust reactive scheduling techniques. The robust schedule will form the baseline schedule for the online robust reactive scheduling model.
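As a minimal illustration of the ‘robust scheduling’ idea in stage (2) above, the sketch below pads each elective case’s expected duration with a slack buffer proportional to its variability before fixing start times. The case mix, the buffer rule and all names are illustrative assumptions, not the scheduling model developed in this study.

    # Illustrative robust elective scheduling sketch (assumed case data and buffer rule).
    from dataclasses import dataclass

    @dataclass
    class Case:
        name: str
        expected_min: float  # expected treatment time in minutes
        std_min: float       # variability (standard deviation) of treatment time in minutes

    def robust_schedule(cases, session_start=0.0, k=1.0):
        """Plan start/end times using expected duration plus k standard deviations of slack."""
        t = session_start
        plan = []
        for c in cases:
            planned_duration = c.expected_min + k * c.std_min  # protective slack buffer
            plan.append((c.name, t, t + planned_duration))
            t += planned_duration
        return plan

    cases = [Case("hip replacement", 120, 30), Case("cataract", 45, 10), Case("hernia repair", 60, 20)]
    for name, start, end in robust_schedule(cases, k=1.0):
        print(f"{name}: planned minutes {start:.0f}-{end:.0f}")

Larger values of k make the plan more resilient to overruns and emergency arrivals at the cost of lower theatre utilisation; managing that trade-off is precisely what formal robust scheduling techniques address.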
Abstract:
Background: Although class attendance is linked to academic performance, questions remain about what determines students’ decisions to attend or miss class. Aims: In addition to the constructs of a common decision-making model, the theory of planned behaviour, the present study examined the influence of student role identity and university student (in-group) identification for predicting both the initiation and maintenance of students’ attendance at voluntary peer-assisted study sessions in a statistics subject. Sample: University students enrolled in a statistics subject were invited to complete a questionnaire at two time points across the academic semester. A total of 79 university students completed questionnaires at the first data collection point, with 46 students completing the questionnaire at the second data collection point. Method: Twice during the semester, students’ attitudes, subjective norm, perceived behavioural control, student role identity, in-group identification, and intention to attend study sessions were assessed via on-line questionnaires. Objective measures of class attendance records for each half-semester (or ‘term’) were obtained. Results: Across both terms, students’ attitudes predicted their attendance intentions, with intentions predicting class attendance. Earlier in the semester, in addition to perceived behavioural control, both student role identity and in-group identification predicted students’ attendance intentions, with only role identity influencing intentions later in the semester. Conclusions: These findings highlight the possible chronology that different identity influences have in determining students’ initial and maintained attendance at voluntary sessions designed to facilitate their learning.
Abstract:
Driving and using prescription medicines that have the potential to impair driving is an emerging research area. To date, it is characterised by a limited (although growing) number of studies and methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects related to concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study has found a low level of awareness, knowledge and risk perceptions among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). The lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia these consist of the use of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients primarily lies with the pharmacist when dispensing relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that, in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare increasingly being provided in outpatient settings (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram and indicates the level of potential impairment in relation to driving performance through the use of colour and advice on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity level of potential effects, and does not provide specific guidelines on the attitude or actions that the individual should adopt towards driving. It is reliant upon the patient to be vigilant in self-monitoring effects, to understand the potential ways in which they may be affected and how serious these effects may be, and to adopt the appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who receive appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines.
It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy. It includes an exploration and comparison of the effect of the Australian and French medication warning systems on medication user knowledge, attitudes, beliefs and behaviour, and explores whether there are areas in which the Australian system may be improved by including any beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high. The ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as being potentially at risk of driving impaired, based on their reported recent use of medications requiring an ancillary warning label and level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health protective action. Participants did not express a particular need for being advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that the participants would be significantly more likely to follow advice from a doctor than a pharmacist. High levels of knowledge in terms of general principles about effects of alcohol, illicit drugs and combinations of substances, and related health and crash risks were revealed. This may reflect a sample-specific effect. The professional guidelines for hospital pharmacists make it essential that advisory labels are applied to medicines where applicable and that warning advice is given to all patients on medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which has extended constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting compliance intentions and behaviour with the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving if an effect from the medication was noticed were useful. Not all the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice.
It is possible that the limited predictive power of a number of constructs including risk perceptions is due to the small sample size obtained at follow-up on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation. The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differed according to visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons concerning the performance of the warnings. While a large amount of research exists concerning warning effectiveness, there is little research that has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance have been extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core focus of investigation in this thesis. Among the Australian sample, the size of the warning label and text, and the red colour, were the most visually important characteristics. The pictogram used in the French labels was also rated highly, and was salient for a large proportion of the sample. According to the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall, and a further important finding was that higher warning label recall was associated with an increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label compared with the Australian Label 1. These are potentially important findings for the Australian context regarding the current driving impairment warnings displayed on medication. The findings raise other important implications for the Australian labelling context. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels. The French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived where the situation involves the consumer's self-assessment of their impairment.
The differences in the assignment of responsibility by the Australian (the consumer assesses and decides) and French (the doctor assesses and decides) approaches for the decision to drive while taking medication raise the core question of who is most able to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample size and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications. The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These include enhancing the role that prescribing doctors have in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and reinforcing the warning message. More broadly, it is suggested that there would be benefit in a wider dissemination of research-based information on increased crash risk, and in systematic monitoring and publicity about the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects of medications and their interactions with existing medical conditions and other substances on driving skills, the effects of variations in warning label design, individual behaviours and characteristics (particularly among those groups who are dependent upon prescription medication), and validation of consumer self-assessment of impairment.
Abstract:
It is acknowledged around the world that many university students struggle with learning to program (McCracken et al., 2001; McGettrick et al., 2005). In this paper, we describe how we have developed a research programme to systematically study and incrementally improve our teaching. We have adopted a research programme with three elements: (1) a theory that provides an organising framework for defining the type of phenomena and data of interest, (2) data on how the class as a whole performs on formative assessment tasks that are framed from within the organising framework, and (3) data from one-on-one think aloud sessions, to establish why students struggle with some of those in-class formative assessment tasks. We teach introductory computer programming, but this three-element structure of our research is applicable to many areas of engineering education research.
Abstract:
In this thesis, I contribute to the study of how arrangements are made in social interaction. Using conversation analysis, I examine a corpus of 375 telephone calls between employees and clients of three Community Home Care (CHC) service agencies in metropolitan Adelaide, South Australia. My analysis of the CHC data corpus draws upon existing empirical findings within conversation analysis in order to generate novel findings about how people make arrangements with one another, and some of the attendant considerations that parties to such an activity can engage in:
Prospective informings as remote proposals for a future arrangement – Focusing on how employees make arrangements with clients, I show how the employees in the CHC data corpus use ‘prospective informings’ to detail a future course of action that will involve the recipient of that informing. These informings routinely occasion a double-paired sequence, where informers pursue a response to their informing. This pursuit often occurs even after recipients have provided an initial response. This practice for making arrangements has been previously described by Houtkoop (1987) as ‘remote proposing.’ I develop Houtkoop’s analysis to show how an informing of a future arrangement can be recompleted, with response solicitation, as a proposal that is contingent upon a recipient’s acceptance.
Participants’ understanding of references to non-present third parties – In the process of making arrangements, references are routinely made to non-present third parties. In the CHC data corpus, these third parties are usually care workers. Prior research (e.g., Sacks & Schegloff, 1979; Schegloff, 1996b) explains how the use of ‘recognitional references’ (such as the bare name ‘Kerry’) conveys to recipients that they should be able to locate the referent from amongst their acquaintances. Conversely, the use of ‘non-recognitional references’ (such as the description ‘a lady called Kerry’) conveys that recipients are unacquainted with the referent. I examine instances where the selection of a recognitional or non-recognitional reference form is followed by a recipient initiating repair on that reference. My analysis provides further evidence that the existing analytic account of these references corresponds to the way in which participants themselves make sense of them. My analysis also advances an understanding of how repair can be used, by recipients, to indicate the inappositeness of a prior turn.
Post-possible-completion accounts – In a case study of a problematic interaction, I examine a misunderstanding that is not resolved within the repair space, the usual defence of intersubjectivity in interaction (cf. Schegloff, 1992b). Rather, I explore how the source of trouble is addressed, outside of the sequence of its production, with a ‘post-possible-completion account.’ This account specifies the basis of a misunderstanding and yet, unlike repair, does so without occasioning a revised response to a trouble-source turn.
By considering various aspects of making arrangements in social interaction, I highlight some of the rich order that underpins the maintenance of human relationships across time. In the concluding section of this thesis I review this order, while also discussing practical implications of this analysis for CHC practice.
Abstract:
Aim: To describe the positioning of patients managed in an intensive care unit (ICU); assess how frequently these patients were repositioned; and determine if any specific factors influenced how, why or when patients were repositioned in the ICU. Background: Alterations in the body position of ICU patients are important for patient comfort and are believed to prevent and/or treat pressure ulcers, improve respiratory function and combat the adverse effects of immobility. There is a paucity of research on the positioning of critically ill patients in Saudi Arabian ICUs. Design and Methods: A prospective observational study was undertaken. Participant demographic data were collected, as were clinical factors (i.e. ventilation status, primary diagnosis, co-morbidities and Ramsay sedation score) and organizational factors (i.e. time of day, type of mattress or beds used, nurse/patient ratio and the patient's position). Clinical and some organizational data were recorded over a continuous 48-hour period. Results: Twenty-eight participants were recruited to the study. No participant was managed in either a flat or prone position. Obese participants were most likely to be managed in a supine position. The mean time between turns was two hours. There was no significant association between the mean time between turns and the recorded variables related to patients' demographic and organizational considerations. Conclusion: Results indicate that patient positioning in the ICU was a direct result of unit policy - it appeared that patients were not repositioned based upon evaluation of their clinical condition but rather according to a two-hour ICU timetable.
Abstract:
An environmentally sustainable and thus green business process is one that delivers organizational value whilst also exerting a minimal impact on the natural environment. Recent works from the field of Information Systems (IS) have argued that information systems can contribute to the design and implementation of sustainable business processes. While prior research has investigated how information systems can be used in order to support sustainable business practices, there is still a void as to the actual changes that business processes have to undergo in order to become environmentally sustainable, and the specific role that information systems play in enabling this change. In this paper, we provide a conceptualization of environmentally sustainable business processes, and discuss the role of functional affordances of information systems in enabling both incremental and radical changes in order to make processes environmentally sustainable. Our conceptualization is based on (a) a fundamental definition of the concept of environmental sustainability, grounded in two basic components: the environmental source and sink functions of any project or activity, and (b) the concept of functional affordances, which describe the potential uses originating in the material properties of information systems in relation to their use context. In order to illustrate the application of our framework and provide a first evaluation, we analyse two examples from prior research where information systems impacted on the sustainability of business processes.
Abstract:
Deterministic computer simulation of physical experiments is now a common technique in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct. Complex computer models or codes, rather than physical experiments, lead to the study of computer experiments, which are used to investigate many scientific phenomena. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing technique in statistical experimental design. This paper aims to discuss some practical issues when designing a computer simulation and/or experiments for manufacturing systems. A case study approach is reviewed and presented.
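To make "a number of runs of the computer code with different input choices" concrete, a space-filling input design can be generated as in the sketch below; the three factors, their ranges and the run count are hypothetical examples rather than details taken from the paper.

    # Hypothetical example: a Latin hypercube design for a 3-factor computer experiment.
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=3, seed=42)     # three input factors
    unit_design = sampler.random(n=20)             # 20 runs of the computer code, in [0, 1)
    # Assumed factor ranges, e.g. machine speed, buffer size, number of operators.
    lower, upper = [50.0, 1.0, 2.0], [150.0, 10.0, 8.0]
    design = qmc.scale(unit_design, lower, upper)  # rescale each column to its factor range
    print(design[:5])                              # first five input settings to simulate

Each row of the resulting design is one input setting at which the deterministic simulation code would be run.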
Abstract:
The natural disasters that frequently hit Indonesia include floods, severe droughts, tsunamis, earthquakes, volcanic eruptions, landslides, windstorms and forest fires. The impact of these natural disasters is severe, affecting the quality of life of the community through the breakdown of public assets that are one source of public service delivery. This paper aims to emphasise the importance of natural disaster risk information in relation to public asset management in the Indonesian Central Government, particularly at the asset planning stage, where asset decisions are made as the gateway into the whole public asset management process. A case study in the Ministry of Finance Indonesia, as the central government public asset manager, and in five line ministries/governmental agencies, as public asset users, was used as the approach to achieve the research objective. The case study employed three data collection techniques, i.e. interviews, observations and document archival, which were analysed using a content analysis approach. The results of the study indicate that Indonesia's geographical position exposes many public infrastructure assets to high vulnerability to natural disasters. Information on natural-disaster trends and predictions to identify and measure the risks is available; however, such information is not utilised and integrated into the process of public infrastructure asset planning as the gateway to the whole public asset management process. Therefore, in order to accommodate and incorporate this natural disaster risk information into public asset management processes, particularly in public asset planning, a public asset performance measurement framework should be adopted and applied in the process as one source for making decisions on infrastructure asset planning. Findings from this study provide useful input for the Ministry of Finance as public asset manager, and for scholars and private asset management practitioners in Indonesia, to establish natural disaster risk awareness in public infrastructure asset management processes.
Abstract:
Purpose Virally mediated head and neck cancers (VMHNC) often present with nodal involvement and are highly radioresponsive, meaning that treatment plan adaptation during radiotherapy (RT) is required in a subset of patients. We sought to determine potential risk profiles and a corresponding adaptive treatment strategy for these patients. Methodology 121 patients with virally mediated, node-positive nasopharyngeal (Epstein-Barr virus positive) or oropharyngeal (human papillomavirus positive) cancers receiving curative-intent RT were reviewed. The type, frequency and timing of adaptive interventions, including source-to-skin distance (SSD) corrections, re-scanning and re-planning, were evaluated. Patients were reviewed based on the maximum size of the dominant node to assess the need for plan adaptation. Results Forty-six patients (38%) required plan adaptation during treatment. The median fraction at which the adaptive intervention occurred was 26 for SSD corrections and 22 for re-planning CTs. A trend toward 3 risk profile groupings was discovered: 1) Low risk with minimal need (< 10%) for adaptive intervention (dominant pre-treatment nodal size of ≤ 35 mm), 2) Intermediate risk with possible need (< 20%) for adaptive intervention (dominant pre-treatment nodal size of 36 mm – 45 mm) and 3) High risk with increased likelihood (> 50%) for adaptive intervention (dominant pre-treatment nodal size of ≥ 46 mm). Conclusion In this study, patients with VMHNC and a maximum dominant nodal size of > 46 mm were identified as being at higher risk of requiring re-planning during a course of definitive RT. Findings will be tested in a future prospective adaptive RT study.
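Expressed as a simple triage rule, the reported risk groupings could be coded as in the sketch below; the function name and return labels are illustrative only, and the thresholds are simply those quoted above (whole-millimetre nodal sizes assumed).

    # Illustrative mapping of dominant pre-treatment nodal size (mm) to the reported risk groupings.
    def adaptive_rt_risk_group(dominant_node_mm: int) -> str:
        if dominant_node_mm <= 35:
            return "low risk: minimal need (<10%) for adaptive intervention"
        elif dominant_node_mm <= 45:
            return "intermediate risk: possible need (<20%) for adaptive intervention"
        else:
            return "high risk: increased likelihood (>50%) for adaptive intervention"

    print(adaptive_rt_risk_group(42))  # intermediate risk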
Abstract:
The candidate gene approach has been a pioneer in the field of genetic epidemiology, identifying risk alleles and their association with clinical traits. With the advent of rapidly changing technology, there has been an explosion of in silico tools available to researchers, giving them fast, efficient resources and reliable strategies important for finding causal gene variants for candidate or genome-wide association studies (GWAS). In this review, following a description of candidate gene prioritisation, we summarise the approaches to single nucleotide polymorphism (SNP) prioritisation and discuss the tools available to assess the functional relevance of the risk variant with consideration of its genomic location. The strategy and the tools discussed are applicable to any study investigating genetic risk factors associated with a particular disease. Some of the tools are also applicable for the functional validation of variants relevant to the era of GWAS and next generation sequencing (NGS).
The electrochemical corrosion behaviour of quaternary gold alloys when exposed to 3.5% NaCl solution
Abstract:
Lower carat gold alloys, specifically 9 carat gold alloys, containing less than 40 % gold and alloying additions of silver, copper and zinc, are commonly used in many jewellery applications to offset the high costs and poor mechanical properties associated with pure gold. While gold is considered to be chemically inert, the presence of active alloying additions raises concerns about certain forms of corrosion, particularly selective dissolution of these alloys. The purpose of this study was to systematically investigate the corrosion behaviour of a series of quaternary gold–silver–copper–zinc alloys using dc potentiodynamic scanning in a saline (3.5 % NaCl) environment. Full anodic/cathodic scans were conducted to determine the overall corrosion characteristics of the alloys, followed by selective anodic scans and subsequent morphological and compositional analysis of the alloy surface and corroding media to determine the extent of selective dissolution. Varying degrees of selective dissolution and associated corrosion rates were observed after anodic polarisation in 3.5 % NaCl, depending on the alloy composition. The corrosion behaviour of the alloys was determined by the extent of anodic reactions which induce (1) formation of oxide scales on the alloy surface and/or (2) dissolution of Zn and Cu species. In general, the improved corrosion characteristics of alloy #3 were attributed to the Zn/Cu composition of the alloy and the resulting favourable microstructure, which promotes the formation of protective oxide/chloride scales and reduces the extent of Cu and Zn dissolution.
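For general context on how the corrosion rates referred to above are conventionally obtained from potentiodynamic data (a standard Faraday's-law conversion along the lines of ASTM G102, not a formula stated in this abstract):
\[ \mathrm{CR\,(mm/yr)} \approx 3.27\times10^{-3}\,\frac{i_{corr}\,(\mu\mathrm{A/cm^2})\cdot EW}{\rho\,(\mathrm{g/cm^3})}, \]
where \(i_{corr}\) is the corrosion current density estimated from Tafel extrapolation of the anodic/cathodic scans, \(EW\) is the alloy equivalent weight and \(\rho\) its density.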
Abstract:
BACKGROUND: The prevalence of protein-energy malnutrition in older adults is reported to be as high as 60% and is associated with poor health outcomes. Inadequate feeding assistance and mealtime interruptions may contribute to malnutrition and poor nutritional intake during hospitalisation. Despite being widely implemented in practice in the United Kingdom and increasingly in Australia, there have been few studies examining the impact of strategies such as Protected Mealtimes and dedicated feeding assistant roles on nutritional outcomes of elderly inpatients. AIMS: The aim of this research was to implement and compare three system-level interventions designed to specifically address mealtime barriers and improve energy intakes of medical inpatients aged ≥65 years. This research also aimed to evaluate the sustainability of any changes to mealtime routines six months post-intervention and to gain an understanding of staff perceptions of the post-intervention mealtime experience. METHODS: Three mealtime assistance interventions were implemented in three medical wards at Royal Brisbane and Women's Hospital: AIN-only: Additional assistant-in-nursing (AIN) with dedicated nutrition role. PM-only: Multidisciplinary approach to meals, including Protected Mealtimes. PM+AIN: Combined intervention: AIN + multidisciplinary approach to meals. An action research approach was used to carefully design and implement the three interventions in partnership with ward staff and managers. Significant time was spent in consultation with staff throughout the implementation period to facilitate ownership of the interventions and increase likelihood of successful implementation. A pre-post design was used to compare the implementation and nutritional outcomes of each intervention to a pre-intervention group. Using the same wards, eligible participants (medical inpatients aged ≥65 years) were recruited to the preintervention group between November 2007 and March 2008 and to the intervention groups between January and June 2009. The primary nutritional outcome was daily energy and protein intake, which was determined by visually estimating plate waste at each meal and mid-meal on Day 4 of admission. Energy and protein intakes were compared between the pre and post intervention groups. Data were collected on a range of covariates (demographics, nutritional status and known risk factors for poor food intake), which allowed for multivariate analysis of the impact of the interventions on nutritional intake. The provision of mealtime assistance to participants and activities of ward staff (including mealtime interruptions) were observed in the pre-intervention and intervention groups, with staff observations repeated six months post-intervention. Focus groups were conducted with nursing and allied health staff in June 2009 to explore their attitudes and behaviours in response to the three mealtime interventions. These focus group discussions were analysed using thematic analysis. RESULTS: A total of 254 participants were recruited to the study (pre-intervention: n=115, AIN-only: n=58, PM-only: n=39, PM+AIN: n=42). Participants had a mean age of 80 years (SD 8), and 40% (n=101) were malnourished on hospital admission, 50% (n=108) had anorexia and 38% (n=97) required some assistance at mealtimes. Occasions of mealtime assistance significantly increased in all interventions (p<0.01). However, no change was seen in mealtime interruptions. 
No significant difference was seen in mean total energy and protein intake between the pre-intervention and intervention groups. However, when total kilojoule intake was compared with estimated requirements at the individual level, participants in the intervention groups were more likely to achieve adequate energy intake (OR=3.4, p=0.01), with no difference noted between interventions (p=0.29). Despite small improvements in nutritional adequacy, the majority of participants in the intervention groups (76%, n=103) had inadequate energy intakes to meet their estimated energy requirements. Patients with cognitive impairment or feeding dependency appeared to gain substantial benefit from mealtime assistance interventions. The increase in occasions of mealtime assistance by nursing staff during the intervention period was maintained six months post-intervention. Staff focus groups highlighted the importance of clearly designating and defining mealtime responsibilities in order to provide adequate mealtime care. While the purpose of the dedicated feeding assistant was to increase levels of mealtime assistance, staff indicated that responsibility for mealtime duties may have merely shifted from nursing staff to the assistant. Implementing the multidisciplinary interventions empowered nursing staff to "protect" the mealtime from external interruptions, but further work is required to empower nurses to prioritise mealtime activities within their own work schedules. Staff reported an increase in the profile of nutritional care on all wards, with additional non-nutritional benefits noted including improved mobility and functional independence, and better identification of swallowing difficulties. IMPLICATIONS: The PhD research provides clinicians with practical strategies to immediately introduce change to deliver better mealtime care in the hospital setting, and, as such, has initiated local and state-wide roll-out of mealtime assistance programs. Improved nutritional intakes of elderly inpatients were observed; however, given the modest effect size and reducing lengths of hospital stay, better nutritional outcomes may be achieved by targeting the hospital-to-home transition period. Findings from this study suggest that mealtime assistance interventions for elderly inpatients with cognitive impairment and/or functional dependency show promise.