Abstract:
In this research I have examined how ePortfolios can be designed for Music postgraduate study through a practice-led research enquiry. This process involved designing two Web 2.0 ePortfolio systems for a group of five postgraduate music research students. The design process revolved around the application of an iterative methodology called Software Development as Research (SoDaR) that seeks to simultaneously develop design and pedagogy. The approach to designing these ePortfolio systems applied four theoretical protocols to examine the use of digitised artefacts in ePortfolio systems to enable a dynamic and inclusive dialogue around representations of the students' work. The research and design process involved an analysis of existing software and literature, with a focus upon identifying the affordances of available Web 2.0 software and the applications of these ideas within 21st-century life. The five postgraduate music students each posed different needs in relation to the management of digitised artefacts and the communication of their work amongst peers, supervisors and public audiences. An ePortfolio was developed for each of them that was flexible enough to address their needs within the university setting. However, in the data-gathering phase of this first SoDaR iteration I identified aspects of the university context that presented a negative case, impacting upon the design and usage of the ePortfolios and preventing uptake. Whilst the portfolio itself functioned effectively, the university policies and technical requirements prevented serious use. The negative case analysis of the case study revealed that the Access and Control and the Implementation, Technical and Policy Constraints protocols were limiting user uptake. Participant feedback from the semi-structured interviews carried out as part of this study revealed that, whilst the participants did not use the ePortfolio system I designed, each student was employing Web 2.0 social networking and storage processes in their lives and research. In the subsequent iterations I then designed a more ‘ideal’ system that could be applied outside of the university context and that draws upon these resources. In conclusion I suggest recommendations about ePortfolio design that consider what the applications of the theoretical protocols reveal about creative arts settings. The transferability of these recommendations is of course dependent upon the reapplication of the theoretical protocols in a new context. To address the mobility of ePortfolio design between institutions and wider settings I have also designed a prototype for a business-card-sized USB portal for the artists’ ePortfolio. This research project is not a static one; it stands as an evolving design for a Web 2.0 ePortfolio that seeks to respond to users’ needs, institutional and professional contexts and the development of software that can be incorporated within the design. What it potentially provides to creative artists is an opportunity to have a dialogue about art, with artefacts of the artist's products and processes included in that discussion.
Abstract:
Recently it has been shown that the consumption of a diet high in saturated fat is associated with impaired insulin sensitivity and an increased incidence of type 2 diabetes. In contrast, diets that are high in monounsaturated fatty acids (MUFAs) or polyunsaturated fatty acids (PUFAs), especially very long chain n-3 fatty acids (FAs), are protective against disease. However, the molecular mechanisms by which saturated FAs induce the insulin resistance and hyperglycaemia associated with metabolic syndrome and type 2 diabetes are not clearly defined. It is possible that saturated FAs may act through alternative mechanisms, compared to MUFAs and PUFAs, to regulate hepatic gene expression and metabolism. It is proposed that, like MUFAs and PUFAs, saturated FAs regulate the transcription of target genes. To test this hypothesis, hepatic gene expression analysis was undertaken in a human hepatoma cell line, Huh-7, after exposure to the saturated FA palmitate. These experiments showed that palmitate is an effective regulator of gene expression for a wide variety of genes. A total of 162 genes were differentially expressed in response to palmitate. These changes not only affected the expression of genes related to nutrient transport and metabolism, they also extended to other cellular functions including cytoskeletal architecture, cell growth, protein synthesis and the oxidative stress response. In addition, this thesis has shown that palmitate exposure altered the expression patterns of several genes that have previously been identified in the literature as markers of risk of disease development, including cardiovascular disease (CVD), hypertension, obesity and type 2 diabetes. The altered gene expression patterns associated with an increased risk of disease include apolipoprotein-B100 (apo-B100), apo-CIII, plasminogen activator inhibitor 1, insulin-like growth factor-I and insulin-like growth factor binding protein 3. This thesis reports the first observation that palmitate directly signals in cultured human hepatocytes to regulate expression of genes involved in energy metabolism as well as other important genes. Prolonged exposure to long-chain saturated FAs reduces glucose phosphorylation and glycogen synthesis in the liver. Decreased glucose metabolism leads to elevated rates of lipolysis, resulting in increased release of free FAs. Free FAs have a negative effect on insulin action on the liver, which in turn results in increased gluconeogenesis and systemic dyslipidaemia. It has been postulated that disruption of glucose transport and insulin secretion by prolonged excessive FA availability might be a non-genetic factor that has contributed to the staggering rise in the prevalence of type 2 diabetes. As glucokinase (GK) is a key regulatory enzyme of hepatic glucose metabolism, changes in its activity may alter flux through the glycolytic and de novo lipogenic pathways and result in hyperglycaemia and ultimately insulin resistance. This thesis investigated the effects of saturated FAs on the promoter activity of the glycolytic enzyme GK and on various transcription factors that may influence the regulation of GK gene expression. These experiments have shown that the saturated FA palmitate is capable of decreasing GK promoter activity. In addition, quantitative real-time PCR has shown that palmitate incubation may also regulate GK gene expression through a known FA-sensitive transcription factor, sterol regulatory element binding protein-1c (SREBP-1c), which upregulates GK transcription.
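As an illustration only of how a differentially expressed gene list of this kind is commonly derived, the sketch below applies a fold-change cut-off and a t-test to synthetic expression data; the platform, gene names, replicate numbers and thresholds are hypothetical and are not those used in the thesis.

```python
# Hedged sketch: flag genes as differentially expressed when they pass both a
# fold-change and a t-test criterion between control and palmitate-treated samples.
# All data below are synthetic; names and thresholds are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(200)]
control = rng.normal(10.0, 1.0, size=(len(genes), 4))   # 4 control replicates (log2-like scale)
treated = rng.normal(10.0, 1.0, size=(len(genes), 4))   # 4 palmitate-treated replicates
treated[:20] += 1.5                                     # simulate a block of regulated genes

log2_fc = treated.mean(axis=1) - control.mean(axis=1)   # log2 fold change per gene
pvals = stats.ttest_ind(treated, control, axis=1).pvalue
deg = [g for g, fc, p in zip(genes, log2_fc, pvals) if abs(fc) >= 1.0 and p < 0.05]
print(f"{len(deg)} genes flagged as differentially expressed")
```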
To parallel the investigations into the mechanisms of FA molecular signalling, further studies of the effect of FAs on metabolic pathway flux were performed. Although certain FAs reduce SREBP-1c transcription in vitro, it is unclear whether this will result in decreased GK activity in vivo, where positive effectors of SREBP-1c such as insulin are also present. Under these conditions, it is uncertain whether the inhibitory effects of FAs would be overcome by insulin. The effects of a combination of FAs, insulin and glucose on glucose phosphorylation and metabolism in cultured primary rat hepatocytes, at concentrations that mimic those in the portal circulation after a meal, were examined. It was found that total GK activity was unaffected by an increased concentration of insulin, but palmitate and eicosapentaenoic acid significantly lowered total GK activity in the presence of insulin. Despite the fact that total GK enzyme activity was reduced in response to FA incubation, GK enzyme translocation from the inactive, nuclear-bound state to the active, cytoplasmic state was unaffected. Interestingly, none of the FAs tested inhibited glucose phosphorylation or the rate of glycolysis when insulin was present. These results suggest that in the presence of insulin the levels of active, unbound cytoplasmic GK are sufficient to buffer a slight decrease in GK enzyme activity and the decreased promoter activity caused by FA exposure. Although a high-fat diet has been associated with impaired hepatic glucose metabolism, there is no evidence from this thesis that FAs themselves directly modulate flux through the glycolytic pathway in isolated primary hepatocytes when insulin is also present. Therefore, although FAs affected expression of a wide range of genes, including GK, this did not affect glycolytic flux in the presence of insulin. However, it may be possible that a saturated FA-induced decrease in GK enzyme activity, when combined with the onset of insulin resistance, may promote the dysregulation of glucose homeostasis and the subsequent development of hyperglycaemia, metabolic syndrome and type 2 diabetes.
Abstract:
This thesis consists of three related studies: an ERP Major Issues Study; an Historical Study of the Queensland Government Financial Management System; and a Meta-Study that integrates these and other related studies conducted under the umbrella of the Cooperative ERP Lifecycle Knowledge Management research program. This research provides a comprehensive view of ERP lifecycle issues encountered in SAP R/3 projects across the Queensland Government. This study follows a preliminary ERP issues study (Chang, 2002) conducted in five Queensland Government agencies. The Major Issues Study aims to achieve the following: (1) identify and explicate major issues in relation to the ES lifecycle in the public sector; (2) rank the importance of these issues; and (3) highlight areas of consensus and dissent among stakeholder groups. To provide a rich context for this study, this thesis includes an historical account of the Queensland Government Financial Management System (QGFMS). This account tells of its inception as a centralised system; the selection of SAP and subsequent decentralisation; and its eventual recentralisation under the Shared Services Initiative and CorpTech. This historical account gives an insight into the conditions that affected the selection and ongoing management and support of QGFMS. This research forms part of a program entitled Cooperative ERP Lifecycle Knowledge Management. This thesis provides a concluding report for this research program by summarising related studies conducted in the Queensland Government SAP context: Chan (2003); Vayo et al. (2002); Ng (2003); Timbrell et al. (2001); Timbrell et al. (2002); Chang (2002); Putra (1998); and Niehus et al. (1998). A study of Oracle in the United Arab Emirates by Dhaheri (2002) is also included. The thesis then integrates the findings from these studies in an overarching Meta-Study. The Meta-Study discusses key themes across all of these studies, creating an holistic report for the research program. Themes discussed in the Meta-Study include common issues found across the related studies; knowledge dynamics of the ERP lifecycle; ERP maintenance and support; and the relationship between the key players in the ERP lifecycle.
Abstract:
In recent times, the improved levels of accuracy obtained by Automatic Speech Recognition (ASR) technology have made it viable for use in a number of commercial products. Unfortunately, these types of applications are limited to only a few of the world’s languages, primarily because ASR development is reliant on the availability of large amounts of language-specific resources. This motivates the need for techniques which reduce this language-specific resource dependency. Ideally, these approaches should generalise across languages, thereby providing scope for the rapid creation of ASR capabilities for resource-poor languages. Cross-lingual ASR emerges as a means for addressing this need. Underpinning this approach is the observation that sound production is largely influenced by the physiological construction of the vocal tract and, accordingly, is determined by human physiology rather than by any particular language. As a result, a common inventory of sounds exists across languages; a property which is exploitable, as sounds from a resource-poor target language can be recognised using models trained on resource-rich source languages. One of the initial impediments to the commercial uptake of ASR technology was its fragility in more challenging environments, such as conversational telephone speech. Subsequent improvements in these environments have gained consumer confidence. Pragmatically, if cross-lingual techniques are to be considered a viable alternative when resources are limited, they need to perform under the same types of conditions. Accordingly, this thesis evaluates cross-lingual techniques using two speech environments: clean read speech and conversational telephone speech. The languages used in the evaluations are German, Mandarin, Japanese and Spanish. Results highlight that previously proposed approaches provide respectable results for simpler environments such as read speech, but degrade significantly in the more taxing conversational environment. Two separate approaches for addressing this degradation are proposed. The first is based on deriving a better target-language lexical representation in terms of the source-language model set. The second, and ultimately more successful, approach focuses on improving the classification accuracy of context-dependent (CD) models by catering for the adverse influence of language-specific phonotactic properties. Whilst the primary research goal in this thesis is directed towards improving cross-lingual techniques, the catalyst for investigating their use was expressed interest from several organisations in an Indonesian ASR capability. The fact that, in Indonesia alone, there are over 200 million speakers of some Malay variant provides further impetus and commercial justification for speech-related research on this language. Unfortunately, at the beginning of the candidature, limited research had been conducted on the Indonesian language in the field of speech science, and virtually no resources existed. This thesis details the investigative and development work dedicated towards obtaining an ASR system with a 10,000-word recognition vocabulary for the Indonesian language.
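As a minimal sketch of the cross-lingual idea described above, the snippet below rewrites a target-language pronunciation in terms of a source-language model inventory using a knowledge-based phone mapping. The phone symbols, mapping table and example word are hypothetical illustrations, not the model sets or lexica used in the thesis.

```python
# Hedged sketch of knowledge-based cross-lingual phone mapping: pronunciations from a
# resource-poor target language are re-expressed using phone units for which trained
# source-language acoustic models are assumed to exist.

# Hypothetical mapping from target-language phones to the closest source-language units.
TARGET_TO_SOURCE = {
    "a": "a", "i": "i", "u": "u", "e": "e", "o": "o",
    "ny": "n j",   # a target phone with no direct source model is approximated by a sequence
    "ng": "N",
    "tS": "t S",
}

def map_pronunciation(target_phones):
    """Rewrite a target-language pronunciation using source-language model units."""
    source_phones = []
    for phone in target_phones:
        mapped = TARGET_TO_SOURCE.get(phone)
        if mapped is None:
            raise KeyError(f"no source-language model mapped for phone '{phone}'")
        source_phones.extend(mapped.split())
    return source_phones

# Example: a hypothetical lexicon entry re-expressed with source-language units.
word, phones = "nyanyi", ["ny", "a", "ny", "i"]
print(word, "->", " ".join(map_pronunciation(phones)))
```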
Abstract:
This project is an account of one teacher's journey with her students, across cultural boundaries, in search of creating authentic Asian/Australian Drama experiences. The project explores the notion of establishing a shared cultural context. The early chapters focus on the background influences that determine where and how the project is set. Subsequent chapters provide an account of the innovative use of dramatic forms in preparation for the fieldwork, followed by accounts of the fieldwork and the post-fieldwork classwork. The study ends with a series of recommendations for any teacher intending to undertake a similar project.
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
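As a minimal sketch of the basic vector quantization idea discussed above (the product-code structure, training algorithms and bit allocations investigated in the thesis are not reproduced here), the code below trains a small codebook with k-means-style iterations and encodes each spectral vector as the index of its nearest codeword.

```python
# Hedged sketch of vector quantization for spectral parameter vectors: train a codebook,
# then transmit one codebook index per frame instead of the full parameter vector.
import numpy as np

def train_codebook(training_vectors, codebook_size, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise codewords from randomly chosen training vectors.
    codebook = training_vectors[rng.choice(len(training_vectors), codebook_size, replace=False)]
    for _ in range(iterations):
        # Assign each training vector to its nearest codeword (squared Euclidean distance).
        dists = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the centroid of the vectors assigned to it.
        for k in range(codebook_size):
            members = training_vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)          # one codebook index per input frame

# Toy example: 10-dimensional "spectral" vectors, 64-entry codebook (6 bits per frame).
train = np.random.default_rng(1).normal(size=(2000, 10))
cb = train_codebook(train, codebook_size=64)
print(encode(train[:5], cb))
```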
Abstract:
This dissertation develops the model of a prototype system for the digital lodgement of spatial data sets with statutory bodies responsible for the registration and approval of land-related actions under the Torrens Title system. Spatial data pertain to the location of geographical entities together with their spatial dimensions and are classified as point, line, area or surface. This dissertation deals with a sub-set of spatial data: land boundary data that result from the activities performed by surveying and mapping organisations for the development of land parcels. The prototype system has been developed, utilising an event-driven paradigm for the user interface, to exploit the potential of digital spatial data being generated from the utilisation of electronic techniques. The system provides for the creation of a digital model of the cadastral network and dependent data sets for an area of interest from hard copy records. This initial model is calibrated on registered control and updated by field survey to produce an amended model. The field-calibrated model is then electronically validated to ensure it complies with standards of format and content. The prototype system was designed specifically to create a database of land boundary data for subsequent retrieval by land professionals for surveying, mapping and related activities. Data extracted from this database are utilised for subsequent field survey operations without the need to create an initial digital model of an area of interest. Statistical reporting of the differences that result when subsequent initial and calibrated models are compared replaces the traditional checking operations on spatial data performed by a land registry office. Digital lodgement of survey data is fundamental to the creation of the database of accurate land boundary data. This creation of the database is also fundamental to the efficient integration of accurate spatial data about land being generated by modern technology, such as global positioning systems and remote sensing and imaging, with land boundary information and other information held in Government databases. The prototype system developed provides for the delivery of accurate, digital land boundary data for the land registration process to ensure the continued maintenance of the integrity of the cadastre. Such data should also meet the more general and encompassing requirements of, and prove to be of tangible, longer-term benefit to, the developing electronic land information industry.
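As an illustration only of the kind of content check an electronic validation step might apply (the thesis's actual validation rules and standards are not reproduced here), the sketch below tests whether a parcel boundary, entered as bearings and distances, closes back on its starting point within a tolerance.

```python
# Hedged sketch: compute the linear misclose of a closed boundary traverse.
# Bearings, distances and the acceptance tolerance are hypothetical illustration values.
import math

def misclose(legs):
    """legs: list of (bearing_degrees, distance) pairs for a nominally closed traverse.
    Returns the distance between the start point and the computed end point."""
    de = dn = 0.0
    for bearing, distance in legs:
        rad = math.radians(bearing)
        de += distance * math.sin(rad)   # easting component
        dn += distance * math.cos(rad)   # northing component
    return math.hypot(de, dn)

# Hypothetical rectangular parcel: 40 m x 25 m, traversed clockwise from north.
parcel = [(0.0, 40.0), (90.0, 25.0), (180.0, 40.0), (270.0, 25.0)]
print("accepted" if misclose(parcel) < 0.05 else "rejected")
```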
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is due to the fact that a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, thus producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present there are two main categories into which radar imaging falls. The first of these is related to the case where the backscattered signal is considered to be deterministic. The second is related to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
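For reference, a commonly used form of the cross-ambiguity function between the received (backscattered) signal r(t) and the transmitted reference s(t) is given below; the exact sign and normalisation conventions used in the thesis may differ.

\[
\chi(\tau, \nu) = \int_{-\infty}^{\infty} r(t)\, s^{*}(t - \tau)\, e^{-j 2\pi \nu t}\, \mathrm{d}t
\]

Here \(\tau\) is the time-delay (range) variable, \(\nu\) is the Doppler (cross-range) variable, and the squared magnitude \(|\chi(\tau,\nu)|^{2}\) is interpreted as the delay-Doppler image of the target.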
Abstract:
The closure of large institutions for people with intellectual disability and the subsequent shift to community living has been a feature of social policies in most western democracies for more than two decades. While the move from congregated settings to homes in the community has been heralded as a positive and desirable strategy, deinstitutionalisation has continued to be a controversial policy and practice. This research critically analyses the implementation of a deinstitutionalisation policy called Institutional Reform in the state of Queensland from May 1994 until it was dismantled under a new government in the middle of 1996. A trajectory study of the policy from early conceptualisation through its development, implementation and final extinction was undertaken. Several methods were utilised in the research, including the textual analysis of policy documents, discussion papers and newspaper articles, interviews with stakeholders and participant observation. The research draws on theories of discourse and focuses on how discourses of disability shape policy and practice. The thesis outlines a number of implications for policy implementation more generally as well as for disability services. In particular, the theoretical framework builds on Fulcher's (1989) disabling discourses - medical, charity, lay and rights - and identifies two additional discourses of economics and inclusion. The thesis argues that competing disability discourses operated in powerful ways to shape the implementation of the policy and illustrates how older discourses based on fear and prejudice were promoted to positions of dominance and power.
Abstract:
This research investigated students' construction of knowledge about the topics of magnetism and electricity emergent from a visit to an interactive science centre and subsequent classroom-based activities linked to the science centre exhibits. The significance of this study is that it critically analyses an aspect of school visits to informal learning centres that has been neglected by researchers in the past, namely the influence of post-visit activities in the classroom on subsequent learning and knowledge construction. Employing an interpretive methodology, the study focused on three areas of endeavour. Firstly, the establishment of a set of principles for the development of post-visit activities, from a constructivist framework, to facilitate students' learning of science. Secondly, to describe and interpret students' scientific understandings: prior to a visit to a science museum; following a visit to a science museum; and following post-visit activities that were related to their museum experiences. Finally, to describe and interpret the ways in which students constructed their understandings: prior to a visit to a science museum; following a visit to a science museum; and following post-visit activities directly related to their museum experiences. The study was designed and implemented in three stages: 1) identification and establishment of the principles for design and evaluation of post-visit activities; 2) a pilot study of specific post-visit activities and data gathering strategies related to student construction of knowledge; and 3) interpretation of students' construction of knowledge from a visit to a science museum and subsequent completion of post-visit activities, which constituted the main study. Twelve students were selected from a Year 7 class to participate in the study. This study provides evidence that the series of post-visit activities, related to the museum experiences, resulted in students constructing and reconstructing their personal knowledge of science concepts and principles represented in the science museum exhibits, sometimes towards the accepted scientific understanding and sometimes in different and surprising ways. Findings demonstrate the interrelationships between learning that occurs at school, at home and in informal learning settings. The study also underscores for teachers and staff of science museums and similar centres the importance of planning pre- and post-visit activities, not only to support the development of scientific conceptions, but also to detect and respond to alternative conceptions that may be produced or strengthened during a visit to an informal learning centre. Consistent with contemporary views of constructivism, the study strongly supports the views that: 1) knowledge is uniquely structured by the individual; 2) the processes of knowledge construction are gradual, incremental and assimilative in nature; 3) changes in conceptual understanding can be interpreted in the light of prior knowledge and understanding; and 4) knowledge and understanding develop idiosyncratically, progressing and sometimes appearing to regress when compared with contemporary science. This study has implications for teachers, students, museum educators and the science education community, given the lack of research into the processes of knowledge construction in informal contexts and the roles that post-visit activities play in the overall process of learning.
Abstract:
The primary purpose of this research was to examine individual differences in learning from worked examples. By integrating cognitive style theory and cognitive load theory, it was hypothesised that an interaction existed between individual cognitive style and the structure and presentation of worked examples in their effect upon subsequent student problem solving. In particular, it was hypothesised that Analytic-Verbalisers, Analytic-Imagers, and Wholist-Imagers would perform better on a posttest after learning from structured-pictorial worked examples than after learning from unstructured worked examples. For Analytic-Verbalisers it was reasoned that the cognitive effort required to impose structure on unstructured worked examples would hinder learning. Alternatively, it was expected that Wholist-Verbalisers would display superior performance after learning from unstructured worked examples compared with structured-pictorial worked examples. The images of the structured-pictorial format, incongruent with the Wholist-Verbaliser style, would be expected to split attention between the text and the diagrams. The information contained in the images would also be a source of redundancy and not easily ignored in the integrated structured-pictorial format. Despite a number of authors having emphasised the need to include individual differences as a fundamental component of problem solving within domain-specific subjects such as mathematics, few studies have attempted to investigate a relationship between mathematical or science instructional method, cognitive style, and problem solving. Cognitive style theory proposes that the structure and presentation of learning material is likely to affect each of the four cognitive styles differently. No study could be found which has used Riding's (1997) model of cognitive style as a framework for examining the interaction between the structural presentation of worked examples and an individual's cognitive style. A total of 269 Year 12 Mathematics B students from five urban and rural secondary schools in Queensland, Australia, participated in the main study. A factorial (three treatments by four cognitive styles) between-subjects multivariate analysis of variance indicated a statistically significant interaction. As the difficulty of the posttest components increased, the empirical evidence supporting the research hypotheses became more pronounced. The rigour of the study's theoretical framework was further tested by the construction of a measure of instructional efficiency, based on an index of cognitive load, and the construction of a measure of problem-solving efficiency, based on problem-solving time. The consistent empirical evidence within this study that learning from worked examples is affected by an interaction of cognitive style and the structure and presentation of the worked examples emphasises the need to consider individual differences among senior secondary mathematics students to enhance educational opportunities. Implications for teaching and learning are discussed and recommendations for further research are outlined.
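For context, a widely used formulation of instructional efficiency in the cognitive load literature (Paas and van Merrienboer, 1993) combines standardised performance and mental-effort scores as shown below; the thesis's own efficiency indices, including the problem-solving-time-based measure, may be defined differently.

\[
E = \frac{Z_{\text{performance}} - Z_{\text{effort}}}{\sqrt{2}}
\]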
Abstract:
The band The Escalators, together with the music uniquely composed for it and a subsequent CD and DVDs, was the work that emerged from my period of research. The areas of interest that were investigated were sampling, minimalism, stasis and the work of David Lynch, as well as a desire to produce new and innovative music. The above concepts defined the band's composition and makeup. While each may be regarded as a discrete concept with its own boundaries, in my work they seamlessly intermingle, resulting in that which is unique to The Escalators' sound. The research methodology used for this work was practice-led research.
Abstract:
Spatial organization of Ge islands, grown by physical vapor deposition, on prepatterned Si(001) substrates has been investigated. The substrates were patterned prior to Ge deposition by nanoindentation. Characterization of the Ge dots was performed by atomic force microscopy and scanning electron microscopy. The nanoindents act as trapping sites, allowing ripening of Ge islands at those locations during subsequent deposition and diffusion of Ge on the surface. The results show that island ordering is intrinsically linked to the nucleation and growth at indented sites and that it strongly depends on pattern parameters.
Abstract:
Introduction: The purpose of this study was to assess the capacity of a written intervention, in this case a patient information brochure, to improve patient satisfaction during an Emergency Department (ED) visit. For the purpose of measuring the effect of the intervention, the ED journey was conceptualised as a series of distinct areas of service comprising waiting time, service by the triage nurse, care from doctors and nurses, and information giving. Background of study: Research into patient satisfaction has become a widespread activity endorsed by both governments and hospital administrations. The literature on ED patient satisfaction has consistently indicated three primary areas of patient dissatisfaction: waiting time, nursing care and communication. Recent developments in the literature on patient satisfaction studies, however, have highlighted the relationship between patients' expectations of a service encounter and their consequent assessment of the experience as dissatisfying or satisfying. Disconfirmation theory posits that the degree to which expectations are confirmed will affect subsequent levels of satisfaction. The conceptual framework utilised in this study is Coye's (2004) model of disconfirmation. Coye, while reiterating that satisfaction is a consequence of the degree to which expectations are either confirmed or disconfirmed, also posits that expectations can be modified by interventions. Coye's work conceptualises these interventions as intra-encounter experiences (cues) which function to adjust expectations. Coye suggests some cues are unintended and may have a negative impact, which reinforces the value of planned cues intended to meet or exceed consumer expectations. Consequently the brochure can be characterised as a potentially positive cue, encouraging patients to understand processes and orienting them in what can be a confronting environment. Only a limited number of studies have examined the effect of written interventions within an ED. No studies could be located which have tested the effect of ED interventions using a conceptual framework that relates the degree to which expectations are confirmed or disconfirmed to satisfaction with services. Method: Two studies were conducted. Study One used qualitative methods to explore patients' expectations of the ED from the perspective of both patients and health care professionals. Study One was used in part to direct the development of the intervention (brochure) in Study Two. The brochure was an intervention designed to modify patients' expectations, thus increasing their satisfaction with the provision of ED services. As there were no existing tools to measure ED patients' expectations and satisfaction, a new tool was also developed based on the findings of Study One and the literature. Study Two used a non-randomised, quasi-experimental approach with a non-equivalent, post-test-only comparison group design to investigate the effect of the patient education brochure (Stommel and Wills, 2004). The brochure was disseminated to one of two study groups (the intervention group). The effect of the brochure was assessed by comparing the data obtained from the intervention and control groups. These two groups consisted of 150 participants each. It was expected that any differences in the relevant domains selected for examination would indicate the effect of the brochure on expectations and, potentially, satisfaction.
Results: Study One revealed several areas of common ground between patients and nurses in terms of relevant content for the written intervention, including the need for information on the triage system and waiting times. Areas of difference were also found, with patients emphasising communication issues, whereas focus group members expressed concern that patients were often unable to assimilate verbal information. The findings suggested the potential utility of written material to reinforce verbal communication, particularly in terms of the triage process and other ED protocols. This material was synthesised within the final version of the written intervention. Overall, the results of Study Two indicated no significant differences between the two groups. However, a significant number of participants in the intervention group viewed the brochure as having changed their expectations. The effect of the brochure may have been obscured by a lack of parity between the two groups, as the control group presented with statistically significantly higher levels of acuity and experienced significantly shorter waiting times. In terms of disconfirmation theory this would suggest expectations that had been met or exceeded. The results confirmed the correlation of expectations with satisfaction. Several domains also indicated age as a significant predictor, with older patients tending to score higher satisfaction results. Other significant predictors of satisfaction established were waiting time and care from nurses, reinforcing the combination of efficient service and positive interpersonal experiences as being valued by patients. Conclusions: Information presented in written form appears to benefit a significant number of ED users in terms of orientation and explaining systems and procedures. The degree to which these effects may interact with other dimensions of satisfaction, however, is likely to be limited. Waiting time and interpersonal behaviours from staff also provide influential cues in determining satisfaction. Written material is likely to be one element in a series of coordinated strategies to improve patient satisfaction during periods of peak demand.
Abstract:
In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. It is currently thought that concrete properties are highly influenced by the type of water used and its proportion in the concrete mix, but there is actually little knowledge of the effects of different, alternative water sources used in concrete mix design. Therefore, the identification of the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, as well as investigating the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for use in concrete mixing plants because of the great benefit in terms of reduced waste disposal costs and environmental conservation. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program covering three representative sites across South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were then analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998. These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH, and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. These results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the Multi-Criteria Decision Making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided washout water suitable for the manufacture of concrete. These results were compared with those for a series of alternative water sources. The water sources included treated effluent, sea water and dam water, and they were subjected to the same testing parameters as the reference set. Analysis of these results also found that, despite having higher levels of both organic and inorganic constituents, these waters complied with the parameter thresholds given in the ASTM C913-08 standard. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.
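As a minimal sketch of the kind of chemometric screening described above (the actual parameters measured, sample values and methods such as SIMCA, PROMETHEE and GAIA are not reproduced here), the code below standardises a few hypothetical water-quality measurements and projects the samples onto their first two principal components, which is how cleaner and more contaminated samples can be separated and ranked visually.

```python
# Hedged sketch: Principal Component Analysis of hypothetical water-quality data.
# Sample names, parameters and values are illustrative only, not measured data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Columns: pH, alkalinity (mg/L CaCO3), total dissolved solids (mg/L)
samples = np.array([
    [7.2,  120,   450],    # tap water (reference)
    [11.8, 900, 12000],    # washout water, least effective recycling
    [10.5, 600,  6000],    # washout water, moderately effective recycling
    [8.1,  200,  1500],    # washout water, most effective recycling
    [7.9,  150,  3500],    # treated effluent
])
labels = ["tap", "washout A", "washout B", "washout C", "effluent"]

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(samples))
for name, (pc1, pc2) in zip(labels, scores):
    print(f"{name:>10}: PC1 = {pc1:+.2f}  PC2 = {pc2:+.2f}")
```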