905 results for The Impossible Is Possible
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. IS researchers therefore highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index consisting of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve this, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability, pursued through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making Saudi Arabia an interesting context for testing the external validity of the IS-Impact Model. The study revisits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of the existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?" Content analysis of the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ, with CNQ viewed as an antecedent of IS-Impact. With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced into the research model. The Confirmatory Phase addresses the second research question of the study: "Is the Extended IS-Impact Model Valid as a Hierarchical Multidimensional Formative Measurement Model?"
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve this, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed using the Partial Least Squares (PLS) approach. The CNQ Model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was not significant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was assessed on two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results support the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system; practitioners should therefore pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ Model provides a framework for perceptually measuring Computer Network Quality from multiple perspectives, and features an easy-to-understand, easy-to-use, and economical survey instrument.
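A quick consistency check (added here, not a claim from the thesis): in a structural model with a single standardized exogenous predictor, the explained variance reduces to the squared path coefficient, which matches the figures reported above.

```latex
% R^2 from a single standardized predictor: R^2 = beta^2
R^{2}_{\text{IS-Impact}} \approx \beta^{2}_{\text{CNQ} \to \text{IS-Impact}} = 0.426^{2} \approx 0.18
\qquad
R^{2}_{\text{Satisfaction}} \approx \beta^{2}_{\text{IS-Impact} \to \text{Satisfaction}} = 0.744^{2} \approx 0.55
```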
Creating 'saviour siblings': the notion of harming by conceiving in the context of healthy children
Abstract:
Over the past decade there have been a number of families who have utilised assisted reproductive technologies (ARTs) to create a tissue-matched child, with the purpose of using the child’s tissue to cure an existing sick child. This inevitably brings such families a sense of hope, as the ultimate aim is to overcome a family health crisis. However, this specific use of reproductive technologies has been the subject of significant criticism, most of which is levelled against the potential harm to the ‘saviour’ child. In Australia, families seeking to access reproductive technologies in this context are therefore required to justify their motives to an ethics committee in order to establish, amongst other things, whether the child will suffer harm once born. This paper explores the concept of harm in the context of conception, focusing on whether it is possible to ‘harm’ a healthy child who has been conceived to save another. To achieve this, the paper will evaluate the impact of the ‘non-identity’ principle in the ‘saviour sibling’ context, and assess the existing body of literature which addresses ‘harm’ in the context of conception. As will be established, the majority of such literature has focused on ‘wrongful life’ cases, which seek to address whether an existing child who has been born with a disability has been harmed. Finally, this paper will distinguish the harm arguments in the ‘saviour sibling’ context based on the fact that the harm evaluation concerns the ‘future-life’ assessment of a healthy child.
Abstract:
Australian universities are currently engaging with new governmental policies and regulations that require them to demonstrate enhanced quality and accountability in teaching and research. The development of national academic standards for learning outcomes in higher education is one such instance of this drive for excellence. These discipline-specific standards articulate the minimum, or Threshold Learning Outcomes, to be addressed by higher education institutions so that graduating students can demonstrate their achievement to their institutions, accreditation agencies, and industry recruiters. This impacts not only on the design of Engineering courses (with particular emphasis on pedagogy and assessment), but also on the preparation of academics to engage with these standards and implement them in their day-to-day teaching practice at a micro level. This imperative for enhanced quality and accountability in teaching is also significant at a meso level, for according to the Australian Bureau of Statistics, about 25 per cent of teachers in Australian universities are aged 55 and above and more than 54 per cent are aged 45 and above (ABS, 2006). A number of institutions have undertaken recruitment drives to regenerate and enrich their academic workforce by appointing capacity-building research professors and increasing the numbers of early- and mid-career academics. This nationally driven agenda for quality and accountability in teaching also permeates the micro level of engineering education, since the demand for enhanced academic standards and learning outcomes requires both strong advocacy for a shift to an authentic, collaborative, outcomes-focused education and the mechanisms to support academics in transforming their professional thinking and practice. Outcomes-focused education means giving greater attention to the ways in which curriculum design, pedagogy, assessment approaches and teaching activities can most effectively make a positive, verifiable difference to students’ learning. Such education is authentic when it is couched firmly in the realities of learning environments, student and academic staff characteristics, and trustworthy educational research. That education will be richer and more efficient when staff work collaboratively, contributing their knowledge, experience and skills to achieve learning outcomes based on agreed objectives. We know that the school or departmental levels of universities are the most effective loci of change in approaches to teaching and learning practices in higher education (Knight & Trowler, 2000). Heads of Schools are being entrusted with increasing responsibilities: in addition to setting strategic directions and managing the operational and sometimes financial aspects of their school, they are also expected to lead the development and delivery of the teaching, research and other academic activities. Guiding and mentoring individuals and groups of academics is one critical aspect of the Head of School’s role. Yet they do not always have the resources or support to help them mentor staff, especially the more junior academics. In summary, the international trend in undergraduate engineering course accreditation towards the demonstration of attainment of graduate attributes poses new challenges in addressing academic staff development needs and the assessment of learning.
This paper will give some insights into the conceptual design, implementation, and empirical effectiveness to date of a Fellow-In-Residence Engagement (FIRE) program. The program is proposed as a model for achieving better engagement of academics with contemporary issues and effectively enhancing their teaching and assessment practices. It will also report on the program’s collaborative approach to working with Heads of Schools to better support academics, especially early-career ones, by utilizing formal and informal mentoring. Further, the paper will discuss possible factors that may assist the achievement of the intended outcomes of such a model, and will examine its contributions to engendering outcomes-focused thinking in engineering education.
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, patterns extracted using text or data mining algorithms often include noisy and inconsistent patterns. This raises several challenges, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the extracted patterns are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
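To make the idea of a pattern co-occurrence matrix concrete, here is a minimal sketch (illustrative only; the threshold, function names, and toy data are assumptions, and this is not the paper's exact algorithm): patterns that rarely co-occur with any other extracted pattern are treated as noise and dropped.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_matrix(doc_patterns):
    """Count how often each pair of extracted patterns appears in the same document.

    doc_patterns: list of sets, one set of extracted patterns per document.
    """
    counts = defaultdict(int)
    for patterns in doc_patterns:
        for p, q in combinations(sorted(patterns), 2):
            counts[(p, q)] += 1
    return counts

def prune_noisy_patterns(doc_patterns, min_cooccurrence=2):
    """Keep only patterns that co-occur with at least one other pattern often enough."""
    counts = cooccurrence_matrix(doc_patterns)
    keep = set()
    for (p, q), c in counts.items():
        if c >= min_cooccurrence:
            keep.update((p, q))
    return keep

# Toy example: 'weather' never co-occurs with anything and is pruned as noise.
docs = [{"oil price", "market crash"},
        {"oil price", "market crash", "sports"},
        {"oil price", "market crash"},
        {"weather"}]
print(prune_noisy_patterns(docs))  # {'oil price', 'market crash'}
```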
Abstract:
This paper presents a method for investigating ship emissions, the plume capture and analysis system (PCAS), and its application in measuring airborne pollutant emission factors (EFs) and particle size distributions. The current investigation was conducted in situ, aboard two dredgers (Amity: a cutter suction dredger, and Brisbane: a hopper suction dredger), but the PCAS is also capable of performing such measurements remotely at a distant point within the plume. EFs were measured relative to the fuel consumption using the fuel-combustion-derived plume CO2. All plume measurements were corrected by subtracting background concentrations sampled regularly from upwind of the stacks. Each measurement typically took 6 minutes to complete, and 40 to 50 measurements were possible in one day. The relationship between the EFs and plume sample dilution was examined to determine the plume dilution range over which the technique could deliver consistent results when measuring EFs for particle number (PN), NOx, SO2, and PM2.5, within a targeted dilution factor range of 50–1000 suitable for remote sampling. The EFs for NOx, SO2, and PM2.5 were found to be independent of dilution for dilution factors within that range. The EF measurement for PN was corrected for coagulation losses by applying a time-dependent particle loss correction to the particle number concentration data. For the Amity, the EF ranges were PN: 2.2–9.6 × 10^15 (kg-fuel)^-1; NOx: 35–72 g(NO2).(kg-fuel)^-1; SO2: 0.6–1.1 g(SO2).(kg-fuel)^-1; and PM2.5: 0.7–6.1 g(PM2.5).(kg-fuel)^-1. For the Brisbane they were PN: 1.0–1.5 × 10^16 (kg-fuel)^-1; NOx: 3.4–8.0 g(NO2).(kg-fuel)^-1; SO2: 1.3–1.7 g(SO2).(kg-fuel)^-1; and PM2.5: 1.2–5.6 g(PM2.5).(kg-fuel)^-1. The results are discussed in terms of the operating conditions of the vessels’ engines. Particle number emission factors as a function of size, as well as the count median diameter (CMD) and geometric standard deviation of the size distributions, are provided. The size distributions were found to be consistently uni-modal in the range below 500 nm, and this mode was within the accumulation mode range for both vessels. The representative CMDs for the various activities performed by the dredgers ranged from 94 to 131 nm in the case of the Amity, and from 58 to 80 nm for the Brisbane. A strong inverse relationship between CMD and EF(PN) was observed.
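For orientation, the sketch below shows the general form of a CO2-referenced, background-subtracted emission-factor calculation of the kind described above. It is a simplified illustration: the fuel carbon fraction, molar masses, and example concentrations are assumed values, not figures taken from this study.

```python
# Simplified CO2-referenced emission factor (g of pollutant per kg of fuel).
# Assumed values: marine diesel carbon fraction ~0.87; NOx expressed as NO2.
M_NO2 = 46.01                                   # g/mol
FUEL_CARBON_FRACTION = 0.87                     # kg C per kg fuel (assumption)
MOL_CO2_PER_KG_FUEL = FUEL_CARBON_FRACTION * 1000.0 / 12.011  # mol CO2 per kg fuel

def emission_factor_gas(delta_x_ppb, delta_co2_ppm, molar_mass_x):
    """EF in g per kg fuel from background-subtracted mixing ratios in the plume."""
    molar_ratio = (delta_x_ppb * 1e-9) / (delta_co2_ppm * 1e-6)   # mol X per mol CO2
    return molar_ratio * molar_mass_x * MOL_CO2_PER_KG_FUEL

# Example: a 300 ppb NOx rise against a 30 ppm CO2 rise above background.
print(round(emission_factor_gas(300.0, 30.0, M_NO2), 1), "g(NO2) per kg fuel")  # ~33.3
```

The same ratio-to-CO2 logic extends to particle number, yielding particles per kg of fuel rather than grams.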
Abstract:
Transient hyperopic refractive shifts occur on a timescale of weeks in some patients after the initiation of therapy for hyperglycemia, and are usually followed by recovery to the original refraction. A possible lenticular origin of these changes is considered in terms of a paraxial gradient index model. Assuming that the lens thickness and curvatures remain unchanged, as observed in practice, it appears possible to account for initial hyperopic refractive shifts of up to a few diopters by a reduction in refractive index near the lens center and an alteration in the rate of change between center and surface, so that most of the index change occurs closer to the lens surface. Restoration of the original refraction depends on further change in the refractive index distribution, with more gradual changes in refractive index from the lens center to its surface. Modeling limitations are discussed.
Abstract:
Background: The state of the HIV epidemic in the Philippines has been described as "low and slow", which is in stark contrast to many other countries in the region. A review of the conditions for HIV spread in the Philippines is necessary. Methods: We evaluated the current epidemiology, trends in behaviour and the public health response in the Philippines to identify factors that could account for the current HIV epidemic, as well as to review conditions that may be of concern for facilitating an emerging epidemic. Results: The past control of HIV in the Philippines cannot be attributed to any single factor, nor is it necessarily a result of the actions of the Filipino government or other stakeholders. Likely reasons for the epidemic's slow development include: the country's geography is complicated; injecting drug use is relatively uncommon; a culture of sexual conservatism exists; sex workers tend to have few clients; anal sex is relatively uncommon; and circumcision rates are relatively high. In contrast, there are numerous factors suggesting that HIV is increasing and ready to emerge at high rates, including: the lowest documented rates of condom use in Asia; increasing casual sexual activity; returning overseas Filipino workers from high-prevalence settings; widespread misconceptions about HIV/AIDS; and high needle-sharing rates among injecting drug users. There was a three-fold increase in the rate of HIV diagnoses in the Philippines between 2003 and 2008, and this has continued over the past year. HIV diagnosis rates have noticeably increased among men, particularly among bisexual and homosexual men (114% and 214% increases, respectively, over 2003-2008). The average age at diagnosis has also significantly decreased, from approximately 36 to 29 years. Conclusions: Young adults, men who have sex with men, commercial sex workers, injecting drug users, overseas Filipino workers, and the sexual partners of people in these groups are particularly vulnerable to HIV infection. There is no guarantee that a large HIV epidemic will be avoided in the near future. Indeed, an expanding HIV epidemic is likely to be only a matter of time, as the components for such an epidemic are already present in the Philippines.
Abstract:
Ian Hunter's early work on the history of literature education and the emergence of English as a school subject issued a bold challenge to traditional accounts, which have in the main focused on English either as knowledge of a particular field or as ideology. The alternative proposal put forward by Hunter, and supported by detailed historical analysis, is that English exists as a series of historically contingent techniques and practices for shaping the self-managing capacities of children. The challenge for the field is to advance this historical work and to examine possible implications for English teaching.
Abstract:
Background: Ureaplasma species in amniotic fluid at the time of second-trimester amniocentesis increases the risk of preterm birth, but most affected pregnancies continue to term (Gerber et al. J Infect Dis 2003). We aimed to model intra-amniotic (IA) ureaplasma infection in spiny mice, a species with a relatively long gestation (39 days) that allows investigation of the disposition and possible clearance of ureaplasmas in the feto-placental compartment. Method: Pregnant spiny mice received IA injections of U. parvum serovar 6 (10 µL, 1 × 10^4 colony-forming units in PBS) or 10B media (10 µL; control) at 20 days (d) of gestation (term = 39d). At 37d, fetuses (n=3 ureaplasma, n=4 control) were surgically delivered and tissues were collected for bacterial culture; ureaplasma mba and urease gene expression by PCR; tissue WBC counts; and indirect fluorescent antibody (IFA) staining using anti-ureaplasma serovar 6 (rabbit) antiserum. Maternal and fetal plasma IgG was measured by Western blot. Results: Ureaplasmas were not detected by culture or PCR in fetal or maternal tissues but were visualized by IFA within placental and fetal lung tissues, in association with inflammatory changes and elevated WBC counts (p<0.0001). Anti-ureaplasma IgG was detected in maternal (2/2 tested) and fetal (1/2 tested) plasma but not in controls (0/3). Conclusions: IA injection of ureaplasmas in mid-gestation spiny mice caused persistent fetal lung and placental infection even though ureaplasmas were undetectable using standard culture or PCR techniques. This is consistent with resolution of IA infection, which may occur in human pregnancies that continue to term despite detection of ureaplasmas in mid-gestation.
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients: from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship: accuracy, and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
Abstract:
Teachers of construction economics and estimating have for a long time recognised that there is more to construction pricing than the detailed calculation of costs (to the contractor). We always get to the point where we have to say "of course, experience or familiarity with the market is very important and this needs judgement, intuition, etc.". Quite how important this is in construction pricing is not known, and we tend to trivialise its effect. If judgement of the market has a minimal effect, little harm would be done, but if it is really important then some quite serious consequences arise which go well beyond the teaching environment. Major areas of concern for the quantity surveyor are in cost modelling and cost planning, neither of which pays any significant attention to the market effect. There are currently two schools of thought about the market effect issue. The first school is prepared to ignore possible effects until more is known. This may be called the pragmatic school. The second school exists solely to criticise the first school. We will call this the antagonistic school. Neither the pragmatic nor the antagonistic school seems particularly keen to resolve the issue one way or the other. The founder and leader of the antagonistic school is Brian Fine, whose paper in 1974 is still the basic text on the subject, and in which he coined the term 'socially acceptable' price to describe what we now recognise as the market effect. Mr Fine's argument was then, and has been since, that the uncertainty surrounding the contractors' costing and cost estimating process is such that it logically leads to a market-orientated pricing approach. Very little factual evidence, however, seems to be available to support these arguments in any conclusive manner. A further, and more important, point for the pragmatic school is that, even if the market effect is as important as Mr Fine believes, there are no indications of how it can be measured, evaluated or predicted. Since 1974 evidence has been accumulating which tends to reinforce the antagonists' view. A review of the literature covering both contractors' and designers' estimates found many references to the use of value judgements in construction pricing (Ashworth & Skitmore, 1985), which supports the antagonistic view in implying the existence of uncertainty overload. The most convincing evidence emerged quite by accident in some research we recently completed with practicing quantity surveyors on estimating accuracy (Skitmore, 1985). In addition to demonstrating that individual quantity surveyors and certain types of buildings had a significant effect on estimating accuracy, one surprise result was that only a very small amount of information was used by the most expert surveyors for relatively very accurate estimates. Only the type and size of building, it seemed, was really relevant in determining accuracy. More detailed information about the buildings' specification, and even sight of the drawings, did not significantly improve their accuracy level. This seemed to offer clear evidence that the constructional aspects of the project were largely irrelevant and that the expert surveyors were somehow tuning in to the market price of the building. The obvious next step is to feed our expert surveyors with more relevant 'market' information in order to assess its effect.
The problem with this is that our experts do not seem able to verbalise their requirements in this respect - a common occurrence in research of this nature. The lack of research into the nature of market effects on prices also means the literature provides little of benefit. Hence the need for this study. It was felt that a clearer picture of the nature of construction markets would be obtained in an environment where free enterprise was a truly ideological force. For this reason, the United States of America was chosen for the next stage of our investigations. Several people were interviewed in an informal and unstructured manner to elicit their views on the action of market forces on construction prices. Although a small number of people were involved, they were thought to be reasonably representative of knowledge in construction pricing. They were also very well able to articulate their views. Our initial reaction to the interviews was that our USA subjects held views very close to those held in the UK. However, detailed analysis revealed the existence of remarkably clear and consistent insights that would not have been obtained in the UK. Further evidence was also obtained from literature relating to the subject, and some of the interviewees very kindly expanded on their views in later postal correspondence. We have now analysed all the evidence received and, although a great deal of it is of an anecdotal nature, we feel that our findings enable at least the basic nature of the subject to be understood and that the factors and their interrelationships can now be examined more formally in relation to construction price levels. I must express my gratitude to the Royal Institution of Chartered Surveyors' Educational Trust and the University of Salford's Department of Civil Engineering for collectively funding this study. My sincere thanks also go to our American participants who freely gave their time and valuable knowledge to us in our enquiries. Finally, I must record my thanks to Tim and Anne for their remarkable ability to produce an intelligible typescript from my unintelligible writing.
Abstract:
Neutrophils constitute 50-60% of all circulating leukocytes; they represent the first line of microbicidal defense and are involved in inflammatory responses. To examine immunocompetence in athletes, numerous studies have investigated the effects of exercise on the number of circulating neutrophils and their response to stimulation by chemotactic stimuli and activating factors. Exercise causes a biphasic increase in the number of neutrophils in the blood, arising from increases in catecholamine and cortisol concentrations. Moderate intensity exercise may enhance neutrophil respiratory burst activity, possibly through increases in the concentrations of growth hormone and the inflammatory cytokine IL-6. In contrast, intense or long duration exercise may suppress neutrophil degranulation and the production of reactive oxidants via elevated circulating concentrations of epinephrine (adrenaline) and cortisol. There is evidence of neutrophil degranulation and activation of the respiratory burst following exercise-induced muscle damage. In principle, improved responsiveness of neutrophils to stimulation following exercise of moderate intensity could mean that individuals participating in moderate exercise may have improved resistance to infection. Conversely, competitive athletes undertaking regular intense exercise may be at greater risk of contracting illness. However, there are limited data to support this concept. To elucidate the cellular mechanisms involved in the neutrophil responses to exercise, researchers have examined changes in the expression of cell membrane receptors, the production and release of reactive oxidants and, more recently, calcium signaling. The investigation of possible modifications of other signal transduction events following exercise has not been possible because of current methodological limitations. At present, variation in exercise-induced alterations in neutrophil function appears to be due to differences in exercise protocols, training status, sampling points and laboratory assay techniques.
Abstract:
Emergency health is a critical component of health systems, one increasingly congested by growing demand and blocked access to care. The Emergency Health Services Queensland (EHSQ) study aimed to identify the factors driving increased demand for emergency healthcare. This study examined data on patients treated by the ambulance service and Emergency Departments across Queensland. Data were derived from the Queensland Ambulance Service’s (QAS) Ambulance Information Management System and electronic Ambulance Report Form, and from the Emergency Department Information System (EDIS). Data were obtained for the period 2001-02 through to 2009-10. A snapshot of users for the 2009-10 year was used to describe the characteristics of users, and comparisons were made with the year 2003-04 to identify trends. Per capita demand for EDs has increased by 2% per annum over the decade and for ambulance by 3.7% per annum. The growth in ED demand is most significant in the more urgent triage categories, with a decline in less urgent patients. The growth is most prominent amongst patients suffering injuries and poisoning, amongst both men and women, and across all age groups. Patients from lower socioeconomic areas appear to have higher utilisation rates, and the utilisation rate for Indigenous people exceeds those of other backgrounds. The utilisation rate for immigrant people is lower than for the Australian-born; however, it has not been possible to eliminate the confounding impact of age and socioeconomic profiles. These findings contribute to an understanding of the growth in demand for emergency health. It is evident that the growth is amongst patients in genuine need of emergency healthcare, and the public rhetoric that congested emergency health services are due to inappropriate attendees is unsustainable. The growth in demand over the last decade reflects not only the changing demographics of the Australian population but also changes in health status, standards of acute health care and other social factors.
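To put those annual growth rates in perspective, if they compounded steadily over the decade they would imply roughly the following cumulative per capita growth (a back-of-envelope illustration, not figures reported by the study):

```latex
(1.02)^{10} \approx 1.22 \quad \text{(about 22\% for EDs)}, \qquad
(1.037)^{10} \approx 1.44 \quad \text{(about 44\% for ambulance)}
```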
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct. Complex computer models, or codes, are therefore used in place of physical experiments, which has led to the study of computer experiments as a way to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, it studies the question of how many runs a computer experiment should have and how a design should be augmented, and gives attention to the case where the response is a function over time.
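As a concrete illustration of "runs of the computer code with different input choices", the sketch below generates and then augments a space-filling (Latin hypercube) design. The design type, run counts and input bounds are illustrative assumptions, not prescriptions from the thesis.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)   # 3 input variables of the computer code
initial_design = sampler.random(n=10)        # 10 initial runs on the unit cube [0, 1)^3

# Augment the experiment with further runs after inspecting the first results.
extra_runs = sampler.random(n=5)
design = np.vstack([initial_design, extra_runs])

# Map the unit-cube design onto the real input ranges of the code (example bounds).
lower, upper = [0.0, 10.0, 1e-3], [1.0, 100.0, 1e-1]
inputs = qmc.scale(design, lower, upper)
print(inputs.shape)  # (15, 3): each row is one run of the computer code
```

Note that simply stacking two Latin hypercube samples does not itself preserve the Latin hypercube property, which is one reason the question of how a design should be augmented is non-trivial.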
Abstract:
The Water Catchment: fast forward to the past comprises two parts: a creative piece and an exegesis. The methodology is Creative Practice as Research, a process of critical reflection in which I observe how researching the exegesis (in my case, analysing how the social reality of the era in which an author writes affects their writing of the protagonist's journey) in turn shapes how I write the hero's pathway in the creative piece. The genre in which the protagonist's journey is charted and represented is dystopian young adult fiction; hence my creative piece, The Water Catchment, is a novel manuscript for a dystopian young adult fantasy. It is a speculative novel set in a possible future and poses (and answers) the question: What might happen if water becomes the most powerful commodity on earth? There are two communities, called 'worlds', to create a barrier and difference where physical ones are not in evidence. A battle ensues over unfair conditions and access to water. In the end the protagonist, Caitlyn, takes over leadership, heralding a new era of co-operation and water management between the two worlds. The exegesis examines how the hero's pathway, the journey towards knowledge and resolution, is best explored in young adult literature through dystopian narratives. I explore how the dystopian worlds of Ursula Le Guin's first and last books of The Earthsea Quartet are foundational, and lay this examination over an analysis of both the hero's pathway within the novels and the social contexts outside them. Dystopian narratives constitute a liberating space for the adolescent protagonist between the reliance on adults in childhood and the world of adults. In young adult literature such narratives provide fertile ground to explore those aspects informing an adolescent's future.