892 results for Knowledge of caregivers
Abstract:
This paper presents a preliminary benefit analysis of the airborne GPS occultation technique for the Australian region. The simulation studies are based on current domestic commercial flights between major Australian airports. With knowledge of GPS satellite ephemeris data, occultation events for any particular flight can be determined. Preliminary analysis shows that high-resolution occultation observations can be achieved with this approach: for instance, about 15 occultation events for a Perth-to-Sydney flight. The simulation results agree with those published by other researchers for a different region. Occultation coverage during off-peak hours, however, may be reduced by the limited flight activity. The high-resolution occultation observations obtainable from an airborne GPS occultation system provide an opportunity to improve current global numerical weather prediction (NWP) models and ultimately the accuracy of weather forecasting. More intensive research efforts and experimental demonstrations are required to establish the technical feasibility of the airborne GPS technique.
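The event-determination step described above can be sketched geometrically. The following is an illustrative reconstruction, not the authors' actual simulation: an occultation is assumed to occur when the straight ray between the aircraft and a GPS satellite grazes the atmosphere below a nominal top height. The spherical Earth, the 60 km atmospheric cutoff, and the ECEF position inputs are simplifying assumptions.

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km (spherical-Earth simplification)

def tangent_height(aircraft, satellite):
    """Height above the Earth's surface of the ray's tangent point, i.e. the
    point on the aircraft-satellite line closest to the Earth's centre.
    Returns None if that point does not lie strictly between the endpoints
    (the satellite is then above the aircraft's horizon). ECEF coords, km."""
    ax, ay, az = aircraft
    sx, sy, sz = satellite
    dx, dy, dz = sx - ax, sy - ay, sz - az
    t = -(ax * dx + ay * dy + az * dz) / (dx * dx + dy * dy + dz * dz)
    if not 0.0 < t < 1.0:
        return None
    cx, cy, cz = ax + t * dx, ay + t * dy, az + t * dz
    return math.sqrt(cx * cx + cy * cy + cz * cz) - R_EARTH

def is_occultation(aircraft, satellite, atmosphere_top=60.0):
    """Count an occultation event when the ray grazes the atmosphere below
    a nominal 'top' height without being blocked by the Earth itself."""
    h = tangent_height(aircraft, satellite)
    return h is not None and 0.0 < h < atmosphere_top
```

In a full simulation this test would be evaluated along the flight trajectory against every satellite position computed from the ephemeris.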
Abstract:
In Bonny Glen Pty Ltd v Country Energy [2009] NSWCA 26 (24 February 2009) the New South Wales Court of Appeal held that the pure economic loss suffered by the appellant was recoverable. However, the arguments raised related to the foreseeability of the loss and to causation, rather than to whether the appellant was vulnerable and a member of an ascertainable class, whether the respondent had knowledge of the risk to the appellant and was in a position of control, or considerations of indeterminate liability, as in Perre v Apand Pty Ltd (1999) 198 CLR 180.
Abstract:
Since the High Court decision of Cook v Cook (1986) 162 CLR 376, a person who voluntarily undertakes to instruct a learner driver of a motor vehicle has been owed a lower standard of care than that owed to other road users. The standard of care was still expressed to be objective; however, it took into account the inexperience of the learner driver. A person instructing a learner driver was therefore owed a duty of care whose standard was that of a reasonable learner driver. This ‘special relationship’ was said to exist because of the passenger’s knowledge of the driver’s inexperience and lack of skill. On 28 August 2008 the High Court handed down its decision in Imbree v McNeilly [2008] HCA 40, overruling Cook v Cook.
Abstract:
New air traffic automated separation management concepts are constantly under investigation. Yet most of the automated separation management algorithms proposed over the last few decades have assumed either perfect communication or exact knowledge of all aircraft locations. In realistic environments, these idealized assumptions are not valid and any communication failure can potentially lead to disastrous outcomes. This paper examines the separation performance behavior of several popular algorithms during periods of information loss. This comparison is done through simulation studies. These simulation studies suggest that communication failure can cause the performance of these separation management algorithms to degrade significantly. This paper also describes some preliminary flight tests.
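As an illustration of why information loss matters, consider the closest-point-of-approach (CPA) computation that separation algorithms of this kind commonly rely on. The sketch below is hypothetical and not taken from the paper; in particular the 0.5 km/min error-growth rate applied during an outage is an assumed illustrative figure.

```python
def cpa_distance(p1, v1, p2, v2):
    """Minimum future horizontal separation between two aircraft on straight,
    constant-velocity tracks (positions in km, velocities in km/min)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vv = vx * vx + vy * vy
    if vv == 0.0:
        return (rx * rx + ry * ry) ** 0.5   # separation is constant
    t = max(0.0, -(rx * vx + ry * vy) / vv)  # time of closest approach (>= now)
    cx, cy = rx + t * vx, ry + t * vy
    return (cx * cx + cy * cy) ** 0.5

def predicted_separation(p_other_last, v_other, p_own, v_own, outage_min):
    """During a communication outage, the intruder's last known position is
    dead-reckoned forward, and a crude (assumed) position-uncertainty budget
    that grows with outage duration erodes the predicted separation margin."""
    p_est = (p_other_last[0] + v_other[0] * outage_min,
             p_other_last[1] + v_other[1] * outage_min)
    d = cpa_distance(p_own, v_own, p_est, v_other)
    position_uncertainty = 0.5 * outage_min  # km; assumed error-growth rate
    return d - position_uncertainty          # conservative separation margin
```

The longer the outage, the larger the uncertainty term, so a separation standard that is met on paper can no longer be assured, which is the degradation mode the simulation studies above examine.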
Abstract:
For some time there has been a growing awareness of organizational culture and its impact on the functioning of engineering and maintenance departments. Those wishing to implement contemporary maintenance regimes (e.g. condition-based maintenance) are often encouraged to develop “appropriate cultures” to support a new method’s introduction. Unfortunately, the publications offering this encouragement often fail to articulate the specific cultural values required to support those efforts. In the broader literature, only a limited number of case examples document the cultural values held by engineering-asset-intensive firms and how they contribute to their success (or failure). Consequently, a gap exists in our knowledge of what engineering cultures currently look like, or of what might constitute a best-practice engineering asset culture. The findings of a pilot study investigating the perceived ideal characteristics of engineering asset cultures are reported. Engineering managers, consultants and academics (n=47) were surveyed as to what they saw as the essential attributes of both engineering cultures and engineering asset personnel. Valued cultural elements included those oriented around continuous improvement, safety and quality. Valued individual attributes included openness to change, interpersonal skills and conscientiousness. The paper concludes with a discussion of the development of a best-practice cultural framework for practitioners and engineering managers.
Abstract:
Chlamydia pneumoniae is a common human and animal pathogen associated with a wide range of upper and lower respiratory tract infections. In more recent years there has been increasing evidence to suggest a link between C. pneumoniae and chronic diseases in humans, including atherosclerosis, stroke and Alzheimer’s disease. C. pneumoniae human strains show little genetic variation, indicating that the human-derived strain originated from a common ancestor in the recent past. Despite extensive information on the genetics and morphology of the human strain, the pathogen in many other hosts (including marsupials, amphibians, reptiles and equines) remains virtually unexplored. The koala (Phascolarctos cinereus) is a native Australian marsupial under threat due to habitat loss, predation and disease. Koalas are very susceptible to chlamydial infections, most commonly affecting the conjunctiva, urogenital tract and/or respiratory tract. To address this gap in the literature, the present study (i) provides a detailed description of the morphologic and genomic architecture of the C. pneumoniae koala (and human) strain, and shows that the koala strain is microscopically, developmentally and genetically distinct from the C. pneumoniae human strain, and (ii) examines the genetic relationship of geographically diverse C. pneumoniae isolates from human, marsupial, amphibian, reptilian and equine hosts, and identifies two distinct lineages that have arisen from animal-to-human cross-species transmissions. Chapter One of this thesis explores the scientific problem and aims of this study, while Chapter Two provides a detailed literature review of the background in this field of work. Chapter Three, the first results chapter, describes the morphology and developmental stages of C. pneumoniae koala isolate LPCoLN, as revealed by fluorescence and transmission electron microscopy.
The profile of this isolate, when cultured in HEp-2 human epithelial cells, was quite different to the human AR39 isolate. Koala LPCoLN inclusions were larger; the elementary bodies did not have the characteristic pear-shaped appearance, and the developmental cycle was completed within a shorter period of time (as confirmed by quantitative real-time PCR). These in vitro findings might reflect biological differences between koala LPCoLN and human AR39 in vivo. Chapter Four describes the complete genome sequence of the koala respiratory pathogen, C. pneumoniae LPCoLN. This is the first animal isolate of C. pneumoniae to be fully sequenced. The genome sequence provides new insights into the genomic ‘plasticity’ (organisation), evolution and biology of koala LPCoLN, relative to four complete C. pneumoniae human genomes (AR39, CWL029, J138 and TW183). Koala LPCoLN contains a plasmid that is not shared with any of the human isolates, there is evidence of gene loss in nucleotide salvage pathways, and there are 10 hot spot genomic regions of variation not previously identified in the C. pneumoniae human genomes. Sequence (partial-length) from a second, independent, wild koala isolate (EBB) at several gene loci confirmed that the koala LPCoLN isolate was representative of a koala C. pneumoniae strain. The combined sequence data provide evidence that the C. pneumoniae animal (koala LPCoLN) genome is ancestral to the C. pneumoniae human genomes and that human infections may have originated from zoonotic infections. Chapter Five examines key genome components of the five C. pneumoniae genomes in more detail. This analysis reveals genomic features that are shared by and/or contribute to the broad ecological adaptability and evolution of C. pneumoniae.
This analysis resulted in the identification of 65 gene sequences for further analysis of intraspecific variation, and revealed some interesting differences, including fragmentation, truncation and gene decay (loss of redundant ancestral traits). This study provides valuable insights into the metabolic diversity, adaptation and evolution of C. pneumoniae. Chapter Six utilises a subset of 23 target genes identified from the previous genomic comparisons and makes a significant contribution to our understanding of genetic variability among C. pneumoniae human (11) and animal (6 amphibian, 5 reptilian, 1 equine and 7 marsupial hosts) isolates. It has been shown that the animal isolates are genetically diverse, unlike the human isolates, which are virtually clonal. More convincing evidence that C. pneumoniae originated in animals and recently (in the last few hundred thousand years) crossed host species to infect humans is provided in this study. In the context of these results, it is proposed that two animal-to-human cross-species transmission events have occurred: one evidenced by the nearly clonal human genotype circulating in the world today, and the other by a more animal-like genotype apparent in Indigenous Australians. Taken together, these data indicate that the C. pneumoniae koala LPCoLN isolate has morphologic and genomic characteristics that are distinct from the human isolates. These differences may affect the survival and activity of the C. pneumoniae koala pathogen in its natural host, in vivo. This study, by utilising the genetic diversity of C. pneumoniae, identified new genetic markers for distinguishing human and animal isolates. However, not all C. pneumoniae isolates were genetically diverse; in fact, several isolates were highly conserved, if not identical in sequence (i.e. Australian marsupials), emphasising that at some stage in the evolution of this pathogen there has been an adaptation (or adaptations) to a particular host, providing some stability in the genome.
The outcomes of this study, achieved through experimental and bioinformatic approaches, have significantly enhanced our knowledge of the biology of this pathogen and will advance opportunities for the investigation of novel vaccine targets, antimicrobial therapy, and the blocking of pathogenic pathways.
Abstract:
Multiresolution techniques are used extensively in the signal processing literature. This paper has two parts. In the first part we derive a relationship between the general degradation model (Y = BX + W) at coarse and fine resolutions. In the second part we develop a signal restoration scheme in a multiresolution framework and demonstrate through experiments that knowledge of the relationship between the degradation models at different resolutions helps in obtaining a computationally efficient restoration scheme.
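The fine/coarse relationship can be illustrated numerically. The sketch below is illustrative only and does not reproduce the paper's derivation: with a dyadic block-averaging operator D (fine to coarse) and a sample-duplication operator U (coarse to fine), an effective coarse-scale degradation operator can be formed as B_c = DBU; for a noiseless, block-constant signal X, the coarse view of the fine observation, D(BX), then coincides exactly with B_c applied to the coarse signal DX.

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 8
# Fine-scale degradation operator B: a circular two-tap moving average (assumed blur)
B = [[0.0] * n for _ in range(n)]
for i in range(n):
    B[i][i] = 0.5
    B[i][(i + 1) % n] = 0.5

# D: dyadic block averaging (fine -> coarse); U: sample duplication (coarse -> fine)
m = n // 2
D = [[0.0] * n for _ in range(m)]
U = [[0.0] * m for _ in range(n)]
for k in range(m):
    D[k][2 * k] = D[k][2 * k + 1] = 0.5
    U[2 * k][k] = U[2 * k + 1][k] = 1.0

# Effective coarse-scale degradation operator: B_c = D B U
Bc = matmul(matmul(D, B), U)
```

Working with B_c and the half-length coarse signal is what makes a multiresolution restoration scheme computationally attractive, since the coarse problem is a quarter of the size of the fine one.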
Abstract:
Water Sensitive Urban Design (WSUD) systems have the potential to mitigate the hydrologic disturbance and water quality concerns associated with stormwater runoff from urban development. In the last few years WSUD has been strongly promoted in South East Queensland (SEQ) and new developments are now required to use WSUD systems to manage stormwater runoff. However, there has been limited field evaluation of WSUD systems in SEQ and consequently knowledge of their effectiveness in the field, under storm events, is limited. The objective of this research project was to assess the effectiveness of WSUD systems installed in a residential development under real storm events. To achieve this objective, a constructed wetland, a bioretention swale and a bioretention basin were evaluated for their ability to improve the hydrologic and water quality characteristics of stormwater runoff from urban development. The monitoring focused on storm events, with sophisticated event monitoring stations measuring the inflow to and outflow from the WSUD systems. Data analysis confirmed that the constructed wetland, bioretention basin and bioretention swale improved the hydrologic characteristics by reducing peak flow. The bioretention systems, particularly the bioretention basin, also reduced the runoff volume and frequency of flow, meeting key objectives of current urban stormwater management. The WSUD systems reduced pollutant loads to just above or below the regional guidelines, showing significant reductions in TSS (70-85%), TN (40-50%) and TP (50%). The load reduction of NOx and PO4³⁻ by the bioretention basin was poor (<20%), whilst the constructed wetland effectively reduced the load of these pollutants in the outflow by approximately 90%. The load reduction in the wetland was primarily due to a reduction in concentration in the outflow, showing efficient treatment of stormwater by the system.
In contrast, the concentrations of key pollutants exiting the bioretention basin were higher than in the inflow. However, as the volume of stormwater exiting the bioretention basin was significantly lower than the inflow volume, a load reduction was still achieved. Calibrated MUSIC modelling showed that the bioretention basin and, in particular, the constructed wetland were undersized, with 34% and 62% of stormwater respectively bypassing the treatment zones in the devices. Over the long term, a large proportion of runoff would not receive treatment, considerably reducing the effectiveness of the WSUD systems.
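The basin result above turns on the distinction between concentration and load: event load is concentration multiplied by runoff volume, so a system can raise outflow concentration and still cut the load if it retains enough volume. A small worked example with hypothetical numbers (not measurements from the study):

```python
def pollutant_load(concentration_mg_per_L, volume_kL):
    """Event pollutant load in grams: (mg/L) x (kL) = g."""
    return concentration_mg_per_L * volume_kL

# Hypothetical event, for illustration only: the outflow concentration rises
# by 40% but the basin retains 60% of the runoff volume.
inflow_load = pollutant_load(1.0, 100.0)   # 100 g entering the basin
outflow_load = pollutant_load(1.4, 40.0)   # 56 g leaving the basin
reduction = 1.0 - outflow_load / inflow_load  # net load reduction achieved
```

Despite the higher outflow concentration, the retained volume yields a 44% load reduction in this example, mirroring the mechanism described for the bioretention basin.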
Abstract:
Aims To identify self-care activities undertaken and determine relationships between self-efficacy, depression, quality of life, social support and adherence to compression therapy in a sample of patients with chronic venous insufficiency. Background Up to 70% of venous leg ulcers recur after healing. Compression hosiery is a primary strategy to prevent recurrence; however, problems with adherence to this strategy are well documented, and an improved understanding of how psychosocial factors influence patients with chronic venous insufficiency will help guide effective preventive strategies. Design Cross-sectional survey and retrospective medical record review. Method All patients previously diagnosed with a venous leg ulcer that healed between 12 and 36 months prior to the study were invited to participate. Data on health, psychosocial variables and self-care activities were obtained from a self-report survey, and data on medical and previous ulcer history were obtained from medical records. Multiple linear regression modelling was used to determine the independent influences of psychosocial factors on adherence to compression therapy. Results In a sample of 122 participants, the most frequently identified self-care activities were application of topical skin treatments, wearing compression hosiery and covering legs to prevent trauma. Compression hosiery was worn for a median of 4 days/week (range 0–7). After adjustment for all variables and potential confounders in a multivariable regression model, wearing compression hosiery was found to be significantly positively associated with participants’ knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Conclusion In this sample, depression, self-efficacy and knowledge were found to be significantly related to adherence to compression therapy.
Relevance to clinical practice These findings support the need to screen for and treat depression in this population. In addition, strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy.
Abstract:
Knowledge of the regulation of food intake is crucial to an understanding of body weight and obesity. Strictly speaking, we should refer to the control of food intake whose expression is modulated in the interests of the regulation of body weight. Food intake is controlled, body weight is regulated. However, this semantic distinction only serves to emphasize the importance of food intake. Traditionally food intake has been researched within the homeostatic approach to physiological systems pioneered by Claude Bernard, Walter Cannon and others; and because feeding is a form of behaviour, it forms part of what Curt Richter referred to as the behavioural regulation of body weight (or behavioural homeostasis). This approach views food intake as the vehicle for energy supply whose expression is modulated by a metabolic drive generated in response to a requirement for energy. The idea was that eating behaviour is stimulated and inhibited by internal signalling systems (for the drive and suppression of eating respectively) in order to regulate the internal environment (energy stores, tissue needs).
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy at the cost of markedly increased complexity. The use of the misclassification rate (MR) for model performance evaluation is influenced by class distribution. This influence can be eliminated by consideration of the AUC or the Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) having MR 9.8 and raw time-series summary data (dataset A) having MR 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (Dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, the nearest neighbour method (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, when compared to the use of risk factors alone, is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
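The Kappa statistic used above alongside the misclassification rate can be computed directly from a 2x2 confusion matrix. The sketch below is a generic implementation, not code from the thesis; it also illustrates why Kappa is preferred under unbalanced class distributions, since a classifier that simply follows the majority class scores zero Kappa even when its raw error rate looks acceptable.

```python
def misclassification_rate(tp, fp, fn, tn):
    """Percentage of instances classified incorrectly."""
    n = tp + fp + fn + tn
    return 100.0 * (fp + fn) / n

def kappa(tp, fp, fn, tn):
    """Cohen's Kappa: agreement beyond chance for a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement (accuracy)
    # chance agreement expected from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (po - pe) / (1.0 - pe)
```

A perfect classifier gives Kappa of 1, while a classifier that always predicts the positive class gives Kappa of 0 regardless of how common that class is.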
Abstract:
In this chapter, we are particularly concerned with making visible the general principles underlying the transmission of Social Studies curriculum knowledge, and considering it in light of a high-stakes mandated national assessment task. Specifically, we draw on Bernstein’s theoretical concept of pedagogic models as a tool for analysing orientations to teaching and learning. We introduce a case in point from the Australian context: one state Social Studies curriculum vis-a-vis one part of the Year Three national assessment measure for reading. We use our findings to consider the implications for the disciplinary knowledge of Social Studies in the communities in which we are undertaking our respective Australian Research Council Linkage project work (Glasswell et al.; Woods et al.). We propose that Social Studies disciplinary knowledge is being constituted, in part, through power struggles between different agencies responsible for the production and relay of official forms of state curriculum and national literacy assessment. This is particularly the case when assessment instruments are used to compare and contrast school results in highly visible web-based league tables (see, for example, http://myschoolaustralia.ning.com/).
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
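The whiteness property the KID exploits can be sketched with a simple time-domain check. This is a generic autocorrelation-based whiteness test, not the thesis's spectral Chi-Squared test: white noise has sample autocorrelations within roughly ±z/√N at every non-zero lag, so a persistent correlation in the innovation signals that the Kalman model no longer fits.

```python
import math

def whiteness_test(innovation, max_lag=10, z=1.96):
    """Return True if the innovation sequence looks white: all sample
    autocorrelations at lags 1..max_lag fall inside the ±z/sqrt(N) band."""
    n = len(innovation)
    mean = sum(innovation) / n
    x = [v - mean for v in innovation]
    c0 = sum(v * v for v in x) / n       # lag-0 autocovariance
    bound = z / math.sqrt(n)
    for lag in range(1, max_lag + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / (n * c0)
        if abs(r) > bound:
            return False  # correlated innovation: model no longer valid
    return True
```

In a monitoring loop this test would run on a sliding window of innovations, with a failure triggering the alarm; locating the responsible modal frequency would then require the spectral view described above.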
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
Abstract:
This study focused on a group of primary school teachers as they implemented a variety of intervention actions within their class programs aimed towards supporting the reduction of high levels of communication apprehension (CA) among students. Six teachers and nine students, located across three primary schools, four year levels, and six classes, participated in this study. For reasons of confidentiality the schools, principals, parents, teachers, teacher assistants, and students who were involved in this study were given fictitious names. The following research question was explored in this study: What intervention actions can primary school teachers implement within their class programs that support the reduction of high CA levels among students? Throughout this study the term CA referred to "an individual's level of fear or anxiety associated with either real or anticipated (oral) communication with another person or persons" (McCroskey, 1984, p. 13). The sources of CA were explained with reference to McCroskey's state-trait continuum. The distinctions between high and appropriate levels of CA were determined conceptually and empirically. The education system within which this study was conducted promoted the philosophy of inclusion and the practices of inclusive schooling. Teachers employed in this system were encouraged to create class programs inclusive of and successful for all students. Consequently the conceptual framework within which this study was conducted was based around the notion of inclusion. Action research and case study research were the methodologies used in the study. Case studies described teachers' action research as they responded to the challenge of executing intervention actions within their class programs directed towards supporting the reduction of high CA levels among students. Consequently the teachers and not the researcher were the central characters in each of the case studies.
Three principal data collection instruments were used in this study: the Personal Report of Communication Fear (PRCF) scale, semistructured interviews, and dialogue journals. The PRCF scale was the screening tool used to identify a pool of students eligible for the study. Data relevant to the students involved in the study were gathered during semistructured interviews and throughout the dialogue journaling process. Dialogue journaling provided the opportunity for regular contact between teachers and the researcher, a sequence to teacher and student intervention behaviours, and a permanent record of teacher and student growth and development. The majority of teachers involved in this study endeavoured to develop class programs inclusive of all students. These teachers acknowledged the importance of modifying aspects of their class programs in response to the diverse and often multiple needs of individual students with high levels of CA. Numerous conclusions were drawn regarding practical ways that the teachers in this study supported the reduction of high CA levels among students. What this study has shown is that teachers can incorporate intervention actions within their class programs aimed towards supporting students in lowering their high levels of CA. Whilst no teacher developed an identical approach to intervention, similarities and differences were evident among teachers regarding their selection, interpretation, and implementation of intervention actions. Actions that teachers enacted within their class programs emerged from numerous fields of research including CA, inclusion, social skills, behaviour teaching, co-operative learning, and quality schools. Each teacher's knowledge of and familiarity with these research fields influenced their preference for and commitment to particular intervention actions. Additional factors, including each teacher's paradigm of inclusion and exclusion, contributed towards their choice of intervention actions.
Possible implications of these conclusions were noted with reference to teachers, school administrators, support personnel, system personnel, teacher educators, parents, and researchers.
Resumo:
This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol, in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to J(NO2) (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error, as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal J(NO2) are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross section and quantum yields. A computer model was developed, based on this method, to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted to account for variation in light intensity caused by transmittance through the Teflon walls, albedo from the chamber floor, and radiation attenuation due to clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments, which were performed in the CSIRO smog chambers. Three stages of aerosol formation, in plots of total particulate volume versus time, are identified: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models are presented which were developed from the experimental data.
One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favourably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1. This is due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased due to more OH radicals from reaction with O(1D) from ozone photolysis. Increased temperature results in increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while increased photolysis rate results in increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate. Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed / mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
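The actinic-flux method described above computes each photolysis rate coefficient as a wavelength integral, J = ∫ F(λ) σ(λ) φ(λ) dλ, with in-chamber values obtained by scaling the ex-chamber coefficient for wall transmittance, floor albedo, and cloud attenuation. A minimal numerical sketch follows; all spectra and scaling factors are hypothetical placeholders for illustration, not the CSIRO chamber data, and the simple multiplicative adjustment is an assumption:

```python
import numpy as np

def photolysis_rate(wl_nm, flux, sigma, phi):
    """J = integral of F(lambda)*sigma(lambda)*phi(lambda) d(lambda), in s^-1.
    Trapezoidal integration over the tabulated wavelength grid."""
    y = flux * sigma * phi
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl_nm)))

def in_chamber_rate(j_ambient, wall_transmittance, floor_albedo, cloud_factor):
    """Adjust an ex-chamber J for Teflon-wall transmittance, floor albedo
    and cloud attenuation (simple multiplicative model, assumed here)."""
    return j_ambient * wall_transmittance * (1.0 + floor_albedo) * cloud_factor

# Flat placeholder spectra; real calculations use tabulated actinic flux,
# cross sections and quantum yields on a fine wavelength grid.
wl = np.linspace(300.0, 400.0, 11)       # wavelength, nm
flux = np.full_like(wl, 1.0e14)          # photons cm^-2 s^-1 nm^-1
sigma = np.full_like(wl, 1.0e-19)        # cm^2 molecule^-1
phi = np.full_like(wl, 0.5)              # quantum yield, dimensionless

j_clear = photolysis_rate(wl, flux, sigma, phi)
j_chamber = in_chamber_rate(j_clear, 0.9, 0.05, 1.0)
```

Relating other species' coefficients to J(NO2) by a fixed factor fails outdoors precisely because each species' σ(λ)φ(λ) weighting samples the diurnally varying flux spectrum differently, which is why the integral form is preferred here.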