901 results for Knowledge of God


Relevance:

90.00%

Publisher:

Abstract:

Chlamydia pneumoniae is a common human and animal pathogen associated with a wide range of upper and lower respiratory tract infections. In recent years, increasing evidence has suggested a link between C. pneumoniae and chronic diseases in humans, including atherosclerosis, stroke and Alzheimer’s disease. C. pneumoniae human strains show little genetic variation, indicating that the human-derived strain originated from a common ancestor in the recent past. Despite extensive information on the genetics and morphology of the human strain, knowledge of the strains infecting many other hosts (including marsupials, amphibians, reptiles and equines) remains virtually nonexistent. The koala (Phascolarctos cinereus) is a native Australian marsupial under threat due to habitat loss, predation and disease. Koalas are very susceptible to chlamydial infections, most commonly affecting the conjunctiva, urogenital tract and/or respiratory tract. To address this gap in the literature, the present study (i) provides a detailed description of the morphologic and genomic architecture of the C. pneumoniae koala (and human) strain, and shows that the koala strain is microscopically, developmentally and genetically distinct from the C. pneumoniae human strain, and (ii) examines the genetic relationship of geographically diverse C. pneumoniae isolates from human, marsupial, amphibian, reptilian and equine hosts, identifying two distinct lineages that have arisen from animal-to-human cross-species transmissions. Chapter One of this thesis sets out the scientific problem and aims of this study, while Chapter Two provides a detailed literature review of the background to this field of work. Chapter Three, the first results chapter, describes the morphology and developmental stages of C. pneumoniae koala isolate LPCoLN, as revealed by fluorescence and transmission electron microscopy.
The profile of this isolate, when cultured in HEp-2 human epithelial cells, was quite different from that of the human AR39 isolate. Koala LPCoLN inclusions were larger, the elementary bodies lacked the characteristic pear-shaped appearance, and the developmental cycle was completed within a shorter period of time (as confirmed by quantitative real-time PCR). These in vitro findings might reflect biological differences between koala LPCoLN and human AR39 in vivo. Chapter Four describes the complete genome sequence of the koala respiratory pathogen, C. pneumoniae LPCoLN, the first animal isolate of C. pneumoniae to be fully sequenced. The genome sequence provides new insights into the genomic ‘plasticity’ (organisation), evolution and biology of koala LPCoLN, relative to four complete C. pneumoniae human genomes (AR39, CWL029, J138 and TW183). Koala LPCoLN contains a plasmid that is not shared with any of the human isolates, there is evidence of gene loss in nucleotide salvage pathways, and there are 10 hot-spot genomic regions of variation that were not previously identified in the C. pneumoniae human genomes. Partial-length sequence from a second, independent, wild koala isolate (EBB) at several gene loci confirmed that the koala LPCoLN isolate is representative of a koala C. pneumoniae strain. The combined sequence data provide evidence that the C. pneumoniae animal (koala LPCoLN) genome is ancestral to the C. pneumoniae human genomes and that human infections may have originated from zoonotic infections. Chapter Five examines key genome components of the five C. pneumoniae genomes in more detail. This analysis reveals genomic features that are shared by, and/or contribute to, the broad ecological adaptability and evolution of C. pneumoniae.
This analysis resulted in the identification of 65 gene sequences for further analysis of intraspecific variation, and revealed some interesting differences, including fragmentation, truncation and gene decay (loss of redundant ancestral traits). The study provides valuable insights into the metabolic diversity, adaptation and evolution of C. pneumoniae. Chapter Six utilises a subset of 23 target genes identified from the preceding genomic comparisons and makes a significant contribution to our understanding of genetic variability among C. pneumoniae isolates from human (11) and animal (6 amphibian, 5 reptilian, 1 equine and 7 marsupial) hosts. The animal isolates were shown to be genetically diverse, unlike the virtually clonal human isolates. This study provides further evidence that C. pneumoniae originated in animals and recently (within the last few hundred thousand years) crossed host species to infect humans. Based on these results, it is proposed that two animal-to-human cross-species transmission events have occurred: one evidenced by the nearly clonal human genotype circulating in the world today, and the other by a more animal-like genotype apparent in Indigenous Australians. Taken together, these data indicate that the C. pneumoniae koala LPCoLN isolate has morphologic and genomic characteristics that are distinct from the human isolates. These differences may affect the survival and activity of the C. pneumoniae koala pathogen in its natural host, in vivo. By exploiting the genetic diversity of C. pneumoniae, this study identified new genetic markers for distinguishing human and animal isolates. However, not all C. pneumoniae isolates were genetically diverse; several were highly conserved, if not identical in sequence (e.g. the Australian marsupial isolates), emphasising that at some stage in the evolution of this pathogen there has been adaptation to a particular host, conferring some stability on the genome.
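The contrast drawn here between virtually clonal human isolates and diverse animal isolates is commonly quantified as pairwise identity across aligned target gene loci. A minimal sketch on toy, invented sequence fragments (not data from this thesis):

```python
def pairwise_identity(a, b):
    """Fraction of matching positions between two pre-aligned sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy pre-aligned fragments of a hypothetical target locus (invented).
human = ["ATGGCTACCA", "ATGGCTACCA", "ATGGCTACCA"]   # near-clonal pattern
animal = ["ATGGCTACCA", "ATGACTTCCA", "CTGGGTACTA"]  # diverse pattern

human_ids = [pairwise_identity(a, b)
             for i, a in enumerate(human) for b in human[i + 1:]]
animal_ids = [pairwise_identity(a, b)
              for i, a in enumerate(animal) for b in animal[i + 1:]]
print(min(human_ids), min(animal_ids))  # 1.0 0.5
```

A near-clonal set keeps all pairwise identities at (or very near) 1.0, while a diverse set shows much lower minimum identity, which is the pattern reported for the human versus animal isolates.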
The outcomes of this study, obtained through experimental and bioinformatic approaches, have significantly enhanced our knowledge of the biology of this pathogen and will open new opportunities for the investigation of novel vaccine targets, antimicrobial therapy and the blocking of pathogenic pathways.

Multiresolution techniques are used extensively in the signal processing literature. This paper has two parts. In the first part, we derive a relationship between the general degradation model (Y = BX + W) at coarse and fine resolutions. In the second part, we develop a signal restoration scheme in a multiresolution framework and demonstrate through experiments that knowledge of the relationship between the degradation models at different resolutions helps in obtaining a computationally efficient restoration scheme.
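The link between the two resolutions follows from linearity: applying a decimation operator D to the fine-scale model Y = BX + W gives DY = (DB)X + DW, i.e. the coarse observation again obeys a degradation model with effective operator DB and noise DW. A small NumPy check of this identity (the blur B and the 2:1 averaging D below are toy choices, not the paper's operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Fine-resolution degradation model Y = B X + W (B is a toy blur operator).
B = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
X = rng.normal(size=n)
W = 0.01 * rng.normal(size=n)
Y = B @ X + W

# D: 2-to-1 averaging decimation operator taking signals to the coarse grid.
D = np.zeros((n // 2, n))
for i in range(n // 2):
    D[i, 2 * i] = D[i, 2 * i + 1] = 0.5

# By linearity, D Y = (D B) X + D W: the coarse observation again follows a
# degradation model, with effective operator DB and noise DW.
assert np.allclose(D @ Y, (D @ B) @ X + D @ W)
```

The practical point is that a restoration scheme can work with the smaller coarse-scale system DB first and refine at the fine scale, rather than solving the full fine-scale problem directly.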

Water Sensitive Urban Design (WSUD) systems have the potential to mitigate the hydrologic disturbance and water quality concerns associated with stormwater runoff from urban development. In recent years WSUD has been strongly promoted in South East Queensland (SEQ), and new developments are now required to use WSUD systems to manage stormwater runoff. However, there has been limited field evaluation of WSUD systems in SEQ, so knowledge of their effectiveness in the field, under storm events, is limited. The objective of this research project was to assess the effectiveness of WSUD systems installed in a residential development under real storm events. To achieve this objective, a constructed wetland, a bioretention swale and a bioretention basin were evaluated for their ability to improve the hydrologic and water quality characteristics of stormwater runoff from urban development. The monitoring focused on storm events, with sophisticated event monitoring stations measuring the inflow to and outflow from the WSUD systems. Data analysis confirmed that the constructed wetland, bioretention basin and bioretention swale improved the hydrologic characteristics by reducing peak flow. The bioretention systems, particularly the bioretention basin, also reduced the runoff volume and frequency of flow, meeting key objectives of current urban stormwater management. The WSUD systems reduced pollutant loads to levels above or just below the regional guidelines, with significant reductions in TSS (70–85%), TN (40–50%) and TP (50%). The load reduction of NOx and PO4³⁻ by the bioretention basin was poor (<20%), whilst the constructed wetland effectively reduced the load of these pollutants in the outflow by approximately 90%. The load reduction in the wetland was primarily due to a reduction in outflow concentration, showing efficient treatment of stormwater by the system.
In contrast, the concentrations of key pollutants exiting the bioretention basin were higher than in the inflow. However, as the volume of stormwater exiting the bioretention basin was significantly lower than the inflow volume, a load reduction was still achieved. Calibrated MUSIC modelling showed that the bioretention basin and, in particular, the constructed wetland were undersized, with 34% and 62% of stormwater, respectively, bypassing the treatment zones in the devices. Over the long term, a large proportion of runoff would not receive treatment, considerably reducing the effectiveness of the WSUD systems.
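The point that a load reduction can coexist with a higher outflow concentration is simple mass-balance arithmetic (load = volume × concentration). The numbers below are invented for illustration, not monitoring data from this study:

```python
# Invented event figures for a bioretention basin (not monitoring data).
inflow_volume = 100.0    # m^3
inflow_conc = 2.0        # mg/L of some pollutant, e.g. TN
outflow_volume = 40.0    # m^3: much of the inflow is retained/infiltrated
outflow_conc = 2.5       # mg/L: higher than the inflow concentration

# load [kg] = volume [m^3] * 1000 [L/m^3] * concentration [mg/L] / 1e6 [mg/kg]
load_in = inflow_volume * 1000 * inflow_conc / 1e6     # 0.2 kg
load_out = outflow_volume * 1000 * outflow_conc / 1e6  # 0.1 kg
reduction = 1 - load_out / load_in
print(f"load reduction: {reduction:.0%}")  # 50%
```

Even with a 25% rise in concentration, the 60% drop in volume yields a net 50% load reduction, which is the mechanism described for the bioretention basin.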

Aims To identify self-care activities undertaken and determine relationships between self-efficacy, depression, quality of life, social support and adherence to compression therapy in a sample of patients with chronic venous insufficiency. Background Up to 70% of venous leg ulcers recur after healing. Compression hosiery is a primary strategy to prevent recurrence; however, problems with adherence to this strategy are well documented, and an improved understanding of how psychosocial factors influence patients with chronic venous insufficiency will help guide effective preventive strategies. Design Cross-sectional survey and retrospective medical record review. Method All patients previously diagnosed with a venous leg ulcer that had healed 12–36 months prior to the study were invited to participate. Data on health, psychosocial variables and self-care activities were obtained from a self-report survey, and data on medical and previous ulcer history were obtained from medical records. Multiple linear regression modelling was used to determine the independent influences of psychosocial factors on adherence to compression therapy. Results In a sample of 122 participants, the most frequently identified self-care activities were application of topical skin treatments, wearing compression hosiery and covering the legs to prevent trauma. Compression hosiery was worn for a median of 4 days/week (range 0–7). After adjustment for all variables and potential confounders in a multivariable regression model, wearing compression hosiery was significantly positively associated with participants’ knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Conclusion In this sample, depression, self-efficacy and knowledge were found to be significantly related to adherence to compression therapy.
Relevance to clinical practice These findings support the need to screen for and treat depression in this population. In addition, strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy.
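The multivariable model described here regresses days of hosiery wear on psychosocial predictors. A minimal sketch on synthetic data (the predictor scales, effect sizes and noise are invented; only the sample size and the reported directions of association follow the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 122  # the study's sample size; everything else below is synthetic

# Hypothetical predictors (scales invented for illustration).
self_efficacy = rng.normal(60.0, 10.0, n)
depression = rng.normal(10.0, 4.0, n)
knowledge = rng.integers(0, 2, n).astype(float)  # knows cause of condition (0/1)

# Simulated adherence (days/week of hosiery wear, 0-7), with the directions
# of effect reported in the study: +self-efficacy, -depression, +knowledge.
days = np.clip(0.05 * self_efficacy - 0.15 * depression + 1.0 * knowledge
               + rng.normal(0.0, 1.0, n), 0.0, 7.0)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), self_efficacy, depression, knowledge])
coef, *_ = np.linalg.lstsq(X, days, rcond=None)
print(dict(zip(["intercept", "self_efficacy", "depression", "knowledge"],
               np.round(coef, 3))))
```

The fitted coefficients recover the simulated signs: positive for self-efficacy and knowledge, negative for depression, mirroring the associations the study reports.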

Knowledge of the regulation of food intake is crucial to an understanding of body weight and obesity. Strictly speaking, we should refer to the control of food intake, whose expression is modulated in the interests of the regulation of body weight: food intake is controlled, body weight is regulated. However, this semantic distinction only serves to emphasize the importance of food intake. Traditionally, food intake has been researched within the homeostatic approach to physiological systems pioneered by Claude Bernard, Walter Cannon and others; because feeding is a form of behaviour, it forms part of what Curt Richter referred to as the behavioural regulation of body weight (or behavioural homeostasis). This approach views food intake as the vehicle for energy supply, whose expression is modulated by a metabolic drive generated in response to a requirement for energy. The idea was that eating behaviour is stimulated and inhibited by internal signalling systems (for the drive and suppression of eating, respectively) in order to regulate the internal environment (energy stores, tissue needs).

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone, and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy at markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This influence could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to 9.8–10.16, with time-segmented summary data (dataset F) at MR 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09.
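The evaluation strategy described here (majority-class under-sampling, then Kappa and AUC rather than raw misclassification rate) can be sketched on synthetic unbalanced data. The class prevalence, score model and threshold below are invented for illustration, not the study's classifiers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy unbalanced data: ~5% positives, one weakly informative score (all invented).
y = (rng.random(2000) < 0.05).astype(int)
score = rng.normal(0.0, 1.0, 2000) + 1.5 * y

# Majority-class under-sampling: keep all positives plus an equal number of negatives.
pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=pos.size, replace=False)
idx = np.concatenate([pos, neg])
yb, sb = y[idx], score[idx]

pred = (sb > 0.75).astype(int)  # a fixed decision threshold

# Misclassification rate alone is misleading under class imbalance; Kappa
# corrects for chance agreement and AUC is threshold-free.
mr = float(np.mean(pred != yb))
po = 1.0 - mr                                  # observed agreement
pe = (pred.mean() * yb.mean()
      + (1 - pred.mean()) * (1 - yb.mean()))   # chance agreement
kappa = (po - pe) / (1 - pe)

# AUC = probability that a random positive outranks a random negative.
auc = float(np.mean(sb[yb == 1][:, None] > sb[yb == 0][None, :]))
print(round(mr, 3), round(kappa, 3), round(auc, 3))
```

On the balanced subsample, chance agreement is 50%, so Kappa directly reflects skill above chance; on the original 95/5 split, a trivial "always negative" model would have looked deceptively good by MR alone.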
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, at 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables over risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.

In this chapter, we are particularly concerned with making visible the general principles underlying the transmission of Social Studies curriculum knowledge, and with considering it in light of a high-stakes mandated national assessment task. Specifically, we draw on Bernstein’s theoretical concept of pedagogic models as a tool for analysing orientations to teaching and learning. We introduce a case in point from the Australian context: one state Social Studies curriculum vis-à-vis one part of the Year Three national assessment measure for reading. We use our findings to consider the implications for the disciplinary knowledge of Social Studies in the communities in which we are undertaking our respective Australian Research Council Linkage project work (Glasswell et al.; Woods et al.). We propose that Social Studies disciplinary knowledge is being constituted, in part, through power struggles between different agencies responsible for the production and relay of official forms of state curriculum and national literacy assessment. This is particularly the case when assessment instruments are used to compare and contrast school results in highly visible web-based league tables (see, for example, http://myschoolaustralia.ning.com/).

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem in a large interconnected power system is the regular occurrence of system disturbances, which can create intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For an ideally operating power system these transient responses would have a “ring-down” time of 10–15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
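The EBD logic can be sketched numerically: treat the window energy of a well-damped baseline as a random quantity, set a threshold from its statistics, and flag windows whose energy exceeds it. All signal parameters below are invented toy values, not the thesis's statistical model:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f0 = 10.0, 0.5                 # sample rate (Hz), modal frequency (Hz): toy values
t = np.arange(0.0, 60.0, 1.0 / fs)

def ringdown(damping):
    """Noisy decaying sinusoid: a toy stand-in for one modal disturbance response."""
    return (np.exp(-damping * t) * np.sin(2 * np.pi * f0 * t)
            + 0.05 * rng.normal(size=t.size))

def window_energy(x):
    return float(np.sum(x ** 2))

# Model the window energy of the well-damped baseline as a random quantity
# and set a simple mean + 4*std alarm threshold from repeated baseline windows.
baseline = [window_energy(ringdown(0.5)) for _ in range(50)]
threshold = float(np.mean(baseline) + 4 * np.std(baseline))

# A sudden loss of damping leaves far more energy in the window -> alarm.
print(window_energy(ringdown(0.05)) > threshold)  # True
```

Because a lightly damped mode decays slowly, its disturbance energy dwarfs the baseline statistics, which is exactly why a simple energy threshold suffices and why the method cannot say *which* mode deteriorated.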
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or a damping change. The PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.
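The KID's whiteness property can be demonstrated with a scalar state-space model: while the Kalman filter matches the data, the innovation sequence is white; once the data change (here, a hypothetical lightly damped oscillation is superimposed), the innovations become coloured. The model and all numbers are toy choices, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)

def innovations(y, a, q=0.1, r=0.1):
    """Scalar Kalman filter for x_k = a*x_{k-1} + w_k, y_k = x_k + v_k.

    Returns the innovation sequence e_k = y_k - (predicted output).
    """
    xhat, P, out = 0.0, 1.0, []
    for yk in y:
        xhat, P = a * xhat, a * a * P + q      # predict
        e = yk - xhat                          # innovation
        out.append(e)
        K = P / (P + r)                        # update
        xhat, P = xhat + K * e, (1 - K) * P
    return np.array(out)

def lag1_corr(e):
    """Lag-1 autocorrelation: near zero for a white sequence."""
    e = e - e.mean()
    return float(np.dot(e[1:], e[:-1]) / np.dot(e, e))

# Simulate the assumed model: the innovations should be (nearly) white.
a_true, x, ys = 0.8, 0.0, []
for _ in range(4000):
    x = a_true * x + np.sqrt(0.1) * rng.normal()
    ys.append(x + np.sqrt(0.1) * rng.normal())
ys = np.array(ys)
white = lag1_corr(innovations(ys, a=a_true))

# A hypothetical "system change": a lightly damped oscillation appears.
# The same filter no longer matches the data, so the innovations are coloured.
t = np.arange(ys.size)
coloured = lag1_corr(innovations(ys + 2.0 * np.sin(2 * np.pi * 0.01 * t), a=a_true))
print(round(white, 3), round(coloured, 3))
```

In the full method the innovation *spectrum* is monitored rather than a single autocorrelation lag, so the spectral peak also identifies which modal frequency caused the change.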
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
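The core idea of reading frequency and frequency drift off polynomial phase coefficients can be sketched on a synthetic complex modal signal: fit a polynomial to the unwrapped phase and interpret its terms. This is a simple illustration of polynomial phase modelling, not the thesis's CP-function estimator; all parameters are invented:

```python
import numpy as np

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic complex modal signal with polynomial phase:
# phi(t) = 2*pi*(f0*t + 0.5*c*t^2), f0 in Hz, drift c in Hz/s (invented values).
f0, c = 1.2, 0.05
z = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * c * t ** 2)) * np.exp(-0.05 * t)

# Fit a 2nd-order polynomial to the unwrapped phase; the coefficients give
# the frequency (linear term) and the frequency drift (quadratic term).
p = np.polyfit(t, np.unwrap(np.angle(z)), 2)
f_est = p[1] / (2 * np.pi)          # Hz
drift_est = 2 * p[0] / (2 * np.pi)  # Hz/s
print(round(f_est, 3), round(drift_est, 3))  # 1.2 0.05
```

A sudden modal frequency shift shows up directly in these coefficients, which is the information the PPM cross-references with the other detectors.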

This study focused on a group of primary school teachers as they implemented a variety of intervention actions within their class programs aimed towards supporting the reduction of high levels of communication apprehension (CA) among students. Six teachers and nine students, located across three primary schools, four year levels, and six classes, participated in this study. For reasons of confidentiality the schools, principals, parents, teachers, teacher assistants, and students who were involved in this study were given fictitious names. The following research question was explored in this study: What intervention actions can primary school teachers implement within their class programs that support the reduction of high CA levels among students? Throughout this study the term CA referred to "an individual's level of fear or anxiety associated with either real or anticipated (oral) communication with another person or persons" (McCroskey, 1984, p. 13). The sources of CA were explained with reference to McCroskey's state-trait continuum. The distinctions between high and appropriate levels of CA were determined conceptually and empirically. The education system within which this study was conducted promoted the philosophy of inclusion and the practices of inclusive schooling. Teachers employed in this system were encouraged to create class programs inclusive of and successful for all students. Consequently the conceptual framework within which this study was conducted was based around the notion of inclusion. Action research and case study research were the methodologies used in the study. Case studies described teachers' action research as they responded to the challenge of executing intervention actions within their class programs directed towards supporting the reduction of high CA levels among students. Consequently the teachers, and not the researcher, were the central characters in each of the case studies.
Three principal data collection instruments were used in this study: the Personal Report of Communication Fear (PRCF) scale, semistructured interviews, and dialogue journals. The PRCF scale was the screening tool used to identify a pool of students eligible for the study. Data relevant to the students involved in the study were gathered during semistructured interviews and throughout the dialogue journaling process. Dialogue journaling provided the opportunity for regular contact between teachers and the researcher, a sequence to teacher and student intervention behaviours, and a permanent record of teacher and student growth and development. The majority of teachers involved in this study endeavoured to develop class programs inclusive of all students. These teachers acknowledged the importance of modifying aspects of their class programs in response to the diverse and often multiple needs of individual students with high levels of CA. Numerous conclusions were drawn regarding practical ways that the teachers in this study supported the reduction of high CA levels among students. What this study has shown is that teachers can incorporate intervention actions within their class programs aimed towards supporting students to lower their high levels of CA. Whilst no teacher developed an identical approach to intervention, similarities and differences were evident among teachers regarding their selection, interpretation, and implementation of intervention actions. Actions that teachers enacted within their class programs emerged from numerous fields of research, including CA, inclusion, social skills, behaviour teaching, co-operative learning, and quality schools. Each teacher's knowledge of and familiarity with these research fields influenced their preference for and commitment to particular intervention actions. Additional factors, including each teacher's paradigm of inclusion and exclusion, contributed towards their choice of intervention actions.
Possible implications of these conclusions were noted with reference to teachers, school administrators, support personnel, system personnel, teacher educators, parents, and researchers.

This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to JNO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error, as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal JNO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross sections and quantum yields. A computer model was developed, based on this method, to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted to account for variation in light intensity due to transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation due to clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments performed in the CSIRO smog chambers. Three stages of aerosol formation are identified in plots of total particulate volume versus time: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models are presented which were developed from the experimental data.
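The method singled out above computes a photolysis rate coefficient as the wavelength integral of actinic flux × absorption cross section × quantum yield, J = ∫ F(λ) σ(λ) φ(λ) dλ. A minimal numerical sketch with invented tabulated values (illustrative shapes only, not measured NO2 data):

```python
import numpy as np

# Hypothetical tabulated values over 300-420 nm (illustrative, not real NO2 data).
wavelength = np.linspace(300.0, 420.0, 25)                 # nm
actinic_flux = 1e14 * np.ones_like(wavelength)             # photons cm^-2 s^-1 nm^-1
cross_section = 4e-19 * np.exp(-(((wavelength - 400.0) / 60.0) ** 2))  # cm^2
quantum_yield = np.where(wavelength < 398.0, 1.0, 0.0)     # dissociation cut-off

# J = integral over wavelength of F * sigma * phi   [s^-1]
# (trapezoidal rule written out explicitly)
integrand = actinic_flux * cross_section * quantum_yield
j_no2 = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wavelength)))
print(f"J_NO2 ~ {j_no2:.2e} s^-1")
```

In the chamber model this integrand would be modified by the Teflon-wall transmittance, floor albedo and cloud attenuation factors described above before integrating.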
One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favorably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in an increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1. This is due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased owing to the additional OH radicals produced by reaction with O(1D) from ozone photolysis. Increased temperature results in an increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while an increased photolysis rate results in an increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate. Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed/mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
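The actinic-flux method favoured above amounts to integrating flux × absorption cross section × quantum yield over wavelength. A minimal sketch, using placeholder spectral values rather than the CSIRO chamber data (all numeric values below are illustrative assumptions):

```python
import numpy as np

def photolysis_rate(wavelengths_nm, actinic_flux, cross_section, quantum_yield):
    """J = integral over wavelength of F(lambda)*sigma(lambda)*phi(lambda).

    actinic_flux  : photons cm^-2 s^-1 nm^-1 on the wavelength grid
    cross_section : absorption cross section, cm^2 molecule^-1
    quantum_yield : dimensionless photolysis quantum yield
    Returns J in s^-1 (trapezoidal integration).
    """
    integrand = actinic_flux * cross_section * quantum_yield
    steps = np.diff(wavelengths_nm)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * steps))

# Placeholder spectra over the NO2 photolysis band (300-420 nm, 1 nm grid);
# a real run would use measured or tabulated values for each species.
wl = np.linspace(300.0, 420.0, 121)
flux = np.full_like(wl, 1.0e14)        # illustrative actinic flux
sigma = np.full_like(wl, 3.0e-19)      # illustrative NO2 cross section
phi = np.where(wl < 398.0, 1.0, 0.0)   # quantum yield falls off near 398 nm

j_no2 = photolysis_rate(wl, flux, sigma, phi)

# In-chamber adjustment, as described in the text: scale the ex-chamber value
# by Teflon-wall transmittance, floor albedo and cloud attenuation
# (all three factors below are illustrative, not the model's values).
j_in_chamber = j_no2 * 0.90 * (1.0 + 0.05) * 0.95
```

Because each species gets its own cross-section and quantum-yield spectra, this approach avoids the shape mismatch that makes simple scaling from J_NO2 unreliable outdoors.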

Relevância:

90.00%

Publicador:

Resumo:

This study examined the psychometric properties of an expanded version of the Algase Wandering Scale (Version 2) (AWS-V2) in a cross-cultural sample. A cross-sectional survey design was used. Study subjects were 172 English-speaking persons with dementia (PWD) from long-term care facilities in the USA, Canada, and Australia. Two or more facility staff rated each subject on the AWS-V2. Demographic and cognitive data (MMSE) were also obtained. Staff provided information on their own knowledge of the subject and of dementia. Separate factor analyses on data from two samples of raters each explained greater than 66% of the variance in AWS-V2 scores and validated four (persistent walking, navigational deficit, eloping behavior, and shadowing) of the five factors in the original scale. Items added to create the AWS-V2 strengthened the shadowing subscale, failed to improve the routinized walking subscale, and added a factor, attention shifting, not present in the original AWS. Evidence for validity was found in significant correlations and ANOVAs relating the AWS-V2 and most of its subscales to a single-item indicator of wandering and to the MMSE. Evidence of reliability was shown by the internal consistency of the AWS-V2 (0.87, 0.88) and its subscales (range 0.66 to 0.88), by Kappa for individual items (17 of 27 greater than 0.4), and by ANOVAs comparing ratings across rater groups (nurses, nurse aides, and other staff). Analyses support the validity and reliability of the AWS-V2 overall and of its persistent walking, spatial disorientation, and eloping behavior subscales. The AWS-V2 and its subscales are an appropriate way to measure wandering as conceptualized within the Need-driven Dementia-compromised Behavior Model in studies of English-speaking subjects. Suggestions for further strengthening the scale and for extending its use to clinical applications are described.
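The internal-consistency figures quoted above are Cronbach's alpha coefficients, which can be computed directly from an item-score matrix. A minimal sketch with hypothetical ratings (the data below are invented, not AWS-V2 scores):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix:

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # sum of per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of each subject's total
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical ratings: 4 subjects x 3 items on a behavioral frequency scale.
ratings = [[3, 4, 3],
           [1, 1, 2],
           [4, 4, 4],
           [2, 3, 2]]
alpha = cronbach_alpha(ratings)
```

Values near the 0.87-0.88 reported for the full AWS-V2 indicate that the items vary together across subjects, i.e. that they plausibly measure one construct.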

Relevância:

90.00%

Publicador:

Resumo:

Most statistical methods use hypothesis testing. Analysis of variance, regression, discrete choice models, contingency tables, and other analysis methods commonly used in transportation research share hypothesis testing as the means of making inferences about the population of interest. Despite the fact that hypothesis testing has been a cornerstone of empirical research for many years, various aspects of hypothesis tests are commonly misapplied, misinterpreted, or ignored, by novices and expert researchers alike. At first glance, hypothesis testing appears straightforward: develop the null and alternative hypotheses, compute the test statistic and compare it to a standard distribution, estimate the probability of obtaining the observed result under the null hypothesis, and then make claims about the importance of the finding. This is an oversimplification of the process of hypothesis testing. Hypothesis testing as applied in empirical research is examined here. The reader is assumed to have a basic knowledge of the role of hypothesis testing in various statistical methods. Through the use of an example, the mechanics of hypothesis testing are first reviewed. Then, five precautions surrounding the use and interpretation of hypothesis tests are developed; examples of each are provided to demonstrate how errors are made, and solutions are identified so that similar errors can be avoided. Remedies are provided for common errors, and conclusions are drawn on how to use the results of this paper to improve the conduct of empirical research in transportation.
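The mechanics the paper reviews (state hypotheses, compute a test statistic, compare it to a reference distribution) can be sketched with a one-sample t-test on hypothetical spot-speed data; the example and its numbers are ours, not the paper's:

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """t = (xbar - mu0) / (s / sqrt(n)), compared against a t distribution
    with n - 1 degrees of freedom."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Hypothetical spot speeds (km/h) at a site posted at 60 km/h.
# H0: mean speed = 60;  H1: mean speed != 60.
speeds = [62.1, 65.3, 58.7, 63.4, 66.0, 61.2, 64.8, 59.9]
t_stat = one_sample_t(speeds, 60.0)   # about 2.86
```

With n - 1 = 7 degrees of freedom, the two-sided 5% critical value is roughly 2.365, so this sample would reject H0. As the paper's precautions emphasise, that rejection by itself says nothing about the practical importance of a 2-3 km/h excess, which is exactly the kind of misinterpretation the five precautions address.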

Relevância:

90.00%

Publicador:

Resumo:

In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. It is currently thought that concrete properties are strongly influenced by the type of water used and its proportion in the concrete mix, yet there is little knowledge of the effects of different, alternative water sources used in concrete mix design. Therefore, identifying the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate, and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, and investigates the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for use in concrete mixing plants because of the great benefit in terms of reduced waste disposal costs and environmental conservation. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program across three representative sites in South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were then analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998.
These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH, and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. These results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the multi-criteria decision-making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided water suitable for the manufacture of concrete from washout water. These results were compared with a series of alternative water sources. The water sources included treated effluent, sea water and dam water, and were subject to the same testing parameters as the reference set. Analysis of these results also found that, despite having higher levels of both organic and inorganic constituents, the waters complied with the parameter thresholds given in the American Standard Test Method (ASTM) C913-08. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.
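As a rough illustration of the ranking step, a first principal component can be extracted from standardised water-quality measurements; the variables and values below are hypothetical, and the SIMCA/PROMETHEE/GAIA procedures themselves are not reproduced here:

```python
import numpy as np

# Hypothetical water-quality matrix (rows = samples; columns = pH,
# alkalinity in mg/L CaCO3, total dissolved solids in %). These values
# are invented for illustration, not the study's measurements.
samples = np.array([
    [ 7.0,  60.0, 0.05],   # tap water reference
    [11.5, 480.0, 2.10],   # washout water, effective recycling
    [12.2, 900.0, 5.80],   # washout water, least effective recycling
    [ 7.8, 120.0, 0.40],   # treated effluent
])

# Standardise each variable, then extract principal components via SVD.
z = (samples - samples.mean(axis=0)) / samples.std(axis=0, ddof=1)
u, s, vt = np.linalg.svd(z, full_matrices=False)
pc1 = vt[0] if vt[0].sum() > 0 else -vt[0]   # fix the arbitrary SVD sign

# With every variable loading in the same direction, the PC1 score orders
# the samples from cleanest (lowest score) to least clean (highest).
scores = z @ pc1
order = np.argsort(scores)
```

When the contamination indicators are strongly correlated, as they are here, one component captures most of the variance and its score works as a simple cleanliness index, which is the intuition behind ranking samples in the reduced PCA space.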

Relevância:

90.00%

Publicador:

Resumo:

Knowledge of differences in the demographics of contact lens prescribing between nations, and of changes over time, can assist (a) the contact lens industry in developing and promoting various product types in different world regions, and (b) practitioners in understanding their prescribing habits in an international context. Data that we have gathered from annual contact lens fitting surveys conducted in Australia, Canada, Japan, the Netherlands, Norway, the UK and the USA between 2000 and 2008 reveal an ageing demographic, with Japan being the most youthful. The majority of fits are to females, with statistically significant differences between nations, ranging from 62 per cent of fits in Norway to 68 per cent in Japan. The small overall decline in the proportion of new fits, and the commensurate increase in refits, over the survey period may indicate a growing rate of conversion of lens wearers to more advanced lens types, such as silicone hydrogels. © 2009 British Contact Lens Association.

Relevância:

90.00%

Publicador:

Resumo:

The relationship between soil structure and the ability of soil to stabilize soil organic matter (SOM) is a key element in soil C dynamics that has either been overlooked or treated in a cursory fashion when developing SOM models. The purpose of this paper is to review current knowledge of SOM dynamics within the framework of a newly proposed soil C saturation concept. Initially, we distinguish SOM that is protected against decomposition by various mechanisms from that which is not protected from decomposition. Methods of quantification and characteristics of three SOM pools defined as protected are discussed. Soil organic matter can be (1) physically stabilized, or protected from decomposition, through microaggregation, (2) stabilized through intimate association with silt and clay particles, or (3) biochemically stabilized through the formation of recalcitrant SOM compounds. In addition to the behavior of each SOM pool, we discuss the implications of changes in land management for the processes by which SOM compounds undergo protection and release. The characteristics and responses to changes in land use or land management are described for the light fraction (LF) and particulate organic matter (POM). We defined the LF and POM not occluded within microaggregates (53-250 μm aggregates) as unprotected. Our conclusions are illustrated in a new conceptual SOM model that differs from most SOM models in that its state variables are measurable SOM pools. We suggest that physicochemical characteristics inherent to soils define the maximum protective capacity of these pools, which limits increases in SOM (i.e. C sequestration) with increased organic residue inputs.
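The closing saturation argument can be caricatured numerically with a single protected pool whose input transfer efficiency declines as the pool approaches its protective capacity; the functional form, pool structure and constants below are our illustration, not the authors' model:

```python
def som_step(c, inputs, c_max, h=0.3, k=0.02):
    """One annual step of a single protected-C pool with saturation:

    dC = h * inputs * (1 - C / c_max) - k * C

    h     : humification efficiency of residue inputs (illustrative)
    k     : first-order decomposition rate, yr^-1 (illustrative)
    c_max : maximum protective capacity set by soil physicochemistry
    """
    return c + h * inputs * (1.0 - c / c_max) - k * c

def equilibrium(inputs, c_max=50.0, years=2000):
    """Iterate from an empty pool to the steady-state stock."""
    c = 0.0
    for _ in range(years):
        c = som_step(c, inputs, c_max)
    return c

c_low = equilibrium(inputs=2.0)    # converges to 18.75 units of protected C
c_high = equilibrium(inputs=4.0)   # ~27.3: doubling inputs gains less than 2x
```

Because the transfer term carries the factor (1 - C/c_max), equilibrium C rises less than proportionally with residue inputs, which is the sense in which a finite protective capacity limits C sequestration.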