973 results for effective atomic number


Relevance: 30.00%

Abstract:

Individual variability in the acquisition, consolidation and extinction of conditioned fear potentially contributes to the development of fear pathology including posttraumatic stress disorder (PTSD). Pavlovian fear conditioning is a key tool for the study of fundamental aspects of fear learning. Here, we used selected mouse lines of High and Low Pavlovian conditioned fear created from an advanced intercross line (AIL) in order to begin to identify the cellular basis of phenotypic divergence in Pavlovian fear conditioning. We investigated whether phosphorylated MAPK (p44/42 ERK/MAPK), a protein kinase required in the amygdala for the acquisition and consolidation of Pavlovian fear memory, is differentially expressed following Pavlovian fear learning in the High and Low fear lines. We found that following Pavlovian auditory fear conditioning, High and Low line mice differ in the number of pMAPK-expressing neurons in the dorsal subnucleus of the lateral amygdala (LAd). In contrast, this difference was not detected in the ventral medial (LAvm) or ventral lateral (LAvl) amygdala subnuclei, or in control animals. We propose that this apparent increase in plasticity at a known locus of fear memory acquisition and consolidation relates to intrinsic differences between the two fear phenotypes. These data provide important insights into the micronetwork mechanisms encoding phenotypic differences in fear. Understanding the circuit-level cellular and molecular mechanisms that underlie individual variability in fear learning is critical for the development of effective treatments for fear-related illnesses such as PTSD.

Relevance: 30.00%

Abstract:

Determining what consequences are likely to serve as effective punishment for any given behaviour is a complex task. This chapter focuses specifically on illegal road user behaviours and the mechanisms used to punish and deter them. Traffic law enforcement has traditionally used the threat and/or receipt of legal sanctions and penalties to deter illegal and risky behaviours. This process represents the use of positive punishment, one of the key behaviour modification mechanisms. Behaviour modification principles describe four types of consequences: positive and negative punishment, and positive and negative reinforcement. The terms ‘positive’ and ‘negative’ are not used in an evaluative sense here. Rather, they represent the presence (positive) or absence (negative) of stimuli to promote behaviour change. Punishments aim to inhibit behaviour and reinforcements aim to encourage it. This chapter describes a variety of punishments and reinforcements that have been and could be used to modify illegal road user behaviours. In doing so, it draws on several theoretical perspectives that have defined behavioural reinforcement and punishment in different ways. Historically, the main theoretical approach used to deter risky road use has been classical deterrence theory, which has focussed on the perceived certainty, severity and swiftness of penalties. Stafford and Warr (1993) extended the traditional deterrence principles to include the positive reinforcement concept of punishment avoidance. Evidence of the association between punishment avoidance experiences and behaviour has been established for a number of risky road user behaviours, including drink driving, unlicensed driving, and speeding. We chose a novel way of assessing punishment avoidance by specifying two sub-constructs (detection evasion and punishment evasion). Another theorist, Akers (1977), described the idea of competing reinforcers, termed differential reinforcement, within social learning theory. Differential reinforcement describes a balance of reinforcements and punishments as influential on behaviour. This chapter describes a comprehensive way of conceptualising a broad range of reinforcement and punishment concepts, consistent with Akers’ differential reinforcement concept, within a behaviour modification framework that incorporates deterrence principles. The efficacy of three theoretical perspectives in explaining self-reported speeding among a sample of 833 Australian car drivers was examined. Results demonstrated that a broad range of variables predicted speeding, including personal experiences of evading detection and punishment for speeding, intrinsic sensations, practical benefits expected from speeding, and an absence of punishing effects from being caught. Not surprisingly, being younger was also significantly related to more frequent speeding, although in a regression analysis, gender did not retain a significant influence once all punishment and reinforcement variables were entered. The implications for speed management, as well as road user behaviour modification more generally, are discussed in light of these findings. Overall, the findings reported in this chapter suggest that a more comprehensive approach is required to manage the behaviour of road users, one which does not rely solely on traditional legal penalties and sanctions.
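The regression analysis described in this abstract can be sketched as follows. This is a hedged illustration only: the variable names, coding and synthetic data are assumptions, not the chapter's actual survey items or results.

```python
# Hypothetical sketch of a regression predicting self-reported speeding from
# punishment/reinforcement variables, age and gender. Data are synthetic and
# the variable names are assumptions, not the chapter's survey items.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 833  # sample size reported in the chapter

df = pd.DataFrame({
    "detection_evasion": rng.normal(size=n),    # punishment-avoidance sub-construct
    "punishment_evasion": rng.normal(size=n),   # punishment-avoidance sub-construct
    "intrinsic_sensation": rng.normal(size=n),  # positive reinforcement
    "practical_benefit": rng.normal(size=n),    # positive reinforcement
    "punishing_effect_when_caught": rng.normal(size=n),
    "age": rng.integers(17, 70, size=n),
    "male": rng.integers(0, 2, size=n),
})
# Synthetic outcome loosely mirroring the reported direction of effects.
df["speeding_frequency"] = (
    0.4 * df["detection_evasion"] + 0.3 * df["intrinsic_sensation"]
    + 0.2 * df["practical_benefit"] - 0.3 * df["punishing_effect_when_caught"]
    - 0.02 * df["age"] + rng.normal(scale=1.0, size=n)
)

X = sm.add_constant(df.drop(columns="speeding_frequency"))
print(sm.OLS(df["speeding_frequency"], X).fit().summary())
```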

Relevance: 30.00%

Abstract:

The Yd2 gene for “resistance” to barley yellow dwarf virus (BYDV) has been widely used in barley (Hordeum vulgare). We have tested Australian isolates of BYDV of varying severity against barley genotypes with and without the Yd2 gene and report here a positive relationship between symptoms and virus levels determined by ELISA. Cultivar Shannon is the result of backcrossing the resistant line CI 3208 to cultivar Proctor, a susceptible line. It appears to be intermediate in reaction to BYDV between Proctor and CI 3208, although it carries the major gene, Yd2. Unlike in the whole-plant studies, no significant differences were observed with regard to the ability of protoplasts derived from these various genotypes to support BYDV replication. It is therefore demonstrated for the first time that the Yd2 gene is not among the small number of resistance genes which are effective against virus replication in isolated protoplasts.

Relevance: 30.00%

Abstract:

In the expanding literature on creative practice research, art and design are often described as a unified field. They are bracketed together (art-and-design), referred to as interchangeable terms (art/design), and nested together, as if the practices of one domain encompass the other. However, it is possible to establish substantial differences in research approaches. In this chapter we argue that core distinctions arise out of the goals of the research, intentions invested in the resulting “artefacts” (creative works, products, events), and the knowledge claims made for the research outcomes. Moreover, these fundamental differences give rise to a number of contingent attributes of the research, such as the forming contexts, methodological approaches, and ways of evidencing and reporting new knowledge. We do not strictly ascribe these differences to disciplinary contexts. Rather, we use the terms effective practice research and evocative practice research to describe the spirit of the two distinctive research paradigms we identify. In short, effective practice research (often pursued in design fields) seeks a solution (or resolution) to a problem identified with a particular community, and it produces an artefact that addresses this problem by effecting change (making a situation, product or process more efficient or effective in some way). On the other hand, evocative practice research (often pursued in creative arts fields) is driven by individual preoccupations, cultural concerns or human experience more broadly. It produces artefacts that evoke affect and resonance, and are poetically irreducible in meaning. We cite recent examples of creative research projects that illustrate the distinctions we identify. We then go on to describe projects that integrate these modes of research. In this way, we map out a creative research spectrum, with distinct poles as well as multiple hybrid possibilities. The hybrid projects we reference are not presented as evidence of an undifferentiated field. Instead, we argue that they integrate research modes in deliberate, purposeful and distinctive ways: employing effective practice research methods in the production of evocative artefacts or harnessing evocative (as well as effective) research paradigms to effect change.

Relevance: 30.00%

Abstract:

Background: Lumbar epidural steroid injections (ESIs) have previously been shown to provide some degree of pain relief in sciatica. The number needed to treat (NNT) to achieve 50% pain relief has been estimated at 7 from the results of randomised controlled trials. Pain relief is temporary. ESIs remain one of the most commonly provided procedures in the UK. It is unknown whether this pain relief represents good value for money. Methods: 228 patients were randomised into a multi-centre, double-blind randomised controlled trial. Subjects received up to 3 ESIs or intra-spinous saline, depending on response to, and fall-off after, the first injection. All other treatments were permitted. All received a review of analgesia, education and physical therapy. Quality of life was assessed using the SF-36 at six time points and compared using independent-sample t-tests. Follow-up was up to 1 year. Missing data were imputed using last observation carried forward (LOCF). QALYs (quality-adjusted life years) were derived from preference-based health values (summary health utility score). The SF-6D health state classification was derived from SF-36 raw score data. Standard gambles (SG) were calculated using Model 10. SG scores were calculated on trial results; LOCF was not used for this. Instead, average SG values were derived for a subset of patients with observations for all visits up to week 12. Incremental QALYs were derived as the difference in area between the SG curves for the active and placebo groups. Results: SF-36 domains showed a significant improvement in pain at week 3, but this was not sustained (mean 54 active vs 61 placebo, P<0.05). Other domains did not show any significant gains compared with placebo. For the derivation of SG, the number in the sample in each period differed. At week 12, average SG scores for active and placebo converged; in other words, the health gain for the active group as measured by SG was achieved by the placebo group by week 12. The incremental QALY gained for a patient under the trial protocol compared with the standard care package was 0.0059350, equivalent to an additional 2.2 days of full health. The cost per QALY gained to the provider from a patient management strategy administering one epidural, as suggested by the results, was £25,745.68. This result was derived assuming that the QALY gain calculated for patients under the trial protocol would approximate that under a patient management strategy based on the trial results (one ESI). This is above the threshold suggested by some for a cost-effective treatment. Conclusions: The transient benefit in pain relief afforded by ESIs does not appear to be cost-effective. Further work is needed to develop more cost-effective conservative treatments for sciatica.
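The QALY arithmetic reported above can be illustrated with a short sketch. The weekly SG utilities below are placeholders used only to show the area-between-curves calculation; the final conversions reuse the figures reported in the abstract (0.0059350 QALYs and £25,745.68 per QALY).

```python
# Sketch of the incremental-QALY calculation described above: the area between
# the active and placebo SG (utility) curves over follow-up, expressed in years.
# The utility values here are illustrative assumptions, not trial data.
import numpy as np

weeks = np.array([0.0, 3.0, 6.0, 12.0])          # assessment points (illustrative)
sg_active = np.array([0.55, 0.62, 0.60, 0.58])   # hypothetical SG utilities
sg_placebo = np.array([0.55, 0.58, 0.59, 0.58])  # hypothetical SG utilities

years = weeks / 52.0
diff = sg_active - sg_placebo
# Trapezoidal area between the two curves = incremental QALYs.
incremental_qaly = float(np.sum(0.5 * (diff[:-1] + diff[1:]) * np.diff(years)))
print(f"illustrative incremental QALYs: {incremental_qaly:.5f}")

# Reproducing the conversions reported in the abstract:
reported_qaly = 0.0059350
print(f"days of full health: {reported_qaly * 365:.1f}")         # ~2.2 days
implied_cost = 25745.68 * reported_qaly                          # implied incremental cost
print(f"implied incremental cost per patient: £{implied_cost:.2f}")
print(f"cost per QALY: £{implied_cost / reported_qaly:,.2f}")    # £25,745.68
```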

Relevance: 30.00%

Abstract:

The Queensland Centre for Social Science Innovation was formed in 2012 to develop collaborations among the Queensland Government and five Queensland universities: The University of Queensland, Griffith University, Queensland University of Technology, James Cook University and Central Queensland University. Three priorities for initial projects were established by the Queensland Government, with responses from the participating universities. This project addressed the identified priority area: factors affecting educational achievement and investigation of the link between school design, refurbishment and educational outcomes. The proposal for this project indicated that a Review of research literature would be undertaken that linked school and classroom design with educational outcomes for learners in the 21st century. Further, research would be examined for the impact of technology on staff and students, as well as for learning spaces that addressed the diversity of student learners. Specific investigation of research on effective design to enhance learning outcomes for Aboriginal and Torres Strait Islander students was also to be undertaken. The project therefore consists of a Review of research literature to provide an evidence base on the impact of school and classroom design on educational outcomes. The original proposal indicated that the indicators of successful school and classroom design would be student learning outcomes on a range of variables, with the inputs being the specific architectural design elements. The review was undertaken during the period July 2012 to June 2013. A search was undertaken of journals, databases and web sources to identify relevant material. These were examined for evidence-based statements on the design of learning spaces to enhance learning. The Review is comprehensive and representative of the issues raised in research, and of the conceptualisations and debates informing modern educational design. Two key findings emerged that are central to reading this Review. The first key finding is that the predominant focus of modern design of learning spaces is on process and the engagement of stakeholders. Schools are social institutions, and the development of a school as a learning space to suit 21st century learning needs necessarily involves the staff, students and other members of the community as key participants. The concept of social aspects of design is threaded throughout the Review. The second key finding is that little research has explicitly examined the relationship between the design of learning spaces and educational outcomes. While some research does exist, the most explicitly focused research uses narrow test-based achievement as the learning outcomes. These are not sympathetic to the overall framings of the research on 21st century learning, future schooling and the needs of the new generation of learners and society.

Relevance: 30.00%

Abstract:

Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study applies several clustering techniques (K-means, PAM, CLARA and SOM) in order to identify the optimum technique for PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performance of the four clustering techniques was compared using the Dunn index and silhouette width validation values, and the K-means technique was found to have the highest performance, with five clusters being the optimum; five clusters were therefore identified within the data using K-means. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
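A minimal sketch of the cluster-number selection step described above, assuming standardised spectra and using only the silhouette width (the Dunn index is not implemented in scikit-learn); the data are synthetic stand-ins for the Brisbane PNSD measurements.

```python
# Sketch of K-means cluster selection by silhouette width, as applied above to
# particle number size distribution (PNSD) spectra. Data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Pretend each row is one averaged PNSD spectrum over 50 size bins.
spectra = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 50))
                     for m in (0.0, 1.0, 2.0, 3.0, 4.0)])
X = StandardScaler().fit_transform(spectra)

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(scores)
print(f"optimum number of clusters by silhouette width: {best_k}")
```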

Relevance: 30.00%

Abstract:

Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
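The upper-trigger harvest strategy can be illustrated with a minimal stochastic Lotka-Volterra sketch. All parameter values, the noise model and the trigger threshold below are assumptions for illustration; they are not the authors' calibrated model or cost structure.

```python
# Illustrative stochastic Lotka-Volterra predator-prey model with an
# upper-trigger harvest rule: predators are removed only when their abundance
# exceeds a threshold. Parameters and noise are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(2)

def simulate(years=50, trigger=100.0, removal=20.0,
             r=0.5, K=1000.0, a=0.004, b=0.001, m=0.3):
    prey, pred = 500.0, 40.0
    min_prey = prey
    for _ in range(years):
        d_prey = r * prey * (1 - prey / K) - a * prey * pred
        d_pred = b * prey * pred - m * pred
        # Annual update with crude multiplicative environmental noise.
        prey = max(prey + d_prey + prey * rng.normal(0, 0.05), 0.0)
        pred = max(pred + d_pred + pred * rng.normal(0, 0.05), 0.0)
        if pred > trigger:                 # upper-trigger harvest
            pred = max(pred - removal, 0.0)
        min_prey = min(min_prey, prey)
    return min_prey

# Expected minimum prey population size over replicate simulations.
emp = np.mean([simulate() for _ in range(200)])
print(f"expected minimum prey population size: {emp:.1f}")
```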

Relevance: 30.00%

Abstract:

This paper critically evaluates the empirical evidence from 36 studies regarding the comparative cost-effectiveness of group and individual cognitive behaviour therapy (CBT) as a whole, and also for specific mental disorders (e.g. depression, anxiety, substance abuse) or populations (e.g. children). Methods of calculating costs, as well as methods of comparing treatment outcomes, were appraised and criticized. Overall, the evidence that group CBT is more cost-effective than individual CBT is mixed, with group CBT appearing to be more cost-effective in treating depression and children, but less cost-effective in treating drug and alcohol dependence, anxiety and social phobias. In addition, methodological weaknesses in the studies assessed are noted. There is a need to improve cost calculation methodology, as well as for more rigorous and more numerous empirical cost-effectiveness studies, before a firm conclusion can be reached that group CBT is more cost-effective than individual CBT.

Relevance: 30.00%

Abstract:

Traditional text classification technology based on machine learning and data mining techniques has made significant progress. However, drawing an exact decision boundary between relevant and irrelevant objects in binary classification remains a major problem, owing to the uncertainty produced by traditional algorithms. The proposed model CTTC (Centroid Training for Text Classification) aims to build an uncertainty boundary to absorb as many indeterminate objects as possible, so as to elevate the certainty of the relevant and irrelevant groups through the centroid clustering and training process. The clustering starts from the two training subsets labelled as relevant or irrelevant, respectively, to create two principal centroid vectors, by which all the training samples are further separated into three groups: POS, NEG and BND, with all the indeterminate objects absorbed into the uncertain decision boundary BND. Two pairs of centroid vectors are then trained and optimized through a subsequent iterative multi-learning process, and together they are used to predict the polarities of incoming objects thereafter. For the assessment of the proposed model, F1 and Accuracy have been chosen as the key evaluation measures. We stress the F1 measure because it reflects the overall performance improvement of the final classifier better than Accuracy. A large number of experiments have been completed using the proposed model on the Reuters Corpus Volume 1 (RCV1), which is an important standard dataset in the field. The experimental results show that the proposed model has significantly improved binary text classification performance in both F1 and Accuracy compared with three other influential baseline models.
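A toy sketch of the centroid-plus-uncertainty-boundary idea is given below. The margin rule, the similarity measure and the tiny corpus are assumptions for illustration; the paper's full CTTC procedure, including the iterative multi-learning optimisation of two pairs of centroid vectors, is not reproduced.

```python
# Toy illustration of the centroid idea behind CTTC: build centroid vectors
# from the relevant and irrelevant training subsets, and route ambiguous
# documents into an uncertainty boundary (BND) instead of forcing POS/NEG.
# The margin threshold and the corpus are assumptions, not the paper's method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

relevant = ["stock markets rally on strong earnings", "shares surge as profits rise"]
irrelevant = ["local team wins football final", "striker scores twice in derby"]
incoming = ["profits and shares climb again",
            "football shares rally after the final"]  # mixes both vocabularies

vec = TfidfVectorizer().fit(relevant + irrelevant)
pos_centroid = vec.transform(relevant).toarray().mean(axis=0, keepdims=True)
neg_centroid = vec.transform(irrelevant).toarray().mean(axis=0, keepdims=True)

margin = 0.10  # assumed width of the uncertainty boundary
for doc in incoming:
    x = vec.transform([doc]).toarray()
    s_pos = cosine_similarity(x, pos_centroid)[0, 0]
    s_neg = cosine_similarity(x, neg_centroid)[0, 0]
    label = "BND" if abs(s_pos - s_neg) < margin else ("POS" if s_pos > s_neg else "NEG")
    print(f"{doc!r}: s_pos={s_pos:.2f}, s_neg={s_neg:.2f} -> {label}")
```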

Relevance: 30.00%

Abstract:

Recently, studies have identified high zinc levels in various environmental resources, and excessive intake of zinc has long been considered to be harmful to human health. The aim of this research was to investigate the effectiveness of tricalcium aluminate (C₃A) as an agent for removing zinc from aqueous solution. Inductively coupled plasma atomic emission spectrometry (ICP-AES), X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize the removal behavior. The effects of various factors, such as pH, temperature and contact time, were investigated. The adsorption capacity of C₃A for Zn²⁺ was computed to be up to 13.73 mmol g⁻¹, and the highest zinc removal capacity was obtained when the initial pH of the Zn(NO₃)₂ solution was between 6.0 and 7.0, with temperature around 308 K. The XRD analysis showed that the resultant products were ZnAl-LDHs. Combined with analysis of the solution composition, this proved that both precipitation and cation exchange occur in the removal process. From the experimental results, it is clear that C₃A could potentially be used as a cost-effective material for the removal of zinc from aqueous environments.
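The quoted capacity follows from the standard batch-adsorption mass balance, q_e = (C0 - Ce)·V/m. The sketch below uses illustrative concentrations, volume and sorbent mass chosen only so that the example reproduces the reported 13.73 mmol g⁻¹; they are not the study's experimental conditions.

```python
# Standard batch-adsorption mass balance: q_e = (C0 - Ce) * V / m.
# The example values below are illustrative assumptions chosen only to
# reproduce the reported capacity of 13.73 mmol per gram; they are not the
# study's experimental conditions.
def adsorption_capacity(c0_mmol_per_l, ce_mmol_per_l, volume_l, mass_g):
    """Zinc uptake per gram of sorbent, in mmol/g."""
    return (c0_mmol_per_l - ce_mmol_per_l) * volume_l / mass_g

# e.g. 50 mL of a 20 mmol/L Zn solution reduced to 6.27 mmol/L by 0.05 g of C3A:
print(f"{adsorption_capacity(20.0, 6.27, 0.050, 0.050):.2f} mmol/g")  # 13.73
```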

Relevance: 30.00%

Abstract:

Background: Miscommunication in the healthcare sector can be life-threatening. The rising number of migrant patients and foreign-trained staff means that communication errors between a healthcare practitioner and patient when one or both are speaking a second language are increasingly likely. However, there is limited research that addresses this issue systematically. This protocol outlines a hospital-based study examining interactions between healthcare practitioners and their patients who either share or do not share a first language. Of particular interest are the nature and efficacy of communication in language-discordant conversations, and the degree to which risk is communicated. Our aim is to understand language barriers and miscommunication that may occur in healthcare settings between patients and healthcare practitioners, especially where at least one of the speakers is using a second (weaker) language. Methods/Design: Eighty individual interactions between patients and practitioners who speak either English or Chinese (Mandarin or Cantonese) as their first language will be video recorded in a range of in- and out-patient departments at three hospitals in the Metro South area of Brisbane, Australia. All participants will complete a language background questionnaire. Patients will also complete a short survey rating the effectiveness of the interaction. Recordings will be transcribed and submitted to both quantitative and qualitative analyses to determine elements of the language used that might be particularly problematic and the extent to which language concordance and discordance impact on the quality of the patient-practitioner consultation. Discussion: Understanding the role that language plays in creating barriers to healthcare is critical for healthcare systems that are experiencing an increasing range of culturally and linguistically diverse populations both amongst patients and practitioners. The data resulting from this study will inform policy and practical solutions for communication training, provide an agenda for future research, and extend theory in health communication.

Relevance: 30.00%

Abstract:

Young Australian drivers aged 17–25 years are overwhelmingly represented in road fatalities where speed is a factor. In the combined LGAs of Armidale Dumaresq, Guyra, Uralla and Walcha over the 5 years 1999–2003 inclusive, 43% of speeding-related casualty crashes involved a young driver aged less than 25 years. This is despite the fact that the 17–25 age group accounts for only 25% of the driving population in this area. Young male drivers account for the majority of these crashes and also tend to have a higher number of driving offences and to accrue more penalties for road traffic offences, especially speeding. By analysing questionnaire data from male and female participants, this research project evaluated road safety advertisements to determine which are most effective for young drivers, which features of these advertisements are effective, how males differ from females in their receptiveness to and preferences for road safety advertisements, and specifically how to target young people, especially young men, when conveying road safety messages. Finally, this research project identified factors that are important in the production of road safety media advertisements and made recommendations on how best to convey effective road safety messages to young Australian drivers in rural areas.

Relevance: 30.00%

Abstract:

Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops obtained by stochastic simulation techniques deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to an unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide, both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturations at short times but deteriorates progressively at longer times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory, thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce expensive computation times involved in simulation. More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations, eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.

Relevance: 30.00%

Abstract:

This study compares estimates of the census size of the spawning population with genetic estimates of effective current and long-term population size for an abundant and commercially important marine invertebrate, the brown tiger prawn (Penaeus esculentus). Our aim was to focus on the relationship between genetic effective and census size that may provide a source of information for viability analyses of naturally occurring populations. Samples were taken in 2001, 2002 and 2003 from a population on the east coast of Australia and temporal allelic variation was measured at eight polymorphic microsatellite loci. Moments-based and maximum-likelihood estimates of current genetic effective population size ranged from 797 to 1304. The mean long-term genetic effective population size was 9968. Although small for a large population, the effective population size estimates were above the threshold where genetic diversity is lost at neutral alleles through drift or inbreeding. Simulation studies correctly predicted that under these experimental conditions the genetic estimates would have non-infinite upper confidence limits and revealed they might be overestimates of the true size. We also show that estimates of mortality and variance in family size may be derived from data on average fecundity, current genetic effective and census spawning population size, assuming effective population size is equivalent to the number of breeders. This work confirms that it is feasible to obtain accurate estimates of current genetic effective population size for abundant Type III species using existing genetic marker technology.
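As a minimal sketch of a moments-based temporal estimator of the kind referred to above (a standardised variance of allele-frequency change, F_c, corrected for sampling and scaled by the number of generations between samples): the allele frequencies, sample sizes and generation gap below are illustrative assumptions, and the study's exact estimator and corrections may differ.

```python
# Sketch of a moments-based temporal (Fc) estimator of effective population
# size. Allele frequencies, sample sizes and the generation gap are
# illustrative assumptions, not the brown tiger prawn data.
import numpy as np

def fc(x, y):
    """Standardised variance of allele-frequency change at one locus.
    x, y: allele-frequency arrays (each summing to 1) at the two sample times."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    z = (x + y) / 2.0
    return float(np.mean((x - y) ** 2 / (z - x * y)))

def ne_temporal(freqs_t0, freqs_t1, s0, s1, t):
    """Moments-based Ne from per-locus allele frequencies at two sample times.
    s0, s1: numbers of individuals sampled; t: generations between samples."""
    mean_fc = np.mean([fc(x, y) for x, y in zip(freqs_t0, freqs_t1)])
    denom = 2.0 * (mean_fc - 1.0 / (2 * s0) - 1.0 / (2 * s1))
    return t / denom if denom > 0 else float("inf")  # non-positive => unbounded estimate

# Two illustrative microsatellite loci sampled one generation apart.
t0 = [[0.42, 0.33, 0.25], [0.60, 0.40]]
t1 = [[0.45, 0.30, 0.25], [0.55, 0.45]]
print(f"temporal Ne estimate: {ne_temporal(t0, t1, s0=300, s1=300, t=1):.0f}")
```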