965 results for "Expected satiety"
Abstract:
Germ cell mutagens are currently classified into three categories in the German List of MAK- and BAT-Values. These categories have been revised and extended in analogy to the new categories for carcinogenic chemicals. Germ cell mutagens produce heritable gene mutations, and heritable structural and numerical chromosome aberrations in germ cells. The original categories 1 and 2 for germ cell mutagens remained unchanged. Two new categories, 3 A and 3 B, are proposed for chemicals which are suspected to be germ cell mutagens. A new category 5 is proposed for germ cell mutagens with low potency which contribute negligibly to human genetic risk provided the MAK value is observed. The following categories are presented for further discussion:
1. Germ cell mutagens which have been shown to increase the mutant frequency among the progeny of exposed humans.
2. Germ cell mutagens which have been shown to increase the mutant frequency among the progeny of exposed animals.
3 A. Substances which have been shown to induce genetic damage in germ cells of humans or animals, or which are mutagenic in somatic cells and have been shown to reach the germ cells in their active forms.
3 B. Substances which are suspected of being germ cell mutagens because of their genotoxic effects in mammalian somatic cells in vivo or, in exceptional cases in the absence of in vivo data, if they are clearly mutagenic in vitro and structurally related to in vivo mutagens.
4. Not applicable. (Category 4 was introduced for carcinogenic substances with nongenotoxic modes of action. By definition, germ cell mutagens are genotoxic; therefore, a Category 4 for germ cell mutagens cannot exist.)
5. Germ cell mutagens, the potency of which is considered to be so low that, provided the MAK value is observed, their contribution to genetic risk is expected not to be significant.
Abstract:
Lead compounds are known genotoxicants, principally affecting the integrity of chromosomes. Lead chloride and lead acetate induced concentration-dependent increases in micronucleus frequency in V79 cells, starting at 1.1 μM lead chloride and 0.05 μM lead acetate. The difference between the lead salts, which was expected based on their relative abilities to form complex acetato-cations, was confirmed in an independent experiment. CREST analyses of the micronuclei verified that lead chloride and acetate were predominantly aneugenic (CREST-positive response), which was consistent with the morphology of the micronuclei (larger micronuclei, compared with micronuclei induced by a clastogenic mechanism). The effects of high concentrations of lead salts on the microtubule network of V79 cells were also examined using immunofluorescence staining. The dose effects of these responses were consistent with the cytotoxicity of lead(II), as visualized in the neutral-red uptake assay. In a cell-free system, 20-60 μM lead salts inhibited tubulin assembly dose-dependently. The no-observed-effect concentration of lead(II) in this assay was 10 μM. This inhibitory effect was interpreted as a shift of the assembly/disassembly steady-state toward disassembly, e.g., by reducing the concentration of assembly-competent tubulin dimers. The effects of lead salts on microtubule-associated motor-protein functions were studied using a kinesin-gliding assay that mimics intracellular transport processes in vitro by quantifying the movement of paclitaxel-stabilized microtubules across a kinesin-coated glass surface. There was a dose-dependent effect of lead nitrate on microtubule motility. Lead nitrate affected the gliding velocities of microtubules starting at concentrations above 10 μM and reached half-maximal inhibition of motility at about 50 μM. The processes reported here point to relevant interactions of lead with tubulin and kinesin at low dose levels.
Abstract:
The Driver Behaviour Questionnaire (DBQ) continues to be the most widely utilised self-report scale globally to assess crash risk and aberrant driving behaviours among motorists. However, the scale also attracts criticism regarding its perceived limited ability to accurately identify those most at risk of crash involvement. This study reports on the utilisation of the DBQ to examine the self-reported driving behaviours (and crash outcomes) of drivers in three separate Australian fleet samples (N = 443, N = 3414, & N = 4792), and whether combining the samples increases the tool’s predictive ability. Either on-line or paper versions of the questionnaire were completed by fleet employees in three organisations. Factor analytic techniques identified either three- or four-factor solutions (in each of the separate studies), and the combined sample produced the expected factors of: (a) errors, (b) highway-code violations and (c) aggressive driving violations. Highway-code violations (and mean scores) were comparable across the studies. However, across the three samples, multivariate analyses revealed that exposure to the road, rather than DBQ constructs, was the best predictor of crash involvement at work. Furthermore, combining the scores to produce a sample of 8649 drivers did not improve the predictive ability of the tool for identifying crashes (e.g., 0.4% correctly identified) or for demerit point loss (0.3%). The paper outlines the major findings of this comparative sample study with regard to utilising self-report measurement tools to identify “at risk” drivers, as well as the application of such data to future research endeavours.
Abstract:
In this age of ever-increasing information technology (IT) driven environments, governments and public sector organisations (PSOs) are expected to demonstrate the business value of their investment in IT and take advantage of the opportunities offered by technological advancements. Strategic alignment (SA) emerged as a mechanism to bridge the gap between business and IT missions, objectives, and plans in order to ensure value optimisation from investment in IT and enhance organisational performance. However, achieving and sustaining SA remains a challenge, requiring even more agility nowadays to keep up with turbulent organisational environments. The shared domain knowledge (SDK) between the IT department and other diverse organisational groups is considered one of the factors influencing the successful implementation of SA. However, SDK in PSOs has received relatively little empirical attention. This paper presents findings from a study which investigated the influence of SDK on SA within organisations in the Australian public sector. The developed research model examined the relationship of SDK between business and IT domains with SA using a survey of 56 public sector professionals and executives. A key research contribution is the empirical demonstration that increasing levels of SDK between IT and business groups lead to increased SA.
Abstract:
Pain is common in individuals living in residential aged care facilities (RACFs), and a number of obstacles have been identified as recurring barriers to adequate pain management. To address this, the Australian Pain Society developed 27 recommendations for comprehensive good practice in the identification, assessment, and management of pain. This study reviewed preexisting pain management practice at five Australian RACFs and identified changes needed to implement the recommendations and then implemented an evidence-based program that aimed to facilitate better pain management. The program involved staff training and education and revised in-house pain-management procedures. Reviews occurred before and after the program and included the assessment of 282 residents for analgesic use and pain status. Analgesic use improved after the program (P<.001), with a decrease in residents receiving no analgesics (from 15% to 6%) and an increase in residents receiving around-the-clock plus as-needed analgesics (from 24% to 43%). There were improvements in pain relief for residents with scores indicative of pain, with Abbey pain scale (P=.005), Pain Assessment in Advanced Dementia Scale (P=.001), and Non-communicative Patient's Pain Assessment Instrument scale (P<.001) scores all improving. Although physical function declined as expected, Medical Outcomes Study 36-item Short-Form Survey bodily pain scores also showed improvement (P=.001). Better evidence-based practice and outcomes in RACFs can be achieved with appropriate training and education. Investing resources in the aged care workforce using this program improved analgesic practice and pain relief in participating sites. Further attention to the continued targeted pain management training of aged care staff is likely to improve pain-focused care for residents.
Abstract:
Health information technologies (HIT) have changed healthcare delivery. Yet, there are few opportunities for student nurses in their undergraduate studies to develop nursing informatics competencies. More importantly, many countries around the world have not fully specified nursing informatics competencies that will be expected of student nurses prior to their graduation from undergraduate nursing programs. In this paper the authors compare and contrast the undergraduate nursing informatics competencies that were developed by two countries: Australia and Canada. They also identify some of the challenges and future research directions in the area.
Abstract:
Background: Serosorting, the practice of seeking to engage in unprotected anal intercourse with partners of the same HIV status as oneself, has been increasing among men who have sex with men. However, the effectiveness of serosorting as a strategy to reduce HIV risk is unclear, especially since it depends on the frequency of HIV testing. Methods: We estimated the relative risk of HIV acquisition associated with serosorting compared with not serosorting by using a mathematical model, informed by detailed behavioral data from a highly studied cohort of gay men. Results: We demonstrate that serosorting is unlikely to be highly beneficial in many populations of men who have sex with men, especially where the prevalence of undiagnosed HIV infections is relatively high. We find that serosorting is only beneficial in reducing the relative risk of HIV transmission if the prevalence of undiagnosed HIV infections is less than ∼20% and ∼40%, in populations of high (70%) and low (20%) treatment rates, respectively, even though treatment reduces the absolute risk of HIV transmission. Serosorting can be expected to lead to increased risk of HIV acquisition in many settings. In settings with low HIV testing rates serosorting can more than double the risk of HIV acquisition. Conclusions: Therefore caution should be taken before endorsing the practice of serosorting. It is very important to continue promotion of frequent HIV testing and condom use, particularly among people at high risk.
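The trade-off the model captures can be illustrated with a small back-of-the-envelope sketch: serosorting trades condom protection for a bet that a partner who reports being HIV-negative is truly uninfected, and that bet fails in proportion to the undiagnosed prevalence (undiagnosed men are also untreated, hence more infectious). All parameter values below are illustrative assumptions, not the study's calibrated inputs.

```python
# Hedged sketch of the serosorting trade-off. A man who serosorts has
# condomless sex with partners who report being HIV-negative, a fraction of
# whom are undiagnosed-positive (untreated, fully infectious). A man who does
# not serosort uses condoms with partners of unknown status, some of whom are
# on treatment (reduced infectiousness). All parameters are hypothetical.

def risk_serosort(undiag_prev, p_act_untreated=0.01):
    """Per-act acquisition risk: condomless, only undiagnosed partners infectious."""
    return undiag_prev * p_act_untreated

def risk_no_serosort(prev, treated_frac, p_act_untreated=0.01,
                     treat_eff=0.9, condom_eff=0.8):
    """Per-act acquisition risk: condoms used, partners of any HIV status."""
    p_act = p_act_untreated * (1 - treated_frac * treat_eff)
    return prev * p_act * (1 - condom_eff)

prev, treated = 0.10, 0.7  # assumed overall prevalence and treatment coverage
for undiag in (0.005, 0.05):
    rr = risk_serosort(undiag) / risk_no_serosort(prev, treated)
    print(f"undiagnosed prevalence {undiag:.1%}: relative risk {rr:.2f}")
```

With these (hypothetical) numbers the relative risk crosses 1 as undiagnosed prevalence rises, mirroring the abstract's finding that serosorting is only protective below a threshold level of undiagnosed infection.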
Abstract:
Background: As financial constraints can be a barrier to accessing HIV antiretroviral therapy (ART), we argue for the removal of copayment requirements from HIV medications in South Australia. Methods: Using a simple mathematical model informed by available behavioural and biological data and reflecting the HIV epidemiology in South Australia, we calculated the expected number of new HIV transmissions caused by persons who are not currently on ART compared with transmissions for people on ART. The extra financial investment required to cover the copayments to prevent an HIV infection was compared with the treatment costs saved due to averting HIV infections. Results: It was estimated that one HIV infection is prevented per year for every 31.4 persons (median; interquartile range (IQR) 24.0–42.7) who receive treatment. By considering the incremental change in costs and outcomes of a change in program from the current status quo, it would cost the health sector $17 860 per infection averted (median; IQR $13 651–24 287) if ART is provided as a three-dose, three-drug combination without requirements for user-pay copayments. Conclusions: The costs of removing copayment fees for ART are less than the costs of treating the extra HIV infections that would result under current conditions. Removing the copayment requirement for HIV medication would be cost-effective from a governmental perspective.
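The incremental cost-effectiveness arithmetic behind results like these can be sketched in a few lines. The 31.4 persons-per-infection-averted figure is taken from the abstract; the per-person annual subsidy is back-calculated from the reported $17 860, and the lifetime treatment cost is a hypothetical placeholder, not a value from the paper.

```python
# Hedged sketch of the copayment cost-effectiveness arithmetic described above.
# PERSONS_TREATED_PER_INFECTION_AVERTED comes from the abstract; the other two
# figures are illustrative assumptions for the sake of the worked example.

PERSONS_TREATED_PER_INFECTION_AVERTED = 31.4   # median, from the abstract
ANNUAL_COPAY_SUBSIDY_PER_PERSON = 568.80       # assumed: ~$17 860 / 31.4
LIFETIME_TREATMENT_COST = 250_000.0            # hypothetical cost of one extra infection

def cost_per_infection_averted(persons, subsidy):
    """Incremental annual cost to the health sector of averting one infection."""
    return persons * subsidy

cost = cost_per_infection_averted(
    PERSONS_TREATED_PER_INFECTION_AVERTED, ANNUAL_COPAY_SUBSIDY_PER_PERSON
)
print(f"cost per infection averted: ${cost:,.0f}")
print("cost-saving" if cost < LIFETIME_TREATMENT_COST else "not cost-saving")
```

The comparison in the last line mirrors the abstract's conclusion: subsidising copayments is worthwhile whenever the cost per infection averted falls below the cost of treating one additional infection.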
Abstract:
Background: Discussion is currently taking place among international HIV/AIDS groups around increasing HIV testing and initiating earlier use of antiretroviral therapy (ART) among people diagnosed with HIV as a method to reduce the spread of HIV. In this study, we explore the expected epidemiological impact of this strategy in a small population in which HIV transmission is predominantly confined to men who have sex with men (MSM). Methods: A deterministic mathematical transmission model was constructed to investigate the impacts of strategies that increase testing and treatment rates, and their likely potential to mitigate HIV epidemics among MSM. Our novel model distinguishes men in the population who are more easily accessible to prevention campaigns through engagement with the gay community from men who are not. This model is applied to the population of MSM in South Australia. Results: Our model-based findings suggest that increasing testing rates alone will have minimal impact on reducing the expected number of infections compared to current conditions. However, in combination with increases in treatment coverage, this strategy could lead to a 59–68% reduction in the number of HIV infections over the next 5 years. Targeting men who are socially engaged with the gay community would result in the majority of potential reductions in incidence, with only minor improvements possible by reaching all other MSM. Conclusions: Investing in strategies that will achieve higher coverage and earlier initiation of treatment to reduce infectiousness of HIV-infected individuals could be an effective strategy for reducing incidence in a population of MSM.
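A deterministic compartmental model of this kind can be sketched very compactly. The version below is a minimal illustration only (susceptible, undiagnosed, and diagnosed-on-treatment fractions, with treatment reducing infectiousness); all parameter values are assumptions, not the study's calibrated inputs, and the real model additionally stratifies men by engagement with the gay community.

```python
# Minimal deterministic sketch of a test-and-treat transmission model.
# Compartments: S (susceptible), U (undiagnosed infected), T (diagnosed, on ART).
# Treatment reduces infectiousness by the factor eps. All values hypothetical.

def simulate(years=5, dt=0.01, test_rate=0.5, treat_cover=0.5):
    S, U, T = 0.90, 0.06, 0.04     # initial population fractions
    beta, eps = 0.5, 0.2           # transmission rate; relative infectiousness on ART
    new_infections = 0.0
    for _ in range(int(years / dt)):
        force = beta * (U + eps * T)             # force of infection
        inc = force * S * dt                     # new infections this step
        diag = test_rate * treat_cover * U * dt  # diagnosed and started on ART
        S -= inc
        U += inc - diag
        T += diag
        new_infections += inc
    return new_infections

base = simulate(test_rate=0.5, treat_cover=0.5)
scaled = simulate(test_rate=1.0, treat_cover=0.9)
print(f"infections averted vs baseline: {(1 - scaled / base):.0%}")
```

Even this toy version reproduces the qualitative finding: raising the testing rate only matters insofar as it moves undiagnosed men onto treatment, so the combination of higher testing and higher treatment coverage is what drives incidence down.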
Abstract:
Objective: To evaluate the potential impact of the current global economic crisis (GEC) on the spread of HIV. Design: To evaluate the impact of the economic downturn, we studied two distinct HIV epidemics in Southeast Asia: the generalized epidemic in Cambodia, where incidence is declining, and the epidemic in Papua New Guinea (PNG), which is in an expansion phase. Methods: Major HIV-related risk factors that may change due to the GEC were identified, and a dynamic mathematical transmission model was developed and used to forecast HIV prevalence, diagnoses, and incidence in Cambodia and PNG over the next 3 years. Results: In Cambodia, the total number of HIV diagnoses is not expected to be largely affected. However, an estimated increase of up to 10% in incident cases of HIV, due to potential changes in behavior, may not be observed by the surveillance system. In PNG, HIV incidence and diagnoses could be more affected by the GEC, resulting in respective increases of up to 17% and 11% over the next 3 years. Decreases in voluntary counselling and testing (VCT) and education programs are the factors of greatest concern in both settings. A reduction in the rollout of antiretroviral therapy could increase the number of AIDS-related deaths (by up to 7.5% after 3 years). Conclusions: The GEC is likely to have a modest impact on HIV epidemics. However, there are plausible conditions under which economic downturns can noticeably influence epidemic trends. This study highlights the importance of maintaining funding for HIV programs.
Abstract:
A common measure of the economic performance of different fleet segments in fisheries is the rate of return on capital. However, in the English Channel (UK), observed changes in the fleet structure are at odds with expectations given the observed rates of return on capital. This disjunction between expected and observed behaviour raises the question as to the appropriateness of the rate of return on capital as a measure of economic performance for small boats whose main input is often non-wage labour. In this paper, an alternative performance indicator is developed based on returns on owner-operator labour. This indicator appears to be of more relevance to small-scale boats than the traditional return on capital, and a better indicator of the direction of adjustment in the fishery.
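The proposed indicator can be illustrated with a short sketch: the residual income accruing to the owner's own (non-wage) labour after operating costs and an opportunity cost of capital are deducted, expressed per hour worked. The exact deduction structure and all figures below are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of a return-on-owner-operator-labour indicator: revenue less
# operating costs and an imputed opportunity cost of capital, divided by the
# owner's hours of labour. All figures are hypothetical.

def return_on_owner_labour(revenue, operating_costs, capital_value,
                           capital_opportunity_rate, owner_hours):
    """Residual income per hour of owner-operator labour."""
    residual = revenue - operating_costs - capital_value * capital_opportunity_rate
    return residual / owner_hours

rate = return_on_owner_labour(
    revenue=80_000.0, operating_costs=45_000.0,
    capital_value=100_000.0, capital_opportunity_rate=0.05,
    owner_hours=1_500.0,
)
print(f"implied return on owner labour: ${rate:.2f}/hour")
```

Comparing this hourly residual against the local wage an owner-operator could earn elsewhere gives a more direct signal of whether small boats will enter or exit the fishery than a capital-based rate of return does.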
Abstract:
Public acceptance is consistently listed as having an enormous impact on the implementation and success of a congestion charge scheme. This paper investigates public acceptance of such a scheme in Australia. Surveys were conducted in Brisbane and Melbourne, the two fastest growing Australian cities. Using an ordered logit modeling approach, the survey data, including stated preferences, were analyzed to pinpoint the important factors influencing people’s attitudes to a congestion charge and, in turn, their transport mode choices. To accommodate the nature of the panel data and account for the resulting heterogeneity, random effects were included in the models. As expected, this study found that the amount of the congestion charge and the financial benefits of implementing it have a significant influence on respondents’ support for the charge and on the likelihood of their taking a bus to city areas. However, respondents’ current primary transport mode for travelling to the city areas has a more pronounced impact. Meanwhile, respondents’ perceptions of the congestion charge’s role in protecting the environment by reducing vehicle emissions, and of the extent to which the charge would mean that they travelled less frequently to the city for shopping or entertainment, also have a significant impact on their level of support for its implementation. We also found and explained notable differences across the two cities. Finally, the findings are discussed in relation to the literature on utilising such measures.
Abstract:
The chubby baby who eats well is desirable in our culture. Perceived low weight gains and feeding concerns are common reasons mothers seek advice in the early years. In contrast, childhood obesity is a global public health concern. Use of coercive feeding practices, prompted by maternal concern about weight, may disrupt a child’s innate self-regulation of energy intake, promoting overeating and overweight. This study describes predictors of maternal concern about her child undereating/becoming underweight and associated feeding practices. Mothers in the control group of the NOURISH and South Australian Infants Dietary Intake studies (n = 332) completed a self-administered questionnaire when the child was aged 12–16 months. Weight-for-age z-score (WAZ) was derived from weight measured by study staff. Mean age (SD) was 13.8 (1.3) months, mean WAZ (SD) was 0.58 (0.86), and 49% of children were male. WAZ and two questions describing food refusal were combined in a structural equation model with four items from the Infant Feeding Questionnaire (IFQ) to form the factor ‘Concern about undereating/weight’. Structural relationships were drawn between concern and the IFQ factors ‘awareness of infant’s hunger and satiety cues’, ‘use of food to calm infant’s fussiness’ and ‘feeding infant on a schedule’, resulting in a model of acceptable fit. Lower WAZ and higher frequency of food refusal predicted higher maternal concern. Higher maternal concern was associated with lower awareness of infant cues (r = −.17, p = .01) and greater use of food to calm (r = .13, p = .03). In a cohort of healthy children, maternal concern about undereating and underweight was associated with practices that have the potential to disrupt self-regulation.
Abstract:
Purpose: Food refusal is part of normal toddler development due to an innate ability to self-regulate energy intake and the onset of neophobia. For parents, this ‘fussy’ stage causes great concern, prompting use of coercive feeding practices which ignore a child’s own hunger and satiety cues, promoting overeating and overweight. This analysis defines characteristics of the ‘good eater’ using latent variable structural equation modelling and examines the relationship with maternal perception of her child as a fussy eater. Methods: Mothers in the control group of the NOURISH and South Australian Infants Dietary Intake studies (n = 332) completed a self-administered questionnaire, when the child was aged 12–16 months, describing refusal of familiar and unfamiliar foods and maternal perception of the child as fussy/not fussy. Weight-for-age z-score (WAZ) was derived from weight measured by study staff. Questionnaire items and WAZ were combined in AMOS to represent the latent variable ‘good eater’. Results: Mean age (SD) of children was 13.8 (1.3) months, mean WAZ (SD) was 0.58 (0.86), and 49% were male. The ‘good eater’ was represented by higher WAZ and a child that hardly ever refuses food, hardly ever refuses familiar food, and is willing to eat unfamiliar foods (χ²/df = 2.80, GFI = .98, RMSEA = .07 (.03–.12), CFI = .96). The ‘good eater’ was inversely associated with maternal perception of her child as a fussy eater (β = −.64, p < .05). Conclusions: Toddlers displaying characteristics of a ‘good eater’ are not perceived as fussy, but these characteristics, especially higher WAZ, may be undesirable in the context of obesity prevention. Clinicians can promote food refusal as normal and even desirable in healthy young children.
Abstract:
Collaboration between neuroscience and architecture is emerging as a key field of research, as demonstrated in recent times by the development of the Academy of Neuroscience for Architecture (ANFA) and other societies. Neurological enquiry into affect and spatial experience from a design perspective remains in many instances uncharted. Research using portable functional near-infrared spectroscopy (fNIRS), an emerging non-invasive neuro-imaging technique, is proving convincing in its ability to detect emotional responses to visual, spatio-auditory and task-based stimuli. This innovation provides a firm basis for tracking cortical activity in the appraisal of architectural environments. Additionally, recent neurological studies have sought to explore the manifold sensory abilities of the visually impaired to better understand spatial perception in general. Key studies reveal that early-blind participants perform as well as sighted participants due to higher auditory and somato-sensory spatial acuity. Studies also report pleasant and unpleasant emotional responses within certain interior environments, revealing a deeper perceptual sensitivity than would be expected. Comparative fNIRS studies of spatial experience between the sighted and the blind have the potential to provide greater understanding of emotional responses to architectural environments. Supported by contemporary theories of architectural aesthetics, this paper presents a case for the use of portable fNIRS imaging in the assessment of emotional responses to spatial environments experienced by both blind and sighted participants. The aim of the paper is to outline the implications of fNIRS for spatial research and practice within the field of architecture, and it points to a potential taxonomy of particular formations of space and affect.