Abstract:
PURPOSE Brivanib, an oral, multi-targeted tyrosine kinase inhibitor with activity against vascular endothelial growth factor (VEGF) and fibroblast growth factor receptor (FGFR), was investigated as a single agent in a phase II trial to assess its activity and tolerability in recurrent or persistent endometrial cancer (EMC). PATIENTS AND METHODS Eligible patients had persistent or recurrent EMC after receiving one to two prior cytotoxic regimens, measurable disease, and performance status of ≤2. Treatment consisted of brivanib 800 mg orally every day until disease progression or prohibitive toxicity. Primary endpoints were progression-free survival (PFS) at six months and objective tumor response. Expression of multiple angiogenic proteins and FGFR2 mutation status was assessed. RESULTS Forty-five patients were enrolled. Forty-three patients were eligible and evaluable. Median age was 64 years. Twenty-four patients (55.8%) received prior radiation. Median number of cycles was two (range 1-24). No GI perforations were seen, but one rectal fistula occurred. Nine patients had grade 3 hypertension, with one experiencing grade 4 confusion. Eight patients (18.6%; 90% CI 9.6%-31.7%) had responses (one CR and seven PRs), and 13 patients (30.2%; 90% CI 18.9%-43.9%) were progression-free at six months. Median PFS and overall survival (OS) were 3.3 and 10.7 months, respectively. When modeled jointly, VEGF and angiopoietin-2 expression may diametrically predict PFS. Estrogen receptor-α (ER) expression was positively correlated with OS. CONCLUSION Brivanib is reasonably well tolerated and worthy of further investigation based on PFS at six months in recurrent or persistent EMC.
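The exact confidence interval method behind the quoted response and PFS estimates is not stated in the abstract; a Clopper-Pearson (exact binomial) interval is a common choice for phase II trials of this kind, and the hedged Python sketch below shows how 90% intervals of roughly the reported width can be reproduced from the 8/43 and 13/43 counts.

```python
# Sketch of an exact (Clopper-Pearson) binomial confidence interval, one common
# way to obtain intervals like the 90% CIs quoted above.  Whether the trial used
# exactly this method is an assumption, not something stated in the abstract.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, conf: float = 0.90):
    """Two-sided exact binomial CI for k successes out of n trials."""
    alpha = 1.0 - conf
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 8 responders and 13 patients progression-free at six months, out of 43 evaluable.
for label, k in [("objective response", 8), ("PFS at six months", 13)]:
    lo, hi = clopper_pearson(k, 43)
    print(f"{label}: {k}/43 = {k / 43:.1%} (90% CI {lo:.1%}-{hi:.1%})")
```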
Abstract:
Stress is implicated in the development and course of psychotic illness, but the factors that influence stress levels are not well understood. The aim of this study was to examine the impact of neuropsychological functioning and coping styles on perceived stress in people with first-episode psychosis (FEP) and healthy controls (HC). Thirty-four minimally treated FEP patients from the Early Psychosis Prevention and Intervention Centre, Melbourne, Australia, and 26 HC participants from a similar demographic area participated in the study. Participants completed a comprehensive neuropsychological test battery as well as the Coping Inventory for Stressful Situations (task-, emotion- and avoidance-focussed coping styles) and Perceived Stress Scale (PSS). Linear regressions were used to determine the contribution of neuropsychological functioning and coping style to perceived stress in the two groups. In the FEP group, higher levels of emotion-focussed and lower levels of task-focussed coping were associated with elevated stress. Higher premorbid IQ and working memory were also associated with higher subjective stress. In the HC group, higher levels of emotion-focussed coping, and contrary to the FEP group, lower premorbid IQ, working memory and executive functioning, were associated with increased stress. Lower intellectual functioning may provide some protection against perceived stress in FEP.
Abstract:
Husserl reminded us of the imperative to return to the Lebenswelt, or life-world. He was preoccupied with the crisis of Western science, which alienated the experiencing self from the world of immediate experience. Immediate experience provides a foundation for what it means to be human. Heidegger, building upon these ideas, foresaw a threat to human nature in the face of ‘technicity’. He argued for a return to a relationship between the ‘authentic self’ and nature predicated upon the notion of ‘letting be’, in which humans are open to the mystery of being. Self and nature are not conceived as alienated entities but as aspects of a single entity. In modern times, separation between self and the world is further evidenced by scientific rational modes of being, exemplified through consumerism and the incessant use of screen-based technology, which dominate human experience. In contrast, extreme sports provide an opportunity for people to return to the life-world by living in relation to the natural world. Engagement in extreme sports enables a return to authenticity as we rediscover the self as part of nature.
Abstract:
There has been a paucity of research published on how destination image changes over time. Given increasing investments in destination branding, research is needed to enhance understanding of how to monitor destination brand performance, of which destination image is the core construct, over time. This article reports the results of four studies tracking brand performance of a competitive set of five destinations between 2003 and 2012. Results indicate minimal changes in perceptions held of the five destinations of interest over the 10 years, supporting the assertion of Gartner (1986) and Gartner and Hunt (1987) that destination image change will only occur slowly over time. While undertaken in Australia, the research approach provides DMOs in other parts of the world with a practical tool for evaluating brand performance over time, both as a measure of the effectiveness of past marketing communications and as an indicator of future performance.
Abstract:
The context in which objects are presented influences the speed at which they are named. We employed the blocked cyclic naming paradigm and perfusion functional magnetic resonance imaging (fMRI) to investigate the mechanisms responsible for interference effects reported for thematically and categorically related compared to unrelated contexts. Naming objects in categorically homogeneous contexts induced a significant interference effect that accumulated from the second cycle onwards. This interference effect was associated with significant perfusion signal decreases in left middle and posterior lateral temporal cortex and the hippocampus. By contrast, thematically homogeneous contexts facilitated naming latencies significantly in the first cycle and did not differ from heterogeneous contexts thereafter, nor were they associated with any perfusion signal changes compared to heterogeneous contexts. These results are interpreted as being consistent with an account in which the interference effect both originates and has its locus at the lexical level, with an incremental learning mechanism adapting the activation levels of target lexical representations following access. We discuss the implications of these findings for accounts that assume thematic relations can be active lexical competitors or assume mandatory involvement of top-down control mechanisms in interference effects during naming.
Abstract:
Sustainability has become crucial for the energy industry, as projects in this industry are extremely large and complex and have significant impacts on the environment, community and economy. This demands that the energy industry proactively incorporate sustainability ideas and commit to sustainable project development. This study aims to investigate how the Australian energy industry responds to sustainability requirements and, in particular, what indicators are used to measure sustainability performance. To achieve this, a content analysis was conducted of sustainability reports, vision statements and policy statements of Australian energy companies listed in the 2013 PLATTS Top 250 Global Energy Company Rankings, together with government reports relating to sustainability. The findings show that the energy companies extensively discuss sustainability aspects within three dimensions, i.e. community, environment, and economy. Their primary goals in sustainability are supplying cleaner energy for the future and doing business in a way that improves outcomes for shareholders, employees, business partners and the communities. In particular, energy companies have valued the employees of the business as one of the key areas that need to be considered. Furthermore, the energy industry has become increasingly aware of the importance of measuring sustainability performance to achieve sustainability goals. A number of sustainability indicators have been developed on the basis of the key themes beyond economic measures. It is envisaged that findings from this research will help stakeholders in the energy industry to adopt different indicators to evaluate and ultimately achieve sustainability performance.
Abstract:
Introducing nitrogen (N)-fixing legumes into cereal-based crop rotations reduces synthetic fertiliser-N use and may mitigate soil emissions of nitrous oxide (N2O). Current IPCC calculations assume 100% of legume biomass N as the anthropogenic N input and use 1% of this as an emission factor (EF), the percentage of input N emitted as N2O. However, legumes also utilise soil inorganic N, so legume-fixed N is typically less than 100% of legume biomass N. In two field experiments, we measured soil N2O emissions from a black Vertosol in sub-tropical Australia for 12 months after sowing of chickpea (Cicer arietinum L.), canola (Brassica napus L.), faba bean (Vicia faba L.), and field pea (Pisum sativum L.). Cumulative N2O emissions from N-fertilised canola (624 g N2O-N ha−1) greatly exceeded those from chickpea (127 g N2O-N ha−1) in Experiment 1. Similarly, N2O emitted from canola (385 g N2O-N ha−1) in Experiment 2 was significantly greater than from chickpea (166 g N2O-N ha−1), faba bean (166 g N2O-N ha−1) or field pea (135 g N2O-N ha−1). The highest losses from canola were recorded during the growing season, whereas 75% of the annual N2O losses from the legumes occurred post-harvest. Legume N2-fixation provided 37–43% (chickpea), 54% (field pea) and 64% (faba bean) of total plant biomass N. Using only fixed-N inputs, we calculated EFs for chickpea (0.13–0.31%), field pea (0.18%) and faba bean (0.04%) that were significantly lower than for N-fertilised canola (0.48–0.78%) (P < 0.05), suggesting legume-fixed N is a less emissive form of N input to the soil than fertiliser N. Inputs of legume-fixed N should be more accurately quantified to properly gauge the potential for legumes to mitigate soil N2O emissions. EFs for legume crops need to be revised and should include a factor for the proportion of the legume’s N derived from the atmosphere.
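A minimal sketch of the emission-factor arithmetic described above, contrasting an EF computed against total biomass N (the current IPCC-style input) with one computed against fixed N only. The cumulative emission value is taken from the abstract (chickpea, Experiment 1); the biomass-N total and the fixed fraction are illustrative assumptions.

```python
# Emission factor (EF) = cumulative N2O-N emitted / N input, as a percentage.
# The chickpea emission (127 g N2O-N/ha, Experiment 1) comes from the abstract;
# the biomass-N total and fixed-N fraction below are assumed for illustration.

def emission_factor(n2o_n_g_ha: float, n_input_g_ha: float) -> float:
    """Percentage of the N input emitted as N2O-N."""
    return 100.0 * n2o_n_g_ha / n_input_g_ha

biomass_n_g_ha = 100_000      # assumed total chickpea biomass N (100 kg N/ha)
fixed_fraction = 0.40         # assumed, within the 37-43% range reported for chickpea

ef_all_biomass = emission_factor(127, biomass_n_g_ha)                   # IPCC-style input
ef_fixed_only = emission_factor(127, biomass_n_g_ha * fixed_fraction)   # fixed N only

print(f"EF using all biomass N: {ef_all_biomass:.2f}%")
print(f"EF using fixed N only:  {ef_fixed_only:.2f}%")
```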
Abstract:
How can obstacles to innovation be overcome in road construction? Using a focus group methodology, and based on two prior rounds of empirical work, the analysis in this chapter generates a set of four key solutions to two main construction innovation obstacles: (1) restrictive tender assessment and (2) disagreement over who carries the risk of new product failure. The four key solutions uncovered were: 1) pre-project product certification; 2) past innovation performance assessment; 3) earlier involvement of product suppliers and road asset operators; and 4) performance-based specifications. Additional research is suggested in order to elicit deeper insights into possible solutions to construction innovation obstacles, and should emphasise furthering the theoretical interpretation of empirical phenomena.
Abstract:
PBDE concentrations are higher in children compared to adults, with exposure suggested to include dust ingestion. Besides the home environment, children spend a great deal of time in school classrooms, which may be a source of exposure. As part of the “Ultrafine Particles from Traffic Emissions and Children's Health (UPTECH)” project, dust samples (n=28) were obtained in 2011/12 from 10 Brisbane, Australia metropolitan schools and analysed using GC and LC–MS for polybrominated diphenyl ethers (PBDEs) -17, -28, -47, -49, -66, -85, -99, -100, -154, -183, and -209. Σ11PBDEs ranged from 11 to 2163 ng/g dust, with a mean and median of 600 and 469 ng/g dust, respectively. BDE-209 (range n.d.–2034 ng/g dust; mean (median) 402 (217) ng/g dust) was the dominant congener in most classrooms. Frequencies of detection were 96%, 96%, 39% and 93% for BDE-47, -99, -100 and -209, respectively. No seasonal variations were apparent, and in each of the two schools where XRF measurements were carried out, only two classroom items had detectable bromine. Based on mean PBDE concentrations, PBDE intake for 8–11 year olds from ingestion of classroom dust can be estimated at 0.094 ng/day BDE-47, 0.187 ng/day BDE-99 and 0.522 ng/day BDE-209. The 97.5th percentile intake is estimated to be 0.62, 1.03 and 2.14 ng/day for BDEs-47, -99 and -209, respectively. These PBDE concentrations in dust from classrooms, which are higher than in Australian homes, may explain some of the higher body burden of PBDEs in children compared to adults when taking into consideration age-dependent behaviours which increase dust ingestion.
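The dust-ingestion rate behind the intake estimates above is not given in the abstract; the sketch below simply multiplies the reported mean BDE-209 concentration by an assumed mass of classroom dust ingested per day (back-calculated so that the result approximately reproduces the quoted 0.522 ng/day, purely for illustration).

```python
# Intake (ng/day) = dust concentration (ng/g) x classroom dust ingested (g/day).
# The mean BDE-209 concentration (402 ng/g) is from the abstract; the ingestion
# rate is an assumed placeholder, not a parameter reported in the abstract.

def daily_intake_ng(conc_ng_per_g: float, dust_g_per_day: float) -> float:
    """Estimated PBDE intake from classroom dust ingestion."""
    return conc_ng_per_g * dust_g_per_day

mean_bde209 = 402.0            # ng/g dust (abstract)
dust_ingested = 0.0013         # g/day at school (assumed for illustration)

print(f"BDE-209 intake: {daily_intake_ng(mean_bde209, dust_ingested):.3f} ng/day")
```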
Abstract:
Objective - To investigate the HLA class I associations of ankylosing spondylitis (AS) in the white population, with particular reference to HLA-B27 subtypes. Methods - HLA-B27 and -B60 typing was performed in 284 white patients with AS. Allele frequencies of HLA-B27 and HLA-B60 from 5926 white bone marrow donors were used for comparison. HLA-B27 subtyping was performed by single strand conformation polymorphism (SSCP) in all HLA-B27 positive AS patients, and 154 HLA-B27 positive ethnically matched blood donors. Results - The strong association of HLA-B27 and AS was confirmed (odds ratio (OR) 171, 95% confidence interval (CI) 135 to 218; p < 10⁻⁹⁹). The association of HLA-B60 with AS was confirmed in HLA-B27 positive cases (OR 3.6, 95% CI 2.1 to 6.3; p < 5 × 10⁻⁵), and a similar association was demonstrated in HLA-B27 negative AS (OR 3.5, 95% CI 1.1 to 11.4; p < 0.05). No significant difference was observed in the frequencies of HLA-B27 allelic subtypes in patients and controls (HLA-B*2702, three of 172 patients v five of 154 controls; HLA-B*2705, 169 of 172 patients v 147 of 154 controls; HLA-B*2708, none of 172 patients v two of 154 controls), and no novel HLA-B27 alleles were detected. Conclusion - HLA-B27 and -B60 are associated with susceptibility to AS, but differences in HLA-B27 subtype do not affect susceptibility to AS in this white population.
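For readers unfamiliar with the statistics, the sketch below shows the standard odds-ratio calculation with a Wald-type 95% confidence interval, applied to the HLA-B*2702 counts quoted above (3 of 172 patients vs 5 of 154 controls). The CI method is an assumption; the paper's exact procedure is not given in the abstract.

```python
# Minimal sketch of the odds-ratio arithmetic used in case-control comparisons
# like the one above, with a Wald-type 95% confidence interval on the log scale.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """2x2 table: a=cases with allele, b=cases without, c=controls with, d=controls without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# HLA-B*2702: 3 of 172 B27-positive patients vs 5 of 154 B27-positive controls.
or_, lo, hi = odds_ratio_ci(3, 172 - 3, 5, 154 - 5)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # CI spans 1, i.e. not significant
```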
Abstract:
The importance of developing effective disaster management strategies has significantly grown as the world continues to be confronted with unprecedented disastrous events. Factors such as climate instability and recent urbanization, along with rapid population growth in many cities around the world, have exacerbated the risks of potential disasters, leaving a large number of people and infrastructure exposed to new forms of threats from natural disasters such as flooding, cyclones, and earthquakes. With disasters on the rise, effective recovery planning of the built environment is becoming imperative, as it is not only closely related to the well-being and essential functioning of society but also requires significant financial commitment. In the built environment context, post-disaster reconstruction focuses essentially on the repair and reconstruction of physical infrastructures. The reconstruction and rehabilitation efforts are generally performed in the form of collaborative partnerships that involve multiple organisations, enabling the restoration of interdependencies that exist between infrastructure systems such as energy, water (including wastewater), transport, and telecommunication systems. These interdependencies are major determinants of the vulnerabilities and risks encountered by critical infrastructures and therefore have significant implications for post-disaster recovery. When disrupted by natural disasters, such interdependencies have the potential to promote the propagation of failures between critical infrastructures at various levels, and thus can have dire consequences for reconstruction activities. This paper outlines the results of a pilot study on how elements of infrastructure interdependencies have the potential to impede the post-disaster recovery effort. Responses from seven participants, gathered through unstructured interviews, revealed that during post-disaster recovery, critical infrastructures are mutually dependent on each other’s uninterrupted availability, both physically and through a host of information and communication technologies. Major disruption to their physical and cyber interdependencies could lead to cascading failures, which could delay the recovery effort. Thus, the existing interrelationship between critical infrastructures requires that the entire interconnected network be considered when managing reconstruction activities during the post-disaster recovery period.
Abstract:
Background Genomic data are lacking for many allergen sources. To circumvent this limitation, we implemented a strategy to reveal the repertoire of pollen allergens of a grass with clinical importance in subtropical regions, where an increasing proportion of the world's population resides. Objective We sought to identify and immunologically characterize the allergenic components of the Panicoideae Johnson grass pollen (JGP; Sorghum halepense). Methods The total pollen transcriptome, proteome, and allergome of JGP were documented. Serum IgE reactivities with pollen and purified allergens were assessed in 64 patients with grass pollen allergy from a subtropical region. Results Purified Sor h 1 and Sor h 13 were identified as clinically important allergen components of JGP with serum IgE reactivity in 49 (76%) and 28 (43.8%), respectively, of patients with grass pollen allergy. Within whole JGP, multiple cDNA transcripts and peptide spectra belonging to grass pollen allergen families 1, 2, 4, 7, 11, 12, 13, and 25 were identified. Pollen allergens restricted to subtropical grasses (groups 22-24) were also present within the JGP transcriptome and proteome. Mass spectrometry confirmed the IgE-reactive components of JGP included isoforms of Sor h 1, Sor h 2, Sor h 13, and Sor h 23. Conclusion Our integrated molecular approach revealed qualitative differences between the allergenic components of JGP and temperate grass pollens. Knowledge of these newly identified allergens has the potential to improve specific diagnosis and allergen immunotherapy treatment for patients with grass pollen allergy in subtropical regions and reduce the burden of allergic respiratory disease globally.
Abstract:
- Background Falls are the most frequent adverse events reported in hospitals. We examined the effectiveness of individualised falls-prevention education for patients, supported by training and feedback for staff, delivered as a ward-level programme. - Methods Eight rehabilitation units in general hospitals in Australia participated in this stepped-wedge, cluster-randomised study, undertaken during a 50 week period. Units were randomly assigned to intervention or control groups by use of computer-generated, random allocation sequences. Patients admitted to the units during the study with a Mini-Mental State Examination (MMSE) score of more than 23/30 received individualised education, based on principles of health-behaviour change, from a trained health professional, in addition to usual care. We provided staff, who were trained to support the uptake of falls-prevention strategies by patients, with information about patients' goals, feedback about the ward environment, and perceived barriers to engagement in those strategies. The coprimary outcome measures were the rate of falls per 1000 patient-days and the proportion of patients who were fallers. All analyses were by intention to treat. This trial is registered with the Australian New Zealand Clinical Trials Registry, number ACTRN12612000877886. - Findings Between Jan 13 and Dec 27, 2013, 3606 patients were admitted to the eight units (n=1983 control period; n=1623 intervention period). There were fewer falls (n=196, 7.80/1000 patient-days vs n=380, 13.78/1000 patient-days; adjusted rate ratio 0.60 [robust 95% CI 0.42–0.94], p=0.003), injurious falls (n=66, 2.63/1000 patient-days vs n=131, 4.75/1000 patient-days; 0.65 [robust 95% CI 0.42–0.88], p=0.006), and fallers (n=136 [8.38%] vs n=248 [12.51%]; adjusted odds ratio 0.55 [robust 95% CI 0.38 to 0.81], p=0.003) in the intervention group compared with the control group. There was no significant difference in length of stay (intervention median 11 days [IQR 7–19], control 10 days [IQR 6–18]). - Interpretation Individualised patient education programmes combined with training and feedback to staff, added to usual care, reduce the rates of falls and injurious falls in older patients in rehabilitation hospital units.
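A small sketch of the falls-rate arithmetic used above (falls per 1000 patient-days and the crude rate ratio). The fall counts are from the abstract; the patient-day denominators are hypothetical placeholders chosen so the rates roughly match those reported.

```python
# Falls per 1000 patient-days, and the crude rate ratio between study periods.
# Event counts (196 and 380 falls) are from the abstract; the patient-day
# denominators below are assumed for illustration only.

def falls_rate_per_1000(falls: int, patient_days: float) -> float:
    """Falls per 1000 patient-days."""
    return 1000.0 * falls / patient_days

intervention = falls_rate_per_1000(falls=196, patient_days=25_000)  # assumed exposure
control = falls_rate_per_1000(falls=380, patient_days=27_500)       # assumed exposure

print(f"Intervention: {intervention:.2f} per 1000 patient-days")
print(f"Control:      {control:.2f} per 1000 patient-days")
print(f"Crude rate ratio: {intervention / control:.2f}")  # the paper's adjusted ratio was 0.60
```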
Abstract:
A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than those obtained from either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated with the use of principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, the qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Another three methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN) and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct; prediction set data). Additionally, multiple linear regression (MLR) was utilized to explore any relationship between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
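As a rough illustration of the low-level data-fusion approach described above, the sketch below concatenates HPLC-DAD peak areas and ICP-MS element concentrations into one matrix and classifies provenance with PCA followed by KNN using scikit-learn. The toy data, preprocessing choices and parameter settings are assumptions, not details taken from the paper.

```python
# Hedged sketch of data fusion for provenance classification: HPLC-DAD peak
# areas and ICP-MS element concentrations are concatenated into a single
# matrix, scaled, compressed with PCA, and classified with KNN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 30 Cortex moutan samples from 3 provinces,
# 20 HPLC peak areas + 10 trace-element concentrations per sample.
hplc = rng.random((30, 20))
icpms = rng.random((30, 10))
labels = np.repeat([0, 1, 2], 10)          # province of origin

X_combined = np.hstack([hplc, icpms])      # low-level data fusion

model = make_pipeline(
    StandardScaler(),                      # put peaks and elements on a common scale
    PCA(n_components=5),                   # compress correlated variables
    KNeighborsClassifier(n_neighbors=3),
)

scores = cross_val_score(model, X_combined, labels, cv=5)
print(f"Cross-validated accuracy (toy data): {scores.mean():.2f}")
```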
Abstract:
PURPOSE: To investigate how distance visual acuity in the presence of defocus and astigmatism is affected by age, and whether aberration properties of young and older eyes can explain any differences. METHODS: Participants were 12 young adults (mean [±SD] age, 23 [±2] years) and 10 older adults (mean [±SD] age, 57 [±4] years). Cyclopleged right eyes were used with 4-mm effective pupil sizes. Thirteen blur conditions were created by adding five spherical lens conditions (-1.00 diopters [D], -0.50 D, plano/0.00 D, +0.50 D, and +1.00 D) and by adding two cross-cylindrical lenses (+0.50 DS/-1.00 DC and +1.00 DS/-2.00 DC, i.e., 0.50 D and 1.00 D astigmatism) at four negative cylinder axes (45, 90, 135, and 180 degrees). Targets were single lines of high-contrast letters based on the Bailey-Lovie chart. Successively smaller lines were read until a participant could no longer read any of the letters correctly. Aberrations were measured with a COAS-HD Hartmann-Shack aberrometer. RESULTS: There were no significant differences between the two age groups. We estimated that 70 to 80 participants per group would be needed to show a significant effect of the trend toward greater visual acuity loss in the young group. Visual acuity loss for astigmatism was twice that for defocus of the same magnitude of blur strength (0.33 logMAR [logarithm of the minimum angle of resolution]/D compared with 0.18 logMAR/D), contrary to the geometric prediction of similar loss. CONCLUSIONS: Any age-related differences in visual acuity in the presence of defocus and astigmatism were swamped by interparticipant variation.
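The abstract compares defocus and astigmatism at the same magnitude of "blur strength"; one common definition is the length of the power vector (M, J0, J45) derived from sphere, cylinder and axis. Whether the study used exactly this definition is not stated here, so the sketch below is an illustrative assumption.

```python
# Sketch of one common "blur strength" definition: the length of the power
# vector (M, J0, J45) computed from a sphero-cylindrical prescription.
import math

def power_vector(sphere: float, cyl: float, axis_deg: float):
    """Convert sphere/cylinder/axis to the power-vector components (M, J0, J45)."""
    m = sphere + cyl / 2.0
    j0 = -(cyl / 2.0) * math.cos(math.radians(2.0 * axis_deg))
    j45 = -(cyl / 2.0) * math.sin(math.radians(2.0 * axis_deg))
    return m, j0, j45

def blur_strength(sphere: float, cyl: float, axis_deg: float) -> float:
    """Blur strength B = sqrt(M^2 + J0^2 + J45^2), in dioptres."""
    return math.sqrt(sum(c * c for c in power_vector(sphere, cyl, axis_deg)))

# The +1.00 DS/-2.00 DC cross-cylinder at axis 90 used in the study:
print(f"B = {blur_strength(1.00, -2.00, 90):.2f} D")   # pure astigmatic blur, B = 1.00 D
# A +1.00 D spherical lens for comparison:
print(f"B = {blur_strength(1.00, 0.00, 0):.2f} D")     # pure defocus, B = 1.00 D
```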