894 results for "lack of catalytic mechanism"


Relevance: 100.00%

Abstract:

Cooking skills are emphasized in nutrition promotion, but few population-based studies have examined their distribution among population subgroups or their relationship to dietary behavior. This study examined the relationships between confidence to cook, sociodemographic characteristics, and household vegetable purchasing. This cross-sectional study of 426 randomly selected households in Brisbane, Australia, used a validated questionnaire to assess household vegetable purchasing habits and the confidence to cook of the person who most often prepares food for these households. The mutually adjusted odds ratios (ORs) of lacking confidence to cook were assessed across a range of demographic subgroups using multiple logistic regression models. Similarly, mutually adjusted mean vegetable purchasing scores were calculated using multiple linear regression for different population groups and for respondents with varying confidence levels. Lacking confidence to cook using a variety of techniques was more common among respondents with less education (OR 3.30; 95% confidence interval [CI] 1.01 to 10.75) and was less common among respondents who lived with minors (OR 0.22; 95% CI 0.09 to 0.53) and other adults (OR 0.43; 95% CI 0.24 to 0.78). Lack of confidence to prepare vegetables was associated with being male (OR 2.25; 95% CI 1.24 to 4.08), low education (OR 6.60; 95% CI 2.08 to 20.91) and lower household income (OR 2.98; 95% CI 1.02 to 8.72), and was less common among respondents living with other adults (OR 0.53; 95% CI 0.29 to 0.98). Households bought a greater variety of vegetables on a regular basis when the main chef was confident to prepare them (difference: 18.60; 95% CI 14.66 to 22.54), was older (difference: 8.69; 95% CI 4.92 to 12.47), lived with at least one other adult (difference: 5.47; 95% CI 2.82 to 8.12) or lived with at least one minor (difference: 2.86; 95% CI 0.17 to 5.55).
Cooking skills may contribute to socioeconomic dietary differences, and promoting them may be a useful strategy for increasing fruit and vegetable consumption, particularly among socioeconomically disadvantaged groups.
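The odds-ratio arithmetic behind results like these can be illustrated with a minimal pure-Python sketch. The counts below are hypothetical, not the study's data, and the sketch computes only an unadjusted Wald 95% CI; the study itself reported mutually adjusted ORs from multiple logistic regression models.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: low-education respondents lacking confidence to cook
or_, lo, hi = odds_ratio_ci(12, 28, 30, 230)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

In a real analysis these unadjusted ratios would be refined by entering the exposure alongside the other sociodemographic covariates in a logistic regression, which is what "mutually adjusted" refers to above.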

Relevance: 100.00%

Abstract:

Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease, and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003-10 µm) and are measured as different subsets of particle mass concentrations or particle number count. However, there exist scientific challenges in analysing and interpreting the large data sets on motor vehicle emission factors, and little understanding of how different particle metrics can be applied as a basis for air quality regulation. To date, no comprehensive inventory covering the broad size range of particles emitted by motor vehicles, including particle number, exists anywhere in the world. This thesis covers research related to four important and interrelated aspects pertaining to particulate matter generated by motor vehicle fleets: the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; quantification of motor vehicle particle emission inventories; investigation of the characteristic modality within particle size distributions as a potential basis for developing air quality regulation; review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles; and the application of these aspects to the quantification, control and management of motor vehicle particle emissions.
In order to quantify emissions in terms of a comprehensive inventory, which covers the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, and PM1, PM2.5 and PM10 (mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm, respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions with a lack of measurement data to derive emission factors, or where experimental data are available but are of insufficient scope. The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet, and included the quantification of particle emissions covering the full size range of particles emitted by vehicles, based on measurement data. The inventory quantified particle emissions measured in terms of particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand. The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations, and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide.
The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically-generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10. The study also reviewed and synthesized existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardized measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of instrumentation used to measure ultrafine particles; that few data are available on ultrafine particle chemistry and composition, long-term monitoring, and the characterization of their spatial and temporal distribution in urban areas; and that no inventories of particle number are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. Conclusions from this review included the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment. The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty relates to the combining of expertise from two distinct disciplines, aerosol science and transport modelling. The new knowledge and concepts developed in this PhD research provide previously unavailable data and methods which can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
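The core arithmetic of an emissions inventory of the kind described above is a sum of emission factor times travel activity over vehicle and road type combinations. The sketch below illustrates this for particle number; all emission factors and vehicle-kilometre figures are hypothetical placeholders, not values from the thesis.

```python
# Hypothetical emission factors (particle number, particles/km) and annual
# travel activity (vehicle-km) per vehicle/road-type combination.
emission_factors = {
    ("petrol_car", "urban"): 2.0e14,
    ("petrol_car", "highway"): 6.0e14,
    ("diesel_truck", "urban"): 2.0e15,
    ("diesel_truck", "highway"): 4.0e15,
}
annual_vkt = {
    ("petrol_car", "urban"): 1.2e10,
    ("petrol_car", "highway"): 0.8e10,
    ("diesel_truck", "urban"): 0.1e10,
    ("diesel_truck", "highway"): 0.3e10,
}

# Inventory total: sum of EF * VKT over all vehicle/road-type combinations
total_particles = sum(
    ef * annual_vkt[key] for key, ef in emission_factors.items()
)
print(f"Fleet particle number emissions: {total_particles:.3e} particles/year")
```

The same structure extends to the mass metrics (PM1, PM2.5, PM10) by swapping in mass-based emission factors, which is how a single inventory can cover the full particle size range.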

Relevance: 100.00%

Abstract:

A major focus of research in nanotechnology is the development of novel, high throughput techniques for fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited for directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed since these techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances surface diffusion of adparticles so that they rapidly diffuse away from the heated regions.
Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface, and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate heated by interfering laser beams (optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically by numerical solution of the thermal conduction equation that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers.
Furthermore, we propose a simple extension to this technique where a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
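The central idea of thermal tweezers, thermally activated hopping that is faster in hot regions so that adparticles accumulate in cold ones, can be illustrated with a toy kinetic Monte Carlo sketch. This is not the MCIM, RPWM or Fokker-Planck methods of the thesis, and all parameters below are illustrative only.

```python
import math
import random

random.seed(1)

# 1-D periodic lattice with a sinusoidal surface-temperature profile.
N = 100                  # lattice sites
T0, dT = 300.0, 100.0    # mean temperature and modulation amplitude (K)
Ea_over_k = 2000.0       # activation energy / Boltzmann constant (K)

def temperature(i):
    return T0 + dT * math.sin(2 * math.pi * i / N)

# Arrhenius hop probability per attempt, normalised so the hottest
# site hops with probability 1.
probs = [math.exp(-Ea_over_k / temperature(i)) / math.exp(-Ea_over_k / (T0 + dT))
         for i in range(N)]

# Non-interacting particles: hot-site particles hop often, cold-site
# particles are nearly frozen, so the cold regions act as traps.
particles = [random.randrange(N) for _ in range(400)]
for _ in range(3000):
    for j, i in enumerate(particles):
        if random.random() < probs[i]:
            particles[j] = (i + random.choice((-1, 1))) % N

hot = sum(1 for i in particles if temperature(i) > T0)
print(f"fraction remaining in the hot half: {hot / len(particles):.2f}")
```

For a symmetric walk with site-dependent escape rate r(i), the stationary occupancy is proportional to 1/r(i), so the depletion of the hot half seen here is the expected quasi-equilibrium behaviour, not an artefact of the simulation.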

Relevance: 100.00%

Abstract:

The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful in principle but useless in practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that hindered the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in-school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic then is ‘both-and’ rather than ‘either-or’ for these individuals with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
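As a rough illustration of the CART modelling used in the quantitative phase, the sketch below implements the Gini-impurity split criterion at the heart of classification trees, applied to hypothetical (peer support, SMC usage) pairs rather than the study's survey constructs.

```python
# Toy illustration of the Gini-impurity split criterion used by CART.
# Data are hypothetical: (peer_support_score, used_SMC) pairs.
data = [(1, 0), (2, 0), (2, 0), (3, 0), (4, 1), (5, 1), (5, 1), (3, 1)]

def gini(rows):
    """Gini impurity 2p(1-p) for binary labels."""
    if not rows:
        return 0.0
    p = sum(label for _, label in rows) / len(rows)
    return 2 * p * (1 - p)

def best_split(rows):
    """Return the threshold minimising weighted child impurity."""
    best_t, best_impurity = None, float("inf")
    for t in sorted({x for x, _ in rows}):
        left = [r for r in rows if r[0] <= t]
        right = [r for r in rows if r[0] > t]
        if not left or not right:
            continue
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if impurity < best_impurity:
            best_t, best_impurity = t, impurity
    return best_t, best_impurity

t, imp = best_split(data)
print(f"split at peer_support <= {t} (weighted Gini {imp:.3f})")
```

A full CART model applies this split search recursively and across all candidate predictors, which is how peer support could emerge as the best single predictor in the incremental models above.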

Relevance: 100.00%

Abstract:

Arabic satellite television has recently attracted tremendous attention in both the academic and professional worlds, with a special interest in Aljazeera as a curious phenomenon in the Arab region. Having made a household name for itself worldwide with the airing of the Bin Laden tapes, Aljazeera has set out to deliberately change the culture of Arabic journalism, as has been repeatedly stated by its current General Manager, Waddah Khanfar, and to shake up Arab society by raising awareness of issues never before discussed on television and challenging long-established social and cultural values and norms while promoting, as it claims, Arab issues from a presumably Arab perspective. Working within the meta-frame of democracy, this Qatar-based network has been received with mixed reactions ranging from complete support to utter rejection in both the west and the Arab world. This research examines the social semiotics of Arabic television and the socio-cultural impact of translation-mediated news in Arabic satellite television, with the aim of carrying out a qualitative content analysis, informed by framing theory, critical linguistic analysis, social semiotics and translation theory, within a re-mediation framework which rests on the assumption that a medium “appropriates the techniques, forms and social significance of other media and attempts to rival or refashion them in the name of the real” (Bolter and Grusin, 2000: 66). This is a multilayered study of how translation operates at two different yet interwoven levels: translation proper, that is, the rendition of discourse from one language into another at the text level; and translation as a broader process of interpretation of social behaviour, driven by the linguistic and cultural forms of another medium, resulting in new social signs in which source meaning is reproduced as target meaning that is bound to differ in many respects.
The research primarily focuses on the news media, news making and reporting at Arabic satellite television and looks at translation as a reframing process of news stories in terms of content and cultural values. This notion is based on the premise that by its very nature, news reporting is a framing process, which involves a reconstruction of reality into actualities in presenting the news and providing the context for it. In other words, the mediation of perceived reality through a media form, such as television, actually modifies the mind’s ordering and internal representation of the reality that is presented. The research examines the process of reframing, through translation, news already framed or actualized in another language, and argues that in submitting framed news reports to the translation process several alterations take place, driven by linguistic and cultural constraints and shaped by the context in which the content is presented. These alterations, which involve recontextualizations, may be intentional or unintentional, motivated or unmotivated. Generally, they are the product of a lack of awareness of the dynamics and intricacies of turning a message from one language form into another. More specifically, they are the result of a synthesis process that consciously or subconsciously conforms to editorial policy and cultural interpretive frameworks. In either case, the original message is reproduced and the news is reframed. For the case study, this research examines news broadcasts by the now world-renowned Arabic satellite television station Aljazeera, and to a lesser extent the Lebanese Broadcasting Corporation (LBC) and Al-Arabiya where access is feasible, for comparison and crosschecking purposes.
As a new phenomenon in the Arab world, Arabic satellite television, especially 24-hour news and current affairs, provides an interesting area worthy of study, not only for its immediate socio-cultural and professional and ethical implications for the Arabic media in particular, but also for news and current affairs production in the western media that rely on foreign language sources and translation mediation for international stories.

Relevance: 100.00%

Abstract:

In the emerging literature related to destination branding, little has been reported about performance metrics. Most research reported to date has been concerned with the development of destination brand identities and the implementation of campaigns (see, for example, Crockett & Wood 1999, Hall 1999, May 2001, Morgan et al. 2002). One area requiring increased attention is that of tracking the performance of destination brands over time. This is an important gap in the tourism literature, given: i) the increasing level of investment by destination marketing organisations (DMO) in branding since the 1990s, ii) the complex political nature of DMO brand decision-making and increasing accountability to stakeholders (see Pike, 2005), and iii) the long-term nature of repositioning a destination’s image in the market place (see Gartner & Hunt, 1987). Indeed, a number of researchers in various parts of the world have pointed to a lack of market research monitoring destination marketing objectives, such as in Australia (see Prosser et al. 2000, Carson, Beattie and Gove 2003), North America (Sheehan & Ritchie 1997, Masberg 1999), and Europe (Dolnicar & Schoesser 2003)...

Relevance: 100.00%

Abstract:

Neopolycystus sp. is the only primary egg parasitoid associated with the pest beetle Paropsis atomaria in subtropical eucalypt plantations, but its impact on its host populations is unknown. The simplified ecosystem represented by the plantation habitat, the lack of interspecific competition for both host and parasitoid, and the multivoltinism of the host population make this an ideal system for quantifying the direct and indirect effects of egg parasitism, and hence, effects on host population dynamics. Within-, between- and overall-egg-batch parasitism rates were determined at three field sites over two field seasons, and up to seven host generations. The effects of exposure time (egg batch age), host density, and proximity to native forest and water sources on egg parasitism rates were also tested. Neopolycystus sp. exerts a significant influence on P. atomaria populations in Eucalyptus cloeziana plantations in south-eastern Queensland, causing the direct (13%) and indirect (15%) mortality of almost one-third of all eggs in the field. Across seasons and generations, 45% of egg batches were parasitised, with a within-batch parasitism rate of around 30%. Between-batch parasitism increased up to 5–6 days after oviposition in the field, although within-batch parasitism rates generally did not. However, there were few apparent patterns to egg parasitism, with rates often varying significantly between sites and seasons.
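The batch-level parasitism metrics reported above can be computed along the following lines; the egg-batch records here are hypothetical, not field data from the study.

```python
# Hypothetical egg-batch records: (eggs_in_batch, eggs_parasitised)
batches = [(24, 8), (30, 0), (18, 6), (22, 0), (28, 10), (20, 7)]

# Between-batch rate: fraction of batches with any parasitism at all
between = sum(1 for n, k in batches if k > 0) / len(batches)

# Within-batch rate: mean parasitised fraction among parasitised batches
parasitised = [(n, k) for n, k in batches if k > 0]
within = sum(k / n for n, k in parasitised) / len(parasitised)

# Overall rate: parasitised eggs over all eggs across all batches
overall = sum(k for _, k in batches) / sum(n for n, _ in batches)

print(f"between-batch {between:.2f}, within-batch {within:.2f}, overall {overall:.2f}")
```

Distinguishing the three rates matters because, as in the study, between-batch parasitism can rise with exposure time while within-batch rates stay roughly constant.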

Relevance: 100.00%

Abstract:

The bactericide triclosan has found widespread use in products such as soaps, deodorants and toothpastes. Recent in vitro and in vivo studies indicate that triclosan might exert adverse effects in humans. Triclosan has previously been shown to be present in human plasma and milk at concentrations that are well correlated to the use of personal care products containing triclosan. In this study we investigated the influence of age, gender, and region of residence on triclosan concentrations in pooled samples of Australian human blood serum. The results showed no influence of region of residence on the concentrations of triclosan. There was a small but significant influence of age and gender on the serum triclosan concentrations, which were higher in males than in females, and highest in the group of 31–45 year old males and females. However, overall there was a lack of pronounced differences in the triclosan concentrations within the dataset, which suggests that exposure to triclosan among different groups of the Australian population is relatively homogeneous. A selection of the dataset was compared with previous measurements of triclosan concentrations in human plasma from Sweden, where the use of triclosan is expected to be low due to consumer advisories. The triclosan concentrations were a factor of 2 higher in Australian serum than in Swedish plasma.

Relevance: 100.00%

Abstract:

Low back pain is an increasing problem in industrialised countries and although it is a major socio-economic problem in terms of medical costs and lost productivity, relatively little is known about the processes underlying the development of the condition. This is in part due to the complex interactions between bone, muscle, nerves and other soft tissues of the spine, and the fact that direct observation and/or measurement of the human spine is not possible using non-invasive techniques. Biomechanical models have been used extensively to estimate the forces and moments experienced by the spine. These models provide a means of estimating the internal parameters which cannot be measured directly. However, application of most of the models currently available is restricted to tasks resembling those for which the model was designed due to the simplified representation of the anatomy. The aim of this research was to develop a biomechanical model to investigate the changes in forces and moments which are induced by muscle injury. In order to accurately simulate muscle injuries, a detailed quasi-static three-dimensional model representing the anatomy of the lumbar spine was developed. This model includes the nine major force-generating muscles of the region (erector spinae, comprising the longissimus thoracis and iliocostalis lumborum; multifidus; quadratus lumborum; latissimus dorsi; transverse abdominis; internal oblique and external oblique), as well as the thoracolumbar fascia through which the transverse abdominis and parts of the internal oblique and latissimus dorsi muscles attach to the spine. The muscles included in the model have been represented using 170 muscle fascicles, each with its own force-generating characteristics and lines of action. Particular attention has been paid to ensuring the muscle lines of action are anatomically realistic, particularly for muscles which have broad attachments (e.g.
internal and external obliques), muscles which attach to the spine via the thoracolumbar fascia (e.g. transverse abdominis), and muscles whose paths are altered by bony constraints such as the rib cage (e.g. iliocostalis lumborum pars thoracis and parts of the longissimus thoracis pars thoracis). In this endeavour, a separate sub-model which accounts for the shape of the torso by modelling it as a series of ellipses has been developed to model the lines of action of the oblique muscles. Likewise, a separate sub-model of the thoracolumbar fascia has also been developed which accounts for the middle and posterior layers of the fascia, and ensures that the line of action of the posterior layer is related to the size and shape of the erector spinae muscle. Published muscle activation data are used to enable the model to predict the maximum forces and moments that may be generated by the muscles. These predictions are validated against published experimental studies reporting maximum isometric moments for a variety of exertions. The model performs well for flexion, extension and lateral bend exertions, but underpredicts the axial twist moments that may be developed. This discrepancy is most likely the result of differences between the experimental methodology and the modelled task. The application of the model is illustrated using examples of muscle injuries created by surgical procedures. The three examples used represent a posterior surgical approach to the spine, an anterior approach to the spine and unilateral total hip replacement surgery. Although the three examples simulate different muscle injuries, all demonstrate the production of significant asymmetrical moments and/or reduced joint compression following surgical intervention. This result has implications for patient rehabilitation and the potential for further injury to the spine. The development and application of the model has highlighted a number of areas where current knowledge is deficient.
These include muscle activation levels for tasks in postures other than upright standing, changes in spinal kinematics following surgical procedures such as spinal fusion or fixation, and a general lack of understanding of how the body adjusts to muscle injuries with respect to muscle activation patterns and levels, rate of recovery from temporary injuries and compensatory actions by other muscles. Thus the comprehensive and innovative anatomical model which has been developed not only provides a tool to predict the forces and moments experienced by the intervertebral joints of the spine, but also highlights areas where further clinical research is required.
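As an illustration of the kind of computation such a fascicle-based model performs, the sketch below sums moment contributions (r × F about a joint centre) for two mirrored fascicles; the attachment geometry and forces are hypothetical, not taken from the thesis.

```python
# Minimal sketch: net moment about a joint centre from muscle fascicle forces.
# Each fascicle is a straight line of action from origin to insertion;
# coordinates (metres) and forces (newtons) are hypothetical.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    return sum(x * x for x in v) ** 0.5

def fascicle_moment(joint, origin, insertion, force):
    """Moment = r x F, with F directed along the fascicle's line of action
    and applied at the origin attachment (a simplification)."""
    line = sub(insertion, origin)
    unit = tuple(x / norm(line) for x in line)
    f_vec = tuple(force * x for x in unit)
    r = sub(origin, joint)  # moment arm from joint centre to attachment
    return cross(r, f_vec)

joint = (0.0, 0.0, 0.0)
fascicles = [  # (origin, insertion, force in N), mirrored about the midline
    ((0.05, -0.02, 0.10), (0.04, -0.03, -0.08), 120.0),
    ((-0.05, -0.02, 0.10), (-0.04, -0.03, -0.08), 120.0),
]
net = (0.0, 0.0, 0.0)
for o, i, f in fascicles:
    net = tuple(a + b for a, b in zip(net, fascicle_moment(joint, o, i, f)))
print("net moment (N*m):", tuple(round(m, 3) for m in net))
```

With symmetric bilateral activation the lateral-bend and axial-twist components cancel, leaving a pure flexion/extension moment; removing one fascicle, as a simulated unilateral muscle injury would, leaves the asymmetrical moments the thesis describes.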