406 results for model categories homotopy theory quillen functor equivalence derived adjunction cofibrantly generated

at Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Purpose – The purpose of this research is to explore the idea of the participatory library in higher education settings. It aims to address the question: what is a participatory university library?

Design/methodology/approach – A grounded theory approach was adopted. In-depth individual interviews were conducted with two diverse groups of participants: ten library staff members and six library users. Data collection and analysis were carried out simultaneously and complied with Straussian grounded theory principles and techniques.

Findings – Three core categories representing the participatory library were found: “community”, “empowerment”, and “experience”. Each category was thoroughly delineated via sub-categories, properties, and dimensions that together create a foundation for the participatory library. A participatory library model was also developed, together with an explanation of its building blocks, providing a deeper understanding of the participatory library phenomenon.

Research limitations – The research focuses on a specific library system, i.e. academic libraries, so the results may not be readily applicable to public, special, and school library contexts.

Originality/value – This is the first empirical study to develop a participatory library model. It provides librarians, library managers, researchers, library students, and the library community with a holistic picture of the contemporary library.

Relevance: 100.00%

Abstract:

Reactive oxygen species (ROS) and related free radicals are considered to be key factors underpinning the various adverse health effects associated with exposure to ambient particulate matter. Therefore, measurement of ROS is a crucial factor for assessing the potential toxicity of particles. In this work, a novel profluorescent nitroxide, BPEAnit, was investigated as a probe for detecting particle-derived ROS. BPEAnit has very low fluorescence emission due to inherent quenching by the nitroxide group, but upon radical trapping or redox activity a strong fluorescence is observed. BPEAnit was tested for detection of ROS present in mainstream and sidestream cigarette smoke. In the case of mainstream cigarette smoke, there was a linear increase in fluorescence intensity with an increasing number of cigarette puffs, equivalent to an average of 101 nmol ROS per cigarette based on the number of moles of the probe reacted. Sidestream cigarette smoke sampled from an environmental chamber exposed BPEAnit to much lower concentrations of particles, but still resulted in a clearly detectable increase in fluorescence intensity with sampling time. The amount of ROS was calculated to be equivalent to 50 ± 2 nmol per mg of particulate matter; however, this value decreased with ageing of the particles in the chamber. Overall, BPEAnit was shown to provide a sensitive response related to the oxidative capacity of the particulate matter. These findings present a good basis for employing the new BPEAnit probe for the investigation of particle-related ROS generated from cigarette smoke as well as from other combustion sources.
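As an illustration of the quantification step described above, the sketch below converts the measured moles of probe reacted into ROS per cigarette and per milligram of particulate matter, assuming a 1:1 stoichiometry between probe consumed and ROS trapped. The function name, variable names and example numbers are hypothetical illustrations, not values or code from the study.

def ros_yield(nmol_probe_reacted, n_cigarettes=None, pm_mass_mg=None):
    # Assume a 1:1 stoichiometry: each mole of probe reacted is counted as
    # one mole of ROS trapped (an illustrative assumption).
    out = {}
    if n_cigarettes:
        out["nmol ROS per cigarette"] = nmol_probe_reacted / n_cigarettes
    if pm_mass_mg:
        out["nmol ROS per mg PM"] = nmol_probe_reacted / pm_mass_mg
    return out

# Hypothetical example: 1010 nmol of probe reacted for smoke from 10 cigarettes,
# and 500 nmol reacted for a sample containing 10 mg of particulate matter.
print(ros_yield(1010, n_cigarettes=10))   # -> {'nmol ROS per cigarette': 101.0}
print(ros_yield(500, pm_mass_mg=10))      # -> {'nmol ROS per mg PM': 50.0}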

Relevance: 100.00%

Abstract:

Particulate pollution has been widely recognised as an important risk factor to human health. In addition to increases in respiratory and cardiovascular morbidity associated with exposure to particulate matter (PM), WHO estimates that urban PM causes 0.8 million premature deaths globally and that 1.5 million people die prematurely from exposure to indoor smoke generated from the combustion of solid fuels. Despite the availability of a huge body of research, the underlying toxicological mechanisms by which particles induce adverse health effects are not yet entirely understood. Oxidative stress caused by the generation of free radicals and related reactive oxygen species (ROS) at the sites of deposition has been proposed as a mechanism for many of the adverse health outcomes associated with exposure to PM. In addition to particle-induced generation of ROS in lung tissue cells, several recent studies have shown that particles may also contain ROS. As such, they present a direct cause of oxidative stress and related adverse health effects.

Cellular responses to oxidative stress have been widely investigated using various cell exposure assays. However, for rapid screening of the oxidative potential of PM, less time-consuming and less expensive cell-free assays are needed. The main aim of this research project was to investigate the application of a novel profluorescent nitroxide probe, synthesised at QUT, as a rapid screening assay for assessing the oxidative potential of PM. Considering that this was the first time a profluorescent nitroxide probe was applied to investigating the oxidative stress potential of PM, the proof of concept regarding the detection of PM-derived ROS by such probes needed to be demonstrated and a sampling methodology needed to be developed. Sampling through an impinger containing a profluorescent nitroxide solution was chosen as the means of particle collection, as it allowed particles to react with the probe during sampling, thereby avoiding any chemical changes resulting from delays between sampling and analysis of the PM. Among several profluorescent nitroxide probes available at QUT, bis(phenylethynyl)anthracene-nitroxide (BPEAnit) was found to be the most suitable, mainly due to its relatively long excitation and emission wavelengths (λex = 430 nm; λem = 485 and 513 nm). These wavelengths are long enough to avoid overlap with background fluorescence from light-absorbing compounds that may be present in PM (e.g. polycyclic aromatic hydrocarbons and their derivatives).

Given that combustion is one of the major sources of ambient PM, this project aimed to gain insight into the oxidative stress potential of combustion-generated PM, namely cigarette smoke, diesel exhaust and wood smoke PM. During the course of this research project, it was demonstrated that the BPEAnit-based assay is sufficiently sensitive and robust to be applied as a rapid screening test for PM-derived ROS. Because the same assay was applied to all three aerosol sources (i.e. cigarette smoke, diesel exhaust and wood smoke), the results presented in this thesis allow direct comparison of the oxidative potential measured for all three sources of PM. In summary, there was a substantial difference between the amounts of ROS per unit of PM mass (ROS concentration) for particles emitted by different combustion sources. For example, particles from cigarette smoke were found to have up to 80 times less ROS per unit of mass than particles produced during logwood combustion. For both diesel and wood combustion, the type of fuel was shown to significantly affect the oxidative potential of the particles emitted, and the operating conditions of the combustion source were also found to affect the oxidative potential of particulate emissions. Moreover, this project demonstrated a strong link between semivolatile (i.e. organic) species and ROS, clearly highlighting the importance of semivolatile species in particle-induced toxicity.

Relevance: 100.00%

Abstract:

Despite the increasing popularity of social networking websites (SNWs), very little is known about the psychosocial variables which predict people’s use of these websites. The present study used an extended model of the theory of planned behaviour (TPB), including the additional variables of self-identity and belongingness, to predict high-level SNW use intentions and behaviour in a sample of young people aged between 17 and 24 years. Additional analyses examined the impact of self-identity and belongingness on young people’s addictive tendencies towards SNWs. University students (N = 233) completed measures of the standard TPB constructs (attitude, subjective norm and perceived behavioural control), the additional predictor variables (self-identity and belongingness), demographic variables (age, gender, and past behaviour) and addictive tendencies. One week later, they reported their engagement in high-level SNW use during the previous week. Regression analyses partially supported the TPB: attitude and subjective norm significantly predicted intentions to engage in high-level SNW use, with intention significantly predicting behaviour. Self-identity, but not belongingness, significantly contributed to the prediction of intention and, unexpectedly, behaviour. Past behaviour also significantly predicted intention and behaviour. Self-identity and belongingness significantly predicted addictive tendencies toward SNWs. Overall, the present study revealed that high-level SNW use is influenced by attitudinal, normative, and self-identity factors, findings which can be used to inform strategies that aim to modify young people’s high levels of use or addictive tendencies for SNWs.
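A minimal sketch of the kind of two-step regression implied by the extended TPB model described above: the standard TPB predictors entered first, then self-identity, belongingness and past behaviour added. The data are synthetic and the column names are illustrative assumptions, not the study's variables or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 233
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "subjective_norm": rng.normal(size=n),
    "pbc": rng.normal(size=n),               # perceived behavioural control
    "self_identity": rng.normal(size=n),
    "belongingness": rng.normal(size=n),
    "past_behaviour": rng.normal(size=n),
})
# Synthetic outcome with made-up effect sizes, only so the example runs.
df["intention"] = (0.4 * df.attitude + 0.3 * df.subjective_norm
                   + 0.2 * df.self_identity + rng.normal(scale=0.5, size=n))

step1 = smf.ols("intention ~ attitude + subjective_norm + pbc", data=df).fit()
step2 = smf.ols("intention ~ attitude + subjective_norm + pbc"
                " + self_identity + belongingness + past_behaviour", data=df).fit()
print(round(step1.rsquared, 3), round(step2.rsquared, 3))   # R-squared change across steps
print(step2.params)                                         # coefficients in the extended model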

Relevance: 100.00%

Abstract:

Following Youngjohn, Lees-Haley, and Binder's (1999) comment on Johnson and Lesniak-Karpiak's (1997) study that warnings lead to more subtle malingering, researchers have sought to better understand warning effects. However, such studies have been largely atheoretical and may have confounded warning and coaching. This study examined the effect on malingering of a warning that was based on criminological-sociological concepts derived from the rational choice model of deterrence theory. A total of 78 participants were randomly assigned to a control group, an unwarned simulator group, or one of two warned simulator groups. The warning groups comprised low- and high-level conditions depending on warning intensity. Simulator participants received no coaching about how to fake tests. Outcome variables were scores derived from the Test of Memory Malingering and Wechsler Memory Scale-III. When the rate of malingering was compared across the four groups, a high-level warning effect was found such that warned participants were significantly less likely to exaggerate than unwarned simulators. In an exploratory follow-up analysis, the warned groups were divided into those who reported malingering and those who did not report malingering, and the performance of these groups was compared to that of unwarned simulators and controls. Using this approach, results showed that participants who were deterred from malingering by warning performed no worse than controls. However, on a small number of tests, self-reported malingerers in the low-level warning group appeared less impaired than unwarned simulators. This pattern was not observed in the high-level warning condition. Although cautious interpretation of findings is necessitated by the exploratory nature of some analyses, overall results suggest that using a carefully designed warning may be useful for reducing the rate of malingering. The combination of some noteworthy effect sizes, despite low power and the small size of some groups, suggests that further investigation of the effects of warnings needs to continue to determine their effect more fully.
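The central comparison reported above — whether the rate of malingering differs across the control, unwarned-simulator, low-level-warning and high-level-warning groups — amounts to a test of proportions across four groups. Below is a minimal sketch of such a test; the counts are invented purely to illustrate the comparison and are not the study's data.

from scipy.stats import chi2_contingency

# Rows: control, unwarned simulators, low-level warning, high-level warning.
# Columns: [classified as malingering, not classified]. Counts are made up.
table = [[2, 16],
         [14, 4],
         [10, 9],
         [5, 15]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")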

Relevance: 100.00%

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
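A minimal sketch of the generic penalized selection rule the abstract builds on: over an inclusion-ordered sequence of model classes, choose the class whose empirical risk minimizer has the smallest empirical risk plus complexity penalty. The nested polynomial classes and the particular penalty used here are illustrative stand-ins, not the tight estimation-error penalty constructed in the paper.

import numpy as np

def select_model(models, X, y, penalty):
    # Pick the class whose empirical risk minimizer attains the smallest
    # empirical risk plus complexity penalty.
    best_k, best_score = None, np.inf
    for k, fit in enumerate(models):
        f = fit(X, y)                         # empirical risk minimizer in class k
        emp_risk = np.mean((f(X) - y) ** 2)   # empirical risk under squared loss
        score = emp_risk + penalty(k)
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Illustrative use: polynomial classes ordered by inclusion, with a penalty
# that grows with the class index (an assumption for this example only).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 200)
y = np.sin(3 * X) + rng.normal(scale=0.2, size=200)
models = [lambda X, y, d=d: np.poly1d(np.polyfit(X, y, d)) for d in range(1, 8)]
print(select_model(models, X, y, penalty=lambda k: 0.02 * (k + 2) / len(X)))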

Relevance: 100.00%

Abstract:

Cardiovascular disease (CVD) continues to impose a heavy burden in terms of cost, disability and death in Australia. Evidence suggests that increasing remoteness, where cardiac services are scarce, is linked to an increased risk of dying from CVD: fatal CVD events are reported to be between 20% and 50% higher in rural areas compared to major cities. The Cardiac ARIA project, with its extensive use of Geographic Information Systems (GIS), ranks each of Australia’s 20,387 urban, rural and remote population centres by accessibility to essential services or resources for the management of a cardiac event. This unique, innovative and highly collaborative project delivers a powerful tool to highlight and combat the burden imposed by CVD in Australia. Cardiac ARIA is a model that could be applied internationally and to other acute and chronic conditions such as mental health, midwifery, cancer, respiratory, diabetes and burns services.

Cardiac ARIA was designed to:
1. Determine, by expert panel, the minimal services and resources required for the management of a cardiac event in any urban, rural or remote population location in Australia, using a single patient pathway to access care.
2. Derive a classification, using GIS accessibility modelling, for each of Australia’s 20,387 urban, rural and remote population locations.
3. Compare the Cardiac ARIA categories and population locations with census-derived population characteristics.

Key findings are as follows:
• In the event of a cardiac emergency, the majority of Australians had very good access to cardiac services. Approximately 71%, or 13.9 million people, lived within one hour of a category one hospital.
• 68% of older Australians lived within one hour of a category one hospital (Principal Referral Hospital with access to cardiac catheterisation).
• Only 40% of Indigenous people lived within one hour of a category one hospital.
• 16% (74,000) of Indigenous people lived more than one hour from a hospital.
• 3% (91,000) of people 65 years of age or older lived more than one hour from any hospital or clinic.
• Approximately 96% of people, or 19 million, lived within one hour of the four key services to support cardiac rehabilitation and secondary prevention.
• 75% of Indigenous people lived within one hour of the four key services to support cardiac rehabilitation and secondary prevention, while 14% (64,000) of Indigenous people had poor access to these services.
• 12% (56,000) of Indigenous people were more than one hour from a hospital and had access to only one of the four key services (usually a medical service) to support cardiac rehabilitation and secondary prevention.
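A simplified, hypothetical sketch of the accessibility-classification idea described above: assign each population centre a category based on travel time to the nearest facility of a given service level. The thresholds, labels and function name are illustrative only and are not the actual Cardiac ARIA index definitions.

def acute_accessibility_category(hours_to_category_one, hours_to_any_hospital):
    # Thresholds and labels are made up for illustration.
    if hours_to_category_one <= 1.0:
        return "A (category one hospital within one hour)"
    if hours_to_any_hospital <= 1.0:
        return "B (some hospital or clinic within one hour)"
    return "C (no hospital or clinic within one hour)"

print(acute_accessibility_category(0.4, 0.4))   # metropolitan centre
print(acute_accessibility_category(3.0, 0.8))   # regional town
print(acute_accessibility_category(6.0, 2.5))   # remote community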

Relevance: 100.00%

Abstract:

Purpose. To create a binocular statistical eye model based on previously measured ocular biometric data.

Methods. Thirty-nine parameters were determined for a group of 127 healthy subjects (37 male, 90 female; 96.8% Caucasian) with an average age of 39.9 ± 12.2 years and spherical equivalent refraction of −0.98 ± 1.77 D. These parameters described the biometry of both eyes and the subjects' age. Missing parameters were complemented by data from a previously published study. After confirmation of the Gaussian shape of their distributions, these parameters were used to calculate mean and covariance matrices, which in turn defined a multivariate Gaussian distribution. From this distribution, random biometric data could be generated and then randomly selected to create a realistic population of random eyes.

Results. All parameters had Gaussian distributions, with the exception of the parameters that describe total refraction (i.e., three parameters per eye). After these non-Gaussian parameters were omitted from the model, the generated data were found to be statistically indistinguishable from the original data for the remaining 33 parameters (TOST [two one-sided t tests]; P < 0.01). Parameters derived from the generated data were also statistically indistinguishable from those calculated with the original data (P > 0.05). The only exception was the lens refractive index, for which the generated data had a significantly larger SD.

Conclusions. A statistical eye model can describe the biometric variations found in a population and is a useful addition to the classic eye models.
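The core of the method described above — estimate a mean vector and covariance matrix from the measured biometric parameters, then draw synthetic eyes from the fitted multivariate Gaussian — can be sketched as below. The measured data are replaced by a random placeholder array, and the study's TOST equivalence testing is reduced to a plain Welch t-test for brevity; nothing here reproduces the actual dataset.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder for the measured data: 127 subjects x 33 Gaussian parameters.
measured = rng.normal(size=(127, 33))

mu = measured.mean(axis=0)                                 # mean vector
cov = np.cov(measured, rowvar=False)                       # covariance matrix
generated = rng.multivariate_normal(mu, cov, size=1000)    # synthetic "random eyes"

# Crude check on one parameter (the study used TOST equivalence tests instead).
print(stats.ttest_ind(measured[:, 0], generated[:, 0], equal_var=False).pvalue)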

Relevance: 100.00%

Abstract:

Business process modelling as a practice and research field has received great attention over recent years. Organizations invest significantly in process modelling in terms of training, tools, capabilities and resources. The return on this investment is a function of process model re-use, which we define as the recurring use of process models to support organizational work tasks. While prior research has examined re-use as a design principle, we explore re-use as a behaviour, because evidence suggests that analysts’ re-use of process models is indeed limited. In this paper we develop a two-stage conceptualization of the key object-, behaviour- and socio-organization-centric factors explaining process model re-use behaviour. We propose a theoretical model and detail implications for its operationalization and measurement. Our study can provide significant benefits to our understanding of process modelling and process model use as key practices in analysis and design.

Relevance: 100.00%

Abstract:

Our objective was to determine the factors that lead users to continue working with process modeling grammars after their initial adoption. We examined the explanatory power of three theoretical models of IT usage by applying them to two popular process modeling grammars. We found that a hybrid model of technology acceptance and expectation-confirmation best explained user intentions to continue using the grammars. We examined differences in the model results and used them to provide three contributions. First, the study confirmed the applicability of IT usage models to the domain of process modeling. Second, we discovered that differences in continued-usage intentions depended on the grammar type rather than on user characteristics. Third, we suggest implications for research and practice.

Relevance: 100.00%

Abstract:

Alcohol consumption has been a popular leisure activity among Australians since European settlement. Australians currently consume 7.2 litres of pure alcohol per capita, ranking Australia 22nd highest of 58 countries for alcohol consumption. Although the alcohol industry has provided leisure, employment and government taxes, alcohol use has also become associated with chronic health problems, crime, public disorder and violence. Drunken and disorderly behaviour is commonly associated with pubs, clubs and hotels, particularly in late-night entertainment areas. Historically, drunkenness and disorderly behaviour have been managed by measures such as floggings, jail and treatment in asylums. Alcohol has also been banned in specific areas, and restrictions have applied to hours and days of operation. In more recent times, alcohol policies have included extended trading hours, restricted trading hours and bans in some Aboriginal communities in order to reduce alcohol-related violence. Community and business partnerships in and around licensed premises have also developed in order to address the noise, violence and disorderly behaviour that often occur in the evenings and early mornings. There is an urgent need for government to be more robust about implementing effective alcohol control policies in order to prevent and reduce the harmful effects of alcohol.

Relevance: 100.00%

Abstract:

Background. The incidence of malignant mesothelioma is increasing. There is a perception that survival is worse in the UK than in other countries; however, it is important to compare survival in different series based on accurate prognostic data. The European Organisation for Research and Treatment of Cancer (EORTC) and the Cancer and Leukaemia Group B (CALGB) have recently published prognostic scoring systems. We have assessed the prognostic variables, validated the EORTC and CALGB prognostic groups, and evaluated survival in a series of 142 patients.

Methods. Case notes of 142 consecutive patients presenting in Leicester since 1988 were reviewed. Univariate analysis of prognostic variables was performed using a Cox proportional hazards regression model. Statistically significant variables were analysed further in a forward, stepwise multivariate model. EORTC and CALGB prognostic groups were derived, Kaplan-Meier survival curves plotted, and survival rates calculated from life tables.

Results. Significant poor prognostic factors in univariate analysis included male sex, older age, weight loss, chest pain, poor performance status, low haemoglobin, leukocytosis, thrombocytosis, and non-epithelial cell type (p<0.05). The prognostic significance of cell type, haemoglobin, white cell count, performance status, and sex was retained in the multivariate model. Overall median survival was 5.9 (range 0-34.3) months. One and two year survival rates were 21.3% (95% CI 13.9 to 28.7) and 3.5% (0 to 8.5), respectively. Median, one, and two year survival within prognostic groups in Leicester were equivalent to the EORTC and CALGB series. Survival curves were successfully stratified by the prognostic groups.

Conclusions. This study validates the EORTC and CALGB prognostic scoring systems, which should be used both in the assessment of survival data from series in different countries and in the stratification of patients into randomised clinical studies.
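A minimal sketch of the survival analysis workflow described above — a Cox proportional hazards regression on candidate prognostic factors and a Kaplan-Meier estimate of median survival — using the lifelines library. The records and column names are invented for illustration and are not the Leicester data.

import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Invented records with hypothetical column names, purely to show the workflow.
df = pd.DataFrame({
    "survival_months": [5.9, 12.1, 2.3, 20.0, 8.4, 1.1],
    "died":            [1,   1,    1,   0,    1,   1],
    "male":            [1,   0,    1,   1,    0,   1],
    "haemoglobin":     [11.2, 13.5, 10.1, 14.0, 12.2, 9.8],
    "epithelial":      [0,   1,    0,   1,    1,   0],
})

cph = CoxPHFitter(penalizer=0.1)      # Cox proportional hazards model on the covariates
cph.fit(df, duration_col="survival_months", event_col="died")
cph.print_summary()

kmf = KaplanMeierFitter()             # Kaplan-Meier survival estimate
kmf.fit(df["survival_months"], event_observed=df["died"])
print(kmf.median_survival_time_)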

Relevance: 100.00%

Abstract:

In this age of rapidly evolving technology, teachers are encouraged to adopt ICTs by government, syllabus, school management, and parents. Indeed, it is an expectation that teachers will incorporate technologies into their classroom teaching practices to enhance the learning experiences and outcomes of their students. In the science classroom in particular, a subject that traditionally incorporates hands-on experiments and practicals, the integration of modern technologies should be a major feature. Although myriad studies report on technologies that enhance students’ learning outcomes in science, there is a dearth of literature on how teachers go about selecting technologies for use in the science classroom. Teachers can feel ill prepared to assess the range of available choices and might feel pressured and somewhat overwhelmed by the avalanche of new developments thrust before them in marketing literature and teaching journals. The consequences of making bad decisions are costly in terms of money, time and teacher confidence. Additionally, no research to date has identified what technologies science teachers use on a regular basis, and whether some purchased technologies have proven too problematic, preventing their sustained use and possible wider adoption.

The primary aim of this study was to provide research-based guidance to teachers to aid their decision-making in choosing technologies for the science classroom. The study unfolded in several phases. The first phase of the project involved survey and interview data from teachers in relation to the technologies they currently use in their science classrooms and the frequency of their use. These data were coded and analysed using the grounded theory approach of Corbin and Strauss, and resulted in the development of a PETTaL model that captured the salient factors of the data. This model incorporated usability theory from the human-computer interaction literature, as well as education theory and models such as Mishra and Koehler’s (2006) TPACK model, where the grounded data indicated these issues. The PETTaL model identifies Power (school management, syllabus etc.), Environment (classroom/learning setting), Teacher (personal characteristics, experience, epistemology), Technology (usability, versatility etc.) and Learners (academic ability, diversity, behaviour etc.) as fields that can impact the use of technology in science classrooms.

The PETTaL model was used to create a Predictive Evaluation Tool (PET): a tool designed to assist teachers in choosing technologies, particularly for science teaching and learning. The evolution of the PET was cyclical (employing an agile development methodology), involving repeated testing with in-service and pre-service teachers at each iteration and incorporating their comments in subsequent versions. Once no new suggestions were forthcoming, the PET was tested with eight in-service teachers, and the results showed that the PET outcomes obtained by (experienced) teachers concurred with their instinctive evaluations. They felt the PET would be a valuable tool when considering new technology, and that it would be particularly useful as a means of communicating perceived value between colleagues and between budget holders and requestors during the acquisition process. It is hoped that the PET could make the tacit knowledge acquired by experienced teachers about technology use in classrooms explicit to novice teachers. Additionally, the PET could be used as a research tool to discover a teacher’s professional development needs. Therefore, the outcomes of this study can aid a teacher in the process of selecting educationally productive and sustainable new technology for their science classrooms. This study has produced an instrument for assisting teachers in the decision-making process associated with the use of new technologies for the science classroom. The instrument is generic in that it can be applied to all subject areas.

Further, this study has produced a powerful model that extends the TPACK model, which is currently extensively employed to assess teachers’ use of technology in the classroom. The PETTaL model, grounded in data from this study, responds to calls in the literature for TPACK’s further development. As a theoretical model, PETTaL has the potential to serve as a framework for the development of a teacher’s reflective practice (either self-evaluation or critical evaluation of observed teaching practices). Additionally, PETTaL has the potential to aid the formulation of a teacher’s personal professional development plan. It will be the basis for further studies in this field.
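To make the idea of an evaluation tool built on the PETTaL fields concrete, here is a purely illustrative weighted-rubric sketch. The weights, rating scale and function name are hypothetical; the actual PET items and scoring procedure are defined in the thesis and are not reproduced here.

# Hypothetical weights over the PETTaL fields and a 1-5 rating scale.
WEIGHTS = {"power": 0.15, "environment": 0.15, "teacher": 0.20,
           "technology": 0.30, "learners": 0.20}

def pet_style_score(ratings):
    # ratings: dict mapping each PETTaL field to a 1-5 rating.
    return sum(WEIGHTS[field] * ratings[field] for field in WEIGHTS)

print(pet_style_score({"power": 4, "environment": 3, "teacher": 5,
                       "technology": 4, "learners": 4}))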

Relevance: 100.00%

Abstract:

BACKGROUND Inconsistencies in research findings on the impact of the built environment on walking across the life course may be methodologically driven. Commonly used methods to define 'neighbourhood', from which built environment variables are measured, may not accurately represent the spatial extent over which the behaviour in question occurs. This paper aims to provide new methods for spatially defining 'neighbourhood' based on how people use their surrounding environment.

RESULTS Informed by Global Positioning System (GPS) tracking data, several alternative neighbourhood delineation techniques were examined (i.e., variable width, convex hull and standard deviation buffers). Compared with traditionally used buffers (i.e., circular and polygon network buffers), differences were found in the built environment characteristics within the newly created 'neighbourhoods'. Model fit statistics indicated that exposure measures derived from the alternative buffering techniques provided a better fit when examining the relationship between land use and walking for transport or leisure.

CONCLUSIONS This research identifies how changes in the spatial extent from which built environment measures are derived may influence observed associations with walking behaviour. Buffer size and orientation influence the relationship between built environment measures and walking for leisure in older adults. The use of GPS data proved suitable for re-examining operational definitions of neighbourhood.
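A minimal sketch contrasting two of the buffer ideas discussed above — an activity-space neighbourhood derived from GPS fixes (convex hull buffer) versus a traditional circular buffer around the home — using shapely. The coordinates and buffer distances are made up for illustration and do not reflect the paper's data or parameter choices.

from shapely.geometry import MultiPoint, Point

# A handful of made-up GPS fixes in projected coordinates (metres).
gps_fixes = [(0, 0), (120, 340), (400, 80), (650, 500), (300, 700)]
home = Point(0, 0)

# Activity-space neighbourhood: convex hull of the fixes, buffered by 50 m.
convex_hull_buffer = MultiPoint(gps_fixes).convex_hull.buffer(50)
# Traditional neighbourhood: a 500 m circular buffer around the home location.
circular_buffer = home.buffer(500)

print(round(convex_hull_buffer.area), round(circular_buffer.area))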

Relevance: 100.00%

Abstract:

A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. An extension of the CAR model is proposed in this article in which the selection of the neighborhood depends on unknown parameter(s). This extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model shows flexibility in accurately estimating covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data concerning radioactive contamination of the soil in Switzerland after the Chernobyl accident.
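For reference, a standard CAR conditional specification is sketched below in LaTeX, together with one illustrative way the neighborhood weights can be made to depend on an unknown parameter, in the spirit of the stochastic-neighborhood extension described above. The notation is generic; the article's exact parameterisation may differ.

% Standard CAR conditional specification (notation illustrative):
\phi_i \mid \phi_{-i} \sim \mathcal{N}\!\left(
  \rho \, \frac{\sum_{j \neq i} w_{ij}\,\phi_j}{w_{i+}},\;
  \frac{\sigma^2}{w_{i+}} \right), \qquad w_{i+} = \sum_{j \neq i} w_{ij}.

% One illustrative stochastic-neighborhood choice: weights depend on an unknown
% distance threshold \delta, estimated jointly with the other parameters:
w_{ij}(\delta) = \mathbf{1}\{ d_{ij} \le \delta \}.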