Abstract:
The field of destination image has been widely discussed in the destination literature since the early 1970s (see Mayo, 1973). However, the extent to which travel context impacts on an individual’s destination image evaluation, and therefore destination choice, has received scant attention (Hu & Ritchie, 1993). This study, utilising expectancy-value theory, sought to elicit salient destination attributes from consumers across two travel contexts: short-break holidays and longer getaways. Attributes elicited using the Repertory Test technique as salient for short-break holidays were consistent with those elicited for longer getaways. While this study was limited to Brisbane’s near-home destinations, the results will be of interest to destination marketers and researchers interested in the challenge of positioning a destination in diverse markets.
Abstract:
The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
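As an illustration of the kind of 2D topological predicate described in this abstract, the following minimal Python sketch checks overlap and containment between element footprints; the class and field names are hypothetical, and this is not the authors' XQuery implementation, which operates directly on ifcXML data.

```python
# Illustrative sketch only: a minimal 2D topological check of the kind the
# abstract describes (not the authors' XQuery predicates). Footprints are
# modelled as axis-aligned bounding boxes extracted from element placements.
from dataclasses import dataclass

@dataclass
class Footprint:
    element_id: str   # hypothetical identifier, e.g. an element GlobalId
    min_x: float
    min_y: float
    max_x: float
    max_y: float

def overlaps(a: Footprint, b: Footprint) -> bool:
    """True if the two 2D footprints intersect (share any area or edge)."""
    return (a.min_x <= b.max_x and b.min_x <= a.max_x and
            a.min_y <= b.max_y and b.min_y <= a.max_y)

def contains(outer: Footprint, inner: Footprint) -> bool:
    """True if `inner` lies entirely within `outer` (e.g. equipment in a room)."""
    return (outer.min_x <= inner.min_x and inner.max_x <= outer.max_x and
            outer.min_y <= inner.min_y and inner.max_y <= outer.max_y)

# Example: does a crane pad footprint clash with a laydown area?
pad = Footprint("crane_pad", 0.0, 0.0, 5.0, 5.0)
store = Footprint("laydown_area", 4.0, 4.0, 10.0, 10.0)
print(overlaps(pad, store))   # True: the footprints clash
print(contains(store, pad))   # False: the pad is not inside the laydown area
```

In the paper's setting such predicates are expressed as custom XQuery functions over ifcXML and other spatial data rather than in a general-purpose language; the sketch only conveys the underlying geometric test.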
Abstract:
The GameFlow model strives to be a general model of player enjoyment, applicable to all game genres and platforms. Derived from a general set of heuristics for creating enjoyable player experiences, the GameFlow model has been widely used in evaluating many types of games, as well as non-game applications. However, we recognize that more specific, low-level, and implementable criteria are potentially more useful for designing and evaluating video games. Consequently, the research reported in this paper aims to provide detailed heuristics for designing and evaluating one specific game genre, real-time strategy games. In order to develop these heuristics, we conducted a grounded theoretical analysis of a set of professional game reviews and structured the resulting heuristics using the GameFlow model. A selection of the resulting 165 heuristics is presented in this paper and discussed with respect to key evaluations of the GameFlow model.
Abstract:
The main purpose of this longitudinal study was to investigate the role of motivational climates, perceived competence and motivational regulations as antecedents of self-reported physical activity during junior high school years. The participants included 237 Finnish students (101 girls, 136 boys) who were 13 years old at the first stage of the study. Students completed the motivational climate and perceived competence questionnaires at Grade 7, the motivation towards physical education questionnaire at Grade 8, and the self-reported physical activity questionnaire at Grade 9. A path analysis revealed a path from task-involving motivational climate via perceived competence and intrinsic motivation to self-reported physical activity. Perceived competence and intrinsic motivation were statistically significant mediators between task-involving motivational climate and self-reported physical activity. This finding supports the four-stage causal sequence model of motivation.
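To make the four-stage causal sequence concrete, the following generic path equations (symbols are illustrative, not the coefficients reported in the study) show how the mediated effect arises as a product of path coefficients:

```latex
\begin{align*}
  \text{Competence}_{\,G7}          &= a\,\text{TaskClimate}_{\,G7} + e_1,\\
  \text{IntrinsicMotivation}_{\,G8} &= b\,\text{Competence}_{\,G7} + e_2,\\
  \text{PhysicalActivity}_{\,G9}    &= c\,\text{IntrinsicMotivation}_{\,G8} + e_3,
\end{align*}
```

so the indirect effect of a task-involving climate on self-reported physical activity is the product $a \times b \times c$, which is what the mediation test assesses.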
Abstract:
Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, thus finding these models and adapting them may be more effective and less error-prone than developing them from scratch. Since process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. To make our approach more applicable, we consider the semantic similarity between labels. Experiments are conducted to demonstrate that our approach is efficient.
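As a minimal sketch of index-based query evaluation with a semantic step (the synonym map and repository are hypothetical, and the authors' index structures are more sophisticated), the following Python builds an inverted index over canonicalised task labels and answers a query by set intersection rather than scanning every model:

```python
# Minimal illustration of an inverted index over task labels with a crude
# semantic step (a hand-written synonym map). Not the authors' indexes.
from collections import defaultdict

SYNONYMS = {"check": "verify", "examine": "verify"}  # hypothetical mapping

def canonical(label: str) -> str:
    word = label.strip().lower()
    return SYNONYMS.get(word, word)

def build_index(models: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each canonical task label to the set of model ids containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for model_id, labels in models.items():
        for label in labels:
            index[canonical(label)].add(model_id)
    return index

def query(index: dict[str, set[str]], labels: list[str]) -> set[str]:
    """Return models containing all query labels (up to synonym rewriting)."""
    sets = [index.get(canonical(l), set()) for l in labels]
    return set.intersection(*sets) if sets else set()

repo = {"order_process": ["receive order", "check", "ship goods"],
        "claim_process": ["receive claim", "verify", "pay out"]}
idx = build_index(repo)
print(query(idx, ["verify"]))   # {'order_process', 'claim_process'}
```

Replacing the hand-written synonym map with a proper semantic similarity measure over labels is where the approach described in the abstract goes beyond this toy example.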
Abstract:
All civil and private aircraft are required to comply with the airworthiness standards set by their national airworthiness authority and throughout their operational life must be in a condition of safe operation. Aviation accident data shows that over 20% of all fatal accidents in aviation are due to airworthiness issues, specifically aircraft mechanical failures. Ultimately it is the responsibility of each registered operator to ensure that their aircraft remain in a condition of safe operation, and this is done through both effective management of airworthiness activities and the effective programme governance of safety outcomes. Typically, the projects within these airworthiness management programmes are focused on acquiring, modifying and maintaining the aircraft as a capability supporting the business. Programme governance provides the structure through which the goals and objectives of airworthiness programmes are set along with the means of attaining them. Whilst the principal causes of failures in many programmes can be traced to inadequate programme governance, many of the failures in large-scale projects can have their root causes in the organizational culture and more specifically in the organizational processes related to decision-making. This paper examines the primary theme of project and programme-based enterprises, and introduces a model for measuring organizational culture in airworthiness management programmes using measures drawn from 211 respondents in Australian airline programmes. The paper describes the theoretical perspectives applied to modifying an original model to specifically focus it on measuring the organizational culture of programmes for managing airworthiness; identifying the most important factors needed to explain the relationship between the measures collected, and providing a description of the nature of these factors. The paper concludes by identifying a model that best describes the organizational culture data collected from seven airworthiness management programmes.
Abstract:
This paper examines the role of first aid training in increasing adolescent helping behaviours when taught in a school-based injury prevention program, Skills for Preventing Injury in Youth (SPIY). The research involved the development and application of an extended Theory of Planned Behaviour (TPB), including “behavioural willingness in a fight situation,” “first aid knowledge” and “perceptions of injury seriousness”, to predict the relationship between participation in SPIY and helping behaviours when a friend is injured in a fight. From 35 Queensland high schools, 2500 Year 9 students (mean age = 13.5, 40% male) completed surveys measuring their attitudes, perceived behavioural control, subjective norms and behavioural intention, from the TPB, and added measures of behavioural willingness in a fight situation, perceptions of injury seriousness and first aid knowledge, to predict helping behaviours when a friend is injured in a fight. It is expected that the TPB will significantly contribute to understanding the relationship between participation in SPIY and helping behaviours when a friend is injured in a fight. Further analyses will determine whether the extension of the model significantly increases the variance explained in helping behaviours. The findings of this research will provide insight into the critical factors that may increase adolescent bystanders’ actions in injury situations.
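The planned test of whether the extended model explains additional variance can be illustrated with a hierarchical regression sketch; the data below are simulated and the variable names hypothetical, so this shows only the shape of the analysis, not the study's results.

```python
# Illustrative sketch (simulated data): does adding the extension variables to
# the TPB predictors significantly increase the variance explained?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
tpb = rng.normal(size=(n, 4))        # attitudes, PBC, subjective norms, intention
extension = rng.normal(size=(n, 3))  # willingness, injury seriousness, first aid knowledge
helping = (tpb @ np.array([0.3, 0.2, 0.1, 0.4])
           + extension @ np.array([0.2, 0.1, 0.15])
           + rng.normal(size=n))

base = sm.OLS(helping, sm.add_constant(tpb)).fit()
full = sm.OLS(helping, sm.add_constant(np.hstack([tpb, extension]))).fit()

f_stat, p_value, df_diff = full.compare_f_test(base)   # R-squared change F-test
print(f"R2 change: {full.rsquared - base.rsquared:.3f}, F={f_stat:.2f}, p={p_value:.4f}")
```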
Abstract:
Speaker diarization is the process of annotating an input audio with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology, through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meetings audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed, to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single threshold based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed, to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task.
Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
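As a generic illustration of the clustering criterion referred to above (the notation is ours, not the thesis' exact derivation), a Bayes factor for deciding whether two speech segments come from one speaker marginalises over the Gaussian speaker-model parameters instead of relying on point estimates:

```latex
\[
  B_{ij} \;=\;
  \frac{\displaystyle \int p(\mathbf{X}_i,\mathbf{X}_j \mid \theta)\,p(\theta)\,\mathrm{d}\theta}
       {\displaystyle \int p(\mathbf{X}_i \mid \theta_i)\,p(\theta_i)\,\mathrm{d}\theta_i
        \int p(\mathbf{X}_j \mid \theta_j)\,p(\theta_j)\,\mathrm{d}\theta_j}\,,
\]
```

with a merge performed when $B_{ij}$ exceeds a threshold. Marginalising over $\theta$ accounts for the unreliability of models estimated from short segments and provides a natural entry point for prior information about the audio.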
Abstract:
Heteroatom doping on the edge of graphene may serve as an effective way to tune chemical activity of carbon-based electrodes with respect to charge carrier transfer in an aqueous environment. In a step towards developing mechanistic understanding of this phenomenon, we explore herein mechanisms of proton transfer from aqueous solution to pristine and doped graphene edges utilizing density functional theory. Atomic B-, N-, and O-doped edges as well as the native graphene are examined, displaying varying proton affinities and effective interaction ranges with the H3O+ charge carrier. Our study shows that the doped edges characterized by more dispersive orbitals, namely boron and nitrogen, demonstrate more energetically favourable charge carrier exchange compared with oxygen, which features more localized orbitals. Extended calculations are carried out to examine proton transfer from the hydronium ion in the presence of explicit water, with results indicating that the basic mechanistic features of the simpler model are unchanged.
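One common way to quantify the favourability of the charge-carrier exchange discussed here is a proton-transfer reaction energy; the expression below is a generic definition given for illustration, not necessarily the exact quantity evaluated in the paper:

```latex
\[
  \Delta E_{\mathrm{PT}}
  \;=\; \bigl[\,E(\text{edge--H}) + E(\mathrm{H_2O})\,\bigr]
  \;-\; \bigl[\,E(\text{edge}) + E(\mathrm{H_3O^+})\,\bigr],
\]
```

where a more negative $\Delta E_{\mathrm{PT}}$ indicates more favourable transfer of the proton from the hydronium ion to the pristine or doped edge site.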
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often empirically determined. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Missed detections of incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution this missed-detection rate is known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous, and the criterion table for the ratio test is computed from extensive data simulations. Real-time users can determine the ratio test criterion by looking it up in this table. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that, with a proper stochastic model, the fixed failure rate approach is a more reasonable ambiguity validation method.
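The ratio test mentioned in this abstract has a standard form, reproduced below in generic notation for illustration:

```latex
\[
  \text{accept } \check{a}_1 \quad\Longleftrightarrow\quad
  \frac{\lVert \hat{a}-\check{a}_2 \rVert^{2}_{Q_{\hat{a}}}}
       {\lVert \hat{a}-\check{a}_1 \rVert^{2}_{Q_{\hat{a}}}} \;\geq\; c,
\]
```

where $\hat{a}$ is the float ambiguity vector, $\check{a}_1$ and $\check{a}_2$ are the best and second-best integer candidates, and $Q_{\hat{a}}$ is the float ambiguity covariance matrix. In the fixed failure rate approach the threshold $c$ is not an empirical constant but is chosen, for the given model strength, so that the probability of accepting an incorrect integer vector stays at the prescribed failure rate.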
Abstract:
Universities are increasingly challenged by the emerging global higher education market, facilitated by advances in Information and Communication Technologies (ICT). This requires them to reconsider their mission and direction in order to function effectively and efficiently, and to be responsive to changes in their environment. In the face of increasing demands and competitive pressures, universities, like other companies, seek to continuously innovate and improve their performance. Universities are considering co-operating or sharing, both internally and externally, in a wide range of areas to achieve cost effectiveness and improvements in performance. Shared services are an effective model for re-organizing to reduce costs, increase quality and create new capabilities. Shared services are not limited to the Higher Education (HE) sector. Organizations across different sectors are adopting shared services, in particular for support functions such as Finance, Accounting, Human Resources and Information Technology. While shared services has been around for more than three decades, commencing in the 1970s in the banking sector and subsequently being adopted by other sectors, it remains an under-researched domain, with little consensus on issues even as basic as defining what shared services is. Moreover, the interest in shared services within Higher Education is a global phenomenon. This study on shared services is situated within the Higher Education sector of Malaysia, and originated as an outcome of a national project (2005–2007) conducted by the Ministry of Higher Education (MOHE) entitled "Knowledge, Information Communication Technology Strategic Plan (KICTSP) for Malaysian Public Higher Education", in which progress towards more collaboration via shared services was a key recommendation. The study’s primary objective was to understand the nature and potential of ICT shared services, in particular in the Malaysian HE sector, by laying a foundation in terms of definitions, typologies and a research agenda, and by deriving theoretically based conceptualisations of the potential benefits of shared services, the factors for success, and the issues involved in pursuing shared services. The study embarked on this objective with a literature review and pilot case study as a means to further define the context of the study, given the current under-researched status of ICT shared services and of shared services in Higher Education. This context definition phase revealed a range of unaddressed issues, including a lack of common understanding of what shared services are, how they are formed, what objectives they fulfil and who is involved. The study thus embarked on a further investigation of a more foundational nature with an exploratory phase that aimed to address these gaps, where a detailed archival analysis of shared services literature within the IS context was conducted to better understand shared services from an IS perspective. The IS literature on shared services was analysed in depth to report on the current status of shared services research in the IS domain; in particular, definitions, objectives, stakeholders, the notion of sharing, theories used, and research methods applied were analysed, which provided a firmer base for this study’s design. The study also conducted a detailed content analysis of 36 cases (globally) of shared services implementations in the HE sector to better understand how shared services are structured within the HE sector and what is being shared.
The results of the context definition and exploratory phases formed a firm basis for the multiple case studies phase, which was designed to address the primary goals of this study (as presented above). Three case sites within the Malaysian HE sector were included in this analysis, resulting in empirically supported theoretical conceptualisations of shared services success factors, issues and benefits. A range of contributions are made through this study. First, the detailed archival analysis of shared services in Information Systems (IS) demonstrated the dearth of research on shared services within Information Systems. While the existing literature was synthesised to contribute towards an improved understanding of shared services in the IS domain, the areas that remain under-developed and require further exploration are identified and presented as a proposed research agenda for the field. This study also provides theoretical considerations and methodological guidelines to support the research agenda and the conduct of better empirical research in this domain. A number of literature-based a priori frameworks (e.g. on the forms of sharing and on shared services stakeholders) are derived in this phase, contributing to practice and research with early conceptualisations of critical aspects of shared services. Furthermore, the comprehensive archival analysis design presented and executed here exemplifies a systematic, pre-defined and tool-supported method to extract, analyse and report on literature, and is documented as guidelines that can be applied to other similar literature analyses, with particular attention to supporting novice researchers. Second, the content analysis of 36 shared services initiatives in the Higher Education sector presented eight different types of structural arrangements for shared services, as observed in practice, and the salient dimensions along which those types can be usefully differentiated. Each of the eight structural arrangement types is defined and demonstrated through case examples, with further descriptive details and insights into what is shared and how the sharing occurs. This typology, grounded in secondary empirical evidence, can serve as a useful analytical tool for researchers investigating the shared services phenomenon further, and for practitioners considering the introduction or further development of shared services. Finally, the multiple case studies conducted in the Malaysian Higher Education sector provided a further empirical basis to instantiate the conceptual frameworks and typology derived from the prior phases, and yielded an empirically supported (i) framework of issues and challenges, (ii) preliminary theory of shared services success, and (iii) benefits framework for shared services in the Higher Education sector.
Abstract:
Skin is the largest and, arguably, the most important organ of the body. It is a complex and multi-dimensional tissue, thus making it essentially impossible to fully model in vitro in conventional 2-dimensional culture systems. In view of this, rodents or pigs are utilised to study wound healing therapeutics or to investigate the biological effects of treatments on skin. However, there are many differences between the wound healing processes in rodents and humans (contraction vs. re-epithelialisation), and there are also ethical issues associated with animal testing for scientific research. Therefore, the development of human skin equivalent (HSE) models from surgically discarded human skin has become an important area of research. The studies in this thesis compare, for the first time, native human skin and the epidermogenesis process in an HSE model. The HSE was reported to be a comparable model for human skin in terms of expression and localisation of key epidermal cell markers. This validated HSE model was utilised to study the potential wound healing therapeutic, hyperbaric oxygen (HBO) therapy. There is a significant body of evidence suggesting that lack of cutaneous oxygen results in and potentiates the chronic, non-healing wound environment. Although the evidence is anecdotal, HBO therapy has displayed positive effects on re-oxygenation of chronic wounds and the clinical outcomes suggest that HBO treatment may be beneficial. Therefore, the HSE was subjected to a daily clinical HBO regime and assessed in terms of keratinocyte migration, proliferation, differentiation and epidermal thickening. HBO treatment was observed to increase epidermal thickness, in particular stratum corneum thickening, but it did not alter the expression or localisation of standard epidermal cell markers. In order to elucidate the mechanistic changes occurring in response to HBO treatment in the HSE model, gene microarrays were performed, followed by qRT-PCR of selected genes which were differentially regulated in response to HBO treatment. The biological diversity of the HSEs created from individual skin donors, however, overrode the differences in gene expression between treatment groups. Network analysis of functional changes in the HSE model revealed general trends consistent with normal skin growth and maturation. As a more robust and longer-term study of these molecular changes, protein localisation and expression were investigated in sections from the HSEs undergoing epidermogenesis in response to HBO treatment. These proteins were CDCP1, Metallothionein, Kallikrein (KLK) 1, KLK7 and early growth response 1. While the protein expression within the HSE models exposed to HBO treatment was not consistent in all HSEs derived from all skin donors, this is the first study to detect and compare both KLK1 and CDCP1 protein expression in both an HSE model and native human skin. Furthermore, this is the first study to provide such an in-depth analysis of the effect of HBO treatment on an HSE model. The data presented in this thesis demonstrate high levels of variation between individuals in their response to HBO treatment, consistent with the clinical variation that is currently observed.
Abstract:
Travelling wave phenomena are observed in many biological applications. Mathematical theory of standard reaction-diffusion problems shows that simple partial differential equations exhibit travelling wave solutions with constant wavespeed and such models are used to describe, for example, waves of chemical concentrations, electrical signals, cell migration, waves of epidemics and population dynamics. However, as in the study of cell motion in complex spatial geometries, experimental data are often not consistent with constant wavespeed. Non-local spatial models have successfully been used to model anomalous diffusion and spatial heterogeneity in different physical contexts. In this paper, we develop a fractional model based on the Fisher-Kolmogoroff equation and analyse it for its wavespeed properties, attempting to relate the numerical results obtained from our simulations to experimental data describing enteric neural crest-derived cells migrating along the intact gut of mouse embryos. The model proposed essentially combines fractional and standard diffusion in different regions of the spatial domain and qualitatively reproduces the behaviour of neural crest-derived cells observed in the caecum and the hindgut of mouse embryos during in vivo experiments.
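A schematic form of the space-fractional Fisher-Kolmogoroff model described above is shown below; the operator and parameters are written generically for illustration, and the paper's exact formulation may differ:

```latex
\[
  \frac{\partial u}{\partial t}
  \;=\; D\,\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}
  \;+\; k\,u\,(1-u),
  \qquad 1 < \alpha \leq 2,
\]
```

with $\alpha = 2$ recovering the standard Fisher-Kolmogoroff equation; using $\alpha < 2$ in part of the spatial domain and $\alpha = 2$ elsewhere is how the model combines anomalous and standard diffusion.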
Abstract:
Density functional theory (DFT) is a powerful approach to electronic structure calculations in extended systems, but suffers currently from inadequate incorporation of long-range dispersion, or Van der Waals (VdW) interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, allows a reasonable physisorption energy and equilibrium distance to be obtained for H2 on a model graphite surface. The VdW-corrected DFT calculation for an (8, 8) nanotube bundle reproduces accurately the experimental lattice constant. For H2 inside or outside an (8, 8) SWCNT, we find the binding energies are respectively higher and lower than that on a graphite surface, correctly predicting the well known curvature effect. We conclude that the VdW correction is a very effective method for implementing DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed, or do not capture the necessary range of both dispersion and bonding interactions.
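An empirical dispersion correction of the kind described above typically takes the following damped London form; the symbols are generic, and the specific coefficients and damping parameters used in the study may differ:

```latex
\[
  E_{\mathrm{disp}}
  \;=\; -\,s_{6} \sum_{i<j} \frac{C_{6}^{ij}}{r_{ij}^{6}}\,
  f_{\mathrm{damp}}(r_{ij}),
  \qquad
  f_{\mathrm{damp}}(r) \;=\; \frac{1}{1 + e^{-d\,(r/R^{0}_{ij} - 1)}},
\]
```

so the $r^{-6}$ attraction is recovered at long range while the damping function switches the correction off at short range, where the underlying functional already describes the bonding.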
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom-scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field-size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses in order to derive scattered-primary ratios (SPRs) directly from simulation results. This allows us to make a more-accurate calibration of the EPID, and to qualify the appositeness of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system simulated using EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size-dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternate evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP, to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. 
Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to simulated phantom density make large differences to resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. Simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
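The calibration issue discussed in this abstract rests on a simple decomposition of the detector signal into primary and scattered components; in generic notation (illustrative, not the authors' exact formulation):

```latex
\[
  T_{\mathrm{meas}} \;=\; \frac{P + S}{I_{0}}
  \;=\; \frac{P}{I_{0}}\,\bigl(1 + \mathrm{SPR}\bigr),
  \qquad \mathrm{SPR} \;=\; \frac{S}{P},
\]
```

so recovering the primary transmission $P/I_{0}$, on which a radiological-thickness calibration is based, requires the scatter-to-primary ratio for each field size and phantom thickness, whether it is taken from the linear theoretical model or, as here, from full Monte Carlo simulation.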