9 results for model categories homotopy theory quillen functor equivalence derived adjunction cofibrantly generated

at Aston University Research Archive


Relevance: 100.00%

Abstract:

Purpose: To assess the compliance of daily disposable contact lens (DDCL) wearers with replacing lenses at the manufacturer-recommended replacement frequency, and to evaluate the ability of two health behavioural theories (HBTs), the Health Belief Model (HBM) and the Theory of Planned Behaviour (TPB), to predict compliance. Method: A multi-centre survey was conducted using a questionnaire completed anonymously by contact lens wearers during the purchase of DDCLs. Results: Three hundred and fifty-four questionnaires were returned. The sample comprised 58.5% females and 41.5% males (mean age 34 ± 12 years). Twenty-three percent of respondents were non-compliant with the manufacturer-recommended replacement frequency (re-using DDCLs at least once). The main reason given for re-using DDCLs was "to save money" (35%). Prediction of compliance behaviour (past behaviour or future intentions) by the two HBTs was investigated through logistic regression analysis: both TPB factors (subjective norms and perceived behavioural control) were significant (p < 0.01); the HBM was less predictive, with only severity (past behaviour and future intentions) and perceived benefit (past behaviour only) as significant factors (p < 0.05). Conclusions: Non-compliance with DDCL replacement is widespread, affecting 1 out of 4 Italian wearers. Results from the TPB model show that involving persons socially close to the wearers (subjective norms) and improving the procedure for behavioural control of daily replacement (behavioural control) are of paramount importance in improving compliance. With reference to the HBM, it is important to warn DDCL wearers of the severity of a contact-lens-related eye infection, and to underline the possibility of its prevention.
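The abstract's key analysis, a logistic regression of compliance on the TPB factors, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's dataset; the predictor coding and effect sizes are assumptions.

```python
import math, random

# Hypothetical sketch: logistic regression of compliance (1 = compliant) on
# two TPB predictors, subjective norms (SN) and perceived behavioural
# control (PBC). Data are synthetic with assumed effect sizes.
random.seed(0)

def simulate(n=354):                       # same sample size as the survey
    rows = []
    for _ in range(n):
        sn, pbc = random.gauss(0, 1), random.gauss(0, 1)
        logit = -1.2 + 0.8 * sn + 1.0 * pbc   # assumed true coefficients
        p = 1 / (1 + math.exp(-logit))
        rows.append((sn, pbc, 1 if random.random() < p else 0))
    return rows

def fit_logistic(data, lr=0.1, epochs=500):
    # Plain stochastic gradient ascent on the log-likelihood.
    b0 = b1 = b2 = 0.0
    for _ in range(epochs):
        for sn, pbc, y in data:
            p = 1 / (1 + math.exp(-(b0 + b1 * sn + b2 * pbc)))
            err = y - p
            b0 += lr * err
            b1 += lr * err * sn
            b2 += lr * err * pbc
    return b0, b1, b2

b0, b1, b2 = fit_logistic(simulate())
print(round(b1, 2), round(b2, 2))  # both recovered coefficients positive
```

In the paper itself, significance of each coefficient (the reported p-values) would come from standard errors of the fitted model rather than this bare point estimate.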

Relevance: 100.00%

Abstract:

In spite of the increasing significance of broadband Internet, few research papers explicitly address issues pertaining to its adoption and postadoption. Previous research on broadband has mainly focused on supply-side aspects at the national level, ignoring the importance of the demand side, which may involve looking more deeply into use, as well as the factors affecting organizational and individual uptake. In an attempt to fill this gap, the current study empirically verifies an integrated theoretical model, comprising the theory of planned behavior and the IS continuance model, to examine the factors influencing the broadband Internet adoption and postadoption behavior of some 1,500 organizations in Singapore. Overall, our results show strong support for the integrated model, providing insight into the influential factors. At the adoption stage, perceived behavioral control has the greatest impact on behavioral intention. Our findings also suggest that, compared to attitude, subjective norms and perceived behavioral control more significantly affect the broadband Internet adoption decision. At the postadoption stage, intention is no longer the only determinant of broadband Internet continuance; rather, initial usage was found to significantly affect continuance.

Relevance: 100.00%

Abstract:

In the processing industries, particulate materials are often in the form of powders, which are themselves agglomerations of much smaller particles. During powder processing operations, agglomerate degradation occurs primarily as a result of collisions between agglomerates and between agglomerates and the process equipment. Due to the small size of the agglomerates and the very short duration of the collisions, it is currently not possible to obtain sufficiently detailed quantitative information from real experiments to provide a sound theoretically based strategy for designing particles to prevent or guarantee breakage. With the aid of computer-simulated experiments, however, the micro-examination of these short-duration dynamic events becomes possible. This thesis presents the results of computer-simulated experiments on a 2D monodisperse agglomerate in which the algorithms used to model the particle-particle interactions have been derived from contact mechanics theories and necessarily incorporate contact adhesion. A detailed description of the theoretical background is included in the thesis. The agglomerate impact simulations show three types of behaviour depending on whether the initial impact velocity is high, moderate or low. High-velocity impacts produce extensive plastic deformation, which leads to subsequent shattering of the agglomerate. At moderate impact velocities, semi-brittle fracture is observed, and there is a threshold velocity below which the agglomerate bounces off the wall with little or no visible damage. The micromechanical processes controlling these different types of behaviour are discussed and illustrated by computer graphics. Further work is reported to demonstrate the effect of impact velocity and bond strength on the damage produced. Empirical relationships between impact velocity, bond strength and damage are presented, and their relevance to attrition and comminution is discussed. The particle size distribution curves resulting from the agglomerate impacts are also provided.

Computer-simulated diametrical compression tests on the same agglomerate have also been carried out. Simulations were performed for different platen velocities and different bond strengths. The results show that high platen velocities produce extensive plastic deformation and crushing, whereas low platen velocities produce semi-brittle failure in which cracks propagate from the platens inwards towards the centre of the agglomerate. The results are compared with those of the agglomerate impact tests in terms of work input, applied velocity and damage produced.
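The adhesive contact-mechanics laws the thesis uses are far richer than this, but a minimal spring-dashpot contact step illustrates the kind of time-stepped simulation involved and reproduces the qualitative finding that higher impact velocity produces deeper deformation. All parameter values are invented for illustration.

```python
# Minimal sketch of a normal spring-dashpot contact, a much-simplified
# stand-in for the adhesive contact models in the thesis. One particle
# approaches a rigid wall; we integrate the contact force with semi-implicit
# Euler and record the maximum overlap for a given impact velocity.
def max_overlap(v0, k=1e4, c=5.0, m=1e-3, dt=1e-6):
    x, v = 0.0, -v0          # x: gap to the wall (negative = overlap)
    peak = 0.0
    while True:
        f = 0.0
        if x < 0:            # in contact: elastic + viscous force
            f = -k * x - c * v
            peak = max(peak, -x)
        v += (f / m) * dt
        x += v * dt
        if x > 0 and v > 0:  # rebounded and separated from the wall
            return peak

low, high = max_overlap(0.1), max_overlap(1.0)
print(low < high)  # higher impact velocity -> deeper deformation
```

A real DEM code would sum such pairwise forces (plus adhesion and tangential terms) over every contact in the agglomerate at each time step.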

Relevance: 100.00%

Abstract:

The work described in this thesis focuses on the use of a design-of-experiments approach in a multi-well mini-bioreactor to enable the rapid establishment of high-yielding production-phase conditions in yeast, which is an increasingly popular host system in both academic and industrial laboratories. Using green fluorescent protein secreted from the yeast Pichia pastoris, a scalable predictive model of protein yield per cell was derived from 13 sets of conditions, each with three factors (temperature, pH and dissolved oxygen) at three levels, and was directly transferable to a 7 L bioreactor. This was in clear contrast to the situation in shake flasks, where the process parameters cannot be tightly controlled. By further optimising both the accumulation of cell density in batch culture and the fed-batch induction regime, additional yield improvements were found to be additive to the per-cell yield of the model. A separate study also demonstrated that improving biomass improved product yield in a second yeast species, Saccharomyces cerevisiae. Investigations of cell wall hydrophobicity in high-cell-density P. pastoris cultures indicated that cell wall hydrophobin (protein) composition changes with growth phase, cells becoming more hydrophobic in log growth than in the lag or stationary phases, possibly due to an increased occurrence of proteins associated with cell division. Finally, the modelling approach was validated in mammalian cells, showing its flexibility and robustness. In summary, the strategy presented in this thesis has the benefit of reducing process development time in recombinant protein production, directly from bench to bioreactor.
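The abstract reports 13 conditions over three factors at three levels. A Box-Behnken layout with a single centre point gives exactly 13 runs, so, as a hedged illustration (the thesis may have used a different 13-run design), such a layout can be generated as:

```python
from itertools import combinations

# Sketch of a 13-run, three-factor, three-level response-surface design.
# A Box-Behnken layout varies each pair of factors over the four (+/-1)
# corners while holding the third at its mid level, plus centre point(s).
# Whether the thesis used exactly this design is an assumption; the abstract
# states only 13 runs, 3 factors, 3 levels.
def box_behnken(factors=3):
    runs = []
    for i, j in combinations(range(factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * factors
                run[i], run[j] = a, b
                runs.append(tuple(run))
    runs.append((0,) * factors)   # single centre point
    return runs

design = box_behnken()
print(len(design))   # 13 runs: 12 edge points + 1 centre
```

In coded units, the levels -1/0/+1 of each factor would map to low/mid/high settings of temperature, pH and dissolved oxygen; a quadratic response-surface model fitted to the 13 measured yields then predicts the optimum.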

Relevance: 100.00%

Abstract:

Conventional topic models are ineffective for topic extraction from microblog messages, since the lack of structure and context among the posts results in poor message-level word co-occurrence patterns. In this work, we organize microblog posts as conversation trees based on reposting and replying relations, which enriches context information to alleviate data sparseness. Our model generates words according to topic dependencies derived from the conversation structures. Specifically, we differentiate between leader messages, which initiate key aspects of previously focused topics or shift the focus to different topics, and follower messages, which do not introduce any new information but simply echo topics from the messages that they repost or reply to. Our model captures the different extents to which leader and follower messages may contain the key topical words, thus further enhancing the quality of the induced topics. The results of thorough experiments demonstrate the effectiveness of our proposed model.
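The preprocessing step the model depends on, organizing posts into conversation trees via reposting and replying relations, can be sketched as below; the post ids, texts and links are invented for illustration.

```python
from collections import defaultdict

# Toy corpus: id -> (text, id of the reposted/replied-to post, or None).
# Real microblog data would supply these links via repost/reply metadata.
posts = {
    "p1": ("new phone launched", None),
    "p2": ("battery looks weak", "p1"),
    "p3": ("RT: new phone launched", "p1"),
    "p4": ("screen is great though", "p2"),
    "p5": ("unrelated lunch post", None),
}

children = defaultdict(list)
roots = []
for pid, (_, parent) in posts.items():
    if parent is None:
        roots.append(pid)          # a root starts a conversation tree
    else:
        children[parent].append(pid)

def tree(pid, depth=0):
    # Depth-first walk of one conversation tree.
    yield depth, pid
    for child in sorted(children[pid]):
        yield from tree(child, depth + 1)

for root in sorted(roots):
    for depth, pid in tree(root):
        print("  " * depth + pid)
```

The topic model then treats each tree as one context-rich document, with parent-child edges supplying the leader/follower topic dependencies.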

Relevance: 50.00%

Abstract:

Contrast sensitivity improves with the area of a sine-wave grating, but why? Here we assess this phenomenon against contemporary models involving spatial summation, probability summation, uncertainty, and stochastic noise. Using a two-interval forced-choice procedure we measured contrast sensitivity for circular patches of sine-wave gratings with various diameters that were blocked or interleaved across trials to produce low and high extrinsic uncertainty, respectively. Summation curves were steep initially, becoming shallower thereafter. For the smaller stimuli, sensitivity was slightly worse for the interleaved design than for the blocked design. Neither area nor blocking affected the slope of the psychometric function. We derived model predictions for noisy mechanisms and extrinsic uncertainty that was either low or high. The contrast transducer was either linear (c^1.0) or nonlinear (c^2.0), and pooling was either linear or a MAX operation. There was either no intrinsic uncertainty, or it was fixed or proportional to stimulus size. Of these 10 canonical models, only the nonlinear transducer with linear pooling (the noisy energy model) described the main forms of the data for both experimental designs. We also show how a cross-correlator can be modified to fit our results, and provide a contemporary presentation of the relation between summation and the slope of the psychometric function.
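The winning "noisy energy model" (a squaring transducer with linear pooling and fixed late noise) makes a concrete area-summation prediction that is easy to work through; the criterion value below is arbitrary, since only the slope matters.

```python
import math

# Worked example of the noisy energy model's area prediction: the transducer
# squares contrast (response ~ c**2), responses are pooled linearly over the
# stimulus area, and fixed late noise sets a detection criterion.
def threshold(area, criterion=1.0):
    # pooled response = area * c**2, so threshold c = sqrt(criterion / area)
    return math.sqrt(criterion / area)

# Log-log slope of the predicted summation curve between areas 1 and 8:
slope = (math.log10(threshold(8)) - math.log10(threshold(1))) / math.log10(8)
print(round(slope, 2))  # -0.5: threshold falls as the square root of area
```

A square-root summation slope of this kind is steeper than probability summation predicts, which is part of why the energy model can account for the steep initial portion of the measured curves.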

Relevance: 40.00%

Abstract:

This paper develops and tests a learning organization model derived from the HRM and dynamic capability literatures in order to ascertain the model's applicability across divergent global contexts. We define a learning organization as one capable of achieving ongoing strategic renewal, arguing on the basis of dynamic capability theory that the model has three necessary antecedents: HRM focus, developmental orientation and customer-facing remit. Drawing on a sample comprising nearly 6,000 organizations across 15 countries, we show that learning organizations exhibit higher performance than their less learning-inclined counterparts. We also demonstrate that innovation fully mediates the relationship between our conceptualization of the learning organization and organizational performance in 11 of the 15 countries we examined. To our knowledge, this is the first time these questions have been tested in a major cross-global study, and our work contributes to both the HRM and dynamic capability literatures, especially where the focus is the applicability of best-practice parameters across national boundaries.
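The full-mediation finding can be illustrated with a standard partial-correlation check on synthetic standardized data (X = learning-organization score, M = innovation, Y = performance; all effect sizes invented): if X influences Y only through M, the direct effect of X vanishes once M is controlled.

```python
import random

# Synthetic full-mediation setup: X affects Y only through the mediator M.
random.seed(1)
n = 6000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.7 * x + random.gauss(0, 1) for x in X]   # X -> M
Y = [0.6 * m + random.gauss(0, 1) for m in M]   # M -> Y; no direct X -> Y path

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

r_xy, r_xm, r_my = corr(X, Y), corr(X, M), corr(M, Y)
# Standardized partial regression coefficient of X on Y, controlling for M:
direct = (r_xy - r_xm * r_my) / (1 - r_xm ** 2)
print(round(r_xy, 2), round(direct, 2))  # total effect sizeable; direct near 0
```

The paper's mediation test would additionally report significance (e.g. via bootstrapped indirect effects), but the collapse of the direct path is the signature of full mediation.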

Relevance: 40.00%

Abstract:

Building on a previous conceptual article, we present an empirically derived model of network learning: learning by a group of organizations as a group. Based on a qualitative, longitudinal, multiple-method empirical investigation, five episodes of network learning were identified. Treating each episode as a discrete analytic case, a model of network learning was developed through cross-case comparison which reflects the common, critical features of the episodes. The model comprises three conceptual themes relating to learning outcomes and three conceptual themes of learning process. Although closely related to conceptualizations that emphasize the social and political character of organizational learning, the model of network learning is derived from, and specifically for, more extensive networks in which relations among numerous actors may be arm's length or collaborative, and may be expected to change over time.

Relevance: 40.00%

Abstract:

Recent changes to the legislation on chemicals and cosmetics testing call for a change in the paradigm regarding the current 'whole animal' approach for identifying chemical hazards, including the assessment of potential neurotoxins. Accordingly, since 2004, we have worked on the development of an integrated co-culture of post-mitotic, human-derived neurons and astrocytes (NT2.N/A) for use as an in vitro functional central nervous system (CNS) model, and have used it successfully to investigate indicators of neurotoxicity. For this purpose, we used NT2.N/A cells to examine the effects of acute exposure to a range of test chemicals on the cellular release of brain-derived neurotrophic factor (BDNF). It was demonstrated that the release of this protective neurotrophin into the culture medium (above control levels) occurred consistently in response to sub-cytotoxic levels of known neurotoxic, but not non-neurotoxic, chemicals. These increases in BDNF release were quantifiable and statistically significant, and occurred at concentrations below those at which cell death was measurable, which potentially indicates specific neurotoxicity as opposed to general cytotoxicity. The fact that the BDNF immunoassay is non-invasive, and that NT2.N/A cells retain their functionality for a period of months, may make this system useful for repeated-dose toxicity testing, which is of particular relevance to cosmetics testing without the use of laboratory animals. In addition, the production of NT2.N/A cells without the use of animal products, such as fetal bovine serum, is being explored, with the aim of producing a fully humanised cellular model.