953 results for: model categories homotopy theory quillen functor equivalence derived adjunction cofibrantly generated


Relevância: 100.00%

Resumo:

In the processing industries, particulate materials are often in the form of powders, which are themselves agglomerations of much smaller particles. During powder processing operations, agglomerate degradation occurs primarily as a result of collisions between agglomerates and between agglomerates and the process equipment. Due to the small size of the agglomerates and the very short duration of the collisions, it is currently not possible to obtain sufficiently detailed quantitative information from real experiments to provide a sound, theoretically based strategy for designing particles to prevent or guarantee breakage. However, computer-simulated experiments make the micro-examination of these short-duration dynamic events possible. This thesis presents the results of computer-simulated experiments on a 2D monodisperse agglomerate in which the algorithms used to model the particle-particle interactions are derived from contact mechanics theories and necessarily incorporate contact adhesion. A detailed description of the theoretical background is included in the thesis. The results of the agglomerate impact simulations show three types of behaviour depending on whether the initial impact velocity is high, moderate or low. It is demonstrated that high-velocity impacts produce extensive plastic deformation, which leads to subsequent shattering of the agglomerate. At moderate impact velocities semi-brittle fracture is observed, and there is a threshold velocity below which the agglomerate bounces off the wall with little or no visible damage. The micromechanical processes controlling these different types of behaviour are discussed and illustrated by computer graphics. Further work is reported demonstrating the effect of impact velocity and bond strength on the damage produced. Empirical relationships between impact velocity, bond strength and damage are presented, and their relevance to attrition and comminution is discussed. The particle size distribution curves resulting from the agglomerate impacts are also provided. Computer-simulated diametrical compression tests on the same agglomerate have also been carried out. Simulations were performed for different platen velocities and different bond strengths. The results show that high platen velocities produce extensive plastic deformation and crushing. Low platen velocities produce semi-brittle failure in which cracks propagate from the platens inwards towards the centre of the agglomerate. The results are compared with the results of the agglomerate impact tests in terms of work input, applied velocity and damage produced.
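
The particle-particle interaction algorithms referred to above come from contact mechanics with adhesion. As a rough illustration of the kind of computation involved, the sketch below implements one explicit time step of a 2D discrete-element simulation with Hertzian repulsion and a constant pull-off adhesion force; the force model, parameter values and function names are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

def contact_forces(pos, radii, k_n=1e6, f_pulloff=1e-3):
    """Pairwise normal contact forces for 2D discs: Hertzian repulsion
    (~ overlap**1.5) minus a constant JKR-style adhesive pull-off force.
    All parameters are illustrative stand-ins."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:
                normal = d / dist
                fmag = k_n * overlap**1.5 - f_pulloff  # repulsion minus adhesion
                forces[i] -= fmag * normal
                forces[j] += fmag * normal
    return forces

def step(pos, vel, radii, masses, dt=1e-7):
    """One symplectic-Euler integration step of the agglomerate."""
    f = contact_forces(pos, radii)
    vel += dt * f / masses[:, None]
    pos += dt * vel
    return pos, vel
```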

Relevância: 100.00%

Resumo:

The work described in this thesis focuses on the use of a design-of-experiments approach in a multi-well mini-bioreactor to enable the rapid establishment of high-yielding production-phase conditions in yeast, an increasingly popular host system in both academic and industrial laboratories. Using green fluorescent protein secreted from the yeast Pichia pastoris, a scalable predictive model of protein yield per cell was derived from 13 sets of conditions, each with three factors (temperature, pH and dissolved oxygen) at three levels, and was directly transferable to a 7 L bioreactor. This was in clear contrast to the situation in shake flasks, where the process parameters cannot be tightly controlled. By further optimising both the accumulation of cell density in batch and the fed-batch induction regime, additional yield improvement was obtained that was additive to the per-cell yield predicted by the model. A separate study also demonstrated that increasing biomass improved product yield in a second yeast species, Saccharomyces cerevisiae. Investigations of cell wall hydrophobicity in high-cell-density P. pastoris cultures indicated that cell wall hydrophobin (protein) composition changes with growth phase, the wall becoming more hydrophobic in log growth than in the lag or stationary phases, possibly due to an increased occurrence of proteins associated with cell division. Finally, the modelling approach was validated in mammalian cells, showing its flexibility and robustness. In summary, the strategy presented in this thesis has the benefit of reducing process development time in recombinant protein production, directly from bench to bioreactor.
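
Thirteen runs of three factors at three levels (twelve edge points plus one centre point) matches the classic Box-Behnken layout, so the modelling step can be sketched generically as below. This is an illustration under that assumption, not the thesis's code; the response values are random stand-ins for measured yields.

```python
import itertools
import numpy as np

def box_behnken_3():
    """Box-Behnken design for 3 factors in coded units (-1, 0, +1):
    12 edge midpoints + 1 centre point = 13 runs."""
    runs = []
    for a, b in itertools.combinations(range(3), 2):
        for la, lb in itertools.product((-1, 1), repeat=2):
            run = [0, 0, 0]
            run[a], run[b] = la, lb
            runs.append(run)
    runs.append([0, 0, 0])  # centre point
    return np.array(runs, dtype=float)

def quadratic_features(X):
    """Full quadratic response surface: intercept, linear, interaction, square terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

X = box_behnken_3()                              # temperature, pH, dissolved oxygen
y = np.random.default_rng(0).normal(size=13)     # stand-in for measured yields
coef, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
```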

Relevância: 100.00%

Resumo:

Subspaces and manifolds are two powerful models for high-dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated but whose signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When the model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. The reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes the principal angles. This linear transform, termed TRAIT, also preserves specific features in each class and is complementary to the recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
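
SciPy exposes the principal angles between column spaces directly, so the two figures of merit quoted above (product of sines, sum of squared sines) are easy to compute. A minimal sketch with random bases:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))   # basis for a 3-dim subspace of R^20
B = rng.normal(size=(20, 3))   # basis for another 3-dim subspace

theta = subspace_angles(A, B)              # principal angles (descending)
prod_sines = np.prod(np.sin(theta))        # governs error for tiny mismatch
sum_sq_sines = np.sum(np.sin(theta) ** 2)  # governs error for larger mismatch
```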

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data and to suffer degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix. Experiments on benchmark datasets show a clear advantage of the proposed approaches when the training set is small.
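
The first approach (decisions varying smoothly over manifold neighborhoods) is commonly realized as a graph-Laplacian smoothness penalty on the learned outputs. The sketch below is a generic numpy/scikit-learn rendering of such a penalty, assuming a k-nearest-neighbour graph; it is not the dissertation's implementation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_penalty(X, outputs, k=5):
    """tr(F^T L F), proportional to sum_{ij} w_ij ||f_i - f_j||^2 over
    k-NN edges: small values mean the outputs vary smoothly along the
    data manifold."""
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity')
    W = 0.5 * (W + W.T)                 # symmetrize the adjacency
    W = W.toarray()
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    return np.trace(outputs.T @ L @ outputs)
```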

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
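
As a much-simplified illustration of the streaming idea, the following sketch tracks a single slowly varying subspace with an Oja-style stochastic-gradient update and reports each datum's residual energy as an anomaly statistic; the multiscale tree of local subspaces is not reproduced here.

```python
import numpy as np

def track_subspace(stream, d, lr=0.1):
    """Oja-style stochastic update of a d-dimensional orthonormal basis U;
    yields the residual energy of each incoming sample as an anomaly score."""
    U = None
    for x in stream:
        if U is None:  # random initial basis, sized to the first sample
            U, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(len(x), d)))
        proj = U @ (U.T @ x)
        residual = x - proj
        yield float(residual @ residual)       # deviation from learned model
        U += lr * np.outer(residual, U.T @ x)  # gradient step toward x
        U, _ = np.linalg.qr(U)                 # re-orthonormalize
```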

Relevância: 100.00%

Resumo:

Two concepts in rural economic development policy have been the focus of much research and policy action: the identification and support of clusters or networks of firms, and the availability and adoption by rural businesses of Information and Communication Technologies (ICT). From a theoretical viewpoint these policies are based on two contrasting models, with clustering seen as a process of economic agglomeration and ICT-mediated communication as a means of facilitating economic dispersion. The study's conceptual framework is based on four interrelated elements: location, interaction, knowledge, and advantage, together with networks, employed as an operationally and theoretically unifying concept. The research questions are developed in four successive categories: Policy, Theory, Networks, and Method. The questions are approached through a study of two contrasting groups of rural small businesses in West Cork, Ireland: (a) Speciality Foods, and (b) firms in Digital Products and Services. The study combines Social Network Analysis (SNA) with Qualitative Thematic Analysis, using data collected from semi-structured interviews with 58 owners or managers of these businesses. The data comprise relational network data on the firms' connections to suppliers, customers, allies and competitors, together with linked qualitative data on how the firms established connections and how tacit and codified knowledge was sourced and utilised. The research finds that the key characteristics identified in the cluster literature are evident in the sample of Speciality Food businesses, in relation to flows of tacit knowledge, social embedding, and the development of forms of social capital. In particular, the research identified two distinct forms of collective social capital in this network, termed "community" and "reputation". By contrast, the sample of Digital Products and Services businesses does not have the form of a cluster but matches more closely dispersive models, or "chain" structures. Much of the economic and social structure of this set of firms is best explained in terms of "project organisation" and by the operation of an individual, rather than collective, form of "reputation". The rural setting in which these firms are located has made them service-centric, and consequently they rely on ICT-mediated communication to exchange tacit knowledge "at a distance". It is this factor, rather than inputs of codified knowledge, that most strongly influences their operation and their need for high-quality communication technologies. The findings thus have applicability to theory in Economic Geography and to policy and practice in Rural Development. In addition, the research contributes to methodological questions in SNA and to questions about combining quantitative and qualitative methods.
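
For readers unfamiliar with the SNA step, relational data of the kind collected here reduce naturally to a directed graph from which density and centrality measures are read off. A minimal networkx sketch with invented firm names:

```python
import networkx as nx

# Hypothetical relational data: (firm, contact, tie type).
ties = [("FirmA", "SupplierX", "supplier"),
        ("FirmA", "FirmB", "ally"),
        ("FirmB", "CustomerY", "customer"),
        ("FirmC", "FirmA", "competitor")]

G = nx.DiGraph()
for src, dst, kind in ties:
    G.add_edge(src, dst, tie=kind)

density = nx.density(G)              # overall connectedness of the network
centrality = nx.degree_centrality(G) # which firms broker the connections
```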

Relevância: 100.00%

Resumo:

The original Temperley-Lieb algebras, also called regular, appear in numerous two-dimensional statistical lattice models: the Ising, Potts, dimer and Fortuin-Kasteleyn models, among others. The Hilbert space of the quantum Hamiltonian corresponding to each of these models is a module over this algebra, and its representation theory can be used to ease the decomposition of the space into blocks; the diagonalization of the Hamiltonian is then greatly simplified. The dilute Temperley-Lieb algebra plays a similar role for dilute statistical models, for instance lattice models where some sites may be empty; its representations can then be used to simplify the analysis of the model, as in the original case. This, however, requires knowledge of the modules of this algebra and of their structure: a first article gives a complete list of the indecomposable projective modules of the dilute algebra, and a second uses them to construct a complete list of all the indecomposable modules of both the original and dilute algebras. The structure of the modules is described in terms of composition factors and of their homomorphism groups. The fusion product on the original Temperley-Lieb algebra makes it possible to "multiply" two modules over this algebra together to obtain another one. It has been shown that this product can be used in the diagonalization of Hamiltonians and, according to certain conjectures, it could also be used to study the behaviour of lattice models in the continuum limit. A third article constructs a generalization of the fusion product for the dilute algebras and presents a method for computing it. The fusion product is then computed for the most common classes of indecomposable modules for both families, original and dilute, adding to the incomplete list of fusion products already computed by other researchers for the original family. Finally, it turns out that the Temperley-Lieb algebras can be associated with a braided monoidal category whose structure is compatible with the fusion product described above. The fourth article computes this braiding explicitly, first on the category of algebras, then on the category of modules over these algebras. It also shows how this braiding yields solutions of the Yang-Baxter equations, which can then be used to construct integrable lattice models.
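
For reference, the (regular) Temperley-Lieb algebra $TL_n(\beta)$ underlying these lattice models is generated by $e_1, \dots, e_{n-1}$ subject to the standard relations

$$e_i^2 = \beta\, e_i, \qquad e_i e_{i\pm 1} e_i = e_i, \qquad e_i e_j = e_j e_i \quad (|i-j| \ge 2),$$

with the loop fugacity $\beta$ fixed by the lattice model at hand.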

Relevância: 100.00%

Resumo:

Background: Paediatric oncology palliative care in the community is rare, and nationally there is a lack of standardisation of out-of-hours nursing service provision. Objectives: This paper explores influences on the experiences of paediatric nurses providing out-of-hours palliative care within the family home to children with cancer. The study used social worlds theory to aid identification and demonstration of the findings. Methods: Twelve community-based palliative cases were purposively selected from children with cancer treated at one regional centre. Tape-recorded interviews were undertaken with 54 health professionals (general practitioners, community nurses and allied health professionals) involved in providing their palliative care, and five facilitated case discussions were completed. Data analysis followed a grounded theory approach, with chronological comparative analysis identifying the generated themes. Social worlds theory was used as a framework to examine the data. Results: Nurses' experiences are shaped by their own social world and those of the nursing team, the child and family, and the inter-professional team providing the care. The lack of a formalised service, sub-optimal inter-professional working, and the impact of these social worlds all influence the experience of the nurse. Conclusions: Social worlds theory provided a new perspective for understanding these experiences in a paediatric palliative care setting, knowledge that can be used to inform service provision.

Relevância: 100.00%

Resumo:

Schizophrenia is a severe mental disorder that affects every facet of a person's life. Moreover, lack of social support is a significant problem that contributes to the worsening of the illness, notably by negatively influencing the capacity to cope. For people living with schizophrenia, the ability to use adequate and effective coping strategies is essential to improving health and well-being and to preventing relapse. This research uses Roy's (2009) conception of adaptation. Numerous studies confirm the presence of coping difficulties in these individuals, yet the coping process itself remains poorly understood. The research question was: What is the coping process of people living with schizophrenia when their social support is limited? This question underpinned two objectives: 1) to describe the coping process of people with schizophrenia in a context of limited social support, and 2) to contribute to the development of Roy's model in the context of severe mental disorders. The research design was constructivist grounded theory, with 30 people living with schizophrenia. The data consisted of interviews and the results of three questionnaires that helped describe the participants' profiles in greater detail. The result is a model of the coping process named "filters in the coping process of people living with schizophrenia". This model highlights the fact that the coping potential of people living with schizophrenia is affected both by elements of the social environment and by elements inherent to the illness itself. These elements alter the possibility and the capacity to use adequate and effective coping strategies. These findings could improve the assessment of people with schizophrenia, reduce the "unknowns" in the effects of interventions, and support action against the social conditions that hinder coping.

Relevância: 100.00%

Resumo:

Conventional topic models are ineffective for topic extraction from microblog messages, since the lack of structure and context among the posts results in poor message-level word co-occurrence patterns. In this work, we organize microblog posts into conversation trees based on reposting and replying relations, which enriches the context information and alleviates data sparseness. Our model generates words according to topic dependencies derived from the conversation structures. Specifically, we differentiate messages into leader messages, which initiate key aspects of previously focused topics or shift the focus to different topics, and follower messages, which do not introduce new information but simply echo topics from the messages they repost or reply to. Our model captures the different extents to which leader and follower messages contain the key topical words, further enhancing the quality of the induced topics. Thorough experiments demonstrate the effectiveness of the proposed model.
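
The preprocessing step of organizing posts into conversation trees from repost/reply links is simple to sketch; the field names below are invented for illustration:

```python
from collections import defaultdict

def build_conversation_trees(posts):
    """posts: iterable of dicts with 'id' and 'parent' (None for roots).
    Returns a children map plus the list of root post ids, i.e. a forest
    of conversation trees induced by repost/reply relations."""
    children = defaultdict(list)
    roots = []
    for p in posts:
        if p["parent"] is None:
            roots.append(p["id"])
        else:
            children[p["parent"]].append(p["id"])
    return children, roots
```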

Relevância: 100.00%

Resumo:

Despite the importance of preserving the historic built environment for the benefit of present and future generations, there is a lack of knowledge of the effects of architectural rehabilitation decisions on the cultural significance of historic buildings. The architectural heritage conservation literature has focused almost exclusively on providing principles and guidelines, describing intervention methodologies, and discussing the predicted impacts of design on material values. This thesis argues that a focus on actual effects is needed if the sociocultural sustainability of historic buildings' significance is to be achieved. Supported by an extensive literature review and informed by personal insights from the researcher's everyday practice, an adapted model of the Theory of Change based on Weiss (1995) was designed, providing a tool to evaluate the effects of rehabilitation on cultural significance [ERECS]. Using a selection of six recently rehabilitated historic secondary schools in Portugal (liceus), this research investigated architectural decisions and their effects on the cultural values of this building typology for education, focusing on three objectives corresponding to three stages of intervention: understanding the existing cultural significance, identifying the design strategies applied, and assessing the short-term effects of design decisions on the cultural values. Stressing the role of stakeholders in rehabilitation processes, data were collected from the buildings and architectural projects, the decision makers in the conservation process, and the school community. Although confirming that evaluating the effects of architectural decisions on cultural values is a complex task, the findings demonstrate that the historic liceus have historical, architectural and sociocultural values, and that whilst the strategies gave little weight to social values, material cultural values were generally considered and preserved, contributing to the enhancement of intangible values. The implications of this theory-based and evidence-based research highlight the importance of evaluating actual effects for cultural heritage theory, architectural conservation practice and heritage management policy.

Relevância: 100.00%

Resumo:

In this thesis, a search for same-sign top-quark pairs produced via Standard Model Effective Field Theory (SMEFT) operators is presented. The analysis is carried out within the ATLAS Collaboration using collision data at a centre-of-mass energy of $\sqrt{s} = 13$ TeV, collected by the ATLAS detector during Run 2 of the Large Hadron Collider and corresponding to an integrated luminosity of $140$ fb$^{-1}$. Three SMEFT operators are considered in the analysis, namely $\mathcal{O}_{RR}$, $\mathcal{O}_{LR}^{(1)}$, and $\mathcal{O}_{LR}^{(8)}$. The signal associated with same-sign top pairs is searched for in the dilepton channel, with the top quarks decaying via $t \longrightarrow W^+ b \longrightarrow \ell^+ \nu b$, leading to a final-state signature composed of a pair of high-transverse-momentum same-sign leptons and $b$-jets. Deep neural networks are employed in the analysis to enhance sensitivity to the different SMEFT operators and to perform signal-background discrimination. This is the first ATLAS Collaboration result on the search for same-sign top-quark pair production in proton-proton collision data at $\sqrt{s} = 13$ TeV in the framework of the SMEFT.
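
Schematically, a signal-background discriminant of this kind is a small feed-forward network trained on per-event kinematic features. The PyTorch sketch below, with random stand-in data and an invented feature count, is illustrative only and unrelated to the actual ATLAS analysis code.

```python
import torch
import torch.nn as nn

class Discriminant(nn.Module):
    """Small feed-forward binary classifier: kinematic features in,
    signal probability out."""
    def __init__(self, n_features=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

model = Discriminant()
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 12)                    # stand-in for event features
y = torch.randint(0, 2, (256, 1)).float()   # stand-in signal/background labels
loss = loss_fn(model(x), y)                 # one illustrative training step
opt.zero_grad(); loss.backward(); opt.step()
```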

Relevância: 100.00%

Resumo:

We investigate the potential of a high-energy muon collider for measuring the muon Yukawa coupling (y_μ) in the production of two, three and four heavy bosons via muon-antimuon annihilation. We study the sensitivity of these processes to deviations of y_μ from the Standard Model prediction, parametrized by an effective dimension-6 operator in the Standard Model Effective Field Theory (SMEFT) framework. We also consider the κ framework, in which the deviation is parametrized simply by a strength modification of the μ+μ−h vertex alone. Both frameworks lead to an energy enhancement of the cross sections with one or more vector bosons, although the κ framework yields stronger effects, especially for the production of four bosons. On the contrary, for purely-Higgs final states the cross section is suppressed in the κ framework, while it is extremely sensitive to deviations in the SMEFT. We show that triple-Higgs production is the most sensitive process for spotting new-physics effects on y_μ.
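
The difference between the two parametrizations can be made explicit. Up to sign and normalization conventions (which vary between references), the dimension-6 operator $\mathcal{O}_{\mu\varphi} = (\varphi^\dagger\varphi)(\bar{\ell}_L \mu_R \varphi)$ shifts the muon mass and the single-Higgs coupling by different amounts,

$$m_\mu = \frac{v}{\sqrt{2}}\left(y_\mu - \frac{C_{\mu\varphi} v^2}{2\Lambda^2}\right), \qquad g_{h\mu\mu} = \frac{1}{\sqrt{2}}\left(y_\mu - \frac{3\, C_{\mu\varphi} v^2}{2\Lambda^2}\right),$$

and in addition generates $\mu\mu hh$ and $\mu\mu hhh$ contact vertices proportional to $C_{\mu\varphi} v/\Lambda^2$ and $C_{\mu\varphi}/\Lambda^2$. The κ framework instead rescales only the $\mu\mu h$ vertex, $g_{h\mu\mu} \to \kappa_\mu\, g_{h\mu\mu}^{\rm SM}$, with no contact terms, which is why the two frameworks differ most in multi-Higgs final states.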

Relevância: 100.00%

Resumo:

It is well known that structures subjected to dynamic loads do not follow the usual similarity laws when the material is strain-rate sensitive. As a consequence, it is not possible to use a scaled model to predict the prototype behaviour. In the present study, this problem is overcome by changing the impact velocity so that the model behaves exactly as the prototype. This exact solution is made possible by the use of an exponential constitutive law to infer the dynamic flow stress. Furthermore, it is shown that the adopted procedure does not rely on any previous knowledge of the structural response. Three analytical models are used to analyse the performance of the technique. It is shown that perfect similarity is achieved regardless of the magnitude of the scaling factor. For the class of material used, the solution outlined has long been sought, inasmuch as it allows perfect similarity for strain-rate-sensitive structures subject to impact loads.
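
The velocity-correction idea can be sketched numerically: for a geometric scale factor β, strain rates in the model are magnified by 1/β, so one solves an implicit equation for the model impact velocity that restores equal dimensionless response. The sketch below is an assumption-laden illustration: it uses the familiar Cowper-Symonds rate law as a stand-in for the paper's exponential constitutive law, and the particular form of the similarity condition is a reconstruction, not the paper's derivation.

```python
from scipy.optimize import brentq

def sigma_d(eps_rate, sigma0=300e6, D=40.4, q=5.0):
    """Cowper-Symonds dynamic flow stress (stand-in constitutive law,
    illustrative mild-steel-like parameters)."""
    return sigma0 * (1.0 + (eps_rate / D) ** (1.0 / q))

def model_velocity_factor(beta, eps_rate_p=100.0):
    """Solve lam**2 * sigma_d(eps_p) = sigma_d(lam * eps_p / beta) for
    lam = V_model / V_prototype, the velocity correction that restores
    similarity when the model strain rate is magnified by 1/beta."""
    f = lambda lam: lam**2 * sigma_d(eps_rate_p) - sigma_d(lam * eps_rate_p / beta)
    return brentq(f, 0.5, 3.0)

print(model_velocity_factor(beta=0.1))  # model velocity relative to prototype
```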

Relevância: 100.00%

Resumo:

Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which an abnormal formation of the rib cage gives the chest a caved-in or sunken appearance. Today, the surgical correction of this deformity is carried out in children and adults through the Nuss technique, which consists in the placement of a prosthetic bar under the sternum and over the ribs. Although this technique has been shown to be safe and reliable, not all patients achieve an adequate cosmetic outcome, which often leads to psychological problems and social stress, before and after the surgical correction. This paper targets this problem by presenting a method to predict the patient's surgical outcome based on pre-surgical imaging information and dynamic modelling of the chest skin. The proposed approach uses the patient's pre-surgical thoracic CT scan and anatomical-surgical references to perform a 3D segmentation of the left ribs, right ribs, sternum and skin. The technique encompasses three steps: a) approximation of the cartilages between the ribs and the sternum through b-spline interpolation; b) a volumetric mass-spring model that connects two layers - an inner skin layer based on the outer pleura contour and the outer skin surface; and c) displacement of the sternum according to the prosthetic bar position. A dynamic model of the skin around the chest wall region was generated, capable of simulating the effect of the movement of the prosthetic bar along the sternum. The results were compared and validated against the patient's post-surgical skin surface acquired with the Polhemus FastSCAN system.
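
Step a), approximating the costal cartilages between the rib ends and the sternum by b-spline interpolation, can be sketched with SciPy's parametric spline routines; the coordinate values below are invented landmarks, not patient data.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical 3D landmark points running from a rib end to the sternum.
pts = np.array([[0.0, 1.0, 2.5, 4.0],    # x
                [0.0, 0.8, 1.2, 1.0],    # y
                [0.0, 0.3, 0.4, 0.2]])   # z

tck, u = splprep(pts, s=0, k=3)           # cubic interpolating b-spline
u_fine = np.linspace(0, 1, 50)
cartilage = np.array(splev(u_fine, tck))  # sampled centreline of the cartilage
```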

Relevância: 100.00%

Resumo:

We apply kneading theory to describe the knots and links generated by the iteration of renormalizable nonautonomous dynamical systems with reducible kneading invariants, in terms of the links corresponding to each factor. As a consequence, we obtain explicit formulas for the genus of these knots and links.