992 results for negative dimensional integration
Abstract:
The aim of the present study is to determine the level of correlation between the 3-dimensional (3D) characteristics of trabecular bone microarchitecture, as evaluated using microcomputed tomography (μCT) reconstruction, and the trabecular bone score (TBS), as evaluated using 2D projection images directly derived from the 3D μCT reconstruction (TBSμCT). Moreover, we have evaluated the effects of image degradation (resolution and noise) and of the X-ray energy of the projection on these correlations. Thirty human cadaveric vertebrae were acquired on a microscanner at an isotropic resolution of 93 μm. The 3D microarchitecture parameters were obtained using MicroView (GE Healthcare, Wauwatosa, MI). The 2D projections of these 3D models were generated using the Beer-Lambert law at different X-ray energies. Degradation of image resolution was simulated (from 93 to 1488 μm). Relationships between 3D microarchitecture parameters and TBSμCT at different resolutions were evaluated using linear regression analysis. Significant correlations were observed between TBSμCT and 3D microarchitecture parameters, regardless of the resolution. Correlations were strongly to moderately positive for connectivity density (0.711 ≤ r² ≤ 0.752) and trabecular number (0.584 ≤ r² ≤ 0.648), and negative for trabecular space (r² between -0.407 and -0.491), up to a pixel size of 1023 μm. In addition, TBSμCT values were strongly correlated with one another (0.77 ≤ r² ≤ 0.96). Study results show that the correlations between TBSμCT at 93 μm and 3D microarchitecture parameters are only weakly affected by the degradation of image resolution and the presence of noise.
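As a rough illustration of the projection-and-correlation pipeline described above, the following sketch (Python with NumPy/SciPy; the volume, attenuation coefficient, array names and per-vertebra vectors are placeholders, not the study's data or code) generates a Beer-Lambert projection of a binarized 3D volume, degrades its in-plane resolution by block averaging, and regresses a 2D texture score against a 3D microarchitecture parameter.

import numpy as np
from scipy import stats

# Hypothetical binarized micro-CT volume: 1 = bone, 0 = marrow (93 um isotropic voxels).
volume = (np.random.rand(160, 160, 160) > 0.8).astype(float)

# Beer-Lambert projection along one axis: I = I0 * exp(-sum(mu * dz)).
mu_bone = 0.5        # assumed linear attenuation coefficient (1/cm) at one X-ray energy
dz_cm = 93e-4        # voxel size in cm
projection = np.exp(-mu_bone * dz_cm * volume.sum(axis=2))

def degrade(img, factor):
    # Simulate a coarser detector pixel by block averaging.
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

projection_744um = degrade(projection, 8)   # 93 um * 8 = 744 um pixels

# Linear regression between a per-specimen 2D texture score and a 3D parameter
# (placeholder vectors standing in for the 30 vertebrae).
tbs_like_score = np.random.rand(30)
connectivity_density = np.random.rand(30)
slope, intercept, r, p, stderr = stats.linregress(connectivity_density, tbs_like_score)
print(f"r^2 = {r**2:.3f}, p = {p:.3g}")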
Abstract:
A weak version of the cosmic censorship hypothesis is implemented as a set of boundary conditions on exact semiclassical solutions of two-dimensional dilaton gravity. These boundary conditions reflect low-energy matter from the strong coupling region and they also serve to stabilize the vacuum of the theory against decay into negative energy states. Information about low-energy incoming matter can be recovered in the final state but at high energy black holes are formed and inevitably lead to information loss at the semiclassical level.
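For background only (this is the standard CGHS model of two-dimensional dilaton gravity, quoted here as the usual setting for such exact semiclassical black hole solutions, not taken from the abstract itself), the classical action is

\[ S = \frac{1}{2\pi}\int d^2x\,\sqrt{-g}\,\Big[ e^{-2\phi}\big(R + 4(\nabla\phi)^2 + 4\lambda^2\big) - \tfrac{1}{2}\sum_{i=1}^{N}(\nabla f_i)^2 \Big], \]

where \(\phi\) is the dilaton, \(\lambda\) sets the cosmological constant scale, and \(f_i\) are the matter fields; the linear dilaton vacuum and black hole solutions of models of this type are the configurations for which reflecting boundary conditions of the kind described above are formulated.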
Abstract:
Minimizing the risk of an investment portfolio without sacrificing expected returns is one of an investor's key interests. Typically, portfolio diversification is achieved using two main strategies: investing in different classes of assets thought to have little or negative correlation, or investing in similar classes of assets in multiple markets through international diversification. This study investigates the integration of the Russian financial markets over the period from January 1, 2003 to December 28, 2007 using daily data. The aim is to test the intra-country and cross-country integration of the Russian stock and bond markets against seven countries. For the short-run dynamics we use the vector autoregressive (VAR) model, and for the long-run cointegration testing we use the Johansen cointegration test, which builds on the VAR framework. The empirical results of this study show that the Russian stock and bond markets are not integrated in the long run at either the intra-country or cross-country level, which means that the markets are relatively segmented. The short-run dynamics are also relatively weak. This implies the presence of potential gains from diversification.
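A minimal sketch of this two-step methodology in Python with statsmodels is given below; the two random-walk series and their column names are placeholders standing in for the study's daily Russian and foreign index data, not the actual series.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Placeholder data: two random-walk "index level" series (the study used 2003-2007 daily data).
rng = np.random.default_rng(0)
shocks = rng.normal(size=(1250, 2))
levels = 100 * np.exp(np.cumsum(0.001 * shocks, axis=0))
prices = pd.DataFrame(levels, columns=["RTS", "SP500"])

log_prices = np.log(prices)

# Long-run relationship: Johansen cointegration test on the (log) levels.
jres = coint_johansen(log_prices, det_order=0, k_ar_diff=1)
print("trace statistics:   ", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])   # columns are the 90%, 95%, 99% levels

# Short-run dynamics: VAR on the differenced series (returns), lag order chosen by AIC.
returns = log_prices.diff().dropna()
var_res = VAR(returns).fit(maxlags=10, ic="aic")
print(var_res.summary())

If the trace statistic stays below its critical value, the null of no cointegration is not rejected, which is the "segmented markets" outcome reported in the abstract.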
Abstract:
Chromosome 22q11.2 deletion syndrome (22q11DS) is a genetic disease known to lead to cerebral structural alterations, which we study using the framework of the macroscopic white-matter connectome. We create weighted connectomes of 44 patients with 22q11DS and 44 healthy controls using diffusion tensor magnetic resonance imaging, and perform a weighted graph theoretical analysis. After confirming global network integration deficits in 22q11DS (previously identified using binary connectomes), we identify the spatial distribution of regions responsible for the global deficits. Next, we further characterize the dysconnectivity of the deficient regions in terms of sub-network properties, and investigate their relevance with respect to clinical profiles. We define the subset of regions with decreased nodal integration (evaluated using the closeness centrality measure) as the affected core (A-core) of the 22q11DS structural connectome. A-core regions are broadly bilaterally symmetric and consist of numerous network hubs, chiefly parietal and frontal cortical regions as well as subcortical regions. Using a simulated lesion approach, we demonstrate that these core regions and their connections are particularly important to efficient network communication. Moreover, these regions are generally densely connected, but less so in 22q11DS. These specific disturbances are associated with a rerouting of shortest network paths that circumvent the A-core in 22q11DS, "de-centralizing" the network. Finally, the efficiency and mean connectivity strength of an orbito-frontal/cingulate circuit, included in the affected regions, correlate negatively with the extent of negative symptoms in 22q11DS patients, revealing the clinical relevance of the present findings. The identified A-core overlaps with numerous regions previously identified as affected in 22q11DS as well as in schizophrenia, which approximately 30-40% of 22q11DS patients develop.
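A schematic sketch of the two graph measures mentioned above, weighted closeness centrality and a simulated-lesion drop in global efficiency, is given below using networkx on a toy weighted graph; the random connectivity matrix is a placeholder, not patient data, and the inverse-weight distance is one common convention rather than necessarily the study's.

import numpy as np
import networkx as nx

# Toy symmetric weighted connectivity matrix standing in for a structural connectome.
rng = np.random.default_rng(1)
W = rng.random((20, 20))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
G = nx.from_numpy_array(W)

# Stronger connections should behave as shorter paths: distance = 1 / weight.
for u, v, data in G.edges(data=True):
    data["distance"] = 1.0 / data["weight"]

# Nodal integration: weighted closeness centrality (low values flag candidate A-core nodes).
closeness = nx.closeness_centrality(G, distance="distance")

def global_efficiency(graph):
    # Weighted global efficiency: mean inverse shortest-path length over all node pairs.
    lengths = dict(nx.all_pairs_dijkstra_path_length(graph, weight="distance"))
    n = graph.number_of_nodes()
    total = sum(1.0 / lengths[i][j] for i in graph for j in graph
                if i != j and j in lengths[i])
    return total / (n * (n - 1))

# Simulated lesion: remove a highly central node and measure the efficiency drop.
baseline = global_efficiency(G)
lesioned = G.copy()
lesioned.remove_node(max(closeness, key=closeness.get))
print("efficiency drop after lesion:", baseline - global_efficiency(lesioned))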
Abstract:
Among the numerous approaches to food waste treatment, food waste disposers (FWDs), as a relative newcomer, have been slowly gaining acceptance among the general public owing to worries about their impact on the existing sewage system. This paper aims to clarify the role of FWDs in the process of urbanization, so that a city can better plan the construction of its infrastructure and its solid waste treatment. Both the literature and the case study confirm that FWDs have no negative effects on the wastewater treatment plant (WWTP) and are also environmentally friendly, reducing greenhouse gas emissions. In the case study, the Lappeenranta wastewater treatment plant was selected in order to assess the possible changes to a WWTP following the integration of FWDs. At 25% adoption, only minor changes were observed: BOD up 7%, TSS up 6%, wastewater flow rate up 6%, an additional sludge production of 200 tons per year and an extra methane yield of up to 10,000 m³ per year. However, when the FWD utilization rate exceeds 75%, BOD, TSS and wastewater flow rate change more significantly, putting considerable pressure on the existing WWTP. FWDs should only be used in residential areas or cities equipped with a complete drainage network within the service area of a WWTP; the relevant authority or government department should therefore regulate the installation rate of FWDs while promoting their supporting applications. Meanwhile, WWTPs should improve their treatment processes in order to expand their sludge treatment capacity, in line with the future development of urban waste management.
Abstract:
Unsuccessful mergers are unfortunately the rule rather than the exception. It is therefore necessary to gain a better understanding of mergers and post-merger integrations (PMI), and to learn more about how mergers and PMIs of information systems (IS) and people can be facilitated. Studies on the PMI of IS are scarce, and public sector mergers are even less studied. There is nothing, however, to indicate that public sector mergers are any more successful than those in the private sector. This thesis covers five studies carried out between 2008 and 2011 in two organizations in higher education that merged in January 2010. The most recent study was carried out two years after the new university was established. The longitudinal case study focused on the administrators and their opinions of the IS, the work situation and the merger in general. These issues were investigated before, during and after the merger. Both surveys and interviews were used to collect data, supplemented by documents that both describe and guide the merger process; in this way we aimed at a triangulation of findings. Administrators were chosen as the focus of the study since public organizations are highly dependent on this staff category, which forms the backbone of the organization and whose performance is a key success factor for the organization. Reliable and effective IS are also critical for maintaining a functional and effective organization, which makes administrators highly dependent on their organizations' IS to carry out their duties as intended. The case study confirmed the administrators' dependency on IS that work well. A merger is likely to lead to changes in the IS and in the routines associated with the administrators' work. Hence it was especially interesting to study how the administrators viewed the merger and its consequences for IS and the work situation. The overall research objective is to identify key issues for successful mergers and PMIs. The first, explorative, study in 2008 showed that the administrators were confident of their skills and knowledge of IS and had no fear of having to learn new IS because of the merger. Most administrators had an academic background and were not anxious about whether IS training would be provided or not. Before the merger the administrators were positive and enthusiastic about the merger and the changes that they expected. The studies carried out before the merger showed that these administrators were very satisfied with the information provided about the merger. This information was disseminated through various channels, and even negative information and postponed decisions were quickly distributed. The study conflicts with theories which hold that resistance to change is inevitable in a merger. Shortly after the merger, the third study showed disappointment with the fact that fewer changes than expected had been implemented, even though the changes that actually were carried out sometimes led to a more problematic work situation. This was more prominent for changes in routines than for IS changes. Still, the administrators showed a clear willingness to change and to share their knowledge with new colleagues. This knowledge sharing (including tacit knowledge) worked well in the merger and the PMI. The majority reported that the most common way to learn to use new IS and to apply new routines was to ask colleagues for help. They also needed to take responsibility for their own training and development.
Five months after the merger, the fourth study showed that the administrators had become worried about the changes in communication strategy implemented in the new university. Communication was perceived as more anonymous. Furthermore, it was harder to find out what was happening and to contact the new decision makers. The administrators found that decisions, and the authority to make decisions, had been moved to a higher administrative level than they were accustomed to. A directive management style is recommended in mergers in order to achieve a quick transition without distracting from the core business. A merger process may be tiresome and require considerable effort from the participants. In addition, not everyone can make their voice heard during a merger, and consensus is not possible on every question. It is important to find out what is best for the new organization instead of simply claiming that the tried and tested ways of doing things should be kept. A major problem turned out to be the lack of management continuity during the merger process. Especially problematic was the situation in the IS department, with many substitute managers during the whole merger process (even after the merger was carried out). This meant that no one was in charge of IS issues and the PMI of IS. Moreover, the top managers were appointed very late in the process, in some cases after the merger had been carried out. This led to missed opportunities for building trust, and management credibility was heavily affected. The administrators felt neglected and that their competences and knowledge no longer counted. This, together with a reduced and altered information flow, led to rumours and distrust. Before the merger the administrators were convinced that their achievements contributed value to their organizations and that they worked effectively. After the merger they were less sure of their value contribution and effectiveness, even if these factors were not totally discounted. The fifth study, in November 2011, found that the administrators were still satisfied with their IS, as they had been throughout the whole study. Furthermore, they believed that the IS department had done a good job despite challenging circumstances. Both of the former organizations lacked IS strategies, which badly affected IS strategizing during the merger and the PMI. IS strategies deal with issues such as system ownership, namely who should pay and who is responsible for maintenance and system development, for organizing training for new IS, and for running IS effectively even under changing circumstances (e.g. more users). A proactive approach is recommended for IS strategizing to work. This is particularly true during a merger and PMI, for handling questions about which ISs should be adopted and implemented in the new organization as well as issues of integration and re-engineering of IS-related processes. In the new university an IT strategy had still not been decided 26 months after the new university was established. The study shows the importance of decisive management of IS in a merger: IS issues must be addressed in the merger process and IS decisions made early. Moreover, the new management needs to be appointed early in order to work actively on IS strategizing. It is also necessary to build trust and to plan and make decisions about the integration of IS and people.
Abstract:
Frontier and emerging economies have implemented policies with the objective of liberalizing their equity markets. Equity market liberalization opens the domestic equity market to foreign investors and also paves the way for domestic investors to invest in foreign equity securities. Among other things, equity market liberalization brings diversification benefits. Moreover, it leads to a lower cost of equity capital, resulting from the lower rate of return required by investors. Additionally, foreign and local investors share any potential risks. Liberalized equity markets also become more liquid, since there are more investors to trade with. Equity market liberalization results in financial integration, which explains the co-movement of two markets. In crisis periods, increased volatility and co-movement between two markets may result in what are termed contagion effects. In Africa, major moves toward financial liberalization generally started in the late 1980s, with South Africa as the pioneer. Over the years, researchers have studied the impact of financial liberalization on Africa's economic development with diverse results; some positive, others negative and still others mixed. The objective of this study is to establish whether African stock markets are integrated with the United States (US) and world markets. Furthermore, the study examines whether there are international linkages between the African, US and world markets. A bivariate VAR-GARCH-BEKK model is employed in the study. The effect of thin trading is removed through a series of econometric data-purification steps. This is because thin trading, also known as non-trading or inconsistency of trading, is a main feature of African markets and may produce inconsistent and biased results. The study confirmed the widely established finding that the South African and Egyptian stock markets are highly integrated with the US and world markets. Interestingly, the study adds to knowledge in this research area by establishing that the Kenyan market is also highly integrated with the US and world markets and that it receives and exports past innovations as well as shocks to and from the US and world markets.
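As background for the model named above, the conditional covariance matrix in a bivariate BEKK(1,1)-GARCH specification evolves, in its standard textbook form (not necessarily the study's exact parameterization), as

\[ H_t = C^{\top}C + A^{\top}\,\varepsilon_{t-1}\varepsilon_{t-1}^{\top}\,A + B^{\top} H_{t-1} B, \]

where \(\varepsilon_{t-1}\) is the vector of return innovations, \(C\) is upper triangular, and \(A\) and \(B\) are 2x2 parameter matrices; the off-diagonal elements of \(A\) and \(B\) capture cross-market shock and volatility spillovers, which is how the transmission of past innovations between markets described in the abstract shows up in the estimates.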
Abstract:
This thesis applies x-ray diffraction to measure the membrane structure of lipopolysaccharides and to develop a better model of an LPS bacterial membrane that can be used for biophysical research on antibiotics that attack cell membranes. We have modified the Physics department x-ray machine for use as a thin film diffractometer, and have designed a new temperature and relative humidity controlled sample cell. We tested the sample cell by measuring the one-dimensional electron density profiles of bilayers of POPC with 0%, 1%, 10%, and 100% by weight lipopolysaccharide from Pseudomonas aeruginosa. Background: We now know that traditional antibiotics are losing their effectiveness against ever-evolving bacteria. This is because traditional antibiotics work against specific targets within the bacterial cell, and with genetic mutations over time, the antibiotic no longer works. One possible solution is antimicrobial peptides. These are short proteins that are part of the immune systems of many animals, and some of them attack bacteria directly at the membrane of the cell, causing the bacterium to rupture and die. Since the membranes of most bacteria share common structural features, and these features are unlikely to evolve very much, these peptides should effectively kill many types of bacteria without much evolved resistance. But why do these peptides kill bacterial cells, but not the cells of the host animal? For gram-negative bacteria, the most likely reason is that their outer membrane is made of lipopolysaccharides (LPS), which is very different from an animal cell membrane. Up to now, what we know about how these peptides work was likely learned with phospholipid models of animal cell membranes, and not with the more complex lipopolysaccharides. If we want to make better peptides, ones that we can use to fight all types of infection, we need a more accurate molecular picture of how they work. This will hopefully be one step forward toward the design of better treatments for bacterial infections.
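For reference, the one-dimensional electron density profile mentioned above is typically reconstructed from the lamellar diffraction orders by a Fourier sum; this is the standard treatment for oriented bilayer stacks, quoted as background rather than as this thesis's specific procedure:

\[ \rho(z) - \bar\rho \;\propto\; \sum_{h=1}^{h_{\max}} \nu_h\, |F_h| \cos\!\left(\frac{2\pi h z}{d}\right), \qquad |F_h| \propto \sqrt{L_h\, I_h}, \]

where \(d\) is the lamellar repeat spacing, \(I_h\) the integrated intensity of diffraction order \(h\), \(L_h\) a geometry-dependent Lorentz correction, and \(\nu_h = \pm 1\) the phase factors, which are real for a centrosymmetric bilayer but must be assigned by independent means (for example from a swelling series).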
Abstract:
I acknowledge the financial support of the Centre d'études ethniques des Universités montréalaises (CEETUM), of the Ministère de l'Éducation – Aide financière aux études (AFE), and of the Université de Montréal (Département de psychologie and Faculté des études supérieures) in the completion of this thesis.
Abstract:
More and more evaluations of international projects in developing countries are being carried out to inform decisions on the basis of evidence. The use of evaluation results has been called into question, and to remedy this, participatory evaluations that include non-evaluator stakeholders at certain stages of the evaluation process have been proposed. Among these, practical participatory evaluations aim primarily to improve the use of the evaluation process and of its results. These practical participatory evaluations are said to be hindered by negative individual attitudes, or individual resistance to change, and fostered by positive individual attitudes, or propensity. This thesis proposes to study managers' individual propensity towards practical participatory evaluations of interventions (EPP), the elements influencing this propensity, and to characterize individuals' levels of propensity towards EPP. First, a literature review proposed a multidimensional definition of propensity towards EPP as a favourable attitude towards the practice of EPP, expressed at each stage of an evaluation in both affective and cognitive terms. The dimensions identified theoretically were: learning, group work, use of systematic methods, and use of critical thinking. These dimensions served as a framework for the empirical part of the thesis. A multiple case study with the managers of a health institution in Haiti was conducted to contextualize propensity and to identify the influencing elements. Data were collected through semi-structured interviews and documentary sources. The analysis of the data on learning revealed a predominance of learning by doing and learning by observation. Group work is embedded in the practice of both administrative and clinical managers. Systematic methods are reflected mainly in the consultation of several stakeholders with an interest in the immediate problem to be solved rather than in methodological tools. The use of systematic methods generally takes the form of broad consultation of opinions to resolve a situation, or of attempts to validate the information received. Critical thinking is triggered when the individual, professional, corporate or organizational image is at stake, or when suggestions are deemed constructive. In addition to contextualizing four components of individual propensity towards EPP, the managers positioned themselves relative to their colleagues' propensity on the basis of reactivity, being more or less reactive with respect to the components of individual propensity. The propensity studied empirically thus gave rise to two axes: a formalization axis and a reactivity axis. The formalization axis covers the contextualization of the four components of individual propensity towards EPP, that is, the form in which the components are expressed. The reactivity axis covers the level of activity deployed in each component of individual propensity, from reactive to more proactive. In addition, profiles of individuals with different levels of propensity towards EPP were developed, and influences favourable and unfavourable to the level of propensity towards EPP were identified.
The originality of this thesis lies in its positioning within a recent line of thinking that looks at resistance to change and to evaluation from a positive angle, and in having theoretically defined and empirically applied the multidimensional concept of individual propensity towards EPP. The profiles of levels of individual propensity towards EPP, together with the associated favourable and unfavourable influences, can serve as a diagnostic tool for the types of evaluation that are possible, help adjust the implementation of evaluations to the stakeholders involved, allow the monitoring of changes in propensity levels during an EPP, and serve as a source of information for adjusting participatory evaluation plans.
Abstract:
This thesis presents methods for handling count data in particular and discrete data in general. It is part of a strategic NSERC (CRSNG) project named CC-Bio, whose objective is to assess the impact of climate change on the distribution of plant and animal species. After a brief introduction to the notions of biogeography and to generalized linear mixed models in Chapters 1 and 2 respectively, the thesis is organized around three main ideas. First, in Chapter 3 we introduce a new family of distributions whose components have Poisson or Skellam marginal distributions. This new specification makes it possible to incorporate relevant information about the nature of the correlations between all the components, and we present some properties of the distribution. Unlike the multivariate Poisson distribution that it generalizes, it can handle variables with positive and/or negative correlations. A simulation illustrates the estimation methods in the bivariate case. The results obtained by Bayesian methods via Markov chain Monte Carlo (MCMC) indicate a fairly small relative bias, of less than 5%, for the regression coefficients of the means, in contrast to those of the covariance term, which appear somewhat more volatile. Second, Chapter 4 presents an extension of multivariate Poisson regression with gamma-distributed random effects. Aware that species abundance data exhibit strong overdispersion, which would make the resulting estimators and standard errors misleading, we favour an approach based on Monte Carlo integration using importance sampling. The approach remains the same as in the previous chapter: the idea is to simulate independent latent variables and thereby return to the framework of a conventional generalized linear mixed model (GLMM) with gamma-distributed random effects. Even if the assumption of a priori knowledge of the dispersion parameters seems too strong, a sensitivity analysis based on goodness of fit demonstrates the robustness of our method. Third, in the final chapter, we focus on defining and constructing a measure of concordance, and hence of correlation, for zero-inflated data through Gaussian copula modelling. Unlike Kendall's tau, whose values lie in an interval whose bounds vary with the frequency of tied observations between pairs, this measure has the advantage of taking its values on (-1, 1). Initially introduced to model correlations between continuous variables, its extension to the discrete case involves certain restrictions. Indeed, the new measure can be interpreted as the correlation between the continuous random variables whose discretization constitutes our non-negative discrete observations. Two estimation methods for the zero-inflated models are presented, in the frequentist and Bayesian settings, based respectively on maximum likelihood and on Gauss-Hermite integration. Finally, a simulation study shows the robustness and the limits of our approach.
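As background for the distributional building blocks named above (these are textbook constructions, not the thesis's new multivariate family): the classical common-shock bivariate Poisson can only produce non-negative correlation, while a difference of independent Poisson counts has a Skellam distribution and can be negative,

\[ X_1 = Y_1 + Y_0,\quad X_2 = Y_2 + Y_0,\quad Y_j \sim \mathrm{Poisson}(\lambda_j)\ \text{independent} \;\Rightarrow\; \operatorname{Cov}(X_1, X_2) = \lambda_0 \ge 0, \]
\[ Z = N_1 - N_2,\quad N_i \sim \mathrm{Poisson}(\mu_i)\ \text{independent} \;\Rightarrow\; Z \sim \mathrm{Skellam}(\mu_1, \mu_2),\quad \mathbb{E}[Z] = \mu_1 - \mu_2,\quad \operatorname{Var}(Z) = \mu_1 + \mu_2, \]

which is consistent with the abstract's point that admitting Skellam-type components is what allows negative as well as positive correlations between components.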
Abstract:
This thesis studies models of high-dimensional sequences based on recurrent neural networks (RNNs) and their application to music and speech. Although in principle RNNs can represent the long-term dependencies and complex temporal dynamics characteristic of sequences of interest such as video, audio and natural language, they have not been used to their full potential since their introduction by Rumelhart et al. (1986a), because of the difficulty of training them effectively by gradient descent. Recently, the successful application of Hessian-free optimization and other advanced training techniques has led to a resurgence of their use in several state-of-the-art systems. The work in this thesis is part of that development. The central idea is to exploit the flexibility of RNNs to learn a probabilistic description of sequences of symbols, that is, high-level information associated with the observed signals, which in turn can serve as a prior to improve the accuracy of information retrieval. For example, by modelling the evolution of groups of notes in polyphonic music, of chords in a harmonic progression, of phonemes in a spoken utterance, or of individual sources in an audio mixture, we can significantly improve methods for polyphonic transcription, chord recognition, speech recognition and audio source separation, respectively. The practical application of our models to these tasks is detailed in the last four articles presented in this thesis. In the first article, we replace the output layer of an RNN with conditional restricted Boltzmann machines in order to describe much richer multimodal output distributions. In the second article, we evaluate and propose advanced methods for training RNNs. In the last four articles, we examine different ways of combining our symbolic models with deep networks and with non-negative matrix factorization, notably through products of experts, input/output architectures and generative frameworks generalizing hidden Markov models. We also propose and analyze efficient inference methods for these models, such as greedy chronological search, high-dimensional beam search, pruned beam search and gradient descent. Finally, we address the issues of label bias, teacher forcing, temporal smoothing, regularization and pre-training.
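A compact way to write the idea in the first article, following the form used in the related RNN-RBM literature (the notation here is generic and may not match the thesis exactly): the sequence distribution factorizes autoregressively, and each conditional is a restricted Boltzmann machine whose biases depend on a recurrent hidden state summarizing the past,

\[ P(v_1,\dots,v_T) = \prod_{t=1}^{T} P\!\left(v_t \mid v_{<t}\right), \qquad b_v^{(t)} = b_v + W_{hv}\,\hat h_{t-1}, \qquad b_h^{(t)} = b_h + W_{hh}\,\hat h_{t-1}, \]
\[ \hat h_t = \tanh\!\left(b_{\hat h} + W_{v\hat h}\, v_t + W_{\hat h \hat h}\, \hat h_{t-1}\right), \]

so that the RBM at step t can represent a rich, multimodal distribution over the high-dimensional symbol v_t (for example a set of simultaneously sounding notes) conditioned on the history.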
Abstract:
The dynamics of saturated two-dimensional superfluid ⁴He films is shown to be governed by the Kadomtsev-Petviashvili equation with negative dispersion. It is established that the phenomenon of soliton resonance can be observed in such films. At lowest-order nonlinearity, such resonance occurs only if two-dimensional effects are taken into account. The amplitude and velocity of the resonant soliton are obtained.
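For reference, in a common normalization (the paper's own scaling and sign conventions may differ) the Kadomtsev-Petviashvili equation reads

\[ \frac{\partial}{\partial x}\left( u_t + 6\,u\,u_x + u_{xxx} \right) + 3\sigma^2\, u_{yy} = 0, \qquad \sigma^2 = \pm 1, \]

where the sign of \(\sigma^2\) selects the dispersion regime (KP-I versus KP-II); soliton resonance refers to the limit in which the phase shift of a two-line-soliton solution diverges and the two solitons locally merge into a single soliton of larger amplitude, which is intrinsically a two-dimensional effect.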
Abstract:
Introduction: Quality care in emergency departments is only possible if physicians have received high-quality training. The PHEEM (Postgraduate Hospital Educational Environment Measure) scale is a valid and reliable instrument used internationally to measure the educational environment in postgraduate medical training. Materials and methods: A cross-sectional study that used the Spanish version of the PHEEM scale to assess the educational environment of emergency medicine programmes. Cronbach's alpha coefficient was calculated to determine internal consistency. Descriptive statistics were applied globally and by category and item of the PHEEM scale, and results were compared by sex, year of residency and programme. Results: 94 residents (94%) completed the questionnaire. The mean PHEEM score was 93.91 ± 23.71 (58.1% of the maximum score), which is considered an educational environment that is more positive than negative but with room for improvement. There was a statistically significant difference in the perception of the educational environment between the residency programmes (p = 0.01). The instrument is highly reliable (Cronbach's alpha = 0.952). The most frequent barrier to teaching was overcrowding, and assessment was perceived as serving the purpose of meeting regulations. Discussion: The results of this study provide evidence on the internal validity of the PHEEM scale in the Colombian context. This study showed how measuring the educational environment in a medical-surgical specialty with a quantitative tool can provide information on the strengths and weaknesses of training programmes.
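For reference, the internal-consistency statistic reported above is computed with the standard formula

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{i}}{\sigma^2_{\mathrm{total}}}\right), \]

where \(k\) is the number of items (40 in the PHEEM, consistent with a maximum score of 160 implied by the percentages quoted above), \(\sigma^2_i\) is the variance of item \(i\), and \(\sigma^2_{\mathrm{total}}\) is the variance of the total score; the reported \(\alpha = 0.952\) indicates that item responses covary strongly across residents.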
Abstract:
In recent years, the Standards for Qualified Teacher Status in England have placed new emphasis on student-teachers' ability to become integrated into the 'corporate life of the school' and to work with other professionals. Little research, however, has been carried out into how student-teachers perceive the social processes and interactions that are central to such integration during their initial teacher education school placements. This study aims to shed light on these perceptions. The data, gathered from 23 student-teachers through interviews and reflective writing, illustrate the extent to which the participants perceived such social processes as supporting or obstructing their development as teachers. Signals of inclusion, the degree of match or mismatch in students' and school colleagues' role expectations, and the social awareness of both school and student-teacher emerged as crucial factors in this respect. The student-teachers' accounts show their social interactions with school staff to be meaningful in developing their 'teacher self' and to be profoundly emotionally charged. The implications for mentor and student-teacher role preparation are discussed in this article.