859 results for mean pondered diameter
Abstract:
We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high-temperature limit. Using this formalism we have calculated the MSD for a nearest-neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest-order perturbation theory results; the difference with Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We have also calculated the MSD for the alkali metals Na, K and Cs, where a sixth-neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD obtained by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was also calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. An excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy, consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self-consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining O(λ⁴) contributions not included in ITER, which we call E(FULL). We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk moduli, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential generally gives better agreement with experimental data than the LJ potential for the QH, λ², λ⁴ and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g. RING and ISC), the LJ results are in better agreement with experiment. The iteration scheme brings a definite improvement over the λ² PT for both potentials.
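For orientation (not quoted from the thesis), the quasiharmonic MSD that the anharmonic schemes correct has the standard lattice-dynamics form, which in the high-temperature limit reduces to an inverse-second-moment sum over the phonon frequencies ω_j(q):

    \langle u^2 \rangle_{\mathrm{QH}}
      = \frac{\hbar}{2mN}\sum_{\mathbf{q},j}
        \frac{1}{\omega_j(\mathbf{q})}
        \coth\!\frac{\hbar\omega_j(\mathbf{q})}{2k_B T}
      \;\approx\; \frac{k_B T}{mN}\sum_{\mathbf{q},j}\frac{1}{\omega_j^2(\mathbf{q})}
      \qquad (k_B T \gg \hbar\omega),

where m is the atomic mass and N the number of wavevectors in the Brillouin-zone sum; the λ², λ⁴ and Green's function schemes discussed above correct this expression for anharmonicity.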
Abstract:
The atomic mean square displacement (MSD) and the phonon dispersion curves (PDCs) of a number of face-centred cubic (fcc) and body-centred cubic (bcc) materials have been calculated from the quasiharmonic (QH) theory, the lowest-order (λ²) perturbation theory (PT) and a recently proposed Green's function (GF) method by Shukla and Hübschle. The latter method includes certain anharmonic effects to all orders of anharmonicity. In order to determine the effect of the range of the interatomic interaction upon the anharmonic contributions to the MSD, we have carried out our calculations for a Lennard-Jones (L-J) solid in the nearest-neighbour (NN) and next-nearest-neighbour (NNN) approximations. These results can be presented in dimensionless units, but if the NN and NNN results are to be compared with each other they must be converted to those of a real solid. When this is done for Xe, the QH MSDs for the NN and NNN approximations are found to differ from each other by about 2%. For the λ² and GF results this difference amounts to 8% and 7%, respectively. For the NN case we have also compared our PT results, which have been calculated exactly, with PT results calculated using a frequency-shift approximation. We conclude that this frequency-shift approximation is a poor one. We have calculated the MSD of five alkali metals, five bcc transition metals and seven fcc transition metals. The model potentials we have used include the Morse, modified Morse, and Rydberg potentials. In general the results obtained from the Green's function method are in the best agreement with experiment. However, this improvement is mostly qualitative, and the values of the MSD calculated from the Green's function method are not in much better agreement with the experimental data than those calculated from the QH theory. We have calculated the phonon dispersion curves (PDCs) of Na and Cu, using the four-parameter modified Morse potential. In the case of Na, our results for the PDCs are in poor agreement with experiment. In the case of Cu, the agreement between theory and experiment is much better; in addition, the results for the PDCs calculated from the GF method are in better agreement with experiment than those obtained from the QH theory.
Abstract:
The Portuguese community is one of the largest diasporic groups in the Greater Toronto Area, and the choice to retain and transmit language and culture to Luso-Canadians is crucial to the development and sustainability of the community. The overall objective of this study is to learn about the factors that influence Luso-Canadian mothers' inclination to teach their children the Portuguese language and to foster cultural retention. To explore this topic I employed a qualitative research design that included in-depth interviews conducted in 2012 with six Luso-Canadian mothers. Three central arguments emerged from the findings. First, the Luso-Canadian mothers interviewed possess a pronounced desire for their children to succeed academically and to provide their children with opportunities that they themselves did not have. Second, five of the mothers attempt to achieve this mothering objective partly by disconnecting from their Portuguese roots, and by disassociating their children from the Portuguese language and culture. Third, the disconnection they experience and enact is influenced by the divisions evident in the Portuguese community in the GTA, which separate regions and hierarchically rank dialects and groups. I conclude that the children in these households inevitably bear the prospect of maintaining a vibrant Portuguese community in the GTA, and I propose that the community's actions in ranking dialects influence mothers' decisions about transmitting language and culture to their children.
Abstract:
Presentation at Brock Library Spring Symposium 2015: What's really going on?
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near-epoch-dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first-order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
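The first of the two block methods is easy to picture; below is a minimal NumPy sketch of the Künsch/Liu-Singh moving blocks bootstrap for the sample mean (the function name, block length and toy series are ours; the paper concerns the asymptotic validity of the method, not any particular implementation):

    import numpy as np

    def moving_blocks_bootstrap_means(x, block_len, n_boot=1000, seed=None):
        # Resample overlapping blocks of length `block_len` with replacement,
        # concatenate them to length n, and take the mean of each pseudo-series.
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        n = x.size
        n_blocks = -(-n // block_len)                  # ceil(n / block_len)
        starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
        idx = (starts[:, :, None] + np.arange(block_len)).reshape(n_boot, -1)[:, :n]
        return x[idx].mean(axis=1)

    rng = np.random.default_rng(0)
    x = np.convolve(rng.normal(size=560), np.ones(8) / 8.0, mode="valid")  # dependent MA(7) toy series
    reps = moving_blocks_bootstrap_means(x, block_len=25, n_boot=2000, seed=1)
    print(reps.var())  # bootstrap estimate of Var(sample mean)

The variance of the replicates is exactly the bootstrap variance estimator whose consistency under heterogeneity and dependence the paper establishes.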
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one's situation and available opportunities. Posterior human-wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as is an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
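For the Gaussian case the paper's tests correspond to the Gibbons-Ross-Shanken statistic; as a point of reference, here is a minimal NumPy sketch of that textbook statistic (our own function name and layout, not the authors' code):

    import numpy as np

    def grs_stat(R, f):
        # Gibbons-Ross-Shanken (1989) test of H0: alpha = 0 in
        # R_it = alpha_i + beta_i * f_t + e_it, where R is a T x N matrix of
        # test-asset excess returns and f the length-T market excess return.
        R, f = np.asarray(R, float), np.asarray(f, float)
        T, N = R.shape
        X = np.column_stack([np.ones(T), f])          # constant + market factor
        B, *_ = np.linalg.lstsq(X, R, rcond=None)     # 2 x N OLS coefficients
        alpha, resid = B[0], R - X @ B
        Sigma = resid.T @ resid / T                   # MLE residual covariance
        theta2 = f.mean() ** 2 / f.var()              # squared market Sharpe ratio (MLE)
        # ~ F(N, T - N - 1) under H0 with Gaussian errors
        return (T - N - 1) / N * (alpha @ np.linalg.solve(Sigma, alpha)) / (1 + theta2)

Under normal errors the statistic is exactly F(N, T-N-1)-distributed; the paper's contribution is exact inference when the errors are instead, e.g., multivariate Student-t or Gaussian mixtures.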
Abstract:
Objective: We conducted a study of 135 patients who underwent lumbosacral surgery with pedicle screw fixation under CT-based navigation. We evaluated the accuracy of the pedicle screws and the clinical outcomes. Methods: This study includes 44 men and 91 women (mean age = 61, range 24-90 years). The diameters, lengths and trajectories of the 836 screws were planned preoperatively with a navigation system (SNN, Surgical Navigation Network, Mississauga). The patients underwent lumbar (55), lumbosacral (73) and thoraco-lumbosacral (7) fusion. Pedicle perforation, screw length and spondylolisthesis were assessed on postoperative CT scans. Pain level was measured by self-assessment, visual analogue scales and questionnaires (Oswestry and SF-36). Bone fusion was evaluated by examination of postoperative radiographs. Results: Pedicle perforation was present for 49/836 (5.9%) of the screws (2.4% lateral, 1.7% inferior, 1.1% superior, 0.7% medial). The errors were minor (0.1-2 mm, 46/49) or intermediate (2.1-4 mm, 3/49, all lateral). There were no major errors (≥ 4.1 mm). Some screws were judged too long (66/836, 8%). The mean time to insert a screw under navigation was 19.1 minutes, from application to removal of the reference frame. One year postoperatively, leg and lumbar pain had improved by 72% and 48% on average, respectively. The improvement remained stable after 2 years. Radiological degeneration above and below the fusion was found in 44 patients (33%) and 3 patients (2%), respectively. It occurred on average 22.2 ± 2.6 months after surgery. Fusions ending at L2 were associated with more degeneration (14/25, 56%). Conclusion: Spinal navigation based on preoperative CT images is a safe and accurate technique. It gives good short-term results, justifying the investment of surgical time. Segmental degeneration can have a negative impact on radiological and clinical outcomes.
Abstract:
What is a human being? The question has been asked for several millennia. Plato is no exception: he follows the inscription of the temple at Delphi, the famous "know thyself", when he seeks to better define man in his writings. This quest for the essence of man is present at several points in Plato's work, but in our view he never offers a definition of man as clear as the one in the Alcibiades. The entire end of that dialogue is devoted to this question, and there we find a Socrates eager to share his own thinking on the subject. Commentators on this dialogue do not agree, however, on the meaning to be given to this sometimes obscure discussion of the essence of man. Several maintain that man is presented there as being essentially his soul, some that man is the union of body and soul, and still others that man is rather presented as the rational part of his soul. The three chapters of this thesis present and analyze the main arguments of each camp in order to settle the question. It is argued that in the Alcibiades man is, roughly speaking, his soul, but that more precisely he corresponds to the part of him that rules, namely his reason. It is also suggested that this conception of human nature is taken up elsewhere in the Platonic corpus.
Abstract:
Revision hip arthroplasty in cases of major acetabular bone deficiency can be difficult. Reconstruction with very-large-diameter, or "jumbo", cups (≥ 62 mm in women and ≥ 66 mm in men) is one therapeutic option. We aimed to evaluate the preservation and restoration of the centre of rotation of the reconstructed hip, comparing it with the healthy contralateral side or with the criteria of Pierchon et al. We also aimed to evaluate the stability of the construct at a follow-up of at least 2 years. The series comprised 53 consecutive cases of acetabular revision for aseptic loosening with implantation of a cementless jumbo cup at Hôpital Maisonneuve-Rosemont. Bone deficiency was assessed according to the classification of Paprosky et al. The implanted cups had a mean diameter of 66 mm (62-81) in women and 68 mm (66-75) in men. Morselized and massive bone allografts were used in 34 and 14 cases, respectively. The cup was positioned with a mean inclination angle of 41.3° (26.0-53.0). The centre of rotation of the reconstructed hip was judged satisfactory in 78% of cases on the mediolateral axis and 71% on the craniocaudal axis, and was improved in 27% of cases on the latter axis. At a mean radiological follow-up of 84.0 months (24.0-236.4) and clinical follow-up of 91.8 months (24.0-241.8), 6 patients had died and 3 were lost to follow-up. Radiological loosening was observed in 1 case, recurrent dislocation in 5 cases and infection in 4 cases. The cup was removed in 2 cases because of infection. Osseointegration of the bone grafts was complete in all cases except 3. The clinical scores were 82 ± 17 for the HHS, 86 ± 14 for the WOMAC, and 46 ± 12 (physical) and 53 ± 13 (mental) for the SF-12. The jumbo cup can be considered a reliable means of managing bone deficiency in acetabular revisions. It preserves or improves the position of the physiological centre of rotation of the hip. Cementless fixation promotes osseointegration of the cup and provides medium-term stability. The complication rate is comparable to or lower than that of other reconstruction procedures.
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms addressing significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes and columns represent experimental conditions, which can be different tissue types or time points, and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interactions, and their respective contributions to the same pathways. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. Data mining techniques are essential for identifying such patterns from gene expression data. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced: the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous in the original matrix, and biclusters need not be disjoint. Computing biclusters is costly because all combinations of rows and columns must be considered in order to find all the biclusters: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n exceeds 3000. The biclustering problem is NP-hard. Biclustering is nevertheless a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms use a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue below a given threshold.
All of these algorithms begin the search from tightly coregulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms were applied to the Yeast and Lymphoma datasets. All of them identify biologically relevant and statistically significant biclusters, as validated against the Gene Ontology database, and all are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of them, biclusters with very high row variance, higher than the row variance achieved by any other algorithm using the mean squared residue, are identified in both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
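Since every algorithm in the thesis scores candidate submatrices by the mean squared residue, a minimal NumPy sketch of that score may help; the function name and toy matrix are ours, and the formula is the usual Cheng-Church definition (each entry's residue against its row mean, column mean and overall mean):

    import numpy as np

    def mean_squared_residue(A, rows, cols):
        # MSR of the bicluster picked out by index lists `rows` and `cols`:
        # mean of (a_ij - rowmean_i - colmean_j + overallmean)^2 over the submatrix.
        sub = A[np.ix_(rows, cols)]
        row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
        col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
        overall = sub.mean()                          # a_IJ
        residue = sub - row_means - col_means + overall
        return float((residue ** 2).mean())

    # A perfectly additive (coherent) bicluster has MSR 0; noise raises the score.
    A = np.add.outer(np.array([0., 1., 3.]), np.array([0., 2., 5., 6.]))
    print(mean_squared_residue(A, [0, 1, 2], [0, 1, 2, 3]))  # -> 0.0

The search algorithms then grow or prune seeds while keeping this score below the chosen threshold.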
Abstract:
ic first-order transition line ending in a critical point. This critical point is responsible for the existence of large premartensitic fluctuations, which manifest as broad peaks in the specific heat that are not always associated with a true phase transition. The main conclusion is that premartensitic effects result from the interplay between the softness of the anomalous phonon driving the modulation and the magnetoelastic coupling. In particular, the premartensitic transition occurs when this coupling is strong enough to freeze the phonon mode involved. The implications of these results for the available experimental data are discussed.
Abstract:
We consider the effects of quantum fluctuations in mean-field quantum spin-glass models with pairwise interactions. We examine the nature of the quantum glass transition at zero temperature in a transverse field. In models (such as the random orthogonal model) where the classical phase transition is discontinuous, an analysis using the static approximation reveals that the transition becomes continuous at zero temperature.