952 results for Analytic number theory
Abstract:
The recently developed semiclassical variational Wigner-Kirkwood (VWK) approach is applied to finite nuclei using external potentials and self-consistent mean fields derived from Skyrme interactions and from relativistic mean field theory. VWK consists of the Thomas-Fermi part plus a pure, perturbative ℏ² correction. In external potentials, VWK passes through the average of the quantal values of the accumulated level density and total energy as a function of the Fermi energy. However, there is a problem of overbinding when the energy per particle is displayed as a function of the particle number. The situation is analyzed by comparing spherical and deformed harmonic oscillator potentials. In the self-consistent case, we show for Skyrme forces that VWK binding energies are very close to those obtained from extended Thomas-Fermi functionals of order ℏ⁴, pointing to the rapid convergence of the VWK theory. This satisfying result, however, does not cure the overbinding problem, i.e., the semiclassical energies show more binding than they should. This feature is more pronounced for Skyrme forces than for the relativistic mean field approach. However, even in the latter case the shell correction energy for, e.g., ²⁰⁸Pb turns out to be only ∼ −6 MeV, which is about a factor of two or three off the generally accepted value. As an ad hoc remedy, increasing the kinetic energy by 2.5% leads to shell correction energies that are well acceptable throughout the periodic table. The general importance of the present studies for other finite Fermi systems, self-bound or in external potentials, is pointed out.
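To make the bookkeeping in that abstract concrete, the following minimal Python sketch (invented placeholder numbers, not results from the paper) shows how a shell correction is obtained as the difference between a quantal energy and a semiclassical VWK energy, and how the ad hoc remedy of rescaling the semiclassical kinetic energy enters.

```python
# Minimal sketch of the shell-correction bookkeeping described above.
# All numbers are invented placeholders, not results from the paper:
# e_quantal, e_tf, e_hbar2 and e_kin would come from actual quantal and
# semiclassical (VWK = Thomas-Fermi + hbar^2 correction) calculations.

def shell_correction(e_quantal, e_tf, e_hbar2, e_kin, kin_rescale=0.0):
    """Shell correction = quantal energy minus semiclassical (VWK) energy.

    kin_rescale mimics the ad hoc remedy of increasing the semiclassical
    kinetic energy by a given fraction (e.g. 0.025 for 2.5%).
    """
    e_vwk = e_tf + e_hbar2 + kin_rescale * e_kin
    return e_quantal - e_vwk

# Purely hypothetical numbers (MeV) to show the mechanics:
e_q, e_tf, e_h2, e_kin = -1630.0, -1640.0, 12.0, 400.0
print(shell_correction(e_q, e_tf, e_h2, e_kin))         # no remedy
print(shell_correction(e_q, e_tf, e_h2, e_kin, 0.025))  # with +2.5% kinetic energy
```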
Abstract:
As a discipline, logic is arguably constituted of two main sub-projects: formal theories of argument validity on the basis of a small number of patterns, and theories of how to reduce the multiplicity of arguments in non-logical, informal contexts to the small number of patterns whose validity is systematically studied (i.e. theories of formalization). Regrettably, we now tend to view logic 'proper' exclusively as what falls under the first sub-project, to the neglect of the second, equally important sub-project. In this paper, I discuss two historical theories of argument formalization: Aristotle's syllogistic theory as presented in the "Prior Analytics", and medieval theories of supposition. They both illustrate this two-fold nature of logic, containing in particular illuminating reflections on how to formalize arguments (i.e. the second sub-project). In both cases, the formal methods employed differ from the usual modern technique of translating an argument in ordinary language into a specially designed symbolism, a formal language. The upshot is thus a plea for a broader conceptualization of what it means to formalize.
Abstract:
In this article I intend to show that certain aspects of A.N. Whitehead's philosophy of organism, and especially his epochal theory of time as mainly expounded in his well-known work Process and Reality, can serve to clarify the underlying assumptions that shape nonstandard mathematical theories as such and also as metatheories of quantum mechanics. Concerning the latter issue, I point to an already significant body of research on nonstandard versions of quantum mechanics; two of these approaches are chosen to be critically presented in relation to the scope of this work. The main point of the paper is that, insofar as we can refer a nonstandard mathematical entity to a kind of axiomatic formalization essentially 'codifying' an underlying mental process indescribable as such by analytic means, we can possibly apply certain principles of Whitehead's metaphysical scheme focused on the key notion of process, which is generally conceived as the becoming of actual entities. This is done in the sense of a unifying approach that provides an interpretation of nonstandard mathematical theories as such and also, in their metatheoretical status, as a formalization of the empirical-experimental context of quantum mechanics.
Abstract:
For a number of reasons, social responsibility has become a more essential part of corporate business operations than before. Corporate social responsibility (CSR) is addressed through different means and from different aspects, but its overall effect on an organisation's performance, communication and underlying actions is indisputable. The thesis describes corporate social responsibility, and the main objective was to observe how corporate social responsibility has developed in our case company by answering the main research question: how has CSR reporting evolved in UPM-Kymmene Oyj? In addition, the following questions were addressed: Is there a monetary value of CSR? What does a proficient CSR report consist of? What does corporate social responsibility consist of? A qualitative research method, content analysis to be precise, was chosen, and an extensive literature study was performed to establish the theoretical background for the empirical part of the study. Data for the empirical part were collected from UPM-Kymmene Oyj financial data and annual reports. The study shows that UPM-Kymmene Oyj's engagement with CSR and its reporting of CSR matters have improved over time, but a few managerial implications could still be identified. UPM-Kymmene Oyj's economic key figures only build shareholder value, and stakeholders are identified at a very general level. CSR data are also scattered throughout the annual report, which causes problems for readers. The scientific importance of this thesis arises from the profound, holistic way in which CSR has been addressed. It thus gives a good basis for understanding the underlying reasons for CSR, from society towards the organisation and vice versa.
Abstract:
Permanent magnet synchronous machines (PMSM) have become widely used because of their high efficiency compared to synchronous machines with an excitation winding or to induction motors. This feature of the PMSM is achieved through the use of permanent magnets (PM) as the main excitation source. The magnetic properties of the PM have a significant influence on all the PMSM characteristics. Recent observations of PM material properties in rotating machines have revealed that the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes them prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short-circuit faults. The main task of this thesis is to derive a simple analytical tool which can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operation, and at the moment of a three-phase symmetrical short circuit that is worst from the PM's point of view. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model is based on the combination of two theories: magnetic circuit theory and space vector theory. A comparison of the results for the normal operating mode obtained from finite element software with the results calculated with the proposed model shows good accuracy of the model in the parts of the PM that are most prone to hysteresis losses. The comparison of the results for the three-phase symmetrical short circuit revealed significant inaccuracy of the proposed model compared with the finite element results. The reasons for this inaccuracy are analyzed, including the impact of the Carter factor theory and of the assumption that the air has the same permeability as the PM. Propositions for further development of the model are presented.
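As a point of reference for the magnetic-circuit side of such a model, here is a hedged, textbook-style sketch (not the thesis's actual tool) of a 1D lumped magnetic circuit estimate of the operating flux density of a surface-mounted PM, with a Carter-corrected air gap and an optional demagnetizing armature MMF; all parameter values are invented.

```python
# A minimal, textbook-style sketch (not the thesis's actual model): a 1D
# lumped magnetic circuit estimate of the operating flux density of a
# surface-mounted PM, with a Carter-corrected air gap and an optional
# demagnetizing armature MMF. All parameter values below are invented.

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability [H/m]

def pm_flux_density(b_r, mu_r, l_m, g, k_carter=1.0, f_demag=0.0):
    """Operating flux density of the magnet [T].

    b_r      : remanence [T]
    mu_r     : relative recoil permeability of the magnet
    l_m      : magnet thickness in the magnetization direction [m]
    g        : mechanical air gap [m]
    k_carter : Carter factor (effective gap = k_carter * g)
    f_demag  : demagnetizing MMF across one magnet/gap path [A-turns]
    """
    g_eff = k_carter * g
    return (b_r * l_m - MU0 * mu_r * f_demag) / (l_m + mu_r * g_eff)

# Hypothetical NdFeB-like magnet, at no load and under a demagnetizing MMF:
print(pm_flux_density(1.2, 1.05, 4e-3, 1e-3, k_carter=1.1))
print(pm_flux_density(1.2, 1.05, 4e-3, 1e-3, k_carter=1.1, f_demag=2000.0))
```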
Abstract:
We have calculated the thermodynamic properties of monatomic fcc crystals from the high-temperature limit of the Helmholtz free energy. This equation of state included the static and vibrational energy components. The latter contribution was calculated to order λ⁴ of perturbation theory, for a range of crystal volumes, using a nearest-neighbour central force model. We have calculated the lattice constant, the coefficient of volume expansion, the specific heat at constant volume and at constant pressure, the adiabatic and the isothermal bulk modulus, and the Grüneisen parameter, for two of the rare gas solids, Xe and Kr, and for the fcc metals Cu, Ag, Au, Al, and Pb. The Lennard-Jones and the Morse potential were each used to represent the atomic interactions for the rare gas solids, and only the Morse potential was used for the fcc metals. The thermodynamic properties obtained from the λ⁴ equation of state with the Lennard-Jones potential seem to be in reasonable agreement with experiment for temperatures up to about three-quarters of the melting temperature; however, for higher temperatures the results are less than satisfactory. For Xe and Kr, the thermodynamic properties calculated from the λ² equation of state with the Morse potential are qualitatively similar to the λ² results obtained with the Lennard-Jones potential; however, the properties obtained from the λ⁴ equation of state are in good agreement with experiment, since the contribution from the λ⁴ terms seems to be small. The lattice contribution to the thermal properties of the fcc metals was calculated from the λ⁴ equation of state, and these results produced a slight improvement over the properties calculated from the λ² equation of state. In order to compare the calculated specific heats and bulk moduli with experiment, the electronic contribution to the thermal properties was taken into account by using the free electron model. We found that the results varied significantly with the value chosen for the number of free electrons per atom.
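For reference, a minimal sketch of the two pair potentials named above and of the nearest-neighbour central-force static energy of an fcc crystal follows; the parameter values are invented placeholders, not the values fitted in the thesis.

```python
# Minimal sketch of the two pair potentials named above and of the
# nearest-neighbour central-force static energy of an fcc crystal.
# Parameter values are invented placeholders, not fitted thesis values.
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """Lennard-Jones pair potential: 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    x = sigma / r
    return 4.0 * epsilon * (x**12 - x**6)

def morse(r, d, a, r0):
    """Morse pair potential: D*[exp(-2a(r-r0)) - 2*exp(-a(r-r0))], minimum -D at r0."""
    e = np.exp(-a * (r - r0))
    return d * (e**2 - 2.0 * e)

def fcc_static_energy_per_atom(a_lat, pair_potential):
    """Static energy per atom in the nearest-neighbour central-force model:
    12 neighbours at a_lat/sqrt(2), each pair counted once (factor 1/2)."""
    r_nn = a_lat / np.sqrt(2.0)
    return 0.5 * 12 * pair_potential(r_nn)

# Hypothetical Kr-like parameters (energies in eV, lengths in angstroms):
eps, sig = 0.0143, 3.65
print(fcc_static_energy_per_atom(5.6, lambda r: lennard_jones(r, eps, sig)))
print(fcc_static_energy_per_atom(5.6, lambda r: morse(r, 0.015, 1.6, 4.0)))
```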
Abstract:
This essay reviews the decision-making process that led to India exploding a nuclear device in May 1974. An examination of the Analytic, Cybernetic and Cognitive theories of decision will enable a greater understanding of the events that led up to the 1974 test. While each theory is seen to be only partially useful, it is only by synthesising the three theories that a comprehensive account of the 1974 test can be given. To achieve this analysis, literature on decision-making in national security issues is reviewed, as well as the domestic and international environment in which the decision-makers involved operated. Finally, the rationale for the test in 1974 is examined. The conclusion reached is that the explosion of a nuclear device by India in 1974 was primarily related to improving Indian international prestige among Third World countries and uniting a rapidly disintegrating Indian societal consensus. In themselves, individual decision-making theories were found to be of little use, but a combination of the various elements allowed a greater comprehension of the events leading up to the test than might otherwise have been the case.
Abstract:
A review of the literature reveals that there are a number of children in the educational system who are characterized by Attention Deficit Disorder. Further review of the literature reveals that there are information-processing programs which have had some success in increasing the learning of these children. Currently, an information-processing program based on schema theory is being implemented in Lincoln County. Since schema-theory-based programs build structural, conditional, factual, and procedural schemata which assist the learner in attending to salient factors, learning should be increased. Thirty-four children were selected from a random sampling of Grade Seven classes in Lincoln County. Seventeen of these children were identified by the researcher and classroom teacher as being characterized by Attention Deficit Disorder. From the remaining population, 17 children who were not characterized by Attention Deficit Disorder were randomly selected. The data collected were compared using independent t-tests, paired t-tests, and correlation analysis. Significant differences were found in all cases. The non-Attention Deficit Disorder children scored significantly higher on all the tests, but the Attention Deficit Disorder children had a significantly higher ratio of gain between the pretests and posttests.
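As a minimal illustration of the comparisons named in that abstract (independent t-tests, paired t-tests, and correlation analysis), the following hedged sketch uses scipy on invented placeholder scores, not the study's data.

```python
# Hedged sketch of the three analyses mentioned above, run on invented
# placeholder scores (17 per group), not on the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
add_pre   = rng.normal(40, 8, 17)             # hypothetical ADD pretest scores
add_post  = add_pre  + rng.normal(12, 4, 17)  # hypothetical ADD posttest scores
nadd_pre  = rng.normal(55, 8, 17)             # hypothetical non-ADD pretest scores
nadd_post = nadd_pre + rng.normal(8, 4, 17)   # hypothetical non-ADD posttest scores

# Independent t-test: do the two groups differ on the posttest?
t_ind, p_ind = stats.ttest_ind(nadd_post, add_post)

# Paired t-test: pretest-to-posttest gain within the ADD group.
t_pair, p_pair = stats.ttest_rel(add_post, add_pre)

# Correlation between pretest and posttest scores, pooled across groups.
r, p_r = stats.pearsonr(np.concatenate([add_pre, nadd_pre]),
                        np.concatenate([add_post, nadd_post]))

print(f"independent t = {t_ind:.2f} (p = {p_ind:.3f})")
print(f"paired t      = {t_pair:.2f} (p = {p_pair:.3f})")
print(f"Pearson r     = {r:.2f} (p = {p_r:.3f})")
```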
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt & Lansman, 1975). 2) Having previous practice on a task, where the strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that their learning would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a non-practice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The non-practice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that: 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous-practice variable was significant over all segments of the experiment: those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies for storing, searching and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, the amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
Abstract:
Fifty-six percent of Canadians, 20 years of age and older, are inactive (Canadian Community Health Survey, 2000/2001). Research has indicated that one of the most dramatic declines in population physical activity occurs between adolescence and young adulthood (Melina, 2001; Stephens, Jacobs, & White, 1985), a time when individuals this age are entering or attending college or university. Colleges and universities have generally been seen as environments where physical activity and sport can be promoted and accommodated as a result of the available resources and facilities (Archer, Probert, & Gagne, 1987; Suminski, Petosa, Utter, & Zhang, 2002). Intramural sports, one of the most common campus recreational sport options available to post-secondary students, enable students to participate in activities that are suited to different levels of ability and interest (Lewis, Jones, Lamke, & Dunn, 1998). While intramural sports can positively affect the physical activity levels and sport participation rates of post-secondary students, their true value lies in their ability to encourage sport participation after school ends and during the post-school lives of graduates (Forrester, Ross, Geary, & Hall, 2007). This study used the Sport Commitment Model (Scanlan et al., 1993a) and the Theory of Planned Behaviour (Ajzen, 1991) with post-secondary intramural volleyball participants in an effort to examine students' commitment to intramural sport and their intentions to participate in intramural sports. More specifically, the research objectives of this study were to: (1) test the Sport Commitment Model with a sample of post-secondary intramural sport participants; (2) determine the utility of the sixth construct, social support, in explaining the sport commitment of post-secondary intramural sport participants; (3) determine whether there are any significant differences in the six constructs of the SCM and sport commitment between genders, levels of competition (competitive A vs. B), and number of different intramural sports played; (4) determine whether there are any significant differences between sport commitment levels and constructs from the Theory of Planned Behaviour (attitudes, subjective norms, perceived behavioural control, and intentions); (5) determine the relationship between sport commitment and the intention to continue participation in intramural volleyball, to continue participating in intramurals, and to continue participating in sport and physical activity after graduation; and (6) determine whether the level of sport commitment changes the relationship between the constructs from the Theory of Planned Behaviour. Of the 318 surveys distributed, 302 participants from the sample of post-secondary intramural sport participants completed a usable survey. There was a fairly even split of males and females; the average age of the students was twenty-one; 90% were undergraduate students; for approximately 25% of the students, volleyball was the only intramural sport they participated in at Brock, and most were part of the volleyball competitive B division. Based on the post-secondary students' responses, there are indications of intent to continue participation in sport and physical activity. The participation of the students is predominantly influenced by subjective norms, high sport commitment, and high sport enjoyment.
This implies that students expect, intend and want to participate in intramurals in the future, that they are very dedicated to playing on an intramural team and would be willing to do a lot to keep playing, and that they want to participate when they perceive their pursuits as enjoyable, fun, and a source of happiness. These are key areas that should be targeted and pursued by sport practitioners.
Abstract:
Roughly speaking, Enron has done for reflection on corporate governance what AIDS did for research on the immune system. So far, however, virtually all of this reflection on and subsequent reform of governance has come from those with a stake in the success of modern capitalism. This paper identifies a number of governance challenges for critics of capitalism, and in particular for those who urge corporations to voluntarily adopt missions of broader social responsibility and equal treatment for all stakeholder groups. I argue that by generally neglecting the governance relation between shareholders and senior managers, stakeholder theorists have underestimated the way in which shareholder-focused governance can be in the interests of all stakeholder groups. The enemy, if you will, is not capitalists (shareholders), but greedy, corrupt or incompetent managers. A second set of governance challenges for stakeholder theorists concerns their largely untested proposals for governance reforms that would require managers to act in the interests of all stakeholders and not just shareholders; in other words to treat shareholders as just another stakeholder group. I suggest that in such a governance regime it may be almost impossible to hold managers accountable to anyone – just as it was when state-owned enterprises were given “multi-stakeholder” mandates in the 1960s and 1970s.
Abstract:
The attached file was created with Scientific WorkPlace LaTeX.
Abstract:
The capability approach has undergone spectacular development over the past twenty-five years. Although originally formulated by Amartya Sen, winner of the Nobel Prize in economics, Martha Nussbaum took up this approach with the aim of using it as the foundation for a comprehensive ethico-political theory of the good. However, Nussbaum's version proved particularly vulnerable to several important criticisms, casting serious doubt on its overall effectiveness. In light of these facts, this thesis aims to assess the theoretical and practical relevance of Nussbaum's capability approach by examining three groups of particularly forceful criticisms levelled against it.
Abstract:
In this work, we extend the number of currently known physical conditions on the exact exchange hole by deriving the fourth-order expansion of the exact spherically averaged exchange hole. We compare the second- and fourth-order expansions with the exact exchange hole for atomic and molecular systems. We find that, in general, the fourth-order expansion reproduces the exact exchange hole more faithfully at small values of the interelectronic distance. We show that Gaussian-type basis sets have a significant influence on the terms of this new condition, by studying how the oscillations caused by these basis sets affect its first term. We also propose four analytic exchange hole models on which we impose all the currently known conditions of the exact exchange hole together with the new one presented in this work. We evaluate the performance of the models by computing exchange energies and their contributions to atomization energies. We find that the oscillations caused by Gaussian-type basis sets can compromise the accuracy and the solution of the models.
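As a hedged illustration of how such a model hole is evaluated (the Gaussian-shaped hole below is purely illustrative, not one of the four models proposed in the work), the sketch checks the normalization sum rule of a spherically averaged exchange hole and computes the corresponding per-electron exchange energy.

```python
# Hedged illustration (not one of the thesis's models): given a model
# spherically averaged exchange hole h(u), check the normalization sum rule
#     4*pi * integral( u^2 * h(u) du ) = -1
# and evaluate the per-electron exchange energy
#     eps_x = 2*pi * integral( u * h(u) du ).
# h(u) here is a hypothetical Gaussian-shaped hole chosen only for illustration.
import numpy as np
from scipy.integrate import quad

a = 2.0                                   # arbitrary inverse-length^2 parameter
A = a**1.5 / np.pi**1.5                   # fixes the sum rule analytically
h = lambda u: -A * np.exp(-a * u**2)      # model spherically averaged hole

norm,  _ = quad(lambda u: 4*np.pi * u**2 * h(u), 0.0, np.inf)
eps_x, _ = quad(lambda u: 2*np.pi * u * h(u),    0.0, np.inf)

print(norm)                       # ~ -1 (sum rule satisfied)
print(eps_x, -np.sqrt(a/np.pi))   # numerical vs analytic -sqrt(a/pi)
```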
Abstract:
The synthesis of so-called photorealistic images requires numerically evaluating how light and matter interact physically, which, despite the impressive and ever-increasing computing power available today, is still far from being a trivial task for our computers. This is largely due to the way we represent objects: in order to reproduce the subtle interactions that lead to the perception of detail, phenomenal amounts of geometry must be modelled. At render time, this complexity inevitably leads to heavy input/output requests which, coupled with the evaluation of complex filtering operators, make the computation times required to produce flawless images completely unreasonable. To overcome these limitations under current constraints, a multiscale representation of matter must be derived. In this thesis, we build such a representation for matter whose interface corresponds to a displaced surface, a configuration that is usually built from height maps in computer graphics. We derive our representation within the framework of microfacet theory (originally designed to model the reflectance of rough surfaces), which we first present and then extend in two steps. First, we make the theory applicable across several observation scales by generalizing it to non-centered microfacet statistics. Second, we derive an inversion procedure capable of reconstructing microfacet statistics from the reflectance responses of an arbitrary material in retroreflective configurations. We show how this extended theory can be exploited to derive a general and efficient operator for the approximate resampling of height maps that (a) preserves the anisotropy of light transport at any resolution, (b) can be applied before rendering and stored in MIP maps so as to drastically reduce the number of input/output requests, and (c) considerably simplifies per-pixel filtering operations, all of which leads to shorter rendering times. To validate and demonstrate the efficiency of our operator, we synthesize antialiased photorealistic images and compare them to reference images. In addition, we provide a complete C++ implementation throughout the dissertation to make the results easy to reproduce. We conclude with a discussion of the limitations of our approach, as well as the remaining obstacles to deriving an even more general multiscale representation of matter.
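As a hedged illustration of the general idea of prefiltering non-centered slope statistics into MIP maps (a generic sketch, not the thesis's actual operator, and in Python rather than the C++ of the dissertation), the following code stores the first and second moments of a height map's slopes in a MIP chain and recovers mean slopes and slope variances at a coarse level.

```python
# Minimal sketch of prefiltered, non-centered slope statistics: compute a
# height map's slopes, store their first and second moments in successive
# MIP levels, and recover mean slope and slope variance at any resolution.
# Generic illustration only; not the operator derived in the thesis.
import numpy as np

def slope_moments(heights, texel_size=1.0):
    """First and second moments of the slopes (dh/dx, dh/dy) of a height map."""
    dhdy, dhdx = np.gradient(heights, texel_size)
    return np.stack([dhdx, dhdy, dhdx**2, dhdy**2, dhdx*dhdy], axis=-1)

def mip_chain(moments):
    """Box-filtered MIP chain of the moment texture (sizes assumed power of two)."""
    levels = [moments]
    while levels[-1].shape[0] > 1:
        m = levels[-1]
        levels.append(0.25 * (m[0::2, 0::2] + m[1::2, 0::2] +
                              m[0::2, 1::2] + m[1::2, 1::2]))
    return levels

# Hypothetical bumpy height map; read back statistics from one coarse texel.
rng = np.random.default_rng(1)
h = rng.normal(0.0, 0.05, (256, 256))
levels = mip_chain(slope_moments(h))
mx, my, mxx, myy, mxy = levels[4][0, 0]          # MIP level 4, texel (0, 0)
var_x, var_y = mxx - mx**2, myy - my**2          # centered variances from raw moments
print(mx, my, var_x, var_y)
```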