897 results for implicit theories
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs, written in a subset of the Eiffel programming language without syntactic or semantic extensions, into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency, which in turn enables a simplified second model for implementing the compilation process. A set of principles is also presented that, if followed, maximises the potential level of parallelism.

Model of Concurrency. The concurrency model is designed as a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour that allows message interchange, locking, and synchronisation of objects to be incorporated easily. Moreover, the model is sufficiently complete that a compiler can be, and has been, built in practice.

Model of Compilation. The compilation model's structure is based on an object-oriented view of grammar descriptions, and it capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase.

Programming Principles. The principles presented are based on information hiding, sharing and containment of objects, and the division of methods along command/query lines. When followed, they maximise the level of potential parallelism within the presented concurrency model. Moreover, these principles arise naturally from good programming practice.

Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelism within the concurrency model.
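As a rough illustration of the command/query division on which these principles rest (a minimal sketch in Java rather than Eiffel, with invented names), a command mutates an object and returns nothing, while a query reports on state without changing it; a parallelising compiler can then safely overlap queries but must serialise commands on the same object:

```java
// Minimal sketch (Java, invented names) of a command/query division.
final class Account {
    private long balance;

    // Command: mutates state and returns nothing. Commands on the same
    // object must be serialised by a parallelising compiler.
    void deposit(long amount) {
        balance += amount;
    }

    // Query: observes state without side effects. Queries may safely be
    // evaluated in parallel with one another.
    long balance() {
        return balance;
    }
}
```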
Abstract:
The main aim of this thesis is to investigate the application of methods of differential geometry to the constraint analysis of relativistic high spin field theories. As a starting point the coordinate dependent descriptions of the Lagrangian and Dirac-Bergmann constraint algorithms are reviewed for general second order systems. These two algorithms are then respectively employed to analyse the constraint structure of the massive spin-1 Proca field from the Lagrangian and Hamiltonian viewpoints. As an example of a coupled field theoretic system the constraint analysis of the massive Rarita-Schwinger spin-3/2 field coupled to an external electromagnetic field is then reviewed in terms of the coordinate dependent Dirac-Bergmann algorithm for first order systems. The standard Velo-Zwanziger and Johnson-Sudarshan inconsistencies that this coupled system seemingly suffers from are then discussed in light of this full constraint analysis and it is found that both these pathologies degenerate to a field-induced loss of degrees of freedom. A description of the geometrical version of the Dirac-Bergmann algorithm developed by Gotay, Nester and Hinds begins the geometrical examination of high spin field theories. This geometric constraint algorithm is then applied to the free Proca field and to two Proca field couplings; the first of which is the minimal coupling to an external electromagnetic field whilst the second is the coupling to an external symmetric tensor field. The onset of acausality in this latter coupled case is then considered in relation to the geometric constraint algorithm.
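For orientation, the free Proca field mentioned above provides the textbook starting point for the Dirac-Bergmann analysis (standard form, not necessarily the thesis's own notation):

```latex
\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \tfrac{1}{2}m^{2}A_{\mu}A^{\mu},
\qquad
\pi^{\mu} = \frac{\partial\mathcal{L}}{\partial\dot{A}_{\mu}} = -F^{0\mu}
\;\Longrightarrow\;
\phi_{1} = \pi^{0} \approx 0,
\quad
\phi_{2} = \partial_{i}\pi^{i} + m^{2}A_{0} \approx 0 .
```

Here the vanishing momentum conjugate to the time component is the primary constraint, and demanding its preservation in time generates the secondary constraint; the pair is second class, removing two phase-space degrees of freedom per point and leaving the three polarisations of a massive spin-1 field.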
Abstract:
The thesis began as a study of new firm formation. Preliminary research suggested that the death rate of infant firms was a closely related problem, and the search was for a theory of new firm formation which would explain both. The thesis finds theories of exit and entry inadequate in this respect and focusses instead on theories of entrepreneurship, particularly those which treat entrepreneurship as an agent of change. The role of information is found to be fundamental to economic change, and an understanding of information generation and dissemination, and of the nature and direction of information flows, is postulated to lead coterminously to an understanding of entrepreneurship and economic change. The economics of information is applied to theories of entrepreneurship and some testable hypotheses are derived. The testing relies on establishing and measuring the information bases of the founders of new firms and then testing for certain hypothesised differences between the information bases of survivors and non-survivors. No theory of entrepreneurship is likely to be straightforwardly testable, and many postulates have to be established to bring the theory to a testable stage. A questionnaire is used to gather information from a sample of firms taken from a new micro-data set established as part of the work of the thesis. Discriminant analysis establishes the variables which best distinguish between survivors and non-survivors. The variables which emerge as important discriminators are consistent with the theory which the analysis is testing. While there are alternative interpretations of the important variables, their collective consistency with the theory under test is established. The thesis concludes with an examination of the implications of the theory for policy towards stimulating new firm formation.
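For readers less familiar with the technique, two-group linear discriminant analysis of the kind used here scores each founder's information base by a linear combination of the measured variables; in a standard textbook formulation (not the thesis's own notation):

```latex
w \;\propto\; \Sigma^{-1}\left(\bar{x}_{\text{surv}} - \bar{x}_{\text{non}}\right),
\qquad
z_{i} = w^{\top} x_{i},
```

where Σ is the pooled within-group covariance matrix; the variables with the largest standardised weights in w are those that best discriminate between survivors and non-survivors.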
Abstract:
This thesis considers the main theoretical positions within the contemporary sociology of nationalism. These can be grouped into two basic types: primordialist theories, which assert that nationalism is an inevitable aspect of all human societies, and modernist theories, which assert that nationalism and the nation-state first developed within western Europe in recent centuries. With respect to primordialist approaches to nationalism, it is argued that the main common explanation offered is human biological propensity. Consideration is concentrated on the most recent and plausible of such theories, sociobiology. Sociobiological accounts root nationalism and racism in genetic programming which favours close kin, or rather in the redirection of this programming in complex societies, where the social group is not a kin group. It is argued that the stated assumptions of the sociobiologists do not entail the conclusions they draw as to the roots of nationalism, and that in order to arrive at such conclusions further and implausible assumptions have to be made. With respect to modernists, the first group of writers considered are those, represented by Carlton Hayes, Hans Kohn and Elie Kedourie, whose main thesis is that the nation-state and nationalism are recent phenomena. Next, the two major attempts to relate nationalism and the nation-state to imperatives specific either to capitalist societies (in the 'orthodox' marxist theory elaborated around the turn of the twentieth century) or to the processes of modernisation and industrialisation (the 'Weberian' account of Ernest Gellner) are discussed. It is argued that modernist accounts can only be sustained by starting from a definition of nationalism and the nation-state which conflates such phenomena with others which are specific to the modern world. The marxist and Gellner accounts form the necessary starting point for any explanation of why the nation-state is apparently the sole viable form of polity in the modern world, but their assumption that no pre-modern society was national leaves them without an adequate account of the earliest origins of the nation-state and of nationalism. Finally, a case study from the history of England argues that both a national state form and crucial components of a nationalist ideology were achieved in a period not consistent with any version of the modernist thesis.
Abstract:
The aims of this study were to investigate the beliefs concerning the philosophy of science held by practising science teachers and to relate those beliefs to their pupils' understanding of the philosophy of science. Three philosophies of science, differing in the way they relate experimental work to other parts of the scientific enterprise, are described. By the use of questionnaire techniques, teachers of four extreme types were identified. These are: the H type or hypothetico-deductivist teacher, who sees experiments as potential falsifiers of hypotheses or of logical deductions from them; the I type or inductivist teacher, who regards experiments mainly as a way of increasing the range of observations available for recording before patterns are noted and inductive generalisation is carried out; the V type or verificationist teacher, who expects experiments to provide proof and to demonstrate the truth or accuracy of scientific statements; and the 0 type, who has no discernible philosophical beliefs about the nature of science or its methodology. Following interviews of selected teachers to check their responses to the questionnaire and to determine their normal teaching methods, an experiment was organised in which parallel groups were given H, I and V type teaching in the normal school situation during most of one academic year. Using pre-test and post-test scores on a specially developed test of pupil understanding of the philosophy of science, it was shown that pupils were positively affected by their teacher's implied philosophy of science. There was also some indication that V type teaching improved marks obtained in school science examinations, but appeared to discourage the more able from continuing the study of science. Effects were also noted on vocabulary used by pupils to describe scientists and their activities.
Abstract:
This paper shows that many structural remedies in a sample of European merger cases result in market structures which would probably not be cleared by the Competition Authority (CA) if they were the result of a merger (rather than a remedy). This is explained by the fact that the CA's objective in imposing a remedy is to restore pre-merger competition, but markets are often highly concentrated even before the merger. If so, the CA must often choose between clearing an 'uncompetitive' merger and applying an unsatisfactory remedy. Here, the CA appears reluctant to intervene against coordinated effects if doing so enhances a leader's dominance.
Abstract:
Previous empirical assessments of the effectiveness of structural merger remedies have focused mainly on the subsequent viability of the divested assets. Here, we take a different approach by examining how competitive the market structures which result from the divestments are. We employ a tightly specified sample of markets in which the European Commission (EC) has imposed structural merger remedies. It has two key features: (i) it includes all mergers in which the EC appears to have seriously considered, simultaneously, the possibility of collective dominance as well as single dominance; (ii) in a previous paper, for the same sample, we estimated a model which proved very successful in predicting the Commission's merger decisions in terms of the market shares of the leading firms. The former allows us to explore the choices between alternative theories of harm, and the latter provides a yardstick for evaluating whether markets are competitive or not – at least in the eyes of the Commission. Running the hypothetical post-remedy market shares through the model, we can predict whether the EC would have judged the markets concerned to be competitive, had they been the result of a merger rather than a remedy. We find that a significant proportion were not competitive in this sense. One explanation is that the EC has simply been inconsistent – using different criteria for assessing remedies from those for assessing the mergers in the first place. However, a more sympathetic – and in our opinion, more likely – explanation is that the Commission is severely constrained by the pre-merger market structures in many markets. We show that, typically, divestment remedies return the market to the same structure as existed before the proposed merger. Indeed, one can argue that any competition authority should never do more than this. Crucially, however, we find that this pre-merger structure is often itself not competitive. We also observe an analogous picture in a number of markets where the Commission chose not to intervene: while the post-merger structure was not competitive, nor was the pre-merger structure. In those cases, however, the Commission preferred the former to the latter. In effect, in both scenarios, the EC was faced with a no-win decision. This immediately raises a follow-up question: why did the EC intervene for some, but not for others, given that in all these cases some sort of anticompetitive structure would prevail? We show that, in this sample at least, the answer is often tied to the prospective rank of the merged firm post-merger. In particular, in those markets where the merged firm would not be the largest post-merger, we find a reluctance to intervene even where the resulting market structure is likely to be conducive to collective dominance. We explain this by a willingness to tolerate an outcome which may be conducive to tacit collusion if the alternative is the possibility of an enhanced position of single dominance by the market leader. Finally, because the sample is confined to cases brought under the 'old' EC Merger Regulation, we go on to consider how, if at all, these conclusions require qualification following the 2004 revisions, which, amongst other things, made interventions against non-coordinated behaviour possible without requiring that the merged firm be a dominant market leader.
Our main conclusions here are that the Commission appears to have been less inclined to intervene in general, but particularly against collective dominance (or 'coordinated effects', as it is now known in Europe as well as the US). Moreover, perhaps contrary to expectation, where the merged firm would be only the second-largest, the Commission has to date rarely made a unilateral effects decision and has never made a coordinated effects decision.
Abstract:
A significant body of scholarly and practitioner-based research has developed in recent years that seeks both to theorize upon and to measure empirically the competitiveness of regions. However, the disparate and fragmented nature of this work has left the various analyses and measurement methodologies without a substantive theoretical foundation. The aim of this paper is to place the regional competitiveness discourse within the context of theories of economic growth, and more particularly those concerning regional economic growth. It is argued that regional competitiveness models are usually implicitly constructed in the lineage of endogenous growth frameworks, whereby deliberate investments in factors such as human capital and knowledge are considered to be key drivers of growth differentials. This leads to the suggestion that regional competitiveness can be usefully defined as the capacity and capability of regions to achieve economic growth relative to other regions at a similar overall stage of economic development, which will usually be within their own nation or continental bloc. The paper further assesses future avenues for theoretical and methodological exploration, highlighting the role of institutions, resilience, and well-being in understanding how the competitiveness of regions influences their long-term evolution.
Abstract:
The purpose of this thesis is twofold: to examine the validity of the rotating-field and cross-field theories of the single-phase induction motor when applied to a cage rotor machine; and to examine the extent to which skin effect is likely to modify the characteristics of a cage rotor machine. A mathematical analysis is presented for a single-phase induction motor in which the rotor parameters are modified by skin effect. Although this is based on the usual type of ideal machine, a new form of model rotor allows approximations for skin effect phenomena to be included as an integral part of the analysis. Performance equations appropriate to the rotating-field and cross-field theories are deduced, and the corresponding explanations for the steady-state mode of operation are critically examined. The evaluation of the winding currents and developed torque is simplified by the introduction of new dimensionless factors which are functions of the resistance/reactance ratios of the rotor and the speed. Tables of the factors are included for selected numerical values of the parameter ratios, and these are used to deduce typical operating characteristics for both cage and wound rotor machines. It is shown that a qualitative explanation of the mode of operation of a cage rotor machine is obtained from either theory; but the operating characteristics must be deduced from the performance equations of the rotating-field theory, because of the restrictions on the values of the rotor parameters imposed by skin effect.
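The rotating-field theory referred to above rests on the standard decomposition of a pulsating single-phase MMF into two counter-rotating components (textbook form, not the thesis's notation):

```latex
F(\theta, t) = F_{\max}\cos\theta\,\cos\omega t
= \tfrac{1}{2}F_{\max}\cos(\theta - \omega t)
+ \tfrac{1}{2}F_{\max}\cos(\theta + \omega t).
```

A rotor running at slip s with respect to the forward field runs at slip 2 − s with respect to the backward field, so the backward-field rotor currents are at frequency (2 − s)f, which is where skin effect alters the effective rotor resistance and leakage reactance most strongly; this is the sense in which skin effect restricts the admissible rotor parameter values.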
Abstract:
After exogenously cueing attention to a peripheral location, the return of attention and response to that location can be inhibited. We demonstrate that these inhibitory mechanisms of attention can be associated with objects and can be automatically and implicitly retrieved over relatively long periods. Furthermore, we show that when face stimuli are associated with inhibition, the effect is more robust for faces presented in the left visual field. The effect can be even more spatially specific, with the most robust inhibition obtained for faces presented in the upper as compared to the lower visual field. Finally, we show that the inhibition is associated with an object's identity, as inhibition moves with an object to a new location, and that the retrieved inhibition is only transiently present after retrieval.
Abstract:
In the present paper we investigate the life cycles of formalized theories that appear in decision-making instruments and in science. In brief, mixed theories are built in the following steps. Initially, a small collection of facts forms the kernel of the theory. To express these facts we construct a special formalized language. When the collection grows, we add some inference rules, and thus some axioms, to compress the knowledge. The next step is to generalize these rules to all expressions in the formalized language. For these rules we introduce a conclusion procedure. In this way we make small theories for restricted fields of knowledge. The most important procedure is the mixing of these partial knowledge systems: in that step we glue the theories together and eliminate the contradictions. This last operation is the most complicated one, and some simplifying procedures are proposed.
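As a toy illustration of the gluing step (invented for this summary, and far cruder than the simplifying procedures the paper proposes), each partial theory can be represented as a set of signed literals; when two theories are merged, any literal whose negation is also asserted is discarded:

```java
import java.util.HashSet;
import java.util.Set;

// Toy illustration (invented here) of gluing two partial theories:
// facts are signed literals such as "p" and "!p"; a merged literal is
// kept only if its negation is absent, one crude way of "eliminating
// the contradictions".
final class TheoryMerge {
    static Set<String> glue(Set<String> a, Set<String> b) {
        Set<String> merged = new HashSet<>(a);
        merged.addAll(b);
        Set<String> consistent = new HashSet<>();
        for (String literal : merged) {
            String negation = literal.startsWith("!")
                    ? literal.substring(1) : "!" + literal;
            if (!merged.contains(negation)) {
                consistent.add(literal); // keep only uncontradicted facts
            }
        }
        return consistent;
    }

    public static void main(String[] args) {
        Set<String> t1 = Set.of("p", "q");
        Set<String> t2 = Set.of("!q", "r");
        System.out.println(glue(t1, t2)); // prints a set containing p and r
    }
}
```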
Abstract:
Pre-eclampsia is a vascular disorder of pregnancy in which anti-angiogenic factors, systemic inflammation and oxidative stress predominate, but none of these alone can claim to cause pre-eclampsia. This review provides an alternative to the 'two-stage model' of pre-eclampsia, in which abnormal spiral artery modification leads to placental hypoxia, oxidative stress and aberrant maternal systemic inflammation. Very high maternal soluble fms-like tyrosine kinase-1 (sFlt-1, also known as sVEGFR) and very low placental growth factor (PlGF) are unique to pre-eclampsia; however, abnormal spiral arteries and excessive inflammation are also prevalent in other placental disorders. Metaphorically speaking, pregnancy can be viewed as a car with an accelerator and brakes, where inflammation, oxidative stress and an imbalance in the angiogenic milieu act as the 'accelerator'. The 'braking system' includes the protective pathways of haem oxygenase 1 (also referred to as Hmox1 or HO-1) and cystathionine-γ-lyase (also known as CSE or Cth), which generate carbon monoxide (CO) and hydrogen sulphide (H2S) respectively. The failure of these pathways (the brakes) results in the pregnancy going out of control and the system crashing. Put simply, pre-eclampsia is an accelerator-brake defect disorder. CO and H2S hold great promise because of their unique ability to suppress the anti-angiogenic factors sFlt-1 and soluble endoglin as well as to promote PlGF and endothelial NOS activity. The key to finding a cure lies in the identification of cheap, safe and effective drugs that induce the braking system to keep the pregnancy vehicle on track past the finishing line.
Abstract:
Java software and libraries can evolve via subclassing. Unfortunately, subclassing may not properly support code adaptation when there are dependencies between classes. More precisely, subclassing within a collection of related classes may require the reimplementation of otherwise valid classes. This problem is known as the subclassing anomaly, and it is an issue whenever software evolution or code reuse is a goal of a programmer using existing classes. Object Teams offers an implicit fix to this problem and is largely compatible with existing JVMs. In this paper, we evaluate how well Object Teams provides a solution for a complex, real-world project. Our results indicate that while Object Teams is a suitable solution for simple examples, it does not meet the requirements of large-scale projects. The reasons why Object Teams fails in certain usages may prove useful to those who create linguistic modifications to languages or who seek new methods for code adaptation.
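To make the anomaly concrete, here is a minimal Java sketch (invented names; the paper's own evaluation uses a larger real-world project) in which two collaborating classes must be subclassed together:

```java
// Minimal sketch of the subclassing anomaly (invented names).
class Node { /* domain state and behaviour */ }

class Graph {
    // Instantiation is hard-wired to Node.
    Node addNode() { return new Node(); }
}

// A perfectly valid extension of Node...
class CountedNode extends Node {
    static int created;
    CountedNode() { created++; }
}

// ...still forces a parallel subclass of Graph, even though none of
// Graph's own logic changed: addNode() would otherwise keep producing
// plain Nodes. In a large family of collaborating classes this cascades,
// which is the reimplementation burden called the subclassing anomaly.
class CountedGraph extends Graph {
    @Override
    CountedNode addNode() { return new CountedNode(); }
}
```

Object Teams approaches this by grouping related role classes inside a team, so that a family of collaborating classes can be extended as a unit rather than one subclass at a time.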