13 results for Formal theories of truth

in Aston University Research Archive


Relevance: 100.00%

Abstract:

The thesis began as a study of new firm formation. Preliminary research suggested that the infant death rate of firms was a closely related problem, and the search was therefore for a theory of new firm formation which would explain both. The thesis finds theories of exit and entry inadequate in this respect and focusses instead on theories of entrepreneurship, particularly those which treat entrepreneurship as an agent of change. The role of information is found to be fundamental to economic change, and it is postulated that an understanding of information generation and dissemination, and of the nature and direction of information flows, leads coterminously to an understanding of entrepreneurship and economic change. The economics of information is applied to theories of entrepreneurship and some testable hypotheses are derived. The testing relies on establishing and measuring the information bases of the founders of new firms and then testing for certain hypothesised differences between the information bases of survivors and non-survivors. No theory of entrepreneurship is likely to be straightforwardly testable, and many postulates have to be established to bring the theory to a testable stage. A questionnaire is used to gather information from a sample of firms taken from a new micro-data set established as part of the work of the thesis. Discriminant analysis establishes the variables which best distinguish between survivors and non-survivors. The variables which emerge as important discriminators are consistent with the theory which the analysis is testing. While there are alternative interpretations of the important variables, collective consistency with the theory under test is established. The thesis concludes with an examination of the implications of the theory for policy towards stimulating new firm formation.
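A minimal sketch of the kind of discriminant analysis described, using hypothetical "information base" measures and synthetic placeholder data; the thesis's actual questionnaire variables are not reproduced here.

```python
# Illustrative sketch only: discriminant analysis separating survivors from
# non-survivors on invented founder "information base" measures.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic placeholder data: rows are founders, columns are hypothetical
# information-base measures (e.g. sector experience, prior managerial roles,
# breadth of business contacts). y: 1 = firm survived, 0 = firm failed.
X = rng.normal(size=(120, 3))
y = (X @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=1.0, size=120) > 0).astype(int)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The discriminant coefficients indicate which variables best distinguish
# the two groups, analogous to the thesis's use of the technique.
print("discriminant coefficients:", lda.coef_)
print("in-sample classification accuracy:", lda.score(X, y))
```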

Relevance: 100.00%

Abstract:

This thesis considers the main theoretical positions within the contemporary sociology of nationalism. These can be grouped into two basic types: primordialist theories, which assert that nationalism is an inevitable aspect of all human societies, and modernist theories, which assert that nationalism and the nation-state first developed within western Europe in recent centuries. With respect to primordialist approaches to nationalism, it is argued that the main common explanation offered is human biological propensity. Consideration is concentrated on the most recent and plausible of such theories, sociobiology. Sociobiological accounts root nationalism and racism in genetic programming which favours close kin, or rather in the redirection of this programming in complex societies, where the social group is not a kin group. It is argued that the stated assumptions of the sociobiologists do not entail the conclusions they draw as to the roots of nationalism, and that in order to arrive at such conclusions further and implausible assumptions have to be made. With respect to modernists, the first group of writers considered are those, represented by Carlton Hayes, Hans Kohn and Elie Kedourie, whose main thesis is that the nation-state and nationalism are recent phenomena. Next, the two major attempts to relate nationalism and the nation-state to imperatives specific either to capitalist societies (in the 'orthodox' marxist theory elaborated about the turn of the twentieth century) or to the processes of modernisation and industrialisation (the 'Weberian' account of Ernest Gellner) are discussed. It is argued that modernist accounts can only be sustained by starting from a definition of nationalism and the nation-state which conflates such phenomena with others specific to the modern world. The marxist and Gellner accounts form the necessary starting point for any explanation of why the nation-state is apparently the sole viable form of polity in the modern world, but their assumption that no pre-modern society was national leaves them without an adequate account of the earliest origins of the nation-state and of nationalism. Finally, a case study from the history of England argues that both a national state form and crucial components of a nationalist ideology were attained in a period not consistent with any of the versions of the modernist thesis.

Relevance: 100.00%

Abstract:

This paper shows that many structural remedies in a sample of European merger cases result in market structures which would probably not be cleared by the Competition Authority (CA) if they were the result of merger (rather than remedy). This is explained by the fact that the CA's objective in applying a remedy is to restore pre-merger competition, but markets are often highly concentrated even before the merger. If so, the CA must often choose between clearing an 'uncompetitive' merger or applying an unsatisfactory remedy. Here, the CA appears reluctant to intervene against coordinated effects if doing so enhances a leader's dominance.
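As a worked illustration of this dilemma (with hypothetical shares; the paper itself works with the leading firms' market shares rather than any specific concentration index), the standard Herfindahl-Hirschman index makes the point:

```latex
% HHI: the sum of squared market shares (in percentage points).
\[
  \mathrm{HHI} = \sum_{i} s_i^{2}
\]
% Hypothetical pre-merger shares (40, 30, 20, 10):
%   HHI = 1600 + 900 + 400 + 100 = 3000.
% Firms 2 and 3 merge, giving (50, 40, 10):
%   HHI = 2500 + 1600 + 100 = 4200.
% A divestment remedy restoring (40, 30, 20, 10) returns the HHI to 3000,
% still far above conventional thresholds for a highly concentrated market:
% the "restored" structure would itself be unlikely to be cleared.
```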

Relevance: 100.00%

Abstract:

Previous empirical assessments of the effectiveness of structural merger remedies have focused mainly on the subsequent viability of the divested assets. Here, we take a different approach by examining how competitive the market structures which result from the divestments are. We employ a tightly specified sample of markets in which the European Commission (EC) has imposed structural merger remedies. It has two key features: (i) it includes all mergers in which the EC appears to have seriously considered, simultaneously, the possibility of collective dominance as well as single dominance; (ii) in a previous paper, for the same sample, we estimated a model which proved very successful in predicting the Commission's merger decisions in terms of the market shares of the leading firms. The former allows us to explore the choices between alternative theories of harm, and the latter provides a yardstick for evaluating whether markets are competitive or not – at least in the eyes of the Commission. Running the hypothetical post-remedy market shares through the model, we can predict whether the EC would have judged the markets concerned to be competitive had they been the result of a merger rather than a remedy. We find that a significant proportion were not competitive in this sense. One explanation is that the EC has simply been inconsistent – using different criteria for assessing remedies from those for assessing the mergers in the first place. However, a more sympathetic – and in our opinion more likely – explanation is that the Commission is severely constrained by the pre-merger market structures in many markets. We show that, typically, divestment remedies return the market to the same structure as existed before the proposed merger. Indeed, one can argue that any competition authority should never do more than this. Crucially, however, we find that this pre-merger structure is often itself not competitive. We also observe an analogous picture in a number of markets where the Commission chose not to intervene: while the post-merger structure was not competitive, nor was the pre-merger structure. In those cases, however, the Commission preferred the former to the latter. In effect, in both scenarios, the EC was faced with a no-win decision. This immediately raises a follow-up question: why did the EC intervene for some but not for others, given that in all these cases some sort of anticompetitive structure would prevail? We show that, in this sample at least, the answer is often tied to the prospective rank of the merged firm post-merger. In particular, in those markets where the merged firm would not be the largest post-merger, we find a reluctance to intervene even where the resulting market structure is likely to be conducive to collective dominance. We explain this by a willingness to tolerate an outcome which may be conducive to tacit collusion if the alternative is the possibility of an enhanced position of single dominance for the market leader. Finally, because the sample is confined to cases brought under the 'old' EC Merger Regulation, we go on to consider how, if at all, these conclusions require qualification following the 2004 revisions, which, amongst other things, made interventions for non-coordinated behaviour possible without requiring that the merged firm be a dominant market leader.
Our main conclusions here are that the Commission appears to have been less inclined to intervene in general, but particularly for collective dominance (or 'coordinated effects', as it is now known in Europe as well as the US). Moreover, perhaps contrary to expectation, where the merged firm is #2, the Commission has to date rarely made a unilateral effects decision and has never made a coordinated effects decision.
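A minimal stand-in for the exercise described above: hypothetical post-remedy shares of the two leading firms are run through a stylised decision rule. The thresholds and rule below are invented for illustration and do not reproduce the paper's estimated model.

```python
# Illustrative sketch only: classify a post-remedy market structure by the
# shares (in %) of the two largest firms, in the spirit of the paper's
# share-based predictive model. Thresholds are hypothetical.

def predicted_concern(s1: float, s2: float) -> str:
    """Stylised theory-of-harm classification for the two leading firms."""
    if s1 >= 40.0 and s1 - s2 >= 15.0:
        return "single dominance"       # one clear market leader
    if s1 + s2 >= 60.0 and s1 - s2 < 15.0:
        return "collective dominance"   # two roughly symmetric leaders
    return "no competitive concern"

# Hypothetical post-remedy structures run through the rule:
for shares in [(45.0, 20.0), (35.0, 30.0), (25.0, 20.0)]:
    print(shares, "->", predicted_concern(*shares))
```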

Relevance: 100.00%

Abstract:

Semantic Web Services, one of the most significant research areas within the Semantic Web vision, have attracted increasing attention from both the research community and industry. The Web Service Modelling Ontology (WSMO) has been proposed as an enabling framework for the total or partial automation of the tasks (e.g., discovery, selection, composition, mediation, execution and monitoring) involved in both intra- and inter-enterprise integration of Web services. To underpin the standardisation of WSMO and the development of tool support, a formal model of the language is highly desirable. As several variants of WSMO have been proposed by the WSMO community and are still under development, the syntax and semantics of WSMO should be formally defined to facilitate reuse and future development. In this paper, we present a formal Object-Z model of WSMO, in which the different aspects of the language are precisely defined within one unified framework. This model not only provides an unambiguous definition that can be used to develop tools and to guide future development but, as demonstrated in this paper, can also be used to identify and eliminate errors in the existing documentation.
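The paper's Object-Z model is not reproduced here, but as a loose illustration of the set-based reading that such a formal model makes precise, the sketch below renders one WSMO task (discovery as capability matching) in Python; the class and field names are hypothetical, not part of WSMO or its tooling.

```python
# Illustrative sketch only: "plug-in" matching of a goal against a service
# capability, read as simple set containment.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    preconditions: frozenset   # what the service assumes
    postconditions: frozenset  # what the service guarantees

@dataclass(frozen=True)
class Goal:
    requested: frozenset       # postconditions the requester wants achieved

def matches(goal: Goal, cap: Capability) -> bool:
    # The service achieves at least everything the goal asks for.
    return goal.requested <= cap.postconditions

goal = Goal(requested=frozenset({"ticket_booked"}))
service = Capability(preconditions=frozenset({"valid_payment_card"}),
                     postconditions=frozenset({"ticket_booked", "receipt_issued"}))
print(matches(goal, service))  # True: the service satisfies the goal
```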

Relevance: 100.00%

Abstract:

Pre-eclampsia is a vascular disorder of pregnancy in which anti-angiogenic factors, systemic inflammation and oxidative stress predominate, but none of these alone can claim to cause pre-eclampsia. This review provides an alternative to the 'two-stage model' of pre-eclampsia, in which abnormal spiral artery modification leads to placental hypoxia, oxidative stress and aberrant maternal systemic inflammation. Very high maternal soluble fms-like tyrosine kinase-1 (sFlt-1, also known as sVEGFR) and very low placenta growth factor (PlGF) are unique to pre-eclampsia; however, abnormal spiral arteries and excessive inflammation are also prevalent in other placental disorders. Metaphorically speaking, pregnancy can be viewed as a car with an accelerator and brakes, where inflammation, oxidative stress and an imbalance in the angiogenic milieu act as the 'accelerator'. The 'braking system' includes the protective pathways of haem oxygenase 1 (also referred to as Hmox1 or HO-1) and cystathionine-γ-lyase (also known as CSE or Cth), which generate carbon monoxide (CO) and hydrogen sulphide (H2S) respectively. Failure of these pathways (the brakes) results in the pregnancy going out of control and the system crashing. Put simply, pre-eclampsia is an accelerator-brake defect disorder. CO and H2S hold great promise because of their unique ability to suppress the anti-angiogenic factors sFlt-1 and soluble endoglin, as well as to promote PlGF and endothelial NOS activity. The key to finding a cure lies in the identification of cheap, safe and effective drugs that induce the braking system to keep the pregnancy vehicle on track past the finishing line.

Relevance: 100.00%

Abstract:

Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in applications with safety implications, it is crucial that they are designed and developed to operate correctly. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with only some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding while the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between the two formalisms are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts them for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered, and the resulting nets are analysed using concurrency sets.
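A minimal sketch of the Petri-net machinery such an analysis rests on: markings as tuples of token counts, the usual enabling and firing rules, and a bounded breadth-first reachability exploration as a crude stand-in for the thesis's partial reachability graphs. The net and its labels are invented for illustration.

```python
# Illustrative sketch only: reachability exploration of a tiny Petri net.
from collections import deque

# Each transition: (pre, post) token vectors over the places of the net.
TRANSITIONS = {
    "start_machine": ((1, 0, 0), (0, 1, 0)),
    "finish_part":   ((0, 1, 0), (0, 0, 1)),
    "reset":         ((0, 0, 1), (1, 0, 0)),
}

def enabled(marking, pre):
    # A transition is enabled when every input place holds enough tokens.
    return all(m >= p for m, p in zip(marking, pre))

def fire(marking, pre, post):
    # Firing consumes the pre tokens and produces the post tokens.
    return tuple(m - p + q for m, p, q in zip(marking, pre, post))

def reachable(initial, limit=1000):
    """Breadth-first reachability, bounded so exploration stays partial."""
    seen, frontier = {initial}, deque([initial])
    while frontier and len(seen) < limit:
        m = frontier.popleft()
        for pre, post in TRANSITIONS.values():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen

print(reachable((1, 0, 0)))  # {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
```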

Relevance: 100.00%

Abstract:

A major application of computers has been the control of physical processes, in which the computer is embedded within some large physical process and is required to control several concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of its firing rules is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems; a standardised approach to the development of real-time process-control systems is therefore required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
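A minimal sketch of the SFC evolution (firing) rule of the kind the thesis formalises, following the informal IEC1131 reading rather than the thesis's own definition; the step and transition names are invented.

```python
# Illustrative sketch only: an SFC transition fires when all of its preceding
# steps are active and its condition holds; firing deactivates the preceding
# steps and activates the succeeding ones.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Transition:
    pre: frozenset                      # steps that must all be active
    condition: Callable[[dict], bool]   # guard over the current inputs
    post: frozenset                     # steps activated when it fires

def evolve(active: frozenset, transitions, inputs: dict) -> frozenset:
    """One evolution: all simultaneously fireable transitions fire together."""
    fired = [t for t in transitions if t.pre <= active and t.condition(inputs)]
    for t in fired:
        active = (active - t.pre) | t.post
    return active

# S1 --[start]--> S2 --[done]--> S1 (an invented two-step chart)
t1 = Transition(frozenset({"S1"}), lambda i: i["start"], frozenset({"S2"}))
t2 = Transition(frozenset({"S2"}), lambda i: i["done"], frozenset({"S1"}))
print(evolve(frozenset({"S1"}), [t1, t2], {"start": True, "done": False}))
# frozenset({'S2'})
```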

Relevance: 100.00%

Abstract:

This thesis attempts to re-examine the work of Jean-Luc Godard, and in particular the claims which have been made for it as the starting-point for a revolutionary cinema. This re-examination involves, firstly, a critical summary of the development of Structuralist thinking, from its origins in linguistics with Saussure through to its influence on Marxism with Althusser. It is this 'Structural Marxism' which prepared the ground for a view of Godard as a revolutionary film-maker, so its influence on film theory in the decade after 1968 is traced in journals such as Cahiers du Cinéma and Screen and in the work of their editors and contributors. Godard's relationship with such theories was a complex one, and some of the cross-breeding is revealed in a brief account of his own ideas about his film-making. More important, however, is his practice as a committed 'political' film-maker between 1968 and 1972, which is analysed in terms of the responses it makes to the cultural opportunities offered in the period after the revolutionary situation of May 1968. The severe problems revealed by that analysis may be partially resolved in Godard's greatest 'political' achievement, Tout va bien, but a comparative analysis proves that in earlier 'a-political' films such as Vivre sa vie he was creating more meaningful and perhaps even more revolutionary art, whose formal experimentation is more organically linked to its subject and whose ability to communicate ideas far outstrips the later work. In conclusion, some indications are suggested of a more fruitful basis for Marxist theories of art than the Structuralist variants, seeking a non-formalist approach in the work of Marx, Trotsky, Brecht and Lukács.

Relevance: 100.00%

Abstract:

While the need for humanising education is pressing in neoliberal societies, the conditions for its possibility in formal institutions have become particularly cramped. A constellation of factors – the strength of neoliberal ideologies, the corporatisation of universities, the conflation of human freedom with consumer satisfaction, and a wider crisis of hope in the possibility or desirability of social change – makes it difficult to apply classical theories of subject-transformation to new work in critical pedagogy. In particular, the growth of interest in pedagogies of comfort (as illustrated in certain forms of 'therapeutic' education and in concerns about student 'satisfaction') and resistance to critical pedagogies suggest that subjectivity has become a primary site of political struggle in education. However, it can no longer be assumed that educators can (or should) liberate students' repressed desires for 'humanisation' by politicising curricula, pedagogy or institutions. Rather, we must work to understand the new meanings and affective conditions of critical subjectivity itself. Bringing critical theories of subject transformation together with new work on 'pedagogies of discomfort', I suggest we can create new ways of opening up possibilities for critical education that respond to neoliberal subjectivities without corresponding to or affirming them.

Relevance: 100.00%

Abstract:

The purpose of this thesis is twofold: to examine the validity of the rotating-field and cross-field theories of the single-phase induction motor when applied to a cage rotor machine; and to examine the extent to which skin effect is likely to modify the characteristics of a cage rotor machine. A mathematical analysis is presented for a single-phase induction motor in which the rotor parameters are modified by skin effect. Although this is based on the usual type of ideal machine, a new form of model rotor allows approximations for skin effect phenomena to be included as an integral part of the analysis. Performance equations appropriate to the rotating-field and cross-field theories are deduced, and the corresponding explanations for the steady-state mode of operation are critically examined. The evaluation of the winding currents and developed torque is simplified by the introduction of new dimensionless factors which are functions of the resistance/reactance ratios of the rotor and the speed. Tables of the factors are included for selected numerical values of the parameter ratios, and these are used to deduce typical operating characteristics for both cage and wound rotor machines. It is shown that a qualitative explanation of the mode of operation of a cage rotor machine is obtained from either theory; but the operating characteristics must be deduced from the performance equations of the rotating-field theory, because of the restrictions on the values of the rotor parameters imposed by skin effect.
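For reference, the standard double revolving-field relations on which a rotating-field analysis of this kind rests; this is the textbook form, not the thesis's own notation or its dimensionless factors.

```latex
% Standard double revolving-field relations. For rotor slip s relative to
% the forward field, the slips seen by the forward and backward fields are
\[
  s_f = s, \qquad s_b = 2 - s,
\]
% and the net developed torque is the difference of the component torques,
\[
  T = T_f - T_b
    = \frac{1}{\omega_s}\!\left( I_f^{2}\,\frac{R_2'}{2s}
                               - I_b^{2}\,\frac{R_2'}{2(2-s)} \right),
\]
% where R_2' is the referred rotor resistance, I_f and I_b are the rotor
% currents due to the forward and backward fields, and \omega_s is the
% synchronous speed. Skin effect enters by making R_2' (and the rotor
% leakage reactance) functions of the rotor frequencies s f and (2-s) f,
% which differ for the two fields.
```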