37 results for "Third World Approach to International Law" in Aston University Research Archive
Abstract:
This thesis challenges the consensual scholarly expectation of low EU impact in Central Asia. In particular, it claims that by focusing predominantly on narrow, micro-level factors, the prevailing theoretical perspectives risk overlooking less obvious aspects of the EU's power, including structural aspects, and thus tend to underestimate the EU's leverage in the region. Therefore, the thesis argues that a more structurally integrative and holistic approach is needed to understand the EU's power in the region. In responding to this need, the thesis introduces a conceptual tool, which it terms 'transnational power over' (TNPO). Inspired by debates in IPE, in particular new realist and critical IPE perspectives, and combining these views with insights from neorealist, neo-institutionalist and constructivist approaches to EU external relations, the concept of TNPO is an analytically eclectic notion, which helps to assess the degree to which, in today's globalised and interdependent world, the EU's power over third countries derives from its control over a combination of material, institutional and ideational structures, making it difficult for the EU's partners to resist the EU's initiatives or to reject its offers. In order to trace and assess the mechanisms of EU impact across these three structures, the thesis constructs a toolbox, which centres on four analytical distinctions: (i) EU-driven versus domestically driven mechanisms, (ii) mechanisms based on rationalist logics of action versus mechanisms following constructivist logics of action, (iii) agent-based versus purely structural mechanisms of TNPO, and (iv) transnational versus intergovernmental mechanisms of EU impact. Using qualitative research methodology, the thesis then applies the conceptual model to the case of EU-Central Asia.
It finds that the EU's power over Central Asia effectively derives from its control over a combination of material, institutional and ideational structures, including its position as a leader in trade and investment in the region, its (geo)strategic and security-related capabilities vis-à-vis Central Asia, as well as the relatively dense level of institutionalisation of its relations with the five countries and the positive image of the EU in Central Asia as a more neutral actor.
Abstract:
Bridging the contending theories of natural law and international relations, this book proposes a 'relational ontology' as the basis for rethinking our approach to international politics. The book contains a number of challenging and controversial ideas on the study of international political thought which should provoke constructive debate within international relations theory, political theory, and philosophical ethics. © Amanda Russell Beattie 2010. All rights reserved.
Abstract:
The concept of plagiarism is not uncommonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of ‘moral’, nonmaterial goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from being universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit differently, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the grounds that one definition is clearly understandable by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is particularly a problem in non-native writing in English, and so did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If among higher education students plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist’s unfamiliarity with the text or topic it should be considered ‘positive plagiarism’ (Howard, 1995: 796) and hence not an offense. The intention behind an instance of plagiarism therefore determines the nature of the disciplinary action adopted.
Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as ‘plagiarism police’, although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to command investigative skills and tools that they most often lack. We thus claim that the ‘intention to deceive’ cannot invariably be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that generally plagiarism is immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism is often connoted with an illegal appropriation of ideas. Earlier, Turell (2004) had demonstrated, by comparing four Spanish translations of Shakespeare’s Julius Caesar, that linguistic evidence can establish instances of plagiarism. This challenge is also reinforced by the practice of international organisations, such as the IEEE, for which plagiarism potentially has ‘severe ethical and legal consequences’ (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common – and what academia usually lacks – is their focus on the legal nature of plagiarism. We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence is still being developed on the topic.
In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist’s intention or otherwise, this potential is restricted by the ability to identify a text as suspected of plagiarism. In an era of such massive textual production, ‘policing’ plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection must of necessity consider not only concepts of education and computational linguistics, but also forensic linguistics, especially if it is to counter claims of being a ‘simplistic response’ (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with related words from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate instances of ‘patchwriting’ from instances of plagiarism. This research demonstrates that when the plagiarism is inadvertent, the referencing verbs are borrowed from the original in an attempt to construct the new text cohesively, and that when it is intentional, the plagiarist has made an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus ‘betray’ and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not – mistakenly and simplistically – as plagiarism.
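The referencing-verb analysis described above can be caricatured in code. The sketch below flags shared referencing verbs and a preserved function-word skeleton as weak signals of borrowing that survive synonym substitution; the tiny word lists and the example sentences are illustrative assumptions, far simpler than a real forensic linguistic analysis.

```python
# Crude sketch: shared referencing verbs and a preserved function-word
# skeleton between an original and a suspect sentence, as weak signals
# of borrowing even when content words have been swapped for synonyms.

REFERENCING_VERBS = {"argues", "claims", "states", "demonstrates", "suggests"}
FUNCTION_WORDS = {"the", "that", "of", "in", "a", "an", "to", "and", "is"}

def borrowing_signals(original, suspect):
    o, s = original.lower().split(), suspect.lower().split()
    # Referencing verbs present in both texts
    shared_refs = REFERENCING_VERBS & set(o) & set(s)
    # Function words in their order of appearance ("skeleton")
    o_skel = [w for w in o if w in FUNCTION_WORDS]
    s_skel = [w for w in s if w in FUNCTION_WORDS]
    return {"shared_referencing_verbs": shared_refs,
            "same_function_word_order": o_skel == s_skel}

orig = "Smith argues that the evidence of fraud is clear"
susp = "Smith argues that the proof of deceit is clear"  # synonym swaps
print(borrowing_signals(orig, susp))
```

Here the content words change but the referencing verb and the function-word skeleton are carried over intact, which is precisely the kind of cue the paper argues detection software should weigh.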
Abstract:
The thesis presented an overlapping analysis of private law institutions, in response to the arguments that law must be separated into discrete categories. The basis of this overlapping approach was the realist perspective, which emphasises the role of facts and outcomes as the starting point for legal analysis as opposed to legal principle or doctrine.
Abstract:
Today, the question of how to reduce supply chain costs whilst increasing customer satisfaction continues to be the focus of many firms. It is noted in the literature that supply chain automation can increase flexibility whilst reducing inefficiencies. However, in the dynamic and process-driven environment of distribution, there is no cohesive automation approach to guide companies in improving network competitiveness. This paper aims to address this gap in the literature by developing a three-level automation application framework with the assistance of radio frequency identification (RFID) technology and returnable transport equipment (RTE). The first level considers the automation of data retrieval and highlights the benefits of RFID. The second level consists of automating distribution processes such as unloading and assembling orders. As labour is reduced with the introduction of RFID-enabled robots, the balance between automation and labour is discussed. Finally, the third level is an analysis of the decision-making process at network points and the application of cognitive automation to objects. A distribution network scenario is formed and used to illustrate network reconfiguration at each level. The research pinpoints that RFID-enabled RTE offers a viable tool to assist supply chain automation. Further research is proposed, in particular in the area of cognitive automation to aid decision-making.
Abstract:
In this paper we present a new approach to ontology learning. Its basis lies in a dynamic and iterative view of knowledge acquisition for ontologies. The Abraxas approach is founded on three resources – a set of texts, a set of learning patterns and a set of ontological triples – each of which must remain in equilibrium. As events occur which disturb this equilibrium, various actions are triggered to re-establish a balance between the resources. Such events include the acquisition of a further text from external resources such as the Web, or the addition of ontological triples to the ontology. We develop the concept of a knowledge gap between the coverage of an ontology and the corpus of texts as a measure triggering actions. We present an overview of the algorithm and its functionalities.
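As a rough illustration of the knowledge-gap idea, the sketch below measures how much of a corpus's vocabulary an ontology covers; the set-based term representation and the example terms are simplifying assumptions of this sketch, not the Abraxas implementation.

```python
def knowledge_gap(corpus_terms, ontology_terms):
    """Fraction of corpus terms not yet covered by the ontology.

    A simplified stand-in for a knowledge-gap measure: when the gap
    exceeds some threshold, learning actions would be triggered to
    restore equilibrium between the resources (texts, patterns,
    triples).
    """
    corpus_terms = set(corpus_terms)
    if not corpus_terms:
        return 0.0
    uncovered = corpus_terms - set(ontology_terms)
    return len(uncovered) / len(corpus_terms)

# Terms seen in newly acquired texts vs. terms already in the triples:
gap = knowledge_gap({"cat", "dog", "mammal", "fur"}, {"cat", "mammal"})
print(gap)  # 0.5
```

A newly acquired text that raises this gap would, in the equilibrium view, trigger actions such as applying learning patterns to propose new triples.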
Abstract:
Physiological changes that take place at the cellular level are usually reflective of the level of gene expression. Different formulation excipients have an impact on the physiological behaviour of the exposed cells and in turn affect transporter genes, enterocyte-mediated metabolism and toxicity biomarkers. The aim of this study was to prepare a solid dispersion of paracetamol and evaluate the genetic changes that occur in Caco-2 cell lines during the permeability of paracetamol alone and of paracetamol solid dispersion formulations. A paracetamol-PEG 8000 solid dispersion was prepared by the melt fusion method and the formulation was characterised using differential scanning calorimetry (DSC), scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR). Formulation of the solid dispersion resulted in the conversion of the crystalline drug into an amorphous form. Permeability studies showed that paracetamol absorption was higher from the solid dispersion formulation. DNA microarray analysis was carried out in order to investigate the involvement of any efflux/uptake transporters in the permeability of paracetamol or its solid dispersion. Neither transporter carriers nor efflux proteins were found to be involved in the absorption of paracetamol or its PEG solid dispersion. Gene expression analysis established that paracetamol toxicity was potentially reduced upon formulation into a solid dispersion when ATP-binding cassette (ABC) and solute carrier transporter (SLC) genes were analysed.
Abstract:
Petroleum pipelines are the nervous system of the oil industry, as they transport crude oil from sources to refineries and petroleum products from refineries to demand points. Therefore, the efficient operation of these pipelines determines the effectiveness of the entire business. Pipeline route selection plays a major role when designing an effective pipeline system, as the health of the pipeline depends on its terrain. The present practice of route selection for petroleum pipelines is governed by factors such as the shortest distance, constructability, minimal effects on the environment, and approachability. Although this reduces capital expenditure, it often proves to be uneconomical when life cycle costing is considered. This study presents a route selection model applying the Analytic Hierarchy Process (AHP), a multiple attribute decision making technique. AHP considers all the above factors, along with operability and maintainability factors, interactively. The system is demonstrated here through a case study of pipeline route selection from an Indian perspective. A cost-benefit comparison of the shortest route (conventionally selected) and the optimal route establishes the effectiveness of the model.
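To illustrate how AHP turns pairwise judgements into priority weights, here is a minimal sketch using the row geometric mean approximation to the principal eigenvector; the three criteria and the comparison values on Saaty's 1-9 scale are hypothetical, not taken from the study's case.

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority weights via the row geometric mean,
    a standard approximation to the principal eigenvector of a
    pairwise comparison matrix."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical pairwise comparisons of three route-selection criteria:
# shortest distance, constructability, environmental impact.
pairwise = [
    [1,     3,   5],   # distance judged 3x constructability, 5x environment
    [1/3,   1,   2],
    [1/5, 1/2,   1],
]
weights = ahp_priorities(pairwise)
print([round(w, 3) for w in weights])
```

The weights sum to one and rank the criteria; in a full AHP route model, each candidate route would then be scored against every criterion and the weighted scores compared.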
Abstract:
A chip shooter machine in printed circuit board (PCB) assembly has three movable mechanisms: an X-Y table carrying a PCB, a feeder carrier with several feeders holding components, and a rotary turret with multiple assembly heads to pick up and place components. To minimise the placement or assembly time for a PCB on the machine, the components on the board should be placed in an optimal sequence; each component should be set up on the right feeder, or feeders, since two feeders can hold the same type of component; and the assembly head should pick up each component from the right feeder. The entire problem is very complicated, and this paper presents a genetic algorithm approach to tackle it.
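As a rough illustration of the genetic algorithm idea, the sketch below evolves only the placement sequence for a handful of hypothetical component positions, using swap mutation and truncation selection; it ignores feeder assignment and turret timing, so it is a simplified toy rather than the paper's algorithm.

```python
import random

def tour_time(seq, pos):
    """Simplified cost: total rectilinear head/table travel over the
    placement sequence. A real chip-shooter model would also cover
    feeder assignment and turret timing, omitted here."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in zip(seq, seq[1:]))

def ga_sequence(pos, generations=200, pop_size=30, seed=1):
    """Evolve a placement order (a permutation) minimising tour_time."""
    rng = random.Random(seed)
    n = len(pos)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: tour_time(s, pos))
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)     # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children             # elitist replacement
    return min(pop, key=lambda s: tour_time(s, pos))

positions = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # hypothetical coords
best = ga_sequence(positions)
print(best, tour_time(best, positions))
```

Because candidate solutions stay permutations throughout, every individual is a feasible placement sequence; richer encodings in the literature add feeder set-up genes alongside the sequence.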
Abstract:
This article examines the adoption, by the New Labour government, of a mixed communities approach to the renewal of disadvantaged neighbourhoods in England. It argues that while there are continuities with previous policy, the new approach represents a more neoliberal policy turn in three respects: its identification of concentrated poverty as the problem; its faith in market-led regeneration; and its alignment with a new urban policy agenda in which cities are gentrified and remodelled as sites for capital accumulation through entrepreneurial local governance. The article then draws on evidence from the early phases of the evaluation of the mixed community demonstration projects to explore how the new policy approach is playing out at a local level, where it is layered upon existing policies, politics and institutional relationships. Tensions between neighbourhood and strategic interests, community and capital are evident as the local projects attempt neighbourhood transformation, while seeking to protect the rights and interests of existing residents. Extensive community consultation efforts run parallel with emergent governance structures, in which local state and capital interests combine and communities may effectively be disempowered. Policies and structures are still evolving and it is not yet entirely clear how these tensions will be resolved, especially in the light of a collapsing housing market, increased poverty and demand for affordable housing, and a shortage of private investment.
Abstract:
Studies of political dynamics between multinational enterprise (MNE) parents and subsidiaries during subsidiary role evolution have focused largely on control and resistance. This paper adopts a critical discursive approach to enable an exploration of subtle dynamics in the way that both headquarters and subsidiaries subjectively reconstruct their independent-interdependent relationships with each other during change. We draw from a real-time qualitative study of a revealing case of charter change in an important European subsidiary of an MNE attempting to build closer integration across European country operations. Our results illustrate the role of three discourses – selling, resistance and reconciliation – in the reconstruction of the subsidiary–parent relationship. From this analysis we develop a process framework that elucidates the important role of these three discourses in the reconstruction of subsidiary roles, showing how resistance is not simply subversive but an important part of integration. Our findings contribute to a better understanding of the micro-level political dynamics in subsidiary role evolution, and of how voice is exercised in MNEs. This study also provides a rare example of discourse-based analysis in an MNE context, advancing our knowledge of how discursive methods can help to advance international business research more generally.
Abstract:
The role of beneficiaries in the humanitarian supply chain is highlighted in the imperative to meet their needs but disputed in terms of their actual decision-making and purchasing power. This paper discusses the use of a beneficiary-focused, community-based approach in the case of a post-crisis housing reconstruction programme. In the community-based approach, beneficiaries become active members of the humanitarian supply chain. Implications of this community-based approach are discussed in the light of supply chain design and aid effectiveness. © 2010 Taylor & Francis.
Abstract:
The ERS-1 Satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish if the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method to model multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, that incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. 
The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes the maximum a posteriori probability wind field retrieved from the posterior distribution as the prediction. For the third technique, Markov Chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and make predictions based on the mode with the greatest mass associated with it. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
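The third technique's mode-mass idea can be illustrated with a toy one-dimensional bimodal posterior: estimate each mode's probability mass from samples and predict the heavier mode. The mixture weights, spreads and direct sampling scheme below are illustrative assumptions, not the thesis's wind-field model or its MCMC transition kernel.

```python
import math
import random

# Toy stand-in for the ambiguous wind-direction posterior: a bimodal
# mixture with modes roughly 180 degrees apart (assumed weights
# 0.4/0.6, spread 0.3 rad).
rng = random.Random(0)

def sample_posterior():
    """Draw one sample from the two-mode mixture (a stand-in for an
    MCMC sampler exploring the posterior)."""
    if rng.random() < 0.4:
        return rng.gauss(0.0, 0.3)       # mode near 0 rad
    return rng.gauss(math.pi, 0.3)       # mode near pi rad

def mode_masses(samples, split=math.pi / 2):
    """Estimate each mode's probability mass by partitioning the
    samples at the midpoint between the two modes."""
    n1 = sum(1 for s in samples if s < split)
    return n1 / len(samples), 1 - n1 / len(samples)

samples = [sample_posterior() for _ in range(10_000)]
m1, m2 = mode_masses(samples)
chosen = 0.0 if m1 > m2 else math.pi     # predict the heavier mode
print(round(m1, 2), round(m2, 2), chosen)
```

The same logic, applied per wind field over the significant modes of the posterior, is what distinguishes the mode-mass prediction from simply taking the single MAP wind field.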
Abstract:
The traditional waterfall software life cycle model has several weaknesses. One problem is that a working version of a system is unavailable until a late stage in the development; any omissions and mistakes in the specification that remain undetected until that stage can be costly to rectify. The operational approach, which emphasises the construction of executable specifications, can help to remedy this problem. An operational specification may be exercised to generate the behaviours of the specified system, thereby serving as a prototype to facilitate early validation of the system's functional requirements. Recent ideas have centred on using an existing operational method such as JSD in the specification phase of object-oriented development. An explicit transformation phase following specification is necessary in this approach because differences in abstractions between the two domains need to be bridged. This research explores an alternative approach of developing an operational specification method specifically for object-oriented development. By incorporating object-oriented concepts in operational specifications, the specifications have the advantage of directly facilitating implementation in an object-oriented language without requiring further significant transformations. In addition, object-oriented concepts can help the developer manage the complexity of the problem domain specification, whilst providing the user with a specification that closely reflects the real world, so that the specification and its execution can be readily understood and validated. A graphical notation has been developed for the specification method which can capture the dynamic properties of an object-oriented system. A tool has also been implemented comprising an editor to facilitate the input of specifications, and an interpreter which can execute the specifications and graphically animate the behaviours of the specified systems.
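The notion of exercising an executable, object-oriented specification can be sketched minimally as follows: a class encodes the allowed event sequences of a domain object, and running it generates (and checks) behaviours. The library-book example and its event vocabulary are hypothetical, not the notation or tool developed in the thesis.

```python
class BookSpec:
    """Executable specification of a library book's life cycle:
    acquire -> (borrow -> return)* -> discard.
    Exercising the class generates behaviours, serving as an early
    prototype for validating functional requirements."""

    def __init__(self):
        self.state = "acquired"
        self.trace = ["acquire"]          # behaviour generated so far

    def borrow(self):
        assert self.state == "acquired", "can only borrow a shelved book"
        self.state = "on-loan"
        self.trace.append("borrow")

    def return_(self):
        assert self.state == "on-loan", "can only return a borrowed book"
        self.state = "acquired"
        self.trace.append("return")

    def discard(self):
        assert self.state == "acquired", "cannot discard a borrowed book"
        self.state = "discarded"
        self.trace.append("discard")

# Exercising the specification yields a valid behaviour; an invalid
# event sequence (e.g. discarding a borrowed book) fails immediately.
book = BookSpec()
book.borrow(); book.return_(); book.discard()
print(book.trace)
```

Because the specification is itself object-oriented code, the step from validated specification to implementation is direct, which is the transformation-avoiding advantage the abstract describes.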