18 results for researching and writing the EU (see also integration theory in this section)
in Aston University Research Archive
Abstract:
The contributions in this research are split into three distinct, but related, areas. The work focuses on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are not desirable, so a balance must be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and the potential for use as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated while buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric introduced and show that the objective and subjective scores are closely correlated.
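As an illustration of the buffer-monitoring idea behind such a continuity metric, the minimal sketch below replays a hypothetical packet-arrival trace through a playout buffer and records stall statistics. This is not the thesis's implementation: the trace, the playback rate, the startup buffering and the way the stall statistics are combined into a single figure are assumptions made here purely for illustration.

```python
"""Illustrative sketch only (not the thesis's code): replay a packet-arrival
trace through a playout buffer and record stall (pause) statistics, from
which a pause-intensity style continuity measure could be derived."""

def simulate_playback(arrival_times, packets_per_second=25, startup_packets=25):
    """arrival_times: sorted packet arrival times in seconds (e.g. from a TCP trace).
    Returns total stall time, number of stalls, and session duration."""
    play_interval = 1.0 / packets_per_second                 # playback consumes packets at a fixed rate
    clock = arrival_times[min(startup_packets, len(arrival_times)) - 1]  # initial buffering delay
    stalls, stall_time = 0, 0.0
    for t_arrive in arrival_times:
        if t_arrive > clock:                                  # buffer underrun: packet not yet available
            stalls += 1
            stall_time += t_arrive - clock
            clock = t_arrive                                  # playback resumes when the packet arrives
        clock += play_interval                                # consume the packet
    duration = clock - arrival_times[0]
    return stall_time, stalls, duration

# Hypothetical trace: steady 25 packet/s arrivals with a 2-second transport-induced gap.
trace = [i * 0.04 for i in range(100)] + [6.0 + i * 0.04 for i in range(100)]
stall_time, stalls, duration = simulate_playback(trace)

# One plausible way to fold stall duration and frequency into a single figure;
# the precise definition of pause intensity used in the thesis may differ.
pause_intensity = (stall_time / duration) * stalls
print(f"stalls={stalls}, stall_time={stall_time:.2f}s, pause_intensity={pause_intensity:.3f}")
```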
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
In the global economy, innovation is one of the most important competitive assets for companies willing to compete in international markets. As competition moves from standardised products to customised ones, tailored to the needs of each specific market, economies of scale are no longer the only winning strategy. Innovation requires firms to establish processes to acquire and absorb new knowledge, leading to the recent theory of Open Innovation. Knowledge sharing and acquisition happen when firms are embedded in networks with other firms, universities, institutions and many other economic actors. Several typologies of innovation and firm networks have been identified, with various geographical spans. One of the first to be modelled was the Industrial Cluster (Distretto Industriale in Italian), which was long considered the benchmark for innovation and economic development. Other kinds of networks have been modelled since the late 1970s; Regional Innovation Systems represent one of the latest and most widespread models of innovation networks, specifically introduced to combine local networks and the global economy. This model has been explored qualitatively since its introduction but, together with National Innovation Systems, it is among the most inspiring for policy makers and is often cited by them, not always properly. The aim of this research is to set up an econometric model describing Regional Innovation Systems, one of the first attempts to test and enhance this theory with a quantitative approach. A dataset of 104 observations of secondary and primary data from European regions was built in order to run a multiple linear regression, testing whether Regional Innovation Systems are really correlated with regional innovation and with regional innovation in cooperation with foreign partners. Furthermore, an exploratory multiple linear regression was performed to verify which variables, among those describing a Regional Innovation System, are the most significant for innovating, alone or with foreign partners. The effectiveness of present innovation policies was also tested against the findings of the econometric model. The developed model confirmed the role of Regional Innovation Systems in creating innovation, including in cooperation with international partners: this represents one of the first quantitative confirmations of a theory previously based on qualitative models only. The results also indicated a minor influence of National Innovation Systems: comparing the analysis of existing innovation policies, at both regional and national level, with our findings revealed the need for a potentially pivotal change in the direction currently followed by policy makers. Lastly, while confirming the role of the presence of a learning environment in a region and the catalytic role of regional administration, this research offers a potential new perspective for the whole private sector in creating a Regional Innovation System.
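For readers unfamiliar with the method, the sketch below shows what a multiple linear regression of this general kind looks like in practice. It is purely illustrative: the file name, the regressors and the dependent variable are hypothetical stand-ins, not the indicators actually used in the study.

```python
"""Illustrative sketch only: a region-level multiple linear regression of the
kind described in the abstract. Column names are invented for illustration."""
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per region (the study used 104 European regions).
df = pd.read_csv("regions.csv")          # assumed file of region-level indicators

y = df["patents_per_capita"]             # stand-in proxy for regional innovation output
X = df[["ris_strength",                  # composite Regional Innovation System indicator
        "nis_strength",                  # national-level (NIS) indicator
        "learning_environment",          # presence of a learning environment
        "regional_admin_support"]]       # role of the regional administration
X = sm.add_constant(X)                   # add intercept

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())                   # coefficients, t-statistics, R^2

# The same specification can be re-run with a dependent variable measuring
# innovation developed in cooperation with foreign partners, as in the study.
```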
Abstract:
Whilst some authors have portrayed the Internet as a powerful tool for business and political institutions, others have highlighted the potential of this technology for those vying to constrain or counter-balance the power of organizations, through e-collectivism and on-line action. What appears to be emerging is a contested space that has the potential to simultaneously enhance the power of organizations, whilst also acting as an enabling technology for the empowerment of grass-roots networks. In this struggle, organizations are fighting for the retention of “old economy” positions, as well as the development of “new economy” power-bases. In realizing these positions, organizations and institutions are strategizing and manoeuvring in order to shape on-line networks and communications. For example, the on-line activities of individuals can be contained through various technological means, such as surveillance, and the structuring of the virtual world through the use of portals and “walled gardens”. However, loose groupings of individuals are also strategizing to ensure there is a liberation of their communication paths and practices, and to maintain the potential for mobilization within and across traditional boundaries. In this article, the unique nature and potential of the Internet are evaluated, and the struggle over this contested virtual space is explored.
Abstract:
This thesis presents a study of the sources of new product ideas and the development of new product proposals in an organisation in the UK Computer Industry. The thesis extends the work of von Hippel by showing how the phenomenon which he describes as "the Customer Active Paradigm for new product idea generation" can be observed to operate in this Industry. Furthermore, this thesis contrasts his Customer Active Paradigm with the more usually encountered Manufacturer Active Paradigm. In a second area, the thesis draws a number of conclusions relating to methods of market research, confirming existing observations and demonstrating the suitability of flexible interview strategies in certain circumstances. The thesis goes on to demonstrate the importance of free information flow within the organisation, making it more likely that sought and unsought opportunities can be exploited. It is shown that formal information flows and documents are a necessary but not sufficient means of influencing the formation of the organisation's dominant ideas on new product areas. The findings also link the work of Tushman and Katz on the role of "Gatekeepers" with the work of von Hippel by showing that the role of gatekeeper is particularly appropriate and useful to an organisation changing from Customer Active to Manufacturer Active methods of idea generation. Finally, the thesis provides conclusions relating to the exploitation of specific new product opportunities facing the sponsoring organisation.
Abstract:
The issues of Kosovo independence and European Union membership have dominated Serbian domestic politics and foreign policy since the fall of Slobodan Milosevic in 2000. Despite the lack of formal conditionality on the Kosovo issue, Serbia's insistence on its uncompromising 'no recognition' Kosovo policy has been detrimental to its EU candidacy aspirations. This article examines Serbia's Kosovo policies in the context of EU integration, examining in particular the divergence between Serbia's stance towards Kosovo and its aspirations towards EU candidacy.
Abstract:
We developed and tested a team-level contingency model of innovation, integrating theories regarding work demands, team reflexivity (the extent to which teams collectively reflect upon their working methods and functioning) and team innovation. We argued that highly reflexive teams will be more innovative than teams low in reflexivity when facing a demanding work environment. The relationships between team reflexivity, a demanding work environment (i.e. quality of the physical work environment and workload) and team innovation were examined among 98 primary health care teams (PHCTs) in the UK, comprising 1,137 individuals. Results showed that team reflexivity is positively related to team innovation, and that there is an interaction between team reflexivity, team-level workload and team innovation, such that when team-level workload is high and team reflexivity is also high, team innovation is higher. A complementary interaction between team reflexivity, quality of the physical work environment and team innovation showed that when the quality of the work environment is low and team reflexivity is high, team innovation is higher. These results are discussed in the context of the need for team reflexivity and team innovation among teams at work facing high work demands.
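Interaction (moderation) effects of the kind reported here are typically tested with a regression that includes a product term. The sketch below shows one way such a model could be specified; the data file and column names are hypothetical, and this is not the authors' actual analysis, which would also have to respect the team-level aggregation of the data.

```python
"""Illustrative sketch only: testing a moderation hypothesis of the form
'workload moderates the reflexivity-innovation relationship'."""
import pandas as pd
import statsmodels.formula.api as smf

teams = pd.read_csv("phct_teams.csv")    # assumed: one row per team, team-level scores

# 'reflexivity * workload' expands to both main effects plus their interaction term.
model = smf.ols("innovation ~ reflexivity * workload + env_quality", data=teams).fit()
print(model.summary())

# A significant positive reflexivity:workload coefficient would indicate that
# reflexivity is more strongly related to innovation when workload is high.
```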
Abstract:
The thesis presents an experimentally validated modelling study of the flow of combustion air in an industrial radiant tube burner (RTB). The RTB is typically used in industrial heat-treating furnaces. The work was initiated because of the need for improvements in burner lifetime and performance, which are related to the fluid mechanics of the combusting flow, and a fundamental understanding of this is therefore necessary. To achieve this, a detailed three-dimensional Computational Fluid Dynamics (CFD) model has been used, validated with experimental air flow, temperature and flue gas measurements. Initially, the work programme is presented, together with the theory behind RTB design and operation and the theory behind swirling flows and methane combustion. NOx reduction techniques are discussed and the numerical modelling of combusting flows is detailed in this section. The importance of turbulence, radiation and combustion modelling is highlighted, as are the numerical schemes that incorporate discretization, finite volume theory and convergence. The study first focuses on the combustion air flow and its delivery to the combustion zone. An isothermal computational model was developed to allow examination of the flow characteristics as the air enters the burner and progresses through the various sections prior to the discharge face in the combustion area. Important features identified include the air recuperator swirler coil, the step ring, the primary/secondary air splitting flame tube and the fuel nozzle. It was revealed that the effectiveness of the air recuperator swirler is significantly compromised by the need for a generous assembly tolerance. Also, a substantial circumferential flow maldistribution is introduced by the swirler, but this is effectively removed by the positioning of a ring constriction in the downstream passage. Computations using the k-ε turbulence model show good agreement with experimentally measured velocity profiles in the combustion zone and validated the modelling strategy prior to the combustion study. Reasonable mesh independence was obtained with 200,000 nodes. Agreement was poorer with the RNG k-ε and Reynolds Stress models. The study then addresses the combustion process itself and the heat transfer internal to the RTB. A series of combustion and radiation model configurations were developed, and the optimum combination of the Eddy Dissipation (ED) combustion model and the Discrete Transfer (DT) radiation model was used successfully to validate a burner experimental test. The previously cold-flow-validated k-ε turbulence model was used and reasonable mesh independence was obtained with 300,000 nodes. The combination showed good agreement with temperature measurements in the inner and outer walls of the burner, as well as with the flue gas composition measured at the exhaust. The inner tube wall temperature predictions matched the experimental measurements at the majority of the thermocouple locations, highlighting a small flame bias to one side, although the model slightly over-predicts the temperatures towards the downstream end of the inner tube. NOx emissions were initially over-predicted; however, the use of a combustion flame temperature limiting subroutine allowed convergence to the experimental value of 451 ppmv.
With the validated model, the effectiveness of certain RTB features identified previously is analysed, and an analysis of the energy transfers throughout the burner is presented to identify the dominant mechanisms in each region. The optimum turbulence-combustion-radiation model selection was then the baseline for further model development. One of these models, an eccentrically positioned flame tube model, highlights the failure mode of the RTB during long-term operation. Other models were developed to address NOx reduction and improvement of the flame profile in the burner combustion zone. These included a modified fuel nozzle design, with 12 circular-section fuel ports, which demonstrates a longer and more symmetric flame, although with limited success in NOx reduction. In addition, a zero-bypass swirler coil model was developed that highlights the effect of the stronger swirling combustion flow. Reduced-diameter and 20 mm forward-displaced flame tube models show limited success in NOx reduction, although the latter demonstrated improvements in the discharge face heat distribution and in flame symmetry. Finally, Flue Gas Recirculation (FGR) modelling attempts indicate the difficulty of applying this NOx reduction technique in the Wellman RTB. Recommendations for further work include design mitigations for the fuel nozzle, and further burner modelling is suggested to improve computational validation. The introduction of fuel staging is proposed, as well as a modification of the inner tube to enhance the effect of FGR.
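As a side note on the mesh-independence claims above (200,000 nodes for the isothermal case, 300,000 for the combusting case), the short sketch below shows the kind of refinement comparison typically used to justify such a statement. The node counts come from the abstract; the monitored values are invented placeholders, not results from the thesis.

```python
"""Illustrative sketch only: a simple mesh-independence check, comparing a
monitored quantity (e.g. a peak velocity or wall temperature) across
successively refined meshes. Values below are hypothetical placeholders."""

meshes = [50_000, 100_000, 200_000, 300_000]     # successive refinements (node counts)
monitored = [42.1, 39.8, 38.9, 38.7]             # hypothetical monitored quantity per mesh

for coarse, fine, v_c, v_f in zip(meshes, meshes[1:], monitored, monitored[1:]):
    change = abs(v_f - v_c) / abs(v_f) * 100.0   # relative change on refinement (%)
    print(f"{coarse:>7} -> {fine:>7} nodes: change = {change:.2f}%")

# Mesh independence is typically claimed once further refinement changes the
# monitored quantity by less than a small tolerance (e.g. 1-2%).
```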
Abstract:
Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with the given system during the actual execution time of that system. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, representing a fundamentally new approach to resolving interoperability problems in the complex distributed systems of today. In this context, the runtime models are used to capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour. This is supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models coupled with consideration of the supportive algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.
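To make the idea of a causally connected runtime model more concrete, here is a deliberately minimal sketch. It is not the framework described in the paper: the classes, the semantic-matching step and the mediator synthesis are simplified, hypothetical stand-ins for the interface models, ontological reasoning and emergent middleware the authors discuss.

```python
"""Illustrative sketch only: a minimal 'model at runtime' capturing
meta-information about two networked systems' interfaces, plus a trivial
mediator-synthesis step. All names are hypothetical."""
from dataclasses import dataclass, field


@dataclass
class InterfaceModel:
    """Runtime model of a networked system: its operations and semantic tags."""
    system: str
    operations: dict = field(default_factory=dict)   # operation name -> semantic concept

    def update(self, op, concept):
        # Causal connection (simplified): as new behaviour is observed at
        # runtime, the model is updated to reflect it.
        self.operations[op] = concept


def synthesise_mediator(client: InterfaceModel, service: InterfaceModel):
    """Map client operations to semantically matching service operations --
    a stand-in for emergent-middleware synthesis."""
    mapping = {}
    for c_op, concept in client.operations.items():
        for s_op, s_concept in service.operations.items():
            if concept == s_concept:                  # semantic match (ontology reasoning elided)
                mapping[c_op] = s_op
    return mapping


client = InterfaceModel("TravelClient", {"bookFlight": "FlightBooking"})
service = InterfaceModel("AirlineService", {"reserveSeat": "FlightBooking"})
print(synthesise_mediator(client, service))           # {'bookFlight': 'reserveSeat'}
```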
Abstract:
The pharmacological effects of a number of centrally acting drugs have been compared in euthyroid mice and mice made hyperthyroid by pretreatment with sodium L-thyroxine. The potencies of two barbiturates, pentobarbitone and thiopentone, as indicated by the duration of their hypnotic actions and their acute toxicities, are increased in hyperthyroid mice. 2,4-Dinitrophenol, an acutely active uncoupler of oxidative phosphorylation, proved to be a potent hypnotic when administered intracerebrally. An attempt has been made to relate the mechanism of action of the barbiturates to the uncoupling effects of thyroxine and 2,4-dinitrophenol. The pharmacological effects of chlorpromazine, reserpine and amphetamine-like drugs have also been studied in hyperthyroid mice. After pretreatment with thyroxine, mice show a reduced tendency to become hypothermic after chlorpromazine or reserpine; in fact, under suitable laboratory conditions these agents produce a hyperthermic effect. Yet their known depressant effects upon locomotor activity were not substantially altered. Thus it appeared that depression of locomotor activity and hypothermia are not necessarily correlated, an observation at variance with previously held opinion. These results have been discussed in the light of our knowledge of the role of the thyroid gland in thermoregulation. The actions of tremorine and its metabolite, oxotremorine, have also been examined. Hyperthyroid animals are less susceptible to both the hypothermia and the tremor produced by these agents. An attempt is made to explain these observations in view of the known mechanism of action of oxotremorine and the tremorgenic actions that thyroxine may have. A number of experimental methods have been used to study the anti-nociceptive (analgesic) effects of drugs in euthyroid and hyperthyroid mice. The sites and mechanisms of action of these drugs and the known actions of thyroxine have been discussed.
Abstract:
The youth movement Nashi (Ours) was founded in the spring of 2005 against the backdrop of Ukraine's 'Orange Revolution'. Its aim was to stabilise Russia's political system and take back the streets from opposition demonstrators. Personally loyal to Putin and taking its ideological orientation from Surkov's concept of 'sovereign democracy', Nashi has sought to turn the tide on 'defeatism' and develop Russian youth into a patriotic new elite that 'believes in the future of Russia' (p. 15). Combining a wealth of empirical detail and the application of insights from discourse theory, Ivo Mijnssen analyses the organisation's development between 2005 and 2012. His analysis focuses on three key moments (the organisation's foundation, the apogee of its mobilisation around the Bronze Soldier dispute with Estonia, and the 2010 Seliger youth camp) to help understand Nashi's organisation, purpose and ideational outlook, as well as the limitations and challenges it faces. As such, the book is insightful both for those with an interest in post-Soviet Russian youth culture and for scholars seeking a rounded understanding of the Kremlin's initiatives to return a sense of identity and purpose to Russian national life.
The first chapter, 'Background and Context', outlines the conceptual toolkit provided by Ernesto Laclau and Chantal Mouffe to help make sense of developments on the terrain of identity politics. In their terms, since the collapse of the Soviet Union, Russia has experienced acute dislocation of its identity. With the tangible loss of great power status, Russian realities have become unfixed from a discourse enabling national life to be constructed, albeit inherently contingently, as meaningful. The lack of a Gramscian hegemonic discourse to provide a unifying national idea was securitised as an existential threat demanding special measures. Accordingly, the identification of those who are 'not Us' has been a recurrent theme of Nashi's discourse and activity. With the victory in World War II held up as a foundational moment, a constitutive other is found in the notion of 'unusual fascists'. This notion includes not just neo-Nazis, but reflects a chain of equivalence that expands to include a range of perceived enemies of Putin's consolidation project, such as oligarchs and pro-Western liberals.
The empirical background is provided by the second chapter, 'Russia's Youth, the Orange Revolution, and Nashi', which traces the emergence of Nashi amid the climate of political instability of 2004 and 2005. A particularly noteworthy aspect of Mijnssen's work is the inclusion of citations from his interviews with Nashi commissars, the youth movement's cadres. Although relatively few in number, such insider conversations provide insight into the ethos of Nashi's organisation and the outlook of those who have pledged their involvement. Besides the discussion of Nashi's manifesto, the reader thus gains insight into the motivations of some participants and behind-the-scenes details of Nashi's activities in response to the perceived threat of anti-government protests. The third chapter, 'Nashi's Bronze Soldier', charts Nashi's role in elevating the removal of a World War II monument from downtown Tallinn into an international dispute over the interpretation of history. The events subsequent to this securitisation of memory are charted in detail, concluding that Nashi's activities were ultimately unsuccessful as their demands received little official support.
The fourth chapter, 'Seliger: The Foundry of Modernisation', presents a distinctive feature of Mijnssen's study, namely his ethnographic account as a participant observer in the Youth International Forum at Seliger. In the early years of the camp (2005–2007), Russian participants received extensive training, including master classes in 'methods of forestalling mass unrest' (p. 131), and the camp served to foster a sense of group identity and purpose among activists. After 2009 the event was no longer officially run as a Nashi camp, and its role became that of a forum for the exchange of ideas about innovation, although camp spirit remained a central feature. In 2010 the camp welcomed international attendees for the first time. As one of about 700 international participants that year, the author provides a fascinating account based on fieldwork diaries.
Despite the polemical nature of the topic, Mijnssen's analysis remains even-handed, exemplified in his balanced assessment of the Seliger experience. While he details the frustrations and disappointments of the international participants with regard to the unaccustomed strict camp discipline, organisational and communication failures, and the controlled format of many discussions, he does not neglect to note the camp's successes in generating a gratifying collective dynamic between the participants, even among the international attendees who spent only a week there.
In addition to the useful bibliography, the book is back-ended by two appendices, which provide the reader with important Russian-language primary source materials. The first is Nashi's 'Unusual Fascism' (Neobyknovennyi fashizm) brochure, and the second is the booklet entitled 'Some Uncomfortable Questions to the Russian Authorities' (Neskol'ko neudobnykh voprosov rossiiskoi vlasti), which was provided to the Seliger 2010 instructors to guide them in responding to probing questions from foreign participants. Given that these are not readily publicly available even now, they constitute a useful resource from the historical perspective.
Abstract:
The thesis examines the effects of the privatisation process on productivity, competitiveness and performance in two major Brazilian steel companies, which were privatised between 1991 and 1993. The case study method was adopted in this research due to its strengths as a useful technique allowing in-depth examination of the privatisation process, the context in which it happened and its effects on the companies. The thesis has developed a company analysis framework consisting of three components: management, competitiveness/productivity and performance, and examined the evidence on the companies within this framework. The research indicates that there is no straightforward relationship between privatisation, competitiveness and performance. There were many significant differences in the management and technological capabilities, products and performance of the two companies, and these have largely influenced the effects of privatisation on each company. Company Alpha's strengths in technological and management capabilities and high value-added products explain its strong productivity and financial performance during and after privatisation. Company Beta's performance was weak before the privatisation and remained weak immediately after. Before the privatisation, weaknesses in management, commodity-type low value-added products and a shortage of funds for investment were the major problems. These were compounded by greater government interference. Despite major restructuring, the poor performance has continued after privatisation, largely because the company has not been able to improve its productivity sufficiently to be cost competitive in commodity-type markets. Both companies state that their strategies have changed significantly. They claim to be more responsive to market conditions and customers and are attempting to develop closer links with major customers. It is not possible to assess the consequences of these changes in the short time that has elapsed since privatisation, but Alpha appears to be more effective in developing a coherent strategy because of its strengths. Both companies accelerated their programmes of organisational restructuring and reduced the number of their employees during the privatisation process to improve productivity and performance. Alpha has attained standards comparable to major international steel companies. Beta has had to make much bigger organisational changes and cuts in its labour force, but its productivity levels still remain low in comparison with Alpha and international competitors.
Abstract:
The laminar distribution of the vacuolation ('spongiform change'), surviving neurons, glial cell nuclei, and prion protein (PrP) deposits was studied in the frontal, parietal and temporal cortex in 11 cases of sporadic Creutzfeldt-Jakob disease (CJD). The distribution of the vacuolation was mainly bimodal, with peaks of density in the upper and lower cortical laminae. The density of surviving neurons was greatest in the upper cortex, while glial cell nuclei were distributed largely in the lower cortex. PrP deposits exhibited either a bimodal distribution or reached a maximum density in the lower cortex. The vertical density of the vacuoles was positively correlated with that of surviving neurons in 12/44 of the cortical areas studied, with glial cell nuclei in 16/44 areas and with PrP deposition in 15/28 areas. PrP deposits were positively correlated with glial cell nuclei in 12/31 areas. These results suggest that in sporadic CJD: (1) the lower cortical laminae are the most affected by the pathological changes; (2) the development of the vacuolation may precede that of the extracellular PrP deposits and the glial cell reaction; and (3) the pathological changes may develop initially in the lower cortical laminae and spread to affect the upper cortical laminae.
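Purely as an illustration of the kind of density-correlation analysis described above, the short sketch below computes a Pearson correlation between two laminar density profiles. The numbers are invented placeholders, and the statistical treatment in the study itself may differ.

```python
"""Illustrative sketch only: correlating two laminar density profiles
(e.g. vacuoles vs. surviving neurons across cortical depth). The values
below are invented placeholders, not data from the study."""
from scipy.stats import pearsonr

# Hypothetical densities per cortical level, upper -> lower laminae
vacuole_density = [12, 9, 5, 4, 7, 11]
neuron_density = [20, 17, 11, 9, 8, 6]

r, p = pearsonr(vacuole_density, neuron_density)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # a positive r would parallel the reported correlations
```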
Abstract:
Purpose: The purpose of the research described in this paper is to disentangle the rhetoric from the reality in relation to supply chain management (SCM) adoption in practice. There is significant evidence of a divergence between theory and practice in the field of SCM. Design/methodology/approach: Based on a review of extant theory, the authors posit a new definitional construct for SCM – the Four Fundamentals – and investigated four research questions (RQs) that emerged from the theoretical review. The empirical work comprised three main phases: focussed interviews, focus groups and a questionnaire survey. Each phase used the authors’ definitional construct as its basis. While the context of the paper’s empirical work is Ireland, the insights and results are generalisable to other geographical contexts. Findings: The data collected during the various stages of the empirical research supported the essence of the definitional construct and allowed it to be further developed and refined. In addition, the findings suggest that, while levels of SCM understanding are generally quite high, there is room for improvement in relation to how this understanding is translated into practice. Research limitations/implications: Expansion of the research design to incorporate case studies, grounded theory and action research has the potential to generate new SCM theory that builds on the Four Fundamentals construct, thus facilitating a deeper and richer understanding of SCM phenomena. The use of longitudinal studies would enable a barometer of progress to be developed over time. Practical implications: The authors’ definitional construct supports improvement in the cohesion of SCM practices, thereby promoting the effective implementation of supply chain strategies. A number of critical success factors and/or barriers to implementation of SCM theory in practice are identified, as are a number of practical measures that could be implemented at policy/supply chain/firm level to improve the level of effective SCM adoption. Originality/value: The authors’ robust definitional construct supports a more cohesive approach to the development of a unified theory of SCM. In addition to a profile of SCM understanding and adoption by firms in Ireland, the related critical success factors and/or inhibitors to success, as well as possible interventions, are identified.