15 results for Zuccotti, Susan: Under his very windows
in Aston University Research Archive
Abstract:
Full text: It seems a long time ago now that we were at the BCLA conference. The excellent FIFA World Cup in Brazil kept us occupied over the summer, along with Formula 1, Wimbledon, the Tour de France, the Commonwealth Games and, of course, exam paper marking! The BCLA conference this year was held in Birmingham at the International Convention Centre, which again proved to be a great venue. The number of attendees overall was up on previous years, at a record high of 1500 people. Amongst the highlights at this year's annual conference was the live surgery link, where Professor Sunil Shah demonstrated the differences in technique between traditional phacoemulsification cataract surgery and femtosecond-assisted phacoemulsification cataract surgery. Dr. Raquel Gil Cazorla, a research optometrist at Aston University, assisted in the procedure, including calibrating the femtosecond laser. Another highlight for me was the session that I chaired: the international session organised by IACLE (International Association of Contact Lens Educators). There was a talk by Mirjam van Tilborg about dry eye prevalence in the Netherlands and how it was managed by medical general practitioners (GPs) or optometrists. It was interesting to learn that the Netherlands has only two schools of optometry and currently fewer than 1000 registered optometrists. It also seems that GPs were more likely to blame contact lenses as the cause of dry eye, whereas optometrists, who had a fuller range of tests at their disposal, were better at resolving the issue. The next part of the session comprised the presentation of five selected posters from around the world. The posters were also displayed in the main poster area but were selected to be presented here because they had international relevance. The posters were: 1. Motivators and Barriers for Contact Lens Recommendation and Wear by Nilesh Thite (India) 2. Contact lens hygiene among Saudi wearers by Dr. Ali Masmaly (Saudi Arabia) 3. 
Trends of contact lens prescribing and patterns of contact lens practice in Jordan by Dr. Mera Haddad (Jordan) 4. Contact Lens Behaviour in Greece by Dr. Dimitra Makrynioti (Greece) 5. How practitioners inform ametropes about the benefits of contact lenses and overcome the potential barriers: an Italian survey, by Dr. Fabrizio Zeri (Italy). It was interesting to learn about contact lens practice in different parts of the world; for example, the contact lens-wearing population ratio in Saudi Arabia is around 1:2 male:female (similar to other parts of the world), and although the sale of contact lenses is restricted to registered practitioners, there are many unregistered outlets, such as clothing stores, that flout the rules. In Jordan some older practitioners will still advise patients to use tap water or even saliva! Thankfully, the newer generation of practitioners has been educated not to advise this. In Greece one of the concerns was that some practitioners may advise patients to use disposable lenses for longer than they should; again, it seems to be the practitioners with inadequate education who do this. In India it was found that cost was one barrier to contact lens wear, and some practitioners also felt that they should not offer contact lenses because of cost. In Italy sensitive eyes and contact lens care and maintenance were the barriers to contact lens wear, while the motivators were vision, comfort and aesthetics. Finally, the international session ended with the IACLE travel award and educator awards presented by IACLE President Shehzad Naroo and BCLA President Andrew Yorke. The travel award went to Wang Ling, Jinling Institute of Technology, Nanjing, China. There were three regional Contact Lens Educator of the Year Awards sponsored by CooperVision and presented by Dr. J.C. Aragorn of CooperVision. 1. Asia Pacific Region – Dr. Rajeswari Mahadevan of Sankara Nethralaya Medical Research Foundation, Chennai, India 2. Americas Region – Dr. 
Sergio Garcia of the University of La Salle, Bogotá, and the University Santo Tomás, Bucaramanga, Colombia 3. Europe/Africa – Middle East Region: Dr. Eef van der Worp, affiliated with the University of Maastricht, the Netherlands. The posters above were just a small selection of those displayed at this year's BCLA conference. If you missed the BCLA conference you can see the abstracts for all posters and talks in a virtual issue of CLAE very soon. The poster competition was kindly sponsored by Elsevier. The poster winner this year was: Joan Gispets – Corneal and Anterior Chamber Parameters in Keratoconus. The three runners-up were: Debby Yeung – Scleral Lens Central Corneal Clearance Assessment with Biomicroscopy; Sarah L. Smith – Subjective Grading of Lid Margin Staining; Heiko Pult – Impact of Soft Contact Lenses on Lid Parallel Conjunctival Folds. My final two highlights are a little more personal. Firstly, I was awarded Honorary Life Fellowship of the BCLA for my work with the Journal, and I would like to thank the BCLA, Elsevier, the editorial board of CLAE, the reviewers and the authors for their support of my role. My final highlight from the BCLA conference this year was the final presentation of the conference – the BCLA Gold Medal award. The recipient this year was Professor Philip Morgan with his talk ‘Changing the world with contact lenses’. Phil was the person who advised me to go to my first BCLA conference in 1994 (funnily enough, he didn't attend himself as he was busy getting married!) and now, 20 years later, he was being honoured with the accolade of BCLA Gold Medallist. The date of his BCLA Gold Medal address coincided with his father's birthday, so it was a double celebration for Phil. Well done to outgoing BCLA President Andy Yorke and his team at the BCLA (including Nick Rumney, Cheryl Donnelly, Sarah Greenwood and Amir Khan) on an excellent conference. And finally, welcome to new President Susan Bowers. Copyright © 2014 British Contact Lens Association. 
Published by Elsevier Ltd. All rights reserved.
Abstract:
Anyone who looks at the title of this special issue will agree that the intent behind the preparation of this volume was ambitious: to predict and discuss "The Future of Manufacturing". Will manufacturing be important in the future? Some sceptics might say not, putting some old familiar arguments on the table, but we would strongly disagree. To inform the argument we issued the call for papers for this special issue of the Journal of Manufacturing Technology Management, fully aware of the size of the challenge in our hands. But we strongly believed that the enterprise would be worthwhile. The point of departure is the ongoing debate concerning the meaning and content of manufacturing. The easily visualised internal activity of using tangible resources to make physical products in factories is no longer a viable way to characterise manufacturing. It is now a more loosely defined concept concerning the organisation and management of open, interdependent systems for delivering goods and services, tangible and intangible, to diverse types of markets. Interestingly, Wickham Skinner is the most cited author in this special issue of JMTM. He provides the point of departure for several articles because his vision and insights have guided and inspired researchers in production and operations management from the late 1960s until today. However, the picture that we draw after looking at the contributions in this special issue is intrinsically distinct, much more dynamic, and complex. 
Seven articles address the following research themes: (1) new patterns of organisation, where the boundaries of firms become blurred and the role of the firm in the production system, as well as that of manufacturing within the firm, becomes contingent; (2) new approaches to strategic decision-making in markets characterised by turbulence and weak signals at the customer interface; (3) new challenges in strategic and operational decisions due to changes in the profile of the workforce; (4) new global players, especially China, modifying the manufacturing landscape; and (5) new techniques, methods and tools that are being made feasible through progress in new technological domains. Of course, many other important dimensions could be studied, but these themes are representative of current changes and future challenges. Three articles look at the first theme: the organisational evolution of production and operations in firms and networks. Karlsson and Skold's article represents one further step in their efforts to characterise "the extraprise". In the article, they advance the construction of a new framework based on "the network perspective", defining the formal elements which compose it and exploring the meaning of different types of relationships. The way in which "actors, resources and activities" are conceptualised extends the existing boundaries of analytical thinking in operations management and opens new avenues for research, teaching and practice. The higher level of abstraction, an intrinsic feature of the framework, is associated with the increasing degree of complexity that characterises decisions related to strategy and implementation in the manufacturing and operations area, a feature that is expected to become more and more pervasive as time proceeds. Riis, Johansen, Englyst and Sorensen have also based their article on their previous work, which in this case is on "the interactive firm". 
They advance new propositions on the strategic roles of manufacturing and discuss why the configuration of strategic manufacturing roles, at the level of the network, will become a key issue, and how the indirect strategic roles of manufacturing will become increasingly important. Additionally, by considering that value chains will become value webs, they predict that shifts in strategic manufacturing roles will look like a sequence of moves similar to a game of chess. Then, lastly under the first theme, Fleury and Fleury develop a conceptual framework for the study of production systems in general, derived from field research in the telecommunications industry, here considered a prototype of the coming information society and knowledge economy. They propose a new typology of firms which, on certain dimensions, complements the propositions found in the other two articles. Their telecoms-based framework (TbF) comprises six types of companies characterised by distinct profiles of organisational competences, which interact according to specific patterns of relationships, thus creating distinct configurations of production networks. The second theme is addressed by Kyläheiko and Sandström in their article "Strategic options based framework for management of dynamic capabilities in manufacturing firms". They propose a new approach to strategic decision-making in markets characterised by turbulence and weak signals at the customer interface. Their framework for a manufacturing firm in the digital age leads to active asset selection (strategic investments in both tangible and intangible assets) and efficient orchestration of the global value net in "thin" intangible asset markets. The framework consists of five steps based on Porter's five-forces model and the resource-based view, complemented by the concepts of strategic options and related flexibility issues. 
Thun, Grössler and Miczka's contribution to the third theme brings the human dimension to the debate on the future of manufacturing. Their article focuses on the challenges brought to management by the ageing of the workforce in Germany, but in the arguments that are raised, the future challenges associated with workers and work organisation in every production system become visible and relevant. An interesting point in the approach adopted by the authors is that not only are the factual problems and solutions taken into account, but the managers' perceptions are also brought into the picture. China cannot be absent from a discussion of the future of manufacturing. Therefore, within the fourth theme, Vaidya, Bennett and Liu provide evidence of the gradual improvement of Chinese companies in the medium- and high-tech sectors, using revealed comparative advantage (RCA) analysis. The Chinese evolution is shown to be based on capabilities developed by combining international technology transfer and indigenous learning. The main implication for Western companies is the need to take account of the accelerated rhythm of capability development in China. For other developing countries, China's case provides lessons of great importance. Finally, under the fifth theme, Kuehnle's article "Post mass production paradigm (PMPP) trajectories" provides a futuristic scenario of what is already around us and might become prevalent in the future. It takes a very intensive look at a whole set of dimensions that are affecting manufacturing now and will influence manufacturing in the future, ranging from the application of ICT to the need for social transparency. In summary, this special issue of JMTM presents a brief but indisputable demonstration of the possible richness of manufacturing in the future. Indeed, we could even say that manufacturing has no future if we only stick to past perspectives. Embracing the new is not easy. 
The new configurations of production systems, and the distributed and complementary roles to be performed by distinct types of companies in diversified networked structures, leveraged by new emergent technologies and the associated new challenges for managing people, are all themes that are carriers of the future. The Guest Editors of this special issue on the future of manufacturing are strongly convinced that their undertaking has been worthwhile.
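The revealed comparative advantage (RCA) analysis mentioned above is conventionally Balassa's index: a sector's share in a country's exports divided by that sector's share in world exports. A minimal sketch, using invented placeholder export figures rather than any data from the article:

```python
# Hedged sketch of Balassa's revealed comparative advantage (RCA) index.
# The export values below are arbitrary toy numbers, not data from the study.

def rca(exports, country, sector):
    """Balassa RCA: (sector's share in the country's exports) /
    (sector's share in world exports)."""
    country_total = sum(exports[country].values())
    world_sector = sum(e[sector] for e in exports.values())
    world_total = sum(sum(e.values()) for e in exports.values())
    return (exports[country][sector] / country_total) / (world_sector / world_total)

# Toy data: export values by country and sector (arbitrary units).
exports = {
    "China": {"high_tech": 60.0, "textiles": 40.0},
    "RestOfWorld": {"high_tech": 300.0, "textiles": 600.0},
}

print(round(rca(exports, "China", "high_tech"), 3))  # prints 1.667
```

An index above 1 indicates the country exports proportionally more of that sector than the world average, which is how a rising RCA can signal improving capability in medium- and high-tech sectors.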
Abstract:
This essay examines the only book published by the late Harald Kaas. His collection of short stories Uhren und Meere (1979), dealing with depictions of psychopathological states of mind, gained Kaas a short-lived notoriety, as he himself was a certified schizophrenic with first-hand experience of psychiatric treatment. This essay sets out to investigate whether, or to what extent, the stories in Uhren und Meere can be understood as a document of the language of madness. It concludes that, despite the biographical dimension of his schizophrenic experience, Kaas's texts fail to voice an, as it were, unadulterated language of madness. However, when they are read in conjunction with his quasi-poetological interview statements, it is possible to determine the very nature of madness as a collapse of a logical system of language. This means that language cannot actively be used to express madness, while at the same time madness can express itself in a language that we necessarily fail to understand. The language of madness manifests itself as the madness of language.
Abstract:
This thesis is concerned with Maine de Biran’s and Samuel Taylor Coleridge’s conceptions of will, and the way in which both thinkers’ posterities have been affected by the central role of these very conceptions in their respective bodies of thought. The research question that animates this work can therefore be divided into two main parts, one of which deals with will, while the other deals with its effects on posterity. In the first pages of the Introduction, I make the case for a comparison between two philosophers, and show how this comparison can bring one closer to truth, understood not in objective, but in subjective terms. I then justify my choice by underlining that, in spite of their many differences, Maine de Biran and Samuel Taylor Coleridge followed comparable paths, intellectually and spiritually, and came to similar conclusions concerning the essential activity of the human mind. Finally, I ask whether it is possible that this very focus on the human will may have contributed to the state of both thinkers’ works and of the reception of those works. This prologue is followed by five parts. In the first part, the similarities and differences between the two thinkers are explored further. In the second part, the connections between philosophy and singularity are examined, in order to show the ambivalence of the will as a foundation for truth. The third part is dedicated to the traditional division between subject and object in psychology, and its relevance in history and in moral philosophy. The fourth part tackles the complexity of the question of influence, with respect to both Maine de Biran’s and Coleridge’s cases, both thinkers being indebted to many philosophers of all times and places, and having to rely heavily on others for the publication, or the interpretation of their own works. 
The fifth part is concerned with the different aspects of the faculty of will, and primarily its relationship with interiority, as incommensurability, and actual, conditioned existence in a certain historical and spatial context. It ends with a return to the question of will and posterity and an announcement of what will be covered in the main body of the thesis. The main body is divided into three parts: ‘L’émancipation’, ‘L’affirmation’, and ‘La projection’. The first part is devoted to the way Maine de Biran and Samuel Taylor Coleridge extricated themselves from one epistemological paradigm to contribute to the foundation of another. It is divided into four chapters. The first chapter deals with the aforementioned change of paradigm, as corresponding to the emergence of two separate but associated movements: Romanticism and what the French philosopher refers to as ‘The Age of History’. The second chapter concerns the movement that preceded them, i.e. the Enlightenment, its main features according to both of our thinkers, and the two epistemological models that prevailed under it and influenced them heavily in their early years: Sensationism (Maine de Biran) and Associationism (Coleridge). The third chapter is about the probable influence of Immanuel Kant and his followers on Maine de Biran and Coleridge, and the various facts that allow us to claim originality for both thinkers’ works. In the fourth chapter, I contrast Maine de Biran and Coleridge with other movements and thinkers of their time, showing that, contrary to their respective thoughts, Maine de Biran and Coleridge could not but break free from the then prevailing systematic approach to truth. The second part of the thesis is concerned with the first part of its research question, namely Maine de Biran’s and Coleridge’s conceptions of the will. It is divided into four chapters. 
The first chapter is a reflection on the will as a paradox: on the one hand, the will cannot be caused by any other phenomenon, or it is no longer a will; but it cannot be left purely undetermined, for if it is, it is then no different from chance. It thus needs, in order to be, to be contradictorily already moral. The second chapter is a comparison between Maine de Biran’s and Coleridge’s accounts of the origin of the will, where it is found that the French philosopher only observes that he has a will, whereas the English philosopher postulates the existence of this will. The comparison between Maine de Biran’s and Coleridge’s conceptions of the will is pursued in the third chapter, which tackles the question of the coincidence between the will and the self in both thinkers’ works. It ends with the fourth chapter, which deals with the question of the relationship between the will and what is other to it, i.e. bodily sensations, passions and desires. The third part of the thesis focuses on the second part of its research question, namely the posterity of Maine de Biran’s and Coleridge’s works. It is divided into four chapters. The first chapter constitutes a continuation of the last chapter of the preceding part, in that it deals with Maine de Biran’s and Coleridge’s relations to the ‘other’, particularly their potential and actual audience, and with the way these relations may have affected their writing and publishing practices. The second chapter is a survey of both thinkers’ general reception, where it is found that, while Maine de Biran has been claimed by two important movements of thought as their initiator, Coleridge has been neglected by the only real movement he could have, or may indeed have, pioneered. 
The third chapter is more directly concerned with the posterities of Maine de Biran’s and Coleridge’s conceptions of will, and attempts to show that the approach to, and the meaning of, the will evolved throughout the nineteenth century, in the French Spiritualist and the British Idealist movements, from an essentially personal one to a more impersonal one. The fourth chapter is a partial conclusion, whose aim is to give a precise idea of where Maine de Biran and Coleridge stand in relation to their century and to the philosophical movements and matters we are concerned with. The conclusion is a recapitulation of what has been found, with a particular emphasis on the dialogue initiated between Maine de Biran and Coleridge on the will, and on the relation between will and posterity. It suggests that both thinkers have to pay the price of a problematic reception for the individuality that pervades their respective works, and goes further in suggesting that s/he who chooses to found his/her individuality on the will is bound to feel this incompleteness in his/her own personal life more acutely than s/he who does not. It ends with a reflection on fixedness and movement, as the two antagonistic states that the theoretician of the will paradoxically aspires to.
Abstract:
The thesis offers a comparative interdisciplinary approach to the examination of the intellectual debates about the relationship between the individual and society in the GDR under Honecker. It shows that there was a continuum of debate not only between the academic disciplines, but also from radical critics of the GDR leadership such as Robert Havemann, Rudolf Bahro and Stefan Heym, through the social scientists, literary critics and legal theorists working in the academic institutions, to theorists close to the GDR leadership. It also shows that the ruling party's own official line and policy on the question of the individual and society was not static over the period, but changed in response to internal and external pressures. Over the period 1971–1989, greater emphasis was placed by many intellectuals on the individual and his needs and interests. It was increasingly recognised that conflicts could exist between the individual and society in GDR socialism. Whereas the radical critics argued that these conflicts were due to features of GDR society, such as the hierarchical system of labour functions and bureaucracy, and extrapolated from this a general conflict between the political leadership and the population, orthodox critics argued that conflicts existed between a specific individual and society and were largely due to external and historical factors. The internal critics also pointed to social phenomena which were detrimental to the individual's development in the GDR, but they put forward less radical solutions. With the exception of a few radical young writers, all the theorists studied in this thesis gave precedence to social interests over individual interests and so did not advocate a return to ‘individualistic’ positions. 
The continuity of sometimes quite controversial discussions in the GDR academic journals and the flexibility of the official line and policy suggest that it is inappropriate to refer to GDR society under Honecker simply as totalitarian, although it did have some totalitarian features. What the thesis demonstrates is the existence of ‘Teilöffentlichkeiten’ in which critical discussion was conducted even as the official, orthodox line was given out for public consumption in the high-circulation media.
Abstract:
With the growing appreciation of the contribution of small technology-based ventures to a healthy economy, an analysis of the individual who initiates and manages such ventures - the technical entrepreneur - is highly desirable, predominantly because of the influence of such an individual on the management and future strategy of the venture. An examination of recent research indicates that a study of the previous experience and expertise of the entrepreneur, gained in previous occupations, may be highly relevant in determining the possible success of a new venture. This is particularly true where the specific expertise of the entrepreneur forms the main strategic advantage of the business, as in the case of small technology-based firms. Despite this, very little research has attempted to examine the relationship between the previous occupational background of the technical entrepreneur and the management of the small technology-based firm. This thesis examines this relationship, as well as providing an original contribution to the study of technical entrepreneurship in the UK. The exploratory nature of the research prompted the adoption of an inductive, qualitative approach. Through a two-stage, multiple-site research approach, an examination was made of technical entrepreneurs heading award-winning technology-based small firms in the UK. The main research questions focused on management within the firm, the novelty and origin of the technology adopted, and the personal characteristics of the entrepreneur under study. The results of this study led to the creation of a specific typology for technical entrepreneurs, based on the individual's role in the development of technology within his previous occupation.
Abstract:
In the bulge test, a sheet metal specimen is clamped over a circular hole in a die and formed into a bulge by hydraulic pressure on one side of the specimen. As the unsupported part of the specimen is deformed in this way, its area is increased; in other words, the material is generally stretched and its thickness generally decreased. The stresses causing this stretching action are the membrane stresses in the shell generated by the hydraulic pressure, in the same way as the rubber in a toy balloon is stretched by the membrane stresses caused by the air inside it. The bulge test is a widely used sheet metal test for determining the "formability" of sheet materials. Research on this forming process (2)-(15)* has hitherto been almost exclusively confined to predicting the behaviour of the bulged specimen through the constitutive equations (stresses and strains in relation to displacements and shapes) and the empirical work-hardening characteristics of the material as determined in the tension test. In the present study the approach is reversed; the stresses and strains in the specimen are measured and determined from the geometry of the deformed shell. Thus, the bulge test can be used for determining the stress-strain relationship in the material under actual conditions in sheet metal forming processes. When sheet materials are formed by fluid pressure, the workpiece assumes an approximately spherical shape. The exact nature and magnitude of the deviation from the perfect sphere can be defined and measured by an index called prolateness. The distribution of prolateness throughout the workpiece at any particular stage of the forming process is of fundamental significance, because it determines the variation of the stress ratio on which the mode of deformation depends. It is found that, before the process becomes unstable in sheet metal, the workpiece is exactly spherical only at the pole and at an annular ring. 
Between the pole and this annular ring the workpiece is more pointed than a sphere, and outside this ring it is flatter than a sphere. In the forming of sheet materials, the stresses, and hence the incremental strains, are closely related to the curvatures of the workpiece. This relationship between geometry and state of stress can be formulated quantitatively through prolateness. The determination of the magnitudes of prolateness, however, requires special techniques. The success of the experimental work is due to the technique of measuring the profile inclination of the meridional section very accurately. A travelling microscope, workshop protractor and surface plate are used for measurements of circumferential and meridional tangential strains. The curvatures can be calculated from geometry. If, however, the shape of the workpiece is expressed in terms of the current radial (r) and axial (L) coordinates, it is very difficult to calculate the curvatures to an adequate degree of accuracy, owing to the double differentiation involved. In this project, a first differentiation is, in effect, by-passed by measuring the profile inclination directly, and the second differentiation is performed in a roundabout way, as explained in later chapters. The variations of the stresses in the workpiece thus observed have not, to the knowledge of the author, been reported experimentally. The static strength of shells to withstand fluid pressure and their buckling strength under concentrated loads both depend on the distribution of the thickness. Thickness distribution can be controlled to a limited extent by changing the work-hardening characteristics of the work material and by imposing constraints. A technique is provided in this thesis for determining accurately the stress distribution, on which the strains associated with thinning depend. 
Whether a problem of controlled thickness distribution is tackled by theory, by experiment, or by both combined, the analysis in this thesis supplies the theoretical framework and some useful experimental techniques for research applied to particular problems. The improvement of formability by allowing draw-in can also be analysed with the same theoretical and experimental techniques. Results on stress-strain relationships are usually represented by single stress-strain curves plotted either between one stress and one strain (as in the tension or compression tests) or between the effective stress and effective strain, as in tests on tubular specimens under combined tension, torsion and internal pressure. In this study, the triaxial stresses and strains are plotted simultaneously in triangular coordinates. Thus, both stress and strain are represented by vectors, and the relationship between them by the relationship between two vector functions. From the results so obtained, conclusions are drawn on both the behaviour and the properties of the material in the bulge test. The stress ratios are generally equal to the strain-rate ratios (stress vectors collinear with incremental strain vectors), and the work-hardening characteristics, which apply only to the particular strain paths, are deduced. Plastic instability of the material is generally considered to have been reached when the oil pressure has attained its maximum value, so that further deformation occurs under a constant or lower pressure. It is found that the instability regime of deformation occurs long before the maximum pressure is attained. Thus, a new concept of instability is proposed; by this criterion, instability can occur for any type of pressure growth curve.
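The geometric idea behind by-passing the first differentiation can be illustrated numerically. For an axisymmetric shell, if the profile inclination phi is measured directly as a function of meridional arc length s, the meridional curvature needs only one numerical differentiation (d phi / d s), and the circumferential curvature sin(phi)/r needs none. A minimal sketch, checked against a sphere (an assumption made purely for validation, since on a sphere of radius R both curvatures equal 1/R; this is not the author's measured workpiece):

```python
import numpy as np

# Meridional curvature k_m = d(phi)/ds needs only ONE numerical differentiation
# of the measured inclination phi(s); circumferential curvature k_c = sin(phi)/r
# needs none. A sphere of radius R, where both equal 1/R, serves as a check.

R = 50.0                                    # sphere radius (arbitrary units)
s = np.linspace(0.0, 0.5 * np.pi * R, 200)  # arc length from pole along meridian
phi = s / R                                 # profile inclination, "measured" directly
r = R * np.sin(phi)                         # radial coordinate of the profile

k_m = np.gradient(phi, s)                   # meridional curvature: one differentiation
with np.errstate(invalid="ignore", divide="ignore"):
    k_c = np.sin(phi) / r                   # circumferential curvature (0/0 at the pole)

print(np.allclose(k_m, 1.0 / R))            # True
print(np.allclose(k_c[1:], 1.0 / R))        # True, away from the pole
```

Computing the same curvatures from (r, L) coordinates instead would require differentiating the profile twice, which is why measuring the inclination directly gains so much accuracy.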
Abstract:
It is well established that hydrodynamic journal bearings are responsible for self-excited vibrations and have the effect of lowering the critical speeds of rotor systems. The forces within the oil film wedge, generated by the vibrating journal, may be represented by displacement and velocity coefficients, thus allowing the dynamical behaviour of the rotor to be analysed both for stability purposes and for anticipating the response to unbalance. However, information describing these coefficients is sparse, misleading, and very often not applicable to industrial-type bearings. Results of a combined analytical and experimental investigation into the hydrodynamic oil film coefficients operating in the laminar region are therefore presented, the analysis being applied to a 120-degree partial journal bearing having a 5.0 in. diameter journal and an L/D ratio of 1.0. The theoretical analysis shows that for this popular type of bearing, the eight linearized coefficients do not accurately describe the behaviour of the vibrating journal based on the theory of small perturbations, because they are masked by the presence of nonlinearity. A method is developed using the second-order terms of the Taylor expansion, whereby design charts are provided which predict the twenty-eight force coefficients both for aligned journals and for varying amounts of journal misalignment. The resulting non-linear equations of motion are solved using a modified Newton-Raphson method whereby the whirl trajectories are obtained, thus providing a physical appreciation of the bearing characteristics under dynamically loaded conditions.
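The modified Newton-Raphson solution referred to above belongs to a standard family of iterations for nonlinear systems F(x) = 0. A minimal sketch with simple step-halving damping; the two-equation system below is an arbitrary stand-in for illustration, not the bearing's equations of motion:

```python
import numpy as np

# Hedged sketch of a damped Newton-Raphson iteration for a nonlinear system
# F(x) = 0. The test system is an arbitrary stand-in, not the bearing model.

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 given the Jacobian J, with step halving for damping."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x
        step = np.linalg.solve(J(x), -f)   # full Newton step
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(x + lam * step)) >= np.linalg.norm(f):
            lam *= 0.5                     # halve the step until the residual drops
        x = x + lam * step
    return x

# Stand-in system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])

root = newton_raphson(F, J, [2.0, 0.0])
print(np.round(root, 6))
```

Tracing a whirl trajectory would repeat such a solve at each time or phase step; the damping guards against divergence when the nonlinear film forces make the full Newton step overshoot.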
Resumo:
The effects of antioxidants and stabilizers on the oxidative degradation of polyolefins (low density polyethylene [LDPE] and polypropylene [PP]) have been studied after subjecting the polymers to prior high temperature processing treatments. The changes in both the chemical and physical properties of unstabilized polymers occurring during processing were found to be strongly dependent on the amount of oxygen present in the mixer. Subsequent thermal and photo-oxidation showed very similar characteristics, and the chromophore primarily responsible for both thermo- and photo-oxidative degradation of unstabilized polymers was found to be hydroperoxide formed during processing. Removal of hydroperoxide by heat treatment in an inert atmosphere, although increasing ketonic carbonyl concentration, markedly decreased the rate of photo-oxidation, introducing an induction period similar to that of an unprocessed sample. It was concluded that hydroperoxides are the most important initiators in normally processed polymers during the early stages of photo-oxidation. Antioxidants such as metal dithiocarbamates, which act by destroying peroxides to give non-radical products, were found to be efficient melt stabilizers for polyolefins and effective UV stabilizers during the initial photo-oxidation stage, whilst a phenolic antioxidant, n-octadecyl-3-(3',5'-di-tert-butyl-4'-hydroxyphenyl) propionate (Irganox 1076), retarded the photo-oxidation rate in the later stages. A typical 'UV absorber', 2-hydroxy-4-octyloxy-benzophenone (HOBP), has a minor thermal antioxidant action but retarded photo-oxidation at all stages. A substituted piperidine derivative, bis[2,2,6,6-tetramethylpiperidinyl-4] sebacate (Tinuvin 770), behaved as a pro-oxidant during thermal oxidation of polyolefins but was an effective stabilizer against UV light. 
The UV absorber HOBP synergised effectively with both peroxide-decomposing antioxidants (metal dithiocarbamates) and a chain-breaking antioxidant (Irganox 1076) during photo-oxidation of the polymers studied, whereas the combined effect was merely additive during thermal oxidation. By contrast, the peroxide decomposers and the chain-breaking antioxidant (Irganox 1076), which were effective synergists during thermal oxidation of LDPE, were antagonistic during photo-oxidation. The mechanisms of these processes are discussed.
Resumo:
Time after time… and aspect and mood. Over the last twenty-five years, the study of time, aspect and - to a lesser extent - mood acquisition has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can be the contribution of this book? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it focuses on a wide range of typologically different languages, and features less studied languages such as Korean and Bulgarian. Finally, the book gathers some well-established scholars, young researchers, and even research students in a rich inter-generational exchange that ensures the survival but also the renewal and refreshment of the discipline. The book at a glance: the first part of the volume is devoted to the study of child language acquisition in monolingual, impaired and bilingual acquisition, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs countable distinction on the constitution of a telic interpretation of the verb phrase. She finds that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity. 
She therefore concludes that, beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that presents the specificity of being grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children distributed in two age groups (15 between 2;11 and 3;11, and 25 between 4;00 and 5;00) develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the recourse to expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin's study, we turn to both another type of informant and another framework. Indeed, he studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder whose causes exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin aims to test two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment. Spellout is the point at which the Computational System for Human Language (CHL) passes the most recently derived part of the derivation over to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem only occurs at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model. 
Olga Gupol, Susan Rohstein and Sharon Armon-Lotem's chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children distributed in two age groups, 4;0 - 4;11 and 7;0 - 8;0. They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralize the imperfective on the principles of simplicity (as imperfectives are the least morphologically marked forms), universality (as the imperfective covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter when it comes to distinguishing between iterativity (conveyed with the use of the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and, on the other hand, both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is differentially represented among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig's chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English. 
Then, a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser-studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners. Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker -ess- and the progressive -ko iss- in the interlanguage of her informants throughout their third and fourth semesters of study. The data collected through six sessions of conversational interviews and picture description tasks seem to support the Aspect Hypothesis. Indeed, learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and, later, achievements. In addition, -ko iss- moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot's is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian distributed across low and high proficiency levels. 
They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it is replacing another complex form. As regards the use of base forms, the 3rd person singular of the present, and to some extent the infinitive, play this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard's contribution takes up the challenge of focusing on the poorer relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, the conditional and the subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (two years at university plus one year abroad). An analysis of relative frequencies leads him to suggest a continuum of use going from the futurate present to the conditional with past hypothetical conditional clauses in si, which needs to be confirmed by further studies. Acknowledgements: The present volume was inspired by the conference Acquisition of Tense - Aspect - Mood in First and Second Language held on 9th and 10th February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions. 
This collection of papers was double peer-reviewed by an international scientific committee made up of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d'Aix-en-Provence). We are very much indebted to that scientific committee for their insightful input at each step of the project. We are also thankful for the financial support of the Association for French Language Studies through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.
Resumo:
Background: Activated factor XIII (FXIIIa), a transglutaminase, introduces fibrin-fibrin and fibrin-inhibitor cross-links, resulting in more mechanically stable clots. The impact of cross-linking on resistance to fibrinolysis has proved challenging to evaluate quantitatively. Methods: We used a whole blood model thrombus system to characterize the role of cross-linking in resistance to fibrinolytic degradation. Model thrombi, which mimic arterial thrombi formed in vivo, were prepared with incorporated fluorescently labeled fibrinogen, in order to allow quantification of fibrinolysis as released fluorescence units per minute. Results: A site-specific inhibitor of transglutaminases, added to blood from normal donors, yielded model thrombi that lysed more easily, either spontaneously or by plasminogen activators. This was observed both in the cell/platelet-rich head and in the fibrin-rich tail. Model thrombi from an FXIII-deficient patient lysed more quickly than normal thrombi; replacement therapy with FXIII concentrate normalized lysis. In vitro addition of purified FXIII to the patient's pre-prophylaxis blood, but not to normal control blood, resulted in more stable thrombi, indicating no further efficacy of supraphysiologic FXIII. However, addition of tissue transglutaminase, which is synthesized by endothelial cells, generated thrombi that were more resistant to fibrinolysis; this may stabilize mural thrombi in vivo. Conclusions: Model thrombi formed under flow, even those prepared as plasma 'thrombi', reveal the effect of FXIII on fibrinolysis. Although very low levels of FXIII are known to produce mechanical clot stability, and to achieve γ-dimerization, they appear to be suboptimal in conferring full resistance to fibrinolysis.
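The fluorescence read-out described above lends itself to a simple rate calculation. Below is a minimal sketch, not the authors' analysis pipeline: an ordinary least-squares slope of released fluorescence against time gives a lysis rate in fluorescence units per minute. The data values are invented for illustration.

```python
def lysis_rate(times_min, fluorescence):
    """Ordinary least-squares slope of fluorescence vs time:
    released fluorescence units per minute."""
    n = len(times_min)
    mt = sum(times_min) / n
    mf = sum(fluorescence) / n
    num = sum((t - mt) * (f - mf) for t, f in zip(times_min, fluorescence))
    den = sum((t - mt) ** 2 for t in times_min)
    return num / den

# hypothetical readings at 0, 10, 20, 30 minutes
print(lysis_rate([0, 10, 20, 30], [100, 225, 350, 475]))   # 12.5
```

Comparing such slopes between inhibitor-treated and control thrombi is one way to express the stabilizing effect of cross-linking as a single number.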
Resumo:
As microblog services such as Twitter become a fast and convenient communication channel, identification of trendy topics in microblog services has great academic and business value. However, detecting trendy topics is very challenging because of the huge number of users and short-text posts in microblog diffusion networks. In this paper we introduce a trendy topic detection system that operates under computation and communication resource constraints. In stark contrast to retrieving and processing the whole microblog content stream, we develop the idea of selecting a small set of microblog users and processing only their posts, achieving acceptable overall trendy topic coverage without exceeding the resource budget for detection. We formulate the selection of this subset of users as a mixed-integer optimization problem and develop heuristic algorithms to compute approximate solutions. The proposed system is evaluated with real-time test data retrieved from Sina Weibo, the dominant microblog service provider in China. It is shown that by monitoring 500 out of 1.6 million microblog users and tracking their microposts (about 15,000 daily) with our system, nearly 65% of trendy topics can be detected, on average 5 hours before they appear in Sina Weibo's official trends.
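The subset-selection idea can be illustrated with a greedy maximum-coverage heuristic, a common approach to this kind of budgeted coverage problem. This sketch and its toy data are ours, not the paper's algorithm: it assumes each user is summarized by the set of topics their posts touch.

```python
def select_users(user_topics, budget):
    """Greedily pick the user whose posts cover the most not-yet-covered
    topics, until the monitoring budget is spent or no user adds coverage.
    user_topics: dict mapping user id -> set of topic ids they post about."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(user_topics, key=lambda u: len(user_topics[u] - covered),
                   default=None)
        if best is None or not (user_topics[best] - covered):
            break  # nothing left to gain
        chosen.append(best)
        covered |= user_topics[best]
    return chosen, covered

users = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}}
chosen, covered = select_users(users, budget=2)
print(chosen)   # ['A', 'B']  (B and C tie on gain; dict order breaks the tie)
```

The greedy rule gives the classic (1 - 1/e) approximation guarantee for maximum coverage, which is one reason such heuristics are a natural substitute for solving the mixed-integer program exactly.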
Resumo:
Cities are oftentimes seen as undergoing a process of "emergence" in the "new economy." However, this process has largely remained empirically underdetermined. This article examines the intra-city geography of emerging businesses in newly dominant sectors of the urban economy. The change in dominant sectors coincides with a shift towards small- and medium-sized businesses, creating new economic opportunities for urban residential areas. The residential neighborhood is introduced as a place where supply and demand side drivers operate to attract or limit such new economic activity. Allen Scott's perspective of the cognitive-cultural economy is used to analyze which neighborhoods are flourishing sites of the cognitive-cultural sectors. His perspective on industries that are on the rise in urban environments and their growth potential proves very valuable. Social demographic characteristics on the level of the neighborhood are used as predictors of the composition of the local economy. The analyses show that in particular wealthy, gentrified neighborhoods are more prone than others to becoming "hubs" of the cognitive-cultural economy. However, disadvantaged neighborhoods may under certain conditions serve as incubators for business start-ups as they offer low-rent office spaces. This has important consequences for their future economic growth potential as well as the distribution of successful businesses in the city. © 2013 Urban Affairs Association.
Resumo:
Full text: The idea of producing proteins from recombinant DNA hatched almost half a century ago. In his PhD thesis, Peter Lobban foresaw the prospect of inserting foreign DNA (from any source, including mammalian cells) into the genome of a λ phage in order to detect and recover protein products from Escherichia coli [1,2]. Only a few years later, in 1977, Herbert Boyer and his colleagues succeeded in the first ever expression of a peptide-coding gene in E. coli — they produced recombinant somatostatin [3] followed shortly after by human insulin. The field has advanced enormously since those early days and today recombinant proteins have become indispensable in advancing research and development in all fields of the life sciences. Structural biology, in particular, has benefitted tremendously from recombinant protein biotechnology, and an overwhelming proportion of the entries in the Protein Data Bank (PDB) are based on heterologously expressed proteins. Nonetheless, synthesizing, purifying and stabilizing recombinant proteins can still be thoroughly challenging. For example, the soluble proteome is organized to a large part into multicomponent complexes (in humans often comprising ten or more subunits), posing critical challenges for recombinant production. A third of all proteins in cells are located in the membrane, and pose special challenges that require a more bespoke approach. Recent advances may now mean that even these most recalcitrant of proteins could become tenable structural biology targets on a more routine basis. In this special issue, we examine progress in key areas that suggests this is indeed the case. Our first contribution examines the importance of understanding quality control in the host cell during recombinant protein production, and pays particular attention to the synthesis of recombinant membrane proteins. 
A major challenge faced by any host cell factory is the balance it must strike between its own requirements for growth and the fact that its cellular machinery has essentially been hijacked by an expression construct. In this context, Bill and von der Haar examine emerging insights into the role of the dependent pathways of translation and protein folding in defining high-yielding recombinant membrane protein production experiments for the common prokaryotic and eukaryotic expression hosts. Rather than acting as isolated entities, many membrane proteins form complexes to carry out their functions. To understand their biological mechanisms, it is essential to study the molecular structure of the intact membrane protein assemblies. Recombinant production of membrane protein complexes is still a formidable, at times insurmountable, challenge. In these cases, extraction from natural sources is the only option to prepare samples for structural and functional studies. Zorman and co-workers, in our second contribution, provide an overview of recent advances in the production of multi-subunit membrane protein complexes and highlight recent achievements in membrane protein structural research brought about by state-of-the-art near-atomic resolution cryo-electron microscopy techniques. E. coli has been the dominant host cell for recombinant protein production. Nonetheless, eukaryotic expression systems, including yeasts, insect cells and mammalian cells, are increasingly gaining prominence in the field. The yeast species Pichia pastoris is a well-established recombinant expression system for a number of applications, including the production of a range of different membrane proteins. Byrne reviews high-resolution structures that have been determined using this methylotroph as an expression host. Although it is not yet clear why P. 
pastoris is suited to producing such a wide range of membrane proteins, its ease of use and the availability of diverse tools that can be readily implemented in standard bioscience laboratories mean that it is likely to become an increasingly popular option in structural biology pipelines. The contribution by Columbus concludes the membrane protein section of this volume. In her overview of post-expression strategies, Columbus surveys the four most common biochemical approaches for the structural investigation of membrane proteins. Limited proteolysis has successfully aided structure determination of membrane proteins in many cases. Deglycosylation of membrane proteins following production and purification analysis has also facilitated membrane protein structure analysis. Moreover, chemical modifications, such as lysine methylation and cysteine alkylation, have proven their worth to facilitate crystallization of membrane proteins, as well as NMR investigations of membrane protein conformational sampling. Together these approaches have greatly facilitated the structure determination of more than 40 membrane proteins to date. It may be an advantage to produce a target protein in mammalian cells, especially if authentic post-translational modifications such as glycosylation are required for proper activity. Chinese Hamster Ovary (CHO) cells and Human Embryonic Kidney (HEK) 293 cell lines have emerged as excellent hosts for heterologous production. The generation of stable cell-lines is often an aspiration for synthesizing proteins expressed in mammalian cells, in particular if high volumetric yields are to be achieved. In his report, Buessow surveys recent structures of proteins produced using stable mammalian cells and summarizes both well-established and novel approaches to facilitate stable cell-line generation for structural biology applications. The ambition of many biologists is to observe a protein's structure in the native environment of the cell itself. 
Until recently, this seemed to be more of a dream than a reality. Advances in nuclear magnetic resonance (NMR) spectroscopy techniques, however, have now made possible the observation of mechanistic events at the molecular level of protein structure. Smith and colleagues, in an exciting contribution, review emerging ‘in-cell NMR’ techniques that demonstrate the potential to monitor biological activities by NMR in real time in native physiological environments. A current drawback of NMR as a structure determination tool derives from size limitations of the molecule under investigation and the structures of large proteins and their complexes are therefore typically intractable by NMR. A solution to this challenge is the use of selective isotope labeling of the target protein, which results in a marked reduction of the complexity of NMR spectra and allows dynamic processes even in very large proteins and even ribosomes to be investigated. Kerfah and co-workers introduce methyl-specific isotopic labeling as a molecular tool-box, and review its applications to the solution NMR analysis of large proteins. Tyagi and Lemke next examine single-molecule FRET and crosslinking following the co-translational incorporation of non-canonical amino acids (ncAAs); the goal here is to move beyond static snap-shots of proteins and their complexes and to observe them as dynamic entities. The encoding of ncAAs through codon-suppression technology allows biomolecules to be investigated with diverse structural biology methods. In their article, Tyagi and Lemke discuss these approaches and speculate on the design of improved host organisms for ‘integrative structural biology research’. Our volume concludes with two contributions that resolve particular bottlenecks in the protein structure determination pipeline. The contribution by Crepin and co-workers introduces the concept of polyproteins in contemporary structural biology. Polyproteins are widespread in nature. 
They represent long polypeptide chains in which individual smaller proteins with different biological function are covalently linked together. Highly specific proteases then tailor the polyprotein into its constituent proteins. Many viruses use polyproteins as a means of organizing their proteome. The concept of polyproteins has now been exploited successfully to produce hitherto inaccessible recombinant protein complexes. For instance, by means of a self-processing synthetic polyprotein, the influenza polymerase, a high-value drug target that had remained elusive for decades, has been produced, and its high-resolution structure determined. In the contribution by Desmyter and co-workers, a further, often imposing, bottleneck in high-resolution protein structure determination is addressed: The requirement to form stable three-dimensional crystal lattices that diffract incident X-ray radiation to high resolution. Nanobodies have proven to be uniquely useful as crystallization chaperones, to coax challenging targets into suitable crystal lattices. Desmyter and co-workers review the generation of nanobodies by immunization, and highlight the application of this powerful technology to the crystallography of important protein specimens including G protein-coupled receptors (GPCRs). Recombinant protein production has come a long way since Peter Lobban's hypothesis in the late 1960s, with recombinant proteins now a dominant force in structural biology. The contributions in this volume showcase an impressive array of inventive approaches that are being developed and implemented, ever increasing the scope of recombinant technology to facilitate the determination of elusive protein structures. Powerful new methods from synthetic biology are further accelerating progress. Structure determination is now reaching into the living cell with the ultimate goal of observing functional molecular architectures in action in their native physiological environment. 
We anticipate that even the most challenging protein assemblies will be tackled by recombinant technology in the near future.
Resumo:
Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugate ocular oscillations that can arise in the first months of life. The pathogenesis of congenital nystagmus is still under investigation. In general, CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, image stabilisation is still achieved during the short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse the main features of the nystagmus, such as shape, amplitude and frequency. Using eye movement recordings, it is also possible to compute estimated visual acuity predictors: analytical functions which estimate expected visual acuity from signal features such as foveation time and foveation position variability. Use of these functions adds information to typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning and monitoring. This study focuses on robust detection of CN patients' foveations. Specifically, it proposes a method to recognize the exact signal tracts in which a subject foveates. This paper also analyses foveation sequences. About 50 eye-movement recordings, either infrared-oculographic or electro-oculographic, from different CN subjects were acquired. Results suggest that an exponential interpolation for the slow phases of nystagmus could improve foveation time computation and reduce the influence of braking saccades and data noise. Moreover, a concise description of foveation sequence variability can be achieved using non-fitting splines. © 2009 Springer Berlin Heidelberg.
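The foveation-detection step can be illustrated with a toy velocity-and-position criterion. This is a schematic sketch, not the method proposed in the paper (which relies on exponential interpolation of slow phases and non-fitting splines); the thresholds, sampling rate and signal values are invented for illustration.

```python
def foveation_intervals(position_deg, fs_hz, vel_thresh=4.0, pos_thresh=0.5):
    """Flag samples where both eye velocity (deg/s) and position error (deg)
    are small, then group consecutive flagged samples into candidate
    foveation intervals, returned as (start, end) sample indices."""
    dt = 1.0 / fs_hz
    # first-difference velocity estimate
    vel = [(position_deg[i + 1] - position_deg[i]) / dt
           for i in range(len(position_deg) - 1)]
    flags = [abs(v) < vel_thresh and abs(p) < pos_thresh
             for v, p in zip(vel, position_deg)]
    intervals, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(flags)))
    return intervals

# synthetic trace at 100 Hz: two quiet stretches separated by a fast phase
pos = [0.0, 0.01, 0.02, 0.03, 2.0, 2.5, 0.0, 0.01, 0.02]
print(foveation_intervals(pos, 100.0))   # [(0, 3), (6, 8)]
```

In practice the velocity estimate would be computed on the exponentially interpolated slow phases, which is precisely where the paper reports gains in robustness to braking saccades and noise.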