928 results for techniques of acting


Relevance: 90.00%

Abstract:

How does a firm choose a proper mode of foreign direct investment (FDI) for entering a foreign market? Which mode of entry performs better? What are the performance implications of joint venture (JV) ownership structure? These important questions face a multinational enterprise (MNE) that decides to enter a foreign market. However, few studies have been conducted on such issues, and no consistent or conclusive findings have been generated, especially with respect to China. This thesis is composed of five chapters, providing corresponding answers to the questions given above. Specifically, Chapter One is an overall introductory chapter. Chapter Two is about the choice of entry mode of FDI in China. Chapter Three examines the relationship between four main entry modes and performance. Chapter Four explores the performance implications of JV ownership structure. Chapter Five is an overall concluding chapter. These empirical studies are based on the most recent and richest data that has never been explored in previous studies. It contains information on 11,765 foreign-invested enterprises in China in seven manufacturing industries in 2000, 10,757 in 1999, and 10,666 in 1998. The four FDI entry modes examined include wholly-owned enterprises (WOEs), equity joint ventures (EJVs), contractual joint ventures (CJVs), and joint stock companies (JSCs). In Chapter Two, a multinomial logit model is established, and techniques of multiple linear regression analysis are employed in Chapters Three and Four. It was found that MNEs, under the conditions of a good investment environment, large capital commitment and small cultural distance, prefer the WOE strategy. If these conditions are not met, the EJV mode would be of greater use. The relative propensity to pursue the CJV mode increases with a good investment environment, small capital commitment, and small cultural distance. JSCs are not favoured by MNEs when the investment environment improves and when affiliates are located in the coastal areas. MNEs have been found to have a greater preference for an EJV as a mode of entry into the Chinese market in all industries. It is also found that in terms of return on assets (ROA) and asset turnover, WOEs perform the best, followed by EJVs, CJVs, and JSCs. Finally, minority-owned EJVs or JSCs are found to outperform their majority-owned counterparts in terms of ROA and asset turnover.
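
As a rough illustration of the Chapter Two estimation strategy, a multinomial logit of entry-mode choice can be set up as in the sketch below. The data are synthetic and the variable names (invest_env, capital, cultural_dist) are hypothetical stand-ins, not the thesis's dataset; statsmodels' MNLogit is used as a generic estimator.

```python
# Minimal sketch of a multinomial logit entry-mode model.
# Data are synthetic; variable names are hypothetical, not the thesis's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "invest_env":    rng.normal(size=n),   # investment-environment index
    "capital":       rng.normal(size=n),   # capital commitment
    "cultural_dist": rng.normal(size=n),   # cultural distance
})
# Synthetic choice among WOE/EJV/CJV/JSC, coded 0..3.
score = np.column_stack([
    df.invest_env + df.capital - df.cultural_dist,   # WOE utility
    np.zeros(n),                                     # EJV (reference)
    df.invest_env - df.capital - df.cultural_dist,   # CJV utility
    -df.invest_env,                                  # JSC utility
])
probs = np.exp(score) / np.exp(score).sum(axis=1, keepdims=True)
y = np.array([rng.choice(4, p=p) for p in probs])

X = sm.add_constant(df)
fit = sm.MNLogit(y, X).fit(disp=False)
print(fit.summary())   # log-odds of each mode relative to the base category
```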

Relevance: 90.00%

Abstract:

The aim of the research is to develop an e-business selection framework for small and medium enterprises (SMEs) by integrating established techniques in planning. The research is case based, comprising four case studies carried out in the printing industry for the purpose of evaluating the framework. Two of the companies are from Singapore, while the other two are from Guangzhou and Jinan, China, respectively. To determine the need for an e-business selection framework for SMEs, extensive literature reviews were carried out in the areas of e-business, business planning frameworks, SMEs and the printing industry. An e-business selection framework is then proposed by integrating the three established techniques of the Balanced Scorecard (BSC), Value Chain Analysis (VCA) and Quality Function Deployment (QFD). The newly developed selection framework was pilot tested using a published case study before actual evaluation was carried out in the four case study companies. The case study methodology was chosen because of its ability to integrate the diverse data collection techniques required to generate the BSC, VCA and QFD for the selection framework. The findings of the case studies revealed that the three techniques of BSC, VCA and QFD can be integrated seamlessly to complement each other's strengths in e-business planning. The eight-step methodology of the selection framework can provide SMEs with a step-by-step approach to e-business through structured planning. The project has also provided better understanding and deeper insights into SMEs in the printing industry.
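
To illustrate how a QFD step inside such a framework might rank e-business options against BSC-derived requirements, here is a toy scoring sketch; every requirement, option and weight is invented for illustration.

```python
# Toy QFD-style relationship matrix: rows are business requirements
# (e.g. drawn from a Balanced Scorecard), columns are candidate
# e-business options. All names and weights are hypothetical.
requirements = {"faster quoting": 5, "online proofing": 3, "order tracking": 4}
options = ["web storefront", "customer portal", "EDI link"]

# Relationship strengths on the conventional QFD 9/3/1 scale.
relationship = {
    "web storefront":  {"faster quoting": 9, "online proofing": 1, "order tracking": 3},
    "customer portal": {"faster quoting": 3, "online proofing": 9, "order tracking": 9},
    "EDI link":        {"faster quoting": 3, "online proofing": 1, "order tracking": 9},
}

# Technical importance = sum over requirements of weight x strength.
for opt in options:
    score = sum(w * relationship[opt][req] for req, w in requirements.items())
    print(f"{opt}: {score}")
```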

Relevance: 90.00%

Abstract:

Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, due to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by an iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness is also determined using the frequency response data of the unmodified structure by a structural modification technique. Thus, mass or stiffness does not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
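
The flavour of the iterative sensitivity-based updating can be shown on a toy two-degree-of-freedom model. This sketch uses plain finite-difference eigenvalue sensitivities and omits the thesis's mass/stiffness-addition scheme and Bayesian weighting; it is an illustration of the general sensitivity/least-squares step, not the thesis's method.

```python
# Iterative eigenvalue-sensitivity updating for a 2-DOF mass-spring
# model (masses known, stiffnesses k1, k2 to be identified).
import numpy as np

M = np.diag([1.0, 1.0])                       # known diagonal mass matrix

def K(p):                                     # stiffness matrix from p = [k1, k2]
    k1, k2 = p
    return np.array([[k1 + k2, -k2], [-k2, k2]])

def eigvals(p):                               # generalized eigenvalues of (K, M)
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, K(p)))).real

lam_meas = eigvals(np.array([2.0, 1.0]))      # "measured" data from true parameters
p = np.array([1.5, 0.8])                      # initial (inexact) estimates

for _ in range(20):
    r = lam_meas - eigvals(p)                 # residual in the eigenvalues
    S = np.empty((2, 2))                      # finite-difference sensitivity matrix
    for j in range(2):
        dp = np.zeros(2); dp[j] = 1e-6
        S[:, j] = (eigvals(p + dp) - eigvals(p)) / 1e-6
    p = p + np.linalg.lstsq(S, r, rcond=None)[0]

print(p)                                      # converges to [2.0, 1.0]
```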

Relevance: 90.00%

Abstract:

This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for the development of this model was demonstrated by the growth of local economic initiatives during the 1970s, and by the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of research involved confirming that the potential for micro-economic and spatial initiatives existed. This was done by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand. This led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap, providing a low-budget model, capable of implementation at the small-area level, that generates a vast array of industrially disaggregated data, in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
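
The core bookkeeping of an employment-dependency model reduces to multiplying a matrix of dependency coefficients by a final-demand vector; the sketch below uses invented sectors and coefficients purely to show the mechanics.

```python
# Toy employment-dependency calculation: local employment by industry
# as a linear function of UK final demand by sector. All sector names
# and coefficients are illustrative only.
import numpy as np

sectors = ["engineering", "textiles", "services"]
final_demand = np.array([120.0, 45.0, 300.0])      # UK final demand (hypothetical)

# Local jobs per unit of national final demand in each sector --
# the "dependency" coefficients (rows: local industries).
jobs_per_unit = np.array([[0.80, 0.05, 0.02],
                          [0.03, 1.10, 0.01],
                          [0.10, 0.08, 0.60]])

local_jobs = jobs_per_unit @ final_demand          # simulated local employment
for s, j in zip(sectors, local_jobs):
    print(f"{s}: {j:,.0f} jobs")

# Impact simulation: shock final demand and difference the results.
shocked = final_demand * np.array([0.9, 1.0, 1.0]) # 10% fall in engineering demand
print("jobs lost:", (jobs_per_unit @ (final_demand - shocked)).sum())
```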

Relevance: 90.00%

Abstract:

This thesis attempts a psychological investigation of hemispheric functioning in developmental dyslexia. Previous work using neuropsychological methods with developmental dyslexics is reviewed, and original work is presented both of a conventional psychometric nature and also utilising a new means of intervention. At the inception of inquiry into dyslexia, comparisons were drawn between developmental dyslexia and acquired alexia, promoting a model of brain damage as the common cause. Subsequent investigators found developmental dyslexics to be neurologically intact, and so an alternative hypothesis was offered, namely that language is abnormally localized (not in the left hemisphere). Research in the last decade, using the advanced techniques of modern neuropsychology, has indicated that developmental dyslexics are probably left hemisphere dominant for language. The development of a new type of pharmaceutical preparation (that appears to have a left hemisphere effect) offers an opportunity to test the experimental hypothesis. This hypothesis propounds that most dyslexics are left hemisphere language dominant, but some of these language related operations are dysfunctioning. The methods utilised are those of psychological assessment of cognitive function, both in a traditional psychometric situation, and with a new form of intervention (Piracetam). The information resulting from intervention will be judged on its therapeutic validity and contribution to the understanding of hemispheric functioning in dyslexics. The experimental studies using conventional psychometric evaluation revealed a dyslexic profile of poor sequencing and name coding ability, with adequate spatial and verbal reasoning skills. Neuropsychological information would tend to suggest that this profile was indicative of adequate right hemisphere abilities and deficits in some left hemisphere abilities. When an intervention agent (Piracetam) was used with young adult dyslexics there were improvements in both the rate of acquisition and the conservation of verbal learning. An experimental study with dyslexic children revealed that Piracetam appeared to improve reading, writing and sequencing, but did not influence spatial abilities. This would seem to concord with other recent findings that developmental dyslexics may have left hemisphere language localisation, although some of these language related abilities are dysfunctioning.

Relevance: 90.00%

Abstract:

Phosphonoformate and phosphonoacetate are effective antiviral agents; however, they are charged at physiological pH, and as such their penetration into cells and diffusion across the blood-brain barrier is limited. In an attempt to increase the lipophilicity and improve the transport properties of these molecules, prodrugs were synthesised and their stabilities and reconversion to the parent compound subsequently investigated by the techniques of 31P nuclear magnetic resonance spectroscopy and high performance liquid chromatography. A series of 4-substituted dibenzyl (methoxycarbonyl)phosphonates were prepared and found to be hydrolytically unstable, giving predominantly the diesters, benzyl (methoxycarbonyl)phosphonates. This instability arose from the electron-withdrawing effect of the carbonyl group promoting nucleophilic attack at phosphorus. It was possible to influence the mechanism and, to some extent, the rate of hydrolysis of the phosphonoformate triesters to the diesters by varying the electronic nature of the substituent in the 4-position of the aromatic ring. Strongly electron-withdrawing groups increased the sensitivity of phosphorus to nucleophilic attack, thus promoting P-O bond cleavage and rapid hydrolysis. Conversely, weakly electron-withdrawing substituents encouraged C-O bond fission, presumably through resonance stabilisation of the benzyl carbonium ion. The loss of the protecting group on phosphorus was in competition with nucleophilic attack at the carbonyl group, resulting in P-C bond cleavage with dibenzyl phosphite formation. The high instability and P-C bond fission make triesters unsuitable prodrug forms of phosphonoformate. A range of chemically stable triesters of phosphonoacetate were synthesised and their bioactivation investigated. Di(benzoyloxymethyl) (methoxycarbonylmethyl)phosphonates degraded to the relevant benzoyloxymethyl (methoxycarbonylmethyl)phosphonate in the presence of esterase. The enzymatic activation was restricted to the removal of only one protecting group from phosphorus, most likely due to the close proximity of the benzoyloxy ester function to the anionic charge on the diester. However, in similar systems di(4-alkanoyloxybenzyl) (methoxycarbonylmethyl)phosphonates degraded in the presence of esterase with the loss of both protecting groups on phosphorus to give the monoester, (methoxycarbonylmethyl)phosphonate, via the intermediary of the unstable 4-hydroxybenzyl esters. The methoxycarbonyl function remained intact. The rate of enzymatic hydrolysis and subsequent removal of the protecting groups on phosphorus was dependent on the nature of the alkanoyl group and was most rapid for the 4-n-butanoyloxybenzyl and 4-iso-butanoyloxybenzyl esters of phosphonoacetate. This provides a strategy for the design of a prodrug with sufficient stability in plasma to reach the central nervous system in high concentration, wherein rapid metabolism to the active drug by brain-associated enzymes occurs.

Relevance: 90.00%

Abstract:

The ability of Escherichia coli to express the K88 fimbrial adhesin was satisfactorily indicated by the combined techniques of ELISA, haemagglutination and latex agglutination. Detection of expression by electron microscopy and the ability to metabolize raffinose were unsuitable. Quantitative expression of the K88 adhesin was determined by ELISA. Expression was found to vary according to the E. coli strain examined and the media type and form. In general, the total amount was greater, while the amount/cfu was less, on agar than in broth cultures. Expression of the K88 adhesin during unshaken batch culture was related to the growth rate and was maximal during late logarithmic to early stationary phase. A combination of heat extraction, ammonium sulphate and isoelectric precipitation was found suitable for both large and small scale preparation of purified K88ab adhesin. Extraction of the K88 adhesin was sensitive to pH, and it was postulated that this may affect the site of colonisation by ETEC in vivo. Results of haemagglutination experiments were consistent with the hypothesis that the K88 receptor present on erythrocytes is composed of two elements, one responsible for the binding of K88ab and K88ac and a second responsible for the binding of the K88ad adhesin. Comparison of the haemagglutinating properties of cell-free and cell-bound K88 adhesin revealed some differences, probably indicating a minor conformational change in the K88 adhesin on its isolation. The K88ab adhesin was found to bind to erythrocytes over a wide pH range (pH 4-9) and was inhibited by αK88ab and αK88b antisera. Inhibition of haemagglutination was noted with crude heparin, mannan and porcine gastric mucin, chondrosine and several hexosamines, glucosamine in particular. The most potent inhibitor of haemagglutination was n-dodecyl-β-D-glucopyranoside, one of a series of glucosides found to have inhibitory properties. Correlation between the hydrophobicity of the glucosides tested and the degree of inhibition observed suggested that hydrophobic forces were important in the interaction of the K88 adhesin with its receptor. The results of Scatchard and Hill plots indicated that binding of the K88ab adhesin to porcine enterocytes in the majority of cases is a two-step, three-component system. The first K88 receptor (or site) had a Ka of 1.59 x 10^14 M^-1 and a minimum of 4.3 x 10^4 sites/enterocyte. The second receptor (or site) had a Ka of 4.2 x 10^12 M^-1 with a calculated 1.75 x 10^5 sites/enterocyte. Attempts to inhibit binding of cell-free K88 adhesin to porcine enterocytes by lectins were unsuccessful. However, several carbohydrates including trehalose, lactulose, galactose 1→4 mannopyranoside, chondrosine, galactosamine, stachyose and mannan were inhibitory. The most potent inhibitor was found to be porcine gastric mucin. Inhibition observed with n-octyl-α-D-glucopyranose was difficult to interpret in isolation because of interference with the assay; however, it agreed with the results of haemagglutination inhibition experiments.
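
The Scatchard treatment behind estimates of this kind can be reproduced on synthetic single-site data as follows; the Ka and site count used to generate the data are illustrative, not the thesis's measurements.

```python
# Sketch of a Scatchard analysis for adhesin-enterocyte binding:
# for single-site binding, bound/free = Ka*(n - bound), so a plot of
# bound/free against bound is a line with slope -Ka and intercept Ka*n.
import numpy as np

Ka, n = 4.0e12, 1.75e5                   # association constant (M^-1), sites/cell
free = np.logspace(-14, -10, 30)         # free adhesin concentration (M)
bound = n * Ka * free / (1 + Ka * free)  # single-site binding isotherm

# Linear fit of the Scatchard transform recovers Ka and n.
slope, intercept = np.polyfit(bound, bound / free, 1)
print(f"Ka = {-slope:.3g} M^-1, sites/cell = {intercept / -slope:.3g}")
```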

Relevance: 90.00%

Abstract:

This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provide probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
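
Dempster's rule of combination, the Dempster-Shafer step compared against the Bayesian calculus above, can be sketched over a small frame of water-quality classes; the mass assignments below are invented, not taken from the elicitation described in the thesis.

```python
# Dempster's rule of combination over a frame of three water-quality
# classes {A, B, C}. Focal elements are frozensets; the masses are
# illustrative only.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2                 # mass falling on the empty set
    k = 1.0 - conflict                          # normalisation constant
    return {s: v / k for s, v in combined.items()}

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m_taxon1 = {A: 0.6, A | B: 0.3, A | B | C: 0.1}   # evidence from one sensor taxon
m_taxon2 = {B: 0.5, A | B: 0.4, A | B | C: 0.1}   # evidence from another

for s, v in sorted(combine(m_taxon1, m_taxon2).items(), key=lambda x: -x[1]):
    print(set(s), round(v, 3))
```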

Relevance: 90.00%

Abstract:

The work utilising a new material for contact lenses has fallen into three parts. Physiological considerations: Since the cornea is devoid of blood vessels, its oxygen is derived from the atmosphere. Early hydrophilic gel contact lenses interrupted the flow of oxygen and corneal insult resulted. Three techniques of fenestration were tried to overcome this problem. High speed drilling with 0.1 mm diameter twist drills was found to be mechanically successful, but under clinical conditions mucous blockage of the fenestrations occurred. An investigation was made into the amount of oxygen arriving at the corneal interface, related to gel lens thickness. The results indicated an improvement in corneal oxygen as lens thickness was reduced. The mechanism is thought to be a form of mechanical pump. A series of clinical studies confirmed the experimental work, the use of thin lenses removing the symptoms of corneal hypoxia. Design: The parameters of lens back curvature, lens thickness and lens diameter have been isolated and related to three criteria of vision: (a) visual acuity, (b) visual stability and (c) induced astigmatism. From the results achieved, a revised and basically successful design of lens has been developed. Comparative study: The developed form of lens was compared with traditional lenses in a controlled survey. Twelve factors were assessed over a twenty week period of wear using a total of eighty four patients. The results of this study indicate that whilst the expected changes were noted with the traditional lens wearers, gel lens wearers showed no discernible change in any of the factors measured, with the exception of one parameter. In addition to a description of the completed work, further investigations are suggested which, it is hoped, would further improve the optical performance of gel lenses.

Relevance: 90.00%

Abstract:

The Report of the Robens Committee (1972), the Health and Safety at Work Act (1974) and the Safety Representatives and Safety Committees Regulations (1977) provide the framework within which this study of certain aspects of health and safety is carried out. The philosophy of self-regulation is considered and its development is set within an historical and an industrial relations perspective. The research uses a case study approach to examine the effectiveness of self-regulation in health and safety in a public sector organisation. Within this approach, methodological triangulation employs the techniques of interviews, questionnaires, observation and documentary analysis. The work is based in four departments of a Scottish Local Authority and particular attention is given to three of the main 'agents' of self-regulation - safety representatives, supervisors and safety committees and their interactions, strategies and effectiveness. A behavioural approach is taken in considering the attitudes, values, motives and interactions of safety representatives and management. Major internal and external factors, which interact and which influence the effectiveness of joint self-regulation of health and safety, are identified. It is emphasised that an organisation cannot be studied without consideration of the context within which it operates both locally and in the wider environment. One of these factors, organisational structure, is described as bureaucratic and the model of a Representative Bureaucracy described by Gouldner (1954) is compared with findings from the present study. An attempt is made to ascertain how closely the Local Authority fits Gouldner's model. This research contributes both to knowledge and to theory in the subject area by providing an in-depth study of self-regulation in a public sector organisation, which when compared with such studies as those of Beaumont (1980, 1981, 1982) highlights some of the differences between the public and private sectors. Both empirical data and hypothetical models are used to provide description and explanation of the operation of the health and safety system in the Local Authority. As data were collected during a dynamic period in economic, political and social terms, the research discusses some of the effects of the current economic recession upon safety organisation.

Relevance: 90.00%

Abstract:

The theatre director (metteur en scene in French) is a relatively new figure in theatre practice. It was not until the 1820s that the term 'mise en scene' gained currency. The term 'director' was not in general use until the 1880s. The emergence and the role of the director have been considered from a variety of perspectives, either through the history of theatre (Allevy, Jomaron, Sarrazac, Viala, Biet and Triau); the history of directing (Chinoy and Cole, Boll, Veinstein, Roubine); semiotic approaches to directing (Whitmore, Miller, Pavis); the semiotics of performance (De Marinis); generic approaches to the mise en scene (Thomasseau, Banu); post-dramatic approaches to theatre (Lehmann); or approaches to performance process and the specifics of rehearsal methodology (Bradby and Williams, Giannachi and Luckhurst, Picon-Vallin, Styan). What the scholarly literature has not done so far is to map the parameters necessarily involved in the directing process, to incorporate an analysis of the emergence of the theatre director during the modern period, and to consider its impact on contemporary performance practice. Directing relates primarily to the making of the performance guided by a director, a single figure charged with the authority to make binding artistic decisions. Each director may have her/his own personal approach to the process of preparation prior to a show. This is exemplified, for example, by the variety of terms now used to describe the role and function of directing, from producer, to facilitator or outside eye. However, it is essential at the outset to make two observations, each of which contributes to a justification for a generic analysis (as opposed to a genetic approach). Firstly, a director does not work alone, and cooperation with others is involved at all stages of the process. Secondly, beyond individual variation, the role of the director remains twofold. The first is to guide the actors (meneur de jeu, directeur d'acteurs, coach); the second is to make a visual representation in the performance space (set designer, stage designer, costume designer, lighting designer, scenographe). The increasing place of scenography has led contemporary theatre directors such as Wilson, Castellucci and Fabre to produce performances where the performance space becomes a semiotic dimension that displaces the primacy of the text. The play is not, therefore, the sole artistic vehicle for directing. This definition of directing obviously calls for a definition of what the making of the performance might be. The thesis defines the making of the performance as the activity of bringing about a social event, by at least one performer, providing visual and/or textual meaning in a performance space. This definition enables us to evaluate four consistent parameters throughout theatre history: first, the social aspect associated with the performance event; second, the devising process, which may be based on visual and/or textual elements; third, the presence of at least one performer in the show; fourth, the performance space (which is not simply related to the theatre stage). Although the thesis focuses primarily on theatre practice, such a definition blurs the boundaries between theatre and other collaborative artistic disciplines (cinema, opera, music and dance). These parameters illustrate the possibility of undertaking a generic analysis of directing, and resonate with the historical, political and artistic dimensions considered.
Such a generic perspective on the role of the director addresses three significant questions: an historical question: how/why has the director emerged?; a socio-political question: how/why was the director a catalyst for the politicisation of theatre, and how did s/he subsequently contribute to the rise of State-funded theatre policy?; and an artistic one: how/why has the director changed theatre practice and theory in the twentieth century? Directing for the theatre as an artistic activity is a historically situated phenomenon. It would seem only natural from a contemporary perspective to associate the activity of directing with the function of the director. This is relativised, however, by the question of how the performance was produced before the modern period. The thesis demonstrates that the rise of the director is a progressive and historical phenomenon (Dort) rather than a mere invention (Viala, Sarrazac). A chronological analysis of the making of the performance throughout theatre history is the most useful way to open the study. In order to understand the emergence of the director, the research methodology assesses the interconnection of the four parameters above throughout four main periods of theatre history: the beginning of the Renaissance (meneur de jeu), the classical age (actor-manager and stage designer-manager), the modern period (director) and the contemporary period (director-facilitator, performer). This allows us properly to appraise the progressive emergence of the director, as well as to make an analysis of her/his modern and contemporary role. The first chapter argues that the physical separation between the performance space and its audience, which appeared in the early fifteenth century, has been a crucial feature in the scenographic, aesthetic, political and social organisation of the performance. At the end of the Middle Ages, French farces which raised socio-political issues (see Bakhtin) made a clear division on a single outdoor stage (treteau) between the actors and the spectators, while religious plays (drame liturgique, mystere) were mostly performed in various open outdoor multi-spaces. As long as the performance was liturgical or religious, and therefore confined within an acceptable framework, it was allowed. At the time, the French ecclesiastical and civil authorities tried, on several occasions, to prohibit staged performances. As a result, practitioners developed non-official indoor spaces, the Theatre de la Trinite (1398) being the first French indoor theatre recognized by scholars. This self-exclusion from the open public space involved the breaking of accepted rules by practitioners (e.g. Les Confreres de la Passion), in terms of themes but also through individual input into a secular performance rather than the repetition of commonly known religious canvases. These developments heralded the authorised theatres that began to emerge from the mid-sixteenth century, which in some cases were subsidised in their construction. The construction of authorised indoor theatres, associated with the development of printing, led to a considerable increase in the production of dramatic texts for the stage. Profoundly affecting the reception of the dramatic text by the audience, the distance between the stage and the auditorium accompanied the changing relationship between practitioners and spectators. This distance gave rise to a major development of the role of the actor and of the stage designer.
The second chapter looks at the significance of both the actor and the set designer in the devising process of the performance from the sixteenth century to the end of the nineteenth century. The actor underwent an important shift in function in this period, from the delivery of an unwritten text learned in the medieval oral tradition to a structured improvisation produced by the commedia dell'arte. In this new form of theatre, a chef de troupe or an experienced actor shaped the story, but the text existed only through the improvisation of the actors. The preparation of those performances was, moreover, centred on acting technique and the individual skills of the actor. From this point, there is clear evidence that acting began to be the subject of a number of studies in the mid-sixteenth century, and more significantly in the seventeenth century, in Italy and France. This is revealed through the implementation of a system of notes written by the playwright to the actors (stage directions) in a range of plays (Gerard de Vivier, Comedie de la Fidelite Nuptiale, 1577). The thesis also focuses on Leoni de' Sommi (Quatro dialoghi, 1556 or 1565), who wrote about actors' techniques and introduced the meneur de jeu in Italy. The actor-manager (meneur de jeu), a professional actor whom scholars have compared to the director (see Strihan), trained the actors. Nothing, however, indicates that the actor-manager was directing the visual representation of the text in the performance space. From the end of the sixteenth century, the dramatic text began to dominate the process of the performance and led to an expansion of acting techniques, such as declamation. Stage designers came from outside the theatre tradition and played a decisive role in the staging of religious celebrations (e.g. Actes des Apotres, 1536). In the sixteenth century, both the proscenium arch and the borders, incorporated in the architecture of the new indoor theatres (theatre a l'italienne), contributed to creating all kinds of illusions on the stage, principally the revival of perspective. This chapter shows ongoing audience demands for more elaborate visual effects on the stage. This led, throughout the classical age, and even more so during the eighteenth century, to the stage design practitioner being granted a major role in the making of the performance (see Ciceri). The second chapter demonstrates that the guidance of the actors and the scenographic conception, which are the artistic components of the role of the director, appear to have developed independently from one another until the nineteenth century.
The third chapter investigates the emergence of the director per se. The causes of this have been considered by a number of scholars, who have mainly identified two: the influence of Naturalism (illustrated by the Meiningen Company, Antoine, and Stanislavski) and the invention of electric lighting. The influence of the Naturalist movement on the emergence of the modern director in the late nineteenth century is often considered a radical factor in the history of theatre practice. Naturalism undoubtedly contributed to changes in staging, costume and lighting design, and to a more rigorous commitment to the harmonisation and visualisation of the overall production of the play. Although the art of theatre was dependent on the dramatic text, scholars (Osborne) demonstrate that the Naturalist directors did not strictly follow the playwright's indications written in the play in the late nineteenth century. On the other hand, the main characteristic of directing in Naturalism at that time depended on a comprehensive understanding of the scenography, which had to respond to the requirements of verisimilitude. Electric lighting contributed to this by allowing for the construction of a visual narrative on stage. However, it was a master technician, rather than an emergent director, who was responsible for key operational decisions over how to use this emerging technology in venues such as the new Bayreuth theatre in 1876. Electric lighting reflects a normal technological evolution and cannot be considered one of the main causes of the emergence of the director. Two further causes of the emergence of the director, not considered in previous studies, are the invention of cinema and the Symbolist movement (Lugne-Poe, Meyerhold). Cinema had an important technological influence on the practitioners of the Naturalist movement. In order to achieve a photographic truth on the stage (tableau, image), Naturalist directors strove to decorate the stage with the detailed elements that would be expected to be found if the situation were happening in reality. Film production had an influence on the work of actors (Walter). The filmmaker took over a primary role in the making of the film, as the source of the script, the filming process and the editing of the film. This role influenced the conception that theatre directors had of their own work, and it is this concept which influenced the development of the theatre director. As for the Symbolist movement, the director's approach was to dematerialise the text of the playwright, trying to expose the spirit, movement, colour and rhythm of the text. Therefore, the Symbolists disengaged themselves from the material aspect of the production, and contributed to giving greater artistic autonomy to the role of the director. Although the emergence of the director finds its roots amongst the Naturalist practitioners (through a rigorous attempt to provide a strict visual interpretation of the text on stage), the Symbolist director heralded the modern perspective on the making of performance. The emergence of the director significantly changed theatre practice and theory. For instance, the rehearsal period became a clear work in progress, a platform for both developing practitioners' techniques and staging the show. This chapter explores and contrasts several practitioners' methods based on the two aspects proposed for the definition of the director (guidance of the actors and materialisation of a visual space). The fourth chapter argues that the role of the director became stronger, more prominent and more hierarchical, through a more political and didactic approach to theatre, as exemplified by the cases of France and Germany at the end of the nineteenth century and through the First World War. This didactic perspective on theatre defines the notion of political theatre. Political theatre is often approached in the literature (Esslin, Willett) through a Marxist interpretation of the great German directors' productions (Reinhardt, Piscator, Brecht). These directors certainly had a great influence on many directors after the Second World War, such as Jean Vilar, Judith Malina, Jean-Louis Barrault, Roger Planchon, Augusto Boal, and others.
This chapter demonstrates, moreover, that the director was confirmed through both ontological and educational approaches to the process of making the performance, and consequently became a central and paternal figure in the organisational and structural processes practised within her/his theatre company. In this way, the stance taken by the director influenced the State authorities in establishing theatrical policy. This is an entirely novel scholarly contribution to the study of the director. The German and French States were not indifferent to the development of political theatre. A network of public theatres was thus developed in the inter-war period, and more significantly after the Second World War. The fifth chapter shows how State theatre policy has its sources in the development of political theatre, and more specifically in the German theatre trade union movement (Volksbühne) and the great directors at the end of the nineteenth century. French political theatre was more influenced by playwrights and actors (Romain Rolland, Louise Michel, Louis Lumet, Emile Berny). French theatre policy was based primarily on theatre directors who decentralised their activities in France during both the inter-war period and the German occupation. After the Second World War, the government established, through directors, a strong network of public theatres. Directors became both the artistic director and the executive director of those institutionalised theatres. The institution was, however, seriously shaken by the social and political upheaval of 1968. It is the link between the State and the institution, in which established directors were entangled, that was challenged by the young emerging directors who rejected institutionalised responsibility in favour of the autonomy of the artist in the 1960s. This process is elucidated in chapter five. The final chapter defines the contemporary role of the director by contrasting the work of a number of significant young theatre practitioners of the 1960s, such as Peter Brook, Ariane Mnouchkine, The Living Theater, Jerzy Grotowski, Augusto Boal and Eugenio Barba, all of whom decided early on to detach their companies from any form of public funding. This chapter also demonstrates how they promoted new forms of performance such as the performance of the self. First, these practitioners explored new performance spaces outside the traditional theatre building. Producing performances in a non-dedicated theatre place (warehouse, street, etc.) was a more frequent practice in the 1960s than before. However, the recent development of cybertheatre questions both the separation of the audience from the practitioners and the place of the director's role since the 1990s. Secondly, the role of the director has been multifaceted since the 1960s. On the one hand, those directors, despite all their different working methods, explored western and non-western acting techniques based on both personal input and collective creation. They challenged theatrical conventions of both the character and the process of making the performance. On the other hand, recent observations and studies distinguish the two main functions of the director, the acting coach and the scenographe, both having found new developments in cinema, television, and various other events. Thirdly, the contemporary director challenges the performance of the text. In this sense, Antonin Artaud was a visionary.
His theatre illustrates the need for the consideration of the totality of the text, as well as that of theatrical production. By contrasting the theories of Artaud, based on a non-dramatic form of theatre, with one of his plays (Le Jet de Sang), this chapter demonstrates how Artaud examined the process of making the performance as a performance. Live art and autobiographical performance, both taken as directing the self, reinforce this suggestion. Finally, since the 1990s, autobiographical performance, or the performance of the self, has been a growing practical and theoretical perspective in both performance studies and psychology-related studies. This relates to the premise that each individual is making a representation (through memory, interpretation, etc.) of her/his own life (performativity). This last section explores the links between the place of the director in contemporary theatre and performers in autobiographical practices. The role of the traditional actor is challenged through non-identification with the character in the play, while performers (such as Chris Burden, Ron Athey, Orlan, Franko B, Stelarc) have, likewise, explored their own story/life as a performance. The thesis demonstrates the validity of the four parameters (performer, performance space, devising process, social event) defining a generic approach to the director. A generic perspective on the role of the director would encompass: a historical dimension relative to the reasons for and stages of the 'emergence' of the director; a socio-political analysis concerning the relationship between the director, her/his institutionalisation, and the political realm; and the relationship between performance theory, practice and the contemporary role of the director. Such a generic approach is a new departure in theatre research and might resonate in the study of other collaborative artistic practices.

Relevance: 90.00%

Abstract:

Completing projects faster than the normal duration is always a challenge to the management of any project, as it often demands many paradigm shifts. Opportunities of globalization and competition from private sectors and multinationals force the management of public sector organizations in the Indian petroleum sector to adopt various aggressive strategies to maintain their profitability. Constructing infrastructure for handling petroleum products is one of them. Moreover, these projects are required to be completed faster than normal schedules so as to remain competitive, to get a faster return on investment, and to give a longer project life. However, using conventional tools and techniques of project management, it is impossible to handle the problem of reducing the project duration from a normal period. This study proposes the use of concurrent engineering in managing projects to radically reduce project duration: the phases of the project are accomplished concurrently/simultaneously instead of in series. The complexities that arise in managing projects are tackled by restructuring the project organization, improving management commitment, strengthening project-planning activities, ensuring project quality, managing project risk objectively and integrating project activities through management information systems. These measures would not only ensure completion of projects on a fast track, but would also improve project effectiveness in terms of quality, cost effectiveness, team building, etc., and in turn the overall productivity of the project organization would improve.
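
The arithmetic behind such duration compression can be illustrated with a toy schedule in which each phase starts when its predecessor is 60% complete; all phase names, durations and the overlap fraction are invented.

```python
# Toy illustration of schedule compression through phase overlap:
# phases run in series versus each phase starting when its
# predecessor is partly complete.
phases = [("feasibility", 3), ("design", 6), ("procurement", 8), ("construction", 12)]
overlap = 0.4   # each phase starts when 60% of its predecessor is done

serial = sum(d for _, d in phases)

finish = 0.0
for i, (_, d) in enumerate(phases):
    start = 0.0 if i == 0 else prev_finish - overlap * prev_d
    finish = start + d
    prev_finish, prev_d = finish, d

print(f"serial: {serial} months, concurrent: {finish:.1f} months")
```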

Relevance: 90.00%

Abstract:

To carry out stability studies on more-electric systems in which there is a preponderance of motor drive equipment, input admittance expressions are required for the individual pieces of equipment. In this paper the techniques of averaging and small-signal linearisation are used to derive a simple input admittance model for a low-voltage, trapezoidal back-EMF, brushless DC motor drive system.
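
In generic notation (not the paper's own symbols), the derivation path runs roughly as follows: average over a switching period, linearise about the DC operating point, and read off the admittance as the transfer function from supply-voltage perturbation to input-current perturbation.

```latex
% Outline of averaging + small-signal linearisation; all symbols are
% generic placeholders (x: averaged states, v: supply voltage, i: input current).
\begin{align*}
\dot{\bar{x}} &= f(\bar{x},\, v)
  && \text{state-space averaged model over a switching period}\\
v &= V + \tilde{v}, \qquad \bar{x} = X + \tilde{x}, \qquad f(X, V) = 0
  && \text{DC operating point}\\
\dot{\tilde{x}} &\approx A\tilde{x} + B\tilde{v}, \quad
  A = \frac{\partial f}{\partial x}\Big|_{(X,V)}, \quad
  B = \frac{\partial f}{\partial v}\Big|_{(X,V)}
  && \text{small-signal linearisation}\\
\tilde{\imath} &= C\tilde{x} + E\tilde{v} \;\Rightarrow\;
  Y_{\text{in}}(s) = \frac{\tilde{\imath}(s)}{\tilde{v}(s)} = C(sI - A)^{-1}B + E
  && \text{input admittance}
\end{align*}
```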

Relevance: 90.00%

Abstract:

Background - MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results - A large dataset comprising MHC-peptide structural complexes was created by re-modelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion - The QSAR techniques of the Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) algorithms were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
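
The subset-selection core of a GFA-style run can be sketched as a simple genetic algorithm over which energy terms enter a linear model of binding. The data below are synthetic, and the full GFA machinery (evolved spline/quadratic basis functions, Friedman's lack-of-fit score) is reduced to plain subset selection with a parsimony penalty.

```python
# GA-style feature selection in the spirit of Genetic Function
# Approximation: evolve which interaction-energy terms enter a
# linear model of binding. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_pep, n_terms = 120, 20
X = rng.normal(size=(n_pep, n_terms))          # per-residue energy terms
y = X[:, [1, 4, 7]] @ [0.9, -1.3, 0.7] + 0.1 * rng.normal(size=n_pep)

def fitness(mask):                             # penalised least-squares fit
    if not mask.any():
        return -np.inf
    beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    rss = np.sum((y - X[:, mask] @ beta) ** 2)
    return -(rss + 2.0 * mask.sum())           # parsimony penalty per term

pop = rng.random((40, n_terms)) < 0.2          # initial random term subsets
for _ in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]    # keep the best half
    cut = rng.integers(1, n_terms, size=20)    # one-point crossover
    children = np.where(np.arange(n_terms) < cut[:, None],
                        parents, parents[rng.permutation(20)])
    children ^= rng.random(children.shape) < 0.01   # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected terms:", np.flatnonzero(best))      # expect [1, 4, 7]
```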

Relevance: 90.00%

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative choice is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, recently graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time. These processing techniques include standard-processing methods comprising a set of algorithms to process the raw data (interference) obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time. Processing throughput of this system is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine tuning of the operating conditions of OCT systems. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
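
A minimal sketch of the batched-FFT core of GPU-based FD-OCT processing is shown below, using CuPy as an illustrative GPU library; array shapes, the window, and the background-subtraction step are simplified placeholders, and real pipelines add wavenumber (k-space) linearisation and dispersion compensation.

```python
# Minimal sketch of GPU-based FD-OCT A-scan generation using CuPy.
import cupy as cp

n_ascans, n_samples = 1000, 2048
spectra = cp.random.random((n_ascans, n_samples)).astype(cp.float32)  # raw spectra

background = spectra.mean(axis=0)                  # DC/background estimate
windowed = (spectra - background) * cp.hanning(n_samples)

# One batched FFT over all A-scans; keep the positive-frequency half
# and display the depth profile on a log scale.
ascans = cp.fft.fft(windowed, axis=1)[:, : n_samples // 2]
bscan_db = 20 * cp.log10(cp.abs(ascans) + 1e-12)

image = cp.asnumpy(bscan_db)                       # copy back to host for display
print(image.shape)                                 # (1000, 1024)
```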