925 results for nonorthogonal contrasts


Relevance:

10.00%

Publisher:

Abstract:

The research work reported in this thesis is concerned with the development and application of an urban-scale sampling methodology for measuring and assessing background levels of heavy metal soil contamination in large and varied urban areas. The policy context of the work is broadly the environmental health problems posed by contaminated land and their implications for urban development planning. Within this wider policy context, the emphasis in the research has been placed on issues related to the determination and application of 'guidelines' for assessing the significance of contaminated land for environmental planning. In concentrating on background levels of land contamination, the research responds to the need for additional techniques which both address the problems of measuring soil contamination at the urban scale and are capable of providing detailed information for use in the assessment of contaminated sites. Therefore, a key component of the work has been the development of a land-use based sampling framework for generating spatially comprehensive data on heavy metals in soil. The utility of the information output of the sampling method is demonstrated in two alternative ways. Firstly, it has been used to map the existing pattern of typical levels of heavy metals in urban soils. Secondly, it can be used to generate both generalised data in the form of 'reference levels', from which the overall significance of background contamination may be assessed, and detailed data, termed 'normal limit levels', for use in the assessment of site-specific investigation data. The fieldwork was conducted in the West Midlands Metropolitan County, where surface soil was sampled and analysed for measures of 'plant-available' and 'total' lead, cadmium, copper and zinc.
The research contrasts with much of the previous work on contaminated land which has generally concentrated on either the detailed investigation of individual sites suspected of being contaminated or the appraisal of land contamination resulting from specific point sources.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this research was to determine the effect of a lutein-based nutritional supplement on measures of visual function in normal and ARMD-affected eyes. Thirty participants were recruited to the ARMD cohort (aged between 55 and 82 years, mean ± SD: 69.2 ± 7.8) and 46 were recruited into the normal cohort (aged between 22 and 73 years, mean ± SD: 50.0 ± 15.9). Outcome measures were distance (DVA) and near (NVA) visual acuity, contrast sensitivity (CS), photostress recovery time measured with the Eger Macular Stressometer (EMS), central visual function assessed with the Macular Mapping test (MMT), and fundus photography. Reliability studies were carried out for the EMS and the MMT. A change of 14 s is required to indicate a clinically significant change in EMS time, and a change of 14 MMT points is required to indicate a clinically significant change in MMT score. Sample sizes were sufficient for the trial to have 80% power to detect a significant clinical effect at the 5% significance level for all outcome measures in the normal cohort, and for CS in the ARMD cohort. The study demonstrated that a nutritional supplement containing 6 mg lutein, 750 mg vitamin A, 250 mg vitamin C, 34 mg vitamin E, 10 mg zinc, and 0.5 mg copper had no effect on the outcome measures over nine or 18 months in normal or ARMD-affected participants. The finding that nine months of antioxidant supplementation, in this case, has no significant effect on CS in ARMD-affected participants adds to the literature, and contrasts with previous RCTs, the AREDS and the LAST. This project has added to the debate about the use of nutritional supplementation prior to the onset of ARMD.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work was to investigate human contrast perception at various contrast levels ranging from detection threshold to suprathreshold levels by using psychophysical techniques. The work consists of two major parts. The first part deals with contrast matching, and the second part deals with contrast discrimination. The contrast matching technique was used to determine when the perceived contrasts of different stimuli were equal. The effects of spatial frequency, stimulus area, image complexity and chromatic contrast on contrast detection thresholds and matches were studied. These factors influenced detection thresholds and perceived contrast at low contrast levels. However, at suprathreshold contrast levels perceived contrast became directly proportional to the physical contrast of the stimulus and almost independent of factors affecting detection thresholds. Contrast discrimination was studied by measuring contrast increment thresholds which indicate the smallest detectable contrast difference. The effects of stimulus area, external spatial image noise and retinal illuminance were studied. The above factors affected contrast detection thresholds and increment thresholds measured at low contrast levels. At high contrast levels, contrast increment thresholds became very similar so that the effect of these factors decreased. Human contrast perception was modelled by regarding the visual system as a simple image processing system. A visual signal is first low-pass filtered by the ocular optics. This is followed by spatial high-pass filtering by the neural visual pathways, and addition of internal neural noise. Detection is mediated by a local matched filter which is a weighted replica of the stimulus whose sampling efficiency decreases with increasing stimulus area and complexity. According to the model, the signals to be compared in a contrast matching task are first transferred through the early image processing stages mentioned above.
Then they are filtered by a restoring transfer function which compensates for the low-level filtering and limited spatial integration at high contrast levels. Perceived contrasts of the stimuli are equal when the restored responses to the stimuli are equal. According to the model, the signals to be discriminated in a contrast discrimination task first go through the early image processing stages, after which signal dependent noise is added to the matched filter responses. The decision made by the human brain is based on the comparison between the responses of the matched filters to the stimuli, and the accuracy of the decision is limited by pre- and post-filter noises. The model for human contrast perception could accurately describe the results of contrast matching and discrimination in various conditions.
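The early stages of the model described above (optical low-pass filtering, neural high-pass filtering, addition of internal noise, and a matched-filter decision variable) can be illustrated in code. This is a minimal, hypothetical sketch, not the thesis's actual implementation: the function names, the Fourier-domain treatment of the transfer functions, and the Gaussian form of the internal noise are all illustrative assumptions.

```python
import numpy as np

def perceived_response(stimulus, optical_mtf, neural_mtf, noise_sd, rng=None):
    # Illustrative sketch (not the thesis's code): the stimulus is low-pass
    # filtered by the ocular optics and high-pass filtered by the neural
    # pathways, both modelled here as Fourier-domain transfer functions,
    # after which Gaussian internal neural noise is added.
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(stimulus)
    filtered = np.real(np.fft.ifft2(spectrum * optical_mtf * neural_mtf))
    return filtered + rng.normal(0.0, noise_sd, filtered.shape)

def matched_filter_response(internal_image, template):
    # Detection via a matched filter: the decision variable is the
    # correlation of the noisy internal image with a weighted replica
    # (template) of the expected stimulus.
    return float(np.sum(internal_image * template))
```

With flat transfer functions and zero noise the pipeline returns the stimulus unchanged and the matched-filter response is simply the stimulus energy; the behaviour the model is built to capture comes from non-flat transfer functions and non-zero noise.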

Relevance:

10.00%

Publisher:

Abstract:

This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6 but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of logarithmic critical areas for the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor where the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and secondly high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local spatially windowed matched filter. The model was extended to include complex stimuli and was found to describe the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2.
The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced to the task. The model described the data accurately.
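The strategy shift under uncertainty can be sketched as two toy decision variables; this is a speculative illustration, not the thesis's model code. The point of the contrast is that a cross-correlator needs a template (it presumes the stimulus parameters are known), while an auto-correlator responds to the stimulus's own energy and so needs no knowledge of the randomised orientation or phase.

```python
import numpy as np

def cross_correlation_detector(internal_image, template):
    # Matched-filter observer: correlates the input with a known template,
    # which presumes the orientation/phase of the stimulus is known.
    return float(np.sum(internal_image * template))

def auto_correlation_detector(internal_image):
    # Energy observer: correlates the input with itself, so no template is
    # needed; a plausible fallback when stimulus parameters are randomised.
    return float(np.sum(internal_image * internal_image))
```

When the input happens to match the template exactly, the two decision variables coincide; they diverge, and the auto-correlator pays a sensitivity cost, as noise and parameter uncertainty grow.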

Relevance:

10.00%

Publisher:

Abstract:

This research sets out to compare the values in British and German political discourse, especially the discourse of social policy, and to analyse their relationship to political culture through an analysis of the values of health care reform. The work proceeds from the hypothesis that the known differences in political culture between the two countries will be reflected in the values of political discourse, and takes a comparison of two major recent legislative debates on health care reform as a case study. The starting point in the first chapter is a brief comparative survey of the post-war political cultures of the two countries, including a brief account of the historical background to their development and an overview of explanatory theoretical models. From this are developed the expected contrasts in values in accordance with the hypothesis. The second chapter explains the basis for selecting the corpus texts and the contextual information which needs to be recorded to make a comparative analysis, including the context and content of the reform proposals which comprise the case study. It examines any contextual factors which may need to be taken into account in the analysis. The third and fourth chapters explain the analytical method, which is centred on the use of definition-based taxonomies of value items and value appeal methods to identify, on a sentence-by-sentence basis, the value items in the corpus texts and the methods used to make appeals to those value items. The third chapter is concerned with the classification and analysis of values, the fourth with the classification and analysis of value appeal methods. The fifth chapter will present and explain the results of the analysis, and the sixth will summarize the conclusions and make suggestions for further research.

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents a study of the sources of new product ideas and the development of new product proposals in an organisation in the UK Computer Industry. The thesis extends the work of von Hippel by showing how the phenomenon which he describes as "the Customer Active Paradigm for new product idea generation" can be observed to operate in this Industry. Furthermore, this thesis contrasts his Customer Active Paradigm with the more usually encountered Manufacturer Active Paradigm. In a second area, the thesis draws a number of conclusions relating to methods of market research, confirming existing observations and demonstrating the suitability of flexible interview strategies in certain circumstances. The thesis goes on to demonstrate the importance of free information flow within the organisation, making it more likely that sought and unsought opportunities can be exploited. It is shown that formal information flows and documents are a necessary but not sufficient means of influencing the formation of the organisation's dominant ideas on new product areas. The findings also link the work of Tushman and Katz on the role of "Gatekeepers" with the work of von Hippel by showing that the role of gatekeeper is particularly appropriate and useful to an organisation changing from Customer Active to Manufacturer Active methods of idea generation. Finally, the thesis provides conclusions relating to the exploitation of specific new product opportunities facing the sponsoring organisation.

Relevance:

10.00%

Publisher:

Abstract:

The theatre director (metteur en scene in French) is a relatively new figure in theatre practice. It was not until the 1820s that the term 'mise en scene' gained currency. The term 'director' was not in general use until the 1880s. The emergence and the role of the director have been considered from a variety of perspectives, either through the history of theatre (Allevy, Jomaron, Sarrazac, Viala, Biet and Triau); the history of directing (Chinoy and Cole, Boll, Veinstein, Roubine); semiotic approaches to directing (Whitmore, Miller, Pavis); the semiotics of performance (De Marinis); generic approaches to the mise en scene (Thomasseau, Banu); post-dramatic approaches to theatre (Lehmann); approaches to performance process and the specifics of rehearsal methodology (Bradby and Williams, Giannachi and Luckhurst, Picon-Vallin, Styan). What the scholarly literature has not done so far is to map the parameters necessarily involved in the directing process, and to incorporate an analysis of the emergence of the theatre director during the modern period and consider its impact on contemporary performance practice. Directing relates primarily to the making of the performance guided by a director, a single figure charged with the authority to make binding artistic decisions. Each director may have her/his own personal approaches to the process of preparation prior to a show. This is exemplified, for example, by the variety of terms now used to describe the role and function of directing, from producer, to facilitator or outside eye. However, it is essential at the outset to make two observations, each of which contributes to a justification for a generic analysis (as opposed to a genetic approach). Firstly, a director does not work alone, and cooperation with others is involved at all stages of the process. Secondly, beyond individual variation, the role of the director remains twofold.
The first is to guide the actors (meneur de jeu, directeur d'acteurs, coach); the second is to make a visual representation in the performance space (set designer, stage designer, costume designer, lighting designer, scenographe). The increasing place of scenography has led contemporary theatre directors such as Wilson, Castellucci and Fabre to produce performances where the performance space becomes a semiotic dimension that displaces the primacy of the text. The play is not, therefore, the sole artistic vehicle for directing. This definition of directing obviously calls for a definition of what the making of the performance might be. The thesis defines the making of the performance as the activity of bringing about a social event, by at least one performer, that provides visual and/or textual meaning in a performance space. This definition enables us to evaluate four consistent parameters throughout theatre history: first, the social aspect associated with the performance event; second, the devising process which may be based on visual and/or textual elements; third, the presence of at least one performer in the show; fourth, the performance space (which is not simply related to the theatre stage). Although the thesis focuses primarily on theatre practice, such a definition blurs the boundaries between theatre and other collaborative artistic disciplines (cinema, opera, music and dance). These parameters illustrate the possibility of undertaking a generic analysis of directing, and resonate with the historical, political and artistic dimensions considered.
Such a generic perspective on the role of the director addresses three significant questions: an historical question: how/why has the director emerged?; a sociopolitical question: how/why was the director a catalyst for the politicisation of theatre, and subsequently contributed to the rise of State-funded theatre policy?; and an artistic one: how/why has the director changed theatre practice and theory in the twentieth century? Directing for the theatre as an artistic activity is a historically situated phenomenon. It would seem only natural from a contemporary perspective to associate the activity of directing with the function of the director. This is relativised, however, by the question of how the performance was produced before the modern period. The thesis demonstrates that the rise of the director is a progressive and historical phenomenon (Dort) rather than a mere invention (Viala, Sarrazac). A chronological analysis of the making of the performance throughout theatre history is the most useful way to open the study. In order to understand the emergence of the director, the research methodology assesses the interconnection of the four parameters above throughout four main periods of theatre history: the beginning of the Renaissance (meneur de jeu), the classical age (actor-manager and stage designer-manager), the modern period (director) and the contemporary period (director-facilitator, performer). This allows us properly to appraise the progressive emergence of the director, as well as to make an analysis of her/his modern and contemporary role. The first chapter argues that the physical separation between the performance space and its audience, which appeared in the early fifteenth century, has been a crucial feature in the scenographic, aesthetic, political and social organisation of the performance.
At the end of the Middle Ages, French farces which raised socio-political issues (see Bakhtin) made a clear division on a single outdoor stage (treteau) between the actors and the spectators, while religious plays (drame liturgique, mystere) were mostly performed on various outdoor and open multi-spaces. As long as the performance was liturgical or religious, and therefore confined within an acceptable framework, it was allowed. At the time, the French ecclesiastical and civil authorities tried, on several occasions, to prohibit staged performances. As a result, practitioners developed non-official indoor spaces, the Theatre de la Trinite (1398) being the first French indoor theatre recognized by scholars. This self-exclusion from the open public space involved breaking the accepted rules by practitioners (e.g. Les Confreres de la Passion), in terms of themes but also through individual input into a secular performance rather than the repetition of commonly known religious canvases. These developments heralded the authorised theatres that began to emerge from the mid-sixteenth century, which in some cases were subsidised in their construction. The construction of authorised indoor theatres associated with the development of printing led to a considerable increase in the production of dramatic texts for the stage. Profoundly affecting the reception of the dramatic text by the audience, the distance between the stage and the auditorium accompanied the changing relationship between practitioners and spectators. This distance gave rise to a major development of the role of the actor and of the stage designer. The second chapter looks at the significance of both the actor and set designer in the devising process of the performance from the sixteenth century to the end of the nineteenth century.
The actor underwent an important shift in function in this period from the delivery of an unwritten text that is learned in the medieval oral tradition to a structured improvisation produced by the commedia dell'arte. In this new form of theatre, a chef de troupe or an experienced actor shaped the story, but the text existed only through the improvisation of the actors. The preparation of those performances was, moreover, centred on acting technique and the individual skills of the actor. From this point, there is clear evidence that acting began to be the subject of a number of studies in the mid-sixteenth century, and more significantly in the seventeenth century, in Italy and France. This is revealed through the implementation of a system of notes written by the playwright to the actors (stage directions) in a range of plays (Gerard de Vivier, Comedie de la Fidelite Nuptiale, 1577). The thesis also focuses on Leoni de' Sommi (Quattro dialoghi, 1556 or 1565), who wrote about actors' techniques and introduced the meneur de jeu in Italy. The actor-manager (meneur de jeu), a professional actor, whom scholars have compared to the director (see Strihan), trained the actors. Nothing, however, indicates that the actor-manager was directing the visual representation of the text in the performance space. From the end of the sixteenth century, the dramatic text began to dominate the process of the performance and led to an expansion of acting techniques, such as declamation. Stage designers came from outside the theatre tradition and played a decisive role in the staging of religious celebrations (e.g. Actes des Apotres, 1536). In the sixteenth century, both the proscenium arch and the borders, incorporated in the architecture of the new indoor theatres (theatre a l'italienne), contributed to creating all kinds of illusions on the stage, principally the revival of perspective. This chapter shows ongoing audience demands for more elaborate visual effects on the stage.
This led, throughout the classical age, and even more so during the eighteenth century, to granting the stage design practitioner a major role in the making of the performance (see Ciceri). The second chapter demonstrates that the guidance of the actors and the scenographic conception, which are the artistic components of the role of the director, appear to have developed independently from one another until the nineteenth century. The third chapter investigates the emergence of the director per se. The causes for this have been considered by a number of scholars, who have mainly identified two: the influence of Naturalism (illustrated by the Meiningen Company, Antoine, and Stanislavski) and the invention of electric lighting. The influence of the Naturalist movement on the emergence of the modern director in the late nineteenth century is often considered as a radical factor in the history of theatre practice. Naturalism undoubtedly contributed to changes in staging, costume and lighting design, and to a more rigorous commitment to the harmonisation and visualisation of the overall production of the play. Although the art of theatre was dependent on the dramatic text, scholars (Osborne) demonstrate that the Naturalist directors did not strictly follow the playwright's indications written in the play in the late nineteenth century. On the other hand, the main characteristic of directing in Naturalism at that time depended on a comprehensive understanding of the scenography, which had to respond to the requirements of verisimilitude. Electric lighting contributed to this by allowing for the construction of a visual narrative on stage. However, it was a master technician, rather than an emergent director, who was responsible for key operational decisions over how to use this emerging technology in venues such as the new Bayreuth theatre in 1876.
Electric lighting reflects a normal technological evolution and cannot be considered as one of the main causes of the emergence of the director. Two further causes of the emergence of the director, not considered in previous studies, are the invention of cinema and the Symbolist movement (Lugne-Poe, Meyerhold). Cinema had an important technological influence on the practitioners of the Naturalist movement. In order to achieve a photographic truth on the stage (tableau, image), Naturalist directors strove to decorate the stage with the detailed elements that would be expected to be found if the situation were happening in reality. Film production had an influence on the work of actors (Walter). The filmmaker took over a primary role in the making of the film, as the source of the script, the filming process and the editing of the film. This role influenced the conception that theatre directors had of their own work. It is this concept of the director which influenced the development of the theatre director. As for the Symbolist movement, the director's approach was to dematerialise the text of the playwright, trying to expose the spirit, movement, colour and rhythm of the text. Therefore, the Symbolists disengaged themselves from the material aspect of the production, and contributed to giving greater artistic autonomy to the role of the director. Although the emergence of the director finds its roots amongst the Naturalist practitioners (through a rigorous attempt to provide a strict visual interpretation of the text on stage), the Symbolist director heralded the modern perspective of the making of performance. The emergence of the director significantly changed theatre practice and theory. For instance, the rehearsal period became a clear work in progress, a platform for both developing practitioners' techniques and staging the show.
This chapter explores and contrasts several practitioners' methods based on the two aspects proposed for the definition of the director (guidance of the actors and materialisation of a visual space). The fourth chapter argues that the role of the director became stronger, more prominent, and more hierarchical, through a more political and didactic approach to theatre as exemplified by the cases of France and Germany at the end of the nineteenth century and through the First World War. This didactic perspective on theatre defines the notion of political theatre. Political theatre is often approached by the literature (Esslin, Willett) through a Marxist interpretation of the great German directors' productions (Reinhardt, Piscator, Brecht). These directors certainly had a great influence on many directors after the Second World War, such as Jean Vilar, Judith Malina, Jean-Louis Barrault, Roger Planchon, Augusto Boal, and others. This chapter demonstrates, moreover, that the director was confirmed through both ontological and educational approaches to the process of making the performance, and consequently became a central and paternal figure in the organisational and structural processes practiced within her/his theatre company. In this way, the stance taken by the director influenced the State authorities in establishing theatrical policy. This is an entirely novel scholarly contribution to the study of the director. The German and French States were not indifferent to the development of political theatre. A network of public theatres was thus developed in the inter-war period, and more significantly after the Second World War. The fifth chapter shows how State theatre policies have their sources in the development of political theatre, and more specifically in the German theatre trade union movement (Volksbühne) and the great directors at the end of the nineteenth century.
French political theatre was more influenced by playwrights and actors (Romain Rolland, Louise Michel, Louis Lumet, Emile Berny). French theatre policy was based primarily on theatre directors who decentralised their activities in France during both the inter-war period and the German occupation. After the Second World War, the government established, through directors, a strong network of public theatres. Directors became both the artistic director and the executive director of those institutionalised theatres. The institution was, however, seriously shaken by the social and political upheaval of 1968. It is the link between the State and the institution in which established directors were entangled that was challenged by the young emerging directors who rejected institutionalised responsibility in favour of the autonomy of the artist in the 1960s. This process is elucidated in chapter five. The final chapter defines the contemporary role of the director in contrasting the work of a number of significant young theatre practitioners in the 1960s such as Peter Brook, Ariane Mnouchkine, The Living Theater, Jerzy Grotowski, Augusto Boal, Eugenio Barba, all of whom decided early on to detach their companies from any form of public funding. This chapter also demonstrates how they promoted new forms of performance such as the performance of the self. First, these practitioners explored new performance spaces outside the traditional theatre building. Producing performances in a non-dedicated theatre place (warehouse, street, etc.) was a more frequent practice in the 1960s than before. However, the recent development of cybertheatre questions both the separation of the audience and the practitioners and the place of the director's role since the 1990s. Secondly, the role of the director has been multifaceted since the 1960s.
On the one hand, those directors, despite all their different working methods, explored western and non-western acting techniques based on both personal input and collective creation. They challenged theatrical conventions of both the character and the process of making the performance. On the other hand, recent observations and studies distinguish the two main functions of the director, the acting coach and the scenographe, both having found new developments in cinema, television, and in various other events. Thirdly, the contemporary director challenges the performance of the text. In this sense, Antonin Artaud was a visionary. His theatre illustrates the need for the consideration of the totality of the text, as well as that of theatrical production. By contrasting the theories of Artaud, based on a non-dramatic form of theatre, with one of his plays (Le Jet de Sang), this chapter demonstrates how Artaud examined the process of making the performance as a performance. Live art and autobiographical performance, both taken as directing the self, reinforce this suggestion. Finally, since the 1990s, autobiographical performance or the performance of the self is a growing practical and theoretical perspective in both performance studies and psychology-related studies. This relates to the premise that each individual is making a representation (through memory, interpretation, etc.) of her/his own life (performativity). This last section explores the links between the place of the director in contemporary theatre and performers in autobiographical practices. The role of the traditional actor is challenged through non-identification of the character in the play, while performers (such as Chris Burden, Ron Athey, Orlan, Franko B, Stelarc) have, likewise, explored their own story/life as a performance. The thesis demonstrates the validity of the four parameters (performer, performance space, devising process, social event) defining a generic approach to the director.
A generic perspective on the role of the director would encompass: a historical dimension relative to the reasons for and stages of the 'emergence' of the director; a socio-political analysis concerning the relationship between the director, her/his institutionalisation, and the political realm; and the relationship between performance theory, practice and the contemporary role of the director. Such a generic approach is a new departure in theatre research and might resonate in the study of other collaborative artistic practices.

Relevance:

10.00%

Publisher:

Abstract:

This paper contrasts the effects of trade, inward FDI and technological development upon the demand for skilled and unskilled workers in the UK. By focussing on industry-level panel data on smaller firms, the paper also contrasts these effects with those generated by large-scale domestic investment. The analysis is placed within the broader context of shifts in British industrial policy, which has seen significant shifts from sectoral to horizontal measures and towards stressing the importance of SMEs, clusters and new technology, all delivered at the regional scale. This, however, is contrasted with continued elements of British and EU regional policy which have emphasised the attraction of inward investment in order to alleviate regional unemployment. The results suggest that such policies are not naturally compatible; that while both trade and FDI benefit skilled workers, they have adverse effects on the demand for unskilled labour in the UK. At the very least this suggests the need for a range of policies to tackle various targets (including in this case unemployment and social inclusion) and the need to integrate these into a coherent industrial strategy at various levels of governance, whether regional and/or national. This has important implications for the form of any ‘new’ industrial policy.

Resumo:

Purpose To assess the validity and repeatability of the Aston Halometer. Setting University clinic, United Kingdom. Design Prospective, repeated-measures experimental study. Methods The halometer comprises a bright light-emitting-diode (LED) glare source in the center of an iPad 4. Letters subtending 0.21° (∼0.3 logMAR) were moved centrifugally from the LED in 0.05° steps in 8 orientations separated by 45° for each of 4 contrast levels (1000, 500, 100, and 25 Weber contrast units [Cw]) in random order. Bangerter occlusion foils were inserted in front of the right eye to simulate monocular glare conditions in 20 subjects (mean age 27.7 ± 3.1 years). Subjects were positioned 2 meters from the screen in a dark room, with the iPad controlled by the researcher from an iPhone via Bluetooth. The C-Quant straylight meter was also used with each of the foils to measure the level of straylight over the retina. Halometry and straylight repeatability were assessed at a second visit. Results Halo size increased with the different occlusion foils and target contrasts (F = 29.564, P < .001) as expected, in a pattern similar to the straylight measures (F = 80.655, P < .001). Lower-contrast letters showed better sensitivity but larger glare-obscured areas, resulting in ceiling effects caused by the screen's field of view; 500 Cw was the best compromise. Intraobserver and interobserver repeatability of the Aston Halometer was good (500 Cw: 0.84 to 0.93 and 0.53 to 0.73) and similar to that of the straylight meter. Conclusion The halometer provides a sensitive, repeatable way of quantifying a patient-recognized form of disability glare in multiple orientations, adding objectivity to subjectively reported discomfort glare.
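The halo extents measured in the 8 orientations can be collapsed into a single glare-obscured area. A minimal sketch of one way to do this: treat the threshold distances as vertices of a polygon and apply the shoelace formula. (The polygon-area summary and the function name are illustrative assumptions, not necessarily the instrument's actual metric.)

```python
import math

def halo_area(radii_deg):
    """Approximate the glare-obscured area (deg^2) from threshold
    distances measured at evenly spaced orientations: place each
    threshold as a polygon vertex and apply the shoelace formula."""
    n = len(radii_deg)
    pts = [(r * math.cos(2 * math.pi * k / n),
            r * math.sin(2 * math.pi * k / n))
           for k, r in enumerate(radii_deg)]
    # Shoelace formula over consecutive vertex pairs (wrapping around).
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2.0

# Equal thresholds of 1.5 deg in all 8 orientations form a regular octagon.
print(round(halo_area([1.5] * 8), 3))  # → 6.364
```

With only 8 samples the polygon underestimates a circular halo by about 10% (octagon area 2√2·r² vs. circle area π·r²), so a real instrument would likely interpolate or calibrate; the sketch only shows how multi-orientation thresholds reduce to one number.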

Resumo:

This paper estimates the implicit model used by the European Commission to identify mergers with coordinated effects, focusing in particular on the roles of size asymmetries and firm numbers. This subset of cases offers an opportunity to shed empirical light on the conditions under which a Competition Authority believes tacit collusion is most likely to arise. We find that, for the Commission, tacit collusion is a rare phenomenon, largely confined to markets of two, more or less symmetric, players. This is consistent with the recent experimental literature, but contrasts with the facts on ‘hard-core’ collusion, in which firm numbers and asymmetries are often much larger.

Resumo:

This paper contrasts the determinants of entrepreneurial entry and high-growth aspiration entrepreneurship. Using the Global Entrepreneurship Monitor (GEM) surveys for 42 countries over the period 1998-2005, we analyse how the institutional environment and entrepreneurial characteristics affect individual decisions to become entrepreneurs and aspirations to set up high-growth ventures. We find that institutions exert different effects on entrepreneurial entry and on the individual choice to launch high-growth aspiration projects. In particular, a strong property rights system is important for high-growth aspiration entrepreneurship but has less pronounced effects on entrepreneurial entry. The availability of finance and the fiscal burden matter for both.

Resumo:

The use of co-branded products as a form of brand management has gained increasing attention from managers and scientists, as evidenced by the practitioner-oriented articles and empirical studies published since the mid-1990s. However, there is no description that contrasts co-branding with other branding strategies, nor is there a structured overview of the main findings of co-branding studies. We classify different branding strategies, discuss the branding literature, and develop a theoretical model for co-branding based on research findings. In addition to managerial implications, we provide a critical assessment of research, identify research questions, and offer a research agenda for co-branding.

Resumo:

This study empirically compares and contrasts the cultural value orientations of employees from Poland and Turkey by testing the compatibility of their values in three stages across seven cultural dimensions. The first phase of the study assesses inter-country cultural value differences; the second phase investigates intra-country cultural dynamics between selected demographic groups; and the third phase examines inter-country cultural differences among the selected demographic groups of employees. The research was conducted using Maznevski, DiStefano, and Nason's (1995) version of the cultural perspectives questionnaire with a sample of 744 (548 Polish and 196 Turkish) respondents. The results show significant cultural differences between Poland and Turkey, the presence of cultural dynamics among certain demographic groups within each country, and a mixture of convergence and divergence in the value systems of certain demographic groups both within and between the two nations. The research findings convey important messages to international human resource strategists, helping them to employ effective and rational employment policies and business negotiation approaches in order to operate effectively in these countries. The study also highlights that the diversity of cultural values requires not only viewing those values through cultural dimensions at the macro level with a cross-country reference, but also monitoring their dynamics at the micro level with reference to controlled demographic groups. © 2013 Taylor & Francis.

Resumo:

With careful calculation of signal-forwarding weights, relay nodes can work collaboratively to enhance downlink transmission performance by forming a virtual multiple-input multiple-output beamforming system. Although collaborative relay beamforming schemes for a single user have been widely investigated for cellular systems in the previous literature, there are few studies on relay beamforming for multiple users. In this paper, we study collaborative downlink signal transmission with multiple amplify-and-forward relay nodes for multiple users in cellular systems. We propose two new algorithms to determine the beamforming weights, both with the objective of minimizing the power consumption of the relay nodes. In the first algorithm, we aim to guarantee the received signal-to-noise ratio at the users for relay beamforming with orthogonal channels. We prove that the solution obtained by a semidefinite relaxation technique is optimal. In the second algorithm, we propose an iterative algorithm that jointly selects the base station antennas and optimizes the relay beamforming weights to reach the target signal-to-interference-and-noise ratio at the users with nonorthogonal channels. Numerical results validate our theoretical analysis and demonstrate that the proposed optimal schemes can effectively reduce the relay power consumption compared with several other beamforming approaches. © 2012 John Wiley & Sons, Ltd.
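To give the flavour of this kind of power-minimizing weight design, a toy sketch of the single-user, orthogonal-channel case (not the paper's actual multi-user algorithm): assuming noiseless relays and taking the total weight power as a proxy for relay power, the minimum-power weights that meet a target SNR are a scaled matched filter on the compound source-relay-destination channel. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def matched_filter_weights(f, g, noise_var, target_snr):
    """Toy relay-weight design for a single user: minimize sum(|w|^2)
    subject to |sum(w_i * f_i * g_i)|^2 / noise_var >= target_snr,
    under the simplifying assumption of noiseless relays.
    The optimum aligns w with the conjugate compound channel h = f * g
    and scales it so the SNR constraint holds with equality."""
    h = f * g                                   # compound source->relay->user channel
    scale = np.sqrt(target_snr * noise_var) / np.sum(np.abs(h) ** 2)
    return scale * np.conj(h)

rng = np.random.default_rng(0)
f = rng.normal(size=4) + 1j * rng.normal(size=4)   # source->relay channels
g = rng.normal(size=4) + 1j * rng.normal(size=4)   # relay->user channels
w = matched_filter_weights(f, g, noise_var=1.0, target_snr=10.0)

# Received SNR: constraint is met exactly, with no excess relay power.
snr = np.abs(np.sum(w * f * g)) ** 2 / 1.0
print(round(snr, 6))  # → 10.0
```

With relay noise, multiple users, or nonorthogonal channels this closed form no longer applies, which is why the paper resorts to semidefinite relaxation and an iterative joint antenna-selection/weight-optimization scheme.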

Resumo:

A sequence of constant-frequency tones can promote streaming in a subsequent sequence of alternating-frequency tones, but why this effect occurs is not fully understood and its time course has not been investigated. Experiment 1 used a 2.0-s-long constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence substantially reduced reported test-sequence segregation. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets build-up, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only three tones—this contrasts with the more gradual build-up typically observed for alternating-frequency sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ~10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an ongoing, preestablished stream, and that a deviant tone may reduce segregation by disrupting this capture. These findings offer new insight into the dynamics of stream segregation, and have implications for the neural basis of streaming and the role of attention in stream formation. (PsycINFO Database Record (c) 2013 APA, all rights reserved)