25 results for PHOTOGRAPHIC INTERPRETATION
in Aston University Research Archive
Abstract:
Luminance changes within a scene are ambiguous; they can indicate reflectance changes, shadows, or shading due to surface undulations. How does vision distinguish between these possibilities? When a surface painted with an albedo texture is shaded, the change in local mean luminance (LM) is accompanied by a similar modulation of the local luminance amplitude (AM) of the texture. This relationship does not necessarily hold for reflectance changes or for shading of a relief texture. Here we concentrate on the role of AM in shape-from-shading. Observers were presented with a noise texture onto which sinusoidal LM and AM signals were superimposed, and were asked to indicate which of two marked locations was closer to them. Shape-from-shading was enhanced when LM and AM co-varied (in-phase), and was disrupted when they were out-of-phase. The perceptual differences between cue types (in-phase vs out-of-phase) were enhanced when the two cues were present at different orientations within a single image. Similar results were found with a haptic matching task. We conclude that vision can use AM to disambiguate luminance changes. LM and AM have a positive relationship for rendered, undulating, albedo textures, and we assess the degree to which this relationship holds in natural images. [Supported by EPSRC grants to AJS and MAG].
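For illustration, the following minimal Python sketch (with assumed parameter values, not the study's actual stimulus specification) shows what it means to superimpose sinusoidal LM and AM signals on a noise carrier, either in phase or out of phase:

import numpy as np

# Illustrative sketch: a binary noise carrier whose local mean luminance (LM) and
# local luminance amplitude (AM) are modulated by sinusoids of the same frequency.
# All parameter values below are assumptions chosen for clarity.
rng = np.random.default_rng(0)
size = 256                      # image width/height in pixels
cycles = 4                      # modulation cycles across the image
mean_lum = 0.5                  # background mean luminance (0..1 scale)
m_lm, m_am = 0.2, 0.5           # LM and AM modulation depths (illustrative)

x = np.arange(size) / size
lm = np.sin(2 * np.pi * cycles * x)                  # local mean luminance signal
noise = rng.choice([-1.0, 1.0], size=(size, size))   # binary noise texture (carrier)

def lm_am_image(phase):
    """phase=0 gives in-phase LM and AM; phase=np.pi gives out-of-phase."""
    am = np.sin(2 * np.pi * cycles * x + phase)      # local amplitude envelope
    return mean_lum * (1 + m_lm * lm) + 0.25 * noise * (1 + m_am * am)

in_phase = lm_am_image(0.0)        # LM and AM co-vary, consistent with shading
out_of_phase = lm_am_image(np.pi)  # LM and AM oppose, inconsistent with shading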
Abstract:
The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape-from-shading). But the recovery of shape would be invalid if the luminance changes actually arose from changes in reflectance. So how does vision distinguish variation in illumination from variation in reflectance to avoid illusory depth? When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in local luminance amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used depth mapping and paired comparison methods to show that modulations of local luminance amplitude play a role in the interpretation of shape-from-shading. The shape-from-shading percept was enhanced when LM and AM co-varied (in-phase) and was disrupted when they were out of phase or (to a lesser degree) when AM was absent. The perceptual differences between cue types (in-phase vs out-of-phase) were enhanced when the two cues were present at different orientations within a single image. Our results suggest that when LM and AM co-vary (in-phase) this indicates that the source of variation is illumination (caused by undulations of the surface), rather than surface reflectance. Hence, the congruence of LM and AM is a cue that supports a shape-from-shading interpretation. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
This thesis examines enterprise reform in China in general, and the Contract Management Responsibility System (the CMRS) in particular. The latter is a new institutional arrangement to deal with the relation between the government and the state-owned enterprise, which has always been at the centre of the enterprise reform. The focus of the research is on the process of institutionalization, in order to study the problems of the emergence of a free enterprise system in China. The research is conducted through four in-depth case studies that reveal how the CMRS is running and what interaction takes place between the government and the state-owned enterprise under the system. Drawing on the empirical work, the thesis analyzes the features of the CMRS and the characteristics of its implementation process with respect to the structural-institutional paradigm and the property rights approach. The research shows that establishing a market-type relation between the government and the enterprise is a complicated and dynamic process. It involves an understanding of the two different economic mechanisms, market and planning, and of the interactions between the two parties. It concludes that the CMRS is an unstable system, destined either to revert to the previous system or to move towards a market system, because its dynamic and control dimensions are dysfunctional.
Abstract:
Much of the geometrical data relating to engineering components and assemblies is stored in the form of orthographic views, either on paper or in computer files. For various engineering applications, however, it is necessary to describe objects in formal geometric modelling terms. The work reported in this thesis is concerned with the development and implementation of concepts and algorithms for the automatic interpretation of orthographic views as solid models. The various rules and conventions associated with engineering drawings are reviewed and several geometric modelling representations are briefly examined. A review of existing techniques for the automatic, and semi-automatic, interpretation of engineering drawings as solid models is given. A new theoretical approach is then presented and discussed. The author shows how the implementation of such an approach for uniform thickness objects may be extended to more general objects by introducing the concept of 'approximation models'. Means by which the quality of the transformations is monitored are also described. Detailed descriptions of the interpretation algorithms and the software package that were developed for this project are given. The process is then illustrated by a number of practical examples. Finally, the thesis concludes that, using the techniques developed, a substantial percentage of drawings of engineering components could be converted into geometric models with a specific degree of accuracy. This degree is indicative of the suitability of the model for a particular application. Further work on important details is required before a commercially acceptable package is produced.
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. These data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Other experimentation is conducted into the identification of novel input samples, the classification of samples from different biotopes, and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation, generated without reference to the class labels, of the Upper Trent data. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory, and one from classical statistical methods. Good indicators of quality class based on these analyses are found to be in close agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
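As a hedged illustration of what an information-theoretic criterion for indicator taxa can look like (not the thesis's actual procedure), the mutual information between a taxon's presence/absence and the biological quality class of a sample can be estimated as follows:

import math
from collections import Counter

# Illustrative sketch: rank a candidate indicator taxon by the mutual information
# (in bits) between its presence/absence and the expert-assigned quality class.
# The toy data below are invented for the example.

def mutual_information(taxon_present, quality_class):
    """MI between a binary presence vector and a class-label vector, in bits."""
    n = len(quality_class)
    joint = Counter(zip(taxon_present, quality_class))
    p_x = Counter(taxon_present)
    p_y = Counter(quality_class)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        mi += p_xy * math.log2(p_xy / ((p_x[x] / n) * (p_y[y] / n)))
    return mi

presence = [1, 1, 1, 0, 0, 0, 1, 0]                  # hypothetical taxon occurrences
classes = ['A', 'A', 'B', 'C', 'C', 'C', 'B', 'C']   # hypothetical quality classes
print(mutual_information(presence, classes))          # 1.0 bit for this toy data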
Une étude générique du metteur en scène au théâtre : son émergence et son rôle moderne et contemporain
Abstract:
The theatre director (metteur en scène in French) is a relatively new figure in theatre practice. It was not until the 1820s that the term 'mise en scène' gained currency. The term 'director' was not in general use until the 1880s. The emergence and the role of the director have been considered from a variety of perspectives, whether through the history of theatre (Allevy, Jomaron, Sarrazac, Viala, Biet and Triau); the history of directing (Chinoy and Cole, Boll, Veinstein, Roubine); semiotic approaches to directing (Whitmore, Miller, Pavis); the semiotics of performance (De Marinis); generic approaches to the mise en scène (Thomasseau, Banu); post-dramatic approaches to theatre (Lehmann); or approaches to performance process and the specifics of rehearsal methodology (Bradby and Williams, Giannachi and Luckhurst, Picon-Vallin, Styan). What the scholarly literature has not done so far is to map the parameters necessarily involved in the directing process, to incorporate an analysis of the emergence of the theatre director during the modern period, and to consider its impact on contemporary performance practice. Directing relates primarily to the making of the performance guided by a director, a single figure charged with the authority to make binding artistic decisions. Each director may have her/his own personal approaches to the process of preparation prior to a show. This is exemplified by the variety of terms now used to describe the role and function of directing, from producer to facilitator or outside eye. However, it is essential at the outset to make two observations, each of which contributes to a justification for a generic analysis (as opposed to a genetic approach). Firstly, a director does not work alone, and cooperation with others is involved at all stages of the process. Secondly, beyond individual variation, the role of the director remains twofold. The first is to guide the actors (meneur de jeu, directeur d'acteurs, coach); the second is to make a visual representation in the performance space (set designer, stage designer, costume designer, lighting designer, scénographe). The increasing place of scenography has led contemporary theatre directors such as Wilson, Castellucci and Fabre to produce performances in which the performance space becomes a semiotic dimension that displaces the primacy of the text. The play is not, therefore, the sole artistic vehicle for directing. This definition of directing obviously calls for a definition of what the making of the performance might be. The thesis defines the making of the performance as the activity of bringing about a social event, by at least one performer, that provides visual and/or textual meaning in a performance space. This definition enables us to evaluate four consistent parameters throughout theatre history: first, the social aspect associated with the performance event; second, the devising process, which may be based on visual and/or textual elements; third, the presence of at least one performer in the show; fourth, the performance space (which is not simply related to the theatre stage). Although the thesis focuses primarily on theatre practice, such a definition blurs the boundaries between theatre and other collaborative artistic disciplines (cinema, opera, music and dance). These parameters illustrate the possibility of undertaking a generic analysis of directing, and resonate with the historical, political and artistic dimensions considered.
Such a generic perspective on the role of the director addresses three significant questions: an historical question: how/why has the director emerged?; a socio-political question: how/why did the director become a catalyst for the politicisation of theatre, and subsequently contribute to the rise of State-funded theatre policy?; and an artistic one: how/why has the director changed theatre practice and theory in the twentieth century? Directing for the theatre as an artistic activity is a historically situated phenomenon. It would seem only natural from a contemporary perspective to associate the activity of directing with the function of the director. This is relativised, however, by the question of how the performance was produced before the modern period. The thesis demonstrates that the rise of the director is a progressive and historical phenomenon (Dort) rather than a mere invention (Viala, Sarrazac). A chronological analysis of the making of the performance throughout theatre history is the most useful way to open the study. In order to understand the emergence of the director, the research methodology assesses the interconnection of the four parameters above throughout four main periods of theatre history: the beginning of the Renaissance (meneur de jeu), the classical age (actor-manager and stage designer-manager), the modern period (director) and the contemporary period (director-facilitator, performer). This allows us properly to appraise the progressive emergence of the director, as well as to make an analysis of her/his modern and contemporary role. The first chapter argues that the physical separation between the performance space and its audience, which appeared in the early fifteenth century, has been a crucial feature in the scenographic, aesthetic, political and social organisation of the performance. At the end of the Middle Ages, French farces which raised socio-political issues (see Bakhtin) made a clear division on a single outdoor stage (tréteau) between the actors and the spectators, while religious plays (drame liturgique, mystère) were mostly performed in various outdoor, open multi-spaces. As long as the performance was liturgical or religious, and therefore confined within an acceptable framework, it was allowed. At the time, the French ecclesiastical and civil authorities tried, on several occasions, to prohibit staged performances. As a result, practitioners developed non-official indoor spaces, the Théâtre de la Trinité (1398) being the first French indoor theatre recognized by scholars. This self-exclusion from the open public space involved practitioners (e.g. Les Confrères de la Passion) breaking the accepted rules, in terms of themes but also through individual input into a secular performance rather than the repetition of commonly known religious canvases. These developments heralded the authorised theatres that began to emerge from the mid-sixteenth century, which in some cases were subsidised in their construction. The construction of authorised indoor theatres, associated with the development of printing, led to a considerable increase in the production of dramatic texts for the stage. Profoundly affecting the reception of the dramatic text by the audience, the distance between the stage and the auditorium accompanied the changing relationship between practitioners and spectators. This distance gave rise to a major development of the role of the actor and of the stage designer.
The second chapter looks at the significance of both the actor and the set designer in the devising process of the performance from the sixteenth century to the end of the nineteenth century. The actor underwent an important shift in function in this period, from the delivery of an unwritten text learned in the medieval oral tradition to a structured improvisation produced by the commedia dell'arte. In this new form of theatre, a chef de troupe or an experienced actor shaped the story, but the text existed only through the improvisation of the actors. The preparation of those performances was, moreover, centred on acting technique and the individual skills of the actor. From this point, there is clear evidence that acting began to be the subject of a number of studies in the mid-sixteenth century, and more significantly in the seventeenth century, in Italy and France. This is revealed through the implementation of a system of notes written by the playwright to the actors (stage directions) in a range of plays (Gérard de Vivier, Comédie de la Fidélité Nuptiale, 1577). The thesis also focuses on Leone de' Sommi (Quattro dialoghi, 1556 or 1565), who wrote about actors' techniques and introduced the meneur de jeu in Italy. The actor-manager (meneur de jeu), a professional actor whom scholars have compared to the director (see Strihan), trained the actors. Nothing, however, indicates that the actor-manager was directing the visual representation of the text in the performance space. From the end of the sixteenth century, the dramatic text began to dominate the process of the performance and led to an expansion of acting techniques, such as declamation. Stage designers came from outside the theatre tradition and played a decisive role in the staging of religious celebrations (e.g. Actes des Apôtres, 1536). In the sixteenth century, both the proscenium arch and the borders, incorporated in the architecture of the new indoor theatres (théâtre à l'italienne), contributed to creating all kinds of illusions on the stage, principally the revival of perspective. This chapter shows ongoing audience demands for more elaborate visual effects on the stage. This led, throughout the classical age, and even more so during the eighteenth century, to granting the stage design practitioner a major role in the making of the performance (see Ciceri). The second chapter demonstrates that the guidance of the actors and the scenographic conception, which are the artistic components of the role of the director, appear to have developed independently from one another until the nineteenth century. The third chapter investigates the emergence of the director per se. The causes of this have been considered by a number of scholars, who have mainly identified two: the influence of Naturalism (illustrated by the Meiningen Company, Antoine, and Stanislavski) and the invention of electric lighting. The influence of the Naturalist movement on the emergence of the modern director in the late nineteenth century is often considered a radical factor in the history of theatre practice. Naturalism undoubtedly contributed to changes in staging, costume and lighting design, and to a more rigorous commitment to the harmonisation and visualisation of the overall production of the play. Although the art of theatre was dependent on the dramatic text, scholars (Osborne) demonstrate that the Naturalist directors did not strictly follow the playwright's indications written in the play in the late nineteenth century.
On the other hand, the main characteristic of directing in Naturalism at that time depended on a comprehensive understanding of the scenography, which had to respond to the requirements of verisimilitude. Electric lighting contributed to this by allowing for the construction of a visual narrative on stage. However, it was a master technician, rather than an emergent director, who was responsible for key operational decisions over how to use this emerging technology in venues such as the new Bayreuth theatre in 1876. Electric lighting reflects a normal technological evolution and cannot be considered one of the main causes of the emergence of the director. Two further causes of the emergence of the director, not considered in previous studies, are the invention of cinema and the Symbolist movement (Lugné-Poe, Meyerhold). Cinema had an important technological influence on the practitioners of the Naturalist movement. In order to achieve a photographic truth on the stage (tableau, image), Naturalist directors strove to decorate the stage with the detailed elements that would be expected to be found if the situation were happening in reality. Film production had an influence on the work of actors (Walter). The filmmaker took over a primary role in the making of the film, as the source of the script, the filming process and the editing of the film. This role influenced the conception that theatre directors had of their own work, and it is this concept of the director which influenced the development of the theatre director. As for the Symbolist movement, the director's approach was to dematerialise the text of the playwright, trying to expose the spirit, movement, colour and rhythm of the text. Therefore, the Symbolists disengaged themselves from the material aspect of the production, and contributed to giving greater artistic autonomy to the role of the director. Although the emergence of the director finds its roots amongst the Naturalist practitioners (through a rigorous attempt to provide a strict visual interpretation of the text on stage), the Symbolist director heralded the modern perspective on the making of performance. The emergence of the director significantly changed theatre practice and theory. For instance, the rehearsal period became a clear work in progress, a platform both for developing practitioners' techniques and for staging the show. This chapter explores and contrasts several practitioners' methods based on the two aspects proposed for the definition of the director (guidance of the actors and materialisation of a visual space). The fourth chapter argues that the role of the director became stronger, more prominent, and more hierarchical, through a more political and didactic approach to theatre, as exemplified by the cases of France and Germany at the end of the nineteenth century and through the First World War. This didactic perspective on theatre defines the notion of political theatre. Political theatre is often approached in the literature (Esslin, Willett) through a Marxist interpretation of the great German directors' productions (Reinhardt, Piscator, Brecht). These directors certainly had a great influence on many directors after the Second World War, such as Jean Vilar, Judith Malina, Jean-Louis Barrault, Roger Planchon, Augusto Boal, and others.
This chapter demonstrates, moreover, that the director was confirmed through both ontological and educational approaches to the process of making the performance, and consequently became a central and paternal figure in the organisational and structural processes practised within her/his theatre company. In this way, the stance taken by the director influenced the State authorities in establishing theatrical policy. This is an entirely novel scholarly contribution to the study of the director. The German and French States were not indifferent to the development of political theatre. A network of public theatres was thus developed in the inter-war period, and more significantly after the Second World War. The fifth chapter shows how State theatre policies have their sources in the development of political theatre, and more specifically in the German theatre trade union movement (Volksbühne) and the great directors at the end of the nineteenth century. French political theatre was more influenced by playwrights and actors (Romain Rolland, Louise Michel, Louis Lumet, Emile Berny). French theatre policy was based primarily on theatre directors who decentralised their activities in France during both the inter-war period and the German occupation. After the Second World War, the government established, through directors, a strong network of public theatres. Directors became both the artistic director and the executive director of those institutionalised theatres. The institution was, however, seriously shaken by the social and political upheaval of 1968. It is the link between the State and the institution in which established directors were entangled that was challenged by the young emerging directors who rejected institutionalised responsibility in favour of the autonomy of the artist in the 1960s. This process is elucidated in chapter five. The final chapter defines the contemporary role of the director by contrasting the work of a number of significant young theatre practitioners of the 1960s, such as Peter Brook, Ariane Mnouchkine, The Living Theater, Jerzy Grotowski, Augusto Boal and Eugenio Barba, all of whom decided early on to detach their companies from any form of public funding. This chapter also demonstrates how they promoted new forms of performance such as the performance of the self. First, these practitioners explored new performance spaces outside the traditional theatre building. Producing performances in a non-dedicated theatre place (warehouse, street, etc.) was a more frequent practice in the 1960s than before. However, the recent development of cybertheatre has, since the 1990s, questioned both the separation of the audience from the practitioners and the place of the director's role. Secondly, the role of the director has been multifaceted since the 1960s. On the one hand, those directors, despite all their different working methods, explored western and non-western acting techniques based on both personal input and collective creation. They challenged theatrical conventions of both the character and the process of making the performance. On the other hand, recent observations and studies distinguish the two main functions of the director, the acting coach and the scénographe, both having found new developments in cinema, television, and various other events. Thirdly, the contemporary director challenges the performance of the text. In this sense, Antonin Artaud was a visionary.
His theatre illustrates the need for the consideration of the totality of the text, as well as that of theatrical production. By contrasting the theories of Artaud, based on a non-dramatic form of theatre, with one of his plays (Le Jet de Sang), this chapter demonstrates how Artaud examined the process of making the performance as a performance. Live art and autobiographical performance, both taken as directing the self, reinforce this suggestion. Finally, since the 1990s, autobiographical performance, or the performance of the self, has been a growing practical and theoretical perspective in both performance studies and psychology-related studies. This relates to the premise that each individual is making a representation (through memory, interpretation, etc.) of her/his own life (performativity). This last section explores the links between the place of the director in contemporary theatre and performers in autobiographical practices. The role of the traditional actor is challenged through non-identification with the character in the play, while performers (such as Chris Burden, Ron Athey, Orlan, Franko B, Stelarc) have, likewise, explored their own story/life as a performance. The thesis demonstrates the validity of the four parameters (performer, performance space, devising process, social event) defining a generic approach to the director. A generic perspective on the role of the director would encompass: a historical dimension relative to the reasons for and stages of the 'emergence' of the director; a socio-political analysis concerning the relationship between the director, her/his institutionalisation, and the political realm; and the relationship between performance theory, practice and the contemporary role of the director. Such a generic approach is a new departure in theatre research and might resonate in the study of other collaborative artistic practices.
Abstract:
Introduction: Macular oedema is not directly visible on the digital photographs used in screening. Photographic surrogate markers are used to detect patients who may have macular oedema. Evidence suggests that only around 10% of patients with these surrogate markers who are referred to an ophthalmologist have macular oedema when examined by slit-lamp biomicroscopy. Purpose: The purpose of this audit was to determine how many patients with surrogate markers were truly identified by optical coherence tomography (OCT) as having macular oedema. Method: Data were collected from patients attending digital diabetic retinopathy screening. Patients who presented with surrogate markers for macular oedema also had an OCT scan. The fast macula scan on the Stratus OCT was used and an ophthalmologist reviewed the scans to determine whether macular oedema was present. Results: Of 66 patients with maculopathy defined as haemorrhages or microaneurysms within one optic disc diameter (DD) of the fovea and visual acuity (VA) worse than 6/9, 11 (17%) showed thickening on the OCT and only 4 (6%) had macular oedema; none required laser. Of 155 patients with maculopathy defined as any exudate within one DD of the fovea or a circinate within two DD, 45 (29%) showed thickening on the OCT; of these, 27% required laser. Conclusion: OCT is a useful tool in screening to help identify those who need a true referral to ophthalmology for maculopathy. If exudate is present, the chance of having macular oedema and requiring laser treatment is greater than when only microaneurysms within one DD and reduced VA are present.
Abstract:
DESIGN. Retrospective analysis. PURPOSE. Macular oedema is not directly visible on two-dimensional digital photographs, such that surrogate markers need to be used. In the English National Screening Programme these are: exudate within one optic disc diameter (DD) of the fovea; a group of exudates within two DD of the fovea; and haemorrhages or microaneurysms (HMA) within one DD of the fovea with best corrected visual acuity (VA) worse than 6/9. All patients who present with any of these surrogate markers at screening are referred to an ophthalmology clinic for slit lamp examination. The purpose of this audit was to determine how many patients with a positive maculopathy diagnosis on photography were truly identified by optical coherence tomography (OCT) as having macular oedema. METHODS. Data were collected from patients attending digital diabetic retinopathy screening. Patients who presented with surrogate markers for macular oedema also had an OCT scan. The fast macula scan on the Stratus OCT was used and an ophthalmologist reviewed the scans to determine whether macular oedema was present. RESULTS. Maculopathy by exudates: of 155 patients, 45 (29%) showed thickening on the OCT; of these, 12 required laser. Those who also had pre-proliferative retinopathy (n=20) were more likely to have macular oedema (75%) than those with background diabetic retinopathy. Maculopathy by HMA and VA worse than 6/9: of 66 patients, 11 (16.7%) showed thickening on the OCT; 5 (7.6%) of these had macular oedema, 5 (7.6%) epi-retinal membrane, and 1 (1.5%) age-related macular degeneration. None of these patients required laser. CONCLUSIONS. The likelihood of the presence of macular oedema and of requiring laser treatment is greater with macular exudation than with HMA within one DD and reduced VA. Overall, the surrogate markers used show low specificity for macular oedema; however, combining OCT with photography does identify those with macular oedema who require a true referral for an ophthalmological slit lamp examination.
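As a simple arithmetic check, the proportions reported above can be recomputed directly from the stated counts (a minimal sketch using only the figures given in the abstract; the dictionary labels are shorthand only):

referred = {
    "exudate-based maculopathy":   {"screened": 155, "oct_thickening": 45, "laser": 12},
    "HMA within 1 DD + VA < 6/9":  {"screened": 66,  "oct_thickening": 11, "laser": 0},
}
for marker, n in referred.items():
    pct = 100.0 * n["oct_thickening"] / n["screened"]
    print(f"{marker}: {n['oct_thickening']}/{n['screened']} = {pct:.1f}% thickened on OCT, "
          f"{n['laser']} required laser")
# exudate-based: 45/155 = 29.0%; HMA with reduced VA: 11/66 = 16.7%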
Abstract:
The concept of 'masculinity' has over recent years received increased attention within consumer research discourse, suggesting the potential of a 'crisis of masculinity', symptomatic of a growing feminisation, or 'queering', of visual imagery and consumption (e.g. Patterson & Elliott, 2002). Although this corpus of research has served to enrich the broader gender identity debate, it is, arguably, still relatively underdeveloped and therefore warrants further insight and elaboration. The aim of this paper is, therefore, to explore how masculinity is represented and interpreted by men, using the Dolce et Gabbana men's 2005 print advertising campaign. The rationale for using this particular campaign is that it is one of the most homoerotic, provocative, and well-publicised campaigns to cross over from the 'gay' media to more mainstream UK men's magazines. Masculinity, and what it means to be 'masculine', manifests itself within particular ideological, moral, cultural and hegemonic discourses. Masculinity is not a homogeneous term which can simply be reduced, and ascribed, to those born 'male' rather than 'female'.
Abstract:
Appealingly simple: A new method is described that allows the diffusion coefficient of a small molecule to be estimated given only the molecular weight and the viscosity of the solvent used. This method makes possible the quantitative interpretation of the diffusion dimension of diffusion-ordered NMR spectra (see picture). © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
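A minimal sketch of the general idea, assuming a simple Stokes-Einstein relation with a hydrodynamic radius derived from the molecular weight and an assumed effective molecular density (the published method includes further refinements, so this is only an approximation of the approach):

import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro constant, 1/mol
RHO_EFF = 620.0         # assumed effective molecular density, kg/m^3 (illustrative)

def estimate_diffusion_coefficient(mw_g_per_mol, viscosity_pa_s, temperature_k=298.0):
    """Rough estimate of D (m^2/s) from molecular weight and solvent viscosity."""
    mw_kg = mw_g_per_mol * 1e-3                               # kg per mole
    volume = mw_kg / (RHO_EFF * N_A)                          # volume per molecule, m^3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)  # effective sphere radius, m
    return K_B * temperature_k / (6.0 * math.pi * viscosity_pa_s * radius)

# Example: a 180 g/mol solute in a solvent of viscosity 1.0 mPa*s at 298 K
print(estimate_diffusion_coefficient(180.0, 1.0e-3))  # on the order of 1e-10 to 1e-9 m^2/s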
Abstract:
The way interviews are used in accounting research, and the way this research is written up, suggest that there is only one way to interpret these interviews. This invests the author(s) with great perceptive power and storytelling ability. What if different assumptions are used about how to interpret research, and how to present the ensuing findings? We give an illustration of what this might imply, using the notion of 'reflexivity'. The setting for our illustration is a series of interviews with management accountants on the dilemmas they face in their daily work. We apply Alvesson's ideas on how to use metaphors to open up the interpretation of interview accounts. The aim of the paper is to shed a different light on the way interviews can be used and interpreted in accounting research. We assert that allowing for reflexive accounts is likely to require research papers that are written substantially differently, in which the process of discovery is emphasized. © 2011 Elsevier Ltd.
Abstract:
EEG hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyperconnectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated. © 2013 Burgess.
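For reference, a minimal sketch of the circular correlation coefficient (CCorr) between two phase time series, written as an illustration rather than as the study's analysis pipeline:

import numpy as np

def circular_correlation(phase_a, phase_b):
    """Circular correlation coefficient between two arrays of phases (radians)."""
    mean_a = np.angle(np.mean(np.exp(1j * phase_a)))   # circular mean of series A
    mean_b = np.angle(np.mean(np.exp(1j * phase_b)))   # circular mean of series B
    sin_a = np.sin(phase_a - mean_a)
    sin_b = np.sin(phase_b - mean_b)
    return np.sum(sin_a * sin_b) / np.sqrt(np.sum(sin_a**2) * np.sum(sin_b**2))

# Toy example: coupled vs independent synthetic phase series
rng = np.random.default_rng(1)
phase_a = rng.vonmises(mu=0.0, kappa=2.0, size=1000)
phase_b = phase_a + 0.5 * rng.standard_normal(1000)   # noisy, coupled copy of A
phase_c = rng.vonmises(mu=0.0, kappa=2.0, size=1000)  # independent series
print(circular_correlation(phase_a, phase_b))          # clearly positive
print(circular_correlation(phase_a, phase_c))          # near zero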