20 results for Intra-individual variation

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

Excessive consumption of dietary fat is acknowledged to be a widespread problem linked to a range of medical conditions. Despite this, little is known about the specific sensory appeal held by fats, and no previous published research exists concerning human perception of non-textural taste qualities in fats. This research aimed to address whether a taste component can be found in sensory perception of pure fats. It also examined whether individual differences existed in human taste responses to fat, using both aggregated data analysis methods and multidimensional scaling. Results indicated that individuals were able to detect the primary taste qualities of sweet, salty, sour and bitter in pure processed oils and reliably ascribe their own individually generated taste labels, suggesting that a taste component may be present in human responses to fat. Individual variation appeared to exist, both in the perception of given taste qualities and in perceived intensity and preferences. A number of factors were examined in relation to such individual differences in taste perception, including age, gender, genetic sensitivity to 6-n-propylthiouracil, body mass, dietary preferences and intake, dieting behaviours and restraint. Results revealed that, to varying extents, gender, age, sensitivity to 6-n-propylthiouracil, dietary preferences, habitual dietary intake and restraint all appeared to be related to individual variation in taste responses to fat. However, in general, these differences appeared to exist in the form of differing preferences and levels of intensity with which taste qualities detected in fat were perceived, as opposed to the perception of specific taste qualities being associated with given traits or states.
Equally, each of these factors appeared to exert only a limited influence upon variation in sensory responses, and thus the potential for using taste responses to fats as a marker for issues such as over-consumption, obesity or eating disorders is at present limited.

Relevance:

80.00%

Publisher:

Abstract:

Human swallowing represents a complex, highly coordinated sensorimotor function whose functional neuroanatomy remains incompletely understood. Specifically, previous studies have failed to delineate the temporo-spatial sequence of those cerebral loci active during the differing phases of swallowing. We therefore sought to define the temporal characteristics of cortical activity associated with human swallowing behaviour using a novel application of magnetoencephalography (MEG). In healthy volunteers (n = 8, aged 28-45), 151-channel whole cortex MEG was recorded during the conditions of oral water infusion, volitional wet swallowing (5 ml bolus), tongue thrust or rest. Each condition lasted for 5 s and was repeated 20 times. Synthetic aperture magnetometry (SAM) analysis was performed on each active epoch and compared to rest. Temporal sequencing of brain activations utilised time-frequency wavelet plots of regions selected using virtual electrodes. Following SAM analysis, water infusion preferentially activated the caudolateral sensorimotor cortex, whereas during volitional swallowing and tongue movement, the superior sensorimotor cortex was more strongly active. Time-frequency wavelet analysis indicated that sensory input from the tongue simultaneously activated caudolateral sensorimotor and primary gustatory cortex, which appeared to prime the superior sensory and motor cortical areas involved in the volitional phase of swallowing. Our data support the existence of a temporal synchrony across the whole cortical swallowing network, with sensory input from the tongue being critical. Thus, the ability to non-invasively image this network, with intra-individual and high temporal resolution, provides new insights into the brain processing of human swallowing.
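The time-frequency wavelet analysis mentioned above can be sketched as a generic complex Morlet convolution over a virtual-electrode time series. This is an illustrative reconstruction only, not the authors' SAM pipeline; the function name and the `n_cycles` parameter are assumptions:

```python
import numpy as np

def morlet_tf_power(signal, fs, freqs, n_cycles=7):
    """Time-frequency power of a 1-D signal via complex Morlet wavelets.

    A generic sketch of the kind of wavelet plot described in the
    abstract; not the authors' exact analysis.
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)             # temporal width (s)
        wt = np.arange(-3 * sigma, 3 * sigma, 1 / fs)  # wavelet support
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()               # crude amplitude normalisation
        # Power is the squared magnitude of the complex convolution.
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power
```

Applied to a sensor or virtual-electrode trace, each row of the returned array tracks how power at one frequency evolves over the epoch, which is what allows the temporal sequencing of activations described above.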

Relevance:

80.00%

Publisher:

Abstract:

This study unites investigations into the linguistic relativity of color categories with research on children's category acquisition. Naming, comprehension, and memory for colors were tracked in 2 populations over a 3-year period. Children from a seminomadic equatorial African culture, whose language contains 5 color terms, were compared with a group of English children. Despite differences in visual environment, language, and education, they showed similar patterns of term acquisition. Both groups acquired color vocabulary slowly and with great individual variation. Those knowing no color terms made recognition errors based on perceptual distance, and the influence of naming on memory increased with age. An initial perceptually driven color continuum appears to be progressively organized into sets appropriate to each culture and language.

Relevance:

80.00%

Publisher:

Abstract:

A survey is made of the literature relating to a number of dimensions of cognitive style, from which it is concluded that cognitive style has a strong theoretical potential as a predictor of academic performance. It is also noted that there have been few attempts to relate cognitive style to academic performance, and that these have met with limited success. On the assumption that theories of individual differences should be congruent with theories of general functioning, an examination is made of the model of cognition presupposed by dimensions of cognitive style. A central feature of this model is the distinction between cognitive content and cognitive structure. The origins of this distinction are traced back to the normative and experimental or quasi-experimental characteristics of research in psychology. The validity of the distinction is examined with reference to modern research findings, and the conclusion is drawn that the normative experimental method is an increasingly inappropriate tool of research when applied to higher levels of cognitive functioning, as it cannot handle subject idiosyncrasy or patterns of interaction. An examination of the presuppositions of educational research leads to the complementary conclusion that the research methods imply an oversimplified model of the educational situation. Two empirical studies are reported: (1) an experiment using conventional cognitive style dimensions as predictors of performance under two teaching methods; (2) an attempt to predict individual differences in overall academic performance by means of a research technique which uses a questionnaire, intra-individual scoring, and an analysis of patterns of responses, and which attempts to take some account of subject idiosyncrasy. The implications of these studies for further research are noted.

Relevance:

80.00%

Publisher:

Abstract:

The theatre director (metteur en scene in French) is a relatively new figure in theatre practice. It was not until the 1820s that the term 'mise en scene' gained currency. The term 'director' was not in general use until the 1880s. The emergence and the role of the director have been considered from a variety of perspectives: the history of theatre (Allevy, Jomaron, Sarrazac, Viala, Biet and Triau); the history of directing (Chinoy and Cole, Boll, Veinstein, Roubine); semiotic approaches to directing (Whitmore, Miller, Pavis); the semiotics of performance (De Marinis); generic approaches to the mise en scene (Thomasseau, Banu); post-dramatic approaches to theatre (Lehmann); and approaches to performance process and the specifics of rehearsal methodology (Bradby and Williams, Giannachi and Luckhurst, Picon-Vallin, Styan). What the scholarly literature has not done so far is to map the parameters necessarily involved in the directing process, to incorporate an analysis of the emergence of the theatre director during the modern period, and to consider its impact on contemporary performance practice. Directing relates primarily to the making of the performance guided by a director, a single figure charged with the authority to make binding artistic decisions. Each director may have her/his own personal approach to the process of preparation prior to a show. This is exemplified by the variety of terms now used to describe the role and function of directing, from producer to facilitator or outside eye. However, it is essential at the outset to make two observations, each of which contributes to a justification for a generic analysis (as opposed to a genetic approach). Firstly, a director does not work alone, and cooperation with others is involved at all stages of the process. Secondly, beyond individual variation, the role of the director remains twofold.
The first is to guide the actors (meneur de jeu, directeur d'acteurs, coach); the second is to make a visual representation in the performance space (set designer, stage designer, costume designer, lighting designer, scenographe). The increasing place of scenography has led contemporary theatre directors such as Wilson, Castellucci and Fabre to produce performances where the performance space becomes a semiotic dimension that displaces the primacy of the text. The play is not, therefore, the sole artistic vehicle for directing. This definition of directing obviously calls for a definition of what the making of the performance might be. The thesis defines the making of the performance as the activity of bringing about a social event, by at least one performer, that provides visual and/or textual meaning in a performance space. This definition enables us to evaluate four consistent parameters throughout theatre history: first, the social aspect associated with the performance event; second, the devising process, which may be based on visual and/or textual elements; third, the presence of at least one performer in the show; fourth, the performance space (which is not simply related to the theatre stage). Although the thesis focuses primarily on theatre practice, such a definition blurs the boundaries between theatre and other collaborative artistic disciplines (cinema, opera, music and dance). These parameters illustrate the possibility of undertaking a generic analysis of directing, and resonate with the historical, political and artistic dimensions considered.
Such a generic perspective on the role of the director addresses three significant questions. A historical question: how/why has the director emerged? A sociopolitical question: how/why was the director a catalyst for the politicisation of theatre, and how did she/he subsequently contribute to the rise of State-funded theatre policy? And an artistic one: how/why has the director changed theatre practice and theory in the twentieth century? Directing for the theatre as an artistic activity is a historically situated phenomenon. It would seem only natural from a contemporary perspective to associate the activity of directing with the function of the director. This is relativised, however, by the question of how the performance was produced before the modern period. The thesis demonstrates that the rise of the director is a progressive and historical phenomenon (Dort) rather than a mere invention (Viala, Sarrazac). A chronological analysis of the making of the performance throughout theatre history is the most useful way to open the study. In order to understand the emergence of the director, the research methodology assesses the interconnection of the four parameters above throughout four main periods of theatre history: the beginning of the Renaissance (meneur de jeu), the classical age (actor-manager and stage designer-manager), the modern period (director) and the contemporary period (director-facilitator, performer). This allows us properly to appraise the progressive emergence of the director, as well as to make an analysis of her/his modern and contemporary role. The first chapter argues that the physical separation between the performance space and its audience, which appeared in the early fifteenth century, has been a crucial feature in the scenographic, aesthetic, political and social organisation of the performance.
At the end of the Middle Ages, French farces which raised socio-political issues (see Bakhtin) made a clear division on a single outdoor stage (treteau) between the actors and the spectators, while religious plays (drame liturgique, mystere) were mostly performed across various outdoor, open multi-spaces. As long as the performance was liturgical or religious, and therefore confined within an acceptable framework, it was allowed. At the time, the French ecclesiastical and civil authorities tried, on several occasions, to prohibit staged performances. As a result, practitioners developed non-official indoor spaces, the Theatre de la Trinite (1398) being the first French indoor theatre recognized by scholars. This self-exclusion from the open public space involved practitioners (e.g. Les Confreres de la Passion) breaking the accepted rules, in terms of themes but also through individual input into a secular performance rather than the repetition of commonly known religious canvases. These developments heralded the authorised theatres that began to emerge from the mid-sixteenth century, which in some cases were subsidised in their construction. The construction of authorised indoor theatres, associated with the development of printing, led to a considerable increase in the production of dramatic texts for the stage. Profoundly affecting the reception of the dramatic text by the audience, the distance between the stage and the auditorium accompanied the changing relationship between practitioners and spectators. This distance gave rise to a major development of the role of the actor and of the stage designer. The second chapter looks at the significance of both the actor and the set designer in the devising process of the performance from the sixteenth century to the end of the nineteenth century.
The actor underwent an important shift in function in this period, from the delivery of an unwritten text learned in the medieval oral tradition to a structured improvisation produced by the commedia dell'arte. In this new form of theatre, a chef de troupe or an experienced actor shaped the story, but the text existed only through the improvisation of the actors. The preparation of those performances was, moreover, centred on acting technique and the individual skills of the actor. From this point, there is clear evidence that acting began to be the subject of a number of studies in the mid-sixteenth century, and more significantly in the seventeenth century, in Italy and France. This is revealed through the implementation of a system of notes written by the playwright to the actors (stage directions) in a range of plays (Gerard de Vivier, Comedie de la Fidelite Nuptiale, 1577). The thesis also focuses on Leoni de' Sommi (Quatro dialoghi, 1556 or 1565), who wrote about actors' techniques and introduced the meneur de jeu in Italy. The actor-manager (meneur de jeu), a professional actor whom scholars have compared to the director (see Strihan), trained the actors. Nothing, however, indicates that the actor-manager was directing the visual representation of the text in the performance space. From the end of the sixteenth century, the dramatic text began to dominate the process of the performance and led to an expansion of acting techniques, such as declamation. Stage designers came from outside the theatre tradition and played a decisive role in the staging of religious celebrations (e.g. Actes des Apotres, 1536). In the sixteenth century, both the proscenium arch and the borders, incorporated in the architecture of the new indoor theatres (theatre a l'italienne), contributed to creating all kinds of illusions on the stage, principally the revival of perspective. This chapter shows ongoing audience demands for more elaborate visual effects on the stage.
This led, throughout the classical age, and even more so during the eighteenth century, to the stage design practitioner being granted a major role in the making of the performance (see Ciceri). The second chapter demonstrates that the guidance of the actors and the scenographic conception, which are the artistic components of the role of the director, appear to have developed independently from one another until the nineteenth century. The third chapter investigates the emergence of the director per se. The causes for this have been considered by a number of scholars, who have mainly identified two: the influence of Naturalism (illustrated by the Meiningen Company, Antoine, and Stanislavski) and the invention of electric lighting. The influence of the Naturalist movement on the emergence of the modern director in the late nineteenth century is often considered a radical factor in the history of theatre practice. Naturalism undoubtedly contributed to changes in staging, costume and lighting design, and to a more rigorous commitment to the harmonisation and visualisation of the overall production of the play. Although the art of theatre was dependent on the dramatic text, scholars (Osborne) demonstrate that the Naturalist directors did not strictly follow the playwright's indications written in the play in the late nineteenth century. On the other hand, the main characteristic of directing in Naturalism at that time depended on a comprehensive understanding of the scenography, which had to respond to the requirements of verisimilitude. Electric lighting contributed to this by allowing for the construction of a visual narrative on stage. However, it was a master technician, rather than an emergent director, who was responsible for key operational decisions over how to use this emerging technology in venues such as the new Bayreuth theatre in 1876.
Electric lighting reflects a normal technological evolution and cannot be considered one of the main causes of the emergence of the director. Two further causes of the emergence of the director, not considered in previous studies, are the invention of cinema and the Symbolist movement (Lugne-Poe, Meyerhold). Cinema had an important technological influence on the practitioners of the Naturalist movement. In order to achieve a photographic truth on the stage (tableau, image), Naturalist directors strove to decorate the stage with the detailed elements that would be expected to be found if the situation were happening in reality. Film production had an influence on the work of actors (Walter). The filmmaker took over a primary role in the making of the film, as the source of the script, the filming process and the editing of the film; this role influenced the conception that theatre directors had of their own work, and it is this concept which shaped the development of the theatre director. As for the Symbolist movement, the director's approach was to dematerialise the text of the playwright, trying to expose the spirit, movement, colour and rhythm of the text. The Symbolists therefore disengaged themselves from the material aspect of the production, and contributed to giving greater artistic autonomy to the role of the director. Although the emergence of the director finds its roots amongst the Naturalist practitioners (through a rigorous attempt to provide a strict visual interpretation of the text on stage), the Symbolist director heralded the modern perspective of the making of performance. The emergence of the director significantly changed theatre practice and theory. For instance, the rehearsal period became a clear work in progress, a platform for both developing practitioners' techniques and staging the show.
This chapter explores and contrasts several practitioners' methods based on the two aspects proposed for the definition of the director (guidance of the actors and materialisation of a visual space). The fourth chapter argues that the role of the director became stronger, more prominent, and more hierarchical, through a more political and didactic approach to theatre, as exemplified by the cases of France and Germany at the end of the nineteenth century and through the First World War. This didactic perspective on theatre defines the notion of political theatre. Political theatre is often approached in the literature (Esslin, Willett) through a Marxist interpretation of the great German directors' productions (Reinhardt, Piscator, Brecht). These directors certainly had a great influence on many directors after the Second World War, such as Jean Vilar, Judith Malina, Jean-Louis Barrault, Roger Planchon, Augusto Boal, and others. This chapter demonstrates, moreover, that the director was confirmed through both ontological and educational approaches to the process of making the performance, and consequently became a central and paternal figure in the organisational and structural processes practised within her/his theatre company. In this way, the stance taken by the director influenced the State authorities in establishing theatrical policy. This is an entirely novel scholarly contribution to the study of the director. The German and French States were not indifferent to the development of political theatre. A network of public theatres was thus developed in the inter-war period, and more significantly after the Second World War. The fifth chapter shows how State theatre policy has its sources in the development of political theatre, and more specifically in the German theatre trade union movement (Volksbühne) and the great directors at the end of the nineteenth century.
French political theatre was more influenced by playwrights and actors (Romain Rolland, Louise Michel, Louis Lumet, Emile Berny). French theatre policy was based primarily on theatre directors who decentralised their activities in France during both the inter-war period and the German occupation. After the Second World War, the government established, through directors, a strong network of public theatres. Directors became both the artistic director and the executive director of those institutionalised theatres. The institution was, however, seriously shaken by the social and political upheaval of 1968. It is the link between the State and the institution in which established directors were entangled that was challenged by the young emerging directors who, in the 1960s, rejected institutionalised responsibility in favour of the autonomy of the artist. This process is elucidated in chapter five. The final chapter defines the contemporary role of the director by contrasting the work of a number of significant young theatre practitioners of the 1960s, such as Peter Brook, Ariane Mnouchkine, The Living Theater, Jerzy Grotowski, Augusto Boal and Eugenio Barba, all of whom decided early on to detach their companies from any form of public funding. This chapter also demonstrates how they promoted new forms of performance such as the performance of the self. First, these practitioners explored new performance spaces outside the traditional theatre building. Producing performances in a non-dedicated theatre place (warehouse, street, etc.) was a more frequent practice in the 1960s than before. However, the recent development of cybertheatre has, since the 1990s, questioned both the separation of the audience from the practitioners and the place of the director's role. Secondly, the role of the director has been multifaceted since the 1960s.
On the one hand, those directors, despite all their different working methods, explored western and non-western acting techniques based on both personal input and collective creation. They challenged theatrical conventions of both the character and the process of making the performance. On the other hand, recent observations and studies distinguish the two main functions of the director, the acting coach and the scenographe, both having found new developments in cinema, television, and various other events. Thirdly, the contemporary director challenges the performance of the text. In this sense, Antonin Artaud was a visionary. His theatre illustrates the need for the consideration of the totality of the text, as well as that of theatrical production. By contrasting the theories of Artaud, based on a non-dramatic form of theatre, with one of his plays (Le Jet de Sang), this chapter demonstrates how Artaud examined the process of making the performance as a performance. Live art and autobiographical performance, both taken as directing the self, reinforce this suggestion. Finally, since the 1990s, autobiographical performance, or the performance of the self, has been a growing practical and theoretical perspective in both performance studies and psychology-related studies. This relates to the premise that each individual makes a representation (through memory, interpretation, etc.) of her/his own life (performativity). This last section explores the links between the place of the director in contemporary theatre and performers in autobiographical practices. The role of the traditional actor is challenged through non-identification with the character in the play, while performers (such as Chris Burden, Ron Athey, Orlan, Franko B, Stelarc) have, likewise, explored their own story/life as a performance. The thesis demonstrates the validity of the four parameters (performer, performance space, devising process, social event) defining a generic approach to the director.
A generic perspective on the role of the director would encompass: a historical dimension relative to the reasons for and stages of the 'emergence' of the director; a socio-political analysis concerning the relationship between the director, her/his institutionalisation, and the political realm; and the relationship between performance theory, practice and the contemporary role of the director. Such a generic approach is a new departure in theatre research and might resonate in the study of other collaborative artistic practices.

Relevance:

80.00%

Publisher:

Abstract:

The electroretinogram evoked by reversal pattern stimulation (rPERG) is known to contain both pattern-contrast and luminance-related components. The retinal mechanisms of the transient rPERGs subserving these functional characteristics are the main concern of the present studies. Considerable attention has been paid to the luminance-related characteristics of the response. Using low-frequency attenuation analysis, the transient PERGs were found to consist of two subsequent processes. The processes overlapped, and individual differences in the timing of each process formed the major cause of the variations in the negative-potential waveform of the transient rPERGs; particular attention has been paid to those showing a 'notch' type of variation. Across contrast levels, the amplitudes of the positive and negative potentials increased linearly with contrast, with the negative potential showing a higher sensitivity to contrast changes and a higher contrast gain. At lower contrast levels, the decreased amplitudes made the difference in the time course of the positive and negative processes evident, explaining the appearance of the notch in some cases. Visual adaptation conditions for recording the transient rPERG are discussed. Another aim was to study the large variation of the transient rPERGs (especially the positive potential, P50) in elderly subjects whose distance and near visual acuity were normal. It was found that reduction of retinal illumination contributed mostly to the P50 amplitude loss, and contrast loss mostly to the negative potential (N95) amplitude loss. Senile miosis was thought to have little effect on the reduction of retinal illumination; changes in the optics of the eye were probably its major cause, which explained the larger individual variation of the P50 amplitude in elderly PERGs. Convex defocus affected the transient rPERGs more strongly than concave lenses did, especially the N95 amplitude in the elderly. The loss of accommodation and the type and degree of subjects' ametropia should be taken into consideration when elderly rPERGs are analysed.
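The statement that the negative potential (N95) has a higher contrast gain than the positive potential (P50) can be illustrated numerically: gain is the slope of amplitude against contrast. The amplitude values below are invented for illustration only, not data from the study:

```python
import numpy as np

# Hypothetical amplitudes (microvolts) at four stimulus contrast levels.
contrast = np.array([0.25, 0.50, 0.75, 1.00])
p50 = np.array([1.8, 2.6, 3.5, 4.2])   # positive potential, shallower slope
n95 = np.array([2.0, 3.5, 5.1, 6.6])   # negative potential, steeper slope

# Contrast gain = slope of the linear amplitude-vs-contrast fit.
p50_gain = np.polyfit(contrast, p50, 1)[0]
n95_gain = np.polyfit(contrast, n95, 1)[0]
```

With these figures the N95 slope exceeds the P50 slope, which is what "higher contrast gain" means in the abstract.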

Relevance:

80.00%

Publisher:

Abstract:

Lipid peroxidation products like malondialdehyde, 4-hydroxynonenal and F(2)-isoprostanes are widely used as markers of oxidative stress in vitro and in vivo. This study reports the results of a multi-laboratory validation by COST Action B35 to assess inter-laboratory and intra-laboratory variation in the measurement of lipid peroxidation. Human plasma samples were exposed to UVA irradiation at different doses (0, 15 J, 20 J), encoded and shipped to 15 laboratories, where analyses of malondialdehyde, 4-hydroxynonenal and isoprostanes were conducted. The results demonstrate a low within-day variation and a good correlation between results observed on two different days. However, high coefficients of variation were observed between the laboratories. Malondialdehyde determined by HPLC was found to be the most sensitive and reproducible lipid peroxidation product in plasma upon UVA treatment. It is concluded that measurement of malondialdehyde by HPLC has good analytical validity for inter-laboratory studies on lipid peroxidation in human EDTA-plasma samples, although it is acknowledged that this may not translate to biological validity.
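The two quantities the study compares, within-laboratory (day-to-day) variation and between-laboratory coefficient of variation, can be sketched as follows. The concentrations below are invented for illustration; only the computation is meaningful:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%) = 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical MDA concentrations (umol/l) for one UVA dose, measured
# by five laboratories on two different days (illustrative values only).
day1 = {"lab_a": 1.10, "lab_b": 1.95, "lab_c": 0.88, "lab_d": 1.42, "lab_e": 1.60}
day2 = {"lab_a": 1.14, "lab_b": 1.90, "lab_c": 0.91, "lab_d": 1.38, "lab_e": 1.65}

# Between-laboratory spread on a single day.
inter_lab_cv = cv_percent(list(day1.values()))

# Within-laboratory drift: relative difference between the two days.
within_lab_drift = [abs(day1[k] - day2[k]) / day1[k] for k in day1]
```

In the pattern reported by the study, `within_lab_drift` stays small for every laboratory while `inter_lab_cv` is large.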

Relevance:

80.00%

Publisher:

Abstract:

Aims - To characterize the population pharmacokinetics of ranitidine in critically ill children and to determine the influence of various clinical and demographic factors on its disposition. Methods - Data were collected prospectively from 78 paediatric patients (n = 248 plasma samples) who received oral or intravenous ranitidine for prophylaxis against stress ulcers, gastrointestinal bleeding or the treatment of gastro-oesophageal reflux. Plasma samples were analysed using high-performance liquid chromatography, and the data were subjected to population pharmacokinetic analysis using nonlinear mixed-effects modelling. Results - A one-compartment model best described the plasma concentration profile, with an exponential structure for interindividual errors and a proportional structure for intra-individual error. After backward stepwise elimination, the final model showed a significant decrease in objective function value (−12.618; P < 0.001) compared with the weight-corrected base model. Final parameter estimates for the population were 32.1 l h−1 for total clearance and 285 l for volume of distribution, both allometrically modelled for a 70 kg adult. Final estimates for absorption rate constant and bioavailability were 1.31 h−1 and 27.5%, respectively. No significant relationship was found between age and weight-corrected ranitidine pharmacokinetic parameters in the final model, with the covariate for cardiac failure or surgery being shown to reduce clearance significantly by a factor of 0.46. Conclusions - Currently, ranitidine dose recommendations are based on children's weights. However, our findings suggest that a dosing scheme that takes into consideration both weight and cardiac failure/surgery would be more appropriate in order to avoid administration of higher or more frequent doses than necessary.
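The final model's clearance estimate can be sketched as a small function. The abstract states allometric scaling to a 70 kg adult and a 0.46 multiplier for cardiac failure/surgery; the 0.75 allometric exponent is the conventional assumption for clearance and is not stated in the abstract:

```python
def ranitidine_clearance(weight_kg, cardiac_failure_or_surgery=False):
    """Population clearance (l/h) per the final model described above.

    CL at 70 kg = 32.1 l/h (reported); exponent 0.75 is the standard
    allometric assumption, not a value given in the abstract; clearance
    is multiplied by 0.46 with the cardiac failure/surgery covariate.
    """
    cl = 32.1 * (weight_kg / 70.0) ** 0.75
    if cardiac_failure_or_surgery:
        cl *= 0.46
    return cl
```

This makes the conclusion concrete: two children of equal weight get very different clearance estimates depending on the cardiac failure/surgery covariate, which is why a weight-only dosing scheme can overdose that subgroup.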

Relevance:

80.00%

Publisher:

Abstract:

Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance; these variations are mainly due to heart rate variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work analyses the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we have also considered the influence of training set size, classifier, classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects within the Physionet open access database. Public domain software was used for fiducial point detection. Results suggested that QT correction is indeed required to improve the performance. However, there is no clear choice among the seven explored approaches for QT correction (identification rate between 0.97 and 0.99). MultiLayer Perceptron and Support Vector Machine seemed to have better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No strong influence of training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
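Two of the ingredients above can be sketched briefly: a population-model QT correction (Bazett's formula is shown as one well-known example; the abstract does not name the seven methods compared) and the majority vote over consecutive heartbeats:

```python
import math
from collections import Counter

def qtc_bazett(qt_s, rr_s):
    """Bazett correction: QTc = QT / sqrt(RR), intervals in seconds.

    One classic population-model correction; shown as an example of the
    kind of method compared in the study, not necessarily one of its seven.
    """
    return qt_s / math.sqrt(rr_s)

def majority_vote(predicted_ids):
    """Final identity = most frequent prediction over n consecutive beats."""
    return Counter(predicted_ids).most_common(1)[0][0]
```

For example, a 0.40 s QT at a 0.80 s RR interval corrects to a longer QTc, removing the heart-rate dependence before the fiducial features are fed to the classifier; the vote then smooths out single-beat misclassifications.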

Relevance:

80.00%

Publisher:

Abstract:

The aims of this thesis were to investigate the neuropsychological, neurophysiological, and cognitive contributors to mobility changes with increasing age. In a series of studies with adults aged 45-88 years, unsafe pedestrian behaviour and falls were investigated in relation to i) cognitive functions (including response time variability, executive function, and visual attention tests), ii) mobility assessments (including gait and balance, using motion capture cameras), iii) motor initiation and pedestrian road crossing behaviour (using a simulated pedestrian road scene), iv) neuronal and functional brain changes (using a computer-based crossing task with magnetoencephalography), and v) quality of life questionnaires (including fear of falling and restricted range of travel). Older adults are more likely to be fatally injured at the far-side of the road than at the near-side; however, the underlying mobility and cognitive processes related to lane-specific (i.e. near-side or far-side) pedestrian crossing errors in older adults are currently unknown. The first study explored cognitive, motor initiation, and mobility predictors of unsafe pedestrian crossing behaviours. The purpose of the first study (Chapter 2) was to determine whether collisions at the near-side and far-side would be differentially predicted by mobility indices (such as walking speed and postural sway), motor initiation, and cognitive function (including spatial planning, visual attention, and within-participant variability) with increasing age. The results suggest that near-side unsafe pedestrian crossing errors are related to processing speed, whereas far-side errors are related to spatial planning difficulties. Both near-side and far-side crossing errors were related to walking speed and motor initiation measures (specifically motor initiation variability).
The salient mobility predictors of unsafe pedestrian crossings identified in the above study were examined in Chapter 3 in conjunction with the presence of a history of falls. The purpose of this study was to determine the extent to which walking speed (indicated as a salient predictor of unsafe crossings and start-up delay in Chapter 2) and previous falls can be predicted and explained by age-related changes in mobility and cognitive function (specifically within-participant variability and spatial ability). Self-rated mobility score, sit-to-stand time, motor initiation, and within-participant variability together predicted 53.2% of walking speed variance. Although no significant model was found to predict fall history variance, postural sway and attentional set-shifting ability were found to be strongly related to the occurrence of falls within the last year. Next, in Chapter 4, unsafe pedestrian crossing behaviour and its mobility and cognitive predictors from Chapter 2 were explored in terms of increasing hemispheric laterality of attentional functions and inter-hemispheric oscillatory beta power changes associated with increasing age. Elevated beta (15-35 Hz) power in the motor cortex prior to movement, and reduced beta power post-movement, have been linked to age-related changes in mobility. In addition, increasing recruitment of both hemispheres has been shown to occur with age and to help older adults perform similarly to younger adults in cognitive tasks (Cabeza, Anderson, Locantore, & McIntosh, 2002). It has been hypothesised that changes in hemispheric neural beta power may explain the presence of more pedestrian errors at the far-side of the road in older adults. The purpose of the study was to determine whether changes in age-related cortical oscillatory beta power and hemispheric laterality are linked to unsafe pedestrian behaviour in older adults.
Results indicated that pedestrian errors at the near-side are linked to hemispheric bilateralisation and neural overcompensation post-movement, whereas far-side unsafe errors are linked to a failure to employ neural compensation methods (hemispheric bilateralisation). Finally, in Chapter 5, fear of falling, life space mobility, and quality of life in old age were examined to determine their relationships with cognition, mobility (including fall history and pedestrian behaviour), and motor initiation. In addition to death and injury, mobility decline (such as the pedestrian errors in Chapter 2 and the falls in Chapter 3) and cognitive decline can negatively affect quality of life and result in activity avoidance. Further, the number of falls in Chapter 3 was not significantly linked to mobility and cognition alone, and may be further explained by a fear of falling. The objective of the above study (Study 2, Chapter 3) was to determine the role of mobility and cognition in fear of falling and life space mobility, and their impact on quality of life measures. Results indicated that missing safe pedestrian crossing gaps (potentially indicating crossing anxiety) and mobility decline were consistent predictors of fear of falling, reduced life space mobility, and quality of life variance. Social community (total number of close family and friends) was also linked to life space mobility and quality of life. Lower cognitive function (particularly processing speed and reaction time) was found to predict variance in fear of falling and quality of life in old age. Overall, the findings indicated that mobility decline (particularly walking speed or walking difficulty), processing speed, and intra-individual variability in attention (including motor initiation variability) are salient predictors of participant safety (mainly pedestrian crossing errors) and wellbeing with increasing age. More research is required to produce a significant model to explain the number of falls.

Relevance:

30.00%

Publisher:

Abstract:

The oculomotor synergy, as expressed by the CA/C and AC/A ratios, was investigated to examine its influence on our previous observation that, whereas convergence responses to stereoscopic images are generally stable, some individuals exhibit significant accommodative overshoot. Accommodative and convergence responses to balanced and unbalanced vergence and focal stimuli (BVFS and UBVFS) were measured using a modified video refraction unit while subjects viewed a stereoscopic LCD. Accommodative overshoot of at least 0.3 D was found in 3 out of 8 subjects for UBVFS. The accommodative response differential (RD) was taken to be the difference between the initial response and the subsequent mean static steady-state response. Without overshoot, RD was quantified by finding the initial response component. A mean RD of 0.11 ± 0.27 D was found for the 1.0 D step UBVFS condition; the mean RD for the BVFS was 0.00 ± 0.17 D. There was a significant positive correlation between the CA/C ratio and RD (r = +0.75, n = 8, p < 0.05) for UBVFS only. We propose that inter-subject variation in RD is influenced by the CA/C ratio as follows: an initial convergence response, induced by disparity of the image, generates convergence-driven accommodation commensurate with the CA/C ratio; the associated transient defocus subsequently decays to a balanced position between defocus-induced and convergence-induced accommodation.
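The RD measure and the reported correlation can be illustrated with a short sketch; the numeric values in the example are hypothetical, not the study's data:

```python
def response_differential(initial_response_d, steady_state_d):
    """RD: initial accommodative response minus the mean static
    steady-state response, both in dioptres (D)."""
    mean_steady_state = sum(steady_state_d) / len(steady_state_d)
    return initial_response_d - mean_steady_state

def pearson_r(x, y):
    """Plain Pearson correlation coefficient, of the kind used for
    the CA/C-ratio-versus-RD comparison."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

An initial response of 1.3 D decaying to a 1.0 D steady state, for instance, gives an RD (overshoot) of 0.3 D.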

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to determine how thallus symmetry could be maintained in foliose lichens when variation in the growth of individual lobes may be high. Hence, the radial growth of a sample of lobes was studied monthly, over 22 months, in 7 thalli of Parmelia conspersa (Ehrh. ex Ach.) Ach. and 5 thalli of P. glabratula ssp. fuliginosa (Fr. ex Duby) Laund. The degree of variation in the total radial growth of different lobes within a thallus over 22 months varied between thalli. Individual lobes showed a fluctuating pattern of radial growth from month to month, with alternating periods of fast and slow growth. Monthly variations in radial growth of different lobes were synchronized in some but not all thalli. Few significant correlations were found between the radial growth of individual lobes and total monthly rainfall or shortwave radiation. The levels of ribitol, arabitol and mannitol were measured in individual lobes. All three polyols varied significantly between lobes within a thallus, suggesting that variations in algal photosynthesis and in the partitioning of fungal polyols may contribute to lobe growth variation. The effect on thallus symmetry of lobes which grew radially either consistently faster or slower than average was studied. Slow-growing lobes were overgrown, and gaps in the perimeter were eliminated by the growth of neighbouring lobes, in approximately 7 to 9 months. However, a rapidly growing lobe, with its neighbours removed on either side, continued to grow radially at the same rate as rapidly growing control lobes. The results suggested that lobe growth variation results from a combination of factors which may include the origin of the lobes, lobe morphology and the patterns of algal cell division and hyphal elongation in different lobes. No convincing evidence was found to suggest that exchange of carbohydrate occurred between lobes, which would tend to equalize their radial growth.
Hence, the fluctuating pattern of lobe growth observed may be sufficient to maintain a degree of symmetry in most thalli. In addition, slow growing lobes would tend to be overgrown by faster growing neighbours thus preventing the formation of indentations in the thallus perimeter.

Relevance:

30.00%

Publisher:

Abstract:

Variations in hypothallus width were studied in relation to radial growth in the lichen Rhizocarpon geographicum (L.) DC. in South Gwynedd, Wales, UK. Variations were present both within and between thalli and in successive three-month growth periods, but there was no significant variation associated with thallus size. In individual thalli, there were increases and reductions in hypothallus width in successive three-month growth periods attributable to hypothallus growth and changes at the margin of the areolae. Total radial growth over 18 months was positively correlated with initial hypothallus width. These results suggest: 1) individual thalli of similar size vary considerably in hypothallus width, 2) fluctuations in the location of the margin of the areolae in successive three-month periods are an important factor determining this variability, 3) hypothallus width predicts subsequent radial growth over 18 months, and 4) variation in hypothallus width is a factor determining between-thallus variability in radial growth rates in yellow-green species of Rhizocarpon.

Relevance:

30.00%

Publisher:

Abstract:

Soredial dispersal from individual soralia of Hypogymnia physodes (L.) Nyl. was studied in the field under natural conditions and by exposing the soralia to an electric fan. Individual soralia were placed on the adhesive surface of dust particle collectors which were pinned to vertical boards in the field. The majority of soredia deposited on the adhesive strips during the experiments were found within 1 cm of the source soralium. Deposition was studied over 6 successive days under natural conditions. Significantly fewer soredia were deposited from soralia after removal of mature accumulations, and from soralia taken from moist thalli compared with soralia from air-dry thalli. In addition, there was a decline in soredial deposition over the 6 days. The influence of wind speed and initial thallus moisture content on soredial deposition over short intervals of time was studied using an electric fan. More soredia and larger soredial clusters were deposited from air-dry than from moist soralia at all wind speeds. Variation in wind speed between 4 and 9 m/sec had little effect on soredial deposition. Deposition of soredia was also studied using the fan over successive 5-min intervals. Large numbers of soredia were deposited during the first 5-min period. Deposition then declined but recovered after about four 5-min periods. In all experiments there were differences between individual soralia in the total numbers of soredia deposited and in the pattern of deposition over time. These results suggest (1) that soredia accumulate on soralia and these deposits may be gradually or rapidly depleted in the field, (2) that after the release of soredial accumulations some newly exposed soredia may be rapidly dispersed, (3) that a high initial thallus moisture content inhibits soredial release, and (4) that variation in wind speed is less important than moisture in influencing soredial deposition.
The results may help to explain the intermittent pattern of soredial deposition and the poor correlations between deposition and climatic factors observed previously in the field. © 1992.

Relevance:

30.00%

Publisher:

Abstract:

This paper contributes to the literature on the intra-firm diffusion of innovations by investigating the factors that affect the firm's decision to adopt and use sets of complementary innovations. We define as complementary those innovations whose joint use generates superadditive gains, i.e. the gain from joint adoption is higher than the sum of the gains derived from adopting each innovation in isolation. From a theoretical perspective, we present a simple decision model whereby the firm decides 'whether' and 'how much' to invest in each of the innovations under investigation, based upon the expected profit gain from each possible combination of adoption and use. The model shows how the extent of complementarity among the innovations can affect the firm's profit gains and therefore the likelihood that the firm will adopt these innovations jointly rather than individually. From an empirical perspective, we focus on four sets of management practices, namely operating (OMP), monitoring (MMP), targets (TMP) and incentives (IMP) management practices. We show that these sets of practices, although to differing extents, are complementary to each other. We then construct a synthetic indicator of the depth of their use. The resulting intra-firm index reflects not only the number of practices adopted but also the depth of their individual use and the extent of their complementarity. The empirical testing of the decision model is carried out using evidence on the adoption behaviour of a sample of 1,238 UK establishments in the 2004 Workplace Employment Relations Survey (WERS). Our empirical results show that the intra-firm profitability-based model explains more of the variability of joint adoption than models based upon the variability of adoption and use of individual practices.
We also investigate whether a number of firm-specific and market characteristics, by affecting the size of the gains that the joint adoption of innovations can generate, may drive the intensity of use of the four innovations. We find that establishment size, foreign ownership, exposure to an international market, and the degree of homogeneity of the final product are important determinants of the intensity of the joint adoption of the four innovations. Most importantly, our results point out that the factors that the economics of innovation literature has shown to affect the intensity of use of a technological innovation also affect the intensity of use of sets of innovative management practices. However, they can explain only a small part of the diversity of joint adoption and use by the firms in the sample.
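The superadditivity condition that defines complementarity above can be written as a one-line check; the profit figures in the example are hypothetical:

```python
def is_complementary(pi_none, pi_a, pi_b, pi_ab):
    """True when the gain from jointly adopting innovations A and B
    exceeds the sum of the stand-alone gains (superadditivity)."""
    joint_gain = pi_ab - pi_none
    standalone_gains = (pi_a - pi_none) + (pi_b - pi_none)
    return joint_gain > standalone_gains

# Hypothetical profits: adopting A alone gains 10, B alone gains 12,
# but adopting both gains 30 > 10 + 12, so A and B are complementary.
```

Under this condition a profit-maximizing firm is more likely to adopt the practices as a bundle than one at a time, which is the mechanism the decision model formalizes.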