811 results for Fixing
Abstract:
This article outlines the complex stories through which national belonging is made, and some ways in which class mediates the racialisation process. It is based on fieldwork on the ways in which white UK people in provincial cities construct identities based on positioning vis-à-vis other groups, communities and the nation. I argue that this relational identity work revolves around fixing a moral-ethical location against which the behaviour and culture of Others is measured, and that this has a temporal and spatial specificity. First, attitudinal trends by social class emerge in our work as being to do with emphasis and life experience rather than constituting absolute distinctions in attitudes. Second, in an era supposedly marked by the hegemony of ‘new’ or ‘cultural’ racism, bloodlines and phenotypes are still frequently utilised in race-making discursive work. Third, in provincial urban England, there is a marked ambivalence towards Britishness (as compromised by Others) and an openness to Englishness as a more authentic source of identification.
Abstract:
We measured the properties of interocular suppression in strabismic amblyopes and compared these to dichoptic masking in binocularly normal observers. We used a dichoptic version of the well-established probed-sinewave paradigm that measured sensitivity to a brief target stimulus (one of four letters to be discriminated) in the amblyopic eye at different times relative to a suppression-inducing mask in the fixing eye. This was done using both sinusoidal steady-state and transient approaches. The suppression-inducing masks were either modulations of luminance or contrast (full field, just overlaying the target, or just surrounding the target). Our results were interpreted using a descriptive model that included contrast gain control and spatio-temporal filtering prior to excitatory binocular combination. The suppression we measured, other than in magnitude, was not fundamentally different from normal dichoptic masking, exhibiting lowpass spatio-temporal properties with similar contributions from both surround and overlay suppression.
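The abstract does not specify the descriptive model's equations, but the general shape of divisive contrast gain control, where mask contrast in the fixing eye enters a suppressive denominator before binocular combination, can be sketched as follows. All weights, exponents, and parameter values here are illustrative assumptions, not the authors' fitted model:

```python
def target_response(c_target, c_overlay=0.0, c_surround=0.0,
                    w_overlay=1.0, w_surround=0.4, n=2.0, z=0.01):
    # Generic divisive contrast gain control: mask contrast presented to the
    # other (fixing) eye enters the denominator and attenuates the target
    # response prior to excitatory binocular combination.
    excitation = c_target ** n
    pool = z + excitation + w_overlay * c_overlay ** n + w_surround * c_surround ** n
    return excitation / pool

unmasked = target_response(0.2)
overlay_masked = target_response(0.2, c_overlay=0.5)
surround_masked = target_response(0.2, c_surround=0.5)
print(unmasked > overlay_masked)         # True: the dichoptic mask suppresses the target
print(surround_masked > overlay_masked)  # True: surround weighted less than overlay here
```

In this sketch, stronger suppression simply reflects a larger mask weight, consistent with the abstract's conclusion that amblyopic suppression differs from normal dichoptic masking mainly in magnitude.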
Abstract:
In previous statnotes, the application of correlation and regression methods to the analysis of two variables (X,Y) was described. The most important statistic used to measure the degree of correlation between two variables is Pearson’s ‘product moment correlation coefficient’ (‘r’). The correlation between two variables may be due to their common relation to other variables. Hence, investigators using correlation studies need to be alert to the possibility of spurious correlation, and the methods of ‘partial correlation’ are one way of taking this into account. This statnote applies the methods of partial correlation to three scenarios: first, to a fairly obvious example of a spurious correlation resulting from the ‘size effect’, involving the relationship between the number of general practitioners (GP) and the number of deaths of patients in a town; second, to the relationship between the abundance of the nitrogen-fixing bacterium Azotobacter in soil and three soil variables; and finally, to a more complex scenario, first introduced in Statnote 24, involving the relationship between the growth of lichens in the field and climate.
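The first-order partial correlation the statnote applies is a standard formula; a minimal sketch follows. The GP/deaths numbers are invented solely to mimic the 'size effect', where both variables scale with town population:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient 'r'
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_r(x, y, z):
    # First-order partial correlation of x and y, controlling for z
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Invented data: GP numbers and deaths both scale with town population,
# so their raw correlation is spurious.
population = [10, 20, 30, 40, 50, 60]  # town size, thousands
gps = [0.5 * p + e for p, e in zip(population, [0.3, -0.3, 0.2, -0.2, 0.1, -0.1])]
deaths = [1.2 * p + e for p, e in zip(population, [0.2, 0.2, -0.3, -0.3, 0.1, 0.1])]

print(round(pearson_r(gps, deaths), 3))              # close to 1: spurious
print(round(partial_r(gps, deaths, population), 3))  # near zero once size is controlled
```

Controlling for population removes the shared size component, exposing the near-zero residual association between GPs and deaths.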
Abstract:
Purpose: The purpose of this paper is to examine the quality of evidence collected during interview. Current UK national guidance on the interviewing of victims and witnesses recommends a phased approach, allowing the interviewee to deliver their free report before any questioning takes place, and stipulating that during this free report the interviewee should not be interrupted. Interviewers, therefore, often find it necessary during questioning to reactivate parts of the interviewee's free report for further elaboration. Design/methodology/approach: The first section of this paper draws on a collection of police interviews with women reporting rape, and discusses one method by which this is achieved - the indirect quotation of the interviewee by the interviewer - exploring the potential implications for the quality of evidence collected during this type of interview. The second section of the paper draws on the same data set and concerns itself with a particular method by which information provided by an interviewee has its meaning "fixed" by the interviewer. Findings: It is found that "formulating" is a recurrent practice arising from the need to clarify elements of the account for the benefit of what is termed the "overhearing audience" - in this context, the police scribe, CPS, and potentially the Court. Since the means by which this "fixing" is achieved necessarily involves the foregrounding of elements of the account deemed to be particularly salient at the expense of other elements which may be entirely deleted, formulations are rarely entirely neutral. Their production, therefore, has the potential to exert undue interviewer influence over the negotiated "final version" of interviewees' accounts. 
Originality/value: The paper highlights the fact that accurate re-presentations of interviewees' accounts are a crucial tool in ensuring smooth progression of interviews and that re-stated speech and formulation often have implications for the quality of evidence collected during significant witness interviews. © Emerald Group Publishing Limited.
Abstract:
For more than a century it has been known that the eye is not a perfect optical system, but rather a system that suffers from aberrations beyond conventional prescriptive descriptions of defocus and astigmatism. Whereas traditional refraction attempts to describe the error of the eye with only two parameters, namely sphere and cylinder, measurements of wavefront aberrations depict the optical error with many more parameters. What remains questionable is the impact these additional parameters have on visual function. Some authors have argued that higher-order aberrations have a considerable effect on visual function and in certain cases this effect is significant enough to induce amblyopia. This has been referred to as ‘higher-order aberration-associated amblyopia’. In such cases, correction of higher-order aberrations would not restore visual function. Others have reported that patients with binocular asymmetric aberrations display an associated unilateral decrease in visual acuity and, if the decline in acuity results from the aberrations alone, such subjects may have been erroneously diagnosed as amblyopes. In these cases, correction of higher-order aberrations would restore visual function. This refractive entity has been termed ‘aberropia’. In order to investigate these hypotheses, the distribution of higher-order aberrations in strabismic, anisometropic and idiopathic amblyopes, and in a group of visual normals, was analysed both before and after wavefront-guided laser refractive correction. 
The results show: (i) there is no significant asymmetry in higher-order aberrations between amblyopic and fixing eyes prior to laser refractive treatment; (ii) the mean magnitude of higher-order aberrations is similar within the amblyopic and visually normal populations; (iii) a significant improvement in visual acuity can be realised for adult amblyopic patients utilising wavefront-guided laser refractive surgery, and a modest increase in contrast sensitivity was observed for the amblyopic eye of anisometropes following treatment; (iv) an overall trend towards increased higher-order aberrations following wavefront-guided laser refractive treatment was observed for both visually normal and amblyopic eyes. In conclusion, while the data do not provide any direct evidence for the concepts of either ‘aberropia’ or ‘higher-order aberration-associated amblyopia’, it is clear that gains in visual acuity and contrast sensitivity may be realised following laser refractive treatment of the amblyopic adult eye. Possible mechanisms by which these gains are realised are discussed.
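The magnitude of higher-order aberrations compared in studies like this is conventionally summarised as the root-sum-of-squares (RMS) of Zernike coefficients of radial order three and above. A minimal sketch with invented coefficients (these values are illustrative, not measured data):

```python
import math

# Invented Zernike coefficients in micrometres, keyed by
# (radial order n, angular frequency m). Illustrative values only.
coeffs = {
    (2, -2): 0.15,  # astigmatism          (lower order)
    (2,  0): 1.20,  # defocus              (lower order)
    (3, -1): 0.08,  # vertical coma        (higher order)
    (3,  1): 0.05,  # horizontal coma      (higher order)
    (4,  0): 0.04,  # spherical aberration (higher order)
}

def rms_wavefront_error(coeffs, min_order=0):
    # Root-sum-of-squares of coefficients with radial order >= min_order
    return math.sqrt(sum(c ** 2 for (n, _), c in coeffs.items() if n >= min_order))

total = rms_wavefront_error(coeffs)                      # all terms
higher_order = rms_wavefront_error(coeffs, min_order=3)  # n >= 3 only
print(f"total RMS = {total:.3f} um, higher-order RMS = {higher_order:.3f} um")
```

In this toy example the higher-order RMS is an order of magnitude smaller than the defocus-dominated total, which is why traditional sphere/cylinder refraction captures most, but not all, of the eye's optical error.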
Abstract:
When machining a large-scale aerospace part, the part is normally located and clamped firmly until a set of features has been machined. When the part is released, its size and shape may deform beyond the tolerance limits due to stress release. This paper presents the design of a new fixing method and flexible fixtures that automatically respond to workpiece deformation during machining. Deformation is inspected and monitored on-line, and part location and orientation can be adjusted in a timely manner to ensure follow-up operations are carried out under low stress and with respect to the related datum defined in the design models.
Abstract:
We characterize the preference domains on which the Borda count satisfies Maskin monotonicity. The basic concept is the notion of a "cyclic permutation domain" which arises by fixing one particular ordering of alternatives and including all its cyclic permutations. The cyclic permutation domains are exactly the maximal domains on which the Borda count is strategy-proof when combined with every possible tie breaking rule. It turns out that the Borda count is monotonic on a larger class of domains. We show that the maximal domains on which the Borda count satisfies Maskin monotonicity are the "cyclically nested permutation domains" which are obtained from the cyclic permutation domains in an appropriately specified recursive way.

*We thank József Mala for posing the question of Nash implementability on restricted domains that led to this research. We are very grateful to two anonymous referees and an associate editor for their helpful comments and suggestions. The second author gratefully acknowledges financial support from the Hungarian Academy of Sciences (MTA) through the Bolyai János research fellowship.
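The two central objects, the Borda count with a fixed tie-breaking rule and a cyclic permutation domain, can be sketched computationally, together with a brute-force strategy-proofness check. This is illustrative code under my own naming, not the authors' formalism:

```python
from itertools import product

def borda_winner(profile, alternatives, tie_break):
    # profile: list of strict rankings (best first). An alternative's Borda
    # score is the number of alternatives ranked below it, summed over voters.
    # Ties are broken by a fixed ordering of the alternatives.
    m = len(alternatives)
    scores = {a: 0 for a in alternatives}
    for ranking in profile:
        for pos, a in enumerate(ranking):
            scores[a] += m - 1 - pos
    best = max(scores.values())
    winners = [a for a in alternatives if scores[a] == best]
    return min(winners, key=tie_break.index)

def cyclic_permutation_domain(ordering):
    # Fix one ordering of the alternatives and take all its cyclic shifts
    return [tuple(ordering[i:] + ordering[:i]) for i in range(len(ordering))]

def is_strategyproof(domain, tie_break, n_voters=2):
    # Brute force: no voter obtains a strictly preferred winner by
    # reporting another ranking from the domain.
    alts = list(tie_break)
    for profile in product(domain, repeat=n_voters):
        w = borda_winner(list(profile), alts, tie_break)
        for i, truth in enumerate(profile):
            for lie in domain:
                deviated = list(profile)
                deviated[i] = lie
                w2 = borda_winner(deviated, alts, tie_break)
                if truth.index(w2) < truth.index(w):  # profitable manipulation
                    return False
    return True

domain = cyclic_permutation_domain(['a', 'b', 'c'])
print(domain)                                     # the three cyclic shifts of a>b>c
print(is_strategyproof(domain, ['a', 'b', 'c']))  # True on a cyclic permutation domain
```

Running the same check on the full domain of all six strict orderings of three alternatives finds profitable manipulations, in line with the maximality claim.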
Abstract:
Retail price fixing has been a disputed issue in theoretical economics for decades, and attention was drawn to it again by a recent decision of the US Supreme Court ending the per se illegal treatment of such price restrictions. This article examines the hitherto neglected competition-enhancing effect of price fixing. Assuming a dynamic environment instead of the customary static models, we conclude that it is frequently advantageous for a profit-maximizing producer to use retail price maintenance to avert the possible emergence of a reseller cartel, with a clearly positive effect not only on producer profits but also on consumer surplus. It is also argued that the still prevailing per se illegal treatment of vertical price restraints, found in the competition rules of most countries, is unjustified.
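The dynamic argument can be illustrated with the textbook grim-trigger condition for cartel sustainability in a repeated game. This is a generic sketch with hypothetical profit values, not the paper's actual model:

```python
def critical_discount_factor(pi_collude, pi_deviate, pi_punish):
    # Grim trigger: colluding forever beats a one-shot deviation followed
    # by permanent punishment iff
    #   pi_collude / (1 - d) >= pi_deviate + d * pi_punish / (1 - d),
    # i.e. iff d >= (pi_deviate - pi_collude) / (pi_deviate - pi_punish).
    return (pi_deviate - pi_collude) / (pi_deviate - pi_punish)

# Hypothetical per-period reseller profits (normalised): each of two symmetric
# resellers earns half the cartel profit, a deviator briefly grabs all of it,
# and price competition in the punishment phase drives profits to zero.
d_star = critical_discount_factor(pi_collude=0.5, pi_deviate=1.0, pi_punish=0.0)
print(d_star)  # 0.5: a reseller cartel is sustainable whenever delta >= 0.5

# Under retail price maintenance the producer pins the retail price at the
# competitive level, eliminating the collusive margin: pi_collude falls to
# pi_punish, so no discount factor below 1 sustains the cartel.
```

The point of the sketch is only the mechanism: by removing the margin resellers could collude over, the producer makes the cartel unsustainable at any discount factor.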
Abstract:
Diazotrophic (N2-fixing) cyanobacteria provide the biological source of new nitrogen for large parts of the ocean. However, little is known about their sensitivity to global change. Here we show that the single most important nitrogen fixer in today's ocean, Trichodesmium, is strongly affected by changes in CO2 concentrations. Cell division rate doubled with rising CO2 (glacial to projected year 2100 levels) prompting lower carbon, nitrogen and phosphorus cellular contents, and reduced cell dimensions. N2 fixation rates per unit of phosphorus utilization as well as C:P and N:P ratios more than doubled at high CO2, with no change in C:N ratios. This could enhance the productivity of N-limited oligotrophic oceans, drive some of these areas into P limitation, and increase biological carbon sequestration in the ocean. The observed CO2 sensitivity of Trichodesmium could thereby provide a strong negative feedback to atmospheric CO2 increase.
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
Léon Walras (1874) had already realized that his neo-classical general equilibrium model could not accommodate autonomous investment. Sen analysed the same issue in a simple, one-sector macroeconomic model of a closed economy. He showed that fixing investment in a model built strictly on neo-classical assumptions would make the system overdetermined, so that some neo-classical condition of competitive equilibrium must be loosened. He analysed three non-neo-classical "closure options" that could make the model well determined in the case of fixed investment. Others later extended his list, and it became clear that the closure dilemma, together with the choice of the adjustment mechanism assumed to bring about equilibrium at the macro level, arises in the more complex computable general equilibrium (CGE) models as well. By means of numerical models, it was also illustrated that the adopted closure rule can significantly affect the results of policy simulations based on a CGE model. Despite these warnings, the issue of macro closure is often neglected in policy simulations. It is, therefore, worth revisiting the issue, demonstrating its importance by further examples, and pointing out that the closure problem in CGE models extends well beyond the question of how to incorporate autonomous investment into a CGE model. Several closure rules are discussed in this paper and their diverse outcomes are illustrated by numerical models calibrated on statistical data. First, the analysis is done in a one-sector model, similar to Sen's, but extended into a model of an open economy. Next, the same analyses are repeated using a fully fledged multisectoral CGE model, calibrated on the same statistical data.
Comparing the results obtained by the two models, it is shown that although, under the same closure option, they generate quite similar results in terms of the direction and, to a somewhat lesser extent, the magnitude of change in the main macro variables, the predictions of the multisectoral CGE model are clearly more realistic and balanced.
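The closure dilemma in Sen's one-sector setting can be sketched with a toy saving-investment identity, where each "closure" picks a different variable to adjust. All numbers and function names are illustrative, not the paper's calibrated models:

```python
# Toy one-sector closed economy: income Y, saving S = s * Y, investment I.
# Equilibrium requires S = I; once I is fixed exogenously the system is
# overdetermined unless some closure rule names the adjusting variable.

def neoclassical_closure(Y, s):
    # Savings-driven: investment adjusts to whatever households save
    return {"Y": Y, "s": s, "I": s * Y}

def keynesian_closure(I, s):
    # Investment fixed: output adjusts through the multiplier 1 / s
    return {"Y": I / s, "s": s, "I": I}

def kaldorian_closure(I, Y):
    # Investment and output fixed (full employment): the saving rate adjusts
    return {"Y": Y, "s": I / Y, "I": I}

base_Y, base_s, fixed_I = 100.0, 0.2, 30.0
print(neoclassical_closure(base_Y, base_s))  # I = 20.0 follows saving
print(keynesian_closure(fixed_I, base_s))    # Y = 150.0 adjusts to fixed I
print(kaldorian_closure(fixed_I, base_Y))    # s = 0.3 adjusts to fixed I and Y
```

Even in this toy setting, the same exogenous investment produces very different macro outcomes depending on which closure is assumed, which is the core of the paper's warning about CGE policy simulations.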
Abstract:
The inorganic silicate fraction extracted from bulk pelagic sediments from the North Pacific Ocean is eolian dust. It monitors the composition of continental crust exposed to erosion in Asia. 176Lu/177Hf ratios of modern dust are subchondritic, between 0.011 and 0.016, but slightly elevated with respect to immature sediments. Modern dust samples display a large range in Hf isotopic composition (IC), -4.70 < epsilon-Hf < +16.45, which encompasses that observed for the time series of DSDP cores 885/886 and piston core LL44-GPC3 extending back to the late Cretaceous. Hafnium and neodymium isotopic results are consistent with a dominantly binary mixture of dust contributed from island arc volcanic material and dust from central Asia. The Hf-Nd isotopic correlation for all modern dust samples, epsilon-Hf = 0.78 epsilon-Nd + 5.66 (n = 22, R^2 = 0.79), is flatter than those reported so far for terrestrial reservoirs. Moreover, the variability in epsilon-Hf of Asian dust exceeds that predicted on the basis of corresponding epsilon-Nd values (-4.76 < epsilon-Hf < +2.5; -10.96 < epsilon-Nd < -10.1). This is attributed to: (1) the fixing of an important unradiogenic fraction of Hf in zircons, balanced by radiogenic Hf that is mobile in the erosional cycle; (2) the elevated Lu/Hf ratio in chemical sediments which, given time, results in a Hf signature that is radiogenic compared with the Hf expected from its corresponding Nd isotopic components; and (3) the possibility that diagenetic resetting of marine sediments may incorporate a significant radiogenic Hf component into diagenetically grown minerals such as illite. Together, these processes may explain the variability and more radiogenic character of Hf isotopes when compared to the Nd isotopic signatures of Asian dust. The Hf-Nd isotope time series of eolian dust are consistent with the results of modern dust except for two samples that have extremely radiogenic Hf for their Nd (epsilon-Hf = +8.6 and +10.3, epsilon-Nd = -9.5 and -9.8).
These data may point to a source contribution of dust unresolved by Nd and Pb isotopes. The Hf IC of eolian dust input to the oceans may be more variable and more radiogenic than previously anticipated. The Hf signature of Pacific seawater, however, has varied little over the past 20 Myr, especially across the drastic increase of eolian dust flux from Asia around 3.5 Ma. Therefore, continental contributions to seawater Hf appear to be riverine rather than eolian. Current predictions regarding the relative proportions of source components to seawater Hf must account for the presence of a variable and radiogenic continental component. Data on the IC and flux of river-dissolved Hf to the oceans are urgently required to better estimate contributions to seawater Hf. This then would permit the use of Hf isotopes as a monitor of past changes in erosion.
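Given the Hf-Nd dust correlation reported in the abstract (slope 0.78, intercept +5.66 in epsilon units), a sample's deviation from that array flags excess radiogenic Hf unexplained by its Nd. A minimal sketch; the sample values fed in are illustrative, not the paper's data:

```python
def predicted_eps_hf(eps_nd, slope=0.78, intercept=5.66):
    # Hf-Nd correlation for modern dust, as reported in the abstract
    return slope * eps_nd + intercept

def hf_excess(eps_hf, eps_nd):
    # Positive values flag samples carrying more radiogenic Hf than their
    # Nd isotopic composition predicts from the dust array
    return eps_hf - predicted_eps_hf(eps_nd)

# Hypothetical sample values, chosen only to illustrate the calculation:
print(round(hf_excess(10.3, -9.8), 2))  # well above the dust array
print(round(hf_excess(2.0, -4.0), 2))   # close to the dust array
```

Large positive excesses of this kind are what motivate the abstract's appeal to zircon retention, chemical sediments, and diagenetic resetting as sources of decoupled, radiogenic Hf.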
Abstract:
This paper analyses the factors involved in utterance interpretation, assuming that, as a process, interpretation involves the interaction of distinct modular systems of the mind: the Faculty of Language (FdeL), and the sensorimotor (SM) and conceptual-intentional (CI) systems. The latter includes mechanisms for building conceptual representations (C) and inferential mechanisms involved more globally in the fixing of intentional states (I). These systems external to the Faculty of Language impose constraints such that the cores of information reaching the interfaces are legible to the SM and CI systems (Chomsky, 1995-2008). In this sense, understanding interpretation as a process requires attending to the relation between the purely linguistic aspects (syntax, semantics), the prosodic aspects, and the inferential ones (pragmatics). This paper seeks to understand the workings of those elements of the linguistic system that facilitate obtaining the assumptions needed to carry out that process. Adopting a parallelism between syntactic categories and the semantic categories postulated in relevance theory, it attempts to elucidate how the varied linguistic evidence provided by a speaker (H) in communication operates, such that a hearer (O) can arrive at some hypothesis about the speaker's meaning. Through the analysis of various utterances, the scope of this parallelism is explored and a tentative characterization of the interpretation process is reached. Finally, following ideas from several authors, it is proposed that the FdeL-CI interface be conceived in terms of primitive pairs of information that are relevant to all the cognitive systems involved in communication.