28 results for Electrostatic interpretation
Abstract:
The way interviews are used in accounting research, and the way this research is written up, suggest that there is only one way to interpret these interviews. This invests the author(s) with great perceptive power and storytelling ability. What if different assumptions are used about how to interpret research, and how to present the ensuing findings? We give an illustration of what this might imply, using the notion of 'reflexivity'. The setting for our illustration concerns a series of interviews with management accountants on the dilemmas they face in their daily work. We apply Alvesson's ideas on how to use metaphors to open up the interpretation of interview accounts. The aim of the paper is to shed a different light on the way interviews can be used and interpreted in accounting research. We assert that allowing for reflexive accounts is likely to require research papers that are written substantially differently, in which the process of discovery is emphasized. © 2011 Elsevier Ltd.
Abstract:
This investigation originated from work by Dr. A.H. McIlraith of the National Physical Laboratory who, in 1966, described a new type of charged particle oscillator. This makes use of two equal cylindrical electrodes to constrain the particles in such a way that they follow extremely long oscillatory paths between the electrodes under the influence of an electrostatic field alone. The object of this work has been to study the principle of the oscillator in detail and to investigate its properties and applications. Any device which is capable of creating long electron trajectories has potential application in the field of ultra-high vacuum technology. It was therefore considered that a critical review of the problems associated with the production and measurement of ultra-high vacuum was relevant in the initial stages of the work. The oscillator has been applied with a considerable degree of success as a high energy electrostatic ion source. This offers several advantages over existing ion sources; it can be operated at much lower pressures without the need for a magnetic field. The oscillator principle has also been applied as a thermionic ionization gauge and has been compared with other ionization gauges down to pressures as low as 5 × 10⁻¹¹ torr. This new gauge exhibited a number of advantages over most of the existing gauges. Finally, the oscillator has been used in an evaporation ion pump and has exhibited fairly high pumping speeds for argon gas relative to those for nitrogen. This investigation supports the original work of Dr. A.H. McIlraith and shows that his proposed oscillator has considerable potential in the fields of vacuum technology and electron physics.
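To make the geometry concrete, below is a minimal numerical sketch of an electron moving in the field of two parallel, positively charged wires, the usual idealization of McIlraith's two-electrode oscillator. All parameters (line-charge density, wire separation, launch conditions) are illustrative assumptions, not values from the thesis.

```python
# Sketch of a McIlraith-type two-wire electrostatic oscillator.
# The electrodes are idealized as infinite parallel line charges,
# which is only a rough model of the real device; all parameters
# below are assumptions for illustration.
import numpy as np
from scipy.integrate import solve_ivp

E_CHARGE = 1.602e-19   # electron charge magnitude [C]
M_E      = 9.109e-31   # electron mass [kg]
EPS0     = 8.854e-12   # vacuum permittivity [F/m]

LAMBDA = 1e-9          # assumed line-charge density on each wire [C/m]
A      = 5e-3          # assumed half-separation of the wires [m]
WIRES  = [(-A, 0.0), (A, 0.0)]

def rhs(t, s):
    """State s = (x, y, vx, vy); summed force from both wires."""
    x, y, vx, vy = s
    ax = ay = 0.0
    for wx, wy in WIRES:
        dx, dy = x - wx, y - wy
        r2 = dx * dx + dy * dy
        # Line-charge field: lambda / (2 pi eps0 r), directed radially.
        # The electron (charge -e) is pulled toward the positive wires.
        coef = -E_CHARGE * LAMBDA / (2 * np.pi * EPS0 * M_E * r2)
        ax += coef * dx
        ay += coef * dy
    return [vx, vy, ax, ay]

# Launch the electron off-axis with a small transverse velocity and
# follow the long oscillatory path between the electrodes.
sol = solve_ivp(rhs, (0, 2e-7), [0.0, 2e-3, 1e5, 0.0],
                max_step=1e-11, rtol=1e-9)
path_len = np.sum(np.hypot(np.diff(sol.y[0]), np.diff(sol.y[1])))
print(f"trajectory length over {sol.t[-1]:.1e} s: {path_len * 100:.1f} cm")
```

Even with these toy parameters the electron accumulates a path many times longer than the electrode separation before being collected, which is the property exploited in the ion source, gauge, and pump applications described above.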
Abstract:
EEG hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyperconnectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyperconnections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyperconnections where none existed. With pseudo-pairs of human data we found spurious hyperconnections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyperconnections reflected real similarities between the EEGs, they were not Type I errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyperconnections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated. © 2013 Burgess.
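For readers unfamiliar with the measure, the circular correlation coefficient has a standard closed form (the Jammalamadaka-SenGupta definition). The sketch below computes it for two phase time series; the toy signals are our own, chosen to mimic a rhythm shared across two recordings with independent phase jitter, which is exactly the kind of genuine cross-participant similarity the abstract discusses.

```python
# Circular correlation coefficient (CCorr) between two phase series,
# e.g. instantaneous EEG phases from two participants.
import numpy as np

def circular_mean(phases):
    """Direction of the mean resultant vector of a sample of angles."""
    return np.angle(np.mean(np.exp(1j * phases)))

def ccorr(alpha, beta):
    """Circular correlation between two arrays of phases (radians)."""
    da = np.sin(alpha - circular_mean(alpha))
    db = np.sin(beta - circular_mean(beta))
    return np.sum(da * db) / np.sqrt(np.sum(da**2) * np.sum(db**2))

# Toy check: a common 10 Hz rhythm with independent phase jitter in
# each "participant" yields a clearly positive CCorr.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 250)                 # 2 s at 250 Hz
phase = 2 * np.pi * 10 * t
a = np.angle(np.exp(1j * (phase + 0.5 * rng.standard_normal(t.size))))
b = np.angle(np.exp(1j * (phase + 0.5 * rng.standard_normal(t.size))))
print(f"CCorr = {ccorr(a, b):.3f}")
```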
Abstract:
We show that the variation in dispersion managed soliton energy that occurs as the amplifier position varies within the dispersion map, for a fixed map strength, can be interpreted using the concept of effective average dispersion. Using this concept we physically explain why the location of the amplifier can produce a greater or lesser energy enhancement factor than the lossless model. © 2001 Elsevier Science B.V. All rights reserved.
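One plausible way to formalize "effective average dispersion" (our notation and our hedged reading of the concept, not necessarily the authors' exact definition) is a power-weighted path average: because fiber loss and lumped amplification make the pulse power vary along the dispersion map, map segments traversed at high power weigh more heavily in the dispersive-nonlinear balance.

```latex
% Hedged formalization: weight the local dispersion D(z) by the power
% profile P(z) set by the loss \alpha and the amplifier position z_a
% within a map of period L.
\bar{D}_{\mathrm{eff}}
  = \frac{\int_{0}^{L} D(z)\, P(z)\, \mathrm{d}z}
         {\int_{0}^{L} P(z)\, \mathrm{d}z},
\qquad
P(z) = P_{0}\, \exp\!\bigl(-\alpha \,[(z - z_{a}) \bmod L]\bigr)
```

Under this reading, moving the amplifier position z_a shifts where in the map the pulse is most intense, so the effective average dispersion, and with it the required soliton energy, changes even though the path-average dispersion and map strength are fixed.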
Abstract:
Neuroimaging (NI) technologies are having increasing impact in the study of complex cognitive and social processes. In this emerging field of social cognitive neuroscience, a central goal should be to increase the understanding of the interaction between the neurobiology of the individual and the environment in which humans develop and function. The study of sex/gender is often a focus for NI research, and may be motivated by a desire to better understand general developmental principles, mental health problems that show female-male disparities, and gendered differences in society. In order to ensure the maximum possible contribution of NI research to these goals, we draw attention to four key principles—overlap, mosaicism, contingency and entanglement—that have emerged from sex/gender research and that should inform NI research design, analysis and interpretation. We discuss the implications of these principles in the form of constructive guidelines and suggestions for researchers, editors, reviewers and science communicators.
Abstract:
This article discusses the question of compositionality by examining whether the indiscriminacy reading of the collocation of just with any can be shown to be a consequence of the schematic meaning-potential of each of these two items. A comparison of just with other restrictive focus particles allows its schematic meaning to be defined as that of goodness of fit. Any is defined as representing an indefinite member of a set as extractable from the set in exactly the same way as each of the other members thereof. The collocation just any often gives rise to a scalar reading oriented towards the lowest value on the scale, because focus on the unconstrained extractability of a random indefinite item brings into consideration even marginal cases, and the latter tend to be interpreted as situated on the lower end of the scale. The attention to low-end values also explains why just any is regularly found with the adjective old, the prepositional phrase at all and various devaluating expressions. It is concluded that the meanings of the component parts of this collocation do indeed account for the meaning of the whole, and that an appropriate methodology allows identification of linguistic meanings and their interrelations. © 2011 Elsevier B.V.
Abstract:
In SNAP (surface nanoscale axial photonics) resonators, the propagation of a slow whispering-gallery mode along an optical fiber is controlled by nanoscale variation of the effective radius of the fiber [1]. Similar behavior can be realized in so-called nanobump microresonators, in which the introduced variation of the effective radius is asymmetric, i.e. depends on the axial coordinate [2]. The possibility of realizing such structures “on the fly” in an optical fiber by applying external electrostatic fields is discussed in this work. It is shown that local variations in the effective radius of the fiber and in its refractive index caused by external electric fields can be large enough to observe SNAP-structure-like behavior in an originally flat optical fiber. Theoretical estimates of the introduced refractive index and effective radius changes and results of finite element calculations are presented. Various effects are taken into account: electromechanical (piezoelectricity and electrostriction), electro-optic (Pockels and Kerr effects) and elasto-optic. Different initial fiber cross-sections are studied. The aspects of using linear isotropic (such as silica) and nonlinear anisotropic (such as lithium niobate) fiber materials are discussed.
References: [1] M. Sumetsky and J. M. Fini, Opt. Express 19, 26470 (2011). [2] L. A. Kochkurov and M. Sumetsky, Opt. Lett. 40, 1430 (2015).
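As a rough illustration of why the proposed effect can be observable, a textbook Pockels-effect estimate for a lithium niobate fiber already yields an effective-radius change well above the nanometer scale that SNAP requires. The applied field and fiber radius below are our assumptions, not figures from the paper; note that silica, being centrosymmetric, has no Pockels effect and would rely on the weaker Kerr and electrostrictive contributions mentioned in the abstract.

```python
# Back-of-envelope estimate of the field-induced index change in a
# lithium niobate fiber via the Pockels effect: |dn| ~ (1/2) n^3 r33 E.
# Material constants are standard literature values; the applied field
# and fiber radius are assumptions for illustration.
N_E   = 2.14        # extraordinary index of LiNbO3 (approx., near 1.5 um)
R33   = 30.8e-12    # Pockels coefficient r33 [m/V]
E_APP = 1e7         # assumed applied field [V/m] (10 V/um)

dn = 0.5 * N_E**3 * R33 * E_APP

# In SNAP, the effective-radius variation combines geometry and index,
# roughly d(r_eff)/r ~ dr/r + dn/n; here we keep only the index term.
R_FIBER = 62.5e-6   # assumed fiber radius [m]
dr_eff = R_FIBER * dn / N_E
print(f"dn ~ {dn:.2e}, effective-radius change ~ {dr_eff * 1e9:.1f} nm")
```

With these numbers the index change alone corresponds to tens of nanometers of effective radius, comfortably above the nanometer-scale variations that define SNAP structures.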
Abstract:
A protocol with repeated stimulation cycles should be analyzed stepwise, in that each stimulation is evaluated and a reaction pattern is identified. No two subjects will react identically, in that dilation and recovery times can vary; however, this is not reason enough to abandon a multiple-stimulation-cycle protocol with fixed recovery and stimulation times. Furthermore, such a protocol enables us to determine the range within which a normal subject falls, which can then be compared to different pathophysiological states (e.g., smokers and various diseases). The purpose of our paper was to highlight the importance of evaluating these different cycles and the danger of misinterpretation when averaging results. There are many different ways of evaluating dilatory responses and elasticity, but each of them must be carefully evaluated and should not be over-averaged, which can result in a loss of sensitivity and specificity.
Abstract:
This paper presents a new interpretation for the Superpave IDT strength test based on a viscoelastic-damage framework. The framework is based on continuum damage mechanics and the thermodynamics of irreversible processes with an anisotropic damage representation. The new approach introduces considerations for the viscoelastic effects and the damage accumulation that accompanies the fracture process in the interpretation of the Superpave IDT strength test for the identification of the Dissipated Creep Strain Energy (DCSE) limit from the test result. The viscoelastic model is implemented in a Finite Element Method (FEM) program for the simulation of the Superpave IDT strength test. The DCSE values obtained using the new approach are compared with the values obtained using the conventional approach to evaluate the validity of the assumptions made in the conventional interpretation of the test results. The results show that the conventional approach overestimates the DCSE value, with the estimation error increasing at higher deformation rates.
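For context, the conventional DCSE interpretation that the paper re-examines is commonly computed as fracture energy minus recoverable elastic energy. The sketch below follows that recipe (our paraphrase of the usual HMA fracture-mechanics formulation, not the paper's new viscoelastic-damage approach), with a fabricated stress-strain record.

```python
# Sketch of the *conventional* DCSE interpretation of the Superpave IDT
# strength test: DCSE = fracture energy - recoverable elastic energy.
# This is our paraphrase of the common recipe; all data are made up.
import numpy as np

def dcse_conventional(strain, stress, resilient_modulus):
    """strain [-] and stress [MPa] up to fracture; modulus in MPa."""
    # Fracture energy: area under the stress-strain curve (trapezoid rule).
    fracture_energy = float(np.sum(
        0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
    s_t = float(stress.max())                  # tensile strength
    elastic_energy = s_t**2 / (2 * resilient_modulus)
    return fracture_energy - elastic_energy

# Hypothetical stress-strain record from a single IDT strength test.
strain = np.linspace(0.0, 0.003, 50)
stress = 2.5 * (1.0 - np.exp(-strain / 0.001))  # made-up curve [MPa]
print(f"DCSE ~ {dcse_conventional(strain, stress, 9000.0):.4f} MJ/m^3")
```

The paper's point can be read against this recipe: because the conventional calculation ignores rate-dependent viscoelastic dissipation, the energy it assigns to damage grows spuriously with deformation rate.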
Abstract:
The representation of serial position in sequences is an important topic in a variety of cognitive areas including the domains of language, memory, and motor control. In the neuropsychological literature, serial position data have often been normalized across different lengths, and an improved procedure for this has recently been reported by Machtynger and Shallice (2009). Effects of length and a U-shaped normalized serial position curve have been criteria for identifying working memory deficits. We present simulations and analyses to illustrate some of the issues that arise when relating serial position data to specific theories. We show that critical distinctions are often difficult to make based on normalized data. We suggest that curves for different lengths are best presented in their raw form and that binomial regression can be used to answer specific questions about the effects of length, position, and linear or nonlinear shape that are critical to making theoretical distinctions. © 2010 Psychology Press.
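The suggested alternative to normalization can be sketched directly: keep the serial position curves in raw form and fit a binomial (logistic) regression to correct/error counts, with terms for length, position, and a quadratic position term for the U-shape. The data below are fabricated for illustration.

```python
# Binomial regression on raw serial position data, as the abstract
# suggests: test effects of length, position, and a quadratic
# (U-shape) term. All counts below are fabricated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
rows = []
for length in (4, 5, 6):
    for pos in range(1, length + 1):
        mid = (length + 1) / 2
        # Fabricated U-shaped accuracy: best at the ends of the list.
        p = 0.9 - 0.25 * (1 - ((pos - mid) / mid) ** 2)
        correct = rng.binomial(20, p)          # 20 trials per cell
        rows.append(dict(length=length, pos=pos,
                         correct=correct, errors=20 - correct))
df = pd.DataFrame(rows)

# Two-column (successes, failures) response with a logit link.
X = sm.add_constant(pd.DataFrame({
    "length": df["length"], "pos": df["pos"], "pos_sq": df["pos"] ** 2}))
fit = sm.GLM(df[["correct", "errors"]], X,
             family=sm.families.Binomial()).fit()
print(fit.summary())
```

Working on raw counts in this way keeps length and position effects separable, which is exactly the distinction the abstract argues is lost when curves are normalized before comparison.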
Abstract:
The semantic model developed in this research was a response to the difficulty a group of mathematics learners had with conventional mathematical language and their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to include the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment introduced another layer of complexity, both in terms of the learning process and in terms of the underlying relationship between the four cognitive processes.