21 results for intuition
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The study of intuition is an emerging area of research in psychology, the social sciences, and business studies. It is increasingly of interest to the study of management, for example in decision-making as a counterpoint to structured approaches. Recently, work has been undertaken to conceptualize a construct for the intuitive nature of technology. However, to date there is no common understanding of the term intuition in information systems (IS) research. This paper extends the study of intuition in IS research by using exploratory research to categorize the use of the word “intuition” and related terms in papers published in two prominent IS journals over a ten-year period. The entire text of MIS Quarterly and Information Systems Research was reviewed for the years 1999 through 2008 using searchable PDF versions of these publications. As far as could be determined, this is the first application of this approach to the analysis of the text of IS academic journals. The use of the word “intuition” and related terms was categorized using coding consistent with Grounded Theory. The focus of this research was on the first two stages of Grounded Theory analysis: the development of codes and constructs. Saturation of coding was not reached; an extended review of these publications would be required to enable theory development. Over 400 incidents of the use of “intuition” and related terms were found in the articles reviewed. The most prominent use of the term “intuition” was coded as “Intuition as Authority”, in which intuition was used to validate a research objective or finding; it represented approximately 37 per cent of codes assigned. The second most common coding occurred in research articles with mathematical analysis, representing about 19 per cent of the codes assigned, for example where a mathematical formulation or result was “intuitive”. The possibly most impactful use of the term “intuition” was “Intuition as Outcome”, representing approximately 7 per cent of all coding, which characterized research results as adding to the intuitive understanding of a research topic or phenomenon. This research contributes to a greater theoretical understanding of intuition, enabling insight into its use and the eventual development of a theory of the use of intuition in academic IS research publications. It also provides potential benefits to practitioners by giving insight into, and validation of, the use of intuition in IS management. Research directions include the creation of reflective and/or formative constructs for intuition in information systems research.
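The search step this abstract describes can be made concrete with a short sketch: locating incidents of “intuition” and related terms in text extracted from searchable PDFs, keeping the surrounding context for later coding. This is a minimal illustration, not the authors' tooling; the file name, term pattern, and context window are assumptions, and the Grounded Theory coding itself was an interpretive, manual step that no script reproduces.

```python
# A minimal sketch of the term-search step: find incidents of
# "intuition" and related terms in text extracted from a searchable PDF.
# The file name, regex, and context window are illustrative assumptions.
import re
from collections import Counter

TERMS = re.compile(r"\bintuit\w*", re.IGNORECASE)  # intuition, intuitive, ...

def find_incidents(text, window=60):
    """Return each match with surrounding context for manual coding."""
    incidents = []
    for m in TERMS.finditer(text):
        start, end = max(0, m.start() - window), m.end() + window
        incidents.append((m.group(0), text[start:end].replace("\n", " ")))
    return incidents

with open("misq_1999.txt", encoding="utf-8") as f:  # hypothetical extracted text
    incidents = find_incidents(f.read())

print(f"{len(incidents)} incidents found")
print(Counter(term.lower() for term, _ in incidents))
```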
Abstract:
Intuition is an important and under-researched concept in information systems. Prior exploratory research has shown that there is potential to characterize the use of intuition in academic information systems research. This paper extends that research to all of the available issues of two leading IS journals with the aim of reaching an approximation of theoretical saturation. Specifically, the entire text of MISQ and ISR was reviewed for the years 1990 through 2009 using searchable PDF versions of these publications. All references to intuition were coded on a basis consistent with Grounded Theory, interpreted as a gestalt, and represented as a mind-map. In the period 1990-2009, 681 incidents of the use of "intuition" and related terms were found in the articles reviewed, representing a greater range of codes than prior research. In addition, codes were assigned to all issues of MIS Quarterly from the commencement of publication to the end of the 2012 publication year to support the conjecture that coding saturation has been approximated. The most prominent use of the term "intuition" was coded as "Intuition as Authority", in which intuition was used to validate a statement, research objective or finding; it represented approximately 34 per cent of codes assigned. In research articles where mathematical analysis was presented, researchers not infrequently commented on the degree to which a mathematical formulation was "intuitive"; this was the second most common coding, representing approximately 16 per cent of the codes. The possibly most impactful use of the term "intuition" was "Intuition as Outcome", representing approximately 7 per cent of all coding, which characterized research results as adding to the intuitive understanding of a research topic or phenomenon. This research aims to contribute to a greater theoretical understanding of the use of intuition in academic IS research publications. It provides potential benefits to practitioners by giving insight into the use of intuition in IS management, for example by emphasizing the emerging importance of "intuitive technology". Research directions include the creation of reflective and/or formative constructs for intuition in information systems research and the expansion of this novel research method to additional IS academic publications and topics.
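Once codes have been assigned by hand, the tallying behind the percentages quoted above is mechanical. A minimal sketch follows; the code label "Intuition in Mathematics" and the counts themselves are illustrative values chosen only to match the quoted shares of 681 incidents, not figures from the paper.

```python
# A minimal sketch of the tallying step behind the quoted percentages,
# assuming the manual Grounded Theory pass already coded each incident.
# Counts are illustrative, chosen to match the abstract's percentages.
from collections import Counter

codes = (["Intuition as Authority"] * 231      # ~34% of 681
         + ["Intuition in Mathematics"] * 109  # ~16%; label is hypothetical
         + ["Intuition as Outcome"] * 48       # ~7%
         + ["Other"] * 293)                    # remainder

tally = Counter(codes)
for code, n in tally.most_common():
    print(f"{code:25s} {n:4d}  {100 * n / len(codes):5.1f}%")
```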
Abstract:
Recent decades have seen a surge in interest in metaphilosophy. In particular there has been an interest in philosophical methodology. Various questions have been asked about philosophical methods. Are our methods any good? Can we improve upon them? Prior to such evaluative and ameliorative concerns, however, is the matter of what methods philosophers actually use. Worryingly, our understanding of philosophical methodology is impoverished in various respects. This article considers one particular respect in which we seem to be missing an important part of the picture. While it is a received wisdom that the word “intuition” has exploded across analytic philosophy in recent decades, the article presents evidence that the explosion is apparent across a broad swathe of academia (and perhaps beyond). It notes various implications for current methodological debates about the role of intuitions in philosophy.
Abstract:
I argue that the account of self-evidence developed by Robert Audi cannot be true, and offer an alternative account in terms of intuitions, understood as seemings.
Abstract:
The word “intuition” is frequently used in philosophy. It is often assumed that the way in which philosophers use the word, and others like it, is very distinctive. This claim has been subjected to little empirical scrutiny, however. This article presents the first steps in a qualitative analysis of the use of intuition talk in the academy. It presents the findings of two preliminary empirical studies. The first study examines the use of intuition talk in spoken academic English; the second examines its use in written academic English. The article considers what these studies tell us about the distinctiveness of philosophical language and methods, and draws some implications for evaluative and ameliorative methodology.
Abstract:
Matheron's usual variogram estimator can result in unreliable variograms when data are strongly asymmetric or skewed. Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population and contaminate the primary process. This paper, the first of two, examines the effects of underlying asymmetry on the variogram and on the accuracy of prediction; the second examines the effects arising from outliers. Standard geostatistical texts suggest ways of dealing with underlying asymmetry; however, this advice is based on informed intuition rather than detailed investigation. To determine whether the methods generally used to deal with underlying asymmetry are appropriate, the effects of different coefficients of skewness on the shape of the experimental variogram and on the model parameters were investigated. Simulated annealing was used to create normally distributed random fields of different sizes from variograms with different nugget:sill ratios. These data were then modified to give different degrees of asymmetry, and the experimental variogram was computed in each case. The effects of standard data transformations on the form of the variogram were also investigated. Cross-validation was used to assess quantitatively the performance of the different variogram models for kriging. The results showed that the shape of the variogram was affected by the degree of asymmetry, and that the effect increased as the size of the data set decreased. Transformations of the data were more effective in reducing the skewness coefficient in the larger sets of data. Cross-validation confirmed that variogram models from transformed data were more suitable for kriging than those from the raw asymmetric data. The results of this study have implications for 'standard best practice' in dealing with asymmetry in data for geostatistical analyses.
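For reference, Matheron's method-of-moments estimator computes the semivariance at lag h as half the average squared difference of all data pairs separated by h. The sketch below implements it for a regularly spaced 1-D transect and applies a log transform of the kind the abstract discusses to skewed data; the grid, lag range, and lognormal example are illustrative assumptions, not the paper's simulation protocol.

```python
# A minimal sketch of Matheron's estimator,
# gamma(h) = (1 / 2N(h)) * sum (z(x_i) - z(x_i + h))^2,
# on a regularly spaced 1-D transect. Data are illustrative.
import numpy as np

def matheron_variogram(z, max_lag):
    """Semivariance at integer lags 1..max_lag for a regular 1-D transect."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.empty(len(lags))
    for k, h in enumerate(lags):
        d = z[h:] - z[:-h]                # all pairs separated by lag h
        gamma[k] = 0.5 * np.mean(d ** 2)  # Matheron's estimator
    return lags, gamma

rng = np.random.default_rng(0)
z = np.exp(rng.normal(size=500))          # strongly skewed (lognormal) data
lags, g_raw = matheron_variogram(z, 20)
lags, g_log = matheron_variogram(np.log(z), 20)  # transform reduces skewness
print(g_raw[:5], g_log[:5])
```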
Abstract:
Three experiments examine whether simple pair-wise comparison judgments, involving the “recognition heuristic” (Goldstein & Gigerenzer, 2002), are sensitive to implicit cues to the nature of the comparison required. Experiments 1 & 2 show that participants frequently choose the recognized option of a pair if asked to make “larger” judgments but are significantly less likely to choose the unrecognized option when asked to make “smaller” judgments. Experiment 3 demonstrates that, overall, participants consider recognition to be a more reliable guide to judgments of a magnitude criterion than lack of recognition and that this intuition drives the framing effect. These results support the idea that, when making pair-wise comparison judgments, inferring that the recognized item is large is simpler than inferring that the unrecognized item is small.
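The recognition heuristic invoked above has a simple decision-rule form, sketched below for a pair-wise "larger" judgment; the items and the recognition set are illustrative assumptions.

```python
# A minimal sketch of the recognition heuristic (Goldstein & Gigerenzer,
# 2002): in a pair-wise "larger" judgment, choose the recognized option
# when exactly one of the two items is recognized.
def recognition_heuristic(a, b, recognized):
    """Return the inferred larger item, or None if recognition cannot decide."""
    if (a in recognized) != (b in recognized):  # exactly one recognized
        return a if a in recognized else b
    return None  # both or neither recognized: fall back to other cues

recognized = {"Berlin", "Munich"}                             # illustrative
print(recognition_heuristic("Berlin", "Herne", recognized))   # -> Berlin
print(recognition_heuristic("Berlin", "Munich", recognized))  # -> None
```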
Abstract:
The objective of this paper is to revisit the von Liebig hypothesis by reexamining five samples of experimental data and applying to them recent advances in Bayesian techniques. The samples were published by Hexem and Heady, as described in a later section. Prior to outlining the estimation strategy, we discuss the intuition underlying our approach and, briefly, the literature on which it is based. We present an algorithm for the basic von Liebig formulation and demonstrate its application using simulated data (table 1). We then discuss the modifications to the basic model needed to facilitate estimation of a von Liebig frontier, and demonstrate the extended algorithm using simulated data (table 2). We then explore, empirically, the relationships between limiting water and nitrogen in the Hexem and Heady corn samples and compare the results between the two formulations (table 3). Finally, some conclusions and suggestions for further research are offered.
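As background, the von Liebig hypothesis is a "law of the minimum": yield is limited by the scarcest input up to a plateau. The sketch below shows this functional form only; the coefficients and plateau value are illustrative assumptions, since the paper estimates such models with Bayesian techniques rather than fixing them.

```python
# A minimal sketch of the von Liebig "law of the minimum" response form:
# yield is the lowest of the linear responses to each input, capped at a
# plateau. All parameter values are illustrative assumptions.
import numpy as np

def von_liebig_yield(water, nitrogen, b_w=0.8, b_n=1.2, plateau=100.0):
    """Yield = min(response to water, response to nitrogen, plateau)."""
    return np.minimum(plateau, np.minimum(b_w * water, b_n * nitrogen))

water = np.array([50.0, 150.0, 200.0])
nitrogen = np.array([90.0, 60.0, 120.0])
# Cases: water-limited, nitrogen-limited, plateau-limited.
print(von_liebig_yield(water, nitrogen))  # -> [ 40.  72. 100.]
```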
Abstract:
A review of current risk pricing practices in the financial, insurance and construction sectors is conducted through a comprehensive literature review. The purpose was to inform a study on risk and price in the tendering processes of contractors: specifically, how contractors take account of risk when calculating their bids for construction work. The reference to mainstream literature was in view of construction management research as a field of application rather than a fundamental academic discipline. Analytical models are used for risk pricing in the financial sector. Certain mathematical laws and principles of insurance are used to price risk in the insurance sector. Construction contractors and practitioners, by contrast, are described as traditionally pricing allowances for project risk using mechanisms such as intuition and experience. Project risk analysis models have proliferated in recent years; however, they are rarely used because of the problems practitioners face when confronted with them. A discussion of practices across the three sectors shows that the construction industry does not approach risk with the sophisticated mechanisms of the other two sectors. This is not a poor situation in itself; however, knowledge transfer from finance and insurance can help construction practitioners. Formal risk models for contractors should also be informed by the commercial exigencies and unique characteristics of the construction sector.
Abstract:
Recently a substantial amount of research has been done in the field of dextrous manipulation and hand manoeuvres. The main concern has been how to control robot hands so that they can execute manipulation tasks with the same dexterity and intuition as human hands. This paper surveys multi-fingered robot hand research and development topics, which include robot hand design, object force distribution and control, grip transform, grasp stability and its synthesis, grasp stiffness and compliance motion, and robot arm-hand coordination. Three main topics are presented in this article. The first is an introduction to the subject. The second concentrates on examples of mechanical manipulators used in research and the methods employed to control them. The third presents work which has been done in the field of object manipulation.
Abstract:
Programming is a skill which requires knowledge of both the basic constructs of the computer language used and techniques employing these constructs. How these are used in any given application is determined intuitively, and this intuition is based on experience of programs already written. One aim of this book is to describe the techniques and give practical examples of the techniques in action, to provide some experience. Another aim of the book is to show how a program should be developed, in particular how a relatively large program should be tackled in a structured manner. These aims are accomplished essentially by describing the writing of one large program, a diagram generator package, in which a number of useful programming techniques are employed. The book also provides a useful program, with an in-built manual describing not only what the program does, but also how it does it, with full source code listings. This means that the user can, if required, modify the package to meet particular requirements. A floppy disk is available from the publishers containing the program, including listings of the source code. All the programs are written in Modula-2, using JPI's Top Speed Modula-2 system running on IBM-PCs and compatibles. This language was chosen as it is an ideal language for implementing large programs and it is the main language taught in the Cybernetics Department at the University of Reading. There are some aspects of the Top Speed implementation which are not standard, so suitable comments are given where these occur. Although implemented in Modula-2, many of the techniques described here are appropriate to other languages, like Pascal or C, for example. The book and programs are based on a second-year undergraduate course taught at Reading to Cybernetics students, entitled Algorithms and Data Structures. Useful techniques are described for the reader to use, and applications where they are appropriate are recommended, but detailed analyses of the techniques are not given.
Abstract:
In this paper the authors exploit two equivalent formulations of the average rate of material entropy production in the climate system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. This approach is based only on 2D radiative fields at the surface and at the top of atmosphere. Using 2D fields at the top of atmosphere alone, lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle are derived. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamic processes, it is possible to gain insight into a previous intuition on the possibility of defining a baroclinic heat engine extracting work from the meridional heat flux. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the Program for Climate Model Diagnosis and Intercomparison (PCMDI)/phase 3 of the Coupled Model Intercomparison Project (CMIP3) dataset in preindustrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large-scale meridional heat transport contributes to only about 10% of the total. This suggests that the traditional two-box models used for providing a minimal representation of entropy production in planetary systems are not appropriate, whereas a basic—but conceptually correct—description can be framed in terms of a four-box model. The total material entropy production is typically 55 mW m−2 K−1, with discrepancies on the order of 5%, and CMs’ baroclinic efficiencies are clustered around 0.055. The lower bounds on the intensity of the Lorenz energy cycle featured by CMs are found to be around 1.0–1.5 W m−2, which implies that the derived inequality is rather stringent. When looking at the variability and covariability of the considered thermodynamic quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain. The contributions to material entropy production from vertical and horizontal processes are positively correlated, so that no compensation mechanism seems in place. Quite consistently among CMs, the variability of the efficiency of the system is a better proxy for variability of the entropy production due to horizontal processes than that of the large-scale heat flux. The possibility of providing constraints on the 3D dynamics of the fluid envelope based only on 2D observations of radiative fluxes seems promising for the observational study of planets and for testing numerical models.
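One plausible reading of the TOA-only bound mentioned above, sketched here under loud assumptions: with R_TOA the net downward radiative flux and T_E an emission temperature inferred from outgoing longwave radiation, an area-weighted global mean of -R_TOA/T_E yields a quantity of the right sign and order for the entropy production associated with horizontal transport. The zonal fields below are synthetic, not model output, and the formula is a reconstruction rather than the paper's exact derivation.

```python
# A minimal sketch of a 2-D TOA-only diagnostic: the area-weighted global
# mean of -R_TOA / T_E. All fields below are synthetic and illustrative.
import numpy as np

SIGMA_SB = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

lat = np.deg2rad(np.linspace(-89.0, 89.0, 180))
w = np.cos(lat) / np.cos(lat).sum()   # area weights on a latitude grid

p2 = (3 * np.sin(lat) ** 2 - 1) / 2   # Legendre P2: zero global mean
r_toa = -100.0 * p2                   # net TOA flux: surplus in the tropics
olr = 240.0 - 30.0 * p2               # OLR: larger over the warm tropics
t_e = (olr / SIGMA_SB) ** 0.25        # emission temperature, K

bound = np.sum(w * (-r_toa / t_e))    # area-weighted mean of -R_TOA / T_E
print(f"entropy-production bound ~ {bound * 1e3:.1f} mW m-2 K-1")
```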