870 results for reasoning biases
Abstract:
This paper explores how the amalgamated wisdom of East and West can instigate a wisdom-based renaissance of humanistic epistemology (Rooney & McKenna, 2005) to provide a platform of harmony in managing knowledge-worker productivity, one of the biggest management challenges of the 21st century (Drucker, 1999). The paper invites further discussion from the social and business research communities on the significance of the "interpretation realism" technique in comprehending the philosophies of Lao Tzu, Confucius, and Sun Tzu (Lao/Confucius/Sun) written in Classical Chinese. This paper concludes with a call to build prudent, responsible management practices that affect the daily lives of many (Rooney & McKenna, 2005) in today's knowledge-based economy. Interpretation Realism will be applied to an analysis of three Chinese classics of Lao/Confucius/Sun which have been embodied in Chinese culture for over 2,500 years. Comprehending Lao/Confucius/Sun's philosophies is the first step towards understanding Classical Chinese culture. However, interpreting Chinese subtlety in language and the yin-and-yang circular synthesis in the Chinese mode of thinking is very different from understanding Western thought, with its open communication and its linear, analytical pattern of Aristotelian/Platonic wisdom (Zuo, 2012). Furthermore, Eastern ways of communication are relatively indirect and mediatory in culture, whereas Western ways of communication are relatively direct and litigious in culture (Goh, 2002). Lao/Confucius/Sun's philosophies are also difficult to comprehend because there are four written Chinese formats (Pre-classical Chinese, Classical Chinese, Literary Chinese, and modern Vernacular Chinese) and over 250 dialects. Because Classical Chinese is poetic, comprehension requires a mixed approach of interpretation realism combining the logical reasoning behind "word splitting, word occurrences", "empathetic metaphor" and "poetic appreciation of words".
Abstract:
During the evolution of the music industry, developments in the media environment have required music firms to adapt in order to survive. Changes in broadcast radio programming during the 1950s, the introduction of the Compact Cassette during the 1970s, and the deregulation of media ownership during the 1990s are all examples of changes which have heavily affected the music industry. This study explores similar contemporary dynamics, examines how decision makers in the music industry perceive and make sense of these developments, and reveals how they revise their business strategies based on their mental models of the media environment. A qualitative system dynamics model is developed in order to support the reasoning brought forward by the study. The model is empirically grounded, but is also based on previous music industry research and a theoretical platform constituted by concepts from evolutionary economics and the sociology of culture. The empirical data primarily consist of 36 personal interviews with decision makers in the American, British and Swedish music industry ecosystems. The study argues that the proposed model explains contemporary music industry dynamics more effectively than the music industry models presented by previous research initiatives. Supported by the model, the study is able to show how “new” media outlets make old music business models obsolete and challenge the industry’s traditional power structures. It is no longer possible to expose music at one outlet (usually broadcast radio) in the hope that it will lead to sales of the same music at another (e.g. a compact disc). The study shows that many music industry decision makers still have not embraced the new logic, and have not yet challenged their traditional mental models of the media environment. Rather, they remain focused on preserving the pivotal role held by the CD and other physical distribution technologies. Further, the study shows that while many music firms remain attached to the old models, other firms, primarily music publishers, have accepted the transformation and have reluctantly recognised the realities of a virtualised environment.
Abstract:
Recent studies have linked the ability of novice (CS1) programmers to read and explain code with their ability to write code. This study extends earlier work by asking CS2 students to explain object-oriented data structures problems that involve recursion. Results show a strong correlation between ability to explain code at an abstract level and performance on code writing and code reading test problems for these object-oriented data structures problems. The authors postulate that there is a common set of skills concerned with reasoning about programs that explains the correlation between writing code and explaining code. The authors suggest that an overly exclusive emphasis on code writing may be detrimental to learning to program. Non-code writing learning activities (e.g., reading and explaining code) are likely to improve student ability to reason about code and, by extension, improve student ability to write code. A judicious mix of code-writing and code-reading activities is recommended.
Abstract:
The author, Dean Shepherd, is a scholar of entrepreneurship, studying how entrepreneurs think, decide to act, and feel. He recently realized that while his publications in academic journals have implications for entrepreneurs, those implications have remained relatively hidden in the text of the articles, and hidden in articles published in journals largely inaccessible to those involved in the entrepreneurial process. This series is designed to bring the practical implications of his research to the forefront.
Abstract:
Objectives: The goal of this article is to examine whether or not the results of the Queensland Community Engagement Trial (QCET), a randomized controlled trial that tested the impact of procedural justice policing on citizen attitudes toward police, were affected by different types of nonresponse bias. Method: We use two methods (the Cochrane and Elffers methods) to explore nonresponse bias. First, we assess the impact of the low response rate by examining the effects of nonresponse group differences between the experimental and control conditions and pooled variance under different scenarios. Second, we assess the degree to which item response rates are influenced by the control and experimental conditions. Results: Our analysis of the QCET data suggests that our substantive findings are not influenced by the low response rate in the trial. The results are robust even under extreme conditions, and statistical significance of the results would only be compromised in cases where the pooled variance was much larger for the nonresponse group and the difference between experimental and control conditions was greatly diminished. We also find that there were no biases in the item response rates across the experimental and control conditions. Conclusion: RCTs that involve field survey responses, like QCET, are potentially compromised by low response rates and by how item response rates might be influenced by the control or experimental conditions. Our results show that the QCET results were not sensitive to the overall low response rate across the experimental and control conditions, and the item response rates were not significantly different across the experimental and control groups. Overall, our analysis suggests that the results of QCET are robust and any biases in the survey responses do not significantly influence the main experimental findings.
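As a rough illustration of this kind of sensitivity reasoning, the Python sketch below recomputes a two-arm comparison after folding in hypothetical nonrespondent strata with increasingly inflated variance. All summary numbers, the scenario grid, and the Welch-test choice are assumptions for illustration only; the QCET analysis itself followed the Cochrane and Elffers approaches described above.

```python
# Illustrative sensitivity check for nonresponse bias in a two-arm trial.
# All numbers and scenarios are hypothetical, not the QCET data or method.
import numpy as np
from scipy import stats

def combined_group(resp_mean, resp_sd, n_resp, nonresp_mean, nonresp_sd, n_nonresp):
    """Pool respondent and assumed nonrespondent strata into one group summary."""
    n = n_resp + n_nonresp
    w_r, w_n = n_resp / n, n_nonresp / n
    mean = w_r * resp_mean + w_n * nonresp_mean
    # Total variance = within-stratum variance + between-stratum variance.
    var = (w_r * resp_sd**2 + w_n * nonresp_sd**2
           + w_r * (resp_mean - mean)**2 + w_n * (nonresp_mean - mean)**2)
    return mean, np.sqrt(var), n

def welch_p(m1, s1, n1, m2, s2, n2):
    """Two-sided Welch t-test p-value from summary statistics."""
    se = np.sqrt(s1**2 / n1 + s2**2 / n2)
    t = (m1 - m2) / se
    df = se**4 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
    return 2 * stats.t.sf(abs(t), df)

# Hypothetical respondent summaries (mean, sd, n): experimental vs. control.
exp_obs, ctl_obs = (4.2, 0.8, 120), (3.9, 0.9, 110)

# Scenario: nonrespondents in both arms share the control mean and have
# inflated variance, an assumption unfavourable to the treatment effect.
for sd_inflation in (1.0, 1.5, 2.0):
    e = combined_group(*exp_obs, nonresp_mean=3.9, nonresp_sd=0.9 * sd_inflation, n_nonresp=300)
    c = combined_group(*ctl_obs, nonresp_mean=3.9, nonresp_sd=0.9 * sd_inflation, n_nonresp=320)
    print(f"SD inflation {sd_inflation:.1f}: p = {welch_p(*e, *c):.4f}")
```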
Abstract:
Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To make use of these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by the determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using the ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in the ionosphere-constrained model to enhance the model strength, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, leading to improved reliability of widelane AR; (3) for narrowlane AR, partial AR is applied to a subset of ambiguities selected according to successively increasing elevation. For fixing the scalar ambiguity, an error-probability-controllable rounding method is proposed. The established ionosphere-constrained model can be efficiently solved with a sequential Kalman filter. It can either be reduced to special models simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the US CORS network. The results show that the new widelane AR scheme can achieve a 99.4% success rate with a 0.6% failure rate, while the new rounding method for narrowlane AR can achieve a fixing rate of 89% with a failure rate of 0.8%. In summary, AR reliability can be efficiently improved with rigorously controllable probability of incorrectly fixed ambiguities.
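To illustrate the general idea of error-probability-controlled rounding of a scalar ambiguity (a generic sketch, not the paper's specific method), the Python snippet below uses the standard success-rate expression 2*Phi(1/(2*sigma)) - 1 for integer rounding of an unbiased float ambiguity and fixes only when the implied failure probability stays below a tolerance. The 0.8% tolerance and the example numbers are assumptions.

```python
# Sketch: round a single (scalar) ambiguity only when the failure probability
# is below a chosen tolerance. Standard rounding success-rate bound assumed;
# the paper's actual procedure and thresholds may differ.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def try_fix_by_rounding(float_amb, sigma, max_failure_rate=0.008):
    """Return the integer ambiguity if rounding is safe enough, else None."""
    p_success = 2.0 * phi(1.0 / (2.0 * sigma)) - 1.0
    if 1.0 - p_success <= max_failure_rate:
        return round(float_amb)
    return None  # keep the ambiguity float; model strength is insufficient

# Example: a float ambiguity of 3.08 cycles with 0.12-cycle standard deviation.
print(try_fix_by_rounding(3.08, 0.12))   # precise enough: fixes to 3
print(try_fix_by_rounding(3.08, 0.35))   # too uncertain: returns None
```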
Abstract:
Numerous studies have documented subtle but consistent sex differences in self-reports and observer-ratings of five-factor personality traits, and such effects were found to show well-defined developmental trajectories and remarkable similarity across nations. In contrast, very little is known about perceived gender differences in five-factor traits in spite of their potential implications for gender biases at the interpersonal and societal level. In particular, it is not clear how perceived gender differences in five-factor personality vary across age groups and national contexts and to what extent they accurately reflect assessed sex differences in personality. To address these questions, we analyzed responses from 3,323 individuals across 26 nations (mean age = 22.3 years, 31% male) who were asked to rate the five-factor personality traits of typical men or women in three age groups (adolescent, adult, and older adult) in their respective nations. Raters perceived women as slightly higher in openness, agreeableness, and conscientiousness as well as some aspects of extraversion and neuroticism. Perceived gender differences were fairly consistent across nations and target age groups and mapped closely onto assessed sex differences in self- and observer-rated personality. Associations between the average size of perceived gender differences and national variations in sociodemographic characteristics, value systems, or gender equality did not reach statistical significance. Findings contribute to our understanding of the underlying mechanisms of gender stereotypes of personality and suggest that perceptions of actual sex differences may play a more important role than culturally based gender roles and socialization processes.
Abstract:
Objective: To develop a system for the automatic classification of pathology reports for Cancer Registry notifications. Method: A two-pass approach is proposed to classify whether pathology reports are cancer notifiable or not. The first pass queries pathology HL7 messages for known report types that are received by the Queensland Cancer Registry (QCR), while the second pass analyses the free-text reports and identifies those that are cancer notifiable. Cancer Registry business rules, natural language processing and symbolic reasoning using the SNOMED CT ontology were adopted in the system. Results: The system was developed on a corpus of 500 histology and cytology reports (with 47% notifiable reports) and evaluated on an independent set of 479 reports (with 52% notifiable reports). Results show that the system can reliably classify cancer notifiable reports with a sensitivity, specificity, and positive predictive value (PPV) of 0.99, 0.95, and 0.95, respectively, for the development set, and 0.98, 0.96, and 0.96 for the evaluation set. High sensitivity can be achieved at a slight expense in specificity and PPV. Conclusion: The system demonstrates how medical free-text processing enables the classification of cancer notifiable pathology reports with high reliability for potential use by Cancer Registries and pathology laboratories.
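A minimal sketch of the two-pass idea follows, assuming made-up report-type codes and keyword rules; the deployed system relied on Cancer Registry business rules, NLP and SNOMED CT reasoning rather than the simple pattern matching shown here.

```python
# Two-pass sketch: pass 1 filters on known report types from the HL7 message,
# pass 2 inspects the free text. Keyword lists and codes are hypothetical.
import re

KNOWN_NOTIFIABLE_TYPES = {"HISTOLOGY", "CYTOLOGY"}          # hypothetical codes
MALIGNANCY_TERMS = re.compile(
    r"\b(carcinoma|melanoma|sarcoma|lymphoma|malignan\w*|metasta\w*)\b", re.I)
NEGATION = re.compile(r"\bno evidence of\b|\bnegative for\b", re.I)

def is_cancer_notifiable(report_type: str, free_text: str) -> bool:
    # Pass 1: only report types the registry receives are considered further.
    if report_type.upper() not in KNOWN_NOTIFIABLE_TYPES:
        return False
    # Pass 2: look for malignancy evidence that is not plainly negated.
    for sentence in re.split(r"[.\n]", free_text):
        if MALIGNANCY_TERMS.search(sentence) and not NEGATION.search(sentence):
            return True
    return False

print(is_cancer_notifiable("HISTOLOGY", "Invasive ductal carcinoma identified."))  # True
print(is_cancer_notifiable("HISTOLOGY", "No evidence of malignancy."))             # False
```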
Abstract:
This project develops and evaluates a model of curriculum design that aims to assist student learning of foundational disciplinary ‘Threshold Concepts’. The project uses phenomenographic action research, cross-institutional peer collaboration and the Variation Theory of Learning to develop and trial the model. Two contrasting disciplines (Physics and Law) and four institutions (two research-intensive and two universities of technology) were involved in the project, to ensure broad applicability of the model across different disciplines and contexts. The Threshold Concepts selected for curriculum design attention were measurement uncertainty in Physics and legal reasoning in Law. Threshold Concepts are key disciplinary concepts that are inherently troublesome, transformative and integrative in nature. Once understood, such concepts transform students’ views of the discipline because they enable students to coherently integrate what were previously seen as unrelated aspects of the subject, providing new ways of thinking about it (Meyer & Land 2003, 2005, 2006; Land et al. 2008). However, the integrative and transformative nature of such threshold concepts makes them inherently difficult for students to learn, with resulting misunderstandings of concepts being prevalent...
Abstract:
This chapter addresses data modelling as a means of promoting statistical literacy in the early grades. Consideration is first given to the importance of increasing young children’s exposure to statistical reasoning experiences and how data modelling can be a rich means of doing so. Selected components of data modelling are then reviewed, followed by a report on some findings from the third year of a three-year longitudinal study across grades one through three.
Abstract:
This study considers the role and nature of co-thought gestures when students process map-based mathematics tasks. These gestures are typically spontaneously produced silent gestures which do not accompany speech and are characterised by small movements of the hands or arms, often directed toward an artefact. The study analysed 43 students (aged 10–12 years) over a 3-year period as they solved map tasks that required spatial reasoning. The map tasks were representative of those typically found in mathematics classrooms for this age group and required route finding and coordinate knowledge. The results indicated that co-thought gestures were used to navigate the problem space and monitor movements within the spatial challenges of the respective map tasks. Gesturing was most influential when students encountered unfamiliar tasks or found the tasks spatially demanding. From a teaching and learning perspective, explicit co-thought gesturing highlights cognitive challenges students are experiencing, since students tended not to use gesturing in tasks where the spatial demands were low.
Abstract:
Aim: Evidence linking the accumulation of exotic species to the suppression of native diversity is equivocal, often relying on data from studies that have used different methods. Plot-level studies often attribute inverse relationships between native and exotic diversity to competition, but regional abiotic filters, including anthropogenic influences, can produce similar patterns. We seek to test these alternatives using identical scale-dependent sampling protocols in multiple grasslands on two continents. Location: Thirty-two grassland sites in North America and Australia. Methods: We use multiscale observational data, collected identically in grain and extent at each site, to test the association of local and regional factors with the plot-level richness and abundance of native and exotic plants. Sites captured environmental and anthropogenic gradients including land-use intensity, human population density, light and soil resources, climate and elevation. Site selection occurred independently of exotic diversity, meaning that the numbers of exotic species varied randomly, thereby reducing the potential biases that would arise if only highly invaded sites were chosen. Results: Regional factors associated directly or indirectly with human activity had the strongest associations with plot-level diversity. These regional drivers had divergent effects: urban-based economic activity was associated with high exotic:native diversity ratios, while climate- and landscape-based indicators of lower human population density were associated with low exotic:native ratios. Negative correlations between plot-level native and exotic diversity, a potential signature of competitive interactions, were not prevalent; this result did not change along gradients of productivity or heterogeneity. Main conclusion: We show that the plot-level diversity of native and exotic plants is more consistently associated with regional-scale factors relating to urbanization and climate suitability than with measures indicative of competition. These findings clarify the long-standing difficulty in resolving drivers of exotic diversity using single-factor mechanisms, suggesting that multiple interacting anthropogenic processes best explain the accumulation of exotic diversity in modern landscapes.
Abstract:
Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method that addresses initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence found in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians and discrepancies were validated by a third, expert ED clinician; the final manual classification generated by the expert ED clinician was used as the ground truth to empirically evaluate the approach. Results: The automated method, which attempts to identify limb abnormalities by searching for keywords supplied by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement involving advanced natural language processing (NLP) and machine learning techniques were identified.
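The gazetteer approach lends itself to a compact sketch: flag a report as abnormal when it mentions any clinician-supplied finding term, then score the predictions with an F-measure. The term list and example reports below are hypothetical, not the study's actual gazetteer or data.

```python
# Gazetteer-style classification sketch with a hypothetical finding-term list.
import re

GAZETTEER = ["fracture", "dislocation", "avulsion", "subluxation", "foreign body"]
PATTERN = re.compile("|".join(re.escape(t) for t in GAZETTEER), re.IGNORECASE)

def classify_report(text: str) -> str:
    """Label a narrative X-ray report as abnormal if any gazetteer term appears."""
    return "abnormal" if PATTERN.search(text) else "normal"

def f_measure(predictions, ground_truth, positive="abnormal"):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(p == positive == g for p, g in zip(predictions, ground_truth))
    fp = sum(p == positive != g for p, g in zip(predictions, ground_truth))
    fn = sum(g == positive != p for p, g in zip(predictions, ground_truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

reports = ["Undisplaced fracture of the distal radius.", "Normal alignment, no bony injury."]
truth = ["abnormal", "normal"]
preds = [classify_report(r) for r in reports]
print(preds, f_measure(preds, truth))
```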
Abstract:
The Dragon stream cipher is one of the focus ciphers that reached Phase 2 of the eSTREAM project. In this paper, we present a new method of building a linear distinguisher for Dragon. The distinguisher is constructed by exploiting the biases of two S-boxes and the modular addition, which are basic components of the nonlinear function F. The bias of the distinguisher is estimated to be around 2^-75.32, which is better than the bias of the distinguisher presented by Englund and Maximov. We have shown that Dragon is distinguishable from a random cipher by using around 2^150.6 keystream words and 2^59 memory. In addition, we present a very efficient algorithm for computing the bias of a linear approximation of modular addition.
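The quantity being estimated can be illustrated with a brute-force check of the bias of a linear approximation of modular addition for a reduced word size; this exhaustive enumeration only illustrates the definition and is not the efficient algorithm presented in the paper.

```python
# Brute-force bias of a linear approximation of modular addition:
# epsilon = Pr[ <a,x> ^ <b,y> ^ <c,(x+y) mod 2^n> = 0 ] - 1/2, for small n.

def parity(v: int) -> int:
    """XOR of the bits of v."""
    return bin(v).count("1") & 1

def addition_bias(a: int, b: int, c: int, n: int) -> float:
    mask = (1 << n) - 1
    zero_count = 0
    for x in range(1 << n):
        for y in range(1 << n):
            s = (x + y) & mask
            if (parity(a & x) ^ parity(b & y) ^ parity(c & s)) == 0:
                zero_count += 1
    return zero_count / float(1 << (2 * n)) - 0.5

# Toy examples with 8-bit words and equal masks on both inputs and the sum:
print(addition_bias(0x01, 0x01, 0x01, 8))  # LSB mask: approximation always holds, bias 0.5
print(addition_bias(0x03, 0x03, 0x03, 8))  # two LSBs: fails only when a carry occurs, bias 0.25
```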
Abstract:
This study seeks to fill a gap in the existing literature by looking at how and whether disclosure of social value creation becomes part of the legitimation strategies of social enterprises. Using legitimacy reasoning, the study shows that three global social organizations, Grameen Bank, Charity Water, and the Bill and Melinda Gates Foundation, provide evidence of the use of disclosures of social value creation in order to conform to the expectations of the broader community, a community that wants to see a world free of poverty and injustice.