57 results for Opinion Mining, Sentiment Analysis, Context-Sensitive Text Mining, Inferential Language Modelling, Business Intelligence
Abstract:
This unique and path-breaking Handbook explores the issue of comparative Human Resource Management (HRM) and challenges the notion that there can be a 'one best way' to manage HRM. The Handbook of Research on Comparative Human Resource Management provides a theoretical, practical and regional analysis of comparative HRM. This book, edited by two specialists on comparative HRM and written by leading experts on each topic and from each region, explores the range of different approaches to conceptualising HRM, and highlights HRM policy and practice that occur in the various regions of the world. As such, the volume provides a challenge to the typical assumption that there are consistent problems in managing human resources around the globe that call for standardised solutions. Instead, the contributors emphasise the importance of institutional and cultural factors that make HRM a most context-sensitive management task. Offering a comprehensive view for readers with different interests, this insightful Handbook will prove to be an essential resource for academics, researchers and postgraduate students in international business, business administration, HRM, socio-economics and cross-cultural management. Practitioners interested in the cultural aspects of HRM will also find this Handbook invaluable.
Abstract:
Guest editorial. Ali Emrouznejad is a Senior Lecturer at Aston Business School in Birmingham, UK. His research interests include performance measurement and management, efficiency and productivity analysis, and data mining. He has published widely in international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several journal special issues, including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial boards of several international journals and a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at Aston Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals including Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of OR Insight. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector. This special issue focuses on holistic, applied research on performance measurement in energy sector management, with the aim of bridging the gap between industry and academia. After a rigorous refereeing process, seven papers were included. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate changes in the relative efficiency and total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors find that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate over the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, bootstrapping is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimates and confidence intervals. The author finds that, in the sample, non-lignite-fired stations are on average more efficient than lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare three DEA-based measures that estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors find that refineries in the Rocky Mountain region performed best, and that about 60 percent of the refineries in the sample could further improve their efficiency. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first is a traditional DEA model for analyzing cost-only efficiency. The second includes (inverse) quality by modelling total customer minutes lost as an input. The third uses total social costs, comprising the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and the number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluating energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. Stakeholders and experts in the energy policy field are incorporated into the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomics, and competitiveness and technology. Finally, Borge Hess applies stochastic frontier analysis to examine the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on firm efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in firm efficiency following an acquisition, and only weak evidence of efficiency improvements caused by the new shareholder. Moreover, parent companies appear not to influence a subsidiary's efficiency positively, and the analysis shows a negative impact of joint ventures on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
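Wu et al.'s Malmquist analysis rests on a standard arithmetic identity: the index is computed from four distance-function (efficiency) scores and factors into an efficiency-change and a technical-change component. A minimal sketch of that computation (the scores below are illustrative values, not the paper's data):

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist productivity index from four distance-function
    (efficiency) scores, plus its standard decomposition.

    d_a_b = efficiency of the period-b observation measured
    against the period-a frontier (0 < score <= 1).
    """
    mi = math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
    ec = d_t1_t1 / d_t_t                                    # efficiency ("catch-up") change
    tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # frontier ("technical") shift
    return mi, ec, tc

# Illustrative scores for one region across two periods.
mi, ec, tc = malmquist(0.80, 0.90, 0.70, 0.85)
```

A value of `mi` above 1 indicates total factor productivity growth, and by construction `mi == ec * tc`, which is what lets the paper attribute growth to catch-up versus frontier shift.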
Abstract:
Humans are especially good at taking another's perspective: representing what others might be thinking or experiencing. This "mentalizing" capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance's meaning or largely on-demand, to restore "common ground" when expectations were violated. Participants conversed with one of two confederate speakers and established tacit agreements about objects' names. In a subsequent "test" phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventromedial prefrontal cortex). Theta oscillations (3-7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
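Phase coupling between circuits, of the kind reported here, is commonly quantified with a phase-locking value (PLV); the abstract does not say which measure the authors used, so this is a generic sketch over pre-extracted phase time series, not their pipeline:

```python
import cmath

def plv(phases_a, phases_b):
    """Phase-locking value between two phase time series (radians).

    Returns a value in [0, 1]: 1 means the phase difference is
    perfectly constant across samples (strong coupling), 0 means
    the differences are uniformly spread (no coupling).
    """
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs)) / len(diffs)

# Constant phase lag -> fully locked.
locked = plv([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])

# Phase differences evenly spread around the circle -> no locking.
import math
uncoupled = plv([0.0] * 4, [0.0, math.pi / 2, math.pi, 3 * math.pi / 2])
```

In practice the instantaneous phases would first be extracted from theta-band-filtered MEG signals (e.g., via a Hilbert transform) before applying a measure like this.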
Abstract:
Intersubjectivity is an important concept in psychology and sociology. It refers to sharing conceptualizations through social interactions in a community and using such shared conceptualizations as a resource to interpret things that happen in everyday life. In this work, we use intersubjectivity as the basis for modelling shared stance and subjectivity for sentiment analysis. We construct an intersubjectivity network which links review writers, the terms they used, and the polarities of those terms. Based on this network model, we propose a method to learn writer embeddings, which are subsequently incorporated into a convolutional neural network for sentiment analysis. Evaluations on the IMDB, Yelp 2013 and Yelp 2014 datasets show that the proposed approach achieves state-of-the-art performance.
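The paper's writer embeddings are learned from the intersubjectivity network itself; as a much-simplified illustration of that network (toy data and a crude hand-rolled stance feature, not the authors' learned model), one can link writers to the terms they use and summarize each writer's usage-weighted term polarity:

```python
from collections import defaultdict

# Toy reviews: (writer, [(term, polarity)]) with polarities in [-1, 1].
reviews = [
    ("w1", [("great", 1.0), ("slow", -1.0), ("great", 1.0)]),
    ("w2", [("slow", -1.0), ("boring", -1.0)]),
]

# Bipartite writer-term edges, weighted by usage count.
edges = defaultdict(int)
for writer, terms in reviews:
    for term, _pol in terms:
        edges[(writer, term)] += 1

# Crude per-writer stance feature: usage-weighted mean term polarity.
stance = {}
for writer, terms in reviews:
    stance[writer] = sum(pol for _term, pol in terms) / len(terms)
```

In the paper this graph structure is instead used to learn dense writer embeddings, which then condition a CNN sentiment classifier; the point of the sketch is only the shape of the writer-term-polarity network.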
Abstract:
The assertion that social phenomena are peculiarly intricate and complex has a virtually uncontested tradition in much social discourse. A significant part of this premise is the conviction that such complexity complicates, and perhaps even inhibits, the development and application of social scientific knowledge. Our paper explores the origins, basis and consequences of this assertion and asks, in particular, whether the classic complexity assertion still deserves to be invoked in analyses of the production and utilization of social scientific knowledge in modern society. We refer to one of the most prominent and politically influential social scientific theories, John Maynard Keynes' economic theory, as an illustration. We conclude that the practical value of social scientific knowledge is not necessarily dependent on a faithful, in the sense of complete, representation of (complex) social reality. Practical knowledge is context-sensitive, if not project-bound. Social scientific knowledge that seeks to optimize its practicality has to attend and attach itself to elements of practical social situations that can be altered, or are actionable, by relevant actors. This chapter thus re-examines the relation between social reality, social scientific knowledge and its practical application, against the widely accepted view that invokes the peculiar complexity of social reality as an impediment to good theoretical comprehension and hence to applicability.
Abstract:
Purpose: To determine the most appropriate analysis technique for differentiating multifocal intraocular lens (MIOL) designs using defocus curve assessment of visual capability. Methods: Four groups of fifteen subjects were implanted bilaterally with either monofocal intraocular lenses, refractive MIOLs, diffractive MIOLs, or a combination of refractive and diffractive MIOLs. Defocus curves between -5.0 D and +1.5 D were evaluated using an absolute and a relative depth-of-focus method, the direct comparison method, and a new 'Area-of-focus' metric. The results were correlated with subjective perception of near and intermediate vision. Results: Neither depth-of-focus method of analysis was sensitive enough to differentiate between the MIOL groups (p>0.05). The direct comparison method indicated that the refractive MIOL group performed better at +1.00, -1.00 and -1.50 D and worse at -3.00, -3.50, -4.00 and -5.00 D compared to the diffractive MIOL group (p
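The abstract does not define the 'Area-of-focus' metric; area-type summaries of a defocus curve are, however, typically trapezoidal integrals of acuity over the defocus range. A hypothetical sketch under that assumption (here acuity is scored so that higher is better; the function name and data are illustrative):

```python
def area_under_defocus_curve(defocus, acuity):
    """Trapezoidal area under a defocus curve.

    defocus : defocus levels in dioptres, in increasing order.
    acuity  : visual-acuity score at each level (higher = better).
    A larger area suggests useful vision sustained over a wider
    range of defocus.
    """
    area = 0.0
    for i in range(1, len(defocus)):
        width = defocus[i] - defocus[i - 1]
        area += 0.5 * (acuity[i - 1] + acuity[i]) * width
    return area

# Illustrative three-point curve from -2 D to 0 D.
area = area_under_defocus_curve([-2.0, -1.0, 0.0], [0.2, 0.6, 1.0])
```

An area metric of this kind condenses the whole curve into one number, which is why it can separate lens designs that pointwise comparisons at single defocus levels may not.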
Abstract:
What does ‘care’ mean in contemporary society? How are caring relationships practised in different contexts? What resources do individuals and collectives draw upon in order to care for, care with and care about themselves and others? How do such relationships and practices relate to broader social processes? Care shapes people’s everyday lives and relationships and caring relations and practices influence the economies of different societies. This interdisciplinary book takes a nuanced and context-sensitive approach to exploring caring relationships, identities and practices within and across a variety of cultural, familial, geographical and institutional arenas. Grounded in rich empirical research and discussing key theoretical, policy and practice debates, it provides important, yet often neglected, international and cross-cultural perspectives. It is divided into four sections covering: caring within educational institutions; caring amongst communities and networks; caring and families; and caring across the life-course. Contributing to broader theoretical, philosophical and moral debates associated with the ethics of care, citizenship, justice, relationality and entanglements of power, Critical Approaches to Care is an important work for students and academics studying caring and care work in the fields of health and social care, sociology, social policy, anthropology, education, human geography and politics.
Abstract:
The use of spreadsheets has become routine in all aspects of business, with usage growing across a range of functional areas and a continuing trend towards end-user spreadsheet development. However, several studies have raised concerns about the accuracy of spreadsheet models in general, and of end-user-developed applications in particular, raising the risk for users. High error rates have been discovered even when the users/developers were confident that their spreadsheets were correct. The lack of an easy-to-use, context-sensitive validation methodology has been highlighted as a significant contributor to these accuracy problems. This paper describes experiences in using a practical, contingency-factor-based methodology for the validation of spreadsheet-based DSS. Because the end user is often both the system developer and a stakeholder, the contingency-factor-based validation methodology may need to be used in more than one way. The methodology can also be extended to encompass other DSS.
Abstract:
Increasingly, people's digital identities are attached to, and expressed through, their mobile devices. At the same time digital sensors pervade smart environments in which people are immersed. This paper explores different perspectives in which users' modelling features can be expressed through the information obtained by their attached personal sensors. We introduce the PreSense Ontology, which is designed to assign meaning to sensors' observations in terms of user modelling features. We believe that the Sensing Presence (PreSense) Ontology is a first step toward the integration of user modelling and "smart environments". In order to motivate our work we present a scenario and demonstrate how the ontology could be applied in order to enable context-sensitive services. © 2012 Springer-Verlag.
Abstract:
The human accommodation system has been extensively examined for over a century, with a particular focus on trying to understand the mechanisms that lead to the loss of accommodative ability with age (presbyopia). The accommodative process, along with the potential causes of presbyopia, are disputed, hindering efforts to develop methods of restoring accommodation in the presbyopic eye. One method that can provide insight into this complex area is Finite Element Analysis (FEA). The effectiveness of FEA in modelling the accommodative process has been illustrated by a number of accommodative FEA models developed to date. However, these previous models have limitations, principally due to variation in the data on the geometry of the accommodative components, combined with sparse measurements of their material properties. Despite advances in available data, continued oversimplification has occurred in the modelling of the crystalline lens structure and the zonular fibres that surround the lens. This thesis proposes a new accommodation model that aims to eliminate these limitations. A novel representation of the zonular structure was developed, combined with updated lens and capsule modelling methods. The model was designed to be adaptable so that accommodation systems of a range of ages can be modelled, allowing age-related changes to be simulated. The new modelling methods were validated by comparing the changes induced within the model to available in vivo data, leading to the definition of three different age models. These were used in an extended sensitivity study on age-related changes, in which individual parameters were altered to investigate their effect on the accommodative process. The material properties were found to have the largest impact on the decline in accommodative ability, particularly compared to changes in ciliary body movement or zonular structure.
Novel data on the importance of capsule stiffness and thickness were also established. The new model detailed within this thesis provides further insight into the accommodation mechanism, as well as a foundation for future, more detailed investigations into accommodation, presbyopia and accommodative restoration techniques.
Abstract:
Adult illiteracy rates are alarmingly high worldwide. The portability, affordability, and ease of use of mobile (or handheld) devices offer a realistic opportunity to provide novel, context-sensitive literacy resources to adults with limited literacy skills. To this end, we developed the concept of ALEX – a mobile Adult Literacy support application for EXperiential learning (Lumsden et al., 2005). On the basis of a medium-fidelity prototype of this application, we conducted an evaluation of ALEX using participants from our intended user group. This evaluation had two goals: (a) to assess the usefulness of the ALEX concept and the usability of its current design; and (b) to reflect on the appropriateness of our evaluation process given the literacy-related needs of our participants. This paper outlines our approach to this evaluation as well as the results we obtained and our reflections on the process.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error ("the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey" [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing, and suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing positively reinforce the advantages of online-questionnaire delivery.
The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today's societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be addressed relatively easily. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes it difficult for respondents to estimate questionnaire length and the time required for completion, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online-questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error ("the result of attempting to survey only some, and not all, of the units in the survey population" [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors ("the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained" [2, pg. 11]) will lead to respondents becoming confused and frustrated. Sampling, measurement, and non-response errors are all likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online-questionnaire delivery will not be fully realized.
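One of the design features mentioned above is helping respondents estimate the time required for completion. A naive sketch of such an estimator (the function name and pacing assumption are hypothetical illustrations, not from the paper):

```python
def estimate_remaining_seconds(answered, total, elapsed_seconds):
    """Naive remaining-time estimate for a questionnaire in progress.

    Assumes the respondent's average pace so far continues for the
    remaining questions; returns None before any question is answered,
    since there is no pace information yet.
    """
    if answered == 0:
        return None
    pace = elapsed_seconds / answered          # seconds per question so far
    return pace * (total - answered)           # projected seconds remaining

# 10 of 40 questions answered in 5 minutes -> projected time left.
remaining = estimate_remaining_seconds(10, 40, 300)
```

A real questionnaire tool would likely weight questions by type (open-ended versus multiple choice), but even this crude projection gives respondents the sense of progress the authors argue reduces abandonment.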
To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online-questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for online-questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
Abstract:
Feminist poststructuralist discourse analysis (FPDA) is an approach to analyzing spoken interactions that focuses on the ways in which speakers negotiate their subject positions within competing and interwoven discourses. This article identifies the theoretical background to FPDA, its key principles, its distinctiveness from other approaches such as critical discourse analysis, and outlines some of the main directions in current research.