Abstract:
The purpose of this study was to test Lotka's law of scientific publication productivity, using the methodology outlined by Pao (1985), in the field of Library and Information Studies (LIS). Lotka's law has been sporadically tested in the field over the past 30+ years, but the results of these studies are inconclusive due to the varying methods employed by the researchers. A data set of 1,856 citations found using the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%), respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value was 0.022758 and the calculated critical value was 0.026562. It was determined that the null hypothesis, stating that there is no difference between the observed distribution of publications and the distribution obtained using Lotka's and Pao's procedure, could not be rejected. This study finds that the literature in the field of Library and Information Studies does conform to Lotka's law, with reliable results. As a result, Lotka's law can be used in LIS as a standardized means of measuring author publication productivity, yielding findings that are comparable on many levels (e.g., departmental, institutional, national). Lotka's law can be employed as an empirically proven analytical tool to establish publication productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and their impact on the appointment, tenure, and promotion process of academic librarians.
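Lotka's law models the expected proportion of authors contributing exactly y papers as f(y) = c / y^n. Below is a minimal sketch in Python of the Pao-style goodness-of-fit check described above, using the reported n = 2.1 and c = 0.6418; the observed frequency table is a hypothetical placeholder (the study's raw counts are not reproduced here), and 1.22/√N is the standard large-sample K-S critical value at the 0.10 level.

```python
import math

# Lotka/Pao model: f(y) = c / y**n is the expected proportion of
# authors who contribute exactly y papers.
n = 2.1      # exponent reported in the abstract
c = 0.6418   # constant reported in the abstract (64.18%)

def lotka_expected(y):
    """Expected proportion of authors with exactly y publications."""
    return c / y ** n

# Hypothetical observed frequency table: observed[y] = number of
# authors with y papers (illustrative values only).
observed = {1: 1350, 2: 300, 3: 130, 4: 70, 5: 45, 6: 30, 7: 25, 8: 50}
total_authors = sum(observed.values())

# Kolmogorov-Smirnov one-sample statistic: the maximum absolute
# difference between observed and expected cumulative proportions.
d_max = cum_obs = cum_exp = 0.0
for y in sorted(observed):
    cum_obs += observed[y] / total_authors
    cum_exp += lotka_expected(y)
    d_max = max(d_max, abs(cum_obs - cum_exp))

# Large-sample critical value at the 0.10 significance level.
critical = 1.22 / math.sqrt(total_authors)
verdict = "fail to reject H0" if d_max <= critical else "reject H0"
print(f"Dmax = {d_max:.6f}, critical = {critical:.6f} -> {verdict}")
```

As in the study, the fit is accepted when Dmax falls below the critical value; with real data the exponent n would itself be estimated from the log-log frequency counts by least squares, as Pao's procedure prescribes.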
Abstract:
Historically, memory has been evaluated by examining how much is remembered; a more recent conception of memory, however, focuses on the accuracy of memories. Under this accuracy-oriented conception, unlike the quantity-oriented approach, memory does not always deteriorate over time. A possible explanation for this seemingly surprising finding lies in the metacognitive processes of monitoring and control. These processes allow people to withhold responses of which they are unsure, or to adjust the precision of responses to a level broad enough to be correct. The ability to accurately report memories has implications for investigators who interview witnesses to crimes and for those who evaluate witness testimony. This research examined the amount of information provided, the accuracy, and the precision of responses given during immediate and delayed interviews about a videotaped mock crime. The interview format was manipulated such that either a single free narrative response was elicited, or a series of either yes/no or cued questions was asked. Instructions provided by the interviewer indicated to the participants that they should stress either being informative or being accurate. The interviews were then transcribed and scored. Results indicate that accuracy rates remained stable and high after a one-week delay. Compared with those interviewed immediately, participants interviewed after a delay provided less information and less precise responses. Participants in the free narrative condition were the most accurate. Participants in the cued questions condition provided the most precise responses. Participants in the yes/no questions condition were the most likely to say "I don't know". The results indicate that people are able to monitor their memories and modify their reports to maintain high accuracy. When control over precision was not possible, as in the yes/no condition, people said "I don't know" to maintain accuracy. However, when withholding responses and adjusting precision were both possible, people used both methods. It seems that concerns that memories reported after a long retention interval might be inaccurate are unfounded.
Abstract:
The integrated project delivery (IPD) method has recently emerged as an alternative to traditional delivery methods. It has the potential to overcome the inefficiencies of traditional delivery methods by enhancing collaboration among project participants. Information and communication technology (ICT) facilitates IPD through effective management, processing, and communication of information within and among organizations. While the benefits of IPD, and the role of ICT in realizing them, have been generally acknowledged, the US public construction sector has been very slow in adopting IPD. The reasons, as confirmed by the questionnaire survey conducted in this study, are a lack of experience with and an inadequate understanding of IPD among public owners. The public construction sector should be aware of the value of IPD and should know the essentials for effective implementation of IPD principles; in particular, it should be cognizant of the opportunities offered by advancements in ICT to realize them. To address this need, an IPD Readiness Assessment Model (IPD-RAM) was developed in this study. The model was designed to determine the IPD readiness of a public owner organization, considering selected IPD principles and the ICT levels at which project functions were carried out. Subsequent analysis led to the identification of possible ICT improvements with the potential to increase IPD readiness scores. Termed gap identification, this process was used to formulate improvement strategies. The model was applied to six Florida International University (FIU) construction projects as case studies. The results showed that the IPD readiness of the organization was considerably low and that several project functions could be improved by using higher-level and/or more advanced ICT tools and methods. Feedback from a focus group comprising FIU officials and an independent group of experts was received at various stages of this research and was used during the development and implementation of the model. Focus group input was also helpful for validating the model and its results. It is hoped that the model will be useful to construction owner organizations in assessing their IPD readiness and identifying appropriate ICT improvement strategies.
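The abstract does not specify IPD-RAM's scoring formula; the following is a minimal hypothetical sketch of how a readiness score and the gap-identification step might be computed, assuming readiness is a weighted average of ICT-level ratings across project functions (all function names, weights, and levels are invented for illustration).

```python
# Hypothetical sketch of an IPD readiness score with gap identification.
# The actual IPD-RAM scoring scheme is not given in the abstract; here,
# readiness is a weighted average of ICT-level ratings (0-4 scale) per
# project function, and a "gap" is the weighted shortfall from the top level.

MAX_ICT_LEVEL = 4  # hypothetical top ICT maturity level

# function -> (importance weight, current ICT level); illustrative values
project_functions = {
    "design coordination": (0.30, 2),
    "cost estimating":     (0.25, 1),
    "scheduling":          (0.25, 3),
    "document management": (0.20, 2),
}

def readiness_score(functions):
    """Weighted-average ICT level, normalized to the 0..1 range."""
    total_w = sum(w for w, _ in functions.values())
    return sum(w * lvl for w, lvl in functions.values()) / (total_w * MAX_ICT_LEVEL)

def identify_gaps(functions):
    """Rank functions by weighted shortfall from the maximum ICT level."""
    gaps = {f: w * (MAX_ICT_LEVEL - lvl) for f, (w, lvl) in functions.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

print(f"readiness = {readiness_score(project_functions):.2f}")
for func, gap in identify_gaps(project_functions):
    print(f"  gap {gap:.2f}: {func}")
```

Ranking functions by weighted shortfall mirrors the gap-identification idea in the abstract: the functions at the top of the list are the candidates for higher-level or more advanced ICT tools.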
Abstract:
In this project we review the effects of reputation within the context of game theory. This is done through a study of two key papers. First, we examine Fudenberg and Levine's "Reputation and Equilibrium Selection in Games with a Patient Player" (1989). We add to this a review of Gossner's "Simple Bounds on the Value of a Reputation" (2011). We look specifically at scenarios in which a long-run player faces a series of short-run opponents, and at how the former may develop a reputation. In turn, we show how reputation leads directly to both lower and upper bounds on the long-run player's payoffs.
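As an informal illustration of the type of result reviewed (a hedged sketch, not a verbatim statement of either paper's theorem), the Fudenberg-Levine style lower bound can be written as follows, where v_S is the long-run player's Stackelberg payoff, \underline{v} is their minimum stage payoff, δ is their discount factor, and k(μ) is a finite number of periods, depending only on the prior probability μ of the commitment type, during which short-run players may fail to best-respond to the Stackelberg action:

```latex
\[
  U(\delta) \;\ge\; \delta^{k(\mu)}\, v_S
           \;+\; \bigl(1 - \delta^{k(\mu)}\bigr)\, \underline{v},
  \qquad \text{hence} \qquad
  \liminf_{\delta \to 1} U(\delta) \;\ge\; v_S .
\]
```

Roughly speaking, Gossner (2011) obtains the bound on the number of such periods through a relative-entropy argument limiting how often short-run players can be surprised by the commitment type, which yields explicit upper as well as lower bounds on equilibrium payoffs.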
Abstract:
The presentation made at the conference addressed the issue of linkages between performance information and innovation within the Canadian federal government. This three-part paper was prepared as background to that presentation.
• Part I provides an overview of three main sources of performance information (results-based systems, program evaluation, and centrally driven review exercises) and reviews the Canadian experience with them.
• Part II identifies and discusses a number of innovation issues that are common to the literature reviewed for this paper.
• Part III examines actual and potential linkages between innovation and performance information. This section suggests that innovation in the Canadian federal government tends to cluster into two groups: smaller initiatives driven by staff or middle management, and much larger projects involving major programs, whole departments, or the whole of government.
Readily available data on smaller innovation projects are sparse but suggest that performance information does not play a major role in stimulating these initiatives. In contrast, two of the examples of large-scale innovation show that performance information plays a critical role at all stages. The paper concludes by supporting the contention of others writing on this topic: that more research is needed on innovation, particularly on its link to performance information. In that context, the other conclusions drawn in this paper are tentative, but they suggest that the quality of performance information is as important for innovation as it is for performance management. However, innovation is likely to require its own particular performance information, which may not be generated on a routine basis for performance management purposes, particularly in the early stages of innovation. And while the availability of performance information can be an important success factor in innovation, it does not stand alone. The commonality of a number of other factors identified in the literature surveyed for this paper strongly suggests that equal if not greater priority needs to be given to attenuating factors that inhibit innovation and to nurturing incentives.
Abstract:
The rise of computing and the internet has brought about an ethical field of studies that some term information ethics, computer ethics, digital media ethics, or internet ethics. The aim of this contribution is to discuss the foundations of information ethics in the context of the internet's political economy. The chapter first grounds the analysis in a comparison of two information ethics approaches, namely those outlined by Rafael Capurro and Luciano Floridi. Based on these foundations, it then develops analyses of the information-ethical dimensions of two important areas of social media: one concerns the framing of social media by a surveillance-industrial complex in the context of Edward Snowden's revelations, and the other deals with issues of digital labour processes and the issues of class that arise in this context. The contribution asks ethical questions about these two phenomena that bring up issues of power, exploitation, and control in the information age. It asks if, and if so how, the approaches of Capurro and Floridi can help us to understand the ethico-political aspects of the surveillance-industrial complex and digital labour.
Abstract:
Progress in cognitive neuroscience relies on methodological developments that increase the specificity of the knowledge obtained regarding brain function. For example, in functional neuroimaging the current trend is to study the type of information carried by brain regions rather than simply to compare activation levels induced by task manipulations. In this context, noninvasive transcranial brain stimulation (NTBS) may appear coarse and old-fashioned in its conventional uses for the study of cognitive functions. However, through their multitude of parameters, and by coupling them with behavioral manipulations, NTBS protocols can reach the specificity of imaging techniques. Here we review the different paradigms that have aimed to accomplish this in both basic science and clinical settings, and that follow the general philosophy of information-based approaches.
Abstract:
This paper reports the findings from a study of the learning of English intonation by Spanish speakers within the discourse mode of L2 oral presentation. The purpose of this experiment is, firstly, to compare four prosodic parameters before and after an L2 discourse intonation training programme and, secondly, to confirm whether subjects, after the aforementioned training, are able to match the form of these four prosodic parameters to the discourse-pragmatic function of dominance and control. The instructions and tasks were designed to create the oral and written corpora, and Brazil's Pronunciation for Advanced Learners of English was adapted for the pedagogical aims of the present study. The learners' pre- and post-tasks were acoustically analysed, and a pre-/post-questionnaire design was applied to interpret the acoustic analysis. Results indicate that most of the subjects acquired a wider choice of the four prosodic parameters, partly due to the prosodically annotated transcripts that were developed throughout the L2 discourse intonation course. Conversely, qualitative and quantitative data reveal that most subjects failed to match the forms to their appropriate pragmatic functions to express dominance and control in an L2 oral presentation.
Abstract:
This work focuses on the study of circular migration between America and Europe, particularly the discussion of knowledge transfer and the way social networks reconfigure how information is distributed among people who have left their own country for work or academic reasons. The main purpose of this work is to study the impact of social media use on migration flows between Mexico and Spain, more specifically its use by Mexican migrants who moved abroad for several years, principally for educational purposes, and then returned to their respective locations in Mexico seeking to integrate into the labor market. Our data collection concentrated exclusively on a group created on Facebook by Mexicans who mostly reside in Barcelona, Spain, or who wish to travel to the city for economic, educational, or tourist reasons. The results of this research show that while social networks are spaces for exchange and integration, there is a clear tendency within this group to "narrow lines" and look back to the homeland, slowing the process of social integration in the new context.
Abstract:
The business model of an organization is an important strategic tool for its success and should therefore be understood by both business professionals and information technology professionals. In this context, and considering the importance of information technology in contemporary business models, this article aims to examine the use of business model components in the information technology (IT) project management process in enterprises. To achieve this goal, this exploratory study investigated the use of the business model concept in IT project management through a survey of 327 professionals conducted from February to April 2012. It was observed that the business model concept, along with its practices and building blocks, is not explored to its full potential, possibly because it is relatively new. One of the benefits of this conceptual tool is that it provides different areas with an understanding of the core business, giving IT professionals and the business area a higher level of knowledge of the enterprise's essential activities.
Abstract:
In the mid-1990s, when I worked for a telecommunications giant, I struggled to gain access to basic geodemographic data. At the time it cost hundreds of thousands of dollars simply to purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. Control of this data rested either with a government's census and statistical bureau or with a handful of forward-thinking corporations that created it. Twenty years on, we find ourselves inundated with data (location and other) that we are challenged to amalgamate, much of it still "dirty" in nature. Open data initiatives such as ODI give us great hope for how we might share information together and capitalize not only on crowdsourcing behavior but on the implications for positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen science participatory projects across the globe. In early 2015, I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder the effect that social machines (in effect, autonomous data-collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which include the potential for misinformation, misinterpretation, and information manipulation when context is entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to consider the risks of following a purely technocratic approach in which machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance?
Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean (International) in the Faculty of Engineering and Information Sciences. Katina is the editor-in-chief of IEEE Technology and Society Magazine and a senior editor of IEEE Consumer Electronics Magazine. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was its Vice-Chair. Michael researches the socio-ethical implications of emerging technologies, with an emphasis on an all-hazards approach to national security. She has written and edited six books and has guest edited numerous special issues of journals on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation, and surveillance/uberveillance for Proceedings of the IEEE, Computer, and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.
Abstract:
We examine how using information on unconstrained demand can improve operational decisions. Specifically, we examine the widespread problem of developing course schedules in not-for-profit university settings, and we investigate the potential benefit of incorporating information on students' unconstrained demand for courses into the scheduling process. Prior to this study, the status quo in our college, as in a large proportion of university settings, was to build the course schedule to avoid time conflicts between required courses and to minimize time conflicts between designated groups of courses, such as electives in a particular area. Compared with the status quo approach, we find that, based on three semesters' worth of actual data, an approach that explicitly considers students' course preferences improves a student-based metric of schedule quality by over 4% (the equivalent, in our setting, of improving service for over 20% of students).
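The abstract does not spell out the scheduling model; the sketch below illustrates the underlying idea under stated assumptions, with all course, slot, and preference data invented: weight each pair of courses by how many students want both, then assign courses to time slots so that heavily demanded pairs collide as rarely as possible.

```python
from itertools import combinations

# Hypothetical data: each student's set of preferred courses.
students = [
    {"stats", "finance", "marketing"},
    {"stats", "finance"},
    {"marketing", "ethics"},
    {"stats", "ethics"},
]
slots = ["Mon 9am", "Mon 11am", "Tue 9am"]

# Weight each course pair by how many students want both of its courses;
# putting such a pair into the same slot creates that many conflicts.
pair_demand = {}
for prefs in students:
    for pair in combinations(sorted(prefs), 2):
        pair_demand[pair] = pair_demand.get(pair, 0) + 1

def added_conflicts(schedule, course, slot):
    """Demand-weighted conflicts created by placing `course` in `slot`."""
    return sum(w for (a, b), w in pair_demand.items()
               if (a == course and schedule.get(b) == slot)
               or (b == course and schedule.get(a) == slot))

# Greedy heuristic: place each course in the slot adding fewest conflicts.
schedule = {}
for course in sorted({c for prefs in students for c in prefs}):
    schedule[course] = min(slots,
                           key=lambda s: added_conflicts(schedule, course, s))

total = sum(w for (a, b), w in pair_demand.items()
            if schedule[a] == schedule[b])
print(schedule, "conflicts:", total)
```

A production version would use an integer program with further constraints (rooms, instructors, required-course separation), but the demand-weighted conflict objective is the essential change from the status quo approach.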
Abstract:
Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system, which nowadays is often based on a Service-Oriented Architecture (SOA). This applies in particular to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level but provide little support for event processing. Our central thesis is that such a fully generic solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as particular event types or support for particular event monitoring techniques, have to be taken into account. Our core result is the design of a configurable event monitoring (Web) service that allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP, and EDA.
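The service's actual design is not detailed in the abstract; the sketch below illustrates the stated idea of trading genericity for source-specific capabilities, with all names hypothetical: each source declares which event types and monitoring techniques it supports, and subscriptions are validated against that declaration rather than assuming one fully generic interface for every source.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a configurable event monitor: source-specific
# capabilities are declared up front, so the monitor can reject
# configurations a given source cannot actually support.

@dataclass
class SourceCapabilities:
    event_types: set[str]   # e.g. {"order.created", "stock.low"}
    techniques: set[str]    # e.g. {"polling", "push"}

@dataclass
class EventMonitor:
    capabilities: dict[str, SourceCapabilities]
    subscribers: list = field(default_factory=list)

    def subscribe(self, source: str, event_type: str, technique: str,
                  handler: Callable[[dict], None]) -> None:
        """Register a handler, validated against the source's declaration."""
        caps = self.capabilities[source]
        if event_type not in caps.event_types:
            raise ValueError(f"{source} does not emit {event_type}")
        if technique not in caps.techniques:
            raise ValueError(f"{source} does not support {technique}")
        self.subscribers.append((source, event_type, handler))

    def publish(self, source: str, event_type: str, payload: dict) -> None:
        """Deliver an event to every matching subscriber."""
        for src, etype, handler in self.subscribers:
            if src == source and etype == event_type:
                handler(payload)

# Usage: a warehouse source that only supports push delivery of one event type.
monitor = EventMonitor({"warehouse": SourceCapabilities({"stock.low"}, {"push"})})
monitor.subscribe("warehouse", "stock.low", "push",
                  lambda e: print("reorder:", e["sku"]))
monitor.publish("warehouse", "stock.low", {"sku": "A-42"})
```

The design choice mirrors the thesis above: the generic publish/subscribe core stays small, while the per-source capability declaration is the configurable part that exploits source-specific characteristics.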
Abstract:
Over the past decade, Mental Health (MH) has increasingly appeared on the 'school agenda', both in terms of rising levels of MH difficulties in the student population and in terms of the expectation that schools have a role to play in supporting good MH. MH is a term fraught with ambiguities, leading to uncertainty about the most appropriate ways to provide support. A review of the current literature reveals a wide range of definitions and interpretations, sometimes within the same team of supporting professionals. The current study explores the perspectives held by two professional groups seemingly well placed to support young persons' (YPs') MH. Six Clinical Psychologists (CPs) and six Educational Psychologists (EPs) are interviewed, exploring their constructs of MH and their perceptions of their own role and the roles of others in supporting secondary-school-aged YPs' MH. The data are analysed through Thematic Analysis. Findings suggest that there are variations between the two professions' constructs of MH, and that EPs in particular have no unified concept of MH, likely due to less experience or training in this area. CPs and EPs hold similar perceptions of the school's role in promoting good MH and in flagging up concerns to more specialist professionals when necessary. However, there are discrepancies between EPs' and CPs' perceptions of each other's roles. The conflicting views appear to emerge from incomplete information about the other profession and from professional defensiveness in a context where resources and funding are scarce. The current study suggests that these challenges can be addressed through greater reflexivity about professional biases, exploration of MH constructs within other epistemological positions, and greater communication regarding professional roles, leading to clearer collaboration in supporting the MH of YP.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08