582 results for Strongly Semantic Information
Abstract:
In today’s information society, electronic tools, such as computer networks for the rapid transfer of data and composite databases for information storage and management, are critical to effective environmental management. In particular, federal, state, and local government environmental policies and programs need a large volume of up-to-date information on the quality of water, air, and soil in order to conserve and protect natural resources and to support meteorological work. In line with this, the utilization of information and communication technologies (ICTs) is crucial to preserving and improving the quality of life. Handling tasks in the field of environmental protection often requires a range of environmental and technical information to support complex, shared decision making in a multidisciplinary team environment. In this regard, e-government provides the foundation of a transformative ICT initiative that can lead to better environmental governance, better services, and increased public participation in the environmental decision-making process.
Abstract:
Countless factors affect the inner workings of a city, so to gain an understanding of place and make sound decisions, planners need to utilize decision support systems (DSS) or planning support systems (PSS). PSS were originally developed as DSS in academia for experimental purposes but, like many other technologies, they became one of the most innovative technologies in parallel with rapid developments in software engineering and advances in networks and hardware. Particularly in the last decade, awareness of PSS has been dramatically heightened by the increasing demand for a better, more reliable, and more transparent decision-making process (Klosterman, Siebert, Hoque, Kim, & Parveen, 2003). Urban planning as an activity has a quite different perspective from the PSS point of view. The unique nature of planning requires that the spatial dimension be considered within the context of PSS. Additionally, rapid changes in socio-economic structure cannot easily be monitored or controlled without an effective PSS.
Abstract:
This thesis conceptualises Use for IS (Information Systems) success. While Use in this study describes the extent to which an IS is incorporated into the user’s processes or tasks, the success of an IS is the measure of the degree to which the person using the system is better off. The conceptualisation of Use offers new perspectives on describing and measuring Use for IS success. We test the premises of the conceptualisation using empirical evidence in an Enterprise Systems (ES) context. Results from the empirical analysis contribute insights to the existing body of knowledge on the role of Use and demonstrate Use as an important factor and measure of IS success. System Use is a central theme in IS research. For instance, Use is regarded as an important dimension of IS success. Despite this recognition, the Use dimension of IS success reportedly suffers from an all too simplistic definition, misconception, poor specification of its complex nature, and inadequate measurement approaches (Bokhari 2005; DeLone and McLean 2003; Zigurs 1993). Given the above, Burton-Jones and Straub (2006) urge scholars to revisit the concept of system Use, consider a stronger theoretical treatment, and submit the construct to further validation in its intended nomological net. On those considerations, this study re-conceptualises Use for IS success. The new conceptualisation adopts a work-process, system-centric lens and draws upon the characteristics of modern system types, key user groups and their information needs, and the incorporation of IS in work processes. With these characteristics, the definition of Use and how it may be measured are systematically established. Use is conceptualised as a second-order measurement construct determined by three sub-dimensions: the attitude of its users, and the depth and amount of Use. The construct is positioned in a modified IS success research model, in an attempt to demonstrate its central role in determining IS success in an ES setting. A two-stage mixed-methods research design, incorporating a sequential explanatory strategy, was adopted to collect empirical data and to test the research model. The first empirical investigation involved an experiment and a survey of ES end users at a leading tertiary education institute in Australia. The second, a qualitative investigation, involved a series of interviews with real-world operational managers in large Indian private-sector companies to canvass their day-to-day experiences with ES. The research strategy adopted has a stronger quantitative leaning. The survey analysis results demonstrate the aptness of Use as an antecedent and a consequence of IS success and, furthermore, as a mediator between the quality of IS and the impacts of IS on individuals. Qualitative data analysis, on the other hand, is used to derive a framework for classifying the diversity of ES Use behaviour. The qualitative results establish that workers Use IS in their context to orientate, negotiate, or innovate. The implications are twofold. For research, this study contributes to cumulative IS success knowledge an approach for defining, contextualising, measuring, and validating Use. For practice, the research findings not only provide insights for educators incorporating ES in higher education, but also demonstrate how operational managers incorporate ES into their work practices.
Research findings leave the way open for future, larger-scale research into how industry practitioners interact with an ES to complete their work in varied organisational environments.
Abstract:
The paper describes the processes and outcomes of the ranking of LIS journal titles by Australia’s LIS researchers during 2007-8, firstly through the Australian federal government’s Research Quality Framework (RQF) process and then its replacement, the Excellence in Research for Australia (ERA) initiative. The requirement to rank the journal titles came from discussions held at the RQF panel meeting in February 2007 in Canberra, Australia. While it was recognised that the Web of Science (formerly ISI) journal impact approach to assessing research quality and impact might not work for LIS, it was apparent that this model would be the default if no other ranking of journal titles became available. Although an increasing number of LIS and related-discipline journals were appearing in the Web of Science rankings, the number was small, and the Australian LIS research community therefore decided to undertake the ranking exercise.
Abstract:
In vector space based approaches to natural language processing, similarity is commonly measured by taking the angle between two vectors representing words or documents in a semantic space. This is natural from a mathematical point of view, as the angle between unit vectors is, up to constant scaling, the only unitarily invariant metric on the unit sphere. However, similarity judgement tasks reveal that human subjects fail to produce data which satisfy the symmetry and triangle inequality requirements for a metric space. A possible conclusion, reached in particular by Tversky et al., is that some of the most basic assumptions of geometric models are unwarranted in the case of psychological similarity, a result which would impose strong limits on the validity and applicability of vector space based (and hence also quantum inspired) approaches to the modelling of cognitive processes. This paper proposes a resolution to this fundamental criticism of the applicability of vector space models of cognition. We argue that a pair of words implies a context which in turn induces a point of view, allowing a subject to estimate semantic similarity. Context is here introduced as a point of view vector (POVV) and the expected similarity is derived as a measure over the POVVs. Different pairs of words will invoke different contexts and different POVVs. Hence the triangle inequality ceases to be a valid constraint on the angles. We test the proposal on a few triples of words and outline further research.
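A minimal numerical sketch of the POVV idea follows, assuming the context vector acts as a diagonal re-weighting of the semantic space; the three-dimensional vectors, dimension labels, and values are toy illustrations, not data or the exact measure from the paper.

```python
import numpy as np

def cosine(u, v):
    """Standard angle-based similarity between two semantic vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def povv_similarity(u, v, povv):
    """Similarity under a point-of-view vector (POVV): the context
    re-weights each semantic dimension before the angle is taken."""
    w = np.sqrt(np.abs(povv))          # diagonal re-weighting of the space
    return cosine(u * w, v * w)

# Toy 3-dimensional space: dimensions = (finance, geography, nature)
bank  = np.array([0.9, 0.4, 0.3])
river = np.array([0.1, 0.8, 0.9])
money = np.array([1.0, 0.1, 0.0])

context_nature  = np.array([0.0, 0.5, 1.0])   # "river bank" point of view
context_finance = np.array([1.0, 0.1, 0.0])   # "money bank" point of view

print(cosine(bank, river))                            # context-free angle
print(povv_similarity(bank, river, context_nature))   # similarity rises in context
print(povv_similarity(bank, money, context_finance))  # different POVV, different geometry
```

Because each word pair selects its own POVV, angles measured under different contexts need not jointly satisfy the triangle inequality, which is the point the paper exploits.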
Abstract:
Item folksonomies, or tag information, are now widely available on the web. However, since tags are arbitrary words given by users, they contain a lot of noise, such as tag synonyms, semantic ambiguities, and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. In this paper, we propose to combine item taxonomy and folksonomy to reduce the noise of tags and make personalized item recommendations. The experiments conducted on a dataset collected from Amazon.com demonstrate the effectiveness of the proposed approaches. The results suggest that recommendation accuracy can be further improved if we consider the viewpoints and vocabularies of both experts and users.
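As a minimal sketch of how an expert taxonomy can denoise folksonomy tags, the following assumes a toy taxonomy and synonym table (both illustrative, not from the paper): tags that fold into taxonomy categories are kept, while unmappable personal tags are dropped.

```python
# Toy expert taxonomy and synonym table (illustrative assumptions)
taxonomy = {'laptop': 'Electronics > Computers',
            'notebook': 'Electronics > Computers',
            'sci-fi': 'Books > Science Fiction'}
synonyms = {'notebooks': 'notebook', 'scifi': 'sci-fi'}

def denoise(tags):
    """Map raw user tags to taxonomy categories; drop unmappable ones."""
    categories = []
    for tag in tags:
        tag = synonyms.get(tag, tag)       # fold synonyms together
        if tag in taxonomy:
            categories.append(taxonomy[tag])
    return categories

print(denoise(['notebooks', 'my-wishlist', 'scifi']))
# -> ['Electronics > Computers', 'Books > Science Fiction']
#    'my-wishlist' is a personal tag with no taxonomy mapping, so it is dropped
```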
Abstract:
Social tags are an important information source in Web 2.0. They can be used to describe users’ topic preferences as well as the content of items to make personalized recommendations. However, since tags are arbitrary words given by users, they contain a lot of noise, such as tag synonyms, semantic ambiguities, and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. To eliminate the noise of tags, in this paper we propose to use the multiple relationships among users, items, and tags to find the semantic meaning of each tag for each user individually. With the proposed approach, the relevant tags of each item and the tag preferences of each user are determined. In addition, user- and item-based collaborative filtering combined with a content-based filtering approach are explored. The effectiveness of the proposed approaches is demonstrated in experiments conducted on real-world datasets collected from Amazon.com and the CiteULike website.
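A minimal sketch of the per-user disambiguation idea, assuming toy (user, item, tag) assignments and a simple co-occurrence score (both illustrative, not the paper’s method): the items a user has labelled with a tag, and the other tags on those items, stand in for the tag’s meaning for that user.

```python
from collections import defaultdict

# Toy (user, item, tag) assignments, as in a social tagging system
assignments = [('u1', 'book_ml', 'python'), ('u1', 'book_ml', 'learning'),
               ('u2', 'pet_snake', 'python'), ('u2', 'pet_snake', 'animal')]

def tag_meaning(user, tag):
    """Items this user tagged with this tag act as evidence of its sense."""
    items = {i for u, i, t in assignments if u == user and t == tag}
    related = defaultdict(float)
    for u, i, t in assignments:
        if i in items and t != tag:
            related[t] += 1.0          # co-occurring tags describe the sense
    return dict(related)

print(tag_meaning('u1', 'python'))   # {'learning': 1.0} -> programming sense
print(tag_meaning('u2', 'python'))   # {'animal': 1.0}   -> reptile sense
```

The same surface tag thus resolves to different meanings for different users, which is what lets the downstream filtering steps match items to each user’s actual preference.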
Abstract:
Social tags in Web 2.0 are becoming another important information source for describing the content of items as well as profiling users’ topic preferences. However, as arbitrary words given by users, tags contain a lot of noise, such as tag synonyms, semantic ambiguity, and a large number of personal tags used by only one user, which makes it challenging to use tags effectively for item recommendations. To solve these problems, this paper proposes to use a set of related tags, along with their weights, to represent the semantic meaning of each tag for each user individually. Hybrid recommendation generation approaches based on the weighted tags are proposed. We have conducted experiments using a real-world dataset obtained from Amazon.com. The experimental results show that the proposed approaches outperform other state-of-the-art approaches.
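A minimal sketch of the weighted-tag idea, assuming a toy expansion table (the related tags, weights, and scoring rule are illustrative, not the paper’s): each raw tag is replaced by a weighted set of related tags, and items are scored by the weighted overlap between the expanded user and item profiles.

```python
# Toy per-tag expansion: tag -> {related tag: weight} (illustrative)
expansion = {'ml': {'machine-learning': 1.0, 'ai': 0.6},
             'ai': {'ai': 1.0, 'machine-learning': 0.5}}

def expand(tags):
    """Replace each tag by its weighted set of related tags."""
    weighted = {}
    for t in tags:
        for rel, w in expansion.get(t, {t: 1.0}).items():
            weighted[rel] = max(weighted.get(rel, 0.0), w)
    return weighted

def score(user_tags, item_tags):
    """Weighted overlap between expanded user and item tag profiles."""
    u, i = expand(user_tags), expand(item_tags)
    return sum(u[t] * i[t] for t in u.keys() & i.keys())

print(score(['ml'], ['ai']))   # nonzero despite no exact tag match
```

The expansion lets a user tag and an item tag match through shared related tags, which is how synonymy and personal vocabulary stop blocking recommendations.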
Abstract:
What informs members of the church community as they learn? Do the ways people engage with information differ according to the circumstances in which they learn? Informed learning, or the ways in which people use information in the learning experience and the degree to which they are aware of that use, has become a focus of contemporary information literacy research. This essay explores the nature of informed learning in the context of the church as a learning community. It is anticipated that insights resulting from this exploration may help church organisations, church leaders, and lay people to consider how information can be used to grow faith, develop relationships, manage the church, and respond to religious knowledge, in ways that support the pursuit of spiritual wellness and the cultivation of lifelong learning. Information professionals within the church community and the broader information profession are encouraged to foster their awareness of the impact that engagement with information has on the learning experience and on the prioritising of lifelong learning in community contexts.
Abstract:
Purpose: The purpose of this paper is to explain variations in discretionary information shared between buyers and key suppliers. The paper also aims to examine how the extent of information shared affects buyers’ performance in terms of resource usage, output, and flexibility.
Design/methodology/approach: The data for the paper comprise 221 Finnish and Swedish non-service companies obtained through a mail survey. The hypothesized relationships were tested using partial least squares modelling with reflective and formative constructs.
Findings: The results of the study suggest that (environmental and demand) uncertainty and interdependency can to some degree explain the extent of information shared between a buyer and a key supplier. Furthermore, information sharing improves buyers’ performance with respect to resource usage, output, and flexibility.
Research limitations/implications: A limitation of the paper relates to the data, which only included buyers. A better approach would have been to collect data from both buyers and key suppliers.
Practical implications: Companies face a wide range of supply chain solutions that enable and encourage collaboration across organizations. This paper suggests a more selective and balanced approach toward adopting the solutions offered, as the benefits are contingent on a number of factors such as uncertainty. Also, the risks of information sharing are far too high for a one-size-fits-all approach.
Originality/value: The paper illustrates the applicability of transaction cost theory to the contemporary era of e-commerce. With this finding, transaction cost economics can provide a valuable lens with which to view and interpret interorganizational information sharing, a topic that has received much attention in recent years.
Abstract:
We examine the impact of individual-specific information processing strategies (IPSs) regarding the inclusion/exclusion of attributes on the parameter estimates and behavioural outputs of models of discrete choice. Current practice assumes that individuals employ a homogeneous IPS with regard to how they process the attributes of stated choice (SC) experiments. We show how information collected outside the SC experiment, on whether respondents either ignored or considered each attribute, may be used in the estimation process, and how such information provides outputs that are IPS-segment specific. We contend that accounting for the inclusion/exclusion of attributes will result in behaviourally richer population parameter estimates.
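One common way to operationalise this is to zero out the utility contribution of attributes a respondent reported ignoring before estimating a multinomial logit. The sketch below shows that masking scheme; the data shapes, names, and toy data are illustrative assumptions, not the authors’ specification.

```python
import numpy as np
from scipy.optimize import minimize

def mnl_loglik(beta, X, y, attend):
    """Multinomial logit log-likelihood with attribute non-attendance.
    X: (n_obs, n_alts, n_attrs) attribute levels per alternative
    y: (n_obs,) index of the chosen alternative
    attend: (n_obs, n_attrs) 1 if the respondent considered the attribute."""
    b = attend * beta                       # ignored attributes add no utility
    V = np.einsum('nka,na->nk', X, b)       # systematic utilities
    V -= V.max(axis=1, keepdims=True)       # numerical stability
    P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)
    return np.log(P[np.arange(len(y)), y]).sum()

# Toy data: 100 choice observations, 3 alternatives, 2 attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3, 2))
y = rng.integers(0, 3, size=100)
attend = rng.integers(0, 2, size=(100, 2))  # stated attendance, exogenous

fit = minimize(lambda b: -mnl_loglik(b, X, y, attend), x0=np.zeros(2))
print(fit.x)    # attendance-adjusted attribute coefficients
```

Because the mask varies by respondent, the resulting coefficients reflect only those who actually processed each attribute, which is the behavioural enrichment the abstract argues for.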
Abstract:
The economic environment of today can be characterized as highly dynamic and competitive, if not in constant flux. Globalization and the Information Technology (IT) revolution are perhaps the main contributing factors to this observation. While companies have to some extent adapted to the current business environment, new pressures such as the recent increase in environmental awareness and its likely effects on regulations are underway. Hence, in the light of market and competitive pressures, companies must constantly evaluate and, if necessary, update their strategies to sustain and increase the value they create for shareholders (Hunt and Morgan, 1995; Christopher and Towill, 2002). One way to create greater value is to become more efficient in producing and delivering goods and services to customers, which can lead to a strategy known as cost leadership (Porter, 1980). Even though Porter (1996) notes that in the long run cost leadership may not be a sufficient strategy for competitive advantage, operational efficiency is certainly necessary and should therefore be on the agenda of every company.

Better workflow management, technology, and resource utilization can lead to greater internal operational efficiency, which explains why, for example, many companies have recently adopted Enterprise Resource Planning (ERP) systems: integrated software packages that streamline business processes. However, as more and more companies approach internal operational excellence, the focus for finding inefficiencies and cost-saving opportunities is moving beyond the boundaries of the firm. Today many firms in the supply chain are engaging in collaborative relationships with customers, suppliers, and third parties (services) in an attempt to cut down on costs related to, for example, inventory and production, as well as to facilitate synergies. Thus, recent years have witnessed fluidity and blurring of organizational boundaries (Coad and Cullen, 2006).

The Information Technology (IT) revolution of the late 1990s has played an important role in bringing organizations closer together. In their efforts to become more efficient, companies first integrated their information systems to speed up transactions such as ordering and billing. Later, collaboration on a multidimensional scale including logistics, production, and Research & Development became evident as companies expected substantial benefits from collaboration. However, one could also argue that the recent popularity of the concepts falling under Supply Chain Management (SCM), such as Vendor Managed Inventory and Collaborative Planning, Forecasting and Replenishment, owes to the marketing efforts of the software vendors and consultants who provide these solutions. Nevertheless, reports from professional organizations as well as academia indicate that the trend towards interorganizational collaboration is gaining wider ground. For example, the ARC Advisory Group, a research organization on supply chain solutions, estimated that the market for SCM, which includes various kinds of collaboration tools and related services, would grow at an annual rate of 7.4% during the years 2004-2008, reaching $7.4 billion in 2008 (Engineeringtalk 2004).
Abstract:
It is a big challenge to clearly identify the boundary between positive and negative streams for information filtering systems. Several attempts have used negative feedback to address this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on the RCV1 data collection, and substantial experiments show that the proposed approach achieves encouraging performance, and that the performance is consistent for adaptive filtering as well.
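A minimal sketch of the three-way term categorisation and revision idea follows, assuming simple support scores and multiplicative weight updates; the thresholds and update factors are illustrative, not the paper’s values.

```python
def categorise_terms(pos_support, neg_support, margin=0.2):
    """Split terms by comparing their support in positive documents
    against their support in selected negative 'offender' documents."""
    categories = {}
    for term in set(pos_support) | set(neg_support):
        p = pos_support.get(term, 0.0)
        n = neg_support.get(term, 0.0)
        if p - n > margin:
            categories[term] = 'positive_specific'
        elif n - p > margin:
            categories[term] = 'negative_specific'
        else:
            categories[term] = 'general'
    return categories

def revise_weights(weights, categories, boost=1.2, decay=0.5):
    """Boost positive-specific terms, penalise negative-specific ones,
    and leave general terms untouched."""
    for term, cat in categories.items():
        if cat == 'positive_specific':
            weights[term] = weights.get(term, 0.0) * boost
        elif cat == 'negative_specific':
            weights[term] = weights.get(term, 0.0) * decay
    return weights
```

Iterating these two steps, reselecting offenders after each weight revision, gives the shape of the iterative learning algorithm the abstract describes.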
Abstract:
Intelligent agents are an advanced technology utilized in Web Intelligence. When searching for information in a distributed Web environment, information is retrieved by multiple agents on the client site and fused on the broker site. Current information fusion techniques rely on the cooperation of agents to provide statistics. Such techniques are computationally expensive and unrealistic in the real world. In this paper, we introduce a model that uses a world ontology constructed from the Dewey Decimal Classification to acquire user profiles. By searching with specific and exhaustive user profiles, information fusion techniques no longer rely on the statistics provided by agents. The model has been successfully evaluated using the large INEX data set, simulating the distributed Web environment.
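A minimal sketch of an ontology-based profile in this spirit: documents a user marks relevant are mapped to Dewey Decimal Classification (DDC) classes, and interest propagates, attenuated, up the class hierarchy. The class codes, the parent table, and the decay factor are illustrative assumptions, not the paper’s implementation.

```python
from collections import defaultdict

# Tiny fragment of the DDC hierarchy: class -> parent class (illustrative)
DDC_PARENT = {'006.3': '006', '006': '000', '025.04': '025', '025': '020'}

def build_profile(doc_classes, decay=0.5):
    """doc_classes: list of DDC codes, one per relevant document."""
    profile = defaultdict(float)
    for code in doc_classes:
        weight = 1.0
        while code is not None:
            profile[code] += weight       # interest accrues to the class...
            weight *= decay               # ...and, attenuated, to its ancestors
            code = DDC_PARENT.get(code)
    return dict(profile)

print(build_profile(['006.3', '006.3', '025.04']))
```

Because the profile is expressed against a shared world ontology rather than per-agent statistics, a broker can fuse results from many agents without requiring them to cooperate.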
Abstract:
This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter a sheer volume of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments have been conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results based on the RCV1 corpus show that the T-SM model significantly outperforms the other types of "two-stage" IF models.
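A minimal sketch of such a two-stage pipeline, assuming documents are represented as term sets and patterns as weighted term sets; the scoring functions, thresholds, and data structures are illustrative, not the T-SM model itself.

```python
def stage1_term_filter(doc_terms, topic_terms, threshold=2):
    """Rough, cheap screen: keep the document if enough topic terms appear."""
    return len(doc_terms & topic_terms) >= threshold

def stage2_pattern_score(doc_terms, patterns):
    """Patterns are (termset, weight) pairs mined from training documents;
    a pattern contributes its weight if it is contained in the document."""
    return sum(w for termset, w in patterns if termset <= doc_terms)

def two_stage_filter(docs, topic_terms, patterns, cutoff=1.0):
    survivors = [d for d in docs if stage1_term_filter(d, topic_terms)]
    return sorted((d for d in survivors
                   if stage2_pattern_score(d, patterns) >= cutoff),
                  key=lambda d: -stage2_pattern_score(d, patterns))

docs = [{'mining', 'pattern', 'data'}, {'football', 'score'}]
patterns = [(frozenset({'pattern', 'mining'}), 2.0)]
print(two_stage_filter(docs, {'data', 'mining', 'pattern'}, patterns))
```

The design point is the division of labour: the cheap term screen handles overload by discarding most documents, so the costlier pattern matcher only runs on plausible candidates to resolve the mismatch problem.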