814 results for Databases and Information Systems


Relevance: 100.00%

Publisher:

Abstract:

Cooper, J. & Urquhart, C. (2005). The information needs and information-seeking behaviours of home-care workers and clients receiving home care. Health Information and Libraries Journal, 22(2), 107-116. Sponsorship: AHRC

Relevance: 100.00%

Publisher:

Abstract:

The study of intuition is an emerging area of research in psychology, the social sciences, and business studies. It is increasingly of interest to the study of management, for example in decision-making as a counterpoint to structured approaches. Recently, work has been undertaken to conceptualize a construct for the intuitive nature of technology. However, to date there is no common understanding of the term intuition in information systems (IS) research. This paper extends the study of intuition in IS research by using exploratory research to categorize the use of the word "intuition" and related terms in papers published in two prominent IS journals over a ten-year period. The entire text of MIS Quarterly and Information Systems Research was reviewed for the years 1999 through 2008 using searchable PDF versions of these publications. As far as could be determined, this is the first application of this approach in the analysis of the text of IS academic journals. The use of the word "intuition" and related terms was categorized using coding consistent with Grounded Theory. The focus of this research was on the first two stages of Grounded Theory analysis: the development of codes and constructs. Saturation of coding was not reached; an extended review of these publications would be required to enable theory development. Over 400 incidents of the use of "intuition" and related terms were found in the articles reviewed. The most prominent use of the term "intuition" was coded as "Intuition as Authority", in which intuition was used to validate a research objective or finding; this represented approximately 37 per cent of the codes assigned. The second most common coding occurred in research articles with mathematical analysis, representing about 19 per cent of the codes assigned, for example where a mathematical formulation or result was described as "intuitive". The possibly most impactful use of the term "intuition" was "Intuition as Outcome", representing approximately 7 per cent of all coding, which characterized research results as adding to the intuitive understanding of a research topic or phenomenon. This research contributes to a greater theoretical understanding of intuition, enabling insight into the use of intuition and the eventual development of a theory on the use of intuition in academic IS research publications. It also provides potential benefits to practitioners by providing insight into and validation of the use of intuition in IS management. Research directions include the creation of reflective and/or formative constructs for intuition in information systems research.
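
A minimal sketch of the kind of full-text term search the abstract describes, assuming plain-text extracts of the articles are available; the term list, file layout and tally are illustrative, not the paper's actual coding scheme.

```python
# Illustrative sketch (not the paper's code): locate occurrences of "intuition"
# and related terms in plain-text article extracts, so each hit can then be
# hand-coded into categories such as "Intuition as Authority".
import re
from collections import Counter
from pathlib import Path

TERMS = re.compile(r"\b(intuition|intuitive(?:ly)?|counter-?intuitive)\b", re.IGNORECASE)

def find_hits(text: str, context: int = 60):
    """Return each matching term with a little surrounding context for coding."""
    for m in TERMS.finditer(text):
        start, end = max(0, m.start() - context), m.end() + context
        yield m.group(0).lower(), text[start:end].replace("\n", " ")

def tally(folder: str) -> Counter:
    """Count hits per term across all .txt files in a folder."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        for term, _snippet in find_hits(path.read_text(errors="ignore")):
            counts[term] += 1
    return counts

if __name__ == "__main__":
    print(tally("articles"))  # e.g. Counter({'intuitive': ..., 'intuition': ...})
```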

Relevance: 100.00%

Publisher:

Abstract:

The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished with a single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can not only be proven to have a partial-order structure, but also have a lattice structure and are complete. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects with their attribute n-tuple codings contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies. Though the scope of the work does not mainly focus on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and least upper bound, which provides the validity for exploring queries in this regard.
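
A minimal sketch of the simplification operation as the abstract describes it (filter elements, then amalgamate blocks), modelling a partition as a set of disjoint blocks; the function names and the example partition are illustrative assumptions, not the thesis's formalism.

```python
# Illustrative sketch: a partition of a finite set is modeled as a frozenset of
# disjoint, non-empty blocks (frozensets). "Simplification" first filters out a
# set of elements, then amalgamates (unions) chosen groups of remaining blocks.
from itertools import chain

def filter_elements(partition, removed):
    """Drop the removed elements from every block, discarding emptied blocks."""
    return frozenset(b - removed for b in partition if b - removed)

def amalgamate(partition, groups):
    """Union each listed group of blocks; blocks not listed stay as they are."""
    merged = [frozenset(chain.from_iterable(g)) for g in groups]
    untouched = [b for b in partition if not any(b in g for g in groups)]
    return frozenset(merged + untouched)

def simplify(partition, removed, groups):
    """Simplification = filtering followed by amalgamation."""
    return amalgamate(filter_elements(partition, removed), groups)

# Example: a partition of {1,...,6}, simplified by removing 6 and merging two blocks.
P = frozenset({frozenset({1, 2}), frozenset({3}), frozenset({4, 5}), frozenset({6})})
Q = simplify(P, removed=frozenset({6}),
             groups=[{frozenset({1, 2}), frozenset({3})}])
print(Q)  # {{1, 2, 3}, {4, 5}} -- a coarser, "simplified" partition
```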

Relevance: 100.00%

Publisher:

Abstract:

Information systems are corporate resources; information systems development must therefore be aligned with corporate strategy. This thesis proposes that effective strategic alignment of information systems requires information systems development, information systems planning and strategic management to be united. Literature in these areas is examined, breaching the academic boundaries which separate them, to contribute a synthesised approach to the strategic alignment of information systems development. Previous work in information systems planning has extended information systems development techniques, such as data modelling, into strategic planning activities, neglecting the techniques of strategic management. Examination of strategic management in this thesis identifies parallel trends in strategic management and information systems development; the premises of the learning school of strategic management are similar to those of soft systems approaches to information systems development. It is therefore proposed that strategic management can be supported by a soft systems approach. Strategic management tools and techniques frame individual views of a strategic situation; soft systems approaches can integrate these diverse views to explore the internal and external environments of an organisation. The information derived from strategic analysis justifies the need for an information system and provides a starting point for information systems development. This is demonstrated by a composite framework which enables each information system to be justified according to its direct contribution to corporate strategy. The proposed framework was developed through action research conducted in a number of organisations of varying types. This suggests that the framework can be widely used to support the strategic alignment of information systems development, thereby contributing to organisational success.

Relevance: 100.00%

Publisher:

Abstract:

This research concerns information systems and information systems development. The thesis describes an approach to information systems development called Multiview. This is a methodology which seeks to combine the strengths of a number of different existing approaches in a coherent manner. Many of these approaches are radically different in terms of concepts, philosophy, assumptions, methods, techniques and tools. Three case studies are described, presenting Multiview 'in action'. The first is used mainly to expose the strengths and weaknesses of an early version of the approach discussed in the thesis. Tools and techniques which aim to strengthen the approach are then described. Two further case studies are presented to illustrate the use of this second version of Multiview. It is not put forward as an 'ideal methodology', and the case studies expose some of the difficulties and practical problems of information systems work and of the use of the methodology. A more contingency-based approach to information systems development is advocated, using Multiview as a framework rather than a prescriptive tool. Each information systems project, and each use of the framework, is unique, contingent on the particular problem situation. The skills of different analysts, the backgrounds of users and the situations in which they are constrained to work always have to be taken into account in any project. The realities of the situation will cause departures from the 'ideal methodology' in order to allow for the exigencies of the real world. Multiview can therefore be said to be an approach used to explore the application area in order to develop an information system.

Relevance: 100.00%

Publisher:

Abstract:

Many systems and applications continuously produce events. These events record the status of a system and trace its behaviour. By examining these events, system administrators can check for potential problems. If the temporal dynamics of the systems are further investigated, underlying patterns can be discovered. The uncovered knowledge can be leveraged to predict future system behaviour or to mitigate potential risks. Moreover, system administrators can use the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite recent advances in data mining, its application to system event mining is still at a rudimentary stage. Most work still focuses on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective, and they provide little actionable knowledge to help system administrators better manage the systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) provide concise yet comprehensive summaries of the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect abnormal behaviours of the systems. Owing to the richness of the event logs, all of these directions can be addressed in a data-driven manner; in this way, the robustness of the systems can be enhanced and the goal of autonomous management approached. This dissertation focuses on these directions, leveraging temporal mining techniques to facilitate system management. More specifically, three concrete topics are discussed: event summarization, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, experimental evaluations are presented to demonstrate the effectiveness and efficacy of the corresponding solutions.
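
The dissertation's specific methods are not given in the abstract; as a loose illustration of the streaming anomaly detection direction, here is a rolling z-score detector over per-interval event counts (the window size, threshold and sample stream are invented for the example).

```python
# Illustrative sketch (not the dissertation's actual method): flag anomalous
# event counts in a stream using a rolling mean/std z-score over a window.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(counts, window=20, threshold=3.0):
    """Yield (index, count) pairs whose z-score against the trailing window is large."""
    history = deque(maxlen=window)
    for i, c in enumerate(counts):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(c - mu) / sigma > threshold:
                yield i, c
        history.append(c)

# Example: a steady stream of per-minute event counts with one burst.
stream = [10, 12, 11, 9, 10, 11, 10, 12, 11, 10] * 3 + [95] + [10, 11]
print(list(detect_anomalies(stream)))  # -> [(30, 95)]
```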

Relevance: 100.00%

Publisher:

Abstract:

In the last decade, a large number of social media services have emerged and become widely used in people's daily lives as important tools for sharing and acquiring information. With a substantial amount of user-contributed text data on social media, it becomes necessary to develop methods and tools for analysing this emerging type of text, in order to better utilize it to deliver meaningful information to users. Previous work on text analytics in the last several decades has mainly focused on traditional types of text such as emails, news and academic literature, and several issues critical to text data on social media have not been well explored: 1) how to detect sentiment in text on social media; 2) how to make use of social media's real-time nature; 3) how to address information overload for flexible information needs. In this dissertation, we focus on these three problems. First, to detect the sentiment of text on social media, we propose a non-negative matrix tri-factorization (tri-NMF) based dual active supervision method to minimize human labeling efforts for the new type of data. Second, to make use of social media's real-time nature, we propose approaches to detect events from text streams on social media. Third, to address information overload for flexible information needs, we propose two summarization frameworks: a dominating-set-based summarization framework and a learning-to-rank-based summarization framework. The dominating-set-based framework can be applied to different types of summarization problems, while the learning-to-rank-based framework uses existing training data to guide new summarization tasks. In addition, we integrate these techniques in an application study of event summarization for sports games as an example of how to better utilize social media data.
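
A minimal sketch of the dominating-set idea behind the first summarization framework, under the assumption that posts are nodes and edges connect sufficiently similar posts; the greedy cover, the Jaccard similarity and the sample posts are illustrative choices, not the dissertation's exact algorithm.

```python
# Illustrative sketch: greedy approximation of a dominating set over a post
# similarity graph, used here as a simple extractive summarizer.
def greedy_dominating_set(texts, similarity, threshold=0.3):
    """Pick texts until every text is 'covered' by (similar to) a picked one."""
    n = len(texts)
    neighbors = [{j for j in range(n) if similarity(texts[i], texts[j]) >= threshold}
                 for i in range(n)]
    uncovered, summary = set(range(n)), []
    while uncovered:
        best = max(uncovered, key=lambda i: len(neighbors[i] & uncovered))
        summary.append(texts[best])
        uncovered -= neighbors[best]
    return summary

def jaccard(a, b):
    """Word-overlap similarity between two short texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

posts = ["team wins the final game", "what a great final game by the team",
         "traffic is terrible downtown today", "downtown traffic jam again today"]
print(greedy_dominating_set(posts, jaccard))  # one representative post per topic
```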

Relevance: 100.00%

Publisher:

Abstract:

The protection of cyberspace has become one of the highest security priorities of governments worldwide. The EU is no exception in this context, given its rapidly developing cyber security policy. Since the 1990s, three broad areas of policy interest have been created: cyber-crime, critical information infrastructures and cyber-defence. One of the main trends cutting across these areas is the importance that the private sector has come to assume within them. In the area of critical information infrastructure protection in particular, the private sector is seen as a key stakeholder, given that it currently operates most of the infrastructures in this area. As a result of this operative capacity, the private sector has come to be understood as the expert in network and information systems (NIS) security, whose knowledge is crucial for the regulation of the field. Adopting a Regulatory Capitalism framework, complemented by insights from Network Governance, we can identify the shifting role of the private sector in this field: from a victim in need of protection in the first phase, to a commercial actor bearing responsibility for ensuring network resilience in the second, to an active policy shaper in the third, participating in the regulation of NIS by providing technical expertise. By drawing insights from the above-mentioned frameworks, we can better understand how private actors are involved in shaping regulatory responses, as well as why they have been incorporated into these regulatory networks.

Relevance: 100.00%

Publisher:

Abstract:

With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy-preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
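
A minimal sketch of the spatial set-similarity join predicate (token-set similarity plus a distance constraint), using a naive nested loop rather than the MapReduce-based SpsJoin implementation; the record layout, thresholds and sample data are invented for the example.

```python
# Illustrative sketch (not the SpsJoin implementation): pair records whose
# token sets are similar AND whose coordinates are close.
from math import hypot

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def spatial_set_similarity_join(left, right, sim_threshold=0.5, max_dist=0.01):
    """Each record is (name_tokens, (lat, lon)); return matching pairs."""
    matches = []
    for tokens_l, (lat_l, lon_l) in left:
        for tokens_r, (lat_r, lon_r) in right:
            if (jaccard(tokens_l, tokens_r) >= sim_threshold and
                    hypot(lat_l - lat_r, lon_l - lon_r) <= max_dist):
                matches.append((tokens_l, tokens_r))
    return matches

venues = [({"joes", "coffee", "shop"}, (25.7617, -80.1918))]
businesses = [({"joes", "coffee"}, (25.7619, -80.1916)),
              ({"city", "bakery"}, (25.80, -80.20))]
print(spatial_set_similarity_join(venues, businesses))  # one deduplication match
```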

Relevance: 100.00%

Publisher:

Abstract:

Small-scale coffee producers worldwide remain vulnerable to price fluctuations after the 1999-2003 coffee crisis. One way to increase small-scale farmers' economic resilience is to produce a more expensive product, such as quality coffee. There is growing demand in coffee-producing and coffee-importing countries for user-friendly tools that facilitate the marketing of quality coffee. The purpose of this study is to develop a prototypical quality coffee marketing tool in the form of a GIS model that identifies regions for producing quality coffee in a country not usually associated with quality coffee, Honduras. Maps of areas for growing quality coffee were produced with information on climate, soils, topography, areas vulnerable to environmental degradation, the location of current quality coffee farms, and infrastructure. The maps depict suitable coffee-growing land in portions of eight western Honduran departments.
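
The abstract does not give the model's rules; as a loose illustration of multi-criteria GIS suitability mapping of this kind, this sketch combines criteria layers with invented names and weights into a per-cell suitability score.

```python
# Loose illustration (criteria names, weights and values invented, not the
# study's actual model): combine criteria layers, each scaled 0..1 per grid
# cell, into a single suitability score and flag candidate cells.
def suitability(layers, weights):
    """layers: {name: 2-D list of 0..1 values}; returns a weighted-sum grid."""
    rows = len(next(iter(layers.values())))
    cols = len(next(iter(layers.values()))[0])
    return [[sum(weights[n] * layers[n][r][c] for n in weights)
             for c in range(cols)] for r in range(rows)]

layers = {
    "climate": [[0.9, 0.4], [0.8, 0.2]],
    "soil":    [[0.7, 0.5], [0.9, 0.3]],
    "slope":   [[0.8, 0.9], [0.6, 0.1]],
}
weights = {"climate": 0.5, "soil": 0.3, "slope": 0.2}
grid = suitability(layers, weights)
suitable = [[1 if v >= 0.7 else 0 for v in row] for row in grid]
print(suitable)  # cells flagged as candidate quality-coffee areas
```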

Relevance: 100.00%

Publisher:

Abstract:

A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want and desire. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data, and those using GIS to meet specific needs, to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets that are of common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through Data Clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open access licenses, archival services and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.

Relevance: 100.00%

Publisher:

Abstract:

The Chihuahuan Desert is one of the most biologically diverse ecosystems in the world, but it suffers serious degradation because of changes in fire regimes that have resulted in large catastrophic fires. My study was conducted in the Sierra La Mojonera (SLM) natural protected area in Mexico. The purpose of this study was to implement FARSITE fire modeling as a fire management tool to develop an integrated fire management plan at SLM. Firebreaks proved to contain 100% of wildfire outbreaks. The rosetophilous scrub experienced the fastest rate of fire spread, and lowland creosote bush scrub experienced the slowest. March experienced the fastest rate of fire spread, while September experienced the slowest. The results of my study provide a tool for wildfire management through the use of geospatial technologies and, in particular, FARSITE fire modeling in SLM and Mexico.

Relevance: 100.00%

Publisher:

Abstract:

In the modern and dynamic construction environment it is important to access information in a fast and efficient manner in order to improve the decision-making processes of construction managers. This capability is, in most cases, straightforward with today's technologies for data types with an inherent structure that reside primarily in established database structures, such as estimating and scheduling software. However, previous research has demonstrated that a significant percentage of construction data is stored in semi-structured or unstructured formats (text, images, etc.) and that manually locating and identifying such data is a very hard and time-consuming task. This paper focuses on construction site image data and presents a novel image retrieval model that interfaces with established construction data management structures. This model is designed to retrieve images from related objects in project models or construction databases using location, date, and material information (extracted from the image content with pattern recognition techniques).
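
A minimal sketch of the retrieval step only, with invented record fields: filtering an index of site photographs by location, date range and material label (in the paper, the material information is extracted from image content with pattern recognition).

```python
# Illustrative sketch (record fields invented, not the paper's data model):
# filter an index of site photos by location, date range, and material.
from dataclasses import dataclass
from datetime import date

@dataclass
class SiteImage:
    path: str
    location: str      # e.g. a work zone or element of the project model
    taken: date
    materials: set     # labels extracted from the image content

def retrieve(index, location, start, end, material):
    """Return images from one location, within a date range, showing a material."""
    return [img for img in index
            if img.location == location
            and start <= img.taken <= end
            and material in img.materials]

index = [SiteImage("img_001.jpg", "zone-A", date(2024, 3, 5), {"concrete", "rebar"}),
         SiteImage("img_002.jpg", "zone-B", date(2024, 3, 6), {"steel"})]
print(retrieve(index, "zone-A", date(2024, 3, 1), date(2024, 3, 31), "concrete"))
```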

Relevance: 100.00%

Publisher:

Abstract:

Several countries have made large investments in building historical Geographical Information Systems (GIS) databases containing census and other quantitative statistics over long periods of time. Making good use of these databases requires approaches that explore spatial and temporal change.

Relevance: 100.00%

Publisher:

Abstract:

Information modelling is a topic that has been researched a great deal, but many questions around it remain unsolved. An information model is essential in the design of a database, which is the core of an information system. Currently, most databases only deal with information that represents facts, or asserted information. The ability to capture semantic aspects has to be improved, and other types of information, such as temporal and intentional information, should also be considered. Semantic Analysis, a method of information modelling, offers a way to handle these various aspects of information. It employs domain knowledge and communication acts as sources for information modelling, and lends itself to a uniform structure whereby semantic, temporal and intentional information can be captured, providing a sound foundation for building a semantic temporal database.
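
A loose illustration (not the thesis's actual schema) of a uniform record that carries a semantic assertion together with its temporal validity and the intention behind it; the field names and sample facts are invented.

```python
# Illustrative sketch: one uniform record type holding the semantic content of
# a fact, the period over which it holds (temporal), and the communication act
# behind it (intentional).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Fact:
    agent: str                 # who is responsible for the assertion
    predicate: str             # semantic content, e.g. "employs"
    subject: str
    obj: str
    start: date                # temporal aspect: when the fact begins to hold
    finish: Optional[date]     # None while the fact still holds
    intention: str             # intentional aspect: assertion, promise, request...

facts = [
    Fact("HR dept", "employs", "Acme Ltd", "J. Smith",
         date(2021, 1, 4), None, "assertion"),
    Fact("J. Smith", "requests", "Acme Ltd", "leave in August",
         date(2024, 5, 2), date(2024, 5, 2), "request"),
]
current = [f for f in facts if f.finish is None]  # facts asserted as still holding
print(current)
```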