12 results for Information science|Computer science

in Digital Commons at Florida International University


Relevance: 100.00%

Publisher:

Abstract:

This dissertation examines the consequences of Electronic Data Interchange (EDI) use on interorganizational relations (IR) in the retail industry. EDI is a type of interorganizational information system that facilitates the exchange of business documents in structured, machine-processable form. The research model links EDI use and three IR dimensions: structural, behavioral, and outcome. Based on relevant literature from organizational theory and marketing channels, fourteen hypotheses were proposed for the relationships among EDI use and the three IR dimensions.

Data were collected through self-administered questionnaires from key informants in 97 retail companies (19% response rate). The hypotheses were tested using multiple regression analysis. The analysis supports the following hypotheses: (a) EDI use is positively related to information intensity and formalization, (b) formalization is positively related to cooperation, (c) information intensity is positively related to cooperation, (d) conflict is negatively related to performance and satisfaction, (e) cooperation is positively related to performance, and (f) performance is positively related to satisfaction. The results support the general premise of the model that the relationship between EDI use and satisfaction among channel members has to be viewed within an interorganizational context.

Research on EDI is still in a nascent stage. By identifying and testing relevant interorganizational variables, this study offers insights for practitioners managing boundary-spanning activities in organizations using or planning to use EDI. Further, the dissertation provides avenues for future research aimed at understanding the consequences of this interorganizational information technology.
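As a purely illustrative sketch of the kind of test described above, the snippet below fits a multiple regression of cooperation on formalization and information intensity. The data are synthetic (the study's survey data are not reproduced), and the variable names are assumptions; positive, significant coefficients would be the pattern consistent with hypotheses (b) and (c).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 97  # same number of responding companies as the study; the values themselves are synthetic
formalization = rng.normal(size=n)
information_intensity = rng.normal(size=n)
cooperation = 0.4 * formalization + 0.3 * information_intensity + rng.normal(size=n)
df = pd.DataFrame({"cooperation": cooperation,
                   "formalization": formalization,
                   "information_intensity": information_intensity})

# One regression per dependent variable; here, the cooperation equation.
model = smf.ols("cooperation ~ formalization + information_intensity", data=df).fit()
print(model.summary())
```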

Relevance: 100.00%

Publisher:

Abstract:

The ultimate intent of this dissertation was to broaden and strengthen our understanding of IT implementation by directing research efforts toward the dynamic nature of the implementation process. More specifically, the effort was to open the "black box" and provide the story that explains how and why contextual conditions and implementation tactics interact to produce project outcomes. In pursuit of this objective, the dissertation was aimed at theory building and adopted a case study methodology combining qualitative and quantitative evidence. Specifically, it examined the implementation process, use, and consequences of three clinical information systems at Jackson Memorial Hospital, a large tertiary care teaching hospital.

As a preliminary step toward the development of a more realistic model of system implementation, the study proposes a new set of research propositions reflecting the dynamic nature of the implementation process.

Findings clearly reveal that successful implementation projects are likely to be those where key actors envision end goals, anticipate challenges ahead, and recognize the presence of and seize opportunities. It was also found that IT implementation is characterized by the systems-theory notion of equifinality; that is, there are likely several equally effective ways to achieve a given end goal. The selection of a particular implementation strategy appears to be a rational process where actions and decisions are largely influenced by the degree to which key actors recognize the mediating role of each tactic and are motivated to act. The nature of the implementation process is also characterized by the concept of "duality of structure"; that is, context and actions mutually influence each other. Another key finding suggests that there is no underlying program that regulates the process of change and moves it from one given point toward a subsequent and already prefigured end. For this reason, the implementation process cannot be thought of as a series of activities performed in a sequential manner, as conceived in stage models. Finally, it was found that IT implementation is punctuated by a certain indeterminacy. Results suggest that only when substantial efforts are focused on what to look for and think about is it less likely that unfavorable and undesirable consequences will occur.

Relevance: 100.00%

Publisher:

Abstract:

Today, many organizations are turning to new approaches to building and maintaining information systems (I/S) to cope with a highly competitive business environment. Current anecdotal evidence indicates that the approaches being used improve the effectiveness of software development by encouraging active user participation throughout the development process. Unfortunately, very little is known about how the use of such approaches enhances the ability of team members to develop I/S that are responsive to changing business conditions.

Drawing from predominant theories of organizational conflict, this study develops and tests a model of conflict among members of a development team. The model proposes that development approaches provide the relevant context conditioning the management and resolution of conflict in software development, which, in turn, are crucial for the success of the development process.

Empirical testing of the model was conducted using data collected through a combination of interviews with I/S executives and surveys of team members and business users at nine organizations. Results of path analysis provide support for the model's main prediction that integrative conflict management and distributive conflict management can contribute to I/S success by influencing differently the manifestation and resolution of conflict in software development. Further, analyses of variance indicate that object-oriented development, when compared to rapid and structured development, appears to produce the lowest levels of conflict management, conflict resolution, and I/S success.

The proposed model and findings suggest academic implications for understanding the effects of different conflict management behaviors on software development outcomes, and practical implications for better managing the software development process, especially in user-oriented development environments.
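Path analysis of the kind mentioned above can be approximated as a chain of ordinary regressions along the hypothesized causal paths. The sketch below is only illustrative: the variable names, synthetic data, and two-step chain are assumptions, not the dissertation's actual model or measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200  # synthetic team-member records; the study's real data are not reproduced
df = pd.DataFrame({"conflict_mgmt": rng.normal(size=n)})
df["conflict_resolution"] = 0.5 * df["conflict_mgmt"] + rng.normal(size=n)
df["is_success"] = 0.6 * df["conflict_resolution"] + rng.normal(size=n)

# Each hypothesized path is estimated as its own regression equation.
for formula in ["conflict_resolution ~ conflict_mgmt",
                "is_success ~ conflict_resolution + conflict_mgmt"]:
    fit = smf.ols(formula, data=df).fit()
    print(formula, dict(fit.params.round(2)))
```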

Relevance: 100.00%

Publisher:

Abstract:

Providing different levels of customization is a desirable goal because some Web users are able to design a visualization template from scratch, while others need visualizations generated automatically by changing a few parameters. Our system allows the automatic generation of visualizations given the semantics of the data, as well as static or pre-specified visualizations, by creating an interface language. We address information visualization with the Web in mind, where presenting the retrieved information is a challenge.

We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model integrated with HTML to create a powerful language that facilitates the construction of Web-based database reports.

In contrast to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to connect a database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front ends typically display database objects in a flat view, making it difficult for users to grasp the contents and structure of their results. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the network and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, thereby increasing the speed of development. In addition, using only a Web browser, end users can retrieve data from remote databases and make the necessary modifications and manipulations of the data through Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build syntactically and semantically valid SQL queries and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
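The snippet below is a minimal sketch, not the dissertation's system, of the general idea of generating a SQL query from form-style parameters and checking every identifier against the schema before execution. The table, columns, and use of an in-memory sqlite3 database are all assumptions made for the example.

```python
import sqlite3

def build_query(conn, table, columns, filters):
    """Build a parameterized SELECT, rejecting any identifier not in the schema."""
    tables = {r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    if table not in tables:
        raise ValueError(f"unknown table: {table}")
    known = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    bad = [c for c in [*columns, *filters] if c not in known]
    if bad:
        raise ValueError(f"unknown columns: {bad}")
    where = " AND ".join(f"{c} = ?" for c in filters)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    return (sql + f" WHERE {where}" if where else sql), list(filters.values())

conn = sqlite3.connect(":memory:")  # toy stand-in for a real back-end database
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 19.99, 'open')")

sql, params = build_query(conn, "orders", ["id", "total"], {"status": "open"})
print(sql, params, list(conn.execute(sql, params)))
```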

Relevance: 100.00%

Publisher:

Abstract:

This thesis chronicles the design and implementation of an Internet/Intranet- and database-based application for the quality control of hurricane surface wind observations. A quality control session consists of selecting the desired observation types to be viewed and determining a storm-track-based time window for viewing the data. All observations of the selected types are then plotted in a storm-relative view for the chosen time window, and geography is positioned for the storm-center time about which an objective analysis can be performed. Users then make decisions about data validity through visual nearest-neighbor comparison and inspection. The project employed an object-oriented iterative development method from beginning to end, and its implementation primarily features the Java programming language.
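As a conceptual sketch only (the thesis's implementation was in Java and is not reproduced here), the snippet below shows one way to place observations in a storm-relative frame and flag values that disagree sharply with their nearest neighbor, mimicking the visual comparison a user would perform. The field names, simple lat/lon offset, and 10-kt threshold are assumptions.

```python
import math

def storm_relative(ob, storm):
    """Observation position relative to the storm center (degrees lat/lon)."""
    return (ob["lat"] - storm["lat"], ob["lon"] - storm["lon"])

def flag_suspect(observations, storm, max_diff_kt=10.0):
    """Flag observations whose wind differs sharply from their nearest neighbor."""
    flagged = []
    for ob in observations:
        here = storm_relative(ob, storm)
        nearest = min((o for o in observations if o is not ob),
                      key=lambda o: math.dist(storm_relative(o, storm), here))
        if abs(ob["wind_kt"] - nearest["wind_kt"]) > max_diff_kt:
            flagged.append(ob)  # both members of a disagreeing pair get flagged for inspection
    return flagged

storm = {"lat": 25.0, "lon": -80.0}
obs = [{"lat": 25.1, "lon": -80.2, "wind_kt": 60},
       {"lat": 25.1, "lon": -80.3, "wind_kt": 95},   # suspicious spike
       {"lat": 24.9, "lon": -80.1, "wind_kt": 58}]
print(flag_suspect(obs, storm))
```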

Relevance: 100.00%

Publisher:

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture, and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability, and heterogeneity. Our study discusses the role of the distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agents, information agents, and interface agents. The discussion of the problem domain and the deployment of the computation agent and the information agent are presented with the analysis, design, and implementation of experimental systems in high-performance Internet computing and in scalable Web searching.

In the computation agent study, we show that high-performance Internet computing can be achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable, and scalable solution to deal with the growth of the Web and of the information on it.

Our research reveals that with the deployment of distributed software agents in Internet computing, we have a more cost-effective way to make better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
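In the spirit of the brute-force decryption prototype mentioned above, here is a toy sketch of partitioning a key search across workers. It uses Python's multiprocessing rather than Java agents, and the single-byte XOR "cipher" and known plaintext prefix are assumptions chosen only to keep the example short.

```python
from multiprocessing import Pool

CIPHERTEXT = bytes([c ^ 0x5A for c in b"secret message"])  # toy cipher with unknown key 0x5A
KNOWN_PREFIX = b"secret"                                   # assumed known plaintext fragment

def try_keys(key_range):
    """Each worker searches its own partition of the key space."""
    for key in key_range:
        plain = bytes(c ^ key for c in CIPHERTEXT)
        if plain.startswith(KNOWN_PREFIX):
            return key, plain
    return None

if __name__ == "__main__":
    chunks = [range(i, i + 64) for i in range(0, 256, 64)]  # four partitions of the key space
    with Pool(4) as pool:
        for hit in pool.map(try_keys, chunks):
            if hit:
                print("key=%d plaintext=%s" % (hit[0], hit[1].decode()))
```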

Relevance: 100.00%

Publisher:

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources.

This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
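To make contribution (iii) concrete, here is a simplified, hypothetical sketch of using a shared ontology to flag candidate semantic relations between attributes of two component schemas. The ontology entries, attribute names, and exact-concept-match rule are all invented for illustration; they are not Sem-ODM or Semantic SQL.

```python
ONTOLOGY = {                       # attribute name -> shared concept (hypothetical)
    "cust_name": "customer.name",
    "client_nm": "customer.name",
    "acct_bal":  "account.balance",
    "balance":   "account.balance",
}

def candidate_relations(schema_a, schema_b):
    """Pairs of attributes from two component schemas that map to the same concept."""
    pairs = []
    for a in schema_a:
        for b in schema_b:
            if ONTOLOGY.get(a) and ONTOLOGY.get(a) == ONTOLOGY.get(b):
                pairs.append((a, b, ONTOLOGY[a]))
    return pairs

print(candidate_relations(["cust_name", "acct_bal"], ["client_nm", "balance"]))
# -> [('cust_name', 'client_nm', 'customer.name'), ('acct_bal', 'balance', 'account.balance')]
```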

Relevance: 100.00%

Publisher:

Abstract:

In recent years, a surprising new phenomenon has emerged in which globally distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are composed of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and participate on a voluntary basis without direct financial incentive.

The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties which extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated suggesting that the social network structures of successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality.

The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume.

Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and triggers the need for new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
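As a toy sketch of the three structural measures discussed above, the snippet below computes graph density as a proxy for closure, degree centrality of a presumed leader, and a simple bridging count over outside project memberships, using networkx. The graph, membership data, and exact operationalizations are assumptions; the dissertation's actual measures are not reproduced.

```python
import networkx as nx

talk = nx.Graph()                       # conversational ties within one project (toy data)
talk.add_edges_from([("lead", "a"), ("lead", "b"), ("a", "b"), ("b", "c")])

closure = nx.density(talk)              # connectedness within the group
leader_centrality = nx.degree_centrality(talk)["lead"]

# Bridging: members' memberships in *other* projects (hypothetical membership data)
memberships = {"lead": {"p1"}, "a": {"p1", "p2"}, "b": {"p1"}, "c": {"p1", "p3"}}
bridging = sum(len(m - {"p1"}) for m in memberships.values())

print(f"closure={closure:.2f} leader_centrality={leader_centrality:.2f} bridging={bridging}")
```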

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this study was to test Lotka's law of scientific publication productivity, using the methodology outlined by Pao (1985), in the field of Library and Information Studies (LIS). Lotka's law has been tested sporadically in the field over the past 30-plus years, but the results of these studies are inconclusive due to the varying methods employed by the researchers.

A data set of 1,856 citations found using the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%), respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value is 0.022758 and the calculated critical value is 0.026562. It was determined that the null hypothesis, which states that there is no difference between the observed distribution of publications and the distribution obtained using Lotka's and Pao's procedure, could not be rejected.

This study finds that the literature in the field of Library and Information Studies does conform to Lotka's law, with reliable results. As a result, Lotka's law can be used in LIS as a standardized means of measuring author publication productivity, which will lead to findings that are comparable on many levels (e.g., department, institution, nation). Lotka's law can be employed as an empirically proven analytical tool to establish publication productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and its impact on the appointment, tenure, and promotion process of academic librarians.
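A minimal sketch of the comparison described above: under Lotka's law the expected proportion of authors with x publications is c / x^n, and the K-S statistic is the largest gap between the observed and expected cumulative proportions. The author counts below are made up, n and c are taken from the abstract, and the 1.22/sqrt(N) critical value is the standard asymptotic K-S approximation at the 0.10 level, used here only for illustration (Pao's exact procedure is not reproduced).

```python
import math

n, c = 2.1, 0.6418                      # exponent and constant reported in the abstract
observed = {1: 642, 2: 150, 3: 64, 4: 35, 5: 22, 6: 15, 7: 11, 8: 8, 9: 6, 10: 5}
total = sum(observed.values())          # hypothetical counts: x -> authors with x papers

d_max, cum_obs, cum_exp = 0.0, 0.0, 0.0
for x in sorted(observed):
    cum_obs += observed[x] / total      # observed cumulative proportion of authors
    cum_exp += c / x ** n               # expected proportion under Lotka's law
    d_max = max(d_max, abs(cum_obs - cum_exp))

critical = 1.22 / math.sqrt(total)      # asymptotic K-S critical value at the 0.10 level
print(f"Dmax = {d_max:.4f}, critical = {critical:.4f}")
# Dmax is compared with the critical value; in the study, 0.022758 < 0.026562,
# so conformity to Lotka's law could not be rejected.
```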

Relevance: 100.00%

Publisher:

Abstract:

Rapid advances in electronic communication devices and technologies have resulted in a shift in the way communication applications are being developed. These new development strategies provide abstract views of the underlying communication technologies and lead to so-called user-centric communication applications. One user-centric communication (UCC) initiative is the Communication Virtual Machine (CVM) technology, which uses the Communication Modeling Language (CML) for modeling communication services and the CVM for realizing these services. In communication-intensive domains such as telemedicine and disaster management, there is an increasing need for user-centric communication applications that are domain-specific and that support the dynamic coordination of communication services commonly found in collaborative communication scenarios. However, UCC approaches like the CVM offer little support for the dynamic coordination of communication services resulting from inherent dependencies between individual steps of a collaboration task. Users either have to coordinate communication services manually or rely on a process modeling technique to build customized solutions for services in a specific domain, solutions that are usually costly, rigidly defined, and technology-specific.

This dissertation proposes a domain-specific modeling approach to address this problem by extending the CVM technology with communication-specific abstractions of workflow concepts commonly found in business processes. The extension involves (1) the definition of the Workflow Communication Modeling Language (WF-CML), a superset of CML, and (2) the extension of the functionality of the CVM to process communication-specific workflows. The definition of WF-CML includes the meta-model and the dynamic semantics for control constructs and concurrency. We also extended the CVM prototype to handle the modeling and realization of WF-CML models. A comparative study of the proposed approach with other workflow environments validates the claimed benefits of WF-CML and CVM.
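Purely as an illustration of the kind of coordination WF-CML targets, the toy snippet below runs a workflow of communication steps with a sequential construct and a concurrent one. This is not WF-CML syntax; the step names, telemedicine flavor, and list-based workflow representation are all invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(name):
    print(f"running communication step: {name}")
    return name

workflow = [
    ("seq", ["establish_session"]),                           # one step, then...
    ("par", ["send_xray_images", "start_video_conference"]),  # ...two concurrent steps
    ("seq", ["transfer_patient_record"]),
]

for construct, steps in workflow:
    if construct == "par":
        with ThreadPoolExecutor() as pool:      # concurrency construct
            list(pool.map(run_step, steps))
        continue
    for step in steps:                          # sequential construct
        run_step(step)
```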

Relevance: 100.00%

Publisher:

Abstract:

Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g., ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for the reduction of bandwidth allocation, and guidelines for central-to-node communication protocols. Our conclusion is that dialed data lines offer a cost-effective alternative for the implementation of distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
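As a back-of-the-envelope sketch of the kind of communication cost estimate discussed above: the time to ship a result set over a dialed line at an effective throughput, priced by a per-minute tariff plus a setup charge. Every number below (64 kb/s channel rate, fees, per-minute billing) is a hypothetical placeholder, not a real tariff or the dissertation's model.

```python
import math

def dialed_line_cost(result_bytes, kbps=64, setup_fee=0.10, per_minute=0.05):
    """Estimated charge for shipping a result set over one dialed data line."""
    seconds = (result_bytes * 8) / (kbps * 1000)   # transfer time at the assumed line rate
    billed_minutes = math.ceil(seconds / 60)       # assume the tariff bills whole minutes
    return setup_fee + billed_minutes * per_minute

# e.g. a 5 MB intermediate result shipped to the coordinator node
print(f"${dialed_line_cost(5 * 1024 * 1024):.2f}")
```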

Relevance: 100.00%

Publisher:

Abstract:

In the last decade, a large number of social media services have emerged and been widely used in people's daily lives as important information sharing and acquisition tools. With a substantial amount of user-contributed text data on social media, it has become necessary to develop methods and tools for analyzing this emerging kind of text data in order to better utilize it to deliver meaningful information to users.

Previous work on text analytics over the last several decades has focused mainly on traditional types of text such as emails, news, and academic literature, and several issues critical to text data on social media have not been well explored: (1) how to detect sentiment in text on social media; (2) how to make use of social media's real-time nature; and (3) how to address information overload for flexible information needs.

In this dissertation, we focus on these three problems. First, to detect sentiment in text on social media, we propose a non-negative matrix tri-factorization (tri-NMF) based dual active supervision method to minimize human labeling effort for this new type of data. Second, to make use of social media's real-time nature, we propose approaches to detect events from text streams on social media. Third, to address information overload for flexible information needs, we propose two summarization frameworks: a dominating-set-based summarization framework and a learning-to-rank-based summarization framework. The dominating-set-based framework can be applied to different types of summarization problems, while the learning-to-rank-based framework utilizes existing training data to guide new summarization tasks. In addition, we integrate these techniques in an application study of event summarization for sports games as an example of how to better utilize social media data.
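To illustrate the dominating-set idea in a minimal form: build a similarity graph over posts, then greedily pick posts until every post is covered by (similar to) something in the summary. The toy posts, Jaccard word-overlap similarity, greedy selection, and 0.2 threshold are assumptions for this sketch, not the dissertation's framework.

```python
def jaccard(a, b):
    """Word-overlap similarity between two short posts."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def dominating_summary(posts, threshold=0.2):
    """Greedily pick posts so that every post is similar to some chosen post."""
    neighbors = {i: {j for j in range(len(posts))
                     if jaccard(posts[i], posts[j]) >= threshold}
                 for i in range(len(posts))}
    uncovered, summary = set(range(len(posts))), []
    while uncovered:
        best = max(uncovered, key=lambda i: len(neighbors[i] & uncovered))
        summary.append(posts[best])
        uncovered -= neighbors[best]
    return summary

posts = ["home team scores early goal", "early goal for the home team!",
         "keeper makes a great save", "fans celebrate the opening goal"]
print(dominating_summary(posts))
```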