13 results for Software Configuration Management
in Digital Commons at Florida International University
Abstract:
This dissertation describes the research carried out in developing an MPS (Multipurpose Portable System), which consists of an instrument and several accessories. The instrument is portable, hand-held, and rechargeable-battery operated, and it measures the temperature, absorbance, and concentration of samples using optical principles. The system also performs auxiliary functions such as incubation and mixing. This system can be used in environmental, industrial, and medical applications.

The research emphasis is on system modularity, easy configuration, accuracy of measurements, power management schemes, reliability, low cost, computer interface, and networking. The instrument can send data to a computer for analysis and presentation, or to a printer.

This dissertation presents a full working system. This involved integration of hardware, firmware for the micro-controller in assembly language, software in C, and other application modules.

The instrument contains the Optics, Transimpedance Amplifiers, Voltage-to-Frequency Converters, LCD Display, Lamp Driver, Battery Charger, Battery Manager, Timer, Interface Port, and Micro-controller.

The accessories are a Printer, a Data Acquisition Adapter (to transfer the measurements to a computer via the printer port and expand the analog-to-digital conversion capability), a Car Plug Adapter, and an AC Transformer. The system has been fully evaluated for fault tolerance, and the fault-tolerance schemes are also presented.
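The abstract does not give implementation details, but as a rough illustration of the optical principle involved (the Beer-Lambert law relating absorbance and concentration), here is a minimal Python sketch; the intensity values, absorptivity, and path length are hypothetical:

```python
import math

# Hypothetical illustration of the optical relationships such an instrument
# relies on: absorbance from measured light intensities (A = -log10(I / I0))
# and concentration from the Beer-Lambert law (A = epsilon * path_length * c).
def absorbance(sample_intensity: float, reference_intensity: float) -> float:
    """Absorbance of a sample relative to a blank/reference reading."""
    return -math.log10(sample_intensity / reference_intensity)

def concentration(absorbance_value: float, molar_absorptivity: float,
                  path_length_cm: float) -> float:
    """Concentration (mol/L) from the Beer-Lambert law."""
    return absorbance_value / (molar_absorptivity * path_length_cm)

if __name__ == "__main__":
    a = absorbance(sample_intensity=420.0, reference_intensity=1000.0)
    c = concentration(a, molar_absorptivity=1.5e4, path_length_cm=1.0)
    print(f"A = {a:.3f}, c = {c:.3e} mol/L")
```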
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is also helpful for studying other distributed and concurrent systems. However, providing such a methodology is a challenge because of agent mobility in mobile agent systems.

The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system.

From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system demonstrates the high flexibility, efficiency, and low cost of mobile agent technologies.
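SPIN itself checks Promela models, so the following is only a loose Python sketch of the kind of reachability question such verification answers: a breadth-first search over a small, hypothetical state-transition graph for a migrating agent.

```python
from collections import deque

# Illustrative only: SPIN verifies properties over Promela models; this tiny
# sketch shows the sort of reachability question being checked, as a
# breadth-first search over an explicit (hypothetical) state-transition graph.
transitions = {
    "agent_at_host_A": ["migrating"],
    "migrating": ["agent_at_host_B", "migration_failed"],
    "agent_at_host_B": ["processing_record"],
    "migration_failed": [],
    "processing_record": ["agent_at_host_A"],   # agent returns with results
}

def reachable(start: str, goal: str) -> bool:
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("agent_at_host_A", "processing_record"))  # True
```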
Abstract:
This dissertation is a study of customer relationship management theory and practice. Customer Relationship Management (CRM) is a business strategy whereby companies build strong relationships with existing and prospective customers with the goal of increasing organizational profitability. It is also a learning process involving the management of change in processes, people, and technology. CRM implementation and its ramifications are not yet completely understood, as evidenced by the high number of failed CRM implementations in organizations and the resulting disappointments.

The goal of this dissertation is to study emerging issues and trends in CRM, including the effect of computer software and the accompanying new management processes on organizations, and the dynamics of the alignment of marketing, sales and services, and all other functions responsible for delivering customers a satisfying experience.

In order to understand CRM better, a content analysis of more than one hundred articles and documents from academic and industry sources was undertaken using a new methodological twist on the traditional method. An Internet domain (http://crm.fiu.edu) was created for the purpose of this research by uploading an initial one hundred plus abstracts of articles and documents onto it to form a knowledge database. Once the database was formed, a search engine was developed to enable searching the abstracts for relevant CRM keywords and revealing the emergent dominant CRM topics. The ultimate aim of this website is to serve as an information hub for CRM research, as well as a search engine where interested parties can enter CRM-relevant keywords or phrases to access abstracts, and can also submit abstracts to enrich the knowledge hub.

Research questions were investigated and answered by content-analyzing the interpretation and discussion of dominant CRM topics and then amalgamating the findings. This was supported by comparisons within and across individual, paired, and sets-of-three occurrences of CRM keywords in the article abstracts.

Results show that there is a lack of holistic thinking and discussion of CRM in both academia and industry, which is required to understand how the people, process, and technology dimensions of CRM impact each other to affect successful implementation. Industry must come to grips with CRM and holistically understand how these important dimensions affect each other. Only then will organizational learning occur and, over time, result in superior processes leading to strong, profitable customer relationships and a hard-to-imitate competitive advantage.
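As a rough sketch of the kind of keyword search such a knowledge database supports (not the actual crm.fiu.edu implementation), the following Python snippet matches CRM-relevant keywords against a few invented abstracts and ranks them by hit count:

```python
# Hypothetical sketch of keyword search over an abstract database: match
# CRM-relevant keywords against stored abstracts and rank by hit count.
abstracts = {
    "doc1": "CRM implementation requires aligning people, process, and technology.",
    "doc2": "Sales force automation software improves customer contact management.",
    "doc3": "Organizational learning and change management drive CRM success.",
}

def search(keywords: list[str]) -> list[tuple[str, int]]:
    """Return (doc_id, hit_count) pairs sorted by descending relevance."""
    results = []
    for doc_id, text in abstracts.items():
        hits = sum(1 for kw in keywords if kw.lower() in text.lower())
        if hits:
            results.append((doc_id, hits))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

print(search(["CRM", "technology", "people"]))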
Abstract:
Objectionable odors remain at the top of air pollution complaints in urban areas such as Broward County, which is subject to increasing residential and industrial development. Odor complaints in Broward County escalated by 150 percent over the 2001 to 2004 period, although the population increased by only 6 percent. It is estimated that by 2010 the population will increase to 2.5 million. Relying solely on enforcing the local odor ordinance is evidently not sufficient to manage the escalating odor complaint trends. An alternative approach, similar to the odor management plans (OMPs) that are successful in managing major malodor sources such as animal farms, is required.

This study aims to develop and determine the feasibility of implementing a comprehensive odor management plan (COMP) for the entire Broward County. Unlike existing OMPs for single sources, where the receptors (i.e., the complainants) are located beyond the boundary of the source, the COMP addresses a complex model of multiple sources and receptors coexisting within the boundary of the entire county. Each receptor is potentially subjected to malodor emissions from multiple sources within the county. Also, the quantity and quality of the source/receptor variables are continuously changing.

The results of this study show that it is feasible to develop a COMP that adopts a systematic procedure to: (1) generate maps of existing odor complaint areas and malodor sources; (2) identify potential odor sources (target sources) responsible for existing odor complaints; (3) identify possible odor control strategies for target sources; (4) determine the criteria for implementing odor control strategies; (5) develop an odor complaint response protocol; and (6) conduct odor impact analyses for new sources to prevent future odor-related issues. A Geographic Information System (GIS) is used to identify existing complaint areas. COMP software that incorporates existing United States Environmental Protection Agency (EPA) air dispersion software is developed to determine the target sources, predict the likelihood of new complaints, and conduct odor impact analysis. The odor response protocol requires pre-planning field investigations and conducting surveys to optimize the local agency's available resources while protecting citizens' welfare, as required by the Clean Air Act.
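The actual COMP couples GIS layers with EPA air dispersion modeling, so the following Python sketch is only a hypothetical stand-in for the target-source screening step: it ranks invented odor sources by proximity to a complaint location.

```python
import math

# Hypothetical sketch of a target-source screening step: rank permitted odor
# sources by proximity to a complaint location. The real COMP would also run
# EPA dispersion modeling; this only illustrates the idea with made-up data.
sources = {
    "landfill_A": (26.10, -80.25),
    "rendering_plant_B": (26.15, -80.20),
    "wastewater_plant_C": (26.05, -80.30),
}

def distance_km(a, b):
    """Approximate planar distance between two (lat, lon) points in km."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def rank_target_sources(complaint_location):
    """Return source names ordered from nearest to farthest."""
    return sorted(sources, key=lambda s: distance_km(sources[s], complaint_location))

print(rank_target_sources((26.12, -80.22)))
```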
Abstract:
In recent years, urban vehicular ad hoc networks (VANETs) have been gaining importance for inter-vehicle communication because they allow local communication between vehicles without any infrastructure, configuration effort, or expensive cellular networks. However, such an architecture may increase the complexity of routing, since there is no central control system in urban VANETs. A challenging research task, therefore, is to improve routing efficiency in urban VANETs.

Hence, in this dissertation we propose two location-based routing protocols and a location management protocol to facilitate location-based routing in urban VANETs. The Multi-hop Routing Protocol (MURU) makes use of predicted mobility and the geometry map in urban VANETs to estimate a path's lifetime and set up robust end-to-end routing paths. The Light-weight Routing Protocol (LIRU) takes advantage of node diversity under dynamic channel conditions to exploit opportunistic forwarding and achieve efficient data delivery. A scalable location management protocol (MALM) is also proposed to support location-based routing protocols in urban VANETs. MALM uses the high mobility in VANETs to help disseminate vehicles' historical location information, and a vehicle is able to apply Kalman-filter-based prediction to estimate another vehicle's current location from its historical location information.
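As a minimal illustration of the Kalman-filter-based prediction MALM relies on (not the dissertation's actual implementation), the following Python sketch tracks one vehicle along a single road axis with a constant-velocity model; all matrices, noise values, and measurements are hypothetical:

```python
import numpy as np

# Constant-velocity Kalman filter over one road axis. State = [position, velocity].
dt = 1.0                                   # seconds between location reports
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (constant velocity)
H = np.array([[1.0, 0.0]])                 # only position is observed
Q = np.diag([0.5, 0.1])                    # process noise (hypothetical)
R = np.array([[4.0]])                      # measurement noise (hypothetical)

def kalman_step(x, P, z):
    """One predict/update cycle given a measured position z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 10.0])                  # start: position 0 m, speed 10 m/s
P = np.eye(2)
for z in ([9.0], [21.0], [30.5]):          # historical position reports
    x, P = kalman_step(x, P, np.array(z))
print("predicted position in 5 s:", (np.linalg.matrix_power(F, 5) @ x)[0])
```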
Abstract:
In recent years, a surprising new phenomenon has emerged in which globally distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are composed of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive.

The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties which extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that social network structures associated with successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality.

The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume.

Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and triggers the need to build new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
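As an illustration of the analysis described (quadratic regression with a group-size control), the following Python sketch fits such a model to invented data; the variable names and coefficients are hypothetical and do not reproduce the study's results:

```python
import numpy as np

# Regress a community-output measure (e.g., code commits) on a network measure
# (e.g., closure) plus its square, controlling for group size. Data are made up.
rng = np.random.default_rng(0)
n = 143
closure = rng.uniform(0, 1, n)
group_size = rng.integers(3, 50, n)
commits = 200 - 150 * closure + 40 * closure**2 + 2 * group_size + rng.normal(0, 10, n)

# Design matrix: intercept, closure, closure^2, and the group-size control.
X = np.column_stack([np.ones(n), closure, closure**2, group_size])
coeffs, *_ = np.linalg.lstsq(X, commits, rcond=None)
print(dict(zip(["intercept", "closure", "closure_sq", "group_size"], coeffs.round(2))))
```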
Abstract:
This dissertation focused on an increasingly prevalent phenomenon in today's global business environment: the strategic alliance portfolio. Building on the resource-based view, resource dependency theory, and real options theory, this dissertation adopted a multi-dimensional perspective to examine the performance implications and strategic antecedents of alliance portfolio configuration, and its strategic effects on firms' decision-making regarding their continuing foreign expansion. The dissertation consisted of three interrelated essays, each of which dealt with a specific research question. In the first essay I applied a two-dimensional construct that embraces both alliance relations' and alliance partners' attributes to illustrate alliance portfolio configuration. Based on this framework, a longitudinal study was conducted to explore the performance properties of alliance portfolio configuration. The results revealed that alliance diversity and partner diversity make different relative contributions to firms' economic performance, and that the relationship between alliance portfolio configuration and firm performance is shaped by the degree of multinationality in a curvilinear pattern. The second essay sought to identify the firm-level driving forces of alliance portfolio configuration and how these forces, interacting with firms' internationalization, influence firms' strategic choices on alliance portfolio configuration. The empirical results indicated that past alliance experience, slack resources, and firms' brand images are three critical determinants shaping alliance portfolios, but those shaping relationships are conditioned by firms' multinationality. The third essay primarily employed real options theory to build a conceptual framework revealing how country-, alliance portfolio-, firm-, and industry-level factors and their interactions influence firms' strategic decision-making on post-entry continuing expansion in foreign markets. The two empirical studies were situated in the global hospitality and travel industries and used panel data to test the relevant theoretical models. Overall, the dissertation advanced and enriched the theoretical domain of the alliance portfolio. In particular, it shed valuable light on three fundamental questions in alliance portfolio research, namely "if and how alliance portfolios contribute to firms' economic performance," "what determines the appearance of alliance portfolios," and "how alliance portfolios affect firms' strategic decision-making." This dissertation also extended the international business and strategic management research on service multinationals' foreign expansion and performance.
Abstract:
In his study, "Evaluating and Selecting a Property Management System," Galen Collins, Assistant Professor, School of Hotel and Restaurant Management, Northern Arizona University, states briefly at the outset: "Computerizing a property requires a game plan. Many have selected a Property Management System without much forethought and have been unhappy with the final results. The author discusses the major factors that must be taken into consideration in the selection of a PMS, based on his personal experience." Although this article was written in 1988 and some of its information may be dated, there are many salient points to consider. "Technological advances have encouraged many hospitality operators to rethink how information should be processed, stored, retrieved, and analyzed," offers Collins. "Research has led to the implementation of various cost-effective applications addressing almost every phase of operations," he says in introducing the computer technology germane to many PMS functions. Professor Collins discusses the Request for Proposal, its conditions, and its relevance in negotiating a PMS purchase. The author also wants the system buyer to be aware [not necessarily beware] of vendor recommendations, and not to rely solely on them. Exercising forethought will help in avoiding the drawback of purchasing an inadequate PMS. Remember, the vendor is there first and foremost to sell you a system. This does not necessarily mean that the adjectives unreliable and unethical are on the table, but do be advised. Professor Collins presents a graphic outline for the Weighted Average Approach to Scoring Vendor Evaluations. Among the several elements analyzed in this essay for evaluating a PMS, Professor Collins advises that a prospective buyer not overlook the service factor; service is an important element to contemplate. "In a hotel environment, the special emphasis should be on service. System downtime can be costly and aggravating and will happen periodically," Collins warns. Professor Collins also examines the PMS operating environment, the importance of which should not be underestimated. "The design of the computer system should be based on the physical layout of the property and the projected workloads. The heart of the system, housed in a protected, isolated area, can support work stations strategically located throughout the property," Professor Collins explains. A Property Profile Description is outlined in Table 1. The author also points out that ease of operation is another significant factor to think about. "A user-friendly software package allows the user to easily move through the program without encountering frustrating obstacles," says Collins. "Programs that require users to memorize abstract abbreviations, codes, and information to carry out standard routines should be avoided," he counsels.
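As a rough sketch of the weighted-average approach to scoring vendor evaluations that Collins outlines (the criteria, weights, and ratings below are invented for illustration), a minimal Python version might look like this:

```python
# Weight each evaluation criterion by its importance, score each vendor per
# criterion, and compare weighted totals. All values here are illustrative.
criteria_weights = {          # weights sum to 1.0
    "functionality": 0.30,
    "service_support": 0.25,
    "ease_of_use": 0.20,
    "reliability": 0.15,
    "price": 0.10,
}

vendor_scores = {             # 1-10 ratings per criterion, made up for the example
    "Vendor A": {"functionality": 8, "service_support": 6, "ease_of_use": 9,
                 "reliability": 7, "price": 5},
    "Vendor B": {"functionality": 7, "service_support": 9, "ease_of_use": 6,
                 "reliability": 8, "price": 7},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Sum of criterion scores weighted by criterion importance."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

for vendor, scores in vendor_scores.items():
    print(vendor, round(weighted_score(scores), 2))
```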
Abstract:
In his dialogue, "Near Term Computer Management Strategy for Hospitality Managers and Computer System Vendors," William O'Brien, Associate Professor, School of Hospitality Management at Florida International University, initially states: "The computer revolution has only just begun. Rapid improvement in hardware will continue into the foreseeable future; over the last five years it has set the stage for more significant improvements in software technology still to come. John Naisbitt's information electronics economy¹ based on the creation and distribution of information has already arrived and as computer devices improve, hospitality managers will increasingly do at least a portion of their work with software tools." At the time of this writing, Associate Professor O'Brien would have you know that, contrary to what some people might think, the computer revolution is not over; it is just beginning. Computer technology will only continue to develop and expand, says O'Brien with citation. "A complacent few of us who feel 'we have survived the computer revolution' will miss opportunities as a new wave of technology moves through the hospitality industry," says Professor O'Brien. "Both managers who buy technology and vendors who sell it can profit from strategy based on understanding the wave of technological innovation," is his informed opinion. Property managers who embrace rather than eschew innovation, in this case computer technology, will benefit greatly from this new science in hospitality management, O'Brien says. "The manager who is not alert to or misunderstands the nature of this wave of innovation will be the constant victim of technology," he advises. On the vendor side of the equation, O'Brien observes, "Computer-wise hospitality managers want systems which are easier and more profitable to operate. Some view their own industry as being somewhat behind the times… They plan to pay significantly less for better computer devices. Their high expectations are fed by vendor marketing efforts…" O'Brien warns against taking a gamble on a risky computer system by falling victim to unsubstantiated claims and pie-in-the-sky promises. He recommends affiliating with turn-key vendors who provide hardware, software, and training, or soliciting the help of large mainstream vendors such as IBM, NCR, or Apple. Many experts agree that the computer revolution has genuinely morphed into the software revolution, O'Brien notes, "…recognizing that a computer is nothing but a box in which programs run." Yes, some of the empirical data in this article is dated by now, but the core philosophy of advancing technology, and of properties continually tapping current knowledge, is sound.
Abstract:
As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities on software reliability. Autonomic computing (AC) describes a new generation of software systems that are characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing.

This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting the development and performance overhead through software reuse.
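The dissertation's design models are not reproduced here, but as a hedged sketch of the general idea (running self-tests as part of accepting a dynamic adaptation), consider the following hypothetical Python fragment:

```python
from typing import Callable

# Illustrative only, not the dissertation's actual design: a self-test hook
# that runs after a dynamic adaptation and validates the new configuration
# before it is accepted, rolling back on failure.
class AutonomicManager:
    def __init__(self, self_tests: list[Callable[[], bool]]):
        self.self_tests = self_tests
        self.active_config = {"worker_threads": 4}

    def adapt(self, new_config: dict) -> bool:
        """Apply an adaptation, then validate it with runtime self-tests."""
        previous = self.active_config
        self.active_config = new_config
        if all(test() for test in self.self_tests):
            return True                      # adaptation validated at runtime
        self.active_config = previous        # roll back on self-test failure
        return False

def response_time_ok() -> bool:
    # Placeholder runtime check; a real system would exercise the adapted
    # component and compare observed behavior against expected results.
    return True

manager = AutonomicManager(self_tests=[response_time_ok])
print(manager.adapt({"worker_threads": 8}))
```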
Abstract:
Modern IT infrastructures are built from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers therefore seek automatic or semi-automatic methodologies for detecting and resolving system issues in order to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: (1) preprocessing raw textual system logs into structured events; (2) refining monitoring configurations to eliminate false positives and false negatives; and (3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a large amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are built on event mining algorithms, which are able to automatically derive part of that knowledge from historical system logs, events, and tickets.

In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alerts, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events, with their corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events, and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
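As a minimal illustration of the KNN-style ticket recommendation idea (not the dissertation's algorithms), the following Python sketch represents tickets as bags of words and returns the resolutions of the most similar historical tickets; all ticket text is invented:

```python
from collections import Counter
import math

# Find the k most similar resolved historical tickets for an incoming ticket
# using bag-of-words cosine similarity, and surface their resolutions.
historical_tickets = [
    ("disk usage above threshold on db server", "extended the /var partition"),
    ("cpu utilization spike on web node", "restarted the runaway worker process"),
    ("filesystem nearly full on application host", "rotated and archived old logs"),
]

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(incoming: str, k: int = 2):
    """Return (similarity, resolution) pairs for the k nearest historical tickets."""
    query = Counter(incoming.lower().split())
    scored = [(cosine(query, Counter(text.lower().split())), resolution)
              for text, resolution in historical_tickets]
    return sorted(scored, reverse=True)[:k]

print(recommend("disk nearly full on database server"))
```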
Abstract:
Supervisory Control & Data Acquisition (SCADA) systems are used by many industries because of their ability to manage sensors and control external hardware. The problem with commercially available systems is that they are restricted to a local network of users running proprietary software. There was no Internet development guide for giving remote users outside the network control of, and access to, SCADA data and external hardware through simple user interfaces. To solve this problem, a server/client paradigm was implemented to make SCADAs available via the Internet. Two methods were applied and studied: polling of a text file as a low-end technology solution, and a Transmission Control Protocol/Internet Protocol (TCP/IP) socket connection. Users were allowed to log in to a website and remotely control a network of pumps and valves interfaced to a SCADA, enabling them to sample the water quality of different reservoir wells. The results were based on the real-time performance, stability, and ease of use of the remote interface and its programming, and they indicated that the most feasible server to implement is the TCP/IP socket connection. For the user interface, Java applets and ActiveX controls provide the same real-time access.
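The original implementation used a SCADA interface with Java applet and ActiveX front ends; as a hedged illustration of the TCP/IP server/client pattern the study found most feasible, here is a minimal Python sketch with a hypothetical command protocol and port:

```python
import socket
import threading

# Minimal TCP server sketch: accept remote connections, read a text command,
# and acknowledge it. The command format and port are hypothetical; a real
# deployment would forward validated commands to the SCADA interface.
HOST, PORT = "0.0.0.0", 5050

def handle_client(conn: socket.socket) -> None:
    with conn:
        command = conn.recv(1024).decode().strip()       # e.g. "OPEN VALVE 3"
        # Here the command would be passed to the SCADA controlling pumps/valves.
        conn.sendall(f"ACK {command}".encode())

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        while True:
            conn, _addr = server.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```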
Abstract:
The aim of this work is to present a methodology for developing cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat while delivering maximally uniform temperature distributions. The topological and geometrical characteristics of multiple-story, three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle-swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier-Stokes solver and against analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm² can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower than those of currently used high-performance cooling technologies.
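As an illustration of the optimization machinery described (a standard particle-swarm optimizer, not the modified multi-objective variant used in the dissertation), the following Python sketch minimizes a toy objective standing in for peak temperature plus pumping power over two hypothetical channel-geometry parameters:

```python
import numpy as np

# Standard PSO on a made-up objective: a proxy for peak temperature plus a
# weighted proxy for pumping power, over two hypothetical geometry parameters.
def objective(x: np.ndarray) -> float:
    channel_width, branching_ratio = x
    peak_temp_proxy = (channel_width - 0.3) ** 2 + (branching_ratio - 0.7) ** 2
    pumping_power_proxy = 0.5 * channel_width ** 2
    return peak_temp_proxy + 0.2 * pumping_power_proxy

rng = np.random.default_rng(1)
n_particles, n_dims, iterations = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights

pos = rng.uniform(0.0, 1.0, (n_particles, n_dims))
vel = np.zeros_like(pos)
best_pos = pos.copy()
best_val = np.array([objective(p) for p in pos])
g_best = best_pos[best_val.argmin()].copy()

for _ in range(iterations):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_best - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([objective(p) for p in pos])
    improved = vals < best_val
    best_pos[improved], best_val[improved] = pos[improved], vals[improved]
    g_best = best_pos[best_val.argmin()].copy()

print("best geometry parameters:", g_best.round(3), "objective:", best_val.min().round(4))
```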