Results for Requirements management
in Digital Commons at Florida International University
Abstract:
This research presents several components encompassing the scope of its objective: data partitioning and replication management in a distributed GIS database. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems must be addressed to develop an efficient and scalable solution.

Part of the research is to study the patterns of geographical raster data processing and to propose algorithms that improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, with the aim of achieving high data availability and Quality of Service (QoS) in distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach is proposed for mosaicking digital images of different temporal and spatial characteristics into tiles. This dynamic approach reuses digital images upon demand and generates mosaicked tiles only for the required region, according to user requirements such as resolution, temporal range, and target bands, in order to reduce redundancy in storage and to utilize available computing and storage resources more efficiently.

Another part of the research pursued methods for efficient acquisition of GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation.

Vast numbers of computing, network, and storage resources on the Internet sit idle or underutilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context.

The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
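The on-demand mosaicking strategy can be illustrated with a short sketch. The following snippet is purely illustrative, assuming hypothetical types (TileRequest, SourceImage) and a simple in-memory cache; it shows the selection-and-reuse logic the abstract describes, not the TerraFly implementation.

```python
# Hypothetical sketch of on-demand tile mosaicking; all names are invented
# for illustration and do not reflect the TerraFly implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class TileRequest:
    bbox: tuple          # (min_x, min_y, max_x, max_y) in map units
    resolution: float    # ground units per pixel
    time_range: tuple    # (start, end) timestamps
    bands: tuple         # e.g. ("red", "green", "nir")

@dataclass
class SourceImage:
    bbox: tuple
    resolution: float
    timestamp: float
    bands: tuple

def overlaps(a, b):
    """Axis-aligned bounding-box intersection test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

_cache: dict = {}  # mosaicked once, reused upon repeat demand

def select_sources(request: TileRequest, catalog: list) -> list:
    """Pick only the source images needed to mosaic the requested region,
    filtering by extent, temporal range, band coverage, and resolution."""
    if request in _cache:
        return _cache[request]
    start, end = request.time_range
    selected = [
        img for img in catalog
        if overlaps(img.bbox, request.bbox)
        and start <= img.timestamp <= end
        and set(request.bands) <= set(img.bands)
        and img.resolution <= request.resolution  # at least as fine as requested
    ]
    _cache[request] = selected
    return selected
```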
Abstract:
With the proliferation of multimedia data and ever-growing requests for multimedia applications, there is an increasing need for efficient and effective indexing, storage, and retrieval of multimedia data such as graphics, images, animation, video, audio, and text. Due to the special characteristics of multimedia data, Multimedia Database Management Systems (MMDBMSs) have emerged and attracted great research attention in recent years. Though much research effort has been devoted to this area, it is still far from maturity and many open issues remain. This dissertation focuses on three essential challenges in developing an MMDBMS, namely the semantic gap, perception subjectivity, and data organization, and proposes a systematic and integrated framework with a video database and an image database serving as the testbed. In particular, the framework addresses these challenges separately yet coherently from the three main aspects of an MMDBMS: multimedia data representation, indexing, and retrieval. In terms of multimedia data representation, the key to addressing the semantic gap is to intelligently and automatically model mid-level representations and/or semi-semantic descriptors in addition to extracting low-level media features. The data organization challenge is addressed mainly through media indexing, where various levels of indexing are required to support diverse query requirements. In particular, the focus of this study is to facilitate high-level video indexing by proposing a multimodal event mining framework associated with temporal knowledge discovery approaches. With respect to the perception subjectivity issue, advanced techniques are proposed to support user interaction and to effectively model users' perception from feedback at both the image level and the object level.
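The perception-subjectivity aspect can be made concrete with a small sketch of image-level relevance feedback. The Rocchio-style update below is a textbook technique offered only as an illustration, with synthetic feature vectors and parameter values; it is not the dissertation's specific method.

```python
# Illustrative image-level relevance feedback: the query vector moves toward
# features of images the user marked relevant and away from non-relevant ones.
# The Rocchio update and parameters are textbook choices, not this work's method.
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q

def retrieve(query, database, k=10):
    """Rank database feature vectors by Euclidean distance to the query."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:k]

# One feedback round: retrieve, collect simulated user judgments, re-query.
rng = np.random.default_rng(0)
db = rng.random((1000, 64))        # 1000 images, 64-dim low-level features
q = rng.random(64)
top = retrieve(q, db)
relevant, nonrelevant = db[top[:3]], db[top[3:6]]
q2 = rocchio_update(q, relevant, nonrelevant)
print(retrieve(q2, db))            # refined ranking after feedback
```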
Abstract:
This research is motivated by the need for a systemic, efficient quality improvement methodology at universities. No existing methodology is designed for a total quality management (TQM) program in a university. The main objective of this study is to develop a TQM methodology that enables a university to efficiently develop an integral TQM plan.

Current research focuses on the need to improve the quality of universities, the study of the perceived best-quality universities, and the measurement of university quality through rankings. There is no evidence of research on how to plan an integral quality improvement initiative for the university as a whole, which is the main contribution of this study.

This research builds on various reference TQM models and criteria provided by ISO 9000, Baldrige, and Six Sigma, and on educational accreditation criteria found in ABET and SACS. The TQM methodology is developed by following a seven-step meta-methodology. The proposed methodology guides the user to develop a TQM plan in five sequential phases: initiation, assessment, analysis, preparation, and acceptance. Each phase defines for the user its purpose, key activities, input requirements, controls, deliverables, and tools to use. The application of quality concepts in education and higher education is distinctive, since there are unique factors in education which ought to be considered. These factors shape the quality dimensions in a university and are the main inputs to the methodology; a sketch of the phase structure follows below.

The proposed TQM methodology guides the user to collect and transform appropriate inputs into a holistic TQM plan, ready to be implemented by the university. Different input data will lead to a unique TQM plan for the specific university at the time. It may not necessarily transform the university into a world-class institution, but it aims to strive for stakeholder-oriented improvements, leading to better alignment with the university's mission and total quality advancement.

The proposed TQM methodology is validated in three steps. First, it is verified through a test activity that is part of the meta-methodology. Second, the methodology is applied to a case university to develop a TQM plan. Last, both the methodology and the TQM plan are verified by an expert group consisting of TQM specialists and university administrators. The proposed TQM methodology is applicable to any university at any level of advancement, regardless of changes in its long-term vision and short-term needs. It helps to assure the quality of a TQM plan while making the process more systemic, efficient, and cost-effective. This research establishes a framework with a solid foundation for extending the proposed TQM methodology into other industries.
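As an illustration only, the five-phase structure could be represented as data; the field names below mirror the elements each phase defines (purpose, key activities, input requirements, controls, deliverables, tools), while the one-line purposes are invented placeholders rather than the study's definitions.

```python
# Hypothetical representation of the five-phase TQM plan structure described
# above; the phase purposes here are illustrative placeholders, not the
# study's definitions.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    purpose: str
    key_activities: list = field(default_factory=list)
    input_requirements: list = field(default_factory=list)
    controls: list = field(default_factory=list)
    deliverables: list = field(default_factory=list)
    tools: list = field(default_factory=list)

tqm_plan_phases = [
    Phase("Initiation", "Secure commitment and define scope"),
    Phase("Assessment", "Gather the university's quality dimensions"),
    Phase("Analysis", "Compare the current state against reference models"),
    Phase("Preparation", "Draft the integral TQM plan"),
    Phase("Acceptance", "Validate the plan with stakeholders"),
]
```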
Abstract:
In "Appraising Work Group Performance: New Productivity Opportunities in Hospitality Management," a discussion by Mark R. Edwards, Associate Professor, College of Engineering, Arizona State University, and Leslie Edwards Cummings, Assistant Professor, College of Hotel Administration, University of Nevada, Las Vegas, the authors initially state: "Employee group performance variation accounts for a significant portion of the degree of productivity in the hotel, motel, and food service sectors of the hospitality industry. The authors discuss TEAMSG, a microcomputer-based approach to appraising and interpreting group performance. TEAMSG appraisal allows an organization to profile and to evaluate groups, facilitating the targeting of training and development decisions and interventions, as well as the more equitable distribution of organizational rewards." "The caliber of employee group performance is a major determinant in an organization's productivity and success within the hotel and food service industries," Edwards and Cummings say. "Gaining accurate information about the quality of performance of such groups as organizational divisions, individual functional departments, or work groups can be as enlightening..." the authors further reveal. "This perspective is especially important not only for strategic human resources planning purposes, but also for diagnosing development needs and for differentially distributing organizational rewards." The authors note that employee requirements in an unpredictable environment, which the hospitality industry largely is, are difficult to quantify. In an effort to measure elements of performance, Edwards and Cummings look to TEAMSG, an acronym for Team Evaluation and Management System for Groups, and develop the concept. In discussing the background for employees, Edwards and Cummings point out that employees at the individual level must often possess and exercise varied skills. In group circumstances, employees often work at outside locations or move from corporate unit to unit, as in the case of a project team. Being able to transcend an individual-to-group mentality is imperative. "A solution which addresses the frustration and lack of motivation on the part of the employee is to coach, develop, appraise, and reward employees on the basis of group achievement," say the authors. "An appraisal, effectively developed and interpreted, has at least three functions," Edwards and Cummings suggest, and they go on to define them. The authors place great emphasis on rewards and interventions to bolster the assertion set forth in their thesis statement. Edwards and Cummings warn that individual agendas can threaten, erode, and undermine group performance; there is no "I" in TEAM.
Abstract:
The School of Hospitality Management at Florida International University recently offered a new course, recreational food service management, in an effort to address the specialized needs of that segment of the industry. The author discusses the size and scope of this area, its history and presentations, its specialized operational nature, its menu structure and style of service, and the unique management requirements for success.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications render administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity.

We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs.

This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
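A minimal sketch of the performance-modeling idea follows: learn a mapping from resource allocations to application performance with Support Vector Regression, then use the model for SLA-aware VM sizing. The features, synthetic data, unit prices, and SLA threshold below are illustrative assumptions, not the thesis's benchmarks.

```python
# Sketch: model application performance as a function of resource allocations
# with SVR, then size a VM by picking the cheapest allocation predicted to
# meet an SLA target. All data, prices, and thresholds are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# Columns: CPU cap (%), memory (GB), I/O bandwidth (MB/s) per VM
X = rng.uniform([10, 1, 10], [100, 16, 200], size=(300, 3))
# Synthetic "throughput" that saturates as resources grow
y = 50 * np.tanh(X[:, 0] / 40) * np.tanh(X[:, 1] / 4) + rng.normal(0, 1, 300)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)

# VM sizing: scan candidate allocations, keep those predicted to meet the
# SLA, and return the cheapest under assumed unit prices.
candidates = rng.uniform([10, 1, 10], [100, 16, 200], size=(1000, 3))
cost = candidates @ np.array([1.0, 2.0, 0.1])
ok = model.predict(candidates) >= 40.0      # SLA performance threshold
print(candidates[ok][np.argmin(cost[ok])])  # cheapest SLA-satisfying size
```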
Abstract:
This research focuses on the design and verification of inter-organizational controls. Instead of looking at a documentary procedure, which is the flow of documents and data among the parties, the research examines the underlying deontic purpose of the procedure, the so-called deontic process, and identifies control requirements to secure this purpose. The vision of the research is a formal theory for streamlining bureaucracy in business and government procedures.

Underpinning most inter-organizational procedures are deontic relations, which concern the rights and obligations of the parties. When all parties trust each other, they are willing to fulfill their obligations and honor the counterparties' rights; thus controls may not be needed. The challenge arises in cases where trust cannot be assumed. In these cases, the parties need to rely on explicit controls to reduce their exposure to the risk of opportunism. However, at present there is no analytic approach or technique to determine which controls are needed for a given contracting or governance situation.

The research proposes a formal method for deriving inter-organizational control requirements based on static analysis of deontic relations and dynamic analysis of deontic changes. The formal method takes a deontic process model of an inter-organizational transaction and certain domain knowledge as inputs and automatically generates control requirements that a documentary procedure must satisfy in order to limit fraud potentials. The deliverables of the research include a formal representation, namely Deontic Petri Nets, which combines multiple modal logics and Petri nets for modeling deontic processes; a set of control principles that represent an initial formal theory of the relationships between deontic processes and documentary procedures; and a working prototype that uses a model-checking technique to identify fraud potentials in a deontic process and generate control requirements to limit them. Fourteen scenarios from two well-known international payment procedures, cash in advance and documentary credit, were used to test the prototype. The results showed that all control requirements stipulated in these procedures could be derived automatically.
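A toy example can convey the notion of a fraud potential. The sketch below enumerates reachable states of a hypothetical two-party exchange and flags states where one party has performed while the other can still defect; it is a deliberately simplified stand-in for the Deontic Petri Net and model-checking machinery described above, with invented states and transitions.

```python
# Toy illustration, not the dissertation's Deontic Petri Nets: enumerate
# reachable states of a two-party exchange and flag "fraud potentials" --
# states where one party has fulfilled its obligation and the other has not.

# State: (buyer_paid, seller_shipped); either side may act at any time.
def successors(state):
    paid, shipped = state
    if not paid:
        yield (True, shipped)   # buyer fulfills the payment obligation
    if not shipped:
        yield (paid, True)      # seller fulfills the delivery obligation

def fraud_potentials(start=(False, False)):
    """Depth-first search for reachable states where exactly one obligation
    is fulfilled: the performing party is exposed if the other defects."""
    seen, stack, risky = {start}, [start], []
    while stack:
        s = stack.pop()
        if s[0] != s[1]:        # one performed, the other has not
            risky.append(s)
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return risky

print(fraud_potentials())  # [(True, False), (False, True)] in some order
# A control requirement would eliminate these states, e.g. a documentary
# credit that releases payment only against shipping documents.
```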
Abstract:
The aim of this work is to present a methodology for developing cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat and delivering maximally uniform temperature distributions. The topological and geometrical characteristics of multiple-story, three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle-swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier-Stokes solver and analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm² can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower than those of currently used high-performance cooling technologies.
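The optimization loop at the core of such a design tool can be sketched generically. The snippet below runs a standard particle-swarm optimization on an invented two-variable thermal objective; it illustrates the algorithm family named above, not the coupled conjugate-heat-transfer tool or its actual objectives.

```python
# Generic particle-swarm optimization on a toy thermal objective. The
# objective, bounds, and coefficients are invented for illustration.
import numpy as np

def objective(x):
    """Toy trade-off: peak temperature rises with channel spacing (x[...,0])
    and pumping power rises as channels narrow (1 / x[...,1])."""
    peak_temp = 20.0 + 5.0 * x[..., 0] ** 2
    pump_power = 1.0 / np.maximum(x[..., 1], 1e-6)
    return peak_temp + 10.0 * pump_power     # weighted single objective

rng = np.random.default_rng(1)
n, dim, lo, hi = 30, 2, 0.05, 2.0
pos = rng.uniform(lo, hi, (n, dim))          # swarm positions
vel = np.zeros((n, dim))                     # swarm velocities
pbest, pbest_val = pos.copy(), objective(pos)

for _ in range(200):
    gbest = pbest[np.argmin(pbest_val)]      # best design found so far
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)         # keep designs inside bounds
    val = objective(pos)
    better = val < pbest_val                 # update personal bests
    pbest[better], pbest_val[better] = pos[better], val[better]

print(pbest[np.argmin(pbest_val)], pbest_val.min())
```

A multi-objective variant, as used in the work above, would track a Pareto front of (peak temperature, pumping power) pairs instead of collapsing them into one weighted score.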