20 results for Computing Classification Systems

in Digital Commons at Florida International University


Relevance:

90.00%

Publisher:

Abstract:

The need to provide computers with the ability to distinguish the affective state of their users is a major requirement for the practical implementation of affective computing concepts. This dissertation proposes the application of signal processing methods to physiological signals in order to extract features that can be processed by learning pattern recognition systems to provide cues about a person's affective state. In particular, combining physiological information sensed non-invasively from a user's left hand with pupil diameter information from an eye-tracking system may provide a computer with an awareness of its user's affective responses in the course of human-computer interactions. In this study, an integrated hardware-software setup was developed to achieve automatic assessment of the affective status of a computer user. A computer-based "Paced Stroop Test" was designed as a stimulus to elicit emotional stress in the subject during the experiment. Four signals, the Galvanic Skin Response (GSR), the Blood Volume Pulse (BVP), the Skin Temperature (ST), and the Pupil Diameter (PD), were monitored and analyzed to differentiate affective states in the user. Several signal processing techniques were applied to the collected signals to extract their most relevant features. These features were analyzed with learning classification systems to accomplish the affective state identification. Three learning algorithms, Naïve Bayes, Decision Tree, and Support Vector Machine, were applied to this identification process and their levels of classification accuracy were compared. The results achieved indicate that the physiological signals monitored do, in fact, have a strong correlation with the changes in the emotional states of the experimental subjects. These results also revealed that the inclusion of pupil diameter information significantly improved the performance of the emotion recognition system.
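
As an illustrative sketch only (assuming scikit-learn and synthetic placeholder data, not the study's feature extraction or recordings), a comparison of the three named classifiers on physiological feature vectors could look like this:

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))          # placeholder: 8 features per segment (GSR, BVP, ST, PD)
y = rng.integers(0, 2, size=120)       # placeholder labels: 0 = relaxed, 1 = stressed

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(max_depth=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")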

Relevance:

90.00%

Publisher:

Abstract:

A comprehensive, broadly accepted vegetation classification is important for ecosystem management, particularly for planning and monitoring. South Florida vegetation classification systems that are currently in use were largely arrived at subjectively and intuitively with the involvement of experienced botanical observers and ecologists, but with little support in terms of quantitative field data. The need to develop a field data-driven classification of South Florida vegetation that builds on the ecological organization has been recognized by the National Park Service and vegetation practitioners in the region. The present work, funded by the National Park Service Inventory and Monitoring Program - South Florida/Caribbean Network (SFCN), covers the first stage of a larger project whose goal is to apply extant vegetation data to test, and revise as necessary, an existing, widely used classification (Rutchey et al. 2006). The objectives of the first phase of the project were (1) to identify useful existing datasets, (2) to collect these data and compile them into a geodatabase, (3) to conduct an initial classification analysis of marsh sites, and (4) to design a strategy for augmenting existing information from poorly represented landscapes in order to develop a more comprehensive south Florida classification.
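
As a hedged sketch of the kind of quantitative, field-data-driven grouping a marsh classification analysis might start from (the abundance matrix, distance metric, and cluster count below are illustrative assumptions, not the project's methods or data):

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
abundance = rng.random((30, 12))                    # placeholder: 30 plots x 12 species
abundance /= abundance.sum(axis=1, keepdims=True)   # convert to relative abundance

dist = pdist(abundance, metric="braycurtis")        # ecological dissimilarity between plots
tree = linkage(dist, method="average")              # UPGMA-style agglomerative clustering
groups = fcluster(tree, t=5, criterion="maxclust")  # cut into 5 candidate vegetation groups
print(groups)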

Relevance:

90.00%

Publisher:

Abstract:

Mapping vegetation patterns over large extents using remote sensing methods requires field sample collections for two different purposes: (1) the establishment of plant association classification systems from samples of relative abundance estimates; and (2) training for supervised image classification and accuracy assessment of maps derived from satellite data. One challenge for both procedures is establishing confidence in results and supporting analysis across multiple spatial scales. Continuous data sets that enable cross-scale studies are very time consuming and expensive to acquire, and such extensive field sampling can be invasive. The use of high resolution aerial photography (hrAP) offers an alternative to extensive, invasive field sampling and can provide large-volume, spatially continuous reference information that can meet the challenges of confidence building and multi-scale analysis.
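
As a minimal sketch of the accuracy-assessment step mentioned above, assuming hrAP-interpreted labels as reference data; the label arrays and agreement rate are synthetic placeholders, not project results:

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

rng = np.random.default_rng(2)
reference = rng.integers(0, 4, size=200)                  # hrAP-interpreted classes at sample points
predicted = np.where(rng.random(200) < 0.8, reference,    # the map agrees 80% of the time ...
                     rng.integers(0, 4, size=200))        # ... otherwise a random class

print(confusion_matrix(reference, predicted))             # per-class error matrix
print("overall accuracy:", accuracy_score(reference, predicted))
print("kappa:", cohen_kappa_score(reference, predicted))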

Relevance:

40.00%

Publisher:

Abstract:

With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process, and it struggles to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of different data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor modeling of three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of our proposed approaches.
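
As a hedged sketch of the log-categorization idea, grouping raw log lines with similar wording into event types; this uses plain TF-IDF plus k-means rather than the dissertation's own algorithms, and the log lines are invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

logs = [
    "Disk /dev/sda1 usage above 90 percent",
    "Disk /dev/sdb1 usage above 95 percent",
    "Connection to db01 timed out after 30s",
    "Connection to db02 timed out after 30s",
    "Service payment-api restarted by watchdog",
    "Service auth-api restarted by watchdog",
]

X = TfidfVectorizer().fit_transform(logs)                          # bag-of-words representation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for line, label in zip(logs, labels):
    print(label, line)                                             # lines sharing a label form one event type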

Relevance:

40.00%

Publisher:

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, easing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) An in-depth analysis of the execution characteristics of the target applications when run in virtualized environments. (2) A performance prediction methodology applicable to the target environment. (3) A scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality of service guarantees. In the process of addressing these pertinent issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
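
As a minimal sketch of how the reported prediction quality can be quantified, using mean relative error between predicted and observed execution times; the numbers below are illustrative, not measurements from the dissertation's experiments:

predicted = [120.0, 340.0, 95.0, 410.0]   # predicted runtimes (s) for four jobs
observed  = [110.0, 385.0, 100.0, 450.0]  # observed runtimes (s)

errors = [abs(p - o) / o for p, o in zip(predicted, observed)]
print(f"mean relative error: {100 * sum(errors) / len(errors):.1f}%")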

Relevance:

40.00%

Publisher:

Abstract:

Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impacts the system's cost, performance and reliability, but also increases the leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit and logic levels to the architectural and system levels.

In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling, and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
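
As a hedged sketch in the spirit of the lumped system-level models described above, with a linearized temperature-dependent leakage term; the parameter values are illustrative assumptions, not those derived in the dissertation:

def simulate(power_dyn, steps=1000, dt=0.01,
             T_amb=45.0, R=1.2, C=0.03,        # assumed thermal resistance (K/W) and capacitance (J/K)
             k0=2.0, k1=0.05):                 # linearized leakage: P_leak = k0 + k1 * T
    T = T_amb
    for _ in range(steps):
        P = power_dyn + k0 + k1 * T            # total power = dynamic + temperature-dependent leakage
        T += dt * (P - (T - T_amb) / R) / C    # forward-Euler step of the RC thermal dynamics
    return T

print("steady temperature at 15 W dynamic power:", round(simulate(15.0), 1), "C")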

Relevance:

30.00%

Publisher:

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research, with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent and interface agent. The discussion of the problem domain and the deployment of the computation agent and the information agent are presented with the analysis, design and implementation of experimental systems in high performance Internet computing and in scalable Web searching.

In the computation agent study, high performance Internet computing can be achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution to deal with the growth of the Web and of the information on the Web.

Our research reveals that with the deployment of distributed software agents in Internet computing, we have a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
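
As a minimal sketch of the work-partitioning idea behind a massive computation agent, splitting a brute-force key search across agents; the toy single-byte XOR "cipher" and the partitioning scheme are illustrative assumptions, not the JAM implementation:

def search_range(ciphertext, known_plaintext, start, end):
    """One agent's share of the key space (single-byte XOR key, for illustration only)."""
    for key in range(start, end):
        if bytes(b ^ key for b in ciphertext).startswith(known_plaintext):
            return key
    return None

ciphertext = bytes(b ^ 0x5A for b in b"HELLO WORLD")   # encrypt with an unknown key
n_agents = 4
shares = [(i * 256 // n_agents, (i + 1) * 256 // n_agents) for i in range(n_agents)]
results = [search_range(ciphertext, b"HELLO", s, e) for s, e in shares]
print("recovered key:", next(k for k in results if k is not None))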

Relevance:

30.00%

Publisher:

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources.

This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas, shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
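
As a hedged, simplified sketch of ontology-assisted matching: component-schema attributes are mapped to concepts in a shared ontology, and attributes sharing a concept are flagged as candidate semantic relations. The schemas and ontology below are invented placeholders, not the HPDRC system's meta-data or technique:

shared_ontology = {
    "ssn": "PersonIdentifier", "social_security_no": "PersonIdentifier",
    "salary": "Compensation", "annual_pay": "Compensation",
}

schema_a = ["ssn", "name", "salary"]
schema_b = ["social_security_no", "dept", "annual_pay"]

candidates = [(a, b) for a in schema_a for b in schema_b
              if shared_ontology.get(a) and shared_ontology.get(a) == shared_ontology.get(b)]
print(candidates)   # [('ssn', 'social_security_no'), ('salary', 'annual_pay')]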

Relevance:

30.00%

Publisher:

Abstract:

This research aimed to develop a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; developing the framework therefore made use of systems theory and IDEF methodologies. This study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps to apply engineering and systems tools to this emerging field. It harnesses the relationships among various enterprise aspects and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model. Each activity is defined with its input, output, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
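
As a minimal sketch of the classification scheme as a data structure, carrying every element-facet combination through the four engineering activities; the enumeration mirrors only the scheme described above:

from itertools import product

elements = ["work", "resources", "decision", "information"]
facets = ["strategy", "competency", "capacity", "structure"]
process = ["specification", "analysis", "design", "implementation"]

matrix = {(e, f): list(process) for e, f in product(elements, facets)}
print(len(matrix), "element-facet combinations")   # 16
print(matrix[("decision", "capacity")])            # activities applied to one combination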

Relevance:

30.00%

Publisher:

Abstract:

The integration of automation (specifically Global Positioning Systems (GPS)) and Information and Communications Technology (ICT) through the creation of a Total Jobsite Management Tool (TJMT) in construction contractor companies can revolutionize the way contractors do business. The key to this integration is the collection and processing of real-time GPS data produced on the jobsite for use in project management applications. This research study established the need for an effective planning and implementation framework to assist construction contractor companies in navigating the terrain of GPS and ICT use. An Implementation Framework was developed using the Action Research approach. The framework consists of three components: (i) an ICT Infrastructure Model, (ii) an Organizational Restructuring Model, and (iii) a Cost/Benefit Analysis. The conceptual ICT infrastructure model was developed to show decision makers within highway construction companies how to collect, process, and use GPS data for project management applications. The organizational restructuring model was developed to assist companies in the analysis and redesign of business processes, data flows, core job responsibilities, and their organizational structure in order to obtain the maximum benefit at the least cost in implementing GPS as a TJMT. A cost/benefit analysis that identifies and quantifies the costs and benefits (both direct and indirect) was performed in the study to clearly demonstrate the advantages of using GPS as a TJMT. Finally, the study revealed that in order to successfully implement a program to utilize GPS data as a TJMT, it is important for construction companies to understand the various implementation and transitioning issues that arise when adopting this new technology and business strategy. In the study, Factors for Success were identified and ranked to allow a construction company to understand the factors that may contribute to or detract from the prospect for success during implementation. The Implementation Framework developed as a result of this study will serve to guide highway construction companies in the successful integration of GPS and ICT technologies for use as a TJMT.
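
As a hedged sketch of the kind of cost/benefit comparison described above, expressed as a discounted net cash flow; all figures, the planning horizon, and the discount rate are illustrative assumptions, not results from the study:

def npv(cashflows, rate=0.05):
    """Net present value of year-by-year cash flows (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# assumed net flows: up-front GPS/ICT investment, then annual benefits
# (productivity, reduced rework) minus annual costs (subscriptions, training)
net_flows = [-250_000, 90_000, 110_000, 110_000, 110_000, 110_000]
print(f"NPV over 5 years: ${npv(net_flows):,.0f}")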

Relevance:

30.00%

Publisher:

Abstract:

Recently, wireless network technology has grown at such a pace that research advances become practical reality within very short time spans. Mobile wireless communications have gone through several generations, each complementing and improving on the previous one. One mobile system that features high data rates and an open network architecture is 4G. Currently, the research community and industry in the field of wireless networks are working on candidate solutions for the 4G system. 4G is a collection of technologies and standards that will allow a range of ubiquitous computing and wireless communication architectures. The researcher considers one of the most important characteristics of future 4G mobile systems to be the ability to guarantee reliable communications from 100 Mbps, for high-mobility links, to as high as 1 Gbps for low-mobility users, in addition to high efficiency in spectrum usage. In mobile wireless communication networks, one important factor is the coverage of large geographical areas. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed. Subscribers thus require a reliable satellite link to access their services when they are in remote locations where a terrestrial infrastructure is unavailable, and must rely upon satellite coverage. A good modulation and access technique is also required in order to transmit high data rates over satellite links to mobile users. This technique must adapt to the characteristics of the satellite channel and also use the allocated bandwidth efficiently. Satellite links are fading channels when used by mobile users. Measures designed to cope with these fading environments make use of: (1) spatial diversity (a two-receive-antenna configuration); (2) time diversity (channel interleaver/spreading techniques); and (3) upper-layer FEC. The author proposes the use of OFDM (Orthogonal Frequency-Division Multiplexing) for the satellite link, increasing the time diversity. This technique will allow an increase in the data rate, as primarily required by multimedia applications, and will also make optimal use of the available bandwidth. In addition, this dissertation considers the use of Cooperative Satellite Communications for hybrid satellite/terrestrial networks. By using this technique, the satellite coverage can be extended to areas where there is no direct link to the satellite. For this purpose, a good channel model is necessary.
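
As a minimal sketch of the OFDM idea invoked above: data symbols are mapped onto orthogonal subcarriers with an IFFT and a cyclic prefix is prepended. The subcarrier count, prefix length, and QPSK mapping are illustrative choices, not the dissertation's design:

import numpy as np

N_SUB, CP = 64, 16                                     # subcarriers, cyclic-prefix length (assumed)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUB)

# QPSK: pairs of bits -> unit-energy complex symbols
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

time_domain = np.fft.ifft(symbols, n=N_SUB)            # one OFDM symbol in the time domain
tx = np.concatenate([time_domain[-CP:], time_domain])  # cyclic prefix guards against multipath

# receiver side over an ideal channel: strip the prefix, FFT back, recover the symbols
rx = np.fft.fft(tx[CP:], n=N_SUB)
print("max reconstruction error:", np.max(np.abs(rx - symbols)))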

Relevance:

30.00%

Publisher:

Abstract:

The deployment of wireless communications coupled with the popularity of portable devices has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations of the fetched data where the queried data item has valid answers. Using this approach, more complete information about the dynamics of an application environment is maintained.

The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which of the cache entries is least likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process. The query context is defined using a mobile user's movement pattern and requested information context. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones.

Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems and business.
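
As a hedged sketch of a location-aware replacement decision: when space is needed, evict the cached item whose valid region lies farthest from the user's predicted next location. The scoring rule and data are illustrative assumptions, not the dissertation's exact policy:

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# cached items: item id -> center of the region where the cached answer is valid
cache = {"gas_stations": (2.0, 1.0), "pharmacies": (9.0, 8.0), "atms": (3.0, 2.5)}
predicted_location = (2.5, 1.5)   # derived from the user's movement pattern

victim = max(cache, key=lambda item: distance(cache[item], predicted_location))
print("evict:", victim)           # the entry least likely to be needed next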

Relevance:

30.00%

Publisher:

Abstract:

Voice communication systems such as Voice over IP (VoIP), Public Switched Telephone Networks, and Mobile Telephone Networks are an integral means of human tele-interaction. These systems pose distinctive challenges due to their unique characteristics, such as low volume, burstiness, and stringent delay/loss requirements across heterogeneous underlying network technologies. Effective quality evaluation methodologies are important for system development and refinement, particularly when measurement is based on user feedback. Presently, most evaluation models are system-centric (Quality of Service or QoS-based), which prompted us to explore a user-centric (Quality of Experience or QoE-based) approach as a step towards the human-centric paradigm of system design. We investigate an affect-based QoE evaluation framework that attempts to capture users' perception while they are engaged in voice communication. Our modular approach consists of feature extraction from multiple information sources, including various affective cues, and different classification procedures such as Support Vector Machines (SVM) and k-Nearest Neighbor (kNN). The experimental study is illustrated in depth with detailed analysis of results. The evidence collected demonstrates the potential feasibility of our approach for QoE evaluation and suggests the consideration of human affective attributes in modeling user experience.
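
As an illustrative sketch of the classification stage only, comparing SVM and kNN on feature vectors; the synthetic features and three-level labels below are placeholders, not the study's affective cues or perceived-quality ratings:

import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 10))        # placeholder affective-cue features per call segment
y = rng.integers(0, 3, size=150)      # placeholder QoE labels (e.g., poor/fair/good)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())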

Relevance:

30.00%

Publisher:

Abstract:

Infrastructure systems are drivers of the nation's economy. A dollar spent on infrastructure development yields roughly double the initial spending in ultimate economic output in the short term, and over a twenty-year period generalized ‘public investment’ produces an aggregate $3.21 of economic activity per $1.00 spent [1]. Thus, the formulation of policies pertaining to infrastructure investment and development significantly affects the social and economic wellbeing of the nation. The aim of this policy brief is to evaluate innovative financing in infrastructure systems from two different perspectives: (1) through consideration of the current condition of infrastructure in the U.S., the current trends in public spending, and the emerging innovative financing tools; and (2) through evaluation of the roles and interactions of different agencies in the creation and diffusion of innovative financing tools. Then, using the example of transportation financing, the policy brief provides an assessment of policy landscapes that could lead to closing the infrastructure financing gap in the U.S. and proposes strategies for citizen involvement to gain public support for innovative financing.

Relevance:

30.00%

Publisher:

Abstract:

Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers often seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structural events; 2) refining monitoring configurations to eliminate false positives and false negatives; and 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are developed based on event mining algorithms, which are able to automatically derive part of that knowledge from historical system logs, events and tickets.

In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events with corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
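
As a hedged sketch of KNN-style ticket recommendation: historical ticket summaries are represented with TF-IDF and the nearest ones, with their recorded resolutions, are returned for an incoming ticket. The tickets below are invented placeholders, not the dissertation's data or exact algorithms:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

history = [
    ("db connection pool exhausted on app server", "increase pool size, restart service"),
    ("disk space low on /var partition", "rotate logs, extend volume"),
    ("high CPU on batch node during nightly job", "rebalance job schedule"),
]
texts = [summary for summary, _ in history]

vec = TfidfVectorizer().fit(texts)
nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(vec.transform(texts))

incoming = "application cannot get database connection, pool appears exhausted"
_, idx = nn.kneighbors(vec.transform([incoming]))
for i in idx[0]:
    print("similar ticket:", history[i][0], "->", history[i][1])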