22 results for Advanced application and branching systems
in Digital Commons at Florida International University
Abstract:
The performance of building envelopes and roofing systems depends significantly on accurate knowledge of wind loads and of the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures in order to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are referred to as flows with Partial Turbulence Simulation.

In this dissertation, the test procedure and scaling requirements for tests with partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes: one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of the load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predictions derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data.

For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only the peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings such as roof pavers are not well addressed in many existing building codes and standards. Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information in codes and standards such as ASCE 7-10 on pressure coefficients for components and cladding.
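The abstract does not give the formulas, but a minimal sketch of this kind of quasi-steady, two-process combination (all notation here is assumed, not taken from the dissertation) might be written as follows, where C_{p,HF} is the coefficient measured under the simulated high-frequency turbulence and u_L is the low-frequency velocity fluctuation with probability density f(u_L):

```latex
% Hedged sketch of the quasi-steady combination of the two statistical processes.
% C_{p,HF}: pressure coefficient under the simulated high-frequency turbulence (measured);
% u_L:      low-frequency gust fluctuation, treated quasi-steadily, with pdf f(u_L).
\[
  C_p(t) \;\approx\; C_{p,\mathrm{HF}}(t)\left(1 + \frac{u_L(t)}{\bar U}\right)^{2},
  \qquad
  F_{\hat C_p}(c) \;=\; \int F_{\hat C_{p,\mathrm{HF}}}\!\left(\frac{c}{\bigl(1 + u_L/\bar U\bigr)^{2}}\right) f(u_L)\,\mathrm{d}u_L .
\]
```

The second expression is simply the statement that full-scale equivalent peak coefficients follow from the joint probability of the high-frequency (measured) and low-frequency (quasi-steady) contributions.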
Abstract:
I proposed the study of two distinct aspects of the Ten-Eleven Translocation 2 (TET2) protein in order to understand its specific functions in different body systems.

In Part I, I characterized the molecular mechanisms of Tet2 in the hematological system. As the second member of the Ten-Eleven Translocation protein family, TET2 is frequently mutated in leukemic patients. Previous studies have shown that TET2 mutations occur in 20% of myelodysplastic syndrome/myeloproliferative neoplasm (MDS/MPN) cases, 10% of T-cell lymphoma/leukemia cases, and 2% of B-cell lymphoma/leukemia cases. Genetic mouse models also display distinct phenotypes of various types of hematological malignancies. I performed 5-hydroxymethylcytosine (5hmC) chromatin immunoprecipitation sequencing (ChIP-Seq) and RNA sequencing (RNA-Seq) of hematopoietic stem/progenitor cells to determine whether the deletion of Tet2 affects the abundance of 5hmC at myeloid-, T-cell- and B-cell-specific gene transcription start sites, which would ultimately result in various hematological malignancies. Subsequent exome sequencing (Exome-Seq) showed that disease-specific genes are mutated in different types of tumors, which suggests that TET2 may protect the genome from being mutated. The direct interaction between TET2 and the MutS Homolog 6 (MSH6) protein suggests that TET2 is involved in DNA mismatch repair. Finally, in vivo mismatch repair studies show that the loss of Tet2 causes a mutator phenotype. Taken together, my data indicate that TET2 binds to MSH6 to protect genome integrity.

In Part II, I aimed to better understand the role of Tet2 in the nervous system. 5-hydroxymethylcytosine regulates epigenetic modification during neurodevelopment and aging; thus, Tet2 may play a critical role in regulating adult neurogenesis. To examine the physiological significance of Tet2 in the nervous system, I first showed that the deletion of Tet2 reduces 5hmC levels in neural stem cells. Mice lacking Tet2 show abnormal hippocampal neurogenesis along with 5hmC alterations at different gene promoters and corresponding downregulation of gene expression. Luciferase reporter assays showed that two neural factors, Neurogenic differentiation 1 (NeuroD1) and Glial fibrillary acidic protein (Gfap), were down-regulated in Tet2 knockout cells. My results suggest that Tet2 regulates neural stem/progenitor cell proliferation and differentiation in the adult brain.
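The comparison of 5hmC abundance at lineage-specific transcription start sites can be illustrated with a minimal sketch. This is not the dissertation's pipeline; the BAM file names, the gene list, and the genomic coordinates below are hypothetical placeholders, and only the basic read-counting step is shown.

```python
# Hedged sketch: count 5hmC ChIP-Seq reads in a window around transcription
# start sites (TSSs) for Tet2-wild-type versus Tet2-knockout libraries.
# File names, gene symbols and coordinates are placeholders, not real data.
import pysam

WINDOW = 2000  # +/- 2 kb around each TSS

def tss_counts(bam_path, tss_table):
    """Return read counts in a window centred on each TSS."""
    counts = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for gene, chrom, tss in tss_table:
            start = max(0, tss - WINDOW)
            counts[gene] = bam.count(chrom, start, tss + WINDOW)
    return counts

# Entries: (gene symbol, chromosome, TSS coordinate); in practice these would be
# myeloid-, T-cell- and B-cell-specific genes taken from a genome annotation.
tss_table = [("GeneA", "chr11", 1_000_000), ("GeneB", "chr9", 2_000_000)]
wt = tss_counts("tet2_wt_5hmC.bam", tss_table)
ko = tss_counts("tet2_ko_5hmC.bam", tss_table)
for gene in wt:
    print(gene, wt[gene], ko[gene])
```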
Abstract:
Advanced Placement is a series of courses and tests designed to determine mastery over introductory college material. It has become part of the American educational system. The changing conception of AP was examined using critical theory to determine what led to a view of continual success. The study utilized David Armstrong’s variation of Michel Foucault’s critical theory to construct an analytical framework. Black and Ubbes’ data gathering techniques and Braun and Clark’s data analysis were utilized as the analytical framework. Data included 1135 documents: 641 journal articles, 421 newspaper articles and 82 government documents.

The study revealed three historical ruptures correlated to three themes containing subthemes. The first rupture was the Sputnik launch in 1958. Its correlated theme was AP leading to school reform with subthemes of AP as reform for able students and AP’s gaining of acceptance from secondary schools and higher education. The second rupture was the Nation at Risk report published in 1983. Its correlated theme was AP’s shift in emphasis from the exam to the course with the subthemes of AP as a course, a shift in AP’s target population, using AP courses to promote equity, and AP courses modifying curricula. The passage of the No Child Left Behind Act of 2001 was the third rupture. Its correlated theme was AP as a means to narrow the achievement gap with the subthemes of AP as a college preparatory program and the shifting of AP to an open access program.

The themes revealed a perception that progressively integrated the program into American education. The AP program changed emphasis from tests to curriculum, and is seen as the nation’s premier academic program to promote reform and prepare students for college. It has become a major source of income for the College Board. In effect, AP has become an agent of privatization, spurring other private entities into competition for government funding. The change and growth of the program over the past 57 years resulted in a deep integration into American education. As such the program remains an intrinsic part of the system and continues to evolve within American education.
The development, application, and implications of a strategy for reflective learning from experience
Abstract:
The problem on which this study focused was individuals' reduced capacity to respond to change and to engage in innovative learning when their reflective learning skills are limited. In this study, the preceding problem was addressed by two primary questions: To what degree can mastery of a strategy for reflective learning be facilitated as a part of an academic curriculum for professional practitioners? What impact will mastery of this strategy have on the learning style and adaptive flexibility of adult learners? The focus of the study was a direct application of human resource development technology in the professional preparation of teachers.

The background of the problem in light of changing global paradigms and educational action orientations was outlined and a review of the literature was provided. Roots of thought for two key concepts (i.e., learning to learn from experience and meaningful reflection in learning) were traced. Reflective perspectives from the work of eight researchers were compared. A meta-model of learning from experience drawn from the literature served as a conceptual framework for the study.

A strategy for reflective learning developed from this meta-model was taught to 109 teachers-in-training at Florida International University in Miami, Florida. Kolb's Adaptive Style Inventory and Learning Style Inventory were administered to the treatment group and to two control groups taught by the same professor. Three research questions and fourteen hypotheses guided data analysis.

Qualitative review of 1565 personal documents generated by the treatment group indicated that 77 students demonstrated "double-loop" learning, going beyond previously established limits to perception, understanding, or action. The mean score for depth of reflection indicated "single-loop" learning with "reflection-in-action" present. The change in the mean score for depth of reflection from the beginning to end of the study was statistically significant (p < .05). On quantitative measures of adaptive flexibility and learning style, with two exceptions, there were no significant differences noted between treatment and control groups on pre-test to post-test differences and on post-test mean scores adjusted for pre-test responses and demographic variables.

Conclusions were drawn regarding treatment, instrumentation, and application of the strategy and the meta-model. Implications of the strategy and the meta-model for research, for education, for human resource development, for professional practice, and for personal growth were suggested. Qualitative training materials and Kolb's instruments were provided in the appendices.
Abstract:
The increased occurrence of cyanobacteria (blue-green algae) blooms and the production of associated cyanotoxins have presented a threat to drinking water sources. Among the most common types of cyanotoxins found in potable water are microcystins (MCs), a family of cyclic heptapeptides. MCs are strongly hepatotoxic and known to initiate tumor-promoting activity. The presence of sub-lethal doses of MCs in drinking water is implicated as one of the key risk factors for an unusually high occurrence of primary liver cancer.

A variety of traditional water treatment methods have been attempted for the removal of cyanotoxins, but with limited success. Advanced Oxidation Technologies (AOTs) are attractive alternatives to traditional water treatments. We have demonstrated that ultrasonic irradiation and UV/H2O2 lead to the degradation of cyanotoxins in drinking water. These studies demonstrate that AOTs can effectively degrade MCs and that the associated toxicity is dramatically reduced. We have conducted detailed studies of different degradation pathways of MCs and conclude that the hydroxyl radical is responsible for a significant fraction of the observed degradation. Results indicate that the preliminary products of the sonolysis of MCs arise from hydroxyl radical attack on the benzene ring and from substitution and cleavage of the diene of the Adda peptide residue. AOTs are attractive methods for the treatment of cyanotoxins in potable water supplies.

The photochemical transformation of MCs is important in their environmental degradation. Previous studies implicated singlet oxygen as a primary oxidant in the photochemical transformation of MCs. Our results indicate that singlet oxygen predominantly leads to degradation of phycocyanin, a pigment of blue-green algae, hence reducing the degradation of MCs. The predominant process involves isomerization of the diene (6E to 6Z) in the Adda side chain via photosensitized isomerization involving the photoexcited phycocyanin. Our results indicate that photosensitized processes play a key role in the environmental fate and elimination of MCs in natural waters.
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is also currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary, resulting in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers but also in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently.

Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before.

This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
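The idea of using a flash cache to extend disk idleness can be illustrated with a toy model. This is only a sketch under assumed interfaces, not the thesis's design; the Disk stub, thresholds, and class names are hypothetical.

```python
# Hedged sketch: a tiny write-back flash-cache model that absorbs writes so the
# disk can stay spun down longer, flushing them in occasional batched bursts.
import time

SPIN_DOWN_IDLE_S = 30.0      # spin the disk down after this much idleness
FLUSH_HIGH_WATERMARK = 4     # flush once this many dirty pages are buffered

class Disk:
    """Stand-in for a real disk: just records stored pages and spin state."""
    def __init__(self):
        self.store, self.spun_down = {}, False
    def read(self, page_id):
        self.spun_down = False
        return self.store.get(page_id)
    def write(self, page_id, data):
        self.spun_down = False
        self.store[page_id] = data
    def spin_down(self):
        self.spun_down = True

class EnergyAwareCache:
    def __init__(self, disk):
        self.disk = disk
        self.dirty = {}                          # page_id -> data buffered in flash
        self.last_disk_access = time.monotonic()

    def write(self, page_id, data):
        self.dirty[page_id] = data               # absorb in flash; disk stays idle
        if len(self.dirty) >= FLUSH_HIGH_WATERMARK:
            self.flush()

    def read(self, page_id):
        if page_id in self.dirty:                # hit in flash; disk stays idle
            return self.dirty[page_id]
        self.last_disk_access = time.monotonic()
        return self.disk.read(page_id)           # miss: must touch the disk

    def flush(self):
        for page_id, data in self.dirty.items(): # one batched burst of disk activity
            self.disk.write(page_id, data)
        self.dirty.clear()
        self.last_disk_access = time.monotonic()

    def maybe_spin_down(self):
        if time.monotonic() - self.last_disk_access > SPIN_DOWN_IDLE_S:
            self.disk.spin_down()

cache = EnergyAwareCache(Disk())
for i in range(6):
    cache.write(i, b"page")                      # only one flush touches the disk
print(cache.read(5))
```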
Abstract:
The primary purpose of these studies was to determine the effect of planning menus using the Institute of Medicine's (IOM) Simple Nutrient Density Approach on nutrient intakes of long-term care (LTC) residents. In the first study, nutrient intakes of 72 subjects were assessed using Dietary Reference Intakes (DRIs) and IOM methodology. The intake distributions were used to set intake and menu planning goals. In the second study, the facility's regular menus were modified to meet the intake goals for vitamin E, magnesium, zinc, vitamin D and calcium. An experiment was used to test whether the modified menu resulted in intakes of micronutrients sufficient to achieve a low prevalence (<3%) of nutrient inadequacies. Three-day weighed food intakes for 35 females were adjusted for day-to-day variations in order to obtain an estimate of long-term average intake and to estimate the proportion of residents with inadequate nutrient intakes.

In the first study, the prevalence of inadequate intakes was determined to be between 65% and 99% for magnesium, vitamin E, and zinc. Mean usual intakes of vitamin D and calcium were far below the Adequate Intakes (AIs). In the experimental study, the prevalence of inadequacies was reduced to <3% for zinc and vitamin E but not magnesium. The group's mean usual intake from the modified menu met or exceeded the AI for calcium but fell short for vitamin D. Alternatively, it was determined that the addition of a multivitamin and mineral (MVM) supplement to intakes from the regular menu could be used to achieve goals for vitamin E, zinc and vitamin D but not calcium and magnesium.

A combination of menu modification and MVM supplementation may be necessary to achieve a low prevalence of micronutrient inadequacies among LTC residents. Menus should be planned to optimize intakes of those nutrients that are low in an MVM, such as calcium, magnesium, and potassium. An MVM supplement should be provided to fill the gap for nutrients not provided in sufficient amounts by the diet, such as vitamin E and vitamin D.
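The step of adjusting 3-day intakes for day-to-day variation and then estimating the proportion below requirements can be sketched in a few lines. This follows the general IOM/NRC-style approach implied by the abstract rather than the study's exact procedure; the numbers and the EAR value below are placeholders, not study data.

```python
# Hedged sketch: shrink each resident's 3-day mean toward the group mean to
# remove within-person (day-to-day) variation, then apply the EAR cut-point
# method to estimate the prevalence of inadequate intakes.
import numpy as np

def usual_intake_distribution(daily_intakes):
    """daily_intakes: array of shape (n_subjects, n_days)."""
    person_means = daily_intakes.mean(axis=1)
    n_days = daily_intakes.shape[1]
    s2_within = daily_intakes.var(axis=1, ddof=1).mean()   # day-to-day variance
    s2_obs = person_means.var(ddof=1)                      # variance of observed means
    s2_between = max(s2_obs - s2_within / n_days, 0.0)     # between-person variance
    shrink = np.sqrt(s2_between / s2_obs) if s2_obs > 0 else 0.0
    grand = person_means.mean()
    return grand + shrink * (person_means - grand)         # adjusted "usual" intakes

def prevalence_of_inadequacy(usual_intakes, ear):
    return float((usual_intakes < ear).mean())

rng = np.random.default_rng(0)
intakes = rng.normal(200, 60, size=(35, 3))   # placeholder intakes, e.g. mg/day
usual = usual_intake_distribution(intakes)
print(prevalence_of_inadequacy(usual, ear=265))  # EAR value is illustrative only
```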
Abstract:
Understanding habitat selection and movement remains a key question in behavioral ecology. Yet, obtaining a sufficiently high spatiotemporal resolution of the movement paths of organisms remains a major challenge, despite recent technological advances. Observing fine-scale movement and habitat choice decisions in the field can prove difficult and expensive, particularly in expansive habitats such as wetlands. We describe the application of passive integrated transponder (PIT) systems to field enclosures for tracking detailed fish behaviors in an experimental setting. PIT systems have been applied to habitats with clear passageways, at fixed locations, or in controlled laboratory and mesocosm settings, but their use in unconfined habitats and field-based experimental setups remains limited. In an Everglades enclosure, we continuously tracked the movement and habitat use of PIT-tagged centrarchids across three habitats of varying depth and complexity using multiple flatbed antennas for 14 days. Fish used all three habitats, with marked species-specific diel movement patterns across habitats, and short-lived movements that would likely be missed by other tracking techniques. Findings suggest that the application of PIT systems to field enclosures can be an insightful approach for gaining continuous, undisturbed and detailed movement data in unconfined habitats, and for experimentally manipulating both internal and external drivers of these behaviors.
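A minimal sketch of how raw antenna detections of this kind can be summarized into habitat use and diel activity is shown below. This is not the authors' pipeline; the column names, the antenna-to-habitat mapping, and the sample records are hypothetical.

```python
# Hedged sketch: turn raw PIT detections (tag_id, antenna_id, timestamp) into
# per-fish habitat-use counts and a per-hour diel activity summary.
import pandas as pd

ANTENNA_HABITAT = {"A1": "shallow", "A2": "intermediate", "A3": "deep"}

detections = pd.DataFrame({
    "tag_id":     ["F01", "F01", "F02", "F01"],
    "antenna_id": ["A1",  "A3",  "A2",  "A3"],
    "timestamp":  pd.to_datetime([
        "2015-06-01 02:15", "2015-06-01 14:40",
        "2015-06-01 03:05", "2015-06-02 21:10"]),
})

detections["habitat"] = detections["antenna_id"].map(ANTENNA_HABITAT)
detections["hour"] = detections["timestamp"].dt.hour

habitat_use = detections.groupby(["tag_id", "habitat"]).size().unstack(fill_value=0)
diel_activity = detections.groupby(["tag_id", "hour"]).size()
print(habitat_use)
print(diel_activity)
```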
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry for providing ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-Performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue.

The major contributions of the thesis work include: (i.) providing a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii.) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas, which is shown to be correct, complete and unambiguous; (iii.) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv.) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v.) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query processing modules; (vi.) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii.) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
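The mediator idea behind such a system can be illustrated with a toy sketch: meta-data maps a global attribute to the attribute name used in each component source, and a global query fans out across sources and merges the results. This is only an illustration of the general architecture, not HPDRC's Sem-ODM or Semantic SQL implementation; the source names and mappings are hypothetical.

```python
# Hedged, toy mediator: global attributes are resolved per source via meta-data
# mappings, and results from all component sources are merged.
GLOBAL_SCHEMA = {
    "patient_name": {"hospital_db": "pat_full_name", "clinic_db": "name"},
    "birth_date":   {"hospital_db": "dob",           "clinic_db": "birthdate"},
}

SOURCES = {
    "hospital_db": [{"pat_full_name": "A. Smith", "dob": "1950-02-11"}],
    "clinic_db":   [{"name": "B. Jones", "birthdate": "1948-07-30"}],
}

def global_query(attributes):
    """Project the requested global attributes across all component sources."""
    results = []
    for source, rows in SOURCES.items():
        for row in rows:
            results.append({
                attr: row.get(GLOBAL_SCHEMA[attr][source]) for attr in attributes
            })
    return results

print(global_query(["patient_name", "birth_date"]))
```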
Abstract:
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preferences and system technical conditions. The smart grid design aims to provide overall power system monitoring and to create protection and control strategies that maintain system performance, stability and security.

This dissertation contributed to the development of a unique and novel smart grid test-bed laboratory with integrated monitoring, protection and control systems. This test-bed was used as a platform to test the smart grid operational ideas developed here. The implementation of this system in real-time software creates an environment for studying, implementing and verifying the novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available Data Acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes and abnormal conditions, along with stability and security information. These developments provide valuable measurements for technical power system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning, operation and energy trading, in addition to enabling advanced applications in Wide Area Monitoring, Protection and Control (WAMPAC).

Moreover, a virtual protection system was developed and implemented in the smart grid laboratory with integrated functionality for wide area applications. Experiments and procedures were developed in the system in order to detect abnormal system conditions and apply proper remedies to heal the system. A DC microgrid was designed and integrated with the AC system with appropriate control capability. This system represents realistic hybrid AC/DC microgrid connectivity to the AC side, enabling study of how such an architecture can help remedy abnormal system conditions during operation. In addition, this dissertation explored the challenges and feasibility of implementing real-time system analysis features to monitor system security and stability measures. These indices are measured experimentally during the operation of the developed hybrid AC/DC microgrids. Furthermore, a real-time optimal power flow system was implemented to optimally manage the power sharing between AC generators and DC-side resources. A study of real-time energy management algorithms in hybrid microgrids was performed to evaluate the use of energy storage resources in mitigating heavy-load impacts on system stability and operational security.
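One common way to estimate phasors from sampled waveforms, consistent with the kind of DAQ-based monitoring described above, is a full-cycle single-bin DFT. The sketch below is only illustrative; the actual implementation in the test-bed may differ, and the sampling parameters are assumed.

```python
# Hedged sketch: estimate the magnitude (RMS) and phase of one cycle of a
# sampled waveform with a single-bin DFT at the fundamental frequency.
import numpy as np

F_NOMINAL = 60.0          # Hz, assumed nominal system frequency
SAMPLES_PER_CYCLE = 64    # assumed DAQ samples per fundamental cycle

def phasor(samples):
    """Return (RMS magnitude, phase in degrees) for one cycle of samples."""
    n = np.arange(SAMPLES_PER_CYCLE)
    X = np.sum(samples * np.exp(-1j * 2 * np.pi * n / SAMPLES_PER_CYCLE))
    X *= np.sqrt(2) / SAMPLES_PER_CYCLE
    return np.abs(X), np.angle(X, deg=True)

# Synthetic test: 120 V RMS at 60 Hz with a 30-degree phase shift.
t = np.arange(SAMPLES_PER_CYCLE) / (F_NOMINAL * SAMPLES_PER_CYCLE)
v = 120 * np.sqrt(2) * np.cos(2 * np.pi * F_NOMINAL * t + np.radians(30))
print(phasor(v))   # approximately (120.0, 30.0)
```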
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. However, it is a challenge to provide such a methodology because of the agent mobility in mobile agent systems.

The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach to verify models on different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate the synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified based on transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture of mobile agent systems, and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency and safety of the medical information processing system.

From successfully modeling and analyzing the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis, but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system demonstrates the high flexibility, efficiency and low cost of mobile agent technologies.
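The kind of reachability property checked with SPIN can be illustrated by a toy exhaustive state-space search. This is only a sketch of the idea (SPIN itself explores Promela models far more efficiently); the states and transitions of the hypothetical mobile-agent scenario below are invented for illustration.

```python
# Hedged sketch: breadth-first reachability over an explicit state space for a
# toy mobile-agent scenario (agent location, whether a record was processed).
from collections import deque

def successors(state):
    """Hypothetical transitions of a medical-record-processing mobile agent."""
    location, processed = state
    nxt = set()
    if location == "clinic":
        nxt.add(("lab", processed))            # agent migrates over a channel
    if location == "lab" and not processed:
        nxt.add(("lab", True))                 # agent processes the record
    if location == "lab" and processed:
        nxt.add(("clinic", processed))         # agent returns with results
    return nxt

def reachable(initial, goal):
    """Return True if goal is reachable from initial by exhaustive search."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for nxt in successors(state) - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

print(reachable(("clinic", False), ("clinic", True)))   # expected: True
```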
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process that is also difficult to keep up to date with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility needed to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing and BI systems.

To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of different data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor models for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of our proposed approaches.
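The log categorization step can be illustrated with a minimal sketch: variable fields are masked so raw log lines collapse into event templates, which can then be counted and summarized. This is only an assumed illustration of the general technique, not the dissertation's algorithms; the sample log lines are hypothetical.

```python
# Hedged sketch: group raw log lines into event types by masking numeric and
# hexadecimal fields, then count occurrences of each event template.
import re
from collections import Counter

def event_template(line):
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)   # mask hex identifiers
    line = re.sub(r"\d+", "<NUM>", line)              # mask numeric fields
    return line.strip()

logs = [
    "disk 3 latency 120 ms",
    "disk 7 latency 95 ms",
    "cache miss for block 0x1f3a",
    "cache miss for block 0x9bb2",
]

summary = Counter(event_template(line) for line in logs)
for template, count in summary.most_common():
    print(count, template)
```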