901 results for Many-core systems


Relevance:

30.00%

Publisher:

Abstract:

Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This trend leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. One effect of large supply chains is the burden of high stocks and stockholding costs. This has led to the rapid spread of just-in-time logistics concepts aimed at minimizing stock while maintaining high product availability. These competing goals, minimizing stock while keeping product availability high, demand high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator's main intention is to optimize the profitability of the production system, not its availability. Classic models, limited to representing and optimizing maintenance strategies in the light of availability alone, therefore fall short. A novel approach is needed that incorporates all processes with a financial impact in and around a production system. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extendability.
Within those modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies but also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to limited computational power, it was not possible to run the simulation and the optimization with the fully developed production model; the production model was therefore reduced to a black box with a lower level of detail.
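The trade-off the model targets, availability versus overall profitability including over-maintaining and maintenance-induced failures, can be illustrated with a small Monte Carlo sketch. Everything below (function name, parameter values) is an illustrative assumption, not the thesis' actual modules:

```python
import random

def simulate_profit(horizon_h=10_000.0, mtbf_h=500.0, pm_interval_h=250.0,
                    repair_h=24.0, pm_h=4.0, pm_induced_fail_p=0.02,
                    revenue_per_h=100.0, pm_cost=500.0, repair_cost=5_000.0,
                    seed=1):
    """Monte Carlo sketch of a profit-oriented maintenance model.

    Alternates production runs with preventive maintenance (PM) and random
    failures; PM itself can induce a failure. Returns (availability, profit).
    """
    rng = random.Random(seed)
    t = up = profit = 0.0
    next_pm = pm_interval_h
    while t < horizon_h:
        ttf = rng.expovariate(1.0 / mtbf_h)        # time to next random failure
        if t + ttf < next_pm:                       # failure strikes before PM
            up += ttf
            profit += ttf * revenue_per_h - repair_cost
            t += ttf + repair_h
        else:                                       # PM slot reached first
            run = next_pm - t
            up += run
            profit += run * revenue_per_h - pm_cost
            t = next_pm + pm_h
            if rng.random() < pm_induced_fail_p:    # maintenance-induced failure
                profit -= repair_cost
                t += repair_h
        next_pm = t + pm_interval_h                 # reschedule PM
    return up / t, profit
```

Sweeping `pm_interval_h` and comparing the returned profit, rather than the availability, is exactly the kind of question a profitability-oriented model is meant to answer.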


This thesis work, carried out in the laboratories of the X-ray Imaging Group of the Department of Physics and Astronomy of the University of Bologna and within the COSA (Computing on SoC Architectures) project of INFN's Fifth National Scientific Commission, aims at porting and analysing a tomographic reconstruction code on GPU architectures installed on low-power System-on-Chip boards, in order to develop a portable, inexpensive and relatively fast method. Starting from the computational analysis, three versions of the CUDA C port were developed: the first simply offloads the most expensive part of the computation to the graphics card; the second exploits the coprocessor's native speed at matrix computation (mapping each pixel to a single parallel compute unit); the third is a further optimisation of the second. The third version was chosen as the final one because it performs best both in per-slice reconstruction time and in energy savings. The port was compared against two other parallelisations, in OpenMP and in MPI. The efficiency of each paradigm, in terms of computing speed and energy consumed, was then studied both on an HPC cluster and on a low-power SoC cluster (using in particular the quad-core Tegra K1 board). Our proposed solution combines the OpenMP port with the CUDA C one: three CPU cores are reserved for the OpenMP code, while the fourth manages the GPU through the CUDA C port. This double parallelisation is the most efficient in terms of power and energy, whereas the HPC cluster is the most efficient in computing speed. The proposed method would thus allow the potential of the CPU and GPU to be exploited almost completely at a very low cost.
A possible future optimisation could reconstruct two slices on the GPU at the same time, roughly doubling total speed and making better use of the hardware. This study gave very satisfactory results: with only three TK1 boards it is possible to match, and later perhaps exceed, the computing power of a traditional server, with the added advantages of a portable, low-consumption, low-cost system. Within the computing field, this research stands as one of the first practical studies of low-power SoC architectures and their use in scientific applications, with very promising results.
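The core split described above (three CPU cores on the OpenMP path, one core driving the GPU) can be sketched as a shared work queue consumed by CPU workers and a GPU-manager thread. The backends here are stubs; real code would call the OpenMP and CUDA C kernels:

```python
import queue
import threading

def reconstruct_hybrid(n_slices, cpu_workers=3):
    """Toy scheduler for the CPU+GPU split: `cpu_workers` threads stand in
    for the OpenMP cores, one extra thread stands in for the core that
    manages the GPU. Returns which worker type handled each slice."""
    todo = queue.Queue()
    for s in range(n_slices):
        todo.put(s)
    done = {"cpu": [], "gpu": []}
    lock = threading.Lock()

    def worker(tag):
        while True:
            try:
                s = todo.get_nowait()      # next slice to reconstruct
            except queue.Empty:
                return
            # a real implementation would reconstruct slice `s` here,
            # on the CPU (OpenMP) or on the GPU (CUDA C kernel)
            with lock:
                done[tag].append(s)

    threads = [threading.Thread(target=worker, args=("cpu",))
               for _ in range(cpu_workers)]
    threads.append(threading.Thread(target=worker, args=("gpu",)))
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done
```

The shared queue gives dynamic load balancing: a faster backend simply pulls more slices, which is why the hybrid can keep both the CPU cores and the GPU busy.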


Lantibiotics are peptide molecules produced by a large number of Gram-positive bacteria; they have antibacterial activity against a broad spectrum of germs and represent a potential solution to the growing problem of multi-resistant pathogens. They act by binding to the target's membrane, which is then destabilised through the induction of pores that kill the pathogen. Lantibiotics typically consist of a leader peptide and a core peptide. The former is required for recognition of the molecule by the enzymes that carry out post-translational modifications of the latter, which becomes the region with bactericidal activity once cleaved from the leader peptide. These post-translational modifications determine the content of the amino acids lanthionine (Lan) and methyl-lanthionine (MeLan), characterised by thioether bridges that confer greater resistance to proteases and thereby circumvent the main limitation to the therapeutic use of peptides. Nisin is the most studied and best characterised lantibiotic; it is produced by the bacterium L. lactis and has been used for over twenty years in the food industry. Nisin is a 34-amino-acid peptide containing lanthionine and methyl-lanthionine rings introduced by the action of the enzymes nisB and nisC, while cleavage of the leader peptide is performed by the enzyme nisP. This work addresses the engineering of lantibiotic synthesis and modification in the bacterium E. coli, in particular the heterologous expression in E. coli of the lantibiotic cinnamycin, naturally produced by the bacterium Streptomyces cinnamoneus.
This lantibiotic, nineteen amino acids long after leader cleavage, is modified by the enzyme CinM, responsible for introducing the Lan and MeLan amino acids; by the enzyme CinX, responsible for hydroxylation of aspartic acid (Asp); and finally by the enzyme cinorf7, which introduces the lysinoalanine (Lal) bridge. Once the activity of cinnamycin, and hence of the enzyme CinM, was confirmed, the modification of nisin by CinM was attempted. To this end a synthetic gene was designed encoding nisin with a chimeric leader, formed by fusing the cinnamycin leader with the nisin leader. The final product, after leader cleavage by nisP, is a fully modified nisin. This result allows nisin to be modified using a single enzyme instead of two, reducing the metabolic load on the producing bacterium; moreover, it opens the way to using CinM to modify other lantibiotics with the same approach, as well as to introducing the lysinoalanine bridge, since the enzyme cinorf7 requires the presence of CinM to perform its function.


This thesis aims to explore some aspects of one of the fastest-growing areas of computing in recent (and coming) years, the Internet of Things, with particular attention to the development platforms available in this field. With these premises, the opportunity is taken to explore the platform built and released a few months ago by one of the giants of the IT market: Microsoft. The first chapter covers the Internet of Things in general terms, through an initial overview followed by an in-depth analysis of the main protocols developed for this technology. The second chapter lists a series of open-source platforms available today for developing IoT systems. From the third chapter the focus shifts to Microsoft technologies: first Windows 10 in general, including UWP Applications; then, in the same chapter, the attention turns to Windows IoT Core, which is explored in detail (Windows Remote Arduino, Headed/Headless modes, etc.). The following chapter concerns the project part of the thesis, covering the development of the Smart Parking project in all its phases (from Requirements through Implementation and Testing). The fifth (and last) chapter presents the conclusions about Windows IoT Core and its advantages/disadvantages.


A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
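The connectivity requirement at the heart of SND, at least k edge-disjoint paths between a node pair, can be verified with a unit-capacity max-flow computation (by Menger's theorem the max flow equals the number of edge-disjoint paths). A minimal sketch of that feasibility check, not of the paper's tabu-search machinery:

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint s-t paths in an undirected graph via unit-capacity
    max-flow (Edmonds-Karp). An SND solution is feasible iff this count meets
    each pair's connectivity requirement."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1        # undirected edge: one unit each way
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:        # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t                               # push one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

A tabu-search move that deletes an edge can call this check on the affected node pairs to confirm the candidate network is still feasible.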


Many research-based instruction strategies (RBISs) have been developed; their superior efficacy with respect to student learning has been demonstrated in many studies. Collecting and interpreting evidence about: 1) the extent to which electrical and computer engineering (ECE) faculty members are using RBISs in core, required engineering science courses, and 2) concerns that they express about using them, are important aspects of understanding how engineering education is evolving. The authors surveyed ECE faculty members, asking about their awareness and use of selected RBISs. The survey also asked what concerns ECE faculty members had about using RBISs. Respondent data showed that awareness of RBISs was very high, but estimates of use of RBISs, based on survey data, varied from 10% to 70%, depending on characteristics of the strategy. The most significant concern was the amount of class time that using an RBIS might take; efforts to increase use of RBISs must address this.


A new technique for on-line high-resolution isotopic analysis of liquid water, tailored for ice core studies, is presented. We built an interface between a Wavelength Scanned Cavity Ring Down Spectrometer (WS-CRDS) purchased from Picarro Inc. and a Continuous Flow Analysis (CFA) system. The system offers the possibility to perform simultaneous water isotopic analysis of δ18O and δD on a continuous stream of liquid water as generated from a continuously melted ice rod. Injection of sub-μl amounts of liquid water is achieved by pumping sample through a fused silica capillary and instantaneously vaporizing it with 100% efficiency in a home-made oven at a temperature of 170 °C. A calibration procedure allows for proper reporting of the data on the VSMOW-SLAP scale. We apply the necessary corrections based on the assessed performance of the system regarding instrumental drifts and dependence on the water concentration in the optical cavity. The melt rates are monitored in order to assign a depth scale to the measured isotopic profiles. Application of spectral methods yields the combined uncertainty of the system at below 0.1‰ and 0.5‰ for δ18O and δD, respectively. This performance is comparable to that achieved with mass spectrometry. Dispersion of the sample in the transfer lines limits the temporal resolution of the technique. In this work we investigate and assess these dispersion effects. Using an optimal filtering method, we show how the measured profiles can be corrected for the smoothing effects resulting from sample dispersion. Considering the significant advantages the technique offers (simultaneous measurement of δ18O and δD, potentially in combination with chemical components traditionally measured on CFA systems, and a notable reduction in analysis time and power consumption), we consider it an alternative to traditional isotope ratio mass spectrometry, with the possibility of deployment for field ice core studies.
We present data acquired in the field during the 2010 season as part of the NEEM deep ice core drilling project in North Greenland.
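The dispersion-induced smoothing, and the idea of undoing it, can be illustrated with a first-order mixing-cell model of the transfer lines. This is a didactic stand-in for the optimal filtering used in the paper: the mixing parameter `a` is an assumed value, and real (noisy) data would need a noise-aware Wiener-type filter rather than this exact inverse:

```python
def mixing_cell_smooth(x, a=0.6):
    """Forward model: the transfer lines act like a first-order mixing cell
    that attenuates fast isotopic variations (a is an assumed, not fitted,
    mixing parameter)."""
    y, prev = [], x[0]
    for v in x:
        prev = a * prev + (1 - a) * v   # exponential smoothing step
        y.append(prev)
    return y

def mixing_cell_restore(y, a=0.6):
    """Exact algebraic inverse of the mixing-cell model: recovers the
    unsmoothed profile when the data are noise-free."""
    x = [y[0]]
    for i in range(1, len(y)):
        x.append((y[i] - a * y[i - 1]) / (1 - a))
    return x
```

Running the inverse on noisy data amplifies high-frequency noise, which is exactly why the paper resorts to optimal filtering instead of naive deconvolution.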


Methane and nitrous oxide are important greenhouse gases which show a strong increase in atmospheric mixing ratios since pre-industrial times as well as large variations during past climate changes. The understanding of their biogeochemical cycles can be improved using stable isotope analysis. However, high-precision isotope measurements on air trapped in ice cores are challenging because of the high susceptibility to contamination and fractionation. Here, we present a dry extraction system for combined CH4 and N2O stable isotope analysis from ice core air, using an ice grating device. The system allows simultaneous analysis of δD(CH4) or δ13C(CH4), together with δ15N(N2O), δ18O(N2O) and δ15N(NO+ fragment) on a single ice core sample, using two isotope mass spectrometry systems. The optimum quantity of ice for analysis is about 600 g with typical "Holocene" mixing ratios for CH4 and N2O. In this case, the reproducibility (1σ) is 2.1‰ for δD(CH4), 0.18‰ for δ13C(CH4), 0.51‰ for δ15N(N2O), 0.69‰ for δ18O(N2O) and 1.12‰ for δ15N(NO+ fragment). For smaller amounts of ice the standard deviation increases, particularly for N2O isotopologues. For both gases, small-scale intercalibrations using air and/or ice samples have been carried out in collaboration with other institutes that are currently involved in isotope measurements of ice core air. Significant differences are shown between the calibration scales, but those offsets are consistent and can therefore be corrected for.
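The last point, consistent offsets between calibration scales that can be corrected for, amounts in the simplest case to measuring the mean offset on shared samples and shifting the other scale by it. A minimal sketch with hypothetical helper names, not the intercalibration protocol itself:

```python
from statistics import mean, stdev

def reproducibility_1sigma(replicates):
    """1-sigma reproducibility from replicate measurements of one sample."""
    return stdev(replicates)

def correct_scale_offset(other_lab_values, shared_other, shared_own):
    """Shift another lab's delta values onto our scale using the mean offset
    observed on shared air/ice samples. Valid only because the offsets are
    reported to be consistent."""
    offset = mean(o - s for o, s in zip(shared_own, shared_other))
    return [v + offset for v in other_lab_values]
```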


Regrouping female rabbits in group-housing systems is common management practice in rabbit breeding, which may, however, induce agonistic interactions resulting in social stress and severe injuries. Here we compared two methods of regrouping female rabbits with respect to their effects on behaviour, stress and injuries. Thus, we introduced two unfamiliar rabbits into a group of rabbits either in the group's familiar pen (HOME) or in a novel disinfected pen (NOVEL), and assessed the effects of these treatments on general activity, number and duration of agonistic interactions, number and severity of injuries, and body temperature as a measure of stress. General activities were not affected by the method of regrouping. Also, treatment had no effect on the number and duration of agonistic interactions. However, the number of injuries (P=0.030) as well as body temperature on the first day after regrouping (P=0.0036) were increased in rabbits regrouped in a novel clean pen. These findings question the recommendation to introduce unfamiliar does into established groups in a neutral environment and indicate that regrouping in the group's home pen may decrease the risk of severe injuries and social stress. (C) 2011 Elsevier B.V. All rights reserved.


Background. No comprehensive systematic review has been published since 1998 about the frequency with which cancer patients use complementary and alternative medicine (CAM). Methods. MEDLINE, AMED, and Embase databases were searched for surveys published until January 2009. Surveys conducted in Australia, Canada, Europe, New Zealand, and the United States with at least 100 adult cancer patients were included. Detailed information on methods and results was independently extracted by 2 reviewers. Methodological quality was assessed using a criteria list developed according to the STROBE guideline. Exploratory random-effects meta-analysis and meta-regression were applied. Results. Studies from 18 countries (152 surveys; >65,000 cancer patients) were included. Heterogeneity of CAM use was high and to some extent explained by differences in survey methods. The combined prevalence for "current use" of CAM across all studies was 40%. The highest was in the United States and the lowest in Italy and the Netherlands. Meta-analysis suggested an increase in CAM use from an estimated 25% in the 1970s and 1980s to more than 32% in the 1990s and to 49% after 2000. Conclusions. The overall prevalence of CAM use found was lower than often claimed. However, there was some evidence that use has increased considerably over the past years. Therefore, health care systems ought to implement clear strategies for dealing with this. To improve the validity and reporting of future surveys, the authors suggest criteria for methodological quality that should be fulfilled and reporting standards that should be required.
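The exploratory random-effects meta-analysis mentioned in the methods is conventionally implemented with the DerSimonian-Laird estimator. A generic sketch, where the inputs are per-study effect estimates (e.g. transformed prevalences) and their variances; this shows the standard method, not necessarily the authors' exact procedure:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird estimator.
    Returns (pooled effect, standard error, between-study variance tau^2)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

With the high heterogeneity reported above, tau^2 would be large and the random-effects weights nearly equal across studies, which is why survey methodology was examined by meta-regression.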


Grigorij Kreidlin (Russia). A Comparative Study of Two Semantic Systems: Body Russian and Russian Phraseology. Mr. Kreidlin teaches in the Department of Theoretical and Applied Linguistics of the State University of Humanities in Moscow and worked on this project from August 1996 to July 1998. The classical approach to non-verbal and verbal oral communication is based on a traditional separation of body and mind. Linguists studied words and phrasemes, the products of mind activities, while gestures, facial expressions, postures and other forms of body language were left to anthropologists, psychologists, physiologists, and indeed to anyone but linguists. Only recently have linguists begun to turn their attention to gestures, and semiotic and cognitive paradigms are now appearing that raise the question of designing an integral model for the unified description of non-verbal and verbal communicative behaviour. This project attempted to elaborate lexical and semantic fragments of such a model, producing a co-ordinated semantic description of the main Russian gestures (including gestures proper, postures and facial expressions) and their natural language analogues. The concept of emblematic gestures and gestural phrasemes and of their semantic links permitted an appropriate description of the transformation of a body as a purely physical substance into a body as a carrier of essential attributes of Russian culture - the semiotic process called the culturalisation of the human body. Here the human body embodies a system of cultural values and displays them in a text within the area of phraseology and some other important language domains. The goal of this research was to develop a theory that would account for the fundamental peculiarities of the process. The model proposed is based on the unified lexicographic representation of verbal and non-verbal units in the Dictionary of Russian Gestures, which Mr. Kreidlin had earlier compiled in collaboration with a group of his students. The Dictionary was originally oriented only towards reflecting how the lexical competence of Russian body language is represented in the Russian mind. Now a special type of phraseological zone has been designed to reflect explicitly the semantic relationships between the gestures in the entries and phrasemes, and to provide the necessary information for a detailed description of these. All the definitions, rules of usage and the established correlations are written in a semantic meta-language. Several classes of Russian gestural phrasemes were identified, including phrasemes and idioms with semantic definitions close to those of the corresponding gestures; phraseological units that have lost touch with the related gestures (although etymologically they are derived from gestures that have gone out of use); and phrasemes and idioms which have semantic traces or reflexes inherited from the meaning of the related gestures. The basic assumptions and practical considerations underlying the work were as follows. (1) To compare meanings one has to be able to state them. To state the meaning of a gesture or a phraseological expression, one needs a formal semantic meta-language of propositional character that represents the cognitive and mental aspects of the codes. (2) The semantic contrastive analysis of any semiotic codes used in person-to-person communication also requires a single semantic meta-language, i.e. a formal semantic language of description. This language must be as linguistically and culturally independent as possible and yet must be open to interpretation through any culture and code. Another possible method of conducting comparative verbal-non-verbal semantic research is to work with different semantic meta-languages and semantic nets and to learn how to combine them, translate from one to another, etc., in order to reach a common basis for the subsequent comparison of units. (3) The practical work in defining phraseological units and organising the phraseological zone in the Dictionary of Russian Gestures unexpectedly showed that semantic links between gestures and gestural phrasemes are reflected not only in common semantic elements and the syntactic structure of semantic propositions, but also in general and partial cognitive operations performed on semantic definitions. (4) In comparative semantic analysis one should take into account the different values and roles of inner form and image components in the semantic representation of non-verbal and verbal units. (5) For the most part, gestural phrasemes are direct semantic derivatives of gestures. The cognitive and formal techniques can be regarded as typological features for a future functional-semantic classification of gestural phrasemes: two phrasemes whose meanings can be obtained by the same cognitive or purely syntactic operations (or types of operations) over the meanings of the corresponding gestures belong by definition to one and the same class. The nature of many cognitive operations has not yet been studied well, but the first steps towards its comprehension and description have been taken. The research identified 25 logically possible classes of relationships between a gesture and a gestural phraseme. The calculation is based on the theoretically possible formal (set-theoretic) correlations between the signifiers and signifieds of the non-verbal and verbal units. However, in order to examine which of them are realised in practice, a complete semantic and lexicographic description of all (not only central) everyday emblems and gestural phrasemes is required, and this unfortunately does not yet exist. Mr. Kreidlin suggests that the results of the comparative analysis of verbal and non-verbal units could also be used in other research areas such as the lexicography of emotions.


BACKGROUND: Mild perioperative hypothermia increases the risk of several severe complications. Perioperative patient warming to preserve normothermia has thus become routine, with forced-air warming being used most often. In previous studies, various resistive warming systems have shown mixed results in comparison with forced-air. Recently, a polymer-based resistive patient warming system has been developed. We compared the efficacy of a standard forced-air warming system with the resistive polymer system in volunteers. METHODS: Eight healthy volunteers participated, each on two separate study days. Unanesthetized volunteers were cooled to a core temperature (tympanic membrane) of 34 degrees C by application of forced-air at 10 degrees C and a circulating-water mattress at 4 degrees C. Meperidine and buspirone were administered to prevent shivering. In a randomly designated order, volunteers were then rewarmed (until their core temperatures reached 36 degrees C) with one of the following active warming systems: (1) forced-air warming (Bair Hugger warming cover #300, blower #750, Arizant, Eden Prairie, MN); or (2) polymer fiber resistive warming (HotDog whole body blanket, HotDog standard controller, Augustine Biomedical, Eden Prairie, MN). The alternate system was used on the second study day. Metabolic heat production, cutaneous heat loss, and core temperature were measured. RESULTS: Metabolic heat production and cutaneous heat loss were similar with each system. After a 30-min delay, core temperature increased nearly linearly by 0.98 (95% confidence interval 0.91-1.04) degrees C/h with forced-air and by 0.92 (0.85-1.00) degrees C/h with resistive heating (P = 0.4). CONCLUSIONS: Heating efficacy and core rewarming rates were similar with full-body forced-air and full-body resistive polymer heating in healthy volunteers.


New designs of user input systems have resulted from developing technologies and specialized user demands. Conventional keyboard and mouse input devices still dominate in input speed, but other input mechanisms are demanded in special application scenarios. Touch screen and stylus input methods have been widely adopted by PDAs and smartphones. Reduced keypads are necessary for mobile phones. A new design trend explores the design space of applications requiring single-handed, even eyes-free, input on small mobile devices. This calls for as few keys on the input device as possible while keeping it feasible to operate. But representing many characters with fewer keys can make the input ambiguous. Accelerometers embedded in mobile devices provide opportunities to combine device movements with keys to disambiguate the input signal. Recent research has explored this design space for text input. In this dissertation an accelerometer-assisted single-key positioning input system is developed. It uses input-device tilt directions as input signals and maps their sequences to output characters and functions. A generic positioning model is developed as a guideline for designing positioning input systems. A calculator prototype and a text input prototype on the 4+1 (5 positions) and the 8+1 (9 positions) positioning input systems are implemented using accelerometer readings on a smartphone. Users operate with one physical key, and feedback is audible. Controlled experiments are conducted to evaluate the feasibility, learnability, and design space of the accelerometer-assisted single-key positioning input system. This research can provide inspiration and innovative references for researchers and practitioners in positioning user input design, applications of accelerometer readings, and the development of standard machine-readable sign languages.
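The mapping from a tilt reading to one of the positions, and from confirmed position sequences to characters, can be sketched as follows. The dead-zone threshold, axis conventions, and the two-tilt code table are all hypothetical illustrations, not the dissertation's actual calibration or encoding:

```python
def tilt_to_position(ax, ay, dead_zone=0.3):
    """Map accelerometer x/y tilt to one of the 4+1 positions: centre (C)
    plus four directions (L, R, U, D). Thresholds are assumed values."""
    if abs(ax) < dead_zone and abs(ay) < dead_zone:
        return "C"
    if abs(ax) >= abs(ay):                 # dominant axis decides the direction
        return "R" if ax > 0 else "L"
    return "U" if ay > 0 else "D"

# Hypothetical code table: each character is a sequence of two positions,
# each confirmed with the single physical key.
CODE_TABLE = {("L", "C"): "a", ("U", "C"): "b", ("R", "C"): "c", ("D", "C"): "d"}

def decode(tilt_sequence):
    """Translate a confirmed sequence of tilt positions into text."""
    out = []
    for pair in zip(tilt_sequence[::2], tilt_sequence[1::2]):
        out.append(CODE_TABLE.get(pair, "?"))
    return "".join(out)
```

With two tilts per character, the 4+1 layout yields 25 codes and the 8+1 layout 81, which is how a single key can cover a full alphabet plus functions.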


In developing countries many water distribution systems are branched networks with little redundancy. If any component in the distribution system fails, many users are left relying on secondary water sources. These sources often do not provide potable water, and prolonged use leads to increased cases of water-borne illness. Increasing redundancy in branched networks increases their reliability but is often viewed as unaffordable. This paper presents a procedure water system managers can use to determine which loops, when added to a branched network, provide the most benefit to users. Two ranking methods are presented: one ranks the loops by the total number of users benefited, the other by the number of vulnerable users benefited. A case study is presented using the water distribution system of Medina Bank Village, Belize. It was found that forming loops in upstream pipes connected to the main line had the potential to benefit the most users.
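The first ranking method, ordering candidate loops by the total number of users benefited, can be sketched on a tree model of the branched network: a loop between nodes v and w gives a second supply path to every user whose route to the source crosses the new cycle. All names below are illustrative, and this toy model ignores the hydraulics of the real procedure:

```python
def rank_loops(parent, users, candidate_loops):
    """Rank candidate loop pipes for a branched (tree) network by the number
    of users who gain a second supply path. `parent` maps node -> upstream
    node (the source maps to None); `users` maps node -> users served there."""
    def path_to_root(n):
        path = []
        while n is not None:
            path.append(n)
            n = parent[n]
        return path

    def benefited(v, w):
        # nodes on the new cycle, excluding the common upstream junction
        cycle = set(path_to_root(v)) ^ set(path_to_root(w))
        return sum(n_users for node, n_users in users.items()
                   if any(a in cycle for a in path_to_root(node)))

    return sorted(candidate_loops, key=lambda vw: -benefited(*vw))
```

In the test below, the loop joining the two branches protects all 17 users, while a loop that only bypasses one upstream pipe protects just the 10 users below it, matching the paper's finding that loops near the main line help the most users.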


The feasibility of carbon sequestration in cement kiln dust (CKD) was investigated in a series of batch and column experiments conducted under ambient temperature and pressure conditions. The significance of this work is the demonstration that alkaline wastes, such as CKD, are highly reactive with carbon dioxide (CO2). In the presence of water, CKD can sequester greater than 80% of its theoretical capacity for carbon without any amendments or modifications to the waste. Other mineral carbonation technologies for carbon sequestration rely on the use of mined mineral feedstocks as the source of oxides. The mining, pre-processing and reaction conditions needed to create favorable carbonation kinetics all require significant additions of energy to the system. Therefore, their actual net reduction in CO2 is uncertain. Many suitable alkaline wastes are produced at sites that also generate significant quantities of CO2. While independently, the reduction in CO2 emissions from mineral carbonation in CKD is small (~13% of process related emissions), when this technology is applied to similar wastes of other industries, the collective net reduction in emissions may be significant. The technical investigations presented in this dissertation progress from proof of feasibility through examination of the extent of sequestration in core samples taken from an aged CKD waste pile, to more fundamental batch and microscopy studies which analyze the rates and mechanisms controlling mineral carbonation reactions in a variety of fresh CKD types. Finally, the scale of the system was increased to assess the sequestration efficiency under more pilot or field-scale conditions and to clarify the importance of particle-scale processes under more dynamic (flowing gas) conditions. 
A comprehensive set of material characterization methods, including thermal analysis, X-ray diffraction, and X-ray fluorescence, was used to confirm extents of carbonation and to better elucidate the compositional factors controlling the reactions. The results of these studies show that the rate of carbonation in CKD is controlled by the extent of carbonation. With increased degrees of conversion, particle-scale processes such as intraparticle diffusion and CaCO3 micropore precipitation patterns begin to limit the rate and possibly the extent of the reactions. Rates may also be influenced by the nature of the oxides participating in the reaction, slowing when the free or unbound oxides are consumed and reaction conditions shift towards the consumption of less reactive Ca species. While microscale processes and composition effects appear to be important at later times, the overall degrees of carbonation observed in the wastes were significant (>80%), a majority of which occurs within the first 2 days of reaction. Under the operational conditions applied in this study, the degree of carbonation in CKD achieved in column-scale systems was comparable to that observed under ideal batch conditions. In addition, the similarity in sequestration performance among several different CKD waste types indicates that, aside from available oxide content, no compositional factors significantly hinder the ability of the waste to sequester CO2.
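The theoretical capacity for carbon against which the >80% figure is measured can be estimated from the waste's oxide composition with a Steinour-type calculation. A simplified sketch: it ignores the sulphate and chloride corrections of the full Steinour formula, and the coefficients are simply CO2/oxide molar-mass ratios:

```python
def theoretical_co2_capacity(cao_wt, mgo_wt=0.0, k2o_wt=0.0, na2o_wt=0.0):
    """Upper-bound CO2 uptake (g CO2 per 100 g dry waste) from oxide
    contents given in wt%. Molar masses used: CO2 44.01, CaO 56.08,
    MgO 40.30, K2O 94.20, Na2O 61.98 g/mol."""
    return (cao_wt * 44.01 / 56.08 + mgo_wt * 44.01 / 40.30
            + k2o_wt * 44.01 / 94.20 + na2o_wt * 44.01 / 61.98)

def carbonation_extent(measured_co2_wt, capacity_wt):
    """Fraction of the theoretical capacity actually sequestered."""
    return measured_co2_wt / capacity_wt
```

A CKD with, say, 44 wt% CaO and 2 wt% MgO (assumed values) has a theoretical capacity near 37 g CO2 per 100 g of waste; dividing the CO2 content measured by thermal analysis by this capacity gives the extent of carbonation reported in the study.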