916 results for Providence (R.I.) Westminster Congregational Church (Unitarian)


Relevance:

100.00%

Publisher:

Abstract:

The continuous production of blood cells, a process termed hematopoiesis, is sustained throughout the lifetime of an individual by a relatively small population of cells known as hematopoietic stem cells (HSCs). HSCs are unique in their ability to self-renew and to give rise to all types of mature blood cells. Given their high proliferative potential, HSCs must be tightly regulated at the cellular and molecular levels; otherwise they can turn malignant. On the other hand, this tight regulatory control also makes HSCs difficult to culture and expand in vitro: it is currently not possible to maintain or expand HSCs ex vivo without rapid loss of self-renewal. Increased knowledge of the unique features of important HSC niches, and of the key transcriptional regulatory programs that govern HSC behavior, is therefore needed. Additional insight into the mechanisms of stem cell formation could enable us to recapitulate HSC formation and self-renewal/expansion ex vivo, with the ultimate goal of creating an unlimited supply of HSCs from, e.g., human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPSCs) for use in therapy. We thus asked: How are hematopoietic stem cells formed, and in what cellular niches does this happen (Papers I, II)? What are the molecular mechanisms that govern hematopoietic stem cell development and differentiation (Papers III, IV)? Importantly, we could show that the placenta is a major fetal hematopoietic niche that harbors a large number of HSCs during midgestation (Paper I) (Gekas et al., 2005). To address whether the HSCs found in the placenta were formed there, we utilized the Runx1-LacZ knock-in and Ncx1 knockout mouse models (Paper II). Importantly, we could show that HSCs emerge de novo in the placental vasculature in the absence of circulation (Rhodes et al., 2008).
Furthermore, we could identify defined microenvironmental niches within the placenta with distinct roles in hematopoiesis: the large vessels of the chorioallantoic mesenchyme serve as sites of HSC generation, whereas the placental labyrinth is a niche supporting HSC expansion (Rhodes et al., 2008). Overall, these studies illustrate the importance of distinct milieus in the emergence and subsequent maturation of HSCs. To ensure proper function of HSCs, several regulatory mechanisms are in place. The microenvironment in which HSCs reside provides soluble factors and cell-cell interactions. In the nucleus, these cell-extrinsic cues are interpreted in the context of cell-intrinsic developmental programs governed by transcription factors. An essential transcription factor for the initiation of hematopoiesis is Scl/Tal1 (stem cell leukemia gene/T-cell acute leukemia gene 1). Loss of Scl results in early embryonic death and a total lack of blood cells, yet deactivation of Scl in the adult does not affect HSC function (Mikkola et al., 2003b). To define the temporal window of Scl requirement during fetal hematopoietic development, we deactivated Scl in all hematopoietic lineages shortly after hematopoietic specification in the embryo. Interestingly, maturation, expansion and function of fetal HSCs were unaffected and, as in the adult, red blood cell and platelet differentiation was impaired (Paper III) (Schlaeger et al., 2005). These findings highlight that, once specified, the hematopoietic fate is stable even in the absence of Scl and is maintained through mechanisms distinct from those required for the initial fate choice. As the critical downstream targets of Scl remain unknown, we sought to identify and characterize target genes of Scl (Paper IV).
We identified the transcription factor Mef2C (myocyte enhancer factor 2C) as a novel direct target gene of Scl specifically in the megakaryocyte lineage, which largely explains the megakaryocyte defect observed in Scl-deficient mice. In addition, we observed an Scl-independent requirement for Mef2C in the B-cell compartment, as loss of Mef2C leads to accelerated B-cell aging (Gekas et al., submitted). Taken together, these studies identify key extracellular microenvironments and intracellular transcriptional regulators that dictate different stages of HSC development, from emergence to lineage choice to aging.

Abstract:

A unit cube in k dimensions (k-cube) is defined as the Cartesian product R_1 x R_2 x ... x R_k where R_i (for 1 <= i <= k) is a closed interval of the form [a_i, a_i + 1] on the real line. A graph G on n nodes is said to be representable as the intersection of k-cubes (cube representation in k dimensions) if each vertex of G can be mapped to a k-cube such that two vertices are adjacent in G if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G, denoted cub(G), is the minimum k for which G can be represented as the intersection of k-cubes. An interesting aspect of cubicity is that many problems known to be NP-complete for general graphs have polynomial time deterministic algorithms or good approximation ratios in graphs of low cubicity. In most of these algorithms, computing a low dimensional cube representation of the given graph is usually the first step. We give an O(bw . n) algorithm to compute the cube representation of a general graph G in bw + 1 dimensions given a bandwidth ordering of the vertices of G, where bw is the bandwidth of G. As a consequence, we get O(Delta) upper bounds on the cubicity of many well-known graph classes such as AT-free graphs, circular-arc graphs and cocomparability graphs, which have O(Delta) bandwidth. Thus we have: 1. cub(G) <= 3 Delta - 1, if G is an AT-free graph. 2. cub(G) <= 2 Delta + 1, if G is a circular-arc graph. 3. cub(G) <= 2 Delta, if G is a cocomparability graph. Also, for these graph classes there are constant factor approximation algorithms for bandwidth computation that generate orderings of vertices with O(Delta) width. We can thus generate the cube representation of such graphs in O(Delta) dimensions in polynomial time.
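As a concrete illustration of the definition above (our own sketch, not code from the paper), a claimed k-cube representation can be verified directly: map each vertex to the lower corner (a_1, ..., a_k) of its unit cube and check that two cubes meet exactly when the vertices are adjacent.

```python
from itertools import combinations

def realizes(corners, edges):
    """True iff the map vertex -> lower corner of a unit k-cube realizes the
    graph: u and v are adjacent exactly when every pair of coordinate
    intervals [a_i, a_i + 1] overlaps, i.e. |a_u[i] - a_v[i]| <= 1 for all i."""
    for u, v in combinations(corners, 2):
        cubes_meet = all(abs(x - y) <= 1
                         for x, y in zip(corners[u], corners[v]))
        if cubes_meet != (frozenset((u, v)) in edges):
            return False
    return True

# Hypothetical example: the path a-b-c has cubicity 1 (unit intervals suffice).
path = {frozenset(e) for e in [("a", "b"), ("b", "c")]}
print(realizes({"a": (0.0,), "b": (1.0,), "c": (2.0,)}, path))  # True
print(realizes({"a": (0.0,), "b": (0.5,), "c": (1.0,)}, path))  # False
```

Note that closed intervals sharing only an endpoint still count as intersecting, which is why the second mapping fails: [0, 1] and [1, 2] meet at the point 1, but a and c are not adjacent.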

Abstract:

Controlled nuclear fusion is one of the most promising sources of energy for the future. Before this goal can be achieved, one must be able to control the enormous energy densities which are present in the core plasma in a fusion reactor. In order to be able to predict the evolution, and thereby the lifetime, of different plasma-facing materials under reactor-relevant conditions, the interaction of atoms and molecules with plasma-facing first-wall surfaces has to be studied in detail. In this thesis, the fundamental sticking and erosion processes of carbon-based materials, the nature of hydrocarbon species released from plasma-facing surfaces, and the evolution of the components under cumulative bombardment by atoms and molecules have been investigated by means of molecular dynamics simulations using both analytic potentials and a semi-empirical tight-binding method. The sticking cross-section of CH3 radicals at unsaturated carbon sites on diamond (111) surfaces is observed to decrease with increasing angle of incidence, a dependence which can be described by a simple geometrical model. The simulations furthermore show the sticking cross-section of CH3 radicals to be strongly dependent on the local neighborhood of the unsaturated carbon site. The erosion of amorphous hydrogenated carbon surfaces by helium, neon, and argon ions in combination with hydrogen at energies ranging from 2 to 10 eV is studied using both non-cumulative and cumulative bombardment simulations. The results show no significant differences between sputtering yields obtained from bombardment simulations with different noble gas ions. The final simulation cells from the 5 and 10 eV ion bombardment simulations, however, show marked differences in surface morphology. In further simulations, the behavior of amorphous hydrogenated carbon surfaces under bombardment with D^+, D_2^+, and D_3^+ ions in the energy range from 2 to 30 eV has been investigated.
The total chemical sputtering yields indicate that molecular projectiles lead to larger sputtering yields than atomic projectiles. Finally, the effect of hydrogen ion bombardment on both crystalline and amorphous tungsten carbide surfaces is studied. Prolonged bombardment is found to lead to the formation of an amorphous tungsten carbide layer, regardless of the initial structure of the sample. In agreement with experiment, preferential sputtering of carbon is observed in both the cumulative and non-cumulative simulations.
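As a generic sketch of what one molecular dynamics step looks like (a minimal illustration with a toy harmonic force; the thesis itself used analytic many-body potentials and a tight-binding method), the velocity Verlet scheme advances positions and velocities while conserving energy well:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Advance one particle with the velocity Verlet integrator,
    the standard time-stepping scheme in molecular dynamics codes."""
    f = force(x)
    for _ in range(steps):
        v += 0.5 * dt * f / mass   # first half kick
        x += dt * v                # drift
        f = force(x)               # force at the new position
        v += 0.5 * dt * f / mass   # second half kick
    return x, v

# Toy harmonic "bond": F = -k x, started at x = 1 with zero velocity.
k, m = 1.0, 1.0
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt=0.01, steps=1000)
energy = 0.5 * m * v * v + 0.5 * k * x * x
print(abs(energy - 0.5) < 1e-3)  # True: total energy stays near its initial 0.5
```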

Abstract:

Fusion power is an appealing source of clean and abundant energy. The radiation resistance of reactor materials is one of the greatest obstacles on the path towards commercial fusion power. These materials are subject to a harsh radiation environment, and must neither fail mechanically nor contaminate the fusion plasma. Moreover, for a power plant to be economically viable, the reactor materials must withstand long operation times with little maintenance. The fusion reactor materials will contain hydrogen and helium, due to deposition from the plasma and to nuclear reactions induced by energetic neutron irradiation. The first-wall and divertor materials, carbon and tungsten in existing and planned test reactors, will be subject to intense bombardment by low-energy deuterium and helium, which erodes and modifies the surface. All reactor materials, including the structural steel, will suffer irradiation by high-energy neutrons, causing displacement cascade damage. Molecular dynamics simulation is a valuable tool for studying irradiation phenomena, such as surface bombardment and the onset of primary damage due to displacement cascades. The governing mechanisms operate on the atomic level, and hence are not easily studied experimentally. In order to model materials, interatomic potentials are needed to describe the interaction between the atoms. In this thesis, new interatomic potentials were developed for the tungsten-carbon-hydrogen system and for iron-helium and chromium-helium. Thus, the study of previously inaccessible systems was made possible, in particular the effect of H and He on radiation damage. The potentials were based on experimental and ab initio data from the literature, as well as density-functional theory calculations performed in this work. As a model for ferritic steel, iron-chromium with 10% Cr was studied. The difference between Fe and FeCr was shown to be negligible for threshold displacement energies.
The properties of small He and He-vacancy clusters in Fe and FeCr were also investigated. The clusters were found to be more mobile and dissociate more rapidly than previously assumed, and the effect of Cr was small. The primary damage formed by displacement cascades was found to be heavily influenced by the presence of He, both in FeCr and W. Many important issues with fusion reactor materials remain poorly understood, and will require a huge effort by the international community. The development of potential models for new materials and the simulations performed in this thesis reveal many interesting features, but also serve as a platform for further studies.

Abstract:

The aim of this study is to use data-analysis triangulation to examine social work students' responses to a professional-ethical dilemma of a care-ethical nature. The sample consists of 32 social work students at the beginning of their studies who answered a hypothetical dilemma about how they would respond to a young woman asking for advice in a very difficult situation. The main theoretical starting points of this work are the ECI (Ethic of Care Interview), developed by Eva Skoe as a method for studying the ethics of care, and Oser and Althof's theory of discursive problem-solving strategies among professionals. As foundational theories for all modern research on human moral development, the theories of Carol Gilligan and Lawrence Kohlberg are also presented, together with the main criticism they have met. Carol Gilligan originally introduced the idea that there are two different types of moral reasoning, with the ethics of care more typical of women and the ethics of justice more typical of men. The first part of the analysis is a content analysis in which responses to the professional dilemma at different ECI stages are compared with one another; the ECI scoring system forms the basis of this analysis. The second part is a deductive, theory-driven analysis in which I examine whether Oser and Althof's model of problem-solving strategies can also be applied to a professional-ethical dilemma. Finally, I also assess the compatibility of the two theories. The results show that the students answered the professional-ethical dilemma somewhat below their general ECI stage. This may be because they are at the beginning of their studies, but also because of the general climate prevailing in the social work field. Nor can the theory of discursive problem-solving strategies be applied to this professional-ethical dilemma, since the hypothetical client's right to self-determination makes a discursive solution impossible. 
As a consequence, I have created a new model based on six categories derived from the factors the interviewees highlight as the most important in the professional's encounter with the client. Since the new model is not hierarchical, the two theories cannot be compared in the sense that a higher ECI level would imply a particular type of problem-solving strategy.

Abstract:

After Gödel's incompleteness theorems and the collapse of Hilbert's programme Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines the consistency proofs for arithmetic by Gentzen from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. The latter consistency proof in standard natural deduction has been an open problem since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction reduces the first component of the vector and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.

Abstract:

A k-dimensional box is the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-cubes. The threshold dimension of a graph G(V, E) is the smallest integer k such that E can be covered by k threshold spanning subgraphs of G. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0, unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity or the cubicity of a graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0, unless NP = ZPP. In fact, all these hardness results hold even for a highly structured class of graphs, namely the split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3.
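To make the boxicity definition concrete, here is a small self-contained sketch (our own example, not from the paper): boxes are tuples of closed intervals, and the intersection graph follows directly from pairwise overlap tests. The 4-cycle below needs two dimensions, since C4 is not an interval graph, so box(C4) = 2.

```python
def boxes_intersect(b1, b2):
    # A box is a tuple of closed intervals (lo, hi), one per dimension;
    # two boxes meet iff their intervals overlap in every dimension.
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def intersection_graph(boxes):
    names = list(boxes)
    return {frozenset((u, v)) for i, u in enumerate(names)
            for v in names[i + 1:] if boxes_intersect(boxes[u], boxes[v])}

# A 2-dimensional box representation of the 4-cycle 1-2-3-4-1.
rep = {1: ((0, 1), (0, 3)), 3: ((2, 3), (0, 3)),
       2: ((0, 3), (0, 1)), 4: ((0, 3), (2, 3))}
print(sorted(sorted(e) for e in intersection_graph(rep)))
# [[1, 2], [1, 4], [2, 3], [3, 4]]
```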

Abstract:

α-Aminoisobutyric acid (Aib), a nonprotein amino acid first described synthetically, has been found in diverse sources, ranging from peptides of microbial origin to the Murchison meteorite. Early studies of the chemistry of Aib were directed towards the synthesis of model peptides containing this "sterically hindered" amino acid. There have been several reports on the synthesis of Aib-containing analogs of biologically active peptides.


Abstract:

This study is an attempt to describe and analyze the use of the preposition "de" on the basis of a diachronic corpus, with emphasis on the different semantic relations it establishes. Starting from a total of more than 16,000 instances of "de", we established 48 categories of use, corresponding to four types of syntactic construction, namely the use of "de" as a complement of nouns (CN), of verbs (CV), of adjectives (CA) and, finally, as the head of independent adverbial expressions (CI). The study consists of three main parts. Part I introduces Cognitive Linguistics, the essential theoretical basis of the work. More precisely, it introduces concepts such as prototype theory, the theory of conceptual metaphors and cognitive grammar, especially the notions of "reference point" and "intrinsic relationship" (Langacker 1995, 1999). Part II comprises the analysis of the 48 categories. This part presents and discusses almost 2,000 examples of the contextual use of "de" drawn from the diachronic corpus. The most important results of the analysis can be summarized in the following points: the use of "de" remains essentially the same today as 800 years ago, in the sense that all 48 categories are attested in every period of the corpus; the use of "de" as a nominal complement increases over time, while the opposite holds for its use as a verbal complement; in the nominal context it is above all the more abstract possessive relations that become more frequent, whereas in the verbal context the relations that become less frequent are those of separation/distancing, cause, agent and indefinite partitive. 
The eighteenth century stands out as a period of transition between an earlier state of affairs and a later one, especially as regards the increasingly abstract character of the possessive relations and the decline of the adverbal categories of cause, agent and partitive. Despite variation in the immediate context of use, the semantic core of "de" remains unchanged. Part III takes the results of the analysis in Part II as its starting point and attempts to separate the semantic contribution of the preposition "de" to its context of use from the value of the relation as a whole. Thus, applying the methodology for determining the basic meaning of a preposition and the methodology for determining what count as distinct senses (Tyler & Evans 2003a, 2003b), we arrive at the hypothesis that "de" has four basic meanings, namely 'point of departure', 'topic/matter', 'part/whole' and 'possession'. This hypothesis, based on Tyler and Evans's methodologies and on the results of the corpus analysis, is then tested empirically by means of two questionnaires designed to establish to what extent the semantic distinctions arrived at theoretically are recognized by native speakers of the language (cf. Raukko 2003). The combined outcome of the two approaches both reinforces and refines the hypothesis. The questionnaire data seem to support the idea that the semantic core of "de" is complex, consisting of the four values mentioned. However, each of these basic values constitutes a local prototype, around which a cluster of semantic nuances derived from the prototype is built. The final conclusion is that speakers are aware of the four postulated basic values, but that they also distinguish more fine-grained nuances, such as 'cause', 'agent', 'instrument', 'purpose', 'quality', etc. 
In other words, "de" is a complex polysemous element whose semantic structure can be described as a family resemblance centered on four basic values, around which a series of more specific nuances is found, these too constituting values of the preposition in their own right. We also believe that this semantic characterization is valid for all periods in the history of Spanish, with some small shifts in the relative weight of the various nuances, which is related to the diachronic variation observed in the use of "de".

Abstract:

We describe the use of poly(α-methylstyrene peroxide) (PαMSP), an alternating copolymer of α-methylstyrene and oxygen, as an initiator for the radical polymerization of vinyl monomers. Thermal decomposition of PαMSP in 1,4-dioxane follows first-order kinetics with an activation energy (E_a) of 34.6 kcal/mol. Polymerization of methyl methacrylate (MMA) and styrene using PαMSP as initiator was carried out in the temperature range 60-90 °C. The kinetic order with respect to the initiator and the monomer was close to 0.5 and 1.0, respectively, for both monomers. The E_a for the polymerization was 20.6 and 22.9 kcal/mol for MMA and styrene, respectively. The efficiency of PαMSP was found to be in the range 0.02-0.04. The low efficiency of PαMSP was explained in terms of the unimolecular decomposition of the alkoxy radicals, which competes with primary radical initiation. The presence of peroxy segments in the main chain of PMMA and polystyrene was confirmed by spectroscopic and DSC studies. R_i/2I values for PαMSP compared to that of BPO at 80 °C indicate that PαMSP can be used as an effective high-temperature initiator.
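The reported kinetics allow a quick back-of-the-envelope estimate (our own arithmetic, using only the activation energy quoted above): for a first-order decomposition obeying the Arrhenius law k(T) = A exp(-E_a/RT), the unknown prefactor A cancels in a ratio of rate constants between two temperatures.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def rate_ratio(ea_kcal_per_mol, t1_celsius, t2_celsius):
    """k(T2)/k(T1) from the Arrhenius law; the prefactor A cancels."""
    t1 = t1_celsius + 273.15
    t2 = t2_celsius + 273.15
    return math.exp(ea_kcal_per_mol / R * (1.0 / t1 - 1.0 / t2))

# With E_a = 34.6 kcal/mol, raising the bath from 60 to 90 degrees C
# accelerates the peroxide decomposition roughly 75-fold.
print(round(rate_ratio(34.6, 60.0, 90.0)))  # 75
```

This steep temperature dependence is consistent with the abstract's conclusion that PαMSP is attractive specifically as a high-temperature initiator.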

Abstract:

A k-dimensional box is a Cartesian product R_1 x ... x R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. That is, two vertices are adjacent if and only if their corresponding boxes intersect. A circular arc graph is a graph that can be represented as the intersection graph of arcs on a circle. We show that if G is a circular arc graph which admits a circular arc representation in which no arc has length at least pi(alpha - 1)/alpha for some alpha in N(>= 2), then box(G) <= alpha (here the arcs are considered with respect to a unit circle). From this result we show that if G has maximum degree Delta < [n(alpha - 1)/(2 alpha)] for some alpha in N(>= 2), then box(G) <= alpha. We also demonstrate a graph having box(G) > alpha but with Delta = n(alpha - 1)/(2 alpha) + n/(2 alpha(alpha + 1)) + (alpha + 2). For a proper circular arc graph G, we show that if Delta < [n(alpha - 1)/alpha] for some alpha in N(>= 2), then box(G) <= alpha. Let r be the cardinality of the minimum overlap set, i.e. the minimum number of arcs passing through any point on the circle, with respect to some circular arc representation of G. We show that for any circular arc graph G, box(G) <= r + 1 and this bound is tight. We show that if G admits a circular arc representation in which no family of k <= 3 arcs covers the circle, then box(G) <= 3, and if G admits a circular arc representation in which no family of k <= 4 arcs covers the circle, then box(G) <= 2. We also show that both these bounds are tight.
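A minimal sketch of the circular arc model (our own construction, not code from the paper): represent each arc by its start and end angles in degrees, traversed counterclockwise with wraparound, and test pairwise overlap. This overlap test is exactly the adjacency rule of a circular arc graph.

```python
def on_arc(arc, point):
    s, e = arc
    # The arc runs counterclockwise from s to e; it may wrap past 0/360.
    return s <= point <= e if s <= e else point >= s or point <= e

def arcs_intersect(a1, a2):
    # Two arcs of a circle overlap iff one contains the other's start point.
    return on_arc(a1, a2[0]) or on_arc(a2, a1[0])

# a meets b; b wraps past 0 degrees and meets c; a and c are disjoint.
a, b, c = (30, 120), (100, 350), (340, 20)
print(arcs_intersect(a, b), arcs_intersect(b, c), arcs_intersect(a, c))
# True True False
```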

Abstract:

The reaction of the low-valent metallocene(II) sources Cp'_2Ti(eta(2)-Me3SiC2SiMe3) (Cp' = eta(5)-cyclopentadienyl, 1a, or eta(5)-pentamethylcyclopentadienyl, 1b) with different carbodiimide substrates RN=C=NR' 2-R-R' (R = t-Bu, R' = Et; R = R' = i-Pr; t-Bu; SiMe3; 2,4,6-Me3-C6H2; and 2,6-i-Pr2-C6H3) was investigated to explore the frontiers of ring-strained, unusual four-membered heterometallacycles 5-R. The product complexes show dismantlement, isomerization, or C-C coupling of the applied carbodiimide substrates, respectively, forming unusual mono-, di-, and tetranuclear titanium(III) complexes. A detailed theoretical study revealed that the formation of the unusual complexes can be attributed to the biradicaloid nature of the four-membered heterometallacycles 5-R, which presents an intriguing situation of M-C bonding. The combined experimental and theoretical study highlights the delicate interplay of electronic and steric effects in the stabilization of strained four-membered heterometallacycles, accounting for the isolation of the obtained complexes.

Abstract:

A unit cube in k dimensions (or a k-cube in short) is defined as the Cartesian product R_1 x R_2 x ... x R_k where R_i (for 1 <= i <= k) is a closed interval of the form [a_i, a_i + 1] on the real line. A k-cube representation of a graph G is a mapping of the vertices of G to k-cubes such that two vertices in G are adjacent if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G is the minimum k such that G has a k-cube representation. From a geometric embedding point of view, a k-cube representation of G = (V, E) yields an embedding f: V -> R^k such that for any two vertices u and v, ||f(u) - f(v)||_infinity <= 1 if and only if (u, v) is in E. We first present a randomized algorithm that constructs the cube representation of any graph on n vertices with maximum degree Delta in O(Delta ln n) dimensions. This algorithm is then derandomized to obtain a polynomial time deterministic algorithm that also produces the cube representation of the input graph in the same number of dimensions. The bandwidth ordering of the graph is studied next, and it is shown that our algorithm can be improved to produce a cube representation of the input graph G in O(Delta ln b) dimensions, where b is the bandwidth of G, given a bandwidth ordering of G. Note that b <= n and b is much smaller than n for many well-known graph classes. Another upper bound of b + 1 on the cubicity of any graph with bandwidth b is also shown. Together, these results imply that for any graph G with maximum degree Delta and bandwidth b, the cubicity is O(min{b, Delta ln b}). The upper bound of b + 1 is used to derive upper bounds for the cubicity of circular-arc graphs, cocomparability graphs and AT-free graphs in terms of the maximum degree Delta.
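Since several of the bounds above are stated in terms of the bandwidth b, a tiny exhaustive reference implementation (our own, practical only for toy graphs) pins the definition down: b is the minimum over vertex orderings of the maximum stretch |pos(u) - pos(v)| over the edges.

```python
from itertools import permutations

def bandwidth(vertices, edges):
    """Exact bandwidth by brute force over all n! vertex orderings."""
    best = len(vertices)
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        best = min(best, max(abs(pos[u] - pos[v]) for u, v in edges))
    return best

# The path on 4 vertices has bandwidth 1; the 4-cycle has bandwidth 2.
print(bandwidth("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))              # 1
print(bandwidth("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))  # 2
```

A bandwidth-1 ordering exists only when the graph is a subgraph of a path, which is why the cycle requires width 2.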