Abstract:
Symptomatic hypertrophic breasts cause a health burden with physical and psychosocial morbidity. The value of reduction mammaplasty in the treatment of symptomatic breast hypertrophy has been consistently reported by patients and has long been well recognised by plastic surgeons. However, the scientific evidence of the effects of reduction mammaplasty has been weak or lacking. When this study was designed, most of the previous studies were retrospective and the few prospective studies had methodological limitations. There was therefore an obvious need for prospective randomised studies. Nevertheless, practical and ethical considerations seemed to make this study design impossible, because the waiting time for the operation was several years. The legislation and subsequent introduction of uniform criteria for access to non-emergency treatment in Finland removed these obstacles, as all patients received their treatment within a reasonable time. As a result, a randomised controlled trial with a six-month follow-up was designed and conducted. In addition, a follow-up study covering two to five years was carried out later. The effects of reduction mammaplasty on the patients' breast-related symptoms, psychological symptoms, pain and quality of life were assessed. In addition, factors affecting the outcome were investigated. This study was carried out in the Hospital District of Helsinki and Uusimaa, Finland. Eighty-two of the approximately 300 patients on the waiting list in 2004 agreed to participate in the study. Patients were randomised either to be operated on (40 patients) or to be followed up (42 patients). The follow-up time for both groups was six months. The patients were operated on by plastic surgeons or trainees at the Department of Plastic Surgery at Helsinki University Central Hospital or at the Department of Surgery at Hyvinkää Hospital. The patients completed five questionnaires: the SF-36 and 15D quality of life questionnaires, the Finnish Breast-Associated Symptoms questionnaire (FBAS), a mood questionnaire (Raitasalo's modification of the short form of the Beck Depression Inventory, RBDI), and a pain questionnaire (the Finnish Pain Questionnaire, FPQ). Sixty-two of the original 82 patients agreed to participate in the prospective follow-up study, in which they completed the 15D quality of life questionnaire, the FBAS questionnaire and the RBDI mood questionnaire. After six months of follow-up, patients who had undergone reduction mammaplasty had a significantly better quality of life, fewer breast-associated symptoms and less pain, and they were less depressed or anxious than patients who had not undergone surgery. The change in quality of life was more than twice the minimal clinically important difference. The patients' preoperative quality of life was significantly inferior to that of the age-standardised general population. This health burden was removed by reduction mammaplasty. The health loss related to symptomatic breast hypertrophy was comparable to that of patients with major joint arthrosis. In terms of change in quality of life, the intervention effect of reduction mammaplasty was comparable to that of hip joint replacement and more pronounced than that of knee joint replacement surgery. The outcome of reduction mammaplasty was affected more by preoperative psychosocial factors than by changes in breast dimensions.
The effects of reduction mammaplasty remained stable at two to five years of follow-up. In terms of quality of life, symptomatic breast hypertrophy causes a considerable health loss comparable to that of major joint arthrosis. Patients who undergo surgery have fewer breast-associated symptoms and less pain, are less depressed or anxious, and have an improved quality of life. The intervention effect is comparable to that of major joint replacement surgery and remains stable at two to five years of follow-up. The outcome of reduction mammaplasty is affected by preoperative psychosocial factors.
Abstract:
This dissertation develops a strategic management accounting perspective on inventory routing. The thesis studies the drivers of cost efficiency gains by identifying the roles of the underlying cost structure, demand, information sharing, forecasting accuracy, service levels, vehicle fleet, planning horizon and other strategic factors, as well as the interaction effects among these factors with respect to performance outcomes. The task is to enhance knowledge of the strategic situations that favor the implementation of inventory routing systems, to understand cause-and-effect relationships and linkages, and to gain a holistic view of the value proposition of inventory routing. The thesis applies an exploratory case study design based on normative quantitative empirical research using optimization, simulation and factor analysis. Data and results are drawn from a real-world application to cash supply chains. The first research paper shows that performance gains require a common cost component and cannot be explained by simple linear or affine cost structures. In the absence of a set-dependent cost structure, inventory management and distribution decisions become separable, and neither economies of scope nor coordination problems are present. The second research paper analyzes whether information sharing improves overall forecasting accuracy. The analysis suggests that the potential for information sharing is limited to the coordination of replenishments and that central information does not yield more accurate forecasts based on joint forecasting. The third research paper develops a novel formulation of the stochastic inventory routing model that accounts for minimal service levels and forecasting accuracy. The developed model allows studying the interaction of minimal service levels and forecasting accuracy with the underlying cost structure in inventory routing. Interestingly, the results show that the factors minimal service level and forecasting accuracy are not statistically significant, and hence not relevant for the strategic decision problem of introducing inventory routing, or in other words, of effectively internalizing inventory management and distribution decisions at the supplier. Consequently, the main contribution of this thesis is the result that the cost benefits of inventory routing derive from the joint decision model that accounts for the underlying set-dependent cost structure rather than from the level of information sharing. This result suggests that the value of sharing demand and inventory data is likely to be overstated in the prior literature. In other words, the cost benefits of inventory routing are primarily determined by the cost structure (i.e. the level of fixed costs and transportation costs) rather than by the level of information sharing, joint forecasting, forecasting accuracy or service levels.
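To make the separability point concrete, here is a toy sketch (all numbers and names hypothetical, not from the thesis): with an affine transport cost, each delivery is charged individually and the joint schedule decomposes into per-customer optima, whereas a set-dependent dispatch cost, shared by all customers served on the same day, creates economies of scope and rewards consolidation.

```python
from itertools import product

days = [0, 1, 2]
preferred = {"A": 0, "B": 2}                 # each customer's ideal delivery day

def holding(schedule):
    # Inconvenience of deviating from the preferred day (a holding-cost proxy).
    return sum(abs(d - preferred[c]) for c, d in schedule.items())

def affine_transport(schedule):
    return 3 * len(schedule)                 # charge per delivery, no sharing

def set_dependent_transport(schedule):
    return 3 * len(set(schedule.values()))   # charge per dispatched day, shared

for name, transport in [("affine", affine_transport),
                        ("set-dependent", set_dependent_transport)]:
    best = min((dict(zip("AB", ds)) for ds in product(days, repeat=2)),
               key=lambda s: holding(s) + transport(s))
    print(name, best, holding(best) + transport(best))
# affine: each customer is served on its preferred day, so the joint optimum
# coincides with the per-customer optima (decisions separable);
# set-dependent: both deliveries are consolidated into a single dispatch.
```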
Abstract:
Human platelet-derived growth factor (PDGF) is composed of two polypeptide chains, PDGF-1 and PDGF-2, the latter being the human homolog of the v-sis oncogene. Deregulation of PDGF-2 expression can confer a growth advantage on cells possessing the cognate receptor and thus may contribute to the malignant phenotype. We investigated the regulation of PDGF-2 mRNA expression during megakaryocytic differentiation of K562 cells. Induction by 12-O-tetradecanoylphorbol-13-acetate (TPA) led to a greater than 200-fold increase in PDGF-2 transcript levels in these cells. Induction was dependent on protein synthesis and was not enhanced by cycloheximide exposure. In our initial investigation of the PDGF-2 promoter, a minimal promoter region, which included sequences extending only 42 base pairs upstream of the TATA signal, was found to be as efficient as a region extending 4 kilobase pairs upstream of the TATA signal in driving expression of a reporter gene in uninduced K562 cells. We also functionally identified different regulatory sequence elements of the PDGF-2 promoter in TPA-induced K562 cells. One region acted as a transcriptional silencer, while another region was necessary for maximal activity of the promoter in megakaryoblasts. This region was shown to bind nuclear factors and was the target for trans-activation in normal and tumor cells. In one tumor cell line, which expressed high PDGF-2 mRNA levels, the presence of the positive regulatory region resulted in a 30-fold increase in promoter activity. However, the ability of the minimal PDGF-2 promoter to drive reporter gene expression in uninduced K562 cells and normal fibroblasts, which contained no detectable PDGF-2 transcripts, implies the existence of other negative control mechanisms beyond the regulation of promoter activity.
Abstract:
This research focused on indicators, with the aim of recognizing the main characteristics of this particular tool. The planning and use of Finnish sustainability indicators for natural resource management was examined, and experiences with the international sets of agri-environmental indicators were described. In both cases, the actual utilization of the information was found to be quite minimal. Indicators have succeeded in bringing more environmental information into the processes of decision making, but the information has not been directly translated into the actions of natural resource management. The concept of technical use of indicators was presented and considered as a possible explanation for the failures of information transfer and communication. Traditionally, indicators have been used to recognize and describe the performance of a certain system and to provide a clear operative message for the actors. In policy planning, the situation is essentially different. We may lack both jointly shared and accepted objectives of development and reliable, representative methods for measuring the issue under attention. Therefore, a technical orientation in the use of indicators may cause several problems at the policy forum. The study identified the risks of 1) a reduced informative basis for decision-making, 2) a narrowed approach to interpreting the data, 3) a focus on the issues that are already best documented and provide the most representative data series, and 4) losing the systemic viewpoint while focusing on measurable details of the system. Technical use of indicators may also result in an excessive focus on information that is detached from action. With the sustainability indicators, the major emphasis was indeed placed on producing information, while the reality of agricultural practices was left mostly unaffected. Therefore, the essential process of social learning, in which action and the production of relevant information alternate, was not realized either. This study underlines the complexity of information transfer, mutual communication and the learning of new practices. Besides information and measurable numbers, people also need personal experiences and interesting stories, which make them understand the meaning of the information in their own lives. This is particularly important for children, who are studying to become the future decision-makers of the food system, in the production as well as the consumption of food. Numbers will be useful tools of management as soon as there is awareness of the direction to strive for.
Abstract:
Active particles contain internal degrees of freedom with the ability to take in and dissipate energy and, in the process, execute systematic movement. Examples include all living organisms and their motile constituents, such as molecular motors. This article reviews recent progress in applying the principles of nonequilibrium statistical mechanics and hydrodynamics to form a systematic theory of the behavior of collections of active particles (active matter) with only minimal regard to microscopic details. A unified view of the many kinds of active matter is presented, encompassing not only living systems but also inanimate analogs. Theory and experiment are discussed side by side.
Abstract:
We propose a physical mechanism to explain the origin of the intense burst of massive-star formation seen in colliding/merging, gas-rich, field spiral galaxies. We explicitly take account of the different parameters for the two main mass components, H2 and H I, of the interstellar medium within a galaxy and follow their consequent different evolution during a collision between two galaxies. We also note that, in a typical spiral galaxy like our own, the Giant Molecular Clouds (GMCs) are in near-virial equilibrium and form the current sites of massive-star formation, but have a low star formation rate. We show that this star formation rate is increased following a collision between galaxies. During a typical collision between two field spiral galaxies, the H I clouds from the two galaxies undergo collisions at a relative velocity of approximately 300 km s^-1. However, the GMCs, with their smaller volume filling factor, do not collide. The collisions among the H I clouds from the two galaxies lead to the formation of a hot, ionized, high-pressure remnant gas. The over-pressure due to this hot gas causes a radiative shock compression of the outer layers of a preexisting GMC in the overlapping wedge region. This makes these layers gravitationally unstable, thus triggering a burst of massive-star formation in the initially barely stable GMCs. The resulting value of the typical IR luminosity from the young, massive stars in a pair of colliding galaxies is estimated to be approximately 2 x 10^11 L_sun, in agreement with the observed values. In our model, the massive-star formation occurs in situ in the overlapping regions of a pair of colliding galaxies. We can thus explain the origin of enhanced star formation over an extended, central area several kiloparsecs in size, as seen in typical colliding galaxies, and also the origin of starbursts in extranuclear regions of disk overlap as seen in Arp 299 (NGC 3690/IC 694) and in Arp 244 (NGC 4038/39). Whether the IR emission from the central region or that from the surrounding extranuclear galactic disk dominates depends on the geometry and the epoch of the collision and on the initial radial gas distribution in the two galaxies. In general, the central starburst would be stronger than that in the disks, due to the higher preexisting gas densities in the central region. The burst of star formation is expected to last over a galactic gas disk crossing time of approximately 4 x 10^7 yr. We can also explain the simultaneous existence of nearly normal CO galaxy luminosities and shocked H2 gas, as seen in colliding field galaxies. This is a minimal model, in that the only condition necessary for it to work is a sufficient overlap between the spatial gas distributions of the colliding galaxy pair.
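As a rough consistency check on the quoted burst duration (our estimate, not a derivation from the paper), assume a gas disk extent of order 10 kpc and the stated relative velocity of ~300 km s^-1; the disk crossing time is then

```latex
t_{\mathrm{cross}} \sim \frac{L_{\mathrm{disk}}}{v_{\mathrm{rel}}}
 \approx \frac{10\,\mathrm{kpc}}{300\,\mathrm{km\,s^{-1}}}
 = \frac{3.1 \times 10^{17}\,\mathrm{km}}{300\,\mathrm{km\,s^{-1}}}
 \approx 1.0 \times 10^{15}\,\mathrm{s}
 \approx 3 \times 10^{7}\,\mathrm{yr},
```

of the same order as the ~4 x 10^7 yr quoted above.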
Abstract:
A spanning tree T of a graph G is said to be a tree t-spanner if the distance between any two vertices in T is at most t times their distance in G. A graph that has a tree t-spanner is called a tree t-spanner admissible graph. The problem of deciding whether a graph is tree t-spanner admissible is NP-complete for any fixed t >= 4 and is linearly solvable for t <= 2. The case t = 3 remains open. A chordal graph is called a 2-sep chordal graph if all of its minimal a-b vertex separators, for every pair of non-adjacent vertices a and b, are of size two. It is known that not all 2-sep chordal graphs admit tree 3-spanners. This paper presents a structural characterization and a linear time recognition algorithm for tree 3-spanner admissible 2-sep chordal graphs. Finally, a linear time algorithm to construct a tree 3-spanner of a tree 3-spanner admissible 2-sep chordal graph is proposed. (C) 2010 Elsevier B.V. All rights reserved.
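A direct check of the defining condition is straightforward; the sketch below (a minimal illustration using networkx, not the paper's linear time algorithm) verifies whether a given spanning tree is a tree t-spanner. For unweighted graphs it suffices to check the edges of G, since a stretch bound on every edge propagates along shortest paths.

```python
import networkx as nx

def is_tree_t_spanner(G, T, t):
    """Return True if spanning tree T is a tree t-spanner of G,
    i.e. dist_T(u, v) <= t * dist_G(u, v) for all vertex pairs.
    For unweighted G it is enough to check each edge (u, v) of G,
    where dist_G(u, v) = 1."""
    dT = dict(nx.all_pairs_shortest_path_length(T))
    return all(dT[u][v] <= t for u, v in G.edges)

# Example: a spanning path of the 5-cycle stretches the chord (0, 4)
# to distance 4, so it is a tree 4-spanner but not a tree 3-spanner.
G = nx.cycle_graph(5)
T = nx.path_graph(5)
print(is_tree_t_spanner(G, T, 3))  # False
print(is_tree_t_spanner(G, T, 4))  # True
```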
Abstract:
Mycotoxins are secondary metabolites of filamentous fungi. They pose a health risk to humans and animals due to their harmful biological properties and common occurrence in food and feed. Liquid chromatography/mass spectrometry (LC/MS) has gained popularity in the trace analysis of food contaminants. In this study, the applicability of the technique was evaluated in multi-residue methods for mycotoxins, aiming at the simultaneous detection of chemically diverse compounds. Methods were developed for the rapid determination of toxins produced by the fungal genera Aspergillus, Fusarium, Penicillium and Claviceps in cheese, cereal-based agar matrices and grains. Analytes were extracted from these matrices with organic solvents. Minimal sample clean-up was carried out before analysis of the mycotoxins with reversed phase LC coupled to tandem MS (MS/MS). The methods were validated and applied to investigating mycotoxins in cheese and ergot alkaloid occurrence in Finnish grains. Additionally, the toxin production of two Fusarium species predominant in northern Europe was studied. Nine mycotoxins could be determined in cheese with the method developed. The limits of quantification (LOQ) allowed quantification at concentrations from 0.6 to 5.0 µg/kg. The recoveries ranged between 96 and 143%, and the within-day repeatability (as relative standard deviation, RSDr) between 2.3 and 12.1%. Roquefortine C and mycophenolic acid could be detected at levels of 300 up to 12000 µg/kg in the mould cheese samples analysed. A total of 29 or 31 toxins could be analysed with the method developed for agar matrices and grains, with LOQs ranging overall from 0.1 to 1250 µg/kg. The recoveries ranged generally between 44 and 139%, and the RSDr between 2.0 and 38%. Type-A trichothecenes and beauvericin were determined in the cereal-based agar and grain cultures of F. sporotrichioides and F. langsethiae. T-2 toxin was the main metabolite, with average levels reaching 22000 µg/kg in the grain cultures after 28 days of incubation. The method developed for ten ergot alkaloids in grains allowed their quantification at levels from 0.01 to 10 µg/kg. The recoveries ranged from 51 to 139%, and the RSDr from 0.6 to 13.9%. Ergot alkaloids were measured in barley and rye at average levels of 59 and 720 µg/kg, respectively. The two most prevalent alkaloids were ergocornine and ergocristine. The LC/MS methods developed enabled rapid detection of mycotoxins in applications where several toxins co-occurred. Generally, the performance of the methods was good, allowing reliable analysis of the mycotoxins of interest with sufficiently low quantification limits. However, the variation in the validation results highlighted the challenges in optimising this type of multi-residue method. New data were obtained on the occurrence of mycotoxins in mould cheeses and of ergot alkaloids in Finnish grains. In addition, the study revealed the high mycotoxin-producing potential of two fungi common in Finnish crops. This information can be useful when risks related to fungal and mycotoxin contamination are assessed.
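The validation quantities reported above follow standard definitions; as a small sketch (replicate values hypothetical), recovery is the measured concentration as a percentage of the spiked level, and RSDr is the within-day relative standard deviation of replicate measurements:

```python
import statistics

def recovery_percent(measured_mean, spiked_level):
    # Recovery (%): measured mean concentration relative to the known spike.
    return 100.0 * measured_mean / spiked_level

def rsdr_percent(replicates):
    # Within-day repeatability (RSDr, %): sample SD over mean of replicates.
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical same-day replicates (µg/kg) of a sample spiked at 5.0 µg/kg:
reps = [4.8, 5.2, 5.1, 4.9, 5.3]
print(f"recovery {recovery_percent(statistics.mean(reps), 5.0):.0f} %")
print(f"RSDr {rsdr_percent(reps):.1f} %")
```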
Abstract:
Hantaviruses form one of the five genera of the vector-borne virus family Bunyaviridae. While the other members of the family are transmitted via arthropods, hantaviruses are carried and transmitted by rodents and insectivores. Occasional transmission to humans occurs via inhalation of aerosolized rodent excreta. When transmitted to humans, hantaviruses cause hemorrhagic fever with renal syndrome (HFRS, in Eurasia, mortality ~10%) and hantavirus cardiopulmonary syndrome (HCPS, in the Americas, mortality ~40%). The single-stranded, negative-sense RNA genome of hantaviruses comprises the segments S, M and L, which respectively encode the nucleocapsid (N) protein, the glycoproteins Gn and Gc, and the RNA-dependent RNA polymerase (RdRp or L protein). The genome segments, encapsidated by N protein to form ribonucleoproteins (RNPs), are enclosed inside a lipid envelope decorated by spikes formed of Gn and Gc. The focus of this study was to understand the mechanisms and interactions through which the virion is formed and maintained. We observed that, when extracted from virions, both Gn and Gc favor homo- over hetero-oligomerization. Using ultracentrifugation and gel filtration, the minimal glycoprotein complexes extracted from the virion by detergent were observed to be tetrameric Gn and homodimeric Gc. These results led us to suggest a model in which tetrameric Gn complexes are interconnected through homodimeric Gc units to form the grid-like surface architecture described for hantaviruses. This model was found to correlate with the three-dimensional (3D) reconstruction of the virion surface created using cryo-electron tomography (cryo-ET). The 3D density map showed the spike complex formed of Gn and Gc to be 10 nm high and to display four-fold symmetry with dimensions of 15 nm by 15 nm. This unique square-shaped complex on a roughly round virion creates a hitch for assembly, since a sphere cannot be tiled with rectangles. Thus, additional interactions are likely required for virion assembly. In cryo-ET we observed that the RNP makes occasional contacts with the viral membrane, suggesting an interaction between the spike and the RNP. We demonstrated this interaction using various techniques and showed that both Gn and Gc contribute to it. This led us to suggest that, in addition to the interactions between Gn and Gc, the interaction between spike and RNP is required for assembly. We found galectin-3 binding protein (referred to as 90K) to co-purify with virions and showed an interaction between 90K and the virion. Analysis of plasma samples taken from patients hospitalized for Puumala virus infection showed increased concentrations of 90K in the acute phase, and the increased 90K level was found to correlate with several parameters that reflect the severity of acute HFRS. The results of these studies confirmed, but also challenged, some of the dogmas on the structure and assembly of hantaviruses. We confirmed that Gn and RNP do interact, as long assumed. On the other hand, we demonstrated that the glycoproteins Gn and Gc exist as homo-oligomers or appear in large hetero-oligomeric complexes, rather than forming primarily heterodimers as was previously assumed. This work provided new insight into the structure and assembly of hantaviruses.
Abstract:
Mining and blending operations in the high-grade iron ore deposit under study are performed to optimize recovery with minimal alumina content while maintaining the required levels of other chemical components and a proper mix of ore types. In the present work, the regionalisation of alumina in the ores has been studied independently, and its effects on global and local recoverable tonnage as well as on alternative mining operations have been evaluated. The global tonnage recovery curves for blocks (20m x 20m x 12m) obtained by simulation closely approximated the curves obtained theoretically using a change of support under the discretised Gaussian model. Variations in block size up to 80m x 20m x 12m did not affect the recovery, as the horizontal dimensions of the blocks are small in relation to the range of the variogram. A comparison of the local tonnage recovery curves obtained through multiple conditional simulations with those obtained by uniform conditioning of block grades on an estimate of the 100m x 100m x 12m panel grade reveals comparable results only in panels which are well conditioned and possess an ensemble simulation mean close to the ordinary kriged value for the panel. A study of simple alternative mining sequences on the conditionally simulated deposit shows that concentrating mining operations on a single bench enhances the fluctuation in the alumina values of the ore mined.
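As a small illustration of how a tonnage recovery curve is read off a set of simulated block grades (a sketch with made-up numbers, not the deposit's data): since alumina is a contaminant, the tonnage recovered at a given cutoff is the tonnage of blocks whose simulated grade does not exceed that cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical simulated alumina grades (% Al2O3) for 10,000 blocks
# of 20m x 20m x 12m, e.g. one conditional simulation of the deposit.
grades = rng.lognormal(mean=0.8, sigma=0.4, size=10_000)

def tonnage_recovered(grades, cutoff, block_tonnage=1.0):
    """Tonnage acceptable at the cutoff: a block is recovered
    when its alumina grade is at most the cutoff."""
    return block_tonnage * np.count_nonzero(grades <= cutoff)

for cutoff in (1.5, 2.0, 2.5, 3.0, 4.0):
    print(f"cutoff {cutoff:.1f} % -> {tonnage_recovered(grades, cutoff):.0f} blocks")
```

Repeating this over multiple conditional simulations gives the spread of local recovery curves that the abstract compares against uniform conditioning.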
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of k key nodes having minimal size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners-In recent times, social networks have received a high level of attention due to their proven ability in improving the performance of web search, recommendations in collaborative filtering systems, spreading a technology in the market using viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes in the social network which can influence other nodes in the social network in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network and 2) the lambda-coverage problem which involves finding a set of influential nodes having minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
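The following sketch conveys the flavor of the approach (it is not the SPIN algorithm itself, which is built on a diffusion-model game; here the coalition value is a simple one-hop coverage proxy, and all names are ours): each node's Shapley value is estimated by averaging its marginal coverage gains over random permutations, and the k highest-ranked nodes are returned.

```python
import random
import networkx as nx

def shapley_top_k(G, k, num_perms=2000, seed=0):
    """Monte Carlo Shapley values for a one-hop coverage game,
    v(S) = |S union N(S)|: average each node's marginal coverage
    gain over random arrival orders, then return the top k nodes."""
    rng = random.Random(seed)
    shapley = {v: 0.0 for v in G}
    order = list(G)
    for _ in range(num_perms):
        rng.shuffle(order)
        covered = set()
        for v in order:
            newly = ({v} | set(G[v])) - covered   # marginal coverage of v
            shapley[v] += len(newly)
            covered |= newly
    for v in shapley:
        shapley[v] /= num_perms
    return sorted(shapley, key=shapley.get, reverse=True)[:k]

G = nx.karate_club_graph()
print(shapley_top_k(G, 5))
```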
Abstract:
The performance of a program will ultimately be limited by its serial (scalar) portion, as pointed out by Amdahl's Law. Studies of instruction-level parallelism reported thus far have mixed data-parallel program portions with scalar program portions, often leading to contradictory and controversial results. We report an instruction-level behavioral characterization of scalar code containing minimal data-parallelism, extracted from highly vectorized programs of the PERFECT benchmark suite running on a Cray Y-MP system. We classify scalar basic blocks according to their instruction mix, characterize the data dependencies seen in each class, and, as a first step, measure the maximum intrablock instruction-level parallelism available. We observe skewed rather than balanced instruction distributions in scalar code and in individual basic block classes of scalar code; nonuniform distribution of parallelism across instruction classes; and, as expected, limited available intrablock parallelism. We identify frequently occurring data-dependence patterns and discuss new instructions to reduce latency. Toward effective scalar hardware, we study latency-pipelining trade-offs and restricted multiple instruction issue mechanisms.
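For intuition on the intrablock measurement (a hedged sketch, not the paper's tooling): if a basic block is viewed as a data-dependence DAG over registers, an upper bound on its instruction-level parallelism is the instruction count divided by the critical path length.

```python
def intrablock_ilp(block):
    """block: list of (dest, srcs) register tuples in program order.
    Tracks only flow (read-after-write) dependencies, ignoring anti-
    and output dependencies, and returns n / critical_path_length."""
    depth = []          # DAG level of each instruction
    last_writer = {}    # register name -> index of its latest writer
    for dest, srcs in block:
        d = 1 + max((depth[last_writer[s]] for s in srcs if s in last_writer),
                    default=0)
        depth.append(d)
        last_writer[dest] = len(depth) - 1
    return len(block) / max(depth)

# Four instructions with a critical path of two -> ILP bound of 2.0
block = [("r1", ["r0"]), ("r2", ["r0"]), ("r3", ["r1"]), ("r4", ["r2"])]
print(intrablock_ilp(block))
```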
Abstract:
We address risk-minimizing option pricing in a regime-switching market where the floating interest rate depends on a finite-state Markov process. The growth rate and the volatility of the stock also depend on the Markov process. Using the minimal martingale measure, we show that the locally risk-minimizing prices for certain exotic options satisfy a system of Black-Scholes partial differential equations with appropriate boundary conditions. We find the corresponding hedging strategies and the residual risk. We develop suitable numerical methods to compute option prices.
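A minimal numerical sketch consistent with that setup (all parameters hypothetical, and this explicit scheme is our illustration rather than the authors' method): in a two-state chain, the regime-i price V_i of a European call solves a Black-Scholes equation coupled to the other regime through the switching intensity q_i, and can be marched backward from the payoff on a finite-difference grid.

```python
import numpy as np

K, T = 100.0, 1.0
r   = np.array([0.05, 0.02])   # regime-dependent interest rates
sig = np.array([0.20, 0.35])   # regime-dependent volatilities
q   = np.array([0.50, 0.80])   # switching intensity out of each regime

M, N = 150, 4000               # grid: price steps, time steps
S = np.linspace(0.0, 300.0, M + 1)
dS, dt = S[1] - S[0], T / N

V = np.tile(np.maximum(S - K, 0.0), (2, 1))  # terminal payoff, both regimes
for n in range(1, N + 1):                    # march backward in time
    tau = n * dt                             # time left to maturity
    Vn = V.copy()
    for i in (0, 1):
        j = 1 - i
        Sm = S[1:-1]
        dV  = (Vn[i, 2:] - Vn[i, :-2]) / (2 * dS)                 # V_S
        d2V = (Vn[i, 2:] - 2 * Vn[i, 1:-1] + Vn[i, :-2]) / dS**2  # V_SS
        V[i, 1:-1] = Vn[i, 1:-1] + dt * (
            0.5 * sig[i]**2 * Sm**2 * d2V + r[i] * Sm * dV
            - r[i] * Vn[i, 1:-1] + q[i] * (Vn[j, 1:-1] - Vn[i, 1:-1]))
        V[i, 0]  = 0.0                              # call worthless at S = 0
        V[i, -1] = S[-1] - K * np.exp(-r[i] * tau)  # deep in-the-money bound

for i in (0, 1):
    print(f"regime {i}: price at S = K is {np.interp(K, S, V[i]):.2f}")
```

The time step is kept small enough for the explicit scheme to be stable (dt below dS^2 / (sigma_max^2 S_max^2)); an implicit scheme would lift that restriction.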
Abstract:
Failure to repair DNA double-strand breaks (DSBs) can lead to cell death or cancer. Although nonhomologous end joining (NHEJ) has been studied extensively in mammals, little is known about it in primary tissues. Using oligomeric DNA mimicking endogenous DSBs, we studied NHEJ in cell-free extracts of rat tissues. Results show that the efficiency of NHEJ is highest in lungs compared to other somatic tissues. DSBs with compatible and blunt ends joined without modifications, while noncompatible ends joined with minimal alterations in lungs and testes. Thymus exhibited elevated joining, followed by brain and spleen, which could be correlated with NHEJ gene expression. However, NHEJ efficiency was poor in terminally differentiated organs such as heart, kidney and liver. Strikingly, NHEJ junctions from these tissues also showed extensive deletions and insertions. Hence, for the first time, we show that although the mode of joining is generally comparable, the efficiency of NHEJ varies among primary tissues of mammals.
Abstract:
Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a(1), b(1)] x [a(2), b(2)] x ... x [a(k), b(k)]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) != 1. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted G(c). Let P(c) be the natural height-2 poset associated with G(c), obtained by making one part of the bipartition, A, the set of minimal elements and the other part, B, the set of maximal elements. We show that box(G)/2 <= dim(P(c)) <= 2box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta). This disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5-epsilon)) for any epsilon > 0, unless NP = ZPP.