53 results for Graph partitioning
Abstract:
Multiple parallel synthesis and evaluation have been combined in order to identify new nitrogen heterocycles for the partitioning of minor actinides(III), such as americium(III), from lanthanides such as europium(III). An array of triazine-containing molecules was made using multiple parallel syntheses from diketones and amide hydrazides. An excess of each of the resulting purified reagents was dissolved in 1,1,2,2-tetrachloroethane containing 2-bromodecanoic acid, and equilibrated with an aqueous solution containing the radiotracers Eu-152 and Am-241 in nitric acid ([Eu] + [Am] < 400 nmol dm⁻³). Gamma counting of the organic and aqueous phases led to the identification of several new reagents for the selective extraction of americium(III). In particular, 6-(2-pyridyl)-2-(5,6-dialkyl-1,2,4-triazaphenyl)pyridines were found to be effective reagents for the separation of americium(III) from europium(III) (SF(Am/Eu) was ca. 30 at [HNO3] = 0.013 mol/L).
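For reference, the separation factor quoted above follows from the standard solvent-extraction definitions; the formulation below is added here for clarity and uses conventional symbols (D, SF) that are not taken from the abstract itself.

```latex
% Distribution ratio of metal M, obtained from the gamma-counted
% activity of M in the organic and aqueous phases after equilibration
D_{\mathrm{M}} = \frac{[\mathrm{M}]_{\mathrm{org}}}{[\mathrm{M}]_{\mathrm{aq}}}

% Separation factor of americium over europium
SF_{\mathrm{Am/Eu}} = \frac{D_{\mathrm{Am}}}{D_{\mathrm{Eu}}}
\approx 30 \quad \text{at } [\mathrm{HNO_3}] = 0.013\ \mathrm{mol\,L^{-1}}
```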
Abstract:
In the design of modern office buildings, building spaces are frequently zoned by introducing internal partitioning, which may have a significant influence on the room air environment. This internal partitioning was studied by means of model tests, numerical simulation and, as the final stage, statistical analysis. In this paper, the results produced from the statistical analysis are summarized and presented.
Effect of internal partitioning on indoor air quality of rooms with mixing ventilation - basic study
Abstract:
The internal partitioning, which is frequently introduced in open-space planning due to its flexibility, was tested to study its effects on the room air quality as well as ventilation performance. For the study, physical tests using a small model room and numerical modeling using CFD computation were utilized to evaluate different test conditions employing mixing ventilation from the ceiling. The partition parameters, such as its location, height, and the gap underneath, as well as contaminant source location were tested under isothermal conditions. This paper summarizes the results from the study.
Abstract:
The effect of increased dietary intakes of alpha-linolenic acid (ALNA) or eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) for 2 months upon plasma lipid composition and capacity for conversion of ALNA to longer-chain metabolites was investigated in healthy men (52 (SD 12) years). After a 4-week baseline period when the subjects substituted a control spread, a test meal containing [U-C-13]ALNA (700 mg) was consumed to measure conversion to EPA, docosapentaenoic acid (DPA) and DHA over 48 h. Subjects were then randomised to one of three groups for 8 weeks before repeating the tracer study: (1) continued on same intake (control, n 5); (2) increased ALNA intake (10 g/d, n 4); (3) increased EPA+DHA intake (1.5 g/d, n 5). At baseline, apparent fractional conversion of labelled ALNA was: EPA 2.80, DPA 1.20 and DHA 0.04%. After 8 weeks on the control diet, plasma lipid composition and [C-13]ALNA conversion remained unchanged compared with baseline. The high-ALNA diet resulted in raised plasma triacylglycerol-EPA and -DPA concentrations and phosphatidylcholine-EPA concentration, whilst [C-13]ALNA conversion was similar to baseline. The high-(EPA+DHA) diet raised plasma phosphatidylcholine-EPA and -DHA concentrations and decreased [C-13]ALNA conversion to EPA (2-fold) and DPA (4-fold), whilst [C-13]ALNA conversion to DHA was unchanged. The dietary interventions did not alter partitioning of ALNA towards beta-oxidation. The present results indicate that ALNA conversion was down-regulated by increased product (EPA+DHA) availability, but was not up-regulated by increased substrate (ALNA) consumption. This suggests regulation of ALNA conversion may limit the influence of variations in dietary n-3 fatty acid intake on plasma lipid compositions.
Abstract:
The Web's link structure (termed the Web Graph) is a richly connected set of Web pages. Current applications use this graph for indexing and information retrieval purposes. In contrast, this work reverses the relationship between the Web Graph and the application by letting the structure of the Web Graph influence the behaviour of an application. Presents a novel Web crawling agent, AlienBot, the output of which is orthogonally coupled to the enemy generation strategy of a computer game. The Web Graph guides AlienBot, causing it to generate a stochastic process. Shows the effectiveness of such unorthodox coupling to both the playability of the game and the heuristics of the Web crawler. In addition, presents the results of the sample of Web pages collected by the crawling process. In particular, shows: how AlienBot was able to identify the power law inherent in the link structure of the Web; that 61.74 per cent of Web pages use some form of scripting technology; that the size of the Web can be estimated at just over 5.2 billion pages; and that less than 7 per cent of Web pages fully comply with some variant of (X)HTML.
Abstract:
In order to make a full evaluation of an interconnection network, it is essential to estimate the minimum size of a largest connected component of this network provided the faulty vertices in the network may break its connectedness. Star graphs are recognized as promising candidates for interconnection networks. This article addresses the size of a largest connected component of a faulty star graph. We prove that, in an n-star graph (n >= 3) with up to 2n-4 faulty vertices, all fault-free vertices but at most two form a connected component. Moreover, all fault-free vertices but exactly two form a connected component if and only if the set of all faulty vertices is equal to the neighbourhood of a pair of fault-free adjacent vertices. These results show that star graphs exhibit excellent fault-tolerant abilities in the sense that there exists a large functional network in a faulty star graph.
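To make the stated property concrete, the following sketch builds a small n-star graph and checks the component size after removing the neighbourhood of an adjacent fault-free pair. It is an illustrative demo using networkx, not the authors' proof technique; the expected output is what the abstract's theorem predicts.

```python
# Illustrative check on a small n-star graph (assumed demo, not from the paper).
from itertools import permutations
import networkx as nx

def star_graph(n):
    """n-star graph: vertices are permutations of 1..n; two vertices are
    adjacent iff one is obtained from the other by swapping positions 1 and i."""
    G = nx.Graph()
    for p in permutations(range(1, n + 1)):
        for i in range(1, n):
            q = list(p)
            q[0], q[i] = q[i], q[0]
            G.add_edge(p, tuple(q))
    return G

def largest_component_after_faults(G, faulty):
    """Remove faulty vertices and return the size of the largest remaining component."""
    H = G.copy()
    H.remove_nodes_from(faulty)
    return max(len(c) for c in nx.connected_components(H))

# Example: in the 4-star graph (24 vertices), remove the 2n-4 = 4 external
# neighbours of an adjacent fault-free pair (u, v); per the abstract, all
# fault-free vertices except exactly two should stay connected.
G = star_graph(4)
u = (1, 2, 3, 4)
v = (2, 1, 3, 4)                       # adjacent to u (swap positions 1 and 2)
faults = (set(G[u]) | set(G[v])) - {u, v}
print(len(faults), largest_component_after_faults(G, faults))  # theorem predicts: 4 18
```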
Abstract:
K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as 'brute force', since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees). A combination of an efficient data structure and geometrical constraints allows the number of distance computations required at each iteration to be reduced. In this work we present a general space partitioning approach for improving the efficiency and the scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees in contrast to KD-Trees. In the experimental analysis, we tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used. We carried out extensive experimental tests comparing the proposed approach to the one based on KD-Trees (KD-KM) across a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the 'brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
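As a point of comparison for the brute-force baseline described above, the sketch below accelerates only the assignment step of K-Means with a spatial index over the centroids. It is a simplified illustration of index-based pruning, not the paper's BSP-KM (or KD-KM filtering) algorithm; the function name and parameters are assumptions.

```python
# Simplified illustration: avoid brute-force point-to-centroid distance
# computations by answering nearest-centroid queries through a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def kmeans_kdtree(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid for every point via the KD-tree.
        _, labels = cKDTree(centroids).query(X)
        # Update step: each centroid becomes the mean of its assigned points.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Example usage on synthetic data
X = np.random.default_rng(1).normal(size=(10_000, 8))
centroids, labels = kmeans_kdtree(X, k=20)
```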
Abstract:
Peroxy radicals were measured on board two scientific aircraft during the AMMA (African Monsoon Multidisciplinary Analysis) campaign in summer 2006. This paper reports results from the flight on 16 August 2006, during which measurements of HO2 by laser-induced fluorescence spectroscopy at low pressure (LIF-FAGE) and of total peroxy radicals (RO2* = HO2 + ΣRO2, R = organic chain) by two similar instruments based on the peroxy radical chemical amplification (PeRCA) technique were the subject of a blind intercomparison. The German DLR-Falcon and the British FAAM-BAe-146 flew wing tip to wing tip for about 30 min, making concurrent measurements on two horizontal level runs at 697 and 485 hPa over the same geographical area in Burkina Faso. A full set of supporting measurements, comprising photolysis frequencies and relevant trace gases such as CO, NO, NO2, NOy, O3 and a wider range of VOCs, was collected simultaneously. Results are discussed on the basis of the characteristics and limitations of the different instruments used. Generally, no bias in the data is identified and the available RO2* data agree reasonably well within the instrumental errors. The [RO2*]/[HO2] ratios, which vary between 1:1 and 3:1, as well as the peroxy radical variability, concur with variations in photolysis rates and in other potential radical precursors. Model results provide additional information about dominant radical formation and loss processes.
Abstract:
Current forest growth models and yield tables are almost exclusively based on data from mature trees, reducing their applicability to young and developing stands. To address this gap, young European beech, sessile oak, Scots pine and Norway spruce trees approximately 0 to 10 years old were destructively sampled in a range of naturally regenerated forest stands in Central Europe. Diameter at base and height were first measured in situ for up to 175 individuals per species. Subsequently, the trees were excavated and dry biomass of foliage, branches, stems and roots was measured. Allometric relations were then used to calculate biomass allocation coefficients (BAC) and growth efficiency (GE) patterns in young trees. We found large differences in BAC and GE between broadleaves and conifers, but also between species within these categories. Both BAC and GE are strongly age-specific in young trees, their rapidly changing values reflecting different growth strategies in the earliest stages of growth. We show that linear relationships describing biomass allocation in older trees are not applicable in young trees. To accurately predict forest biomass and carbon stocks, forest growth models need to include species and age specific parameters of biomass allocation patterns.
Abstract:
A simple and coherent framework for partitioning uncertainty in multi-model climate ensembles is presented. The analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty and internal variability. This approach requires fewer assumptions than existing methods and can be easily used to quantify uncertainty related to model-scenario interaction - the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble and results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050 and internal variability remains approximately constant over the 21st century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multi-model ensembles. For example, three models are shown to have diverging patterns over the 21st century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
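The sketch below is a minimal rendering of the kind of two-way ANOVA decomposition described above, written for illustration only (it is not the authors' code and omits refinements such as unbiased variance estimators): given one projected quantity per model and scenario at a fixed lead time, the variation is split into scenario, model and interaction components; internal variability would be estimated separately, e.g. from control runs.

```python
# Hedged illustration of a two-way ANOVA partition of ensemble spread.
import numpy as np

def anova_partition(x):
    """x[m, s]: projection of model m under scenario s at one lead time."""
    grand = x.mean()
    model_eff = x.mean(axis=1) - grand               # model main effects
    scen_eff = x.mean(axis=0) - grand                # scenario main effects
    interact = x - grand - model_eff[:, None] - scen_eff[None, :]
    return {
        "scenario": np.mean(scen_eff ** 2),          # scenario uncertainty
        "model": np.mean(model_eff ** 2),            # scenario-independent model uncertainty
        "interaction": np.mean(interact ** 2),       # model-scenario interaction
    }

# Toy example: 5 models x 3 scenarios of end-of-century warming (made-up numbers)
rng = np.random.default_rng(0)
x = rng.normal(loc=[[1.5, 2.5, 3.5]], scale=0.3, size=(5, 3))
print(anova_partition(x))
```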
An isotope dilution model for partitioning phenylalanine uptake by the liver of lactating dairy cows
Abstract:
An isotope dilution model for partitioning phenylalanine uptake by the liver of the lactating dairy cow was constructed and solved in the steady state. If assumptions are made, model solution permits calculation of the rate of phenylalanine uptake from the portal vein and hepatic arterial blood supply, phenylalanine release into the hepatic vein, phenylalanine oxidation and synthesis, and degradation of hepatic constitutive and export proteins. The model requires the measurement of plasma flow rate through the liver in combination with phenylalanine concentrations and plateau isotopic enrichments in arterial, portal and hepatic plasma during a constant infusion of [1-13C]phenylalanine tracer. The model can be applied to other amino acids with similar metabolic fates and will provide a means for assessing the impact of hepatic metabolism on amino acid availability to peripheral tissues. This is of particular importance for the dairy cow when considering the requirements for milk protein synthesis and the negative environmental impact of excessive nitrogen excretion.
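A much-simplified numerical illustration of the inflow/outflow mass balance such a model rests on is given below. It uses standard arteriovenous-difference arithmetic with made-up values and an assumed arterial share of hepatic inflow; the published model partitions uptake into more fates than this sketch shows.

```python
# Illustrative mass-balance arithmetic only (assumed, simplified relations;
# not the full isotope dilution model described in the abstract).
def hepatic_phe_balance(plasma_flow,             # L/h of plasma through the liver
                        conc_arterial,           # µmol/L, hepatic artery
                        conc_portal,             # µmol/L, portal vein
                        conc_hepatic_vein,       # µmol/L, hepatic vein
                        arterial_fraction=0.2):  # assumed share of hepatic inflow
    """Net hepatic removal of phenylalanine from an inflow/outflow balance."""
    conc_inflow = (arterial_fraction * conc_arterial
                   + (1 - arterial_fraction) * conc_portal)
    inflow = plasma_flow * conc_inflow           # µmol/h delivered to the liver
    outflow = plasma_flow * conc_hepatic_vein    # µmol/h released into the hepatic vein
    return inflow - outflow                      # µmol/h net removal by the liver

# Example with invented values
print(hepatic_phe_balance(plasma_flow=1500, conc_arterial=45,
                          conc_portal=55, conc_hepatic_vein=48))
```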
Abstract:
The soluble phase of milk was separated at 20 and 80°C using ultrafiltration. The resulting permeates were then subjected to further ultrafiltration and dialysis at close to these two temperatures. It was found that pH, Ca2+ and soluble Ca decreased as the separation temperature increased, both in the original UF permeates and in the dialysates obtained from these permeates, whereas P decreased only slightly. The major reason for these changes was the precipitation of calcium phosphate/citrate complexes onto the casein micelle with concomitant release of H+. The pH of both permeates and dialysates from milk at 20°C was slightly higher than that of milk. When UF permeates collected at 20 and 80°C were each dialysed at both of these temperatures, the dialysate collected at 80°C showed much less temperature dependence for pH and ionic calcium than that collected at 20°C. This is in contrast to milk, which shows considerable temperature dependence for pH and ionic calcium. Further experiments revealed that the pH and Ca2+ concentration of permeates showed high temperature dependence above the temperature at which they were separated, but a much lower temperature dependence below that temperature. These findings suggest that dialysis and UF of milk at high temperature provide the best means yet for estimating the pH and ionic calcium of milk at that temperature.
Abstract:
Software representations of scenes, i.e. the modelling of objects in space, are used in many application domains. Current modelling and scene description standards focus on visualisation dimensions, and are intrinsically limited by their dependence upon semantic interpretation and contextual application by humans. In this paper we propose the need for an open, extensible and semantically rich modelling language, which facilitates a machine-readable semantic structure. We critically review existing standards and techniques, and highlight a need for a semantically focussed scene description language. Based on this defined need we propose a preliminary solution, based on hypergraph theory, and reflect on application domains.
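As a rough illustration of the hypergraph-based direction proposed (an assumed encoding for this listing, not the paper's actual language), a scene can be modelled as a hypergraph whose vertices are objects and whose hyperedges are named n-ary semantic relations:

```python
# Minimal illustrative hypergraph scene model (assumed structure, not the
# proposed scene description language).
from dataclasses import dataclass, field

@dataclass
class SceneHypergraph:
    objects: dict = field(default_factory=dict)     # object id -> attribute dict
    relations: list = field(default_factory=list)   # (label, frozenset of object ids)

    def add_object(self, obj_id, **attributes):
        self.objects[obj_id] = attributes

    def relate(self, label, *obj_ids):
        # A hyperedge may span any number of objects, unlike an ordinary graph edge.
        self.relations.append((label, frozenset(obj_ids)))

    def relations_of(self, obj_id):
        return [(label, members) for label, members in self.relations
                if obj_id in members]

scene = SceneHypergraph()
scene.add_object("lamp", kind="light", position=(1.0, 0.0, 2.5))
scene.add_object("desk", kind="furniture")
scene.add_object("chair", kind="furniture")
scene.relate("on_top_of", "lamp", "desk")
scene.relate("grouped_as_workstation", "lamp", "desk", "chair")
print(scene.relations_of("desk"))
```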