851 results for Large scale graph processing
Abstract:
To tackle challenges at both the circuit and system levels of VLSI and embedded system design, this dissertation proposes several novel algorithms for finding efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first algorithmic framework for transceiver insertion in optical interconnects. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart-home domain to improve health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy-consumption scheduling for smart homes. The extended scheme schedules household appliances so as to minimize a customer's monetary expense under a time-varying pricing model.
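As a rough illustration of the kind of cost-minimizing appliance scheduling described above, the sketch below greedily places each appliance in its cheapest feasible time window under a hypothetical hourly price profile. The prices, appliance list and greedy policy are illustrative assumptions and do not reproduce the dissertation's algorithm.

```python
# Illustrative sketch only: a greedy appliance scheduler under a time-varying
# price model. Prices, appliances and the greedy policy are hypothetical.

def cheapest_window(prices, duration, earliest, latest):
    """Find the start hour that minimizes the cost of running an appliance for
    `duration` consecutive hours, starting no earlier than `earliest` and
    finishing no later than hour `latest` (inclusive)."""
    best_start, best_cost = None, float("inf")
    for start in range(earliest, latest - duration + 2):
        cost = sum(prices[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Hourly prices ($/kWh, assuming 1 kWh drawn per hour of operation) over one day,
# and appliances given as (name, duration_h, earliest_start, latest_finish).
prices = [0.10] * 7 + [0.25] * 4 + [0.18] * 6 + [0.30] * 4 + [0.12] * 3
appliances = [("dishwasher", 2, 0, 23), ("washing machine", 1, 8, 20), ("EV charger", 4, 18, 23)]

total = 0.0
for name, duration, earliest, latest in appliances:
    start, cost = cheapest_window(prices, duration, earliest, latest)
    total += cost
    print(f"{name}: start at hour {start}, energy cost {cost:.2f}")
print(f"total expense: {total:.2f}")
```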
Abstract:
This study presents a computational parametric analysis of DME steam reforming in a large-scale Circulating Fluidized Bed (CFB) reactor. The Computational Fluid Dynamics (CFD) model used, which is based on an Eulerian-Eulerian dispersed flow approach, was developed and validated in Part I of this study [1]. The effects of the reactor inlet configuration, gas residence time, inlet temperature and steam-to-DME ratio on the overall reactor performance and products have all been investigated. The results show that the use of a double-sided solid feeding system brings a remarkable improvement in flow uniformity, but has a limited effect on the reactions and products. Temperature was found to play a dominant role in increasing the DME conversion and the hydrogen yield. Based on the parametric analysis, it is recommended to run the CFB reactor at around 300 °C inlet temperature, a 5.5 steam-to-DME molar ratio, 4 s gas residence time and a space velocity of 37,104 ml gcat⁻¹ h⁻¹. Under these conditions, the DME conversion and the hydrogen molar concentration in the product gas were both found to be around 80%.
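A minimal sketch of how the recommended operating point above could be captured as a configuration, with the steam feed implied by the 5.5 steam-to-DME molar ratio computed for an assumed DME feed rate; the feed rate is a hypothetical value used only for illustration.

```python
# Recommended CFB operating point from the parametric study, plus the steam feed
# implied by the 5.5 steam-to-DME molar ratio for an assumed DME feed rate.

recommended = {
    "inlet_temperature_C": 300,
    "steam_to_DME_molar_ratio": 5.5,
    "gas_residence_time_s": 4,
    "space_velocity_ml_per_gcat_h": 37104,
}

dme_feed_mol_per_h = 100.0  # hypothetical feed rate, for illustration only
steam_feed_mol_per_h = recommended["steam_to_DME_molar_ratio"] * dme_feed_mol_per_h
print(f"steam feed: {steam_feed_mol_per_h:.0f} mol/h for {dme_feed_mol_per_h:.0f} mol/h DME")
```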
Abstract:
Some color centers in diamond can serve as quantum bits that can be manipulated with microwave pulses and read out with a laser, even at room temperature. However, the photon collection efficiency from bulk diamond is greatly reduced by refraction at the diamond/air interface. To address this issue, we fabricated arrays of diamond nanostructures, differing in both diameter and top-end shape, with HSQ and Cr as the etching mask materials, aiming toward large-scale fabrication of single-photon sources with enhanced collection efficiency based on nitrogen-vacancy (NV) centers embedded in diamond. With a mixture of O2 and CHF3 gas plasma, diamond pillars with diameters down to 45 nm were obtained. The evolution of the top-end shape has been captured with a simple model. Tests of the size-dependent single-photon properties confirmed a single-photon collection efficiency enhancement of more than tenfold, and a mild decrease of the decoherence time with decreasing pillar diameter was observed, as expected. These results provide useful information for future applications of nanostructured diamond as a single-photon source.
Abstract:
The spread of antibiotic resistance among bacteria responsible for nosocomial and community-acquired infections calls for novel therapeutic or prophylactic targets and for innovative pathogen-specific antibacterial compounds. Major challenges are posed by opportunistic pathogens belonging to the low-GC% gram-positive bacteria. Among these, Enterococcus faecalis is a leading cause of hospital-acquired infections associated with life-threatening complications and increased hospital costs. To better understand the molecular properties of enterococci that may be required for virulence, and that may explain the emergence of these bacteria in nosocomial infections, we performed the first large-scale functional analysis of E. faecalis V583, the first vancomycin-resistant isolate from a human bloodstream infection. E. faecalis V583 belongs to the high-risk clonal complex 2 group, which comprises mostly isolates derived from hospital infections worldwide. We conducted broad-range screenings of candidate genes likely involved in host adaptation (e.g., colonization and/or virulence). For this purpose, a library was constructed of targeted insertion mutations in 177 genes encoding putative surface or stress-response factors. Individual mutants were subsequently tested for their (i) resistance to oxidative stress, (ii) antibiotic resistance, (iii) resistance to opsonophagocytosis, (iv) adherence to human colon carcinoma Caco-2 epithelial cells and (v) virulence in a surrogate insect model. Our results identified a number of factors that are involved in the interaction between enterococci and their host environments. Their predicted functions highlight the importance of cell envelope glycopolymers in E. faecalis host adaptation. This study provides a valuable genetic database for understanding the steps leading E. faecalis to opportunistic virulence.
Abstract:
Strong convective events can produce extreme precipitation, hail, lightning or gusts, potentially inducing severe socio-economic impacts. These events have a relatively small spatial extent and, in most cases, a short lifetime. In this study, a model is developed for estimating convective extreme events based on large-scale conditions. It is shown that strong convective events can be characterized by a Weibull distribution of radar-based rainfall with a low shape and a high scale parameter value. A radius of 90 km around a station reporting a convective situation turned out to be suitable. A methodology is developed to estimate the Weibull parameters, and thus the occurrence probability of convective events, from large-scale atmospheric instability and enhanced near-surface humidity, which are usually found on a larger scale than the convective event itself. Here, the probability of occurrence of extreme convective events is estimated from the KO-index, which indicates stability, and from relative humidity at 1000 hPa. Both variables are computed from ERA-Interim reanalysis. In a first version of the methodology, these two variables are used to estimate the spatial rainfall distribution and the occurrence of a convective event. The developed method shows significant skill in estimating the occurrence of convective events as observed at synoptic stations, in lightning measurements, and in severe weather reports. In order to take frontal influences into account, a scheme for the detection of atmospheric fronts is implemented. While generally higher instability is found in the vicinity of fronts, the skill of this approach is largely unchanged. Additional improvements were achieved by a bias correction and the use of ERA-Interim precipitation. The resulting estimation method is applied to the ERA-Interim period (1979-2014) to establish a ranking of estimated convective extreme events. Two strong estimated events that reveal a frontal influence are analysed in detail. As a second application, the method is applied to GCM-based decadal predictions in the period 1979-2014, which were initialized every year. It is shown that decadal predictive skill for convective event frequencies over Germany exists for the first 3-4 years after initialization.
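A hedged sketch of the Weibull characterization step described above: fit shape and scale parameters to radar-based rainfall around a station and flag a convective-like situation when the shape is low and the scale is high. The synthetic data and the thresholds are illustrative assumptions, not values from the study.

```python
# Fit a Weibull distribution to (synthetic) rainfall intensities within a 90 km
# radius of a station and test for a low-shape / high-scale signature.

import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Synthetic rainfall intensities (mm/h), standing in for radar-based observations.
rainfall = weibull_min.rvs(c=0.7, scale=8.0, size=2000, random_state=rng)

shape, loc, scale = weibull_min.fit(rainfall, floc=0)  # location fixed at zero

SHAPE_MAX, SCALE_MIN = 1.0, 5.0  # assumed thresholds, for illustration only
convective_like = (shape < SHAPE_MAX) and (scale > SCALE_MIN)
print(f"shape={shape:.2f}, scale={scale:.2f}, convective-like={convective_like}")
```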
Abstract:
Aim: Positive regional correlations between biodiversity and human population have been detected for several taxonomic groups and geographical regions. Such correlations could have important conservation implications and have mainly been attributed to ecological factors, with little testing for an artefactual explanation: more populated regions may show higher biodiversity simply because they are more thoroughly surveyed. We tested the hypothesis that the correlation between people and herptile diversity in Europe is influenced by survey effort.
Abstract:
The increasing integration of renewable energies into the electricity grid contributes considerably to achieving the European Union's goals on energy and Greenhouse Gas (GHG) emissions reduction. However, it also brings problems for grid management. Large-scale energy storage can provide the means for better integration of renewable energy sources, for balancing supply and demand, for increasing energy security, for better management of the grid, and for converging towards a low-carbon economy. Geological formations have the potential to store large volumes of fluids with minimal impact on the environment and society. One way to achieve large-scale energy storage is to use the storage capacity of geological reservoirs. In fact, there are several viable technologies for underground energy storage, as well as several types of underground reservoirs that can be considered. The geological energy storage technologies considered in this research were: Underground Gas Storage (UGS), Hydrogen Storage (HS), Compressed Air Energy Storage (CAES), Underground Pumped Hydro Storage (UPHS) and Thermal Energy Storage (TES). For these different types of underground energy storage technologies, several types of geological reservoirs can be suitable, namely: depleted hydrocarbon reservoirs, aquifers, salt formations and caverns, engineered rock caverns and abandoned mines. Specific site-screening criteria are applicable to each of these reservoir types and technologies, and they determine the viability of the reservoir itself, and of the technology, for any particular site. This paper presents a review of the criteria applied in the scope of the Portuguese contribution to the EU-funded project ESTMAP – Energy Storage Mapping and Planning.
Abstract:
Colloidal semiconductor nanocrystals (CS-NCs) offer the compelling benefits of low-cost, large-scale solution processing and of optoelectronic properties that are tunable through controlled synthesis and surface-chemistry engineering. These merits make them promising candidates for a variety of applications. This review focuses on the general strategies and recent developments in the controlled synthesis of CS-NCs in terms of crystalline structure, particle size, dominant exposed facet, and surface passivation. Highlighted are the organic-media-based syntheses of metal chalcogenide (including cadmium, lead, and copper chalcogenide) and metal oxide (including titanium oxide and zinc oxide) nanocrystals. Current challenges, and thus future opportunities, are also pointed out in this review.
Abstract:
Many conventional statistical machine learning algorithms generalise poorly if distribution bias exists in the datasets. For example, distribution bias arises in the context of domain generalisation, where knowledge acquired from multiple source domains needs to be used in a previously unseen target domain. We propose Elliptical Summary Randomisation (ESRand), an efficient domain generalisation approach that comprises a randomised kernel and elliptical data summarisation. ESRand learns a domain-interdependent projection to a latent subspace that minimises existing biases in the data while maintaining the functional relationship between domains. In the latent subspace, ellipsoidal summaries replace the samples to enhance generalisation by further removing bias and noise from the data. Moreover, the summarisation enables large-scale data processing by significantly reducing the size of the data. Through comprehensive analysis, we show that our subspace-based approach outperforms state-of-the-art results on several activity recognition benchmark datasets while keeping the computational complexity significantly low.
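The following is a hedged sketch of the general idea of combining a randomised kernel projection with ellipsoidal data summaries, using random Fourier features and per-domain mean/covariance summaries. It illustrates the concept only and is not the ESRand algorithm itself; the feature dimension, kernel bandwidth and synthetic domains are assumptions.

```python
# Project samples with random Fourier features, then replace each domain's
# samples by an ellipsoidal summary (mean + covariance).

import numpy as np

def random_fourier_features(X, n_features=64, gamma=1.0, seed=0):
    """Approximate an RBF kernel feature map with random Fourier features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def ellipsoidal_summary(Z):
    """Summarise projected samples by their mean and covariance (an ellipsoid)."""
    return Z.mean(axis=0), np.cov(Z, rowvar=False)

rng = np.random.default_rng(1)
# Three synthetic source domains with shifted distributions.
domains = {f"source_{i}": rng.normal(loc=i, size=(200, 10)) for i in range(3)}

summaries = {}
for name, X in domains.items():
    Z = random_fourier_features(X)
    summaries[name] = ellipsoidal_summary(Z)
    print(name, "summary mean shape:", summaries[name][0].shape)
```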
Abstract:
Digestion of food in the intestines converts the compacted storage carbohydrates, starch and glycogen, to glucose. After each meal, a flux of glucose (>200 g) passes through the blood pool (4-6 g) in a short period of 2 h, keeping its concentration ideally in the range of 80-120 mg/100 mL. Tissue-specific glucose transporters (GLUTs) aid in the distribution of glucose to all tissues. The glucose remaining after immediate energy needs are met is converted into glycogen and stored in the liver (up to 100 g) and skeletal muscle (up to 300 g) for later use. High blood glucose gives the signal for increased release of insulin from the pancreas. Insulin binds to the insulin receptor on the plasma membrane and activates its autophosphorylation. This initiates the post-insulin-receptor signal cascade that accelerates synthesis of glycogen and triglyceride. Parallel control by phosphorylation-dephosphorylation and redox regulation of proteins exists for some of these steps. A major action of insulin is to inhibit gluconeogenesis in the liver, decreasing glucose output into the blood. Cases of failed control of blood glucose have increased alarmingly since 1960, coinciding with changed lifestyles and large-scale food processing. Many of these cases turned out to be resistant to insulin, usually accompanied by dysfunctional glycogen storage. Glucose has an extended stay in blood at 8 mM and above and then indiscriminately adds onto surface amino groups of proteins. Fructose in common sugar is 10-fold more active in this respect. This random glycation process interferes with the functions of many proteins (e.g., hemoglobin, eye lens proteins) and causes progressive damage to the heart, kidneys, eyes and nerves. Some compounds are known to act as insulin mimics. Vanadium-peroxide complexes act at the post-receptor level but are toxic. The fungus-derived 2,5-dihydroxybenzoquinone derivative is the first one known to act on the insulin receptor. The safe herbal products in use for centuries for glucose control have multiple active principles and targets. Some are effective in slowing the formation of glucose in the intestines by inhibiting alpha-glucosidases (e.g., salacia/saptarangi). Knowledge gained from French lilac about its active guanidine group helped in developing Metformin (1,1-dimethylbiguanide), one of the most popular drugs in use. One strategy for keeping the sugar content of diets in check is to use artificial sweeteners with no calories, no glucose or fructose and no effect on blood glucose (e.g., steviol, erythritol). However, the three commonly used non-caloric artificial sweeteners, saccharin, sucralose and aspartame, were later found to induce glucose intolerance, the very condition they are expected to avoid. The ideal way of keeping blood glucose under 6 mM, and HbA1c, the glycation marker of hemoglobin, under 7%, is to correct the defects in the signals that allow glucose flow into glycogen, which remains a difficult task with drugs and diets.
Abstract:
An attempt has been made in the present study to estimate and describe in detail the nature and extent of contamination of processed fishery products. In large-scale prawn processing, where the pre-processing preparation is elaborate, the industry in India has found it advantageous to establish primary processing centers away from the processing factories. The data collected clearly indicate that if such processing centers are not properly organized, there is a possibility of greater contamination of the products at this stage. The data collected during the course of this investigation provide the basis for the measures to be taken to maintain the bacterial quality of prawns during the different stages of processing.
Abstract:
Two approaches were undertaken to characterize the arsenic (As) content of Chinese rice. First, a national market-basket survey (n = 240) was conducted in provincial capitals, sourcing grain from China's premier rice production areas. Second, to reflect rural diets, paddy rice (n = 195) was collected directly from farmers' fields in three regions of Hunan, a key rice-producing province located in southern China. Two of the sites were within mining and smeltery districts, and the third was devoid of large-scale metal processing industries. Arsenic levels were determined in all the samples, while a subset (n = 33) was characterized for As species using a new simple and rapid extraction method suitable for use with Hamilton PRP-X100 anion exchange columns and HPLC-ICP-MS. The vast majority (85%) of the market rice grains had total As levels <150 ng g⁻¹. The rice collected from mine-impacted regions, however, was found to be highly enriched in As, reaching concentrations of up to 624 ng g⁻¹. Inorganic As (As(i)) was the predominant species detected in all of the speciated grain, with As(i) levels in some samples exceeding 300 ng g⁻¹. The As(i) concentration in polished and unpolished Chinese rice was successfully predicted from total As levels. The mean baseline concentration of As(i) in Chinese market rice based on this survey was estimated to be 96 ng g⁻¹, while levels in mine-impacted areas were higher, with ca. 50% of the rice in one region predicted to fail the national standard.
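A hedged sketch of the kind of calibration implied by the last result: predicting inorganic As from total As with a simple linear regression. The data points and fitted coefficients below are synthetic placeholders, not the study's values.

```python
# Fit a linear model As_i ~ total_As on synthetic (total, inorganic) pairs in ng/g.

import numpy as np

total_as = np.array([80, 120, 150, 200, 300, 450, 624], dtype=float)
inorganic_as = np.array([60, 90, 110, 150, 220, 320, 430], dtype=float)

slope, intercept = np.polyfit(total_as, inorganic_as, deg=1)
predicted = slope * 180.0 + intercept  # predicted As_i for a grain with 180 ng/g total As
print(f"As_i ≈ {slope:.2f} * total_As + {intercept:.1f}; prediction at 180 ng/g: {predicted:.0f} ng/g")
```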
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for an implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system whereby the different stages and the synchronization operations between them alternate between a distributed memory address space and a shared memory address space to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements by factors of 3.1× and 3.2×, respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and find its performance to be superior to that of message passing.
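For readers unfamiliar with the MapReduce programming model referred to above, the sketch below shows its map, shuffle and reduce stages on a toy word count. It illustrates the model only and does not reflect HyMR's hybrid address-space implementation on the Intel SCC.

```python
# Conceptual MapReduce word count with explicit map, shuffle and reduce stages.

from collections import defaultdict

def map_stage(documents):
    """Emit (word, 1) pairs from each input document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle_stage(pairs):
    """Group intermediate values by key (the synchronization point between stages)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_stage(groups):
    """Sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["hybrid address spaces", "address spaces without cache coherence"]
print(reduce_stage(shuffle_stage(map_stage(docs))))
```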
Abstract:
The scale-up of Spark Plasma Sintering (SPS) for the consolidation of large square monoliths (50 × 50 × 3 mm³) of thermoelectric material is demonstrated, and the properties of the fabricated samples are compared with those from laboratory-scale SPS. The SPS processing of n-type TiS2 and p-type Cu10.4Ni1.6Sb4S13 produces highly dense compacts of phase-pure material. Electrical and thermal transport property measurements reveal that the thermoelectric performance of the consolidated n- and p-type materials is comparable with that of material processed using laboratory-scale SPS, with ZT values that approach 0.8 and 0.35 at 700 K for Cu10.4Ni1.6Sb4S13 and TiS2, respectively. Measurements of the mechanical properties of the consolidated materials show that large-scale SPS processing produces highly homogeneous materials with hardness and elastic moduli that deviate little from values obtained on materials processed on the laboratory scale.
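The reported ZT values follow the standard thermoelectric figure of merit ZT = S²σT/κ. The sketch below evaluates that expression for hypothetical property values at 700 K, which are placeholders rather than measurements from this work.

```python
# Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

def figure_of_merit(seebeck_V_per_K, elec_conductivity_S_per_m,
                    thermal_conductivity_W_per_mK, temperature_K):
    """Compute ZT from the Seebeck coefficient, electrical conductivity,
    thermal conductivity and absolute temperature."""
    power_factor = seebeck_V_per_K**2 * elec_conductivity_S_per_m  # W m^-1 K^-2
    return power_factor * temperature_K / thermal_conductivity_W_per_mK

# Hypothetical property values at 700 K, for illustration only.
zt = figure_of_merit(seebeck_V_per_K=180e-6,
                     elec_conductivity_S_per_m=5.0e4,
                     thermal_conductivity_W_per_mK=1.5,
                     temperature_K=700)
print(f"ZT ≈ {zt:.2f}")
```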
Abstract:
A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. Furthermore, in the case of directed rooted trees, reducing a k-colouring to a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
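A hedged sketch of the classic Cole-Vishkin style colour-reduction step on a directed cycle, which underlies iterated-logarithm bounds of this kind: in one synchronous round each node compares its colour with its predecessor's and recolours itself from a palette of about 2·log2(k) colours. This illustrates the technique only, not the thesis's exact algorithms or its SAT-based analysis.

```python
# One round of Cole-Vishkin colour reduction on a directed cycle: node v with
# colour c_v and predecessor colour c_p picks 2*i + b, where i is the lowest bit
# position in which c_v and c_p differ and b is v's own bit at that position.

def reduce_colours_once(colours):
    """Simulate one synchronous round; colours[v-1] is the predecessor of node v."""
    n = len(colours)
    new_colours = []
    for v in range(n):
        mine, pred = colours[v], colours[(v - 1) % n]
        diff = mine ^ pred
        i = (diff & -diff).bit_length() - 1  # lowest differing bit position
        b = (mine >> i) & 1
        new_colours.append(2 * i + b)
    return new_colours

def is_proper(colours):
    """Check that consecutive nodes on the cycle have distinct colours."""
    n = len(colours)
    return all(colours[v] != colours[(v + 1) % n] for v in range(n))

# Start from a proper 256-colouring of a directed 10-cycle and reduce repeatedly.
cycle = [17, 200, 3, 77, 130, 9, 54, 201, 88, 255]
assert is_proper(cycle)
for _ in range(3):
    cycle = reduce_colours_once(cycle)
    print(cycle, "proper:", is_proper(cycle))
```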