859 results for "Large-scale gradient"


Relevance: 90.00%

Abstract:

The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm that leverages both the structural information from the relationship graph and flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world's largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency.
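For intuition, the greedy propagation loop the abstract describes can be sketched in a few lines. This is not the authors' code: the seeding, the fixed structural bonus, and all names are illustrative assumptions, and the real SiGMa combines property and structural similarity more carefully.

```python
import heapq

def sigma_like_match(seeds, neighbors1, neighbors2, prop_sim):
    """Greedy alignment of two knowledge bases (illustrative sketch).

    seeds      : iterable of (e1, e2) high-confidence initial pairs
    neighborsX : dict entity -> set of graph neighbours in KB X
    prop_sim   : function (e1, e2) -> property-similarity score in [0, 1]
    """
    matched1, matched2, alignment = set(), set(), []
    heap = [(-prop_sim(a, b), a, b) for a, b in seeds]
    heapq.heapify(heap)
    while heap:
        neg_score, a, b = heapq.heappop(heap)
        if a in matched1 or b in matched2:
            continue                       # greedy: first commitment wins
        matched1.add(a); matched2.add(b)
        alignment.append((a, b, -neg_score))
        # Propagation: neighbours of a matched pair become candidates,
        # scored by property similarity plus a structural bonus.
        for na in neighbors1.get(a, ()):
            for nb in neighbors2.get(b, ()):
                if na not in matched1 and nb not in matched2:
                    score = prop_sim(na, nb) + 0.5   # structural bonus (assumed)
                    heapq.heappush(heap, (-score, na, nb))
    return alignment

# Toy usage: two 2-entity knowledge bases differing only in capitalization
n1 = {"Paris": {"France"}, "France": {"Paris"}}
n2 = {"paris": {"france"}, "france": {"paris"}}
def sim(x, y):
    return 1.0 if x.lower() == y.lower() else 0.0
print(sigma_like_match([("Paris", "paris")], n1, n2, sim))
```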

Relevance: 90.00%

Abstract:

An experimental investigation has been undertaken in which vortex generators (VGs) were employed to inhibit the boundary-layer separation produced by the combined adverse pressure gradient of a terminal shock wave and a subsonic diffuser. This setup was developed as part of a programme to produce a more inlet-relevant flow field in a small-scale wind tunnel than previous studies. The resulting flow is dominated by large-scale separation and, as such, is thought to be a good test-bed for flow control. In this investigation, VGs were added to determine their potential for mitigating shock-induced separation. In line with previous studies, it was observed that the application of VGs alone could not significantly alleviate separation overall, because enlarged corner separations were observed. Only when corner bleed was employed to control the corner separations alongside centre-span control using VGs was a significant improvement observed in both wall pressure recovery (6% increase) and stagnation pressure recovery (2.4% increase).

Relevance: 90.00%

Abstract:

Large-Eddy Simulation (LES) and hybrid Reynolds-averaged Navier-Stokes-LES (RANS-LES) methods are applied to a turbine-blade ribbed internal duct with a 180° bend containing 24 pairs of ribs. Flow and heat transfer predictions are compared with experimental data and found to be in agreement. The choice of LES model is found to be of minor importance, as the flow is dominated by large geometric-scale structures; this is in contrast to several linear and nonlinear RANS models, which display turbulence-model sensitivity. For LES, the influence of inlet turbulence is also tested and has a minor impact due to the strong turbulence generated by the ribs. Large-scale turbulent motions destroy any classical boundary layer, reducing near-wall grid requirements. The wake-type flow structure makes this and similar flows nearly Reynolds-number independent, allowing a range of flows to be studied at similar cost. Hence LES is a relatively cheap method for obtaining accurate heat transfer predictions in these types of flows.

Relevance: 90.00%

Abstract:

Superhigh-aspect-ratio Cu-thiourea (Cu(tu)) nanowires have been synthesized in large quantities via a fast and facile method. Nanowires of Cu(tu)Cl·0.5H2O and Cu(tu)Br·0.5H2O were found to be 60-100 nm and 100-200 nm in diameter, respectively, and could extend to several millimeters in length. This is so far the most convenient and facile approach to the large-scale fabrication of one-dimensional, superhigh-aspect-ratio nanomaterials.

Relevance: 90.00%

Abstract:

The conditional nonlinear optimal perturbation (CNOP), a nonlinear generalization of the linear singular vector (LSV), is applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities, in terms of the computational error and cost of obtaining the CNOP, are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm. A theoretical grassland ecosystem model and the classical Lorenz model are used as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms. The computational cost of obtaining the CNOP is reduced by using the SQP algorithm, while the experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. These numerical results suggest a new approach and algorithm for obtaining the CNOP for a large-scale optimization problem.
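As a concrete illustration of the optimization problem being benchmarked, the sketch below computes a CNOP-like perturbation for the Lorenz-63 model with SciPy's L-BFGS-B routine. It is a minimal sketch under stated assumptions: the norm constraint is enforced here by a quadratic penalty rather than the papers' exact treatments, and the forecast time, constraint radius, and penalty weight are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

def forecast(x0, T=0.6):
    """Nonlinear forecast: integrate the model from x0 for time T."""
    sol = solve_ivp(lorenz, (0.0, T), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

x_ref = np.array([1.0, 3.0, 15.0])      # reference initial state (assumed)
base = forecast(x_ref)
delta_max = 0.1                          # constraint radius ||d|| <= delta_max

def neg_growth(d, mu=1e4):
    """Negative perturbation growth; the quadratic penalty stands in
    for the norm constraint handled exactly by SQP/SPG2 in the papers."""
    J = np.linalg.norm(forecast(x_ref + d) - base)
    excess = max(0.0, np.linalg.norm(d) - delta_max)
    return -J + mu * excess**2

res = minimize(neg_growth, x0=np.full(3, 0.05), method="L-BFGS-B")
print("CNOP-like perturbation:", np.round(res.x, 4),
      "growth:", -neg_growth(res.x))
```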

Relevance: 90.00%

Abstract:

As proven by petroleum exploration, karst-fissure reservoirs in carbonate rocks are significant for finding large-scale oil and gas fields. They comprise four reservoir types: karst cave, karst crack, crack-cave, and fracture-pore-cave. Each reservoir space and each reservoir bed exhibits strong heterogeneity and small-scale pore-crack-cave features. The fracture-cave reservoir in carbonate rocks is characterized by multiple reservoir types and long oil-bearing intervals, and its geometry is controlled by the irregular pore-crack-cave system. The degree of development of fractures and karst caves is the key control on hydrocarbon enrichment, high productivity, and stable production. However, most carbonate formations are deeply buried and the signal-to-noise ratio of their seismic reflections is very low, which is why fracture-cave reservoirs are difficult to predict effectively. Having surveyed much of the previous research, the author applied integrated reservoir geophysical prediction methods from both macroscopic and microscopic directions, in light of the reservoir-cap conditions, the geophysical and geological features, and the difficulty of prediction in carbonate rocks, guided by current ideas in stratigraphy, sedimentology, reservoir geology, and karst geology, with geophysical technology as the key technique.

On the macroscopic side, starting from the three factors controlling reservoir distribution (sedimentary facies, karst, and fracture) and comprehensively using geological, geophysical, drilling, and well-log data, reservoir features and internal karst structure were studied from single-well and multi-well data. By establishing carbonate deposition, karst, and fracture models, the macro-scale distribution laws of the carbonates were derived through coherence analysis, seismic reflection feature analysis, and palaeotectonic analysis. On the microscopic side, starting from the geophysical response of fracture and karst-cave models under the guidance of the macroscopic geological model, carbonate reservoir prediction methods were developed by comprehensively using seismic multi-attribute cross-plot analysis, log-constrained seismic inversion, seismic discontinuity analysis, the seismic spectrum attenuation gradient, beaded ("moniliform") reflection feature analysis, and multi-parameter karst reservoir appraisal.

Through application of this integrated geophysical prediction, the author successfully delineated the favorable reservoir distribution in the Ordovician of Katake block 1 in the middle Tarim basin: fracture-cave reservoir distributions were mapped, and prospective directions and favorable targets were demonstrated. The result is a set of carbonate reservoir prediction methods for the middle Tarim basin and a sound basic technique for predicting the Ordovician carbonate reservoirs there. As proven by exploration drilling, the favorable regions of beaded-reflection fracture, pore-cave, and cave-fracture reservoirs in the Lower-Middle Ordovician coincide with the regions of hydrocarbon shows, indicating that the reservoir prediction methods described in this study of the Ordovician carbonate formation are practically feasible.
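Of the listed attributes, the seismic spectrum attenuation gradient is simple enough to sketch. The following is an illustrative implementation, not the thesis workflow: the sampling interval, frequency band, and function name are assumptions.

```python
import numpy as np

def attenuation_gradient(traces, dt, f_band=(30.0, 80.0)):
    """Spectral attenuation gradient per trace (illustrative sketch).

    traces : 2-D array, shape (n_traces, n_samples)
    dt     : sample interval in seconds
    f_band : frequency band (Hz) over which the amplitude decay is fitted
    """
    n_traces, n_samples = traces.shape
    freqs = np.fft.rfftfreq(n_samples, d=dt)
    spec = np.abs(np.fft.rfft(traces, axis=1))          # amplitude spectra
    sel = (freqs >= f_band[0]) & (freqs <= f_band[1])   # fitting band
    grads = np.empty(n_traces)
    for i in range(n_traces):
        # Slope of amplitude vs. frequency: a more negative slope means
        # stronger high-frequency attenuation, commonly taken as a proxy
        # for fractured or vuggy (cave-bearing) intervals.
        slope, _ = np.polyfit(freqs[sel], spec[i, sel], 1)
        grads[i] = slope
    return grads

# Example: 100 synthetic traces, 2 ms sampling
rng = np.random.default_rng(0)
demo = rng.standard_normal((100, 500))
print(attenuation_gradient(demo, dt=0.002)[:5])
```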

Relevance: 90.00%

Abstract:

Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond the capabilities of a single-CPU computer. With the development of high-performance parallel computing technologies, applying parallel computing to the numerical modeling of groundwater flow has become necessary and important, as it improves the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods on the two main types of modern parallel computer architecture, shared-memory and distributed-memory systems, are discussed. OpenMP and MPI (PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW, and two parallel solvers, P-PCG and P-MODFLOW, were developed for it. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, a potential high-level radioactive waste geological disposal area in China.

1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers of MODFLOW. The parallel PCG solver, P-PCG, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments; the largest test model has 1000 columns, 1000 rows and 1000 layers. Based on the timing results, execution times with the P-PCG solver are typically about 1.40 to 5.31 times faster than with the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance cost, because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree.

2. P-MODFLOW, a domain-decomposition-based model implemented in a parallel computing environment, was developed, allowing efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors solve the model equations within each sub-domain. Porting MODFLOW to distributed-memory parallel systems via domain decomposition extends its applicability to the most popular cluster systems, so that a large-scale simulation can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 (14 processors); super-linear speedups were achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing, and full use of PETSc were considered to achieve a highly efficient parallel program.

3. Characterizing the regional groundwater flow system is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, has been selected as a potential site for a disposal repository. The area covers about 80,000 km2 and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. To reduce computing time, the parallel computing scheme was applied to regional groundwater flow modeling. Models with over 10 million cells were used to simulate how faults and different recharge conditions affect the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for site characterization of the potential high-level radioactive waste disposal.
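For orientation, the preconditioned conjugate-gradient iteration that P-PCG parallelizes looks as follows in a minimal serial sketch (MODFLOW itself is Fortran; the Jacobi preconditioner and all names here are illustrative). The comments mark the operations that thread-level parallelization targets.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for an SPD matrix (sketch)."""
    M_inv = 1.0 / np.diag(A)        # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p                  # matrix-vector product: the dominant cost,
                                    # and the main loop an OpenMP-style solver
                                    # like P-PCG distributes across threads
        alpha = rz / (p @ Ap)       # dot products need a parallel reduction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r               # preconditioner apply: embarrassingly
                                    # parallel for a diagonal preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k

# Tiny SPD test system (1-D diffusion-like stencil)
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
x, iters = pcg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```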

Relevance: 90.00%

Abstract:

The relationship between biota and environment is a perennial theme of ecology. Transects have become an important method for studying the relationship between global change and terrestrial ecosystems, especially for analyzing its driving factors. The Inner Mongolia grassland transect is the most important part of the China Grassland Transect proposed by Yu GR. In this study, changes in grassland community biomass along gradients of climatic conditions in Inner Mongolia were investigated using the transect method, and regression methods for biomass were also compared. The transect was set from Eerguna county to Alashan county (38°07'35"-50°12'20" N, 101°55'25"-120°20'46" E) in Inner Mongolia, China. The sample sites were chosen mainly along the gradient of grassland types: meadow steppe → typical steppe → desert steppe → steppified desert → desert. Sampling was carried out when grassland community biomass peaked, in August or September of 2003 and 2004, and data were obtained from 49 sample sites, including biomass, mean annual temperature, annual precipitation, accumulated temperature above zero, annual hours of sunshine, and other statistical and descriptive variables. The aboveground biomass was harvested, and the belowground biomass was obtained by coring (30 cm deep); all biomass samples were oven-dried at (80 ± 5) °C and weighed. The conclusions are as follows:

1) From the northeast to the southwest of Inner Mongolia, along the gradient of grassland type (meadow steppe → typical steppe → desert steppe → steppified desert → desert), the vegetation cover decreases.

2) By simple (unitary) regression analysis, biomass is negatively correlated with mean annual temperature, ≥0 °C accumulated temperature, ≥10 °C accumulated temperature and annual hours of sunshine, among which mean annual temperature is crucial, and positively correlated with mean annual precipitation and mean annual relative humidity, with the higher correlation coefficient for relative humidity; altitude has no evident effect. Multiple regression analysis indicates that precipitation, as the primary limiting factor, affects biomass in complicated ways at the large scale, and its effect is certainly important. Along the grassland-type gradient, total biomass decreases; the proportion of aboveground biomass in total biomass decreases and then increases after desert steppe, and the trend of the belowground proportion is the opposite.

3) Precipitation is not always the only driving factor along the transect for the below-/aboveground biomass ratio of vegetation types composed of different species; the joint distribution of temperature and precipitation, which differs greatly among climatic regions, is more important, so that the trend of the below-/aboveground biomass ratio along the grassland transect may change markedly across the boundary between the semiarid and arid regions.

4) Among the components of aboveground biomass allocation, only the proportion of stem in total biomass correlates notably with the given parameters. The stem/leaf biomass ratio decreases as longitude and latitude increase, thermal variables decrease, and moisture variables increase from desert to meadow steppe; these trends are well modeled by logarithmic or binomial equations.

5) The 0-10 cm belowground biomass correlates highly with the environmental parameters; its proportion of total biomass changes most distinctly and increases along the gradient from west to east. The deeper belowground biomass responds to environmental change with the opposite trend, but not as sensitively as the surface layer. Because the change in the 0-10 cm belowground biomass is always larger than that below 10 cm along the gradient, the difference between them is balanced by the change in aboveground biomass, consistent with the resource allocation equilibrium hypothesis.
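A minimal sketch of the two regression analyses described above, using synthetic stand-ins for the 49 site records; the variable names and data are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 49                                  # number of transect sites
# Illustrative synthetic site data (the real study used field measurements)
map_mm = rng.uniform(40, 400, n_sites)        # mean annual precipitation (mm)
mat_c = rng.uniform(-2, 9, n_sites)           # mean annual temperature (C)
biomass = 2.0 * map_mm - 15.0 * mat_c + rng.normal(0, 50, n_sites)

# Simple ("unitary") regression: biomass against one variable at a time
for name, x in [("MAP", map_mm), ("MAT", mat_c)]:
    r = np.corrcoef(x, biomass)[0, 1]
    slope, intercept = np.polyfit(x, biomass, 1)
    print(f"{name}: r = {r:+.2f}, slope = {slope:+.2f}")

# Multiple regression: biomass ~ MAP + MAT (ordinary least squares)
X = np.column_stack([np.ones(n_sites), map_mm, mat_c])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
print("OLS coefficients (intercept, MAP, MAT):", np.round(coef, 2))
```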

Relevance: 90.00%

Abstract:

This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
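The selection strategies compared here can be sketched roughly as follows. This is an illustrative reading using networkx, with node degree standing in for centrality and a one-hop spacing rule standing in for "far away from one another"; the function names are assumptions.

```python
import random
import networkx as nx

def random_landmarks(G, k, seed=0):
    """Baseline: k landmarks chosen uniformly at random."""
    rng = random.Random(seed)
    return rng.sample(list(G.nodes), k)

def degree_landmarks(G, k):
    """Central nodes: the k highest-degree vertices."""
    return sorted(G.nodes, key=G.degree, reverse=True)[:k]

def spread_landmarks(G, k):
    """Central nodes that are also spread out: greedily take the
    highest-degree node not adjacent to any already-chosen landmark."""
    chosen = []
    for v in sorted(G.nodes, key=G.degree, reverse=True):
        if all(u not in G[v] for u in chosen):
            chosen.append(v)
            if len(chosen) == k:
                break
    return chosen

G = nx.erdos_renyi_graph(2000, 0.005, seed=42)
print(degree_landmarks(G, 5), spread_landmarks(G, 5))
```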

Relevance: 90.00%

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
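A minimal sketch of the offline embedding and online estimate described above, assuming an unweighted networkx graph: combining the two embeddings with min over landmarks of d(u, l) + d(l, v) is the standard triangle-inequality upper bound, and the helper names are illustrative.

```python
import networkx as nx

def build_embeddings(G, landmarks):
    """Offline: a BFS from each landmark gives every node's distance vector."""
    dist = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}
    return {v: [dist[l].get(v, float("inf")) for l in landmarks] for v in G}

def estimate_distance(emb_u, emb_v):
    """Online: triangle-inequality upper bound min_l d(u,l) + d(l,v);
    a lower bound would instead use max_l |d(u,l) - d(l,v)|."""
    return min(du + dv for du, dv in zip(emb_u, emb_v))

G = nx.erdos_renyi_graph(1000, 0.01, seed=7)
landmarks = sorted(G.nodes, key=G.degree, reverse=True)[:8]  # central nodes
emb = build_embeddings(G, landmarks)
u, v = 3, 500
print("estimate:", estimate_distance(emb[u], emb[v]),
      "exact:", nx.shortest_path_length(G, u, v))
```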

Relevance: 90.00%

Abstract:

Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.

Relevance: 90.00%

Abstract:

This paper presents an Eulerian-based numerical model of particle degradation in dilute-phase pneumatic conveying systems including bends of different angles. The model shows reasonable agreement with detailed measurements from a pilot-sized pneumatic conveying system and a much larger scale pneumatic conveyor. The potential of the model to predict degradation in a large-scale conveying system from an industrial plant is demonstrated. The importance of the effect of the bend angle on the damage imparted to the particles is discussed.

Relevance: 90.00%

Abstract:

Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first concerns the computational speed of simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains; each sub-domain is computed on a separate processor and runs its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-storey high-rise building scenario. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
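As a rough consistency check (ours, not the paper's), Amdahl's law can back out the serial fraction implied by the reported speed-up of 3.7 on six processors:

```python
# Amdahl's law: S(N) = 1 / (f + (1 - f) / N), where f is the serial fraction.
# Solving for f given the reported speed-up S = 3.7 on N = 6 processors:
S, N = 3.7, 6
f = (N / S - 1) / (N - 1)
print(f"implied serial fraction f = {f:.3f}")       # ~0.124
print(f"parallel efficiency = {S / N:.2f}")         # ~0.62
# With this f, the speed-up achievable as N grows large is bounded by 1/f:
print(f"asymptotic speed-up limit = {1 / f:.1f}")   # ~8.0
```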

Relevance: 90.00%

Abstract:

This study investigates the use of computer-modelled versus directly experimentally determined fire hazard data for assessing survivability within buildings, using evacuation models incorporating Fractional Effective Dose (FED) models. The objective is to establish a link between effluent toxicity, measured using a variety of small- and large-scale tests, and building evacuation. For the scenarios under consideration, fire simulation is typically used to determine the time at which non-survivable conditions develop within the enclosure, for example when the smoke or toxic effluent layer falls below a critical height deemed detrimental to evacuation, or when the radiative fluxes reach a critical value leading to the onset of flashover. The evacuation calculation would then be used to determine whether people within the structure could evacuate before these critical conditions develop.
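A minimal sketch of the FED bookkeeping such a coupling implies: each species' concentration history is integrated over the exposure and divided by its lethal exposure dose, and the fractions are summed. The species list, exposure histories, and dose constants below are illustrative placeholders, not buildingEXODUS values or reference toxicology data.

```python
import numpy as np

def fed(times, conc_by_species, ct_lethal):
    """Fractional Effective Dose: FED = sum_i (integral of C_i dt) / (C*t)_i.
    Incapacitation is conventionally assumed once FED reaches 1.0."""
    total = 0.0
    dt = np.diff(times)
    for species, conc in conc_by_species.items():
        dose = np.sum(0.5 * (conc[1:] + conc[:-1]) * dt)  # trapezoid, ppm*min
        total += dose / ct_lethal[species]
    return total

# Illustrative 10-minute exposure histories (ppm) on a 1-minute grid
t = np.linspace(0.0, 10.0, 11)
histories = {"CO": 200.0 + 80.0 * t,    # rising CO concentration
             "HCN": 5.0 + 2.0 * t}
# Illustrative lethal exposure doses (ppm*min), not reference values
ct = {"CO": 35000.0, "HCN": 1500.0}
print(f"FED after 10 min = {fed(t, histories, ct):.3f}")
```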

Relevance: 90.00%

Abstract:

In the frame of the European Project on Ocean Acidification (EPOCA), the response of an Arctic pelagic community (<3 mm) to a gradient of seawater pCO2 was investigated. For this purpose, 9 large-scale in situ mesocosms were deployed in Kongsfjorden, Svalbard (78°56.2' N, 11°53.6' E), in 2010. The present study investigates effects on the communities of particle-attached (PA; >3 µm) and free-living (FL; <3 µm, >0.2 µm) bacteria by Automated Ribosomal Intergenic Spacer Analysis (ARISA) in 6 of the mesocosms, ranging from 185 to 1050 µatm initial pCO2, and in the surrounding fjord. ARISA resolved, on average, 27 bacterial band classes per sample and allowed a detailed investigation of explicit richness and diversity. Both the PA and the FL bacterioplankton communities exhibited a strong temporal development, driven mainly by temperature and phytoplankton development. In response to the breakdown of a picophytoplankton bloom, the number of ARISA band classes in the PA community was reduced at low and medium CO2 (~185-685 µatm) by about 25%, while it was more or less stable at high CO2 (~820-1050 µatm). We hypothesise that enhanced viral lysis and enhanced availability of organic substrates at high CO2 resulted in a more diverse PA bacterial community in the post-bloom phase. Despite lower cell numbers and extracellular enzyme activities in the post-bloom phase, bacterial protein production was enhanced in the high-CO2 mesocosms, suggesting a positive effect of community richness on this function and on carbon cycling by bacteria.
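For readers unfamiliar with how ARISA profiles yield the richness and diversity figures quoted above, here is a minimal sketch with an illustrative band-class matrix (random placeholder data, not the study's):

```python
import numpy as np

# Rows = samples, columns = ARISA band classes; values = relative band
# intensity, with 0 meaning the band class is absent in that sample.
rng = np.random.default_rng(3)
profile = rng.random((6, 40)) * (rng.random((6, 40)) > 0.4)

richness = (profile > 0).sum(axis=1)               # band classes per sample
p = profile / profile.sum(axis=1, keepdims=True)   # relative intensities
with np.errstate(divide="ignore", invalid="ignore"):
    # Shannon index H' = -sum p*ln(p), with 0*ln(0) treated as 0
    shannon = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=1)

for i, (r, h) in enumerate(zip(richness, shannon)):
    print(f"sample {i}: richness = {r}, Shannon H' = {h:.2f}")
```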