851 results for Large-scale enterprises
Abstract:
Large-scale estimates of the area of terrestrial surface waters have greatly improved over time, in particular through the development of multi-satellite methodologies, but the generally coarse spatial resolution (tens of kms) of global observations is still inadequate for many ecological applications. The goal of this study is to introduce a new, globally applicable downscaling method and to demonstrate its applicability to derive fine-resolution results from coarse global inundation estimates. The downscaling procedure predicts the location of surface water cover with an inundation probability map that was generated by bagged decision trees using globally available topographic and hydrographic information from the SRTM-derived HydroSHEDS database and trained on the wetland extent of the GLC2000 global land cover map. We applied the downscaling technique to the Global Inundation Extent from Multi-Satellites (GIEMS) dataset to produce a new high-resolution inundation map at a pixel size of 15 arc-seconds, termed GIEMS-D15. GIEMS-D15 represents three states of land surface inundation extents: mean annual minimum (total area, 6.5 x 10^6 km^2), mean annual maximum (12.1 x 10^6 km^2), and long-term maximum (17.3 x 10^6 km^2); the latter depicts the largest surface water area of any global map to date. While the accuracy of GIEMS-D15 reflects distribution errors introduced by the downscaling process as well as errors from the original satellite estimates, overall accuracy is good yet spatially variable. A comparison against regional wetland cover maps generated by independent observations shows that the results adequately represent large floodplains and wetlands. GIEMS-D15 offers a higher-resolution delineation of inundated areas than previously available for the assessment of global freshwater resources and the study of large floodplain and wetland ecosystems.
The technique of applying inundation probabilities also allows for coupling with coarse-scale hydro-climatological model simulations. (C) 2014 Elsevier Inc. All rights reserved.
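The downscaling idea described above can be sketched with synthetic data. This is a minimal illustration, not the authors' implementation: scikit-learn's BaggingClassifier stands in for the paper's bagged decision trees, and the predictor and label arrays (elev, slope, y) are illustrative stand-ins for the HydroSHEDS topography and GLC2000 wetland mask, not the actual datasets.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic fine-pixel predictors (stand-ins for HydroSHEDS layers).
n = 400
elev = rng.uniform(0, 100, n)
slope = rng.uniform(0, 10, n)
X = np.column_stack([elev, slope])

# Synthetic wetland labels (stand-in for GLC2000): low, flat pixels are wet.
y = ((elev < 40) & (slope < 3)).astype(int)

# Bagged decision trees give each fine pixel an inundation probability.
model = BaggingClassifier(DecisionTreeClassifier(max_depth=4),
                          n_estimators=25, random_state=0)
model.fit(X, y)
prob = model.predict_proba(X)[:, 1]

# Downscale one coarse cell: flood the highest-probability fine pixels
# until the coarse-scale inundated fraction (here 20%) is met.
frac = 0.20
k = int(round(frac * n))
flooded = np.zeros(n, dtype=bool)
flooded[np.argsort(prob)[::-1][:k]] = True
print(flooded.sum(), "of", n, "fine pixels flooded")
```

The key property is that the coarse-scale inundated area is conserved exactly, while the probability map decides where within the cell the water is placed.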
Abstract:
In this paper we address one powerful simulation tool developed during the last decades, Large Eddy Simulation (LES), which is well suited to unsteady three-dimensional complex turbulent flows in industry and the natural environment. The main point of LES is that the large-scale motion is resolved while the small-scale motion is modeled or, in geophysical terminology, parameterized. With a view to devising a subgrid-scale (SGS) model of high quality, we highlight the physical aspects of scale interaction and energy transfer, such as dissipation, backscatter, local and non-local interaction, anisotropy, and resolution requirements. These factors are responsible for the advantages and disadvantages of existing SGS models. A case study on LES of turbulence in a vegetative canopy is presented to illustrate that the LES model is based largely on physical arguments. A variety of challenging complex turbulent flows in both industrial and geophysical fields that lie in the near future are then presented. In conclusion, we may say with confidence that the new century will see flourishing turbulence research aided by LES combined with other approaches.
Abstract:
Decision trees need training samples in the training data set to derive classification rules. If the number of training samples is too small, important information may be missed and the model may fail to capture the classification rules of the data; yet it is not guaranteed that a large training data set yields a good model. This paper analyzes the relationship between decision trees and the scale of the training data. We use nine decision tree algorithms in experiments on the accuracy, complexity, and robustness of decision tree algorithms, and some results are demonstrated.
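A toy version of this kind of experiment can be sketched as follows. The data, sizes, and single CART-style tree below are illustrative assumptions; the paper's nine specific algorithms and its data sets are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification task; the tree is trained on progressively
# larger subsets and evaluated on a fixed held-out test set.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)

accs = []
for n in [20, 100, 1000]:
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:n], y_tr[:n])
    accs.append(clf.score(X_te, y_te))
    print(n, "training samples -> test accuracy", round(accs[-1], 3))
```

Plotting such accuracies against training-set size is the standard way to see whether more data still helps or the model has saturated.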
Abstract:
RRAs were carried out in two Small Tank Cascade systems (STCs) of North West Province, Sri Lanka (less than 1000 ha total watershed area). A total of 21 tanks and 7 villages were investigated with primary emphasis on two upper watershed communities. The two systems differ primarily in their resource base; namely rainfall, natural forests and proximity to large scale perennial irrigation resources. [PDF contains 86 pages]
Abstract:
This thesis explores the dynamics of scale interactions in a turbulent boundary layer through a forcing-response type experimental study. An emphasis is placed on the analysis of triadic wavenumber interactions since the governing Navier-Stokes equations for the flow necessitate a direct coupling between triadically consistent scales. Two sets of experiments were performed in which deterministic disturbances were introduced into the flow using a spatially-impulsive dynamic wall perturbation. Hotwire anemometry was employed to measure the downstream turbulent velocity and study the flow response to the external forcing. In the first set of experiments, which were based on a recent investigation of dynamic forcing effects in a turbulent boundary layer, a 2D (spanwise constant) spatio-temporal normal mode was excited in the flow; the streamwise length and time scales of the synthetic mode roughly correspond to the very-large-scale motions (VLSM) found naturally in canonical flows. Correlation studies between the large- and small-scale velocity signals reveal an alteration of the natural phase relations between scales by the synthetic mode. In particular, a strong phase-locking or organizing effect is seen on directly coupled small scales through triadic interactions. Having characterized the bulk influence of a single energetic mode on the flow dynamics, a second set of experiments aimed at isolating specific triadic interactions was performed. Two distinct 2D large-scale normal modes were excited in the flow, and the response at the corresponding sum and difference wavenumbers was isolated from the turbulent signals. Results from this experiment serve as a unique demonstration of direct non-linear interactions in a fully turbulent wall-bounded flow, and allow for examination of phase relationships involving specific interacting scales. A direct connection is also made to the Navier-Stokes resolvent operator framework developed in recent literature.
Results and analysis from the present work offer insights into the dynamical structure of wall turbulence, and have interesting implications for design of practical turbulence manipulation or control strategies.
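The sum- and difference-wavenumber response central to the second experiment can be illustrated numerically (a toy sketch, not the thesis's analysis): squaring a signal containing two modes, a stand-in for the quadratic Navier-Stokes nonlinearity, generates spectral energy at 2*k1, 2*k2, and k1 +/- k2.

```python
import numpy as np

# Two "large-scale modes" at integer wavenumbers k1 and k2 on a
# periodic domain; the quadratic nonlinearity u**2 couples them.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k1, k2 = 5, 8
u = np.cos(k1 * x) + np.cos(k2 * x)

# Spectrum of u**2: peaks appear at 0 (mean), 2*k1, 2*k2,
# and the triadic sum/difference wavenumbers k1 + k2 and k1 - k2.
spec = np.abs(np.fft.rfft(u ** 2)) / n
peaks = sorted(k for k in range(n // 2 + 1) if spec[k] > 0.1)
print(peaks)  # -> [0, 3, 10, 13, 16]
```

Only triadically consistent wavenumbers receive energy, which is exactly why isolating the response at k1 + k2 and k1 - k2 demonstrates a direct non-linear interaction.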
Abstract:
The proposal of association between municipalities, especially among smaller ones, would be an alternative for dealing with the problem of resource scarcity caused by the municipalities' lack of financial autonomy. Although the democratic Constitution of 1988 raised the Municipality to the status of an autonomous federative entity, this reality was not accompanied by the necessary financial autonomy, nor do municipalities have the scale to provide the public services essential to the local population. This work gives special attention to this question, particularly in view of the impacts caused by large industrial undertakings, such as Comperj (Complexo Petroquímico do Rio de Janeiro), on the municipalities of Itaboraí, São Gonçalo, and neighboring cities. These municipalities, although they have distinct characteristics, can gain advantages by acting together, better managing scarce resources and providing public services capable of serving their residents. Confirming the importance of such association, whether in the form of metropolitan regions, microregions, or the inter-municipal consortium that is the central object of this study, the recent Public Consortium Law (Law no. 11,107/2005) underscored the relevance of consortiums, bringing more legal certainty to those who use them, with the support of the state and federal governments. The municipalities affected by Comperj, recognizing the importance of this joint effort, created Conleste (Consórcio Municipal do Leste Fluminense), already backed by the new law, precisely for this purpose: to deal with the impacts of this large undertaking, seeking to mitigate the negative effects and proposing solutions applicable to all, planning and thinking about the future.
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): An empirically derived multiple linear regression model is used to relate a local-scale dependent variable (either temperature, precipitation, or surface runoff) measured at individual gauging stations to six large-scale independent variables (temperature, precipitation, surface runoff, height to the 500-mbar pressure surface, and the zonal and meridional gradient across this surface). ...The area investigated is the western United States. ... The calibration data set is from 1948 through 1988 and includes data from 268 joint temperature and precipitation stations, 152 streamflow stations (which are converted to runoff data), and 24 gridded 500-mbar pressure height nodes.
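The regression setup described above can be sketched with synthetic data. Everything here is an illustrative assumption: the six predictors, the coefficients, and the record count merely mimic the structure of the study (six large-scale variables, monthly data 1948-1988), not its actual calibration data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Six synthetic large-scale predictors (stand-ins for temperature,
# precipitation, runoff, 500-mbar height, and its two gradients).
n = 41 * 12  # monthly records spanning 1948-1988
X = rng.normal(size=(n, 6))
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.3])
y = X @ beta_true + 10 + rng.normal(scale=0.1, size=n)

# Fit the local-scale variable y = b0 + X b by ordinary least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))
```

With enough records the fitted coefficients recover the generating ones, which is the premise of using such a model to downscale large-scale fields to individual gauging stations.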
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): The mass balance of glaciers depends on the seasonal variation in precipitation, temperature, and insolation. For glaciers in western North America, these meteorological variables are influenced by the large-scale atmospheric circulation over the northern Pacific Ocean. The purpose of this study is to gain a better understanding of the relationship between mass balance at glaciers in western North America and the large-scale atmospheric effects at interannual and decadal time scales.
Abstract:
The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph as well as flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world's largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency.
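In the spirit of the algorithm described above, a toy greedy matcher can mix property (name) similarity with structural agreement among already-matched neighbors. This is a hedged re-implementation of the idea, not the authors' code; the knowledge bases, the scoring weights, and the function names are all illustrative.

```python
from difflib import SequenceMatcher

# Toy knowledge bases: entity name -> set of related entities.
kb1 = {
    "paris": {"france"},
    "france": {"paris", "lyon"},
    "lyon": {"france"},
}
kb2 = {
    "Paris": {"Republic of France"},
    "Republic of France": {"Paris", "Lyon"},
    "Lyon": {"Republic of France"},
}

def name_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def sigma_like_align(kb1, kb2, alpha=0.5, min_score=0.3):
    """Greedy local search: repeatedly commit the best-scoring pair,
    where the score mixes name similarity with the fraction of a
    candidate's neighbors whose match is a neighbor of the other side."""
    matched, used2 = {}, set()
    while True:
        best = None
        for e1 in kb1:
            if e1 in matched:
                continue
            for e2 in kb2:
                if e2 in used2:
                    continue
                nbr = kb1[e1]
                hits = sum(1 for m in nbr if matched.get(m) in kb2[e2])
                struct = hits / max(len(nbr), 1)
                score = alpha * name_sim(e1, e2) + (1 - alpha) * struct
                if best is None or score > best[0]:
                    best = (score, e1, e2)
        if best is None or best[0] < min_score:
            break
        _, e1, e2 = best
        matched[e1] = e2
        used2.add(e2)
    return matched

print(sigma_like_align(kb1, kb2))
```

Note how "france" and "Republic of France" are matched despite weak name similarity: once "paris"/"Paris" is committed, the shared neighbor raises the structural score, which is the propagation effect that makes this family of algorithms work.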
Abstract:
Large-Eddy Simulation (LES) and hybrid Reynolds-averaged Navier–Stokes–LES (RANS–LES) methods are applied to a turbine blade ribbed internal duct with a 180° bend containing 24 pairs of ribs. Flow and heat transfer predictions are compared with experimental data and found to be in agreement. The choice of LES model is found to be of minor importance as the flow is dominated by large geometric-scale structures. This is in contrast to several linear and nonlinear RANS models, which display turbulence model sensitivity. For LES, the influence of inlet turbulence is also tested and has a minor impact due to the strong turbulence generated by the ribs. Large-scale turbulent motions destroy any classical boundary layer, reducing near-wall grid requirements. The wake-type flow structure makes this and similar flows nearly Reynolds number independent, allowing a range of flows to be studied at similar cost. Hence LES is a relatively cheap method for obtaining accurate heat transfer predictions in these types of flows.
Abstract:
Superhigh-aspect-ratio Cu-thiourea (Cu(tu)) nanowires have been synthesized in large quantity via a fast and facile method. Nanowires of Cu(tu)Cl·0.5H2O and Cu(tu)Br·0.5H2O were found to be 60-100 nm and 100-200 nm in diameter, respectively, and could extend to several millimeters in length. This is found to be the most convenient and facile approach to date for the large-scale fabrication of one-dimensional superhigh-aspect-ratio nanomaterials.
Abstract:
Pyatt, B., Barker, G., Birch, P., Gilbertson, D., Grattan, J., Mattingly, D. King Solomon's Miners - Starvation and Bioaccumulation? An Environmental Archaeological Investigation in Southern Jordan. Ecotoxicology and Environmental Safety (Environmental Research, Section B) 43, 305-308 (1999).
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response time. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach which considers selecting landmarks at random. Finally, they are applied in two important problems arising naturally in large-scale graphs, namely social search and community detection.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
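The landmark scheme described above can be sketched on a toy graph. This is a minimal illustration under stated assumptions: the graph, the landmark choice, and the BFS-based offline step are illustrative, and none of the paper's selection strategies (e.g. centrality-based) are implemented here.

```python
from collections import deque

# Toy unweighted graph: a simple path a-b-c-d-e-f.
graph = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
    "d": ["c", "e"], "e": ["d", "f"], "f": ["e"],
}

def bfs_distances(graph, src):
    """Exact single-source shortest paths (unweighted) by BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Offline: embed every node as its vector of distances to the landmarks.
landmarks = ["a", "c"]
emb = {lm: bfs_distances(graph, lm) for lm in landmarks}

def estimate(u, v):
    # Triangle inequality: d(u,v) <= d(u,l) + d(l,v); take the tightest bound.
    return min(emb[lm][u] + emb[lm][v] for lm in landmarks)

print(estimate("b", "e"))  # -> 3 (exact here: landmark c lies on the path)
```

The estimate is always an upper bound and becomes exact whenever some landmark lies on a shortest path between the queried nodes, which is why the more elaborate strategies pick central, well-spread landmarks.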