855 results for Railroad large scale apparatus
Abstract:
This thesis explores the dynamics of scale interactions in a turbulent boundary layer through a forcing-response type experimental study. An emphasis is placed on the analysis of triadic wavenumber interactions, since the governing Navier-Stokes equations for the flow necessitate a direct coupling between triadically consistent scales. Two sets of experiments were performed in which deterministic disturbances were introduced into the flow using a spatially-impulsive dynamic wall perturbation. Hot-wire anemometry was employed to measure the downstream turbulent velocity and study the flow response to the external forcing. In the first set of experiments, which were based on a recent investigation of dynamic forcing effects in a turbulent boundary layer, a 2D (spanwise-constant) spatio-temporal normal mode was excited in the flow; the streamwise length and time scales of the synthetic mode roughly correspond to the very-large-scale motions (VLSMs) found naturally in canonical flows. Correlation studies between the large- and small-scale velocity signals reveal an alteration of the natural phase relations between scales by the synthetic mode. In particular, a strong phase-locking or organizing effect is seen on small scales that are directly coupled through triadic interactions. Having characterized the bulk influence of a single energetic mode on the flow dynamics, a second set of experiments aimed at isolating specific triadic interactions was performed. Two distinct 2D large-scale normal modes were excited in the flow, and the response at the corresponding sum and difference wavenumbers was isolated from the turbulent signals. Results from this experiment serve as a unique demonstration of direct non-linear interactions in a fully turbulent wall-bounded flow, and allow for examination of phase relationships involving specific interacting scales. A direct connection is also made to the Navier-Stokes resolvent operator framework developed in recent literature. Results and analysis from the present work offer insights into the dynamical structure of wall turbulence, and have interesting implications for the design of practical turbulence manipulation or control strategies.
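For readers unfamiliar with the triadic constraint invoked above, it follows from the quadratic nonlinearity of the Navier-Stokes equations; a standard sketch in Fourier space (notation generic, not taken from the thesis):

```latex
% The quadratic term u.grad(u) becomes a convolution in Fourier space,
% so a mode at wavenumber k_3 is forced only by pairs of modes whose
% wavenumbers sum to k_3 (a "triadically consistent" set):
\widehat{(u \cdot \nabla u)}(k_3)
  = \sum_{k_1 + k_2 = k_3} \left[ \hat{u}(k_1) \cdot i k_2 \right] \hat{u}(k_2)
% Exciting two large-scale modes k_a and k_b therefore injects energy
% directly at the sum k_a + k_b and, through the conjugate mode -k_b,
% at the difference k_a - k_b, which is what the second experiment isolates.
```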
Abstract:
This research aims to reflect, from the perspective of discourse, on the news culture surrounding the Landless Rural Workers' Movement (Movimento dos Trabalhadores Rurais Sem Terra, MST), with regard to the journalistic coverage by the newspapers Zero Hora and Folha de S.Paulo of the political lines on the conjunctural issues presented by the Movement at its last three National Congresses (1995, 2000 and 2007), in order to demonstrate the treatment given by the media to the MST and the way in which the discursive formations in journalistic textualizations are indicative of a permanent tension around the struggle for land, which hinders the Movement's dialogue with society. This work also intends to debate what intervention the MST has in the construction of the political and public agendas and why the Movement is unable to bring about changes in its news framing, and thus to ascertain what the saturation process of media discourse, in this case that of print journalism, is capable of producing upon society, starting from the hypothesis that the media, in general, function as a political-ideological apparatus that elaborates and disseminates worldviews, fulfilling the function of contributing orientations that influence the understanding of social facts. The mediation of the mass media, in general, produces a displacement in public experience and, at the same time, shapes the possible forms of knowledge that this experience develops about itself. We know that the ideologies present in journalistic discourses may not produce new knowledge about the world, but they do produce a recognition of the world as we have already learned to appropriate it. It will be demonstrated that, in the current phase of capitalism (a system that demands ever greater valorization of information), ideological reproduction takes place directly through the media, by means of story assignments and agendas. Considering the context presented by the research, the work also highlights two guiding threads for achieving its objectives: the media's submission to neoliberal hegemony and the MST's struggle for agrarian reform in the face of the valorization of latifundiary agribusiness.
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): An empirically derived multiple linear regression model is used to relate a local-scale dependent variable (either temperature, precipitation, or surface runoff) measured at individual gauging stations to six large-scale independent variables (temperature, precipitation, surface runoff, height to the 500-mbar pressure surface, and the zonal and meridional gradient across this surface). ...The area investigated is the western United States. ... The calibration data set is from 1948 through 1988 and includes data from 268 joint temperature and precipitation stations, 152 streamflow stations (which are converted to runoff data), and 24 gridded 500-mbar pressure height nodes.
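As a minimal illustration of this kind of statistical-downscaling regression (all names and coefficients below are hypothetical, not values from the study), an ordinary least-squares fit of one station series onto six large-scale predictors could look like:

```python
import numpy as np

# Hypothetical sketch of the downscaling regression: one local dependent
# variable (e.g. station precipitation) fitted on six large-scale
# predictors (temperature, precipitation, runoff, 500-mbar height, and
# the zonal/meridional gradients of that height field).
rng = np.random.default_rng(0)
n_months = 492                        # e.g. a 1948-1988 calibration period
X = rng.normal(size=(n_months, 6))    # standardized large-scale predictors
y = X @ np.array([0.5, 1.2, 0.3, -0.8, 0.1, 0.4]) \
    + rng.normal(scale=0.5, size=n_months)   # synthetic station series

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_months), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```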
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): The mass balance of glaciers depends on the seasonal variation in precipitation, temperature, and insolation. For glaciers in western North America, these meteorological variables are influenced by the large-scale atmospheric circulation over the northern Pacific Ocean. The purpose of this study is to gain a better understanding of the relationship between mass balance at glaciers in western North America and the large-scale atmospheric effects at interannual and decadal time scales.
Abstract:
The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph and flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world's largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches in both accuracy and efficiency.
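A toy sketch of greedy, propagation-based alignment in the spirit described above (illustrative only; this is not the authors' SiGMa implementation, and all entity names are hypothetical):

```python
import heapq

def property_sim(props_a, props_b):
    """Jaccard similarity between two entities' property sets."""
    if not props_a or not props_b:
        return 0.0
    return len(props_a & props_b) / len(props_a | props_b)

def greedy_align(props1, props2, nbrs1, nbrs2, seeds, threshold=0.5):
    """Greedily grow a 1-to-1 alignment outward from seed matches."""
    matched = dict(seeds)                 # KB1 entity -> KB2 entity
    used = set(matched.values())
    heap = []                             # max-heap via negated scores

    def push_candidates(a, b):
        # Propagation: a matched pair suggests pairs among its neighbors.
        for na in nbrs1.get(a, ()):
            for nb in nbrs2.get(b, ()):
                if na not in matched and nb not in used:
                    s = property_sim(props1.get(na, set()),
                                     props2.get(nb, set()))
                    if s >= threshold:
                        heapq.heappush(heap, (-s, na, nb))

    for a, b in seeds.items():
        push_candidates(a, b)
    while heap:                           # always take the best open pair
        _, a, b = heapq.heappop(heap)
        if a in matched or b in used:
            continue                      # stale candidate; skip
        matched[a] = b
        used.add(b)
        push_candidates(a, b)
    return matched

# Tiny example: two toy knowledge bases sharing one seed match.
props1 = {"Paris": {"city", "France"}, "Berlin": {"city", "Germany"}}
props2 = {"paris_fr": {"city", "France"}, "berlin_de": {"city", "Germany"}}
nbrs1 = {"FR": ["Paris", "Berlin"]}
nbrs2 = {"fr": ["paris_fr", "berlin_de"]}
print(greedy_align(props1, props2, nbrs1, nbrs2, seeds={"FR": "fr"}))
```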
Abstract:
The recent advances in urban wireless communications and the protocols that spurred the development of city-wide wireless infrastructure motivated this research, since in many cases construction sites are not conveniently located for wired connectivity. Large-scale transportation projects, such as new highways, railroad tracks, and the networks of utilities (power lines, phone lines, mobile towers, etc.) that usually follow them, are constructed in areas where wired infrastructure for data exchange is often expensive and time-consuming to deploy. The communication difficulties that can be encountered on such construction sites can be addressed with a wireless communications link between the construction site and the decision-making office. This paper presents a case study on long-range wireless communications suitable for data exchange between construction sites and engineering headquarters. The purpose of this study was to define the requirements for a reliable wireless communications model in which common types of electronic construction data can be exchanged in a fast and efficient manner, and construction site personnel can interact and share knowledge, information, and electronic resources with the office staff.
Abstract:
Large-Eddy Simulation (LES) and hybrid Reynolds-averaged Navier–Stokes–LES (RANS–LES) methods are applied to a turbine-blade ribbed internal duct with a 180° bend containing 24 pairs of ribs. Flow and heat transfer predictions are compared with experimental data and found to be in agreement. The choice of LES model is found to be of minor importance, as the flow is dominated by large geometric-scale structures. This is in contrast to several linear and nonlinear RANS models, which display turbulence-model sensitivity. For LES, the influence of inlet turbulence is also tested and has a minor impact due to the strong turbulence generated by the ribs. Large-scale turbulent motions destroy any classical boundary layer, reducing near-wall grid requirements. The wake-type flow structure makes this and similar flows nearly Reynolds-number independent, allowing a range of flows to be studied at similar cost. Hence, LES is a relatively cheap method for obtaining accurate heat transfer predictions in these types of flows.
Abstract:
A large-scale process combining sonication with self-assembly techniques for the preparation of high-density gold nanoparticles supported on a [Ru(bpy)3]2+-doped silica/Fe3O4 nanocomposite (GNRSF) is provided. The obtained hybrid nanomaterials containing Fe3O4 spheres have high saturation magnetization, which leads to their effective immobilization on the surface of an ITO electrode through simple manipulation by an external magnetic field (without the need for a special immobilization apparatus). Furthermore, this hybrid nanomaterial film exhibits good and very stable electrochemiluminescence (ECL) behavior, which gives a linear response for tripropylamine (TPA) concentrations between 5 μM and 0.21 mM, with a detection limit in the micromolar range. The sensitivity of this ECL sensor can be easily controlled by the amount of [Ru(bpy)3]2+ immobilized on the hybrid nanomaterials (that is, by varying the amount of [Ru(bpy)3]2+ during GNRSF synthesis).
Abstract:
Superhigh-aspect-ratio copper-thiourea (Cu(tu)) nanowires have been synthesized in large quantity via a fast and facile method. Nanowires of Cu(tu)Cl·0.5H2O and Cu(tu)Br·0.5H2O were found to be 60-100 nm and 100-200 nm in diameter, respectively, and could extend to several millimeters in length. This is the most convenient and facile approach to the large-scale fabrication of one-dimensional, superhigh-aspect-ratio nanomaterials reported so far.
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to the real-world huge graphs found in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally on five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which considers selecting landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally on five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which considers selecting landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
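To make the landmark technique from the two preceding abstracts concrete, here is a minimal sketch of the offline/online split (generic BFS on an unweighted graph; not the paper's actual code):

```python
from collections import deque

# Offline phase: BFS from each landmark stores d(landmark, v) for every
# node. Online phase: the triangle inequality gives the upper bound
#   d(s, t) <= min over landmarks L of d(s, L) + d(L, t),
# which is exact whenever some landmark lies on a shortest s-t path.

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, landmarks):
    return [bfs_distances(graph, L) for L in landmarks]

def estimate_distance(index, s, t):
    return min(d[s] + d[t] for d in index if s in d and t in d)

# Tiny example on the undirected path graph 0-1-2-3-4.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
index = build_index(graph, landmarks=[0, 4])
print(estimate_distance(index, 1, 4))  # 3, exact: landmark 4 is an endpoint
```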
Abstract:
Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.
Abstract:
This paper presents an Eulerian-based numerical model of particle degradation in dilute-phase pneumatic conveying systems including bends of different angles. The model shows reasonable agreement with detailed measurements from a pilot-sized pneumatic conveying system and a much larger-scale pneumatic conveyor. The potential of the model to predict degradation in a large-scale conveying system from an industrial plant is demonstrated. The importance of the effect of the bend angle on the damage imparted to the particles is discussed.
Abstract:
Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first concerns the computational speed of the simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus, for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains. Each sub-domain is computed on a separate processor, which runs its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-story high-rise building scenario. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
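The domain-decomposition strategy described can be sketched in a few lines (a toy illustration under stated assumptions; buildingEXODUS itself is a far richer agent-based code, and a real implementation must also exchange occupants that cross sub-domain boundaries at each time step):

```python
from multiprocessing import Pool

def advance_subdomain(strip):
    """Advance all occupants in one strip by one time step (toy rule)."""
    return [(x + 1, y) for (x, y) in strip]   # everyone moves one cell right

def partition(agents, n_subdomains, height):
    """Split the floor plan into horizontal strips, one per processor."""
    strips = [[] for _ in range(n_subdomains)]
    for x, y in agents:
        strips[min(y * n_subdomains // height, n_subdomains - 1)].append((x, y))
    return strips

if __name__ == "__main__":
    agents = [(0, y) for y in range(100)]      # toy occupant positions
    strips = partition(agents, n_subdomains=4, height=100)
    with Pool(4) as pool:                      # one process per sub-domain
        moved = pool.map(advance_subdomain, strips)
    print(sum(len(s) for s in moved), "agents advanced")
```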
Abstract:
This study investigates the use of computer-modelled versus directly experimentally determined fire hazard data for assessing survivability within buildings, using evacuation models incorporating Fractional Effective Dose (FED) models. The objective is to establish a link between effluent toxicity, measured using a variety of small- and large-scale tests, and building evacuation. For the scenarios under consideration, fire simulation is typically used to determine the time at which non-survivable conditions develop within the enclosure, for example, when smoke or toxic effluent falls below a critical height deemed detrimental to evacuation, or when the radiative fluxes reach a critical value leading to the onset of flashover. The evacuation calculation would then be used to determine whether people within the structure could evacuate before these critical conditions develop.
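For reference, FED models conventionally accumulate a normalized toxic dose over the exposure; a standard Purser-style formulation (the textbook form, not necessarily the exact variant used in this study) is:

```latex
% Fractional Effective Dose: the concentration C_i(t) of each toxicant
% is integrated over the exposure and normalized by the incapacitating
% dose (C t)_i for that species; incapacitation is assumed once the
% accumulated fraction reaches unity.
\mathrm{FED}(t) \;=\; \sum_{i} \int_{0}^{t} \frac{C_i(\tau)}{(C\,t)_i}\,\mathrm{d}\tau,
\qquad \mathrm{FED} \ge 1 \;\Rightarrow\; \text{incapacitation}
```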