952 results for Eddy covariance
Abstract:
From October 1970 through February 1972, temperature, salinity, dissolved oxygen, Secchi depth and five major nutrients were observed at approximately monthly intervals in Elkhorn Slough and Moss Landing Harbor. In addition, similar hourly observations were made during two tidal studies in the wet and dry seasons. From the summer salinity measurements, a salt balance for Elkhorn Slough is formulated and mean eddy diffusion coefficients are determined. The diffusion model applied to longitudinal phosphate distributions yielded a mean diffusive flux of 12 kg PO4/day (140 µg-at/m^2/day) for the area above the mean tidal prism. Consistent differences, apparently due to differing regeneration rates, were observed in the phosphate and nitrogen distributions. Bottom sediments are proposed as a possible source for phosphate and as a sink for fixed nitrogen. Dairy farms located along central Elkhorn Slough are apparently a source of reduced nitrogen. During summer, nitrogen was found to be the limiting nutrient for primary production in the upper slough. Tidal observations indicated that fresh water of high nutrient concentration consistently entered the harbor from freshwater sources to the south. This source water had a probable phosphate concentration of 40 to 60 µg-at/l and a seasonally varying P:N ratio of 1:16 in winter and 1:5 in summer. Net production and respiration rates are calculated from diurnal variations in dissolved oxygen levels observed in upper Elkhorn Slough. Changes in phosphate associated with the variations in oxygen were close to the accepted ratio of 1:276 by atoms. Document is 88 pages.
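A minimal sketch of the Fickian eddy-diffusion flux calculation behind such an estimate is given below; the diffusion coefficient, cross-sectional area, and phosphate gradient are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch of a 1-D eddy-diffusion (Fickian) flux estimate, of the kind
# used to derive a mean diffusive phosphate flux from a salt balance.
# All numbers below are illustrative placeholders, not data from the thesis.

K = 25.0          # eddy diffusion coefficient, m^2/s (hypothetical)
area = 150.0      # mean cross-sectional area of the channel, m^2 (hypothetical)
dC_dx = -2.0e-6   # longitudinal phosphate gradient, g PO4 per m^3 per m (hypothetical)

flux = -K * area * dC_dx                  # g PO4 per second through the section
flux_kg_per_day = flux * 86400 / 1000.0   # convert to kg PO4 per day

print(f"diffusive flux: {flux_kg_per_day:.2f} kg PO4/day")
```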
Abstract:
A numerical database of turbulent channel flows is generated by large eddy simulation (LES) and validated by comparison with DNS. Three conventional burst-detection techniques, the uv quadrant 2, VITA and mu-level techniques, are applied to the identification of turbulent bursts. With a grouping parameter introduced by Bogard & Tiederman (1986) or Luchik & Tiederman (1987), multiple ejections detected by these techniques that originate from a single burst can be grouped into a single-burst event. The results are compared with experimental results, showing that all techniques yield reasonable average burst periods. However, the uv quadrant 2 and mu-level techniques are found to be superior to VITA in having a larger threshold-independent range.
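A minimal sketch of a uv quadrant-2 detector with burst grouping, of the kind compared in the paper, is shown below; the threshold H, the grouping time, and the synthetic signals are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def quadrant2_bursts(u, v, dt, H=1.0, group_time=0.1):
    """Sketch of uv quadrant-2 ejection detection with grouping.

    u, v : velocity fluctuation series (mean removed); H : threshold on |u'v'|
    in units of u_rms*v_rms; group_time : ejections closer than this are merged
    into one burst (the role of the Bogard & Tiederman / Luchik & Tiederman
    grouping parameter). All values are illustrative, not from the paper.
    """
    strength = np.abs(u * v)
    threshold = H * u.std() * v.std()
    ejecting = (u < 0) & (v > 0) & (strength > threshold)   # quadrant 2 events

    # start times of individual ejections (rising edges of the indicator)
    starts = np.flatnonzero(np.diff(ejecting.astype(int)) == 1) * dt
    if starts.size == 0:
        return np.array([])

    # group ejections separated by less than group_time into single bursts
    bursts = [starts[0]]
    for t in starts[1:]:
        if t - bursts[-1] > group_time:
            bursts.append(t)
    return np.array(bursts)

# usage with synthetic fluctuations
rng = np.random.default_rng(0)
u, v = rng.standard_normal(10000), rng.standard_normal(10000)
events = quadrant2_bursts(u, v, dt=1e-3)
print("mean burst period:", np.diff(events).mean() if events.size > 1 else "n/a")
```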
Abstract:
Random field theory has been used to model spatially averaged soil properties, while geostatistics, which rests on the same common basis (the covariance function), has been used successfully to model and estimate natural resources since the 1960s. Geostatistics should therefore, in principle, be an efficient way to model soil spatial variability. Based on this, the paper presents an alternative approach for estimating the scale of fluctuation, or correlation distance, of a soil stratum by geostatistics. The procedure includes four steps: calculating the experimental variogram from measured data; selecting a suitable theoretical variogram model; fitting the theoretical variogram to the experimental one; and substituting the optimized model parameters into a simple, closed-form relationship between the correlation distance δ and the range a. The paper also gives eight typical expressions relating a and δ. Finally, a practical example is presented to illustrate the methodology.
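The four-step procedure can be illustrated with a short sketch using an exponential variogram model; the synthetic profile, the model choice, and the relation δ = 2a/3 (one of the typical a-δ expressions for the exponential case) are assumptions for illustration, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the four-step procedure for a 1-D profile of a soil property z
# measured at depths x: experimental variogram -> model choice -> fit ->
# convert the fitted range a to a scale of fluctuation delta.

def experimental_variogram(x, z, lags):
    gamma = []
    half_bin = (lags[1] - lags[0]) / 2
    for h in lags:
        d = np.abs(x[:, None] - x[None, :])
        pairs = np.isclose(d, h, atol=half_bin)
        diffs = (z[:, None] - z[None, :])[pairs]
        gamma.append(0.5 * np.mean(diffs ** 2))
    return np.array(gamma)

def exp_model(h, sill, a):           # exponential variogram with practical range a
    return sill * (1.0 - np.exp(-3.0 * h / a))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 20.0, 200)                  # depths, m (synthetic)
z = np.zeros_like(x)
for i in range(1, x.size):
    z[i] = 0.9 * z[i - 1] + rng.normal(scale=0.3)  # synthetic correlated soil property

lags = np.linspace(0.5, 8.0, 16)
gamma = experimental_variogram(x, z, lags)

(sill, a), _ = curve_fit(exp_model, lags, gamma, p0=[gamma.max(), 2.0])
delta = 2.0 * a / 3.0   # assumed a-delta relation for the exponential model
print(f"range a = {a:.2f} m, scale of fluctuation delta = {delta:.2f} m")
```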
Abstract:
This paper deals with turbulence behavior in benthal boundary layers by means of large eddy simulation (LES). The flow is modeled by moving an infinite plate in otherwise quiescent water with an oscillatory and a steady velocity component; the oscillatory component is intended to simulate the wave effect on the flow. A number of large-scale turbulence databases have been established, from which we have obtained turbulence statistics of the boundary layers, such as Reynolds stress, turbulence intensity, skewness and flatness of turbulence, and temporal and spatial scales of turbulent bursts. Particular attention is paid to the dependence of these statistics on two nondimensional parameters, namely the Reynolds number and the current-wave velocity ratio, defined as the steady current velocity over the oscillatory velocity amplitude. It is found that the Reynolds stress and turbulence intensity profiles vary from phase to phase and exhibit two types of distribution within an oscillatory cycle: one is monotonic, occurring while the current and wave-induced components are in the same direction, and the other is inflectional, occurring while they are in opposite directions. The current component makes the time series of Reynolds stress and turbulence intensity asymmetrical, even though the mean velocity series is a symmetrical sine/cosine function. The skewness and flatness variations suggest that the turbulence distribution is not normal, but approaches a normal distribution as the Reynolds number and the current-wave velocity ratio increase. As for turbulent bursting, the dimensionless period and the mean area of all bursts per unit bed area tend to increase with Reynolds number and current-wave velocity ratio, rather than being constant as in steady channel flows.
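The basic statistics listed above can be computed from fluctuating velocity samples as in the following sketch; the synthetic samples stand in for LES data at one height and one phase of the cycle, and the Gaussian reference values in the comments are standard facts rather than results of the paper.

```python
import numpy as np

# Sketch of the turbulence statistics named above, computed from fluctuating
# velocities u', w' at one height and phase. Synthetic data only.

rng = np.random.default_rng(2)
u = rng.standard_normal(50000)                      # streamwise fluctuation u'
w = 0.3 * rng.standard_normal(50000) - 0.2 * u      # wall-normal fluctuation w', correlated with u'

reynolds_stress = -np.mean(u * w)                   # -<u'w'>
intensity = np.sqrt(np.mean(u ** 2))                # rms of u'
skewness = np.mean(u ** 3) / intensity ** 3         # 0 for a Gaussian distribution
flatness = np.mean(u ** 4) / intensity ** 4         # 3 for a Gaussian distribution

print(reynolds_stress, intensity, skewness, flatness)
```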
Abstract:
Hydrophobic surfaces are beneficial for drag reduction. Min and Kim [1] performed the first direct numerical simulation (DNS) of drag reduction in turbulent channel flow, and Fukagata and Kasagi [2] carried out a theoretical analysis based on Dean's [3] formula and on observations from the DNS results. Using their theory, they conclude that drag reduction is possible at large Reynolds numbers. Both DNS and large eddy simulation (LES) are performed in our research, and how LES behaves in turbulent channel flow with a hydrophobic surface is examined. The original Smagorinsky model and its dynamic variant are used in the LES. The slip velocities predicted by LES with the dynamic model are in good agreement with DNS, as shown in the figure. Although the percentage of drag reduction predicted by LES shows some discrepancies, it is within the error limits acceptable for industrial flows. First- and second-order moments from the LES are also examined and compared with the DNS results. The first-order moments are predicted well by LES, but there are some discrepancies in the second-order moments between LES and DNS.
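For background, hydrophobic walls in such simulations are commonly represented by a Navier slip condition of the following form, where L_s is a slip length; this is a generic statement of the boundary condition, not necessarily the exact formulation used in the paper or in Min and Kim [1].

```latex
% Generic Navier slip condition for a hydrophobic wall: the streamwise slip
% velocity at the wall is proportional to the wall-normal velocity gradient,
% with slip length L_s (assumed, generic form shown for background only).
u_{\mathrm{slip}} = L_s \left.\frac{\partial u}{\partial y}\right|_{\mathrm{wall}}
```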
Abstract:
The MID-K, a new kind of multi-pipe-string detection tool, is introduced. This tool provides a means of evaluating the condition of in-place pipe strings, such as tubing and casing. It is capable of discriminating between inner- and outer-wall defects and of estimating the thickness of tubing and casing. This is accomplished by means of a low-frequency eddy current to detect flaws on the inner surface and magnetic flux leakage to inspect the full wall thickness. The measurement principle, the technology and applications are presented in this paper.
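As general background on why a low excitation frequency matters for through-wall inspection (this relation is standard electromagnetics, not quoted from the paper), the penetration depth of eddy currents in a conductor of permeability μ and conductivity σ is

```latex
% Standard skin-depth relation (general background, not from the paper):
% a lower excitation frequency f gives a larger penetration depth \delta
% in a conductor of permeability \mu and conductivity \sigma.
\delta = \sqrt{\frac{1}{\pi f \mu \sigma}}
```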
Abstract:
The Biscayne Bay Benthic Sampling Program was divided into two phases. In Phase I, sixty sampling stations were established in Biscayne Bay (including Dumfoundling Bay and Card Sound) representing diverse habitats. The stations were visited in the wet season (late fall of 1981) and in the dry season (midwinter of 1982). At each station certain abiotic conditions were measured or estimated. These included depth, sources of freshwater inflow and pollution, bottom characteristics, current direction and speed, surface and bottom temperature, salinity, and dissolved oxygen; water clarity was estimated with a Secchi disk. Seagrass blades and macroalgae were counted in a 0.1-m2 grid placed so as to best represent the bottom community within a 50-foot radius. Underwater 35-mm photographs were made of the bottom using flash apparatus. Benthic samples were collected using a petite Ponar dredge. These samples were washed through a 5-mm mesh screen, fixed in formalin in the field, and later sorted and identified by experts to a pre-agreed taxonomic level. During the wet season sampling period, a nonquantitative one-meter wide trawl was made of the epibenthic community. These samples were also washed, fixed, sorted and identified. During the dry season sampling period, sediment cores were collected at each station not located on bare rock. These cores were analyzed for sediment size and organic composition by personnel of the University of Miami. Data resulting from the sampling were entered into a computer. These data were subjected to cluster analyses, Shannon-Weaver diversity analysis, multiple regression analysis of variance and covariance, and factor analysis. In Phase II of the program, fifteen stations were selected from among the sixty of Phase I. These stations were sampled quarterly. At each quarter, five petite Ponar dredge samples were collected from each station. As in Phase I, observations and measurements, including seagrass blade counts, were made at each station. In Phase II, polychaete specimens collected were given to a separate contractor for analysis to the species level. These analyses included mean, standard deviation, coefficient of dispersion, percent of total, and numeric rank for each organism in each station, as well as number of species, Shannon-Weaver taxa diversity, and dominance (the complement of Simpson's Index) for each station. Multiple regression analysis of variance and covariance, and factor analysis were applied to the data to determine the effect of abiotic factors measured at each station. (PDF contains 96 pages)
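A minimal sketch of the diversity statistics named above is given below; the counts are hypothetical station data, and dominance is computed following the report's wording as the complement of Simpson's index.

```python
import numpy as np

# Sketch of Shannon-Weaver diversity H' and dominance (taken, per the report's
# wording, as the complement of Simpson's index). Hypothetical counts only.

counts = np.array([120, 45, 30, 8, 3, 1])      # individuals per taxon at one station
p = counts / counts.sum()

shannon = -np.sum(p * np.log(p))               # Shannon-Weaver diversity H'
simpson = np.sum(p ** 2)                       # Simpson's index (probability two draws match)
dominance = 1.0 - simpson                      # complement of Simpson's index, as in the report

print(f"H' = {shannon:.3f}, dominance = {dominance:.3f}")
```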
Abstract:
Results are given of monthly net phytoplankton and zooplankton sampling from a 10 m depth in shelf, slope, and Gulf Stream eddy water along a transect running southeastward from Ambrose Light, New York, in 1976, 1977, and early 1978. Plankton abundance and temperature at 10 m and sea surface salinity at each station are listed. The effects of atmospheric forcing and Gulf Stream eddies on plankton distribution and abundance are discussed. The frequency of Gulf Stream eddy passage through the New York Bight corresponded with the frequency of tropical-subtropical net phytoplankton in the samples. Gulf Stream eddies injected tropical-subtropical zooplankton onto the shelf and removed shelf water and its entrained zooplankton. Wind-induced offshore Ekman transport corresponded generally with the unusual timing of two net phytoplankton maxima. Midsummer net phytoplankton maxima were recorded following the passage of Hurricane Belle (August 1976) and a cold front (July 1977). Tropical-subtropical zooplankton which had been injected onto the outer shelf by Gulf Stream eddies were moved to the inner shelf by a wind-induced current moving up the Hudson Shelf Valley. (PDF file contains 47 pages.)
Abstract:
On several classes of n-person NTU games that have at least one Shapley NTU value, Aumann characterized this solution by six axioms: Non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives (IIA). Each of the first five axioms is logically independent of the remaining axioms, and the logical independence of IIA is an open problem. We show that for n = 2 the first five axioms already characterize the Shapley NTU value, provided that the class of games is not further restricted. Moreover, we present an example of a solution that satisfies the first five axioms and violates IIA for two-person NTU games (N, V) with uniformly p-smooth V(N).
Abstract:
The map method, the Jones method, the variance-covariance method, and the Skellam method were used to study the migrations of tagged yellowfin tuna released off the southern coast of Mexico in 1960 and 1969. The first three methods are all useful, and each presents information which is complementary to that presented by the others. The Skellam method, as used in this report, is less useful. The movements of the tagged fish released in 1960 appeared to have been strongly directed, but this was probably caused principally by the distribution of the fishing effort. The effort was much more widely distributed in 1970, and the movements of the fish released in 1969 appeared to have been much less directed. The correlation coefficients derived from the variance-covariance method showed, however, that the movements were not random. The small fish released in the Acapulco and 10°N-100°W areas in 1969 migrated to the Manzanillo area near the beginning of February 1970. The medium and large fish released in the same areas in the same year tended to migrate to the southeast throughout the first half of 1970, however. (PDF contains 64 pages.)
Abstract:
Point-particle based direct numerical simulation (PPDNS) has been a productive research tool for studying both single-particle and particle-pair statistics of inertial particles suspended in a turbulent carrier flow. Here we focus on its use in addressing particle-pair statistics relevant to the quantification of the turbulent collision rate of inertial particles. PPDNS is particularly useful because it is mainly the interaction of particles with the small-scale (dissipative) turbulent motion of the carrier flow that is relevant. Furthermore, since the particle size may be much smaller than the Kolmogorov length of the background fluid turbulence, a large number of particles are needed to accumulate meaningful pair statistics. Starting from the relatively simple Lagrangian tracking of so-called ghost particles, PPDNS has significantly advanced our theoretical understanding of the kinematic formulation of the turbulent geometric collision kernel by providing essential data on the dynamic collision kernel, the radial relative velocity, and the radial distribution function. A recent extension of PPDNS is the hybrid direct numerical simulation (HDNS) approach, in which the effect of local hydrodynamic interactions between particles is considered, allowing quantitative assessment of the enhancement of collision efficiency by fluid turbulence. Limitations and open issues in PPDNS and HDNS are discussed. Finally, ongoing studies of turbulent collision of inertial particles using large eddy simulations and particle-resolved simulations are briefly discussed.
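For orientation, the kinematic formulation of the geometric collision kernel referred to above is commonly written as follows (standard notation, not necessarily the paper's):

```latex
% Kinematic formulation of the geometric collision kernel (standard notation):
% R is the collision radius (sum of the two particle radii), <|w_r(R)|> the
% mean radial relative velocity at contact, and g(R) the radial distribution
% function at contact.
\Gamma_{12} = 2\pi R^{2}\,\langle |w_r(R)| \rangle\, g(R)
```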
Abstract:
The application of large eddy simulation (LES) to particle-laden turbulence raises a fundamental question: can LES with a subgrid-scale (SGS) model correctly predict Lagrangian time correlations (LTCs)? Most currently existing SGS models are constructed from the energy budget equations. Therefore, they are able to correctly predict energy spectra, but they may not ensure correct prediction of the LTCs. Previous research investigated the effect of SGS modeling on the Eulerian time correlations. This paper is devoted to studying the LTCs in LES. A direct numerical simulation (DNS) and an LES with a spectral eddy viscosity model are performed for isotropic turbulence, and the LTCs are calculated using the passive vector method. Both a priori and a posteriori tests are carried out. It is observed that the subgrid-scale contributions to the LTCs cannot simply be ignored and that the LES overpredicts the LTCs relative to the DNS. It is concluded from the straining hypothesis that an accurate prediction of the enstrophy spectra is most critical to the prediction of the LTCs.
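A minimal sketch of how an LTC is estimated from stored particle velocity histories is shown below; the synthetic AR(1) trajectories stand in for DNS or LES particle data, and the method shown is plain time-lag averaging rather than the paper's passive vector method.

```python
import numpy as np

# Sketch of a Lagrangian time correlation (LTC) estimate from velocity
# histories u[p, t] sampled along particle trajectories. Synthetic AR(1)
# histories are used here in place of DNS/LES data.

rng = np.random.default_rng(3)
n_particles, n_steps, alpha = 500, 2000, 0.98
u = np.zeros((n_particles, n_steps))
for t in range(1, n_steps):
    u[:, t] = alpha * u[:, t - 1] + np.sqrt(1 - alpha**2) * rng.standard_normal(n_particles)

def lagrangian_correlation(u, max_lag):
    var = np.mean(u * u)
    return np.array([np.mean(u[:, :u.shape[1] - k] * u[:, k:]) / var
                     for k in range(max_lag)])

rho = lagrangian_correlation(u, max_lag=200)
# crude estimate of the Lagrangian integral time, in units of time steps
print("integral time scale (steps):", rho.sum())
```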
Abstract:
The fluid force coefficients on a transversely oscillating cylinder are calculated by applying a two-dimensional large eddy simulation method. Since the "jump" phenomenon in the amplitude of the lift coefficient is harmful to the safety of slender submarine structures, the characteristics of this jump are examined in detail. By comparison with experimental results, we establish a numerical model for predicting the jump of the lift force on an oscillating cylinder, providing a reference for revising hydrodynamic parameters and checking the fatigue-life design of slender submarine cylindrical structures.
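For reference, the lift coefficient whose amplitude exhibits the jump is the usual nondimensional sectional lift force (a standard definition, not quoted from the paper):

```latex
% Standard sectional lift coefficient for a circular cylinder of diameter D
% in a flow of density \rho and velocity U; F_L' is the lift force per unit span.
C_L = \frac{F_L'}{\tfrac{1}{2}\,\rho\,U^{2} D}
```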
Abstract:
This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal-driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
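A minimal sketch of the individual Gaussian-process prediction that the interacting model builds on is given below; the kernel, hyperparameters, and observations are hypothetical, and the joint interaction term that defines interacting Gaussian processes is deliberately not reproduced here.

```python
import numpy as np

# Sketch of the individual-prediction building block: Gaussian process
# regression over one pedestrian's observed positions, extrapolated a few
# steps ahead. The interacting-Gaussian-process model couples many such
# processes through a joint interaction term, which is NOT shown here.

def rbf(a, b, ell=1.0, sigma=1.0):
    d = a[:, None] - b[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(t_obs, y_obs, t_new, noise=0.05):
    K = rbf(t_obs, t_obs) + noise**2 * np.eye(t_obs.size)
    Ks = rbf(t_new, t_obs)
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks @ alpha
    cov = rbf(t_new, t_new) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# observed x-positions of one pedestrian at times 0..4 s (hypothetical data)
t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x_obs = np.array([0.0, 0.9, 2.1, 2.8, 4.2])
t_new = np.linspace(4.5, 7.0, 6)
mean, cov = gp_predict(t_obs, x_obs, t_new)
print("predicted x:", mean.round(2), "std:", np.sqrt(np.diag(cov)).round(2))
```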
Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours, and in the process carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m2, while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m2. For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human-robot interaction models in general.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, which describe marginal and conditional dependencies between brain regions, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned; however, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
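A minimal sketch of the sparse inverse-covariance step is shown below using scikit-learn's GraphicalLasso on synthetic "nodal signal" samples; the regularization value and the data are illustrative, and the thesis's modification for ill-conditioned covariance matrices is not reproduced.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sketch of sparse inverse-covariance estimation (graphical lasso) on
# synthetic nodal-signal samples drawn from a chain-structured Gaussian.
# The regularization alpha and the data are illustrative assumptions.

rng = np.random.default_rng(4)
n_samples, n_nodes = 500, 8
precision = np.eye(n_nodes) + 0.4 * (np.eye(n_nodes, k=1) + np.eye(n_nodes, k=-1))
X = rng.multivariate_normal(np.zeros(n_nodes), np.linalg.inv(precision), size=n_samples)

model = GraphicalLasso(alpha=0.05).fit(X)
estimated_edges = np.abs(model.precision_) > 1e-3   # nonzeros of the estimated precision
np.fill_diagonal(estimated_edges, False)
print("recovered adjacency pattern:\n", estimated_edges.astype(int))
```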
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
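For context, a standard primal fluid model of congestion control (Kelly et al., 1998), in which every link on a route is assumed to see the source rate directly (exactly the assumption relaxed in this work), reads

```latex
% Standard primal fluid model of congestion control (Kelly et al., 1998),
% shown only as background: source r adjusts its rate x_r using the sum of
% link prices p_l along its route, and every link l on route r is assumed
% to observe the source rate x_r directly.
\dot{x}_r(t) = \kappa_r \left( w_r - x_r(t) \sum_{l \in r} p_l\Big( \sum_{s:\, l \in s} x_s(t) \Big) \right)
```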
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
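Stated generically for orientation (not the dissertation's exact formulation), the OPF problem is

```latex
% Generic OPF formulation (background, not the dissertation's notation):
% V_k is the complex voltage at bus k, Y the bus admittance matrix, P_{G_k}
% and P_{D_k} the generation and demand at bus k, and c_k a generation cost.
\begin{aligned}
\min_{V,\,P_G}\quad & \sum_{k \in \mathcal{G}} c_k(P_{G_k}) \\
\text{s.t.}\quad & P_{G_k} - P_{D_k} = \operatorname{Re}\Big\{ V_k \sum_{j} Y_{kj}^{*} V_j^{*} \Big\}, \qquad k \in \mathcal{N}, \\
& \text{voltage-magnitude, line-flow, and generation limits.}
\end{aligned}
```

The power over-delivery mentioned above corresponds to relaxing this nodal balance equality to an inequality.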
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
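Written out from the description above (a paraphrase in generic notation, not the dissertation's), the GNF problem takes the form

```latex
% GNF as described above, in generic notation (a paraphrase): p_i is the
% injection at node i, p_{ij} and p_{ji} are the flows at the two ends of
% line (i,j), coupled by a nonlinear function f_{ij}; injections and flows
% are box-constrained; c_i is the cost of the injection at node i.
\begin{aligned}
\min_{\{p_i\},\,\{p_{ij}\}}\quad & \sum_{i \in \mathcal{N}} c_i(p_i) \\
\text{s.t.}\quad & p_i = \sum_{j:\,(i,j) \in \mathcal{E}} p_{ij}, \qquad
p_{ji} = f_{ij}(p_{ij}) \quad \text{for } (i,j) \in \mathcal{E}, \\
& p_i \in [\underline{p}_i,\, \overline{p}_i], \qquad
p_{ij} \in [\underline{p}_{ij},\, \overline{p}_{ij}].
\end{aligned}
```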
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.