970 results for Covariance estimate
Abstract:
Following the authors' recent work on the development and numerical verification of a new kinematic approach to limit analysis for surface footings on non-associative materials, a practical procedure is proposed to utilize the theory. It is known that both the peak friction angle and the dilation angle depend on the sand density as well as the stress level, which was not addressed in the former work. In the current work, a practical procedure is established to provide a better estimate of the bearing capacity of surface footings on sand, which is often non-associative. This procedure is based on the results obtained theoretically and requires the density index and the critical state friction angle of the sand. It is a simple iterative computation that relates the density index of the sand, the stress level, the dilation angle, the peak friction angle and, eventually, the bearing capacity. The procedure is described and verified against available footing load test data.
Abstract:
Variable selection for regression is a classical statistical problem, motivated by concerns that too large a number of covariates may bring about overfitting and unnecessarily high measurement costs. Novel difficulties arise in streaming contexts, where the correlation structure of the process may be drifting, in which case it must be constantly tracked so that selections may be revised accordingly. A particularly interesting phenomenon is that non-selected covariates become missing variables, inducing bias on subsequent decisions. This raises an intricate exploration-exploitation tradeoff, whose dependence on the covariance tracking algorithm and the choice of variable selection scheme is too complex to be dealt with analytically. We hence capitalise on the strength of simulations to explore this problem, taking the opportunity to tackle the difficult task of simulating dynamic correlation structures. © 2008 IEEE.
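The covariance tracking on which this abstract rests can be sketched with a simple exponentially weighted update (the forgetting factor, the drift scenario, and all names below are illustrative, not the authors' simulator):

```python
# Minimal sketch: tracking a drifting correlation with an exponentially
# weighted moving-average covariance update for a zero-mean stream.
import numpy as np

def ew_cov_update(S, x, lam=0.95):
    """S_t = lam * S_{t-1} + (1 - lam) * x x^T (lam is illustrative)."""
    return lam * S + (1.0 - lam) * np.outer(x, x)

rng = np.random.default_rng(0)
S = np.eye(2)
for t in range(2000):
    rho = 0.8 if t >= 1000 else 0.0      # correlation drifts mid-stream
    z = rng.standard_normal(2)
    x = np.array([z[0], rho * z[0] + np.sqrt(1 - rho**2) * z[1]])
    S = ew_cov_update(S, x)

corr = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])
print(round(corr, 2))                    # estimated correlation after the drift
```

Because the update forgets old samples geometrically, the estimate follows the post-drift correlation rather than averaging over both regimes.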
Abstract:
Sensor networks can be naturally represented as graphical models, where the edge set encodes the presence of sparsity in the correlation structure between sensors. Such graphical representations can be valuable for information mining purposes as well as for optimizing bandwidth and battery usage with minimal loss of estimation accuracy. We use a computationally efficient technique for estimating sparse graphical models which fits a sparse linear regression locally at each node of the graph via the Lasso estimator. Using a recently suggested online, temporally adaptive implementation of the Lasso, we propose an algorithm for streaming graphical model selection over sensor networks. With battery consumption minimization applications in mind, we use this algorithm as the basis of an adaptive querying scheme. We discuss implementation issues in the context of environmental monitoring using sensor networks, where the objective is short-term forecasting of local wind direction. The algorithm is tested against real UK weather data and conclusions are drawn about certain tradeoffs inherent in decentralized sensor networks data analysis. © 2010 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
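The node-wise Lasso regressions described above can be sketched as follows (a minimal neighbourhood-selection example on synthetic data; the penalty level and the "OR" symmetrization rule are illustrative choices, not the paper's online, temporally adaptive implementation):

```python
# Hypothetical sketch: local neighbourhood selection with the Lasso,
# one sparse regression per sensor node.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 500, 5                        # samples, sensors
X = rng.standard_normal((n, p))
X[:, 1] += 0.9 * X[:, 0]             # sensors 0 and 1 are correlated

def neighbourhoods(X, alpha=0.2):
    """Regress each node on all others; nonzero coefficients become edges."""
    p = X.shape[1]
    edges = set()
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        for k, c in zip(others, coef):
            if abs(c) > 1e-6:
                edges.add(tuple(sorted((j, k))))   # "OR" symmetrization
    return edges

print(neighbourhoods(X))             # the 0-1 edge should be recovered
```

In a streaming setting each regression would be refitted recursively as new samples arrive, which is what the adaptive Lasso implementation cited in the abstract provides.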
Abstract:
It is demonstrated that the primary instability of the wake of a two-dimensional circular cylinder rotating with constant angular velocity can be qualitatively well described by the Landau equation. The coefficients of the Landau equation are determined by means of numerical simulations of the Navier-Stokes equations. The critical Reynolds numbers, which depend on the angular velocity of the cylinder, are evaluated accurately by linear regression. (C) 2004 American Institute of Physics.
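The regression step can be illustrated on synthetic data: near onset the Landau growth rate varies roughly linearly with the Reynolds number, so the critical value is the zero crossing of a straight-line fit (the numbers below are made up for illustration, not the paper's results):

```python
# Illustrative sketch: locate the critical Reynolds number as the zero
# crossing of a linear fit to assumed growth-rate data.
import numpy as np

Re = np.array([44.0, 46.0, 48.0, 50.0, 52.0])
sigma = 0.012 * (Re - 47.0)          # assumed linear growth-rate data

a, b = np.polyfit(Re, sigma, 1)      # sigma ≈ a*Re + b
Re_c = -b / a                        # growth rate vanishes at Re_c
print(round(Re_c, 2))                # → 47.0
```

In the paper the growth rates would come from Navier-Stokes simulations at several Reynolds numbers, one per angular velocity of the cylinder.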
Abstract:
Random field theory has been used to model spatially averaged soil properties, while geostatistics, which shares a common basis (the covariance function), has been used successfully to model and estimate natural resources since the 1960s. Geostatistics should therefore, in principle, be an efficient way to model soil spatial variability. Based on this, the paper presents an alternative approach to estimating the scale of fluctuation, or correlation distance, of a soil stratum by geostatistics. The procedure comprises four steps: calculating the experimental variogram from measured data, selecting a suitable theoretical variogram model, fitting the theoretical model to the experimental variogram, and substituting the model parameters obtained from the optimization into a simple, finite relationship between the correlation distance δ and the range a. The paper also gives eight typical expressions relating a and δ. Finally, a practical example is presented to demonstrate the methodology.
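The four-step procedure can be sketched as follows, assuming an exponential variogram model and the relation δ = 2a/3 between the practical range a and the scale of fluctuation (one of several such expressions in the literature; the variogram values here are synthetic):

```python
# Hedged sketch: fit a theoretical variogram to experimental values,
# then convert the fitted range to a scale of fluctuation.
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, sill, k):
    """Exponential model; the practical range is a ≈ 3k."""
    return sill * (1.0 - np.exp(-h / k))

lags = np.linspace(0.5, 10.0, 20)
gamma = exp_variogram(lags, sill=1.0, k=2.0)    # assumed experimental values

(sill, k), _ = curve_fit(exp_variogram, lags, gamma, p0=[1.0, 1.0])
a = 3.0 * k                                     # practical range
delta = 2.0 * a / 3.0                           # scale of fluctuation = 2k
print(round(delta, 2))                          # → 4.0
```

With a different variogram model (spherical, Gaussian, etc.) the last line changes according to the corresponding a-δ expression.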
Abstract:
In stock assessments, recruitment is typically modeled as a function of females only. For protogynous stocks, however, disproportionate fishing on males increases the possibility of reduced fertilization rates. To incorporate the importance of males in protogynous stocks, assessment models have been used to predict recruitment not just from female spawning biomass (Sf), but also from that of males (Sm) or both sexes (Sb). We conducted a simulation study to evaluate the ability of these three measures to estimate biological reference points used in fishery management. Of the three, Sf provides best estimates if the potential for decreased fertilization is weak, whereas Sm is best only if the potential is very strong. In general, Sb estimates the true reference points most closely, which indicates that if the potential for decreased fertilization is moderate or unknown, Sb should be used in assessments of protogynous stocks. Moreover, for a broad range of scenarios, relative errors from Sf and Sb occur in opposite directions, indicating that estimates from these measures could be used to bound uncertainty.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. Assuming that the circuit is hidden inside a black box and only the nodal signals can be measured, the aim is to find the topology of the circuit from a limited number of samples. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
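A minimal sketch of the graphical-lasso step on a synthetic three-node chain (the penalty and threshold are illustrative, and this is neither the modified algorithm nor the circuit model of the dissertation):

```python
# Illustrative sketch: recovering a sparse conditional-dependence graph
# from samples with the graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
# Chain precision matrix: node 0 - node 1 - node 2 (no direct 0-2 edge)
Theta = np.array([[ 2.0, -0.8,  0.0],
                  [-0.8,  2.0, -0.8],
                  [ 0.0, -0.8,  2.0]])
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=2000)

# Penalized sparse estimate of the inverse covariance matrix
P = GraphicalLasso(alpha=0.05).fit(X).precision_
edges = {(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(P[i, j]) > 1e-3}
print(sorted(edges))                 # the chain edges should appear
```

The conditioning issue mentioned in the abstract shows up here through Sigma: the better conditioned it is, the more reliably the zero pattern of Theta is recovered from finite samples.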
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.
For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (potentially reinforced concrete shear wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.
To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building’s natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for 2-dimensional buildings and a limit domain for 3-dimensional buildings. If the filtered acceleration exceeds the building’s capacity, the building is predicted to collapse. Otherwise, it is expected to survive the ground motion.
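For the 2-dimensional case, the filter-and-compare step can be sketched as follows (the filter order, cutoff frequency, capacity value, and synthetic record are placeholders, not the thesis's calibrated values):

```python
# Hedged sketch of the PFA idea: low-pass filter a ground acceleration
# record, take the peak of the filtered trace, and compare it with an
# assumed lateral capacity.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
# Synthetic record: long-period pulse plus high-frequency content
acc = 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)

def peak_filtered_acceleration(acc, fs, order=4, fc=1.0):
    """Zero-phase Butterworth low-pass, then peak absolute value."""
    b, a = butter(order, fc / (fs / 2), btype="low")
    return np.max(np.abs(filtfilt(b, a, acc)))

pfa = peak_filtered_acceleration(acc, fs)
capacity = 0.35                              # placeholder capacity
print("collapse" if pfa > capacity else "survives")
```

The high-frequency component is removed by the filter, so only the long-period content, which drives collapse in this model, is compared against the capacity.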
The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.
The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.
Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.
We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.
Abstract:
Horseshoe crabs (Limulus polyphemus) are valued by many stakeholders, including the commercial fishing industry, biomedical companies, and environmental interest groups. We designed a study to test the accuracy of the conversion factors that were used by NOAA Fisheries and state agencies to estimate horseshoe crab landings before mandatory reporting that began in 1998. Our results indicate that the NOAA Fisheries conversion factor consistently overestimates the weight of male horseshoe crabs, particularly those from New England populations. Because of the inaccuracy of this and other conversion factors, states are now mandated to report the number (not biomass) and sex of landed horseshoe crabs. However, accurate estimates of biomass are still necessary for use in prediction models that are being developed to better manage the horseshoe crab fishery. We recommend that managers use the conversion factors presented in this study to convert current landing data from numbers to biomass of harvested horseshoe crabs for future assessments.