939 results for Galilean covariance


Relevance:

10.00%

Publisher:

Abstract:

We present a new method for rapid NMR data acquisition and assignment applicable to unlabeled (C-12) or C-13-labeled biomolecules and organic molecules in general, and to metabolomics in particular. The method involves the acquisition of three two-dimensional (2D) NMR spectra simultaneously using a dual receiver system. The three spectra, namely (1) G-matrix Fourier transform (GFT) (3,2)D [C-13, H-1] HSQC-TOCSY, (2) 2D H-1-H-1 TOCSY and (3) 2D C-13-H-1 HETCOR, are acquired in a single experiment and provide mutually complementary information to completely assign individual metabolites in a mixture. The GFT (3,2)D [C-13, H-1] HSQC-TOCSY provides 3D correlations in a reduced-dimensionality manner, facilitating high resolution and unambiguous assignments. The experiments were applied for complete H-1 and C-13 assignment of a mixture of 21 unlabeled metabolites corresponding to a medium used in assisted reproductive technology. Taken together, the experiments provide time gains of orders of magnitude compared to conventional data acquisition methods and can be combined with other fast NMR techniques such as non-uniform sampling and covariance spectroscopy. This opens new avenues for using multiple receivers and projection NMR techniques in high-throughput approaches to metabolomics.

Relevance:

10.00%

Publisher:

Abstract:

A new representation of spatio-temporal random processes is proposed in this work. In practical applications, such processes are used to model velocity fields, temperature distributions, and the response of vibrating systems, to name a few. Finding an efficient representation for a random process encapsulates its information, which makes it more convenient for practical implementations, for instance in a computational mechanics problem. For a single-parameter process, such as a spatial or temporal process, the eigenvalue decomposition of the covariance matrix leads to the well-known Karhunen-Loeve (KL) decomposition. However, for multiparameter processes such as a spatio-temporal process, the covariance function itself can be defined in multiple ways. Here the process is assumed to be measured at a finite set of spatial locations and a finite number of time instants. The spatial covariance matrices at the different time instants are then taken to define the covariance of the process. This set of square, symmetric, positive semi-definite matrices is represented as a third-order tensor. A suitable decomposition of this tensor can identify the dominant components of the process, and these components are then used to define a closed-form representation of the process. The procedure is analogous to the KL decomposition for a single-parameter process; however, the decompositions and their interpretations differ significantly. The tensor decompositions are successfully applied to (i) a heat conduction problem, (ii) a vibration problem, and (iii) a covariance function from the literature that was fitted to measured wind velocity data. It is observed that the proposed representation provides an efficient approximation for some processes. Furthermore, a comparison with the KL decomposition shows that the proposed method is computationally cheaper, both in terms of computer memory and execution time.
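A minimal sketch of the two constructions described above, assuming an exponential covariance model and using a mode-1 unfolding SVD as one possible tensor decomposition (the paper's specific decomposition may differ):

```python
import numpy as np

n_x, n_t = 50, 20
x = np.linspace(0.0, 1.0, n_x)          # spatial locations
t = np.linspace(0.0, 1.0, n_t)          # time instants

# Assumed exponential spatial covariance whose variance decays in time.
def spatial_cov(ti):
    return np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2) * np.exp(-ti)

# Classical KL for a single-parameter (spatial) process: eigendecompose C.
C0 = spatial_cov(t[0])
eigval, eigvec = np.linalg.eigh(C0)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
# Truncated KL representation: u(x) ~ sum_k sqrt(lambda_k) phi_k(x) xi_k
n_terms = 5
sample = eigvec[:, :n_terms] @ (np.sqrt(eigval[:n_terms]) * np.random.randn(n_terms))

# Spatio-temporal case: collect C(t_i) into an (n_t, n_x, n_x) third-order
# tensor and extract dominant components from its mode-1 unfolding.
T = np.stack([spatial_cov(ti) for ti in t])
U, s, _ = np.linalg.svd(T.reshape(n_t, -1), full_matrices=False)
print("dominant component weights:", s[:3])
```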

Relevance:

10.00%

Publisher:

Abstract:

The goal of this work is to reduce the cost of computing the coefficients in the Karhunen-Loeve (KL) expansion. The KL expansion serves as a useful and efficient tool for discretizing second-order stochastic processes with known covariance function. Its applications in engineering mechanics include discretizing random field models for elastic moduli, fluid properties, and structural response. The main computational cost of finding the coefficients of this expansion arises from numerically solving an integral eigenvalue problem with the covariance function as the integration kernel; mathematically, this is a homogeneous Fredholm integral equation of the second kind. One widely used method for solving this integral eigenvalue problem is to discretize the eigenfunctions in a finite element (FE) basis, followed by a Galerkin projection. This method is computationally expensive. In the current work it is first shown that the shape of the physical domain of a random field does not affect the realizations of the field estimated using the KL expansion, although the individual KL terms are affected. Based on this domain-independence property, a numerical-integration-based scheme, accompanied by a modification of the domain, is proposed. In addition to mathematical arguments establishing the domain independence, numerical studies are conducted to demonstrate and test the proposed method. Numerically it is demonstrated that, compared to the Galerkin method, the computational speed gain of the proposed method is three to four orders of magnitude for a two-dimensional example and one to two orders of magnitude for a three-dimensional example, while retaining the same level of accuracy. It is also shown that for separable covariance kernels a further cost reduction of three to four orders of magnitude can be achieved. Both normal and lognormal fields are considered in the numerical studies. (c) 2014 Elsevier B.V. All rights reserved.
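A hedged sketch of the numerical-integration route on a 1D domain with an assumed exponential kernel, solving the Fredholm eigenvalue problem by a symmetrized Nystrom discretization (the paper's domain-modification step is not reproduced):

```python
import numpy as np

# Solve  int_D C(x, y) phi(y) dy = lambda phi(x)  on D = [0, 1].
a, b, n = 0.0, 1.0, 64
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)       # map Gauss nodes to [a, b]
w = 0.5 * (b - a) * weights

C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # assumed kernel

# Symmetrized Nystrom discretization: B = W^(1/2) C W^(1/2).
sw = np.sqrt(w)
B = sw[:, None] * C * sw[None, :]
lam, psi = np.linalg.eigh(B)
lam, psi = lam[::-1], psi[:, ::-1]              # descending eigenvalues
phi = psi / sw[:, None]                          # eigenfunctions at the nodes

# Truncated KL expansion evaluated at the quadrature nodes.
m = 8
xi = np.random.randn(m)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)
print("largest eigenvalues:", lam[:4])
```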

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider decode-and-forward (DF) relay beamforming for secrecy with cooperative jamming (CJ) in the presence of multiple eavesdroppers. The communication between a source-destination pair is aided by a multiple-input multiple-output (MIMO) relay. The source has one transmit antenna, and the destination and eavesdroppers have one receive antenna each. The source and the MIMO relay are subject to power constraints P_S and P_R, respectively. We relax the rank-1 constraint on the signal beamforming matrix and transform the secrecy rate max-min optimization problem into a single maximization problem, which is solved by semidefinite programming techniques. We obtain the optimum source power, signal relay weights, and jamming covariance matrix, and we show that the solution of the rank-relaxed optimization problem is rank-1. Numerical results show that CJ can improve the secrecy rate.
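For illustration, a toy instance of rank relaxation followed by rank-1 recovery, using a simple received-power objective rather than the paper's secrecy max-min problem (channel, dimensions, and power budget are assumptions; an SDP-capable solver such as SCS is assumed installed):

```python
import numpy as np
import cvxpy as cp

np.random.seed(0)
n = 4                                               # relay antennas (assumed)
h = np.random.randn(n) + 1j * np.random.randn(n)    # destination channel
P_R = 1.0                                           # relay power budget

# Relaxed beamforming matrix W standing in for w w^H (rank-1 dropped).
W = cp.Variable((n, n), hermitian=True)
prob = cp.Problem(
    cp.Maximize(cp.real(h.conj() @ W @ h)),         # h^H W h
    [W >> 0, cp.real(cp.trace(W)) <= P_R],
)
prob.solve()

# If the optimizer is (numerically) rank-1, extract the beamforming vector.
eigval, eigvec = np.linalg.eigh(W.value)
w_opt = np.sqrt(eigval[-1]) * eigvec[:, -1]
print("eigenvalues of W (rank-1 check):", np.round(eigval, 6))
```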

Relevance:

10.00%

Publisher:

Abstract:

Objective: Asymmetry in brain structure and function is implicated in the pathogenesis of psychiatric disorders. Although right hemisphere abnormality has been documented in obsessive-compulsive disorder (OCD), cerebral asymmetry is rarely examined. Therefore, in this study, we examined anomalous cerebral asymmetry in OCD patients using the line bisection task. Methods: A total of 30 patients with OCD and 30 matched healthy controls were examined using a reliable and valid two-hand line bisection (LBS) task. The comparative profiles of LBS scores were analysed using analysis of covariance. Results: Patients with OCD bisected significantly fewer lines to the left and showed significantly greater rightward deviation than controls, indicating right hemisphere dysfunction. The correlations observed in this study suggest that those with impaired laterality had more severe illness at baseline. Conclusions: The findings of this study indicate abnormal cerebral lateralisation and right hemisphere dysfunction in OCD patients.
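A minimal sketch of this kind of between-group analysis of covariance on synthetic data (the column names, the age covariate, and all values are illustrative assumptions, not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame({
    "deviation": np.r_[rng.normal(2.0, 1.0, n),     # OCD: rightward shift
                       rng.normal(0.0, 1.0, n)],    # matched controls
    "group": ["OCD"] * n + ["control"] * n,
    "age": rng.uniform(20, 50, 2 * n),              # assumed covariate
})
model = smf.ols("deviation ~ C(group) + age", data=df).fit()
print(anova_lm(model, typ=2))   # ANCOVA table: group effect adjusted for age
```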

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider the problem of power allocation in the MIMO wiretap channel for secrecy in the presence of multiple eavesdroppers. Perfect knowledge of the destination channel state information (CSI) and only statistical knowledge of the eavesdroppers' CSI are assumed. We first consider the MIMO wiretap channel with Gaussian input. Using Jensen's inequality, we transform the secrecy rate max-min optimization problem into a single maximization problem. We then use the generalized singular value decomposition to transform this into a concave maximization problem, which maximizes the sum secrecy rate of scalar wiretap channels subject to linear constraints on the transmit covariance matrix. We then consider the MIMO wiretap channel with finite-alphabet input. We show that the transmit covariance matrix obtained for the case of Gaussian input, when used in the MIMO wiretap channel with finite-alphabet input, can lead to zero secrecy rate at high transmit powers. We therefore propose a power allocation scheme with an additional power constraint that alleviates this secrecy rate loss and gives non-zero secrecy rates at high transmit powers.
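A sketch of the post-decomposition step only: allocating power across parallel scalar wiretap channels to maximize the sum secrecy rate (the channel gains and power budget below are illustrative; the Jensen bound and the finite-alphabet refinement are not shown):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize  sum_i [log(1 + a_i p_i) - log(1 + b_i p_i)]  with a_i > b_i,
# subject to p_i >= 0 and sum_i p_i <= P (concave for a_i > b_i).
a = np.array([2.0, 1.5, 1.0])     # destination channel gains (assumed)
b = np.array([0.5, 0.4, 0.9])     # eavesdropper gains (statistical proxy)
P = 5.0                           # total power budget

def neg_secrecy(p):
    return -np.sum(np.log1p(a * p) - np.log1p(b * p))

res = minimize(
    neg_secrecy,
    x0=np.full(3, P / 3),
    bounds=[(0, P)] * 3,
    constraints=[{"type": "ineq", "fun": lambda p: P - p.sum()}],
    method="SLSQP",
)
print("power allocation:", np.round(res.x, 3), "secrecy rate:", -res.fun)
```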

Relevance:

10.00%

Publisher:

Abstract:

The utility of canonical correlation analysis (CCA) for domain adaptation (DA) in the context of multi-view head pose estimation is examined in this work. We consider the three problems studied in [1], where different DA approaches are explored to transfer head pose-related knowledge from an extensively labeled source dataset to a sparsely labeled target set whose attributes are vastly different from the source. CCA is found to benefit DA for all three problems, and the use of a covariance profile-based diagonality score (DS) also improves classification performance with respect to a nearest neighbor (NN) classifier.
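A toy sketch of CCA-based adaptation with an NN classifier, assuming paired source/target samples and synthetic features (the diagonality-score refinement from the paper is not shown):

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, d_src, d_tgt = 200, 30, 20
latent = rng.normal(size=(n, 5))                   # shared latent structure
X_src = latent @ rng.normal(size=(5, d_src)) + 0.1 * rng.normal(size=(n, d_src))
X_tgt = latent @ rng.normal(size=(5, d_tgt)) + 0.1 * rng.normal(size=(n, d_tgt))
y = (latent[:, 0] > 0).astype(int)                 # stand-in for pose classes

# Project both views into a shared, maximally correlated subspace.
cca = CCA(n_components=4)
Z_src, Z_tgt = cca.fit_transform(X_src, X_tgt)

# Train NN on projected source data, evaluate on projected target data.
clf = KNeighborsClassifier(n_neighbors=1).fit(Z_src, y)
print("NN accuracy on target projections:", clf.score(Z_tgt, y))
```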

Relevance:

10.00%

Publisher:

Abstract:

Adopting Yoshizawa's two-scale expansion technique, the fluctuating field is expanded around the isotropic field, and the renormalization group method is applied to calculate the covariance of the fluctuating field at the lower-order expansion. A nonlinear Reynolds stress model is derived, and its turbulence constants are evaluated analytically. Compared with the two-scale direct interaction approximation analysis for turbulent shear flows proposed by Yoshizawa, the calculation is much simpler. The analytical model presented here is close to the Speziale model, which is widely applied in numerical simulations of complex turbulent flows.

Relevance:

10.00%

Publisher:

Abstract:

Random field theory has been used to model spatially averaged soil properties, while geostatistics, which rests on the same common basis (the covariance function), has been used successfully to model and estimate natural resources since the 1960s. Geostatistics should therefore, in principle, be an efficient way to model soil spatial variability. On this basis, the paper presents an alternative approach to estimating the scale of fluctuation, or correlation distance, of a soil stratum by geostatistics. The procedure involves four steps: calculating the experimental variogram from measured data; selecting a suitable theoretical variogram model; fitting the theoretical variogram to the experimental one; and substituting the parameters obtained from the optimization into a simple closed-form relationship between the correlation distance δ and the range a. The paper also gives eight typical expressions relating a and δ. Finally, a practical example is presented to illustrate the methodology.
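A sketch of the four-step procedure on synthetic data with a spherical variogram model (the data, model choice, and starting values are assumptions; the final range-to-correlation-distance conversion uses one of the paper's eight expressions and is left symbolic here):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
# Synthetic stationary soil-property profile (AR(1) surrogate), 0.5 m spacing.
z = np.zeros(200)
for i in range(1, 200):
    z[i] = 0.9 * z[i - 1] + rng.normal(scale=0.3)

# Step 1: experimental semivariogram.
lags = np.arange(1, 30)
gamma = np.array([0.5 * np.mean((z[k:] - z[:-k]) ** 2) for k in lags])
h_lag = lags * 0.5                                   # lag distances (m)

# Steps 2-3: fit a spherical model gamma(h) = c (1.5 h/a - 0.5 (h/a)^3), h <= a.
def spherical(h, c, a):
    r = np.minimum(h / a, 1.0)
    return c * (1.5 * r - 0.5 * r ** 3)

(c_fit, a_fit), _ = curve_fit(spherical, h_lag, gamma, p0=[gamma.max(), 5.0])
print(f"sill = {c_fit:.3f}, range a = {a_fit:.3f} m")
# Step 4: substitute a_fit into the chosen closed-form relationship between
# the range a and the correlation distance δ.
```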

Relevance:

10.00%

Publisher:

Abstract:

The Biscayne Bay Benthic Sampling Program was divided into two phases. In Phase I, sixty sampling stations were established in Biscayne Bay (including Dumfoundling Bay and Card Sound) representing diverse habitats. The stations were visited in the wet season (late fall of 1981) and in the dry season (midwinter of 1982). At each station certain abiotic conditions were measured or estimated. These included depth, sources of freshwater inflow and pollution, bottom characteristics, current direction and speed, surface and bottom temperature, salinity, and dissolved oxygen; water clarity was estimated with a Secchi disk. Seagrass blades and macroalgae were counted in a 0.1-m² grid placed so as to best represent the bottom community within a 50-foot radius. Underwater 35-mm photographs were made of the bottom using flash apparatus. Benthic samples were collected using a petite Ponar dredge. These samples were washed through a 5-mm mesh screen, fixed in formalin in the field, and later sorted and identified by experts to a pre-agreed taxonomic level. During the wet season sampling period, a nonquantitative one-meter-wide trawl was made of the epibenthic community. These samples were also washed, fixed, sorted, and identified. During the dry season sampling period, sediment cores were collected at each station not located on bare rock. These cores were analyzed for sediment size and organic composition by personnel of the University of Miami. The resulting data were entered into a computer and subjected to cluster analyses, Shannon-Weaver diversity analysis, multiple regression analysis of variance and covariance, and factor analysis.

In Phase II of the program, fifteen stations were selected from among the sixty of Phase I and sampled quarterly. At each quarter, five petite Ponar dredge samples were collected from each station. As in Phase I, observations and measurements, including seagrass blade counts, were made at each station. In Phase II, polychaete specimens collected were given to a separate contractor for analysis to the species level. These analyses included the mean, standard deviation, coefficient of dispersion, percent of total, and numeric rank for each organism at each station, as well as the number of species, Shannon-Weaver taxa diversity, and dominance (the complement of Simpson's index) for each station. Multiple regression analysis of variance and covariance, and factor analysis were applied to the data to determine the effect of the abiotic factors measured at each station. (PDF contains 96 pages)
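A minimal sketch of two of the station-level statistics named above, computed from per-taxon counts (the counts are illustrative; dominance follows the report's definition as the complement of Simpson's index):

```python
import numpy as np

def shannon_weaver(counts):
    # H' = -sum p_i ln p_i over taxa with nonzero counts.
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson_complement(counts):
    # Dominance as 1 - sum p_i^2 (complement of Simpson's index).
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    return 1.0 - np.sum(p ** 2)

station_counts = [120, 30, 8, 8, 2, 1]          # individuals per taxon
print("H' =", round(shannon_weaver(station_counts), 3))
print("dominance =", round(simpson_complement(station_counts), 3))
```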

Relevance:

10.00%

Publisher:

Abstract:

On several classes of n-person NTU games that have at least one Shapley NTU value, Aumann characterized this solution by six axioms: Non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives (IIA). Each of the first five axioms is logically independent of the remaining axioms, and the logical independence of IIA is an open problem. We show that for n = 2 the first five axioms already characterize the Shapley NTU value, provided that the class of games is not further restricted. Moreover, we present an example of a solution that satisfies the first five axioms and violates IIA for two-person NTU games (N, V) with uniformly p-smooth V(N).

Relevance:

10.00%

Publisher:

Abstract:

The map method, the Jones method, the variance-covariance method, and the Skellam method were used to study the migrations of tagged yellowfin tuna released off the southern coast of Mexico in 1960 and 1969. The first three methods are all useful, and each presents information which is complementary to that presented by the others. The Skellam method, as used in this report, is less useful. The movements of the tagged fish released in 1960 appeared to have been strongly directed, but this was probably caused principally by the distribution of the fishing effort. The effort was much more widely distributed in 1970, and the movements of the fish released in 1969 appeared to have been much less directed; the correlation coefficients derived from the variance-covariance method showed, however, that the movement was not random. The small fish released in the Acapulco and 10°N-100°W areas in 1969 migrated to the Manzanillo area near the beginning of February 1970. The medium and large fish released in the same areas in the same year, however, tended to migrate to the southeast throughout the first half of 1970. (PDF contains 64 pages.)

Relevance:

10.00%

Publisher:

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher-fidelity individual dynamics models or by heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal-driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
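A sketch of the individual-prediction building block only: Gaussian process regression over one pedestrian's observed positions, yielding a predictive mean and uncertainty (the interaction potential that couples the agents' processes in the thesis is not reproduced; data and kernel are assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

np.random.seed(0)
t_obs = np.linspace(0, 3, 8)[:, None]                        # times (s)
x_obs = 0.8 * t_obs.ravel() + 0.05 * np.random.randn(8)      # x-positions (m)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(t_obs, x_obs)

# Predictive mean and standard deviation over a future horizon.
t_future = np.linspace(3, 5, 10)[:, None]
mean, std = gp.predict(t_future, return_std=True)
print(f"predicted x at t=5s: {mean[-1]:.2f} +/- {std[-1]:.2f} m")
```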

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human robot interaction models in general.

Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, both motivated by power systems, are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions, respectively—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone: assuming that the circuit is hidden inside a black box and only the nodal signals are available, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
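A sketch of the topology-recovery step using the graphical lasso on samples drawn from a known sparse precision matrix (the chain "circuit", sample size, regularization, and threshold below are assumptions; the modification for ill-conditioned covariances is not shown):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n = 6
# Ground-truth sparse precision matrix: a chain topology 0-1-2-...-5.
Theta = np.eye(n) * 2.0
for i in range(n - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.8
Sigma = np.linalg.inv(Theta)

# Nodal signals: samples from the corresponding Gaussian model.
samples = rng.multivariate_normal(np.zeros(n), Sigma, size=500)
model = GraphicalLasso(alpha=0.05).fit(samples)

# Edges = significant off-diagonal entries of the estimated precision.
est = np.abs(model.precision_) > 0.1
np.fill_diagonal(est, False)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if est[i, j]]
print("recovered edges:", edges)
```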

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
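A toy fluid simulation of a single link with a queue-based price, illustrating the kind of source/queue dynamics such models analyze (the utility function, gains, and capacity are assumptions, not the dissertation's model):

```python
# Primal rate update against a queue-length price on one link.
c = 1.0            # link capacity
kappa = 0.5        # source gain
dt = 0.01
x, q = 0.1, 0.0    # source rate, queue length (price proxy)

for _ in range(5000):
    p = q                                       # price: queue-based
    x += dt * kappa * (1.0 / (1.0 + x) - p)     # U'(x) = 1/(1+x), assumed
    x = max(x, 0.0)
    q = max(q + dt * (x - c), 0.0)              # queue dynamics at the link

# Expected equilibrium for this toy: x -> c, q -> U'(c).
print(f"equilibrium rate ~ {x:.3f}, queue ~ {q:.3f}")
```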

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
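A toy semidefinite relaxation in the spirit described above: the voltage outer product V V^H is relaxed to a PSD matrix W, and power balance at load buses is relaxed to an inequality so that over-delivery is allowed (the 3-bus network data and voltage bounds are made up; an SDP-capable solver such as SCS is assumed installed):

```python
import numpy as np
import cvxpy as cp

Y = np.array([[ 1.5 - 4.5j, -1.0 + 3.0j, -0.5 + 1.5j],
              [-1.0 + 3.0j,  1.5 - 4.5j, -0.5 + 1.5j],
              [-0.5 + 1.5j, -0.5 + 1.5j,  1.0 - 3.0j]])   # bus admittance
demand = np.array([0.0, 0.6, 0.4])      # loads at buses 1 and 2 (pu)
n = 3

W = cp.Variable((n, n), hermitian=True)  # relaxed V V^H

def p_inj(k):
    # Real power injection at bus k: Re((W Y^H)_kk), since S_k = V_k I_k^*.
    return cp.real((W @ Y.conj().T)[k, k])

constraints = [W >> 0]
for k in range(n):
    constraints += [cp.real(W[k, k]) >= 0.9, cp.real(W[k, k]) <= 1.1]
for k in (1, 2):
    constraints += [p_inj(k) <= -demand[k]]   # relaxed balance: over-delivery

prob = cp.Problem(cp.Minimize(p_inj(0)), constraints)  # cost ~ slack generation
prob.solve()
print("slack-bus generation:", prob.value)
```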

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
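A one-line-network toy instance of the GNF relaxation: the nonconvex coupling between the two end flows of a line is relaxed to a convex inequality under monotone, convex loss and cost models (all numbers are assumptions):

```python
import cvxpy as cp

p_s = cp.Variable()       # flow entering the line at the sending end
p_r = cp.Variable()       # flow at the receiving end (negative of delivery)

# Assumed coupling p_r = -p_s + loss(p_s) with convex quadratic loss,
# relaxed from an equality to a convex inequality.
loss = 0.1 * cp.square(p_s)
constraints = [
    p_r >= -p_s + loss,                # relaxed nonlinear line equation
    p_s >= 0, p_s <= 2.0,              # box constraints on the flow
    p_r <= -1.0,                       # deliver at least 1.0 at the far end
]
prob = cp.Problem(cp.Minimize(cp.square(p_s)), constraints)  # convex cost
prob.solve()
print("injection:", p_s.value, "received:", -p_r.value)
```

For this instance the relaxed inequality is tight at the optimum, matching the paper's finding that the relaxation recovers the optimal injections under monotone, convex flow and cost functions.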

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.