38 results for value communication methods


Relevance: 30.00%

Abstract:

The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The use of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements, as well as in the postulated model for the structural behaviour, are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
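As a concrete illustration of the linear-systems case, the measurement-update step of the discrete Kalman filter combined there with level-crossing statistics can be sketched as follows. This is the standard textbook update with toy illustrative matrices, not the authors' implementation:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One discrete Kalman measurement update.
    x: prior state mean, P: prior covariance,
    z: measurement, H: observation matrix, R: measurement-noise covariance."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_post = x + K @ y                     # updated mean
    P_post = (np.eye(len(x)) - K @ H) @ P  # updated covariance
    return x_post, P_post

# toy 1-D example: a noisy observation of a scalar state
x0, P0 = np.array([0.0]), np.array([[1.0]])
x1, P1 = kalman_update(x0, P0, np.array([0.5]),
                       np.array([[1.0]]), np.array([[0.5]]))
```

Conditioning on each new measurement shrinks the posterior covariance, which is exactly what lets measured response samples tighten a previously postulated reliability model.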

Relevance: 30.00%

Abstract:

While plants of a single species emit a diversity of volatile organic compounds (VOCs) to attract or repel interacting organisms, these specific messages may be lost in the midst of the hundreds of VOCs produced by sympatric plants of different species, many of which may have no signal content. Receivers must be able to reduce the babel or noise in these VOCs in order to correctly identify the message. For chemical ecologists faced with vast amounts of data on the volatile signatures of plants in different ecological contexts, it is imperative to employ accurate methods of classifying messages, so that suitable bioassays may then be designed to understand message content. We demonstrate the utility of 'Random Forests' (RF), a machine-learning algorithm, for the task of classifying volatile signatures and choosing the minimum set of volatiles for accurate discrimination, using data from sympatric Ficus species as a case study. We demonstrate the advantages of RF over conventional classification methods such as principal component analysis (PCA), as well as data-mining algorithms such as support vector machines (SVM), diagonal linear discriminant analysis (DLDA) and k-nearest neighbour (KNN) analysis. We show why a tree-building method such as RF, which is increasingly being used by the bioinformatics, food technology and medical communities, is particularly advantageous for the study of plant communication using volatiles, dealing, as it must, with abundant noise.
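As an illustration of the kind of task described above, a minimal Random Forest run on synthetic "volatile signature" data might look like this. It uses scikit-learn's RandomForestClassifier on made-up data (the feature layout, sample sizes, and class structure are assumptions, not the paper's Ficus dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# synthetic signatures: 60 samples x 10 hypothetical VOC abundances,
# two "species" whose profiles differ only in VOCs 0 and 1 (the rest is noise)
n, p = 60, 10
X = rng.normal(0.0, 1.0, (n, p))
y = np.repeat([0, 1], n // 2)
X[y == 1, 0] += 2.0
X[y == 1, 1] += 2.0

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature importances point to the minimal discriminating subset of volatiles
top2 = set(np.argsort(rf.feature_importances_)[-2:])
```

The importance ranking is the feature the abstract highlights: it lets one pick the minimum set of volatiles needed for discrimination before designing bioassays.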

Relevance: 30.00%

Abstract:

The problem of controlling the vibration pattern of a driven string is considered. The basic question dealt with here is to find the control forces which reduce the energy of vibration of a driven string over a prescribed portion of its length while maintaining the energy outside that length above a desired value. The criterion of keeping the response outside the region of energy reduction as close to the original response as possible is introduced as an additional constraint. The slack unconstrained minimization technique (SLUMT) has been successfully applied to solve the above problem. The effect of varying the phase of the control forces (which results in a six-variable control problem) is then studied. The nonlinear programming techniques which have been effectively used to handle problems involving many variables and constraints therefore offer a powerful tool for the solution of vibration control problems.

Relevance: 30.00%

Abstract:

The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state space methods/ESPRIT, and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with DOA estimation using a uniformly spaced linear sensor array and the time series frequency estimation problems.
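The spatially smoothed forward-backward covariance matrix analyzed above can be sketched with the generic textbook construction below (this is not the paper's derivation; the array size, source angles, and correlation values are illustrative):

```python
import numpy as np

def fb_spatial_smooth(R, m):
    """Forward-backward spatially smoothed covariance.
    R: N x N array covariance; m: subarray size (m <= N)."""
    N = R.shape[0]
    L = N - m + 1                          # number of overlapping subarrays
    J = np.fliplr(np.eye(m))               # exchange (reversal) matrix
    Rs = np.zeros((m, m), dtype=complex)
    for k in range(L):
        Rk = R[k:k + m, k:k + m]           # k-th subarray covariance
        Rs += Rk + J @ Rk.conj() @ J       # forward plus backward term
    return Rs / (2 * L)

# example: covariance of two highly correlated narrowband sources on an 8-element ULA
A = np.exp(1j * np.outer(np.arange(8), [0.6, 1.9]))   # steering vectors
R = A @ np.array([[1.0, 0.9], [0.9, 1.0]]) @ A.conj().T + 0.01 * np.eye(8)
Rs = fb_spatial_smooth(R, 6)
```

The smoothed matrix is Hermitian and centro-Hermitian by construction, and averaging over subarrays is what restores the rank of the signal covariance when sources are correlated.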

Relevance: 30.00%

Abstract:

In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform under such conditions. We analyze the finite-data performance of the multiple signal classification (MUSIC) and minimum norm (min. norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived assuming an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When they are further simplified for the case of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations are used to verify the closeness between the predicted and simulated values of the MSE.
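For reference, a minimal MUSIC pseudospectrum for a uniform linear array of isotropic sensors looks like the sketch below. This is the standard textbook form; the sensor count, source angle, snapshot count, and noise level are illustrative, and no gain/phase errors are modeled:

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a ULA.
    R: sample covariance; d: element spacing in wavelengths."""
    N = R.shape[0]
    _, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, :N - n_sources]              # noise subspace
    P = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(N) * np.sin(th))
        P.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

# one source at 20 degrees, 8-sensor ULA, 400 noisy snapshots
rng = np.random.default_rng(0)
N, T, theta = 8, 400, np.deg2rad(20.0)
a = np.exp(2j * np.pi * 0.5 * np.arange(N) * np.sin(theta))
s = rng.normal(size=T) + 1j * rng.normal(size=T)
X = np.outer(a, s) + 0.1 * (rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T)))
R = X @ X.conj().T / T
grid = np.arange(-90.0, 90.0, 0.1)
doa_est = grid[np.argmax(music_spectrum(R, 1, grid))]
```

With finite snapshots the sample covariance R deviates from its ensemble value, which is precisely the finite-data error source whose MSE the paper quantifies alongside sensor gain and phase errors.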

Relevance: 30.00%

Abstract:

The problem of estimating multiple Carrier Frequency Offsets (CFOs) in the uplink of MIMO-OFDM systems with Co-Channel (CC) and OFDMA-based carrier allocation is considered. A tri-linear data model for a generalized, multiuser OFDM system is formulated. A novel blind subspace-based method, built on the Khatri-Rao product, is proposed for estimating multiple CFOs under arbitrary carrier allocation schemes in OFDMA systems and for CC users in OFDM systems. The method works where the conventional subspace method fails. The performance of the proposed methods is compared with a pilot-based least-squares method.
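The Khatri-Rao (column-wise Kronecker) product that underpins the proposed estimator can be sketched generically as follows; the matrices below are arbitrary illustrations, unrelated to the OFDM signal model:

```python
import numpy as np

def khatri_rao(A, B):
    """Khatri-Rao product: column-wise Kronecker product of
    A (m x k) and B (n x k), giving an (m*n) x k matrix."""
    assert A.shape[1] == B.shape[1], "matrices must share the column count"
    return np.vstack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])]).T

A = np.arange(6.0).reshape(2, 3)
B = np.arange(9.0).reshape(3, 3)
C = khatri_rao(A, B)          # shape (6, 3)
```

The column-wise structure is what lets a tri-linear model be unfolded so that a subspace can still be identified when the conventional construction loses identifiability.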

Relevance: 30.00%

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
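The constraint-sampling idea above (replacing a continuum of half-plane constraints by finitely many sampled ones and then checking "near-feasibility") can be sketched in miniature. Everything here is an illustrative placeholder: the constraint functions, type distribution, and candidate vectors are not the paper's mechanism:

```python
import numpy as np

def sampled_violation_fraction(x, a_of_theta, b_of_theta, sample_theta, n, rng):
    """Estimate how often a candidate x violates the continuum of constraints
    a(theta) . x <= b(theta), by drawing n random type vectors theta."""
    bad = 0
    for _ in range(n):
        th = sample_theta(rng)
        if a_of_theta(th) @ x > b_of_theta(th) + 1e-12:
            bad += 1
    return bad / n

# toy instance: theta uniform in [0,1]^3, constraint theta . x <= 1
rng = np.random.default_rng(0)
a = lambda th: th
b = lambda th: 1.0
draw = lambda r: r.random(3)
frac_safe = sampled_violation_fraction(np.zeros(3), a, b, draw, 500, rng)
frac_bad = sampled_violation_fraction(np.full(3, 2.0), a, b, draw, 500, rng)
```

A point that satisfies all sampled constraints is only guaranteed to violate the remaining continuum with small probability, which is why the paper must bound the number of samples needed for a desired confidence level.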

Relevance: 30.00%

Abstract:

A study of environmental chloride and groundwater balance has been carried out in order to assess their relative value for measuring average groundwater recharge in a humid climatic environment with a relatively shallow water table. The hybrid water fluctuation method allowed the hydrologic year to be split into a recharge (wet) season and a no-recharge (dry) season, first to appraise the specific yield during the dry season and, second, to estimate recharge from the water table rise during the wet season. This well-established method was then used as a benchmark to assess the effectiveness of the chloride method in a humid forest environment. An effective specific yield of 0.08 was obtained for the study area. It reflects an effective basin-wide process and is insensitive to local heterogeneities in the aquifer system. The hybrid water fluctuation method gives an average recharge value of 87.14 mm/year at the basin scale, which represents 5.7% of the annual rainfall. The recharge estimated by the chloride method varies between 16.24 and 236.95 mm/year, with an average value of 108.45 mm/year, representing 7% of the mean annual precipitation. The discrepancy between the recharge values estimated by the hybrid water fluctuation and the chloride mass balance methods is substantial, which could imply the ineffectiveness of the chloride mass balance method for this humid environment.
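The chloride method above rests on the standard mass-balance relation R = P x Cl_precip / Cl_gw, which the abstract does not spell out. A sketch with hypothetical numbers (not the study's measured concentrations) is:

```python
def cmb_recharge_mm(precip_mm, cl_precip_mgL, cl_groundwater_mgL):
    """Chloride mass balance: recharge R = P * Cl_precip / Cl_gw.
    Assumes chloride enters only via precipitation and is concentrated
    by evapotranspiration before reaching the water table."""
    return precip_mm * cl_precip_mgL / cl_groundwater_mgL

# hypothetical inputs: 1500 mm/yr rainfall, 0.7 mg/L chloride in rain,
# 10 mg/L chloride in groundwater
recharge = cmb_recharge_mm(1500.0, 0.7, 10.0)
```

Because the estimate scales directly with the sampled groundwater chloride concentration, local heterogeneity in Cl_gw propagates straight into the wide recharge range (16.24 to 236.95 mm/year) the study reports.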

Relevance: 30.00%

Abstract:

Lipoplexes formed by the pEGFP-C3 plasmid DNA (pDNA) and lipid mixtures containing a cationic gemini surfactant of the 1,2-bis(hexadecyl dimethyl ammonium)alkane family, referred to as C16CnC16, where n = 2, 3, 5, or 12, and the zwitterionic helper lipid 1,2-dioleoyl-sn-glycero-3-phosphatidylethanolamine (DOPE) have been studied from a wide variety of physical, chemical, and biological standpoints. The study has been carried out using several experimental methods, such as zeta potential, gel electrophoresis, small-angle X-ray scattering (SAXS), cryo-TEM, gene transfection, cell viability/cytotoxicity, and confocal fluorescence microscopy. As reported recently in a communication (J. Am. Chem. Soc. 2011, 133, 18014), the detailed physicochemical and biological studies confirm that, in the presence of the studied series of lipid mixtures, plasmid DNA is compacted with a large number of its associated Na+ counterions. This in turn yields a much lower effective negative charge, q(pDNA)(-), a value that has been experimentally obtained for each mixed lipid mixture. Consequently, the cationic lipid (CL) complexes prepared with pDNA and CL/DOPE mixtures for use in gene transfection require a significantly smaller amount of CL than the one estimated assuming a value of q(DNA)(-) = -2. This leads to a considerably lower cytotoxicity of the gene vector. Depending on the CL molar composition, alpha, of the lipid mixture, and the effective charge ratio of the lipoplex, rho(eff), the reported SAXS data indicate the presence of two or three structures in the same lipoplex: one in the DOPE-rich region, another in the CL-rich region, and another present at any CL composition. Cryo-TEM and SAXS studies with C16CnC16/DOPE-pDNA lipoplexes indicate that pDNA is localized between the mixed lipid bilayers of lamellar structures within a monolayer of approximately 2 nm. This is consistent with a highly compacted supercoiled pDNA conformation compared with that of linear DNA.
Transfection studies were carried out with HEK293T, HeLa, CHO, U343, and H460 cells. The alpha and rho(eff) values for each lipid mixture were optimized for transfection on HEK293T cells, and using these values the remaining cell lines were also transfected in the absence (-FBS) and presence (+FBS) of serum. The transfection efficiency was higher with the CLs having shorter gemini spacers (n = 2 or 3). Each formulation expressed GFP on pDNA transfection, and confocal fluorescence microscopy corroborated the results. C16C2C16/DOPE mixtures were the most efficient toward transfection among all the lipid mixtures and, in the presence of serum, even better than Lipofectamine2000, a commercial transfecting agent. Each lipid combination was safe and did not show any significant level of toxicity. Probably, the coexistence of two lamellar structures in lipoplexes, which are more plentiful in the lipoplexes formed by CLs with a short spacer (n = 2, 3) than in those with a long spacer (n = 5, 12), synergizes the transfection efficiency of the lipid mixtures.

Relevance: 30.00%

Abstract:

The advent and evolution of geohazard warning systems make for a very interesting study. The two broad fields that are immediately visible are geohazard evaluation and subsequent warning dissemination. Evidently, the latter field lacks any systematic study or standards. Arbitrarily organized and vague data and information on warning techniques create confusion and indecision. The purpose of this review is to try to systematize the available bulk of information on warning systems so that meaningful insights can be derived through decidable flowcharts, and a developmental process can be undertaken. Hence, the methods and technologies of numerous geohazard warning systems have been assessed by putting them into suitable categories for a better understanding of possible ways to analyze their efficacy as well as their shortcomings. By establishing a classification scheme based on extent, control, time period, and advancements in technology, the geohazard warning systems available in the literature could be comprehensively analyzed and evaluated. Although major advancements have taken place in geohazard warning systems in recent times, they have lacked a complete purpose. Some systems just assess the hazard and wait for other means to communicate it, and some are designed only for communication and wait for the hazard information to be provided, which usually happens after the mishap. Primarily, systems are left at the mercy of administrators and service providers and are not in real time. An integrated hazard evaluation and warning dissemination system could solve this problem. Warning systems have also suffered from the complexity of nature, the requirement of expert-level monitoring, extensive and dedicated infrastructural setups, and so on. The user community, which would greatly appreciate a convenient, fast, and generalized warning methodology, is surveyed in this review.
The review concludes with the future scope of research in the field of hazard warning systems and some suggestions for developing an efficient mechanism toward the development of an automated integrated geohazard warning system. DOI: 10.1061/(ASCE)NH.1527-6996.0000078. (C) 2012 American Society of Civil Engineers.

Relevance: 30.00%

Abstract:

Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integration by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of the non-wetting droplets. In computations of the wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value for the slip number has to be used in computations of the wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement. Further, the equilibrium model also incorporates the contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.

Relevance: 30.00%

Abstract:

Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has splendid theoretical guarantees for convergence. In this paper, we propose a modification of CoSaMP that adaptively chooses the dimension of the search space in each iteration, using a threshold-based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm for both clean and noisy measurements. From empirical observations, we also propose an optimum value of the threshold for use in applications.
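For orientation, the plain fixed-sparsity CoSaMP iteration that the paper modifies can be sketched as below. This is the standard algorithm, not the adaptive-threshold variant; the measurement matrix, sparsity level, and support are illustrative:

```python
import numpy as np

def cosamp(Phi, y, s, n_iter=50, tol=1e-8):
    """Basic CoSaMP: identify 2s candidate atoms per iteration, solve a
    least-squares fit on the merged support, then prune back to s entries.
    The paper's variant would instead size the candidate set by a threshold."""
    m, n = Phi.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        proxy = Phi.T @ r                               # correlate residual
        omega = np.argsort(np.abs(proxy))[-2 * s:]      # 2s best correlations
        T = np.union1d(omega, np.nonzero(x)[0])         # merge with old support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]               # prune to s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - Phi @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x

# noiseless toy problem: recover a 4-sparse vector from 50 random projections
rng = np.random.default_rng(0)
m, n, s = 50, 100, 4
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[7, 23, 56, 91]] = [1.5, -2.0, 1.0, 3.0]
x_hat = cosamp(Phi, Phi @ x_true, s)
```

The fixed `2 * s` candidate-set size in the identification step is exactly the quantity the proposed modification replaces with an adaptively thresholded one.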