879 results for Egocentric Constraint


Relevance: 10.00%

Abstract:

The purpose of this paper is to investigate several analytical methods of solving the first-passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which every chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal-action path. The new theory predicts both scaling regimes correctly but fails to obtain the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched-polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
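For readers who want to reproduce the brute-force side of such a calculation, the following is a minimal sketch (not the paper's code) of a direct first-passage simulation for a one-dimensional tethered Rouse chain; the bead number, spring constant, friction, temperature, target distance and time step are all illustrative assumptions.

# Minimal sketch (not the paper's code): brute-force estimate of the mean
# first-passage time for the free end of a tethered Rouse chain to reach
# distance z from the origin. All parameters below are illustrative choices.
import numpy as np

def rouse_fpt(n_beads=8, k=1.0, gamma=1.0, kT=1.0, z_target=3.0,
              dt=1e-3, max_steps=10_000_000, rng=None):
    """Overdamped Langevin dynamics of a 1D Rouse chain with bead 0 fixed
    at the origin; returns the first time |x[-1]| >= z_target."""
    rng = rng or np.random.default_rng()
    x = np.zeros(n_beads + 1)            # bead 0 is the tethered end
    noise_amp = np.sqrt(2.0 * kT * dt / gamma)
    for step in range(max_steps):
        # Harmonic spring forces between neighbouring beads
        f = np.zeros_like(x)
        bond = k * np.diff(x)            # tension in each spring
        f[:-1] += bond
        f[1:] -= bond
        # Euler-Maruyama update for the free beads (bead 0 stays fixed)
        x[1:] += (f[1:] / gamma) * dt + noise_amp * rng.standard_normal(n_beads)
        if abs(x[-1]) >= z_target:
            return (step + 1) * dt
    return np.nan                        # did not reach z within max_steps

# Example: crude mean FP time from a handful of trajectories
times = [rouse_fpt(rng=np.random.default_rng(s)) for s in range(5)]
print(np.nanmean(times))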

Relevance: 10.00%

Abstract:

A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or than the representer method when error estimates are taken into account.
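The linear update implied by the Gaussian-prior assumption is, in generic ensemble form, the familiar Kalman-type analysis step. The sketch below illustrates it with sample covariances and perturbed observations; this is one common implementation choice, not necessarily the authors' exact scheme, and all variable names and sizes are illustrative.

# Minimal sketch of the linear (Gaussian-prior) ensemble update that the
# smoother relies on; the perturbed-observation treatment is a generic choice.
import numpy as np

def ensemble_update(X, H, y, R, rng=np.random.default_rng(0)):
    """X: (n_state, n_ens) prior ensemble; H: (n_obs, n_state) observation
    operator; y: (n_obs,) observations; R: (n_obs, n_obs) obs-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                       # sample prior covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                       # posterior ensemble

# Toy usage: 3-variable state, one observation of the first variable
X = np.random.default_rng(1).normal(size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])
print(ensemble_update(X, H, np.array([0.5]), np.array([[0.1]])).mean(axis=1))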

Relevance: 10.00%

Abstract:

The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
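Under Gaussian error statistics, the weak-constraint inverse is usually associated with a penalty function of the following schematic form; the symbols B, Q_k, R_k, M_k and H_k (initial, model and observation error covariances, and the model and observation operators) are assumed here for illustration, since the abstract does not fix the notation.

J(x_0,\dots,x_K) = (x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
  + \sum_{k=1}^{K} \big(x_k - M_k(x_{k-1})\big)^{\mathsf T} Q_k^{-1} \big(x_k - M_k(x_{k-1})\big)
  + \sum_{k=0}^{K} \big(y_k - H_k x_k\big)^{\mathsf T} R_k^{-1} \big(y_k - H_k x_k\big)

Its minimiser coincides with the maximum-likelihood estimate for Gaussian statistics, and the strong-constraint case is recovered in the limit of vanishing model error covariance Q_k.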

Relevance: 10.00%

Abstract:

The Bloom filter is a space-efficient randomized data structure for representing a set and supporting membership queries. Bloom filters intrinsically allow false positives. However, the space savings they offer outweigh this disadvantage if the false-positive rate is kept sufficiently low. Inspired by the recent application of the Bloom filter in a novel multicast forwarding fabric, this paper proposes a variant of the Bloom filter, the optihash. The optihash introduces an optimization of the false-positive rate at the stage of Bloom filter formation, using the same amount of space at the cost of slightly more processing than the classic Bloom filter. Bloom filters are often used in situations where a fixed amount of space is the primary constraint. We present the optihash as a good alternative to Bloom filters, since the amount of space is the same and the improvement in false positives can justify the additional processing. Specifically, we show via simulations and numerical analysis that, using the optihash, false-positive occurrences can be reduced and controlled at the cost of a small amount of additional processing. The simulations are carried out for in-packet forwarding. In this framework, the Bloom filter is used as a compact link/route identifier and is placed in the packet header to encode the route. At each node, the Bloom filter is queried for membership in order to make forwarding decisions. A false positive in the forwarding decision translates into packets forwarded along an unintended outgoing link. By using the optihash, false positives can be reduced. The optimization processing is carried out in an entity termed the Topology Manager, which is part of the control plane of the multicast forwarding fabric. This processing is carried out only on a per-session basis, not for every packet. The aim of this paper is to present the optihash and evaluate its false-positive performance via simulations, in order to measure the influence of different parameters on the false-positive rate. The false-positive rate of the optihash is then compared with the false-positive probability of the classic Bloom filter.
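As background for the optihash, the sketch below shows a classic Bloom filter with insert and membership-query operations; the bit-array size, number of hash functions and hashing scheme are illustrative choices, and the optihash's formation-stage optimization itself is not reproduced here.

# Minimal sketch of the classic Bloom filter that the optihash builds on.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=256, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0                        # m-bit array packed into an int

    def _positions(self, item):
        # Derive k bit positions from a single digest (illustrative scheme)
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
            yield chunk % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):            # may return false positives
        return all(self.bits >> p & 1 for p in self._positions(item))

# Usage: encode a set of link identifiers and query for membership
bf = BloomFilter()
for link in ["A->B", "B->C"]:
    bf.add(link)
print("A->B" in bf, "C->D" in bf)            # True, and usually False

For a filter of m bits with k hash functions and n stored items, the classic false-positive probability is approximately (1 - e^{-kn/m})^k; this is the baseline against which the optihash's false-positive rate is compared.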

Relevance: 10.00%

Abstract:

Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Considering the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model works best compared with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
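The query-side logic of a yes-no Bloom filter can be sketched as follows; filter sizes and hashing are illustrative, the list of "known false positives" is simply assumed to have been identified offline, and the ILP/ADP selection described above is not reproduced.

# Minimal sketch of yes-no Bloom filter querying: an item counts as a member
# only if the yes-filter accepts it AND the no-filter (which stores known
# false positives of the yes-filter) does not.
import hashlib

def positions(item, m=128, k=3):
    d = hashlib.sha256(item.encode()).digest()
    return [int.from_bytes(d[4 * i:4 * i + 4], "big") % m for i in range(k)]

def make_filter(items, m=128, k=3):
    bits = 0
    for it in items:
        for p in positions(it, m, k):
            bits |= 1 << p
    return bits

def in_filter(bits, item, m=128, k=3):
    return all(bits >> p & 1 for p in positions(item, m, k))

members = ["obj1", "obj2", "obj3"]
known_false_positives = ["obj17"]     # assumed to have been identified offline
yes_bits = make_filter(members)
no_bits = make_filter(known_false_positives)

def is_member(item):
    # In a real deployment the no-filter contents are chosen (e.g. via the
    # ILP/ADP above) so that no true positives are rejected.
    return in_filter(yes_bits, item) and not in_filter(no_bits, item)

print([is_member(x) for x in ["obj1", "obj17"]])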

Relevance: 10.00%

Abstract:

A new generation of high-resolution (1 km) forecast models promises to revolutionize the prediction of hazardous weather such as windstorms, flash floods, and poor air quality. To realize this promise, a dense observing network, focusing on the lower few kilometers of the atmosphere, is required to verify these new forecast models with the ultimate goal of assimilating the data. At present there are insufficient systematic observations of the vertical profiles of water vapor, temperature, wind, and aerosols; a major constraint is the absence of funding to install new networks. A recent research program financed by the European Union, tasked with addressing this lack of observations, demonstrated that the assimilation of observations from an existing wind profiler network reduces forecast errors, provided that the individual instruments are strategically located and properly maintained. Additionally, it identified three further existing European networks of instruments that are currently underexploited, but with minimal expense they could deliver quality-controlled data to national weather services in near–real time, so the data could be assimilated into forecast models. Specifically, 1) several hundred automatic lidars and ceilometers can provide backscatter profiles associated with aerosol and cloud properties and structures with 30-m vertical resolution every minute; 2) more than 20 Doppler lidars, a fairly new technology, can measure vertical and horizontal winds in the lower atmosphere with a vertical resolution of 30 m every 5 min; and 3) about 30 microwave profilers can estimate profiles of temperature and humidity in the lower few kilometers every 10 min. Examples of potential benefits from these instruments are presented.

Relevance: 10.00%

Abstract:

Unpredictable flooding is a major constraint to rice production. It can occur at any growth stage. The effect of simulated flooding post-anthesis on yield and subsequent seed quality of pot-grown rice (Oryza sativa L.) plants was investigated in glasshouses and controlled-environment growth cabinets. Submergence post-anthesis (9-40 DAA) for 3 or 5 days reduced seed weight of the japonica rice cv. Gleva, with considerable pre-harvest sprouting (up to 53%). The latter was greater the later in seed development and maturation that flooding occurred. Sprouted seed had poor ability to survive desiccation or germinate normally upon rehydration, whereas the effects of flooding on the subsequent air-dry seed storage longevity (p50) of the non-sprouted seed fraction were negligible. The indica rice cvs IR64 and IR64Sub1 (introgression of the submergence tolerance gene Submergence1A-1) were both far more tolerant to flooding post-anthesis than cv. Gleva: 4 days' submergence of these two near-isogenic cultivars at 10-40 DAA resulted in less than 1% sprouted seeds. The presence of the Sub1A-1 allele in cv. IR64Sub1 was verified by gel electrophoresis and DNA sequencing. It had no harmful effect on loss of seed viability during storage compared with IR64 in either the control or the flooded environment. Moreover, germinability and changes in dormancy during seed development and maturation were very similar to IR64. The efficacy of a chemical spray for increasing seed dormancy was investigated in the pre-harvest-sprouting-susceptible rice cv. Gleva. Foliar application of molybdenum at 100 mg L-1 reduced sprouted seeds by 15-21% following 4 days' submergence at 20-30 DAA. Analyses confirmed that the treatment did result in molybdenum uptake by the plants, and it also tended to increase seed abscisic acid concentration; the latter was reduced by submergence and declined exponentially during grain ripening. The selection of submergence-tolerant varieties was more successful than application of molybdenum in reducing pre-harvest sprouting.

Relevance: 10.00%

Abstract:

The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to facilitate model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set as the inverse of the associated absolute parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I by exploiting known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
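The adaptively weighted l2-norm surrogate described above can be written schematically as follows, where θ̂ denotes the most recent parameter estimates and ε is a small regularising constant assumed here to avoid division by zero:

\|\boldsymbol{\theta}\|_1 = \sum_i |\theta_i|
  \;\approx\; \sum_i \frac{\theta_i^{2}}{|\hat{\theta}_i| + \varepsilon}
  \;=\; \boldsymbol{\theta}^{\mathsf T}\boldsymbol{\Lambda}\,\boldsymbol{\theta},
\qquad
\boldsymbol{\Lambda} = \operatorname{diag}\!\big((|\hat{\theta}_1| + \varepsilon)^{-1}, \dots, (|\hat{\theta}_n| + \varepsilon)^{-1}\big)

Because the surrogate is quadratic in θ, the regularised least-squares problem retains a closed-form recursive solution, which is what makes the ZA-RLS updates possible.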

Relevance: 10.00%

Abstract:

This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
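The sum-to-one constrained combination step admits a closed-form solution obtained by a standard Lagrange-multiplier argument, sketched below; the ridge regularisation and the toy data are illustrative additions, not part of the authors' algorithm.

# Minimal sketch: combine M sub-model predictions over a recent data window
# by minimising ||y - P a||^2 subject to sum(a) = 1 (closed form).
import numpy as np

def combine_submodels(P, y, ridge=1e-8):
    """P: (n_window, M) sub-model predictions; y: (n_window,) targets.
    Returns weights a minimising ||y - P a||^2 subject to sum(a) = 1."""
    M = P.shape[1]
    R = P.T @ P + ridge * np.eye(M)        # regularised Gram matrix
    ones = np.ones(M)
    a_u = np.linalg.solve(R, P.T @ y)      # unconstrained LS weights
    Rinv1 = np.linalg.solve(R, ones)
    return a_u + Rinv1 * (1.0 - ones @ a_u) / (ones @ Rinv1)

# Toy usage: two sub-models, one clearly better than the other
rng = np.random.default_rng(0)
y = rng.normal(size=20)
P = np.column_stack([y + 0.05 * rng.normal(size=20),   # good sub-model
                     rng.normal(size=20)])             # poor sub-model
a = combine_submodels(P, y)
print(a, a.sum())                          # weights favour the first model, sum to 1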

Relevance: 10.00%

Abstract:

In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the sub-models are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and apply the sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint on the combination parameters is also applied, with the aim of achieving sparsity across the multiple models so that only a subset of models may be selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term, so that at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time-series examples.
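Schematically, the cost being minimised at time t can be written as follows, where λ is the forgetting factor, γ the sparsity weight, α̂ the most recent coefficient estimates and ε a small constant in the weighted l2 surrogate of the l1 term; the notation is assumed here for illustration:

J_t(\boldsymbol{\alpha}) = \sum_{i=1}^{t} \lambda^{\,t-i}\,\big(y_i - \mathbf{x}_i^{\mathsf T}\boldsymbol{\alpha}\big)^2
  + \gamma \sum_{j} \frac{\alpha_j^{2}}{|\hat{\alpha}_j| + \varepsilon},
\qquad \text{subject to } \mathbf{1}^{\mathsf T}\boldsymbol{\alpha} = 1

Since both terms are quadratic in α and the constraint is linear, each time step yields a closed-form update for the combination parameters.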

Relevance: 10.00%

Abstract:

A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is initially applied to find this covariance matrix using the expectation-maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with that of existing kernel density estimators.
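In schematic form, the estimator is a Gaussian mixture whose mixing coefficients live on the multinomial manifold (the probability simplex); the symbols below are assumed for illustration, with N training points x_k and a shared covariance Σ:

\hat{p}(\mathbf{x}) = \sum_{k=1}^{N} \beta_k\, \mathcal{N}(\mathbf{x};\, \mathbf{x}_k, \boldsymbol{\Sigma}),
\qquad \beta_k \ge 0, \quad \sum_{k=1}^{N} \beta_k = 1

The Parzen window estimator corresponds to the uniform choice β_k = 1/N, while sparsity means that most β_k are driven to zero by the optimisation on the simplex.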

Relevance: 10.00%

Abstract:

The Team Formation Problem (TFP) has become a well-known problem in the OR literature over the last few years. In this problem, a group of individuals that together match a required set of skills must be chosen so as to maximise one or several positive social attributes. Specifically, the aim of the current research is two-fold. First, two new dimensions are added to the TFP by considering multiple projects and fractions of people's dedication. This new problem is named the Multiple Team Formation Problem (MTFP). Second, an optimization model consisting of a quadratic objective function, linear constraints and integer variables is proposed for the problem. The optimization model is solved by three algorithms: a Constraint Programming approach provided by a commercial solver, a Local Search heuristic and a Variable Neighbourhood Search metaheuristic. These three algorithms constitute the first attempt to solve the MTFP, with the Variable Neighbourhood Search metaheuristic being the most efficient in almost all cases. Applications of this problem commonly appear in real-life situations, particularly with the current and ongoing development of social network analysis. Therefore, this work opens multiple paths for future research.
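Purely as an illustration of the kind of model described (quadratic objective, linear constraints, integer variables), one schematic MTFP-style formulation is sketched below; the abstract does not give the actual model, so the variables z_{ip} (person i's dedication to project p in units of 1/Q), the affinity weights w_{ij}, the skill sets S_k and the requirements r_{pk} are all assumptions made for the example:

\max \;\; \sum_{p}\sum_{i<j} w_{ij}\,\frac{z_{ip}}{Q}\,\frac{z_{jp}}{Q}
\quad\text{s.t.}\quad
\sum_{p} z_{ip} \le Q \;\;\forall i, \qquad
\sum_{i \in S_k} z_{ip} \ge Q\, r_{pk} \;\;\forall p,k, \qquad
z_{ip} \in \{0,1,\dots,Q\}

Here the first constraint caps each person's total dedication at 100%, the second enforces skill coverage per project, and the objective rewards assigning mutually compatible people to the same project.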

Relevance: 10.00%

Abstract:

The search for rocky exoplanets plays an important role in our quest for extra-terrestrial life. Here, we discuss the extreme physical properties possible for the first characterised rocky super-Earth, CoRoT-7b (R_pl = 1.58 ± 0.10 R_Earth, M_pl = 6.9 ± 1.2 M_Earth). It is extremely close to its star (a = 0.0171 AU = 4.48 R_*), with its spin and orbital rotation likely synchronised. The comparison of its location in the (M_pl, R_pl) plane with the predictions of planetary models for different compositions points to an Earth-like composition, even if the error bars of the measured quantities and the partial degeneracy of the models prevent a definitive conclusion. The proximity to its star provides an additional constraint on the model. It implies a high extreme-UV flux and particle wind, and a correspondingly efficient erosion of the planetary atmosphere, especially for volatile species including water. Consequently, we make the working hypothesis that the planet is rocky with no volatiles in its atmosphere, and derive the physical properties that result. As a consequence, the atmosphere is made of rocky vapours with a very low pressure (P ≤ 1.5 Pa), no cloud can be sustained, and no thermalisation of the planet is expected. The dayside is very hot (2474 ± 71 K at the sub-stellar point) while the nightside is very cold (50-75 K). The sub-stellar point is as hot as the tungsten filament of an incandescent bulb, resulting in the melting and distillation of silicate rocks and the formation of a lava ocean. These possible features of CoRoT-7b could be common to many small and hot planets, including the recently discovered Kepler-10b. They define a new class of objects that we propose to name "Lava-ocean planets".
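As a rough consistency check on the quoted dayside temperature, assuming zero albedo, no heat redistribution, and a stellar effective temperature of about 5250 K (a value assumed here, not taken from the abstract):

T_{\rm sub} \simeq T_{*}\sqrt{\frac{R_{*}}{a}}
  \approx 5250\,\mathrm{K} \times \frac{1}{\sqrt{4.48}}
  \approx 2.5\times 10^{3}\,\mathrm{K}

which is consistent with the 2474 ± 71 K sub-stellar temperature quoted above.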

Relevance: 10.00%

Abstract:

Oscillating biochemical reactions are common in cell dynamics and could be closely related to the emergence of life itself. In this work, we study the dynamical features of some classical chemical or biochemical oscillators when the effect of cell volume changes is explicitly considered. Such analysis enables us to find some general conditions on the cell membrane for preserving such oscillatory patterns, of possible relevance to hypothetical primitive cells in which these structures first appeared.
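The abstract names no specific oscillator, so the sketch below uses the classical Brusselator purely as an illustrative example, with a simple dilution term -μc added to each concentration to mimic exponential cell-volume growth at rate μ; all parameter values are assumptions.

# Illustrative sketch only: Brusselator oscillator with a dilution term that
# stands in for the effect of cell-volume growth at rate mu.
import numpy as np
from scipy.integrate import solve_ivp

A, B, mu = 1.0, 3.0, 0.05        # Brusselator parameters and growth rate (assumed)

def brusselator_with_dilution(t, c):
    x, y = c
    dx = A + x**2 * y - (B + 1.0) * x - mu * x   # last term: dilution by growth
    dy = B * x - x**2 * y - mu * y
    return [dx, dy]

sol = solve_ivp(brusselator_with_dilution, (0.0, 100.0), [1.0, 1.0], max_step=0.05)
# Oscillations persist for small mu and are damped when dilution is too fast.
print(sol.y[0, -5:])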

Relevance: 10.00%

Abstract:

The negative pressure accompanying gravitationally induced particle creation can lead to a cold dark matter (CDM) dominated, accelerating Universe (Lima et al. 1996 [1]) without requiring the presence of dark energy or a cosmological constant. In a recent study, Lima et al. 2008 [2] (LSS) demonstrated that particle-creation-driven cosmological models are capable of accounting for the SNIa observations [3] of the recent transition from a decelerating to an accelerating Universe, without the need for dark energy. Here we consider a class of such models in which the particle creation rate is assumed to be of the form Γ = βH + γH_0, where H is the Hubble parameter and H_0 is its present value. The evolution of such models is tested at low redshift by the latest SNe Ia data provided by the Union compilation [4] and at high redshift using the value of z_eq, the redshift of the epoch of matter-radiation equality, inferred from the WMAP constraints on the early Integrated Sachs-Wolfe (ISW) effect [5]. Since the contributions of baryons and radiation were ignored in the work of LSS, we include them in our study of this class of models. The parameters of these more realistic models with continuous creation of CDM are constrained at widely separated epochs (z_eq ≈ 3000 and z ≈ 0) in the evolution of the Universe. The comparison of the parameter values, {β, γ}, determined at these different epochs reveals a tension between the values favored by the high-redshift CMB constraint on z_eq from the ISW and those which follow from the low-redshift SNIa data, posing a potential challenge to this class of models. While for β = 0 this conflict is only at the ≲ 2σ level, it worsens as β increases from zero.
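For context, the ingredients commonly adopted in this class of creation-CDM models (forms standard in the literature cited, though not spelled out in the abstract) are the number-density balance, the associated creation pressure, and the creation rate studied here:

\dot{n} + 3Hn = \Gamma n,
\qquad
p_c = -\frac{\rho_c\,\Gamma}{3H},
\qquad
\Gamma = \beta H + \gamma H_0

where n and ρ_c are the CDM number density and energy density; it is the negative creation pressure p_c that can drive the accelerated expansion without invoking dark energy.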